Plumbing (mathematics)

In the mathematical field of geometric topology, among the techniques known as surgery theory, the process of plumbing is a way to create new manifolds out of disk bundles. It was first described by John Milnor[1] and subsequently used extensively in surgery theory to produce manifolds and normal maps with given surgery obstructions.

Definition

Let $\xi _{i}=(E_{i},M_{i},p_{i})$ be a rank n vector bundle over an n-dimensional smooth manifold $M_{i}$ for i = 1,2. Denote by $D(E_{i})$ the total space of the associated (closed) disk bundle $D(\xi _{i})$ and suppose that $\xi _{i},M_{i}$ and $D(E_{i})$ are oriented in a compatible way. If we pick two points $x_{i}\in M_{i}$, i = 1,2, and consider a ball neighbourhood of $x_{i}$ in $M_{i}$, then we get neighbourhoods $D_{i}^{n}\times D_{i}^{n}$ of the fibre over $x_{i}$ in $D(E_{i})$. Let $h:D_{1}^{n}\rightarrow D_{2}^{n}$ and $k:D_{1}^{n}\rightarrow D_{2}^{n}$ be two diffeomorphisms (either both orientation preserving or both orientation reversing). The plumbing[2] of $D(E_{1})$ and $D(E_{2})$ at $x_{1}$ and $x_{2}$ is defined to be the quotient space $P=D(E_{1})\cup _{f}D(E_{2})$, where $f:D_{1}^{n}\times D_{1}^{n}\rightarrow D_{2}^{n}\times D_{2}^{n}$ is defined by $f(x,y)=(k(y),h(x))$. The smooth structure on the quotient is defined by "straightening the angles".[2]

Plumbing according to a tree

If the base manifold is an n-sphere $S^{n}$, then by iterating this procedure over several vector bundles over $S^{n}$ one can plumb them together according to a tree[3] §8. If $T$ is a tree, we assign to each vertex a vector bundle $\xi$ over $S^{n}$, and we plumb the corresponding disk bundles together whenever two vertices are connected by an edge. One has to be careful that the neighbourhoods in the total spaces do not overlap.

Milnor manifolds

Let $D(\tau _{S^{2k}})$ denote the disk bundle associated to the tangent bundle of the 2k-sphere. If we plumb eight copies of $D(\tau _{S^{2k}})$ according to the diagram $E_{8}$, we obtain a 4k-dimensional manifold which certain authors[4][5] call the Milnor manifold $M_{B}^{4k}$ (see also E8 manifold). For $k>1$, the boundary $\Sigma ^{4k-1}=\partial M_{B}^{4k}$ is a homotopy sphere which generates $\theta ^{4k-1}(\partial \pi )$, the group of h-cobordism classes of homotopy spheres which bound π-manifolds (see also exotic spheres for more details). Its signature is $sgn(M_{B}^{4k})=8$, and there exists[2] V.2.9 a normal map $(f,b)$ such that the surgery obstruction is $\sigma (f,b)=1$, where $f:(M_{B}^{4k},\partial M_{B}^{4k})\rightarrow (D^{4k},S^{4k-1})$ is a map of degree 1 and $b:\nu _{M_{B}^{4k}}\rightarrow \xi$ is a bundle map from the stable normal bundle of the Milnor manifold to a certain stable vector bundle.

The plumbing theorem

A crucial theorem for the development of surgery theory is the so-called Plumbing Theorem[2] II.1.3 (presented here in the simply connected case): For all $k>1,l\in \mathbb {Z}$, there exists a 2k-dimensional manifold $M$ with boundary $\partial M$ and a normal map $(g,c)$ where $g:(M,\partial M)\rightarrow (D^{2k},S^{2k-1})$ is such that $g|_{\partial M}$ is a homotopy equivalence, $c$ is a bundle map into the trivial bundle, and the surgery obstruction is $\sigma (g,c)=l$. The proof of this theorem makes use of the Milnor manifolds defined above.
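To make the role of the plumbing tree concrete, one can record the intersection data of the plumbed manifold in matrix form (with a common choice of orientation conventions and one common labelling of the $E_{8}$-graph). The zero sections of the eight plumbed copies of $D(\tau _{S^{2k}})$ give a basis of the middle homology of $M_{B}^{4k}$; each diagonal entry of the intersection form is the Euler number $e(\tau _{S^{2k}})=\chi (S^{2k})=2$, and each off-diagonal 1 corresponds to an edge of the tree, i.e. to one plumbing:

$$E_{8}={\begin{pmatrix}2&1&0&0&0&0&0&0\\1&2&1&0&0&0&0&0\\0&1&2&1&0&0&0&0\\0&0&1&2&1&0&0&0\\0&0&0&1&2&1&0&1\\0&0&0&0&1&2&1&0\\0&0&0&0&0&1&2&0\\0&0&0&0&1&0&0&2\end{pmatrix}}$$

This form is positive definite of rank 8, which recovers the signature $sgn(M_{B}^{4k})=8$ quoted above.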
References

1. John Milnor, On simply connected 4-manifolds
2. William Browder, Surgery on simply-connected manifolds
3. Friedrich Hirzebruch, Thomas Berger, Rainer Jung, Manifolds and Modular Forms
4. Ib Madsen, R. James Milgram, The classifying spaces for surgery and cobordism of manifolds
5. Santiago López de Medrano, Involutions on Manifolds

• Browder, William (1972), Surgery on simply-connected manifolds, Springer-Verlag, ISBN 978-3-642-50022-0
• Milnor, John (1956), On simply connected 4-manifolds, Symposium Internacional de Topología Algebraica, México
• Hirzebruch, Friedrich; Berger, Thomas; Jung, Rainer (1994), Manifolds and Modular Forms, Springer-Verlag, ISBN 978-3-528-16414-0
• Madsen, Ib; Milgram, R. James (1979), The classifying spaces for surgery and cobordism of manifolds, Princeton University Press, ISBN 978-1-4008-8147-5
• López de Medrano, Santiago (1971), Involutions on Manifolds, Springer-Verlag, ISBN 978-3-642-65014-7
\begin{document} \large \title{Iteration of Polynomials $AX^d+C$ Over Finite Fields} \author{Rufei Ren} \address{University of Rochester, Department of Mathematics, Hylan Building, 140 Trustee Road, Rochester, NY 14627} \email{[email protected]} \date{\today} \keywords{Arithmetic dynamical system, Weil's ``Riemann Hypothesis''.} \maketitle \setcounter{tocdepth}{1} \tableofcontents \begin{abstract} For a polynomial $f(X)=AX^d+C \in \FF_p[X]$ with $A\neq 0$ and $d\geq 2$, we prove that if $d\;|\;p-1$ and $f^{\circ i}(0)\neq f^{\circ j}(0)$ for $0\leq i<j\leq N$, then $\#f^{\circ N}(\FF_p) \sim \frac{2p}{(d-1)N},$ where $f^{\circ N}$ is the $N$-th iteration of $f$. \end{abstract} \section{Introduction} We fix a prime $p$. For a polynomial $f \in\FF_p[X]$ we define its iterates $f^{\circ j}(X)$ by setting $f^{\circ 0}(X) = X$ and $f^{\circ (j+1)}(X) = f(f^{\circ j}(X))$. In this paper, we focus on polynomials of the form $f(X)=AX^d+C$ with $A\neq 0$ and $d\geq 2$. Our goal is to give a non-trivial upper bound for $\#f^{\circ N}(\FF_p)$. However, as mentioned in \cite{HB}, when $f(X)=X^3+1$ and $p\equiv 2\pmod 3$, $\#f^{\circ N}(\FF_p)$ achieves the trivial bound $p$. Therefore, in order to give a non-trivial bound for $\#f^{\circ N}(\FF_p)$, it is crucial to restrict $p$ to certain residue classes modulo $d$. More precisely, we obtain the following theorem. \begin{theorem}\label{main thm} Let $f(X) = AX^d+C\in \FF_p[X]$ with $A\neq 0$, $d\geq 2$ and $d\;|\;p-1$. Suppose that \begin{equation}\label{precondtion} f^{\circ i}(0)\neq f^{\circ j}(0)\quad \textrm{for}\quad 0\leq i<j\leq N. \end{equation} Then there exists an absolute constant $M$ (not depending on $d, p, A, C$) such that whenever \eqref{precondtion} holds, we have \begin{equation}\label{goal} \left|\#f^{\circ N}(\FF_p)- \mu_N\cdot p\right|\leq Md^{d^{6N}}\sqrt{p}, \end{equation} where $\mu_N$ is defined recursively by taking $\mu_0= 1$ and $$d\mu_r = 1 -(1-\mu_{r-1})^d.$$ Moreover, we have $$\mu_r\sim \frac{2}{(d-1)r}\quad \textrm{when} \quad r\to \infty.$$ \end{theorem} \begin{remark} (1) Note that \cite[Theorem 1]{HB} is the special case of Theorem~\ref{main thm} with $d=2$ and $p\neq 2$. (2) The condition that $d\;|\;p-1$ is essential for us to obtain the estimate of $\#f^{\circ N}(\FF_p)$ in Theorem~\ref{main thm}, simply because without it Lemma~\ref{first lemma} fails even though a similar $P_i$ can be defined. We do not know whether a similar result holds for the polynomial $AX^d+C$ when $d\nmid p-1$; we are certainly interested in this generalization. \end{remark}
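The recursion for $\mu_N$ is straightforward to evaluate, and the shape of the main term can be checked by direct computation. The following sketch (in Python; the parameters are an illustrative choice of ours, and for parameters this small the error term in \eqref{goal} is far from effective, so the comparison is only a heuristic sanity check, valid when \eqref{precondtion} holds for the chosen parameters) computes $\#f^{\circ N}(\FF_p)$ and $\mu_N\cdot p$ side by side.
\begin{verbatim}
# Illustrative numerical check of Theorem 1.1 (a sketch, not part of the proofs).
# f(X) = A*X^d + C over F_p with d | p - 1; compare #f^{(N)}(F_p) with mu_N * p.

def image_size(p, d, A, C, N):
    # size of the image of the N-th iterate of f on F_p
    values = set(range(p))
    for _ in range(N):
        values = {(A * pow(x, d, p) + C) % p for x in values}
    return len(values)

def mu(d, N):
    # mu_0 = 1 and d*mu_r = 1 - (1 - mu_{r-1})^d
    m = 1.0
    for _ in range(N):
        m = (1.0 - (1.0 - m) ** d) / d
    return m

p, d, A, C, N = 10007, 2, 1, 1, 10   # d | p - 1 since p is odd
print(image_size(p, d, A, C, N), mu(d, N) * p)
\end{verbatim}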
Theorem~\ref{main thm} has the following corollaries, whose proofs together with Theorem~\ref{main thm}'s will be given in \S~\ref{s3}. \begin{corollary}\label{cor1} Let $f(X) = AX^d + C \in \FF_p[X]$ with $A\neq 0$, $d\geq 2$ and $d\;|\;p-1$. Then there exists some $D_d>0$ depending only on $d$ (in particular not depending on $A$ and $C$) such that $f^{\circ i}(0)=f^{\circ j}(0)$ for some $i, j$ with $$i < j \leq D_{d}\frac{p}{\log\log p}.$$ \end{corollary} \begin{corollary}\label{cor2} Let $\widetilde{f}=\widetilde{A}X^d +\widetilde{C} \in \mathbb{Z}[X]$ be an integer polynomial with $\widetilde{A}, \widetilde{C} > 0$. For a prime $p$ we denote by $\widetilde{f}_p(X)$ the reduction of $\widetilde{f}$ in $\FF_p[X]$. Then there exists a constant $p_{\widetilde A, \widetilde C, d}$ such that for all primes $p \geq p_{\widetilde A, \widetilde C,d}$, the sum of the cycle lengths (resp. the lengths of pre-cyclic paths) of $\Gamma_{\widetilde{f}_p}$ is at most $\frac{21p\log d}{\log \log p}$ (resp. $\frac{28p\log d}{\log \log p}$), where $\Gamma_{\widetilde{f}_p}$ is the directed graph whose vertices are the elements of $\mathbb{F}_p$ and such that there is an arrow from $P$ to $\widetilde{f}_p(P)$ for each $P\in\mathbb{F}_p$. \end{corollary} The general line of the argument in this paper follows the one in \cite{HB} closely, but the fact that $f(x)-f(y)$ splits into $x-y$ and two or more factors (rather than just one factor in the case of quadratics: $(ax^2+c)-(ay^2+c) = a(x-y)(x+y)$) makes the accounting more complicated and necessitates a more involved combinatorial argument. More precisely, we need to introduce the second function $\eta$ into a graph as in Definitions~\ref{graph} and \ref{proper}, which makes our estimate of the number of $(r,k,d)$-trees in \eqref{1} coarser than the one in \cite{HB}. Fortunately, this estimate still gives us the upper bound that we need. Since this paper is essentially based on \cite{HB}, some lemmas are similar to the ones in \cite{HB}. However, for completeness, we still give their proofs. After submitting this paper, the author was told that Jamie Juul proves a result stronger than Theorem~\ref{main thm} in \cite{JJ} via Chebotarev's density theorem (a method completely different from the one used in the current paper), building on the paper \cite{JKMT} written by P\"ar Kurlberg, Kalyani Madhu, Tom Tucker and herself. \subsection*{Acknowledgment} First the author would like to thank Professor Ambrus P\'al for finding a referee and the referee for comments. The author also wants to thank Daqing Wan, Tom Tucker and Shenhui Liu for their valuable discussions. \section{Decomposition of the projective variety $\mathcal{C}_N$ into a union of absolutely irreducible curves} We assume that $d\;|\;p-1$ in the whole paper and fix a primitive $d$-th root of unity $\gamma\in \FF_p$. We drop the dependence of functions on $d$ when the context is clear. \begin{notation} We put $P_i(X,Y)=X-\gamma^i Y$ and $F^{\circ r}(X,Z)=Z^{d^r}f^{\circ r}(\frac{X}{Z})$. \end{notation} \begin{lemma}\label{first lemma} Under the assumption~\eqref{precondtion}, for every $0\leq r\leq N-1$ and $1\leq i\leq d-1$ the polynomials of the form $P_i(f^{\circ r}(X),f^{\circ r}(Y))$ are absolutely irreducible over $\FF_p$. \end{lemma} \begin{proof} It is enough to show that there is no non-zero solution to \begin{equation}\label{eq:8} \nabla\left(W^{D}P_i\left(f^{\circ r}(\frac{U}{W}),f^{\circ r}(\frac{V}{W})\right)\right)= \underline 0, \end{equation} where $D=d^r.$ Suppose that $(u,v,w)$ is a non-zero solution to \eqref{eq:8}. Then we have \begin{eqnarray}\label{eq1} & w^{D-1}(f^{\circ r})'(\frac{u}{w})=w^{D-1}\prod\limits_{j=0}^{r-1}d(f^{\circ j}(\frac{u}{w}))^{d-1}=0,\\ & w^{D-1}\gamma^i (f^{\circ r})'(\frac{v}{w})=w^{D-1}\gamma^i\prod\limits_{j=0}^{r-1}d(f^{\circ j}(\frac{v}{w}))^{d-1}=0,\\ & -uw^{D-2}(f^{\circ r})'(\frac{u}{w})+vw^{D-2}\gamma^i(f^{\circ r})'(\frac{v}{w})+Dw^{D-1}P_i\big(f^{\circ r}(\frac{u}{w}),f^{\circ r}(\frac{v}{w})\big)=0. \end{eqnarray} Assume that $w=0$. From the equalities above, there are $0\leq s\leq r-1$ and $0\leq t \leq r-1$ such that $F^{\circ s}(u,w)=F^{\circ t}(v,w)=0.$ Since $$F^{\circ s}(u,0) = A^{\frac{d^s-1}{d-1}}u^{d^s}\quad \textrm{and}\quad F^{\circ t}(v,0) = A^{\frac{d^t-1}{d-1}}v^{d^t},$$ we obtain that $u=v=w=0,$ which is excluded. Now we assume $w\neq 0$.
Then there exist $u, v \in \overline{\mathbb{F}}_p$ such that $$f^{\circ s}(u) = f^{\circ t}(v) = 0\textrm{~and~}f^{\circ r}(u)-\gamma^i f^{\circ r}(v) = 0.$$ If $s = t$, we have $$P_i(f^{\circ (r-s)}(0),f^{\circ (r-s)}(0))=(1-\gamma^i)f^{\circ (r-s)}(0)= 0,$$ which implies $f^{\circ (r-s)}(0) = 0 = f^{\circ 0}(0)$ with $1 \leq r-s \leq N,$ a contradiction to our assumption~\eqref{precondtion}. If $s\neq t$, we have $P_i(f^{\circ (r-s)}(0), f^{\circ (r-t)}(0))=0$, which implies $f^{\circ (r-s+1)}(0)=f^{\circ (r-t+1)}(0)$, again a contradiction to our assumption~\eqref{precondtion}. Therefore, we conclude that the polynomial $P_i(f^{\circ r}(X),f^{\circ r}(Y))$ is irreducible over the algebraic closure $\overline{\mathbb{F}}_p$ of $\mathbb{F}_p$ for every $r \leq N-1$. \end{proof} \begin{notation}\label{no1} We put \begin{equation*} \rho_r(m):=\#\{x\in \FF_p \;|\;f^{\circ r}(x)=m\}\quad \textrm{and}\quad \mathcal{W}(r,k):=\sum\limits_{m\in \FF_p} \rho_r(m)^k. \end{equation*} \end{notation} Note that $\mathcal{W}(r,k)$ plays an important role in counting $\#f^{\circ N}(\FF_p)$, as in the proof of Theorem~\ref{main thm}. As in \cite{HB}, for every $k\geq 0$, the function $\mathcal{W}(r,k)$ is the number of solutions to \begin{equation}\label{fr} f^{\circ r}(x_1) = \dots = f^{\circ r}(x_k)\textrm{~in~}\FF_p^k. \end{equation} For every $r\geq 0$ we define the projective variety \begin{equation}\label{C0} \mathcal{C}_r: X_0^{d^r}f^{\circ r}\left(\frac{X_1}{X_0}\right) = \dots =X_0^{d^r}f^{\circ r}\left(\frac{X_k}{X_0}\right). \end{equation} We put $$\Phi(X,Y;\ell,h):= \begin{cases} P_h(f^{\circ \ell}(X),f^{\circ \ell}(Y)),& \textrm{if~}\ell\geq 0\ \textrm{and}\ 1\leq h\leq d-1,\\ X - Y,& \textrm{if~}\ell = -1\ \textrm{and}\ h=0. \end{cases}$$ Clearly, we have $$f^{\circ r}(X)-f^{\circ r}(Y)=A^{r}(X-Y)\prod_{\ell=0}^{r-1}\prod_{h=1}^{d-1} \Phi(X,Y; \ell,h).$$ For every solution $(x_1,\dots,x_k)$ to \eqref{fr} and every pair of distinct indices $1 \leq i\neq j \leq k$, if $x_i=x_j$, we put $\ell(x_i,x_j):=-1$ and $h(x_i,x_j):=0$; otherwise, we put $\ell(x_i,x_j)$ to be the smallest integer $\ell \in \{0, 1,\dots , r - 1\}$ such that $$\Phi(x_i, x_j ; \ell, h) = 0\quad \textrm{for some~} 1\leq h\leq d-1,$$ and denote $h(x_i,x_j):=h$. Since $$\Phi(x_i,x_j;\ell(x_i,x_j),h)=f^{\circ \ell(x_i,x_j)}(x_i)-\gamma^hf^{\circ \ell(x_i,x_j)}(x_j)=0$$ holds for a unique $1\leq h\leq d-1$, we know that $h(x_i,x_j)$ is well-defined. By the definitions of $\ell(\cdot,\cdot)$ and $h(\cdot,\cdot)$, we have \begin{equation} \begin{cases} \ell(x_i,x_j)=\ell(x_j,x_i)=-1 \textrm{~and~} h(x_j,x_i)= h(x_i,x_j)=0, \textrm{~or}\\ \ell(x_i,x_j)=\ell(x_j,x_i)\geq 0 \textrm{~and~} h(x_j,x_i)+ h(x_i,x_j)=d. \end{cases} \end{equation}
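As an illustration of this bookkeeping (a worked special case of the definitions above): when $d=2$ we have $\gamma=-1$ and $P_1(X,Y)=X+Y$, so the factorization reads $$f^{\circ 2}(X)-f^{\circ 2}(Y)=A^{2}(X-Y)(X+Y)\big(f(X)+f(Y)\big).$$ Thus for a solution of \eqref{fr} with $r=2$ and $x_1\neq x_2$, either $x_1+x_2=0$, in which case $\ell(x_1,x_2)=0$ and $h(x_1,x_2)=1$, or else $f(x_1)+f(x_2)=0$, in which case $\ell(x_1,x_2)=1$ and $h(x_1,x_2)=1$.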
\begin{notation} For a graph $G$ with $k$ vertices, we denote by $\mathcal{V}_G$ and $\mathcal{E}_G$ the sets of $G$'s vertices and edges, respectively. \end{notation} \begin{definition}\label{graph} An \emph{$(r, k,d)$-graph} is a graph $G$ with $\mathcal{V}_G=\{1,2,\dots,k\}$ such that each of its edges $\overline{ab}$ is associated with two functions $\xi_G$ and $\eta_G$ on the ordered pairs $(a, b)$ and $(b, a)$ satisfying the following. \begin{itemize} \item $\mathrm{Range}(\xi_G)=\{-1,\dots, r\}$ and $\mathrm{Range}(\eta_G)=\{0,1,\dots,d-1\}$. \item When $\xi_G(a,b)=-1$, we have $\eta_G(b,a)=0.$ \item When $\xi_G(a,b)\geq 0$, we have $\eta_G(b,a)\in \{1,\dots,d-1\}.$ \item $\xi_G(a,b)=\xi_G(b,a) \textrm{~and~} \eta_G(b,a)+\eta_G(a,b)\equiv 0\pmod d.$ \end{itemize} If there exists at least one edge $\overline{ab}$ in $G$ such that $\xi_G(a,b)=r$, we call $G$ a \emph{strict $(r,k,d)$-graph}. If for every pair of vertices in $G$ there is an edge connecting them, we call $G$ a \emph{complete $(r, k,d)$-graph}. \end{definition} \begin{definition}\label{proper} Let $G$ be an $(r, k, d)$-graph. We call $G$ \emph{proper} if for all distinct vertices $a, b$ and $c$ such that $\overline{ab}$, $\overline{ac}$ and $\overline{bc}$ all belong to $\mathcal{E}_G$, we have the following. \begin{enumerate} \item If $\xi_G(a, b) = \xi_G(b, c)=-1$, then $\xi_G(a,c)=-1$. \item If $\xi_G(a, b) < \xi_G(b, c)$, then $\xi_G(a, c)=\xi_G(b, c)$ and $\eta_G(a, c)=\eta_G(b, c)$. \item If $0\leq \xi_G(a, b) = \xi_G(b, c)$ and $\eta_G(a,b) +\eta_G(b,c)\neq d$, then $\xi_G(a,c)=\xi_G(a,b)$ and $\eta_G(a,c)\equiv \eta_G(a,b) + \eta_G(b,c)\pmod d$. \item If $0\leq \xi_G(a, b) = \xi_G(b, c)$ and $\eta_G(a,b) +\eta_G(b,c)= d$, then $\xi_G(a,c)<\xi_G(a,b) = \xi_G(b,c)$. \end{enumerate} \end{definition} \begin{lemma}\label{pro1} Let $G_{\underline x}$ be the complete $(r-1, k,d)$-graph associated to a solution $\underline x=(x_1,\dots,x_k)$ to \eqref{fr} with $\xi_{G_{\underline x}}(a,b):=\ell(x_a,x_b)$ and $\eta_{G_{\underline x}}(a,b):=h(x_a,x_b)$ for every $\overline{ab}\in \mathcal{E}_{G_{\underline x}}.$ Then $G_{\underline x}$ is proper. \end{lemma} \begin{proof} One can check that $G_{\underline x}$ satisfies all the conditions in Definition~\ref{proper}. \end{proof} We list the following properties of an $(r,k,d)$-graph $G$. \begin{lemma}\label{div} Let $G$ be a complete proper strict $(r,k,d)$-graph with $r\geq 0$. Then there is a unique partition $\{A_i\}_{i=1}^t$ of $\mathcal{V}_G$ such that if $a\in A_i$ and $a'\in A_j$ are two arbitrary vertices of $G$, then we have \[\begin{cases} \xi_G(a, a') < r & \textrm{if~} i= j;\\ \xi_G(a, a') = r & \textrm{if~} i\neq j. \end{cases}\] Moreover, this $t$ satisfies $2\leq t\leq d$. \end{lemma} \begin{proof} Let $a_0$ be an arbitrary vertex of $G$. We put $$B_0:=\{b\;|\; b\in \mathcal{V}_G\ \textrm{such that}\ \xi_G(a_0,b)<r\}\cup \{a_0\}$$ and $$B_j:=\{b\;|\; b\in \mathcal{V}_G\ \textrm{such that}\ \xi_G(a_0, b)=r\ \textrm{and}\ \eta_G(a_0,b)=j\}$$ for every $1\leq j\leq d-1$. Relabeling the non-empty sets among $\{B_j\;|\; 0\leq j\leq d-1\}$, we obtain $\{A_i\;|\; 1\leq i\leq t\}$. By Definition~\ref{proper}, we know that the partition $\{A_i\;|\; 1\leq i\leq t\}$ satisfies all the properties required in this lemma, and that it is independent of the choice of the starting vertex $a_0$. \end{proof} \begin{definition}\label{inductive definition} \noindent \begin{enumerate} \item Let $G_0$ be a proper $(r,k,d)$-graph. Assume that there are three distinct vertices $a,$ $b$ and $c$ in $G_0$ such that the edges $\overline{ab}$ and $\overline{bc}$ belong to $\mathcal{E}_{G_0}$ but $\overline{ac}$ does not, and that the functions $\xi_{G_0}(\cdot,\cdot)$ and $\eta_{G_0}(\cdot,\cdot)$ satisfy one of the following.
\begin{enumerate} \item[(i)] $\xi_{G_0}(a,b) = \xi_{G_0}(b,c) = -1$; \item[(ii)] $0\leq \xi_{G_0}(a,b) = \xi_{G_0}(b,c) \textrm{~and~} \eta_{G_0}(a,b)+\eta_{G_0}(b,c)\neq d;$ \item[(iii)] $-1\leq \xi_{G_0}(a,b) < \xi_{G_0}(b,c).$ \end{enumerate} We write $G$ for the $(r,k,d)$-graph generated from ${G_0}$ by adding an extra edge $\overline{ac}$ and putting \begin{itemize} \item[(i')] $\xi_{G}(a,c): = -1$ and $\eta_{G}(a,c):=0$ in case (i); \item[(ii')] $\xi_{G}(a,c):= \xi_{G_0}(a,b) = \xi_{G_0}(b,c)$ and $\eta_{G}(a,c)$ to be the integer in $\{1,\dots,d-1\}$ which is congruent to $\eta_{G_0}(a,b)+\eta_{G_0}(b,c)$ modulo $d$ in case (ii); \item[(iii')] $\xi_{G}(a,c):= \xi_{G_0}(b,c)$ and $\eta_{G}(a,c):= \eta_{G_0}(b,c)$ in case (iii). \end{itemize} If $G$ is also proper, then we say that ${G_0}$ \emph{generates} $G$. \item More generally, for two proper $(r,k,d)$-graphs $G_0$ and $G$, if there is a chain of proper $(r,k,d)$-graphs $G_0,G_1,\dots,G_s:=G$ such that for every $0\leq h\leq s-1$ the graph $G_{h+1}$ is generated from $G_h$ by adding one edge as in (1), then we also say that $G_0$ \emph{generates} $G$. Moreover, if $G$ cannot generate a bigger proper $(r,k,d)$-graph as in (1), we call $G$ a \emph{maximal extension} of $G_0$. \end{enumerate} \end{definition} \begin{definition} For two $(r,k,d)$-graphs $G_0$ and $G$, if $\mathcal{E}_{G_0}\subset\mathcal{E}_{G}$ and for every edge $\overline{ab}\in \mathcal{E}_{G_0}$ we have $\xi_{G_0}(a,b)=\xi_{G}(a,b)$ and $\eta_{G_0}(a,b)=\eta_{G}(a,b)$, then we call $G_0$ a \emph{subgraph} of $G$. \end{definition} \begin{lemma}\label{welldefined} Every subgraph $G_0$ of a complete proper $(r,k,d)$-graph $G$ is proper and has a unique maximal extension. \end{lemma} \begin{proof} Since $G$ is proper, we know that $G_0$ is also proper. Let $G'$ be a maximal extension of $G_0$ with a chain of proper $(r,k,d)$-graphs $G_0, G_1,$ $\dots, G_s:=G'$ as in Definition~\ref{inductive definition}(1). Since $G_0$ is a subgraph of $G$, we can inductively prove that $G_h$ is a subgraph of $G$ for every $0\leq h\leq s$. Let $G''$ be another maximal extension of $G_0$ and let $h_0$ be the smallest index such that $G_{h_0}$ is not a subgraph of $G''$. Let $\overline{ab}$ be the edge that we add to $G_{h_0-1}$ to obtain $G_{h_0}$. Since $G''$ is also a subgraph of $G$, by Definition~\ref{inductive definition}(1) we can add $\overline{ab}$ to $G''$ as well, which leads to a contradiction to $G''$ being a maximal extension of $G_0$. \end{proof} \begin{definition}\label{def:tree} Let $G$ be a proper $(r, k,d)$-graph. \begin{enumerate} \item We call a chain of edges $\{\overline{a_i a_{i+1}}\}_{i=0}^{s-1}$ (i.e., with $a_i\neq a_j$ for all $i,j\in \{0,1,\dots,s\}$ such that $i\neq j$) \emph{potentially complete} in $G$ if there exists $0\leq u\leq s-1$ such that \begin{enumerate} \item $\xi_G({a_0,a_1} )\leq\cdots\leq \xi_G({a_{u},a_{u+1}})\geq \cdots \geq \xi_G({a_{s-1},a_{s}})$ with no consecutive equalities in this chain of inequalities. \item If $\xi_G(a_{i-1}, a_{i})=\xi_G(a_i, a_{i+1})\geq 0$, then we have $\eta_G(a_{i-1}, a_{i})+\eta_G(a_{i}, a_{i+1})\not\equiv 0\pmod d.$ \end{enumerate} \item We call $G$ an \emph{$(r, k,d)$-tree} if it contains no loop and for every two vertices $a$ and $b$ the unique chain connecting $a$ and $b$ is potentially complete in $G$. \item If $G$ is an $(r, k,d)$-tree, we denote by $\mathrm{Ch}_{G}(a,b)$ the unique chain in $G$ connecting the vertices $a$ and $b$. When $a=b$, we put $\mathrm{Ch}_{G}(a,a):=\{a\}$.
\end{enumerate} \end{definition} \begin{lemma}\label{le:1} Let $k\geq 2$, and let $G$ be an $(r,k,d)$-tree. Assume that $\overline{aa'}\in \mathcal{E}_G$ satisfies \begin{equation}\label{eq:9} \xi_{G}(a,a')=\max\{\xi_{G}(b,b')\;|\; \overline{bb'}\in \mathcal{E}_G \}. \end{equation} Then for every vertex $a_0$ in $G$, if we put $\mathrm{Ch}_{G}(a_0,{a}):=\{\overline{a_ia_{i+1}}\}_{i=0}^{s}$, where $a_{s+1}=a$, then the sequence $\{\xi_{G}(a_i,a_{i+1})\}_{i=0}^{s}$ is non-decreasing. \end{lemma} \begin{proof} \noindent\textbf{Case I.} When $a_s=a'$. Since $G$ is an $(r,k,d)$-tree, we know that $\mathrm{Ch}_{G}(a_0,{a})$ is potentially complete. Combined with \eqref{eq:9}, this implies that $\{\xi_{G}(a_i,a_{i+1})\}_{i=0}^{s-1}$ is non-decreasing. \noindent\textbf{Case II.} When $a_s\neq a'$. By Definition~\ref{def:tree}(2), we know that $\mathrm{Ch}_{G}(a_0,{a'})=\mathrm{Ch}_{G}(a_0,{a})\cup \overline{aa'}$. Replacing $a$ in \textbf{Case I} by $a'$ and putting $a_{s+1}:=a'$, we know that $\{\xi_{G}(a_i,a_{i+1})\}_{i=0}^{s}$ is non-decreasing, which completes the proof of this case. \end{proof} \begin{lemma}\label{construction} For every complete proper $(r,k,d)$-graph $G$ there exists an $(r,k,d)$-tree $G_0$ which generates $G$. \end{lemma} \begin{proof} When $k=1$ the result is trivial for every $r\geq -1$. Now assume that it holds for every $m\leq k$ and $r\geq -1$. For $m=k+1$, without loss of generality, we assume that $G$ is a complete proper strict $(r,k+1,d)$-graph. If $r=-1$, we choose an arbitrary vertex $a$ in $G$. Connecting $a$ to every other vertex in $G$, we obtain a proper $(-1,k+1,d)$-graph $G_0$. Clearly, $G_0$ is a $(-1,k+1,d)$-tree with the unique maximal extension $G$. Now assume $r\geq 0$. By Lemma~\ref{div}, we obtain a partition $\{A_i\}_{i=1}^t$ of $\mathcal{V}_G$ such that \begin{enumerate} \item[(i)] $|A_i|\leq k$ for every $1\leq i\leq t$. \item[(ii)] For every $a\in A_i$ and $b\in A_j$ we have \[\begin{cases} \xi_{G}(a, b) < r&\textrm{if~} i= j,\\ \xi_{G}(a, b) = r& \textrm{if~} i\neq j. \end{cases}\] \end{enumerate} By induction, for every $1\leq i\leq t$ we can construct an $(r-1,|A_i|,d)$-tree $G_{i,0}$ which generates the restriction of $G$ to $A_i$. Now we determine a representative $a_i$ for each $A_i$ as follows. \begin{itemize} \item If $|A_i|=1$, we put $a_i$ to be the unique vertex in $A_i$. \item If $|A_i|\geq 2$, we put $a_i$ to be a vertex in $A_i$ such that $$\xi_G(a_i,a_i')=\max\{\xi_G(a,b)\;|\; a, b\in A_i \}$$ for some other vertex ${a_i'}\in A_i$. \end{itemize} We denote by $G_0$ the subgraph of $G$ such that $$\mathcal{E}_{G_0}=\bigcup_{i=1}^t \mathcal{E}_{G_{i,0}}\cup \{\overline{a_1a_i}\;|\;2\leq i\leq t\}.$$ Now we prove that $G_0$ is an $(r,k+1,d)$-tree. Since $G_0$ contains no loop, it is enough to prove that for every two vertices $a$ and $b$ the chain $\mathrm{Ch}_{G_0}(a,b)$ is potentially complete. Let $a\in A_i$ and $b\in A_j$ be two distinct vertices of $G$. \noindent\textbf{Case I.} When $i=j$. From $\mathrm{Ch}_{G_{0}}(a,b)=\mathrm{Ch}_{G_{i,0}}(a,b)$, we know that $\mathrm{Ch}_{G_0}(a,b)$ is potentially complete. \noindent\textbf{Case II.} When $i\neq j$ and one of $i$ and $j$ is equal to $1$. Without loss of generality, we assume $i=1$. By the construction of $G_0$, we know that $$\mathrm{Ch}_{G_0}(a,b)=\mathrm{Ch}_{G_{1,0}}(a,{a_1})\cup \overline{a_1a_j}\cup \mathrm{Ch}_{G_{j,0}}({a_j},{b}).
$$ By Lemma~\ref{le:1}, we know that $\xi_G$ is non-decreasing along $\mathrm{Ch}_{G_{1,0}}(a,a_1)$ and non-increasing along $\mathrm{Ch}_{G_{j,0}}({a_j},{b})$. Combined with (ii), this implies that $\mathrm{Ch}_{G_0}(a,b)$ is potentially complete. \noindent\textbf{Case III.} When $i\neq j$, $i\neq 1$ and $j\neq 1$. From the construction of $G_0$, we know that $$\mathrm{Ch}_{G_0}(a,b)=\mathrm{Ch}_{G_{i,0}}(a,{a_i})\cup \overline{a_ia_1}\cup \overline{a_1a_j}\cup \mathrm{Ch}_{G_{j,0}}({a_j},{b}).$$ From $\eta_G(a_1,a_i)\neq \eta_G(a_1,a_j)$, we have $$\eta_G(a_i,a_1)+ \eta_G(a_1,a_j)\equiv -\eta_G(a_1,a_i)+ \eta_G(a_1,a_j)\not\equiv 0\pmod d.$$ Similarly to the argument in \textbf{Case II}, we see that $\mathrm{Ch}_{G_0}(a,b)$ is potentially complete. We are left to show that $G_0$ generates $G$. For any vertices $a$ and $b$, since $\mathrm{Ch}_{G_0}(a,b)$ is potentially complete, using Definition~\ref{inductive definition}(1) inductively on this chain, we generate a graph $G'$ from $G_0$ such that $G'$ is a subgraph of $G$ and $G'$ contains $\overline{ab}$. By Lemma~\ref{welldefined}, $G_0$ has a unique maximal extension. Since $a$ and $b$ are arbitrarily chosen, we know that this maximal extension is exactly $G$, which finishes the proof. \end{proof} \begin{notation} We associate to a proper $(r,k,d)$-graph $G$ a projective variety \begin{equation}\label{eq:CG0} \mathcal{C}_G:\Phi(X_a,X_b,X_0;\xi_G(a,b),\eta_G(a,b))=0\quad\textrm{for every~ } \overline{ab}\in \mathcal{E}_G, \end{equation} where $$\Phi(X,Y,Z;\ell,h)=\begin{cases} X-Y,& \textrm{when} \ \ell=-1;\\ Z^{d^\ell}\Phi(\frac{X}{Z},\frac{Y}{Z};\ell,h),& \textrm{when} \ \ell\geq 0. \end{cases}$$ \end{notation} Note that \begin{equation}\label{eq:ab} \Phi(X_a,X_b,X_0;\xi_G(a,b),\eta_G(a,b))=-\gamma^{\eta_G(a,b)}\Phi(X_b,X_a,X_0;\xi_G(b,a),\eta_G(b,a)). \end{equation} Hence the variety $\mathcal{C}_G$ is defined independently of the order of $a$ and $b$. \begin{lemma}\label{lemma1} For every $r\geq 0$, if $G$ is a complete proper $(r-1, k,d)$-graph, then $\mathcal{C}_G$ is a subvariety of $\mathcal{C}_r$. \end{lemma} \begin{proof} Let $\underline x$ be an arbitrary point on the variety $\mathcal{C}_G$. Since $G$ is complete, for every two vertices $a$ and $b$ we have $\Phi(x_a,x_b,x_0;\xi_G(a,b),\eta_G(a,b))=0$. Combined with $\xi_G(a,b)\leq r-1$, this implies $x_0^{d^r}f^{\circ r}(x_a)=x_0^{d^r}f^{\circ r}(x_b),$ which completes the proof. \qedhere \end{proof} \begin{lemma}\label{dif} For two complete proper $(r, k,d)$-graphs $G_1$ and $G_2$, if $G_1\neq G_2$, then $\mathcal{C}_{G_1}\neq \mathcal{C}_{G_2}$. \end{lemma} \begin{proof} Suppose this lemma is false. Then there exist two complete proper $(r, k,d)$-graphs $G_1$ and $G_2$ such that $G_1\neq G_2$ and $\mathcal{C}_{G_1}=\mathcal{C}_{G_2}$. Now we have the following two cases. \textbf{Case I}. There exists an edge $\overline{ab}\in \mathcal{E}_{G_1}$ such that $\ell_1:=\xi_{G_1}(a,b)>\ell_2:=\xi_{G_2}(a,b)$. Consider the graph $G_2$. We have $f^{\circ \ell_2}(X_a)=\gamma^{\eta_{G_2}(a,b)}f^{\circ \ell_2}(X_b)$, which implies $f^{\circ \ell_1}(X_a)=f^{\circ \ell_1}(X_b)$. From $\mathcal{C}_{G_1}=\mathcal{C}_{G_2}$, we obtain $\xi_{G_1}(a,b)<\ell_1$, a contradiction. \textbf{Case II}.
There exists an edge $\overline{ab} \in \mathcal{E}_{G_1}$ such that $$\xi_{G_1}(a,b)=\xi_{G_2}(a,b), \ \textrm{but}\ \eta_{G_1}(a,b)\neq \eta_{G_2}(a,b).$$ Denote $\ell:=\xi_{G_1}(a,b),$ $h_1:=\eta_{G_1}(a,b)$ and $h_2:=\eta_{G_2}(a,b).$ Then we have $f^{\circ \ell}(X_a)=\gamma^{h_1}f^{\circ \ell}(X_b)$ and $f^{\circ \ell}(X_a)=\gamma^{h_2}f^{\circ \ell}(X_b)$, which implies $f^{\circ \ell}(X_a)=f^{\circ \ell}(X_b)=0$, a contradiction to $\xi_{G_1}(a,b)=\ell$. \end{proof} Recall that we defined $\mathcal{W}(r,k)$ in Notation~\ref{no1}. \begin{lemma}\label{decomposition} For every $r\geq 0$ we have $$\mathcal{W}(r,k)+(p-1)\gcd(p-1,d^r)^{k-2}=\#\Big(\bigcup_{G} \mathcal{C}_G(\FF_p)\Big),$$ where the union runs over all complete proper $(r-1, k,d)$-graphs. \end{lemma} \begin{proof} By Lemmas~\ref{pro1} and \ref{lemma1}, we have \begin{multline*} \#\Big(\bigcup_{G} \mathcal{C}_G(\FF_p)\Big)= \#\mathcal{C}_r(\FF_p)=\#\{\underline x\in \mathcal{C}_r(\FF_p)\;|\; x_0\neq0\}+\#\{\underline x\in \mathcal{C}_r(\FF_p)\;|\; x_0=0\}\\ =\mathcal{W}(r,k)+\#\{(0,x_1,\dots,x_k)\;|\; x_1^{d^r}=\cdots=x_k^{d^r}\}\\ =\mathcal{W}(r,k)+(p-1)\gcd(p-1,d^r)^{k-2}. \qedhere \end{multline*} \end{proof} For a complete proper $(r-1, k,d)$-graph $G$, in order to estimate $\#\mathcal{C}_G(\FF_p)$, we need the following key proposition. \begin{proposition}\label{key proposition} Under the assumption~\eqref{precondtion}, for every complete proper $(N-1,k,d)$-graph $G$, the variety $\mathcal{C}_G$ is an absolutely irreducible curve over $\FF_p$ with degree at most $d^{(k-1)(N-1)}$. \end{proposition} We will give its proof after several lemmas. \begin{lemma}\label{complete and tree} Let $G$ be a complete proper $(r,k,d)$-graph and let $G_0$ be an $(r,k,d)$-tree which generates $G$. Then $G$ and $G_0$ correspond to the same projective variety. \end{lemma} \begin{proof} It is enough to show that $$\Phi(X_a,X_b,X_0;\xi_G(a,b),\eta_G(a,b))=0\quad \textrm{and}\quad \Phi(X_b,X_c,X_0;\xi_G(b,c),\eta_G(b,c))=0$$ imply $\Phi(X_a,X_c,X_0;\xi_G(a,c),\eta_G(a,c))=0$ whenever $a$, $b$ and $c$ satisfy one of the three cases in Definition~\ref{inductive definition}(1). In case (i), we know that $X_a=X_b$ and $X_b=X_c$, which imply $X_a=X_c$. In case (ii), we put $$\ell:=\xi_G(a,b)=\xi_G(b,c),\ h_1:=\eta_G(a,b) \textrm{~and~} h_2:=\eta_G(b,c).$$ Then we have $$f^{\circ \ell}(X_a)-\gamma^{h_1}f^{\circ \ell}(X_b)=0\quad\textrm{and}\quad f^{\circ \ell}(X_b)-\gamma^{h_2}f^{\circ \ell}(X_c)=0,$$ which together imply \begin{equation}\label{2} f^{\circ \ell}(X_a)-\gamma^{h_1+h_2}f^{\circ \ell}(X_c)=0. \end{equation} Since $\gamma$ is a primitive $d$-th root of unity, the equality~\eqref{2} is exactly what $\xi_G(a,c)=\ell$ and $\eta_G(a,c)\equiv h_1+h_2\pmod d$ imply. In case (iii), we put $$\ell_1:=\xi_G(a,b),\ \ell_2:=\xi_G(b,c),\ h_1:=\eta_G(a,b)\textrm{~and~}h_2:=\eta_G(b,c).$$ Then we have \begin{equation}\label{11} f^{\circ \ell_1}(X_a)=\gamma^{h_1}f^{\circ \ell_1}(X_b) \end{equation} and \begin{equation}\label{12} f^{\circ \ell_2}(X_b)-\gamma^{h_2}f^{\circ \ell_2}(X_c)=0. \end{equation} Recall the condition $\ell_1<\ell_2$ in (iii). Applying $f^{\circ (\ell_2-\ell_1)}$ to both sides of \eqref{11}, we obtain $$f^{\circ \ell_2}(X_a)=f^{\circ \ell_2}(X_b).$$ Combined with \eqref{12}, this implies $$f^{\circ \ell_2}(X_a)-\gamma^{h_2}f^{\circ \ell_2}(X_c)=0,$$ which is exactly the equality obtained from $\xi_G(a,c)=\ell_2$ and $\eta_G(a,c)=h_2$.
\end{proof} \begin{lemma}\label{key emma} Under the assumption~\eqref{precondtion}, the variety $\mathcal{C}_{G_0}$ associated to an $(N-1,k,d)$-tree $G_0$ is a nonsingular complete intersection. Hence, $\mathcal{C}_{G_0}$ is an absolutely irreducible curve over $\FF_p$, with degree at most $d^{(k-1)(N-1)}$. \end{lemma} \begin{proof} To prove that $\mathcal{C}_{G_0}$ is a nonsingular complete intersection, we need to show that the vectors in the set $\{\nabla\Phi(x_a, x_b, x_0; \xi_{G_0}(a,b),\eta_{G_0}(a,b))\}_{\overline{ab}\in \mathcal{E}_{G_0}}$ are linearly independent at every point $\underline x$ of $\mathcal{C}_{G_0}$. Suppose to the contrary that \begin{equation}\label{linear} \sum_{\overline{ab}\in \mathcal{E}_{G_0}}c_{ab}\nabla\Phi(x_a, x_b, x_0; \xi_{G_0}(a,b),\eta_{G_0}(a,b)) = \underline 0 \end{equation} for some $\underline x\in \mathcal{C}_{G_0}$ and some non-zero vector $\underline{c}\in \overline{\mathbb{F}}_p^{k-1}.$ We put $c_{ba}:=-\gamma^{\eta_{G_0}(a,b)}c_{ab}$. By \eqref{eq:ab}, we can freely swap $a$ and $b$ in \eqref{linear} without changing this equality. We put $G'$ to be the subgraph of $G_0$ consisting of the edges $\overline{ab}\in \mathcal{E}_{G_0}$ such that $c_{ab}\neq 0$. Let $\mathrm{CH}=\{\overline{a_ia_{i+1}}\;|\; 0\leq i\leq s-1, s\geq 1\}$ be an arbitrary maximal chain in $G'$. (Here ``maximal'' means that the chain cannot be extended further in $G'$; it need not be the longest one.) Clearly, $\mathrm{CH}$ satisfies the following. \begin{enumerate} \item $c_{a_ia_{i+1}}\neq 0$ for every $0\leq i\leq s-1$. \item There is no vertex $b\neq {a_1}$ such that $\overline{ba_0}\in \mathcal{E}_{G_0}$ and $c_{a_0b}\neq 0$. \item There is no vertex ${b'}\neq {a_{s-1}}$ such that $\overline{b'a_s}\in \mathcal{E}_{G_0}$ and $c_{a_sb'}\neq 0$. \end{enumerate} Moreover, since $G_0$ is an $(N-1,k,d)$-tree, $\mathrm{CH}$ is potentially complete. We put $L:=\max\limits_{0\leq i\leq s-1}\{\xi_{G_0}(a_i,a_{i+1}) \}$. From \eqref{linear} and property (2) of $\mathrm{CH}$, we have \begin{equation}\label{eq:2} (\partial/\partial x_{a_0})\Phi(x_{a_0}, x_{a_1}, x_0; \xi_{G_0}(a_0,a_1),\eta_{G_0}(a_0,a_1))=0, \end{equation} which forces \begin{equation}\label{xi} \xi_{G_0}(a_0,a_1)\geq 0, \end{equation} since otherwise the $x_{a_0}$-component of the left-hand side of \eqref{linear} would satisfy \begin{multline*} \sum_{\overline{ab}\in \mathcal{E}_{G_0}}c_{ab}\,(\partial/\partial x_{a_0})\Phi(x_a, x_b, x_0; \xi_{G_0}(a,b),\eta_{G_0}(a,b))\\ =c_{a_0a_1}(\partial/\partial x_{a_0})\Phi(x_{a_0}, x_{a_1}, x_0; \xi_{G_0}(a_0,a_1),\eta_{G_0}(a_0,a_1))\\ =c_{a_0a_1}(\partial/\partial x_{a_0})(x_{a_0}-x_{a_1})=c_{a_0a_1}\neq 0. \end{multline*} From \eqref{xi}, we can write \eqref{eq:2} explicitly as $$(Ad)^{\xi_{G_0}(a_0,a_1)} \prod_{i=0}^{\xi_{G_0}(a_0,a_1)-1}(F^{\circ i}(x_{a_0},x_0))^{d-1} =0,$$ which implies $F^{\circ j_0}(x_{a_0},x_0) = 0$ for some index $0 \leq j_0 \leq \xi_{G_0}(a_0,a_1) - 1\leq L-1$. Similarly, from property (3) of $\mathrm{CH}$, we have $$\xi_{G_0}(a_{s-1},a_s)\geq 0,$$ which implies that there exists $0\leq j_s \leq L-1$ such that $F^{\circ j_s}(x_{a_s},x_0) = 0$. Since $\mathrm{CH}$ is potentially complete in $G_0$, the inequalities $\xi_{G_0}(a_0,a_{1})\geq 0$ and $\xi_{G_0}(a_{s-1},a_{s})\geq 0$ imply $\xi_{G_0}(a_i,a_{i+1})\geq 0$ for all $1\leq i\leq s-1$. We next show that $x_0$ cannot vanish.
If, on the contrary, we had $x_0 = 0$, then the relation $F^{\circ j_0}(x_{a_0},x_0) = 0$ would yield $x_{a_0} = 0.$ In general, if $x_{a_0} = x_0 = 0,$ then for any vertex ${a'}$ such that $\overline{a_0a'}\in \mathcal{E}_{G_0}$, the relation $$\Phi(x_{a_0},x_{a'}, x_0; \xi_{G_0}(a_0,a'), \eta_{G_0}(a_0,a')) = 0$$ implies $x_{a'}= 0.$ Since $G_0$ is connected, we would have $x_a=0$ for all $a\in \mathcal{V}_{G_0}$, which is impossible. We may therefore assume that $x_0 = 1$, which takes us back to the affine situation, i.e. \begin{equation}\label{eq:3} f^{\circ \xi_{G_0}(a_i,a_{i+1})}(x_{a_i})=\gamma^{\eta_{G_0}(a_i,a_{i+1})}f^{\circ \xi_{G_0}(a_i,a_{i+1})}(x_{a_{i+1}}) \textrm{~for every~}0\leq i\leq s-1, \end{equation} and \begin{equation}\label{eq:1} f^{\circ j_0}(x_{a_0})=0 \textrm{~and~} f^{\circ j_s}(x_{a_s})=0 \textrm{~with~} 0\leq j_0\leq L-1\ \textrm{and}\ 0\leq j_s \leq L-1. \end{equation} Based on the number of indices $0\leq i\leq s-1$ such that $\xi_{G_0}(a_i,a_{i+1})=L$, we have the following two cases. \noindent\textbf{Case I.} When there is a unique index $u$ in $\{0,\dots, s-1\}$ such that $\xi_{G_0}(a_u,a_{u+1})=L$. Combined with \eqref{xi}, this shows that for every $0\leq i\leq u-1$ we have $0\leq \xi_{G_0}(a_i,a_{i+1})\leq L-1$. Together with \eqref{eq:3}, this implies $f^{\circ L}(x_{a_i})=f^{\circ L}(x_{a_{i+1}})$, and hence \begin{equation}\label{eq:4} f^{\circ L}(x_{a_0})=f^{\circ L}(x_{a_{u}}). \end{equation} Similarly, we have $f^{\circ L}(x_{a_{s}})=f^{\circ L}(x_{a_{u+1}}).$ Combining it with \eqref{eq:3} for $i=u$ and \eqref{eq:4}, we have $$f^{\circ L}(x_{a_0})=\gamma^{\eta_{G_0}(a_u,a_{u+1})}f^{\circ L}(x_{a_{s}}).$$ Together with \eqref{eq:1}, this equality implies \begin{equation}\label{eq:5} f^{\circ (L-j_0)}(0)=\gamma^{\eta_{G_0}(a_u,a_{u+1})}f^{\circ (L-j_s)}(0). \end{equation} If $j_0=j_s$, then since $\gamma^{\eta_{G_0}(a_u,a_{u+1})}\neq 1$, we have $f^{\circ (L-j_0)}(0)=0$. Combined with $L\leq N-1$, this contradicts our assumption~\eqref{precondtion}. Now assume $j_0\neq j_s$. Without loss of generality, we assume $j_0<j_s$. From \eqref{eq:5}, we have $f^{\circ (L-j_0+1)}(0)=f^{\circ (L-j_s+1)}(0),$ and hence $f^{\circ ( L+1)}(0)=f^{\circ (L-j_s+j_0+1)}(0),$ which contradicts our assumption~\eqref{precondtion}. \noindent\textbf{Case II.} When there is an index $u$ in $\{0,\dots, s-2\}$ such that $$\xi_{G_0}(a_u,a_{u+1})=\xi_{G_0}(a_{u+1},a_{u+2})=L,$$ and for every $i\notin\{u,u+1\}$ we have $0\leq \xi_{G_0}(a_i,a_{i+1})\leq L-1$. Since $\left\{\overline{a_ia_{i+1}}\right\}_{i=0}^{s-1}$ is potentially complete in $G_0$, we have $$\ell:=\eta_{G_0}(a_u,a_{u+1})+\eta_{G_0}(a_{u+1},a_{u+2})\not\equiv 0\pmod d.$$ From \eqref{eq:3}, we have \begin{equation*} f^{\circ L}(x_{a_u})=\gamma^{\eta_{G_0}(a_u,a_{u+1})}f^{\circ L}(x_{a_{u+1}})\textrm{~and~} f^{\circ L}(x_{a_{u+1}})=\gamma^{\eta_{G_0}(a_{u+1},a_{u+2})}f^{\circ L}(x_{a_{u+2}}), \end{equation*} which implies $$f^{\circ L}(x_{a_u})=\gamma^{\ell}f^{\circ L}(x_{a_{u+2}}). $$ Similarly to \textbf{Case I}, we have \begin{equation}\label{eq:6} f^{\circ (L-j_0)}(0)=\gamma^{\ell}f^{\circ (L-j_s)}(0). \end{equation} If $j_0=j_s$, then since $\gamma^{\ell}\neq 1$, we have $f^{\circ (L-j_0)}(0)=0$. Combined with $L\leq N-1$, this contradicts our assumption~\eqref{precondtion}. Now assume $j_0\neq j_s$. Without loss of generality, we assume $j_0< j_s$.
From \eqref{eq:6}, we have $f^{\circ (L-j_0+1)}(0)=f^{\circ (L-j_s+1)}(0),$ and hence $f^{\circ ( L+1)}(0)=f^{\circ (L-j_s+j_0+1)}(0),$ which contradicts our assumption~\eqref{precondtion}. Therefore, there is no non-trivial solution $\underline c$ to the system \eqref{linear}. In general, a nonsingular complete intersection is necessarily absolutely irreducible, with codimension equal to the number of equations in the system and degree equal to the product of the degrees of the defining forms; see \cite[Lemma~3.2]{BB} for details. In our case, $\Phi(X_a, X_b, X_0; \xi_G(a,b),\eta_G(a,b))$ has degree at most $d^{N-1}$ and the system \eqref{eq:CG0} for $\mathcal{C}_{G_0}$ has $k-1$ equations. Combining these facts, we complete the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{key proposition}] It follows directly from Lemmas~\ref{construction}, \ref{complete and tree} and \ref{key emma}. \end{proof} \section{Counting Points and Counting Curves}\label{s3} By Lemma~\ref{decomposition} and the inclusion-exclusion principle, we have $$\sum_{G}\#\mathcal{C}_G(\FF_p)- \sum_{G_1\neq G_2}\#(\mathcal{C}_{G_1} \cap \mathcal{C}_{G_2})(\FF_p)\leq \mathcal{W}(r,k)+(p-1)\gcd(p-1,d^r)^{k-2}\leq \sum_{G}\#\mathcal{C}_G(\FF_p),$$ where $G$ runs over all distinct complete proper $(r-1,k,d)$-graphs. Let $\mathcal{U}(r,k)$ be the number of distinct complete proper $(r,k,d)$-graphs. By convention, $\mathcal{U}(r, 0) = 1$ for all $r\geq -1$. Combining Lemma~\ref{dif} and Proposition~\ref{key proposition} with Bezout's Theorem, for every two distinct complete proper $(N-1,k,d)$-graphs $G_1$ and $G_2$, we have $$\#(\mathcal{C}_{G_1} \cap \mathcal{C}_{G_2})(\FF_p)\leq d^{2kN}.$$ Therefore, we have \begin{equation}\label{eq} \Big|\mathcal{W}(N,k)+(p-1)\gcd(p-1,d^N)^{k-2}-\sum_{G} \#\mathcal{C}_G(\FF_p)\Big|\leq \mathcal{U}(N-1,k)^2 d^{2kN}. \end{equation} By Weil's ``Riemann Hypothesis'', every absolutely irreducible projective curve $\mathcal{C}$ defined over $\FF_p$ satisfies $$|\#\mathcal{C}(\FF_p)- (p + 1)| \leq 2g \sqrt p,$$ where $g$ is the genus of $\mathcal{C}$. In general, if $\mathcal{C}$ is an irreducible non-degenerate curve of degree $D$ in $\mathbb{P}^k$ (with $k \geq 2$), then according to the Castelnuovo genus bound \cite{GC}, one has $$g \leq (k-1)m(m-1)/2 + m\epsilon,$$ where $D-1=m(k-1)+\epsilon$ with $0\leq \epsilon<k-1$. This implies that $g \leq (D- 1)(D-2)/2$ irrespective of the dimension of the ambient projective space in which $\mathcal{C}$ lies. Hence, we have \begin{equation}\label{eq2} |\#\mathcal{C}_G(\FF_p) -(p+ 1)| \leq d^{2kN}\sqrt{p}, \end{equation} since $\mathcal{C}_G$ has degree at most $d^{kN}.$ Combining \eqref{eq} and \eqref{eq2}, we have \begin{multline}\label{est} \Big|\mathcal{W}(N, k)+(p-1)\gcd(p-1,d^N)^{k-2}- \mathcal{U} (N-1, k)(p+ 1) \Big| \\ \leq \mathcal{U}(N-1,k)^2 d^{2kN}+\mathcal{U}(N-1,k)d^{2kN}\sqrt{p}. \end{multline} \begin{notation} For every $k\geq 1$, let $M_{k,t}$ be the set of partitions of $\{1,\dots,k\}$ into $t$ components, where $\{A_i\}$ and $\{B_i\}$ are treated as the same partition if there is a permutation $\sigma\in S_t$ such that $A_i=B_{\sigma(i)}$ for all $1\leq i\leq t$. \end{notation} \begin{definition} Recall that every complete proper strict $(r,k,d)$-graph $G$ with $r \geq 0$ corresponds to a partition $\{A_i\}_{i=1}^{t}$ of $\{1, \dots , k\}$ as in Lemma~\ref{div} for some $1\leq t\leq d$. We call $G$ a \emph{$(\{A_i\},r)$-graph}, and denote the set of such graphs by $M(\{A_i\}, r)$. \end{definition} \begin{lemma} Let $k\geq 1$ and $1\leq t\leq d$.
For every $\{A_i\}\in M_{k,t}$, we have $$\#M(\{A_i\}, r)=\frac{(d-1)!}{(d-t)!}\prod_{i=1}^t\mathcal{U}(r-1,|A_i|).$$ \end{lemma} \begin{proof} For every $1\leq i\leq t$ we choose an arbitrary vertex $a_i$ from $A_i$. We first determine $\eta(a_1,a_2)$, which can be chosen from the set $\{1,\dots,d-1\}$. Since $a_2$ and $a_3$ belong to different sets, we have $\eta(a_1,a_3)\neq \eta(a_1,a_2)$, which restricts $\eta(a_1,a_3)$ to a set of $d-2$ elements. We repeat this procedure until $\eta(a_1,a_i)$ has been determined for all $2\leq i\leq t$. For each set $A_i$, there are $\mathcal{U}(r-1,|A_i|)$ distinct complete proper $(r-1,|A_i|,d)$-graphs in total. Therefore, we obtain \[\#M(\{A_i\},r)=\frac{(d-1)!}{(d-t)!}\prod_{i=1}^t\mathcal{U}(r-1,|A_i|).\qedhere\] \end{proof} For every integer $r \geq -1$ we define the power series \begin{equation}\label{eq:E} E(X;r):= \sum_{k=0}^\infty \frac{\mathcal{U}(r,k)}{k!}X^k. \end{equation} Now we estimate $\mathcal{U}(r,k)$. By Lemmas~\ref{welldefined} and \ref{construction}, we know that $\mathcal{U}(r,k)$ can be bounded above by the number of $(r,k,d)$-trees. Therefore, it is enough to estimate the number of $(r,k,d)$-trees. We first determine the edges of the trees. Choosing $k-1$ pairs of vertices in a $k$-vertex graph, we obtain $\binom{\frac{(k-1)k}{2}}{k-1}$ distinct graphs, and every $(r,k,d)$-tree has to coincide with one of these graphs. On the other hand, for every edge of an $(r,k,d)$-tree, say $\overline{ab}$, we have $$-1\leq \xi(a,b)\leq r\quad \textrm{and}\quad 0\leq \eta(a,b)\leq d-1.$$ Therefore, by Stirling's formula, we get a bound for $\mathcal{U}(r, k)$ as \begin{equation}\label{1} \mathcal{U}(r, k)\leq \binom{\frac{(k-1)k}{2}}{k-1}(r+2)^{k-1}d^{k-1}\leq \frac{((r+2)dk^2)^{k-1}}{(k-1)!} \leq C_0((r+2)dek)^{k} \end{equation} for some constant $C_0>0$, where $e$ is the base of the natural logarithm. Therefore, the power series $E(X;r)$ has radius of convergence at least $\frac{1}{(r+2)de^2}$ for every $r\geq -1$. Combining \eqref{est} and \eqref{1}, we obtain \begin{equation}\label{bound} \Big|\mathcal{W}(N, k)+(p-1)\gcd(p-1,d^N)^{k-2}- \mathcal{U} (N-1, k)(p+ 1) \Big|=O\Big(((N+2)dek)^{2k}d^{2kN}\sqrt{p}\Big). \end{equation} \begin{notation} For a partition $\{A_i\}_{i=1}^t$, we define a counting function $$S(\{A_i\})=s_1!s_2!\dots s_k!,$$ where $s_n$ denotes the number of $A_i$ in $\{A_i\}$ of cardinality $n$, i.e. $$s_n=\#\{1\leq i\leq t\;|\; |A_i|=n \}.$$ \end{notation} We define the following equivalence relation on the set $M_{k,t}$ of partitions: $\{A_i\}\sim \{B_i\}$ if the multisets $$\{|A_i|\;|\; 1\leq i\leq t\}\quad\textrm{and}\quad \{|B_i|\;|\; 1\leq i\leq t\}$$ are equal. \begin{lemma} For each $r\geq 0$, we have \begin{equation}\label{E} E(X;r)=\frac{(E(X;r-1))^d+d-1}{d}. \end{equation} \end{lemma} \begin{proof} For every partition $\{A_i\}\in M_{k,t}$ there are $$\frac{k!}{S(\{A_i\})\prod_{i=1}^t |A_i|! }$$ partitions equivalent to $\{A_i\}$. By Lemma~\ref{div}, we have \begin{multline}\label{eq:a1} \mathcal{U}(r, k)-\mathcal{U}(r-1,k)=\sum_{t=2}^d \sum_{\{A_i\}\in M_{k,t}}\#M(\{A_i\},r)\\ =\sum_{t=2}^d \sum_{\{A_i\}\in M_{k,t}/ \sim} \#\{\{B_i\}\;|\;\{B_i\}\sim \{A_i\}\}\,\#M(\{A_i\},r)\\ =\sum_{t=2}^d \sum_{\{A_i\}\in M_{k,t}/ \sim} \frac{k!}{S(\{A_i\})\prod_{i=1}^t |A_i|! }\frac{(d-1)!}{(d-t)!}\prod_{i=1}^t\mathcal{U}(r-1,|A_i|). \end{multline} Expanding $(E(X;r-1))^d$ gives us \begin{equation*} (E(X;r-1))^d=1 + \sum_{k=1}^\infty \sum_{t=1}^{d}\sum_{\{A_i\}\in M_{k,t}/ \sim} \frac{d!}{S(\{A_i\})(d-t)!}\prod_{i=1}^{t} \frac{\mathcal{U}(r-1,|A_i|)}{|A_i|!}X^k. \end{equation*} Combined with \eqref{eq:a1}, this implies $$(E(X;r-1))^d=1+d\sum_{k=1}^\infty \Big(\frac{\mathcal{U}(r-1,k)}{k!}+\frac{\mathcal{U}(r, k)-\mathcal{U}(r-1, k)}{k!}\Big)X^k =dE(X;r)-d+1,$$ and hence \eqref{E}. \end{proof} Since $\mathcal{U} (-1, k) = 1$ for all $k\geq 0$, we have $E(X; -1) = e^X$.
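As an illustration of \eqref{E} (a worked special case): when $d=2$, the recursion and $E(X;-1)=e^X$ give $$E(X;0)=\frac{e^{2X}+1}{2}=1+\sum_{k=1}^{\infty}2^{k-1}\frac{X^k}{k!},$$ so that $\mathcal{U}(0,k)=2^{k-1}$ for every $k\geq 1$. This agrees with the fact that a complete proper $(0,k,2)$-graph amounts to a partition of $\{1,\dots,k\}$ into at most two blocks, with $\xi=-1$ on edges inside a block and $\xi=0$, $\eta=1$ on edges between the blocks.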
By induction, we have $$E(X; r) =\sum_{m=0}^{d^{r+1}}v(r,m) e^{mX}$$ with non-negative real coefficients $v(r, m)$ summing to $1$. We then see that $$E(X; r) =\sum_{m=0}^{d^{r+1}}v(r,m) \sum_{k=0}^{\infty} \frac{(mX)^k}{k!}.$$ We clearly have absolute convergence for small $X$, and we rearrange to get $$E(X; r) =\sum_{k=0}^{\infty} \Big(\sum_{m=0}^{d^{r+1}}v(r,m)m^k\Big) \frac{X^k}{k!}.$$ Hence, we have $$\mathcal{U}(r,k) = \sum_{m=0}^{d^{r+1}}v(r,m)m^k.$$ We also see that the coefficient $v(r, 0)$ satisfies the recurrence \begin{equation}\label{v} v(r, 0)= \frac{d-1+v(r-1, 0)^d}{d}\textrm{~for every~} r\geq 0\end{equation} with $v(-1, 0)=0.$ We can then check that $\mu_r =1-v(r-1, 0)$ has the initial value $\mu_0 = 1$ and satisfies the recurrence \begin{equation}\label{mu} d\mu_r = 1 -(1-\mu_{r-1})^d \end{equation} described in Theorem~\ref{main thm}. \begin{proof}[Proof of Theorem~\ref{main thm}] Consider that \begin{equation}\label{eq:a2} \#f^{\circ N}(\FF_p)=p-\#\{m\in \FF_p \;|\;\rho_N(m)=0\}. \end{equation} Since the equation $f^{\circ N}(X) = m$ has at most $d^N$ solutions, we always have $0 \leq \rho_N(m) \leq d^N$, whence \[\frac{1}{d^N!}\prod_{j=1}^{d^N}(j-\rho_N(m))=\begin{cases} 1 & \rho_N(m)=0;\\ 0 & \rho_N(m)\neq 0.\\ \end{cases}\] Setting \begin{equation}\label{3} Q(T):=\sum_{k=0}^{d^N}C_{N,k}T^k=\frac{1}{d^N!}\prod_{j=1}^{d^N}(j-T), \end{equation} we then have \begin{equation}\label{4} \begin{split} \sum_{k=0}^{d^N}C_{N,k}\mathcal{W}(N,k)=\sum_{k=0}^{d^N}\left(C_{N,k}\sum\limits_{m\in \FF_p} \rho_N(m)^k\right)=&\sum\limits_{m\in \FF_p} Q(\rho_N(m))\\=& \#\{m\in \FF_p \;|\;\rho_N(m)=0\}. \end{split} \end{equation} Our plan is to substitute the approximate value of $\mathcal{W}(N, k)$ given by \eqref{est}. We first investigate the contribution from the main term $$\mathcal{U}(N-1, k)(p+ 1)-(p-1)\gcd(p-1,d^N)^{k-2}.$$ This produces \begin{align*} &(p+1)\sum_{k=0}^{d^N}C_{N,k}\mathcal{U}(N-1,k)-(p-1)\sum_{k=0}^{d^N}C_{N,k}\gcd(p-1,d^N)^{k-2}\\ =&(p+1)\sum_{k=0}^{d^N}\left(C_{N,k} \sum_{m=0}^{d^{N}}v(N-1,m)m^k\right)-\frac{p-1}{\gcd(p-1,d^N)^2}\frac{1}{d^N!}\prod_{j=1}^{d^N}(j-\gcd(p-1,d^N))\\ =&(p+1) \sum_{m=0}^{d^N}\left(v(N-1,m)\sum_{k=0}^{d^N}C_{N,k}m^k\right). \end{align*} The identity \eqref{3} shows that this inner sum vanishes for $1 \leq m \leq d^N$ and takes the value $1$ for $m = 0$. Thus, the main term for \eqref{4} is just $$(p+ 1)v(N-1, 0) = (p+ 1)(1 -\mu_N),$$ producing the leading term $\mu_N\cdot p$ in \eqref{goal} when combined with \eqref{eq:a2}. Now we handle the contribution to \eqref{4} arising from the error term in \eqref{bound}. For every $N\geq 2$ it has the upper bound \begin{multline*} \sum_{k=0}^{d^N}|C_{N,k}|((N+2)dek)^{2k}d^{2kN}\sqrt{p}\leq \sum_{k=0}^{d^N}|C_{N,k}|(2Nded^N)^{2k}d^{2kN}\sqrt{p} \\\leq \frac{1}{d^N!}\prod_{j=1}^{d^N}(j+(2Nd^{1+2N}e)^2)\sqrt{p} \leq (d^N+4N^2d^{2+4N}e^2)^{d^N}\sqrt{p} \leq d^{d^{6N}}\sqrt{p}. \end{multline*} Let $q_r: =\frac{1}{\mu_r}$ for every $r\geq 0$. We next prove the inequality \begin{equation}\label{aa} q_r\geq \frac{(d-1)r}{2}+1 \end{equation} for every $r\geq 0$ inductively. When $r=0$, the inequality \eqref{aa} follows directly from $\mu_0=1$. Assume that \eqref{aa} holds for some $r\geq 0$.
Now we prove that \eqref{aa} also holds for $r+1$. Consider the polynomial \begin{equation}\label{eq:p} P(x):=d(x+1)^d-\left(x+1+\frac{d-1}{2}\right)\left((x+1)^d-x^d\right). \end{equation} We know that for every $1\leq k\leq d-1$ the coefficient of $x^k$ in \eqref{eq:p} is equal to \begin{align*} \frac{d-1}{2}\binom{d}{k}-\binom{d}{k-1}=\binom{d}{k}\left(\frac{d-1}{2}-\frac{k}{d-k+1}\right)\geq 0, \end{align*} and the constant term of $P(x)$ is equal to $\frac{d-1}{2}>0$. Combining these facts, we have \begin{equation*} P(x)>0\textrm{ ~for all~} x\geq 0. \end{equation*} From $\mu_0= 1$ and \eqref{mu}, we have $q_r\geq 1$ and \begin{multline}\label{eq:7} q_{r+1}=\frac{d}{1-(1-\frac{1}{q_{r}})^d} =\frac{dq_r^d}{q_r^d-(q_r-1)^d}\\ =q_r+\frac{d-1}{2}+\frac{dq_r^d-(q_r+\frac{d-1}{2})(q_r^d-(q_r-1)^d)}{q_r^d-(q_r-1)^d} \\=q_r+\frac{d-1}{2}+\frac{P(q_r-1)}{q_r^d-(q_r-1)^d} \geq q_r+\frac{d-1}{2}. \end{multline} On the other hand, since $q_{r}\xrightarrow{r\to \infty}\infty$, we have $$\frac{dq_r^d-(q_r+\frac{d-1}{2})(q_r^d-(q_r-1)^d)}{q_r^d-(q_r-1)^d}<\frac{C_1}{q_r}\leq \frac{2C_1}{r(d-1)}$$ for some constant $C_1>0$. Combined with \eqref{eq:7}, this implies \begin{equation*} q_r\leq d+\frac{(r-1)(d-1)}{2}+\sum_{i=1}^{r-1}\frac{2C_1}{i(d-1)} \leq d+\frac{(r-1)(d-1)}{2}+\frac{2C_1}{(d-1)} (1+\log(r-1)). \end{equation*} Therefore, we obtain $q_r\sim \frac{(d-1)r}{2}$, which completes the proof. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor1}] We will prove that there exists an integer $P$ (independent of $A$ and $C$) such that this corollary holds for all primes $p\geq P$. For primes $p< P$, we can take $D_d$ large enough so that $$D_{d}\frac{p}{\log\log p} > p,$$ and then the statement is trivial. Therefore, it suffices to consider sufficiently large $p$. Taking $$N:=\left\lfloor\frac{\frac{\log\log p+\log \frac{1}{3}}{\log d}-1}{6}\right\rfloor$$ in \eqref{goal}, we have $$6N\log d+\log \log d<(6N+1)\log d<\log\log p+\log\frac{1}{3},$$ and hence $d^{d^{6N}}<p^{1/3}$. Then the error term satisfies $$O(d^{d^{6N}}\sqrt{p})= O\left(p^{5/6}\right)\ll \frac{p}{N}.$$ Combined with Theorem~\ref{main thm}, this implies that one of the following cases has to happen: \begin{enumerate} \item $f^{\circ i}(0) = f^{\circ j}(0)$ for some $0\leq i < j \leq N\ll \frac{p}{N}$. \item $\#f^{\circ N}(\FF_p) \leq \frac{2p}{(d-1)N}+\frac{p}{N}.$ \end{enumerate} In case (1), the corollary is trivial. Now we assume that $f$ satisfies (2). We put $k := \left\lceil\frac{2p}{(d-1)N}+\frac{p}{N}\right\rceil+1$. Since $f^{\circ N}(0),$ $f^{\circ (N+1)}(0), \dots$, $f^{\circ (N+k)}(0)$ all belong to $f^{\circ N}(\FF_p)$ and $f^{\circ N}(\FF_p)$ has at most $k-1$ elements, there exist distinct $i$, $j$ in $\{N+1,\dots,N+k\}$ such that $f^{\circ i}(0)=f^{\circ j}(0),$ which finishes the proof. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor2}] By choosing $p_{\widetilde A, \widetilde C, d} >\widetilde A,$ we may assume that $p\;\nmid\; \widetilde A.$ With the assumption that $\widetilde{A},\widetilde{C}>0$, we know that the sequence $\widetilde{f}^{\circ 0}(0), \widetilde{f}^{\circ 1}(0), \widetilde{f}^{\circ 2}(0), \dots$ is strictly increasing with $\widetilde{f}^{\circ j}(0) \leq (\widetilde A+\widetilde C)^{d^j-1}.$ Thus, if $p \geq (\widetilde A + \widetilde C)^{d^N}$, we cannot have $p \;|\; (\widetilde{f}^{\circ j}(0) -\widetilde{f}^{\circ i}(0))$ with $0 \leq i < j \leq N$. The assumptions in Theorem~\ref{main thm} will therefore hold when \begin{equation}\label{i} N \leq \frac{\log \log p}{\log d}- \frac{\log \log (\widetilde{A}+\widetilde{C})}{\log d}.
\end{equation} We set \begin{equation}\label{v1} N_0:=\lfloor\log\log p/(7\log {d})\rfloor+1. \end{equation} For $p$ large enough (relative to $\widetilde A, \widetilde C$ and $d$), $N_0$ satisfies \eqref{i}. By Theorem~\ref{main thm}, for $p$ large enough we have \begin{equation}\label{v2} \#\widetilde f_p^{\circ N_0}(\FF_p) <\left(\frac{2}{d-1}+1\right)p/N_0\leq \frac{3p}{N_0}< \frac{21p\log d}{\log \log p}. \end{equation} Since all cycles lie inside the set $\widetilde f_p^{\circ N_0}(\FF_p)$, this proves the first assertion of the corollary. Moreover, each pre-cyclic path has length at most $N_0+\#\widetilde f_p^{\circ N_0}(\FF_p)$. Combining this with \eqref{v1} and \eqref{v2}, for $p$ large enough we have $$N_0+\#\widetilde f_p^{\circ N_0}(\FF_p)< \frac{4p}{N_0}<\frac{28p\log d}{\log \log p},$$ which completes the proof of the second assertion. \end{proof} \end{document}
\begin{document} \title[Value Monoids] {Value Monoids of Zero-Dimensional Valuations of Rank One} \author{Edward Mosteig} \address{Department of Mathematics, Loyola Marymount University, Los Angeles, California 90045} \email{[email protected]} \begin{abstract} Classically, Gr\"obner bases are computed by first prescribing a monomial order. Moss Sweedler suggested an alternative and developed a framework for performing such computations using valuation rings in place of monomial orders. We build on these ideas by providing a class of valuations on $k(x,y)$ that are suitable for this framework. For these valuations, we compute $\nu(k[x,y]^*)$ and use this to perform computations concerning ideals in the polynomial ring $k[x,y]$. Interestingly, for these valuations, some ideals have a finite Gr\"obner basis with respect to the valuation that is not a Gr\"obner basis with respect to any monomial order, whereas other ideals only have Gr\"obner bases that are infinite with respect to the valuation. \end{abstract} \maketitle \section{Introduction}\label{introduction} Unless stated otherwise, $k$ will denote an arbitrary field, and $\N$ will denote the set of nonnegative integers. Whenever $R$ is a ring or monoid, we denote by $R^*$ the set of nonzero elements of $R$. One of the fundamental ideas of the theory of Gr\"obner bases is that monomial orders are well-orderings on the set of monomials, which leads to a natural reduction process using multivariate polynomial division. In this section, we provide a brief account of a generalized theory of Gr\"obner bases that uses valuations in place of monomial orders, which yields a more general reduction process. The development of this theory can be found in the unpublished manuscript \cite{sweedler} of Sweedler, and it is briefly discussed in this section solely for the sake of completeness. In that manuscript, Sweedler develops the theory in terms of valuation rings. Here we present the same results in terms of valuations rather than valuation rings. Proofs are omitted since they can all be found in \cite{sweedler}. Suppose $k$ is a subfield of a field $F$. A {\bf valuation on} $F$ is a homomorphism $\nu$ from the multiplicative group of nonzero elements of $F$ to an ordered group (called the {\bf value group}) such that for $f,g \in F^*$ with $f+g \not=0$, $\nu(f+g) \le \max\{\nu(f), \nu(g)\}$. Note that the triangle inequality here is the opposite of the most common definition; this choice makes our results coincide most closely with those concerning monomial orders. For more details, see \cite{ms1}, \cite{ms2}, and \cite{ms3}. A {\bf valuation on $F$ over $k$} is a valuation on $F$ whose restriction to $k^*$ is the zero map. For our purposes, we restrict our attention to valuations on rational function fields. In this setting, we require that our valuations have the additional properties given in the following definition. \begin{definition} \label{def:suitableval} We say that a valuation $\nu$ on $\kxx$ over $k$ is {\bf suitable relative to} $\kx$ if it satisfies the following three properties. \begin{enumerate} \item[(i)] For all $f \in k[\mathbf{x}]$, $\nu(f) = 0$ iff $f\in k$. \item[(ii)] If $\nu(f) = \nu(g)$ where $f,g \in k({\mathbf x})^*$, then there exists a unique $\lambda \in k^*$ such that either $f = \lambda g$ or $\nu(f- \lambda g) < \nu(f)$. \item[(iii)] $\nu(k[\mathbf{x}]^*)$ is a well-ordered monoid. \end{enumerate} \end{definition}
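The motivating example (included here for orientation; it follows directly from the definitions above) is the valuation attached to a monomial order. Fix a monomial order on $k[\mathbf{x}]$ and, for $f \in k[\mathbf{x}]^*$, let $\nu(f) \in \N^n$ be the exponent vector of the leading monomial of $f$; extend $\nu$ to $k(\mathbf{x})^*$ by $\nu(f/g) = \nu(f)-\nu(g)$. Then $\nu$ is a valuation on $\kxx$ over $k$ with value group $\mathbb{Z}^n$ (ordered by the monomial order), and it is suitable relative to $\kx$: property (i) is clear; in (ii) one can take $\lambda$ to be the ratio of the leading coefficients of $f$ and $g$; and in (iii), $\nu(\kx^*) = \N^n$ is well-ordered by the monomial order. For such a $\nu$, the reduction process described below specializes to ordinary multivariate polynomial division.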
When using monomial orders, one must determine divisibility among monomials. The analogue for valuations uses arithmetic in the monoid $\nu(\kx^*)$. \begin{definition} \label{def:division} Let $\nu$ be a valuation on $k({\mathbf x})$. Given $f, g \in \kx$, we say that $\nu(g)$ {\bf divides} $\nu(f)$, denoted $\nu(g)\ |\ \nu(f)$, if there exists $h \in \kx$ such that $\nu(f) = \nu(gh)$. We say that $h$ is an {\bf approximate quotient} of $f$ by $g$ (relative to $\nu$) if $f=gh$, or if $f\not=gh$ and $\nu(f-gh) < \nu(f)$. \end{definition} The following simple proposition follows from the definition above. \begin{prop} \label{prop:approxquotient} Let $\nu$ be a valuation on $\kxx$ over $k$ that is suitable relative to $\kx$. Let $f, g \in \kx$. Then $\nu(g)$ divides $\nu(f)$ if and only if there exists an approximate quotient $h$ of $f$ by $g$. \end{prop} The following is a generalized form of the standard polynomial reduction algorithm that makes use of valuations. \begin{alg} \label{alg:genred} Let $\nu$ be a valuation on $\kxx$ over $k$ that is suitable relative to $\kx$. Let $I$ be an ideal in $\kx$ and let $G$ be a generating set for $I$. The following algorithm computes a reduction of a polynomial $f\in \kx$ over $G$ relative to $\nu$. \hskip 16pt\noindent\hangindent=16pt\hangafter=1 $\bullet$ Set $i=0$ and $f_0 = f$. \hskip 16pt\noindent\hangindent=16pt\hangafter=1 $\bullet$ While $f_i \not=0$ and $\nu(g) \ | \ \nu(f_i)$ for some $g \in G$ do: \noindent\hskip 64pt\noindent\hangindent=64pt\hangafter=1 Choose $g_i \in G$ such that $\nu(g_i) \ | \ \nu(f_i)$. Let $h_i$ be an approximate quotient of $f_i$ by $g_i$. Set $f_{i+1} = f_i - g_ih_i$. Increment $i$ by 1. \end{alg} We say that $f_n$ is the $n$th reductum of $f$ over $G$, and that $f$ reduces to $b$ if $b$ is a reductum of $f$. It can be shown that if $\nu$ is suitable with respect to $\kx$, then reduction of any element of $\kx$ over $G$ terminates after a finite number of steps. We will call a subset $G \subset I^*$ a {\bf Gr\"obner basis for $I$ with respect to $\nu$} if it satisfies the equivalent conditions of the following proposition. \begin{prop} \label{prop:gbconds} Let $\nu$ be a valuation on $\kxx$ over $k$ that is suitable relative to $\kx$. Let $I$ be an ideal in $\kx$ and $G \subseteq I^*$. The following are equivalent: \renewcommand{\labelenumi}{(\roman{enumi})}\begin{enumerate} \item Every nonzero element of $I$ has a first reductum over $G$. \item Every element of $I$ reduces to $0$ over $G$. \item Given $f \in \kx$, $f \in I$ if and only if $f$ reduces to $0$ over $G$. \end{enumerate} \end{prop} We can use Gr\"obner bases in the generalized setting to solve the ideal membership problem in much the same way that we do in the case of monomial orders. Just as in the classical case, it can be shown that a Gr\"obner basis with respect to a valuation necessarily generates the given ideal. To compute Gr\"obner bases, we must work with ideals of $\nu(\kx^*)$, where an ideal $J$ of a commutative monoid $M$ is a subset $J \subset M$ such that for any $m\in M$ and $j \in J$, we have $j+m \in J$. The smallest ideal containing $m_1, \dots, m_\ell$ will be denoted $\langle m_1, \dots, m_\ell \rangle$ and is called the ideal generated by $m_1, \dots, m_\ell$.
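For instance (an illustration of ours, in the classical setting of the example following Definition~\ref{def:suitableval}): if $\nu$ comes from a monomial order, then $\nu(\kx^*)=\N^n$, and the intersection of principal ideals $$\langle \nu(f) \rangle \cap \langle \nu(g) \rangle = \langle \max(\nu(f),\nu(g)) \rangle,$$ where the maximum is taken componentwise, is again principal; its generator corresponds to the least common multiple of the leading monomials of $f$ and $g$. In a general value monoid this intersection need not be principal, which is the source of the syzygy families introduced next.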
We say that $T \subseteq \nu(\kx^*)$ is {\bf an ideal generating set for $f$ and $g$ with respect to $\nu$} if $T$ generates the ideal $\langle \nu(f) \rangle \cap \langle \nu(g) \rangle$ in $\nu(\kx^*)$. It can be shown that for each $t\in T$ there are $a, b \in \kx^*$ such that $\nu(af) = \nu(bg) = t$ and either $af=bg$, or $af\not=bg$ and $\nu(af-bg) < t$. This gives a map $T \rightarrow \kx$, $t \mapsto af-bg$. The image of this map is a {\bf syzygy family for $f$ and $g$ indexed by $T$}. We say that $af-bg$ is the element of the family corresponding to $t$. \end{definition} This definition exhibits one of the main differences between the generalized theory using valuations and the classical theory using monomial orders, namely, that each pair of polynomials may have many minimal syzygies. Sweedler constructs an example in \cite{sweedler} where this family consists of multiple elements. Using syzygy families, the algorithm below provides a method for constructing a Gr\"obner basis for a nonzero ideal $I$ with generating set $G$. \begin{alg} [Gr\"obner Basis Construction Algorithm] \label{alg:gengbconstruct} Let $\nu$ be a valuation on $\kxx$ over $k$ that is suitable relative to $\kx$, and let $G \subseteq I^*$ be a generating set for a nonzero ideal $I$. \begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})} \item Set $G_0 = G$ and $i = 0$. \item For each pair of distinct elements $g, h \in G$, find an ideal generating set $T^0_{g,h}$ for $g,h$ and a syzygy family $S^0_{g,h}$ for $g,h$ indexed by $T^0_{g,h}$. Define $U_0 = \bigcup_{g\not=h \in G} S^0_{g,h}$. \item Determine the set $H_i$ of nonzero final reductums that occur from reducing the elements of $U_i$ over $G_i$. \item If $H_i$ is empty, stop. \item Define $G_{i+1} = G_i \cup H_i$. \item For each pair of distinct elements $g \in G_{i+1}$, $h \in H_i$, find an ideal generating set $T^{i+1}_{g,h}$ for $g,h$ and a syzygy family $S^{i+1}_{g,h}$ for $g,h$ indexed by $T^{i+1}_{g,h}$. Define $U_{i+1} = \bigcup_{g \not= h} S^{i+1}_{g,h}$, the union being taken over all such pairs. \item Increment $i$ by 1 and go to step (iii). \end{enumerate} \end{alg} Sweedler shows that if $G$ is finite and $\nu(I^*)$ is Noetherian (i.e., every ascending chain of ideals stabilizes), then the construction algorithm can be completed so that it terminates with a finite Gr\"obner basis. Even if $\nu(I^*)$ is not Noetherian, the set $\bigcup_{n=1}^\infty G_n$ is a Gr\"obner basis. These algorithms will allow us to compute Gr\"obner bases using a class of valuations on $k(x,y)$ originally studied by Zariski in \cite{zariski}. In Section \ref{valmonoid}, we develop the background necessary to work with a valuation $\nu$ of this type, and we state one of the main results of the paper, which is an explicit formula for $\nu(k[x,y]^*)$. In Section \ref{associatedsequences}, we prove some intermediate results concerning sequences associated with the valuations developed in Section \ref{valmonoid}. In particular, recursive formulas are given for a generating set of $\nu(k[x,y]^*)$. In Section \ref{repsmonoid}, we build on these ideas to show that certain elements of $\nu(k[x,y]^*)$ have unique representations, which leads to a complete description of $\nu(k[x,y]^*)$ in Section \ref{construction}. Finally, in Section \ref{algorithms}, we use this description to make the algorithms developed by Sweedler constructive. With the exception of Section \ref{repsmonoid}, all of the proofs herein are fairly elementary.
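Although the emphasis here is theoretical, the reduction procedure of Algorithm \ref{alg:genred} is straightforward to express in code once the arithmetic of the value monoid is available. The following Python sketch is purely illustrative: the functions \texttt{divides} and \texttt{approx\_quotient} stand in for oracles encoding the monoid arithmetic of $\nu(\kx^*)$, and they are instantiated below only for the toy case of univariate polynomials over $\Q$ with $\nu = \deg$, where generalized reduction collapses to ordinary polynomial division.
\begin{verbatim}
# Illustrative sketch of generalized reduction (cf. the algorithm
# above).  `divides` and `approx_quotient` are assumed oracles for
# the monoid arithmetic of nu(k[x]^*); below they are instantiated
# for the toy valuation nu = deg on Q[x].
from fractions import Fraction as Q

def reduce_over(f, G, divides, approx_quotient):
    """Return a final reductum of f over the generating set G."""
    while f and any(divides(g, f) for g in G):
        g = next(g for g in G if divides(g, f))
        h = approx_quotient(f, g)     # guarantees nu(f - g*h) < nu(f)
        f = sub(f, mul(g, h))
    return f

# --- toy instance: polynomials over Q as coefficient lists ---
def trim(p):                          # drop leading zero coefficients
    while p and p[-1] == 0:
        p.pop()
    return p

def sub(p, q):
    r = [Q(0)] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] -= c
    return trim(r)

def mul(p, q):
    r = [Q(0)] * (len(p) + len(q) - 1) if p and q else []
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def divides(g, f):        # in the monoid N, deg g | deg f iff deg g <= deg f
    return len(g) <= len(f)

def approx_quotient(f, g):            # cancel the leading terms
    return [Q(0)] * (len(f) - len(g)) + [f[-1] / g[-1]]

# reduce x^3 + 1 over G = {x^2 - 1}: the final reductum is x + 1
print(reduce_over([Q(1), Q(0), Q(0), Q(1)], [[Q(-1), Q(0), Q(1)]],
                  divides, approx_quotient))
\end{verbatim}
For the valuations studied in this paper, suitable implementations of \texttt{divides} and \texttt{approx\_quotient} are provided by Algorithms \ref{alg:decomposem} and \ref{alg:division} of Section \ref{algorithms}.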
\section{Value Groups and Monoids from Power Series}\label{valmonoid} In this section, we examine a class of valuations on $k(x,y)$ studied by Zariski in \cite{zariski}. The value groups of these valuations were explicitly constructed by MacLane and Schilling in \cite{mac-schi}. In this section, we state one of our main results, which is an explicit construction of the image of the polynomial ring $k[x,y]$ under such valuations. Since the valuations of interest are constructed using generalized power series, we begin with a review of the relevant concepts. We say that a set $T \subset \Q$ is {\bf Noetherian} if every nonempty subset of $T$ has a largest element. Given a function $z: \Q \to k$, the {\bf support} of $z$ is defined by $\supp(z) = \{ q \in \Q \mid z(q) \not= 0 \}.$ The collection of {\bf Noetherian power series}, denoted by $\kq$, consists of all functions from $\Q$ to $k$ with Noetherian support. More commonly in the literature, generalized power series are defined as functions with well-ordered support; we will freely use the analogues for Noetherian power series of results stated in that setting. We choose the supports of our series to be the opposite of the usual definition so that our results more closely fit with the theory of monomial orders and Gr\"obner bases. As demonstrated in \cite{hahn}, the collection of Noetherian power series forms a field in which addition is defined pointwise and multiplication is defined via convolution; i.e., if $z_1, z_2 \in \kq$ and $q \in \Q$, then $(z_1+z_2) (q) = z_1(q) + z_2(q)$ and $(z_1z_2)(q) = \sum_{ u+v=q} z_1(u) z_2(v)$. We often write power series as formal sums: $z = \sum_{s\in{{\mbox{\scriptsize\rm Supp}}}(z)} z(s)t^s$, where $z(s)$ denotes the image of $s$ under $z$. \begin{example}\label{ex:sumprod} Given the series $z_1 = t^{1/2} + t^{1/4} + t^{1/8} + \cdots$ and $z_2 = 3t+ 1,$ their sum and product are $$z_1+ z_2 = 3t + ( t^{1/2} + t^{1/4} + t^{1/8} + \cdots) + 1$$ and $$z_1z_2 = (3t^{3/2} + 3t^{5/4} + 3t^{9/8} + \cdots ) + (t^{1/2} + t^{1/4} + t^{1/8} + \cdots).$$ \end{example} Given a nonzero series $z \in \kq$, define the {\bf leading exponent} of $z$ to be the rational number given by $\lexp (z) = \max\{ s \mid s \in \supp(z) \}$. If $s= \lexp(z)$, we denote $z(s)$ by $\lc(z)$ and call it the {\bf leading coefficient} of $z$. Note that $\lexp(z_1 z_2) = \lexp(z_1) + \lexp(z_2)$ and $\lc(z_1z_2) = \lc(z_1)\lc(z_2)$. Moreover, we have $\lexp(z_1 + z_2) \le \max(\lexp(z_1), \lexp(z_2))$, with equality holding whenever $\lexp(z_1) \not= \lexp(z_2)$. We say that a nonzero series $z \in \kq$ is {\bf simple} if it can be written in the form \begin{equation*}\label{def:simple} z = \sum_{i=1}^{n} c_i t^{e_i}, \end{equation*} where $c_i \in k^*, n \in \N^* \cup \{ \infty \}, e_i \in \Q, e_i > e_{i+1}$. Whenever we write a series in this form, we implicitly assume that each $c_i$ is nonzero and the exponents are written in descending order. We call ${\mathbf e} = (e_1, e_2, \dots)$ the {\bf exponent sequence} of $z$. Now write $e_i = n_i/d_i$ where $d_i>0$ and $\gcd(n_i,d_i)=1$. We define $r_0 =1$ and for $i \ge 1$, set $r_i = \lcm(d_1, \dots, d_{i})$ and call ${\mathbf{r}} = (r_0, r_1, r_2, \dots)$ the {\bf ramification sequence} of $z$. \begin{example} Consider the simple series $$z=2t^{1/2}+3t^{1/3}+4t^{1/4} + 5t^{1/5} + \cdots.$$ Here $\lexp(z) = 1/2$ and $\lc(z) = 2$. The series $z$ has exponent sequence $(1/2, 1/3, 1/4, 1/5, \cdots)$ and ramification sequence $(1,2,6,12,60, \cdots)$. \end{example}
We are now in a position to define valuations on $k(x,y)$ based on Noetherian power series. Let $z\in \kq$ be a Noetherian power series such that $t$ and $z$ are algebraically independent over $k$. Consider the embedding $\varphi_z : k(x,y) \to \kq$, $x \mapsto t$, $y \mapsto z$. It can be shown that $\lexp$ is a valuation on $\kq$, and hence the composite map $\lexp \circ \varphi_z : k(x,y) \to \Q$ is a valuation on $k(x,y)$. Given a valuation $\nu$ on $\kxx$, the set $V = \{ f\in \kxx^* \mid \ \nu(f) \le 0 \} \cup \{0\}$ is a valuation ring with maximal ideal ${\mathfrak m} = \{ f\in \kxx^* \mid \ \nu(f) < 0 \} \cup \{0\}$, and the {\bf dimension} of the valuation is the transcendence degree of $V/{\mathfrak m}$ over $k$. The {\bf rank} of the valuation $\nu$ is defined to be the number of isolated subgroups of $\nu(\kxx^*)$. It follows that $\lexp \circ \varphi_z$ is a zero-dimensional valuation of rank one. \begin{example} Let $k$ be a field such that char $k \not=2$. Given $z = t^{1/2} + t^{1/4} + t^{1/8} + \cdots$, \begin{eqnarray*} (\lexp \circ \varphi_z) (x) & = & \lexp(t) = 1 \\ (\lexp \circ \varphi_z) (y) & =& \lexp(z) = 1/2 \\ (\lexp \circ \varphi_z) (y^2-x) & = & \lexp(z^2-t) = \lexp( (t+2t^{3/4} + 2t^{5/8} + \cdots ) - t) = 3/4 \end{eqnarray*} \end{example} MacLane and Schilling proved the following result in \cite{mac-schi}: \begin{theorem} Let $z\in \kq$ be a simple series such that $t$ and $z$ are algebraically independent over $k$. If $\mathbf e$ is the exponent sequence of $z$, then the value group of $\lexp \circ \varphi_z$ is $$(\lexp \circ \varphi_z) (k(x,y)^*) = \Z + \Z e_1 + \Z e_2 + \cdots $$ \end{theorem} One of the primary goals of this paper is to restrict the valuation to the polynomial ring $k[x,y]$ and compute \begin{equation}\label{eq:mz} {\Lambda_{}} = (\lexp \circ \varphi_z) (k[x,y]^*) = \{ \lexp(f(t,z)) \mid f(x,y) \in k[x,y]^* \}, \end{equation} which we call the {\bf value monoid with respect to $z$}. Now suppose $z$ is a simple series with exponent sequence ${\mathbf e}$ and ramification sequence ${\mathbf r}$. The sequence obtained from the ramification sequence $\{ r_i \}_{i \in \N}$ by removing repetitions is called the {\bf reduced ramification sequence} and is denoted $\{ r_i^{red} \}_{i \in \N}$. For each $i \in \N$, denote by $l(i)$ the smallest natural number such that $r_i^{red} = r_{l(i)}$; i.e., \begin{equation}\label{eq:ls} l(i) = \min \{ j \in \N \mid r_j = r_i^{red} \}. \end{equation} \begin{example} The series $$z= t^2 + t^{3/2} + t^{1/2} + t^{1/3} + t^{1/5} + t^{1/7} + t^{1/11} + \cdots$$ has ramification sequence $${\mathbf{r}} = (1,1,2,2,6,30,210,2310, \dots),$$ and hence has reduced ramification sequence $$ (1,2,6,30,210,2310, \dots).$$ Thus $l(0) = 0, \ l(1) = 2, \ l(i) = i+2 \mbox{ for } i \ge 2.$ \end{example} We define the {\bf bounding sequence} ${\mathbf u} = (u_0, u_1, u_2, \dots)$ given by $u_0 = 0$, and for $i \ge 1$, \begin{equation}\label{eq:u} u_i = \sum_{j=0}^{i-1} \Big( \frac{r_i}{r_{j}} - \frac{r_i}{r_{j+1}} \Big) e_{j+1}. \end{equation} For $i \ge 1$, we define the {\bf monoid generating sequence}: \begin{eqnarray}\label{def:rho} \rho_i & = & u_{l(i)-1} + e_{l(i)}. \end{eqnarray} We can fully describe the value monoid with respect to $z$ in terms of the monoid generating sequence. The following result will be proved in Section \ref{construction} (in fact, it follows directly from the stronger result given in Theorem \ref{Mzunique}).
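All of the sequences just introduced can be computed from an initial segment of the exponent sequence. As a purely illustrative aside, the following Python sketch computes the ramification, bounding, and monoid generating sequences from a finite list of exponents, here for the series $z = t^{1/2} + t^{1/4} + t^{1/8} + \cdots$ of the example above, truncated to three terms.
\begin{verbatim}
# Illustrative sketch: compute r, l, u, and rho from an initial
# segment of the exponent sequence e, given as Fractions.
from fractions import Fraction
from math import lcm

e = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)]

r = [1]                            # r_0 = 1, r_i = lcm(d_1, ..., d_i)
for ei in e:
    r.append(lcm(r[-1], ei.denominator))

reduced = sorted(set(r))           # reduced ramification sequence
l = [r.index(v) for v in reduced]  # l(i) = min{ j : r_j = r_i^red }

def u(i):                          # bounding sequence u_i
    return sum((Fraction(r[i], r[j]) - Fraction(r[i], r[j + 1])) * e[j]
               for j in range(i))

rho = [u(l[i] - 1) + e[l[i] - 1] for i in range(1, len(l))]
print(rho)                         # [1/2, 3/4, 11/8]
\end{verbatim}
The sketch prints $\rho_1 = 1/2$, $\rho_2 = 3/4$, and $\rho_3 = 11/8$. The agreement of $\rho_2$ with the value $(\lexp \circ \varphi_z)(y^2-x) = 3/4$ computed in the example above is no coincidence: $y^2 - x$ maps to the minimal polynomial of $t^{1/2}$ over $k(t)$, and Lemma \ref{lem:rhopos} below shows that such minimal polynomials realize the values $\rho_i$.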
\begin{theorem}\label{thm:valmonoid} Let $z\in \kq$ be a simple series such that $t$ and $z$ are algebraically independent over $k$. Assume further that the components of the exponent sequence are positive and no component is divisible by the characteristic of $k$. Then the value monoid with respect to $z$ is $$ \Lambda = (\lexp \circ \varphi_z) (k[x,y]^*) = \N + \N \rho_1 + \N \rho_2 + \cdots $$ \end{theorem} It is of interest to determine whether this result can be generalized. In particular, it would be desirable to compute the value monoid after either removing the restriction that the exponent sequence must be positive or permitting some of the components of the exponent sequence to be divisible by the characteristic of the ground field. \section{Associated Sequences} \label{associatedsequences} In this section, we prove some elementary results about the sequences described in the previous section. In particular, we will construct recurrence relations and formulas concerning the monoid generating sequence. To this end, there is one more sequence that will be needed in the sequel. Using the ramification sequence $\mathbf r$ of a simple series $z$ and the formula (\ref{eq:ls}), we define the {\bf partial ramification sequence} by \begin{eqnarray*} s_i & = & r_{l(i)}/r_{l(i-1)} = r_{l(i)}/ r_{l(i)-1}\label{eq:s}. \end{eqnarray*} \begin{center} \framebox{ \parbox{6in} { \begin{convention}\label{conventions} For the remainder of this paper, we adopt the following conventions. \begin{itemize} \item The series $z \in\kq$ is simple with positive support. \item The series $z$ is transcendental over $k(t)$. \item The value monoid of $z$ is denoted ${\Lambda_{}}$. \item The exponent sequence of $z$ is denoted ${\mathbf e} = (e_1, e_2, e_3, \dots)$. \item No component of the exponent sequence is divisible by char $k$. \item The ramification sequence of $z$ is denoted ${\mathbf r} = (r_0,r_1,r_2, \dots)$. \item The bounding sequence of $z$ is denoted ${\mathbf u} = (u_0, u_1, u_2, \dots)$. \item The function $l(i)$ is defined in (\ref{eq:ls}). \item The monoid generating sequence of $z$ is denoted ${\mathbf \rho} = (\rho_1, \rho_2, \rho_3, \dots)$. \item The partial ramification sequence of $z$ is given by ${\mathbf s} = (s_1, s_2, s_3, \dots)$. \end{itemize} \end{convention} }} \end{center} Since $l(i)$ marks the index where the ramification sequence increases, we have $r_j = r_{l(i)}$ for $l(i) \le j < l(i+1)$, and so \begin{equation}\label{eq:rj} r_{j}/r_{j-1} = 1 \mbox{ \ for \ } l(i) < j < l(i+1). \end{equation} In particular, this yields \begin{equation}\label{eq:redram} r_{l(i-1)} = r_{l(i) - 1} \end{equation} and \begin{equation}\label{eq:redu} u_{l(i-1)} = u_{l(i) - 1} \end{equation} despite the fact that $e_{l(i-1)}$ and $e_{l(i)-1}$ need not be the same. Note that the ramification sequence of a series $z\in\kq$ increases without bound unless $z\in k((t^{-1/n}))$ for some $n\in\N^*$. However, it is still possible that the ramification sequence occasionally (even infinitely many times) stabilizes for a finite number of steps. Whenever the ramification sequence stabilizes for a number of indices, the sequence $\{u_i\}_{i\in\N}$ also stabilizes, as seen in the next result. \begin{lem}\label{lem:staticuandr} If $r_i=r_k$ for indices $i$ and $k$, then $u_i=u_k$. \end{lem} \begin{proof} The result is trivial if $i=k$, so we assume $i<k$.
Since $r_i=r_k$, it follows that $r_j = r_{j+1}$ for $i \le j \le k-1$, and so by (\ref{eq:u}), $u_k = \sum_{j=0}^{k-1} \Big( \frac{r_k}{r_{j}} - \frac{r_k}{r_{j+1}} \Big) e_{j+1} = \sum_{j=0}^{k-1} \Big( \frac{r_i}{r_{j}} - \frac{r_i}{r_{j+1}} \Big) e_{j+1} = u_i + \sum_{j=i}^{k-1} \Big( \frac{r_i}{r_{j}} - \frac{r_i}{r_{j+1}} \Big) e_{j+1} = u_i.$ \end{proof} Since our main objective is to prove that ${\Lambda_{}}$ is generated by the sequence $1, \rho_1, \rho_2, \dots$, we must first justify some elementary properties that allow us to better understand the behavior of this sequence. We begin by showing that the monoid generating sequence satisfies a simple recursive relation. \begin{lem}\label{lem:therecurrence} The monoid generating sequence in (\ref{def:rho}) satisfies the following recurrence relation: \begin{eqnarray*} \rho_1 & = & e_{l(1)}; \\ \rho_{i+1} & = & s_i \rho_i - e_{l(i)} + e_{l(i+1)}. \end{eqnarray*} \end{lem} \begin{proof} By (\ref{eq:redu}), $u_{l(1)-1} = u_{l(0)} = u_0 = 0$, and so by (\ref{def:rho}), $\rho_1 = e_{l(1)}$. Also, we have by (\ref{eq:u}), \begin{eqnarray*} u_{m} + e_{m+1} & = & \sum_{j=0}^{m-1} \left( \frac{r_{m}}{r_j} - \frac{r_{m}}{r_{j+1}} \right) e_{j+1} + e_{m+1}\\ & = & \left(\frac{r_{m}}{r_{m-1}}\right) \sum_{j=0}^{m-2} \left( \frac{r_{m-1}}{r_j} - \frac{r_{m-1}}{r_{j+1}} \right) e_{j+1} +\left( \frac{r_{m}}{r_{m-1}} -\frac{r_m}{r_m} \right) e_{m} + e_{m+1}\\ & = & \left(\frac{r_{m}}{r_{m-1}}\right) \sum_{j=0}^{m-2} \left( \frac{r_{m-1}}{r_j} - \frac{r_{m-1}}{r_{j+1}} \right) e_{j+1} +\left( \frac{r_{m}}{r_{m-1}} \right)e_{m} - e_{m} + e_{m+1}\\ & = & \left(\frac{r_{m}}{r_{m-1}}\right) [u_{m-1} + e_{m}] -e_{m} + e_{m+1}, \end{eqnarray*} and so \begin{equation}\label{eq:gammaprelim} \gamma_{m+1} = \left( \frac{r_m}{r_{m-1}} \right) \gamma_m - e_m + e_{m+1} \end{equation} where $\gamma_m := u_{m-1} + e_m$. Replacing $m$ by $l(i)$, we obtain \begin{equation}\label{eq:gammali} \gamma_{l(i)+1} = \left( \frac{r_{l(i)}}{r_{l(i)-1}} \right) \gamma_{l(i)} - e_{l(i)} + e_{l(i)+1} = s_i \gamma_{l(i)} - e_{l(i)} + e_{l(i)+1} .\end{equation} If $l(i) < m < l(i+1)$, then $r_m/r_{m-1} = 1$ by (\ref{eq:rj}), and so (\ref{eq:gammaprelim}) yields $$\gamma_{m+1} = \gamma_m - e_m + e_{m+1}.$$ Multiple applications of this formula yield a telescoping sum, and so \begin{eqnarray*} \gamma_{l(i+1)} & = & \gamma_{l(i+1)-1} - e_{l(i+1)-1} + e_{l(i+1)} \\ & = & (\gamma_{l(i+1)-2}- e_{l(i+1)-2} + e_{l(i+1) -1} ) - e_{l(i+1)-1} + e_{l(i+1)} \\ & = & \gamma_{l(i+1)-2}- e_{l(i+1)-2} + e_{l(i+1)} \\ & & \vdots\\ & = & \gamma_{l(i)+1}- e_{l(i)+1} + e_{l(i+1)}. \end{eqnarray*} This equation in conjunction with (\ref{eq:gammali}) yields \begin{eqnarray*}\label{eq:gammar} \gamma_{l(i+1)} & = & \gamma_{l(i)+1}- e_{l(i)+1} + e_{l(i+1)}\\ & = & s_i \gamma_{l(i)} - e_{l(i)} + e_{l(i)+1} - e_{l(i)+1} + e_{l(i+1)} \\ & = & s_i \gamma_{l(i)} - e_{l(i)} + e_{l(i+1)}, \end{eqnarray*} and since $\rho_i = u_{l(i)-1} + e_{l(i)} = \gamma_{l(i)}$ for all $i$, we have $$\rho_{i+1} = s_i \rho_i - e_{l(i)} + e_{l(i+1)}.$$ \end{proof} We can also construct a recursive formula for the terms of the ramification sequence, as given in the next result. \begin{lem}\label{lem:sumram} For $i \in \N$, \begin{equation*} r_{l(i)} = 1 + \sum_{j=1}^i (s_j-1) r_{l(j-1)}. \end{equation*} \end{lem} \begin{proof} This follows from the simple computation $$\sum_{j=1}^{i} (s_j-1)r_{l(j-1)} = \sum_{j=1}^{i} ((r_{l(j)}/r_{l(j-1)})-1)r_{l(j-1)} = \sum_{j=1}^{i} (r_{l(j)}-r_{l(j-1)}) = r_{l(i)} - r_{l(0)}= r_{l(i)} - 1.$$
For the case $i=0$, we take the summation $\sum_{j=1}^i (s_j-1) r_{l(j-1)}$ to be $0$. \end{proof} Using Lemma \ref{lem:therecurrence}, we can construct yet another recurrence relation for the terms of the monoid generating sequence. \begin{lem}\label{lem:difflams} For $i \ge 1$, \begin{equation*}\label{eq:rhodiff}\rho_i = \sum_{j=1}^{i-1}(s_j-1) \rho_j + e_{l(i)}.\end{equation*} \end{lem} \begin{proof} We proceed by induction. If $i=1$, then by Lemma \ref{lem:therecurrence}, $$\rho_1 = e_{l(1)} = 0 + e_{l(1)} = \sum_{j=1}^0 (s_j-1)\rho_j + e_{l(1)}$$ since the summation that appears is empty. Now suppose the statement holds for the index $i$. Then by Lemma \ref{lem:therecurrence} and the induction hypothesis, \begin{eqnarray*} \rho_{i+1} - \sum_{j=1}^{i} (s_j - 1) \rho_j & = & \rho_{i+1} - (s_i-1) \rho_i - \sum_{j=1}^{i-1} (s_j-1)\rho_j \\ & = & \rho_{i+1} - (s_i-1) \rho_i - (\rho_i - e_{l(i)}) \\ & = & s_i\rho_i - e_{l(i)} + e_{l(i+1)} - (s_i-1) \rho_i - (\rho_i - e_{l(i)}) \\ & = & e_{l(i+1)}. \end{eqnarray*} \end{proof} Using this lemma, we can extract information about the denominators of the components of the monoid generating sequence, as shown in the next three results. Given $q\in \Q$, $q\Z$ denotes the set $\{qz \mid z \in \Z\}$. \begin{lem}\label{cor:rhoresidue} For $i \ge 1$, $\rho_i \in (1/r_{l(i)}) \Z - (1/r_{l(i-1)}) \Z$. \end{lem} \begin{proof} The result follows by a simple induction. Indeed, $\rho_1 = e_{l(1)} \in (1/r_{l(1)}) \Z - \Z = (1/r_{l(1)}) \Z - (1/r_{l(0)})\Z $. Now, assuming that $\rho_j \in (1/r_{l(j)}) \Z$ for $1 \le j \le i$, we see by Lemma \ref{lem:difflams}, $\rho_{i+1} = \sum_{j=1}^{i}(s_j-1) \rho_j + e_{l(i+1)}$. Since $\rho_j \in (1/r_{l(j)}) \Z \subset (1/r_{l(i)}) \Z$ for $1 \le j \le i$, we have $\sum_{j=1}^{i}(s_j-1) \rho_j \in (1/r_{l(i)}) \Z$. Moreover, $e_{l(i+1)} \in (1/r_{l(i+1)}) \Z -(1/r_{l(i)}) \Z $, and so $\rho_{i+1} \in (1/r_{l(i+1)}) \Z - (1/r_{l(i)}) \Z .$ \end{proof} \begin{lem}\label{lem:rhoalpha} If we write $\rho_i = {c_i}/r_{l(i)}$, then $\gcd({c_i},s_i)=1$. \end{lem} \begin{proof} Rewrite the expression $\rho_i = {c_i} /r_{l(i)}$ in lowest terms: $\rho_i = {\alpha_i}/{\beta_i}$, ${\alpha_i},{\beta_i} \in \N^*$ where $\gcd({\alpha_i},{\beta_i}) =1$. Then ${c_i} = {\alpha_i} r_{l(i)}/{\beta_i} = {\alpha_i} \lcm(r_{l(i-1)},{\beta_i})/{\beta_i} = {\alpha_i} r_{l(i-1)}/\gcd(r_{l(i-1)},{\beta_i}).$ Also $r_{l(i)}/r_{l(i-1)} = \lcm(r_{l(i-1)},{\beta_i})/r_{l(i-1)} = {\beta_i}/\gcd(r_{l(i-1)},{\beta_i}).$ Therefore, $$\gcd({c_i},r_{l(i)}/r_{l(i-1)}) = \gcd({\alpha_i} r_{l(i-1)}/\gcd(r_{l(i-1)},{\beta_i}),{\beta_i}/\gcd(r_{l(i-1)},{\beta_i})).$$ Since $\gcd(\alpha_i,\beta_i) = 1$ and $\gcd(r_{l(i-1)}/\gcd(r_{l(i-1)},{\beta_i}), {\beta_i}/\gcd(r_{l(i-1)},{\beta_i}))=1$, we have $\gcd(c_i,s_i) = \gcd({c_i},r_{l(i)}/r_{l(i-1)})=1$. \end{proof} \begin{lem}\label{lem:lambdaresidue} If $0 \le d_j < s_j$ for $1 \le j \le i$ and $d_i \not=0$, then \begin{equation}\label{lambdaresexpress} \sum_{j=1}^i d_j \rho_j \in (1/r_{l(i)})\Z - (1/r_{l(i-1)})\Z. \end{equation} \end{lem} \begin{proof} For $j \le i$, we have by Lemma \ref{cor:rhoresidue}, $\rho_j \in (1/r_{l(j)}) \Z \subset (1/r_{l(i)}) \Z$, and so $\sum_{j=1}^i d_j \rho_j \in (1/r_{l(i)})\Z$. We now must prove $\sum_{j=1}^i d_j \rho_j \not\in (1/r_{l(i-1)})\Z$ by induction. First, we show that $d_j \rho_j \not\in (1/r_{l(j-1)}) \Z $ whenever $0 < d_j < s_j$. Write $\rho_j = c_j/r_{l(j)}$. Suppose, for contradiction, $d_j \rho_j = (d_jc_j)/r_{l(j)} \in (1/r_{l(j-1)})\Z$ where $0 < d_j < s_j$.
Thus, $r_{l(j)} \mid d_jc_jr_{l(j-1)}$. Now, $s_j = r_{l(j)}/r_{l(j-1)}$, and so $s_j \mid d_jc_j$. By Lemma \ref{lem:rhoalpha}, $\gcd(c_j,s_j) = 1$, and so $s_j \mid d_j$. Since $0 < d_j < s_j$, we have a contradiction. Now we proceed to show the inductive step. Suppose $0 \le d_j< s_j$ for $1 \le j \le i+1$ and $d_{i+1} \not=0$. We write $$\sum_{j=1}^{i+1} d_j \rho_j = \left(\sum_{j=1}^{i} d_j \rho_j\right) + d_{i+1} \rho_{i+1}.$$ By the induction hypothesis, $\sum_{j=1}^{i} d_j \rho_j \in (1/r_{l(i)}) \Z$. Now, $d_{i+1}\rho_{i+1} \in (1/r_{l(i+1)}) \Z$, and by the previous paragraph, $d_{i+1}\rho_{i+1} \not\in (1/r_{l(i)}) \Z$. Thus $\sum_{j=1}^{i+1} d_j \rho_j \in (1/{r_{l(i+1)}}) \Z - (1/{r_{l(i)}}) \Z$. \end{proof} \section{Representations of Elements of the Value Monoid} \label{repsmonoid} In this section, we demonstrate that certain elements of ${\Lambda_{}}$ have a unique representation as a sum of elements of $\{1, \rho_1, \rho_2, \dots\}$. Using these representations, we prove that ${\Lambda_{}}$ is generated by $\{1, \rho_1, \rho_2, \rho_3, \dots\}$. To accomplish this, we must factor each element of $k[t,y]$ completely as $f(t,y) = q(t) \prod(y-w_i)$ where each $w_i$ lies in the algebraic closure of $k(t)$. An element of $\kq$ is said to be {\bf Puiseux} if it lies in $k((t^{-1/m}))$ for some positive integer $m$. For $k$ of characteristic zero, Puiseux's Theorem states that the algebraic closure of the field of Laurent series $k((t^{-1}))$ in $\kq$ consists precisely of the elements of $\kq$ that are Puiseux. Using Kedlaya's characterization in \cite{ked} of the generalized power series that are algebraic over the Laurent power series field when $k$ has positive characteristic, we have the following characteristic-free generalization of Puiseux's Theorem. \begin{theorem} \label{theorem:puiseux} Let $w\in \kq$ such that no element of its support is divisible by char $k$. Then $w$ is algebraic over $k((t^{-1}))$ iff $w$ is Puiseux. \end{theorem} The {\bf ramification index} of a Puiseux series $w\in \kq$ is the smallest positive integer $r$ such that $w \in k((t^{-1/r}))$. The result below follows directly from techniques found in \cite{abhyankar} and \cite{duval}. \begin{prop}\label{prop:finitePuiseux} Let $w= c_1 t^{m_1/n} + \cdots + c_s t^{m_s/n}$ be a finite Puiseux expansion with ramification index $n$ where $m_i \in \Zset^*$, $n \in \Zset^+$, and $c_i \in k^*$. If $k$ has positive characteristic, then assume that $n$ is not divisible by char $k$. Then the minimal polynomial of $w$ over $k(t)$ is $p(y) = \prod_{i=0}^{n-1}(y-w_i) \in k(t)[y],$ where \[w_i = c_1 ( \zeta^it^{1/n})^{m_1} + \cdots + c_s (\zeta^it^{1/n})^{m_s}, \] and $\zeta$ is a primitive $n$th root of unity in $\overline{k}$. \end{prop} Given $z_1, z_2 \in \kq$, we say that $z_1$ and $z_2$ {\bf agree to (finite) order } $m\in\N$ if the first $m$ terms of $z_1$ and $z_2$ are identical, but the $(m+1)$st terms (if they exist) of $z_1$ and $z_2$ are different. If we use Theorem \ref{theorem:puiseux} in place of Puiseux's Theorem, then Proposition 4.6 of \cite{ms3} can be strengthened to the following characteristic-free form, where we continue the assumption that no component of the exponent sequence is divisible by char $k$ as stated in Convention \ref{conventions}. \begin{prop}\label{prop:lexpformula} Let $w$ be a Puiseux series in $\kq$. Define $p(y) \in \skl[y]$ to be the minimal polynomial of $w$ over $\skl$, and suppose that $w$ agrees with $z$ to order $m$ while none of the conjugates of $w$ agree with $z$ to a greater order.
If $R$ is the ramification index of $w$, then \begin{equation}\label{eq:p(z)} \lexp(p(z)) = \Big(\frac{{{R}}}{r_m}\Big)\Big[ u_m + \lexp(z-w) \Big] \ge \Big(\frac{{{R}}}{r_m}\Big)\Big[ u_m + e_{m+1} \Big] . \end{equation} \end{prop} The simplest polynomials to which we can apply this result are those whose roots are finite Puiseux series. We make these calculations explicit in the following lemma. \begin{lem}\label{lem:rhopos} If $g(t,y)\in k(t)[y]$ is the minimal polynomial of \begin{equation*} c_1 t^{e_1} + \cdots + c_{l(i)-1}t^{e_{l(i)-1}}\end{equation*} over $k(t)$, then $\deg_y(g(t,y)) = r_{l(i)-1}$ and $\lexp(g(t,z)) = \rho_i.$ \end{lem} \begin{proof} Let $g(t,y) \in k((t^{-1}))[y]$ be the minimal polynomial of $\sum_{j=1}^{l(i)-1} c_j t^{e_j}$ over $k((t^{-1}))$. Since the exponent sequence $\mathbf e$ consists solely of positive numbers, $g(t,y) \in k[t,y]$ by Proposition \ref{prop:finitePuiseux}. Since $\sum_{j=1}^{l(i)-1} c_j t^{e_j}$ has ramification index $r_{l(i)-1}$, it follows from Proposition \ref{prop:finitePuiseux} that $\deg_y g(t,y) = r_{l(i)-1}$. Moreover, by Proposition \ref{prop:lexpformula}, $ \lexp(g(t,z)) = \left(\frac{r_{l(i)-1}}{r_{l(i)-1}} \right)(u_{l(i)-1} + e_{l(i)} ) = \rho_i.$ \end{proof} We will see that in order to generate ${\Lambda_{}}$, we need only consider images of polynomials whose roots are finite Puiseux series. To demonstrate this, we first show that among the polynomials of a fixed degree in $y$, those with the smallest image under $\lexp \circ \varphi_z$ are polynomials whose roots are finite Puiseux series. \begin{prop}\label{char0approx} Let $k$ be a perfect field. For each nonzero $p(x,y) \in k[x,y]$, there exists $h(x,y) \in k[x,y]$ such that the following hold: \begin{enumerate} \item[(i)] $\deg_y p(x,y) = \deg_y h(x,y)$, \item[(ii)] $\lexp(p(t,z)) \ge \lexp(h(t,z))$, \item[(iii)] the roots of $h(t,y)$ in $\overline{k((t^{-1}))}$ are finite Puiseux series of the form $\sum_{j=1}^{l(i)-1} c_j t^{e_j}$. \end{enumerate} \end{prop} \begin{proof} First, factor $p(t,y)$ as a polynomial in $y$ as $p(t,y) = q(t) \prod_{i=1}^n p_i(t,y),$ where $q(t) \in k[t]$ and each $p_i(t,y)$ is a monic, irreducible element of $\skl[y]$. We will find $h_i(x,y) \in k[x,y]$ such that $\deg_y p_i(x,y) = \deg_y h_i(x,y)$, $\lexp(p_i(t,z)) \ge \lexp(h_i(t,z))$, and the roots of $h_i(t,y)$ are finite Puiseux series of the desired form. It then follows that $h(x,y) = q(x) \prod_{i=1}^n h_i(x,y)$ satisfies the conditions of the proposition. Since $p_i(t,y)$ is a monic, irreducible element of $\skl[y]$, it is the minimal polynomial of some generalized power series $\beta \in \kq$. If $k$ is a field of characteristic zero, by Puiseux's Theorem (Theorem \ref{theorem:puiseux}), $\beta$ is Puiseux. If $k$ has positive characteristic, $\beta$ is not necessarily Puiseux and the algebraic closure of $k((t^{-1}))$ is described by Kedlaya in \cite{ked}. We prove the result by considering two cases: \begin{enumerate} \item[Case 1:] No element of $\supp(\beta)$ is divisible by char $k$. \item[Case 2:] Some element of $\supp(\beta)$ is divisible by char $k$. \end{enumerate} {\bf Case 1:} Without loss of generality, we assume that no conjugate of $\beta$ agrees with $z$ to a higher order. We denote this order by $m$, and denote the ramification index of $\beta$ by $R$, in which case $r_{m} \mid R$. As shown in \cite{enum}, $p_i(t,y) \in \skl[y]$ must be a polynomial of degree $R$.
Let $L$ be the largest index such that $r_{L} = r_{m}$, in which case $r_{L+1} > r_{L}$, and so $L+1$ is of the form $l(\kappa)$ for some $\kappa \in \N$. Let $g(t,y) \in k[t,y]$ be the minimal polynomial of $\sum_{j=1}^{l(\kappa)-1} c_j t^{e_j}$ over $k(t)$. Then by Lemma \ref{lem:rhopos}, $\deg_y(g(t,y)) = r_{l(\kappa)-1} = r_{L} = r_{m}$ and $\lexp(g(t,z)) = \rho_{\kappa}.$ Therefore, if we define $h_i(x,y) = g(x,y)^{R/r_{m}}$, then $\lexp(h_i(t,z)) = ({R}/{r_{m}}) \rho_{\kappa}$ and $ \deg_y (h_i(x,y)) = ({R}/{r_{m}}) \deg_y(g) = R = \deg_y (p_i(x,y)). $ Since $r_{L} = r_{m}$, we know by Lemma \ref{lem:staticuandr} that $u_{L} = u_{m}$. Moreover, $L \ge m $, and so $e_{m+1} \ge e_{L+1}$. Thus by Proposition \ref{prop:lexpformula}, $\lexp(p_i(t,z)) \ge ({R}/{r_{m}})[u_{m} + e_{m+1}] \ge ({R}/{r_{m}}) [u_{L} + e_{L+1}] = ({R}/{r_{m}}) [u_{l(\kappa)-1} + e_{l(\kappa)}] = ({R}/{r_{m}}) \rho_{\kappa} = \lexp(h_i(t,z)).$ {\bf Case 2 :} Let char $k = p$. Let $E$ be the normal closure of $k((t^{-1}))(\beta)/k((t^{-1}))$. As in the proof of Corollary 9 of \cite{ked}, if $M$ is the integral closure of $k$ in $E$, then $E$ can be expressed as a tower of Artin-Schreier extensions over $M((t^{-1/mq}))$, where $q$ is the degree of inseparability of $E/k((t^{-1}))$. Since $E$ is normal over $k((t^{-1}))$, and hence over $k((t^{-1/mq}))$, the normal closure of $k((t^{-1/mq}))$ over $k((t^{-1}))$ must be contained in $E$. The field $k(\zeta_m)((t^{-1/mq}))= k(\zeta_m, t^{-1/qm})((t^{-1}))$ is this normal closure (it is the splitting field of $X^{mq}-t^{-1}$ over $k((t^{-1}))$), and so we have the following normal extensions: $$k((t^{-1})) \subset k(\zeta_m)((t^{-1/mq})) \subset E.$$ Define $F=k(\zeta_m)((t^{-1/mq}))$, and let $\tau_\ell \in \gal(F/k((t^{-1})))$ be given by $t^{1/qm} \mapsto \zeta_m^{q\ell} t^{1/qm}$. Note that as $\zeta_m^0, \zeta_m, \dots, \zeta_m^{m-1}$ runs through all the $m$th roots of unity, so does the list $\zeta_m^0, \zeta_m^q, \dots, \zeta_m^{(m-1)q}$ since $\gcd(m,q) = 1$. Each element of $\gal(F/k((t^{-1})))$ can be written as $\tau_\ell \mu$ where $\mu \in \gal(k(\zeta_m)/k)$. We write the collection of all elements of $\gal(F/k((t^{-1})))$ as $\{ \psi_1, \dots, \psi_b\}$. Define a homomorphism $\lambda_\ell: \Q \to \overline{k}^*$ by $\lambda_\ell(ap^n/b) = \zeta_b^{a\ell s}$ where $a\in\Z$, $b\in\N^*$, $p \nmid ab$ and $s \equiv p^n \mod b$ (or, if $n<0$, we require $sp^{-n} \equiv 1 \mod b$). It is straightforward to show that if $\lambda: \Q \to \overline{k}^*$ is a homomorphism whose kernel contains $\Z$ and $\mu \in \gal(k(\zeta_m)/k)$, then \begin{equation}\label{eq:autocoeff} \sum_{i\in I} x_i t^i \mapsto \sum_{i\in I} \lambda(i) \mu(x_i) t^i \end{equation} is a $\overline{k}((t^{-1}))$-automorphism of $\kcq$ (where $I$ is any Noetherian subset of $\Q$). Given $\psi_j \in \gal(F/k((t^{-1})))$, we write $\psi_j = \tau_\ell \mu$ for some $1 \le \ell \le m$ and $\mu \in \gal(k(\zeta_m)/k)$. In case $\lambda = \lambda_\ell$, note that the function in (\ref{eq:autocoeff}) is an extension of $\psi_j$ to $\kcq$. We denote the restriction of this function to $E$ by $\phi_j$. We will show that $\phi_j$ sends $\overline{k((t^{-1}))}$ to itself, and since $E$ is a normal extension of $k((t^{-1}))$, it follows that $\phi_j \in \gal(E/k((t^{-1})))$ is an extension of $\psi_j$. To show that $\phi_j$ sends $\overline{k((t^{-1}))}$ to itself, we appeal to Kedlaya's description of the algebraic closure in Corollary 9 of \cite{ked}. First, we review a few key ideas from that paper.
The support of any algebraic series must be a set of the form \begin{equation*}\label{kedsupport} S_{m,v,c} = \{(1/m)( w+b_1p^{-1} + \cdots + b_{j-1}p^{-j+1} + p^{-n}(b_j p^{-j} + \cdots )) \mid w \le v, \sum b_i \le c \} \end{equation*} where $m \in \N, v,c \ge 0$. Note that $S_{m,v,c}$ is defined differently than the form given by Kedlaya since our support is Noetherian rather than well-ordered. We say that a sequence $c_n$ satisfies a linearized recurrence relation (LRR) if for some $d_0, \dots, d_k$, for all $n\in \N$, \begin{equation*}\label{eq:lrr} d_0 c_n + d_1 c_{n+1}^p + \cdots + d_k c_{n+k}^{p^k} = 0. \end{equation*} Let $\sum x_i t^i$ be a series with support contained in $S_{m,v,c}$. We say $\sum x_i t^i$ is {\bf twist-recurrent} if for each $w \le v$, $\sum b_i \le c$, the sequence $c_n = x_{(1/m)( w+b_1p^{-1} + \cdots + b_{j-1}p^{-j+1} + p^{-n}(b_j p^{-j} + \cdots ))}$ satisfies an LRR. According to \cite{ked}, the algebraic closure of $k((t^{-1}))$ consists of all twist-recurrent series $x = \sum x_i t^i$ such that the $x_i$ lie in a finite extension of $k$. Now suppose $\sum x_i t^i$ is a twist-recurrent series. We will show that $\phi_j \left( \sum x_i t^i \right)$ is also twist-recurrent, and so by the previous paragraph, $\phi_j$ sends $\overline{k((t^{-1}))}$ to itself. Since $\sum x_i t^i$ is twist-recurrent, it follows that $c_n = x_{(1/m)( w+b_1p^{-1} + \cdots + b_{j-1}p^{-j+1} + p^{-n}(b_j p^{-j} + \cdots ))}$ satisfies an LRR of the form $d_0 c_n + d_1 c_{n+1}^p + \cdots + d_k c_{n+k}^{p^k} = 0$. To show that $\phi_j \left( \sum x_i t^i \right)$ is twist-recurrent, we must prove that $\lambda(f(n)) \mu (c_n)$ satisfies an LRR, where $f(n) = (1/m)( w+b_1p^{-1} + \cdots + b_{j-1}p^{-j+1} + p^{-n}(b_j p^{-j} + \cdots ))$, $\lambda = \lambda_\ell$ for some $\ell$, and $\mu \in \gal(k(\zeta_m)/k)$. If $c_n$ satisfies the LRR $\sum_{i=0}^k d_i c_{n+i}^{p^i} =0$, it follows that $0 = \mu \left(\sum_{i=0}^k d_i c_{n+i}^{p^i} \right) = \sum_{i=0}^k \mu(d_i) \mu(c_{n+i})^{p^i}$, and so $\mu (c_n)$ satisfies an LRR. Thus we only have to show that if $c_n$ satisfies an LRR, then so does $c_n' = \lambda_\ell(f(n)) c_n$. Now suppose $c_n$ satisfies the LRR $\sum_{i=0}^k d_i c_{n+i}^{p^i}= 0$. Rewrite $w+ b_1p^{-1} + \cdots + b_{j-1}p^{-j+1}$ as $\frac{\alpha_1}{p^{m_1}}$ where $p \nmid \alpha_1$ and $m_1 \le j-1$. If we rewrite $b_j p^{-j} + b_{j+1} p^{-j-1}+\cdots$ as $\frac{\alpha_2}{p^{m_2}}$ where $p \nmid \alpha_2$ and $m_2 \ge j$, then \begin{equation*} f(n) = \frac{\alpha_1 p^{m_2+n} + \alpha_2 p^{m_1}}{mp^{m_1+m_2+n}} = \frac{\alpha_1 p^{m_2-m_1+n} + \alpha_2}{mp^{m_2+n}}. \end{equation*} If we define $s_n, \delta_1, \delta_2$ so that $s_np^n \equiv 1 \mod m$, $\delta_1p^{m_1} \equiv 1 \mod m$, and $\delta_2p^{m_2} \equiv 1 \mod m$, then $\lambda_\ell(f(n)) =\zeta_m^{(\alpha_1p^{m_2-m_1+n}+\alpha_2)s_n\delta_2} = \zeta_m^{\alpha_1\delta_1} \cdot \zeta_m^{\alpha_2\delta_2s_n}$, and so if we define $d_i' = \zeta_m^{-\alpha_1\delta_1p^i}d_i$, then $$\sum_{i=0}^k d_i' (c_{n+i}')^{p^i} = \sum_{i=0}^k (\zeta_m^{-\alpha_1\delta_1p^i})d_i(\zeta_m^{\alpha_1\delta_1} \cdot \zeta_m^{\alpha_2\delta_2s_{n+i}})^{p^i} c_{n+i}^{p^i} = \sum_{i=0}^k (\zeta_m^{-\alpha_1\delta_1p^i})d_i(\zeta_m^{\alpha_1\delta_1p^i})(\zeta_m^{\alpha_2\delta_2s_{n+i}})^{p^i} c_{n+i}^{p^i},$$ which simplifies to $$ \sum_{i=0}^k (\zeta_m^{\alpha_2\delta_2s_{n+i}})^{p^i}d_i c_{n+i}^{p^i} = \sum_{i=0}^k (\zeta_m^{\alpha_2\delta_2s_{n}s_i})^{p^i}d_i c_{n+i}^{p^i} = \zeta_m^{\alpha_2\delta_2s_{n}} \sum_{i=0}^k d_i c_{n+i}^{p^i} = 0, $$ and so $c_n'$ satisfies an LRR.
So far, we have shown that $\phi_j$ sends $\overline{k((t^{-1}))}$ to itself, and since $E$ is a normal extension of $k((t^{-1}))$, we know $\phi_j \in \gal(E/k((t^{-1})))$ is an extension of $\psi_j$. Let $\{\sigma_1, \dots, \sigma_d\}$ be the complete collection of $F$-automorphisms of $E$. Since $E/F$ and $F/k((t^{-1}))$ are normal extensions, a routine exercise shows that the collection $\{\phi_i\sigma_j \mid 1 \le i \le b, 1 \le j \le d \}$ consists of all $k((t^{-1}))$-automorphisms of $E$. Since $q$ is the degree of inseparability of $E$ over $k((t^{-1}))$, the minimal polynomial $m_\beta$ of $\beta$ over $k((t^{-1}))$ can be factored as \begin{equation*}\label{eq:minpoly} m_\beta(t,y) = \prod_{i=1}^{d} \left( \prod_{j=1}^{b} (y - \phi_j \sigma_i\beta) \right)^{q}. \end{equation*} For any series $s \in \kq$ with support $I$, write $s = \sum_{e \in I} c_e t^{e}$; we define an associated Puiseux series by $\mathcal{P}(s) = \sum_{e \in J} c_e t^{e}$, where $J$ consists of those elements of $I$ that, written in lowest terms as $a/b$ with $a \in \Z$ and $b \in \N^*$, satisfy $p\nmid b$, and we define the {\bf remainder} by $\mathcal{R}(s) = s - \mathcal{P}(s)$. Since no component of the ramification sequence of $z$ is divisible by $p$, we obtain \begin{equation}\label{eq:boundle} \lexp(z-\phi_j \sigma_i \beta) = \lexp(z- {\mathcal P}(\phi_j \sigma_i \beta) - {\mathcal R}(\phi_j \sigma_i \beta)) \ge \lexp(z-{\mathcal P}(\phi_j \sigma_i \beta)). \end{equation} Since $\phi_j$ is of the form (\ref{eq:autocoeff}), for any series $s \in\kq$, $\mathcal{P}(\phi_js) = \phi_j({\mathcal P}(s))$. Applying this to (\ref{eq:boundle}), we obtain \begin{equation*}\label{eq:boundle2} \lexp(z-\phi_j \sigma_i \beta) \ge \lexp(z-\phi_j {\mathcal P}(\sigma_i \beta)). \end{equation*} Of all the conjugates $\phi_j({\mathcal P}(\sigma_i \beta))$ of ${\mathcal P}(\sigma_i \beta)$ over $F$, choose $\alpha_i$ to be the one that agrees with $z$ to the highest order. Note that $\prod_{j=1}^b (y-\phi_j \alpha_i)$ must be of the form $m_{\alpha_i}(t,y)^{\ell_i}$ where $m_{\alpha_i}(t,y)$ is the minimal polynomial of $\alpha_i$ over $k((t^{-1}))$ and $\ell_i \in \N$. Since $\alpha_i$ is a Puiseux series such that no element of its support is divisible by $p$, we have reduced the problem to Case 1, and the proof is complete. \end{proof} Now, we define a sequence of rational numbers that gives the minimal possible value of the image of a polynomial of degree $d$ under the map $\lexp \circ \varphi_z$. \begin{definition}\label{def:lambda} For each natural number $d$, \begin{equation*} \lambda_d := \min \{ \lexp(f(t,z)) \mid f\in k[x,y]^* \mbox{ \rm and } \deg_y(f(x,y)) = d \}. \end{equation*} \end{definition} \begin{lem}\label{lem:lambdad} Let $k$ be a perfect field. For any positive integer $d$, \begin{equation}\lambda_d = \lexp\left(\prod_{j=1}^w f_j(t,z)^{d_j} \right) \end{equation} where $w$ is a positive integer, each exponent $d_j$ is nonnegative, and $f_j$ is the minimal polynomial of $\sum_{i=1}^{{l(j)-1}} c_i t^{e_i}$ over $k(t)$. Moreover, $d= \sum d_j \deg_y(f_j(x,y))$. \end{lem} \begin{proof} By the definition of $\lambda_d$, there exists $p(x,y) \in k[x,y]$ such that $\deg_y(p(x,y))= d$ and $\lexp(p(t,z)) = \lambda_d.$ By Proposition \ref{char0approx}, there exists $h(x,y)$ such that $\lambda_d = \lexp(p(t,z)) \ge \lexp(h(t,z))$, $\deg_y(h(x,y)) = d$, and $h(t,y)$ has finite Puiseux series as roots.
Thus, by the definition of $\lambda_d$, $ \lambda_d = \lexp(h(t,z)).$ Since $h(x,y)$ is a product of minimal polynomials of finite Puiseux series, we can write $h$ as $h(t,z) = \prod_{j=1}^w f_j(t,z)^{d_j}, $ where $w$ is a positive integer, and for each $1 \le j \le w$, the exponent $d_j$ is nonnegative, and $f_j$ is the minimal polynomial of $\sum_{i=1}^{{l(j)-1}} c_i t^{e_i}$ over $k(t)$. \end{proof} Using this lemma, we can produce a unique representation for each $\lambda_d$ in terms of the monoid generating sequence. \begin{prop}\label{lem:minrep} Let $k$ be a perfect field. For any positive integer $d$, $\lambda_d$ can be uniquely expressed in the form \begin{equation}\label{eq:lambdaunique} \lambda_d = \sum_{j=1}^w d_j \rho_j, \end{equation} where $w$ is a positive integer, and for each $1 \le j \le w$, we have \begin{equation}\label{eq:boundd} 0 \le d_j < s_j. \end{equation} In this case, \begin{equation*}\label{eq:decomposed} d = \sum_{j=1}^w d_j r_{l(j-1)}. \end{equation*} \end{prop} \begin{proof} By Lemma \ref{lem:lambdad}, there exists $h(x,y) \in k[x,y]$ such that $\lambda_d = \lexp(h(t,z))$, $\deg_y(h(x,y)) = d$, and \begin{equation*}\label{eq:hprod} h(t,z) = \prod_{j=1}^w f_j(t,z)^{d_j}, \end{equation*} where $w$ is a positive integer, and for each $1 \le j \le w$, the exponent $d_j$ is nonnegative, and $f_j$ is the minimal polynomial of $\sum_{i=1}^{{l(j)-1}} c_i t^{e_i}$ over $k(t)$. By Lemma \ref{lem:rhopos}, $\deg_y f_j(x,y) = r_{l(j)-1}$ and $\lexp( f_j(t,z)) = \rho_j$, and so \begin{equation*}\label{eq:lambdanew2} \lambda_d = \lexp\left(\prod_{j=1}^w f_j(t,z)^{d_j} \right) = \sum_{j=1}^w d_j \lexp(f_j(t,z)) = \sum_{j=1}^w d_j \rho_j \end{equation*} and \begin{equation*} d = \deg_y h(x,y) = \sum_{j=1}^w d_j \deg_y f_j(x,y) = \sum_{j=1}^w d_j r_{l(j)-1} = \sum_{j=1}^w d_j r_{l(j-1)}. \end{equation*} Next we show that each $d_j$ satisfies the bounds given by (\ref{eq:boundd}). Suppose, for contradiction, that for some index $k$ we have $d_k \ge s_k = r_{l(k)}/r_{l(k-1)}$. Define $$D_j \ = \ \left\{ \begin{array} {l@{\quad}cl} d_j + 1 & & \mbox{if } j= k+1; \\ d_j - s_j & & \mbox{if } j= k; \\ d_j & & \mbox{otherwise}. \end{array} \right. $$ Using this in conjunction with the recurrence relation given in Lemma \ref{lem:therecurrence}, we obtain \begin{eqnarray*} \sum_{j=1}^w d_j \rho_j - \sum_{j=1}^w D_j \rho_j & = & (d_k - D_k) \rho_k +(d_{k+1} - D_{k+1}) \rho_{k+1} \\ & = & s_k \rho_k - \rho_{k+1} \\ & = & e_{l(k)} - e_{l(k+1)}, \end{eqnarray*} and so \begin{eqnarray*} \sum_{j=1}^w d_j r_{l(j-1)} - \sum_{j=1}^w D_j r_{l(j-1)} & = & (d_k - D_k) r_{l(k-1)} + (d_{k+1} - D_{k+1}) r_{l(k)} \\ & = & s_k r_{l(k-1)} - r_{l(k)} \\ & = & 0. \end{eqnarray*} These equations in conjunction with Lemma \ref{lem:rhopos} yield $$\lexp\left(\prod_{j=1}^w f_j(t,z)^{D_j} \right) = \sum_{j=1}^w D_j \rho_j = \sum_{j=1}^w d_j \rho_j - e_{l(k)} + e_{l(k+1)} < \sum_{j=1}^w d_j \rho_j = \lexp(h) $$ and $$\deg_y\left(\prod_{j=1}^w f_j(x,y)^{D_j}\right) = \sum_{j=1}^w D_j \deg_y(f_j) = \sum_{j=1}^w D_j r_{l(j-1)} = \sum_{j=1}^w d_j r_{l(j-1)} = \deg_y(h).$$ However, $\lexp(h) = \lambda_d$, and so we have contradicted the minimality of $\lexp(h)$. Thus $0 \le d_j < s_j$ for each $1 \le j \le w$, and so we have proved the bounds given by (\ref{eq:boundd}). Finally, we demonstrate that the expression for $\lambda_d$ in (\ref{eq:lambdaunique}) is uniquely determined.
Suppose we are given two representations for $\lambda_d$: \begin{equation*} \lambda_d = \sum_{j=1}^w d_j \rho_j = \sum_{j=1}^w d_j' \rho_j \end{equation*} where $0 \le d_j, d_j' < s_j$. If we define $\Delta_j = d_j - d_j'$, then $\sum_{j=1}^w \Delta_j \rho_j = 0$ and $|\Delta_j| < s_j$. Multiplying the expression by $r_{l(w-1)}$, we see $$\left(\sum_{j=1}^{w-1} r_{l(w-1)} \Delta_j\rho_j\right) + r_{l(w-1)} \Delta_w \rho_w =0.$$ However, $r_{l(w-1)} \Delta_j \rho_j \in \Z$ for $j \le w-1$, and so $r_{l(w-1)} \Delta_w \rho_w \in \Z$. Now write $\rho_w$ as $c_w/r_{l(w)}$ where $c_w \in \N$. Then $r_{l(w-1)} \Delta_w c_w/r_{l(w)} \in \Z$, and so $s_w = \frac{r_{l(w)}}{r_{l(w-1)}} \mid \Delta_wc_w$. Since $s_w$ and $c_w$ are relatively prime by Lemma \ref{lem:rhoalpha}, $s_w \mid \Delta_w$. However, $|\Delta_w| < s_w$, and so $\Delta_w = 0$. Thus, $\sum_{j=1}^{w-1} \Delta_j \rho_j = 0$. Repeating this argument, we find $\Delta_{w-1} = \Delta_{w-2} = \cdots = \Delta_1 = 0$, and so $d_j = d_j'$ for all $1 \le j \le w$. \end{proof} The idea that each $\lambda_d$ has a unique representation can be extended further. In fact, there is a natural bijective correspondence between representations of natural numbers and representations of terms of the form $\lambda_d$. First, we state the following simple lemma without proof. \begin{lem}\label{lem:baserep} Let $b_0, b_1, b_2, b_3, \dots$ be a sequence of positive integers such that $b_0 = 1$, $b_{i+1} > b_i$, and $b_i \mid b_{i+1}$ for all $i$. Then every positive integer $d$ has a unique representation of the form $$d = \sum_{i=0}^w d_i b_i,$$ where $w$ is a nonnegative integer, $d_w \not=0$, and $0 \le d_i < b_{i+1}/b_{i}$. \end{lem} For example, if $b_i = {10}^i$, then this says that every positive integer has a unique base 10 representation. Using this lemma, we produce a method for quickly computing $\lambda_d$. \begin{prop}\label{prop:correspondence} \label{eq:lambdad} Let $k$ be a perfect field. Given a positive integer $w$ and $0 \le d_j < s_j$ for each $1 \le j \le w$, $$d = \sum_{j=1}^w d_{j} r_{l(j-1)} \ \Leftrightarrow \ \lambda_d = \sum_{j=1}^w d_j \rho_j.$$ \end{prop} \begin{proof} The reverse implication follows directly from Proposition \ref{lem:minrep}. For the forward implication, suppose we are given $d= \sum_{j=1}^w d_j r_{l(j-1)}$ where $0 \le d_j < s_j$. By Proposition \ref{lem:minrep}, $\lambda_d$ is of the form $\lambda_d = \sum_{j=1}^{w'}d_j' \rho_j$ where $d=\sum_{j=1}^{w'}d_j' r_{l(j-1)}$. By the uniqueness promised by Lemma \ref{lem:baserep}, $w=w'$ and $d_j=d_j'$ for all $1\le j \le w$. Thus $\lambda_d = \sum_{j=1}^w d_j \rho_j$. \end{proof} \section{Construction of the Value Monoid} \label{construction} The goal of this section is to describe the value monoid ${\Lambda_{}}$ explicitly in terms of the sequences $\{\lambda_i\}_{i\in\N}$ and $\{\rho_i\}_{i\in\N}$. Throughout the remainder, in addition to Convention \ref{conventions}, we assume that $k$ is a perfect field and $\{\lambda_i\}_{i\in\N}$ is given by Definition \ref{def:lambda}. We begin by showing that $\{\lambda_i\}_{i\in\N}$ is an increasing sequence. \begin{lem}\label{lem:lambdainc} The sequence $\lambda_0, \lambda_1, \lambda_2, \dots$ is increasing. \end{lem} \begin{proof} We will show that $\lambda_{d+1} > \lambda_d$ for all $d$. By Proposition \ref{lem:minrep}, we can write $\lambda_d = \sum_{j=1}^w d_j \rho_j$ where $0 \le d_j < s_j$ and \begin{equation*} d = \sum_{j=1}^w d_j r_{l(j-1)}. \end{equation*}
We now consider different cases, depending on the size of the coefficients $d_j$. \vskip 0.3 cm \noindent {\bf Case 1: \ } First we consider the case $d_j = s_j-1$ for all $j$. Then $d=\sum_{j=1}^w (s_j-1) r_{l(j-1)}$, and so by Lemma \ref{lem:sumram}, $d+1 = r_{l(w)}$. Thus by Proposition \ref{prop:correspondence}, $\lambda_{d+1} = \rho_{w+1}$ and $\lambda_d = \sum_{j=1}^w d_j \rho_j$, and so by Lemma \ref{lem:difflams}, $\lambda_{d+1} - \lambda_d = \rho_{w+1} -\sum_{j=1}^w (s_j-1) \rho_{j} = e_{l(w+1)} > 0.$ \vskip 0.3 cm \noindent {\bf Case 2: \ } Consider the case $d_1 < s_1 -1$. Now $d+1 = (d_1+1)r_{l(0)}+ \sum_{j=2}^w d_j r_{l(j-1)}$, and so by Proposition \ref{prop:correspondence}, $\lambda_{d+1} = (d_1+1)\rho_1 + \sum_{j=2}^w d_j \rho_j$. Thus $\lambda_{d+1}-\lambda_d = (d_1+1)\rho_1 - d_1\rho_1 = \rho_1 > 0$. \vskip 0.3 cm \noindent {\bf Case 3: \ } Finally we consider the case where there exists an index $v>1$ such that $d_v < s_v-1$ and for $j<v$, $d_j = s_j-1$. Write $\lambda_d$ as $\lambda_d = \sum_{j=1}^{v-1} (s_j-1) \rho_j + \sum_{j=v}^{w} d_j\rho_j.$ By Proposition \ref{prop:correspondence}, $d = \sum_{j=1}^{v-1} (s_j-1) r_{l(j-1)} + \sum_{j=v}^w d_j r_{l(j-1)},$ and so by Lemma \ref{lem:sumram}, $$d+1 = 1 + \sum_{j=1}^{v-1} (s_j-1) r_{l(j-1)} + \sum_{j=v}^w d_j r_{l(j-1)}= r_{l(v-1)} + \sum_{j=v}^w d_j r_{l(j-1)} = (d_v +1) r_{l(v-1)} + \sum_{j=v+1}^w d_j r_{l(j-1)}.$$ Therefore, by Proposition \ref{prop:correspondence}, $ \lambda_{d+1} = (d_v+1) \rho_{v} + \sum_{j=v+1}^w d_j \rho_j,$ and so $\lambda_{d+1} - \lambda_d = (d_v + 1) \rho_v + \sum_{j=v+1}^w d_j \rho_j - (\sum_{j=1}^{v-1}(s_j-1) \rho_j + \sum_{j=v}^w d_j \rho_j)= \rho_v - \sum_{j=1}^{v-1} (s_j-1) \rho_j $. By Lemma \ref{lem:difflams}, this is simply $e_{l(v)}$, which is positive. \end{proof} Given a submonoid $M$ of a commutative monoid $N$, we define an equivalence relation on $N$ by setting $n_1 \sim_M n_2$ if and only if there exist $m_1, m_2 \in M$ such that $m_1+n_1 = m_2+n_2$. Denote by $N/M$ the collection of all equivalence classes under this relation, and define a quotient map $\pi$ from $N$ to $N/M$ that sends $n$ to the equivalence class containing $n$. The set $N/M$ has an additive monoid structure where we define $\pi(n_1) + \pi(n_2) = \pi(n_1+n_2)$. Given a polynomial $f(x,y) \in k[x,y]$, we define ${{\mbox{deg}}}_y(f(x,y))$ to be the smallest $d\ge0$ such that $f(x,y)\in k[x]y^d+k[x]y^{d-1}+\cdots+k[x]y+k[x]$, and we denote \begin{equation}\label{def:BigLambda} \Lambda_d = \{ \lexp(f(t,z)) \mid f\in k[x,y]^* \mbox{ \rm and } \deg_y(f(x,y)) \le d \}.\end{equation} Using this notation, we show that any two distinct terms of the sequence $\{\lambda_i\}_{i\in\N}$ are inequivalent modulo $\Z$. \begin{prop}\label{prop:minrepsinequiv} For all $i \not= k$, $\lambda_i \not\sim_\Z \lambda_k$. \end{prop} \begin{proof} Suppose $\lambda_i \sim_\Z \lambda_k$. By Proposition \ref{lem:minrep}, for some positive integer $w$ we can write $\lambda_i = \sum_{j=1}^w d_j \rho_j$ and $\lambda_k = \sum_{j=1}^w d_j' \rho_j$ where $0 \le d_j, d_j' < s_j$. For each $1 \le j \le w$, we write $\rho_j = c_j/r_{l(j)}$, where $c_j$ and $s_j$ are relatively prime, as promised by Lemma \ref{lem:rhoalpha}. If we define $\Delta_j = d_j - d_j'$, then $|\Delta_j| < s_j = r_{l(j)}/r_{l(j-1)}$ and $\lambda_i - \lambda_k = \sum_{j=1}^w \Delta_j \rho_j \sim_\Z 0$.
Multiply the expression by $r_{l(w-1)}$ to obtain \begin{eqnarray}\label{eq:congcancel} \left(\sum_{j=1}^{w-1} r_{l(w-1)} \Delta_j\rho_j\right) + r_{l(w-1)} \Delta_w \rho_w \sim_\Z 0. \end{eqnarray} However, $r_{l(w-1)} \Delta_j \rho_j \in \Z$ for $j \le w-1$ since $\rho_j \in (1/r_{l(j)})\Z$, and so by (\ref{eq:congcancel}), $r_{l(w-1)} \Delta_w c_w/r_{l(w)} = r_{l(w-1)} \Delta_w \rho_w \in \Z$. That is, $\Delta_w c_w /s_w = r_{l(w-1)} \Delta_w c_w/r_{l(w)} \in \Z$, and so $s_w \mid \Delta_w c_w$. Since $s_w$ and $c_w$ are relatively prime, $s_w \mid \Delta_w$. However, $|\Delta_w| < s_w$, and so $\Delta_w = 0$. Thus, $\sum_{j=1}^{w-1} \Delta_j \rho_j \sim_\Z 0$. Repeating this argument, we find $\Delta_{w-1} = \Delta_{w-2} = \cdots = \Delta_1 = 0$, and so $\lambda_i = \lambda_k$. By Lemma \ref{lem:lambdainc}, $i=k$. \end{proof} We quote the following result from \cite{ms2}. \begin{theorem}\label{theorem:digistight} For every positive integer $d$, the quotient $\Lambda_d/\Lambda_0$ has cardinality one greater than that of $\Lambda_{d-1}/\Lambda_0$; equivalently, $\Lambda_d/\Lambda_0$ has cardinality $d+1$. \end{theorem} Using this theorem in conjunction with Proposition \ref{prop:minrepsinequiv}, we compute the quotient $\Lambda_d/\Lambda_0$. \begin{cor}\label{cor:quotientgen} The quotient $\Lambda_d / \Lambda_0$ consists precisely of the images of $\lambda_0, \dots, \lambda_d$. \end{cor} \begin{proof} Since $\lambda_0, \dots, \lambda_d \in \Lambda_d$, we know by Proposition \ref{prop:minrepsinequiv} that the images of $\lambda_0, \dots, \lambda_d$ are distinct in $\Lambda_d / \Lambda_0$. By Theorem \ref{theorem:digistight}, these images constitute the entire quotient $\Lambda_d / \Lambda_0$. \end{proof} For each $m \in {\Lambda_{}}$, we make the following definition: \begin{equation}\label{def:lambdam} \lambda(m) = \min \{ r \in {\Lambda_{}} \mid r \sim_\Z m \}. \end{equation} The next two results allow us to relate terms of the sequence $\{\lambda_i\}_{i\in\N}$ with elements in the image of the map $\lambda: {\Lambda_{}} \to {\Lambda_{}}$. \begin{prop}\label{prop:minrepcross} For all $i \in \N$, there exists $m \in {\Lambda_{}}$ such that $\lambda_i = \lambda(m)$. \end{prop} \begin{proof} We prove the following equivalent statement: for all $i \in \N, m \in {\Lambda_{}}$, if $m \sim_\Z \lambda_i$, then $\lambda_i \le m$. Let $i \in \N$ and $m\in {\Lambda_{}}$ be such that $m \sim_\Z \lambda_i$. Let $j$ be the smallest index such that $m \in \Lambda_j$. Suppose, for contradiction, $j<i$. Since the image of $m$ must lie in the quotient $\Lambda_j /\Lambda_0$, by Corollary \ref{cor:quotientgen} it follows that $m \sim_\Z \lambda_t$ for some $t\le j < i$. Thus, $\lambda_i \sim_\Z \lambda_t$, which contradicts Proposition \ref{prop:minrepsinequiv}. Therefore, $j \ge i$, and so by Lemma \ref{lem:lambdainc}, $m \ge \lambda_j \ge \lambda_i$. \end{proof} \begin{prop}\label{prop:minstrata} For all $m\in {\Lambda_{}}$, there exists $i \in \N$ such that $\lambda_i = \lambda(m)$. \end{prop} \begin{proof} Let $m \in {\Lambda_{}}$. Now $m \in \Lambda_j$ for some $j \in \N$, and so by Corollary \ref{cor:quotientgen}, $m \sim_\Z \lambda_i$ for some $i \in \N$. By Proposition \ref{prop:minrepcross}, $\lambda_i = \lambda(m')$ for some $m' \in {\Lambda_{}}$. Thus $\lambda_i \sim_\Z m \sim_\Z m'$, and so $\lambda_i = \lambda(m') = \lambda(m)$. \end{proof} We are now in a position to decompose the value monoid as a disjoint union of cosets of $\N$.
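Before stating the decomposition, we remark that Proposition \ref{prop:correspondence} is effective: the digits $d_j$ of $d$ are extracted by successive division by the $s_j$, exactly as in a mixed-radix numeral system. The following Python sketch is illustrative only; the sequences $\rho_j$ and $s_j$ are assumed to have been precomputed, here for the series $z = t^{1/2} + t^{1/4} + t^{1/8} + \cdots$ truncated to three terms.
\begin{verbatim}
# Sketch of the correspondence d <-> lambda_d described above:
# lambda_d = sum d_j rho_j, where d = sum d_j r_{l(j-1)} and the
# digits satisfy 0 <= d_j < s_j.  Data: rho = (1/2, 3/4, 11/8), s_j = 2.
from fractions import Fraction

rho = [Fraction(1, 2), Fraction(3, 4), Fraction(11, 8)]
s = [2, 2, 2]

def lam(d):
    """Compute lambda_d from the mixed-radix digits of d."""
    total = Fraction(0)
    for s_j, rho_j in zip(s, rho):
        d, d_j = divmod(d, s_j)
        total += d_j * rho_j
    assert d == 0, "need more terms of the sequences"
    return total

print([lam(d) for d in range(8)])
# lambda_0, ..., lambda_7 = 0, 1/2, 3/4, 5/4, 11/8, 15/8, 17/8, 21/8
\end{verbatim}
The printed values are strictly increasing, in accordance with Lemma \ref{lem:lambdainc}.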
\begin{theorem}\label{thm:monoidreplambda} If the exponent sequence of $z \in \kq$ is strictly positive, then the value monoid is the disjoint union \begin{equation*}\label{mzuplus} {\Lambda_{}} = \bigcup_{d=0}^\infty (\N + \lambda_d). \end{equation*} \end{theorem} \begin{proof} Given $m \in {\Lambda_{}}$, there exists an index $d$ such that $\lambda_d = \lambda(m)$ by Proposition \ref{prop:minstrata}. Therefore, $m - \lambda_d$ is an integer and, by the minimality in (\ref{def:lambdam}), nonnegative; that is, $m - \lambda_d \in \N$, and so $m \in \N + \lambda_d$. The reverse containment follows directly from the fact that $\lambda_d \in {\Lambda_{}}$. The sets are disjoint due to Proposition \ref{prop:minrepsinequiv}. \end{proof} Combining Theorem \ref{thm:monoidreplambda} and Proposition \ref{lem:minrep}, we obtain the following. \begin{theorem}\label{Mzunique} Each element $m \in {\Lambda_{}}$ has a unique representation of the form \begin{equation}\label{eq:Mzunique} m = n + \sum_{j=1}^w d_j \rho_j, \end{equation} where $n\in \N$ and for each $1 \le j \le w$, $0 \le d_j < s_j.$ \end{theorem} A weaker form of this theorem was stated earlier as Theorem \ref{thm:valmonoid}. \section{Algorithms} \label{algorithms} In this section, we develop algorithms to make computations involving the value monoid ${\Lambda_{}}$. It was shown in \cite{ms3} that ${\Lambda_{}}$ is well-ordered, and so $\lexp \circ \varphi_z$ is suitable relative to $\kx$ as described in Definition \ref{def:suitableval}, and we can use $\lexp \circ \varphi_z$ in the algorithms described in Section \ref{introduction}. Throughout this section we refer to the composite maps $\lexp \circ \varphi_z$ and $\lc \circ \varphi_z$ as $\lexpz$ and $\lcz$, respectively. To begin, given a rational number $m \in \Q$, we would like to decide whether $m\in {\Lambda_{}}$, and in case it is, express it in terms of the generators $1, \rho_1, \rho_2, \dots$. To accomplish this, we first prove a lemma. \begin{definition} For each $i \in \N$, define $$\Omega_i = \{ n+ \sum_{j=1}^i d_j \rho_j \mid n\in \N,\ 0 \le d_j < s_j\}.$$ \end{definition} Given $S \subset \Q$, we write $\Z \cdot S$ for the set of $\Z$-linear combinations of elements of $S$. \begin{lem}\label{lem:NorZ} $${\Lambda_{}} \cap \Z \cdot \{1, \rho_1, \rho_2, \rho_3, \dots, \rho_i\} = \Omega_i.$$ \end{lem} \begin{proof} The containment `$\supset$' being obvious, we only consider the containment `$\subset$'. Let $m \in {\Lambda_{}} \cap \Z \cdot \{1, \rho_1, \rho_2, \rho_3, \dots, \rho_i \}$. By Theorem \ref{thm:monoidreplambda}, there is a unique pair $n, d \in \N$ such that $m = n + \lambda_d$. Thus $\lambda_d \in \Z \cdot \{1, \rho_1, \rho_2, \dots, \rho_i \}$, and so by Lemma \ref{cor:rhoresidue}, $\lambda_d \in (1/r_{l(i)}) \Z$. By Theorem \ref{Mzunique}, there exists a smallest $k\in \N$ such that $\lambda_d \in \Omega_k$; note that in the corresponding representation of $\lambda_d$ the top digit $d_k$ is nonzero by the minimality of $k$. Suppose, for contradiction, that $k > i$. Then by Lemma \ref{lem:lambdaresidue}, $\lambda_d \in (1/ r_{l(k)}) \Z - (1/ r_{l(k-1)}) \Z \subset (1/ r_{l(k)}) \Z - (1/ r_{l(i)})\Z$, which contradicts the fact that $\lambda_d \in (1/r_{l(i)}) \Z$. Therefore, $k \le i$, and so $m = n + \lambda_d \in \Omega_k \subseteq \Omega_i$. \end{proof} We have the following corollary. \begin{cor}\label{omegaclosed} The set $\Omega_i$ is closed under addition. \end{cor} Given a positive rational number $m$, write $m$ as ${a}/{b}$ where $a,b$ are relatively prime positive integers. If $m \in \N$, then it is automatically in ${\Lambda_{}}$, and so we can assume that $b > 1$. Our goal is to decide using modular arithmetic whether it is possible that $m \in {\Lambda_{}}$. First, find the smallest $i$ such that $b \mid r_{l(i)}$; if no such $i$ exists, then $m \not\in {\Lambda_{}}$ by Theorem \ref{Mzunique}.
The set of all $\Z$-linear combinations of $1, \rho_1, \dots, \rho_{i-1}$ is precisely the set $\frac{1}{r_{l(i-1)}}\Z$. Since $b$ does not divide $r_{l(i-1)}$, $m$ cannot possibly be an $\N$-linear combination of $1, \rho_1, \dots, \rho_{i-1}$. Now suppose $m$ is a $\Z$-linear combination of $1, \rho_1, \dots, \rho_j$ where $j> i$. However, since $b \mid r_{l(i)}$, it follows that $m \in (1/r_{l(i)}) \Z = \Z \cdot \{1, \rho_1, \dots, \rho_i\}$. If $m \in \Lambda$, then by Lemma \ref{lem:NorZ}, there exist $n, d_1, \dots, d_i \in \N$ such that \begin{equation*} m = n + \sum_{j=1}^i d_j \rho_j \end{equation*} where $0 \le d_j < s_j$ for $1 \le j \le i$ and $d_i\not=0$ (that $d_i \not= 0$ follows from Lemma \ref{lem:lambdaresidue}, since $b \nmid r_{l(i-1)}$). From this discussion, we have the following algorithm. \begin{alg}\label{alg:decomposem} Let $m$ be a positive rational number. The following algorithm determines whether $m \in {\Lambda_{}}$, and if $m \in {\Lambda_{}}$, it produces a decomposition of $m$ as a linear combination of $1, \rho_1, \dots, \rho_i$. Here we write $\rho_j = c_j/r_{l(j)}$, so that $\gcd(c_j, s_j) = 1$ by Lemma \ref{lem:rhoalpha}. \begin{enumerate} \item[(1)] Write $m$ as $a/b$ where $a,b$ are relatively prime, positive integers. \item[(2)] Define $i$ to be the smallest index such that $b \mid r_{l(i)}$. \item[(3)] Define $m^{(i)} = m$. \item[(4)] Solve the congruence $c_id_i \equiv r_{l(i)}m^{(i)} \pmod{s_i}$ for $d_i$ where $0 \le d_i < s_i$; the solution exists and is unique since $\gcd(c_i,s_i)=1$. \item[(5)] For $j = i-1, i-2, \dots, 1$, define $m^{(j)} = m^{(j+1)} - d_{j+1}\rho_{j+1}$ and solve the congruence $c_jd_j \equiv r_{l(j)}m^{(j)} \pmod{s_j}$ for $d_j$ where $0 \le d_j < s_j$ (note that $r_{l(j)}m^{(j)} \in \Z$ by construction). \item[(6)] Define $n = m^{(1)} - d_1 \rho_1$. Then $m = n + \sum_{j=1}^i d_j \rho_j.$ If $n \not\in \N$, then $m\not\in {\Lambda_{}}$; if $n\in \N$, then we have a decomposition of the desired form. \end{enumerate} \end{alg} Once we have a test for whether a rational number is in the value monoid, we need to be able to determine one of its preimages under the valuation. The following algorithm accomplishes this task. \begin{alg}\label{alg:monoidtopoly} Let $m \in {\Lambda_{}}$. This algorithm constructs $p(x,y) \in k[x,y]$ such that $\lexpz(p(x,y)) = m$. \begin{enumerate} \item[(1)] Using Algorithm \ref{alg:decomposem}, write $m = n + \sum_{j=1}^i d_j \rho_j.$ \item[(2)] For each $1 \le j \le i$, use Proposition \ref{prop:finitePuiseux} to compute $p_j(x,y)$, the minimal polynomial of $\sum_{u=1}^{l(j)-1} c_u x^{e_u}$ over $k(x)$, where the $c_u$ are the coefficients of $z$. \item[(3)] Define $p(x,y) = x^n \prod_{j=1}^i p_j(x,y)^{d_j}$. By Lemma \ref{lem:rhopos}, $\lexpz(p(x,y)) = m$. \end{enumerate} \end{alg} The following algorithm describes how to perform division in $k[x,y]$ relative to $\lexpz$. \begin{alg}\label{alg:division} Let $f,g \in \kx^*$. This algorithm constructs $h \in k[x,y]$ such that $\lexpz(f-gh) < \lexpz(f)$, provided that such an $h$ exists. \begin{enumerate} \item[(1)] Compute $m = \lexpz(f) - \lexpz(g)$. \item[(2)] Use Algorithm \ref{alg:decomposem} to determine whether $m \in \Lambda$. If $m\not\in \Lambda$, then $h$ does not exist. \item[(3)] Using Algorithm \ref{alg:monoidtopoly}, find $p(x,y) \in k[x,y]$ such that $\lexpz(p) = m$. \item[(4)] Define $h(x,y) = (\lcz(f)/\lcz(gp)) p(x,y)$. Then $\lcz(f) = \lcz(gh)$, and since $\lexpz(f) = \lexpz(gh)$, it follows that $\lexpz(f-gh) < \lexpz(f)$. \end{enumerate} \end{alg} To compute syzygy families, we first need the following lemma.
\begin{lem}\label{lem:bigenoughidealmem} Let $M$ be a monoid such that $\Z \subset M \subset \Q$, and let $q$ be an element of the quotient group of $M$ (i.e., the set of differences of elements of $M$). Then for $n \gg 0$, $q + n \in M$. \end{lem} We now prove that the intersection of principal ideals in ${\Lambda_{}}$, both generated by elements of $\Omega_i$, must be finitely generated by elements of $\Omega_i$. \begin{lem}\label{lem:pip} Given $f, g \in \kx^*$ such that $\lexpz(f), \lexpz(g) \in \Omega_i$, there exists a finite subset of $\Omega_i$ that generates $\langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle$. \end{lem} \begin{proof} By Lemma \ref{lem:bigenoughidealmem}, for each element $\sigma$ of the finite set $\{\sum_{j=1}^i d_j \rho_j \mid 0 \le d_j < s_j\}$, there exists a minimal $\eta_\sigma \in \Z$ such that $\sigma - \lexpz(f) + \eta_\sigma$, $\sigma - \lexpz(g) + \eta_\sigma \in {\Lambda_{}}$; that is, $\sigma + \eta_\sigma \in \langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle$. Define $\Upsilon_i$ to be the finite collection of all such elements $\sigma + \eta_\sigma$. We will show that $\Upsilon_i$ generates $\langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle$. Let $m \in \langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle$. By Theorem \ref{Mzunique}, ${\Lambda_{}} = \bigcup_{j=0}^\infty \Omega_j$, and so for some index $I$, there exist $\alpha_f, \alpha_g \in \Omega_I$ such that $m = \lexpz(f) + \alpha_f = \lexpz(g) + \alpha_g$. Write $\alpha_f$ as $\alpha_f' + \sum_{j=i+1}^I d_j \rho_j$ and $\alpha_g$ as $\alpha_g' + \sum_{j=i+1}^I d_j' \rho_j$ where $\alpha_f', \alpha_g' \in \Omega_i$ and $0 \le d_j, d_j' < s_j$. By Corollary \ref{omegaclosed}, $\lexpz(f) + \alpha_f', \lexpz(g) + \alpha_g' \in \Omega_i$. By the uniqueness of representation promised by Theorem \ref{Mzunique}, since $m = (\lexpz(f) + \alpha_f') + \sum_{j=i+1}^I d_j \rho_j = (\lexpz(g) + \alpha_g') + \sum_{j=i+1}^I d_j' \rho_j$, we have $d_j = d_j'$ for $i+1 \le j \le I$. Thus $\lexpz(f) + \alpha_f' = \lexpz(g) + \alpha_g'$. So by Theorem \ref{Mzunique}, $m':= \lexpz(f) + \alpha_f' = \lexpz(g) + \alpha_g' =n + \sum_{j=1}^i \delta_j \rho_j$, where $n\in \N$ and $0 \le \delta_j < s_j$. Define $\sigma = \sum_{j=1}^i \delta_j \rho_j$, and recall that $\eta_\sigma$ is the smallest integer such that $\sigma + \eta_\sigma \in \langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle$. Since $m' = \sigma + n \in \langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle$, it follows that $n \ge \eta_\sigma$. Thus $m' = (n-\eta_\sigma) + (\sigma + \eta_\sigma) \in \N + \Upsilon_i$, and so $m = m' + \sum_{j=i+1}^I d_j \rho_j = (n-\eta_\sigma) + (\sigma + \eta_\sigma) + \sum_{j=i+1}^I d_j \rho_j \in \N + \Upsilon_i + {\Lambda_{}} = \Upsilon_i + {\Lambda_{}}$. \end{proof} The following algorithm uses the lemma above to produce a syzygy family for a pair of polynomials. \begin{alg}\label{alg:syzygy} Let $f, g \in k[x,y]$. This algorithm will produce $m_1, \dots, m_\ell \in {\Lambda_{}}$ such that $\langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle = \langle m_1, \dots, m_\ell \rangle$. In addition $a_j,b_j \in \kx$ will be produced such that $\lexpz(a_jf-b_jg) < m_j$ for each $1 \le j \le \ell$. \begin{enumerate} \item[(1)] Using Algorithm \ref{alg:decomposem}, write $\lexpz(f) = n + \sum_{j=1}^i d_j \rho_j$ and $\lexpz(g) = n' + \sum_{j=1}^i d_j' \rho_j$ where $n, n'\in \N$ and $0 \le d_j, d_j' < s_j$. \item[(2)] Let $\sigma_1, \dots, \sigma_\ell$ be the elements of $\{\sum_{j=1}^i d_j \rho_j \mid 0 \le d_j < s_j\}$.
For each $1 \le t \le \ell$, find a minimal $\eta_t$ such that $\sigma_t - \lexpz(f) + \eta_t, \sigma_t - \lexpz(g) + \eta_t \in {\Lambda_{}}$. To accomplish this, begin with $\eta_t = 0$ and keep incrementing $\eta_t$ until $\sigma_t - \lexpz(f) + \eta_t, \sigma_t - \lexpz(g) + \eta_t \in {\Lambda_{}}$, as verified by Algorithm \ref{alg:decomposem}. \item[(3)] For each $t$, define $m_t = \sigma_t + \eta_t$. By Lemma \ref{lem:pip}, $\{ m_1, \dots, m_\ell\}$ generates $\langle \lexpz(f) \rangle \cap \langle \lexpz(g) \rangle$. \end{enumerate} \end{alg} Below is an example of a generalized Gr\"obner basis with respect to a valuation that is not a Gr\"obner basis with respect to any monomial order. \begin{example} Let $k$ be a field that is not of characteristic two. Define $f_1 = y^2-x$ and $f_2 = xy$. Then one can check that the set $B= \{ f_1,\, f_2\}$ is a Gr\"obner basis for the ideal $I = \langle f_1, f_2 \rangle$ with respect to the valuation induced by $z = t^{1/2} + t^{1/4} + t^{1/8} + t^{1/16} + \cdots$ using Algorithm \ref{alg:gengbconstruct}. We now demonstrate that $B$ is not a Gr\"obner basis with respect to any monomial order. Suppose, for contradiction, that $B$ is a Gr\"obner basis with respect to some monomial order `$<$'. Note that $x^2, y^3 \in I$ since $x^2 = yf_2 - xf_1$ and $y^3 = y f_1 + f_2$. We consider two cases, depending on whether $x>y^2$ or $x<y^2$. If $x<y^2$, then lt$(f_1)=y^2$ and lt$(f_2)=xy$. However, $x^2 \in I$, and so if $B$ were a Gr\"obner basis with respect to `$<$', then either $y^2 \mid x^2$ or $xy \mid x^2$, a contradiction. Now suppose $x>y^2$, in which case lt$(f_1)=x$ and lt$(f_2)=xy$. However, $y^3 \in I$, and so if $B$ were a Gr\"obner basis, then either $x \mid y^3$ or $xy \mid y^3$, a contradiction. \end{example} Lastly, we note by example that some ideals do not have finite Gr\"obner bases with respect to a given valuation. We first prove a short lemma. \begin{lem}\label{lem:rhoincrease} The sequence $\rho_0, \rho_1, \rho_2, \dots$ is increasing. \end{lem} \begin{proof} Since $s_j > 1$ for each index $j$, by Lemma \ref{lem:difflams}, $ \rho_i = \sum_{j=1}^{i-1}(s_j-1)\rho_j + e_{l(i)} \ge \sum_{j=1}^{i-1} \rho_j + e_{l(i)} > \rho_{i-1}. $\end{proof} \begin{example} Consider the ideal $\langle x,y \rangle$ of $k[x,y]$, and let $G$ be a Gr\"obner basis with respect to the valuation induced by the series $z\in \kq$. For each $\rho_i$, let $p_i(x,y) \in k[x,y]$ be such that $\lexpz(p_i) = \rho_i$. Since $G$ is a Gr\"obner basis, there exists $g_i \in G$ such that $\lexpz(g_i) \mid \lexpz(p_i)$. That is, for some $h_i \in k[x,y]$, $\lexpz(g_i h_i) = \rho_i$. Since $G \cap k = \emptyset$, $\lexpz(g_i) > 0$, and so $\lexpz(h_i) < \rho_i$. Suppose, for contradiction, $\lexpz(g_i) \not= \rho_i$. Then $\lexpz(g_i) < \rho_i$, and so by Theorem \ref{Mzunique} and Lemma \ref{lem:rhoincrease}, $\lexpz(g_i) = n + \sum_{j=1}^{i-1} {d_j} \rho_j$ and $\lexpz(h_i) = n' + \sum_{j=1}^{i-1} {d_j'} \rho_j$. Thus, $\rho_i = \lexpz(g_ih_i) \in (1/r_{l(i-1)}) \Z$, which contradicts Lemma \ref{cor:rhoresidue}. Therefore, $\lexpz(g_i) = \rho_i$, and thus $G$ is infinite. \end{example} \end{document}
arXiv
The milk business is booming! Farmer John's milk processing factory consists of $N$ processing stations, conveniently numbered $1 \ldots N$ ($1 \leq N \leq 100$), and $N-1$ walkways, each connecting some pair of stations. (Walkways are expensive, so Farmer John has elected to use the minimum number of walkways so that one can eventually reach any station starting from any other station). To try and improve efficiency, Farmer John installs a conveyor belt in each of its walkways. Unfortunately, he realizes too late that each conveyor belt only moves one way, so now travel along each walkway is only possible in a single direction! Now, it is no longer the case that one can travel from any station to any other station. However, Farmer John thinks that all may not be lost, so long as there is at least one station $i$ such that one can eventually travel to station $i$ from every other station. Note that traveling to station $i$ from another arbitrary station $j$ may involve traveling through intermediate stations between $i$ and $j$. Please help Farmer John figure out if such a station $i$ exists. The first line contains an integer $N$, the number of processing stations. Each of the next $N-1$ lines contains two space-separated integers $a_i$ and $b_i$ with $1 \leq a_i, b_i \leq N$ and $a_i \neq b_i$. This indicates that there is a conveyor belt that moves from station $a_i$ to station $b_i$, allowing travel only in the direction from $a_i$ to $b_i$. If there exists a station $i$ such that one can walk to station $i$ from any other station, then output the minimal such $i$. Otherwise, output $-1$.
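A minimal solution sketch (ours, not part of the official problem statement): since the walkways form a tree, a station $i$ reachable from every other station must have all of its incident belts pointing toward it, so its out-degree is zero. Conversely, if exactly one station has out-degree zero, then the remaining $N-1$ stations share the $N-1$ belts with out-degree exactly one each, and following the unique outgoing belt from any station cannot cycle in a tree, so it ends at that station; such a station is also necessarily unique, hence minimal.

import sys

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    outdeg = [0] * (n + 1)
    for i in range(n - 1):
        a = int(data[1 + 2 * i])  # belt a -> b; only a's out-degree matters
        outdeg[a] += 1
    sinks = [v for v in range(1, n + 1) if outdeg[v] == 0]
    # a valid station exists iff the out-degree-0 station is unique
    print(sinks[0] if len(sinks) == 1 else -1)

main()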
CommonCrawl
Hadamard's method of descent

"Method of descent" redirects here. Not to be confused with Proof by infinite descent.

In mathematics, the method of descent is the term coined by the French mathematician Jacques Hadamard as a method for solving a partial differential equation in several real or complex variables, by regarding it as the specialisation of an equation in more variables, constant in the extra parameters. This method has been used to solve the wave equation, the heat equation and other versions of the Cauchy initial value problem. As Hadamard (1923) wrote: We thus have a first example of what I shall call a 'method of descent'. Creating a phrase for an idea which is merely childish and has been used since the first steps of the theory is, I must confess, rather ambitious; but we shall come across it rather frequently, so that it will be convenient to have a word to denote it. It consists in noticing that he who can do more can do less: if we can integrate equations with m variables, we can do the same for equations with (m – 1) variables.

References
• Hadamard, Jacques (1923), Lectures on Cauchy's Problem in Linear Partial Differential Equations, Dover Publications, p. 49, ISBN 0486495493
• Bers, Lipman; John, Fritz; Schechter, Martin (1964), Partial differential equations, American Mathematical Society, p. 16, ISBN 0821800493
• Courant, Richard; Hilbert, David (1953), Methods of mathematical physics, Vol. II, Interscience, p. 205
• Folland, Gerald B. (1995), Introduction to partial differential equations, Princeton University Press, p. 171, ISBN 0691043612
• Maz'ya, V. G.; Shaposhnikova, T. O. (1998), Jacques Hadamard: a universal mathematician, American Mathematical Society, p. 472, ISBN 0821819232
Wikipedia
\begin{definition}[Definition:Transitive Relation/Definition 2] Let $\RR \subseteq S \times S$ be a relation in $S$. $\RR$ is '''transitive''' {{iff}}: :$\RR \circ \RR \subseteq \RR$ where $\circ$ denotes composite relation. \end{definition}
ProofWiki
Engineering Mechanics: Statics
7.1 Center of Mass: Single Objects
To start, let's calculate the center of mass! This is a weighted function, similar to when we found the location of the resultant force from multiple distributed loads and forces. [latex]\bar{x}=\frac{m_1*x_1}{m_1+m_2}+\frac{m_2*x_2}{m_1+m_2}[/latex] When the density is the same throughout a shape, the center of mass is also the centroid (geometric center). Consider two particles, having one and the same mass m, each of which is at a different position on the x axis of a Cartesian coordinate system. Common sense tells you that the average position of the material making up the two particles is midway between the two particles. Common sense is right. We give the name "center of mass" to the average position of the material making up a distribution, and the center of mass of a pair of same-mass particles is indeed midway between the two particles. How about if one of the particles is more massive than the other? One would expect the center of mass to be closer to the more massive particle, and again, one would be right. To determine the position of the center of mass of the distribution of matter in such a case, we compute a weighted sum of the positions of the particles in the distribution, where the weighting factor for a given particle is that fraction, of the total mass, that the particle's own mass is. Thus, for two particles on the x axis, one of mass m1, at x1, and the other of mass m2, at x2, the position x of the center of mass is given by equation 8-1: $$\bar{x}=\frac{m_1}{m_1+m_2}x_1+\frac{m_2}{m_1+m_2}x_2 \qquad (8\text{-}1)$$ Note that each weighting factor is a proper fraction and that the sum of the weighting factors is always 1. Also note that if, for instance, m1 is greater than m2, then the position x1 of particle 1 will count more in the sum, thus ensuring that the center of mass is found to be closer to the more massive particle (as we know it must be). Further note that if m1 = m2, each weighting factor is 1/2, as is evident when we substitute m for both m1 and m2 in equation 8-1: $$\bar{x}=\frac{m}{m+m}x_1+\frac{m}{m+m}x_2\\\bar{x}=\frac{1}{2}x_1+\frac{1}{2}x_2\\\bar{x}=\frac{x_1+x_2}{2}$$ The center of mass is found to be midway between the two particles, right where common sense tells us it has to be. Source: Calculus-Based Physics 1, Jeffery W. Schnick. p142, https://openlibrary.ecampusontario.ca/catalogue/item/?id=ce74a181-ccde-491c-848d-05489ed182e7 Below is a more visual representation of where the COM would be for two particles of different weights.
Source (image): Two_body_jacobi.svg: CWitte, from JPG by Brews ohare; derivative work: WillowW, via Wikimedia Commons https://zh.wikipedia.org/wiki/File:Jacobi_coordinates.svg A second explanation: The most common real-life example of a system like this is a playground seesaw, or teeter-totter, with children of different weights sitting at different distances from the center. On a seesaw, if one child sits at each end, the heavier child sinks down and the lighter child is lifted into the air. If the heavier child slides in toward the center, though, the seesaw balances. Applying this concept to the masses on the rod, we note that the masses balance each other if and only if m1d1 = m2d2. This idea is not limited just to two point masses. In general, if 𝑛 masses, 𝑚1, 𝑚2,…,𝑚𝑛, are placed on a number line at points 𝑥1,𝑥2,…,𝑥𝑛, respectively, then the center of mass of the system is given by: $$ \bar x=\frac{\sum_{i=1}^n m_i x_i}{\sum_{i=1}^nm_i}$$ Suppose four point masses are placed on a number line as follows: 𝑚1=30𝑘𝑔, placed at 𝑥1=−2𝑚 𝑚2=5𝑘𝑔, placed at 𝑥2=3𝑚 𝑚3=10𝑘𝑔, placed at 𝑥3=6𝑚 𝑚4=15𝑘𝑔, placed at 𝑥4=−3𝑚. Find the moment of the system with respect to the origin and find the center of mass of the system. First, we need to calculate the moment of the system (the top part of the fraction): [latex]M =\sum_{i=1}^4 m_i *x_i \\\qquad \quad = (30kg)*(-2m) + (5kg)*(3m)+(10kg)*(6m)+(15kg)*(-3m) \\\qquad\quad = (-60+15+60-45)kg*m \\\qquad\quad = -30 kg*m[/latex] Now, to find the center of mass, we need the total mass of the system: $$ m = \sum_{i=1}^4 m_i = (30+5+10+15) kg = 60kg $$ Then we have [latex]\bar{x} = \frac{M}{m} = \frac{-30 kg*m}{60kg} = -0.5 m[/latex] The center of mass is located 1/2 m to the left of the origin. Source: "Moments and Centers of Mass" by LibreTexts, https://eng.libretexts.org/@go/page/67237 When we are looking at multiple objects in 2D or 3D, we perform the center of mass equation multiple times in the x, y, and z directions. $$ \bar x=\frac{\sum_{i=1}^n m_i x_i}{\sum_{i=1}^nm_i} \qquad \bar y=\frac{\sum_{i=1}^n m_i y_i}{\sum_{i=1}^nm_i} \qquad \bar z=\frac{\sum_{i=1}^n m_i z_i}{\sum_{i=1}^nm_i}$$ In some sense, one can think about the center of mass of a single object as its "average position." Let's consider the simplest case of an "object" consisting of two tiny particles separated along the x-axis, as seen below: If the two particles have equal mass, then it's pretty clear that the "average position" of the two-particle system is halfway between them. If the masses of the two particles are different, would the "average position" still be halfway between them? Perhaps in some sense this is true, but we are not looking for a geometric center; we are looking for the average placement of mass. If m1 has twice the mass of m2, then when it comes to the average placement of mass, m1 gets "two votes." With more of the mass concentrated at the position x1 than at x2, the center of mass should be closer to x1 than x2. We achieve the perfect balance by "weighting" the positions by the fraction of the total mass that is located there. Accordingly, we define as the center of mass: $$\bar x_{cm}=(\frac{m_1}{m_1+m_2})x_1+(\frac{m_2}{m_1+m_2})x_2=\frac{m_1x_1+m_2x_2}{M_{system}}$$ If there are more than two particles, we simply add all of them into the sum in the numerator. To extend this definition of center of mass into three dimensions, we simply need to do the same things in the y and z directions.
A position vector for the center of mass of a system of many particles would then be: $$\vec{r}_{cm}=\bar x_{cm}\underline{\hat{i}}+\bar y_{cm}\underline{\hat{j}}+ \bar z_{cm}\underline{\hat{k}}\\=\frac{[m_1 x_1+m_2 x_2+…]}{M}\underline{\hat{i}}+\frac{[m_1y_1+m_2y_2+…]}{M}\underline{\hat{j}}+\frac{[m_1 z_1+m_2 z_2+…]}{M}\underline{\hat{k}}\\=\frac{m_1\vec r_1+m_2\vec r_2+…}{M}$$ Source: "Center of Mass" by Tom Weideman, https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_9A__Classical_Mechanics/4%3A_Linear_Momentum/4.2%3A_Center_of_Mass Suppose three point masses are placed in the x-y plane as follows (assume coordinates are given in meters): m1 = 2 kg placed at (-1, 3)m, m2 = 6 kg placed at (1, 1)m, and m3 = 4 kg placed at (2, -2)m. Find the center of mass of the system. First we calculate the total mass of the system: $$ m = \sum_{i=1}^3 m_i = (2 + 6 + 4) kg = 12 kg $$ Next we find the moments with respect to the x- and y- axes: [latex]M_x =\sum_{i=1}^3 m_i *x_i \\\qquad \quad = (2kg)*(-1m) + (6kg)*(1m)+(4kg)*(2m) \\\qquad\quad = (-2+6+8)kg*m \\\qquad\quad = 12 kg*m[/latex] [latex]M_y =\sum_{i=1}^3 m_i *y_i \\\qquad \quad = (2kg)*(3m) + (6kg)*(1m)+(4kg)*(-2m) \\\qquad\quad = (6+6-8)kg*m \\\qquad\quad = 4 kg*m[/latex] Then we have [latex]\bar{x} = \frac{M_x}{m} = \frac{12 kg*m}{12 kg} = 1 m[/latex] [latex]\bar{y} = \frac{M_y}{m} = \frac{4 kg*m}{12 kg} = 0.333 m[/latex] The center of mass of the system is: (1, 0.333)m. Quite often, when the finding of the position of the center of mass of a distribution of particles is called for, the distribution of particles is the set of particles making up a rigid body. The easiest rigid body for which to calculate the center of mass is the thin rod because it extends in only one dimension. (Here, we discuss an ideal thin rod. A physical thin rod must have some nonzero diameter. The ideal thin rod, however, is a good approximation to the physical thin rod as long as the diameter of the rod is small compared to its length.) In the simplest case, the calculation of the position of the center of mass is trivial. The simplest case involves a uniform thin rod. A uniform thin rod is one for which the linear mass density µ, the mass-per-length of the rod, has one and the same value at all points on the rod. The center of mass of a uniform rod is at the center of the rod. So, for instance, the center of mass of a uniform rod that extends along the x axis from x = 0 to x = L is at (L/2, 0). The linear mass density µ, typically called linear density when the context is clear, is a measure of how closely packed the elementary particles making up the rod are. Where the linear density is high, the particles are close together. To picture what is meant by a non-uniform rod, a rod whose linear density is a function of position, imagine a thin rod made of an alloy consisting of lead and aluminum. Further imagine that the percentage of lead in the rod varies smoothly from 0% at one end of the rod to 100% at the other. The linear density of such a rod would be a function of the position along the length of the rod. A one-millimeter segment of the rod at one position would have a different mass than that of a one-millimeter segment of the rod at a different position.
People with some exposure to calculus have an easier time understanding what linear density is than calculus-deprived individuals do because linear density is just the ratio of the amount of mass in a rod segment to the length of the segment, in the limit as the length of the segment goes to zero. Consider a rod that extends from 0 to L along the x axis. Now suppose that ms(x) is the mass of that segment of the rod extending from 0 to x where x ≥ 0 but x < L. Then, the linear density of the rod at any point x along the rod is just dms/dx evaluated at the value of x in question. Now that you have a good idea of what we mean by linear mass density, we are going to illustrate how one determines the position of the center of mass of a non-uniform thin rod by means of an example. Find the position of the center of mass of a thin rod that extends from 0 to 0.890 m along the x axis of a Cartesian coordinate system and has a linear density given by µ = bx^2, where b = 0.650 kg/m3. In order to be able to determine the position of the center of mass of a rod with a given length and a given linear density as a function of position, you first need to be able to find the mass of such a rod. To do that, one might be tempted to use a method that works only for the special case of a uniform rod, namely, to try using m = µL with L being the length of the rod. The problem with this is that µ varies along the entire length of the rod. What value would one use for µ? One might be tempted to evaluate the given µ at x = L and use that, but that would be acting as if the linear density were constant at µ = µ(L). It is not. In fact, in the case at hand, µ(L) is the maximum linear density of the rod; it only has that value at one point on the rod. Instead, using integration, we find the equation: [latex]m=\frac{bL^3}{3}[/latex] This can now be used to calculate the mass of a non-uniform rod. The value of L is given as 0.890 m and we defined b to be the constant 0.650 kg/m3, therefore $$m=\frac{0.650\frac{kg}{m^3}(0.890m)^3}{3}\\m=0.1527kg$$ That's a value that will come in handy when we calculate the position of the center of mass. Now, when we calculated the center of mass of a set of discrete particles (where a discrete particle is one that is by itself, as opposed, for instance, to being part of a rigid body) we just carried out a weighted sum in which each term was the position of a particle times its weighting factor and the weighting factor was that fraction, of the total mass, represented by the mass of the particle. We carry out a similar procedure for a continuous distribution of mass such as that which makes up the rod in question. Once again, using integration, we find the equation: [latex]\bar{x}=\frac{bL^4}{4m}[/latex] Now we substitute in values: the mass m of the rod that we found earlier, the constant b that we defined to simplify the appearance of the linear density function, and the given length L of the rod: $$\bar{x}= \frac{\left( 0.650\frac{kg}{m^3} \right) (0.890m)^4}{4(0.1527kg)}\\\bar{x}=0.668m$$ This is our final answer for the position of the center of mass. Note that it is closer to the denser end of the rod, as we would expect. Basically: When there are multiple objects, the center of mass is the location in the x, y, and z directions between the objects. Application: To calculate the acceleration or use F = ma, m is the total mass at the center of mass. Looking Ahead: The next section will look at how to calculate the center of mass for a complex object.
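For readers who want to verify the rod numbers, here is a short numerical sketch (ours, not from the original text), assuming the quadratic density µ = bx^2 with b = 0.650 kg/m3 used above: it slices the rod into many small segments, sums their masses, and weights their positions by mass.

# Numerical check of the non-uniform rod example
b = 0.650      # kg/m^3, coefficient in mu(x) = b*x**2
L = 0.890      # m, rod length
n = 100_000    # number of slices
dx = L / n
xs = [(i + 0.5) * dx for i in range(n)]            # midpoints of the slices
m = sum(b * x**2 * dx for x in xs)                 # total mass, ~ b*L**3/3
x_bar = sum(x * b * x**2 * dx for x in xs) / m     # ~ b*L**4/(4*m) = 3*L/4
print(round(m, 4), round(x_bar, 4))                # ~0.1527 kg, ~0.6675 m

The printed values agree with m = 0.1527 kg and x̄ ≈ 0.668 m found above (the exact ratio is 3L/4 = 0.6675 m; the worked solution rounds the intermediate mass first, giving 0.668 m).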
Engineering Mechanics: Statics by Libby (Elizabeth) Osgood; Gayla Cameron; and Emma Christensen is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
CommonCrawl
\begin{document} \title[Loop-erased partitioning ]{Loop-erased partitioning of a graph:\\ mean-field analysis} \author[L.~Avena]{Luca Avena$^\ddag$} \address{{$^\ddag$ Leiden University, Mathematical Institute, Niels Bohrweg 1 2333 CA, Leiden. The Netherlands.}} \email{[email protected]} \author[A.~Gaudilli\`ere]{Alexandre Gaudilli\`ere$^\star$} \address{ $\star$ Aix-Marseille Universit\'e, CNRS, Centrale Marseille. I2M UMR CNRS 7373. 39, rue Joliot Curie. 13 453 Marseille Cedex 13. France.} \email{[email protected]} \author[P.~Milanesi]{Paolo Milanesi$^\S$} \address{ $^\S$ Aix-Marseille Universit\'e, CNRS, Centrale Marseille. I2M UMR CNRS 7373. 39, rue Joliot Curie. 13 453 Marseille Cedex 13. France.} \email{[email protected]} \author [M.~Quattropani] {Matteo Quattropani$^*$} \address{$^*$ Dipartimento di Matematica e Fisica, Universit\`a di Roma Tre, Largo S. Leonardo Murialdo 1, 00146 Roma, Italy.} \email{[email protected]} \subjclass[2010]{ 05C81, 05C85, 60J10, 60J27, 60J28} \keywords{Discrete Laplacian, random partitions, loop-erased random walk, Wilson's algorithm, spanning rooted forests} \begin{abstract} We consider a random partition of the vertex set of an arbitrary graph that can be sampled using loop-erased random walks stopped at a random independent exponential time of parameter $q>0$, which we see as a tuning parameter. The related random blocks tend to cluster nodes visited by the random walk on time scale $1/q$. We explore the emerging macroscopic structure by analyzing 2-point correlations. To this aim, we define an interaction potential between pairs of vertices as the probability that they do not belong to the same block of the random partition. This interaction potential can be seen as an affinity measure for ``densely connected nodes'' and captures well-separated regions in network models presenting non-homogeneous landscapes. In this spirit, we compute this potential and its scaling limits on a complete graph and on a non-homogeneous weighted version with community structures. For the latter geometry we show a phase transition for ``community detectability'' as a function of the tuning parameter and the edge weights. \end{abstract} \maketitle \section{\large{Intro: Loop-erasure and random partitioning}}\label{intro} Consider an arbitrary simple undirected weighted connected graph $G=(V, E, w)$ on $N=|V|$ vertices where $E=\{e=(x,y): x,y \in V \}$ stands for the edge set and $w: E \rightarrow [0,\infty)$ is a given edge-weight function. We call the Random Walk (RW) associated to $G$ the continuous-time Markov chain $X=(X_t)_{t\ge 0}$ with state space $V$ and \emph{the discrete Laplacian} as infinitesimal generator, i.e., the $N\times N$ matrix: \begin{equation}\label{Laplacian} \mathcal{L}= \mathcal{A}- \mathcal{D}, \end{equation} where for any $x,y\in [N]:=\{1,2,\dots ,N\}$, $\mathcal A(x,y)=w(x,y)\mathbf{1}_{\{ x\neq y\}}$ is the \emph{weighted adjacency matrix} and $\mathcal D(x,y)=\mathbf{1}_{\{ x=y\}}\sum_{z\in[N]\setminus\{x\}} w(x,z) $ is the \emph{diagonal matrix} guaranteeing that the entries of each row in $\mathcal L$ sum up to $0$. The goal of this paper is to explore the following probability measure on the set of partitions $\mathcal P(V)$ of the vertex set $V$. \begin{definition}[{\bf Loop-erased partitioning}]\label{LEP} Given $G=(V, E, w)$, fix a positive parameter $q>0$.
We call \emph{loop-erased} a partition of $V$ into $m\leq N$ blocks sampled according to the following probability measure: \begin{equation}\label{LEPmeas} \mu_q(\Pi_m)= \frac{q^m \times \sum_{F: \Pi(F)=\Pi_m } w(F)}{Z(q)}, \quad\quad \Pi_m\in \mathcal P(V), \end{equation} where the sum is over spanning rooted forests $F$ of $G$, $\Pi(F)$ stands for the partition of $V$ induced by a forest $F$, $w(F):=\prod_{e\in F} w(e)$ for the forest weight, and $Z(q)$ is a normalizing constant. We denote by $\Pi_q$ a random variable in $\mathcal P(V)$ with law $\mu_q$. \end{definition} In the above definition a spanning rooted forest of a graph is a collection of rooted trees spanning its vertex set. Denoting by $\mathcal F$ the set of spanning rooted forests of $G$, we note that---due to the matrix tree theorem---the normalizing constant in~\cref{LEPmeas} can be expressed as the characteristic polynomial of the matrix $\mathcal L$ evaluated at $q$, i.e. $$Z(q):=\sum_{F\in \mathcal F }q^{|F|} w(F)=\det[qI- \mathcal L],$$ where $|F|$ denotes the number of trees in $F\in \mathcal F$. Furthermore, the number of blocks in $\Pi_q$, denoted by $|\Pi_q|$, is distributed as the sum of $N$ independent Bernoulli random variables with success probabilities $\frac{q}{q+\lambda_i}$, for $i\leq N$, with $\lambda_i$'s being the eigenvalues of $- \ensuremath{\mathcal L}$. We refer the reader to~\cite[Prop. 2.1]{AG} for a proof of these statements. \subsection{Tuning parameter and underlying geometry.} The first factor $q^m$ in~\cref{LEPmeas} favors partitions having many small blocks as $q$ grows, while as $q$ vanishes, the measure degenerates into a one-block partition. The second combinatorial factor takes into account the underlying geometry and for example in the unweighted case (i.e.\ constant edge-weights $w\equiv1$) counts how many rooted forests are compatible with a given partition. In the simple setup of an unweighted complete graph on $N$ vertices, the measure in~\cref{LEP} reduces to \begin{equation}\label{completeLEP} \mu_q(\Pi_m)= \frac{q^m \times \prod_{i=1}^m n_i^{n_i-1}}{q(q+N)^{N-1}}, \end{equation} for a partition $\Pi_m=\{B_1,\ldots, B_m\}\in \mathcal P(V)$ consisting of $m$ blocks with sizes $|B_i|=:n_i$, $i\leq m$ such that $\sum_{i\leq m} n_i=N$. In particular, we see in this setup that this second factor favors partitions with a few ``fat'' blocks. Notice that~\cref{completeLEP} holds true because, by Cayley's formula, there are $n_i^{n_i-2}$ unrooted trees spanning block $B_i$, and since we are dealing with rooted trees, an extra volume factor $n_i$ for the possible roots is needed. In general, the competition between these two factors depends on the delicate interplay among the tuning parameter $q$, the underlying geometry and the weight function $w$. \subsection{Sampling algorithm and Loop-Erased RW (LERW)}\label{Wilson} An attractive feature of this measure is that there exists a simple exact sampling algorithm, originally due to Wilson~\cite{W96} and based on the associated LERW killed at random times. The LERW with killing is the process obtained by running the RW $X$, erasing cycles as soon as they appear, and stopping the evolving self-avoiding trajectory at an independent exponential random time $\tau_q$ of parameter $q$. The algorithm can be described as follows: \begin{enumerate} \item\label{1} pick \emph{any arbitrary} vertex in $V$ and run a LERW up to an independent time $\tau_q\overset{d}{\sim}\exp(q).$ Call $\gamma_1$ the obtained self-avoiding trajectory.
\item\label{2} pick \emph{any arbitrary} vertex in $V$ that does not belong to $\gamma_1$. Run a LERW until $\min\{\tau_q, \tau_{\gamma_1}\}$, $\tau_{\gamma_1}$ being the first time the RW hits a vertex in $\gamma_1$. Call $\gamma_2$ the union of $\gamma_1$ and the new self-avoiding trajectory obtained in this step. \item Iterate step (\ref{2}) with $\gamma_{\ell+1}$ in place of $\gamma_{\ell}$ until exhaustion of the vertex set $V$. \end{enumerate} In step (\ref{2}) we note that if the killing occurs before $\tau_{\gamma_1}$, then $\gamma_2$ is a rooted forest in $G$, else $\gamma_2$ is a rooted tree. When the above algorithm stops, it produces a \emph{spanning rooted forest} $F\in \mathcal F$, where the roots are the points where the involved LERWs were killed along the algorithm steps. The resulting forest $F$ on $G$ induces the partition $\Pi(F)$ of the vertex set $V$, where each block is identified by vertices belonging to the same tree. It can be shown that the probability of obtaining a given rooted spanning forest $F$ is proportional to $q$ to the power of the number of trees, times the forest weight $w(F)$. It then follows that the induced partition is distributed as $\Pi_q$ in~\cref{LEP}. We refer the reader to~\cite{AG} for the proof of the latter and for more detailed aspects of this algorithm, including dynamical variants. In the sequel we will denote by $\ensuremath{\mathbb{P}}$ a probability measure on an abstract probability space sufficiently rich for the randomness required by this algorithm. \subsection{Partition detecting ``metastable landscapes''.} Wilson's sampling algorithm described above shows that the resulting partition has the tendency to cluster in the same block (tree) points that can be visited by the RW with high probability on time scale $\tau_q$. In this sense the loop-erased partitioning has the tendency to capture \emph{metastable-like regions} (blocks), namely, regions of points from which it is difficult for the RW to escape on time scale $1/q$. This makes the probability $\mu_q$ an interesting measure for randomized clustering procedures; see in this direction~\cite{ACGM1} and~\cite[Sec. 5]{ACGM2}. Yet, a priori it is not clear how strong and stable this feature of capturing ``metastable landscapes'' is, since it heavily depends on the underlying geometry (weighted adjacency matrix) and the choice of the killing parameter $q$. The aim of this paper is to start making this heuristic precise by analyzing 2-point correlations associated to $\mu_q$ on the simplest \emph{dense} informative geometries. \subsection{Two-point correlations} For a pair of distinct vertices $x,y\in V$, consider the event that these vertices belong to different blocks in $\Pi_q$. That is, the event $$\{B_q(x)\neq B_q(y) \}:=\{x \text{ and } y \text{ are in different blocks of } \Pi_q\},$$ where $B_q(z)$ stands for the block in $\Pi_q$ containing $z\in V$. The probability of this event induces a 2-point correlation function which turns out to be analyzable by means of LERW explorations, and it encodes relevant information on what the resulting partition looks like on the underlying graph as a function of the parameters. Here is the formal definition together with an operative characterization.
\begin{definition}[{\bf Pairwise LEP-interaction potential}]\label{FIP} For given $q>0$ and $G$, and any pair $x,y\in V$, we call \emph{pairwise LEP-interaction potential} the following probability: \begin{align}\notag U_q(x,y):=&\ensuremath{\mathbb{P}} (B_q(x)\neq B_q(y))\\ &=\sum_{\gamma}\ensuremath{\mathbb{P}}^{LE_q}_x(\Gamma=\gamma)\ensuremath{\mathbb{P}}_y(\tau_\gamma>\tau_q)\label{LEdec} \end{align} where $\ensuremath{\mathbb{P}}_x^{LE_q}$ and $\ensuremath{\mathbb{P}}_x$ stand for the laws of the LERW killed at rate $q$ and of the RW, respectively, starting from $x\in V$, and the above sum runs over all possible self-avoiding paths $\gamma$ starting at $x$. \end{definition} The representation in~\cref{LEdec} is a consequence of Wilson's sampling procedure described in~\cref{Wilson} and it holds true since, remarkably, in steps (\ref{1}) and (\ref{2}) of the algorithm the starting points can be chosen arbitrarily. Furthermore, we notice that, as for any generic random partition of $V$, such an interaction potential defines a distance on the vertex set. This specific metric $U_q(x,y)$ can be interpreted as an affinity measure capturing how densely connected vertices $x$ and $y$ are in the graph $G$, thus providing a further motivation to analyze it. Still, the observable captured by $U_q(x,y)$ is not the only one inducing a natural notion of 2-point correlations associated to $\Pi_q$. For example, if we express the LEP-potential in Definition~\ref{FIP} as an expectation, i.e. $U_q(x,y)= \ensuremath{\mathbb{E}}\left[\mathbf{1}_{\{B_q(x)\neq B_q(y)\}}\right]$, we may think of normalizing it with the masses of the related blocks and obtain another natural 2-point correlation function. This is captured in the following definition. \begin{definition}[{\bf Pairwise RW-interaction potential}]\label{RWIP} For given $q>0$ and $G$, and any pair $x,y\in V$, we call \emph{pairwise RW-interaction potential} the following correlation function: \begin{align}\notag \overline{U}_q(x,y):=\ensuremath{\mathbb{E}}\left[\frac{\mathbf{1}_{\{B_q(x)\neq B_q(y)\}}}{\mu(B_q(x)) \mu(B_q(y))}\right], \end{align} where $\mu(\cdot)$ is the uniform measure on $V$. \end{definition} As we will see, the functional $\overline{U}_q$ is actually much simpler to analyze but it captures less insightful information on the underlying graph structure. Further, unlike $U_q$, it is neither a probability nor a metric, and it does not allow one to derive a description of the macroscopic structure of $\Pi_q$. In a sense, the latter is not surprising: in fact (see Lemma~\ref{RWexpress}), this alternative correlation function can be expressed in terms of the RW Green's kernel alone, without the need to introduce the LEP $\Pi_q$. Note in particular that the uniform measure $\mu$ in Definition~\ref{RWIP} corresponds to the invariant measure of the RW $X$. \subsection{Related literature} Several properties of the forest measure associated to the loop-erased partitioning have been derived in the recent works~\cite{AG,AG1}. Based on these results, in~\cite[Prop. 6]{ACGM2} and~\cite[Sect. 5.2]{ACGM3}, the authors proposed an approach making use of the loop-erased partitioning and so-called intertwining dualities to describe the evolution of \emph{local equilibria} of a finite state space Markov chain. As mentioned before, this sampling method based on LERW is originally due to Wilson~\cite{W96} and shows that the measure considered herein is intimately related to the well-known \emph{Uniform Spanning Tree} (UST) measure.
Actually the measure on spanning rooted forests mentioned in~\cref{Wilson} can be seen as a generalized version of the UST measure, which is recovered by taking $q\downarrow 0$ when $w\equiv1$. Therefore the results presented in this manuscript are in line with the flourishing literature on statistical properties of the UST and LERW; see e.g.~\cite{A91,BK05,BP93,G80,K07,LS19,LSW04,P91,Pitman02,S00,S09,S19}. A detailed exact and asymptotic analysis of observables related to Wilson's algorithm on a complete graph has been pursued in~\cite{P02}. The derivation of our results is in this spirit, although we deal with the additional randomness given by the presence of the killing parameter, which in turn makes the combinatorics more involved. We further mention that in dense geometries, the UST has been studied from the perspective of the continuum random tree topology on the complete graph~\cite{A91} and with respect to local weak convergence, again on the complete graph~\cite{G80} and more recently on growing expanders admitting a limiting graphon~\cite{HNT18}. These other interesting lines of investigation could also be naturally considered for the forest measure in~\cref{Wilson} but we will not pursue these approaches in this work. \subsection{Paper overview} Our main theorems are presented in~\cref{results} and identify the LEP-potential in~\cref{FIP} and its asymptotics on a complete graph,~\cref{proporso}, and on a non-homogeneous complete graph with two communities,~\cref{2par2comssintpot,phasetrans}. Some consequences on the macroscopic emergent partition $\Pi_q$ on these mean-field models are derived in~\cref{macro}. The last result in~\cref{Rwdetection} concerns the asymptotic detectability related to the other 2-point correlation function in~\cref{RWIP}. The concluding~\cref{proofcomplete,proof2com} are devoted to the proofs for the complete graph and the community model, respectively. \subsection{Basic standard notation} In what follows we will use the following standard asymptotic notation. For given positive sequences $f(N)$ and $g(N)$, we write: \begin{itemize} \item $f(N)=o(g(N))$ if $\lim_{N\to\infty}\frac{f(N)}{g(N)}=0$. \item $f(N)=O(g(N))$ if $\lim\sup_{N\to\infty}\frac{f(N)}{g(N)}<\infty$. \item $f(N)=\omega(g(N))$ if $\lim_{N\to\infty}\frac{f(N)}{g(N)}=\infty$. \item $f(N)=\Omega(g(N))$ if $\lim\inf_{N\to\infty}\frac{f(N)}{g(N)}>0$. \item $f(N)=\Theta(g(N))$ if $0< \lim\inf_{N\to\infty}\frac{f(N)}{g(N)}\le \lim\sup_{N\to\infty}\frac{f(N)}{g(N)}<\infty$. \item $f(N)\sim g(N)$ if $\lim_{N\to\infty}\frac{f(N)}{g(N)}=1$. \end{itemize} For $k\leq n\in \ensuremath{\mathbb{N}}$ we will denote by $(n)_{k}:=n(n-1)(n-2)\cdots(n-k)$ the descending factorial. Furthermore, we denote by $I$ the identity matrix, and by $\mathbf{1}$ and $\mathbf{1}'$ the row and column vectors of all $1$'s, respectively, where the dimensions will be clear from the context. We will write $A^{Tr}$ for the \emph{transpose} of a matrix $A$. \section{\large{Results: correlations and emerging partition on mean-field models}}\label{results} Our first result characterizes the LEP-potential in the absence of geometry for finite $N$, and shows that this probability is asymptotically non-degenerate at scale $\sqrt{N}$: \begin{theorem}\label{proporso}{\bf (Mean-field LEP-potential and limiting law)} Fix $q>0$ and let $\mathcal{K}_N$ be a complete graph on $N\geq 1$ vertices with constant edge weight $w>0$.
Then, for all $x\neq y \in [N]$, \begin{equation}\label{orsoformula} U^{(N)}_q(x,y)=U^{(N)}_q=\sum_{h=1}^{N-1}\frac{q}{q+Nw}\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{k=2}^{h}\left(1-\frac{k}{N}\right). \end{equation} Furthermore, if $q=z\cdot w \sqrt{N}$, for fixed $z,w>0$, then \begin{equation}\label{orsolimite}U_{q}:=\lim_{N\to\infty}U^{(N)}_{q}=\sqrt{2\pi}ze^{\frac{z^2}{2}}\ensuremath{\mathbb{P}}(Z>z),\end{equation} with $Z$ being a standard Gaussian random variable. \end{theorem} Notice that the critical scale $\sqrt{N}$ is the typical length of a LERW path with no killing and---as can be derived from the results in~\cite{P02}---is the typical length of the first branch of Wilson's algorithm on the complete graph, when $q=O(\sqrt{N})$. Our second result is the analogue of~\cref{orsoformula} when still every vertex is accessible from any other, but the edge weights are non-homogeneous and give rise to a community structure. In this sense we will informally refer to this network as a \emph{mean-field-community} model. Formally, for given positive reals $w_1$ and $w_2$, we denote by $\mathcal{K}_{2N}(w_1,w_2)$ the graph $G$ with $V=[2N]$, and $w(e)=w_1$ if $e=(x,y)$ is such that either $x,y\in[N]$ or $x,y\in[2N]\setminus[N]$, and $w(e)=w_2$ otherwise. Thus, the weight $w_1$ measures the pairwise connection intensity within the same community, while $w_2$ measures it between pairs of nodes belonging to different communities. Given the symmetry of the model, we will use the notation $U^{(N)}_{q}(out)$ to refer to the potential $U^{(N)}_{q}(x,y)$, for $x$ and $y$ in different communities. Conversely, we set $U^{(N)}_{q}(in)$ for the potential associated to two nodes belonging to the same community. \begin{theorem}\label{2par2comssintpot}{\bf (LEP-potential for mean-field-community model) } Fix $q, w_1, w_2>0$ and consider a two-community-graph $\mathcal{K}_{2N}(w_1,w_2)$. Let $T_q\geq1$ be a geometric random variable with success parameter $$\xi_{q,N}:=\frac{q}{q+N(w_1+w_2)}$$ and let $\tilde X=\(\tilde X_n\)_{n\in\ensuremath{\mathbb{N}}_0}$ be a discrete-time Markov chain with state space $\{\underline{1},\underline{2}\}$ and transition matrix $$\tilde P=\left(\begin{matrix}p&1-p\\1-p&p \end{matrix}\right),\quad p=\frac{w_1}{w_1+w_2}.$$ Denote by $\ell(n)=\sum_{m<n}\mathbf{1}_{\left\{\tilde X_m=\underline{1} \right\}}$ the corresponding local time in state $\underline{1}$ up to time $n$ and by $\tilde \ensuremath{\mathbb{P}}_{\underline{1}}$ the corresponding path measure starting from $\underline{1}$.
For $x \in[N]$, set $\star= in$ if $y\in[N]$, and $\star=out$ if $y\in[2N]\setminus[N]$; then \begin{equation} \begin{aligned}\label{g} U^{(N)}_{q}(x,y)= U^{(N)}_{q}(\star) := \sum_{n\geq 1} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k= 1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k) N^{-n+1}\hat{f}(n,k)\theta(n,k)P^{\dagger}_{\star}(n,k) \end{aligned} \end{equation} where \begin{equation}\label{fandtheta} \hat{f}(n,k)= (N-2)_{k-1}(N-1)_{n-k},\quad\quad \theta(n,k)= \frac{\left(q-\lambda_1(n,k)\right)\left(q-\lambda_2(n,k)\right)}{q(q+2Nw_2)}\end{equation} with, for $i=1,2$, \begin{equation} \lambda_{i}(n,k)=-\frac{1}{2}\left[w_1n+w_2N+(-1)^{i}\sqrt{w_1^2(2k-n)^2+4\left(N-k\right)\left(N-k\right)w_2^2}\right], \end{equation} and \begin{equation}\label{Pmorte} P^{\dagger}_{\star}(n,k)=\frac{ q( q+k_\star(w_1-w_2)+w_2N)} {[q+k w_1][q+(n-k)w_1]+Nw_2(2q+nw_1)+w_2^2[Nn-k(n-k)]} \times \eta_\star\end{equation} with \begin{equation} k_\star:=\begin{cases} k, & \text{ if } \star = out, \\ n- k, & \text{ if } \star = in, \end{cases} \quad \quad\quad\quad \eta_\star=\begin{cases} (N-1)(N-n+k-1), & \text{ if } \star = out, \\ N(N-k-1), & \text{ if } \star = in. \end{cases} \end{equation} \end{theorem} The above theorem says that the pairwise LEP-potential can be seen as a double expectation of the function $g_{\star}(n,k)=N^{-n+1} \left(\hat{f}\theta P^{\dagger}_{\star}\right)(n,k)$ in~\cref{g} with respect to the geometric time $T_q$ and to the local time of the coarse-grained RW $\tilde X$. As can be seen in the proof, the analysis of this model can in fact be reduced to the study of such a coarse-grained RW jumping between the two ``lumped communities'' up to the independent random time $T_q$. The function $g_{\star}$ is the crucial combinatorial term encoding in the different parameter regimes the most likely trajectories for such a stopped two-state macroscopic walk $\tilde X$. \begin{remark}{\bf (Extensions to many communities of arbitrary sizes and weights) } The formula in~\cref{g} can be derived also for the general model with arbitrary number of communities of variable compatible sizes and arbitrary weights within and among communities. The corresponding statement and proof are more involved but they follow exactly the same scheme as in the equal-size two-community case captured in the above theorem. We refer the reader interested in such an extension to~\cite{Q16}. \end{remark} The next theorem gives the limit of the LEP-potential computed in~\cref{2par2comssintpot}; the resulting scenario is summarized in the phase-diagram in~\cref{fig:phdiag}. \begin{figure} \caption{The $\alpha$--$\beta$ plane: $\alpha$ controls the killing rate ($q=N^\alpha$) and $\beta$ the weight between communities ($w_2=N^{-\beta}$). The above diagram describes at a glance the limiting behavior of the LEP-potential as captured in~\cref{phasetrans}. The \emph{detectability} region (b) corresponds to the regimes where the difference of the \emph{in}- and \emph{out}-potential is maximal. In this case, indeed, the RW does not manage to exit its starting community within time scale $1/q$ and hence it is confined with high probability to ``its local universe''. In the \emph{dust} region (f) both \emph{in}- and \emph{out}-potential degenerate to 1; it is in fact a regime where the killing rate is sufficiently large (recall from~\cref{orsolimite} that $\sqrt{N}$ is the critical scale for the complete graph) to produce ``dust'' as emerging partition.
Finally, the \emph{global mixing} region (d) is the other degenerate regime where the RW ``mixes globally'' in the sense that it changes community many times within time scale $1/q$, hence losing memory of its starting community. The separating lines (c)--(a)--(e) correspond to the delicate critical phases where the competition of the above behaviors occurs. This will become transparent in the proof in~\cref{detect} where such boundaries will deserve a more detailed asymptotic analysis.} \label{fig:phdiag} \end{figure} \begin{theorem}\label{phasetrans}{\bf (Detectability and phase diagram for two communities) } Under the assumptions of~\cref{2par2comssintpot}, set $w_1=1$, $w_2=N^{-\beta}$ and $q=N^\alpha$ for some $\alpha\in\ensuremath{\mathbb{R}},\: \beta\in\ensuremath{\mathbb{R}}^+$. Then: \begin{itemize} \item[\bf{(a)}] if $1-\beta<\alpha=\frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(out)=1$ and $\lim_{N\to\infty}U^{(N)}_q(in)= \varepsilon_0(\beta)\in(0,1)$. \item[\bf{(b)}] if $1-\beta<\alpha<\frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(out)=1$ and $\lim_{N\to\infty}U^{(N)}_q(in)=0$. \item[\bf{(c)}] if $\alpha=1-\beta< \frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(out)=\varepsilon_2(\alpha,\beta)\in(0,1)$ and $\lim_{N\to\infty}U^{(N)}_q(in)=0$. \item[\bf{(d)}] if $\alpha<\min\{\frac{1}{2},1-\beta\}$, $\lim_{N\to\infty}U^{(N)}_q(\star)=0, \star\in\{in,out\}.$ \item[\bf{(e)}] if $\alpha=\frac{1}{2}<1-\beta$, $\lim_{N\to\infty}U^{(N)}_q(\star)= \varepsilon_1(\alpha,\beta)\in(0,1)$, $\star\in\{in,out\}$. \item[\bf{(f)}] if $\alpha>\frac{1}{2}$, $\lim_{N\to\infty}U^{(N)}_q(\star)=1, \star\in\{in,out\}.$ \end{itemize} \end{theorem} \begin{remark}{\bf (Anticommunities for negative $\beta$)} The above theorem is stated for arbitrary $\alpha\in\ensuremath{\mathbb{R}}$ and $\beta>0$. We notice that while for $\beta=0$ we are back to the complete graph with constant weight 1, for $\beta<0$ it would be more appropriate to speak about ``anticommunities'' rather than communities. In fact in this case, at every step, the RW prefers to change community rather than stay in its original one. Thus, it is somewhat artificial to see what the loop-erased partitioning captures. This is the reason why the plot in~\cref{fig:phdiag} is restricted to $\beta\geq 0$. However, the theorem still remains valid for negative $\beta$ and, not surprisingly, the difference between the \emph{in} and \emph{out} potentials turns out to be zero. \end{remark} The next statement collects some simple consequences, deduced from the two-point LEP-potential, on the macroscopic structure of $\Pi_q$. We recall that $|\Pi_q|$ stands for the number of blocks in the random partition $\Pi_q$. \begin{corollary}\label{macro}{\bf (Macroscopic emergent structure)} Under the assumption of~\cref{phasetrans}, the following scenarios hold true. If $\beta>0$, there exists $c>0$ depending only on $\alpha$ and $\beta$ s.t. $$\ensuremath{\mathbb{P}}\left(|\Pi_q|=cN^{\alpha\wedge1}(1\pm o(1) ) \right)=1-o(1).$$ Moreover: \begin{itemize} \item[\bf{(a)}] if $1-\beta<\alpha=\frac{1}{2}$ then $\textbf{whp}$ there are two blocks of linear size s.t. each block has a fraction $(1-o(1))$ of vertices from the same community. \item[\bf{(b)}] if $1-\beta<\alpha<\frac{1}{2}$ then $\textbf{whp}$ there are two blocks of size $N(1-o(1))$ s.t. each block has a fraction $(1-o(1))$ of vertices from the same community. \item[\bf{(c)}] if $\alpha=1-\beta< \frac{1}{2}$ then $\textbf{whp}$ there is at least a block of linear size.
\item[\bf{(d)}] if $\alpha<\min\{\frac{1}{2},1-\beta\}$ then $\textbf{whp}$ there is one block of size $2N(1-o(1))$. \item[\bf{(e)}] if $\alpha=\frac{1}{2}<1-\beta$ then $\textbf{whp}$ there is at least a block of linear size. \item[\bf{(f)}] if $\alpha>\frac{1}{2}$ then $\textbf{whp}$ blocks of linear size do not exist. \end{itemize} \end{corollary} \cref{phasetrans} says that the LEP-potential contains sufficient information to detect the underlying communities in a parametric region where the ratio of the {\em in} and {\em out} weights is bigger than $\sqrt{N}$. This suggests that estimating the probabilities in~\cref{FIP} could be a valuable method to design a community detection algorithm for well-separated regions. Nonetheless, there can be other observables associated to $\Pi_q$ which perform better, meaning e.g. that they can be used for detection beyond regions ({\bf a})--({\bf c}) in~\cref{fig:phdiag}. However, it is not the aim of this paper to explore the practical applications and implications of this loop-erased partitioning in the context of community detection. For this reason we will omit complexity and other algorithmic considerations. As already mentioned, our main goal is rather to start understanding analytically the measure $\mu_q$ and its emergent structure. Our last result,~\cref{Rwdetection}, is the analogue of~\cref{phasetrans} for the RW-potential in~\cref{RWIP} and shows that this other potential gives essentially no insight on the emergent partition and very little can be detected from it. To state the result, we first give in the next lemma a characterization of the RW-potential which reveals that in reality this other 2-body interaction is determined only by the RW flow in the graph rather than by the LEP-measure. \begin{lemma}\label{RWexpress}{\bf (RW--potential independent of LEP structure)} For any arbitrary graph $G$ on $N$ vertices, the pairwise correlation function in~\cref{RWIP} admits the following representation: $$ \overline{U}_q(x,y)=N^2 \left[K_q(x,x)K_q(y,y)-K_q(x,y)K_q(y,x)\right],$$ where $$K_q(x,y):= q(q-\mathcal{L})^{-1}(x,y)= \mathbb{P}_x(X(\tau_q)=y) $$ is, up to the factor $q$, the Green's kernel of the RW $X$ stopped at an independent exponentially distributed time $\tau_q$ with rate $q$. \end{lemma} We can now state the detectability captured by this RW-potential in the mean-field-community model. As for the LEP-potential, we adopt the notation $\overline{U}_q( in/out )$ to distinguish between pairs within the same community or not. \begin{proposition}\label{Rwdetection}{\bf (Detectability via RW--potential)} Consider the two--community--graph $\mathcal{K}_{2N}(w_1,w_2)$ with $w_1=1$, $w_2=N^{-\beta}$ and $q=\Theta(N^{\alpha})$. Then, if $\alpha\le0$ and $\beta>1-\alpha$ $$ \overline{U}_{q}(\star)\sim \begin{cases} 4q^2+8 q&\text{ if } \star= in ,\\ 4q^2+8 q+4 & \text{ if } \star= out. \end{cases}$$ On the other hand: $$\overline{U}_{q}(in)\sim\overline{U}_{q}(out)\sim\begin{cases} 4q(q+1)&\text{ if } \alpha\le0 \text{ and } \beta<1-\alpha, \\ N^{\max\{2,2\alpha\}} & \text{ if } \alpha>0. \end{cases}$$ \end{proposition} As anticipated, this last statement shows that this RW-potential is less informative than the LEP one. In particular, the detectable parametric region is narrower and corresponds to the triangle for $\alpha\leq 0$ in the detectable region depicted in~\cref{fig:phdiag}.
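\begin{remark}{\bf (A sampling sketch)} For the reader's convenience, we include a minimal code sketch (in Python; names and data layout are illustrative assumptions, not part of any library) of the sampling procedure of~\cref{Wilson}, in the standard cycle-popping formulation: each random-walk excursion overwrites the last-exit edge of every vertex it leaves, which implicitly performs the loop-erasure, and a killing event turns the current vertex into a root.
\begin{verbatim}
import random

def loop_erased_partition(weights, q):
    # weights[x][y] = w(x, y) >= 0 with zero diagonal; vertices 0..N-1; q > 0
    n = len(weights)
    in_forest = [False] * n   # vertices already frozen in the forest
    is_root = [False] * n     # killed endpoints, i.e. roots of the trees
    parent = [None] * n       # last-exit edge out of each vertex
    for start in range(n):
        x = start             # random-walk excursion from `start`
        while not in_forest[x]:
            total = sum(weights[x])
            if random.random() < q / (q + total):
                is_root[x] = True            # killed: x becomes a root
                break
            r = random.random() * total      # jump proportionally to weights
            for y, w in enumerate(weights[x]):
                r -= w
                if r < 0:
                    break
            parent[x] = y                    # overwriting erases loops
            x = y
        x = start             # freeze the loop-erased path of this excursion
        while not in_forest[x]:
            in_forest[x] = True
            if is_root[x]:
                break
            x = parent[x]
    return parent, is_root
\end{verbatim}
The blocks of $\Pi_q$ are then read off by following the \texttt{parent} pointers from each vertex up to its root. \end{remark}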
\section{Proofs of~\cref{proporso}: homogeneous complete graph}\label{proofcomplete} \subsection*{Proof of~\cref{orsoformula}} For convenience, we consider a discretization of the continuous time Markov process with generator \begin{equation}\label{def:lap} \ensuremath{\mathcal L}=\ensuremath{\mathcal A}-\ensuremath{\mathcal D},\quad\text{ with }\quad \ensuremath{\mathcal A}=w(\mathbf{1}\mathbf{1}'-I) \quad \text{ and }\quad\text{ with } \ensuremath{\mathcal D}=(n-1)wI. \end{equation} Set $L=\frac{1}{Nw}\ensuremath{\mathcal L}$, so that $L=I-\frac{1}{N}\mathbf{1}\mathbf{1}'$ and the associated transition matrix is given by \begin{equation} P=I-L=\frac{1}{N}\mathbf{1}\mathbf{1}' \end{equation} If we consider the killing as an absorbing state within the state space of the Markov chain extended from $V$ to $V\bigcup\{\Delta\}$, $\Delta$ denoting this absorbing state, we get the adjacency matrix \begin{equation} \widehat \ensuremath{\mathcal A}=\left(\begin{matrix} w\mathbf{1}\mathbf{1}'&q\mathbf{1}\\ \mathbf{0}'&0 \end{matrix}\right), \end{equation} and generator \begin{equation} \widehat \ensuremath{\mathcal L}= \widehat \ensuremath{\mathcal A}-\widehat \ensuremath{\mathcal D},\qquad \widehat \ensuremath{\mathcal D}=\left(\begin{matrix} [(N-1)w+q]I&\mathbf{0}\\ \mathbf{0}'&0 \end{matrix}\right). \end{equation} We can then normalize it by setting \begin{equation} \widehat L=\frac{1}{Nw+q}\widehat \ensuremath{\mathcal L}=\left(\begin{matrix} \frac{w}{Nw+q}\mathbf{1}\mathbf{1}'-I&\frac{q}{Nw+q}\mathbf{1}\\ \mathbf{0}'&0 \end{matrix}\right) \end{equation} and get a discrete RW with transition matrix given by \begin{equation}\label{discrete} \widehat P= I-\widehat L=\left(\begin{matrix} \frac{w}{Nw+q}\mathbf{1}\mathbf{1}'&\frac{q}{Nw+q}\mathbf{1}\\ \mathbf{0}'&1 \end{matrix}\right)=\left(\begin{matrix} (1-p)\frac{1}{N}\mathbf{1}\mathbf{1}'&p\mathbf{1}\\ \mathbf{0}'&1 \end{matrix}\right), \end{equation} where \begin{equation}\label{p} r:=\frac{q}{Nw + q}. \end{equation} It should be clear that a sample of a LE-path starting at a given vertex can be obtained as the output of the following procedure: \begin{itemize} \item With probability $r$ the discrete process reaches the absorbing state. In particular we set $T_q$ for a geometric random variable of parameter $q/(Nw+q)$. \item With probability $1-r$ the LERW moves accordingly to the law $P(v,\cdot)$ where $v$ is the last reached node. \item We call $H_n$ the vertices covered by the LE-path up to time $n$. Then, if at time $n+1$ the transition $X_n\to X_{n+1}$ takes place and the vertex $X_{n+1}\not\in H_n$, then $ H_{n+1}=H_n\cup\{X_{n+1} \}$. Conditioning on $|H_n|$, the latter event occurs with probability $\frac{N- H_n}{N}$. Conversely, if $X_{n+1}\in H_n$, then we remove from $H_n$ all the vertices that has been visited by the LERW since its last visit to $X_{n+1}$. As consequence the quantity $|H|$ reduces. One can then compute that the reductions occur with law \begin{equation} \ensuremath{\mathbb{P}}\left( |H_{n+1}|=h\:|\:|H_n|\ge h, T_q>n+1 \right)=\frac{1}{N}. \end{equation} \end{itemize} It would be easier to look at the quantity $|H_{n}|$ by using the following metaphor. We interpret $|H_{n}|$ as the height from which a bear fall down while moving on a stair of height $n$. In particular, we will assume that \begin{itemize} \item The bear starts with probability 1 from the first stair. \item At each time the bear select a step of the stair uniformly at random, including also the step he currently stands on. 
\item If the choice made by the bear is a lower step (or the current one), he moves to that step. \item If he chooses an upper step, then he walks in the upper direction by a single step. \item Before doing each step, there is a probability $r$ as in~\cref{p} that the bear ``falls down''. \end{itemize} Let us next fix $q=0$, that is, $r=0$, so that we can study the bear's dynamic independently of his falling. By setting $Z(n)$ for the position of the bear at time $n\in \ensuremath{\mathbb{N}}$, we get \begin{align} \ensuremath{\mathbb{P}}(Z(0)=\cdot)=&\left(1,0,0,0,\dots,0\right)\\ \ensuremath{\mathbb{P}}(Z(1)=\cdot)=&\left(\frac{1}{N},1-\frac{1}{N},0,0,\dots,0\right)\\ \ensuremath{\mathbb{P}}(Z(2)=\cdot)=&\left(\frac{1}{N},\left(1-\frac{1}{N}\right)\frac{2}{N},\left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right),0,\dots,0\right)\\ \ensuremath{\mathbb{P}}(Z(3)=\cdot)=&\left(\frac{1}{N},\left(1-\frac{1}{N}\right)\frac{2}{N},\left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right)\frac{3}{N},\left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right)\left(1-\frac{3}{N}\right),\dots,0\right)\\ \ensuremath{\mathbb{P}}(Z(n)=\cdot)=&\begin{cases} \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{h-1}{N} \right)\frac{h}{N}&\text{ if }n\ge h\\ \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{h-1}{N} \right)&\text{ if }n=h-1\\ 0&\text{ if }n<h -1. \end{cases} \end{align} The latter implies that at time $n=h$ we reached the ergodic measure over the first $h$ steps of the stair, while at time $n=N$ the probability measure is exactly the ergodic one. It is interesting to notice that an easier expression can be written for the cumulative distribution of the variable $Z(n)$, i.e. \begin{equation} \ensuremath{\mathbb{P}}\left\{Z(n)\ge h\right\}=\begin{cases} \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{n-1}{N} \right)&\text{ if }n\ge h-1\\ 0&\text{ if }n<h -1\\ \end{cases} \end{equation} Next, calling $T^-$ the time immediately before the bear falls, we get \begin{align} \nonumber\ensuremath{\mathbb{P}}\left\{Z(T^-)\ge\zeta \right\}=&\ensuremath{\mathbb{P}}\left\{T^-<h-1 \right\}\ensuremath{\mathbb{P}}\left\{Z(T^-)\ge h| T^-<n-1 \right\}+\ensuremath{\mathbb{P}}\left\{T^-\ge h-1 \right\}\ensuremath{\mathbb{P}}\left\{Z(T^-)\ge h| T^-\ge n-1 \right\}\\ =&0+(1-r)^{h-1} \left(1-\frac{1}{N} \right)\left(1-\frac{2}{N}\right)\cdots \left(1-\frac{h-1}{N}\right) \end{align} which gives us the distribution of the last step of the bear before his failing. Recall that this is equivalent to the length of the original LERW starting on $x\in \ensuremath{\mathcal K}_{N}$, when the walk is stopped at an exponential time of rate $q$. Hence, we are now left to compute the probability that another walker, starting on $y\not= x$, is killed before it hits the previously sampled LERW. 
Thanks to the bear metaphor, for the size of the LE-trajectory we get: \begin{equation} \ensuremath{\mathbb{P}}^{LE_q}_x(|\Gamma|\geq h)=(1-r)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right) \end{equation} and by explicit computation, setting $T_\Gamma$ for the first hitting time of the LE-path $\Gamma$, \begin{align*} U^{(N)}_q(x,y)=&\sum_{h\geq 1}^{N-1}\ensuremath{\mathbb{P}}^{LE_q}_x(|\Gamma|= h)\ensuremath{\mathbb{P}}_y(T_q<T_\Gamma | |\Gamma|=h)\\ =&\sum_{h=1}^{N-1}\ensuremath{\mathbb{P}}^{LE_q}_x(|\Gamma|= h)[\ensuremath{\mathbb{P}}_y(T_q<T_\Gamma| |\Gamma|=h, y\in \Gamma)\ensuremath{\mathbb{P}}(y\in \Gamma ||\Gamma|=h)\\ &+\ensuremath{\mathbb{P}}_y(T_q<T_\Gamma||\Gamma|=h, y\notin \Gamma)\ensuremath{\mathbb{P}}(y\notin \Gamma| |\Gamma|=h)]\\ =&\sum_{h=1}^{N-1}\ensuremath{\mathbb{P}}^{LE_q}_x(|\Gamma|= h)\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}\\ =&\sum_{h=1}^{N-1}\ensuremath{\mathbb{P}}^{LE_q}_x(|\Gamma|\geq h)\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}-\sum_{h=1}^{N-1}\ensuremath{\mathbb{P}}_x(|\Gamma|\geq h+1)\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}\\ =&\sum_{h=1}^{N-1}\left[\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\right]\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}+\\ &-\sum_{h=1}^{N-1}\left[\left(\frac{Nw}{q+Nw}\right)^{h}\prod_{i=1}^{h}\left(1-\frac{i}{N}\right)\right]\left(\frac{q}{q+hw}\right)\frac{N-h}{N-1}\\ =&\sum_{h=1}^{N-1}\frac{q}{q+Nw}\frac{N-h}{N-1}\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\left[1-\frac{Nw}{Nw+q}\left(\frac{N-h}{N}\right)\right]\\ =&\sum_{h=1}^{N-1}\frac{q}{q+hw}\frac{N-h}{N-1}\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\left(\frac{q+hw}{q+Nw}\right)\\ =&\sum_{h=1}^{N-1}\frac{q}{q+Nw}\left(\frac{Nw}{q+Nw}\right)^{h-1}\frac{N-h}{N-1}\prod_{i=1}^{h-1}\left(1-\frac{i}{N}\right)\\ =&\sum_{h=1}^{N-1}\frac{q}{q+Nw}\left(\frac{Nw}{q+Nw}\right)^{h-1}\prod_{i=2}^{h}\left(1-\frac{i}{N}\right)\\ =&\sum_{k=0}^{N-2}\frac{q}{q+Nw}\left(\frac{Nw}{q+Nw}\right)^{k}\prod_{i=2}^{k+1}\left(1-\frac{i}{N}\right). \end{align*} \qed \subsection*{Proof of~\cref{orsolimite}} Let \begin{equation} \frac{\xi_q}{N}:=\frac{q}{Nw+q} \end{equation} and notice that if $q=x\sqrt{N}$, with $x,w=\Theta(1)$, then \begin{equation} q=\frac{Nw\xi_q}{N-\xi_q}\Longrightarrow q\sim w\xi_q. \end{equation} Call \begin{equation} f(k,N):=\prod_{i=2}^k\left(1-\frac{i}{N}\right), \end{equation} in order to rewrite \begin{align} \begin{split} U^{(N)}_q=&\sum_{k=0}^{N-2}\left(\frac{\xi_q}{N}\right)\left(1-\frac{\xi_q}{N} \right)^{k}\prod_{i=2}^{k+1}\left(1-\frac{i}{N}\right)\\ =&\sum_{k=0}^{N-2}\left(\frac{\xi_q}{N}\right)\left(1-\frac{\xi_q}{N} \right)^{k}f(k+1,N) \end{split} \end{align} and notice that the first term in the latter sum is the probability that the geometric random variable $T_q \overset{d}{\sim} Geom\left(\frac{\xi_q}{N}\right)$ assumes value $k$. Moreover it trivially holds that \begin{equation}\label{fknmin1} f(k+1,N)\le1,\:\:\forall k\in\ensuremath{\mathbb{N}},\qquad f(k+1,N)=0,\:\:\forall k\ge N-1. \end{equation} Hence, \begin{equation}\label{uqmeant} U^{(N)}_q=\ensuremath{\mathbb{E}}[f(T_q+1,N)]. 
\end{equation} \noindent Let us approximate $\ln f(k+1,N)$ at the first order as follows \begin{align}\label{eolo} \begin{split} \ln f(k+1,N)=&\sum_{i=2}^{k+1}\ln\left(1-\frac{i}{N}\right)=-\sum_{i=2}^{k+1}\frac{i}{N}+O\left(\frac{i^2}{N^2}\right)\\ =&-\frac{1}{N}\frac{(k+1)(k+2)-2}{2}+kO\left(\frac{k^2}{N^2}\right)=-\frac{1}{N}\frac{k^2+3k}{2}+O\left(\frac{k^3}{N^2}\right)\\ =&-\frac{k^2}{2N}+O\left(\frac{k}{N}+\frac{k^3}{N^2}\right)=:-\frac{k^2}{2N}+c_N(k). \end{split} \end{align} Next, set $Y\,\overset{d}{\sim}\, exp(x)$ and $Z\,\overset{d}{\sim} \,\mathcal N(0,1)$, notice that $\ensuremath{\mathbb{E}}[e^{\frac{Y^2}{2}}]=\sqrt{2\pi}xe^{\frac{x^2}{2}}\ensuremath{\mathbb{P}}(Z>x)$ and that \begin{equation} \lim_{N\to\infty}|\ensuremath{\mathbb{E}}[e^{-\frac{T_q^2}{2N}}]-\ensuremath{\mathbb{E}}[e^{\frac{Y^2}{2}}]|=0, \end{equation} since $T_q/\sqrt{N}$ converges in distribution to $Y$ as $N$ diverges. In view of the latter together with~\cref{uqmeant}, we can estimate \begin{align*} \left|U^{(N)}_q-\sqrt{2\pi}xe^{\frac{x^2}{2}}\ensuremath{\mathbb{P}}(Z>x) \right|&\leq \left| \ensuremath{\mathbb{E}}[f(T_q+1,N)]-\ensuremath{\mathbb{E}}[e^{-\frac{T_q^2}{2N}}]\right| + o(1) \\ \le & \left| \ensuremath{\mathbb{E}}[f(T_q+1,N)]-\sum_{k=0}^{\lfloor N^\delta\rfloor}\ensuremath{\mathbb{P}}(T_q=k)e^{-\frac{k^2}{2N}}e^{c_N(k)}\right| \\ &+ \left|\sum_{k=0}^{\lfloor N^\delta\rfloor}\ensuremath{\mathbb{P}}(T_q=k)e^{-\frac{k^2}{2N}}e^{c_N(k)}-\ensuremath{\mathbb{E}}[e^{-\frac{T_q^2}{2N}}]\right| + o(1) \\\le& \sum_{k=\lfloor N^\delta\rfloor+1}^\infty\ensuremath{\mathbb{P}}(T_q=k) + \left|\sum_{k=0}^{\lfloor N^\delta\rfloor}\ensuremath{\mathbb{P}}(T_q=k)e^{-\frac{k^2}{2N}}e^{c_N(k)}-\sum_{k=0}^{\lfloor N^\delta\rfloor}\ensuremath{\mathbb{P}}(T_q=k)e^{-\frac{k^2}{2N}}\right| + o(1)\\ =& o(1), \end{align*} where the last inequality holds true by choosing any $\delta\in\left(\frac{1}{2},\frac{2}{3}\right)$ which in particular guarantees that $c_N(k)=o(1)$. \qed \section{Proofs for mean-field-communities}\label{proof2com} \subsection{Proof of~\cref{2par2comssintpot}} We use here the same line of argument used in the proof of~\cref{proporso}. We will consider the process having state space $V=V_1\sqcup V_2$, where $$V_1=\left\{1,\dots, N_1 \right\},\qquad V_2=\left\{N_1+1,\dots,N_1+N_2 \right\},$$ and generator \begin{equation}\label{lap2com} \ensuremath{\mathcal L}(x,y)=\begin{cases} w_1&\text{if } x\not=y \text{ and } x,y \text{ in the same community}\\ w_2&\text{if } x\not=y \text{ and } x,y \text{ not in the same community}\\ -(N_1-1)w_1-N_2 w_2&\text{if } x=y \text{ and } x\in V_1\\ -(N_2-1)w_1-N_1 w_2&\text{if } x=y \text{ and } x\in V_2. \end{cases} \end{equation} We will specialize later on the case $N_1=N_2=N$.\\ We now consider a killed LERW $\Gamma$, and we denote by $\Gamma_i$ the set of points of the $i$-th community belonging to $\Gamma$, i.e., \begin{equation} \Gamma_i=\Gamma\cap V_i,\qquad i=1,2. 
\end{equation} We can write \begin{equation} \ensuremath{\mathbb{P}}_x^{LE_q}(|\Gamma_1|=k_1, |\Gamma_2|=k_2)=\sum_{\gamma: |\gamma_1|=k_1,|\gamma_2|=k_2} \ensuremath{\mathbb{P}}_x^{LE_q}(\gamma)\label{marshall}, \end{equation} and we assume, without loss of generality, that $x\in V_1$; then, by conditioning, we get for $ y\neq x$ with $y\in V_j$, $j=1,2$ \begin{equation} U_q^{(N)}(x,y)=\sum_{k_1=1}^{N_1-\mathbf{1}_{j=1}}\sum_{k_2=0}^{N_2-\mathbf{1}_{j=2}}\ensuremath{\mathbb{P}}^{LE_q}_x(|\Gamma_1|=k_1, |\Gamma_2|=k_2)\cdot \ensuremath{\mathbb{P}}_y\left(T_q<T_{\Gamma}\big|\Gamma\right),\label{ma} \end{equation} $T_{\Gamma}$ being the hitting time of $\Gamma$. \subsection*{The LERW starting from $x$} A result due to Marchal~\cite{M00} provides the following explicit expression for the probability of a loop erased trajectory: \begin{equation}\label{LERWlaw} \ensuremath{\mathbb{P}}_x^{LE_q}(\Gamma=\gamma)=\prod_{i=1}^{|\gamma|}w(x_{i-1},x_i)\frac{\det_{V\setminus\gamma}{(qI+\mathcal{L})}}{\det{(qI+\mathcal{L})}}. \end{equation} By looking closely at the latter formula we distinguish two parts: a product over the weights of the edges of the path and an algebraic part containing the ratio of two determinants which encodes the ``loop-erased'' feature of the process. In particular we notice that the former contains all the details about the trajectory, while the latter only depends on the number of points visited in each community. Let $j_1$ (respectively, $j_2$) be the number of jumps from the first community to the second (from the second to the first, respectively) along the LE-path. We have \begin{equation}\label{2comm} \begin{split} \ensuremath{\mathbb{P}}_x^{LE_q}(|\Gamma_1|=k_1, &|\Gamma_2|=k_2|x\in V_1,\:y\in V_2)=\\ =&\sum_{\gamma: |\gamma_1|=k_1,|\gamma_2|=k_2} \ensuremath{\mathbb{P}}_x^{LE_q}(\Gamma=\gamma)\\ =&\binom{N_1-1}{k_1-1}\binom{N_2-1}{k_2}\cdot(k_1-1)!(k_2)!\cdot\sum_{j_{1}=0}^{\min\{k_1,k_2\}}\sum_{j_{2}=j_{1}-1}^{j_{1}}\binom{k_1-1}{j_1-\mathbf{1}_{j_{1}\neq j_{2}}}\binom{k_2-1}{j_{2}-\mathbf{1}_{j_{1}=j_{2}}}\cdot\\ &\cdot w_1^{k_1+k_2-(j_1+j_2)-1}w_2^{j_1+j_2}q\frac{\det_{V\setminus\{1,2,\dots,k_1,N_1+1,N_1+2,\dots,N_1+k_2\}}(qI+\mathcal{L})}{\det(qI+\mathcal{L})} \end{split} \end{equation} where \begin{itemize} \item The first binomial coefficients stays for the $k_1-1$ possible choices for the points in $G_1$ (one of those must be $x$) over the possible $N_1-1$ points of the first community (except $x$). In the second community we can choose any $k_2$ vertices over the possible $N_2-1$ vertices of the second community (except $y$). \item The factorials stay for the possible ordering of the nodes covered in each community. Notice that the path on the first community must start by $x$. \item We sum over all the possible jumps from the first community to the second, $j_1$, and from the second to the first, $j_2$ (notice that if $j_2$ must be equal or one smaller than $j_1$). \item For any choice over the product of the previous three terms we have a path that has probability as given by the Marchal formula. 
\end{itemize} In the case in which we condition on having both $x$ and $y$ in the same (first, say) community we have \begin{equation}\label{2comm2} \begin{split} \ensuremath{\mathbb{P}}_x^{LE_q}(|\Gamma_1|=k_1, &|\Gamma_2|=k_2|x\in V_1,\:y\in V_1)=\\ =&\sum_{\gamma: |\gamma_1|=k_1,|\gamma_2|=k_2} \ensuremath{\mathbb{P}}^{LE_q}_x(\Gamma=\gamma)\\ =&\binom{N_1-2}{k_1-1}\binom{N_2}{k_2}\cdot(k_1-1)!(k_2)!\cdot\sum_{j_{1}=0}^{\min\{k_1,k_2\}}\sum_{j_{2}=j_{1}-1}^{j_{1}}\binom{k_1-1}{j_1-\mathbf{1}_{j_{1}\neq j_{2}}}\binom{k_2-1}{j_{2}-\mathbf{1}_{j_{1}=j_{2}}}\cdot\\ &\cdot w_1^{k_1+k_2-(j_1+j_2)-1}w_2^{j_1+j_2}q\frac{\det_{V\setminus\{1,2,\dots,k_1,N_1+1,N_1+2,\dots,N_1+k_2\}}(qI+\mathcal{L})}{\det(qI+\mathcal{L})}. \end{split} \end{equation} Namely, only the first combinatorial term changes. \subsection*{The ratio of determinants} In our \emph{mean-field} setup, the terms in~\cref{2comm} and~\cref{2comm2} coming from ~\cref{LERWlaw} can be explicitly computed. We consider here the two communities case, i.e. $V=V_1\sqcup V_2$, where the communities possibly have different sizes, $|V_1|=N_1$ and $|V_2|=N_2$. Now, consider the matrix obtained by erasing $k_1$ ($k_2$) rows and corresponding columns in the first community (the second one, respectively) in $-\ensuremath{\mathcal L}$. We are left with a square matrix made of two square blocks on the diagonal of size $N_1-k_1=:K_1$ (respectively $N_2-k_2=:K_2$). We will denote this matrix by \begin{equation} -M= \begin{pmatrix} d_1 & \cdots &w_1 & w_2 & \cdots & w_2 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ w_1 & \cdots & d_1 & w_2 & \cdots &w_2 \\ w_2 & \cdots &w_2 &d_2 &\cdots &w_1 \\ \vdots & \ddots & \vdots &\vdots &\ddots & \vdots \\ w_2 &\cdots & w_2 &w_1 &w_1 & d_2\\ \end{pmatrix}= \begin{pmatrix} A_1 &B \\ B^{Tr} &A_2 \end{pmatrix}, \end{equation} where the elements on the diagonal are given by \begin{equation} d_1=-((N_1-1)w_1 + N_2w_2),\qquad d_2=-((N_2-1)w_1 +N_1w_2). \end{equation} We want to find $K_1+K_2$ solutions of the problem \begin{equation} -Mv=\lambda v \label{eigen1} \end{equation} First we consider eigenvectors of the form $v=(x_1,x_1,...,x_1,x_2,...,x_2)^{Tr}$, where the upper component has length $K_1$ and the lower one has length $K_2$. If we write explicitly~\cref{eigen1} we get the following linear system: \begin{equation}\label{smalllumppro} -\begin{pmatrix} d_1 +(K_1-1)w_1 & K_2w_2 \\ K_1w_2 & d_2 + (K_2-1)w_1 \end{pmatrix} \begin{pmatrix}x_1\\mathbf{x}_2\end{pmatrix}=\lambda\begin{pmatrix}x_1\\mathbf{x}_2\end{pmatrix}, \end{equation} from which we get two eigenvalues, which we will refer to as $\lambda_1$ and $\lambda_2$. \\ Then we consider $v=(x_1,x_2,..., x_{K_1},0,...,0)^{Tr}$; with this choice we are left with the system \begin{equation} -\begin{pmatrix} d_1 &\cdots &w_1 \\ \vdots &\ddots &\vdots \\ w_1 &\cdots &d_1 \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_{K_1} \end{pmatrix}=\lambda\begin{pmatrix} x_1 \\ \vdots \\ x_{K_1} \end{pmatrix}, \qquad w_2(x_1+\cdots+x_{K_1})=0\end{equation} and we have to find $K_1-1$ eigenvalues that are associated with eigenvector orthogonal to constants. By direct computation, $A_1$ has eigenvalue $\lambda_1':=(N_1w_1+N_2w_2)$ with multiplicity $K_1-1$. 
With the opposite choice, namely $v=(0,...,0, x_1,..., x_{K_2})^{Tr}$, we get \begin{equation} -\begin{pmatrix} d_2 &\cdots &w_1 \\ \vdots &\ddots &\vdots \\ w_1 &\cdots &d_2 \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_{K_2} \end{pmatrix}=\lambda\begin{pmatrix} x_1 \\ \vdots \\ x_{K_2} \end{pmatrix}, \qquad\qquad w_2(x_1+\cdots+x_{K_2})=0. \end{equation} Namely, there is an eigenvalue $\lambda_2':=(N_2w_1+N_1w_2)$ with multiplicity $K_2-1$. So the spectrum of $M$ is \begin{equation} \text{spec}(M)=(\lambda_1, \lambda_2, \lambda_1',\lambda_2' ) \end{equation} with multiplicity denoted by $\mu_{M}(\cdot)$: \begin{equation} \mu_{M}(\lambda_1)=1,\quad \mu_{M}(\lambda_2)=1,\quad \mu_{M}(\lambda_1')=K_1-1,\quad \mu_{M}(\lambda_2')=K_2-1. \end{equation} Therefore, we can see that the ratio of determinants in~\cref{2comm} and~\cref{2comm2} can be written explicitly. Indeed, at the denominator we have \begin{equation} \det{(qI+\mathcal{L})}=q(q+Nw_2)(q+N_1w_1+N_2w_2)^{N_1-1}(q+N_2w_1+N_1w_2)^{N_2-1}, \end{equation} while at the numerator we are left with \begin{equation} \det_{V\setminus\{1,2,\dots,k_1,N_1+1,N_1+2,\dots,N_1+k_2\}}(qI+\mathcal{L})=(q+\lambda_1)(q+\lambda_2)(q+\lambda_1')^{N_1-k_1-1}(q+\lambda_2')^{N_2-k_2-1} \end{equation} where \begin{equation} \lambda_1':=N_1w_1+N_2w_2, \qquad \lambda_2':=N_1w_2+N_2w_1, \end{equation} while $\lambda_{1}$ and $\lambda_{2}$ are the two solutions of the system in~\cref{smalllumppro}. In particular, if we specialize in the case $N_1=N_2=N$ we can conclude that the ratio of determinants is given by \begin{equation}\label{def:theta} \theta(k_1,k_2):=\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)(q+a)^{k_1+k_2}} \end{equation} where we defined \begin{equation} a:=N(w_1+w_2), \end{equation} and \begin{equation*} \lambda_{i}(k_1,k_2):=-\frac{1}{2}\left[w_1(k_1+k_2)+2Nw_2+(-1)^i\sqrt{w_1^2(k_1-k_2)^2+4\left(N-k_1\right)\left(N-k_1\right)w_2^2}\right],\quad i=1,2. \end{equation*} \subsection*{The path starting from $y$} Now we have to consider the second path starting from $y$ which decides the root at which $y$ will be connected in the forest generated by the algorithm. The latter corresponds to the second factor in~\cref{ma}. Notice that it is sufficient to consider such path in the simpler fashion, i.e. without erasing the loops, since we are only concerned with the absorption of the walker: either in $\gamma$ or killed at rate $q$. Moreover, we can exploit again the symmetry of the model to reduce it to a Markov chain $\bar X$ with state space $\{\bar 1,\bar 2,\bar 3,\bar 4\}$ corresponding to the sets $\left\{V_1\setminus \gamma_1, V_2\setminus \gamma_2, \gamma_1\sqcup \gamma_2,\Delta \right\}$, where $\Delta$ is again the absorbing state, i.e., the ``state-independent'' exponential killing. We will assume that $$|\gamma_i|=k_i,\qquad |V_i|=N_i,\qquad i=1,2.$$ Hence, the transition matrix we are interested in is given by \begin{equation}\label{smallprocmorte} \bar P:=\left(\begin{matrix} Q&R\\0&I \end{matrix} \right), \end{equation} where \begin{equation} Q:=D^{-1}\left(\begin{matrix} \left(N_1-k_1-1 \right)w_1&\left(N_2-k_2 -1 \right)w_2\\ \left(N_1-k_1 \right)w_2&\left(N_2-k_2 \right)w_1 \end{matrix} \right), \end{equation} \begin{equation} D^{-1}:=\left(\begin{matrix}(q+a_1-w_1)^{-1}&0\\0&(q+a_2-w_1)^{-1}\end{matrix}\right),\qquad R:=D^{-1}\left(\begin{matrix} k_1w_1+k_2w_2&q\\ k_1w_2+k_2w_1&q \end{matrix} \right). \end{equation} with \begin{equation} a_1:=N_1w_1+N_2w_2,\qquad a_2:=N_1w_2+N_2w_1. 
\end{equation} The states represent: \begin{itemize} \item[($\bar 1$)] nodes of the $1^{st}$ community that have \emph{not} been covered by the LE-path started at $x$. \item[($\bar 2$)] nodes of the $2^{nd}$ community that have \emph{not} been covered by the LE-path started at $x$. \item[($\bar 3$)] nodes of \emph{both} communities that have been covered by the LE-path started at $x$. \item[($\bar 4$)] the absorbing state $\Delta$. \end{itemize} Called $T_{abs}$ the hitting time of the absorbing set $\left\{\bar 3 ,\bar 4\right\}$, we want to compute the probability that the process $\bar X$ is absorbed in the state, $\bar 4$ and not in $\bar 3$. In terms of our original process, this means that the process is killed before the hitting of the LE-path starting at $x$. By direct computation \begin{align}\label{pmorte} \begin{split} \ensuremath{\mathbb{P}}_{\bar 2}(\bar X(T_{abs})=\bar 4)=&\sum_{k=0}^\infty\bar P^k(\bar 2,\bar 1)\frac{q}{q+a_1-w_1}+\sum_{k=0}^\infty\bar P^k(\bar 2,\bar 2)\frac{q}{q+a_2-w_1}\\ =&\left(\sum_{k=0}^\infty Q^k\right)D^{-1}\binom{q}{q}(2)\\ =&(I-Q)^{-1}D^{-1}\binom{q}{q}(2)\\ =:&P^{\dagger}(2) \end{split} \end{align} notice that the first component of the vector $P^\dagger\in\ensuremath{\mathbb{R}}^2$ corresponds to the \emph{intra-community} case $\left\{x,y\right\}\in V_i$ for some $i$, i.e., $U^{(N)}_q(in)$, while the second one to the \emph{inter-community} case, namely $U^{(N)}_q(out)$. \newline\newline If we now use the assumption that $N_1=N_2=N$, the steps above allow us to write the following formulas \begin{align}\label{vinter2ss} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}\binom{N-1}{k_1-1}\binom{N-1}{k_2}(k_1-1)!(k_2)!\theta(k_1,k_2)P^\dagger(2)\cdot\\ &\cdot\sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}w_1^{k_1+k_2-1-j_1-j_2}w_2^{j_1+j_2}q \end{split} \end{align} \begin{align}\label{vintra2ss} \begin{split} U^{(N)}_q(in)=&\sum_{k_1=1}^{N-1}\sum_{k_2=0}^{N}\binom{N-2}{k_1-1}\binom{N}{k_2}(k_1-1)!(k_2)!\theta(k_1,k_2)P^\dagger(1)\cdot\\ &\cdot\sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}w_1^{k_1+k_2-1-j_1-j_2}w_2^{j_1+j_2}q \end{split} \end{align} where \begin{equation} f_1(j_1,j_2):=j_1-\mathbf{1}_{\left\{j_1\not=j_2 \right\}},\:\:\:f_2(j_1,j_2):=j_2-\mathbf{1}_{\left\{j_1=j_2 \right\}}, \end{equation} $\theta(k_1,k_2)$ as in~\cref{def:theta} and \begin{equation} P^\dagger=\frac{1}{q+a-w_1}(I-Q)^{-1}\binom{q}{q}. \end{equation} By direct computation we see that \begin{equation} P^\dagger=\frac{q}{c}\binom{q+k_2(w_1-w_2)+2w_2N}{q+k_1(w_1-w_2)+2w_2N}. \end{equation} where \begin{equation} c:=(q+k_1w_1)(q+k_2w_1)+Nw_2(2q+(k_1+k_2)w_1)+w_2^2[N(k_1+k_2)-k_1k_2]. \end{equation} \subsection*{Local time interpretation} Now consider the part of the formula concerning the jumps among the two communities of the killed-LE-path starting at $x$, i.e. \begin{equation}\label{431} \sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}w_1^{k_1+k_2-1-j_1-j_2}w_2^{j_1+j_2}. \end{equation} The latter can be thought of as a function of a Markov Chain $(\tilde X_n)_{n\in\ensuremath{\mathbb{N}}}$ on the state space $\left\{\underline{1},\underline{2}\right\}$, with transition matrix \begin{equation}\label{smallproc} \tilde P=\left(\begin{matrix}p&1-p\\1-p&p \end{matrix}\right),\qquad p=\frac{w_1}{w_1+w_2} \end{equation} where the $\underline{i}$-th state stays for the $i$-th community. 
Indeed, we can rewrite~\cref{431} as \begin{equation*} (w_1+w_2)^{k_1+k_2-1}\sum_{j_1=0}^{min(k_1,k_2)}\sum_{j_2=j_1-1}^{j_1}\binom{k_1-1}{f_1(j_1,j_2)}\binom{k_2-1}{f_2(j_1,j_2)}\left(\frac{w_1}{w_1+w_2}\right)^{k_1+k_2-1-j_1-j_2}\left(\frac{w_2}{w_1+w_2}\right)^{j_1+j_2}= \end{equation*} \begin{equation}\label{local} =(w_1+w_2)^{k_1+k_2-1}\tilde \ensuremath{\mathbb{P}}_1(\ell(k_1+k_2)=k_1) \end{equation} with $\ell$ being the local time as in the statement of~\cref{2par2comssintpot}. \subsection*{Geometric smoothing} From the previous steps we get the following expression \begin{align} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)(q+a)^{k_1+k_2}}\cdot\\ &\cdot q(w_1+w_2)^{k_1+k_2-1}\tilde \ensuremath{\mathbb{P}}_1(\ell(k_1+k_2)=k_1)P^\dagger(2). \end{split} \end{align} Next, we would like to make appear a geometric term as in the complete and uniform case of~\cref{proporso}. Notice that multiplying and dividing by $N^{k_1+k_2-1}$ one obtains \begin{align} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}N^{-(k_1+k_2-1)}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}\cdot\\ &\cdot \frac{q}{q+a}\left(\frac{a}{q+a}\right)^{k_1+k_2-1}\tilde \ensuremath{\mathbb{P}}_1(\ell(k_1+k_2)=k_1)P^\dagger(2) \end{split} \end{align} we can then define \begin{equation}\label{xi} \xi_{q,N}:=\frac{q}{q+a}=\frac{q}{q+N(w_1+w_2)} \end{equation} in order to obtain \begin{align}\label{438} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}N^{-(k_1+k_2-1)}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}\cdot\\ &\cdot \ensuremath{\mathbb{P}}(T_q=k_1+k_2)\tilde \ensuremath{\mathbb{P}}_1(\ell(k_1+k_2)=k_1)P^\dagger(2) \end{split} \end{align} and \begin{align}\label{439} \begin{split} U^{(N)}_q(in)=&\sum_{k_1=1}^{N-1}\sum_{k_2=0}^{N}N^{-(k_1+k_2-1)}\left(N-2\right)_{k_1-1}\left(N\right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}\cdot\\ &\cdot \ensuremath{\mathbb{P}}(T_q=k_1+k_2)\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(k_1+k_2)=k_1)P^\dagger(1) \end{split} \end{align} where $T_q$ is an independent random variable with law $Geom\left( \xi_{q,N} \right)$. \subsection*{Conclusions} One can ideally divide the formulas in~\cref{438,439} in five terms, namely \begin{enumerate} \item The entropic term \begin{equation} N^{-(k_1+k_2-1)}\left(N-2\right)_{k_1-1}\left(N\right)_{k_2}\qquad\text{ or }\qquad N^{-(k_1+k_2-1)}\left(N-1\right)_{k_1-1}\left(N-1\right)_{k_2} \end{equation} was already present in the complete and uniform case~\cref{orsoformula}. Indeed \begin{equation} \prod_{h=2}^k\left(1-\frac{h}{N}\right)=N^{-(k-1)}(N-2)_{k-2}. \end{equation} \item The term related to the spectrum of the size 2 matrix presented in~\cref{smalllumppro}, i.e. \begin{equation} \frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)} \end{equation} which is the same in both \emph{in} e \emph{out} community cases. It can be rewritten as the ratio between two parabolas in $q$, i.e., \begin{equation} \frac{q^2+[(k_1+k_2)w_1+2Nw_2]q+(w_1+w_2)[(k_1+k_2)Nw_2+k_1k_2(w_1-w_2)]}{q^2+2Nw_2q} \end{equation} \item The term related to the geometric random variable of parameter $\xi_{q,N}$, which was present also in the case of the uniform graph,~\cref{orsoformula}. 
\item The term related to the local times of the 2-states Markov chain $\tilde P$, in~\cref{smallproc}. \item The term related to the absorption probability, i.e., to the quantity $P^\dagger$, see~\cref{pmorte}, as a function of the process $\bar{P}$ presented in~\cref{smallprocmorte}. \end{enumerate} It is worth noticing that the $P^\dagger$ above is slightly different from the $P^\dagger_\star$ in the statement of~\cref{2par2comssintpot} which contains the extra factor $\eta_\star$. At this point by setting \begin{equation} g'_{out}(k_1,k_2):=N^{-(k_1+k_2-1)}\left(N-1 \right)_{k_1-1}\left(N-1 \right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}P^{\dagger}(2), \end{equation} \begin{equation} g'_{in}(k_1,k_2):=N^{-(k_1+k_2-1)}\left(N-2 \right)_{k_1-1}\left(N \right)_{k_2}\frac{(q-\lambda_1(k_1,k_2))(q-\lambda_2(k_1,k_2))}{q(q+2Nw_2)}P^{\dagger}(1), \end{equation} we can write \begin{align}\label{446} \begin{split} U^{(N)}_q(out)=&\sum_{k_1=1}^{N}\sum_{k_2=0}^{N-1}g'_{out}(k_1,k_2)\ensuremath{\mathbb{P}}(T_q=k_1+k_2)\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(k_1+k_2)=k_1)\\ =&\sum_{n=1}^{2N}\sum_{k_1+k_2=n}g'_{out}(k_1,k_2)\ensuremath{\mathbb{P}}(T_q=n)\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k_1), \end{split} \end{align} and \begin{align}\label{447} \begin{split} U^{(N)}_q(in)=&\sum_{k_1=1}^{N-1}\sum_{k_2=0}^{N}g'_{in}(k_1,k_2)\ensuremath{\mathbb{P}}(T_q=k_1+k_2)\tilde{\ensuremath{\mathbb{P}}}_{\underline{1}}(\ell(k_1+k_2)=k_1)\\ =&\sum_{n=1}^{2N}\sum_{k_1+k_2=n}g'_{in}(k_1,k_2)\ensuremath{\mathbb{P}}(T_q=n)\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k_1),\\ \end{split} \end{align} which is equivalent to the statement in~\cref{2par2comssintpot}. \qed \subsection{Proof of ~\cref{phasetrans}}\label{detect} \noindent{ \bf{ Proofs of \bf{(a)} and \bf{(b)}: $1-\beta<\alpha<(=)\frac{1}{2}$ (detectability) } } As expressed in the following lemma in this regime the RW is confined to its starting community for the entire life-time. \begin{lemma}[RW is confined to its community up to dying]\label{lemmageom} Let $1>\alpha>1-\beta$ and for $x\in [2N]$, consider the event $$E_x:=\{T_q>T_x^{out} \}$$ where $T_x^{out}$ is the first time in which the RW moves out of the community in which $x$ lies. Then, as $N\to\infty$, $$\ensuremath{\mathbb{P}}_x(E_x)=o(1).$$ \end{lemma} \begin{proof} Let $Z$ be a r.v. that can assume values in the set $\{Out, In, \Delta\}$ with probabilities: $$\ensuremath{\mathbb{P}}(Z=Out)= \frac{N^{1-\beta}}{N^\alpha+N+ N^{1-\beta}}=:a_N,$$ $$\ensuremath{\mathbb{P}}(Z=In)= \frac{N}{N^\alpha+N+ N^{1-\beta}}=:b_N \quad \text{ and } \quad \ensuremath{\mathbb{P}}(Z=\Delta)=1- (a_N+b_N).$$ Let $(Z_n)_{n\in\ensuremath{\mathbb{N}}}$ be a sequence of i.i.d. r.v.s with the same law of $Z$ and notice that $$ \ensuremath{\mathbb{P}}(T_q<T_x^{out})=\ensuremath{\mathbb{P}}\left(\min\{n\ge0\:|\:Z_n=\Delta \} <\min\{n\ge0\:|\:Z_n=Out \}\right).$$ Therefore \begin{align*} \ensuremath{\mathbb{P}}_x(E_x)=\ensuremath{\mathbb{P}}_x(T_q>T_x^{out} )=&\sum_{n=1}^\infty\ensuremath{\mathbb{P}}_x(T_x^{out}=n,T_q>n)\\ =&\sum_{n=1}^\infty b_N^{n-1}a_N\\ =&\frac{a_Nb_N}{1-b_N}\sim N^{1-\beta-\alpha}, \end{align*} from which the claim. 
\end{proof} In view of the decomposition in~\cref{LEdec} and the above lemma, we can write for any $x\neq y$ \begin{align}\notag U^{(N)}_q(x,y)=&\sum_{\gamma}\ensuremath{\mathbb{P}}^{LE}_x(\gamma)\[\ensuremath{\mathbb{P}}_y(T_\gamma>T_q|E_x^c)\ensuremath{\mathbb{P}}_y(E_x^c)+\ensuremath{\mathbb{P}}_y(T_\gamma>T_q|E_x)\ensuremath{\mathbb{P}}_y(E_x)\right]\\ \notag =&o(1)+(1-o(1))\sum_{\gamma}\ensuremath{\mathbb{P}}_x^{LE}(\gamma)\ensuremath{\mathbb{P}}_y(T_\gamma>T_q|E_x^c)\\ \label{LEdetect}\sim&\sum_{\gamma}\ensuremath{\mathbb{P}}_x^{LE}(\gamma)\ensuremath{\mathbb{P}}_y(T_\gamma>T_q|E_x^c). \end{align} Let us first consider $U^{(N)}_q(out)$. In this case, by ~\cref{lemmageom}, for any $\alpha\leq 1/2$ and uniformly in $\gamma$, we have that \begin{align*} \ensuremath{\mathbb{P}}_y(T_\gamma<T_q|E_x^c)\le&\ensuremath{\mathbb{P}}_y(T_y^{out}<T_q|E_x^c)\\ =&\ensuremath{\mathbb{P}}_y(E_y)\\ =&o(1). \end{align*} As a consequence $\ensuremath{\mathbb{P}}_y(T_\gamma>T_q|E_x^c)\geq 1-o(1)$, and by plugging this estimate in~\cref{LEdetect}, we get $U^{(N)}_q(out)\to 1$. Concerning $U^{(N)}_q(in)$, one has to notice that, for every LERW $\gamma$ starting from $x$ and ending at the absorbing state, we can consider the event $$E_{\gamma,y}=\{T_y^{out}<\min(T_\gamma,T_q) \}.$$ Once more, uniformly in $\gamma$, we get by~\cref{lemmageom} that \begin{align*} \ensuremath{\mathbb{P}}_y(E_{\gamma,y})\leq \ensuremath{\mathbb{P}}_y(E_y)=o(1) \end{align*} Thus, for $x,y \in [N]$, by~\cref{LEdetect}, we can estimate \begin{align*} U^{(N)}_q(x,y)=&o(1)+(1-o(1))\sum_{\gamma}\ensuremath{\mathbb{P}}_x^{LE}(\gamma|E_x^c)\ensuremath{\mathbb{P}}_y(T_\gamma>T_q|E_x^c,E_{\gamma,y}^c) \end{align*} Notice that, under such conditioning, the sum can be read as the probability that two vertices in a complete graph with $N$ vertices end up in two different trees. Therefore, this reduces to~\cref{orsolimite}, which in turns gives $U^{(N)}_q(in)\to 0$ for $\alpha<1/2$ and $U^{(N)}_q(in)\to \varepsilon_0(\alpha)$ else. \qed \noindent{\bf{ Proof of {\bf(f)} : $\alpha>\frac{1}{2}$ (high killing region)}} We will only show that $U^{(N)}_q(in)\to 1$, this will suffice since e.g. by direct computation one can check that $U^{(N)}_q(in)\geq U^{(N)}_q(out)$. Observe first that being $\alpha>\frac{1}{2}$, the length of the Loop-Erased path $\Gamma$ must be ``small'' with high probability. In particular we can bound \begin{align*} \ensuremath{\mathbb{P}}^{LE_q}_x\(|\Gamma|>\sqrt{N} \)\le& \ensuremath{\mathbb{P}}(T_q>\sqrt{N})\\ =&\(1-\frac{N^\alpha}{N+N^{1-\beta}+N^\alpha} \)^{\sqrt{N}}\\ =&o(1), \end{align*} hence \begin{align*} U^{(N)}_q(in)=&o(1)+\sum_{\gamma:\:|\gamma|\le\sqrt{n}}\ensuremath{\mathbb{P}}_x^{LE_q}(\Gamma=\gamma)\ensuremath{\mathbb{P}}_y(T_\gamma>T_q)\\ \ge&\sum_{\gamma:\:|\gamma|\le\sqrt{N}}\ensuremath{\mathbb{P}}_x^{LE_q}(\Gamma=\gamma)\frac{N^\alpha}{\sqrt{N}+N^\alpha}\\ =&1-o(1). \end{align*} \qed We next prove the remaining items in~\cref{phasetrans} for which we will implement a similar strategy which we start explaining. In all remaining regimes we need to show that $U^{(N)}_q(\star)$, $\star\in\{in,out\}$ either vanishes or stays bounded away from zero. To this aim, we will use the representation in~\cref{g}. Depending on the parameter regimes, we will split the sum over $t$ in different pieces to be treated according to the asymptotic behavior of the involved factors. To simplify the exposition we will restrict in what follows to the positive quadrant $\alpha,\beta>0$. 
We stress however that, as the reader can check, the following estimates hold true and actually converge faster even outside of the positive quadrant. Let us start with a few observations. We notice that $\hat{f}(n,k)\leq 1$ for every choice of $k,N,n$, moreover $\hat{f}(t,n)=0$ if $n\ge N$. Furthermore, for each $N$ , \begin{equation}\label{sum1}\sum_{n=1}^{\infty} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)=\sum_{n=1}^{\infty} \ensuremath{\mathbb{P}}(T_q=n)=1,\end{equation} and while estimating the involved factors it will be crucial the behavior of the product $\left(\hat{f} \theta P_{\star}^{\dagger}\right)(n,k)$ for which we can in general observe the following facts. \begin{enumerate}[(A)] \item\label{b} For any $\varepsilon >0$, if $n>N^{1/2+\varepsilon}$, then it follows from~\cref{eolo} that $N\mapsto \hat{f}_N$ decays to zero, uniformly in $k$, faster than any polynomial as $N\to \infty$. For such $n$'s , since $N\mapsto \theta_N P^\dagger_{\star}$ is polynomially bounded (uniformly in $n,k$), the contribution in~\cref{g} of such terms can be neglected. \item\label{c} Whenever we consider $n$'s for which $ \theta P^\dagger_{\star}=o(1)$, because of~\cref{sum1} and the uniform control on $\hat{f}$, the contribution of such terms in~\cref{g} can also be neglected. \item\label{d} For $n$'s for which neither~\cref{b} nor~\cref{c} hold, we will estimate the asymptotics of such part of the sum by controlling the mass of the geometric time $T_q$ against $\theta P^\dagger_{\star}$, and in the most delicate cases (on the separation lines in~\cref{fig:phdiag}), taking into account the behavior of the local time too. \end{enumerate} We are now ready to treat the remaining parameter regimes using such facts. \noindent{ \bf{ Proof of {\bf(d)}: $\alpha<\min\{\frac{1}{2},1-\beta\}$ (changing-communities before dying) } } In this regime, the overall picture resembles the phenomenology of the complete graph. In particular, the RW will manage to change community before being killed and up to the killing time scale, it will forget its starting community. Moreover, with high probability a single tree of size $2N(1-o(1))$ will be formed, so that, given any two points $x,y$, they will end up in the same tree with high probability independently on their communities. To prove the claim notice that, uniformly in $n,k$, \begin{equation}\label{Pblue} P^\dagger_{\star}(n,k)\sim \frac{N^{1-\beta+\alpha} + N^{\alpha} k_\star} {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)}= \frac{N^{1-\beta+\alpha} } {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)}+ O\left( \frac{1}{N^{1-\beta-\alpha} } \right). \end{equation} As a consequence the asymptotics of $U^{(N)}_q(\star)$ will be independent of $\star$. To show that such a limit is zero we argue as follows. 
Within this parameter region: \begin{equation} \theta(n,k)\sim 1+ \frac{nN^{\alpha} + 2k(n-k)} {2N^{1-\beta+\alpha}}, \end{equation} which together with~\cref{Pblue} leads to \begin{align}\label{TPblue} \nonumber\theta P^\dagger_{\star}(n,k)=& \frac{N^{1-\beta+\alpha}} {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)} +\frac{k(n-k)} {2N^{1-\beta+\alpha}+nN^{1-\beta}+k(n-k)} + O\left( \frac{k(n-k)}{N^{2(1-\beta)} }\right) + O\left( \frac{nN^{\alpha}}{N^{2(1-\beta)}} \right)\\ =:&\theta P^\dagger_I(n,k) +\theta P^\dagger_{II}(n,k)+\theta P^\dagger_{III}(n,k)+\theta P^\dagger_{IV}(n,k),\end{align} We can now plug in this asymptotic representation of $\theta P^\dagger_{\star}$ in~\cref{g}, and separately treat the four resulting terms. For the first term, namely the sum in~\cref{g} with $\theta P^\dagger_I$ in place of $\theta P^\dagger_{\star}$, we split the sum in $n$ into two parts at $N^{\alpha+\varepsilon}$, for small $\varepsilon>0$, and show that they both goes to zero, by using~\cref{d} and~\cref{c}, respectively In fact, with this ``cut'' we see that: \begin{align}\label{formulavai1} (I):= &\sum_{n=1}^{\infty} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_I(n,k)\\ =&\sum_{n<N^{\alpha+\varepsilon}} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\cdot 1\cdot \Theta(1)+\sum_{n\ge N^{\alpha+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=0}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\cdot 1 \cdot o(1)\\ =&\Theta\(\sum_{n<N^{\alpha+\varepsilon}} \ensuremath{\mathbb{P}}(T_q=n)\)+o(1)=o(1). \end{align} Analogously, for the second term we split the sum over $n$ into two parts at $N^{1/2+\varepsilon}$, with small $\varepsilon>0$. Using~\cref{d} for the first part and~\cref{b} for the second one, we see that \begin{align}\label{formulavai2} (II):= &\sum_{n=1}^{\infty} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_{II}(n,k)\\ =&\sum_{n<N^{1/2+\varepsilon}} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\cdot 1\cdot O(1)+o(1)\\ =&O\(\sum_{n<N^{1/2+\varepsilon}} \ensuremath{\mathbb{P}}(T_q=n) \)+o(1)\\ =&o(1). \end{align} For the third term we need to split the corresponding sum into three parts at $T_1:=N^{1-\beta-\varepsilon}$ and $T_2:=N^{1/2+\varepsilon}$, which will be controlled by~\cref{c},~\cref{d} and~\cref{b}, respectively. That is \begin{align}\label{formulavai3} (III):= &\sum_{n=1}^{\infty} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_{III}(n,k)\\ \le&\sum_{n<T_1} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\cdot 1 \cdot o(1)+\sum_{n= T_1}^{T_2}\ensuremath{\mathbb{P}}(T_q=n))\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\cdot 1 \cdot O(N^{-1+2\beta+2\varepsilon})+o(1)\nonumber\\ =&o(1)+O\(N^{\alpha-\beta-\varepsilon}\cdot 1\cdot 1\cdot N^{-1+2\beta+2\varepsilon}\)+o(1)\\ =&o(1). \end{align} Finally, for the last term, we split the sum at $N^{1/2+\varepsilon}$. 
Indeed we see that: on the one hand, for $n\le N^{1/2+\varepsilon}$, we can use~\cref{d} since $$\theta P^\dagger_{IV}(n,k)=O\(N^{\frac{1}{2}+\varepsilon+\alpha-2(1-\beta)} \)\qquad\text{ and }\qquad\ensuremath{\mathbb{P}}\(T_q\le N^{\frac{1}{2}+\varepsilon}\)=O\(N^{-\frac{1}{2}+\alpha+\varepsilon} \).$$ On the other hand, for $n\geq N^{1/2+\varepsilon}$, we can argue as in~\cref{b}. Hence, \begin{align}\label{formulavai4} (IV):= &\sum_{n=1}^{\infty} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k) \hat{f}(n,k) \theta P^\dagger_{IV}(n,k)\\ \le&\sum_{n=1}^{N^{1/2+\varepsilon}} \ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n} \tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\cdot 1\cdot O\(N^{\frac{1}{2}+\varepsilon+\alpha-2(1-\beta)} \)+o(1)\\ =&O\(N^{-\frac{1}{2}+\alpha+\varepsilon} \cdot 1\cdot 1\cdot N^{\frac{1}{2}+\varepsilon+\alpha-2(1-\beta)} \)+o(1)=o(1) \end{align} \qed \noindent{\bf{Proofs of {\bf(c)} and {\bf(e)} (high-entropy separating lines)}} We start by proving {\bf(e)}, i.e. \begin{equation} \text{if } \alpha=\frac{1}{2}<1-\beta\Longrightarrow \exists \varepsilon>0\text{ s.t. }\lim_{N\to\infty}U^{(N)}_q(in)=U_q(out)=\varepsilon. \end{equation} Start noting that under our assumptions on $\alpha$ and $\beta$ we have that \begin{equation}\label{thetagiallo} \theta(n,k)\sim\frac{n\sqrt{N}+2N^{\frac{3}{2}-\beta}+2k(n-k)}{2N^{\frac{3}{2}-\beta}}, \end{equation} and \begin{equation}\label{pmortegiallo} P_{\star}^\dagger(n,k)\sim\frac{k_\star\sqrt{N}+N^{\frac{3}{2}-\beta}}{2N^{\frac{3}{2}-\beta}+nN^{1-\beta}+k(n-k)}. \end{equation} We are going to split the sum over $n$ in~\cref{g} in three parts: \begin{itemize} \item $n\le N^{\frac{1}{2}-\varepsilon}$. For such $n$'s we have that the product $\theta P_\star^\dagger(n,k)$ is of order $1$. Hence we can neglect this part by using~\cref{d} together with the estimate $$\ensuremath{\mathbb{P}}(T_q\le N^{\frac{1}{2}-\varepsilon})=O\(N^{-\frac{1}{2}-\alpha-\varepsilon} \).$$ \item $ n> N^{\frac{1}{2}+\varepsilon}$. Also this part can be neglected thanks to the argument of~\cref{b}. \item $N^{\frac{1}{2}-\varepsilon}<n\le N^{\frac{1}{2}+\varepsilon}$. This is the delicate non-vanishing part. We start by noticing that, due to~\cref{thetagiallo} and~\cref{pmortegiallo}, the leading term in $\theta P_\star^\dagger$ does not involve $k_\star$, so that ---at first order--- $U^{(N)}_q(in)$ must equal $U^{(N)}_q(out)$. In order to show that the latter two are asymptotically bounded away from zero, we fix $c\in(0,1)$ and consider \begin{align} U^{(N)}_q(\star)\ge&\sum_{n=c\sqrt{N}}^{\sqrt{N}/c}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\theta(n,k)P^\dagger_\star(n,k)\hat{f}(n,k)\\ \hat f=\Theta(1)\Rightarrow=&\Omega\( \sum_{n=c\sqrt{N}}^{\sqrt{N}/c}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\theta(t,k) P^\dagger_\star(n,k)\)\\ \theta P^\dagger_\star(n,k)\in\[\frac{1}{2+c^{-1}},\frac{1}{2+c}\right]\Rightarrow=&\Omega\(\sum_{n=c\sqrt{N}}^{\sqrt{N}/c}\ensuremath{\mathbb{P}}(T_q=n) \)=\Omega(1).\label{lessthan1} \end{align} Moreover, thanks to~\cref{lessthan1} we can easily deduce that the limit is strictly smaller than $\frac{1}{2}$. \end{itemize} We next conclude by giving the proof of {\bf(e)}, i.e., we are going to show that \begin{equation} \text{if } \alpha=1-\beta<\frac{1}{2}\Longrightarrow \exists \varepsilon>0\:\text{ s.t. 
}\lim_{N\to\infty}U^{(N)}_q(in)=0\:\text{ while }\lim_{N\to\infty}U^{(N)}_q(out)=\varepsilon. \end{equation} Observe that, under our assumptions on $\alpha$ and $\beta$, we have that \begin{equation} \theta(n,k)\sim\frac{3N^{2\alpha}+nN^\alpha+2k(n-k)}{3N^{2\alpha}}, \end{equation} and \begin{equation} P_{\star}^\dagger(n,k)\sim\frac{N^{2\alpha}+k_\star N^\alpha}{3N^{2\alpha}+2nN^{\alpha}+k(n-k)}, \end{equation} hence, their product behaves asymptotically as \begin{equation}\label{formulavai5} \theta P_{\star}^\dagger(n,k)=\Theta\(1+\frac{k_\star}{N^\alpha}\). \end{equation} To evaluate the asymptotic behavior of $U^{(N)}_q(\star)$, we split the sum over $n$ in~\cref{g} in three pieces: \begin{itemize} \item $n\le N^{\alpha+\varepsilon}$: where, thanks to~\cref{formulavai5}, we know that $\theta P_{\star}^\dagger(n,k)=O(N^\varepsilon)$. We argue as in~\cref{d}, obtaining \begin{align} \sum_{n\le N^{\alpha+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\theta(n,k)P^\dagger_\star(n,k)\hat{f}(n,k)\le&O\(N^\varepsilon\sum_{n\le N^{\alpha+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\)\\ =&O\(N^{-1+2\alpha} \) \end{align} \item $n> N^{\frac{1}{2}+\varepsilon}$: in this case we can argue as in~\cref{b}. \item $N^{\alpha+\varepsilon}<n\le N^{\frac{1}{2}+\varepsilon}$: in this case we have to distinguish between $U^{(N)}_q(in)$ and $U^{(N)}_q(out)$. \end{itemize} Consider first $U^{(N)}_q(in)$. We call $E_n$ the following event concerning the Markov chain $(\tilde X_n)_{n\in\ensuremath{\mathbb{N}}}$ \begin{equation} E_n:=\left\{\text{At least one jump occurs before time $n$}\right\}. \end{equation} Notice that if $N^{\alpha+\varepsilon}<n\le N^{\frac{1}{2}+\varepsilon}$ then the event $E_n^c$ occurs with high probability. Hence, for any choice of $n\in[1,N]$ and $k\in[1,n]$ we can write \begin{align} \tilde\ensuremath{\mathbb{P}}_{\underline{1}}\(\ell(n)=k \)=&\tilde\ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k|E_n^c)\tilde\ensuremath{\mathbb{P}}_{\underline{1}}(E_n^c)+\tilde\ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k|E_n)\tilde\ensuremath{\mathbb{P}}_{\underline{1}}(E_n) =\delta_{k,n}+o(1), \end{align} $\delta_{k,n}$ being the Kronecker delta. Hence \begin{align} \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde \ensuremath{\mathbb{P}}_{\underline{1}}(\ell(n)=k)\theta P^\dagger_{in}(n,k)\hat{f}(n,k)=&\Theta\( \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\delta_{k,n}\(\frac{n-k}{N^\alpha}+1\)\)\\ =&\Theta\( \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\)=o(1). \end{align} Concerning $U^{(N)}_q(out)$, it is easy to get a lower bound via a soft argument by considering the events \begin{equation} B_x=\left\{\text{The LERW starting at $x$ never changes community} \right\} \end{equation} \begin{equation} B'_y=\left\{\text{The RW starting at $y$ does not change community before dying} \right\}. \end{equation} Indeed, \begin{align*} U^{(N)}_q(out)\ge&\ensuremath{\mathbb{P}}\(B_x\)\ensuremath{\mathbb{P}}\(B'_y\)=\(\frac{N^\alpha}{N^\alpha+N^{1-\beta}}\)^2=\frac{1}{4}. \end{align*} Finally, we are left to show that $U^{(N)}_q(out)$ is asymptotically bounded away from $1$. 
We consider the further split \begin{align*} U^{(N)}_q(out)\le&o(1)+\sum_{n=N^{\alpha+\varepsilon}}^{\sqrt{N}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde{\ensuremath{\mathbb{P}}}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k)+\sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde{\ensuremath{\mathbb{P}}}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k). \end{align*} Focusing on the first sum in the latter display, thanks to~\cref{formulavai5}, we have that \begin{align*} \sum_{n=N^{\alpha+\varepsilon}}^{\sqrt{N}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde{\ensuremath{\mathbb{P}}}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k)\le& \sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2}}\ensuremath{\mathbb{P}}(T_q=n)\frac{n}{N^\alpha}+\sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2}}\ensuremath{\mathbb{P}}(T_q=n)\\ =&\frac{1}{N}\sum_{n= N^{\alpha+\varepsilon}}^{N^{1/2}}\(1-\frac{1}{N^{1-\alpha}} \)^n+o(1)\\ \le&\frac{1}{N}\(\frac{\sqrt{N}(\sqrt{N}+1)}{2} \)\sim\frac{1}{2}. \end{align*} Concerning the second sum, we have \begin{align*} \sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\sum_{k=1}^{n}\tilde{\ensuremath{\mathbb{P}}}_{\underline{1}}(\ell(n)=k)(\hat f\theta P^\dagger_{out})(n,k)=&O\( \sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}\ensuremath{\mathbb{P}}(T_q=n)\hat f(n,n)\frac{n}{N^\alpha} \)\\ =&O\(\frac{1}{N}\sum_{n=\sqrt{N}}^{N^{\frac{1}{2}+\varepsilon}}ne^{-\frac{n^2}{2N}}\)\\ =&O\(\frac{1}{\sqrt{N}}\sum_{m=1}^{N^{\varepsilon}}me^{-\frac{m^2}{2}}\)\\ =&O\(\frac{N^\varepsilon}{\sqrt{N}}\sum_{m=1}^{\infty}e^{-\frac{m^2}{2}} \)=o(1). \end{align*} \qed\\ \subsection{Proof of~\cref{macro}} Let $0=\lambda_0\le \lambda_1\le\dots\le\lambda_{2N-1}$ be the eigenvalues of $-\ensuremath{\mathcal L}$. As shown in\cite[Prop. 2.1]{AG}, the number of blocks of the induced partition, $|\Pi_q| $, is distributed as the sum of $2N$ independent Bernoulli random variables with success probabilities $\frac{q}{q+\lambda_i}$. That is $$|\Pi_q|\overset{d}{\sim} \sum_{i=0}^{2N-1}X_i^{(q)},\qquad\text{ with}\qquad X_i^{(q)}\overset{d}{\sim} Ber\(\frac{q}{q+\lambda_i} \),\quad i\in\left\{0,\dots,2N-1 \right\}$$ In case of the two-communities model we have $$\lambda_0=0,\qquad\lambda_1=2N^{1-\beta},\qquad\lambda_i=N(1+N^{-\beta}),\quad i\in\left\{2,\dots,2N-1 \right\}.$$ Therefore $$|\Pi_q|\overset{d}{\sim} 1+X+\sum_{i=1}^{2(N-1)}Y_i$$ where $$X\overset{d}{\sim} Ber\(\frac{N^\alpha}{2N^{1-\beta}+N^\alpha} \)\qquad\text{and}\qquad Y_i\overset{d}{\sim} Ber\(\frac{N^\alpha}{N(1+N^{-\beta})+N^\alpha} \),\quad i\in\{1,\dots,2(N-1)\}.$$ Hence $$\ensuremath{\mathbb{E}}|\Pi_q|\sim 1+ \frac{N^\alpha}{N^{1-\beta}+N^\alpha}+\frac{2N^{\alpha+1}}{N^\alpha+N}=\Theta(N^{\alpha\wedge 1}).$$ Moreover, we can prove the concentration result claimed in the first part of the statement by using the multiplicative version of the Chernoff bound on the sum of $Y_i$'s. Indeed, denoting by $$S:=\sum_{i=1}^{2(N-1)}Y_i$$ we have that $$\ensuremath{\mathbb{P}}\(\left|S-\ensuremath{\mathbb{E}} S \right|\ge \varepsilon\ensuremath{\mathbb{E}} S \)\le 2\exp\(-\frac{\varepsilon^2\ensuremath{\mathbb{E}} S}{3} \),$$ and since $$\ensuremath{\mathbb{E}} S\sim\frac{2N^{\alpha+1}}{N^\alpha+N}=\omega(1)$$ we can deduce the concentration of $|\Pi_q|$.\\ Notice also that the second part of the statement is a trivial consequence of the detectability result of~\cref{phasetrans}. 
\qed \subsection{Proof of~\cref{RWexpress}} In this proof we will consider the probability measure $\nu_q$ on the space of rooted spanning forests studied in~\cite{AG}, namely, \begin{equation} \nu_q(F)=\frac{q^{|\rho(F)|}w(F)}{Z(q)}, \quad F\in \ensuremath{\mathcal F}, \end{equation} where we denoted by $\rho(F)$ the set of roots of $F\in\ensuremath{\mathcal F}$. As mentioned in~\cref{Wilson}, we stress that the measure in~\cref{LEP} can be obtained by projecting this forest measure $\nu_q(\cdot)$ on the set of partitions. Call $\mathcal{B}_q$ the $\sigma$-field generated by the block structure $\Pi_q$ of the random forest $F$. By~\cite[Proposition 6.4]{AG}, we have \begin{align} \ensuremath{\mathbb{P}}\(x,y\in \rho(F)\bigg\rvert \ensuremath{\mathcal B}_q \)=\mathbf{1}_{\{B_q(x)\neq B_q(y)\}}\frac{\mu(x)\mu(y)}{\mu(B_q(x))\mu(B_q(y))}. \end{align} Now we notice that by~\cref{RWIP} and the tower property, \begin{align} \overline{U}_q(x,y)=\ensuremath{\mathbb{E}}\[\ensuremath{\mathbb{E}}\[\frac{\mathbf{1}_{\{B_q(x)\neq B_q(y)\}}}{\mu(B_q(x))\mu(B_q(y))} \bigg\rvert \ensuremath{\mathcal B}_q \right] \right]=\frac{1}{\mu(x)\mu(y)}\ensuremath{\mathbb{P}}\(x,y\in\rho(F)\). \end{align} We can now invoke~\cite[Theorem 3.4]{AG}, stating that the set of roots is a determinantal process with kernel $K_q$. As a consequence we obtain that \begin{equation} \ensuremath{\mathbb{P}}\(x,y\in\rho(F) \)=K_q(x,x)K_q(y,y)-K_q(x,y)K_q(y,x), \end{equation} and the claim readily follows. \qed \subsection{Proof of~\cref{Rwdetection}} We consider here the discrete time version of the process $X$ as presented in~\cref{proporso}, see~\eqref{discrete}. As a warm-up, we start by computing the potential in the complete graph with unitary weights. In this case, \begin{equation} K_q(x,y)= \delta_{x,y}\ensuremath{\mathbb{P}}(T_q=1)+\sum_{t\ge 1}\ensuremath{\mathbb{P}}_x\(X_t=y\:|\:T_q=t+1 \)\ensuremath{\mathbb{P}}(T_q=t+1), \end{equation} where \begin{equation} r_q:=\frac{q}{N+q} \qquad\text{ and } \quad \ensuremath{\mathbb{P}}(T_q=t+1)=r_q(1-r_q)^{t},\qquad\forall t\in \ensuremath{\mathbb{N}}_0. \end{equation} Therefore, \begin{align} K_q(x,y)= r_q\delta_{x,y}+\frac{1}{N}\sum_{t\ge 1}r_q(1-r_q)^t =r_q\delta_{x,y}+\frac{1}{N}\(1-r_q\)=\frac{q\delta_{x,y}+1}{q+N}. \end{align} From which: \begin{equation}\label{completeRWIP} \overline{U}_q(x,y)= \(\frac{N}{q+N}\)^2\(q^2+2q\). \end{equation} Thus, in order to have a non-degenerate potential on $\mathcal{K}_N$, we need to take $q=\Theta(1)$.\qed We next move to the mean-field-community model $\mathcal{K}_{2N}(w_1,w_2)$ with $w_1=1$, $w_2=N^{-\beta},\beta>0$ and arbitrary $q$. The corresponding discrete-time RW is killed at an independent geometric time $T_q\overset{d}{\sim} Geom(r_q)$ with \begin{equation}\label{rate} r_q:=\frac{q}{N+N^{1-\beta}+q}. \end{equation} Denoting by $J_t$ the random variable that counts the number of times, up to time $t$, in which this random walk changes community, we notice that: \begin{equation} \ensuremath{\mathbb{P}}(J_t=k\:|\:\tau=t+1)=\binom{t}{k}(1-c)^{t-k}c^k,\qquad\forall k\in[0,t], \end{equation} that is, conditioning on $T_q=t+1$, $J_t$ has binomial distribution $ Bin(t,c)$ with success parameter \begin{equation} c:=\frac{N^{1-\beta}}{N+N^{1-\beta}}. \end{equation} We are now in shape to compute the probability that $x$ is absorbed in some $y$. Without loss of generality we assume $x\in[N]$, so that $y\in[N]$ and $y\in[2N]\setminus[N]$ determines the $in-$ and $out-$potential, respectively. 
Thus: \begin{align}\notag K_q(x,y)=&\delta_{x,y}\ensuremath{\mathbb{P}}\(T_q=1\)+\sum_{t\ge 1}\ensuremath{\mathbb{P}}(T_q=t+1)\sum_{k\ge 0}\ensuremath{\mathbb{P}}_x(X_t=y\:|\:J_t=k;\:T_q=t+1)\ensuremath{\mathbb{P}}(J_t=k\:|\:T_q=t+1)\\ \label{latter}=&\delta_{x,y}r_q+\frac{1}{N}\sum_{t\ge 1}r_q(1-r_q)^t \big[\mathbf{1}_{y\in[N]}\ensuremath{\mathbb{P}}\(\text{Bin}(t,c)\in 2\ensuremath{\mathbb{N}}_0\)+\mathbf{1}_{y\in[2N]\setminus[N]}\ensuremath{\mathbb{P}}\(\text{Bin}(t,c)\in 2\ensuremath{\mathbb{N}}_0+1\)\big]\\ \label{latter2}=&\delta_{x,y}r_q+O\(N^{-1}\), \end{align} where the last identity is due to the fact that the sum in~\cref{latter} is a probability and hence bounded above by $1$. \noindent{{\bf (high killing)} When $q=N^\alpha$, with $\alpha>0$, $r_q=\omega\(N^{-1}\)$, thus the $O\(N^{-1}\)$ term in~\cref{latter2} is negligible, and $\overline{U}_q(in/out)\sim N^2 r_q^2$. In particular, the potential diverges as $N^2$ or $N^{2\alpha}$ depending on $\alpha\geq 1$ or $\alpha<1$, respectively. \noindent{{\bf (order one killing)} In the regime $q=O(1)$, the $O(N^{-1})$ term in~\cref{latter2} is no longer negligible and needs to be analyzed further. Let us first consider the sub-regime $q=\Theta(1)$. Notice that, when $t=\Theta(1/r_q)$, \begin{equation}\label{333} \ensuremath{\mathbb{E}}(\text{Bin}(t,c))=\frac{c}{r_q}=\frac{N^{1-\beta}}{q}=\begin{cases} o(1)&\text{if }\beta>1\\ \omega(1)&\text{if }\beta<1. \end{cases} \end{equation} Clearly, $\ensuremath{\mathbb{E}}(\text{Bin}(t,c))=o(1)$ implies that $\ensuremath{\mathbb{P}}\(\text{Bin}(t,c)\in 2\ensuremath{\mathbb{N}}_0 \)=1+o(1)$, while if $\ensuremath{\mathbb{E}}(\text{Bin}(t,c))=\omega(1)$ then $\ensuremath{\mathbb{P}}\(\text{Bin}(t,c)\in 2\ensuremath{\mathbb{N}}_0 \)=\frac{1}{2}+o(1).$ From which, if $\beta>1$, then \begin{align}\label{111} \sum_{t\ge 1}r_q(1-r_q)^t\ensuremath{\mathbb{P}}\(\text{Bin}(t,c)\in 2\ensuremath{\mathbb{N}}_0\)\sim&1, \end{align} while, for $\beta<1$: \begin{align}\label{222} \sum_{t\ge 1}r_q(1-r_q)^t\ensuremath{\mathbb{P}}\(\text{Bin}(t,c)\in 2\ensuremath{\mathbb{N}}_0+1\)\sim \sum_{t\ge 1}r_q(1-r_q)^t\ensuremath{\mathbb{P}}\(\text{Bin}(t,c)\in 2\ensuremath{\mathbb{N}}_0\)\sim \frac{1}{2}, \end{align} where in~\cref{111,222} we used the fact that, in order to compute the first order, it is sufficient to restrict the sum over $t$ to the values on the scale $\Theta(1/r_q)$. By~\cref{latter} and the above estimates, we conclude that, for $\beta>1$: \begin{align}\label{res2} K_q(x,y)\sim\begin{cases} \frac{1}{N}&\text{if } y\in[N]\setminus \{x\}\\ \frac{t\cdot c}{N}=o(N^{-1})&\text{if } y\in[2N]\setminus[N], \end{cases} \end{align} and $K_q(x,x)\sim\frac{q+1}{N}$, which together with~\cref{RWIP} lead to: \begin{align} &\beta>1\qquad\Longrightarrow\qquad \overline{U}_q(\star)\sim\begin{cases} 4q^2+8q&\text{if } \star= in\\ 4q^2+8q+4&\text{if }\star=out \end{cases}. \end{align} On the other hand, for $\beta<1$, the estimate in~\cref{222} shows that, regardless of the community of $y$, $K_q(x,y)\sim (\delta_{x,y}q+1/2)/N$. Thus the $in-$ and $out-$ potentials are asymptotically equivalent. In particular, $\overline{U}_q(in/out)\sim 4q^2+4q$. \noindent{{\bf (vanishing killing)} It remains to analyze the case when $q=N^{\alpha}$ for some negative $\alpha<0$. In this case, we have that\begin{equation}\label{444} \ensuremath{\mathbb{E}}(\text{Bin}(t,c))=N^{1-\beta-\alpha}=\begin{cases} o(1)&\text{if }1-\alpha<\beta\\ \omega(1)&\text{if }1-\alpha>\beta. 
We can then argue as in the case $q=\Theta(1)$, but distinguishing between $\beta$ being larger or smaller than $1-\alpha$. In particular, due to~\cref{444}, when $\beta<1-\alpha$ the resulting $in$- and $out$-potentials are asymptotically equivalent and decay as $N^\alpha$. On the other hand, for $\beta>1-\alpha >1$, $r_q\sim N^{\alpha-1}$, which together with~\cref{444} and~\cref{latter} lead to the estimates: $K_q(x,x)\sim r_q+N^{-1}\sim N^{-1}$, $K_q(x,y)\sim N^{-1}$ for $y\in[N]\setminus\{x\}$ and $K_q(x,y)=o(N^{-1})$ for pairs $(x,y)$ in different communities. By plugging these estimates into~\cref{RWexpress} the statement follows. \qed \section*{{Acknowledgments}} { L. Avena was supported by NWO Gravitation Grant 024.002.003-NETWORKS. M. Quattropani was partially supported by the INdAM-GNAMPA Project 2019 ``Markov chains and games on networks''. Part of this work started during the preparation of the master's thesis~\cite{Q16} and the authors are thankful to Diego Garlaschelli for acting as co-supervisor of this thesis project. } \end{document}
\begin{document} \title{Fiber products of rank 1 superrigid lattices and quasi-isometric embeddings} \frenchspacing \maketitle \begin{abstract} Let $\Delta$ be a cocompact lattice in $\mathsf{Sp}(m,1)$, $m \geqslant 2$, or $\textup{F}_4^{(-20)}$. We exhibit examples of finitely generated subgroups of $\Delta \times \Delta$ with positive first Betti number all of whose discrete faithful representations into any real semisimple Lie group are quasi-isometric embeddings. The examples of this paper are inspired by the counterexamples of Bass--Lubotzky \cite{Bass-Lubotzky} to Platonov's conjecture. \end{abstract} \section{Introduction} \par Let $\Gamma$ be an irreducible lattice of a real algebraic semisimple Lie group $G$. If $\Gamma$ is cocompact, then the inclusion of $\Gamma$ in $G$ is a quasi-isometric embedding. More precisely, if we equip the symmetric space $G/K$ associated to $G$ with the left invariant Riemannian distance $d_{G/K}$ induced by the Killing metric, and identify (via the orbit map) $\Gamma$ as a subset of $G/K$, then $d_{G/K}$ restricted to $\Gamma$ is coarsely equivalent to any left invariant word metric on $\Gamma$ induced by a finite generating subset. If $\Gamma$ is not cocompact, then this might not be the case (e.g. for $G = \mathsf{SL}_2(\mathbb{R})$ and $\Gamma = \mathsf{SL}_2(\mathbb{Z})$). However, Lubotzky--Mozes--Raghunathan \cite{LMR} proved\footnote{More generally, the main result in \cite{LMR} applies to irreducible lattices in products $G_1(k_1)\times \cdots \times G_{\ell}(k_{\ell})$ of connected simple groups $G_i$ defined over a local field $k_i$.} that this is the case when the real rank of $G$ is at least $2$. \par The superrigidity theorem of Margulis \cite[Thm. VII 5.6]{Mar} imposes severe restrictions on linear representations of the lattice $\Gamma$ over any local field. Following \cite{FH}, we say that a representation $\rho:\Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$ {\em almost extends to a continuous representation of $G$} if there exists a representation \hbox{$\hat{\rho}: G \rightarrow \mathsf{SL}_{d}(\mathbb{R})$} and a representation $\rho':\Gamma \rightarrow \mathsf{SL}_{d}(\mathbb{R})$ with compact closure such that the images $\hat{\rho}(\Gamma)$ and $\rho'(\Gamma)$ commute and $\rho(\gamma)=\hat{\rho}(\gamma)\rho'(\gamma)$ for every $\gamma \in \Gamma$. Margulis' superrigidity theorem (see \cite[Thm. 3.8]{FH}) implies that every linear representation $\rho$ of the lattice $\Gamma$ almost extends to a continuous representation of $G$. In particular, if $G$ is simple and the image of $\rho$ has non-compact closure, then $\hat{\rho}$ has necessarily finite kernel and hence $\rho$ is a quasi-isometric embedding (see Definition \ref{qie}) since $\Gamma$ is quasi-isometrically embedded in $G$ by \cite{LMR}. \par The superrigidity of lattices in the quaternionic Lie group $\mathsf{Sp}(m,1)$, $m \geqslant 2$, or the rank 1 Lie group $\textup{F}_4^{(-20)}$ was established by the work of Corlette \cite{Corlette} and Gromov--Schoen \cite{GS}. Corlette's Archimedean superrigidity implies that every linear representation $\psi:\Delta \rightarrow \mathsf{SL}_d(\mathbb{R})$ of a lattice $\Delta$ in either $\mathsf{Sp}(m,1)$, $m \geqslant 2$, or $\textup{F}_4^{(-20)}$, almost extends to a continuous representation of $G$. In particular, if the image of $\psi$ has non-compact closure and $\Delta$ is cocompact, then $\psi$ is necessarily a quasi-isometric embedding.
\par The goal of this article is to exhibit examples of finitely generated linear groups which are not commensurable to a lattice in any semisimple Lie group and all of whose discrete and faithful linear representations into any real semisimple Lie group are quasi-isometric embeddings. For a group $\Lambda$ and a normal subgroup $L$ of $\Lambda$, {\em the fiber product of $\Lambda$ with respect to $L$} is the subgroup of the product $\Lambda \times \Lambda$ defined as follows $$\Lambda \times_{L}\Lambda =\big\{(gw,g): g \in \Lambda, w \in L\big\}.$$ We denote by $\mu:\mathsf{SL}_d(\mathbb{R}) \rightarrow \mathbb{R}^d$ the Cartan projection (see Section \ref{prelim}) and fix the usual Euclidean norm $||\cdot||_{\mathbb{E}}$ on $\mathbb{R}^d$. A representation $\psi: \mathsf{H} \rightarrow \mathsf{SL}_d(\mathbb{R})$ is called {\em distal}\footnote{The term {\em distal} is from \cite[p. 537]{DK}.} if for every $h \in \mathsf{H}$ the moduli of the eigenvalues of $\psi(h)$ are equal to $1$. For a group $N$ we denote by $[N,N]$ the \hbox{commutator subgroup of $N$.} The main result of this article is the following: \begin{theorem} \label{main1} Let $\Delta$ be a cocompact lattice in $\mathsf{Sp}(m,1)$, $m \geqslant 2$, or $\textup{F}_4^{(-20)}$. Suppose that $N$ is an infinite normal subgroup of $\Delta$ such that the quotient $\Delta/N$ is a non-elementary hyperbolic group. For a representation $\rho:\Delta \times_N \Delta \rightarrow \mathsf{SL}_d(\mathbb{R})$ the following conditions are equivalent: \noindent \textup{(i)} The restrictions of $\rho$ on $\{1\}\times [N,N]$ and $[N,N]\times \{1\}$ are not distal. \noindent \textup{(ii)} $\rho$ is discrete and has finite kernel. \noindent \textup{(iii)} $\rho$ is a quasi-isometric embedding. \end{theorem} The existence of non-elementary hyperbolic quotients of the rank $1$ lattice $\Delta$ follows from the work of Gromov \cite{Gromov}, Olshanskii \cite{Ol} and Delzant \cite{Delzant}. M. Kapovich in \cite[Thm. 8.1]{Kap} proved that any such quotient is a non-linear hyperbolic group. By using the Cohen--Lyndon theorem established in \cite{Sun}, similarly as in \cite{MM}, it is possible to exhibit infinitely many pairwise non-isomorphic fiber products $\Delta \times_{N} \Delta$ in $\Delta \times \Delta$ of positive first Betti number such that the quotient $\Delta/N$ is non-elementary hyperbolic (see Proposition \ref{main3}). Moreover, the fiber product $\Delta \times_N \Delta$ cannot be commensurable to a lattice in any semisimple Lie group (see Proposition \ref{nonlattice}) and by Theorem \ref{main1} all discrete faithful representations of $\Delta\times_N \Delta$ into any semisimple Lie group are quasi-isometric embeddings. Fiber products have been previously used (e.g. see \cite{PT,Bass-Lubotzky,Bridson-Grunewald}) in order to exhibit counterexamples in various settings. The examples of Theorem \ref{main1} are inspired by the construction of Bass--Lubotzky \cite{Bass-Lubotzky} of the first examples of finitely generated linear superrigid groups which are not commensurable with a lattice in any product $G_1(k_1)\times \cdots \times G_{\ell}(k_{\ell})$ of simple algebraic groups $G_i$ defined over a local field $k_i$. Their examples provide a negative answer to a conjecture of Platonov \cite[p.
437]{PR} and are constructed as the fiber product $\Lambda \times_L \Lambda$ of a cocompact lattice $\Lambda$ in the rank 1 Lie group $\textup{F}_4^{(-20)}$ with respect to a normal subgroup $L$ such that $\Lambda/L$ is a finitely presented group without non-trivial finite quotients and $\textup{H}_2(\Lambda/L,\mathbb{Z})=0$. Similar examples, where $\Lambda$ is a cocompact lattice in the quaternionic Lie group $\mathsf{Sp}(m,1)$, $m \geqslant 2$, were exhibited by Lubotzky \hbox{in \cite{Lubotzky}.} We would like to remark that it is not clear whether the superrigid examples $\Lambda \times_L \Lambda$ of Bass--Lubotzky and Lubotzky can be chosen to admit a quasi-isometric embedding into some real semisimple Lie group. If this were the case, by \cite[Thm. 1.4 (b)]{Bass-Lubotzky}, $\Lambda \times_L \Lambda$ has to be a quasi-isometrically embedded subgroup of $\Lambda \times \Lambda$. In particular, since the distortion of $\Lambda \times_L \Lambda$ in the product $\Lambda \times \Lambda$ is linear, it follows that the quotient $\Lambda/L$ has linear Dehn function (e.g. see \cite[Prop. 3.2]{IT}). Hence it has to be a hyperbolic group which, by construction, does not admit non-trivial finite quotients. However, as of now, the existence of non-residually finite hyperbolic groups is an open problem and thus it is not clear whether $\Lambda/L$ can be chosen to be hyperbolic. The proof of Theorem \ref{main1} is based on the following theorem. We use Corlette's Archimedean superrigidity \cite{Corlette} to prove: \begin{theorem} \label{main2} Let $\Delta$ be a cocompact lattice in $\mathsf{Sp}(m,1)$, $m \geqslant 2$, or $\textup{F}_4^{(-20)}$. Fix \hbox{$|\cdot|_{\Delta}:\Delta \rightarrow \mathbb{N}$} a word length function on $\Delta$ and suppose that $N$ is an infinite normal subgroup of $\Delta$. Suppose that \hbox{$\rho:\Delta \times_{N} \Delta \rightarrow \mathsf{SL}_d(\mathbb{R})$} is a representation such that the restrictions of $\rho$ on $\{1\}\times [N,N]$ and \hbox{$[N,N] \times \{1\}$} are not distal. Then there exist $C,c>0$ such that $$\big|\big| \mu\big(\rho(\delta n,\delta)\big) \big|\big|_{\mathbb{E}} \geqslant c\big(\big|\delta n \big|_{\Delta}+\big|\delta \big|_{\Delta}\big)-C$$ for every $\delta \in \Delta$ and $n \in N$. \end{theorem} Theorem \ref{main1} will follow from Theorem \ref{main2} and the fact that the fiber product $\Delta \times_N \Delta$ is a quasi-isometrically embedded subgroup of $\Delta \times \Delta$ when the quotient $\Delta/N$ is hyperbolic \hbox{(see Proposition \ref{undist}).} We close this section by raising the following question which motivated the construction of the examples of this article. We only consider linear representations over $\mathbb{R}$ or $\mathbb{C}$. \begin{question} Does there exist a finitely generated group $\mathsf{\Gamma}$ which is not commensurable to a lattice in any connected semisimple Lie group, admits a discrete faithful linear representation and every linear representation of $\mathsf{\Gamma}$ with non-compact closure is a quasi-isometric embedding?\end{question} The fiber products provided by Theorem \ref{main1} admit representations with infinite kernel whose image has non-compact closure, hence they do not provide a positive answer to the previous question. The paper is organized as follows. In Section \ref{prelim} we provide some background on fiber products, proximality and semisimple representations. In Section \ref{proofs} we prove Theorem \ref{main1} and Theorem \ref{main2}.
Finally, in Section \ref{add}, we provide some additional properties of the examples of Theorem \ref{main1}. \noindent \textbf{Acknowledgements.} I would like to thank Fanny Kassel for fruitful discussions and comments on an earlier version of this paper. I would also like to thank the referee for carefully reading the paper and for their comments and suggestions. This work received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC starting grant DiGGeS, grant agreement No 715982). \section{Preliminaries} \label{prelim} Let $\Gamma$ be a finitely generated group. Throughout this paper, we fix a left invariant word metric $d_{\Gamma}:\Gamma \times \Gamma \rightarrow \mathbb{N}$ induced by some finite generating subset of $\Gamma$. The word length function $|\cdot|_{\Gamma}:\Gamma \rightarrow \mathbb{N}$ is defined as $|\gamma|_{\Gamma}=d_{\Gamma}(\gamma,1)$, where $1 \in \Gamma$ denotes the identity element of $\Gamma$. The {\em normal closure of a subset $\mathcal{F}$} of $\Gamma$ is the normal subgroup $\llangle \mathcal{F} \rrangle=\langle \{ gfg^{-1}: g \in \Gamma,f \in \mathcal{F}\} \rangle$ of $\Gamma$. For two subgroups $A,B$ of $\Gamma$ the commutator $[A,B]$ is the group $[A,B]=\llangle \{ [g,h]:g\in A, h \in B\} \rrangle$. The group $\Gamma$ is called {\em \textup{(}Gromov\textup{)} hyperbolic} if the Cayley graph of $\Gamma$ with respect to a fixed generating set is a $\delta$-Gromov hyperbolic space for some $\delta \geqslant 0$ (see \cite{Gromov}). A hyperbolic group is called {\em elementary} if it is virtually cyclic. \subsection{Cartan and Jordan projection} For a matrix $g \in \mathsf{SL}_d(\mathbb{R})$ we denote by $\ell_1(g)\geqslant \ldots \geqslant \ell_d(g)$ (resp. $\sigma_1(g)\geqslant \ldots \geqslant \sigma_d(g)$) the moduli of the eigenvalues (resp. singular values) of $g$ in non-increasing order. The {\em Cartan projection} $\mu:\mathsf{SL}_d(\mathbb{R})\rightarrow \mathbb{R}^d$ is the map $$\mu(g)=\big(\log\sigma_1(g),\ldots,\log \sigma_d(g) \big),$$ for $g \in \mathsf{SL}_d(\mathbb{R})$. The Cartan projection $\mu$ is continuous, proper and surjective onto $\{(x_1,\ldots, x_d):x_1\geqslant \cdots \geqslant x_d\}$. We denote by $||\cdot ||$ the standard operator norm, for which $\big|\big|g\big|\big|=\sigma_1(g)$ for every $g \in \mathsf{SL}_d(\mathbb{R})$. The {\em Jordan projection} is the map $\lambda:\mathsf{SL}_d(\mathbb{R})\rightarrow \mathbb{R}^d$ defined as follows $$\lambda(g)=\big(\log\ell_1(g), \ldots, \log \ell_d(g)\big)$$ for $g \in \mathsf{SL}_d(\mathbb{R})$. The Cartan and Jordan projections are related as follows $$\lambda(g)=\lim_{r \rightarrow \infty}\frac{1}{r}\mu(g^r),$$ for $g \in \mathsf{SL}_d(\mathbb{R})$ (e.g. see \cite{benoist-limitcone}).
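\medskip \noindent{\em Remark} (numerical illustration). The limit formula relating the Cartan and Jordan projections can be observed numerically. The following Python sketch is only an illustration and not part of the text; the matrix $g$ below is an arbitrary choice of an element of $\mathsf{SL}_2(\mathbb{R})$.
\begin{verbatim}
import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det g = 1, so g lies in SL_2(R)

def mu(h):
    # Cartan projection: logs of singular values, non-increasing
    return np.log(np.linalg.svd(h, compute_uv=False))

# Jordan projection: logs of eigenvalue moduli, non-increasing
lam = np.sort(np.log(np.abs(np.linalg.eigvals(g))))[::-1]

for r in (1, 5, 20, 80):
    print(r, mu(np.linalg.matrix_power(g, r)) / r)   # approaches lam
print(lam)
\end{verbatim}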
\begin{definition}\label{qie} Let $\Gamma$ be a finitely generated group. A representation $\rho:\Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$ is called a quasi-isometric embedding if there exist $C,c>1$ such that for every $\gamma \in \Gamma$ we have $$C^{-1}\big|\gamma \big|_{\Gamma}-c \leqslant \big| \big|\mu (\rho(\gamma) ) \big|\big|_{\mathbb{E}} \leqslant C\big|\gamma \big|_{\Gamma}+c.$$ \end{definition} \noindent Equivalently, if we equip the symmetric space $\mathsf{X}_{d}=\mathsf{SL}_d(\mathbb{R})/K_d$, where $K_d=\mathsf{SO}(d)$, with the distance function $$\mathsf{d}\big(gK_d,hK_d\big)=\Big(\sum_{i=1}^{d} \big(\log \sigma_i (g^{-1}h)\big)^2 \Big)^{\frac{1}{2}}, \ \ g,h\in \mathsf{SL}_{d}(\mathbb{R}),$$ $\rho$ is a quasi-isometric embedding if and only if the map $\tau_{\rho}:(\Gamma,d_{\Gamma}) \rightarrow (\mathsf{X}_{d}, \mathsf{d})$, $\tau_{\rho}(\gamma)=\rho(\gamma)K_{d}$ for $\gamma \in \Gamma,$ is a quasi-isometric embedding. We also need the following elementary fact for the Cartan projection of a matrix $g \in \mathsf{SL}_d(\mathbb{R})$ and its exterior powers. \begin{fact} \label{exterior} Let $d \geqslant 2$ and $1 \leqslant m \leqslant d-1$. There exists a constant $C_{d,m}>1$, depending only on $d,m \in \mathbb{N}$, such that for every $g \in \mathsf{SL}_d(\mathbb{R})$ we have $$C_{d,m}^{-1}\big|\big| \mu(g)\big|\big|_{\mathbb{E}} \leqslant\big|\big| \mu(\wedge^m g)\big|\big|_{\mathbb{E}} \leqslant C_{d,m}\big|\big| \mu(g)\big|\big|_{\mathbb{E}}.$$ In particular, for a finitely generated group $\Gamma$, a representation $\rho: \Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$ is a quasi-isometric embedding if and only if $\wedge^m \rho:\Gamma \rightarrow \mathsf{SL}(\wedge^m \mathbb{R}^d)$ is a quasi-isometric embedding.\end{fact} We give here a proof for the reader's convenience. \begin{proof} Note that for every $h\in \mathsf{SL}_{p}(\mathbb{R})$ and $1\leqslant i\leqslant p$, we have $\log \frac{\sigma_1(h)}{\sigma_p(h)}\geqslant |\log \sigma_i(h)|$ and $2\big((\log \sigma_1(h))^2+(\log \sigma_p(h))^2\big)\geqslant \big(\log \sigma_1(h)-\log\sigma_p(h)\big)^2$, so we obtain the double inequality $$\frac{1}{\sqrt{2}}\log \frac{\sigma_1(h)}{\sigma_p(h)}\leqslant \big|\big| \mu(h)\big|\big|_{\mathbb{E}} \leqslant \sqrt{p}\log \frac{\sigma_1(h)}{\sigma_p(h)}.$$ Let us set $r_{d,m}:=\binom{d}{m}$. For $g\in \mathsf{SL}_d(\mathbb{R})$ we have $$\sigma_1(\wedge^m g)=\sigma_1(g)\cdots \sigma_m(g), \ \sigma_{r_{d,m}}(\wedge^m g)=\sigma_d(g)\cdots \sigma_{d-m+1}(g)$$ and hence the previous bound shows that \begin{align*} \frac{1}{\sqrt{2}}\log \frac{\sigma_1(g)}{\sigma_d(g)}\leqslant \big|\big| \mu(\wedge^m g)\big|\big|_{\mathbb{E}} &\leqslant \sqrt{r_{d,m}}\log \frac{\sigma_1(g)\cdots \sigma_m(g)}{\sigma_d(g)\cdots \sigma_{d-m+1}(g)}\\ &\leqslant m\sqrt{r_{d,m}} \log\frac{\sigma_1(g)}{\sigma_d(g)}\leqslant m\sqrt{2r_{d,m}} \big|\big|\mu(g)\big|\big|_{\mathbb{E}}.\end{align*} The inequality follows by taking $C_{d,m}:=m\sqrt{2r_{d,m}}$.\end{proof}
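\medskip \noindent{\em Remark} (numerical illustration). Fact \ref{exterior} reflects the identity expressing the singular values of $\wedge^m g$ as products of $m$ distinct singular values of $g$, used in the proof above. The following Python sketch is only an illustration, not part of the text; the helper \texttt{compound}, an ad hoc construction of $\wedge^m g$ from $m\times m$ minors, is ours.
\begin{verbatim}
import itertools
import numpy as np

def compound(g, m):
    # m-th compound matrix: entry (I, J) is the minor det g[I, J],
    # where I, J run over the m-element subsets of {0, ..., d-1}
    d = g.shape[0]
    idx = list(itertools.combinations(range(d), m))
    return np.array([[np.linalg.det(g[np.ix_(I, J)]) for J in idx]
                     for I in idx])

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 4))
g /= np.abs(np.linalg.det(g)) ** (1 / 4)    # normalize the determinant

s = np.linalg.svd(g, compute_uv=False)                # sigma_1 >= ... >= sigma_4
sw = np.linalg.svd(compound(g, 2), compute_uv=False)  # singular values of /\^2 g

# extreme singular values of /\^2 g are products of extremes of g
assert np.isclose(sw[0], s[0] * s[1])
assert np.isclose(sw[-1], s[2] * s[3])
\end{verbatim}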
\subsection{Fiber products} Let $\Delta$ be a group and $N$ be a normal subgroup of $\Delta$. Let us recall that the {\em fiber product of $\Delta$ with respect to $N$} is the subgroup of $\Delta \times \Delta$ generated by the diagonal subgroup $\textup{diag}(\Delta \times \Delta)=\big\{(\delta,\delta):\delta\in \Delta\big \}$ and the subgroup $N \times \{1\}$: $$\Delta \times_N\Delta=\big\{(\delta n, \delta):\delta \in \Delta, n \in N \big\}.$$ \noindent Suppose that $\Delta$ is finitely generated, $S$ is a finite generating subset of $\Delta$ and $N=\llangle \mathcal{F} \rrangle$ for some finite subset $\mathcal{F}$ of $\Delta$. Observe that $\big \{(s,s):s \in S \big\} \cup \big \{(1,w):w \in \mathcal{F}\big\}$ is a finite generating subset of $\Delta\times_N \Delta$. In the special case where the quotient group $\Delta/N$ is hyperbolic, the fiber product $\Delta \times_N \Delta$ is an undistorted (i.e. quasi-isometrically embedded) subgroup of $\Delta \times \Delta$. This fact will follow from \cite[Thm. 2]{OS}. \begin{proposition} \label{undist} Suppose that $\Delta=\langle X | R \rangle$ is a finitely presented group and $N$ is a normal subgroup of $\Delta$ such that $\Delta/N$ is hyperbolic. Let us fix a word length function $|\cdot|_{\Delta \times_N \Delta}:\Delta \times_N \Delta \rightarrow \mathbb{N}$. Then there exist $C,c>0$ such that for every $\delta \in \Delta$ and $n \in N$ we have $$\big|(\delta n,\delta) \big|_{\Delta \times_N \Delta} \leqslant C\big( \big|\delta n|_{\Delta}+\big|\delta \big|_{\Delta}\big)+c.$$\end{proposition} \begin{proof} We remark that since $\Delta/N$ is finitely presented, there exists a finite subset $\mathcal{F}$ of $\Delta$ such that $N=\llangle \mathcal{F}\rrangle$. Let $F(X)$ be the free group on the set $X$ and denote by $\pi: F(X) \twoheadrightarrow \Delta$ the projection onto $\Delta$ with kernel $\llangle R \rrangle$. Let $\overline{\mathcal{F}}$ be a finite subset of $F(X)$ with $\pi(\overline{\mathcal{F}})=\mathcal{F}$ and $\overline{N}=\llangle R \cup \overline{\mathcal{F}} \rrangle$. Note that the product $\pi \times \pi: F(X)\times F(X)\twoheadrightarrow \Delta \times \Delta$ restricts to an epimorphism $\pi \times \pi: F(X)\times_{\overline{N}} F(X)\twoheadrightarrow \Delta \times_{N} \Delta$. Since $\Delta/N=F(X)/\overline{N}$ is hyperbolic, by \cite[Thm. 2]{OS} it follows that $F(X)\times_{\overline{N}}F(X)$ is quasi-isometrically embedded in $F(X)\times F(X)$, i.e. there exist $C_0,c_0>0$ such that \begin{equation} \label{undist-eq1} \big|\big(\pi(g),\pi(gn)\big) \big|_{\Delta \times_{N}\Delta} \leqslant C_0 \big (|g|_{F(X)}+|n|_{F(X)}\big)+c_0 \end{equation} for every $g \in F(X)$ and $n \in \overline{N}$. Now the conclusion follows by (\ref{undist-eq1}) and the observation that for every $\delta \in \Delta$ there exists $\overline{\delta}\in F(X)$ with $\delta=\pi(\overline{\delta})$ and \hbox{$|\overline{\delta}|_{F(X)}\leqslant |\delta|_{\Delta}$.}\end{proof} \subsection{Proximality} An element $g \in \mathsf{SL}_d(\mathbb{R})$ is called {\em proximal} if $\ell_1(g)>\ell_2(g)$. In this case $g$ has a unique eigenvalue of maximum modulus. In addition, $g$ admits a unique attracting fixed point $x_g^{+}$ in $\mathbb{P}(V)$, where $V=\mathbb{R}^d$, and a repelling hyperplane $V_{g}^{-}$ such that $V=x_{g}^{+}\oplus V_{g}^{-}$ and for every $y \in \mathbb{P}(V)\smallsetminus \mathbb{P}(V_{g}^{-})$, $\lim_{n}g^ny=x_{g}^{+}$.
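\medskip \noindent{\em Remark} (numerical illustration). The projective dynamics of a proximal element can also be observed numerically. The following Python sketch is only an illustration and not part of the text; the matrix below is an arbitrary proximal element of $\mathsf{SL}_3(\mathbb{R})$. It checks $\ell_1(g)>\ell_2(g)$ and the convergence $g^ny\rightarrow x_g^{+}$ for a generic direction $y$.
\begin{verbatim}
import numpy as np

# a proximal element of SL_3(R): eigenvalue moduli 4 > 1 > 1/4
Q = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))[0]
g = Q @ np.diag([4.0, 1.0, 0.25]) @ Q.T     # orthogonal conjugate of a diagonal

ell = np.sort(np.abs(np.linalg.eigvals(g)))[::-1]
assert ell[0] > ell[1]                      # proximality: l_1(g) > l_2(g)

x_plus = Q[:, 0]                            # attracting fixed line x_g^+
y = np.random.default_rng(2).normal(size=3) # generic starting direction
for _ in range(60):
    y = g @ y
    y /= np.linalg.norm(y)                  # renormalize: projective action

# y converges to x_g^+ up to sign
assert min(np.linalg.norm(y - x_plus), np.linalg.norm(y + x_plus)) < 1e-8
\end{verbatim}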
For a subgroup $N$ of $\mathsf{SL}_d(\mathbb{R})$ containing a proximal element, the proximal limit set of $N$, denoted by $\Lambda_{N}^{\mathbb{P}}$, is the closure of the attracting fixed points in $\mathbb{P}(\mathbb{R}^d)$ of all proximal elements in $N$. In the special case where $N$ is irreducible (i.e. does not preserve any non-trivial proper subspace of $\mathbb{R}^d$) we have the following fact proved by Benoist. \begin{lemma}\textup{(}\cite{benoist-limitcone}\textup{)} \label{minimal} Let $N$ be an irreducible subgroup of $\mathsf{SL}_d(\mathbb{R})$ which contains a proximal element. Then $N$ acts minimally on $\Lambda_{N}^{\mathbb{P}}$ in $\mathbb{P}(\mathbb{R}^d)$ \textup{(}i.e. $\overline{N\cdot x}=\Lambda_{N}^{\mathbb{P}}$ for every \hbox{$x \in \Lambda_{N}^{\mathbb{P}}$\textup{)}.} \end{lemma} \subsection{Semisimple representations} Let $\Gamma$ be a discrete group. A representation $\rho:\Gamma \rightarrow \mathsf{GL}_d(\mathbb{R})$ is called {\em semisimple} if $\rho$ decomposes as a direct sum of irreducible representations of $\Gamma$. In this case, the Zariski closure $\overline{\rho(\Gamma)}^{\textup{Zar}}$ of $\rho(\Gamma)$ in $\mathsf{GL}_{d}(\mathbb{R})$ is a real reductive algebraic Lie group. Moreover, every linear representation over $\mathbb{R}$ of $\overline{\rho(\Gamma)}^{\textup{Zar}}$ is semisimple, see \cite[Ch.3, Thm. 3.13.1]{Var}. In particular, if $\rho:\Gamma \rightarrow \mathsf{GL}_d(\mathbb{R})$ is semisimple, all of its exterior powers $\wedge^{i}\rho$, $1\leqslant i\leqslant d-1$, are also semisimple. By default, we consider the trivial representation as semisimple. \par Let $\psi:\Gamma \rightarrow \mathsf{GL}_d(\mathbb{R})$ be a representation and $G$ be the Zariski closure of $\psi(\Gamma)$ in $\mathsf{GL}_d(\mathbb{R})$. Let us choose a Levi decomposition $G=L \ltimes U_{G}$, where $U_{G}$ is the unipotent radical of $G$, and denote by $\pi:G\rightarrow L$ the canonical projection. The {\em semisimplification \hbox{$\psi^{ss}:\Gamma \rightarrow \mathsf{GL}_d(\mathbb{R})$} of $\psi$} is the composition $\psi^{ss}=\pi \circ \psi$. The semisimplification $\psi^{ss}$ does not depend on the choice of $L$ up to conjugation by an element of $U_{G}$ (see \cite[p. 24]{GGKW} for more details). One of the key properties of the semisimplification is that $\psi^{ss}$ is a limit of conjugates of $\psi$ and hence $$\lambda(\psi(\gamma))=\lambda (\psi^{ss}(\gamma)) \ \ \forall \gamma \in \Gamma.$$ The following result was established by Benoist in \cite{benoist-limitcone} by using a result of Abels--Margulis--Soifer \cite{AMS} (see also \cite[Thm. 4.12]{GGKW} for a proof) and offers a connection between eigenvalues and singular values of elements in a semisimple subgroup of $\mathsf{SL}_d(\mathbb{R})$. \begin{theorem} \textup{(\cite{AMS, benoist-limitcone})} \label{finitesubset} Let $\Gamma$ be an abstract group and $\rho:\Gamma \rightarrow \mathsf{SL}_{d}(\mathbb{R})$ be a semisimple representation. Then there exist a finite subset $F$ of $\Gamma$ and $C_{\rho}>0$ with the property: for every $\gamma \in \Gamma$ there exists $f \in F$ such that $$\max_{1 \leqslant i \leqslant d} \big| \log \sigma_i(\rho(\gamma))-\log \ell_i(\rho(\gamma f))\big|\leqslant C_{\rho}.$$ \end{theorem} We also need the following lemma which essentially follows from the previous theorem.
\begin{lemma} \label{qie-semisimple} Let $\Gamma$ be a finitely generated group, $\rho:\Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$ be a representation and \hbox{$\rho^{ss}:\Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$} be a semisimplification of $\rho$. There exists $\delta>0$ such that $$\big|\big|\rho(\gamma)\big|\big| \geqslant \delta \big|\big|\rho^{ss}(\gamma)\big|\big| $$ for every $\gamma \in \Gamma$. In particular, if $\rho^{ss}$ is a quasi-isometric embedding then $\rho$ is also a quasi-isometric embedding.\end{lemma} \begin{proof} Since $\rho^{ss}$ is semisimple and $\lambda(\rho^{ss}(g))=\lambda(\rho(g))$ for every $g \in \Gamma$, by Theorem \ref{finitesubset}, there exist a finite subset $F$ of $\Gamma$ and $C>0$ with the property: for every $\gamma \in \Gamma$ there exists $f \in F$ with \begin{align}\label{AMSineq}\big| \log \ell_1(\rho(\gamma f))-\log \sigma_1(\rho^{ss}(\gamma))\big|\leqslant C.\end{align} Now choose $\gamma \in \Gamma$ and $f\in F$ satisfying (\ref{AMSineq}). By the submultiplicativity of the operator norm and the fact that $||g||\geqslant \ell_1(g)$ for $g\in \mathsf{SL}_d(\mathbb{R})$, we conclude that \begin{align*} \big|\big|\rho(\gamma)\big|\big| \geqslant \big|\big|\rho(f)\big|\big|^{-1} \big|\big|\rho(\gamma f)\big|\big|\geqslant \big|\big|\rho(f)\big|\big|^{-1} \ell_1(\rho(\gamma f)) \geqslant \big|\big|\rho(f)\big|\big|^{-1}e^{-C} \big|\big| \rho^{ss}(\gamma)\big|\big|.\end{align*} In particular, we conclude that $$ \big|\big|\rho(\gamma)\big|\big| \geqslant \big(\min_{f\in F} \big|\big|\rho(f)\big|\big|^{-1}\big)e^{-C} \big|\big| \rho^{ss}(\gamma)\big|\big|$$ for every $\gamma \in \Gamma$. The inequality follows.\end{proof} The following lemma is a consequence of Margulis' lemma. \begin{lemma} \label{non-distal} Suppose that $\mathsf{H}$ is a group which contains a free subgroup $F\subset \mathsf{H}$ of rank $2$ and \hbox{$\psi:\mathsf{H}\rightarrow \mathsf{SL}_d(\mathbb{R})$} is a discrete faithful representation. Then $\psi(\mathsf{H})$ is not distal. \end{lemma} \begin{proof} Let $\psi_0$ be a semisimplification of the restriction $\psi|_{F}:F \rightarrow \mathsf{SL}_d(\mathbb{R})$. Since $\psi$ is discrete and faithful, $\psi_0$ is an algebraic limit of discrete faithful representations, namely of conjugates of $\psi|_{F}$. Since $F$ does not contain non-trivial normal nilpotent subgroups, it follows by \cite[Thm. 2.11]{BIW} that $\psi_0$ is discrete and faithful. In particular, by Theorem \ref{finitesubset}, $\psi_0$ cannot be distal. Therefore, since $\lambda(\psi(g))=\lambda(\psi_0(g))$ for every $g\in F$, we conclude that $\psi|_F$ is not distal. \end{proof} We will also need the following well known elementary fact. \begin{fact}\label{normal} Let $\Gamma$ be a group and $N$ be a normal subgroup of $\Gamma$. Suppose that $\rho:\Gamma \rightarrow \mathsf{GL}_d(\mathbb{R})$ is a semisimple representation. Then the restriction $\rho|_N:N\rightarrow \mathsf{GL}_d(\mathbb{R})$ of $\rho$ on $N$ is semisimple.\end{fact} \begin{proof} We may assume that $\rho|_N$ is non-trivial and $\rho$ is irreducible. Let $V_0\neq \{0\}$ be a $\rho(N)$-invariant subspace of $\mathbb{R}^d$ of minimal dimension. Since $\textup{dim}(V_0)$ is minimal, $\rho(N)$ acts irreducibly on $V_0$. We claim that there exist $h_1,\ldots, h_r\in \Gamma$ such that $\mathbb{R}^d=\oplus_{i=1}^{r} \rho(h_i)V_0$.
This is enough to conclude that $\rho|_N$ is semisimple since $N$ is normal in $\Gamma$ and $\rho(N)$ preserves and acts irreducibly on $\rho(h_i)V_0$ for each $i$. We prove the claim. If $V_0=\mathbb{R}^d$ then $\rho|_N$ is obviously semisimple and we take $h_1=1$. If $V_0$ is a proper subspace of $\mathbb{R}^d$, there exists $h_2 \in \Gamma$ such that $\rho(h_2)V_0\neq V_0$. Then $\rho(h_2)V_0\cap V_0$ is a proper subspace of $V_0$ which is $\rho(N)$-invariant and hence $\rho(h_2)V_0\cap V_0=\{0\}$. It follows that the sum $V_1:=V_0+\rho(h_2)V_0$ is direct. If $V_1=\mathbb{R}^d$, then $\rho|_{N}$ is semisimple with two irreducible components. Otherwise, there exists $h_3 \in \Gamma$ such that $\rho(h_3)V_0$ is not a subspace of $V_1$. In particular, $\rho(h_3)V_0\cap V_1$ is a proper $\rho(N)$-invariant subspace of $\rho(h_3)V_0$ and so $\rho(h_3)V_0\cap V_1=\{0\}$. It follows that the sum $\rho(h_3)V_0+V_1$ is direct and $\rho|_{N}$ is semisimple. By continuing similarly we obtain the conclusion. \end{proof} \subsection{Archimedean superrigidity} We review here the following version of Corlette's Archimedean superrigidity theorem \cite{Corlette} (see \cite[Thm. 3.8]{FH}) that we use for the proof of Theorem \ref{main2}. \begin{theorem} \label{superrigidity} Let $G$ be either $\mathsf{Sp}(m,1)$, $m \geqslant 2$, or $\textup{F}_4^{(-20)}$ and $\Delta$ be a lattice in $G$. Suppose that $\rho:\Delta \rightarrow \mathsf{SL}_d(\mathbb{R})$ is a representation. Then there exists a continuous representation $\hat{\rho}:G \rightarrow \mathsf{SL}_d(\mathbb{R})$ and a representation $\rho':\Delta \rightarrow \mathsf{GL}_d(\mathbb{R})$ with compact Zariski closure such that: \noindent \textup{(i)} the images $\hat{\rho}(\Delta)$ and $\rho'(\Delta)$ commute.\\ \noindent \textup{(ii)} $\rho(\delta)=\hat{\rho}(\delta)\rho'(\delta)$ for every $\delta \in \Delta$.\end{theorem} \begin{rmk} Let us recall that when $\rho(\Delta)$ has non-compact closure then $\hat{\rho}$ has necessarily finite kernel and there exists $C_{\rho}>0$ such that $\big|\big| \mu (\rho(\delta))-\mu (\hat{\rho}(\delta)) \big| \big|_{\mathbb{E}}\leqslant C_{\rho}$ for every $\delta \in \Delta$. In particular, if $\Delta$ is cocompact in $G$ then $\rho$ is a quasi-isometric embedding.\end{rmk} \section{Proof of Theorem \ref{main1} and Theorem \ref{main2}} \label{proofs} We first prove Theorem \ref{main2}, which we use for the proof of Theorem \ref{main1}. In order to simplify our notation, for a subgroup $\mathsf{H}$ of $\Delta$ we consider the subgroups of $\Delta \times \Delta$: $$\mathsf{H}_{\textup{L}}:=\mathsf{H} \times \{1\} \ \ \textup{and} \ \ \mathsf{H}_{\textup{R}}:=\{1\}\times \mathsf{H}.$$ \begin{proof}[Proof of Theorem \ref{main2}.] Let us set $\Gamma:=\Delta \times_N\Delta$. There are two cases to consider. \noindent {\bf Case 1}: {\em The representation $\rho:\Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$ is semisimple.} \begin{proof}[Proof of Case 1.] Since the restriction of $\rho$ on $[N,N]_{\textup{R}}$ is not distal, there exist $w_{0}\in [N,N]$ and $1 \leqslant r \leqslant d-1$ such that $\ell_{r}(\rho(1,w_0))>\ell_{r+1}(\rho(1,w_0))$ and $\wedge^{r}\rho(1,w_0)$ is proximal. Let us consider the exterior power $\psi:\Gamma \rightarrow \mathsf{SL}(\wedge^{r}\mathbb{R}^d)$, $\psi:=\wedge^{r}\rho$, of $\rho$. The representation $\psi$ is semisimple and the proximal limit set $\Lambda_{\psi(N_{\textup{R}})}^{\mathbb{P}}$ in $\mathbb{P}(\wedge^{r}\mathbb{R}^d)$ is non-empty.
Let $V$ be the vector subspace of $\wedge^r \mathbb{R}^d$ spanned by the attracting eigenlines of proximal elements of $\psi(N_{\textup{R}})$. Since $\psi|_{N_{\textup{R}}}$ is semisimple by Fact \ref{normal}, and hence so is its restriction to $V$, there exists a decomposition $$V=V_{1}\oplus \cdots \oplus V_{\ell}$$ such that $\psi(N_{\textup{R}})$ preserves and acts irreducibly on each $V_{i}$, $1 \leqslant i \leqslant \ell$. Note also that the restriction $\psi_i:N_{\textup{R}}\rightarrow \mathsf{GL}(V_i)$ of $\psi|_{N_{\textup{R}}}$ to $V_i$ is proximal by the definition of $V$. Since $\psi(N_{\textup{L}})$ centralizes $\psi(N_{\textup{R}})$, $\psi(N_{\textup{L}})$ fixes pointwise the attracting eigenline of every proximal element of $\psi(N_{\textup{R}})$. In particular, $\psi(N_{\textup{L}})$ fixes pointwise a basis of each of the subspaces $V_{1},\ldots, V_{\ell}$ of $V$. Since $\psi(N_{\textup{R}})|_{V_i}$ is irreducible, every element in $\psi(N_{\textup{L}})$ acts as a scalar on each $V_{i}$. In other words, there exist finitely many group homomorphisms \hbox{$\varepsilon_1,\ldots, \varepsilon_{\ell}:N \rightarrow \mathbb{R}^{\ast}$} such that \begin{equation} \label{thm2-eq1} \psi(n,1)v_i=\varepsilon_i(n)v_i \ \ \forall v_i\in V_i \ \forall n \in N. \end{equation} For $1 \leqslant i \leqslant \ell$, let $\Lambda_{\psi_i}^{\mathbb{P}}$ be the proximal limit set of $\psi_{i}$ in $\mathbb{P}(V_i)$ and note that $\Lambda_{\psi(N_{\textup{R}})}^{\mathbb{P}}=\Lambda_{\psi_1}^{\mathbb{P}} \cup \cdots \cup \Lambda_{\psi_{\ell}}^{\mathbb{P}}$. We need the following claim. \noindent {\em Claim 1.} {\em For every $\delta\in \Delta$, $\psi(\delta,\delta)$ determines a bijection $\sigma(\delta):\{1,\ldots,\ell \}\rightarrow \{1,\ldots,\ell \}$ as follows: $$\psi(\delta,\delta)\Lambda_{\psi_{i}}^{\mathbb{P}}=\Lambda_{\psi_{\sigma(i)}}^{\mathbb{P}}, \ 1\leqslant i \leqslant \ell.$$} To verify the previous claim it suffices to check that if $\psi(\delta,\delta)\Lambda_{\psi_i}^{\mathbb{P}}\cap \Lambda_{\psi_j}^{\mathbb{P}}$ is non-empty, then $\psi(\delta,\delta)\Lambda_{\psi_i}^{\mathbb{P}}=\Lambda_{\psi_j}^{\mathbb{P}}$. Suppose that $x_{i}\in \Lambda_{\psi_i}^{\mathbb{P}}$ with $\psi(\delta,\delta)x_{i}\in \Lambda_{\psi_j}^{\mathbb{P}}$. Since $\Lambda_{\psi_j}^{\mathbb{P}}$ is $\psi(N_{\textup{R}})$-invariant, for every $n \in N$ we have $$\psi(1,\delta n \delta^{-1})\psi(\delta,\delta)x_{i}=\psi(\delta,\delta)\psi(1,n)x_{i} \in \Lambda_{\psi_j}^{\mathbb{P}}.$$ In particular, since $\Lambda_{\psi_j}^{\mathbb{P}}$ is closed, we have $\psi(\delta,\delta)\overline{\psi(N_{\textup{R}})x_{i}}\subset \Lambda_{\psi_j}^{\mathbb{P}}$. Since $\psi(N_{\textup{R}})$ acts irreducibly on $V_{i}$, by Lemma \ref{minimal}, it acts minimally on $\Lambda_{\psi_i}^{\mathbb{P}}$ and hence $\psi(\delta,\delta)\Lambda_{\psi_i}^{\mathbb{P}}\subset \Lambda_{\psi_{j}}^{\mathbb{P}}$. Since $\Lambda_{\psi_i}^{\mathbb{P}}\cap \psi(\delta,\delta)^{-1}\Lambda_{\psi_{j}}^{\mathbb{P}}$ is non-empty, we similarly deduce that $ \psi(\delta,\delta)^{-1}\Lambda_{\psi_{j}}^{\mathbb{P}}\subset \Lambda_{\psi_i}^{\mathbb{P}}$. It follows that $\psi(\delta,\delta)\Lambda_{\psi_i}^{\mathbb{P}}=\Lambda_{\psi_j}^{\mathbb{P}}$ and hence $\sigma(\delta)$ is a well defined bijection of $\{1,\ldots,\ell \}$; the claim follows. Finally, we obtain a well defined group homomorphism $\sigma:\Delta \rightarrow \textup{Sym}\big(\{1,\ldots,\ell \}\big)$, $\delta \mapsto \sigma(\delta)$.
The group $\Delta':=\textup{ker}\sigma$ is a finite index subgroup of $\Delta$ (of index at most $\ell!$) and by the definition of $\sigma$ has the property that \begin{equation} \label{thm2-eq0} \psi(\delta,\delta)\Lambda_{\psi_i}^{\mathbb{P}}=\Lambda_{\psi_i}^{\mathbb{P}}\end{equation} for every $1\leqslant i \leqslant \ell$ and $\delta \in \Delta'$. \noindent {\em Continuing the proof of Case 1.} Now let us recall that there exists $w_0 \in [N,N]$ such that $\psi(1,w_0)$ is proximal and $V\subset \wedge^r \mathbb{R}^d$ is the subspace spanned by the attracting eigenlines of proximal elements in $\psi(N_{\textup{R}})$. It follows from Claim 1 that $w_0^{\ell!} \in [N,N]\cap \Delta'$. Since each $\varepsilon_i$ takes values in the abelian group $\mathbb{R}^{\ast}$, we have $[N,N]\subset \bigcap_{i=1}^{\ell} \textup{ker}\varepsilon_i$, so $w_0^{\ell!} \in \bigcap_{i=1}^{\ell} \textup{ker}\varepsilon_i$ and hence, by (\ref{thm2-eq1}), $\psi(1,w_0^{\ell!})\big|_{V}=\psi(w_0^{\ell!},w_0^{\ell !})\big|_{V}$ is a proximal transformation of $\mathsf{GL}(V)$ whose attracting fixed point (which is also the attracting fixed point of $\psi(1,w_0)$ in $\mathbb{P}(\wedge^r \mathbb{R}^d)$) lies in exactly one of the limit sets $\big\{\Lambda_{\psi_i}^{\mathbb{P}}\big\}_{i=1}^{\ell}$, say in $\Lambda_{\psi_1}^{\mathbb{P}}$. Let us note that $V$ cannot be one dimensional. If this were the case, since $\Delta'$ has finite abelianization, a finite-index subgroup of $\Delta'$ would act trivially on $V$ and hence $\ell_1(\psi(1,w_0))=1$, which is impossible since $\psi(1,w_0)$ is a proximal matrix in $\mathsf{SL}(\wedge^{r}\mathbb{R}^d)$. It follows that $\textup{dim}(V)\geqslant 2$. The representation $\hat{\psi}:\Delta' \rightarrow \mathsf{SL}^{\pm}(V)$, $\hat{\psi}(\delta,\delta)=\psi(\delta,\delta)|_{V}$, is well defined by (\ref{thm2-eq0}) and proximal by the choice of $w_0\in [N,N]$. In particular, the image of $\hat{\psi}$ has non-compact closure in $\mathsf{SL}^{\pm}(V)$. It follows by Corlette's superrigidity (see Theorem \ref{superrigidity} and the remark below it) that $\hat{\psi}$ is a quasi-isometric embedding and there exist $C_1,c_1>0$, depending on $\psi$, with the property: \begin{equation} \label{thm2-eq2'}\big|\big|\psi (\delta,\delta )|_{V}\big|\big| \cdot \big|\big|\psi (\delta,\delta)^{-1}|_{V}\big|\big| \geqslant e^{\frac{1}{\sqrt{\textup{dim}V}}|| \mu(\hat{\psi}(\delta,\delta))||_{\mathbb{E}}}\geqslant C_{1}e^{c_1|\delta|_{\Delta}} \end{equation} for every $\delta \in \Delta'$. By using (\ref{thm2-eq2'}), for $n\in N$ and $\delta \in \Delta'$ we have the following estimate: \begin{align*}\big|\big|\psi (\delta n,\delta)\big|\big| \cdot \big|\big|\psi (n^{-1}\delta^{-1},\delta^{-1})\big|\big| &\geqslant \big|\big|\psi (\delta n,\delta)|_{V}\big|\big| \cdot \big|\big|\psi (n^{-1} \delta^{-1},\delta^{-1})|_{V}\big|\big|\\ &\geqslant \big|\big|\varepsilon_1(n) \psi (\delta,\delta)|_{V}\big|\big| \cdot \big|\big| \varepsilon_1(n)^{-1}\psi (\delta,\delta )^{-1}|_{V}\big|\big|\\ & =\big|\big|\psi (\delta,\delta)|_{V}\big|\big| \cdot \big|\big|\psi (\delta,\delta)^{-1}|_{V}\big|\big|\\ & \geqslant C_{1}e^{c_1|\delta|_{\Delta}}.\end{align*} Since $\Delta'$ has finite index in $\Delta$, by the previous estimate and Fact \ref{exterior} we conclude that there exist $C_2,c_2>0$ with the property: \begin{equation} \label{thm2-eq3} \big|\big|\rho (\delta n,\delta )\big|\big| \cdot \big|\big|\rho (\delta n,\delta)^{-1}\big|\big| \geqslant C_{2}e^{c_2|\delta|_{\Delta}} \end{equation} for every $\delta \in \Delta$ and $n \in N$. \par Let $\tau:\Gamma \rightarrow \Gamma$ be the automorphism of $\Gamma$ swapping the two coordinates.
Since $\rho$ is assumed to be semisimple, $\rho \circ \tau$ is also semisimple and by assumption $\rho|_{[N,N]_{\textup{L}}}=(\rho \circ \tau)|_{[N,N]_{\textup{R}}}$ is not distal. By working as previously, we conclude that there exist $C_3,c_3>0$ with the property: \begin{equation} \big|\big|\rho (\delta,\delta n)\big|\big| \cdot \big|\big|\rho (\delta,\delta n)^{-1}\big|\big| \geqslant C_{3}e^{c_3|\delta|_{\Delta}} \end{equation} for every $\delta \in \Delta$ and $n \in N$. In particular, if $\delta:=n^{-1}$ we have \begin{equation} \label{thm2-eq4} \big|\big|\rho (n,1)\big|\big| \cdot \big|\big|\rho (n,1)^{-1}\big|\big| \geqslant C_{3}e^{c_3|n|_{\Delta}} \end{equation} for every $n \in N$. \par Now let $\delta \in \Delta$ and $n \in N$. Since $\Delta$ is finitely generated, there exist $C_4,c_4>0$, independent of $\delta \in \Delta$, such that $\big| \big| \rho(\delta,\delta) \big| \big| \cdot \big| \big| \rho(\delta,\delta)^{-1}\big|\big|\leqslant C_4 e^{c_4|\delta|_{\Delta}}$. Therefore, by using (\ref{thm2-eq4}) we have \begin{equation} \label{thm2-eq5} \big|\big|\rho (\delta n,\delta)\big|\big| \cdot \big|\big|\rho (\delta n,\delta)^{-1}\big|\big| \geqslant \frac{ \big|\big|\rho (n,1)\big|\big| \cdot \big|\big|\rho (n,1)^{-1}\big|\big|}{ \big|\big|\rho (\delta, \delta)\big|\big| \cdot \big|\big|\rho (\delta,\delta)^{-1}\big|\big|} \geqslant \frac{C_3}{C_4}e^{c_3|n|_{\Delta}-c_4|\delta|_{\Delta}} \end{equation} for $\delta\in \Delta$ and $n \in N$. By letting $\theta:=\frac{3c_4}{c_2}$, raising both sides of (\ref{thm2-eq3}) to the power $\theta>0$ and using (\ref{thm2-eq5}), we have that: \begin{equation} \big|\big|\rho (\delta n,\delta)\big|\big|^{\theta+1} \cdot \big|\big|\rho (\delta n,\delta)^{-1}\big|\big|^{\theta+1} \geqslant \frac{C_2^\theta C_3}{C_4} e^{c_2\theta |\delta|_{\Delta}-c_4|\delta|_{\Delta}+c_3|n|_{\Delta}}= \frac{C_2^{\theta} C_3}{C_4} e^{2c_4|\delta|_{\Delta} +c_3|n|_{\Delta}} \end{equation} for every $\delta\in \Delta$ and $n \in N$. In particular, since $|n|_{\Delta}+|\delta|_{\Delta}\geqslant \frac{1}{2}(|\delta|_{\Delta}+|\delta n|_{\Delta})$, we conclude that there exist $C_5,c_5>0$ such that \begin{equation} \big|\big|\rho (\delta n,\delta)\big|\big| \cdot \big|\big|\rho (\delta n,\delta)^{-1}\big|\big| \geqslant C_5 e^{c_5 (|n|_{\Delta}+|\delta|_{\Delta})} \geqslant C_5 e^{\frac{c_5}{2}(|\delta n|_{\Delta}+|\delta|_{\Delta})} \end{equation} for every $\delta\in \Delta$ and $n \in N$. This completes the proof of the theorem when $\rho$ is semisimple. \end{proof} \noindent {\bf Case 2:} {\em Suppose that $\rho:\Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$ is not a semisimple representation.} Let $\rho^{ss}:\Gamma \rightarrow \mathsf{SL}_d(\mathbb{R})$ be a semisimplification of $\rho$. Since $\lambda(\rho^{ss}(g))=\lambda(\rho(g))$ for every $g \in \Gamma$, the restrictions of $\rho^{ss}$ on $[N,N]_{\textup{R}}$ and $[N,N]_{\textup{L}}$ are not distal. By Case 1, there exist $C_6,c_6>0$ such that $\big|\big|\rho^{ss} (\delta n,\delta)\big|\big| \cdot \big|\big|\rho^{ss} (\delta n,\delta)^{-1}\big|\big| \geqslant C_6 e^{c_6(|\delta n|_{\Delta}+|\delta|_{\Delta})}$ for every $\delta \in \Delta$ and $n \in N$. Moreover, by Lemma \ref{qie-semisimple} there exists $\delta_0>0$ such that $\big|\big|\rho(\gamma)\big|\big|\geqslant \delta_0 \big|\big|\rho^{ss}(\gamma)\big|\big|$ for every $\gamma \in \Gamma$. It follows that there exist $C,c>0$ such that \begin{align*}\big| \big| \mu (\rho(\delta n,\delta))\big|\big|_{\mathbb{E}} &\geqslant \frac{1}{\sqrt{2}} \log \big(\big|\big|\rho(\delta n,\delta)\big|\big|\cdot \big|\big|\rho(\delta n,\delta)^{-1}\big|\big| \big)\\ &\geqslant \frac{1}{\sqrt{2}} \log \big(\delta_0^2\big|\big|\rho^{ss}(\delta n,\delta)\big|\big|\cdot \big|\big|\rho^{ss}(\delta n,\delta)^{-1}\big|\big| \big) \geqslant c\big(|\delta n|_{\Delta}+|\delta|_{\Delta}\big)-C\end{align*} for every $\delta \in \Delta$ and $n\in N$. The proof is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{main1}.]
For the implication $\textup{(iii)} \Rightarrow \textup{(ii)}$ note that the kernel of $\rho$ is a torsion subgroup of $\Delta \times_N \Delta$ and hence of $\Delta \times \Delta$. Since torsion subgroups of hyperbolic groups are finite \cite{Gromov}, the same holds for the direct product $\Delta \times \Delta$, hence $\textup{ker}\rho$ is finite. The implication $\textup{(ii)}\Rightarrow \textup{(i)}$ follows by Lemma \ref{non-distal}. Now we prove $\textup{(i)}\Rightarrow \textup{(iii)}$. Suppose that $\rho:\Delta \times_N \Delta \rightarrow \mathsf{SL}_d(\mathbb{R})$ is a representation such that the restrictions of $\rho$ on $[N,N]_{\textup{R}}$ and $[N,N]_{\textup{L}}$ are not distal. Since the quotient group $\Delta/N$ is hyperbolic, by Theorem \ref{main2} and Proposition \ref{undist} there exist constants $C,C',c,c'>0$ such that \begin{align*} \big| \big| \mu (\rho(\delta n,\delta))\big|\big|_{\mathbb{E}} \geqslant c\big(|\delta n|_{\Delta}+|\delta|_{\Delta}\big)-C \geqslant c'\big|(\delta n,\delta)\big|_{\Delta \times_N \Delta}-C' \end{align*} for every $\delta \in \Delta$ and $n\in N$. In particular, $\rho$ is a quasi-isometric embedding. Hence $\textup{(i)}\Rightarrow \textup{(iii)}$ follows.\end{proof} \section{Further properties of the examples} \label{add} In this section we provide some further properties of the examples constructed in this paper. For a finitely generated group $\mathsf{H}$ its first Betti number, denoted by $b_1(\mathsf{H})$, is the free rank of the finitely generated abelian group $\mathsf{H}/[\mathsf{H},\mathsf{H}]$. First we explain that there exist infinitely many isomorphism classes of examples in $\Delta \times \Delta$ with positive first Betti number satisfying the conclusion of Theorem \ref{main1}. \begin{proposition} \label{main3} Let $\Delta$ be a torsion-free cocompact lattice in $\mathsf{Sp}(m,1)$, $m \geqslant 2$, or $\textup{F}_4^{(-20)}$. There exist infinitely many non-isomorphic subgroups $P$ of $\Delta \times \Delta$ with positive first Betti number such that for every $d \in \mathbb{N}$ every discrete faithful representation of $P$ into $\mathsf{SL}_d(\mathbb{R})$ is a quasi-isometric embedding. \end{proposition} The first key tool that we need is the following theorem about the existence of quotients of hyperbolic groups which are non-elementary hyperbolic. Let us recall that for a group $\Gamma$ and a finite subset $\mathcal{F}$ of $\Gamma$ we set $\llangle \mathcal{F}\rrangle =\big \langle \{ gfg^{-1}:g \in \Gamma, f \in \mathcal{F}\}\big\rangle$. \begin{theorem} \textup{(}\cite{Gromov, Ol,Delzant}\textup{)}\label{quotient} Let $\Gamma$ be a non-elementary hyperbolic group and $w \in \Gamma$ be an infinite order element. Then the quotient $\Gamma/\llangle w^m \rrangle$ is non-elementary hyperbolic for all but finitely many $m\in \mathbb{N}$.\end{theorem} Let $\Lambda$ be a group, $L$ be a subgroup of $\Lambda$ and $N$ be a normal subgroup of $L$. The triple $(\Lambda,L,N)$ satisfies the {\em Cohen--Lyndon property} if there exists a set $T$ of left coset representatives of $\llangle N\rrangle L$ in $\Lambda$ such that $$ \llangle N \rrangle= \big \langle \{ tNt^{-1}:t \in T\} \big\rangle= \bigast_{t\in T}tNt^{-1}.$$ Sun in \cite{Sun} established a Cohen--Lyndon type theorem for any group $\Lambda$ and any hyperbolically embedded subgroup $L$ of $\Lambda$. We need the following special case of \cite[Thm. 2.5]{Sun} for maximal cyclic subgroups of torsion-free hyperbolic groups.
\begin{theorem} \textup{(}\cite{Sun}\textup{)} \label{LC} Let $\Gamma$ be a non-elementary torsion-free word hyperbolic group and $\langle w \rangle$ be an infinite maximal cyclic subgroup of $\Gamma$. Then for all but finitely many $n\in \mathbb{N}$ the triple $(\Gamma, \langle w \rangle, \langle w^n \rangle )$ has the Cohen--Lyndon property. \end{theorem} Mj--Mondal in \cite{MM} proved the following proposition in order to establish sufficient conditions so that certain fiber products do not have Property (T). \begin{proposition} \textup{(}\cite[Prop. 3.6]{MM}\textup{)} \label{MM} Let $\Lambda$ be a group, $L$ be a subgroup of $\Lambda$ and $N$ be a normal subgroup of $L$. Suppose that the triple $(\Lambda,L,N)$ satisfies the Cohen--Lyndon property. Then there exists a surjective group homomorphism $$\phi: \llangle N \rrangle \slash [ \llangle N \rrangle, \Lambda] \twoheadrightarrow N/[L,N].$$\end{proposition} We will need the following consequence of the compactness theorem, see \cite[p. 340]{Paulin} and \cite[Thm. 3.9]{Bestvina}, and the fact that every isometric action of a group with Property (T) on a real tree has a globally fixed point. \begin{proposition}\textup{(\cite{Paulin, Bestvina})} Let $\Gamma_1$ be a finitely generated group with Property (T) and $\Gamma_2$ be a hyperbolic group. Suppose that $\big\{\varphi_n:\Gamma_1\rightarrow \Gamma_2\big\}_{n\in \mathbb{N}}$ is a sequence of group homomorphisms. There exist $r\in \mathbb{N}$, a subsequence $(\varphi_{m_n})_{n\in \mathbb{N}}$ and $(\gamma_n)_{n\in \mathbb{N}}\subset \Gamma_2$ \hbox{such that for every $\delta\in \Gamma_1$ and $n\in \mathbb{N}$:} $$\varphi_{m_n}(\delta)=\gamma_n \varphi_{r}(\delta)\gamma_n^{-1}.$$ \end{proposition} Now we can give the proof of Proposition \ref{main3}. \begin{proof}[Proof of Proposition \ref{main3}.] Let $\langle f_1 \rangle$ be an infinite maximal cyclic subgroup of $\Delta$. By Theorem \ref{quotient} and Theorem \ref{LC} we may choose $k_1 \in \mathbb{N}$ such that $\Delta/ \llangle f_1^{k_1} \rrangle$ is non-elementary hyperbolic and $(\Delta, \langle f_1 \rangle, \langle f_1^{k_1} \rangle)$ has the Cohen--Lyndon property. Observe that the quotient of $\Delta \times _{\llangle f_1^{k_1} \rrangle} \Delta$ by the normal subgroup $\Delta \times _{[\Delta,\llangle f_1^{k_1} \rrangle]} \Delta$ is isomorphic to $\llangle f_1^{k_1} \rrangle/ [\Delta,\llangle f_1^{k_1} \rrangle]$ and hence by Proposition \ref{MM} we have $b_1\big(\Delta \times_{\llangle f_1^{k_1} \rrangle} \Delta\big)>0$. Let $N_1:=\llangle f_1^{k_1} \rrangle$ and $\Delta_1:=\Delta/\llangle f_1^{k_1} \rrangle$. Next, we choose an infinite maximal cyclic subgroup $\langle f_2 N_1 \rangle$ of $\Delta/N_1$. Note that $\langle f_2 \rangle$ is an infinite maximal cyclic subgroup of $\Delta$, hence by Theorem \ref{quotient}, Theorem \ref{LC} and Proposition \ref{MM} we may choose $k_2 \in \mathbb{N}$ such that $\Delta_2:=\Delta/\llangle f_2^{k_2} \rrangle$ and $\Delta/N_2$, $N_2:=\llangle f_1^{k_1},f_2^{k_2} \rrangle$, are non-elementary hyperbolic and $b_1\big(\Delta \times_{\llangle f_2^{k_2} \rrangle} \Delta\big)>0$. By continuing similarly, we obtain a sequence of elements $(f_q)_{q \in \mathbb{N}}$ of $\Delta$ and integers $(k_q)_{q \in \mathbb{N}}$ such that: \noindent \textup{(i)} For every $q \in \mathbb{N}$, the quotient $\Delta/N_q$, $N_{q}= \llangle f_{1}^{k_1},\dots, f_{q}^{k_q} \rrangle$, is non-elementary hyperbolic.
\noindent \textup{(ii)} For $q<p$, $\langle f_{p}N_q \rangle$ is an infinite maximal cyclic subgroup of $\Delta/N_{q}$. In particular, $\langle f_q \rangle$ is a maximal cyclic subgroup of $\Delta$. \noindent \textup{(iii)} For every $q \in \mathbb{N}$, $\Delta_q:=\Delta/\llangle f_{q}^{k_q} \rrangle$ is a non-elementary hyperbolic group and \hbox{$b_1 \big(\Delta \times_{\llangle f_q^{k_q}\rrangle}\Delta \big)>0$.} We claim that for every $q_0\in \mathbb{N}$ there exist only finitely many $q\in \mathbb{N}$ such that $\Delta_q$ is isomorphic to $\Delta_{q_0}$. Suppose that this does not happen, i.e. there exist an infinite sequence $(s_q)_{q \in \mathbb{N}}$ and isomorphisms $\phi_{s_q}:\Delta_{s_q} \rightarrow \Delta_{q_0}$. Let $\pi_{s_q}:\Delta \twoheadrightarrow \Delta_{s_q}$ be the projection with kernel $\llangle f_{s_q}^{k_{s_q}} \rrangle$. In particular, we obtain a sequence of surjective group homomorphisms $\phi_{s_q}\circ \pi_{s_q}: \Delta \twoheadrightarrow \Delta_{q_0}$. Let us set $\mathsf{N}_{s_q}:=\llangle f_{s_q}^{k_{s_q}}\rrangle$ for $q \in \mathbb{N}$. It follows by the previous proposition that, up to passing to a subsequence, there exist a sequence $(g_q)_{q\in \mathbb{N}}$ of elements in $\Delta_{q_0}$ and $r \in \mathbb{N}$ such that $$\phi_{s_q}(\pi_{s_q}(g))=g_q \phi_{s_{r}}(\pi_{s_r}(g))g_{q}^{-1}$$ for every $q \in \mathbb{N}$ and $g \in \Delta$. In particular, since $\phi_{s_r}$ is injective, for $g:=f_{s_q}^{k_{s_q}}$ and $q\in \mathbb{N}$ large enough, we have $\pi_{s_r}(f_{s_q}^{k_{s_q}})=1$ or equivalently $f_{s_q}^{k_{s_q}}\in \llangle f_{s_r}^{k_{s_r}}\rrangle$. This is a contradiction since by construction (see (ii)) $f_{s_q}N_{s_r}$ generates an infinite maximal cyclic subgroup of $\Delta/\mathsf{N}_{s_r}$ for $q\in \mathbb{N}$ large enough. Therefore, we may pass to a subsequence, still denoted $\big\{\Delta_{s_q}\big \}_{q\in \mathbb{N}}$, such that $\Delta_{s_q}$ is not isomorphic to $\Delta_{s_p}$ \hbox{for $p \neq q$.} \par By construction, for every $q \in \mathbb{N}$, $\Delta \times_{\mathsf{N}_{s_q}}\Delta$ has positive first Betti number and satisfies the conclusion of Theorem \ref{main1} since $\Delta/\mathsf{N}_{s_q}$ is hyperbolic. Now observe that in the fiber product $\Delta \times_{\mathsf{N}_{s_p}}\Delta$ the subgroups $\{1\}\times \mathsf{N}_{s_p}$ and $ \mathsf{N}_{s_p}\times \{1\}$ are the only non-abelian centralizers of non-cyclic non-trivial subgroups of $\Delta \times_{\mathsf{N}_{s_p}}\Delta$ (see \cite[\S 6]{Bridson-Grunewald}). Now suppose that there exists a group isomorphism $f_{q,p}:\Delta \times_{\mathsf{N}_{s_q}}\Delta \rightarrow \Delta \times_{\mathsf{N}_{s_p}}\Delta$. The previous observation shows $f_{q,p}(\mathsf{N}_{s_q}\times \mathsf{N}_{s_q})=\mathsf{N}_{s_p}\times \mathsf{N}_{s_p}$ \hbox{and since} $$\big(\Delta \times_{\mathsf{N}_{s_i}} \Delta\big)\big/\big(\mathsf{N}_{s_i}\times \mathsf{N}_{s_i}\big) \cong \Delta/\mathsf{N}_{s_i}, \ \ i\in \{p,q\},$$ $f_{q,p}$ induces an isomorphism $f_{q,p}':\Delta/\mathsf{N}_{s_q} \rightarrow \Delta/\mathsf{N}_{s_p}$. Therefore, $p=q$. It follows that $\big\{\Delta \times_{\mathsf{N}_{s_p}}\Delta \big\}_{p \in \mathbb{N}}$ is an infinite sequence of pairwise non-isomorphic subgroups of $\Delta \times \Delta$.\end{proof} We close this section by showing that the fiber product of a hyperbolic group with respect to an infinite normal subgroup of infinite index is not commensurable to a lattice in any semisimple group. The proof follows the strategy of the proof in \cite[Thm. 1.4 (d)]{Bass-Lubotzky} (see p.
1171--1172), though with certain modifications since $\mathsf{\Gamma}$ is not assumed to be a superrigid rank 1 lattice. \begin{proposition} \label{nonlattice} Let $\mathsf{\Gamma}$ be a virtually torsion-free hyperbolic group and $\mathsf{N}$ be an infinite normal subgroup of $\mathsf{\Gamma}$ of infinite index. Suppose that $G_1,\ldots, G_{\ell}$ are connected simple algebraic groups defined over local fields $k_1,\ldots,k_{\ell}$, respectively. The fiber product $\mathsf{\Gamma}\times_{\mathsf{N}}\mathsf{\Gamma}$ is not commensurable to a lattice in the locally compact group ${\bf G}=G_1(k_1)\times \cdots \times G_{\ell}(k_{\ell})$.\end{proposition} \begin{proof} Suppose that $Q$ is a finite-index subgroup of $\mathsf{\Gamma}\times_{\mathsf{N}}\mathsf{\Gamma}$ which is a lattice in ${\bf G}$. Note that $Q$ contains a finite-index subgroup $Q_1$ which is normal in $\mathsf{\Gamma}\times_{\mathsf{N}}\mathsf{\Gamma}$. The intersection $Q_1\cap \textup{diag}(\mathsf{\Gamma}\times \mathsf{\Gamma})$ is a finite-index normal subgroup of $\textup{diag}(\mathsf{\Gamma}\times \mathsf{\Gamma})$ and is of the form $\textup{diag}(\mathsf{\Gamma}_1\times \mathsf{\Gamma}_1)$ for some finite-index normal subgroup $\mathsf{\Gamma}_1$ of $\mathsf{\Gamma}$. Thus $Q$ contains a finite-index subgroup of the form $\mathsf{\Gamma}_1\times_{\mathsf{N}_1}\mathsf{\Gamma}_1$, where $\mathsf{N}_1=\mathsf{N}\cap\mathsf{\Gamma}_1$ is of finite index in $\mathsf{N}$. By the previous observation, without loss of generality, we may assume that $Q=\mathsf{\Gamma}\times_{\mathsf{N}}\mathsf{\Gamma}$ and that $\mathsf{\Gamma}$ is torsion-free. Let $\textup{pr}_i: Q\rightarrow G_i(k_i)$ denote the projection to the $i$-th coordinate for $1\leqslant i \leqslant \ell$. We may also assume that $\textup{pr}_i(Q)$ is not relatively compact. By Borel's density theorem \cite{Borel} (see also \cite[Cor. 3.2]{Dani}) the projection $\textup{pr}_i(Q)$ is Zariski dense in $G_i(k_i)$. Observe that the normalizer of the Zariski closure of $\textup{pr}_i(\mathsf{N}_{\textup{L}})$ (and $\textup{pr}_i(\mathsf{N}_{\textup{R}})$) in $G_i(k_i)$ is algebraic. Since $\textup{pr}_i(\mathsf{N}_{\textup{L}})$ and $\textup{pr}_i(\mathsf{N}_{\textup{R}})$ commute, either $\textup{pr}_i(\mathsf{N}_{\textup{L}})$ or $\textup{pr}_i(\mathsf{N}_{\textup{R}})$ is central. Moreover, up to passing to a finite index subgroup of $\mathsf{N}$, we may assume that for every $1 \leqslant i \leqslant \ell$ either $\textup{pr}_i(\mathsf{N}_{\textup{L}})$ or $\textup{pr}_i(\mathsf{N}_{\textup{R}})$ is trivial. It follows that $\ell \geqslant 2$. Let ${\bf G_1}$ (resp. ${\bf G_2}$) be the product of the $G_i(k_i)$ such that $\textup{pr}_i(\mathsf{N}_{\textup{L}})$ (resp. $\textup{pr}_i(\mathsf{N}_{\textup{R}})$) is trivial. In particular, we obtain a discrete faithful representation $\rho:Q\xhookrightarrow{} {\bf G_1}\times {\bf G_2}$, $$\rho(\gamma, \gamma n)=\big(\rho_1(\gamma, \gamma n),\rho_2(\gamma, \gamma n)\big),\ (\gamma, \gamma n)\in Q,$$ such that $\rho(Q)$ is a lattice. Let us observe that the restriction of $\rho_1$ on $\textup{diag}(\mathsf{\Gamma}\times \mathsf{\Gamma})$ is faithful. Indeed, if $\rho_1(\gamma,\gamma)$ is trivial for some $\gamma \in \mathsf{\Gamma}$, then for every $n\in \mathsf{N}$ we have $\rho(\gamma, \gamma)\rho(1,n) \rho(\gamma, \gamma)^{-1}=\rho(1,n)$ since $\rho_2(1,n)$ is trivial by the definition of ${\bf G_2}$. Note that $\rho$ is faithful and hence $\gamma n\gamma^{-1}=n$.
It follows that $\gamma$ is trivial since the centralizer of $\mathsf{N}$ in $\mathsf{\Gamma}$ is trivial. \par Now let $\mathsf{H}=\rho_1(\textup{diag}(\mathsf{\Gamma \times \Gamma}))\times \rho_2(\textup{diag}(\mathsf{\Gamma \times \Gamma}))$ and observe that $\mathsf{H}$ is a discrete subgroup of ${\bf G_1}\times {\bf G_2}$. Indeed, if $(\gamma_r)_{r\in \mathbb{N}}$ and $(\delta_r)_{r \in \mathbb{N}}$ are sequences in $\mathsf{\Gamma}$ such that the sequence $g_{r}:=\big(\rho_1(\gamma_r,\gamma_r),\rho_2(\delta_r,\delta_r)\big)$ converges to $(1,1)$, then $\lim_{r } g_r \rho(n_1,n_2)g_{r}^{-1}=\lim_{r}\rho(\gamma_r n_1 \gamma_r^{-1}, \delta_r n_2 \delta_r^{-1})=\rho(n_1,n_2).$ Since $\rho$ is discrete, for all but finitely many $r \in \mathbb{N}$, $\gamma_r$ (resp. $\delta_r$) centralizes $n_1\in \mathsf{N}$ (resp. $n_2\in \mathsf{N}$). Recall that (since $\mathsf{\Gamma}$ is torsion-free) the centralizer of a maximal cyclic subgroup of $\mathsf{\Gamma}$ is cyclic; since $n_1,n_2\in \mathsf{N}$ were arbitrary and $\mathsf{N}$ is not cyclic, $(\gamma_r)_{r \in \mathbb{N}}$ and $(\delta_r)_{r \in \mathbb{N}}$ have to be eventually trivial. It follows that $\mathsf{H}$ is a discrete subgroup of ${\bf G_1}\times {\bf G_2}$. Since $\rho(Q)$ is assumed to be a lattice in ${\bf G_1}\times {\bf G_2}$, it has finite index in $\mathsf{H}$. In particular, there exists a finite index subgroup $\mathsf{\Gamma}'$ of $\mathsf{\Gamma}$ such that $\rho_1(\textup{diag}(\mathsf{\Gamma}'\times \mathsf{\Gamma}'))\times \{1\}$ is a subgroup of $\rho(Q)$ centralizing the group $\rho(\mathsf{N}_{\textup{L}})$. The centralizer of $\rho(\mathsf{N}_{\textup{L}})$ in $\rho(Q)$ is $\rho(\mathsf{N}_{\textup{R}})$, hence for every $\gamma'\in \mathsf{\Gamma}'$ there exists $n'\in \mathsf{N}$ with $\rho_1(\gamma',\gamma')=\rho_1(1,n')=\rho_1(n',n')$, so $\gamma'=n'$ since $\rho_1|_{\textup{diag}(\mathsf{\Gamma}'\times \mathsf{\Gamma}')}$ is faithful. This contradicts the fact that $\mathsf{N}$ has infinite index in $\mathsf{\Gamma}$.\end{proof} \end{document}
Torsion of a curve In the differential geometry of curves in three dimensions, the torsion of a curve measures how sharply it is twisting out of the osculating plane. Taken together, the curvature and the torsion of a space curve are analogous to the curvature of a plane curve. For example, they are the coefficients in the system of differential equations for the Frenet frame given by the Frenet–Serret formulas. Definition Let r be a space curve parametrized by arc length s and with the unit tangent vector T. If the curvature κ of r at a certain point is not zero, then the principal normal vector and the binormal vector at that point are the unit vectors $\mathbf {N} ={\frac {\mathbf {T} '}{\kappa }},\quad \mathbf {B} =\mathbf {T} \times \mathbf {N},$ respectively, where the prime denotes the derivative of the vector with respect to the parameter s. The torsion τ measures the speed of rotation of the binormal vector at the given point. It is found from the equation $\mathbf {B} '=-\tau \mathbf {N},$ which means $\tau =-\mathbf {N} \cdot \mathbf {B} '.$ As $\mathbf {N} \cdot \mathbf {B} =0$, differentiating gives $\mathbf {N} '\cdot \mathbf {B} +\mathbf {N} \cdot \mathbf {B} '=0$, so this is equivalent to $\tau =\mathbf {N} '\cdot \mathbf {B} $. Remark: The derivative of the binormal vector is perpendicular to both the binormal and the tangent, hence it has to be proportional to the principal normal vector. The negative sign is simply a matter of convention: it is a byproduct of the historical development of the subject. Geometric relevance: The torsion τ(s) measures how quickly the binormal vector turns. The larger the torsion is, the faster the binormal vector rotates around the axis given by the tangent vector (see graphical illustrations). In the animated figure the rotation of the binormal vector is clearly visible at the peaks of the torsion function. Properties • A plane curve with non-vanishing curvature has zero torsion at all points. Conversely, if the torsion of a regular curve with non-vanishing curvature is identically zero, then this curve lies in a fixed plane. • The curvature and the torsion of a helix are constant. Conversely, any space curve whose curvature and torsion are both constant and non-zero is a helix. The torsion is positive for a right-handed[1] helix and is negative for a left-handed one. Alternative description Let r = r(t) be the parametric equation of a space curve. Assume that this is a regular parametrization and that the curvature of the curve does not vanish. Analytically, r(t) is a three times differentiable function of t with values in R3, and the vectors $\mathbf {r'} (t),\mathbf {r''} (t)$ are linearly independent. Then the torsion can be computed from the following formula: $\tau ={\frac {\det \left({\mathbf {r} ',\mathbf {r} '',\mathbf {r} '''}\right)}{\left\|{\mathbf {r} '\times \mathbf {r} ''}\right\|^{2}}}={\frac {\left({\mathbf {r} '\times \mathbf {r} ''}\right)\cdot \mathbf {r} '''}{\left\|{\mathbf {r} '\times \mathbf {r} ''}\right\|^{2}}}.$ Here the primes denote the derivatives with respect to t and the cross denotes the cross product. For r = (x, y, z), the formula in components is $\tau ={\frac {x'''\left(y'z''-y''z'\right)+y'''\left(x''z'-x'z''\right)+z'''\left(x'y''-x''y'\right)}{\left(y'z''-y''z'\right)^{2}+\left(x''z'-x'z''\right)^{2}+\left(x'y''-x''y'\right)^{2}}}.$ For example, for the right-handed circular helix $\mathbf {r} (t)=(a\cos t,\,a\sin t,\,bt)$ with $a,b>0$, these formulas give the constant torsion $\tau =b/(a^{2}+b^{2})$. Notes 1. Weisstein, Eric W. "Torsion". mathworld.wolfram.com. 
Optical Society of Korea (한국광학회) pISSN 2508-7266 "COPP will be an international, peer-reviewed, and open access journal published bimonthly. The journal will contain articles about optical science, optical technology, photonics, quantum electronics, digital holography and information optics, biophotonics, display, and optical materials." Retrieval of LIDAR Aerosol Parameter Using Sun/Sky Radiometer at Gangneung, Korea Shin, Sung-Kyun;Lee, Kwon-Ho;Lee, Kyu-Tae 175 https://doi.org/10.3807/COPP.2017.1.3.175 The aerosol optical properties, such as the depolarization ratio ($\delta$), the aerosol extinction-to-backscatter ratio ($S$, LIDAR ratio), and the Ångström exponent (Å), derived from measurements with the AERONET sun/sky radiometer at Gangneung-Wonju National University (GWNU), Gangneung, Korea ($37.77^{\circ}$N, $128.87^{\circ}$E) during a winter season (December 2014 - February 2015), are presented. PM-concentration measurements were conducted simultaneously and used to identify high-PM events. The observation period was divided into three cases according to the PM concentrations, and we analysed $\delta$, $S$, and Å during these high-PM events. These aerosol optical properties are calculated from the sun/sky radiometer data and used to classify the type of aerosol (e.g., dust, anthropogenic pollution). Higher values of $\delta$ with lower values of $S$ and Å were measured for dust particles: the mean values of $\delta$, $S$, and Å at the 440-870 nm wavelength pair (Å$_{440-870}$) for Asian dust were 0.19-0.24, 36-56 sr, and 0.48, respectively. Anthropogenic aerosol plumes are distinguished by lower values of $\delta$ and higher values of Å; the mean values of spectral $\delta$ and Å$_{440-870}$ for this case varied over 0.06-0.16 and 1.33-1.39, respectively. We found that the columnar aerosol optical properties obtained from sun/sky radiometer measurements are useful for identifying the aerosol type. Moreover, columnar aerosol optical properties such as $\delta$, $S$, and Å will be further used for the validation of aerosol parameters obtained from LIDAR observations, as well as for quantification of air quality. Visibility Measurement in an Atmospheric Environment Simulation Chamber Tai, Hongda;Zhuang, Zibo;Jiang, Lihui;Sun, Dongsong 186 Obtaining accurate visibility measurements is a common atmospheric-optics problem, and of vital significance to civil aviation. To effectively evaluate and improve the accuracy of visibility measurements, an outdoor atmospheric simulation chamber with dimensions of $1.8\times1.6\times55.7$ m$^3$ was constructed. The simulation chamber can provide a relatively homogeneous haze environment, in which the visibility varies from 10 km to 0.2 km over 5 hours. A baseline-changing visibility measurement system was constructed in the chamber: a mobile platform (receiver) was moved from 5 m to 45 m, stopping every 5 m, to measure and record the transmittance. The total least-squares method was used to fit the extinction coefficient. 
During the experiment conducted in the chamber, the unit-weight variance was as low as $1.33\times10^{-4}$ under high-visibility conditions, and the coefficient of determination ($R^2$) was as high as 0.99 under low-visibility conditions, indicating high stability and accuracy of the system used to measure the extinction coefficients, and strong consistency between repeated measurements. A Grimm portable aerosol spectrometer (PAS) was used to record the aerosol distribution, and Mie theory was then used to calculate the extinction coefficients. The theoretical results were consistent with the measurements and exhibited a positive correlation, although they were higher than the measured values. An Automatic Corona-discharge Detection System for Railways Based on Solar-blind Ultraviolet Detection Li, Jiaqi;Zhou, Yue;Yi, Xiangyu;Zhang, Mingchao;Chen, Xue;Cui, Muhan;Yan, Feng 196 Corona discharge is always a sign of failure processes in high-voltage electrical apparatus, including that utilized in electric railway systems. Solar-blind ultraviolet (UV) cameras are effective tools for corona inspection. In this work, we present an automatic railway corona-discharge detection system based on solar-blind ultraviolet detection. The UV camera, mounted on top of a train, inspects the electrical apparatus along the railway, including transmission lines and insulators, while the train cruises at speed. An algorithm based on the Hough transform is proposed for distinguishing the emitting objects (corona discharge) from the noise. The detection system can report suspected corona discharge in real time during fast cruises. An experiment was carried out during a routine inspection of railway apparatus in Xinjiang Province, China, and several corona-discharge points were found along the railway. The false-alarm rate was kept below one false alarm per hour during this inspection. Passively Q-switched Erbium Doped All-fiber Laser with High Pulse Energy Based on Evanescent Field Interaction with Single-walled Carbon Nanotube Saturable Absorber Jeong, Hwanseong;Yeom, Dong-Il 203 We report passive Q-switching of an all-fiber erbium-doped laser delivering high pulse energy, by using a high-quality single-walled carbon nanotube saturable absorber (SWCNT-SA). A side-polished fiber coated with the SWCNT is employed as an in-line SA for evanescent-wave interaction between the incident light and the SWCNT. This lateral interaction scheme enables a stable Q-switched fiber laser that generates high pulse energy. The central wavelength of the Q-switched pulse laser was measured to be 1560 nm. The repetition rate of the Q-switched laser is controlled from 78 kHz to 190 kHz by adjusting the applied pump power from 124 mW to 790 mW, and a variation of the pulse energy from 51 nJ to 270 nJ is observed as the pump power increases. The pulse energy of 270 nJ achieved at maximum pump power is three times larger than values previously reported for Q-switched all-fiber lasers using a SWCNT-SA. The tunable behaviors of pulse duration, pulse repetition rate, and pulse energy as functions of pump power are reported, and match theoretical expectations well. 
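The baseline-varying transmittance fit described in the visibility-chamber abstract above lends itself to a short illustration. The sketch below is ours, not the authors' code: it assumes the Beer-Lambert relation T(L) = exp(-sigma*L) and substitutes an ordinary least-squares fit of -ln T against the baseline L for the total least-squares procedure used in the paper; all function and variable names are hypothetical.

import numpy as np

def fit_extinction(baselines_m, transmittances):
    # Estimate the extinction coefficient sigma (in 1/m) from transmittance
    # measured at several baseline lengths, assuming Beer-Lambert:
    #   T(L) = exp(-sigma * L)  =>  -ln T = sigma * L.
    L = np.asarray(baselines_m, dtype=float)
    y = -np.log(np.asarray(transmittances, dtype=float))
    # Least-squares slope of y against L through the origin.
    return float(np.dot(L, y) / np.dot(L, L))

# Receiver positions every 5 m from 5 m to 45 m, as in the abstract.
L = np.arange(5.0, 50.0, 5.0)
sigma_true = 0.02                # hypothetical extinction coefficient, 1/m
T = np.exp(-sigma_true * L)
print(fit_extinction(L, T))      # recovers ~0.02

With the conventional 2% contrast threshold, the standard Koschmieder relation V = 3.912/sigma would then convert such a fitted extinction coefficient into a visibility estimate.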
Automotive Adaptive Front Lighting Requiring Only On/Off Modulation of Multi-array LEDs Lee, Jun Ho;Byeon, Jina;Go, Dong Jin;Park, Jong Ryul 207 The Adaptive Front-lighting System (AFS) is part of the active safety system, providing optimized vision to the driver during night time and other poor-visibility road conditions by automatically adapting the lighting to environmental and traffic conditions. Basically, an AFS provides four different modes of the passing beam, as designated in a United Nations Economic Commission for Europe regulation (ECE324-R123): neutral state or country light (Class C), urban light (Class V), highway light (Class E), and adverse-weather light (Class W). In this paper, we first present an optics design for an AFS capable of producing the Class C/V/E/W patterns with only on/off modulation of multi-array LEDs, with no need for any additional mechanical components. The AFS optics consists of two separate modules, cutoff and spread: the cutoff module lights a narrow central area with high luminous intensity, satisfying the cutoff regulation, while the spread module forms a widely spread beam of low luminous intensity. Each module consists of two major parts: the first converts a discretely positioned LED array into a fully filled light-emitting source plane, and the second projects that source plane onto a target plane 25 m away. With the combination of these two optics modules, the four beam patterns are formed by simple on/off modulation of the multi-array LEDs. We then report the development of a prototype that was demonstrated to provide the four beam patterns. Optimization of Tilted Bragg Grating Tunable Filters Based on Polymeric Optical Waveguides Park, Tae-Hyun;Huang, Guanghao;Kim, Eon-Tae;Oh, Min-Cheol 214 A wavelength filter based on a polymer Bragg reflector has received much attention due to its simple structure and wide tuning range. Tilted Bragg gratings and asymmetric Y-branches are integrated to extract the reflected optical signals in different directions. To optimize device performance, the design procedure is considered thoroughly, and various design parameters are applied to the fabricated devices. An asymmetric Y-branch with an angle of $0.3^{\circ}$ produced crosstalk of less than -25 dB, and the even-odd mode coupling was optimized for a grating tilt angle of $2.5^{\circ}$, which closely followed the design results. Through this experiment, it was confirmed that the device has a large manufacturing tolerance, which is important for mass production of this optical device. Numerical Analysis of Working Distance of Square-shaped Beam Homogenizer for Laser Shock Peening Kim, Taeshin;Hwang, Seungjin;Hong, Kyung Hee;Yu, Tae Jun 221 To apply a square-shaped beam homogenizer to laser shock peening, it should be designed with a long working distance, taking into account metal targets of various shapes and textures. A long working distance requires a square-shaped beam homogenizer with a long depth of focus. Within the working distance, the laser beam is required to have not only high efficiency but also high uniformity; in other words, good peening quality is guaranteed there. In this study, we defined this range as the working distance for laser shock peening, and we simulated the effect of several parameters on it: the focal length of the condenser lens, the pitch size of the array lens, and the plasma threshold of the metal. 
The simulation was performed through numerical analysis, taking the diffraction effect into account. Double Resonance Perfect Absorption in a Dielectric Nanoparticle Array Hong, Seokhyeon;Lee, Young Jin;Moon, Kihwan;Kwon, Soon-Hong 228 We propose a reflector-type perfect absorber with double absorption lines, using the electric and magnetic dipoles of Mie resonances in an array of silicon nanospheres on a silver substrate. In the visible range, nanospheres a few hundred nanometers in size show strong absorption lines of up to 99%, which are enhanced by the interference between Mie scattering and reflection from the silver substrate. The air-gap distance between the silicon particles and the silver substrate controls this interference, and the absorption wavelengths can be controlled over the entire visible range by adjusting the diameter of the silicon particles. Additionally, our structure has a filling factor of 0.322 when the absorbance is nearly 100%. Gaussian Decomposition Method in Designing a Freeform Lens for an LED Fishing/Working Lamp Nguyen, Anh Q.D.;Nguyen, Vinh H.;Lee, Hsiao-Yi 233 In this paper we propose a freeform secondary lens for an LED fishing/working lamp (LFWL). This innovative LED lamp is used to replace the traditional HID fishing lamp, satisfying the lighting demands of fishing and of on-board activities on fishing boats. To realize the freeform lens geometry, Gaussian decomposition is employed in the optics-design process to approach the targeted light intensity distribution curve (LIDC) of the LFWL lens. The simulated results show that the illumination on the deck, on the sea surface, and underwater differs only slightly between LED fishing/working lamps and HID fishing lamps. Meanwhile, a lighting efficiency of 91% can be achieved with just one third of the power consumption, when the proposed LED fishing/working lamps are used instead of HID fishing lamps. Design of a Plasmonic Switch Using Ultrathin Chalcogenide Phase-change Material Lee, Seung-Yeol 239 A compact plasmonic switching scheme, based on the phase change of a thin-film chalcogenide material ($Ge_2Sb_2Te_5$), is proposed and numerically investigated at optical-communication wavelengths. Surface plasmon polariton modal analysis is conducted for various thicknesses of the dielectric and phase-change material layers, and the optimized condition is deduced by finding the region of interest that shows a high extinction ratio between the surface plasmon polariton modes before and after the phase transition. Full electromagnetic simulations show that multiple reflections inside the active region may conditionally increase the overall on/off ratio at specific lengths of the active region. However, the optimized geometrical condition, which gives a generally large on/off ratio for any length of the active region, can be identified by observing the multiple-reflection characteristics inside the active region. The proposed scheme shows an on/off switching ratio greater than 30 dB for lengths of a few micrometers, and can potentially be applied to integrated active plasmonic systems. Development of an Ultraviolet Raman Spectrometer for Standoff Detection of Chemicals Ha, Yeon Chul;Lee, Jae Hwan;Koh, Young Jin;Lee, Seo Kyung;Kim, Yun Ki 247 In this study, an ultraviolet Raman spectrometer was designed and fabricated to detect chemical contamination on the ground. The region of the Raman spectrum characteristic of the chemicals of interest is $350-3800$ cm$^{-1}$. 
To fabricate a Raman spectrometer operating in this range, the layout and angles of the spectrometer's optical components were designed using the grating equation. Experimental devices were configured to measure the Raman spectra of chemicals with the fabricated spectrometer. The wavenumber scale of the spectrometer was calibrated by measuring the Raman spectra of polytetrafluoroethylene, $O_2$, and $N_2$. The spectral range of the spectrometer was measured to be 23.46 nm ($3442$ cm$^{-1}$), with a resolution of 0.195 nm ($30.3$ cm$^{-1}$), at 253.65 nm. After calibration, the main Raman peaks of cyclohexane, methanol, and acetonitrile agreed with reference values to within a relative error of 0.55%.
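The wavenumber calibration quoted above can be illustrated with the standard conversion between Raman shift and absolute wavelength, shift [cm^-1] = 1e7/lambda_0 [nm] - 1e7/lambda [nm]. The short sketch below is ours (the function names are hypothetical); the only inputs taken from the abstract are the 253.65 nm excitation line and the 350-3800 cm^-1 region.

EXCITATION_NM = 253.65  # excitation line quoted in the abstract above

def shift_to_wavelength(shift_cm1, excitation_nm=EXCITATION_NM):
    # Absolute wavelength (nm) of a Raman line at a given shift (cm^-1).
    return 1.0 / (1.0 / excitation_nm - shift_cm1 * 1e-7)

def wavelength_to_shift(wavelength_nm, excitation_nm=EXCITATION_NM):
    # Raman shift (cm^-1) of a line observed at wavelength_nm.
    return (1.0 / excitation_nm - 1.0 / wavelength_nm) * 1e7

# The 350-3800 cm^-1 region maps to roughly 255.9-280.7 nm:
print(shift_to_wavelength(350.0), shift_to_wavelength(3800.0))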
\begin{document} \footnote[0] {2010 {\it Mathematics Subject Classification.} Primary 35K05; Secondary 35B40;} \maketitle \begin{abstract} The Cauchy problem for the Hardy-H\'enon parabolic equation is studied in the critical and subcritical regimes in weighted Lebesgue spaces on the Euclidean space $\mathbb{R}^d$. Well-posedness for singular initial data and existence of non-radial forward self-similar solutions of the problem were previously shown only for the Hardy and Fujita cases ($\gamma\le 0$) in earlier works. The weighted spaces enable us to treat the potential $|x|^{\gamma}$ as an increase or decrease of the weight, and thereby to prove well-posedness of the problem for all $\gamma$ with $-\min\{2,d\}<\gamma$, including the H\'enon case ($\gamma>0$). As a byproduct of the well-posedness, self-similar solutions to the problem are also constructed for all such $\gamma$ without restrictions. A non-existence result for local solutions with supercritical data is also shown; therefore our critical exponent $s_c$ turns out to be optimal with regard to solvability. \end{abstract} \section{Introduction}\label{sec:1} \subsection{Background and setting of the problem} We consider the Cauchy problem for the Hardy-H\'enon parabolic equation \begin{equation}\label{HH} \begin{cases} \partial_t u - \Delta u = |\cdot|^{\gamma} |u|^{\alpha-1} u, &(t,x)\in (0,T)\times D, \\ u(0) = u_0 \in L^q_{s}(\mathbb{R}^d), \end{cases} \end{equation} where $T>0,$ $d\in \mathbb{N}$, $\gamma\in \mathbb{R},$ $\alpha\in \mathbb{R},$ $D:=\mathbb{R}^d$ if $\gamma\ge0$ and $D:=\mathbb{R}^d\setminus\{0\}$ if $\gamma<0.$ Here, $\partial_t:=\partial/\partial t$ is the time derivative, $\Delta:=\sum_{j=1}^d\partial^2/\partial x_j^2$ is the Laplace operator on $\mathbb{R}^d$, $u=u(t,x)$ is the unknown real- or complex-valued function on $(0,T)\times \mathbb R^d$, and $u_0=u_0(x)$ is a prescribed real- or complex-valued function on $\mathbb R^d$. In this paper, we assume that the initial data $u_0$ belongs to the weighted Lebesgue space $L^q_s(\mathbb{R}^d)$ given by \[L^q_s(\mathbb{R}^d):=\left\{ f \in \mathcal{M} (\mathbb{R}^d) \,;\, \|f\|_{L^q_s} < \infty \right\} \] endowed with the norm \[ \|f\|_{L^q_s} := \left(\int_{\mathbb{R}^d} ( |x|^s |f(x)|)^q \, dx \right)^\frac1{q}, \] with the usual modification when $q=\infty$, where $s\in \mathbb{R}$, $q\in [1,\infty]$ and $\mathcal{M} (\mathbb{R}^d)$ denotes the set of all Lebesgue measurable functions on $\mathbb{R}^d$. We express the time-space-dependent function $u$ as $u(t)$ or $u(t,x)$ depending on circumstances. We introduce an exponent $\alpha_F(d,\gamma)$ given by \[ \alpha_F(d,\gamma):=1+\frac{2+\gamma}{d}, \] which is often referred to as the {\it Fujita exponent} and is known to separate existence from nonexistence of positive global solutions (see \cite[Theorem 1.6]{Qi1998}). The equation \eqref{HH} with $\gamma<0$ is known as a {\it Hardy parabolic equation}, while that with $\gamma>0$ is known as a {\it H\'enon parabolic equation}. The elliptic part of \eqref{HH}, that is, \begin{equation}\nonumber -\Delta \phi=|x|^{\gamma}|\phi|^{\alpha-1}\phi,\ \ \ x\in \mathbb{R}^d, \end{equation} was proposed by H\'enon as a model to study rotating stellar systems (see \cite{H-1973}), and has been extensively studied in the mathematical context, especially in the fields of nonlinear analysis and variational methods (see \cite{GhoMor2013} for example). 
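To orient the reader on the weighted norms introduced above, we record an elementary example, which follows at once from polar coordinates (it also underlies the optimality discussion in the proof of Lemma \ref{l:wLpLq} below): for $q\in[1,\infty)$ and $\sigma, s \in \mathbb{R}$, \begin{equation*} |x|^{-\sigma} \chi_{\{|x|\le 1\}} \in L^q_s(\mathbb{R}^d) \quad\Longleftrightarrow\quad (\sigma - s)\, q < d, \end{equation*} since $\| |x|^{-\sigma} \chi_{\{|x|\le 1\}} \|_{L^q_s}^q = \int_{|x|\le1} |x|^{(s-\sigma)q}\, dx$ is finite exactly when $(s-\sigma)q > -d$. In particular, a larger weight order $s$ admits data with a stronger singularity at the origin.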
The case $\gamma=0$ corresponds to a heat equation with a standard power-type nonlinearity, often called the {\it Fujita equation}, which has been extensively studied in various directions. Regarding well-posedness of the Fujita equation ($\gamma=0$) in Lebesgue spaces, we refer to \cites{Wei1979, Wei1980, Gig86}, among many others. Concerning the global dynamics and asymptotic behaviors, we refer to \cites{Ish2008,IT-arxiv,CIT-arxiv} for the Fujita and Hardy cases of \eqref{HH} with Sobolev-critical exponents. The articles \cites{HisIsh2018, HisTak-arxiv} give definitive results on the optimal singularity of initial data ensuring solvability for $\gamma\le 0.$ In \cite{Tay2020}, unconditional uniqueness was established for the Hardy case $\gamma<0.$ Concerning earlier conditional uniqueness results for $\gamma<0$, we refer to \cites{BenTayWei2017, Ben2019}. Lastly, we refer to \cite{Maj-arxiv} for an analysis of the problem \eqref{HH} with an external forcing term in addition to the nonlinear term. Let us recall that the equation \eqref{HH} is invariant under the scale transformation \begin{equation}\label{scale} u_{\lambda}(t,x) := \lambda^{\frac{2+\gamma}{\alpha-1}} u(\lambda^2 t, \lambda x), \quad \lambda>0. \end{equation} More precisely, if $u$ is a classical solution to \eqref{HH}, then $u_{\lambda}$ defined as above also solves the equation with the rescaled initial data $ \lambda^{\frac{2+\gamma}{\alpha-1}} u_0(\lambda x)$; indeed, both $\partial_t u_\lambda - \Delta u_\lambda$ and $|x|^{\gamma}|u_\lambda|^{\alpha-1}u_\lambda$ equal $\lambda^{\frac{2+\gamma}{\alpha-1}+2}$ times the corresponding quantities for $u$ evaluated at $(\lambda^2 t, \lambda x)$, since $\alpha\,\frac{2+\gamma}{\alpha-1}-\gamma = \frac{2+\gamma}{\alpha-1}+2$. Under \eqref{scale}, the $L^q_{s}(\mathbb{R}^d)$-norm scales as follows: $\|u_\lambda (0)\|_{L^q_{s}} = \lambda^{-s+\frac{2+\gamma}{\alpha-1}-\frac{d}{q}} \|u(0)\|_{L^q_{s}}.$ We say that the space $L^q_{s}(\mathbb{R}^d)$ is (scale-){\sl critical} if $s=s_c$ with \begin{equation}\label{d:sc} s_c=s_c(q)= s_c(d,\gamma,\alpha,q) := \frac{2+\gamma}{\alpha-1} - \frac{d}{q}, \end{equation} {\sl subcritical} if $s<s_c,$ and {\sl supercritical} if $s>s_c.$ In particular, when $s=s_c = 0,$ $L^{\frac{d(\alpha-1)}{2+\gamma}}(\mathbb{R}^d)$ is a critical Lebesgue space. One of our purposes in this article is to establish well-posedness results in the critical and subcritical cases ($s\le s_c$) for the whole range of the parameter $\gamma$ with $-\min\{2,d\} < \gamma,$ including the H\'enon case ($\gamma>0$). In terms of well-posedness in function spaces containing sign-changing singular data, the equation \eqref{HH} has been studied mainly for $\gamma<0$ (the Hardy case). As far as we know, there has been no result concerning well-posedness in the sense of Hadamard (existence, uniqueness and continuous dependence) of the H\'enon parabolic equation ($\gamma>0$) for sign-changing singular data. For the Hardy and Fujita cases, which are well studied, our results provide well-posedness in new function spaces (see Remark \ref{r:HH.LWP}). We stress that the use of weighted spaces enables us to treat the equations for all $\gamma$ in a unified manner. The second purpose of this article is to prove the existence of forward self-similar solutions in all of the Hardy, Fujita and H\'enon cases, without restrictions on the exponent $\alpha.$ A forward self-similar solution is a solution such that $u_{\lambda} = u$ for all $\lambda>0,$ where $u_{\lambda}$ is as in \eqref{scale}. In \cite[Lemma 4.4]{Wan1993}, the existence of radially symmetric self-similar solutions for $d\ge3$, $\gamma>-2$ and $\alpha\ge1+\frac{2(2+\gamma)}{d-2}$ is established. 
Later, the case $\alpha_F(d,\gamma)<\alpha<1+\frac{2(2+\gamma)}{d-2}$ was treated in \cite{Hir2008} under some additional restrictions on $\gamma,$ namely $\gamma\le 0$ for $d\ge4$ and $\gamma\le \sqrt{3}-1$ for $d=3.$ In \cite[Theorem 1.4]{BenTayWei2017}, the existence of self-similar solutions that are not necessarily radially symmetric was proved for all $\alpha>\alpha_F(d,\gamma),$ but only for the Hardy case $\gamma<0$ (see also \cite{Chi2019}). Our result (Theorem \ref{t:HH.self.sim}) covers all the previous results and asserts the existence of non-radial forward self-similar solutions for all $\gamma$ and $\alpha$ such that $-\min(2,d)<\gamma$ and $\alpha > \alpha_F(d,\gamma)$. In earlier works, the crux of the matter has been the handling of the singular potential $|x|^{\gamma}.$ If $\gamma<0$, the conventional method is to regard the potential $|x|^{\gamma}$ as a function belonging either to the Lorentz space $L^{\frac{d}{-\gamma},\infty}(\mathbb{R}^d)$ (\cites{BenTayWei2017, Tay2020}) or to the homogeneous Besov space $\dot B^{\frac{d}{q}+\gamma}_{q,\infty}(\mathbb{R}^d),$ $1\le q \le \infty$ (\cite{Chi2019}), and to apply appropriate versions of H\"older's inequality to establish suitable heat kernel estimates. In contrast to these previous works, in this article we treat the potential $|x|^{\gamma}$ as an increase or decrease of the order of the weight in the $L^q_s(\mathbb{R}^d)$-norms, thereby covering the H\'enon case ($\gamma>0$) as well. In this regard, the introduction of the weighted spaces is crucial to our results. Indeed, if the data only belong to the critical Lebesgue space, then we may only treat the Hardy case ($\gamma<0$) in our main theorem (see Remark \ref{r:HH.LWP} below). The proofs of the well-posedness results rely on Banach's fixed point theorem. The essential ingredient in the proof of the various nonlinear estimates is the following linear estimate for the heat semigroup $\{e^{t\Delta}\}_{t>0}$ on weighted Lebesgue spaces: \begin{equation}\nonumber \| e^{t\Delta} f\|_{L^q_{s'}} \le C t^{-\frac{d}2 (\frac1{p}-\frac1{q}) - \frac{s-s'}{2} } \| f\|_{L^p_{s}} \end{equation} (see Lemma \ref{l:wLpLq} for the precise statement), which is known in the literature, e.g. \cite{Tsu2011}, except for the endpoint cases. In this article, we first extend the above estimate to the endpoint cases $(i)$ $1<p<q=\infty,$ $(ii)$ $p=q=1,$ $(iii)$ $1=p<q<\infty,$ $(iv)$ $p=q=\infty$ and $(v)$ $(p,q)=(1,\infty)$. To complete the picture of the admissible range for our well-posedness results, we also discuss the non-existence of positive distributional local solutions to \eqref{HH} for suitable supercritical data $u_0 \in L^q_{s}(\mathbb{R}^d)$ with $s>s_c.$ \subsection{Main results} In order to state our results, we introduce the following auxiliary function spaces. Let $\mathscr{D}'([0,T)\times\mathbb{R}^d)$ be the space of distributions on $[0,T)\times\mathbb{R}^d$. \begin{definition}[Kato class] \label{def:Kato} Let $T \in (0,\infty],$ $s\in\mathbb{R}$ and $q\in [1,\infty].$ \begin{enumerate}[(1)] \item In the critical regime, i.e. $\tilde s=s_c$, where $s_c$ is defined by \eqref{d:sc}, for $s<\tilde s$, the space $\mathcal{K}^{s}(T)$ is defined by \begin{equation}\nonumber \mathcal{K}^{s}(T) :=\left\{u\in \mathscr{D}'([0,T)\times\mathbb{R}^d) \,;\, \|u\|_{\mathcal{K}^{s}(T')} <\infty\ \text{for any } T' \in (0,T)\right\} \end{equation} endowed with the norm \[ \|u\|_{\mathcal K^{s}(T)} :=\sup_{0\le t\le T}t^{\frac{s_c -s}{2}} \|u(t)\|_{L^q_s}. 
\] We simply write $\mathcal{K}^{s}=\mathcal{K}^{s}(\infty)$ when $T=\infty,$ if it does not cause confusion. \item In the subcritical regime, i.e. $\tilde s<s_c$, for $s<\tilde s$, the space $\tilde{\mathcal{K}}^{s}(T)$ is defined by \begin{equation}\nonumber \tilde{\mathcal{K}}^{s}(T) :=\left\{u\in \mathscr{D}'([0,T)\times\mathbb{R}^d) \,;\, \|u\|_{\tilde{\mathcal{K}}^{s}(T')} <\infty\ \text{for any } T' \in (0,T)\right\} \end{equation} endowed with the norm \[ \|u\|_{\tilde{\mathcal{K}}^{s}(T)} :=\sup_{0\le t\le T}t^{\frac{\tilde s -s}{2}} \|u(t)\|_{L^q_s}. \] \end{enumerate} \end{definition} For $t\in \mathbb{R}_+$, we introduce the heat kernel $g_t:\mathbb{R}^d\rightarrow \mathbb{R}_+$ given by \begin{equation}\label{d:h.krnl} g_t(x) := (4\pi t)^{-\frac{d}{2}} e^{-\frac{|x|^2}{4t}}, \ x \in\mathbb{R}^d. \end{equation} We denote by $\{e^{t\Delta}\}_{t \ge 0}$ the free heat semigroup defined by \[ (e^{t\Delta} \varphi) (x) := (g_t \ast \varphi) (x) \] for $\varphi \in L^1_{loc}(\mathbb{R}^d),$ where $\ast$ denotes convolution with respect to the space variable. Let $\mathcal{S}'(\mathbb{R}^d)$ denote the space of tempered distributions. For $\varphi \in \mathcal{S}'(\mathbb{R}^d)$, $e^{t\Delta}\varphi$ is defined by duality. In what follows, we denote by $C_0^\infty(\mathbb{R}^d)$ the space of all smooth functions with compact support. We also denote by $\mathcal{L}^q_s (\mathbb{R}^d)$ the closure of $C_0^\infty(\mathbb{R}^d)$ with respect to the topology of $L^q_s (\mathbb{R}^d).$ Next we give the definition of a mild solution as follows. \begin{definition}[Mild solution]\label{def:sol-A} Let $T \in (0,\infty]$, $\tilde s\le s_c$ and $u_0 \in L^q_{\tilde s} (\mathbb{R}^d)$. Let $Y := \mathcal{K}^s(T)$ if $\tilde s = s_c$ and $Y := \tilde{\mathcal{K}}^{s}(T)$ if $\tilde s<s_c.$ A function $u : [0,T] \times \mathbb{R}^d \to \mathbb{C}$ (or $\mathbb{R}$) is called an $L^q_{\tilde s} (\mathbb{R}^d)$-mild solution to \eqref{HH} with initial data $u(0)=u_0$ if it satisfies $u\in C([0,T]; L^q_{\tilde s} (\mathbb{R}^d)) \cap Y$ and the integral equation \begin{equation}\label{integral-eq} u(t,x) = e^{t\Delta} u_0(x) + \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} |u(\tau,\cdot)|^{\alpha-1}u(\tau, \cdot)\right\}(x) \, d\tau \end{equation} for any $t \in [0,T]$ and almost every $x \in \mathbb{R}^d$. The time $T$ is said to be the maximal existence time, denoted by $T_m$, if the solution cannot be extended beyond $[0,T).$ More precisely, \begin{equation}\label{d:Tm} T_m = T_m (u_0) := \sup \left\{T>0 \,;\, \left.\begin{aligned}&\text{there exists a unique solution $u$ of \eqref{HH}} \\ &\text{in } C([0,T]; L^q_{\tilde s}(\mathbb{R}^d)) \cap Y \text{ with initial data $u_0$} \end{aligned}\right. \right\}. \end{equation} We say that $u$ is global in time if $T_m = + \infty$ and that $u$ blows up in finite time otherwise. Moreover, we say that $u$ is dissipative if $T_m = + \infty$ and \[ \lim_{t\to\infty} \|u(t)\|_{L^q_{\tilde s}} = 0. \] \end{definition} The following is one of our main results on local well-posedness of \eqref{HH} in the critical space $L^q_{s_c}(\mathbb{R}^d)$. \begin{thm}[Well-posedness in the critical space] \label{t:HH.LWP} Let $d\in\mathbb{N},$ $\gamma\in\mathbb{R}$ and $\alpha\in\mathbb{R}$ satisfy \begin{equation}\label{t:HH.LWP.c0} \gamma> -\min(2,d) \quad\text{and}\quad \alpha> \alpha_F(d,\gamma). 
\end{equation} Let $q\in [1,\infty]$ be such that \begin{equation}\label{t:HH.LWP.c1} \alpha\le q \le \infty \quad\text{and}\quad \frac1{q} < \min \left\{ \frac{2}{d(\alpha-1)}, \, \frac{2}{d(\alpha-1)} + \frac{(d-2)\alpha - d -\gamma}{d(\alpha-1)^2}\right\} \end{equation} and let $s \in \mathbb{R}$ be such that \begin{equation}\label{t:HH.LWP.c2} s_c - \frac{d(\alpha-1)}{\alpha} \left(\frac{2}{d(\alpha-1)} - \frac1{q} \right) \le s < \min \left\{ s_c, \, s_c + \frac{(d-2)\alpha - d -\gamma}{\alpha(\alpha-1)} \right\}. \end{equation} Then the Cauchy problem \eqref{HH} is locally well-posed in $L^q_{s_c}(\mathbb{R}^d)$ for arbitrary data $u_0\in L^q_{s_c}(\mathbb{R}^d)$ and globally well-posed for small data $u_0\in L^q_{s_c}(\mathbb{R}^d)$. More precisely, the following assertions hold. \begin{enumerate}[$(i)$] \item {\rm (}Existence{\rm )} For any $u_0 \in L^q_{s_c}(\mathbb{R}^d)$ with $q <\infty$ (replace $L^\infty_{s_c}(\mathbb{R}^d)$ by $\mathcal{L}^\infty_{s_c}(\mathbb{R}^d)$ when $q = \infty$), there exist a positive number $T$ and an $L^q_{s_c}(\mathbb{R}^d)$-mild solution $u$ to \eqref{HH} satisfying \begin{equation}\label{t:HH.LWP.est} \|u\|_{\mathcal{K}^s(T)} \le 2 \|e^{t\Delta} u_0 \|_{\mathcal{K}^s(T)}. \end{equation} Moreover, the solution can be extended to the maximal interval $[0,T_m),$ where $T_m$ is defined by \eqref{d:Tm}. \item {\rm (}Uniqueness{\rm )} Let $T>0.$ If $u, v \in \mathcal{K}^s(T)$ satisfy \eqref{integral-eq} with $u(0) = v(0)=u_0 \in L^q_{s_c}(\mathbb{R}^d)$ (replace $L^\infty_{s_c}(\mathbb{R}^d)$ by $\mathcal{L}^\infty_{s_c}(\mathbb{R}^d)$ when $q=\infty$), then $u=v$ on $[0,T].$ \item {\rm (}Continuous dependence on initial data{\rm )} Let $u$ and $v$ be the $L^q_{s_c}(\mathbb{R}^d)$-mild solutions constructed in $(i)$ with given initial data $u_0$ and $v_0$, respectively. Let $T(u_0)$ and $T(v_0)$ be the corresponding existence times. Then there exists a constant $C$ depending on $u_0$ and $v_0$ such that the solutions $u$ and $v$ satisfy \begin{equation}\nonumber \|u-v\|_{L^\infty(0,T;L^q_{s_c}) \cap \mathcal{K}^s(T)} \le C \|u_0-v_0\|_{L^q_{s_c}} \end{equation} for some $T\le \min\{T(u_0), T(v_0)\}.$ \item {\rm (}Blow-up criterion{\rm )} If $u$ is an $L^q_{s_c}(\mathbb{R}^d)$-mild solution constructed in assertion $(i)$ and $T_m<\infty,$ then $\|u\|_{\mathcal{K}^s(T_m)}=\infty.$ \item {\rm (}Small data global existence and dissipation{\rm )} There exists $\varepsilon_0>0$ depending only on $d,\gamma,\alpha,q$ and $s$ such that if $u_0 \in \mathcal{S}'(\mathbb{R}^d)$ satisfies $\|e^{t\Delta}u_0\|_{\mathcal{K}^s}<\varepsilon_0,$ then $T_m=\infty$ and $\|u\|_{\mathcal{K}^s} \le 2\varepsilon_0.$ Moreover, the solution $u$ is dissipative. In particular, if $\|u_0\|_{L^q_{s_c}}$ is sufficiently small, then $\|e^{t\Delta}u_0\|_{\mathcal{K}^s}<\varepsilon_0.$ \end{enumerate} \end{thm} \begin{rem}[Optimality of the power $\alpha$ of the nonlinearity] By the blow-up result in \cite{Qi1998}, the condition $\alpha>\alpha_F(d,\gamma)$ is known to be optimal. Indeed, if $\alpha \le \alpha_F(d,\gamma)$, then the solutions of \eqref{HH} with positive initial data blow up in a finite time. 
\end{rem} \begin{rem}[Uniqueness $(ii)$] In $(ii)$, $T$ is arbitrary and there is no restriction on the size of the quantity $\|u\|_{\mathcal{K}^s(T)}.$ We note that this uniqueness result concerns a so-called {\sl conditional} uniqueness, since we can prove that $u\in \mathcal{K}^s(T)$ is a solution to \eqref{HH} if and only if $u \in C([0,T] ; L^q_{s_c}(\mathbb{R}^d)) \cap \mathcal{K}^s(T)$ is a solution to \eqref{HH}, provided that $u_0 \in L^q_{s_c}(\mathbb{R}^d).$ See Remark \ref{r:crt.est2} below. We note that for the Hardy case, unconditional uniqueness has been established in \cite{Tay2020} in the Lebesgue framework. \end{rem} \begin{exa}[Small data global existence $(v)$] We give a typical example of initial data $u_0$ satisfying the assumptions in $(v)$: $u_0 \in L^1_{loc}(\mathbb{R}^d)$ such that $|u_0(x)| \le c |x|^{-\frac{2+\gamma}{\alpha-1}}$ for almost all $x\in\mathbb{R}^d,$ where $c$ is a sufficiently small constant. This initial data in particular generates a self-similar solution. See Theorem \ref{t:HH.self.sim} below. \end{exa} \begin{rem}[New contributions for $\gamma\neq0$] \label{r:HH.LWP} For the H\'enon case $\gamma > 0,$ Theorem \ref{t:HH.LWP} is new concerning sign-changing solutions for singular initial data. Theorem \ref{t:HH.LWP} also gives a new result in the Hardy case ($\gamma<0$). In particular, when $s_c \equiv 0$, that is, $q = \frac{d(\alpha-1)}{2+\gamma}$, the critical space is the usual Lebesgue space $L^{\frac{d(\alpha-1)}{2+\gamma}}(\mathbb{R}^d)$; Theorem \ref{t:HH.LWP} gives a new well-posedness result in this space for $d\ge2$ and $-2<\gamma<0$. \end{rem} \begin{rem} We note that $s_c$ is always positive when $\gamma>0$, while it can be either negative or non-negative when $\gamma\le0$. In other words, the initial data $u_0$ must have stronger decay at infinity when $\gamma>0.$ \end{rem} We next discuss global existence of forward self-similar solutions to \eqref{HH}. As mentioned earlier, the result below was not known in the literature for large $\gamma>0.$ \begin{thm}[Existence of forward self-similar solutions] \label{t:HH.self.sim} Let $d\in\mathbb{N},$ $\gamma\in\mathbb{R}$ and $\alpha\in\mathbb{R}$ satisfy \eqref{t:HH.LWP.c0}. Let $\varphi(x) := \omega(x) |x|^{-\frac{2+\gamma}{\alpha-1}},$ where $\omega\in L^\infty(\mathbb{R}^d)$ is homogeneous of degree 0 and $\|\omega\|_{L^\infty}$ is sufficiently small so that $\|e^{t\Delta}\varphi\|_{\mathcal{K}^s}<\varepsilon_0$, where $\varepsilon_0$ appears in Theorem \ref{t:HH.LWP}. Then there exists a self-similar solution $u_\mathcal{S}$ of \eqref{HH} with initial data $\varphi$ such that $u_\mathcal{S}(t) \to \varphi$ in $\mathcal{S}'(\mathbb{R}^d)$ as $t\to0.$ \end{thm} The following theorem deals with the local well-posedness of \eqref{HH} in the subcritical space $L^q_{\tilde s}(\mathbb{R}^d)$ with $\tilde s< s_c.$ \begin{thm}[Well-posedness in the subcritical space] \label{t:HH.LWP.sub} Let $d\in\mathbb{N},$ $\gamma\in\mathbb{R}$ and $\alpha\in\mathbb{R}$ satisfy \eqref{t:HH.LWP.c0}. Let $\tilde s\in\mathbb{R}$ be such that \begin{equation}\label{t:HH.LWP.sub.cs} \max\left\{-\frac{d}{\alpha}, \, \frac{\gamma}{\alpha-1} \right\} <\tilde s < \frac{2+\gamma}{\alpha-1}. 
\end{equation} Let $q\in[1,\infty]$ be such that \begin{equation}\label{t:HH.LWP.sub.c1} \alpha\le q \le \infty \quad\text{and}\quad -\frac{\tilde s}{d} < \frac1{q} < \min \left\{ \frac{2}{d(\alpha-1)}, \, \frac1{\alpha} \left(1-\frac{\tilde s}{d} \right), \, \frac1{d} \left(\frac{2 + \gamma}{\alpha-1} -\tilde s \right) \right\} \end{equation} and let $s \in \mathbb{R}$ be such that \begin{equation}\label{t:HH.LWP.sub.c2} \frac{\tilde s+\gamma}{\alpha} \le s \quad\text{and}\quad - \frac{d}{q} < s < \min \left\{ \frac{d+\gamma}{\alpha} - \frac{d}{q}, \tilde s \right\}. \end{equation} Then the Cauchy problem \eqref{HH} is locally well-posed in $L^q_{\tilde{s}}(\mathbb{R}^d)$ for arbitrary data $u_0\in L^q_{\tilde{s}}(\mathbb{R}^d)$. More precisely, the following assertions hold. \begin{enumerate}[$(i)$] \item {\rm (}Existence{\rm )} For any $u_0 \in L^q_{\tilde s}(\mathbb{R}^d),$ there exist a positive number $T$ depending only on $\|u_0\|_{L^q_{\tilde s}}$ and an $L^q_{\tilde s}(\mathbb{R}^d)$-mild solution $u$ to \eqref{HH} satisfying \begin{equation}\nonumber \|u\|_{\tilde{\mathcal{K}}^s(T)} \le 2 \|e^{t\Delta} u_0 \|_{\tilde{\mathcal{K}}^s(T)}. \end{equation} Moreover, the solution can be extended to the maximal interval $[0,T_m),$ where $T_m$ is defined by \eqref{d:Tm}. \item {\rm (}Uniqueness in $\tilde{\mathcal{K}}^s(T)${\rm )} Let $T>0.$ If $u, v \in \tilde{\mathcal{K}}^s(T)$ satisfy \eqref{integral-eq} with $u(0) = v(0)=u_0,$ then $u=v$ on $[0,T].$ \item {\rm (}Continuous dependence on initial data{\rm )} For any initial data $u_0$ and $v_0$ in $L^q_{\tilde s}(\mathbb{R}^d),$ let $T(u_0)$ and $T(v_0)$ be the corresponding existence times given by $(i).$ Then there exists a constant $C$ depending on $u_0$ and $v_0$ such that the corresponding solutions $u$ and $v$ satisfy \begin{equation}\nonumber \|u-v\|_{L^\infty(0,T;L^q_{\tilde s}) \cap \tilde{\mathcal{K}}^s(T)} \le C \|u_0-v_0\|_{L^q_{\tilde s}} \end{equation} for some $T\le \min\{T(u_0), T(v_0)\}.$ \item {\rm (}Blow-up criterion{\rm )} If $T_m<\infty,$ then $\lim_{t\rightarrow T_m-0}\|u(t)\|_{L^q_{\tilde s}}=\infty.$ Moreover, the following lower bound on the blow-up rate holds: there exists a positive constant $C$ independent of $t$ such that \begin{equation}\label{t:HH.LWP:Tm} \|u(t)\|_{L^q_{\tilde s}} \ge \frac{C}{(T_m - t)^{\frac{s_c-\tilde s}{2}} } \end{equation} for $t\in (0,T_m)$. \end{enumerate} \end{thm} \begin{rem} Note that \eqref{t:HH.LWP.sub.c1} implies $\tilde s<s_c,$ i.e., $u_0 \in L^q_{\tilde s}(\mathbb{R}^d)$ is a scale-subcritical datum. \end{rem} Finally, for the scale-supercritical case, i.e. $s>s_c$, we prove the non-existence of local positive weak solutions, in the sense defined below. More precisely, we prove that there exists positive initial data $u_0$ in $L^q_s(\mathbb{R}^d)$ with $s>s_c$ that does not generate a local solution to \eqref{HH}, even in the distributional sense. \begin{definition}[Weak solution] \label{d:w.sol} Let $T>0$. 
We call a function $u:[0,T)\times \mathbb{R}^d\rightarrow \mathbb{R}$ a weak solution to the Cauchy problem \eqref{HH} if $u$ belongs to $L^{\alpha}(0,T;L^{\alpha}_{\frac{\gamma}{\alpha},loc}(\mathbb{R}^d))$ and if it satisfies the equation \eqref{HH} in the distributional sense, i.e., \begin{align}\label{weak} \notag\int_{\mathbb{R}^d} &u(T',x) \eta (T',x) \, dx-\int_{\mathbb{R}^d} u_0(x) \eta (0,x) \, dx\\ &= \int_{[0,T']\times\mathbb{R}^d} u(t ,x)(\Delta \eta + \eta_t) (t ,x) + |x|^{\gamma} |u(t, x)|^{\alpha-1} u(t,x) \,\eta(t,x) \, dx\,dt \end{align} for all $T'\in [0,T]$ and all $\eta \in C^{1,2}([0,T]\times \mathbb{R}^d)$ such that $\operatorname{supp} \eta(t, \cdot)$ is compact. \end{definition} We remark that our $L^q_{\tilde s}(\mathbb{R}^d)$-mild solutions are weak solutions in the above sense; see Lemma \ref{mildweak} in the Appendix. \begin{thm}[Nonexistence of local positive weak solutions] \label{t:nonex} Let $d\in \mathbb N$ and $\gamma \in \mathbb R$. Assume that $q\in [1,\infty],$ $\alpha\in\mathbb{R}$ and $s\in\mathbb{R}$ satisfy $\alpha>\max(1, \alpha_F(d,\gamma))$ and $s>s_c$. Then there exists an initial data $u_0 \in L^q_s (\mathbb{R}^d)$ such that the problem \eqref{HH} with $u(0)=u_0$ has no local positive weak solution. \end{thm} \bigbreak The rest of the paper is organized as follows: In Section 2, we prove the linear and nonlinear estimates in weighted Lebesgue spaces. Section 3 is devoted to the proofs of Theorems \ref{t:HH.LWP}, \ref{t:HH.LWP.sub} and \ref{t:HH.self.sim}. We then give a sketch of the proof of Theorem \ref{t:nonex} in Section 4. In the Appendix, we collect some elementary properties of our function spaces and prove Lemma \ref{mildweak}. \section{Linear and nonlinear estimates} Throughout the rest of the paper, we denote by $C$ a harmless constant that may change from line to line. \subsection{Linear estimate} The following estimate for the heat semigroup $\{e^{t\Delta}\}_{t\ge0}$ in weighted Lebesgue spaces is known except for the endpoint cases (see \cites{Tsu2011, OkaTsu2016}). \begin{lem}[Linear estimate] \label{l:wLpLq} Let $d\in\mathbb N,$ $1\le p \le q \le \infty$ and \begin{equation} \label{l:wLpLq:cs} -\frac{d}{q} < s' \le s < d\left( 1-\frac{1}{p} \right). \end{equation} In addition, assume $s\le 0$ when $p=1$ and $0\le s'$ when $q=\infty.$ In particular, \eqref{l:wLpLq:cs} is understood as $s'=s=0$ when $p=1$ and $q=\infty.$ Then there exists a positive constant $C$ depending on $d,$ $p,$ $q,$ $s$ and $s'$ such that \begin{equation}\nonumber \| e^{t\Delta} f\|_{L^q_{s'}} \le C t^{-\frac{d}2 (\frac1{p}-\frac1{q}) - \frac{s-s'}{2} } \| f\|_{L^p_{s}} \end{equation} for all $f\in L^p_{s}(\mathbb{R}^d)$ and $t>0$. Moreover, condition \eqref{l:wLpLq:cs} is optimal. \end{lem} We mainly focus on the endpoint cases in the following proof. \begin{proof} The inequality for $1< p\le q <\infty$ follows from Lemma 3.2, \cite[Proposition C.1]{OkaTsu2016} and the fact that the weight function $|x|^{s p}$ belongs to the Muckenhoupt class $A_p$ if and only if $- \frac{d}{p} < s < d(1- \frac1{p}).$ For the endpoint exponents, we divide the proof into five cases: $(i)$ $1<p<q=\infty,$ $(ii)$ $p=q=1,$ $(iii)$ $1=p<q<\infty,$ $(iv)$ $p=q=\infty$ and $(v)$ $(p,q)=(1,\infty).$ It suffices to prove the inequality for $e^{\Delta} f$ and then resort to a dilation argument as in the proof of \cite[Proposition 2.1]{BenTayWei2017}. 
Throughout the proof of this lemma, we write $a\lesssim b$ if $a \le C b$ with some constant $C,$ and abbreviate $g:=g_1$ for the heat kernel \eqref{d:h.krnl} at time $1$.\\ \underline{$(i)$ $1<p<q=\infty$}: Since $|x|^{s'} \lesssim |x-y|^{s'} + |y|^{s'}$ if $s' \ge 0,$ we have \begin{equation*} |x|^{s'} | e^{\Delta} f(x)| \lesssim \int_{\mathbb{R}^d} |x-y|^{s'} g(x-y) |f(y)| \, dy + \int_{\mathbb{R}^d} |y|^{s'} g(x-y) |f(y)| \, dy = : I_1 + I_2. \end{equation*} For $I_1,$ H\"older's inequality with $\frac1{p}+\frac1{p'}=1,$ $p>1,$ leads to \begin{equation*}\nonumber I_1 \le \left( \int_{\mathbb{R}^d} (|y|^{-s} |x-y|^{s'} g(x-y) )^{p'}\, dy \right)^{\frac1{p'}} \, \|f\|_{L^p_s} \lesssim \|f\|_{L^p_s}, \end{equation*} thanks to Lemma \ref{l:g.unfrm.bnd} $(1)$ with $q\equiv p',$ $a\equiv s$ and $b\equiv s',$ where $0\le s<\frac{d}{p'}$ and $s'\ge0.$ Similarly, H\"older's inequality and Lemma \ref{l:g.unfrm.bnd} $(2)$ with $q\equiv p'$ and $c\equiv s-s'$ yield \begin{align*}\nonumber I_2 & \le \left( \int_{\mathbb{R}^d} (|y|^{-(s-s')} g(x-y) )^{p'}\, dy \right)^{\frac1{p'}} \, \|f\|_{L^p_s} \lesssim \|f\|_{L^p_s}, \end{align*} where $0\le s-s' < \frac{d}{p'}.$ Thus, $\| e^{\Delta} f\|_{L^{\infty}_{s'}} \lesssim \| f\|_{L^p_{s}}$ provided that $0 \le s' \le s < d\left(1-\frac1{p}\right).$ \underline{$(ii)$ $p=q=1$}: We have $|y|^{-s} \lesssim |x-y|^{-s} + |x|^{-s}$ if $s\le 0$, and thus \begin{align*} \|e^{\Delta} f\|_{L^1_{s'}} &\lesssim \int_{\mathbb{R}^d} |x|^{s'} \int_{\mathbb{R}^d} g(x-y) |x-y|^{-s} |y|^{s} |f(y)| \,dy \, dx \\ &\qquad\qquad+ \int_{\mathbb{R}^d} |x|^{s'-s} \int_{\mathbb{R}^d} g(x-y) |y|^{s} |f(y)| \,dy \, dx \\ &\lesssim \int_{\mathbb{R}^d} \left( \int_{\mathbb{R}^d} |x|^{s'} g(x-y) |x-y|^{-s} \, dx \right) |y|^{s} |f(y)| \,dy\\ &\qquad\qquad+ \int_{\mathbb{R}^d} \left( \int_{\mathbb{R}^d} |x|^{s'-s} g(x-y) \, dx\right) |y|^{s} |f(y)| \,dy \ \lesssim \|f\|_{L^1_s} \end{align*} thanks to Fubini's theorem and Lemma \ref{l:g.unfrm.bnd} with $q\equiv 1,$ $a\equiv -s',$ $b\equiv -s$ and $c\equiv s-s',$ where $0 \le -s' < d,$ $0 \le -s$ and $0 \le s-s' < d.$ Thus, $\| e^{\Delta} f\|_{L^{1}_{s'}} \lesssim \| f\|_{L^1_{s}}$ provided that $-d<s'\le s \le 0.$ \underline{$(iii)$ $1=p < q < \infty$}: By H\"older's inequality with $1=\frac1{q}+\frac1{q'},$ $q<\infty,$ we have \begin{align*} |e^{\Delta} f(x)| &\le \left( \int_{\mathbb{R}^d} |y|^{-sq} g(x-y)^q |y|^s |f(y)| \, dy \right)^{\frac1{q}} \|f\|_{L^1_s}^{\frac1{q'}} \end{align*} for $s\le 0.$ Taking the $L^q_{s'}(\mathbb{R}^d)$-norm of both sides, we obtain \begin{align*} \|e^{\Delta} f\|_{L^q_{s'}} &\le \left( \int_{\mathbb{R}^d} |x|^{s'q} \left( \int_{\mathbb{R}^d} |y|^{s(1-q)} g(x-y)^q |f(y)| \, dy \right) dx \right)^{\frac1{q}} \|f\|_{L^1_s}^{\frac1{q'}}. 
\end{align*} Since $|y|^{-qs} \lesssim |x-y|^{-qs} + |x|^{-qs}$ if $s<0,$ Fubini's theorem and Lemma \ref{l:g.unfrm.bnd} with $q\equiv q,$ $a\equiv -s',$ $b\equiv -s$ and $c\equiv s-s'$ yield \begin{align*} \int_{\mathbb{R}^d} |x|^{s'q} &\left( \int_{\mathbb{R}^d} |y|^{-sq} g(x-y)^q |y|^s |f(y)| \, dy \right) dx \\ &\lesssim \int_{\mathbb{R}^d} |x|^{s'q} \left( \int_{\mathbb{R}^d} |x-y|^{-qs} g(x-y)^q |y|^{s} |f(y)| \, dy \right) dx \\ &\qquad\qquad + \int_{\mathbb{R}^d} |x|^{-(s-s')q} \left( \int_{\mathbb{R}^d} g(x-y)^q |y|^{s} |f(y)| \, dy \right) dx \\ &\lesssim \int_{\mathbb{R}^d} |y|^{s} |f(y)| \left( \int_{\mathbb{R}^d} (|x|^{s'} |x-y|^{-s} g(x-y) )^q \, dx \right) dy \\ &\qquad\qquad + \int_{\mathbb{R}^d} |y|^{s} |f(y)| \left( \int_{\mathbb{R}^d} (|x|^{-(s-s')} g(x-y))^q \, dx \right) dy \lesssim \|f\|_{L^1_s}, \end{align*} where $0\le -s' < \frac{d}{q},$ $0\le -s$ and $0 \le s-s' < \frac{d}{q}.$ Thus, $\| e^{\Delta} f\|_{L^{q}_{s'}} \lesssim \| f\|_{L^1_{s}}$ provided that $-\frac{d}{q}<s'\le s \le 0.$ \underline{$(iv)$ $p=q=\infty$}: Since $|x|^{s'} \lesssim |x-y|^{s'} + |y|^{s'}$ if $s' \ge 0,$ we have \begin{align*} |x|^{s'} &|e^{\Delta} f(x)| \le |x|^{s'} \int_{\mathbb{R}^d} |y|^{-s} g(x-y) \, dy \, \|f\|_{L^\infty_s}\\ &\lesssim \left( \int_{\mathbb{R}^d} |y|^{-s} |x-y|^{s'} g(x-y) \, dy + \int_{\mathbb{R}^d} |y|^{s'-s} g(x-y) \, dy \right) \|f\|_{L^\infty_s} \lesssim \|f\|_{L^\infty_s} \end{align*} thanks to Lemma \ref{l:g.unfrm.bnd} with $q\equiv 1,$ $a\equiv s,$ $b\equiv s'$ and $c\equiv s-s',$ where $0\le s< d,$ $0\le s'$ and $0\le s-s' < d.$ Thus, $\| e^{\Delta} f\|_{L^{\infty}_{s'}} \lesssim \| f\|_{L^{\infty}_{s}}$ provided that $0\le s'\le s <d.$ The case $(v)$ $(p,q) = (1,\infty)$ is trivial. This completes the proof of the endpoint estimates. \smallbreak Next, we prove the optimality of \eqref{l:wLpLq:cs} for $1< p \le q < \infty$ by contradiction. Suppose that the inequality holds for some $s' \le -\frac{d}{q}.$ We notice that every function $h$ in $L^q_{s'}(\mathbb{R}^d)$ must satisfy $\displaystyle \liminf_{|x|\to0} |x|^{\frac{d}{q}+s'} |h(x)| =0$ thanks to Corollary \ref{c:wLp.sg.dcy} in the Appendix. In particular, we have $ \displaystyle \liminf_{|x|\to0} |h(x)| =0 $ as $0 \le -s'-\frac{d}{q}.$ Since $0<\frac{d}{p}+s < d,$ the function $f$ defined by \[ f(x) := \left\{\begin{aligned} &C, &&|x|\le 1\\ &0, &&\text{else}, \end{aligned}\right. \] where $C$ is a positive constant, belongs to $L^p_s(\mathbb{R}^d).$ However, clearly \[ \displaystyle \liminf_{|x|\to0} |e^{t\Delta} f(x)| \neq 0, \] which implies that $e^{t\Delta} f \notin L^{q}_{s'}(\mathbb{R}^d)$ and leads to a contradiction. The optimality of the upper bound in \eqref{l:wLpLq:cs} is based on the fact that the space $L^p_s(\mathbb{R}^d)$ contains functions that are not in $L^1_{loc}(\mathbb{R}^d)$ if $d\left( 1-\frac{1}{p} \right)< s.$ Let \[ f(x) := \left\{\begin{aligned} &|x|^{-d}, &&|x|\le 1\\ &0, &&\text{else}, \end{aligned}\right. \] so that it belongs to $L^p_s(\mathbb{R}^d)$, as $p(d-s)<d$ (note that the space $L^p_s(\mathbb{R}^d)$ is defined for all measurable functions). 
A standard argument then shows that $e^{t\Delta} f$ does not make sense for this function $f.$ Indeed, for every $t>0$ and every $x$ with $|x|\le 1,$ we have \begin{equation*} e^{t\Delta} f(x) = (4\pi t)^{-\frac{d}2} \int_{|y|\le 1} e^{\frac{-|x-y|^2}{4t}} |y|^{-d} \, dy \ge (4\pi t)^{-\frac{d}2} \int_{|y|\le 1} e^{-\frac{1}{t}} |y|^{-d} \, dy = \infty, \end{equation*} where we have used $|x-y|\le 2.$ Thus $e^{t\Delta}$ is not well-defined on $L^p_s(\mathbb{R}^d)$ if $d\left( 1-\frac{1}{p} \right)< s.$ When $s= d(1-\frac1{p}),$ it suffices to take \[ f(x) := \left\{\begin{aligned} &|x|^{-d} \left( \log\left(e+ \frac1{|x|}\right) \right)^{-\frac{a}{p}}, &&|x|\le 1,\\ &0, &&\text{else}, \end{aligned}\right. \] where $p\ge a>1,$ and to show that $e^{t\Delta} f$ is not well-defined for this function by carrying out the same argument as above. This concludes the proof of the lemma. \end{proof} \subsection{Nonlinear estimates} Given $u_0\in L^q_{s_c}(\mathbb{R}^d)$ in the critical regime (resp. $u_0\in L^q_{\tilde{s}}(\mathbb{R}^d)$ in the subcritical regime) and $T>0,$ let us define a map $\Phi : u \mapsto \Phi(u)$ on $\mathcal{K}^s(T)$ (resp. $\tilde{\mathcal{K}}^s(T)$) by \begin{equation}\label{map} \Phi(u) (t) := e^{t\Delta} u_0 + N(u)(t) \end{equation} with \begin{equation}\label{mapN} N(u)(t) := \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(u(\tau,\cdot)) \right\} d\tau \quad\text{and}\quad F(u) := |u|^{\alpha-1}u. \end{equation} \subsubsection{Critical case} The following are the stability and contraction estimates in the critical regime. Assertion $(2)$ below for $\theta<1$ is not required in the proof of existence but is used in the proof of uniqueness. \begin{lem} \label{l:Kato.est} Let $T \in (0,\infty]$ and $d\in\mathbb{N}.$ Let $\gamma\in\mathbb{R}$ and $\alpha\in\mathbb{R}$ satisfy \eqref{t:HH.LWP.c0}. \begin{enumerate}[$(1)$] \item Let $q\in [1,\infty]$ be such that \begin{equation}\label{l:Kato.est.c1} \alpha\le q \le \infty \quad\text{and}\quad \frac1{q} < \min \left\{ \frac{2}{d(\alpha-1)}, \, \frac{2}{d(\alpha-1)} + \frac{(d-2)\alpha - d -\gamma}{d\alpha (\alpha-1)} \right\}. \end{equation} Let $s \in \mathbb{R}$ be such that \begin{equation}\label{l:Kato.est.c2} \frac{\gamma}{\alpha-1}\le s \quad\text{and}\quad \max\left\{- \frac{d}{q}, \, s_c - \frac2{\alpha} \right\} < s < \min \left\{ s_c, \, s_c + \frac{(d-2)\alpha - d -\gamma}{\alpha(\alpha-1)} \right\}, \end{equation} where $s_c$ is as in \eqref{d:sc}. Then there exists a positive constant $C_0$ depending only on $d,$ $\alpha,$ $\gamma,$ $q$ and $s$ such that the map $N$ defined by \eqref{mapN} satisfies \begin{equation}\label{l:Kato.est1} \|N(u)\|_{\mathcal{K}^s(T)} \le C_0 \|u\|_{\mathcal{K}^s(T)}^{\alpha} \end{equation} for all $u \in \mathcal{K}^s(T).$ \item Let $q\in [1,\infty]$ be such that \begin{equation}\label{l:Kato.est.c1'} \begin{aligned} &\alpha \le q \le \infty, \\ \text{and}\quad &\frac1{q} < \min \left\{ \frac2{d(\alpha-1)},\, \frac2{d(\alpha-1)} + \frac{\theta(d-2)(\alpha-1) - 2 - \gamma} {d(\alpha-1)(1+\theta(\alpha-1))} \right\}, \end{aligned} \end{equation} where $\theta \in (0,1]$ ($\frac1{2+\gamma} < \theta$ if $d=1$). Let $s \in \mathbb{R}$ be such that \begin{equation}\label{l:Kato.est.c2'} \begin{aligned} & s_c - \frac{d}{\theta} \left( \frac{2}{d(\alpha-1)} -\frac{1}{q} \right)\le s \\ \text{and}\quad & \max\left\{ -\frac{d}{q}, \, s_c - \frac2{1+\theta(\alpha-1)} \right\} < s < \min\left\{ s_c, \, s_c + \frac{(d-2)\alpha-d-\gamma}{(1+\theta(\alpha-1))(\alpha-1)} \right\}. 
\end{aligned} \end{equation} Then there exists a positive constant $C_1$ depending only on $d,$ $\alpha,$ $\gamma,$ $q,$ $s$ and $\theta$ such that the map $N$ defined by \eqref{mapN} satisfies \begin{equation}\label{l:Kato.est2} \begin{aligned} \|N(u) - N(v)\|_{\mathcal{K}^s(T)} \le C_1 &\left(\|u\|_{\mathcal{K}^s(T)} +\|v\|_{\mathcal{K}^s(T)} \right)^{\theta(\alpha-1)} \\ &\times\left(\|u\|_{L^\infty(0,T; L^q_{s_c}) } +\|v\|_{L^\infty(0,T; L^q_{s_c}) } \right)^{(1-\theta)(\alpha-1)} \|u-v\|_{\mathcal{K}^s(T)} \end{aligned} \end{equation} for all $u,v \in \mathcal{K}^s(T) \cap L^\infty(0,T ; L^q_{s_c}(\mathbb{R}^d))$ ($u,v \in \mathcal{K}^s(T)$ if $\theta = 1$). \end{enumerate} \end{lem} \begin{rem} Note that \eqref{l:Kato.est.c1'} and \eqref{l:Kato.est.c2'} for $\theta=1$ are equivalent to \eqref{l:Kato.est.c1} and \eqref{l:Kato.est.c2}, respectively. The estimate \eqref{l:Kato.est2} fails for $\theta=0$, as $C_1$ diverges as $\theta\to0.$ \end{rem} \begin{proof} We first prove \eqref{l:Kato.est1}. We have \begin{align*} \|N(u)(t)\|_{L^q_s} &\le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 \{(\alpha-1)s - \gamma\}} \| |\cdot|^{\gamma} F(u(\tau)) \|_{L^{\frac{q}{\alpha}}_{\sigma}} d\tau \end{align*} by Lemma \ref{l:wLpLq}, applied with $q\equiv q,$ $p\equiv \frac{q}{\alpha},$ $s'\equiv s$ and $s\equiv \sigma := \alpha s-\gamma,$ provided that $1\le \frac{q}{\alpha} \le q \le \infty$ and $-\frac{d}{q} < s \le \alpha s -\gamma < d(1-\frac{\alpha}{q}),$ i.e., \begin{equation}\label{l:Kato.est:pr1} \alpha \le q \le \infty, \quad \frac{\gamma}{\alpha-1} \le s \quad\text{and}\quad -\frac{d}{q} < s < \frac{\gamma+d}{\alpha}-\frac{d}{q}. \end{equation} As $\| |\cdot|^{\gamma} F(u) \|_{L^{\frac{q}{\alpha}}_{\sigma}}= \| u \|_{L^q_s}^{\alpha},$ we have \begin{align*} \|N(u)(t)\|_{L^q_s} &\le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 \{(\alpha-1)s - \gamma\}} \tau^{-\frac{(s_c-s)\alpha}{2}} d\tau \times \|u \|_{\mathcal{K}^s(T)}^{\alpha}, \end{align*} where the last integral is bounded by \begin{equation*} t^{-\frac{s_c-s}2} B\left(\frac{\alpha-1}2 (s_c -s), 1- \frac{(s_c-s)\alpha}2\right), \end{equation*} where $B:(0,\infty)^2\rightarrow \mathbb{R}_{>0}$ is the beta function given by $B(x,y):=\int_0^1t^{x-1}(1-t)^{y-1}dt$, which is convergent if and only if \begin{equation}\label{l:Kato.est:pr2} s_c - \frac2{\alpha} < s < s_c. \end{equation} Gathering \eqref{l:Kato.est:pr1} and \eqref{l:Kato.est:pr2}, we obtain condition \eqref{l:Kato.est.c2}. For such an $s$ to exist, it suffices to take $\gamma,$ $\alpha$ and $q$ so that conditions \eqref{t:HH.LWP.c0} and \eqref{l:Kato.est.c1} are met. \smallbreak We next show \eqref{l:Kato.est2}. 
Since there exists a constant $C=C(\alpha)$ such that \begin{equation}\label{diff.pt.est} |F(u)-F(v)| \le C (|u|^{\alpha-1} + |v|^{\alpha-1})|u-v| \quad\text{for all} \quad u,v \in \mathbb{C}, \end{equation} we have \begin{align*} \|N(u)(t) - N(v)(t)\|_{L^q_s} &\le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 \{(\alpha-1) (\theta s + (1-\theta) s_c) -\gamma\}} \\ & \quad \times \left\| |\cdot|^{\gamma} (|u|^{\alpha-1} + |v|^{\alpha-1})|u-v| \right\|_{L^{\frac{q}{\alpha}}_{\sigma}} d\tau, \end{align*} thanks to Lemma \ref{l:wLpLq} with $q\equiv q,$ $p\equiv \frac{q}{\alpha},$ $s\equiv s$ and $s'\equiv \sigma := (\alpha-1) (\theta s + (1-\theta) s_c) +s-\gamma,$ provided that $1\le \frac{q}{\alpha} \le q \le \infty$ and $-\frac{d}{q} < s \le (\alpha-1) (\theta s + (1-\theta) s_c) +s-\gamma < d(1-\frac{\alpha}{q}),$ $\theta \in (0,1],$ i.e., \begin{equation}\label{l:Kato.est:pr1'} \begin{aligned} & \alpha \le q \le \infty, \quad s_c - \frac{d}{\theta} \left( \frac{2}{d(\alpha-1)} -\frac{1}{q} \right) \le s\\ \text{and}\quad & -\frac{d}{q} < s < s_c + \frac1{1+\theta(\alpha-1)} \left( d-2 -\frac{2+\gamma}{\alpha-1} \right). \end{aligned} \end{equation} By H\"older's inequality with $\frac{\alpha}{q} = \frac{\theta(\alpha-1)}{q} + \frac{(1-\theta)(\alpha-1)}{q} + \frac1{q},$ we have \begin{align*} & \left\| |\cdot|^{\gamma} (|u|^{\alpha-1} + |v|^{\alpha-1})|u-v| \right\|_{L^{\frac{q}{\alpha}}_{\sigma}} \\ &\le \left( \|u\|_{L^q_s} + \|v\|_{L^q_s} \right)^{\theta(\alpha-1)} \left( \|u\|_{L^q_{s_c}} + \|v\|_{L^q_{s_c}} \right)^{(1-\theta)(\alpha-1)} \, \|u-v\|_{L^q_s}. \end{align*} Thus, \begin{align*} &\|N(u)(t) - N(v)(t)\|_{L^q_s} \\ &\le C t^{-\frac{s_c-s}2} B\left(\theta\frac{\alpha-1}2 (s_c -s), 1- \frac{\theta (\alpha -1)+ 1}2 (s_c-s)\right) \\ &\times \left(\|u\|_{\mathcal{K}^s(T)} +\|v\|_{\mathcal{K}^s(T)} \right)^{\theta(\alpha-1)} \left(\|u\|_{L^\infty(0,T; L^q_{s_c}) } +\|v\|_{L^\infty(0,T; L^q_{s_c}) } \right)^{(1-\theta)(\alpha-1)} \|u-v\|_{\mathcal{K}^s(T)} \\ \end{align*} in which the last beta function is convergent if $\theta>0$ and \begin{equation}\label{l:Kato.est:pr2'} s_c - \frac2{\theta(\alpha-1)+1} < s < s_c. \end{equation} Gathering \eqref{l:Kato.est:pr1'} and \eqref{l:Kato.est:pr2'}, we deduce that the restrictions for $s$ are \eqref{l:Kato.est.c2'}. Consequently, for such an $s$ to exist, it suffices to take $q$ such that \eqref{l:Kato.est.c1'}. Finally, for such a $q$ to exist, one must have $0<\frac1{d(1+\theta(\alpha-1))} \{ \frac2{\alpha-1} + \theta ( d- \frac{2+\gamma}{\alpha-1} ) \},$ i.e., $\alpha>1+\frac{2+\gamma}{d} - \frac2{\theta d}$ and $0 < \frac2{d(\alpha-1)},$ both of which hold thanks to \eqref{t:HH.LWP.c0}. This concludes the proof of the lemma. \end{proof} The following is the stability estimate for the critical norm. \begin{lem} \label{l:crt.est} Let $T \in (0,\infty]$ and $d\in\mathbb{N}.$ Let $\gamma\in\mathbb{R}$ and $\alpha\in\mathbb{R}$ satisfy \eqref{t:HH.LWP.c0} . Let $q\in [1,\infty]$ be such that \begin{equation}\label{l:crt.est.c1} \alpha\le q \le \infty \quad\text{and}\quad \frac1{q} < \min \left\{ \frac{2}{d(\alpha-1)}, \, \frac{2}{d(\alpha-1)} + \frac{(d-2)\alpha - d -\gamma}{d(\alpha-1)^2} \right\} \end{equation} and let $s \in \mathbb{R}$ be such that \begin{equation}\label{l:crt.est.c2} s_c - \frac{d(\alpha-1)}{\alpha} \left(\frac{2}{d(\alpha-1)} - \frac1{q} \right) \le s < \min \left\{ s_c, \, s_c + \frac{(d-2)\alpha - d -\gamma}{\alpha(\alpha-1)} \right\}, \end{equation} where $s_c$ is as in \eqref{d:sc}. 
Then there exists a positive constant $C_2$ depending only on $d,$ $\alpha,$ $\gamma,$ $q$ and $s$ such that the map $N$ defined by \eqref{mapN} satisfies \begin{equation}\nonumber \|N(u)\|_{L^\infty(0,T ; L^q_{s_c})} \le C_2 \|u\|_{\mathcal{K}^s(T)}^{\alpha} \end{equation} for all $u,v \in \mathcal{K}^s(T).$ \end{lem} \begin{proof} Let $T>0$ and $u,v \in \mathcal{K}^s(T).$ We have \begin{align*} \|N(u)(t)\|_{L^q_{s_c}} &\le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 (\alpha s -\gamma- s_c)} \| u(\tau) \|_{L^q_s}^{\alpha} d\tau \\ &\le C B\left(\frac{\alpha}2 (s_c-s), 1 - \frac{(s_c-s)\alpha}2 \right) \times \|u \|_{\mathcal{K}^s(T)}^{\alpha}, \end{align*} thanks to Lemma \ref{l:wLpLq} with $q\equiv q,$ $p\equiv \frac{q}{\alpha},$ $s\equiv s_c$ and $s'\equiv \alpha s-\gamma,$ provided that $1\le \frac{q}{\alpha} \le q \le \infty$ and $-\frac{d}{q} < s_c \le \alpha s -\gamma < d(1-\frac{\alpha}{q}),$ i.e., \begin{equation}\nonumber -2<\gamma, \quad \alpha \le q \le \infty \quad\text{and}\quad \frac{s_c+\gamma}{\alpha} \le s < \frac{d+\gamma}{\alpha}-\frac{d}{q}. \end{equation} The final beta function is convergent if \eqref{l:Kato.est:pr2} holds. Since $s_c - \frac2{\alpha} \le \frac{s_c+\gamma}{\alpha},$ the restrictions on $s$ are \eqref{l:crt.est.c2}. For such an $s$ to exist, $q$ must satisfy \eqref{l:crt.est.c1} in addition to $\alpha\le q \le\infty.$ Indeed, $\frac{s_c + \gamma}{\alpha}<s_c$ is equivalent to $\frac1{q} < \frac2{d(\alpha-1)}$ and $\frac{s_c+\gamma}{\alpha} < s_c$ is equivalent to $\frac1{q} < \frac1{\alpha-1} \left(1-\frac{2+\gamma}{d(\alpha-1)} \right).$ This completes the proof of the lemma. \end{proof} \begin{rem} \label{r:crt.est2} Note that the above lemma along with Lemma \ref{l:wLpLq} imply that a solution $u \in \mathcal{K}^s(T)$ yields the regularity $u\in C([0,T] ; L^q_{s_c}(\mathbb{R}^d)),$ if $u_0 \in L^{q}_{s_c}(\mathbb{R}^d).$ Thus, if we allow the abuse of notation, the equivalence $\mathcal{K}^s(T) = C([0,T] ; L^q_{s_c}(\mathbb{R}^d)) \cap \mathcal{K}^s(T)$ holds as solution spaces of \eqref{HH}. \end{rem} \subsubsection{Subcritical case} The following are the stability and contraction estimates in the subcritical regime. \begin{lem} \label{l:Kato.est.sub} Let $T \in (0,\infty]$ and $d\in\mathbb{N}.$ Let $\gamma\in\mathbb{R}$ and $\alpha\in\mathbb{R}$ satisfy \eqref{t:HH.LWP.c0}. Fix $\tilde s\in\mathbb{R}$ so that \begin{equation}\label{l:Kato.est.sub.c0} \tilde s < \frac{2+\gamma}{\alpha-1}. 
\end{equation} Let $q\in [1,\infty]$ be such that \begin{equation}\label{l:Kato.est.sub.c1} \alpha\le q \le \infty \quad\text{and}\quad \frac1{q} < \min \left\{ \frac{2}{d(\alpha-1)}, \, \frac1{\alpha} \left(1 - \frac{\gamma}{d(\alpha-1)}\right), \, \frac1{d} \left(\frac{2+\gamma}{\alpha-1} -\tilde s \right)\right\} \end{equation} and let $s \in \mathbb{R}$ be such that \begin{equation}\label{l:Kato.est.sub.c2} \frac{\gamma}{\alpha-1}\le s \quad\text{and}\quad \max\left\{\tilde s - \frac2{\alpha}, \, - \frac{d}{q} \right\} < s < \min \left\{ \frac{d+\gamma}{\alpha} - \frac{d}{q}, \, s_c \right\}, \end{equation} where $s_c$ is as in \eqref{d:sc}, Then there exist positive constants $\tilde C_0$ and $\tilde C_1$ depending only on $d,$ $\alpha,$ $\gamma,$ $q,$ $\tilde s$ and $s$ such that the map $N$ defined by \eqref{mapN} satisfies \begin{equation}\label{l:Kato.est.sub1} \|N(u)\|_{\tilde{\mathcal{K}}^s(T)} \le \tilde C_0 T^{\frac{\alpha-1}2(s_c-\tilde s)} \|u\|_{\tilde{\mathcal{K}}^s(T)}^{\alpha} \end{equation} and \begin{equation}\label{l:Kato.est.sub2} \|N(u) - N(v)\|_{\tilde{\mathcal{K}}^s(T)} \le \tilde C_1 T^{\frac{\alpha-1}2(s_c-\tilde s)} \left( \|u\|_{\tilde{\mathcal{K}}^s(T)}^{\alpha-1} + \|v\|_{\tilde{\mathcal{K}}^s(T)}^{\alpha-1} \right) \|u-v\|_{\tilde{\mathcal{K}}^s(T)} \end{equation} for all $u,v \in \tilde{\mathcal{K}}^s(T).$ \end{lem} \begin{rem} Note that $\frac1{q}<\frac1{d} \left(\frac{2+\gamma}{\alpha-1} -\tilde s \right)$ in \eqref{l:Kato.est.sub.c1} amounts to $\tilde s<s_c,$ so the power of $T$ in \eqref{l:Kato.est.sub1} and \eqref{l:Kato.est.sub2} is positive. \end{rem} \begin{proof}[Proof of Lemma \ref{l:Kato.est.sub}] We have \begin{align*} \|N(u)(t)&\|_{L^q_{s}} \le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 ((\alpha-1) s -\gamma)} \| u(\tau) \|_{L^q_s}^{\alpha} d\tau \\ &\le C t^{-\frac12(s-\tilde s)} t^{\frac{\alpha-1}2(s_c-\tilde s)} B\left(\frac{(\alpha-1)}2 (s_c-s), 1 - \frac{(\tilde s-s)\alpha}2 \right) \times \|u \|_{\tilde{\mathcal{K}}^s(T)}^{\alpha}, \end{align*} thanks to Lemma \ref{l:wLpLq} with $q\equiv q,$ $p\equiv \frac{q}{\alpha},$ $s\equiv \tilde s$ and $s'\equiv \alpha s-\gamma,$ provided that $1\le \frac{q}{\alpha} \le q \le \infty$ and $-\frac{d}{q} < s \le \alpha s -\gamma < d(1-\frac{\alpha}{q}),$ i.e., \begin{equation}\nonumber \alpha \le q \le \infty, \quad \frac{\gamma}{\alpha-1} \le s \quad\text{and}\quad -\frac{d}{q} < s < \frac{d+\gamma}{\alpha}-\frac{d}{q}. \end{equation} The final beta function is convergent if $\tilde s-\frac2{\alpha}<s< s_c.$ Thus, the restrictions on $s$ are \eqref{l:Kato.est.sub.c2}. For such an $s$ to exist, $q$ must satisfy \eqref{l:Kato.est.sub.c1}. Finally, for such a $q$ to exist, we immediately see that $\tilde s$ must satisfy \eqref{l:Kato.est.sub.c0}. The proof for the difference is similar to the above so we omit the details. This completes the proof of the lemma. \end{proof} \begin{lem} \label{l:subcrt.est} Let $T \in (0,\infty]$ and $d\in\mathbb{N}.$ Let $\gamma\in\mathbb{R}$ and $\alpha\in\mathbb{R}$ satisfy \eqref{t:HH.LWP.c0}. Fix $\tilde s$ so that \eqref{t:HH.LWP.sub.cs} is satisfied. 
Let $q\in [1,\infty]$ be such that \begin{equation}\label{l:subcrt.est.c1} \alpha\le q \le \infty \quad\text{and}\quad -\frac{\tilde s}{d} <\frac1{q} <\min\left\{ \frac1{\alpha} \left(1 -\frac{\tilde s}{d}\right), \, \frac1{d} \left(\frac{2+\gamma}{\alpha-1} -\tilde s \right) \right\} \end{equation} and let $s \in \mathbb{R}$ be such that \begin{equation}\label{l:subcrt.est.c2} \frac{\tilde s+\gamma}{\alpha} \le s < \min \left\{ \frac{d+\gamma}{\alpha} - \frac{d}{q}, \tilde s \right\}, \end{equation} where $s_c$ is as in \eqref{d:sc}. Then there exists a positive constant $\tilde C_2$ depending only on $d,$ $\gamma,$ $\alpha,$ $\tilde s,$ $q$ and $s$ such that the map $N$ defined by \eqref{mapN} satisfies \begin{equation}\nonumber \|N(u)\|_{L^\infty(0,T ; L^q_{\tilde s})} \le \tilde C_2 T^{\frac{\alpha-1}2(s_c-\tilde s)} \|u\|_{\tilde{\mathcal{K}}^s(T)}^{\alpha} \end{equation} for all $u\in \tilde{\mathcal{K}}^s(T).$ \end{lem} \begin{proof} We have \begin{align*} \|N(u)(t)&\|_{L^q_{\tilde s}} \le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 (\alpha s -\gamma- \tilde s)} \| u(\tau) \|_{L^q_s}^{\alpha} d\tau \\ &\le C t^{\frac{\alpha-1}2(s_c-\tilde s)} B\left(\frac12 \{(\alpha-1)(s_c-s) + \tilde s-s\}, 1 - \frac{(\tilde s-s)\alpha}2 \right) \times \|u \|_{\tilde{\mathcal{K}}^s(T)}^{\alpha}, \end{align*} thanks to Lemma \ref{l:wLpLq} with $q\equiv q,$ $p\equiv \frac{q}{\alpha},$ $s\equiv \tilde s$ and $s'\equiv \alpha s-\gamma,$ provided that $1\le \frac{q}{\alpha} \le q \le \infty$ and $-\frac{d}{q} < \tilde s \le \alpha s -\gamma < d(1-\frac{\alpha}{q}),$ i.e., \begin{equation}\nonumber \alpha \le q \le \infty, \quad -\frac{\tilde s}{d} <\frac1{q} \quad\text{and}\quad \frac{\tilde s+\gamma}{\alpha} \le s < \frac{d+\gamma}{\alpha}-\frac{d}{q}. \end{equation} The final beta function is convergent if $\tilde s-\frac2{\alpha}<s<\tilde s< s_c.$ Since $\tilde s-\frac2{\alpha}<\frac{\tilde s+\gamma}{\alpha}$ (by $\tilde s < s_c < \frac{2+\gamma}{\alpha-1}$), the restrictions on $s$ are \eqref{l:subcrt.est.c2}. For such an $s$ to exist, $q$ must satisfy \eqref{l:subcrt.est.c1} and $\frac{\gamma}{\alpha-1} < \tilde s.$ Finally, for such a $q$ to exist, $\tilde s$ must satisfy \eqref{t:HH.LWP.sub.cs} since \[ -\frac{d}{\alpha-1} <\max\left\{ -\frac{d}{\alpha}, \, \frac{\gamma}{\alpha-1} \right\} \quad\text{and}\quad \frac{2+\gamma}{\alpha-1} < d. \] This completes the proof of the lemma. \end{proof} \subsubsection{Upgrade of regularity} The following lemma is used to show the regularity of the $L^q_{\tilde{s}}(\mathbb{R}^d)$-mild solution. \begin{lem} \label{l:b.strap} Let $p,q \in [1,\infty]$ and $s,s' \in \mathbb{R}.$ Under condition \eqref{t:HH.LWP.c0}, let pairs $(q,s)$ and $(p,s')$ be such that either \begin{equation} \label{l:b.strap:c} \begin{aligned} &\alpha \le q < \infty, \quad \max\left\{- \frac{d}{q}, \, \frac{1}{\alpha}\left( \gamma- \frac{d}{q} \right)\right\} < s <\frac{d+\gamma}{\alpha} - \frac{d}{q}, \\ &\max\left\{0, \, -\frac{s}{d}, \, \frac{\gamma-\alpha s}{d} \right\} <\frac1{p} \le \frac{1}{q}, \quad -\frac{d}{p} < s' \le \min\{s, \, \alpha s -\gamma\}. \end{aligned} \end{equation} or \begin{equation} \label{l:b.strap:p=inf} \begin{aligned} &\alpha < q \le \infty, \quad \max\left\{0,\, \frac{\gamma}{\alpha}\right\} \le s <\frac{d+\gamma}{\alpha} - \frac{d}{q},\\ &p= \infty, \quad 0\le s' \le \min\{s, \, \alpha s -\gamma\}. 
\end{aligned} \end{equation} Let $u$ be the $L^q_{s_c}(\mathbb{R}^d)$-mild solution of \eqref{HH} with initial data $u_0 \in \mathcal{S}'(\mathbb{R}^d)$ on $[0,T_m)$ such that \begin{equation}\nonumber \sup_{t\in [0,T_m)} t^{\frac{s_c(q)-s}{2} } \|u(t)\|_{L^q_{s}} < \infty. \end{equation} Then it follows that \begin{equation}\nonumber \sup_{t\in [0,T_m)} t^{\frac{s_c(p)-s'}{2} } \|u(t)\|_{L^p_{s'}} < \infty. \end{equation} \end{lem} \begin{proof} We use a similar argument as in \cite{SnoTayWei2001} (See also \cite{BenTayWei2017}). Let \[ A := \sup_{t\in [0,T_m)} t^{\frac{s_c(q)-s}2}\|u(t)\|_{L^{q}_{s}}<\infty. \] Let $t\in (0,T_m)$. We use the integral representation \begin{equation}\nonumber u(t) = e^{\frac{t}2 \Delta} u(t/2) + \int_{\frac{t}{2}}^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(u(\tau,\cdot)) \right\} d\tau. \end{equation} It follows from Lemma \ref{l:wLpLq} with $q\equiv p,$ $p\equiv q,$ $s'\equiv s'$ and $s\equiv s,$ that \begin{align*} \|e^{\frac{t}2\Delta} u(t/2)\|_{L^{p}_{s'}} \le C t^{-\frac{d}{2}(\frac1{q}-\frac1{p}) - \frac{s-s'}2} \|u(t/2)\|_{L^{q}_{s}} \le C t^{-\frac{1}{2}(\frac{2+\gamma}{\alpha-1}-\frac{d}{p} - s')} A \end{align*} if \begin{equation}\label{l:b.strap:pr0} \begin{aligned} &1\le q \le p \le \infty \quad\text{and}\quad -\frac{d}{p} < s' \le s < d\left(1-\frac1{q}\right) \quad\left(0\le s' \text{ if } p=\infty\right). \\ \end{aligned} \end{equation} On the other hand, \begin{equation}\nonumber \begin{aligned} \|&N(u)(t)\|_{L^{p}_{s'}} \le C \int_{\frac{t}{2}}^t (t-\tau)^{-\frac{d}2(\frac{\alpha}{q}-\frac1{p}) - \frac{\alpha s -\gamma-s'}2} \| u(\tau) \|_{L^q_s}^{\alpha} d\tau \\ &\le CA^{\alpha} \int_{\frac{t}{2}}^t (t-\tau)^{-\frac{d}2(\frac{\alpha}{q}-\frac1{p}) - \frac{\alpha s -s'-\gamma}2} \tau^{-\frac{(s_c-s)\alpha}{2}} d\tau \\ &= CA^{\alpha} t^{-\frac12(\frac{2+\gamma}{\alpha-1} - \frac{d}{p} - s')} \int_{\frac{1}{2}}^1 (1-\tau)^{-\frac{d}2(\frac{\alpha}{q}-\frac1{p}) - \frac{\alpha s -s'-\gamma}2} \tau^{-\frac{(s_c-s)\alpha}{2}} d\tau, \end{aligned} \end{equation} thanks to Lemma \ref{l:wLpLq} with $q\equiv p,$ $p\equiv \frac{q}{\alpha},$ $s'\equiv s'$ and $s\equiv \alpha s-\gamma,$ provided that \begin{equation}\label{l:b.strap:pr1} \begin{aligned} &1\le \frac{q}{\alpha} \le p \le \infty \quad\text{and}\quad -\frac{d}{p} < s' \le \alpha s-\gamma < d\left(1-\frac{\alpha}{q}\right) \quad\left(0\le s' \text{ if } p=\infty\right).\\ \end{aligned} \end{equation} i.e., $\alpha \le q \le \infty,$ $\frac1{p} \le \frac{\alpha}{q},$ $-\frac{d}{p} < s' \le \alpha s-\gamma$ ($0\le s'\le \alpha s-\gamma$ if $p=\infty$) and $s < \frac{d+\gamma}{\alpha} - \frac{d}{q}.$ The final integral is convergent if \begin{equation}\label{l:b.strap:pr2} 1-\frac{d}2\left(\frac{\alpha}{q}-\frac1{p}\right) - \frac{\alpha s-s' -\gamma}2>0, \quad\text{i.e.,}\quad \alpha\left( \frac{d}{q} + s\right) - 2 - \gamma - \frac{d}{p}<s'. \end{equation} Thus, we have \begin{equation}\nonumber \sup_{t\in [0,T_m)} t^{\frac{s_c(p)-s'}{2} } \|u(t)\|_{L^p_{s'}} \le C (A + A^{\alpha}) \end{equation} under \eqref{l:b.strap:pr0}, \eqref{l:b.strap:pr1} and \eqref{l:b.strap:pr2}. Since $s < \frac{d+\gamma}{\alpha} - \frac{d}{q}$ from \eqref{l:b.strap:pr1}, we have $\alpha( \frac{d}{q} + s) - 2 - \gamma - \frac{d}{p} < - \frac{d}{p}.$ Thus, the conditions for $s'$ are that in \eqref{l:b.strap:c}. 
By tedious but straightforward computations, we may easily see that under condition \eqref{t:HH.LWP.c0}, the necessary and sufficient conditions of \eqref{l:b.strap:pr0}, \eqref{l:b.strap:pr1} and \eqref{l:b.strap:pr2} are \eqref{l:b.strap:c} or \eqref{l:b.strap:p=inf}. Hence, the lemma is proved. \end{proof} \section{Local well-posedness and self-similar solutions} \subsection{Proof of Theorem \ref{t:HH.LWP}} In order to prove Theorem \ref{t:HH.LWP}, we prepare the following lemma. \begin{lem}\label{l:exist.crt} Let positive numbers $\rho>0$ and $M>0$ satisfy \begin{equation}\label{l:exist.crt.c0} \rho + C_0 M^\alpha \le M \quad\text{and}\quad 2 C_1 M^{\alpha-1} <1, \end{equation} where $C_0$ and $C_1$ are as in Lemma \ref{l:Kato.est}. Under conditions \eqref{t:HH.LWP.c0}, \eqref{t:HH.LWP.c1} and \eqref{t:HH.LWP.c2}, let $T\in (0,\infty]$ and $u_0 \in \mathcal{S}'(\mathbb{R}^d)$ be such that $e^{t\Delta}u_0 \in \mathcal{K}^s(T).$ If $\|e^{t\Delta}u_0\|_{\mathcal{K}^s(T)}\le \rho,$ then a solution $u$ to \eqref{HH} exists such that $u -e^{t\Delta} u_0 \in L^\infty(0,T ; L^q_{s_c}(\mathbb{R}^d)) \cap C((0,T] ; L^q_{s_c}(\mathbb{R}^d))$ and $\|u\|_{\mathcal{K}^s(T)} \le M.$ Moreover, the solution satisfies the following properties: \begin{enumerate}[$(i)$] \item $u -e^{t\Delta} u_0 \in L^\infty(0,T ; L^q_{\sigma}(\mathbb{R}^d))$ for $\sigma$ such that \begin{equation} \label{l:exist.crt.csig} s_c \le \sigma \le \alpha s -\gamma. \end{equation} \item $u -e^{t\Delta} u_0 \in C([0,T) ; L^q_{\sigma}(\mathbb{R}^d))$ and $\displaystyle \lim_{t\to0} \|u(t) -e^{t\Delta} u_0\|_{L^q_{\sigma}} = 0$ for $\sigma$ such that \eqref{l:exist.crt.csig} and $\sigma>s_c.$ \item $\displaystyle \lim_{t\to0} u(t) = u_0$ in the sense of distributions. \item Let $\gamma\ge0.$ Then the solution $u$ satisfies \[ \sup_{0<t<T} t^{\frac{s_c(p_{\theta} ) - s_{\theta}}{2}} \|u(t)\|_{L^{p_{\theta}}_{s_{\theta}}} < \infty \] where \begin{equation}\nonumber p_{\theta} = \frac{q}{\theta}, \quad \theta s \le s_{\theta} \le s \quad\text{and}\quad 0\le \theta \le 1. \end{equation} In particular, if $\gamma\ge0,$ $u(t) \in L^\infty(\mathbb{R}^d)$ for $t>0.$ \end{enumerate} \end{lem} \begin{rem}\label{r:exist} To meet \eqref{l:exist.crt.c0}, it suffices to take $M=2\rho$ and \[ M < \min\left\{ (2C_0)^{-\frac{1}{\alpha-1}}, \, (2C_1)^{- \frac{1}{\alpha-1}} \right\}. \] \end{rem} \begin{rem} If $u_0\in L^q_{s_c}(\mathbb{R}^d)$, then $u_0$ satisfies the assumptions of Lemma \ref{l:exist.crt} with $T=\infty.$ Indeed, letting $q\equiv q,$ $p\equiv q,$ $s\equiv s_c$ and $s'\equiv s$ in Lemma \ref{l:wLpLq}, we obtain \begin{equation}\nonumber \|e^{t\Delta} u_0\|_{L^q_s} \le C t^{-\frac{s_c-s}2} \|u_0\|_{L^q_{s_c}} \end{equation} provided that $-\frac{d}{q} < s\le s_c < d(1-\frac1{q}),$ i.e., $\gamma>-2$ and $\alpha>\alpha_F(d,\gamma).$ Thus, $e^{t\Delta} u_0 \in \mathcal{K}^s.$ \end{rem} \begin{proof}[Proof of Lemma \ref{l:exist.crt}] Setting the metric $d(u,v) := \|u-v\|_{\mathcal{K}^s(T)}$, we may show that $(\mathcal{K}^s(T),d)$ is a nonempty complete metric space. Let $X_M := \{ u \in\mathcal{K}^s(T) \,;\, \|u\|_{\mathcal{K}^s(T)} \le M \}$ be the closed ball in $\mathcal{K}^s(T)$ centered at the origin with radius $M$. 
We prove that the map defined in \eqref{map} has a fixed point in $X_M.$ Thanks to Lemma \ref{l:Kato.est} and \eqref{l:exist.crt.c0}, we have \begin{equation}\nonumber \begin{aligned} \|\Phi (u)\|_{\mathcal{K}^s(T)} \le \|e^{t\Delta} u_0 \|_{\mathcal{K}^s(T)} + C_0 \|u\|_{\mathcal{K}^s(T)}^\alpha \le \rho + C_0 M^\alpha \le M \end{aligned}\end{equation} and \begin{equation}\label{t:HH.LWP.pr.Lip'} \|\Phi (u)-\Phi (v)\|_{\mathcal{K}^s(T)} \le C_1 \left( \|u\|_{\mathcal{K}^s(T)}^{\alpha-1} +\|v\|_{\mathcal{K}^s(T)}^{\alpha-1} \right) \|u-v\|_{\mathcal{K}^s(T)} \le 2 C_1 M^{\alpha-1} \|u-v\|_{\mathcal{K}^s(T)} \end{equation} for any $u, v\in X_M,$ where $2 C_1 M^{\alpha-1}<1.$ These prove that $\Phi(u) \in X_M$ and that $\Phi$ is a contraction mapping in $X_M.$ Thus, Banach's fixed point theorem ensures the existence of a unique fixed point $u$ for the map $\Phi$ in $X_M,$ provided that $q$ and $s$ satisfy \eqref{l:Kato.est.c1} and \eqref{l:Kato.est.c2}. The fixed point $u$ also satisfies, by construction, the estimate $\|u\|_{\mathcal{K}^s(T)} \le M.$ Having obtained a fixed point in $\mathcal{K}^s(T)$ for some $T,$ we have $u -e^{t\Delta} u_0 \in L^\infty(0,T;L^q_{s_c}(\mathbb{R}^d))$ by Lemma \ref{l:crt.est}, provided further that \eqref{l:crt.est.c1} and \eqref{l:crt.est.c2} are satisfied. We see that $\frac1{q} < \frac{2}{d(\alpha-1)}$, $q>0,$ $\alpha>1$ and $\gamma>-2$ imply \[ \max\left\{\frac{\gamma}{\alpha-1}, \, - \frac{d}{q} \right\} < \frac{s_c+\gamma}{\alpha} = s_c - \frac{d(\alpha-1)}{\alpha} \left(\frac{2}{d(\alpha-1)} - \frac1{q} \right) \] so $\frac{s_c+\gamma}{\alpha}$ is the stronger lower bound for $s.$ Thus, $s$ must satisfy \eqref{t:HH.LWP.c2}. Combining \eqref{l:Kato.est.c1} and \eqref{l:crt.est.c1}, we end up with \begin{equation}\label{t:HH.LWP:pr1} \frac1{q} < \min \left\{ \frac{2}{d(\alpha-1)}, \, \frac1{\alpha} \left(1 - \frac{\gamma}{d(\alpha-1)}\right), \, \frac1{\alpha-1} \left(1-\frac{2+\gamma}{d(\alpha-1)} \right)\right\}, \end{equation} which in fact amounts to \eqref{t:HH.LWP.c1}. \smallbreak We next prove the assertion $(i)$--$(iii).$ Fix a solution $u \in \mathcal{K}^s(T)$ with $q$ and $s$ as in \eqref{t:HH.LWP.c1} and \eqref{t:HH.LWP.c2}. We have \begin{equation}\label{t:HH.LWP:pr2} \begin{aligned} \|N(u)(t)\|_{L^q_{\sigma}} &\le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 (\alpha s -\gamma- \sigma)} \| u(\tau) \|_{L^q_s}^{\alpha} d\tau \\ &\le C \int_0^t (t-\tau)^{-\frac{d(\alpha-1)}{2q} - \frac12 (\alpha s -\gamma- \sigma)} \tau^{-\frac{(s_c-s)\alpha}{2}} d\tau \times \|u \|_{\mathcal{K}^s(T)}^{\alpha} \\ &= C t^{\frac{\sigma-s_c}2} B\left(\frac{(\alpha-1)(s_c-s) + \sigma-s}2, \, 1 - \frac{(s_c-s)\alpha}2 \right) \times \|u \|_{\mathcal{K}^s(T)}^{\alpha}, \end{aligned} \end{equation} thanks to Lemma \ref{l:wLpLq} with $q\equiv q,$ $p\equiv \frac{q}{\alpha},$ $s' \equiv \sigma$ and $s\equiv \alpha s-\gamma,$ provided that $1\le \frac{q}{\alpha} \le q \le \infty$ and $-\frac{d}{q} < \sigma \le \alpha s -\gamma < d(1-\frac{\alpha}{q}).$ The power of $t$ in the final line is non-negative if $\sigma \ge s_c.$ The use of Lemma \ref{l:wLpLq} along with the convergence of the beta function require, in addition to \eqref{t:HH.LWP.c1} and \eqref{t:HH.LWP.c2}, that $\sigma$ satisfies \eqref{l:exist.crt.csig}. For such a $\sigma$ to exist, one needs $\frac{s_c+\gamma}{\alpha}\le s,$ which is assured by \eqref{t:HH.LWP.c2}. 
If $\sigma>s_c,$ (i.e., if $\frac{s_c+\gamma}{\alpha}< s,$) then the power of $t$ is positive, thus the right-hand side of \eqref{t:HH.LWP:pr2} goes to zero as $t\to0.$ Hence, the assertions $(ii)$ and $(iii)$ are proved. \smallbreak Finally, we prove the assertion $(iv).$ Fix a solution $u \in \mathcal{K}^s(T)$ with $q$ and $s$ as in \eqref{t:HH.LWP.c1} and \eqref{t:HH.LWP.c2}. Here, we notice that under $\gamma>0,$ the lower bound of \eqref{t:HH.LWP.c2} always satisfies \begin{equation}\nonumber \max\left\{0, \, \frac{\gamma}{\alpha}\right\} \le s_c - \frac{d(\alpha-1)}{\alpha} \left(\frac{2}{d(\alpha-1)} - \frac1{q} \right), \end{equation} which implies that the condition \eqref{l:b.strap:p=inf} of Lemma \ref{l:b.strap} is always satisfied as well. Thus, Lemma \ref{l:b.strap} immediately implies \begin{equation}\nonumber \sup_{t\in [0,T)} t^{\frac{s_c(\infty)-s'}{2} } \|u(t)\|_{L^{\infty}_{s'}} < \infty \end{equation} for \begin{equation}\nonumber 0\le s' \le \min\{s, \, \alpha s -\gamma\}. \end{equation} We also have $u\in \mathcal{K}^s(T)$ by assumption. Thus, the conclusion follows from Proposition \ref{p:wL.sp} $(3).$ \end{proof} We start by proving the uniqueness of our solution. \subsubsection{Proof of $(ii)$} Let $T>0$ be given and fixed. We prove the uniqueness in $\mathcal{K}^s(T).$ Under conditions \eqref{t:HH.LWP.c0}, \eqref{t:HH.LWP.c1} and \eqref{t:HH.LWP.c2}, let $u$ and $v$ be two solutions to \eqref{integral-eq} belonging to $C([0,T] ; L^q_{s_c}(\mathbb{R}^d)) \cap \mathcal{K}^s(T)$ with the same initial data $u_0 \in L^q_{s_c}(\mathbb{R}^d)$ ($u_0 \in \mathcal{L}^{\infty}_{s_c}(\mathbb{R}^d)$ if $q=\infty$) \footnote{We assume $u_0 \in \mathcal{L}^{\infty}_{s_c}(\mathbb{R}^d)$ if $q=\infty$ in order to utilize the density, which is needed in the proof of \eqref{id.lmt.K} and \eqref{id.lmt.crt}.} such that \begin{equation}\nonumber \|u\|_{\mathcal{K}^s(T)}+\|v\|_{\mathcal{K}^s(T)} \le K, \end{equation} for some positive constant $K.$ Let us recall that we have the following two limits at our disposal: \begin{equation}\label{id.lmt.K} \lim_{T\to0} \|e^{t\Delta} u_0\|_{\mathcal{K}^s(T)} = 0 \end{equation} and \begin{equation}\label{id.lmt.crt} \lim_{T\to0} \|u - e^{t\Delta} u_0\|_{L^\infty(0,T;L^q_{s_c})} = 0. \end{equation} The former is the well-known fact stemming from the density of $C_0^\infty(\mathbb{R}^d)$ in $L^q_{s_c}(\mathbb{R}^d)$ (See Proposition \ref{p:wL.sp} in Appendix). The latter is shown by the triangle inequality and the continuity at $t=0$ of solutions for both the linear and nonlinear problems. Let $w :=u-v.$ By \eqref{diff.pt.est}, we have \begin{equation}\nonumber |F(u)-F(v)| \le C |e^{t\Delta} u_0|^{\alpha-1} |u-v| + C (|u-e^{t\Delta} u_0|^{\alpha-1} + |v-e^{t\Delta} u_0|^{\alpha-1})|u-v|, \end{equation} which implies that $|w| \le C( I_1 + I_2 + I_3)$ (thanks to the maximum principle), where \begin{equation}\nonumber \begin{aligned} & I_1 := \int_0^{t} e^{(t-\tau)\Delta} \left\{|\cdot|^{\gamma} |e^{t\Delta} u_0|^{\alpha-1} |w| \right\} \, d\tau, \\ & I_2 := \int_0^{t} e^{(t-\tau)\Delta} \left\{|\cdot|^{\gamma} |u-e^{t\Delta} u_0|^{\alpha-1} |w| \right\} \, d\tau \\ \text{and}\quad & I_3 := \int_0^{t} e^{(t-\tau)\Delta} \left\{|\cdot|^{\gamma} |v-e^{t\Delta} u_0|^{\alpha-1} |w| \right\} \, d\tau. \end{aligned} \end{equation} Given $q$ and $s$ satisfying \eqref{t:HH.LWP.c1} and \eqref{t:HH.LWP.c2}, we may always choose $\theta$ so that \eqref{l:Kato.est.c1'} and \eqref{l:Kato.est.c2'} are satisfied. 
Indeed, \eqref{l:Kato.est.c1'} and \eqref{l:Kato.est.c2'} become \eqref{l:Kato.est.c1} and \eqref{l:Kato.est.c2} as $\theta\to1,$ respectively, which are weaker than the assumptions on $q$ and $s$ in Theorem \ref{t:HH.LWP}. The only condition that has to be considered independently is $s_c - \frac{d}{\theta} \left( \frac{2}{d(\alpha-1)} -\frac{1}{q} \right) \le s$ in \eqref{l:Kato.est.c2'} (as this is not a strict inequality), but this causes no problem since $s_c - \frac{d}{\theta} \left( \frac{2}{d(\alpha-1)} -\frac{1}{q} \right) \le s_c - \frac{d(\alpha-1)}{\alpha} \left(\frac{2}{d(\alpha-1)} - \frac1{q} \right)$ holds for any $\theta \in (0,1].$ Thus, we may use estimate \eqref{l:Kato.est2} freely for our $q$ and $s.$ By the same calculation leading to \eqref{l:Kato.est1}, we deduce that \begin{equation}\label{pr.uni.1} \|I_1\|_{\mathcal{K}^s(T)} \le C \|e^{t\Delta} u_0\|_{\mathcal{K}^s(T)}^{\alpha-1} \|w\|_{\mathcal{K}^s(T)}. \end{equation} For $I_2,$ estimate \eqref{l:Kato.est2} implies \begin{equation}\label{pr.uni.2} \begin{aligned} \|I_2\|_{\mathcal{K}^s(T)} &\le C \|u-e^{t\Delta} u_0\|_{\mathcal{K}^s(T)}^{\theta(\alpha-1)} \|u-e^{t\Delta} u_0\|_{L^\infty(0,T; L^q_{s_c}) }^{(1-\theta)(\alpha-1)} \|w\|_{\mathcal{K}^s(T)} \\ &\le C K^{\theta(\alpha-1)} \|u-e^{t\Delta} u_0\|_{L^\infty(0,T; L^q_{s_c}) }^{(1-\theta)(\alpha-1)} \|w\|_{\mathcal{K}^s(T)}. \end{aligned} \end{equation} Similarly, we have \begin{equation}\label{pr.uni.3} \begin{aligned} \|I_3\|_{\mathcal{K}^s(T)} \le C K^{\theta(\alpha-1)} \, \|v-e^{t\Delta} u_0\|_{L^\infty(0,T; L^q_{s_c}) }^{(1-\theta)(\alpha-1)} \, \|w\|_{\mathcal{K}^s(T)}. \end{aligned} \end{equation} Gathering \eqref{pr.uni.1}, \eqref{pr.uni.2} and \eqref{pr.uni.3}, we deduce that there exists some positive constant $C$ independent of $T,$ $u_0,$ $u$ and $v$ such that \begin{equation}\nonumber \|w\|_{\mathcal{K}^s(T)} \le C \mathcal{N}(T, u_0, u, v) \|w\|_{\mathcal{K}^s(T)} \end{equation} where \begin{equation}\nonumber \mathcal{N}(T, u_0, u, v) := \|e^{t\Delta} u_0\|_{\mathcal{K}^s(T)}^{\alpha-1} +\|u-e^{t\Delta} u_0\|_{L^\infty(0,T; L^q_{s_c}) }^{(1-\theta)(\alpha-1)} + \|v-e^{t\Delta} u_0\|_{L^\infty(0,T; L^q_{s_c}) }^{(1-\theta)(\alpha-1)}. \end{equation} Since $0<\theta<1,$ the above quantity goes to zero as $T$ tends to zero, thanks to \eqref{id.lmt.crt} and \eqref{id.lmt.K}. Thus, there exists some $T'$ such that \begin{equation}\nonumber \|w\|_{\mathcal{K}^s(T')} \le \frac1{2} \|w\|_{\mathcal{K}^s(T')} \end{equation} for instance, which implies the uniqueness on the interval $[0,T'].$ Set \begin{equation}\nonumber T^* = \sup \{t\in [0,T] \,; \, u(\tau) = v(\tau) , \ 0\le \tau \le t\}. 
\end{equation} The preceding argument shows that $T^*>0.$ Now assume by contradiction that $T^*<T.$ By continuity of $u$ and $v,$ we have $u(T^*) = v(T^*).$ Setting $u^*(t) = u(t+T^*)$ and $v^*(t) = v(t+T^*),$ we may express the solutions as \begin{align*}\nonumber & u^*(t) = e^{t\Delta} u(T^*) + \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(u(T^*+\tau,\cdot)) \right\} d\tau \\ \text{and} \quad & v^*(t) = e^{t\Delta} u(T^*) + \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(v(T^*+\tau,\cdot)) \right\} d\tau, \end{align*} where $0\le t < T - T^*.$ By a similar calculation as above, we may show that \begin{equation}\nonumber \|u^* - v^*\|_{\mathcal{K}^s(T)} \le \mathcal{N}(T, u(T^*), u, v) \|u^* - v^*\|_{\mathcal{K}^s(T)}, \end{equation} which implies again that there exist some $T'$ such that $u^*(t) = v^*(t)$ for $t\in [0,T'],$ i.e., $u(t) = v(t)$ for $t \in (T^*, T^* +T'),$ a contradiction. Thus, $u(t)=v(t)$ on the whole interval $[0,T].$ This completes the proof of Theorem \ref{t:HH.LWP} $(ii).$ \subsubsection{Proof of $(i)$} Let $u_0 \in L^q_{s_c}(\mathbb{R}^d)$ ($u_0 \in \mathcal{L}^{\infty}_{s_c}(\mathbb{R}^d)$ if $q=\infty$). We recall that $C_0^\infty(\mathbb{R}^d)$ is dense in the space $L^q_{s_c}(\mathbb{R}^d)$ by Proposition \ref{p:wL.sp}, which ensures the property \eqref{id.lmt.K}. Thus, there exists some real number $T$ that is small enough so that $\|e^{t\Delta} u_0\|_{\mathcal{K}^s(T)}\le \rho.$ Now Lemma \ref{l:exist.crt} asserts that \begin{equation}\nonumber \|u\|_{L^\infty(0,T \,;\, L^q_{s_c})} \le \|e^{t\Delta} u_0\|_{L^\infty(0,T \,;\, L^q_{s_c})} + C_2 \|u \|_{\mathcal{K}^s(T)}^{\alpha} \le \|u_0\|_{L^q_{s_c}} + C_2 M^{\alpha}. \end{equation} The time-continuity at $t=0$ follows from a well-known argument (see \cites{OkaTsu2016, Tsu2011} for example). Thus, $u$ is an $L^q_{s_c}(\mathbb{R}^d)$-mild solution to \eqref{HH} on $[0,T]$ such that $\|u\|_{\mathcal{K}^s(T)}\le M.$ To deduce the estimate \eqref{t:HH.LWP.est}, it suffices to take $\rho=\|e^{t\Delta} u_0\|_{\mathcal{K}^s(T)}$ and $M$ as in Remark \ref{r:exist}. Given $u_0 \in L^q_{s_c}(\mathbb{R}^d),$ let the maximal existence time $T_m = T_m (u_0)$ be defined by \eqref{d:Tm} with $\tilde s = s_c.$ By a standard argument, uniqueness ensures that the solution can be extended to the maximal interval $[0,T_m).$ \subsubsection{Proof of $(iii)$} Given two initial data $u_0, v_0 \in L^q_{s_c}(\mathbb{R}^d),$ we next show the Lipschitz continuity of the flow map. Let $u$ and $v$ be two solutions associated with the initial data $u_0$ and $v_0,$ respectively, constructed in (i) with the estimate $\|u\|_{\mathcal{K}^s(T)}\le 2\|e^{t\Delta} u_0\|_{\mathcal{K}^s(T)}.$ Let $w := u-v$ and $w_0 := u_0-v_0.$ We carry out the same calculations as before to see that there exists a positive constant $C_3$ such that \begin{align*} \|w\|_{L^\infty(0,T; L^q_{s_c})\cap \mathcal{K}^s(T)} &\le \|e^{t\Delta}w_0\|_{L^\infty(0,T; L^q_{s_c})\cap \mathcal{K}^s(T)} + C_3 \left( \|u \|_{\mathcal{K}^s(T)}^{\alpha-1} +\|v \|_{\mathcal{K}^s(T)}^{\alpha-1}\right) \|w\|_{\mathcal{K}^s(T)} \\ &\le \|w_0\|_{L^q_{s_c}} + 2 C_3 M^{\alpha-1} \|w\|_{\mathcal{K}^s(T)}, \end{align*} where $M= \max \{ \|e^{t\Delta} u_0 \|_{\mathcal{K}^s(T)}, \, \|e^{t\Delta} v_0 \|_{\mathcal{K}^s(T)}\}.$ By taking $T$ smaller if necessary ($2 C_3 M^{\alpha-1}\le \frac12$ for instance), we deduce the Lipschitz stability on the short time-interval $[0,T].$ \subsubsection{Proof of $(iv)$} We prove the blow-up criterion by a contradiction argument. 
Let $T_m<\infty$ and suppose that $\|u\|_{\mathcal{K}^s(T_m)} <\infty$ holds. Let $u$ be a maximal solution and let $t_0 \in (0,T_m),$ to be fixed later. We aim to prove there exists an $\varepsilon>0$ such that \begin{equation}\label{buc-aim} \|e^{t\Delta} u(t_0)\|_{\mathcal{K}^s(T_m-t_0+\varepsilon)} \le \rho, \end{equation} where $\rho > 0$ is the constant as in Lemma \ref{l:exist.crt}. Once \eqref{buc-aim} is proved, the solution $u$ can be smoothly extended to $T_m+\varepsilon.$ Moreover, $u$ is unique in $C([0,T_m+\varepsilon] ; L^q_{s_c}(\mathbb{R}^d)) \cap \mathcal{K}^s(T_m+\varepsilon)$ by $(ii),$ which contradicts the definition of $T_m.$ Thus, $\|u\|_{\mathcal{K}^s(T_m)}=\infty$ if $T_m<\infty.$ Let us concentrate on proving \eqref{buc-aim}. We may express the maximal solution as follows: \begin{equation}\nonumber u(t+t_0) = e^{t\Delta} u(t_0) + \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(u(t_0+\tau)) \right\} d\tau, \quad 0\le t < T_m - t_0. \end{equation} Thus, we have \begin{align*}\nonumber \|e^{t\Delta}u(t_0)&\|_{\mathcal{K}^s(T_m-t_0)} \\ &\le \|u(\cdot+t_0)\|_{\mathcal{K}^s(T_m-t_0)} + \left\| \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(u(t_0+\tau)) \right\} d\tau \right\|_{\mathcal{K}^s(T_m-t_0)}. \end{align*} For the first term, we have \begin{equation}\label{pr.iv.1} \begin{aligned} \|u(\cdot+t_0)&\|_{\mathcal{K}^s(T_m-t_0)} = \sup_{0\le t \le T_m - t_0} t^{\frac{s_c-s}2} \|u(t + t_0)\|_{L^q} = \sup_{t_0\le s\le T_m} (s-t_0)^{\frac{s_c-s}2} \|u(s)\|_{L^q} \\ &\le \left(\frac{T_m-t_0}{t_0} \right)^{\frac{s_c-s}2} \sup_{t_0 \le s\le T_m} s^{\frac{s_c-s}2} \|u(s)\|_{L^q} \le \left(\frac{T_m-t_0}{t_0} \right)^{\frac{s_c-s}2} \|u\|_{\mathcal{K}^s(T_m)}. \end{aligned} \end{equation} For the second term, Lemma \ref{l:Kato.est} yields \begin{equation}\label{pr.iv.2} \left\| \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(u(t_0+\tau)) \right\} d\tau \right\|_{\mathcal{K}^s(T_m-t_0)} \le C_0 \|u(\cdot+t_0)\|_{\mathcal{K}^s(T_m-t_0)}^\alpha. \end{equation} Since the right-hand sides in \eqref{pr.iv.1} and \eqref{pr.iv.2} go to $0$ as $t_0 \to T_m$, we may fix some $t_0$ close enough to $T_m$ so that \begin{equation}\nonumber \|e^{t\Delta} u(t_0)\|_{\mathcal{K}^s(T_m-t_0)} \le 2^{-\frac{s_c-s}2} \frac{\rho}{2}. \end{equation} Let $\varepsilon\in (0, T_m -t_0)$, to be fixed later. 
Then, we have \begin{equation}\label{pr.iv.4} \begin{aligned} \sup_{2\varepsilon\le t\le T_m - t_0 +\varepsilon} t^{\frac{s_c-s}2} \|e^{t\Delta} u(t_0)\|_{L^q_s} &= \sup_{\varepsilon \le s\le T_m - t_0} \left( \frac{s+\varepsilon}{s}\right)^{\frac{s_c-s}2} s^{\frac{s_c-s}2} \|e^{(s+\varepsilon)\Delta} u(t_0)\|_{L^q_s} \\ &\le \sup_{\varepsilon \le s\le T_m - t_0} \left( \frac{s+\varepsilon}{s}\right)^{\frac{s_c-s}2} \|e^{t\Delta} u(t_0)\|_{\mathcal{K}^s(T_m-t_0)}\\ &\le 2^{\frac{s_c-s}2} \|e^{t\Delta} u(t_0)\|_{\mathcal{K}^s(T_m-t_0)} \le \frac{\rho}{2}, \end{aligned} \end{equation} where we have used $$\displaystyle{\sup_{\varepsilon \le s\le T_m - t_0} \frac{s+\varepsilon}{s} \le 2.}$$ On the other hand, since $u(t_0) \in L^q_{s_c}(\mathbb{R}^d),$ we may fix some $\varepsilon>0$ such that \begin{equation}\label{pr.iv.0} \|e^{t\Delta}u(t_0)\|_{\mathcal{K}^s(2\varepsilon)} \le \frac{\rho}2, \end{equation} By \eqref{pr.iv.0} and \eqref{pr.iv.4}, we deduce that \[ \begin{split} \|e^{t\Delta} u(t_0)\|_{\mathcal{K}^s(T_m-t_0+\varepsilon)} & \le \|e^{t\Delta}u(t_0)\|_{\mathcal{K}^s(2\varepsilon)} + \sup_{2\varepsilon\le t\le T_m - t_0 +\varepsilon} t^{\frac{s_c-s}2} \|e^{t\Delta} u(t_0)\|_{L^q_s} \\ & \le \frac{\rho}2 + \frac{\rho}2 = \rho, \end{split} \] which proves \eqref{buc-aim}. \subsubsection{Proof of $(v)$} Taking $T=\infty$ in Lemma \ref{l:exist.crt}, we deduce the global existence. Lastly, we show that if $T_m=\infty,$ then the solution is dissipative. We sketch the proof, as most of the computations are similar to the previous ones. We take $\{u_{0n}\}_{n\ge0} \subset C_0^\infty(\mathbb{R}^d)$ such that $u_{0n} \to u_0$ in $L^q_{s_c}(\mathbb{R}^d)$ and decompose the integral equation into \begin{equation}\nonumber\begin{aligned} u(t) = e^{t\Delta} u_{0n} + e^{t\Delta} (u_0-u_{0n}) &+ e^{(t-t')\Delta} \int_{0}^{t'} e^{(t'-\tau)\Delta} \left( |\cdot|^{\gamma} F(u(\tau)) \right)d\tau \\ &+ \int_{t'}^{t} e^{(t-\tau)\Delta} \left( |\cdot|^{\gamma} F(u(\tau)) \right)d\tau, \end{aligned}\end{equation} where $0 < t' < t.$ The first and second linear terms obviously tend to 0 as $n\to \infty$ and $t\to\infty.$ On the other hand, we may let $t'$ so close to $t$ so that the fourth term is small. Now that $t'$ is fixed, the third term can be written as $e^{(t-t')\Delta} f(t')$ with $f(t') \in L^q_{s_c}(\mathbb{R}^d)$, so we may use the semigroup property of $e^{t\Delta}$ and an approximation argument again. This completes the proof of the theorem. \subsection{Proof of Theorem \ref{t:HH.LWP.sub}} \begin{lem}\label{l:exist.sub} Let real numbers $T\in (0,\infty),$ $\rho>0$ and $M>0$ satisfy \begin{equation}\label{l:exist.sub.c0} \rho + \tilde C_0 T^{\frac{\alpha-1}2(s_c-\tilde s)} M^\alpha \le M \quad\text{and}\quad 2 \tilde C_1 T^{\frac{\alpha-1}2(s_c-\tilde s)} M^{\alpha-1} <1, \end{equation} where $\tilde C_0$ and $\tilde C_1$ are as in Lemma \ref{l:Kato.est.sub}. Under conditions \eqref{t:HH.LWP.c0}, \eqref{t:HH.LWP.c1} and \eqref{t:HH.LWP.c2}, let $u_0 \in \mathcal{S}'(\mathbb{R}^d)$ be such that $e^{t\Delta}u_0 \in \tilde{\mathcal{K}}^s(T)$ for $T$ fixed as above. 
If $\|e^{t\Delta}u_0\|_{\tilde{\mathcal{K}}^s(T)}\le \rho,$ then a solution $u$ to \eqref{HH} exists such that $u -e^{t\Delta} u_0 \in C([0,T] ; L^q_{\tilde s}(\mathbb{R}^d))$ and $\|u\|_{\tilde{\mathcal{K}}^s(T)} \le M.$ \end{lem} \begin{rem}\label{r:exist.sub} To meet condition \eqref{l:exist.sub.c0}, it suffices to take $M=2\rho$ and $T$ such that \[ T < \min\left\{ (2^{\alpha}\tilde C_0)^{-\frac{2}{(\alpha-1)(s_c-\tilde s)}} , \, (2^{\alpha}\tilde C_1)^{-\frac{2}{(\alpha-1)(s_c-\tilde s)}} \right\} \rho^{-\frac2{s_c-\tilde s}}. \] \end{rem} \begin{proof}[Proof of Lemma \ref{l:exist.sub}] Setting the metric $d(u,v) := \|u-v\|_{\tilde{\mathcal{K}}^s(T)}$, we may show that $(\tilde{\mathcal{K}}^s(T),d)$ is a nonempty complete metric space. Let $X_M := \{ u \in\mathcal{K}^s(T) \,;\, \|u\|_{\tilde{\mathcal{K}}^s(T)} \le M \}$ be the closed ball in $\tilde{\mathcal{K}}^s(T)$ centered at the origin with radius $M$. Similarly to the critical case, we may prove that the map defined in \eqref{map} has a fixed point in $\tilde X_M,$ thanks to Lemma \ref{l:Kato.est.sub} and \eqref{l:exist.sub.c0}. Thus, Banach's fixed point theorem ensures the existence of a unique fixed point $u$ for the map $\Phi$ in $\tilde X_M.$ Having obtained a fixed point in $\tilde{\mathcal{K}}^s(T),$ we deduce $u -e^{t\Delta} u_0 \in L^\infty(0,T;L^q_{\tilde s}(\mathbb{R}^d))$ thanks to Lemma \ref{l:subcrt.est}, provided further that \eqref{t:HH.LWP.sub.cs}, \eqref{l:subcrt.est.c1} and \eqref{l:subcrt.est.c2} are satisfied. We see that $s<\tilde s < s_c$ imply \[ \max\left\{\frac{\gamma}{\alpha-1}, \, \tilde s - \frac2{\alpha} \right\} < \frac{\tilde s+\gamma}{\alpha} \] so $\frac{s_c+\gamma}{\alpha}$ is a new lower bound for $s.$ In conjunction with this stronger lower bound $\frac{s_c+\gamma}{\alpha} \le s,$ there also appears a new upper bound for $\frac1{q}.$ More precisely, for such an $s$ satisfying \eqref{l:Kato.est.sub.c2} and \eqref{l:subcrt.est.c2} to exist, $q$ must satisfy, in addition to \eqref{l:Kato.est.sub.c1} and \eqref{l:subcrt.est.c1}, \begin{equation}\label{t:HH.LWP.sub:pr2} \frac1{q} < \frac1{d\alpha} \left(\frac{2\alpha + \gamma}{\alpha-1} -\tilde s\right). \end{equation} Indeed, $\frac{\tilde s+\gamma}{\alpha} < s_c$ is equivalent to $\frac1{q} <\frac1{d\alpha} (\frac{2\alpha + \gamma}{\alpha-1} -\tilde s).$ We notice that $\frac1{\alpha} (1-\frac{\tilde s}{d}) < \frac1{\alpha} (1-\frac{\gamma}{d(\alpha-1)} )$ and $\frac1{d\alpha} (\frac{2\alpha + \gamma}{\alpha-1} -\tilde s) > \frac1{d} (\frac{2 + \gamma}{\alpha-1} -\tilde s)$ as $\frac{\gamma}{\alpha-1} < \tilde s.$ Thus, combining \eqref{l:Kato.est.sub.c1} and \eqref{t:HH.LWP.sub:pr2}, we deduce that the conditions for $q$ are \eqref{t:HH.LWP.sub.c1} \end{proof} We omit the proofs of $(i),$ $(ii)$ and $(iii)$ of Theorem \ref{t:HH.LWP.sub} as they are standard. We only prove $(iv).$ \subsubsection{Proof of $(iv)$} Let $u_0 \in L^q_{\tilde s}(\mathbb{R}^d)$ be such that $T_m = T_m(u_0)$ is finite and let $u \in C([0,T_m) ; L^q_{\tilde s}(\mathbb{R}^d))$ be the maximal solution of \eqref{HH}. Fix $t_0 \in (0,T_m)$ and so that we may express the maximal solution by \begin{equation}\nonumber u(t+t_0) = e^{t\Delta} u(t_0) + \int_0^t e^{(t-\tau)\Delta} \left\{ |\cdot|^{\gamma} F(u(t_0+\tau,\cdot)) \right\} d\tau, \quad 0\le t < T_m - t_0. 
\end{equation} We observe that \begin{equation}\nonumber \|u(t_0)\|_{L^q_{\tilde s}} + \tilde C_0 (T_m-t_0)^{\frac{\alpha-1}2(s_c-\tilde s)} M^\alpha > M \end{equation} holds for all $M>0,$ where $\tilde C_0$ is as in \eqref{l:Kato.est.sub1}. Otherwise there exists $M>0$ such that \begin{equation}\nonumber \|u(t_0)\|_{L^q_{\tilde s}} + \tilde C_0 (T_m-t_0)^{\frac{\alpha-1}2(s_c-\tilde s)} M^\alpha \le M \end{equation} so that one may argue as in the proof of existence to obtain a local solution such that $\|u(t + t_0)\|_{L^q_{\tilde s}} \le M$ for $t\in [0,T_m -t_0]$ and in particular, $u(T_m)$ is well-defined in $L^q_s(\mathbb{R}^d),$ contradicting the definition of $T_m.$ Let $M = 2\|u(t_0)\|_{L^q_{\tilde s}}$ so that \begin{equation}\nonumber \|u(t_0)\|_{L^q_{\tilde s}} + 2^{\alpha} \tilde C_0 \|u(t_0)\|_{L^q_{\tilde s}}^\alpha (T_m-t_0)^{\frac{\alpha-1}2(s_c-\tilde s)} > 2 \|u(t_0)\|_{L^q_{\tilde s}}, \end{equation} which yields \eqref{t:HH.LWP:Tm}. In particular, $\|u(t)\|_{L^q_{\tilde s}} \to \infty$ as $t\to T_m.$ Thus, we conclude Theorem \ref{t:HH.LWP.sub}. \subsection{Proof of Theorem \ref{t:HH.self.sim}} Let $\psi (x) := |x|^{-\frac{2+\gamma}{\alpha-1}}$ for $x\ne 0$. We first claim that a initial data $u_0$ given by $u_0(x):= c\psi(x)$ with a sufficiently small $c$ satisfies the all assumptions of $(v)$ in Theorem \ref{t:HH.LWP} with $T=\infty,$ thereby generating a global solution to the Cauchy problem (\ref{HH}) with the initial data $u_0$. Since $\psi \in L^1_{loc}(\mathbb{R}^d)$ as $\alpha>\alpha_F(d,\gamma),$ $\psi \in \mathcal{S}'(\mathbb{R}^d)$ and $e^{t\Delta} \psi$ is well-defined. Since $s<s_c,$ there exist $s_1, s_2 \in\mathbb{R}$ such that $s<s_1 < s_c <s_2.$ As in the proof of \cite[Theorem 1.3]{BenTayWei2017}, we can prove that $\psi$ can be decomposed into $\psi = \psi_1 + \psi_2,$ $\psi_1 := \chi_{|x|>1} \psi$ and $\psi_2 := \chi_{|x|<1} \psi$ so that $\psi_1 \in L^q_{s_1}(\mathbb{R}^d)$ and $\psi_2 \in L^q_{s_2}(\mathbb{R}^d).$ This implies that the estimate $\|e^{\Delta} \psi\|_{L^q_s} \le C( \| \psi_1\|_{L^q_{s_1}} + \| \psi_2\|_{L^q_{s_2}})$ holds, thanks to Lemma \ref{l:wLpLq}. By the homogeneity of the data, we deduce $\|e^{t\Delta} \psi\|_{\mathcal{K}^s} < \infty.$ Thus, if the constant $c$ is taken small enough so that $(v)$ in Theorem \ref{t:HH.LWP} is satisfied, the initial data $u_0=c\psi$ generates a unique global solution to (\ref{HH}). Let $\varphi := \omega \psi$ be as in the assumption of Theorem \ref{t:HH.self.sim}. Then we note that $\varphi$ is homogeneous of degree $-\frac{2+\gamma}{\alpha-1}.$ We show that the global solution $u$ to (\ref{HH}) with the initial data $\varphi$, which is obtained by $(v)$ in Theorem \ref{t:HH.LWP}, is also self-similar. To this end, for $\lambda>0$, let $\varphi_{\lambda}$ be defined by $\varphi_{\lambda} (x) := \lambda^{\frac{2-\gamma}{\alpha-1}} \varphi(\lambda x).$ Since the identity $\|\varphi_{\lambda}\|_{\mathcal{K}^s} = \|\varphi \|_{\mathcal{K}^s}$ holds for all $\lambda>0,$ it follows that $\varphi_{\lambda}$ also satisfies the assumptions of $(v)$ in Theorem \ref{t:HH.LWP}. As $u_\lambda$ given by \eqref{scale} is a solution of \eqref{HH} with initial data $\varphi_{\lambda},$ and $\|u_{\lambda}\|_{\mathcal{K}^s} = \|u \|_{\mathcal{K}^s}$ for all $\lambda>0,$ we deduce that $u$ must be self-similar since $\varphi_{\lambda}=\varphi$. We denote the global self-similar solution $u$ by $u_{\mathcal{S}}$. 
The fact $u_{\mathcal{S}}(t)\rightarrow\varphi$ in $\mathcal{S}'(\mathbb{R}^d)$ as $t\rightarrow +0$ follows from $(iii)$ in Lemma \ref{l:exist.crt}. This completes the proof of Theorem \ref{t:HH.self.sim}. \section{Nonexistence of local positive weak solution} In this section we give a proof of Theorem \ref{t:nonex}. As the argument is standard, we only give a sketch of the proof. For the details, we refer to \cite[Proposition 2.4, Theorem 2.5]{II-15}. \subsection{Proof of Theorem \ref{t:nonex}} Let $T\in (0,1)$. Suppose that the conclusion of Theorem \ref{t:nonex} does not hold. Then there exists a positive weak solution $u$ on $[0,T)$ (See Definition \ref{d:w.sol}). Let \[ \psi_T(t,x) := \eta\left(\frac{t}{T}\right) \phi\left(\frac{x}{\sqrt{T}}\right), \] where $\eta \in C^\infty_0([0,\infty))$ and $\phi\in C^\infty_0(\mathbb R^d)$ are such that \[ \eta (t) := \begin{cases} 1,\quad 0\le t \le \frac12,\\ 0,\quad t\ge1, \end{cases} \quad \text{and}\quad \phi(x) := \begin{cases} 1,\quad |x| \le \frac12,\\ 0,\quad |x|\ge1. \end{cases} \] Let $l\in\mathbb N$ with $l\ge3$, which will be chosen later. We note that $\psi_T^l\in C^{1,2}([0,T)\times \mathbb{R}^d)$ and the estimates $|\partial_t \{\psi_T (t,x)\}^l|\le \frac{C}{T} \psi_T(t,x)^{l-1}$ and $|\partial_{x_j}^2\{\psi_T(t,x)^l\}|\le \frac{C}{T} \psi_T(t,x)^{l-1}$ hold for $j=1,\ldots, d.$ We define a function $I:[0,T)\rightarrow \mathbb{R}_{\ge 0}$ given by \[ I(T):=\int_{[0,T)\times \{|x|<\sqrt{T}\}}|x|^{\gamma} u(t,x)^{\alpha} \, \psi_T^l \, dtdx. \] We note that $I(T)<\infty$, since $u\in L_t^{\alpha}(0,T;L^{\alpha}_{\frac{\gamma}{\alpha},loc}(\mathbb{R}^d))$. By using the weak form (\ref{weak}) and the above estimates, the estimates hold: \[ \begin{aligned} I(T) + \int_{|x|<\sqrt{T}} u_0(x) \phi^l\left(\frac{x}{\sqrt{T}}\right)\, dx & = \left|\int_{[0,T)\times \{|x|<\sqrt{T}\}}u(\partial_t \psi_T^l + \Delta \psi_T^l )\,dt\,dx \right|\\ & \le \frac{C}{T} \int_{[0,T)\times \{|x|<\sqrt{T}\}}|u| \psi_T^{\frac{l}{\alpha}} \,dt\,dx. \end{aligned} \] Here we choose $l$ as \begin{equation}\nonumber -\frac{l}{\alpha}+l-2>0, \quad\text{i.e.,}\quad l > \frac{2\alpha}{\alpha-1}. \end{equation} By H\"older's inequality and Young's inequality, we may estimate the integral in the right-hand side above by \[ \begin{aligned} T^{-1}&\int_{[0,T)\times \{|x|<\sqrt{T}\}}|u| \psi_T^{\frac{l}{\alpha}}\, dtdx \le I(T)^\frac{1}{\alpha} \cdot T^{-1}K(T)^\frac{1}{\alpha'} \le \frac12 I(T) + \frac{C}{T^{\alpha'}}K(T). \end{aligned} \] where $1= \frac{1}{\alpha} + \frac{1}{\alpha'}$, i.e., $\alpha'=\frac{\alpha}{\alpha-1}$, and \[ K(T) := \int_{[0,T)\times \{|x|<\sqrt{T}\}}(|x|^{-\frac{\gamma}{\alpha}})^{\alpha'}\, dtdx = T\int_{|x|<\sqrt{T}}|x|^{-\frac{\gamma}{\alpha-1}}\, dx=CT^{1-\frac{\gamma}{2(\alpha-1)}+\frac{d}{2}} \] due to $\alpha>1+\gamma/d$. Summarizing the estimates obtained now, we have \begin{equation}\label{ineq1} \begin{aligned} \int_{|x|<\sqrt{T}} u_0(x) \phi^l\left(\frac{x}{\sqrt{T}}\right)\, dx \le I(T) + 2\int_{|x|<\sqrt{T}} u_0(x) \phi^l\left(\frac{x}{\sqrt{T}}\right)\, dx \le C T^{- \frac{2+\gamma}{2(\alpha-1)} + \frac{d}{2}}. \end{aligned} \end{equation} We now choose the initial data $u_0$ as \[ u_0(x) := \begin{cases} |x|^{-\beta} \quad & |x|\le 1,\\ 0 & \text{otherwise} \end{cases} \] with \begin{equation}\label{beta1} \beta< \min\left\{s + \frac{d}{q},d\right\}. 
\end{equation} Then $u_0 \in L^q_s (\mathbb{R}^d)$ and by $T<1$ and $\beta<d$, we have \begin{equation}\label{ineq2} \begin{aligned} \int_{|x|<\sqrt{T}} u_0(x) \phi^l\left(\frac{x}{\sqrt{T}}\right)\, dx & = T^{-\frac{\beta-d}{2}} \int_{|y|<1} |y|^{-\beta} \phi^l(y)\, dx = C T^{-\frac{\beta-d}{2}}. \end{aligned} \end{equation} Combining \eqref{ineq1} and \eqref{ineq2}, we obtain \begin{equation}\label{contradiction} 0< C \le T^{\frac{\beta}{2} - \frac{2+\gamma}{2(\alpha-1)}} \to 0 \quad \text{as }T\to 0, \end{equation} where \begin{equation}\label{beta2} \frac{\beta}{2} - \frac{2+\gamma}{2(\alpha-1)} >0\quad \text{i.e.}\quad \beta > \frac{2+\gamma}{\alpha-1}, \end{equation} which leads to a contradiction. Thus the proposition holds if we take $\beta$ satisfying \eqref{beta1} and \eqref{beta2}, which amount to $s>s_c$ and $\alpha>\alpha_F(d,\gamma)$. The proof is complete. \section{Appendix} \par We list basic properties of the weighted Lebesgue spaces $L^q_s(\mathbb{R}^d)$. \begin{prop} \label{p:wL.sp} Let $s\in\mathbb{R}$ and $q\in [1,\infty].$ Then the following holds: \begin{enumerate}[$(1)$] \item The space $L^q_{s}(\mathbb{R}^d)$ is a Banach space. \item $C_0^\infty(\mathbb{R}^d)$ is dense in $L^q_{s}(\mathbb{R}^d)$ if $q$ and $s$ satisfy \begin{equation}\nonumber 1\le q < \infty \quad\text{and}\quad -\frac{d}{q} < s < d\left( 1-\frac{1}{q} \right). \end{equation} \item For $s_1, s_2 \in \mathbb{R},$ $q_1, q_2 \in [1,\infty],$ we have \begin{equation}\nonumber \|f\|_{L^q_s} \le \|f\|_{L^{q_1}_{s_1}}^{\theta} \|f\|_{L^{q_2}_{s_2}}^{1-\theta} \end{equation} for $s = \theta s_1 + (1-\theta) s_2,$ $\frac1{q} = \frac{\theta}{q_1} + \frac{1-\theta}{q_2}$ and $\theta \in (0,1).$ \end{enumerate} \end{prop} \begin{proof} $(1)$ The space $L^q_{s}(\mathbb{R}^d)$ is a Lebesgue space with a measure $d\mu = |x|^{sq} \,dx.$ See any standard textbook for the proof of its completeness. \\ $(2)$ Recall that the weight $|x|^{sq}$ belongs to the Muckenhoupt class $A_q$ if and only if $- \frac{d}{q} < s < d(1- \frac1{q})$ when $q\in (1,\infty),$ and $|x|^{s} \in A_1$ if and only if $-d<s\le 0$ when $q=1.$ Now the density follows from \cite{NakTomYab2004}[Theorem 1.1]. \\ $(3)$ For $s$ and $q$ as in the assumption, we have \begin{align*}\nonumber \|f\|_{L^q_s} &\le \||\cdot|^{s_1} f\|_{L^{q_1}}^{\theta} \||\cdot|^{s_2} f\|_{L^{q_2}}^{1-\theta} = \|f\|_{L^{q_1}_{s_1}}^{\theta} \|f\|_{L^{q_2}_{s_2}}^{1-\theta}. \end{align*} \end{proof} The following pointwise bound is well-known in the literature. 
\begin{lem}\label{l:g.unfrm.bnd} Let $d\in\mathbb N,$ $q\in[1,\infty)$ and $a,b,c\in\mathbb{R}.$ Let $g(x):=(4\pi)^{-\frac{d}2} e^{-\frac{|x|^2}4}.$ \begin{enumerate}[$(1)$] \item There exists a constant $C$ depending only on $d,$ $q,$ $a$ and $b$ such that \begin{equation}\nonumber \sup_{x\in\mathbb{R}^d} \int_{\mathbb{R}^d} ( |y|^{-a} |x-y|^{b} g(x-y))^q \,dy \le C \end{equation} provided that $0\le a<\frac{d}{q}$ and $b\ge0.$ \item There exists a constant $C$ depending only on $d,$ $q$ and $c$ such that \begin{equation}\nonumber \sup_{x\in\mathbb{R}^d} \int_{\mathbb{R}^d} ( |y|^{-c} g(x-y))^q \, dy \le C \end{equation} provided that $0\le c<\frac{d}{q}.$ \end{enumerate} \end{lem} \begin{proof} In what follows, we shall use the fact that there exists an absolute constant $C$ such that \begin{equation}\label{l:g.unfrm.bnd:pr1} g(x) \le C \langle x\rangle^{-N} \end{equation} for any $N\in\mathbb N,$ where $\langle x\rangle :=(1+|x|^2)^{\frac{1}2}.$ Let \begin{equation}\nonumber \begin{aligned} I(x) &:= \int_{\mathbb{R}^d} ( |y|^{-a} |x-y|^{b} g(x-y))^q \, dy \\ &= \int_{|y|< |x-y|} ( |y|^{-a} |x-y|^{b} g(x-y))^q \, dy + \int_{|y|> |x-y|} ( |y|^{-a} |x-y|^{b} g(x-y))^q \, dy \\ &=: I_1 (x) + I_2 (x). \end{aligned} \end{equation} Thanks to \eqref{l:g.unfrm.bnd:pr1} and $0\le b,$ we have \begin{align*} I_1(x) \le C \int_{|y|< |x-y|} |y|^{-aq} \langle x-y\rangle^{-(d+1)}\, dy \le C \int_{|y|< |x-y|} |y|^{-aq} \langle y\rangle^{-(d+1)} \, dy < \infty, \end{align*} if $aq<d.$ Moreover, we have \begin{equation*} I_2(x) \le C \int_{|y|> |x-y|} |x-y|^{-(a-b)q} g(x-y)^q \, dy \le C \int_{\mathbb{R}^d} |y|^{-(a-b)q} g(y)^q \, dy < \infty, \end{equation*} if $a\ge 0$ and $(a-b)q <d.$ Thus, $I(x) < \infty$ uniformly with respect to $x\in\mathbb{R}^d.$ The proof for the second inequality is similar so we omit it. \end{proof} We recall the following elementary characterization of $L^1(\mathbb{R}^d)$-functions. \begin{prop} \label{p:L1sg.dcy} If $f \in L^1(\mathbb{R}^d),$ then \begin{equation}\nonumber \liminf_{|x|\to 0} |x|^d |f(x)| = \liminf_{|x|\to \infty} |x|^d |f(x)| =0. \end{equation} \end{prop} \begin{proof} We show the contrapositive. Suppose that $\displaystyle\liminf_{|x|\to0} |x|^d |f(x)| = c>0.$ Then there exists some positive $\delta$ such that $\frac{c}2 \le |x|^d |f(x)|$ for $|x|\le \delta.$ Thus, \begin{equation}\nonumber \int_{|x|\le\delta} |f(x)| dx \ge c \int_0^\delta r^{-1} \,dr = c \left[ \log r\right]_{0}^r = + \infty, \end{equation} which implies $f\notin L^1(\mathbb{R}^d).$ The second equality is similarly proved. \end{proof} As a corollary, we have the following. \begin{cor}\label{c:wLp.sg.dcy} Let $s\in\mathbb{R}$ and $p \in [1,\infty].$ If $f \in L^{p}_{s}(\mathbb{R}^d),$ then \begin{equation}\nonumber \liminf_{|x|\to 0} |x|^{s+\frac{d}{p}} |g(x)| = \liminf_{|x|\to \infty} |x|^{s+\frac{d}{p}} |g(x)| =0. \end{equation} \end{cor} Finally, we give a proof of the fact that the $L^q_{\tilde{s}}(\mathbb{R}^d)$-mild solutions also satisfy the equation \eqref{HH} in the distributional sense. \begin{lem} \label{mildweak} We assume the same assumptions as in Theorem \ref{t:HH.LWP} (resp. Theorem \ref{t:HH.LWP.sub}). Let $u$ be a $L^q_{\tilde{s}}(\mathbb{R}^d)$-mild solution on $[0,T)$ in the sense of Definition \ref{def:sol-A}. Then $u$ is a weak solution in the sense of Definition \ref{d:w.sol}. \end{lem} \begin{proof} We prove the critical case only, since the subcritical case can be treated in the similar manner. 
Let $T>0$ and $u$ be an $L^q_{s_c}(\mathbb{R}^d)$-mild solution on $[0,T]$. First we prove $u\in L^{\alpha}(0,T;L^{\alpha}_{\frac{\gamma}{\alpha},loc}(\mathbb{R}^d))$. Let $\Omega\subset \mathbb{R}^d$ be a compact subset of $\mathbb{R}^d$. We also assume that $q>\alpha$ since the case $q=\alpha$ can be treated in the similar manner with a slight modification. Since $s_c-2\le s<(d+\gamma)/\alpha-d/q$, by the H\"older inequality, the following estimates hold: \begin{align*} \|u\|_{L^{\alpha}(0,T;L^{\alpha}_{\frac{\gamma}{\alpha}}(\Omega))}^{\alpha} &\le \int_0^T\left(\int_{\Omega}|x|^{\frac{q(\gamma-\alpha s)}{q-\alpha}}dx\right)^{\frac{q}{q-\alpha}}\|u(t)\|_{L_s^q} \, dt \\ &\le C\int_0^Tt^{\frac{s-s_c}{2}}\,dt \, \|u\|_{\mathcal{K}^s(T)}<\infty, \end{align*} which implies that $u$ belongs to $L_t^{\alpha}(0,T;L^{\alpha}_{\frac{\gamma}{\alpha},loc}(\mathbb{R}^d))$. Next we prove that $u$ satisfies the weak form (\ref{weak}). Let $\eta\in C^{1,2}([0,T]\times\mathbb{R}^d)$ be such that for any $t\in [0,T]$, $\operatorname{supp} \eta(t, \cdot)$ is compact. Let $T'\in (0,T)$. Since $C_0^{\infty}(\mathbb{R}^d)$ is dense in $L^q_{s_c}(\mathbb{R}^d)$ thanks to Proposition \ref{p:wL.sp}, there exists a sequence $\{u_{0j}\}\subset C_0^{\infty}(\mathbb{R}^d)$ such that the following identity holds: \[ \lim_{j\rightarrow\infty}\|u_0-u_{0j}\|_{L^q_{s_c}}=0. \] By this identity and the integration by parts, we can prove the following identity: \begin{align*} \int_{[0,T']\times\mathbb{R}^d}&(e^{t\Delta}u_0)(x)(\Delta\eta+\partial_t\eta)(t,x)\,dxdt \\ &=\int_{\mathbb{R}^d}(e^{T'\Delta}u_0)(x)\, \eta(T',x)\,dx-\int_{\mathbb{R}^d}u_0(x)\eta(0,x)\,dx. \end{align*} Thus it suffices to prove the identity \begin{equation} \label{weak1} \int_{[0,T']\times\mathbb{R}^d}N(u(t,x))(\Delta\eta+\partial_t\eta)(t,x) \,dxdt =-\int_{[0,T']\times\mathbb{R}^d}|x|^{\gamma}F(u(t,x))\eta(t,x) \,dxdt, \end{equation} where $N$ is defined by (\ref{mapN}). We write $G(t,x):=|x|^{\gamma}F(u(t,x))$. Then we can express $N(u)$ as \[ N(u)=\int_0^te^{(t-\tau)\Delta}G(\tau)\,d\tau. \] Moreover, the equality \[ \sup_{t\in [0,T]}t^{\frac{(s_c-s)\alpha}{2}}\|G(t)\|_{L_{\sigma}^{\frac{q}{\alpha}}}=\|u\|_{\mathcal{K}^s(T)}^{\alpha}<\infty \] is valid, where $\sigma:=\alpha s-\gamma$. Since the time interval $[0,T]$ is compact, by using mollifiers with respect to the time variable and the space variables, we can find $\{G_j\}\subset C_0^{\infty}([0,\infty)\times \mathbb{R}^d)$ such that \begin{equation} \label{appro1} \lim_{j\rightarrow\infty}\sup_{t\in [0,T]}t^{\frac{(s_c-s)\alpha}{2}}\|G(t)-G_j(t)\|_{L^{\frac{q}{\alpha}}_{\sigma}}=0. \end{equation} We define a sequence $\{N_j\}$ as \[ N_j(t,x):=\int_0^{t}e^{(t-\tau)\Delta}G_j(\tau,x) \, d\tau. \] In a similar manner as the proof of Theorem \ref{t:HH.LWP}, we can prove that \[ \|N_j-N(u)\|_{\mathcal{K}^s(T)}\le C\sup_{t\in [0,T]}t^{\frac{(s_c-s)\alpha}{2}}\|G_j(t)-G(t)\|_{L_{\sigma}^{\frac{q}{\alpha}}}\rightarrow 0 \] as $j\rightarrow \infty$. By this fact, we deduce that \[ \text{R.H.S of (\ref{weak1})} =\lim_{j\rightarrow\infty}\int_{[0,T']\times\mathbb{R}^d} N_j(t,x)(\Delta\eta+\partial_t\eta)(t,x) \,dxdt. \] Since $G_j$ is smooth, so is $N_j$ and hence, by the integration by parts, the identity \[ \int_{[0,T']\times\mathbb{R}^d}N_j(t,x)(\Delta\eta+\partial_t\eta)(t,x)\,dx\,dt =\int_{[0,T']\times\mathbb{R}^d}G_j(t,x)\eta(t,x) \,dx\,dt. \] holds for any $j$. 
Taking the limit $j\rightarrow\infty$ and using (\ref{appro1}), we have \[ \lim_{j\rightarrow\infty}\int_{[0,T']\times\mathbb{R}^d} N_j(t,x)(\Delta\eta+\partial_t\eta)(t,x) \,dxdt =\int_{\mathbb{R}^d}N(u)(T',x)\,\eta(T',x)\,dx-\int_{[0,T']\times\mathbb{R}^d}G(t,x)\eta(t,x)\,dxdt. \] Thus we obtain (\ref{weak1}), which completes the proof of the lemma. \end{proof}
\end{document}
Frequency partition of a graph In graph theory, a discipline within mathematics, the frequency partition of a graph (simple graph) is a partition of its vertices grouped by their degree. For example, the degree sequence of the left-hand graph below is (3, 3, 3, 2, 2, 1) and its frequency partition is 6 = 3 + 2 + 1. This indicates that it has 3 vertices of one degree, 2 vertices of another degree, and 1 vertex of a third degree. The degree sequence of the bipartite graph in the middle below is (3, 2, 2, 2, 2, 2, 1, 1, 1) and its frequency partition is 9 = 5 + 3 + 1. The degree sequence of the right-hand graph below is (3, 3, 3, 3, 3, 3, 2) and its frequency partition is 7 = 6 + 1. • A graph with frequency partition 6 = 3 + 2 + 1. • A bipartite graph with frequency partition 9 = 5 + 3 + 1. • A graph with frequency partition 7 = 6 + 1. In general, there are many non-isomorphic graphs with a given frequency partition. A graph and its complement have the same frequency partition. For any partition p = f1 + f2 + ... + fk of an integer p > 1, other than p = 1 + 1 + 1 + ... + 1, there is at least one (connected) simple graph having this partition as its frequency partition.[1] The frequency partitions of several graph families have been completely characterized; those of many other families have not. Frequency partitions of Eulerian graphs For a frequency partition p = f1 + f2 + ... + fk of an integer p > 1, its graphic degree sequence is denoted as ((d1)^f1, (d2)^f2, (d3)^f3, ..., (dk)^fk), where the degrees di are distinct and fi ≥ fj for i < j. Bhat-Nayak et al. (1979) showed that a partition of p with k parts, k ≤ the integral part of $(p-1)/2$, is a frequency partition[2] of an Eulerian graph, and conversely. Frequency partitions of trees, Hamiltonian graphs, tournaments and hypergraphs The frequency partitions of families of graphs such as trees,[3] Hamiltonian graphs,[4] directed graphs and tournaments,[5] and k-uniform hypergraphs[6] have been characterized. Unsolved problems in frequency partitions The frequency partitions of the following families of graphs have not yet been characterized: • Line graphs • Bipartite graphs[7] References 1. Chinn, P. Z. (1971), "The frequency partition of a graph. Recent Trends in Graph Theory", Lecture Notes in Mathematics, Berlin: Springer-Verlag, vol. 186, pp. 69–70 2. Rao, Siddani Bhaskara; Bhat-Nayak, Vasanti N.; Naik, Ranjan N. (1979), "Characterization of frequency partitions of Eulerian graphs", Proceedings of the Symposium on Graph Theory (Indian Statist. Inst., Calcutta, 1976), ISI Lecture Notes, vol. 4, Macmillan of India, New Delhi, pp. 124–137, MR 0553937. Also in Lecture Notes in Mathematics, Combinatorics and Graph Theory, Springer-Verlag, New York, Vol. 885 (1980), p. 500. 3. Rao, T. M. (1974), "Frequency sequences in Graphs", Journal of Combinatorial Theory, Series B, 17: 19–21, doi:10.1016/0095-8956(74)90042-2 4. Bhat-Nayak, Vasanti N.; Naik, Ranjan N.; Rao, S. B. (1977), "Frequency partitions: forcibly pancyclic and forcibly nonhamiltonian degree sequences", Discrete Mathematics, 20: 93–102, doi:10.1016/0012-365x(77)90049-8 5. Alspach, B.; Reid, K. B. (1978), "Degree Frequencies in Digraphs and Tournaments", Journal of Graph Theory, 2: 241–249, doi:10.1002/jgt.3190020307 6. Bhat-Nayak, V. N.; Naik, R. N. (1985), "Frequency partitions of k-uniform hypergraphs", Utilitas Math., 28: 99–104 7. S. B. Rao, A survey of the theory of potentially p-graphic and forcibly p-graphic sequences, in: S. B.
Rao, ed., Combinatorics and Graph Theory, Lecture Notes in Math., Vol. 885 (Springer, Berlin, 1981), 417–440 Further reading • Berge, C. (1989), Hypergraphs, Combinatorics of Finite Sets, Amsterdam: North-Holland, ISBN 0-444-87489-5
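As an illustrative aside (not part of the article above), the frequency partition is easy to compute from a degree sequence; the function name below is ours:

# Compute the frequency partition of a graph from its degree sequence:
# count how often each distinct degree occurs, then sort the counts.
from collections import Counter

def frequency_partition(degree_sequence):
    """Return the frequencies of the distinct degrees, largest first."""
    return sorted(Counter(degree_sequence).values(), reverse=True)

# The three examples from the article:
print(frequency_partition([3, 3, 3, 2, 2, 1]))           # [3, 2, 1], i.e. 6 = 3 + 2 + 1
print(frequency_partition([3, 2, 2, 2, 2, 2, 1, 1, 1]))  # [5, 3, 1], i.e. 9 = 5 + 3 + 1
print(frequency_partition([3, 3, 3, 3, 3, 3, 2]))        # [6, 1],    i.e. 7 = 6 + 1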
Lecture 5 - Chapter 1: Galois Connections March 2018 edited April 2018 in Applied Category Theory Course Okay: I've told you what a Galois connection is. But now it's time to explain why they matter. This will take much longer - and be much more fun. Galois connections do something really cool: they tell you the best possible way to recover data that can't be recovered. More precisely, they tell you the best approximation to reversing a computation that can't be reversed. Someone hands you the output of some computation, and asks you what the input was. Sometimes there's a unique right answer. But sometimes there's more than one answer, or none! That's when your job gets hard. In fact, impossible! But don't let that stop you. Suppose we have a function between sets, \(f : A \to B\). We say a function \(g: B \to A \) is the inverse of \(f\) if $$ g(f(a)) = a \textrm{ for all } a \in A \quad \textrm{ and } \quad f(g(b)) = b \textrm{ for all } b \in B. $$ Another equivalent way to say this is that $$ f(a) = b \textrm{ if and only if } a = g(b) $$ for all \(a \in A\) and \(b \in B\). So, the idea is that \(g\) undoes \(f\). For example, if \(A = B = \mathbb{R}\) is the set of real numbers, and \(f\) doubles every number, then \(f\) has an inverse \(g\), which halves every number. But what if \(A = B = \mathbb{N}\) is the set of natural numbers, and \(f\) doubles every natural number? This function has no inverse! So, if I say "\(2a = 4\); tell me \(a\)" you can say \(a = 2\). But if I say "\(2a = 3\); tell me \(a\)" you're stuck. But you can still try to give me a "best approximation" to the nonexistent natural number \(a\) with \(2 a = 3\). "Best" in what sense? We could look for the number \(a\) that makes \(2a\) as close as possible to 3. There are two equally good options: \(a = 1\) and \(a = 2\). Here we are using the usual distance function, or metric, on \(\mathbb{N}\), which says that the distance between \(x\) and \(y\) is \(|x-y|\). But we're not talking about distance functions in this class now! We're talking about preorders. Can we define a "best approximation" using just the relation \(\le\) on \(\mathbb{N}\)? Yes! But we can do it in two ways! Best approximation from below. Find the largest possible \(a \in \mathbb{N}\) such that \(2a \le 3\). Answer: \(a = 1\). Best approximation from above. Find the smallest possible \(a \in \mathbb{N}\) such that \(3 \le 2a\). Answer: \(a = 2\). Okay, now work this out more generally: Puzzle 14. Find the function \(g : \mathbb{N} \to \mathbb{N}\) such that \(g(b) \) is the largest possible natural number \(a\) with \(2a \le b\). Puzzle 15. Find the function \(g : \mathbb{N} \to \mathbb{N}\) such that \(g(b)\) is the smallest possible natural number \(a\) with \(b \le 2a\). Now think about Lecture 4 and the puzzles there! I'll copy them here with notation that better matches what I'm using now: Puzzle 12. Find a right adjoint for the function \(f: \mathbb{N} \to \mathbb{N}\) that doubles natural numbers: that is, a function \(g : \mathbb{N} \to \mathbb{N}\) with $$ f(a) \le b \textrm{ if and only if } a \le g(b) $$ for all \(a,b \in \mathbb{N}\). Puzzle 13. Find a left adjoint for the same function \(f\): that is, a function \(g : \mathbb{N} \to \mathbb{N}\) with $$ g(b) \le a \textrm{ if and only if } b \le f(a) $$ Next: Puzzle 16. What's going on here? What's the pattern you see, and why is it working this way?
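A quick computational sanity check of Puzzles 12-15 (an illustrative sketch; the candidate adjoints and the helper names below are ours, not from the lecture):

def f(a):            # the doubling map we cannot invert on N
    return 2 * a

def g_right(b):      # candidate right adjoint: largest a with 2a <= b
    return b // 2

def g_left(b):       # candidate left adjoint: smallest a with b <= 2a
    return -(-b // 2)   # ceiling division

N = range(0, 50)
# Puzzle 12: f(a) <= b  if and only if  a <= g_right(b)
assert all((f(a) <= b) == (a <= g_right(b)) for a in N for b in N)
# Puzzle 13: g_left(b) <= a  if and only if  b <= f(a)
assert all((g_left(b) <= a) == (b <= f(a)) for a in N for b in N)
print("both adjunction conditions hold on the sample range")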
Alex Kreitzberg March 2018 edited April 2018

Puzzle 14 Checking some concrete values, \(2(1) \leq 3, 2(2) \not \leq 3, 2(2) \leq 5, 2(3) \not \leq 5\). These suggest the function \(g(b) = \lfloor b/2 \rfloor \) is our maximum. More formally, we want \(g(b) = \max\{ a : 2a \leq b, a \in \mathbb{Z} \}\). We need to show it's in our set, and that any other element in our set is smaller. First, \(2\lfloor b / 2 \rfloor \leq b \), so \(g(b) \in \{ a : 2a \leq b \}\). Second, division by 2 and flooring are both monotonic functions, so if \(a\) is in our set, we have $$ 2a \leq b \Rightarrow a \leq b/2 \Rightarrow \lfloor a \rfloor \leq \lfloor b/2 \rfloor \Rightarrow a \leq \lfloor b/2 \rfloor. $$ So \(\lfloor b/2 \rfloor\) is the required maximum.

Puzzle 15 This argument is analogous, except with \(\lceil b / 2 \rceil \). I would type it out, but I don't have time currently (famous last words).

Puzzle 16 I'm going to give an observation, but my understanding of this isn't complete. Given the definitions for adjunctions introduced in this lecture, it's clear they are unique (Edit: This is true for the given example, but isn't true for every preorder; I shouldn't have said this was clear. And because the Galois connection definition is well defined for any preorder, my following suggestion won't generalize to a characterization for preorders by way of uniqueness!). Which means the definition in Puzzle 12 is equivalent to the max definition. We can therefore prove properties from one version to the other. I'll give the direction I've currently figured out. Suppose \(g\) is defined as in Puzzle 14. Because all our functions are monotonic we have $$f(a) \leq b \Rightarrow g(f(a)) \leq g(b) \Rightarrow a \leq g(b)$$ and $$a \leq g(b) \Rightarrow f(a) \leq f(g(b)) \Rightarrow f(a) \leq b,$$ because \(f(g(b)) \leq b\) by definition of \(g\) (it's the largest element \(x\) such that \(f(x) \leq b\)). It should be possible to show these definitions are equivalent to maximizing in the sense defined in Puzzle 14.
"Puzzle 16. What's going on here? What's the pattern you see, and why is it working this way?"

I'm not sure if this is the answer you want, John... I want to expand on Alex Kreitzberg's observation. He is touching on an alternate definition of a Galois pair \(f \dashv g\): $$ f \text{ and } g \text{ are monotone functions and } f(g(b)) \leq b \text{ and } a \leq g(f(a)). $$ This is equivalent to the definition Fong, Spivak and you yourself use. Moreover, if a monotone function has a left (or right) Galois adjoint it is unique.

Here's my go at Puzzle 16: Let's say we have two monotone functions \(f : A\to B\) and \(g:B\to A\) between preorders and we're wondering whether the following three conditions on \(f\) and \(g\) are equivalent: $$\text{For all }a\text{ and }b,\ f(a)\leq b \iff a \leq g(b).$$ $$\text{For all }a,\ f(a)\text{ is the smallest $b$ with }a\leq g(b).$$ $$\text{For all }b,\ g(b)\text{ is the largest $a$ with }f(a)\leq b.$$ We'll show that the first and second are equivalent. First, note that since \(g\) is monotone, for any choice of \(a\) the set of all \(b\) such that \(a\leq g(b)\) is an upper set of \(B\). Therefore saying that \(f(a)\) is the smallest \(b\) with \(a\leq g(b)\) is saying that this upper set is the set of all elements of \(B\) at least as large as \(f(a)\). In other words, \(a\leq g(b)\) if and only if \(b\) is in this upper set, if and only if \(b \geq f(a)\). The equivalence between the first and third conditions is similar. I was surprised that you didn't need both the second and third to get something equivalent to the first! In fact, the second and third are already equivalent to each other.
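A quick numeric check of the inequalities in the two comments above, for the doubling/floor-halving pair (a sketch; the pair and the sample range are our choices, not the commenters'):

f = lambda a: 2 * a        # left adjoint: doubling
g = lambda b: b // 2       # right adjoint: floor-halving

N = range(0, 100)
assert all(f(g(b)) <= b for b in N)       # counit-style inequality
assert all(a <= g(f(a)) for a in N)       # unit-style inequality
assert all(f(a) <= f(a + 1) for a in N)   # f is monotone
assert all(g(b) <= g(b + 1) for b in N)   # g is monotone
print("the alternate characterization holds on the sample range")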
Some excellent responses! Just one small issue, coming from some mistakes in Seven Sketches. Everything Matthew and Owen just said is true for posets, but not for preorders. Remember that a preorder is a set with a binary relation \(\le\) that's reflexive and transitive. A poset is a preorder where \(x \le y\) and \(y \le x\) imply \(x = y\). The left or right adjoint of a monotone function between posets is unique if it exists. This need not be true for preorders. The issue can be seen clearly in phrases like "the smallest \(b\) with \(a \le g(b)\)". In a poset, such a \(b\) is unique if it exists. In a preorder, that's not true, since we could have \(b \le b'\) and \(b' \le b\) yet still \(b \ne b'\). Adding to the confusion, Seven Sketches uses "poset" to mean "preorder", and "skeletal poset" to mean "poset". So, when the authors say the left or right adjoint of a monotone function between posets is unique if it exists, that's true with the usual definition of poset, but not for their definition. Luckily, I have convinced the authors to straighten this out. Here's what I wrote in an email to Brendan Fong. He just replied saying that he and David are fixing the mistakes I describe, and switching to the standard definition of "poset".

Someone in the course pointed out something that's more than a typo. If you're going to use "poset" to mean "preorder" (bad, bad, bad) then you can't talk about "the" meet or join of two elements in a poset, because even when it exists it's not unique. Of course you can use "the" in the sophisticated way, meaning "unique up to canonical isomorphism"... but that seems a bit fancy for your intended audience, and it at least would need to be explained. You guys just say things like: "Let P be a poset, and let A be a subset. We say that an element is the meet of A if ..." You could fix this by changing "the" to "a", but every equation you write down involving meets and joins is wrong unless you restrict to the "skeletal poset" case. For example, Example 1.62: "In any poset P, we have \(p \vee p = p \wedge p = p\)." More importantly, Prop. 1.84 - right adjoints preserve meets. The equations here are really just isomorphisms! This then makes your statement of the adjoint functor theorem for posets incorrect. I think this is the best solution: 1. Call preorders "preorders" and call posets "posets". Do not breed a crew of students who use these words in nonstandard ways! You won't breed enough of them to take over the world, so all you will accomplish is making them less able to communicate with other people. And for what: just because you don't like the sound of the word "preorder"? 2. Define meets and joins for preorders, but point out that they're unique for posets, and say this makes things a bit less messy. 3. State the adjoint functor theorem for posets... actual posets!
John Baez #4: "Some excellent responses! Just one small issue, coming from some mistakes in Seven Sketches. Everything Matthew and Owen just said is true for posets, but not for preorders. Remember that a preorder is a set with a binary relation \(\leq\) that's reflexive and transitive. A poset is a preorder where \(x \leq y\) and \(y \leq x\) imply \(x = y\)."

Okay... but I don't see how my alternative definition uses anti-symmetry (i.e. the rule \(x \leq y\) and \(y \leq x\) imply \(x = y\)). Here's my attempted proof:

Lemma: Assume that \(f\) and \(g\) are monotone and for all \(a\) and \(b\) we have \(f(g(b))\leq b\) and \(a \leq g(f(a))\). We want to show \(f \dashv g\), which is to say for all \(a\) and \(b\): $$ f(a)\leq b\text{ if and only if } a \leq g(b). $$

Proof.
I hope it's okay if I only show \(f(a)\leq b \Longrightarrow a \leq g(b)\), since the other direction is quite similar. Assume \(f(a)\leq b\). Then by monotonicity of \(g\) we have \(g(f(a)) \leq g(b)\). However, since \(a \leq g(f(a))\) by assumption, we have \(a \leq g(b)\) by transitivity. \(\Box\) Since anti-symmetry wasn't used, I don't see why this proof doesn't apply to preorders...? I greatly appreciate you taking the time to help me out.

Matthew: I was being pretty vague when I wrote "Everything Matthew and Owen just said is true for posets, but not for preorders." I didn't mean nothing you said was true for preorders. For example, I think the alternative characterization of Galois connections works fine for preorders. Looking over what you said, this is the only thing that I'm sure is false for preorders: "Moreover, if a monotone function has a left (or right) Galois adjoint it is unique." I tried to hint at the reason why: "The left or right adjoint of a monotone function between posets is unique if it exists. This need not be true for preorders." Do you see how to cook up a monotone function between preorders that has more than one left adjoint?
Yeah, I think I can see one - consider \(\mathbb{Z} ∐ \mathbb{Z}\). Let \(u : \mathbb{Z} ∐ \mathbb{Z} \to \mathbb{Z} \) be the forgetful functor that takes \(x_l \mapsto x\) and \(x_r \mapsto x\). Define the preorder on \(\mathbb{Z} ∐ \mathbb{Z}\) to be \(a \leq b\) if and only if \(u(a) \leq_{\mathbb{Z}} u(b)\). Now consider the endomorphism \(f : \mathbb{Z} ∐ \mathbb{Z} \to \mathbb{Z} ∐ \mathbb{Z}\) where: $$ x_l \mapsto (x+1)_l \\ x_r \mapsto (x+1)_r $$ I can see two left/right adjoints for this. First, this function is invertible, so one left/right adjoint is \(f^{-1}\). Explicitly, this maps: $$ x_l \mapsto (x-1)_l \\ x_r \mapsto (x-1)_r $$ There is also another left/right adjoint \(s\) that switches the sides of the coproduct: $$ x_l \mapsto (x-1)_r \\ x_r \mapsto (x-1)_l $$ There are in fact an infinite number of left/right adjoints to \(f\). Consider any partition \(P\) of \(\mathbb{Z} ∐ \mathbb{Z}\). For each \(p \in P\), we can map the elements using either \(f^{-1}\) or \(s\). The resulting map is another left/right adjoint. I am sure there is a simpler example. Thank you again for taking the time to help me get clear on the difference between adjoints for preorders and adjoints for posets!
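A small computational illustration of the comment above: a sketch with our own encoding of \(\mathbb{Z} ∐ \mathbb{Z}\) as (side, integer) pairs, checking both candidate right adjoints of the shift on a finite window.

R = range(-10, 11)
elems = [(side, x) for side in "lr" for x in R]

le = lambda a, b: a[1] <= b[1]                          # compare underlying integers only
f  = lambda a: (a[0], a[1] + 1)                         # shift up, keep side
g1 = lambda b: (b[0], b[1] - 1)                         # shift down, keep side
g2 = lambda b: ({"l": "r", "r": "l"}[b[0]], b[1] - 1)   # shift down, swap side

for g in (g1, g2):
    assert all(le(f(a), b) == le(a, g(b)) for a in elems for b in elems)
print("both g1 and g2 are right adjoints of f on the sample")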
Great! Here's a fun example. Let \(A\) be any set, and make it into a preorder by defining every element to be less than or equal to every other element. Do the same for some set \(B\). Then any function \(f : A \to B\) is monotone, because we have \(f(a) \le f(a')\) no matter what \(a,a' \in A\) are. Similarly any function \(g : B \to A\) is monotone. And no matter what \(f\) and \(g\) are, \(g\) will be a right adjoint to \(f\), since $$ f(a) \le b \textrm{ if and only if } a \le g(b) $$ (both are always true). Similarly, \(g\) will always be a left adjoint to \(f\). This shows that when we make our preorders as far from posets as possible, right and left adjoints become ridiculously non-unique.

David Tanzer
Inverse functions as a special case of adjoints: if \(A\) and \(B\) are preorders where the ordering is the identity relation, then \(f: A \rightarrow B\) and \(g: B \rightarrow A\) are adjoint iff they are inverse functions.

John Baez #6 wrote: "For example, I think the alternative characterization of Galois connections works fine for preorders." I actually see 4 equivalent definitions of a Galois connection \(f \dashv g\) for two preorders \(\langle A, \sqsubseteq\rangle\) and \(\langle B, \preceq\rangle\): (1) \(f(a) \preceq b\) if and only if \(a \sqsubseteq g(b)\); (2) \(f\) and \(g\) are monotone and \(f(g(b)) \preceq b\) and \(a \sqsubseteq g(f(a))\); (3) \(f\) is monotone and \(f(g(b)) \preceq b\) and \(f(a) \preceq b \Longrightarrow a \sqsubseteq g(b)\); (4) \(g\) is monotone and \(a \sqsubseteq g(f(a))\) and \(a \sqsubseteq g(b) \Longrightarrow f(a) \preceq b\). (3) and (4) are based on Owen Biesel's observation. It looks like these definitions are pretty general - I think you can use them to give alternate ways of programming adjunctions in Haskell. Let me double check; if this is the case we can maybe make a change to the Haskell adjunctions library.
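To see that the four characterizations really do agree, one can brute-force them over all monotone maps of a small chain. This is an illustrative Python sketch of ours, not the Haskell encoding mentioned above.

from itertools import combinations_with_replacement as cwr

CH = range(5)                                             # the chain 0 < 1 < 2 < 3 < 4
monotone = [dict(zip(CH, vals)) for vals in cwr(CH, 5)]   # all nondecreasing maps

def d1(f, g):   # (1) f(a) <= b  iff  a <= g(b)
    return all((f[a] <= b) == (a <= g[b]) for a in CH for b in CH)

def d2(f, g):   # (2) f(g(b)) <= b  and  a <= g(f(a))
    return all(f[g[b]] <= b for b in CH) and all(a <= g[f[a]] for a in CH)

def d3(f, g):   # (3) f(g(b)) <= b  and  f(a) <= b  implies  a <= g(b)
    return all(f[g[b]] <= b for b in CH) and \
           all(a <= g[b] for a in CH for b in CH if f[a] <= b)

def d4(f, g):   # (4) a <= g(f(a))  and  a <= g(b)  implies  f(a) <= b
    return all(a <= g[f[a]] for a in CH) and \
           all(f[a] <= b for a in CH for b in CH if a <= g[b])

assert all(d1(f, g) == d2(f, g) == d3(f, g) == d4(f, g)
           for f in monotone for g in monotone)
print("all four characterizations agree on this chain")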
Matthew - that would be cool!

Made the PR this morning :D

Scott Finnie
Minor typo (I think) in your lecture John: "But you can still try to give me a "best approximation" to the nonexistent natural number a with 2a=4." Think that should be 2a=3? If not, it's a "thinko" on my part (kudos to Patrick O'Neill for the "thinko" concept!).

Thanks, Scott! It was definitely a typo on my part, not a thinko on yours. You'll be relieved to hear that there is indeed a natural number with \(2a = 4\). Even in the "new math".

JohnBeattie
Matthew Doty #10 - Thanks, very enlightening, especially using different notation for the two orders instead of e.g. \(\sqsubseteq_A, \sqsubseteq_B\). Typo: the orders need to be swapped in the definitions 1 - 4: "I actually see 4 equivalent definitions of a Galois connection \(f \dashv g\) for two preorders \(\langle A, \sqsubseteq\rangle\) and \(\langle B, \preceq\rangle\)"

Thanks John! Great to have you on the forums.

Marc Kaufmann
This may not be very useful, but since I had this thought while reading, I might as well post it. I was wondering what you meant by best approximation, and I can see how this is a natural way of defining it, given that all we have is the partial (or pre)order. I was wondering though whether another type of best approximation might be about limiting the domain, rather than limiting the value the function takes. So for instance, the domain on which we have an inverse for the function \(f: \mathbb{N} \to \mathbb{N}\) with \(f(n) = 2\cdot n \) is \(2 \cdot \mathbb{N}\) (by which I mean the set of all even numbers). Thus, in that case I would get an approximation that is limited in its domain, but accurate, whereas the right and left adjoints are given on the full domain, but wrong in places. After I thought a bit about it, I felt though that this is worse than the right and left adjoints, because the right and left adjoints together contain more information. I think (without having proved it) that the domain I was thinking of is the domain where the right and left adjoints agree in value -- so it has less information than the adjoints.
Valter Sorana
I just discovered an application of Galois connections to economics (or, more precisely, mechanism design) in a newly revised paper by Georg Noldeke and Larry Samuelson on "The Implementation Duality". They use the "antitone" definition of Galois connection, though (i.e., \( f(p) \leq q \Leftrightarrow p \geq g(q) \)). Here is a quote from p. 8 of the paper (a "profile" u gives utility u(x) to an agent of type x, (Φv)(x) is the highest utility that an agent of type x can get when trading/being matched to a counterpart; similarly for v(y) and Ψu): "Suppose we have a pair of profiles u and v such that each buyer x ∈ X is content to obtain u(x) rather than matching with any seller y ∈ Y and providing that seller with utility v(y), that is, the inequality u ≥ Φv holds. It is then intuitive that every seller y ∈ Y would similarly weakly prefer to obtain utility v(y) to matching with any buyer x ∈ X who insists on receiving utility u(x), that is, the inequality v ≥ Ψu holds. Reversing the roles of buyers and sellers in this explanation motivates the other direction of the equivalence."

Ignacio Viglizzo
Trying to summarize what I get here as a pattern: If \(f\dashv g\), then: the right adjoint \(g\) approximates the inverse of \(f\) from above: \(p\leq g(f(p))\), and the left adjoint \(f\) approximates the inverse of \(g\) from below: \(f(g(q))\leq q\). [Edited: I switched "right" and "left" in my first attempt, as Valter points out below.]
@IgnacioViglizzo: isn't it the other way round? The right adjoint \(g\) of \(f\) approximates the inverse of \(f\) from above: \(p\leq g(f(p))\), whereas a true inverse (if it existed) would bring \(f^{-1}(f(p))\) down to \(p\). And the left adjoint \(f\) of \(g\) approximates the inverse of \(g\) from below: \(f(g(q))\leq q\), whereas a true inverse (if it existed) would bring \(g^{-1}(g(q))\) up to \(q\). But this seems to go against John's characterization of right adjoints being conservative and left ones being "generous", so I may have made a mistake somewhere.

@ValterSorana: you are completely right! It is so easy to get this mixed up!

A couple mnemonics I find helpful: When we write \(f\dashv g\), the left adjoint \(f\) is on the left, and the right adjoint \(g\) is on the right. Also, in the important relationships defining adjoints, the left adjoint appears on the left side of \(\leq\), and the right adjoint appears on the right side. For example: $$f(a)\leq b \iff a \leq g(b).$$ The \(f\) appears on the left-hand side of the first inequality, and the \(g\) appears on the right side of the second. And they're still on their correct sides if we write the two inequalities in the other order, as in $$a \leq g(b) \iff f(a)\leq b.$$ For other important inequalities, like \(a \leq g(f(a))\) and \(f(g(b)) \leq b\), the rule of thumb is to look at which function is on the outside: the right adjoint \(g\) is on the right side of \(\leq\) when it's on the outside of the composite, and when the left adjoint is on the outside it's on the left side. They need to be there in order for the defining relationship to translate these two inequalities into the always-true statements \(f(a)\leq f(a)\) and \(g(b)\leq g(b)\).
\begin{definition}[Definition:Preadditive Category] A '''preadditive category''' is a category enriched over the monoidal category of abelian groups $\mathbf {A b}$. That is, a category such that: :its hom sets are abelian groups and: :composition is bilinear. \end{definition}
Compute $\dbinom{505}{505}$. $\dbinom{505}{505}=\dbinom{505}{0}=\boxed{1}.$
\begin{definition}[Definition:Primitive Recursion/Several Variables] Let $f: \N^k \to \N$ and $g: \N^{k + 2} \to \N$ be functions. Let $\tuple {n_1, n_2, \ldots, n_k} \in \N^k$. Then the function $h: \N^{k + 1} \to \N$ is '''obtained from $f$ and $g$ by primitive recursion''' {{iff}}: :$\forall n \in \N: \map h {n_1, n_2, \ldots, n_k, n} = \begin {cases} \map f {n_1, n_2, \ldots, n_k} & : n = 0 \\ \map g {n_1, n_2, \ldots, n_k, n - 1, \map h {n_1, n_2, \ldots, n_k, n - 1} } & : n > 0 \end {cases}$ Category:Definitions/Mathematical Logic \end{definition}
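A small illustrative sketch (not part of the definition above): the primitive recursion schema written directly in Python, with addition obtained from the successor as the classic instance.

def primitive_recursion(f, g):
    """Return h with h(ns, 0) = f(ns) and h(ns, n) = g(ns, n - 1, h(ns, n - 1))."""
    def h(ns, n):
        if n == 0:
            return f(ns)
        return g(ns, n - 1, h(ns, n - 1))
    return h

# Addition: take f(n1) = n1 and g(n1, m, r) = r + 1, so that h(n1, n) = n1 + n.
add = primitive_recursion(lambda ns: ns[0], lambda ns, m, r: r + 1)
assert add((7,), 5) == 12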
\begin{document} \maketitle \begin{abstract} In this paper, we investigate the minimal length of chains of minimal rational curves needed to join two general points on a Fano manifold of Picard number $1$ under mild assumptions. In particular, we give a sharp bound on the length by a fundamental argument. As an application, we compute the length for Fano manifolds of dimension $\leq 7$. \end{abstract} \section{Introduction} We say that a complex projective manifold $X$ is {\it Fano} if its anticanonical divisor is ample. Rational curves on Fano manifolds have been studied by several authors. For instance, J. Koll\'{a}r, Y. Miyaoka and S. Mori proved the following: \begin{them}[\cite{KMM1}, \cite{Na}]\label{KMM} For a Fano $n$-fold of Picard number $\rho=1$, two general points can be connected by a smooth rational curve whose anticanonical degree is at most $n(n+1)$. \end{them} In \cite{KMM1}, they also remarked that their proof can be modified to improve it to a bound which is asymptotically $\frac{n^2}{4}$. As a consequence of Theorem~\ref{KMM}, we know that the $n$-dimensional Fano manifolds of $\rho=1$ form a bounded family. In this direction, J. M. Hwang and S. Kebekus studied the minimal length of chains of minimal rational curves needed to join two general points \cite{HK}. In the previous article \cite{Wa}, we computed the minimal length in some cases. For example, we dealt with the case where the dimension of $X$ is at most $5$. As a corollary, we provided a better bound on the degree of Fano $5$-folds of $\rho=1$. Let $X$ be a Fano $n$-fold of $\rho=1$, ${\rm RatCurves}^n(X)$ the normalization of the space of rational curves on $X$ (see \cite[II. Definition-Proposition~2.11]{Ko}) and ${\mathscr{K}}$ a {\it minimal rational component}, which is a dominating irreducible component of ${\rm RatCurves}^n(X)$ whose anticanonical degree is minimal among such families. As in \cite[Assumption~2.1]{HK}, assume that for general $x \in X$, \begin{enumerate} \item ${\mathscr{K}}_x:=\{[C]\in {\mathscr{K}}|x \in C \}$ is irreducible, and \item $p:=\dim {\mathscr{K}}_x >0$. \end{enumerate} Remark that all known examples with $p>0$ satisfy the first condition (\cite[Remark~2.2]{HK}). Furthermore, if $p=0$, the problem we deal with in this paper is easy (see Remark~\ref{remark} and \cite[Remark~2.2]{HK}). We denote by $l_{{\mathscr{K}}}$ the minimal length of chains of general ${\mathscr{K}}$-curves needed to join two general points (for a precise definition, refer to Definition~\ref{defil}). In this paper, we give a sharp bound on the length $l_{{\mathscr{K}}}$ by a fundamental argument under the mild assumptions $\rm (i)$ and $\rm (ii)$. Our main result is \begin{them}\label{MT} Let $X$ be a Fano $n$-fold of $\rho=1$ and ${\mathscr{K}}$ a minimal rational component of $X$ such that ${\mathscr{K}}_x$ is irreducible of dimension $p>0$ for general $x \in X$. Then we have \begin{eqnarray} \lfloor \frac{n-1}{p+1}\rfloor +1 \leq l_{{\mathscr{K}}} \leq \lfloor \frac{n-p}{2}\rfloor +1, \nonumber \end{eqnarray} where $\lfloor d \rfloor$ is the largest integer $\leq d$. \end{them} Remark that the lower bound comes directly from \cite[Proposition~2.4]{HK} (see Proposition~\ref{in}). Our main contribution is to establish the sharp upper bound. As a byproduct, combined with the argument of \cite[Proof of the Theorem~Step~3, Corollary~1]{KMM1}, this theorem implies the following: \begin{cor}\label{co} Let $X$ be a Fano manifold as in Theorem~\ref{MT}. Then the following holds.
\begin{enumerate} \item Two general points on $X$ can be connected by a smooth rational curve whose anticanonical degree is at most $(p+2)(\lfloor \frac{n-p}{2}\rfloor +1) \leq \frac{(n+3)^2}{8}$. \item $(-K_X)^n \leq \{(p+2)(\lfloor \frac{n-p}{2}\rfloor +1)\}^n \leq \{\frac{(n+3)^2}{8}\}^n$, where $-K_X$ stands for the anticanonical divisor of $X$. \end{enumerate} \end{cor} This paper is organized as follows: In Section $2$, we give a precise definition of the {\it length} of chains of minimal rational curves. In Section $3$, we give a proof of our main theorem via a fundamental approach. In Section $4$, we investigate Fano manifolds whose {\it varieties of minimal rational tangents} have low-dimensional secant varieties. In Section $5$, we study the lengths of Fano manifolds of dimension $\leq 7$ by applying some previous results. In this paper, we work over the complex number field. \section{Definition of length}\label{Dl} \begin{defi} \rm \begin{enumerate} \item By a {\it variety}, we mean an integral separated scheme of finite type over the complex number field. We call a $1$-dimensional proper variety a {\it curve}. A {\it manifold} means a smooth variety. \item For a rational curve $C$ on a manifold $X$, let $f:{\mathbb{P}}^1 \rightarrow C \subset X$ be the normalization. Then $C$ is {\it free} if $f^*T_X$ is semipositive, where $T_X$ stands for the tangent bundle of $X$. \item For a projective variety $X$ and a rational curve $C$ on $X$, $C$ is a {\it ${\mathscr{K}}$-curve} if $[C]$ is contained in a subset ${\mathscr{K}} \subset {\rm RatCurves}^n(X)$. \item For a projective variety $X$ and an irreducible component ${\mathscr{K}}$ of ${\rm RatCurves}^n(X)$, ${\mathscr{K}}$ is a {\it dominating family} if for a general point $x \in X$ there exists a ${\mathscr{K}}_x$-curve. \item For a Fano manifold $X$, a {\it minimal rational component} means a dominating irreducible component of ${\rm RatCurves}^n(X)$ whose anticanonical degree is minimal among such families. \item For a vector space $V$, ${\mathbb{P}}(V)$ denotes the projective space of lines through the origin in $V$. \end{enumerate} \end{defi} Except in Theorem~\ref{ke} and Lemma~\ref{fl}, we always assume the following throughout this section. \begin{NA}\label{NA} \rm Let $X$ be a Fano $n$-fold of $\rho=1$, ${\mathscr{K}}$ a minimal rational component of $X$ such that for general $x \in X$, \begin{enumerate} \item ${\mathscr{K}}_x:=\{[C]\in {\mathscr{K}}|x \in C \}$ is irreducible, and \item $p:=\dim {\mathscr{K}}_x >0$. \end{enumerate} Notice that $p$ does not depend on the choice of a minimal rational component ${\mathscr{K}}$. It is a significant invariant of $X$. Let $\pi: {\mathscr{U}} \rightarrow {\mathscr{K}}$ and $\iota: {\mathscr{U}} \rightarrow X$ be the associated universal morphisms. Remark that $\pi$ is a {\it ${\mathbb{P}}^1$-bundle} in the sense of \cite[II. Definition~2.5]{Ko}, that is, smooth, proper and for every $z \in {\mathscr{K}}$ the fiber $\pi^{-1}(z)$ is a rational curve (\cite[II. Corollary~2.12]{Ko}). \end{NA} \begin{them}[{\cite[Theorem~3.3]{Ke2}}]\label{ke} Let $X$ be a normal projective variety and ${\mathscr{K}} \subset {\rm RatCurves}^n(X)$ a dominating family of rational curves of minimal degrees. Then, for a general point $x \in X$, there are only finitely many ${\mathscr{K}}_x$-curves which are singular at $x$.\end{them} \begin{rem}\rm The original statement of Theorem~\ref{ke} is proved under much weaker assumptions. For details, see \cite[Theorem~3.3]{Ke2}.
\end{rem} Let $X$ and ${\mathscr{K}}$ be as in the Notation-Assumptions~\ref{NA}. From Theorem~\ref{ke} and a well-known argument similar to the one used in the proof of \cite[II. Theorem~3.11]{Ko}, we know that there exists a non-empty open subset $X^0 \subset X$ satisfying \begin{enumerate} \item any ${\mathscr{K}}$-curve meeting $X^0$ is free, and \item for any $x \in X^0$, there are only finitely many ${\mathscr{K}}_x$-curves which are singular at $x$. \end{enumerate} Here ${\mathscr{K}}^0:=\pi(\iota^{-1}(X^0)) \subset {\mathscr{K}}$ and ${\mathscr{U}}^0:=\pi^{-1}({\mathscr{K}}^0) \subset {\mathscr{U}}$ are open subsets. Then we have the universal family of ${\mathscr{K}}^0$, that is, $\pi_0:=\pi|_{{\mathscr{U}}^0}: {\mathscr{U}}^0 \rightarrow {\mathscr{K}}^0$ and $\iota_0:=\iota|_{{\mathscr{U}}^0}: {\mathscr{U}}^0 \rightarrow X$. Since any ${\mathscr{K}}^0$-curve is free, $\iota_0: {\mathscr{U}}^0 \rightarrow X$ is smooth (see \cite[II. Theorem~2.15, Corollary~3.5.3]{Ko}). \begin{lem}\label{irr} For general $x \in X^0$, ${\iota_0}^{-1}(x)$ is irreducible. \end{lem} \begin{proof} For a general point $x \in X^0$, we have a surjective morphism $\pi : {\iota_0}^{-1}(x) \rightarrow {\mathscr{K}}_x$. The smoothness of $\iota_0 :{\mathscr{U}}^0 \rightarrow X$ implies that ${\iota_0}^{-1}(x)$ is equidimensional. Since ${\mathscr{K}}_x$ is irreducible of positive dimension and there are only finitely many ${\mathscr{K}}_x$-curves which are singular at $x$, ${\iota_0}^{-1}(x)$ is irreducible. \end{proof} Replacing $X^0$ with a smaller open subset of $X$, we may assume \begin{enumerate} \item[\rm (iii)] ${\rm for~any}~x \in X^0, {\iota_0}^{-1}(x)~{\rm is~irreducible.}$ \end{enumerate} \begin{defi}\rm For general $x \in X^0$, define inductively \begin{enumerate} \item $V_x^0:=\{x\}$, and \item $V_x^{m+1}:={\iota_0}({\pi_0}^{-1}({\pi_0}({\iota_0}^{-1}(V_x^m \cap X^0))))$. \end{enumerate} \end{defi} \begin{lem}\label{fl} Let $f:X \rightarrow Y$ be a flat morphism between varieties with irreducible fibers and $W$ an irreducible constructible subset of $Y$. Then $f^{-1}(W)$ is irreducible. \end{lem} \begin{proof} This is a well-known fact. For instance, see \cite[Lemma~5.3]{De}.\end{proof} Let us consider $W^m_x:={\iota_0}^{-1}(V_x^m \cap X^0)$ and $\widetilde{W^m_x}:={\pi_0}^{-1}({\pi_0}({\iota_0}^{-1}(V_x^m \cap X^0)))$. \begin{lem}\label{dim} For general $x \in X^0$, the following holds. \begin{enumerate} \item $V_x^m$, $W^m_x$ and $\widetilde{W^m_x}$ are irreducible constructible subsets. \item If $\dim V_x^m = \dim V_x^{m+1}$, we have $\dim V_x^m=n$. \item If $\dim V_x^m < n$, we have $\dim W^m_x=\dim V_x^m + p$ and $\dim \widetilde{W^m_x}=\dim V_x^m + p+1$. \end{enumerate} \end{lem} \begin{proof} $\rm (i)$ We argue by induction on $m$. When $m=0$, $V_x^0=\{ x \}$ is irreducible. Assume that $V_x^m$ is irreducible. Remark that ${\iota_0}:{\mathscr{U}}^0 \rightarrow X$ and ${\pi_0}: {\mathscr{U}}^0 \rightarrow {\mathscr{K}}^0$ are flat. Since ${\iota_0}^{-1}(x)~{\rm is~irreducible}$ for any $x \in X^0$, we know $W^m_x$ and $\widetilde{W^m_x}$ are irreducible from Lemma~\ref{fl}. Because $V_x^{m+1}={\iota_0} (\widetilde{W^m_x})$, $V_x^{m+1}$ is also irreducible. Hence ${\rm (i)}$ holds. \\ $\rm (ii)$ This is in \cite{KMM1}. For the reader's convenience, we recall their proof. First assume that there exists a rational curve $[C] \in {\mathscr{K}}^0$ which is not contained in $\overline{V_x^m}$ satisfying $C \cap (V_x^m \cap X^0) \neq \emptyset$. Then $V_x^m$ is a proper subset of $V_x^{m+1}$.
This implies that $\dim V_x^m < \dim V_x^{m+1}$. Hence, if $\dim V_x^m = \dim V_x^{m+1}$, every ${\mathscr{K}}^0$-curve meeting $V_x^m \cap X^0$ is contained in $\overline{V_x^m}$. Assume that $\dim V_x^m = \dim V_x^{m+1}$ for general $x \in X$. Let $q$ be the codimension of $V_x^m$ in $X$ and $T \subset X^0$ a sufficiently general $(q-1)$-dimensional subvariety. Denote $\bigcup_{x \in T}(V_x^m \cap X^0)$ by $H^0$ and its closure by $H$. Since the Picard number of $X$ is $1$, $H$ is an ample divisor on $X$. A general member $[C] \in {\mathscr{K}}^0$ is not contained in $H$. So we have $C \cap H^0 = \emptyset$. On the other hand, we see that $C \cap (H \setminus H^0) = \emptyset$. This follows from \cite[II. Proposition~3.7]{Ko}. It follows that $C \cap H$ is empty. However, this contradicts the ampleness of $H$. \\ $\rm (iii)$ Since ${\iota_0} : {\mathscr{U}}^0 \rightarrow X$ is flat, $W^m_x \rightarrow V_x^m \cap X^0$ is a flat morphism with irreducible fibers. This implies that $\dim W^m_x=\dim V_x^m + p$. Since ${\pi_0}: {\mathscr{U}}^0 \rightarrow {\mathscr{K}}^0$ is a ${\mathbb{P}}^1$-bundle, $\dim \widetilde{W^m_x}=\dim W^m_x$ or $\dim W^m_x+1$. If the former equality holds, we have $\overline{W_x^m} \cap \widetilde{W_x^m} = \widetilde{W_x^m}$ in ${\mathscr{U}}^0$. Here, for a subset $A \subset {\mathscr{U}}^0$, denote by $\overline{A}$ the closure of $A$ in ${\mathscr{U}}^0$. Furthermore we see that \begin{equation*} \iota_0(\widetilde{W^m_x})=\iota_0(\overline{W_x^m} \cap \widetilde{W_x^m}) \subset \overline{\iota_0(W_x^m)} \cap \iota_0(\widetilde{W_x^m}) \subset \iota_0(\widetilde{W_x^m}). \end{equation*} This yields that $\iota_0(\widetilde{W_x^m})=\overline{\iota_0(W_x^m)} \cap \iota_0(\widetilde{W_x^m}).$ Hence $\overline{\iota_0(\widetilde{W_x^m})}=\overline{\iota_0(W_x^m)}$. This implies that $\dim V_x^{m+1}=\dim \iota_0(\widetilde{W_x^m})=\dim {\iota_0(W_x^m)}=\dim V_x^m$, which, by $\rm (ii)$, contradicts the assumption $\dim V_x^m < n$. Thus we have $\dim \widetilde{W^m_x}=\dim W^m_x+1=\dim V_x^m + p+1$. \end{proof} \begin{defi}\rm For general $x \in X^0$, we denote the dimension of $V_x^m$ by $d_m$. This definition does not depend on the choice of general $x \in X^0$. \end{defi} \begin{pro}[{\cite[Proposition~2.4]{HK}}]\label{in} \begin{enumerate} \item $d_1=p+1$, and \item $d_{m+1} \leq d_m + p+1$. \end{enumerate} \end{pro} \begin{proof} The first part is derived from Mori's Bend-and-Break and the properness of ${\mathscr{K}}_x$; the second then follows by the same argument. \end{proof} \begin{defi}[{\cite[Subsection~2.2]{HK}}]\label{defil} \rm From Lemma~\ref{dim}~$\rm (ii)$, there exists an integer $m>0$ satisfying $d_m=n$ and $d_{m-1}<n$. We denote such $m$ by $l_{{\mathscr{K}}}$ and call it the {\it length} with respect to ${\mathscr{K}}$. \end{defi} \begin{rem}\label{remark} \rm \begin{enumerate} \item From Lemma~\ref{dim}~$\rm (ii)$, we have $l_{{\mathscr{K}}} \leq n$. \item When $p=0$, we know $l_{{\mathscr{K}}}=n$ from the above $\rm (i)$ and Proposition~\ref{in}. In this case, it is easy to see that this holds without the assumption of the irreducibility of ${\mathscr{K}}_x$. \end{enumerate} \end{rem} \section{Main Theorem} We continue to work under the Notation-Assumptions~\ref{NA} and use the notation of the previous section. \begin{pro}\label{key} Let $X$ and ${\mathscr{K}}$ be as in the Notation-Assumptions~\ref{NA}. If $d_{m+1}=d_m+1$, we have $d_{m+1}=n$. \end{pro} \begin{proof} We have $\widetilde{W^m_x} \cap \iota_0^{-1}(X^0) \subset W_x^{m+1} \subset \widetilde{W^{m+1}_x}$.
Furthermore we know $\dim \widetilde{W^m_x}=d_m + p+1$ and $\dim W_x^{m+1}=d_{m+1}+p=d_m+p+1$ from Lemma~\ref{dim} $\rm (iii)$ and our assumption. For a subset $A \subset {\mathscr{U}}^0$, denote by $\overline{A}$ the closure of $A$ in ${\mathscr{U}}^0$. We see that $\overline{\widetilde{W^m_x} \cap \iota_0^{-1}(X^0)} \cap W_x^{m+1} =W_x^{m+1}$. Hence we have \begin{eqnarray*} \widetilde{W^{m+1}_x} &=& \pi_0^{-1}(\pi_0(W_x^{m+1}))=\pi_0^{-1}(\pi_0((\overline{\widetilde{W^m_x} \cap \iota_0^{-1}(X^0)}) \cap W_x^{m+1})) \\ &\subset& \pi_0^{-1}(\overline{\pi_0(\widetilde{W^m_x} \cap \iota_0^{-1}(X^0))} \cap \pi_0(W_x^{m+1})) \\ &=& \pi_0^{-1}(\overline{\pi_0(\widetilde{W^m_x} \cap \iota_0^{-1}(X^0))}) \cap \pi_0^{-1}(\pi_0(W_x^{m+1})) \\ &=& \overline{\pi_0^{-1}(\pi_0(\widetilde{W^m_x} \cap \iota_0^{-1}(X^0)))} \cap \widetilde{W^{m+1}_x}. \end{eqnarray*} Here the last equality holds because $\pi_0$ is an open morphism. Moreover we see that $\pi_0^{-1}(\pi_0(\widetilde{W^m_x} \cap \iota_0^{-1}(X^0))) \subset \widetilde{W^m_x}$. Therefore we have $\widetilde{W^{m+1}_x} \subset \overline{\widetilde{W^m_x}}$. This implies that $\overline{\widetilde{W^m_x}}=\overline{\widetilde{W^{m+1}_x}}$. Hence we obtain that $\dim \widetilde{W^m_x} = \dim \widetilde{W_x^{m+1}}$. Thus we see $d_{m+1}= d_{m+2}$. As a consequence, we have $d_{m+1}=n$ by Lemma~\ref{dim}~$\rm (ii)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{MT}] Obviously, it follows from the definition that $d_{l_{{\mathscr{K}}}}=n$. Propositions~\ref{in} and \ref{key} imply that \begin{eqnarray} (p+1)+2(m-1) \leq d_m \leq m(p+1)~{\rm for}~m < l_{{\mathscr{K}}}. \nonumber \end{eqnarray} When $d_m=(p+1)+2(m-1)$ for any $m<l_{{\mathscr{K}}}$, we have $n=d_{l_{{\mathscr{K}}}}=(p+1)+2(l_{{\mathscr{K}}}-1)$ or $(p+1)+2(l_{{\mathscr{K}}}-2)+1$. In this case, $l_{{\mathscr{K}}}=\lfloor \frac{n-p}{2} \rfloor+1$. On the other hand, when $d_m=m(p+1)$ for any $m<l_{{\mathscr{K}}}$, we have $n=d_{l_{{\mathscr{K}}}}=(l_{{\mathscr{K}}}-1)(p+1)+k$ for $1 \leq k \leq p+1$. In this case, $l_{{\mathscr{K}}}=\lfloor \frac{n-1}{p+1}\rfloor +1$. Hence our assertion holds. \end{proof} \begin{cor}\label{sp} Let $X$ and ${\mathscr{K}}$ be as in the Notation-Assumptions~\ref{NA}. $l_{{\mathscr{K}}}$ is equal to $\lfloor \frac{n-1}{p+1} \rfloor+1= \lfloor \frac{n-p}{2}\rfloor +1$ if and only if one of the following holds:\\ ${\rm (i)}~ n-3 \leq p \leq n-1$, ${\rm (ii)}~ p=1$, or ${\rm (iii)}~ (n,p)=(7,2)$. \end{cor} \begin{proof} The ``if'' part is derived from Theorem~\ref{MT}. The ``only if'' part follows from a direct computation. \end{proof} \section{Varieties of minimal rational tangents and their secant varieties} \subsection{Basic facts of varieties of minimal rational tangents} Assume that $X$ is a Fano $n$-fold of $\rho=1$ (or more generally, a uniruled manifold) and ${\mathscr{K}}$ a minimal rational component of $X$ such that $p=\dim {\mathscr{K}}_x$ for general $x \in X$. It is {\it not} necessary to suppose the Notation-Assumptions~\ref{NA}. Denote by $\widetilde{{\mathscr{K}}_x}$ the normalization of ${\mathscr{K}}_x$. Then it is known that $\widetilde{{\mathscr{K}}_x}$ is smooth for general $x \in X$ (see \cite[Theorem~1.3]{Hw2}). For a general point $x \in X$, we define the tangent map ${\tau}_x : \widetilde{{{\mathscr{K}}}_x} \rightarrow {\mathbb{P}}(T_xX)$ by assigning to each member of $\widetilde{{\mathscr{K}}_x}$ which is smooth at $x$ its tangent vector at $x$. The regularity of $\tau_x$ follows from \cite[Theorem~3.4]{Ke2}.
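Two standard examples may help to orient the reader (both are classical; see \cite{Hw2}): for $X={\mathbb{P}}^n$, the ${\mathscr{K}}_x$-curves are the lines through $x$, and $\tau_x$ maps $\widetilde{{\mathscr{K}}_x}$ isomorphically onto ${\mathbb{P}}(T_xX)$, so that $p=n-1$; for a smooth quadric $X=Q^n$ with $n \geq 3$, the ${\mathscr{K}}_x$-curves are again lines, and the image of $\tau_x$ is a smooth quadric of dimension $n-2$ in ${\mathbb{P}}(T_xX)$, so that $p=n-2$.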
We denote by ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ the image of ${\tau}_x$, which is called the {\it variety of minimal rational tangents} at $x$. Let ${\mathbb{P}}(W_x)$ be the linear span of ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ and $W$ the distribution defined by $W_x$ for general $x \in X$ (see \cite[Section~2]{Hw2}). Hwang's survey \cite{Hw2} is a standard reference on varieties of minimal rational tangents. \begin{them} [{\cite[Theorem~1]{HM2}},{\cite[Theorem~3.4]{Ke2}}]\label{norm} Let $X$ be a Fano manifold (or more generally, a uniruled manifold). Then the tangent map ${\tau}_x : \widetilde{{{\mathscr{K}}}_x} \rightarrow {\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is the normalization. \end{them} \begin{them}[\cite{CMSB,Ke1}]\label{CMSB} Let $X$ be a Fano $n$-fold (or more generally, a uniruled $n$-fold). If $p=n-1$, namely ${\mathscr{C}}_x ={\mathbb{P}}(T_xX)$, then $X$ is isomorphic to ${\mathbb{P}}^n$. \end{them} \begin{them}[\cite{HH}]\label{HHM} Let $X$ be a Fano $n$-fold of $\rho=1$. Let $S=G/P$ be a rational homogeneous variety corresponding to a long simple root and ${\mathscr{C}}_o \subset {\mathbb{P}}(T_oS)$ the variety of minimal rational tangents at a reference point $o \in S$. Assume ${\mathscr{C}}_o \subset {\mathbb{P}}(T_oS)$ and ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ are isomorphic as projective subvarieties. Then $X$ is isomorphic to $S$. \end{them} \begin{cor}\label{LG} Let $X$ be a Fano $n$-fold of $\rho=1$. If ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is projectively equivalent to the Veronese surface $v_2({\mathbb{P}}^2) \subset {\mathbb{P}}^5$, $X$ is the $6$-dimensional Lagrangian Grassmann $LG(3,6)$ which parametrizes $3$-dimensional isotropic subspaces of a symplectic vector space ${\mathbb{C}}^6$. \end{cor} \begin{proof} $LG(3,6)$ is the rational homogeneous variety corresponding to the unique long simple root of the Dynkin diagram $C_3$. Furthermore the variety of minimal rational tangents of $LG(3,6)$ at a general point is projectively equivalent to $v_2({\mathbb{P}}^2)$ (for example, see \cite[Proposition~1]{HM}). Hence Theorem~\ref{HHM} implies that $X$ is isomorphic to $LG(3,6)$. \end{proof} \begin{them}[\cite{Mi}]\label{Mi} Let $X$ be a Fano $n$-fold of $\rho=1$. If $n \geq 3$, the following are equivalent. \begin{enumerate} \item{$X$ is isomorphic to a smooth quadric hypersurface $Q^n$.} \item{The minimal value of the anticanonical degree of rational curves passing through a general point $x_0 \in X$ is equal to $n$.} \end{enumerate} \end{them} \begin{cor}\label{Mi2} Let $X$ be a Fano $n$-fold of $\rho=1$. If $p=n-2 \geq 1$, namely ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is a hypersurface, $X$ is isomorphic to $Q^n$. \end{cor} \begin{proof} From our assumption $p=n-2$, $X$ is covered by rational curves of anticanonical degree $\leq n$. Since finitely many families of rational curves of anticanonical degree $<n$ cannot be dominating under the assumption $p=n-2$, the minimal value of the anticanonical degree of rational curves passing through a general point is equal to $n$. Therefore $X$ is isomorphic to $Q^n$ by Theorem~\ref{Mi}. \end{proof} \begin{pro}[{\cite[Proposition~5]{Hw3}}]\label{lin} Let $X$ be a Fano $n$-fold of $\rho=1$. Then ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ cannot be a linear subspace except ${\mathscr{C}}_x = {\mathbb{P}}(T_xX)$. \end{pro} \begin{pro}[{\cite[Proposition~16]{A2}}]\label{ara} Let $X$ be a Fano manifold (or more generally, a uniruled manifold). If ${\mathscr{C}}_x \subset {\mathbb{P}}(W_x)$ is an irreducible hypersurface for general $x \in X$, $W \subset T_X$ is integrable.
\end{pro} \begin{pro}[{\cite[Proposition 2]{Hw1}}]\label{hwlem2} Let $X$ be a Fano $n$-fold of $\rho=1$. $W$ is integrable if and only if $W_x$ coincides with $T_xX$ for general $x \in X$. \end{pro} \begin{pro}[cf. {\cite[Proposition~2.4 and 2.6]{Hw2}}]\label{mok} Let $X$ be a Fano $n$-fold of $\rho=1$. Assume that ${\mathscr{C}}_x$ is smooth and irreducible. If $2(p+1)> \dim W_x$ holds, $W \subset T_X$ is integrable. \end{pro} \subsection{Secant variety} \begin{defi}{\rm For varieties $Z_1, Z_2 \subset {\mathbb{P}}^N$, we define the {\it join} $S(Z_1,Z_2) \subset {\mathbb{P}}^N$ by the closure of the union of lines connecting two distinct points $x_1 \in Z_1$ and $x_2 \in Z_2$. In the special case that $Z=Z_1=Z_2$, $SZ:=S(Z,Z)$ is called the {\it secant variety} of $Z$. } \end{defi} \begin{pro}[{\cite[Corollary 2.3.7]{Ru}}, {\cite{Se}}]\label{russo} Let $Z \subset {\mathbb{P}}^N$ be an irreducible nondegenerate variety of dimension $n \geq 2$. Assume that $\dim SZ = n+2< N$. Then $Z$ is projectively equivalent to one of the following: \begin{enumerate} \item $Z \subset {\mathbb{P}}^N$ is a cone over a curve, or \item $Z \subset {\mathbb{P}}^{n+3}$ is a cone over the Veronese surface $v_2({\mathbb{P}}^2) \subset {\mathbb{P}}^5$ (when $n=2$, $Z=v_2({\mathbb{P}}^2) \subset {\mathbb{P}}^5$). \end{enumerate} \end{pro} \begin{lem}[{\cite[Lemma~4.3]{A1}}]\label{arau} Let $Z \subset {\mathbb{P}}^N$ be an irreducible cone whose normalization is smooth. Then $Z \subset {\mathbb{P}}^N$ is a linear space. \end{lem} \subsection{Varieties of minimal rational tangents admitting low dimensional secant varieties} \begin{pro}\label{p+1} Let $X$ and ${\mathscr{K}}$ be as in the Notation-Assumptions~\ref{NA}. Denote by ${\mathscr{C}}_x$ the variety of minimal rational tangents at a general point $x \in X$. \begin{enumerate} \item If $\dim S{\mathscr{C}}_x=p$ for general $x \in X$, then $X={\mathbb{P}}^n$. \item If $\dim S{\mathscr{C}}_x=p+1$ for general $x \in X$, then $X=Q^n$. \end{enumerate} \end{pro} \begin{proof} $\rm (i)$ Assume $\dim S{\mathscr{C}}_x=p$ for general $x \in X$. Then ${\mathscr{C}}_x=S{\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is linear. Proposition~\ref{lin} implies that ${\mathscr{C}}_x={\mathbb{P}}(T_xX)$. Hence we have $p=n-1$. By Theorem~\ref{CMSB}, we see that $X$ is isomorphic to ${\mathbb{P}}^n$.\\ $\rm (ii)$ Assume $\dim S{\mathscr{C}}_x=p+1$ for general $x \in X$. Then, for $z \in S{\mathscr{C}}_x \setminus {\mathscr{C}}_x$, $S(z,{\mathscr{C}}_x)$ coincides with $S{\mathscr{C}}_x$. So we see $S(z,S{\mathscr{C}}_x)=S(z,S(z,{\mathscr{C}}_x))=S(z,{\mathscr{C}}_x)=S{\mathscr{C}}_x$. This implies that $S{\mathscr{C}}_x$ is a $(p+1)$-dimensional linear subspace. Thus we see ${\mathscr{C}}_x \subset {\mathbb{P}}(W_x)=S{\mathscr{C}}_x$ is an irreducible hypersurface for general $x \in X$. From Proposition~\ref{ara}, $W \subset T_X$ is integrable. Hence Proposition~\ref{hwlem2} implies that $W_x$ coincides with $T_xX$ for general $x \in X$. Therefore we have $p=n-2$. It follows that $X$ is isomorphic to $Q^n$ from Corollary~\ref{Mi2}.
\end{proof} \begin{pro}\label{p+2} Under the same assumption as in Proposition~\ref{p+1}, if $\dim S{\mathscr{C}}_x=p+2$ for general $x \in X$, then one of the following holds: \begin{enumerate} \item $p=1$, \item $p=n-3$, \item $X$ is the Lagrangian Grassmann $LG(3,6)$, \item ${\mathscr{C}}_x$ is the Veronese surface in its linear span ${\mathbb{P}}(W_x)={\mathbb{P}}^5$ and $n>6$, or \item ${\mathscr{C}}_x$ is a degenerate singular variety satisfying $S{\mathscr{C}}_x={\mathbb{P}}^{p+2}$. \end{enumerate} \end{pro} \begin{proof} Suppose that $\rm (i)$, $\rm (ii)$ and $\rm (iii)$ do not hold. Then it is enough to show that either $\rm (iv)$ or $\rm (v)$ holds. First assume that $S{\mathscr{C}}_x$ does not coincide with ${\mathbb{P}}(W_x)$. From Proposition~\ref{russo}, Theorem~\ref{norm} and Lemma~\ref{arau}, ${\mathscr{C}}_x$ is projectively equivalent to the Veronese surface $v_2({\mathbb{P}}^2)$. If ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is nondegenerate, $X$ is isomorphic to the Lagrangian Grassmann $LG(3,6)$ by Corollary~\ref{LG}. This contradicts our assumption. Hence ${\mathscr{C}}_x$ is degenerate. Second assume that $S{\mathscr{C}}_x={\mathbb{P}}(W_x)$. Note that $W_x$ does not coincide with $T_xX$ because $p$ is not $n-3$ by our assumption. Hence ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is degenerate. Here we have $2(p+1)> \dim W_x$. If ${\mathscr{C}}_x$ is smooth, Proposition~\ref{mok} implies that the distribution $W \subset T_X$ is integrable. Furthermore we see $W_x=T_xX$ by Proposition~\ref{hwlem2}. This is a contradiction. Thus ${\mathscr{C}}_x$ is singular. \end{proof} \section{Low dimensional case} Let $X$ be a Fano $n$-fold of $\rho=1$, ${\mathscr{K}}$ a minimal rational component of $X$ satisfying the Notation-Assumptions~\ref{NA}. We use the notation introduced in Section~\ref{Dl}. \begin{them}[{\cite[Theorem~3.12]{HK}}]\label{HKS} $d_2 \geq \dim S{\mathscr{C}}_x+1$. \end{them} \begin{lem}\label{p+3} If $l_{{\mathscr{K}}}=\lfloor \frac{n-p}{2}\rfloor +1$, we have $\dim S{\mathscr{C}}_x \leq p+3$. Moreover if $\dim S{\mathscr{C}}_x=p+3$, then $n$ and $p+1$ are congruent modulo $2$. \end{lem} \begin{proof} Suppose that $l_{{\mathscr{K}}}=\lfloor \frac{n-p}{2}\rfloor +1$ holds. According to Proposition~\ref{key}, we know $d_2 +2(l_{{\mathscr{K}}}-3) +1 \leq d_{l_{{\mathscr{K}}}}=n$. Hence we have $d_2 \leq n-2l_{{\mathscr{K}}}+5=n-2\lfloor \frac{n-p}{2}\rfloor +3$. The right hand side is equal to $p+3$ or $p+4$. Furthermore if it is $p+4$, then $n$ and $p+1$ are congruent modulo $2$. Consequently, our assertion follows from Theorem~\ref{HKS}. \end{proof} From Theorem~\ref{MT} (cf. Corollary~\ref{sp}) and Remark~\ref{remark}~{\rm (ii)}, we can compute the length $l_{{\mathscr{K}}}$ in the case $n \leq 7$.
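To illustrate how the bounds of Theorem~\ref{MT} determine these values, we record a worked computation for the reader's convenience: for $(n,p)=(7,2)$ the two bounds coincide, \begin{eqnarray} \lfloor \tfrac{7-1}{2+1} \rfloor +1 = 3 = \lfloor \tfrac{7-2}{2} \rfloor +1, \nonumber \end{eqnarray} so $l_{{\mathscr{K}}}=3$, whereas for $(n,p)=(6,2)$ they differ, \begin{eqnarray} \lfloor \tfrac{6-1}{2+1} \rfloor +1 = 2 < 3 = \lfloor \tfrac{6-2}{2} \rfloor +1, \nonumber \end{eqnarray} and the length is only determined up to $l_{{\mathscr{K}}} \in \{2,3\}$.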
In fact, we obtain the following table: \begin{center} \begin{tabular}{|c|c|c||c|c|c||c|c|c||c|c|c||c|c|c|} \hline $n$ & $p$ & $l_{{\mathscr{K}}}$ & $n$ & $p$ & $l_{{\mathscr{K}}}$ & $n$ & $p$ & $l_{{\mathscr{K}}}$ & $n$ & $p$ & $l_{{\mathscr{K}}}$ & $n$ & $p$ & $l_{{\mathscr{K}}}$ \\ \hline \hline $3$&$2$&$1$& $4$&$3$&$1$& $5$&$4$&$1$& $6$&$5$&$1$& $7$&$6$&$1$ \\ $3$&$1$&$2$& $4$&$2$&$2$& $5$&$3$&$2$& $6$&$4$&$2$& $7$&$5$&$2$ \\ $3$&$0$&$3$& $4$&$1$&$2$& $5$&$2$&$2$& $6$&$3$&$2$& $7$&$4$&$2$ \\ $$ & $$&$$ & $4$&$0$&$4$& $5$&$1$&$3$& $6$&$2$&$2~{\rm or}~3$& $7$&$3$&$2~{\rm or}~3$ \\ $$ & $$& $$& $$ &$$ & $$& $5$&$0$&$5$& $6$&$1$&$3$& $7$&$2$&$3$ \\ $$ & $$& $$& $$ &$$ & $$& $$ &$$ &$$ & $6$&$0$&$6$& $7$&$1$&$4$ \\ $$ & $$& $$& $$ &$$ & $$& $$ &$$ &$$ & $$& $$&$$& $7$&$0$&$7$ \\ \hline \end{tabular} \end{center} Here we assume the irreducibility of ${\mathscr{K}}_x$ if $p \geq 1$. However $l_{{\mathscr{K}}}=n$ holds without the assumption of the irreducibility of ${\mathscr{K}}_x$ if $p=0$. From this table, we see that the length $l_{{\mathscr{K}}}$ depends only on the pair $(n,p)$ in the case $n \leq 5$. However, this does not hold when $n \geq 6$. In fact, we have the following examples. \begin{exa} \begin{enumerate} \item A $6$-dimensional smooth hypersurface of degree $4$ satisfies $(n,p,l_{{\mathscr{K}}})=(6,2,2)$. \item The Lagrangian Grassmann $LG(3,6)$ satisfies $(n,p,l_{{\mathscr{K}}})=(6,2,3)$. \end{enumerate} \end{exa} \begin{proof} See \cite[Proposition~6.2, Corollary~6.6]{HK}. \end{proof} Here we study the structure of $X$ when $(n,p,l_{{\mathscr{K}}})=(6,2,3)$ and $(7,3,3)$. \begin{pro} When $(n,p,l_{{\mathscr{K}}})=(6,2,3)$, one of the following holds: \begin{enumerate} \item $X=LG(3,6)$, or \item ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is a degenerate singular surface satisfying $S{\mathscr{C}}_x={\mathbb{P}}^{4}$. \end{enumerate} \end{pro} \begin{proof} From Lemma~\ref{p+3}, we have $\dim S{\mathscr{C}}_x \leq 4$. Hence our assertion is derived from Propositions~\ref{p+1} and \ref{p+2}. \end{proof} The same argument implies the following: \begin{pro} When $(n,p,l_{{\mathscr{K}}})=(7,3,3)$, ${\mathscr{C}}_x \subset {\mathbb{P}}(T_xX)$ is a degenerate singular $3$-fold satisfying $S{\mathscr{C}}_x={\mathbb{P}}^{5}$. \end{pro} \begin{rem} \rm In general, it is believed that any variety of minimal rational tangents ${\mathscr{C}}_x$ at a general point is smooth. \end{rem} {\bf Acknowledgements} The author would like to thank Professor Hajime Kaji for valuable seminars and encouragement. He is also grateful to the referees for their careful reading of the text and useful suggestions and comments. In particular, one of the referees pointed out a gap in the proof of Lemma~\ref{dim}. The author is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists. \end{document}
arXiv
Impact of tissue transport on PET hypoxia quantification in pancreatic tumours
Edward Taylor, Jennifer Gottwald, Ivan Yeung, Harald Keller, Michael Milosevic, Neesha C. Dhani, Iram Siddiqui, David W. Hedley & David A. Jaffray
EJNMMI Research volume 7, Article number: 101 (2017)
The clinical impact of hypoxia in solid tumours is indisputable and yet questions about the sensitivity of hypoxia-PET imaging have impeded its uptake into routine clinical practice. Notably, the binding rate of hypoxia-sensitive PET tracers is slow, comparable to the rate of diffusive equilibration in some tissue types, including mucinous and necrotic tissue. This means that tracer uptake on the scale of a PET imaging voxel—large enough to include such tissue and hypoxic cells—can be as much determined by tissue transport properties as it is by hypoxia.
Dynamic PET imaging of 20 patients with pancreatic ductal adenocarcinoma was used to assess the impact of transport on surrogate metrics of hypoxia: the tumour-to-blood ratio [TBR(t)] at time t post-tracer injection and the trapping rate k 3 inferred from a two-tissue compartment model. Transport quantities obtained from this model included the vascular influx and efflux rate coefficients, k 1 and k 2, and the distribution volume v d ≡ k 1/(k 2+k 3).
Correlations between voxel- and whole tumour-scale k 3 and TBR values were weak to modest: the population average of the Pearson correlation coefficients (r) between voxel-scale k 3 and TBR(1 h) [TBR(2 h)] values was 0.10 [0.01] in the 20 patients, while the correlation between tumour-scale k 3 and TBR(2 h) values was 0.58. Using Patlak's formula to correct uptake for the distribution volume, correlations became strong (r = 0.80 [0.52] and r = 0.93, respectively). The distribution volume was substantially below unity for a large fraction of tumours studied, with v d ranging from 0.68 to 1 (population average, 0.85). Surprisingly, k 3 values were strongly correlated with v d in all patients. A model was proposed to explain this in which k 3 is a combination of the hypoxia-sensitive tracer binding rate k b and the rate k eq of equilibration in slow-equilibrating regions occupying a volume fraction 1−v d of the imaged tissue. This model was used to calculate the proposed hypoxia surrogate marker k b.
Hypoxia-sensitive PET tracers are slow to reach diffusive equilibrium in a substantial fraction of pancreatic tumours, confounding quantification of hypoxia using both static (TBR) and dynamic (k 3) PET imaging. TBR is reduced by distribution volume effects and k 3 is enhanced by slow equilibration. We proposed a novel model to quantify tissue transport properties and hypoxia-sensitive tracer binding in order to improve the sensitivity of hypoxia-PET imaging.
Background
Positron emission tomography imaging of hypoxia is a promising way to detect hypoxia non-invasively in solid tumours [1, 2]. A major challenge to this approach is that the binding rate of hypoxia-sensitive PET tracers such as fluoromisonidazole (FMISO) and fluoroazomycin arabinoside (FAZA) is slow as compared to, e.g., fluorodeoxyglucose (FDG), and can be comparable to diffusive equilibration rates in tumour tissues. As an example, a typical threshold used to decide whether or not a PET voxel is hypoxic is that the voxel-scale tracer concentration exceeds that in blood by 20% after 2 h; i.e., TBR(2 h) > 1.2 [3–5].
This means that the binding rate of tracer in hypoxic tissue is $$ k_{\text{b}}\gtrsim \frac{0.2}{2\mathrm{ h}} = 0.1~\mathrm{h}^{-1}. $$ In comparison, the rate at which tracer diffuses across a distance l through the extravascular space of tissue scales as $$ k_{\text{eq}} \sim D/l^{2}, $$ where D is the diffusivity of the tracer. For FAZA and similarly sized molecules (on the order of several hundred Daltons), D∼10 μm2/s in most tissue [6, 7]. Hence, taking l∼100 μm to be the distance between capillaries, the equilibration rate k eq∼20 h−1 for tracer is typically much faster than the binding rate, and comparable to the rate of extravasation, k 1. On the other hand, for tissue with substantial mucous deposits (common in carcinomas [8] such as pancreatic ductal adenocarcinoma [9]), where diffusivity can be slowed by two or more orders of magnitude [10, 11], the rate of equilibration slows drastically, becoming comparable to the binding rate. This can also happen in tissue with necrotic regions (\(l\gtrsim 500\;\mu \mathrm {m}\)) interspersed with hypoxic cells. Slow diffusive equilibration has two important consequences for quantifying tumour hypoxia based on tracer uptake. First, if an imaging voxel contains both hypoxic cells and either mucous or small necroses, the voxel-scale TBR value will be reduced by the fact that tracer does not reach diffusive equilibrium at the standard imaging time, between 2 and 3 h post-injection. Hence, the sensitivity of static PET imaging to hypoxia is diminished. Second, as tracer slowly equilibrates in mucinous and necrotic tissue, its concentration increases at a rate comparable to that due to hypoxia-induced binding, and a compartment model [12–15] may not be able to distinguish the two processes. In this case, we hypothesize that the trapping rate k 3 represents a sum of the binding rate k b and the rate of equilibration. Quantifying hypoxia based on k 3 will thus overestimate its extent since k 3 ≥ k b. In this paper, we seek to test these hypotheses by modeling the pharmacokinetics of FAZA in 20 patients with pancreatic ductal adenocarcinoma (PDAC), applying basic principles of diffusive equilibration to interpret transport data calculated from a standard two-tissue compartment model.
Methods
Patient population and PET/CT scans
Data was taken from 20 patients with biopsy-confirmed pancreatic ductal adenocarcinoma and FAZA-PET scans. Dynamic PET imaging scans were acquired over 1 h following injection of FAZA. The 1-h time-activity curves (TAC 1) were each binned into 34 frames: 12 10-s frames, followed by 8 32-s frames, followed by 7 2-min frames, followed by 7 5-min frames. Patients returned for a static PET scan at 2 h. CT scans used for co-registration were taken at the beginning of the dynamic and static PET scans. Further details of this patient cohort and the PET/CT scans have been described previously [16].
Region of interest contours
PET activity data was obtained for regions of interest (ROIs) contoured using co-registered CT images. Tumour ROIs were contoured by a radiologist using the CT scan at 2 h. This was co-registered manually to the initial CT scan and the two CT ROI sets were co-registered to the dynamic and static PET scans. In order to minimize effects resulting from high liver uptake of FAZA, aorta ROIs were contoured from the same range of PET/CT slices (along the cranial-caudal axis) as the tumour ROIs.
At the level of the pancreas, the aorta is between 1.5 and 2 cm in diameter; to minimize partial volume effects, ROIs in the aorta were restricted to 0.75 cm in diameter and combined so that at least 25 PET voxels (3.9 × 3.9 × 3.3 mm 3 each) were imaged.
Compartment model analysis
Dynamic PET TACs of FAZA were analyzed using the two-tissue compartment model [12–15, 17–19]: $$ \frac{d C_{d}(t)}{dt} = k_{1}C_{\text{In}}(t)-\left[k_{2}+k_{3}\right]C_{d}(t) $$ $$ \frac{dC_{b}(t)}{dt} = k_{3}C_{d}(t). $$ Here, the concentration of tracer in the extravascular space of an imaged region has been partitioned into an unbound, diffusing component C d as well as a component C b that is bound as a result of hypoxia. C In is the "input" function, which we took to be the imaged tracer concentration in the aorta, as described above. As noted earlier, k 1 and k 2 are the vascular influx and efflux coefficients and k 3 is the tracer trapping rate. The total tracer concentration in an imaged region is $$ C(t) = v_{b}C_{\text{In}}(t) + (1-v_{b})\left[C_{d}(t)+C_{b}(t)\right], $$ where v b is the volume fraction occupied by blood in the region of interest. The above model was fitted to both the 1-h TACs (TAC 1) as well as the combined 2-h TACs (TAC 2) comprising the 1-h TACs plus static scans at 2 h (in part to assess co-registration errors, which should be greater for TAC 2). Coefficients (v b , k 1, k 2, and k 3) were determined by minimizing $$ \chi^{2} = \sum_{i}^{N}w_{i}\left[C_{\text{model}}(t_{i})-C_{\text{data}}(t_{i})\right]^{2}, $$ where C model(t i ) are the model activity values [Eqs. (3)–(5)] and C data(t i ) are the measured values acquired during the N discrete time frames; N=34 for TAC 1 and N=35 for TAC 2. To avoid over-weighting short-duration early time frames, we used the weighting function w i =δ t i in Eq. 6, where δ t i was the duration of the ith time frame (because the t=2 h time-point in TAC 2 did not represent a true 1-h time bin beyond the TAC 1 data set, we used δ t 35=δ t 34=5 min). Equation 6 was minimized in Wolfram Mathematica 11.1 using its built-in numerical minimization routine (NMinimize) with C model(t i ) calculated using trapezoidal integration. An important tissue transport quantity is the distribution volume: $$ v_{d} \equiv \frac{k_{1}}{k_{2}+k_{3}}. $$ It represents the volume fraction of an imaged ROI that tracer initially fills, i.e., in which it rapidly equilibrates. Patlak's formula [20, 21], $$ \text{TBR}(t) = v_{b} + (1-v_{b})v_{d} + K_{i} (1-v_{b})\frac{\int^{t}_{0}\;d\tau\;C_{\text{In}}(\tau)}{C_{\text{In}}(t)}, $$ for the tumour-to-blood ratio at time t was used to "correct" TBR for distribution volume effects: $$ \begin{aligned} \text{TBR}_{\text{corrected}}(t) & \equiv \frac{\text{TBR}(t)-v_{b}(1-v_{d})}{v_{d}} \\ & = 1 + k_{3}(1-v_{b})\frac{\int^{t}_{0}\;d\tau\;C_{\text{In}}(\tau)}{C_{\text{In}}(t)}. \end{aligned} $$ In Eq. (8), K i ≡ k 3 v d is sometimes referred to as the "net trapping rate". TBR corrected represents the theoretical tumour-to-blood ratio that would have arisen had the distribution volume been unity. Correlations were analyzed between k 3, v d , TBR, and TBR corrected, where TBR was calculated as $$ \text{TBR}(t) \equiv \frac{C_{\text{data}}(t)}{C_{\text{In}}(t)} $$ at both t=1 and 2 h. Pearson correlation coefficients were calculated to quantify correlations between voxel- and tumour-scale values of these quantities.
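To make the fitting procedure concrete, here is a minimal Python sketch for a single TAC. The authors minimized Eq. (6) in Mathematica (NMinimize) with trapezoidal integration; the sketch below instead uses SciPy's generic ODE solver and optimizer, and the array names (t, c_in, c_data, w) and starting values are illustrative assumptions rather than details taken from the study.

import numpy as np
from scipy.optimize import minimize
from scipy.integrate import solve_ivp

def model_tac(params, t, c_in):
    """Two-tissue irreversible compartment model of Eqs. (3)-(5)."""
    v_b, k1, k2, k3 = params
    def rhs(time, y):
        c_d, c_b = y
        cin = np.interp(time, t, c_in)          # linearly interpolated input function
        return [k1 * cin - (k2 + k3) * c_d,     # Eq. (3)
                k3 * c_d]                       # Eq. (4)
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-6)
    c_d, c_b = sol.y
    return v_b * c_in + (1.0 - v_b) * (c_d + c_b)   # Eq. (5)

def fit_tac(t, c_in, c_data, w):
    """Minimize the weighted chi-squared of Eq. (6); w holds the frame durations."""
    chi2 = lambda p: np.sum(w * (model_tac(p, t, c_in) - c_data) ** 2)
    res = minimize(chi2, x0=[0.05, 0.3, 0.2, 0.05],   # illustrative starting point
                   bounds=[(0.0, 1.0), (0.0, None), (0.0, None), (0.0, None)])
    v_b, k1, k2, k3 = res.x
    return v_b, k1, k2, k3, k1 / (k2 + k3)            # final entry is v_d, Eq. (7)

def tbr_corrected(tbr, v_b, v_d):
    """Distribution-volume correction of TBR, Eq. (9)."""
    return (tbr - v_b * (1.0 - v_d)) / v_d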
Voxel-scale coefficients were calculated by fitting the above model to the individual TACs for each voxel, while tumour-scale values were obtained using the average TAC in each tumour. Correlations were reported as the population average (over twenty tumours) of the intra-tumour voxel-scale r values ("voxel-scale") and as correlations between tumour-scale values ("tumour-scale").
Results
Correlations between TBR and k 3
Comparing voxel-scale k 3 and TBR values in each tumour, weak correlations were found at 1 h (average of voxel-scale r values = 0.10) and at 2 h (r value = 0.01). Patient-specific results are shown in Online Resource 1 (Additional file 1). Strong correlations were found between voxel-scale k 3 and TBRcorrected at 1 h (population average r value = 0.80) and moderate correlations were found at 2 h (r value = 0.53). Although standard imaging protocols call for measurement of TBR at least 2 h after tracer injection, transport coefficient (v b , k 1, k 2, k 3) values obtained using the 1- and 2-h data sets were equivalent to within the fit errors of the compartment model. The reduction in correlations is thus a metric for co-registration errors between the 1- and 2-h data sets, as well as for the diminished validity of Eq. (8), which is only a good approximation at times less than the equilibration time 1/k eq [21]. Representative voxel-scale correlations are shown in Figs. 1a–d for one patient. Table 1 displays population averages of voxel-scale correlations using the 2-h data sets as well as the mean values of the corresponding quantities.
Fig. 1 Correlations between tumour-to-blood uptake ratios and the trapping rate are enhanced when uptake is corrected for the distribution volume. Left side: tumour-to-blood uptake ratio of FAZA versus trapping rate; right: tumour-to-blood uptake ratio corrected for the distribution volume versus trapping rate. a and b voxel-scale values for a representative patient tumour (pt. 2) using TAC 1. c and d same as a and b but with TAC 2. e and f tumour-scale values using TAC 2 for all 20 tumours. Pearson correlation coefficients are shown.
Table 1 Top: correlation matrix of Pearson correlation coefficients between the mean voxel-scale parameters across the twenty tumours studied using the 2-h data sets. Bottom: population average values of the corresponding voxel-scale coefficients. Standard deviations of mean values across patients are indicated in parentheses.
Whole-tumour kinetics are less sensitive to co-registration errors, and the tumour-scale trapping rate exhibited modest correlations with TBR (across twenty patients, mean r = 0.58) but strong correlations with TBRcorrected (mean r = 0.93); see Fig. 1e, f and Table 2. Mean tumour-scale values of k 3, v d , TBR, and TBRcorrected were identical to the values shown in Table 1 to within a few percent.
Table 2 Correlation matrix of Pearson correlation coefficients between the tumour-scale parameters across the twenty tumours studied using the 2-h data sets.
Relationship between v d and k 3
In all patients, voxel-scale k 3 values were found to depend strongly on v d (population average of voxel-scale r values = −0.59; see Table 1), with k 3 increasing as v d decreases. Figures 2a and d show two representative examples. Parametric maps of a transverse slice in each of these patients are shown in Fig. 3. Tumour-scale correlations between v d and k 3 are reduced (r = −0.34) but still substantial; see Table 2.
Fig. 2 Dependence of the trapping rate on tracer equilibration and binding.
a and d show voxel-scale trapping rate values versus voxel-scale v d values for patients 1 and 2, respectively. b and e show the corresponding equilibration rates, calculated from Eq. (19); the solid lines indicate fits to Eq. (18), yielding k eq = 0.45 h −1 for pt. 1 and k eq = 0.52 h −1 for pt. 2. c and f show the voxel-scale binding rates k b calculated from Eq. (17) using the K eq values shown in b and e.
Fig. 3 Examples of negative correlations between k 3 and v d and discordance between k 3 and TBR in parametric maps for patients 1 and 2. From left to right: pre-PET transverse CT scan; FAZA-PET TBR at 1 h for the tumour contour shown on the CT; TBR at 2 h; k 3 map; v d map. Strong negative correlations between k 3 and v d are evident. In both tumour slices, there are regions where v d is well below unity and variations in k 3 and TBR are discordant.
To account for the unexpected correlations between k 3 and v d , we propose a model (shown schematically in Fig. 4) in which an imaged voxel is composed of two tissue types: one in which tracer reaches diffusive equilibrium rapidly (with concentration C (r)), and one in which it reaches equilibrium slowly (with concentration C (s)): $$ C_{d}(t) =v_{s} C^{(s)}_{d} + (1-v_{s})C^{(r)}_{d}(t). $$
Fig. 4 Schematic of our partitioning model. From left to right: at t=0 (left panel), tracer (gray-filled regions) is only in the capillary; for \(k^{-1}_{1}\ll t\ll k_{\text {eq}}^{-1}\) (middle panel), tracer fills the rapid-equilibration regions and begins to bind where hypoxia arises; for \(t\gtrsim k_{\text {eq}}^{-1}\) (right panel), tracer fills all regions, including the slow-equilibration regions that occupy a volume fraction v s of the region of interest.
Here, v s represents the voxel volume fraction in which tracer is slow to equilibrate. As noted in the Introduction, tracer will equilibrate slowly in mucinous and necrotic tissue owing to the slow diffusivity and long diffusive distances, respectively. Having defined the above sub-compartments, the distributed-parameter compartment model [22] that describes the effects of having regions of slow equilibration is $$ \begin{aligned} \frac{d C^{(r)}_{d}(t)}{dt} &= \frac{k_{1}}{1-v_{s}}\left[C_{\text{In}}(t)-C^{(r)}_{d}(t)\right] \\ & \quad -\left(k_{\mathrm{b}}+\frac{k_{\text{eq}} v_{s}}{1-v_{s}}\right)C^{(r)}_{d}(t) + \frac{k_{\text{eq}} v_{s}}{1-v_{s}}C^{(s)}_{d}(t), \end{aligned} $$ $$ \frac{dC^{(s)}_{d}(t)}{dt} = k_{\text{eq}} \left[C^{(r)}_{d}(t)-C^{(s)}_{d}(t)\right], $$ $$ \frac{dC_{b}(t)}{dt} = k_{\mathrm{b}} C^{(r)}_{d}(t). $$ The factors of 1−v s and v s here ensure detailed balance amongst the compartments. k b is the binding rate due to hypoxia and k eq represents the equilibration rate in the regions of slow equilibration. Recall from the Introduction that we expect it to be on the order of 0.1–1 h −1 when equilibration is driven by diffusion; see Eq. (2). In writing Eq. (14), it has been assumed that tracer does not bind inside regions of slow equilibration since, e.g., necrotic cells and extracellular mucous deposits do not bind hypoxia-PET nitroimidazole tracers [12]. At times \(k^{-1}_{1}\lesssim t\ll k_{\text {eq}}^{-1}\), after diffusive equilibration is achieved in the rapidly equilibrating regions \(\left [C^{(r)}_{d}(t)\simeq C_{\text {In}}(t)\right ]\) but not yet in the slow-equilibrating regions, the tissue-to-blood ratio is readily obtained by integrating Eqs.
(12)–(14): $$ {\begin{aligned} \text{TBR}(t) & \simeq \;v_{b} + (1-v_{b})(1-v_{s})\\ & \quad + \left(k_{\mathrm{b}} +\frac{k_{\text{eq}} v_{s}}{1-v_{s}}\right) (1-v_{b})(1-v_{s})\frac{\int^{t}_{0} d\tau \; C^{(r)}_{d}(\tau)}{C_{\text{In}}(t)}. \end{aligned}} $$ In arriving at this result, we have neglected back-flux from the slow-diffusion region, dropping the contribution arising from \(C^{(s)}_{d}\) in Eq. (12). This is valid as long as \(t\lesssim k_{\text {eq}}^{-1}\). Since \(C^{(r)}_{d}(t)\to C_{\text {In}}(t)\) for \(t\gtrsim k^{-1}_{1}\), Eq. (15) is identical to the Patlak result Eq. (8), with $$ v_{s} = 1-v_{d}. $$ $$ k_{3} = k_{\mathrm{b}} + \frac{k_{\text{eq}} \left(1-v_{d}\right)}{v_{d}}\equiv k_{\mathrm{b}} + K_{\text{eq}}(v_{d}), $$ where we have defined $$ K_{\text{eq}}(v_{d}) \equiv k_{\text{eq}} (1-v_{d})/v_{d}. $$ Equations (16) and (17) are our main theoretical results. They show that the distribution volume v d defined in Eq. (7) is the volume fraction of tissue in which tracer rapidly equilibrates and that the standard two-tissue compartment model trapping rate in general represents the sum of the rate of binding due to hypoxia and the equilibration rate. In turn, this means that it is not possible to distinguish binding from equilibration from the shape of the time-activity curves alone. To distinguish k b and K eq in k 3, voxel-scale k 3 values were arranged into bins based on distribution volume values. Because there will always be a cohort of normoxic voxels in a tumour for which k b = 0 (unless the hypoxic fraction is unity; simple Poissonian statistics dictates as much), it is assumed that the lowest M values of k 3 in these bins represent equilibration: $$ K_{\text{eq}} \left[(v_{d})_{i}\right] = \frac{1}{M}\sum_{j=1}^{M} \text{min}\left[\{k_{3}\}_{(v_{d})_{i}}\right]_{j}. $$ Equation (19) is strictly valid in the limit where the variance in k eq values is much smaller than the variance in k b values (so that the two distributions can be distinguished). The choice of M is dictated by their relative sizes: $$ \frac{M}{N_{b}} = \frac{\left(\left.\sigma_{k_{\text{eq}}}\right/k_{\text{eq}}\right)}{\sqrt{\left(\left.\sigma_{k_{\text{eq}}}\right/k_{\text{eq}}\right)^{2} + \left(\left.\sigma_{k_{\mathrm{b}}}\right/k_{\mathrm{b}}\right)^{2}}}, $$ where N b is the total number of values within each bin and σ X and X denote the standard deviation and mean values of X = k b or k eq. Assuming that the relative variance \(\left (\left.\sigma _{k_{\mathrm {b}}}\right /k_{\mathrm {b}}\right)\) is equal to that for the oxygen partial pressure \(P_{O_{2}}\) (the case, e.g., when the two are related by a Michaelis-Menten-type relation [12]), the variance in k b is expected to be large, based on the broad distribution of \(P_{O_{2}}\) levels in tumours: \(\left (\left.\sigma _{P_{O_{2}}}\right /P_{O_{2}}\right)\gtrsim 1\) [23]. In contrast, the relative variance in k eq—reflecting that of the size l of the regions in which tracer is slow to equilibrate—is small. This was estimated by calculating the variance in the minimum k 3 value in each bin with respect to a v d -dependent average (see, e.g., the curve fits in Fig. 2). Across our twenty patients, we found an average value \(\left (\left.\sigma _{k_{\text {eq}}}\right /k_{\text {eq}}\right)\sim 0.4\). As a compromise between having enough voxels in each bin to ensure valid statistics and few enough to retain sufficient resolution in v d -space to carry out these curve fits, bins were chosen to contain ten voxels. Hence, we chose M = 0.4 × 10 = 4.
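The binning procedure of Eq. (19) is straightforward to implement; the following is a minimal Python sketch, assuming voxel-wise arrays vd and k3 from the compartment-model fits (the array and function names are illustrative):

import numpy as np

def estimate_keq_bins(vd, k3, bin_size=10, m=4):
    # Sort voxels by distribution volume and form bins of bin_size voxels.
    order = np.argsort(vd)
    vd_sorted, k3_sorted = vd[order], k3[order]
    bin_vd, bin_keq = [], []
    for start in range(0, len(vd_sorted) - bin_size + 1, bin_size):
        chunk = k3_sorted[start:start + bin_size]
        # Eq. (19): average the m smallest k3 values in the bin.
        bin_keq.append(np.sort(chunk)[:m].mean())
        bin_vd.append(vd_sorted[start:start + bin_size].mean())
    return np.array(bin_vd), np.array(bin_keq)

# The binned values can then be fitted to K_eq(v_d) = k_eq (1 - v_d)/v_d
# [Eq. (18)], and the voxel-wise binding rate recovered as
# k_b = k_3 - K_eq(v_d) [Eq. (17)].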
A sensitivity analysis of the predicted equilibration rates and the choice of M is presented in Online Resource 2 (Additional file 2). An example of this algorithm is shown for two patients in Figs. 2 and 5. Voxel-scale values of K eq in each of these bins as determined by Eq. (19) are plotted in Fig. 2b and e. The solid lines in this figure are fits to K eq(v d )=k eq(1−v d )/v d . (The poor fit in Fig. 2e for v d ≲0.6 may be due to a percolation effect: for distribution volumes less than ∼0.65, regions of slow equilibration begin to overlap [24] and v d will become dependent on the mean size l of these regions. Hence, from Eq. (2), k eq will also begin to depend on v d .) Also shown in Fig. 2c and f are the voxel-scale binding rates determined from Eqs. (17) and (19). Figure 5 shows parametric maps of k 3, k eq and k b for the same tumour slices shown in Fig. 3.
Fig. 5 Parametric maps for an axial tumour slice from patients 1 (left) and 2 (right) showing the spatial distribution of binding and equilibration rates.
The correlation matrix between derived voxel-scale parameters from our model is shown in Table 3 along with population averages of these parameters. The relative sizes of the correlations between k 3 and K eq (r = 0.57) and k b (r = 0.86) are measures of how much equilibration and binding were found to contribute to the net trapping rate k 3. Most of the v d dependence of k 3 is contained in K eq, as evidenced by the strong correlations between v d and K eq (r = −0.73) but comparatively weak correlations between k b and v d (r = −0.27). Not shown are correlations between these quantities and the vascular influx rate k 1 since these were small (|r| < 0.15) for all cases.
Table 3 Top: correlation matrix of Pearson correlation coefficients between the mean voxel-scale parameters across the twenty tumours studied using the 2-h data sets. Bottom: population averages of the corresponding voxel-scale rate coefficients; values are shown in units of h −1. Standard deviations of mean values across patients are indicated in parentheses. Also shown is the population average k eq value, which was calculated from fits to data from all voxels in each tumour, as described in the text.
The v d -dependence of k 3 in our model is a consequence only of mass conservation and the assumption that there exists a compartment in which tracer is slow to reach diffusive equilibrium. It does not depend on a specific microscopic model for equilibration. We tested the prediction given by Eq. (17) by fitting the binned K eq values to a function of the form K eq(v d ,γ)=k eq[(1−v d )/v d ]^γ to determine how close γ was to its predicted value of unity. Averaging over all tumours, we found γ = (0.9±0.4), with the error given by the standard deviation of values across all tumours. This confirms that our model, in which tracer equilibrates slowly in a fraction 1−v d of tissue, is consistent with our data. The mean equilibration rate derived from these fits was k eq = 0.44 h −1 (standard deviation of 0.29 h −1 across all patients), corresponding to an equilibration time of 1/k eq ∼ 2.3 h.
Discussion
It is well-appreciated that the uptake of hypoxia-sensitive PET tracers is dependent on tissue transport properties as well as hypoxia [13, 14, 17, 18, 25]. In principle, dynamic PET modeling corrects for transport properties such as slow tissue diffusivity that can impede the uptake of tracer and reduce sensitivity to hypoxia when such features are co-localized with hypoxia in PET voxels.
This is especially problematic since PET voxels are typically large enough [∼(4 mm) 3] to include diverse cell populations, with widely varying pathology [26]. The quantity of primary interest in a compartment model analysis of dynamic PET imaging is the trapping rate k 3, commonly believed to be sensitive to hypoxia via the underlying binding kinetics [12–14]. Static PET imaging is more feasible clinically, however, and it is often assumed that one can adopt static imaging in place of kinetic imaging when some appropriate uptake metric–SUV for FDG-PET or TBR for hypoxia-PET–is well-correlated with k 3 [27, 28]. In this paper, we have investigated dynamic and static PET in 20 patients with pancreatic adenocarcinoma (PDAC) and found k 3 values to be only modestly correlated with TBR. Using Patlak's formula to analyze these correlations, we found that a highly variable distribution volume across patients was primarily responsible for the reduced correlations, consistent with recent findings of FMISO kinetics in head and neck tumours [25]. Correcting for the distribution volume, correlations were considerably stronger and the corrected tumour-to-blood ratio was increased (see Fig. 1). This shows that tracer uptake at 2 h in these patients is sensitive both to hypoxia and to tissue transport properties (distribution volume), with the result that variability in tissue transport properties reduces the sensitivity of static PET imaging to hypoxia. Figure 6 compares hypoxic fractions in the twenty tumours calculated using: (a) the fraction of voxels for which TBR > 1.2 and (b) the fraction of voxels for which k b > 0.2 h −1, a threshold chosen such that the two hypoxic fractions agree when transport effects are small (v d > 0.9). When transport effects are substantial (v d < 0.9), correlations between the two methods of calculating hypoxic fractions are greatly reduced (r goes from 0.92 to 0.68), with the TBR approach underreporting hypoxia on average.
Fig. 6 Impact of transport on calculation of hypoxic fraction. When v d > 0.9, hypoxic fractions calculated from TBR > 1.2 (HF) and k b > 0.2 h −1 (HF kin) are in substantial agreement. When v d < 0.9, correlations are greatly diminished (r = 0.68), with HF underestimating hypoxia.
At first glance, this would suggest that these tumours would benefit from dynamic PET imaging. The trapping rate was found to exhibit a strong dependence on the distribution volume, however, implying that k 3 describes both the binding rate due to hypoxia as well as the rate of equilibration. A model was developed to explain this in which the extravascular tissue space was divided into two regions, one in which tracer rapidly achieved diffusive equilibration and one in which it equilibrated slowly. The population-averaged equilibration rate k eq ≃ (0.44±0.29) h −1 in the latter region is consistent with our estimate in the Introduction of having either mucinous regions (on the order of tens to hundreds of microns in extent) where diffusivity is greatly slowed or micronecroses, smaller than a PET imaging voxel but larger than ∼500 μm across. The long equilibration time [1/k eq ∼ 2.3 h] implied by this result means that unbound tracer will not equilibrate until well after tracer injection, at times t ≫ 1/k eq. At this time, the concentration of tracer in both the slow- and fast-equilibrating regions will approach that in blood and the effect of the distribution volume on TBR will vanish.
Ideally, static hypoxia-PET imaging would be carried out when t ≫ 1/k eq in order to remove this sensitivity to transport. Unfortunately, the half-life of 18F is short and imaging times are typically restricted to be 3 h or less. (In our study, it was felt that accrual would be challenged by imaging patients past 2 h.) If slow equilibration were due to necroses, k 1–a measure of perfusion–would be correlated with k eq. No such correlations were found, leading us to hypothesize that mucous deposits comprised the regions of slow equilibration. Necroses are also rare in PDAC, whereas mucous gel-forming mucins are commonly over-expressed [9]. Amongst the twenty patients, the tumour volume fraction v d in which tracer equilibrated rapidly varied from 0.68 to 1, with an average value of 0.85. This implies mucinous region volume fractions ranging from 0 to 30%, with an average value of 15%. Tumours were resected in four patients and examined by a pathologist [I.S.]. Although not a sufficient number to be able to definitively attribute the reduced distribution volume to mucous, the patients with the smallest and largest distribution volumes of these four exhibited significant and negligible mucin expression, respectively; see Fig. 7.
Fig. 7 Resected histology slices from two patients (16 and 17 in Online Resource 1 (Additional file 1)), illustrating the hypothesized dependence of the distribution volume on mucin expression. The tumour on the left exhibits little mucin while that on the right exhibits abundant apical mucin. The average distribution volumes for these tumours are 0.92 and 0.76, respectively, representing above- and below-average levels. The black scale bars in the lower right-hand corners of these plots indicate a length of 200 μm; in comparison, a PET voxel is ∼4 mm across. Brown regions indicate staining for pimonidazole.
Our conclusion that equilibration is slow in parts of pancreatic tumours is not inconsistent with claims by us [21] and others [25] that tumour-scale equilibration rates are rapid. The characteristic equilibration rate in the fast-equilibrating regions can be approximated by k 1 which, even for the hypo-perfused PDAC tumours studied in this work, was fast compared to k b and k eq. The population average of the tumour-scale k 1 values was ∼0.3 min −1 [16]. Regions of slow equilibration occupy a relatively small fraction of the tumours and hence, the tumour-scale equilibration rate is not strongly affected by these. Although we have proposed a scheme to differentiate binding from equilibration, and hence, to quantify hypoxic status via the surrogate binding rate k b, the accuracy of this approach relies on the assumption that the variance in the equilibration rate is much smaller than the variance in the binding rate: \(\left (\left.\sigma _{k_{\text {eq}}}\right /k_{\text {eq}}\right)\ll \left (\left.\sigma _{k_{\mathrm {b}}}\right /k_{\mathrm {b}}\right)\). Only then can we attribute the lowest few k 3 values in each v d bin to K eq and not k b. The fact that the estimated \(\left (\left.\sigma _{k_{\text {eq}}}\right /k_{\text {eq}}\right)\) was only marginally smaller than \(\left (\left.\sigma _{k_{\mathrm {b}}}\right /k_{\mathrm {b}}\right)\) means that our analysis did not completely distinguish equilibration and binding. In effectively assuming that the variance in the equilibration rate was zero, our analysis erred on the side of underestimating the equilibration rate and hence of overestimating the binding rate k b.
At the same time, our scheme still represents an improvement over hypoxia quantification using k3, since k3 will always be larger than our estimated kb, which in turn is likely larger than the true kb. Full validation of our approach will rely on comparing our estimates of kb against oxygen levels measured using other methods, such as immunohistochemical staining of resected tumours. We plan to do this in the future.

Beyond hypoxia quantification, dynamic PET imaging reveals additional information about tumour physiology that may prove to be clinically important [13, 14, 25, 29]. In our case, we have found that the distribution volume of FAZA (and likely of all freely diffusible PET tracers) quantifies the amount of mucous material present in pancreatic tumours. Over-expression of the mucous gel-forming mucin MUC5AC in PDAC is prognostic for shorter survival time [30], greater metastatic potential [9, 31], and immune system avoidance [32]. We hypothesize that the distribution volume in other tumour sites will likewise provide complementary physiological information beyond hypoxic status.

A key question raised by this work is whether or not the tissue transport effects identified here confound hypoxia quantification using other hypoxia-PET tracers, such as FMISO, and in other tumour sites. The primary impediment to tracer equilibration is slow diffusivity. FAZA has been estimated to diffuse marginally faster than FMISO [7], and so the issues identified here should impact FMISO to a comparable degree. Indeed, effects similar to the ones reported here have arisen in FMISO imaging of pre-clinical tumour models [33], as well as in clinical pharmacokinetic studies of head and neck tumours [17, 25]. In all cases, a variable distribution volume diminished correlations between TBR and k3. [The fact that Ki = vd·k3, but not k3, was found to be well correlated with TBR in Ref. [33] can be understood from Eq. (8): Ki removes the variance in TBR arising from vd in the trapping term, but not in the first two terms on the right-hand side of that equation.] In recent work, Grkovski et al. discuss the important role of the distribution volume in static PET hypoxia quantification and also report significant negative correlations between k3 and vd [25]. The present work builds on these analyses by proposing a model in which k3 is sensitive both to hypoxia-induced binding and to diffusive equilibration of unbound tracer.

The uptake of hypoxia-sensitive PET tracers in pancreatic tumours depends in a significant way on both tissue transport properties and the presence of hypoxia. Both dynamic- and static-PET-based hypoxia surrogates, k3 and TBR, are affected by regions where diffusive equilibrium is achieved very slowly, over several hours. We have proposed a scheme to extract the hypoxia-sensitive tracer binding rate kb, together with the equilibration rate, from dynamic PET data, and we propose kb as a novel hypoxia biomarker. Our results are of relevance for all hypoxia-PET tracers and for any tumour site where transport of small-molecular-weight agents is challenged.

Fleming IN, Manavaki R, Blower PJ, West C, Williams KJ, Harris AL, Domarkas J, Lord S, Baldry C, Gilbert FJ. Imaging tumour hypoxia with positron emission tomography. Brit J Cancer. 2015; 112:238–50. Rajendran JG, Krohn KA. F-18 fluoromisonidazole for imaging tumor hypoxia: imaging the microenvironment for personalized cancer therapy. Semin Nucl Med. 2015; 45(2):151–62. Koh WJ, Rasey JS, Evans ML, Grierson JR, Lewellen TK, Graham MM, Krohn KA, Griffin TW.
Imaging of hypoxia in human tumors with [F-18] fluoromisonidazole. Int J Radiat Oncol Biol Phys. 1992; 22(2):199–212. Rajendran JG, Schwartz DL, O'Sullivan J, Peterson LM, Ng P, Scharnhorst J, Grierson JR, Krohn KA. Tumor hypoxia imaging with [F-18] fluoromisonidazole positron emission tomography in head and neck cancer. Clin Cancer Res. 2006; 12(18):5435–41. Muzi M, Peterson LM, O'Sullivan JN, Fink JR, Rajendran JG, McLaughlin LJ, Muzi JP, Mankoff DA, Krohn KA. 18F-Fluoromisonidazole Quantification of Hypoxia in Human Cancer Patients Using Image-Derived Blood Surrogate Tissue Reference Regions. J Nucl Med. 2015; 56(8):1223–8. Pruijn FB, Patel K, Hay MP, Wilson WR, Hicks KO. Prediction of tumour tissue diffusion coefficients of hypoxia-activated prodrugs from physicochemical parameters. Aust J Chem. 2008; 61:687–93. Wack LJ, Mönnich D, van Elmpt W, Zegers CML, Troost EGC, Zips D, Thorwath D. Comparison of [18F]-FMISO, [18F]-FAZA, and [18F]-HX4 for PET imaging of hypoxia—a simulation study. Acta Oncologica. 2015; 54:1370–7. Lau SK, Weiss LM, Chu PG. Differential expression of MUC1, MUC2, and MUC5AC in carcinomas of various sites: an immunohistochemical study. Am J Clin Pathol. 2004; 122(1):61–9. Kaur S, Kumar S, Momi N, Sasson AR, Batra SK. Mucins in pancreatic cancer and its microenvironment. Nat Rev Gastroenterol Hepatol. 2013; 10(10):607–20. Georgiades P, Pudney PD, Thornton DJ, Waigh TA. Particle tracking microrheology of purified gastrointestinal mucins. Biopolymers. 2014; 101(4):366–77. Runnsjö A, Dabkowska AP, Sparr E, Kocherbitov V, Arnebrant T, Engblom J. Diffusion through Pig Gastric Mucin: Effect of Relative Humidity. PLoS ONE. 2016; 11(6):e0157596. Casciari JJ, Graham MM, Rasey JS. A modeling approach for quantifying tumor hypoxia with [F-18]fluoromisonidazole PET time-activity data. Med Phys. 1995; 22:1127–39. Thorwarth D, Eschmann SM, Paulsen F, Alber M. A kinetic model for dynamic [18F]-Fmiso PET data to analyse tumour hypoxia. Phys Med Biol. 2005; 50:2209–24. Thorwarth D, Eschmann SM, Scheiderbauer J, Paulsen F, Alber M. Kinetic analysis of dynamic 18F-fluoromisonidazole PET correlates with radiation treatment outcome in head-and-neck cancer. BMC Cancer. 2005; 5:152. Wang W, Georgi J-C, Nehmeh SA, Narayanan M, Paulus T, Bal M, O'Donoghue J, Zanzonico PB, Schmidtlein CR, Lee NY, Humm JL. Evaluation of a compartmental model for estimating tumor hypoxia via FMISO dynamic PET imaging. Phys Med Biol. 2009; 54:3083–99. Metran-Nascente C, Yeung I, Vines DC, Metser U, Dhani DC, Green D, Milosevic M, Jaffray D, Hedley DW. Measurement of tumor hypoxia in patients with advanced pancreatic cancer based on 18F-fluoroazomycin arabinoside uptake. J Nucl Med. 2016; 57(3):361–6. Wang W, Lee NY, Georgi J-C, Narayanan M, Guillem J, Schöder H, Humm JL. Pharmacokinetic Analysis of Hypoxia 18F-Fluoromisonidazole Dynamic PET in Head and Neck Cancer. J Nucl Med. 2010; 51(1):37–45. Bartlett RM, Beattie BJ, Naryanan M, Georgi J-C, Chen Q, Carlin SD, Roble G, Zanzonico PB, Gonen M, O'Donoghue J, Fischer A, Humm JL. Image-Guided PO2 Probe Measurements Correlated with Parametric Images Derived from 18F-Fluoromisonidazole Small-Animal PET Data in Rats. J Nucl Med. 2012; 53(10):1608–15. Wang K, Georgi J-C, Zanzonico P, Narayanan M, Paulus T, Bal M, Wang W, Cai A, O'Donoghue J, Ling CC, Humm JL. Hypoxia Imaging of Rodent Xenografts with 18F-Fluoromisonidazole: Comparison of Dynamic and Static PET Imaging. Int J Med Physics Clin Eng Radiat Oncol. 2012; 1(3):95–104. Patlak CS, Blasberg RG, Fenstermacher JD.
Graphical evaluation of blood-to-brain transfer constants from multiple-time uptake data. J Cereb Blood Flow Metab. 1983; 3(1):1–7. Taylor E, Yeung I, Keller H, Wouters BG, Milosevic M, Hedley DW, Jaffray DW. Quantifying hypoxia in human cancers using static PET imaging. Phys Med Biol. 2016; 61:7957. Larson KB, Markham J, Raichle ME. Tracer-kinetic models for measuring cerebral blood flow using externally detected radiotracers. J Cereb Blood Flow Metab. 1987; 7(4):443–63. Nordsmark M, Bentzen SM, Overgaard J. Measurement of human tumour oxygenation status by a polarographic needle electrode. An analysis of inter- and intratumour heterogeneity. Acta Oncol. 1994; 33(4):383–9. Lorenz CD, Ziff RM. Precise determination of the critical percolation threshold for the three-dimensional "Swiss cheese" model using a growth algorithm. J Chem Phys. 2001; 114(8):3659–61. Grkovski M, Schöder H, Lee NY, Carlin SD, Beattie BT, Riaz N, Leeman JE, O'Donoghue JA, Humm JL. Multiparametric Imaging of Tumor Hypoxia and Perfusion with 18F-Fluoromisonidazole Dynamic PET in Head and Neck Cancer. J Nucl Med. 2017; 58:1072–80. Busk M, Horsman MR, Overgaard J. Resolution in PET hypoxia imaging: Voxel size matters. Acta Oncologica. 2008; 47(7):1201–10. Freedman NM, Sundaram SK, Kurdziel K, Carrasquillo JA, Whatley M, Carson JM, Sellers D, Libutti SK, Yang JC, Bacharach SL. Comparison of SUV and Patlak slope for monitoring of cancer therapy using serial PET scans. Eur J Nucl Med Mol Imaging. 2003; 30(1):46–53. Doot RK, Dunnwald LK, Schubert EK, Muzi M, Peterson LM, Kinahan PE, Kurland BF, Mankoff DA. Dynamic and static approaches to quantifying 18F-FDG uptake for measuring cancer response to therapy, including the effect of granulocyte CSF. J Nucl Med. 2007; 48(6):920–5. Grkovski M, Lee NY, Schöder H, Carlin SD, Beattie BT, Riaz N, Leeman JE, O'Donoghue JA, Humm JL. Monitoring early response to chemoradiotherapy with 18F-FMISO dynamic PET in head and neck cancer. Eur J Nucl Med Mol Imaging. 2017; 44(10):1682–91. Takikita M, et al. Associations between Selected Biomarkers and Prognosis in a Population-Based Pancreatic Cancer Tissue Microarray. Cancer Res. 2009; 69(7):2950–5. Yamazoe S, Tanaka H, Sawada T, Amano R, Yamada N, Ohira M, Hirakawa K. RNA interference suppression of mucin 5AC (MUC5AC) reduces the adhesive and invasive capacity of human pancreatic cancer cells. J Exp Clin Cancer Res. 2010; 29:53. Hoshi H, Sawada T, Uchida M, Saito H, Iijima H, Toda-Agetsuma M, Wada T, Yamazoe S, Tanaka H, Kimura K, Kakehashi A, Wei M, Hirakawa K, Wanibuchi H. Tumor-associated MUC5AC stimulates in vivo tumorigenicity of human pancreatic cancer. Int J Oncol. 2011; 38(3):619–27. Busk M, Munk OL, Jakobsen S, Wang T, Skals M, Steiniche T, Horsman MR, Overgaard J. Assessing hypoxia in animal tumor models based on pharmacokinetic analysis of dynamic FAZA PET. Acta Oncol. 2010; 49(7):922–33.

The authors thank Caryn Geady for assistance with some of the figures and Douglass Vines, Brandon Driscoll, and Tina Shek for useful discussions. This work was funded by a Terry Fox New Frontiers Program Grant, the Quantitative Imaging Network, Canadian Institutes for Health Research, and the Orey and Mary Fidani family chair in radiation physics.

Author affiliations: Princess Margaret Cancer Centre, University Health Network, Toronto, Canada (Edward Taylor, Jennifer Gottwald, Ivan Yeung, Harald Keller, Michael Milosevic, Neesha C. Dhani, David W. Hedley, David A. Jaffray); Techna Institute, University Health Network, Toronto, Canada (Edward Taylor, David A. Jaffray); Department of Medical Biophysics, University of Toronto, Toronto, Canada (Jennifer Gottwald, David W. Hedley, David A. Jaffray); Department of Radiation Oncology, University of Toronto, Toronto, Canada (Ivan Yeung, Harald Keller, Michael Milosevic, David A. Jaffray); Division of Medical Oncology and Hematology, Princess Margaret Cancer Centre, Toronto, Canada (Neesha C. Dhani, David W. Hedley); Department of Pathology, Hospital for Sick Children, Toronto, Canada (Iram Siddiqui); Institute for Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Canada (David A. Jaffray).

ET and JG carried out the compartment model analysis. ET, IY, HK, and MM developed the model used to analyze the data. NCD, DWH, and DAJ participated in the design of the study. IS carried out the histology analysis. All authors read and approved the final manuscript. Correspondence to Edward Taylor.

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The study protocol was approved by the University Health Network Research Ethics Board, and signed written informed consent was obtained from all individual participants included in the study.

Additional file 1: Table S1 (PDF 21 kb); supplemental information (PDF 100 kb).

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Taylor, E., Gottwald, J., Yeung, I. et al. Impact of tissue transport on PET hypoxia quantification in pancreatic tumours. EJNMMI Res 7, 101 (2017). https://doi.org/10.1186/s13550-017-0347-3

Keywords: hypoxia imaging; PET; tracer kinetic modelling.
\begin{definition}[Definition:Multigraph/Simple Edge] Let $G = \struct {V, E}$ be a multigraph. A '''simple edge''' is an edge $u v$ of $G$ which is the only edge of $G$ which is incident to both $u$ and $v$. Category:Definitions/Multigraphs \end{definition}
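As a quick computational restatement of this definition (assuming, purely for illustration, a multigraph stored as a list of unordered endpoint pairs; this representation is not part of the ProofWiki source), an edge u-v is simple exactly when it is the only edge joining u and v:

from collections import Counter

def is_simple_edge(edges, u, v):
    """Return True if u-v is a simple edge of the multigraph, i.e.
    exactly one edge in `edges` is incident to both u and v."""
    count = Counter(frozenset(e) for e in edges)
    return count[frozenset((u, v))] == 1

# Example: a-b appears twice (a multiple edge), b-c once (a simple edge).
edges = [("a", "b"), ("a", "b"), ("b", "c")]
print(is_simple_edge(edges, "b", "c"))  # True
print(is_simple_edge(edges, "a", "b"))  # False: two parallel edges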
De Novo Design of Potent Antimicrobial Peptides

V. Frecer (Department of Biological Sciences, Faculty of Science), B. Ho (Department of Microbiology, Faculty of Medicine), and J. L. Ding, National University of Singapore, Singapore, Republic of Singapore. For correspondence: [email protected]

Lipopolysaccharide (LPS), shed by gram-negative bacteria during infection and antimicrobial therapy, may lead to lethal endotoxic shock syndrome. A rational design strategy based on the presumed mechanism of antibacterial effect was adopted to design cationic antimicrobial peptides capable of binding to LPS through tandemly repeated sequences of alternating cationic and nonpolar residues. The peptides were designed to achieve enhanced antimicrobial potency, due to initial bacterial membrane binding, with a reduced risk of endotoxic shock. The designed peptides displayed binding affinities to LPS and lipid A (LA) in the low micromolar range and, by molecular modeling, were predicted to form amphipathic β-hairpin-like structures when they bind to LPS or LA. They also exhibited strong effects against gram-negative bacteria, with MICs in the nanomolar range, and low cytotoxic and hemolytic activities at concentrations significantly exceeding their MICs. Quantitative structure-activity relationship (QSAR) analysis of the peptide sequences and their antimicrobial, cytotoxic, and hemolytic activities revealed that site-directed substitutions of residues in the hydrophobic face of the amphipathic peptides with less lipophilic residues selectively decrease the hemolytic effect without significantly affecting the antimicrobial or cytotoxic activity. On the other hand, the antimicrobial effect can be enhanced by substitutions in the polar face with more polar residues, which increase the amphipathicity of the peptide. On the basis of the QSARs, new analogs that have strong antimicrobial effects but lack hemolytic activity can be proposed. The findings highlight the importance of peptide amphipathicity and allow a rational method to be proposed for dissociating the antimicrobial and hemolytic effects of cationic peptides with potent antimicrobial properties.

Nosocomial infections have drawn much attention from the medical and scientific communities. Several factors have posed tremendous burdens on the clinical management of such infections: (i) the ubiquity of gram-negative bacteria and the ease with which the infections they cause are acquired, (ii) the trend toward increasing numbers of antibiotic-resistant strains of gram-negative bacteria, and (iii) the lack of lasting, effective antibiotics. Therefore, the development of new and more effective antibiotics which use novel antimicrobial mechanisms of action is urgently needed.

Endotoxin (or lipopolysaccharide [LPS]), a constitutive component of the outer membrane of gram-negative bacteria, is shed during infection and antimicrobial therapy and/or when the bacteria lyse (35). The resulting endotoxemia is among the leading causes of death in the developed world (22). Consequently, neutralization of LPS or its endotoxic membrane anchor moiety, lipid A (LA) (31), by a novel class of antimicrobial peptides may help to eliminate the risk of development of endotoxic shock during or after treatment of bacterial infections (6).
Earlier studies on peptides derived from putative LPS-binding sequences of endotoxin-binding host defense proteins indicated that an LPS- and LA-binding motif may be formed by amphipathic sequences rich in cationic residues (9, 24). Recently, the concept of eradication of gram-negative bacteria by targeted disruption of LPS by cationic peptides was introduced (1, 6, 7). It has been proposed that even relatively short symmetric amphipathic peptide sequences containing cationic residues, such as HBHPHBH and HBHBHBH (where B is a cationic residue, H is a hydrophobic residue, and P is a polar residue), with a β-sheet conformation will bind to the bisphosphorylated glucosamine disaccharide head group of LA, primarily by ion-pair formation between anionic phosphates of LA and the cationic side chains (5). Antimicrobial peptides represent elements of innate and induced immune defenses against invading pathogens (1, 7, 20). They fold into a variety of secondary structures such as α-helical, β-sheet, cyclic, and hairpin loop peptides with one or more disulfide bridges and include the magainins, cecropins, defensins, lactoferricins, tachyplesins, protegrins, thanatin, and others (1, 2). Despite their diversity, most antimicrobial peptides share common features that include a net positive charge and an amphipathic character, which segregates hydrophilic and hydrophobic residues to opposite faces of the molecule (7, 20). Thus, antimicrobial peptides probably also share common mechanisms of bactericidal action. Although the precise mode of their action is not fully understood, it has been proposed that they target the bacterial membrane (1, 11, 20). The cationic peptides initially bind to the negatively charged LPS or LA of gram-negative bacteria (1, 2, 7, 11, 20). This initial binding leads to membrane permeation through (i) minor disruption of the phospholipid chain order and packing in the outer membrane, the "self-promoted uptake" (7); (ii) transmembrane channel formation via a "barrel-stave" or toroidal pore mechanism (10); or (iii) membrane destruction via a carpet-like mechanism (20), which may ultimately result in the killing of the bacteria. Evidence has accumulated to suggest that aggregation of amphipathic peptides on the bacterial membrane surface may be important for their antimicrobial activities (11, 20). Therefore, the design of novel antimicrobial peptides can be based on the similarities between the endotoxin-binding and antimicrobial cationic peptides, since both these effects require similar structural features of the peptides, namely, cationic and amphipathic characters. It can be expected that a strong interaction of the peptides with LPS or LA will promote their destructive action against the bacterial membrane and reduce the risk of development of endotoxemia. In fact, potent cyclic antimicrobial peptides selective for gram-negative bacteria on the basis of the β-stranded framework mimicking the putative LPS-binding sites of the LPS-binding protein family have been successfully developed (19). Certain antimicrobial peptides show affinities not only to bacteria but also to higher eukaryotic cells, even though the outer leaflet of healthy mammalian cells is composed predominantly of neutral (zwitterionic) phospholipids (11, 20). Structure-activity studies with these antibacterial peptides indicate that changes in the amphipathicity could be used to dissociate the antimicrobial activity from the hemolytic activity (13). 
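The alternating motif notation can be checked mechanically; the sketch below uses an assumed, illustrative assignment of amino acids to the B/H/P classes (the source does not fix this mapping beyond the residues it names) and tests whether a 7-mer matches HBHB(P)HBH:

CATIONIC    = set("KR")         # B residues (assumed assignment for illustration)
HYDROPHOBIC = set("AVLIFWM")    # H residues
POLAR       = set("STNQ")       # P residues

def classify(residue: str) -> str:
    if residue in CATIONIC:
        return "B"
    if residue in HYDROPHOBIC:
        return "H"
    if residue in POLAR:
        return "P"
    return "?"

def matches_binding_motif(seq: str) -> bool:
    """True if a 7-residue sequence matches HBHB(P)HBH: alternating
    hydrophobic positions, flanking basic residues, and a basic or
    polar residue at the centre."""
    if len(seq) != 7:
        return False
    pattern = [classify(r) for r in seq]
    return (pattern[0::2] == ["H"] * 4            # positions 1, 3, 5, 7 hydrophobic
            and pattern[1] == pattern[5] == "B"   # flanking basic residues
            and pattern[3] in ("B", "P"))         # central basic or polar residue

print(matches_binding_motif("VKVQVKV"))  # True: HBHPHBH-type motif
print(matches_binding_motif("VKVKVKV"))  # True: HBHBHBH variant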
Recently, it was shown that peptide cyclization increases the selectivity for bacteria by substantially reducing the hemolytic activity (21). We report here on the de novo design of synthetic cationic peptides with two LPS- and LA-binding sites which show structural similarity to cyclic β-sheet defense peptides, such as protegrin 1, thanatin, and androctonin (2). Systematic modifications of molecular properties, made by varying the amino acid residues of the amphipathic LPS- and LA-binding motifs (5) while preserving the size, symmetry, and amphipathic character of the peptides, are described. The purpose of these substitutions was to determine whether introduction of the binding affinity to endotoxin could enhance the antimicrobial activities of cyclic cationic peptides. Furthermore, we have investigated whether directed substitutions can selectively increase the antimicrobial potencies independently from the hemolytic activities of the peptides. Seven cationic peptides, each cyclicized via a disulfide bridge (Fig. 1), were synthesized and their biological properties were characterized. Quantitative structure-activity relationships (QSARs) were obtained by linking the experimental potencies to simple physicochemical molecular properties that can be easily derived from the peptide sequences. Implications for the molecular mechanism of the antibacterial effect are suggested, and practical guidelines for the design of nonhemolytic, highly active antimicrobial peptides are proposed.

Fig. 1 caption: Sequence and chemical structure of designed cyclic cationic peptide V1. The V peptides are composed of two symmetric amphipathic LPS- and LA-binding motifs, HBHB(P)HBH (see Table 1 for the sequences), that form two strands of a β-hairpin joined by a G9S10G11 turn on one side and a disulfide bond between C1 and C19 bridging the N- and C-terminal residues on the other side. Prediction of the secondary structure preferences of the V-peptide sequences within a protein environment was done with the PHD program (25). E, extended (β-sheet); L, loop structure; dots, random conformation.

Molecular modeling. Molecular models of peptides V1 to V7 were constructed so that they formed amphipathic β-hairpin-like structures containing two identical LPS- and LA-binding sequences connected by the GSG loop, with patterns of secondary structure predicted by use of the PHD algorithm (25). A disulfide bridge between the N- and C-terminal residues (Cys1 and Cys19, respectively) stabilized the 19-residue cyclic peptides (Fig. 1). The models were built with the Insight II program (Accelrys, San Diego, Calif.). The coordinates of the Escherichia coli LA were taken from Frecer et al. (5). The model of the reference cationic peptide antibiotic, polymyxin B (PmB), was obtained from nuclear magnetic resonance measurements of the PmB-LA complex (23).

Molecular mechanics. Molecular mechanics (MM) simulations with the V peptides, PmB, and LA and their 1:1 complexes were carried out with the Discover program (Accelrys) by using an all-atom representation. The class II consistent force field CFF91 (16) was used. The geometries of all molecular structures were extensively minimized by using conjugate gradient minimization (5).

Structural and dynamic properties of peptides. Initial minimized models of the cyclic peptides were placed in a periodic solvent box containing approximately 700 water molecules and were subjected to molecular dynamics (MD) simulation. The system was heated from 0 to 300 K over 5 ps and was equilibrated for 5 ps.
The MD simulation time step was set to 1 fs, and the integration was carried out with the Verlet algorithm. An ensemble of 100 configurations was collected over 100 ps at 1 configuration/ps for calculation of the mean values of the geometric parameters and estimation of the conformational entropy after energy minimization.

Docking of LA to peptides. A flexible induced-fit docking method based on MD simulation was used to dock the LA monomer to the V peptides and PmB, which allowed for the full flexibility of the ligand and the receptor (5). The docking search produced >100 stable configurations of the peptide-LA complex for each V peptide. The configuration with the lowest total energy and the lowest peptide-LA interaction energy (negative values) was used as the starting point for the droplet model MD simulation, in which the complexes were enclosed by three solvation shells and were thermally equilibrated (5).

Estimation of binding affinity. The relative affinity of reversible binding of solvated LA to the cationic peptides to form a peptide-LA complex (the computed binding affinity, ΔΔEcomp) was estimated (5) relative to the reference peptide PmB.

Lipophilicity and amphipathicity parameters. The peptide lipophilicity index (Πo/w) of each of the V peptides was estimated as the sum of the experimental lipophilicity parameters (πo/w) of all residues, defined for side-chain partitioning in the n-octanol-water system (3). The amphipathicity index (AI) was calculated as the sum of the side-chain parameters πo/w over the subset of residues containing the basic and polar amino acids [AI = Σi∈PF (πo/w)i, where i runs over the residues with odd sequential numbers; Fig. 1], which is referred to as the polar face (PF) of the cyclic V peptide. The subset containing the lipophilic H residues (residues with even sequential numbers) defines the nonpolar face of the V peptide.

Peptide synthesis. The antimicrobial V peptides were synthesized by Genemed Synthesis (South San Francisco, Calif.) by solid-phase synthesis and standard 9-fluorenylmethoxy carbonyl chemistry and were purified to >95% purity by reverse-phase high-pressure liquid chromatography. A cysteine derivative with an acetamidomethyl protecting group was used for disulfide bridge formation. The peptide composition and the efficiency of complete cyclization were confirmed by mass spectrometry.

SPR. The kinetics of the real-time interaction of the peptides with LA from E. coli F-583 (Sigma, St. Louis, Mo.) were determined from the surface plasmon resonance (SPR) measured with an HPA biosensor chip (BiaCore 2000; Biacore, Uppsala, Sweden). The peptide association and dissociation rates for 100 to 500 μM solutions of the V peptides or PmB (Medipharm, Bedford, Ohio) were determined in pyrogen-free water (Baxter, Morton Grove, Ill.) by the method of Tan et al. (32).

CD spectroscopy. The circular dichroism (CD) spectra of the peptides at 40 μM were recorded at 25°C in water and in the presence of small unilamellar vesicles (3:1 ratio of phosphatidylcholine and 0.75 nM LA), as described by Tan et al. (32).

Antibacterial activity test. MIC testing was performed as a modification (40) of the method proposed by the Hancock Laboratory (MIC Determination for Cationic Antimicrobial Peptides by Modified Micro Titer Broth Dilution Method). Briefly, 100 μl of 2 × 10⁵ to 7 × 10⁵ CFU of bacterial suspension per ml of Mueller-Hinton broth (Becton Dickinson, Sparks, Md.) was dispensed into sterile 0.2-ml polypropylene tubes.
Eleven microliters of serially diluted peptides in 0.01% acetic acid and 0.2% bovine serum albumin (Sigma), over a concentration range of 4 to 692 nM (0.01 to 1.25 μg/ml), was then added. The cultures were shaken at 37°C for 18 to 24 h. Viable cell counts were determined by the standard drop-count method (36). Cultures without the peptides were used as positive controls. Uninoculated Mueller-Hinton broth was used as a negative control. The tests were carried out in triplicate.

Hemolytic activity assay. Hemolytic activity was measured as the amount of hemoglobin released by lysis of human erythrocytes incubated for 1 h with serial twofold dilutions of the V peptides at concentrations ranging from 0.01 to 375 μg/ml in pyrogen-free saline at 37°C (28).

Cytotoxicity test. The cytotoxic activities of the V peptides were tested by measuring the bioreduction of Cell Titer 96 AQueous One Solution Reagent (Promega, Madison, Wis.) by healthy THP-1 human monocytes (2 × 10⁴ cells) incubated for 1 h at 37°C with serial twofold dilutions of the V peptides at concentrations ranging from 0.01 to 400 μg/ml in pyrogen-free saline (32).

Competitive inhibition of endotoxin induction by Limulus amebocyte lysate (LAL) assay. The efficiency of the V peptides in binding to LPS in solution was measured with a Kinetic-QCL kit (BioWhittaker, Walkersville, Md.) at 37°C in the presence of 5 endotoxin units (EU) of LPS per ml and V peptides at concentrations ranging from 0.1 to 3.9 μM (32). The LPS-binding curves were used to determine the Kds of the peptide-LPS complexes and the number of LPS molecules that bound to a single V-peptide moiety.

Design of peptides. We have designed a series of cyclic cationic peptides for which a high affinity of binding to LA was predicted from molecular modeling. The peptides were composed of two identical symmetric amphipathic LPS- and LA-binding motifs containing seven alternating H and B or P residues, with the general sequence Ac-C-HBHB(P)HBH-GSG-HBHB(P)HBH-C-NH2 (Fig. 1), where Ac is an acetyl group. The actual sequences of the binding motifs are given in Table 1. Each cyclic peptide included a GSG loop sequence, which enabled enhancement of the LPS- and LA-binding affinities of the tandemly repeated endotoxin-binding motifs, and a Cys1-Cys19 disulfide bridge linking the terminal residues. To achieve a high level of antimicrobial activity and selectivity toward bacteria instead of eukaryotic cells, the peptides were optimized by adjusting their molecular properties. In peptides V1 to V7, the molecular charges, amphipathicities, and lipophilicities have been modulated by varying the cationic (polar) amino acid residues in the center of the binding motifs, where B(P) is Lys or Arg (Ser or Gln), and the hydrophobic residues, where H is Ala, Val, Phe, or Trp, which preserved the symmetries, sizes, and amphipathic characters of the peptides with alternating polar and nonpolar residues (Table 1). Lysine residues were previously shown to contribute the most toward the high affinity to LA when they were placed at the flanking basic residue position of the HBHB(P)HBH motif with a β-sheet conformation (5).

Table 1: Molecular properties of cyclic V peptides with variable symmetric cationic LPS- and LA-binding motifs.

Computational prediction of peptide flexibility. MD simulations showed that the backbone conformations of free V peptides evolved from the initial β-hairpin with defined patterns of secondary structure into flexible random conformations (data not shown).
The φ and ψ backbone torsion angles of the central B or P residues of the LPS- and LA-binding motifs (residues 5 and 15), which characterize the local secondary structure, drifted to random conformations, and their mean values fluctuated during the simulation with similar high standard deviations of up to ±100°. Two interatomic distances between the Cα carbons of residues 5 to 15 and 1 to 10 (Fig. 1), which describe the overall shape of the peptide backbone, evolved to time-averaged values corresponding approximately to the circular shapes of the solvated peptides and fluctuated with similar standard deviations of up to ±5 Å. The patterns of the molecular shape fluctuations and torsional flexibility indicate high degrees of flexibility of the free V peptides in solution. Therefore, we can assume that the entropic contributions to the receptor binding affinity associated with the internal degrees of freedom of the V peptides will also be similar.

CD spectroscopy. The CD spectra provide only low-resolution information. Therefore, interpretation in terms of the V-peptide secondary structure is problematic (37), although differences between the V1 to V4 peptides and the V6 and V7 peptides, with the last two peptides containing aromatic H residues, are evident (Fig. 2A). Nevertheless, comparison of the CD spectra measured in water (Fig. 2A) and in the presence of small unilamellar vesicles that mimic the bacterial membrane surface (18) (Fig. 2B) indicates that the peptides underwent conformational transitions in the vicinity of the heterogeneous solvent-membrane interface. This observation is in agreement with the findings from the MD simulations, which indicated a high degree of flexibility of the free peptide backbones in water.

Fig. 2 caption: CD spectra of cationic V peptides. (A) Spectra of peptides V1 to V7 recorded in water; (B) spectra of peptides V1 to V7 recorded in the presence of phosphatidylcholine and E. coli LA (molar ratio, 3:1), which form small lipidic vesicles that mimic the surface of a bacterial membrane.

Mode of LPS binding to V peptides. The binding of LPS to V1 and V2, which represent peptides with two different types of binding motifs, was measured by the competitive inhibition of LPS-induced binding by the LAL assay (32). The number of LPS- and LA-binding sites (n) on the V peptides during the association peptide + nLPS ↔ peptide-LPSn was determined from the plot of the average number of bound LPS molecules per peptide molecule (v) as the asymptote v = 1 is approached by the binding curve (v versus LPS concentration) at high ligand concentrations (Fig. 3). The binding curves for V1 and V2 provided estimates of the Kd values of a 1:1 peptide-LPS complex (at v = 0.5) as 1.8 and 2.0 μM, respectively.

Fig. 3 caption: Typical curves for binding of LPS to V peptides, represented by the V1 and V2 peptides. The peptide-LPS association is characterized by curves of the average number of bound LPS molecules (n) per single peptide molecule (v), where v = n[peptide-LPSn]/([LPS] + n[peptide-LPSn]), plotted against the ligand concentration ([LPS]). These curves allow estimation of the number of binding sites per peptide when the asymptote of v is equal to 1. The Kd of the peptide-LPSn complex was read at v equal to 0.5.

Hill's plot showed slopes of n equal to 0.995 and 1.205 for the V1 and V2 peptides, respectively, indicating the presence of a single noncooperative LPS- and LA-binding site on each peptide molecule (Fig. 4).
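A minimal sketch of the Hill analysis just described, assuming paired (v, [LPS]) measurements (the data points below are invented for illustration, not taken from the paper): regressing ln[v/(1 − v)] on ln[LPS] gives the Hill coefficient as the slope and the Kd from the intercept.

import numpy as np

# Made-up (v, [LPS]) pairs for illustration; v is the average number of
# bound LPS molecules per peptide, [LPS] in micromolar.
lps = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # uM
v   = np.array([0.21, 0.35, 0.52, 0.69, 0.81])

# Hill plot for a single class of sites: ln[v/(1 - v)] = n*ln[LPS] - ln(Kd).
y = np.log(v / (1.0 - v))
x = np.log(lps)
n, intercept = np.polyfit(x, y, 1)            # slope = Hill coefficient n
kd = np.exp(-intercept)                       # uM, since intercept = -ln(Kd)

print(f"Hill coefficient n = {n:.2f}, Kd = {kd:.2f} uM")
# n close to 1 indicates a single noncooperative binding site, as found
# for V1 (n = 0.995) and V2 (n = 1.205) in the text.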
This observation suggests that V peptides form complexes with LPS or LA with a 1:1 stoichiometry.

Fig. 4 caption: Hill's plot for LPS binding to V1 and V2 peptides. The number of LPS-binding sites on each peptide molecule (Hill's coefficient, n) was derived from the slope of the plot ln[v/(n − v)] = n ln[LPS] − ln Kd (see legend to Fig. 3). Linear regression (dashed lines) supplied slopes n of 0.995 and 1.205 for the V1 and V2 peptides, with correlation coefficients of 0.999 and 0.992, respectively (in theory, n is equal to 1 for a single binding site).

Affinities of binding to LA. A representative SPR sensorgram displaying the interaction of the cationic V1 peptide with the anionic LA monolayer coated on the HPA biosensor chip (Fig. 5) shows a concentration-dependent increase in response units upon peptide binding. These changes corresponded to average rate constants of peptide-LA association (k1 = 8.25 × 10² M⁻¹ s⁻¹) and dissociation (k−1 = 6.47 × 10⁻⁴ s⁻¹), which define a Kd of 7.8 × 10⁻⁷ M. Thus, the cyclic V peptides exhibited micromolar and submicromolar Kds for the peptide-LA complexes, comparable to that of the reference antiendotoxin agent, PmB, a cyclic cationic peptide, which was observed to bind to the E. coli LA with a Kd of 7.1 × 10⁻⁷ M (Table 1) (27, 30).

Fig. 5 caption: Representative SPR sensorgram of V peptides. Sensorgrams indicate the association and dissociation phases of V1 peptide binding to an LA monolayer immobilized on an HPA chip. The fits of the rate constants to the association and dissociation curves yielded a Kd of 0.78 μM. The dissociation rates may be somewhat underestimated due to ligand rebinding, which is a common feature of SPR experiments (33).

Computational prediction of affinities of binding to LA. To gain more detailed structural information about the nature of the interactions of the V peptides with the LA moiety, the 1:1 peptide-LA complexes were modeled. The two identical LPS- and LA-binding sequences [HBHB(P)HBH] separated by the GSG turn formed a single binding site. The MD simulations have shown that the bound conformations of V1 to V7 resemble a β-hairpin-like structure induced by the proximity of the amphipathic LA moiety. The bound peptides displayed a distinct polar face, formed mainly by the side chains of the B and P residues, and a nonpolar face, constituted by the H residues (Fig. 6). In the complexes, the cationic residues of the binding motifs formed ion pairs with the anionic phosphates of LA. The amide and ester linkages of LA formed a network of hydrogen bonds with the peptide backbone, while the fatty acid chains of LA established hydrophobic contacts with the chains of the H residues (Fig. 6). The relative binding affinities of the V peptides to LA (ΔΔEcomp; Table 1) were calculated with reference to the binding affinity of the antiendotoxin peptide PmB.

Fig. 6 caption: Computer model of the peptide-LA complex derived by MD-assisted docking in water. (A) In the model, LA is attached to the dual LPS- and LA-binding sequence of peptide V4. The ribbon shows the backbone conformation of the V4 peptide (Ac-C-VKVQVKV-GSG-VKVQVKV-C-NH2), which acquired a β-hairpin-like structure in the 1:1 complex with the LA monomer in water. The β-hairpin supersecondary structure of V4 was induced by the amphipathic LA counterpart. Residue side chains and LA are shown in stick representation. Hydrophobic Val residues (yellow) interact primarily with fatty acid chains of LA (yellow).
Cationic Lys residues (blue) form ion pairs with the anionic phosphate groups (red) of the LA head group (purple). Hydrogen atoms were omitted for clarity. (B) Side view of the V4-LA complex. (C) The microscopic droplet discrete solvation model used in the MD-assisted docking of the LA molecule to V4 and in the thermal averaging of the V4-LA complex contains three layers of water molecules surrounding the complex (four solvation shells with about 2,000 H2O molecules, shown in stick representation).

Within the homogeneous subgroup of the V1 to V4 peptides, which contained conserved hydrophobic residues (all H residues were Val), peptides V2 and V4, which had lower molecular charges (4 e, where e is the charge of one electron), showed higher ΔΔEcomp values (weaker binding). On the other hand, in the V1, V5, V6, and V7 peptide subgroup, which had identical polar faces (all B residues were Lys) and a variable hydrophobic face (where H was Ala, Val, Phe, or Trp), the presence of smaller, less hydrophobic residues (Ala or Val) resulted in stronger binding than when bulkier aromatic (Phe or Trp) residues were present.

Antimicrobial activity. The V peptides displayed a wide range of antibacterial potencies against five gram-negative bacterial species, with active concentrations ranging from 4 to ≥692 nM (Table 2). Most of the peptides showed specificity for individual gram-negative bacterial species, resulting in up to 138-fold differences in the MICs of the same derivative. The MIC of the reference antimicrobial peptide, PmB, for Pseudomonas aeruginosa was determined to be 830 nM (1 μg/ml). Reports in the literature give potencies against P. aeruginosa ranging from 1 to 100 μg/ml (27, 36).

Table 2: Antimicrobial, cytotoxic, and hemolytic activities of cyclic cationic V peptides.

It is noteworthy that the MICs of peptides V1 to V7 (Table 2) were up to 208-fold lower than that of the reference antibiotic PmB, which has an MIC of 830 nM (1 μg/ml). The most potent peptide, V4, displayed a broad spectrum of antibacterial activity, with the lowest MIC being 4 nM (10 ng/ml) (Table 2).

Hemolytic activity. The effective concentration that caused 50% erythrocyte lysis (EC50) exceeded the nanomolar range of the antimicrobial activity by 3 to 4 orders of magnitude (Table 2). At nanomolar concentrations, the V peptides showed no hemolytic activity. The V6 and V7 peptides, which contain aromatic Phe and Trp residues and which possess the highest experimental lipophilicity index, Πo/w (Table 1), also showed the highest degrees of hemolytic activity, while the less lipophilic V4 and V5 peptides exhibited the lowest activities. With an EC50 of 3.2 mM (6.5 mg/ml), the hemolytic activity of the most active peptide, V4, was almost three times lower than that of the reference antibiotic, PmB (27). The specificity index (SI), which describes the specificity of a V peptide for gram-negative bacteria over human erythrocytes, was calculated as (EC50 for hemolysis)/(MIC for gram-negative bacteria). The majority of the peptides exhibited improved specificities and increased SIs compared to those of PmB (Table 2), for which the estimated SI is 190 (27). The SI of the most active peptide, V4, demonstrated a greater than 2,400-fold increase compared to that of PmB, which resulted from the higher level of antimicrobial activity and the lower level of hemolytic activity of the V4 peptide.
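Two of the derived quantities above are one-line calculations; the sketch reproduces them from the rate constants and potencies quoted in the text (no new data are introduced):

# Kd from the SPR rate constants for V1 (values quoted above).
k_on  = 8.25e2     # M^-1 s^-1, association rate constant k1
k_off = 6.47e-4    # s^-1, dissociation rate constant k-1
kd = k_off / k_on  # ~7.8e-7 M, matching the reported 0.78 uM

# Specificity index: hemolytic EC50 over antimicrobial MIC, both in molar units.
ec50_hemolysis_v4 = 3.2e-3   # M (3.2 mM for V4)
mic_v4 = 4e-9                # M (lowest MIC of V4, 4 nM)
si_v4 = ec50_hemolysis_v4 / mic_v4   # order 1e5, consistent with a >2,400-fold
                                     # improvement over PmB's SI of 190
print(f"Kd = {kd:.2e} M, SI(V4) ~ {si_v4:.1e}")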
Cytotoxic activity. The EC50s of the V peptides for cytotoxicity ranged from 40 μM to 5.7 mM, which exceeds their MICs by up to 3 orders of magnitude (Table 2). The peptide with the highest SI, V4, exhibited 0% cytotoxicity toward human monocytes at nanomolar concentrations. The V4 peptide caused 50% cell lysis only at an EC50 of 88 μM (180 μg/ml), which is about 4 orders of magnitude higher than the concentration needed for its antimicrobial effect.

QSARs. The hypothetical mechanism of bacterial membrane disruption by cationic amphipathic peptides may involve several molecular properties of the peptides that are related to the individual stages of the process: a net positive charge (attachment to anionic outer membrane constituents), amphipathicity (aggregation on the membrane surface), and lipophilicity (permeation into the membrane). It is likely that only those peptides which possess a balanced combination of these properties can achieve sufficient activity in each step of the concerted mechanism and attain higher levels of antimicrobial effects. Therefore, an analysis that compares these properties to the observed biological effects can provide valuable insight into the relationships between the sequences of antimicrobial peptides and their potencies. Simple properties which can be derived from the peptide sequences, such as the molecular charge (QM), AI, and Πo/w, were correlated to the mean antimicrobial effect against gram-negative bacteria by multivariate linear regression:

ln(MIC) = 9.49 QM + 10.17 AI − 0.05 Πo/w − 22.16    (1)

The three-parameter correlation equation showed promising statistical characteristics [n = 7 samples, correlation coefficient (R²) = 0.99, leave-one-out cross-validated correlation coefficient (Rxv²) = 0.98, standard error (σ) = 0.23, F-test statistic (F) = 69.85, confidence level (α) > 95%] and pointed to an important relationship. Namely, the t statistics of the correlation, which describe the contribution of each individual variable to the multivariate regression model, revealed that QM and AI represent the leading terms and were strongly related to the antimicrobial activities of the V peptides. Both the antimicrobial and the hemolytic activities of the cationic peptides involve cell membrane lysis and have been reported to depend on the same physicochemical properties (14, 21, 38). A similar multiparameter correlation equation was obtained for the hemolytic activities of the V peptides:

ln(EC50 for hemolysis) = −5.34 QM − 4.94 AI − 0.23 Πo/w + 31.87    (2)

The correlation parameters were as follows: n = 7, R² = 0.92, Rxv² = 0.84, σ = 0.76, F = 7.40, and α > 95%. In this case, the t statistics showed that the hemolytic effect was influenced primarily by molecular lipophilicity. For the cytotoxic effects of the V peptides, we obtained the correlation equation

ln(EC50 for cytotoxicity) = 8.98 QM + 11.74 AI − 0.04 Πo/w − 8.70    (3)

The correlation parameters (n = 7, R² = 0.98, Rxv² = 0.96, σ = 0.47, F = 27.03, α > 95%) and t statistics indicated that the cytotoxic activity was determined mainly by the peptide charge and amphipathicity. Only the combination of the three molecular properties (charge, amphipathicity, and lipophilicity) was found to correlate with the observed antimicrobial, hemolytic, and cytotoxic activities of the V peptides.
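The multivariate fits of equations 1 to 3 amount to ordinary least squares on three descriptors; the sketch below shows the procedure on placeholder values (the actual QM, AI, and Πo/w entries of Table 1 are not reproduced in this excerpt, so the inputs here are illustrative stand-ins, not the published data):

import numpy as np

# Placeholder descriptors for 7 peptides: one row of [QM, AI, Pi_o/w] each.
# These are NOT the Table 1 values, just shaped-alike stand-ins.
X = np.array([
    [6.0, -1.2, 10.5],
    [4.0, -0.8,  9.8],
    [6.0, -1.0, 11.2],
    [4.0, -1.4,  9.1],
    [6.0, -1.1,  7.9],
    [6.0, -0.9, 14.6],
    [6.0, -0.9, 16.3],
])
ln_mic = np.array([3.2, 4.1, 3.6, 1.4, 2.9, 3.8, 4.4])  # placeholder ln(MIC)

# Append a column of ones for the intercept and solve the least-squares
# problem ln(MIC) = a*QM + b*AI + c*Pi_o/w + d.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, ln_mic, rcond=None)
a, b, c, d = coef
print(f"ln(MIC) = {a:.2f}*QM + {b:.2f}*AI + {c:.2f}*Pi_o/w + {d:.2f}")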
Single-variate QSAR correlations of these properties to the biological effects could not be established, suggesting that the membrane disruption may involve a concerted process. However, the t test of the multivariate correlation equations (equations 1 to 3) revealed that the antimicrobial effect on bacteria was determined predominantly by the V-peptide charge and amphipathicity, i.e., by the number of cationic and polar residues forming the polar face of the V peptides and their distributions throughout the two symmetric amphipathic LPS- and LA-binding motifs. On the other hand, the hemolytic activity against eukaryotic cells was influenced mainly by the molecular lipophilicity, i.e., the sum of the lipophilicities of all residues, with the major contribution coming from the H residues, which form the nonpolar face of the V peptides that is predicted to acquire a β-hairpin-like structure in the peptide-LA complexes.

Binding affinities of V peptides to LA. The binding affinities of the V peptides computed for the LA monomer and the Kds derived from SPR measurements with an LA monolayer represent two diverse models for the prediction of the interaction of peptides with the outer membranes of gram-negative bacteria. The first, theoretical model simulates the binding of a single free LA molecule to the V peptide, which forms a 1:1 complex in a dilute aqueous solution. In the second, experimental model, the Kds from SPR measurements reflect the adsorption of the V peptide on the monolayer of sterically hindered LA immobilized on the biosensor chip. Thus, for example, the Kds determined from the SPR measurements do not take into account the contribution from hydrophobic interactions between the acyl chains of LA and the side chains of nonpolar residues of the peptides, in contrast to ΔΔEcomp. Nevertheless, the empirically measured Kds and the computed ΔΔEcomp represent complementary data (with limited correlation), and both the measured and the computed data confirmed that the cationic peptides designed exhibit a strong affinity for E. coli LA that is comparable to the affinity of the reference antiendotoxin agent, PmB (27, 30).

The flexible V peptides underwent conformational transitions and attained different secondary structures in the vicinity of the heterogeneous membrane-mimicking interface upon association with the amphipathic LA or phosphatidylcholine molecules. Similar observations were made for other linear and cyclic peptides (37). Molecular modeling suggested that, in the complexes with LA, the peptide backbone acquired an amphipathic β-hairpin structure with distinct polar and nonpolar faces induced by the amphipathic LA molecule (Fig. 6). The presence of a central hinge of the type GIG, similar to the GSG loop of each of the V peptides, was found to be responsible for the effective antibiotic activity of a composite 20-residue synthetic peptide, cecropin A (residues 1 to 8)-magainin 2 (residues 1 to 12), with a helix-hinge-helix structure (28), indicating that structural flexibility is important.

Antimicrobial activity. The MICs of five antibiotics for E. coli, Klebsiella pneumoniae, and P. aeruginosa were reported by Lancini et al. (15) to be 85 μg/ml for erythromycin, 6 μg/ml for tetracycline, 3 μg/ml for chloramphenicol, 1 μg/ml for rifampin, and 0.17 μg/ml for gentamicin. In comparison, the most potent peptide, V4, with an MIC of 10 ng/ml, displayed 17- to 8,500-fold greater potencies than the five established antibiotics with activities against these three bacterial species.
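The quoted fold-potency range is simple unit-consistent arithmetic; a sketch using the MICs listed above:

# MICs from the text, in a common unit (ug/ml).
mic_reference = {
    "erythromycin":    85.0,
    "tetracycline":     6.0,
    "chloramphenicol":  3.0,
    "rifampin":         1.0,
    "gentamicin":       0.17,
}
mic_v4 = 0.010  # ug/ml (10 ng/ml)

for drug, mic in mic_reference.items():
    print(f"{drug:16s}: V4 is {mic / mic_v4:7.0f}-fold more potent")
# Ratios run from 17-fold (gentamicin) to 8,500-fold (erythromycin),
# matching the range quoted in the text.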
Muhle and Tam (19) designed amphipathic cyclic cationic antimicrobial peptides similar to the V peptides, with sequences such as c(PACRCRAG-PARCRCAG), where "c" means cyclic, constrained by two cross-linking disulfide bonds. The MICs of these peptides were 20 nM for E. coli, which is comparable to that of V4. Also, the bactericidal/permeability-increasing protein, a 60-kDa LPS-binding protein, had a low MIC (less than 1 nM) for gram-negative bacteria (34). V4, with a molecular mass of 2 kDa, achieved a comparably strong antimicrobial effect.

The V peptides with the highest affinities to the E. coli LA did not necessarily show the strongest antimicrobial effects. This indicates that initial binding to the outer membranes of gram-negative bacteria, which differ in their LPS compositions, does not alone determine the overall antimicrobial effect. This observation runs counter to earlier assumptions that strong initial binding of cationic amphipathic peptides to outer membrane components may interfere with the antimicrobial activity (21, 38). In fact, a higher affinity to the outer bacterial membrane seems to be a favorable prerequisite for the antimicrobial effect, since the V peptides displayed low micromolar Kds for the LA of E. coli and antimicrobial activities at concentrations in the nanomolar range.

QSAR analysis. The validity of the QSAR model for the antimicrobial potencies of the V peptides against gram-negative bacteria was verified with the set of cyclic cationic amphipathic peptides designed by Muhle and Tam (19). These peptides displayed potent activities against gram-negative bacteria (E. coli and P. aeruginosa), with the lowest MIC starting at 160 nM. Our correlation equation for the MIC was able to reproduce the qualitative rank order of antimicrobial potencies at low salt concentrations for eight of nine peptides: R6F > R6A > R5Y ≅ R5W > R4A ≅ R4Y > K4A > K5L. Our correlation equation failed to correctly rank only one peptide, R5L (19).

Strategy for dissociation of antimicrobial and hemolytic effects. The QSAR correlation equations obtained for the V1 to V7 peptides permit quantitative prediction of the biological activities of analogs of the synthesized and tested V peptides, and they can help elucidate whether site-directed replacements of residues in the polar and nonpolar faces of the peptides (while preserving the molecular charge and overall symmetry of the LPS- and LA-binding motifs) may lead to independent variations in their antimicrobial and hemolytic potencies. The QSAR equation (equation 1) predicts rapid increases in the MICs with an increase in the molecular charge QM above 4 e when amphipathicity and hydrophobicity are kept constant at the levels of the most promising peptide, V4. Model analogs of V4 that share the polar HKHQHKH motif and differ only in the H residues (which retain the amphipathicity index of V4) are predicted to possess decreasing hemolytic activities with decreasing lipophilicities, while their predicted antimicrobial and cytotoxic activities remain basically unchanged. In other analogs of V4, the replacement of the two central Gln residues by more polar Asn residues is predicted to lead to significantly increased antimicrobial potencies, due to the increased amphipathicity, independent of the H residues (the predicted MICs are lower than that of V4).
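As a worked illustration of these predicted substitution effects, equation 1 can be evaluated directly for a hypothetical analog; the descriptor values and shifts below are assumed, chosen only to show the direction of the effect, and the resulting numbers are not published predictions:

import math

def predicted_mic(qm, ai, pi_ow):
    """Evaluate correlation equation 1:
    ln(MIC) = 9.49*QM + 10.17*AI - 0.05*Pi_o/w - 22.16."""
    return math.exp(9.49 * qm + 10.17 * ai - 0.05 * pi_ow - 22.16)

# Hypothetical parent vs. analog with Gln -> Asn in the polar face:
# same charge and hydrophobic face, slightly more negative AI (assumed shift).
mic_parent = predicted_mic(qm=4.0, ai=-1.20, pi_ow=10.0)
mic_analog = predicted_mic(qm=4.0, ai=-1.25, pi_ow=10.0)
print(f"parent: {mic_parent:.3g}, analog: {mic_analog:.3g} (units of the fit)")
# A more negative AI (greater amphipathicity) lowers the predicted MIC,
# while Pi_o/w barely moves it (coefficient -0.05), mirroring the text.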
Thus, the variations in the H residues forming the hydrophobic face of the analogs of V4 mainly affected the hemolytic activity, which was shown to depend strongly on Πo/w, but did not affect the predicted MICs of the analogs. Since the hemolytic activity correlates strongly with the Πo/w values of the V peptides and is less dependent on QM and AI, we may conclude that the lysis of human erythrocytes is probably caused by the enhanced penetration of more hydrophobic peptides into the phospholipid membranes of eukaryotic cells. Therefore, replacement of the H residues with less hydrophobic residues in the nonpolar face of the amphipathic analogs (with the polar face kept unchanged) seems to be a suitable design strategy to reduce the hemolytic activities of the V peptides while preserving their antimicrobial potencies. On the other hand, directed substitutions of the B and P residues in the polar faces of the V peptides with more polar residues, which increase the amphipathic character (more negative AI values) of the peptide while preserving the net charge, the symmetry of the binding motifs, and the composition of the hydrophobic face, are predicted to bring about a significant increase in antimicrobial potencies. Thus, a rational strategy of residue substitution directed to either the polar or the nonpolar face of the cyclic V peptides might result in the dissociation of the antimicrobial and hemolytic effects of cationic amphipathic V peptides.

Implications for the hypothetical mechanism of the antimicrobial effect. The amphipathicities of molecules have been related to their abilities to form aggregates and supramolecular complexes (12). Increased aggregation or the formation of assemblies containing amphipathic particles on the surfaces of bacterial membranes may therefore be responsible for their antimicrobial effects (11, 20). The aggregation of cationic peptides alone is less probable at neutral pH. However, the introduction of anionic amphipathic LA molecules into such assemblies may facilitate the formation of aggregates by cationic peptides. Recently, it was shown that PmB is able to sequester LA from the surfaces of liposomes containing dimyristoylphosphatidylethanolamine (33). Since the antimicrobial activities of the V peptides strongly increase with the increasing amphipathicities of the molecules at constant QM and Πo/w, we hypothesize that aggregates of V peptides, rather than individual molecules, exert the strong antimicrobial effects. The presence of antibacterial activity at concentrations in the low nanomolar range suggests that the V peptides possess an intricate mechanism of interaction with the bacterial membrane rather than a nonspecific mechanism of membrane disruption. In fact, it was reported (17) that antimicrobial peptides like magainin 2 and PGLa form peptide-lipid heterosupramolecular pores in phospholipid bilayers, which explains the observed synergism in their antimicrobial effects. Membrane pores formed by α-helical magainin as well as by β-hairpin-shaped protegrin 1 produced a diffraction pattern similar to that of the well-established transmembrane gramicidin channel (8, 10, 39). Similar conclusions relating the ability of cationic peptides to form aggregates to their antimicrobial potencies have recently been presented for dermaseptin S4 (4), protegrin 1 (26), and human defensins (29).
The results of the present study allow us to propose that amphipathicity and, possibly, the formation of pore-like aggregates involving the peptides and LA may be responsible for the strong antimicrobial potencies of the cationic V peptides. The general sequence Ac-C-HBHB(P)HBH-GSG-HBHB(P)HBH-C-NH2 represents an almost ideal amphipathic pattern for a cyclic β-hairpin peptide. It is therefore not surprising that the MICs of the most active V peptides are about 2 orders of magnitude lower than those of known cationic antimicrobial peptides.

In conclusion, the V peptides designed de novo harbor potencies unsurpassed by any known antibiotics of metabolite or peptide origin. The general sequence pattern Ac-C-HBHB(P)HBH-GSG-HBHB(P)HBH-C-NH2 may be adopted for the further rational design of a repertoire of antiendotoxin peptides from which candidates with potent antimicrobial activities and high SIs may be selected through empirical tests.

This work was supported by the National Science and Technology Board of Singapore (NSTB grant LS/99/004) and the Agency for Science, Technology and Research (A*STAR grant 03/1/21/17/227). We thank Y. H. Yau, P. M. L. Ng, and M. Paulini for technical assistance. V.F. is on leave from the Cancer Research Institute, Slovak Academy of Sciences, Bratislava, Slovakia.

Received 17 November 2003. Returned for modification 2 February 2004. Accepted 1 May 2004.

Boman, H. G. 1995. Peptide antibiotics and their role in innate immunity. Annu. Rev. Immunol. 13:61-92. Dimarcq, J.-L., P. Bulet, C. Hetru, and J. Hoffmann. 1998. Cysteine-rich antimicrobial peptides in invertebrates. Biopolymers 47:465-477. Fauchère, J.-L. 1996. Lipophilicity in peptide chemistry and peptide drug design, p. 355-373. In V. Pliška, B. Testa, and H. van der Waterbeemd (ed.), Lipophilicity in drug action and toxicity. VCH Publishers, Weinheim, Germany. Feder, R., A. Dagan, and A. Mor. 2000. Structure-activity relationship study of antimicrobial dermaseptin S4 showing the consequences of peptide oligomerization on selective cytotoxicity. J. Biol. Chem. 275:4230-4238. Frecer, V., B. Ho, and J. L. Ding. 2000. Interpretation of biological activity data of bacterial endotoxins by simple molecular models of mechanism of action. Eur. J. Biochem. 267:837-852. Gough, M., R. E. W. Hancock, and N. M. Kelly. 1996. Antiendotoxin activity of cationic peptide antimicrobial agents. Infect. Immun. 64:4922-4927. Hancock, R. E. W. 1999. Host defence (cationic) peptides: what is their future clinical potential? Drugs 57:469-473. He, K., S. J. Ludtke, D. L. Worcester, and H. W. Huang. 1995. Antimicrobial peptide pores in membranes detected by neutron in-plane scattering. Biochemistry 34:15614-15618. Hoess, A., S. Watson, G. R. Siber, and R. Liddington. 1993. Crystal structure of an endotoxin-neutralizing protein from horseshoe crab, Limulus anti-LPS factor, at 1.5 Å resolution. EMBO J. 12:3351-3356. Huang, H. W. 2000. Action of antimicrobial peptides: two-state model. Biochemistry 39:8347-8352. Hwang, P. M., and H. J. Vogel. 1998. Structure-function relationships of antimicrobial peptides. Biochem. Cell. Biol. 76:235-246. Israelachvili, J. N., S. Marcelja, and R. G. Horn. 1980. Physical principles of membrane organization. Q. Rev. Biophys. 13:121-200. Kondejewski, L. H., S. W. Farmer, D. S. Wishart, C. M. Kay, R. E. W. Hancock, and R. S. Hodges. 1996. Modulation of structure and antibacterial and haemolytic activity by ring size in cyclic gramicidin S analogs. J. Biol. Chem. 271:25261-25268. Kondejewski, L. H., M.
14. Kondejewski, L. H., M. Jelokhani-Niaraki, S. W. Farmer, B. Lix, C. M. Kay, B. D. Sykes, R. E. W. Hancock, and R. S. Hodges. 1999. Dissociation of antimicrobial and haemolytic activities in cyclic peptide diastereomers by systematic alterations in amphipathicity. J. Biol. Chem. 274:13181-13192.
15. Lancini, G., F. Parenti, and G. G. Gallo. 1995. Antibiotics: a multidisciplinary approach, p. 19. Plenum Press, New York, N.Y.
16. Maple, J. R., M.-J. Hwang, T. P. Stockfish, U. Dinur, M. Waldman, C. S. Ewing, and A. T. Hagler. 1994. Derivation of class II force fields. I. Methodology and quantum force field for the alkyl functional group and alkane molecules. J. Comput. Chem. 15:162-182.
17. Matsuzaki, K., Y. Mitani, K. Akada, O. Murase, S. Yoneyama, M. Zasloff, and K. Miyajima. 1998. Mechanism of synergism between antimicrobial peptides magainin 2 and PGLa. Biochemistry 37:15144-15153.
18. Matsuzaki, K., K. Sigishita, and K. Miyajima. 1999. Interactions of an antimicrobial peptide, magainin 2, with lipopolysaccharide-containing liposomes as a model for outer membranes of gram-negative bacteria. FEBS Lett. 449:221-224.
19. Muhle, S. A., and J. P. Tam. 2001. Design of gram-negative selective antimicrobial peptides. Biochemistry 40:5777-5785.
20. Oren, Z., and Y. Shai. 1998. Mode of action of linear amphipathic α-helical antimicrobial peptides. Biopolymers 47:451-463.
21. Oren, Z., and Y. Shai. 2000. Cyclization of a cytolytic amphipathic α-helical peptide and its diastereomer: effect on structure, interaction with model membranes, and biological function. Biochemistry 39:6103-6114.
22. Parillo, J. E. 1993. Pathogenic mechanisms of septic shock. N. Engl. J. Med. 328:1471-1477.
23. Pristovšek, P., and J. Kidrič. 1999. Solution structure of polymyxins B and E and effect of binding to lipopolysaccharide: an NMR and molecular modelling study. J. Med. Chem. 42:4604-4613.
24. Ried, C., C. Wahl, T. Miethke, G. Wellnhofer, C. Landgraf, J. Schneider-Mergener, and A. Hoess. 1996. High affinity endotoxin-binding and neutralizing peptides based on crystal structure of a recombinant Limulus anti-lipopolysaccharide factor. J. Biol. Chem. 271:28120-28127.
25. Rost, B., and C. Sander. 1994. Combining evolutionary information and neural networks to predict protein secondary structure. Proteins 19:55-72.
26. Roumestand, C., V. Louis, A. Aumelas, G. Grassy, B. Calas, and A. Chavanieu. 1998. Oligomerization of protegrin-1 in the presence of DPC micelles. A proton high-resolution NMR study. FEBS Lett. 421:263-267.
27. Rustici, A., M. Velucchi, R. Faggioni, M. Sironi, P. Ghezzi, S. Quataet, B. Green, and M. Porro. 1993. Molecular mapping and detoxification of the lipid A binding site by synthetic peptides. Science 259:361-365.
28. Shin, S. Y., J. H. Kang, S. Y. Jang, Y. Kim, K. L. Kim, and K.-S. Hahm. 2000. Effects of the hinge region of cecropin A(1-8)-magainin 2(1-12), a synthetic antimicrobial peptide, on liposomes, bacterial and tumor cells. Biochim. Biophys. Acta 1463:209-218.
29. Skalicky, J. J., M. E. Selsted, and A. Pardi. 1994. Structure and dynamics of the neutrophil defensins NP-2, NP-5, and HNP-1: NMR studies of amide hydrogen exchange kinetics. Proteins 20:52-67.
30. Srimal, S., N. Surolia, S. Balasubramanian, and A. Surolia. 1996. Titration calorimetric studies to elucidate the specificity of the interactions of polymyxin B with lipopolysaccharides and lipid A. Biochem. J. 315:679-686.
31. Takada, H., and S. Kotani. 1989. Structural requirements of lipid A for endotoxic and other biological activities. Crit. Rev. Microbiol. 16:477-523.
32. Tan, N. S., P. M. L. Ng, Y. H. Yau, P. K. Chong, B. Ho, and J. L. Ding. 2000. Definition of endotoxin binding sites in horseshoe crab factor C recombinant sushi proteins and neutralization of endotoxin by sushi peptides. FASEB J. 14:1801-1813.
33. Thomas, C. J., N. Surolia, and A. Surolia. 1999. Surface plasmon resonance studies resolve the enigmatic endotoxin neutralizing activity of polymyxin B. J. Biol. Chem. 274:29624-29627.
34. Tobias, P. S., K. Soldau, N. M. Iovine, P. Elsbach, and J. Weiss. 1997. Lipopolysaccharide (LPS)-binding proteins BPI and LBP form different types of complexes with LPS. J. Biol. Chem. 272:18682-18685.
35. Tracey, K. J., Y. Fong, D. Hesse, K. R. Manogue, A. T. Lee, G. C. Kuo, S. F. Lowry, and A. Cerami. 1987. Anti-cachectin/TNF monoclonal antibodies prevent septic shock during lethal bacteraemia. Nature 330:662-664.
36. Wiedeman, B., and H. Grimm. 1996. Susceptibility to antibiotics: species incidence and trends, p. 900-1168. In V. Lorian (ed.), Antibiotics in laboratory medicine, 4th ed. The Williams & Wilkins Co., Baltimore, Md.
37. Woody, R. W. 1995. Circular dichroism. Methods Enzymol. 246:34-71.
38. Wu, M., and R. W. E. Hancock. 1999. Interaction of the cyclic antimicrobial cationic peptide bactenectin with the outer and cytoplasmic membrane. J. Biol. Chem. 274:29-35.
39. Yang, L., T. M. Weiss, T. A. Harroun, W. T. Heller, and H. W. Huang. 1998. Neutron off-plane scattering of aligned membranes. I. Method of measurement. Biophys. J. 75:641-645.
40. Yau, Y. H., B. Ho, N. S. Tan, P. M. L. Ng, and J. L. Ding. 2001. High therapeutic index of factor C sushi peptides: potent antimicrobials against Pseudomonas aeruginosa. Antimicrob. Agents Chemother. 45:2820-2825.
Volume 20 Supplement 8: Decipher computational analytics in digital health and precision medicine

A hybrid gene selection method based on gene scoring strategy and improved particle swarm optimization

Fei Han, Di Tang, Yu-Wen-Tian Sun, Zhun Cheng, Jing Jiang and Qiu-Wei Li

BMC Bioinformatics 2019, 20(Suppl 8):289

© The Author(s) 2019

Gene selection is one of the critical steps in the course of the classification of microarray data. Since particle swarm optimization has no complicated evolutionary operators and fewer parameters need to be adjusted, it has been used increasingly as an effective technique for gene selection. Since particle swarm optimization is apt to converge to local minima which lead to premature convergence, some particle swarm optimization based gene selection methods may select non-optimal genes with high probability. To select predictive genes with low redundancy without filtering out key genes is still a challenge.

To obtain predictive genes with lower redundancy as well as overcome the deficiencies of traditional particle swarm optimization based gene selection methods, a hybrid gene selection method based on gene scoring strategy and improved particle swarm optimization is proposed in this paper. To select the genes highly related to the samples' classes, a gene scoring strategy based on randomization and extreme learning machine is proposed to filter out many irrelevant genes. With the third-level gene pool established by a multiple filter strategy, an improved particle swarm optimization is proposed to perform gene selection. In the improved particle swarm optimization, to decrease the likelihood of premature convergence of the swarm, the Metropolis criterion of the simulated annealing algorithm is introduced to update the particles, and half of the swarm is reinitialized when the swarm is trapped in local minima.

Combining the gene scoring strategy with the improved particle swarm optimization, the new method could select functional gene subsets which are significantly sensitive to the samples' classes. With the few discriminative genes selected by the proposed method, extreme learning machine and support vector machine classifiers achieve very high prediction accuracy on several public microarray data, which in turn verifies the efficiency and effectiveness of the proposed gene selection method.

Keywords: Gene selection; Gene scoring; Particle swarm optimization; Microarray data

Background

One of the major applications of microarray data analysis is to perform sample classification between different disease phenotypes, for diagnostic and prognostic purposes [1]. However, the small number of samples in comparison to the high dimensionality, along with experimental variations in measured gene expression levels, makes it difficult to address a particular biological classification problem as well as to gain a deeper understanding of the functions of particular genes [1]. Gene selection is one of the critical steps in the course of the classification of microarray data [2]. Selecting a useful gene subset not only decreases the computational complexity, but also increases the classification accuracy. The methods for gene selection are broadly divided into three categories: filter, wrapper and embedded methods [3]. A filter method relies on general characteristics of the training data to select genes without involving any classifier for evaluation.
Most filter methods consider each feature separately while ignoring feature dependencies, which may lead to worse classification performance when compared to other types of feature selection methods [4]. In addition to considering feature dependencies, wrapper methods take into account the interaction between feature subset search and model selection. However, wrapper methods have a higher risk of overfitting than filter ones and are very computationally intensive [5]. Embedded methods have the advantage that they include the interaction with the classification model, while being far less computationally intensive than wrapper methods [6]. Since it has no complicated evolutionary operators and fewer parameters need to be adjusted [7, 8], particle swarm optimization (PSO) [9, 10] has been used increasingly as an effective technique for global optimization in past decades. In recent years, PSO has also been implemented to perform gene selection. In [11], a combination of Integer-Coded GA (ICGA) and particle swarm optimization, coupled with extreme learning machine (ELM), was used to select an optimal set of genes. In [12, 13], binary PSO (BPSO) combined with a filter method was applied to search for optimal gene subsets. The method in [12] simplified gene selection and obtained a higher classification accuracy compared with some similar gene selection methods based on GA, while the method in [13] could determine the appropriate number of genes and obtained high classification accuracy with a support vector machine. In [14], the Kmeans-PSO-ELM method used the K-means method to group the initial gene pool into several clusters, and the standard PSO combined with ELM was used to perform gene selection, which could obtain a compact set of informative genes. Since traditional PSO is apt to converge to local minima which lead to premature convergence, the above PSO based gene selection methods still have much room for improvement. To overcome the deficiencies of the above PSO based gene selection methods and obtain predictive genes with more interpretability, two gene selection methods based on binary PSO and gene-to-class sensitivity (GCS) information were proposed in [15, 16]. In the KMeans-GCSI-MBPSO-ELM method [16], GCS information combined with the K-means method was used to identify relevant genes for subsequent sample classification, and a modified BPSO coupling GCS information (GCSI) combined with ELM was used to select the smallest possible gene subsets. Although the KMeans-GCSI-MBPSO-ELM could obtain predictive genes with lower redundancy and better interpretability, it might filter out a few critical genes highly related to samples' classes in some cases and thus lead to worse classification accuracy [16]. To overcome the weakness of the KMeans-GCSI-MBPSO-ELM, the BPSO-GCSI-ELM method [15] also encoded GCS information into binary PSO to perform gene selection by initializing particles, updating the particles, modifying maximum velocity, and adopting mutation operation adaptively. Although the BPSO-GCSI-ELM method could avoid filtering out some critical genes, it may increase the computational cost because of the large initial gene pool. To obtain predictive genes with lower redundancy as well as overcome the deficiencies of the above mentioned gene selection methods, a hybrid gene selection method based on gene scoring strategy and improved PSO is proposed in this paper.
Firstly, with the initial gene pool obtained with double filter strategies, a randomization method combined with ELM is proposed to score each gene, and the third-level gene pool for further gene selection is established. Secondly, an improved PSO aiming at improving the search ability of the swarm is proposed to perform gene selection. In the improved PSO, to decrease the probability of converging into local minima, the Metropolis criterion of the simulated annealing (SA) algorithm is introduced to update the particles, and half of the swarm is reinitialized when the swarm is trapped in local minima. With the compact and relevant gene pool obtained by multiple filter strategies, the improved PSO could select the optimal gene subsets with high probability. Finally, experimental results on six public microarray data verify the effectiveness and efficiency of the proposed hybrid gene selection method.

The remainder of this paper is organized as follows. The related preliminaries are briefly described in the "Background" section. The proposed gene selection method is introduced in the "Methods" section. The "Results" section gives the experimental results on six public microarray data. Finally, the concluding remarks are offered in the "Conclusions" section.

Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Eberhart and Kennedy [9]. PSO works by initializing a flock of birds randomly over the searching space, where each bird is called a particle with no quality or volume. Each particle flies with a certain velocity according to its momentum and the influence of its own previous best position (Pib) as well as the best position of all particles (Pg). Assume that the dimension of the searching space is D and the total number of particles is n. Then the original PSO is described as follows:

$$v_{id}(t+1)=v_{id}(t)+c_{1}\times Y_{1}()\times \left[p_{ibd}(t)-x_{id}(t)\right]+c_{2}\times Y_{2}()\times \left[p_{gd}(t)-x_{id}(t)\right] \quad (1)$$

$$x_{id}(t+1)=x_{id}(t)+v_{id}(t+1),\quad 1\leq i\leq n,\ 1\leq d\leq D \quad (2)$$

where vi(t) and xi(t) denote the velocity vector and the position of the i-th particle, respectively, at the t-th iteration; Pib(t) and Pg(t) denote the previous best position of the i-th particle and the best position of all particles, respectively; c1 and c2 are the positive acceleration constants; Y1() and Y2() are random numbers between 0 and 1. In addition, a limit is placed on the velocity. To improve the convergence performance of the original PSO, a modified particle swarm optimization [10] was proposed, in which an inertial weight was introduced into the velocity evolution equation as follows:

$$v_{id}(t+1)=w(t)\times v_{id}(t)+c_{1}\times Y_{1}()\times \left[p_{ibd}(t)-x_{id}(t)\right]+c_{2}\times Y_{2}()\times \left[p_{gd}(t)-x_{id}(t)\right] \quad (3)$$

where w(t) is the inertial weight. Shi & Eberhart [10] advised the linearly decreasing method to adjust the weight as follows:

$$w(t)=w_{ini}-\frac{w_{ini}-w_{end}}{T_{max}}\times t \quad (4)$$

where t is the current iteration number; wini, wend and Tmax are the initial inertial weight, the final inertial weight and the maximum number of iterations, respectively.
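The two update rules and the linear inertia schedule map directly onto a vectorized implementation. The following is a minimal sketch; the velocity clamp v_max and the random seed are illustrative assumptions, not values prescribed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, t, t_max,
             c1=1.49445, c2=1.49445, w_ini=0.9, w_end=0.4, v_max=4.0):
    """One swarm-wide iteration of Eqs. (2)-(4).

    x, v, p_best: (n_particles, D) arrays; g_best: (D,) array.
    """
    w = w_ini - (w_ini - w_end) * t / t_max              # Eq. (4)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)    # the Y1(), Y2() draws
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (3)
    v = np.clip(v, -v_max, v_max)                        # velocity limit
    return x + v, v                                      # Eq. (2)
```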
Extreme learning machine

In [17], a learning algorithm for single-hidden layer feedforward neural networks (SLFN) called extreme learning machine (ELM) was proposed to solve the problems caused by gradient-based learning algorithms. ELM randomly chooses the input weights and hidden biases, and analytically determines the output weights of the SLFN. ELM has much better generalization performance with much faster learning speed than gradient-based algorithms [18, 19]. For N arbitrary distinct samples (XXi, Ti), i = 1, 2, …, N, where XXi = [xxi1, xxi2, …, xxin] ∈ Rn and Ti = [ti1, ti2, …, tim] ∈ Rm, a SLFN with NH hidden neurons and activation function g() can approximate these N samples with zero error. This means that

$$Hw_{o}=T \quad (5)$$

where

$$H\left(wh_{1},\ldots,wh_{N_{H}},b_{1},\ldots,b_{N_{H}},XX_{1},\ldots,XX_{N}\right)=\left[\begin{array}{ccc} g\left(wh_{1}\cdot XX_{1}+b_{1}\right) & \cdots & g\left(wh_{N_{H}}\cdot XX_{1}+b_{N_{H}}\right)\\ \vdots & \ddots & \vdots\\ g\left(wh_{1}\cdot XX_{N}+b_{1}\right) & \cdots & g\left(wh_{N_{H}}\cdot XX_{N}+b_{N_{H}}\right) \end{array}\right] \quad (6)$$

$$w_{o}=\left[\begin{array}{c} w_{o1}^{T}\\ \vdots\\ w_{oN_{H}}^{T} \end{array}\right] \quad\text{and}\quad T=\left[\begin{array}{c} t_{1}^{T}\\ \vdots\\ t_{N}^{T} \end{array}\right].$$

Here whi = [whi1, whi2, ..., whin]T is the input weight vector connecting the i-th hidden neuron and the input neurons, woi = [woi1, woi2, ..., woim]T is the output weight vector connecting the i-th hidden neuron and the output neurons, and bi is the bias of the i-th hidden neuron. In the course of learning, first, the input weights and the hidden biases are arbitrarily chosen and need not be adjusted at all. Second, the smallest norm least-squares solution of Eq. (5) is obtained as follows:

$$w_{o}=H^{+}T \quad (7)$$

where H+ is the Moore-Penrose (MP) generalized inverse of the matrix H. It was concluded that ELM has the minimum training error and the smallest norm of weights [18, 19], and the smallest norm of weights tends to give the best generalization performance [18, 19]. Since the solution is obtained by an analytical method and all the parameters of the SLFN need not be adjusted, ELM converges much faster than gradient-based algorithms.

The proposed gene selection method

Gene selection generally consists of two steps, which are to identify relevant genes and to select the smallest subsets from the relevant genes. Different from the KMeans-GCSI-MBPSO-ELM [16] and BPSO-GCSI-ELM [15] methods, a scoring criterion following the double filter strategy is proposed to select highly relevant genes in this paper, which may decrease the size of the gene pool dramatically. For selecting a compact gene subset from the refined gene pool, an improved PSO with new strategies for reinitializing the swarm and updating Pg is proposed. Since the proposed method combines the scoring criterion with the improved PSO, coupled with ELM, to perform gene selection, it is referred to as the SC-IPSO-ELM method. The rough frame of the proposed method is shown in Fig. 1, and the detailed steps are described as follows.

Fig. 1 The frame of the proposed hybrid gene selection method

Step 1: Form a first-level initial gene pool. The dataset is divided into training and testing datasets.
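Because the hidden layer is random and Eq. (7) is a single pseudoinverse, the whole training procedure is a few lines of linear algebra. A minimal sketch follows; the sigmoid activation and the uniform [-1, 1] initialization range are assumptions, as they are not fixed by the description above:

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Fit a SLFN by ELM: random input weights wh and biases b are left
    untouched; output weights come from the pseudoinverse, Eq. (7)."""
    Wh = rng.uniform(-1.0, 1.0, size=(n_hidden, X.shape[1]))  # input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ Wh.T + b)))   # hidden output matrix, Eq. (6)
    Wo = np.linalg.pinv(H) @ T                  # w_o = H^+ T, Eq. (7)
    return Wh, b, Wo

def elm_predict(X, Wh, b, Wo):
    H = 1.0 / (1.0 + np.exp(-(X @ Wh.T + b)))
    return H @ Wo
```

With one-hot class targets T, a sample's predicted class is the argmax of elm_predict, which makes the repeated 5-fold CV evaluations used below cheap.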
Select 200–400 genes from all the original genes by using the information index to classification (IIC) method [16, 20] as follows:

$$d(g)=\sum_{j=1}^{c}\sum_{k=1,k\neq j}^{c}\left[\frac{1}{2}\frac{|\mu_{g^{j}}-\mu_{g^{k}}|}{\sigma_{g^{j}}+\sigma_{g^{k}}}+\frac{1}{2}\ln\left(\frac{\sigma_{g^{j}}^{2}+\sigma_{g^{k}}^{2}}{2\sigma_{g^{j}}\sigma_{g^{k}}}\right)\right]$$

where μg^j and μg^k are the means of the expression value of the gene g in the j-th and k-th classes, respectively, and σg^j and σg^k are the standard deviations of the expression value of gene g in the j-th and k-th classes, respectively; c is the total number of classes. From [16, 20], the higher the value of d(g), the more classification information the gene g contains, so the gene g is more relevant to the sample categories. A high classification accuracy will be obtained with high probability by a classifier if the microarray data is projected onto a gene g whose IIC value, d(g), is high. The genes are ranked by their IIC values on the training dataset, and those genes with higher IIC values are chosen to establish the first-level gene pool.

Step 2: Establish a second-level initial gene pool. Randomly generate different gene subsets from the first-level gene pool. Then, each gene subset's predictive ability is evaluated according to the 5-fold cross validation (CV) classification accuracy obtained by ELM on the training dataset projected onto the gene subset. When the 5-fold cross validation classification accuracy is less than the predetermined value (θac), the corresponding gene subset is deleted. Thus, the genes in the remaining gene subsets have comparatively high predictive ability and form the second-level initial gene pool. The number of gene subsets in the second-level gene pool is denoted lse. Each gene subset is ranked with an integer (from 1 to lse) according to the corresponding 5-fold cross validation classification accuracy: the higher the classification accuracy, the smaller the rank number of the corresponding gene subset.

Step 3: Establish a third-level initial gene pool by the scoring strategy. The pseudo-code of the scoring rule for the i-th gene in the second-level gene pool is listed as Algorithm 1, where Rj is the ranked number of the j-th gene subset in the second-level gene pool. After obtaining the scores of all genes in the second-level gene pool, they are normalized into the interval [0, 1] with a linear transformation. Obviously, the higher a gene's score, the more relevant the gene is to the samples' classes. Further filter out those genes with much lower score values; the remaining genes in the second-level pool form the third-level gene pool.

Step 4: Use an improved PSO to select the optimal gene subsets from the third-level initial gene pool. The i-th particle Xi = (xi1, xi2, …, xiD) represents a candidate gene subset, and the element xij is the serial number of the selected gene. The dimension of the particles is equal to the number of genes selected from the third-level initial gene pool, which is predetermined according to [15, 16]. The fitness function of the i-th particle, f(Xi), is the 5-fold cross validation classification accuracy obtained by ELM on the training dataset projected onto the selected gene subset represented by the i-th particle.
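The IIC filter and the rank-based scoring can be sketched as follows. The d(g) function mirrors the formula above; score_genes is only a plausible reading of Algorithm 1, whose listing is not reproduced here — the assumption is that each gene accumulates credit from every surviving subset it belongs to, with better-ranked subsets (smaller Rj) contributing more, before the linear rescaling to [0, 1].

```python
import numpy as np

def iic_score(g, y):
    """Information index to classification d(g) for one gene:
    g is its expression vector, y the class labels."""
    classes = np.unique(y)
    d = 0.0
    for j in range(len(classes)):
        for k in range(len(classes)):
            if k == j:
                continue
            gj, gk = g[y == classes[j]], g[y == classes[k]]
            mu_j, mu_k, sd_j, sd_k = gj.mean(), gk.mean(), gj.std(), gk.std()
            d += 0.5 * abs(mu_j - mu_k) / (sd_j + sd_k) \
               + 0.5 * np.log((sd_j**2 + sd_k**2) / (2.0 * sd_j * sd_k))
    return d

def score_genes(subsets, ranks, n_genes, l_se):
    """Assumed scoring rule: a subset ranked R_j (1 = best) adds
    l_se - R_j + 1 to each of its genes; scores are then linearly
    rescaled into [0, 1]."""
    score = np.zeros(n_genes)
    for subset, r_j in zip(subsets, ranks):
        for gene in subset:
            score[gene] += l_se - r_j + 1
    return (score - score.min()) / (score.max() - score.min())
```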
The optimization process of the improved PSO is the same as that of the traditional PSO except in the following respects. One is the strategy for updating the best position of the swarm. To decrease the probability of premature convergence of the swarm, the Metropolis criterion in SA [21] is introduced to update the best position of the swarm. In the (i+1)-th optimization generation, the best position of the swarm, pg, is updated by Eq. (8) as follows:

$$p_{g}(i+1)=\begin{cases} X_{j}, & f(X_{j})-f(p_{g}(i))\geq \varepsilon\\ X_{j}\ \text{with probability}\ P=e^{-\frac{|f(X_{j})-f(p_{g}(i))|}{T(i+1)}}, & |f(X_{j})-f(p_{g}(i))|<\varepsilon \end{cases} \quad (8)$$

where T(i+1) is the annealing temperature, which decreases linearly according to the following equation:

$$T(i+1)=T_{0}-\frac{T_{0}-T_{end}}{It_{max}}\times (i+1) \quad (9)$$

In Eq. (9), T0, Tend, and Itmax are the initial annealing temperature, the final annealing temperature and the maximum optimization generation number. The other is the strategy for mutating the swarm. When the swarm converges to local minima, the particles in the swarm are close to each other, and the swarm loses its diversity. Mutating the swarm makes the particles repel each other and improves the diversity of the swarm, so the swarm escapes the local minima with high probability. In the improved PSO, the mutation operation is performed if the global best fitness value of the swarm does not change for a predetermined number of generations (Nmu); it randomly selects half of the particles in the swarm to reinitialize.

The SC-IPSO-ELM method firstly identifies the relevant genes by the randomization method combined with ELM. Then, with the proposed gene scoring criterion, a much more relevant and compact gene pool is obtained. Finally, to obtain the optimal gene subsets, the traditional PSO is modified to improve its global search ability. Although the SC-IPSO-ELM method does not encode prior information to perform gene selection as the KMeans-GCSI-MBPSO-ELM [16] and BPSO-GCSI-ELM [15] methods do, it can also select the most predictive genes with low redundancy effectively. Moreover, the multiple filter strategies produce a much more compact gene pool than the methods in [15, 16], which decreases the computational cost of PSO searching for the optimal gene subsets. Compared to the gene-to-class sensitivity information, the gene rank information obtained by the scoring strategy is more robust, so the SC-IPSO-ELM method is less likely than the methods in [15, 16] to filter out predictive genes.

The proposed gene selection method comprises filtering irrelevant genes to establish the gene pool and using PSO to select functional gene subsets from the gene pool, and its computational complexity can be calculated as follows:

$$CC_{SC\text{-}IPSO\text{-}ELM}=O(N_{TG}\times N_{Train})+O(l\times N_{g1})+O(l_{se}\times N_{g2})+O(N_{PSO}\times Iter_{PSO}) \quad (10)$$

where NTG, NTrain, l, Ng1, lse, Ng2, NPSO and IterPSO are the number of the original total genes, the number of training data, the number of the initial randomly generated gene subsets in Step 2, the size of the first-level gene pool, the number of the selected gene subsets in Step 2, the size of the second-level gene pool, the swarm size and the maximum iteration number in the improved PSO, respectively. The four terms on the right side of Eq. (10) are the computational complexities of Step 1, Step 2, Step 3 and Step 4 of the proposed method, respectively.
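The two modifications of the traditional PSO condense into the three helpers below — a minimal sketch, with the near-tie threshold eps left as an illustrative assumption (the text above introduces ε but does not fix its value):

```python
import numpy as np

rng = np.random.default_rng(0)

def update_global_best(p_g, f_pg, x_j, f_xj, temp, eps=1e-3):
    """Eq. (8): a clearly better particle always replaces p_g; a near-tie
    replaces it only with Metropolis probability exp(-|df| / T)."""
    df = f_xj - f_pg
    if df >= eps:
        return x_j, f_xj
    if abs(df) < eps and rng.random() < np.exp(-abs(df) / temp):
        return x_j, f_xj
    return p_g, f_pg

def anneal_temperature(i, t0, t_end, it_max):
    """Eq. (9): linear cooling of the annealing temperature."""
    return t0 - (t0 - t_end) / it_max * (i + 1)

def mutate_swarm(x, pool_size, rng=rng):
    """Reinitialize a random half of the swarm after N_mu stagnant
    generations; each particle is a vector of gene serial numbers
    drawn from the third-level pool."""
    n = len(x)
    idx = rng.choice(n, size=n // 2, replace=False)
    x[idx] = rng.integers(0, pool_size, size=(len(idx), x.shape[1]))
    return x
```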
The first and fourth terms are the same as those of the methods in [15, 16]. Ng1 and Ng2 are both much smaller than NTG. Generally, l and lse are not greater than NTrain. The computational complexity of the SC-IPSO-ELM method can therefore be approximated as the sum of the first and fourth terms on the right side of Eq. (10), which is similar to the methods in [15, 16], so the time complexity of the proposed method is of the same order of magnitude as that of the methods in [15, 16]. Since the third-level gene pool is established by the multiple filter strategy, the size of the third-level gene pool is small. The small third-level gene pool leads to small NPSO and IterPSO, which may decrease the computational cost of Step 4.

Results

To verify the effectiveness and efficiency of the proposed gene selection method, we conduct experiments on six public microarray datasets: the Leukemia, Colon, SRBCT, Brain cancer, LUNG and Lymphoma data. The detailed description of the datasets is listed in Table 1.

Table 1 Six microarray datasets, giving for each data set the total samples, training samples, testing samples, number of classes and number of genes

The Leukemia data [22] contains 72 samples in total in two classes, acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML), which contain 47 and 25 samples, respectively. Every sample contains 7129 gene expression values. The Leukemia data are available at https://link.springer.com/article/10.1186/1471-2105-7-228#SupplementaryMaterial. The Brain cancer data contains 60 samples in two classes, 46 patients with classic and 14 patients with desmoplastic brain cancer. The Lymphoma data includes 58 samples, from 32 patients who were cured and 26 patients who were not. Each sample in the Brain cancer and Lymphoma data has 7129 genes. These two data sets are available at http://linus.nci.nih.gov/~brb/DataArchive_New.html. The Colon data consists of expression levels of 62 samples, of which 40 samples are colon cancer samples and the remaining are normal samples. Although expression levels were originally measured for 6000 genes, 4000 of the 6000 genes were removed considering the reliability of the measured values. The measured expression values of 2000 genes are publicly available at http://microarray.princeton.edu/oncology/. The entire SRBCT data [23] includes the expression data of 2308 genes. There are in total 63 training samples and 25 testing samples, five of the testing samples not being SRBCT. The 63 training samples contain 23 Ewing family of tumors (EWS), 20 rhabdomyosarcoma (RMS), 12 neuroblastoma (NB), and 8 Burkitt lymphomas (BL). The 20 testing samples contain 6 EWS, 5 RMS, 6 NB, and 3 BL. The data are available at https://link.springer.com/article/10.1186/1471-2105-7-228#SupplementaryMaterial. The LUNG data [24, 25] contains in total 203 samples in five classes, adenocarcinomas, squamous cell lung carcinomas, pulmonary carcinoids, small-cell lung carcinomas and normal lung, which have 139, 21, 20, 6 and 17 samples, respectively. Each sample has 12600 genes. The genes with standard deviations smaller than 50 expression units were removed, and a dataset with 203 samples and 3312 genes was obtained [24, 25]. The data is also available at https://link.springer.com/article/10.1186/1471-2105-7-228#SupplementaryMaterial. In the experiments on all data, the swarm size is 60, the maximum iteration number is selected as 20, the acceleration constants c1 and c2 are both selected as 1.49445, and the inertial weight varies from 0.9 to 0.4.
The size of the third-level gene pool is 40 on all data. The parameter Nmu is fixed as 3 on all data. The values of these parameters are determined by cross-validation runs on the training datasets and according to [15, 16].

The prediction ability of the selected gene subsets

To verify the prediction ability of the selected gene subsets obtained by the proposed method, ELM is used to perform sample classification with some gene subsets selected by the SC-IPSO-ELM method on the six datasets. Each experiment is conducted 100 times, and the mean classification accuracies are listed in Table 2.

Table 2 The classification accuracy obtained by ELM with different gene subsets selected by the SC-IPSO-ELM method on the six microarray data (columns: selected gene subsets, 5-fold CV accuracy mean(%) ± std, test accuracy mean(%) ± std)
100 ±0.00
42335,2642,1843,4050
1091,798,337 90.14 ±0.036
3052,973,3041,3692,4796
4628,7129,7045,4413,798
7129,2881,3052,865,1970,2935,4871
14,1976,1325,1993,1870,1892,653,1917,187,22,1209,1060
377,792,14,1976,765,187,251,1110,175,53,1293,1740,200
792,1423,14,1976,1909,1110,1589,102,107,1916,175,1151
792,14,1976,765,1909,1524,1110,175,43,53,1293,1740,251
742,1003,1954,430,2050,123
545,1955,1434,509,971,255
1003,545,1911,153,123,1489,2161
1955,2050,545,2144,2045,123,1489
1765,2779,2841,1474,2045,3191,2763,2817,525,1630
525,1493,607,2763,792,580,867,368,3279,2158,1225
1765,883,2763,792,580,867,985,3279,2988,2045,814
1765,525,2763,2841,1474,2583,867,985,2045,814,918
152,2347,2650,5679,438,1855,5863
1855,2828,152,2437,806,530,1102
152,2437,4829,2828,6441,806,2508

From Table 2, with the small gene subsets selected by the proposed approach, ELM obtains 100% 5-fold cross validation and test accuracies on both the Leukemia and SRBCT data. With about five and thirteen genes selected by the SC-IPSO-ELM method on the Brain cancer and Colon data, respectively, ELM obtains high prediction accuracies. These results indicate that the SC-IPSO-ELM method has the ability to select those predictive genes highly related to the samples' classes.

Biological and functional analysis of the selected gene subsets

The experiment on each microarray data is conducted 500 times, and the top ten frequently selected genes are listed in Tables 3, 4, 5, 6, 7 and 8 for the six microarray data.

Table 3 The top ten frequently selected genes with the SC-IPSO-ELM method on the Leukemia data
Gene No.
CCND3 Cyclin D3 ∗∘
TCF3 Transcription factor 3 (E2A immunoglobulin enhancer binding factors E12/E47)
MB-1 gene ∗∘⊲⋆∙
GB DEF = T-cell antigen receptor gene T3-delta ∗⋆
CD33 CD33 antigen (differentiation antigen) ∗∘
CST3 Cystatin C (amyloid angiopathy and cerebral hemorrhage) ∗∘⊲⋆∙
ME491 gene extracted from H.sapiens gene for Me491/CD63 antigen
CTSD Cathepsin D (lysosomal aspartyl protease) ∗∘⊲⋆
DF D component of complement (adipsin)
Tryptase-III mRNA, 3' end
*also selected in [15]; ∘also selected in [26]; ⊲also selected in [22]; ⋆also selected in [16]; ∙also selected in [27]

Table 4 The top ten frequently selected genes with the SC-IPSO-ELM method on the Brain cancer data
Lipoma HMGIC fusion partner-like 2
KIAA0265 protein
Granzyme B (granzyme 2, cytotoxic T-lymphocyte-associated serine esterase 1)
Chemokine (C-C motif) ligand 1
Kell blood group ∗
Protein phosphatase 2 (formerly 2A), regulatory subunit A (PR 65), beta isoform
CBF1 interacting corepressor
Histone deacetylase 1
FXYD domain containing ion transport regulator 3
Rab9 effector protein with kelch motifs
*also selected in [15]

Table 5 The top ten frequently selected genes with the SC-IPSO-ELM method on the Colon data
MYOSIN LIGHT CHAIN ALKALI, SMOOTH-MUSCLE ISOFORM (HUMAN) ∗∘⊲⋆
COLLAGEN ALPHA 2(XI) CHAIN (Homo sapiens)
H.sapiens Wee1 hu gene
LEUKOCYTE ANTIGEN CD37 (Homo sapiens) ⊲⋆
ATP SYNTHASE COUPLING FACTOR 6, MITOCHONDRIAL PRECURSOR (HUMAN) ∘⋆
HEAT SHOCK PROTEIN HSP 90-BETA (HUMAN)
Human Mullerian inhibiting substance gene, complete cds ⊲
MYOSIN HEAVY CHAIN, NONMUSCLE (Gallus gallus)
Human vasoactive intestinal peptide (VIP) mRNA, complete cds
GLIA DERIVED NEXIN PRECURSOR (Mus musculus)
*also selected in [28]; ∘also selected in [29]; ⊲also selected in [15]; ⋆also selected in [16]

Table 6 The top ten frequently selected genes with the SC-IPSO-ELM method on the SRBCT data
Transmembrane protein ∗∘⊲⋆
Sarcoglycan, alpha (50kD dystrophin-associated glycoprotein) ∗⋆∙
Cadherin 2, N-cadherin (neuronal) ∘⊲∙
Wiskott-Aldrich syndrome (eczema-thrombocytopenia)
Antigen identified by monoclonal antibodies 12E7, F21 and O13 ∗⋆⊲∙
Protein tyrosine phosphatase, non-receptor type 13 (APO-1/CD95 (Fas)-associated phosphatase)
Proteasome (prosome, macropain) subunit, beta type, 8 (large multifunctional protease 7) ⊲
ESTs
Caveolin 1, caveolae protein, 22kD
Human DNA for insulin-like growth factor II (IGF-2); exon 7 and additional ORF

Table 7 The top ten frequently selected genes with the SC-IPSO-ELM method on the LUNG data
185_at Neuro-oncological ventral antigen 1
Collagen, type IV, alpha 1 ∘
Cadherin 2, N-cadherin (neuronal) ∗∘
Pre-B-cell leukemia transcription factor 3
Claudin 4
Delta-like homolog (Drosophila)
Nuclear receptor co-repressor 1 ∗∘
Chromosome 14 open reading frame 2
Adenylate cyclase 6
Ornithine decarboxylase antizyme 1
*also selected in [16]; ∘also selected in [15]

Table 8 The top ten frequently selected genes with the SC-IPSO-ELM method on the Lymphoma data
M97935_5_at Signal transducer and activator of transcription 1, 91kDa
L17328_at Fasciculation and elongation protein zeta 2 (zygin II)
M18185_at Gastric inhibitory polypeptide
Serine (or cysteine) proteinase inhibitor, clade A (alpha-1 antiproteinase, antitrypsin), member 7
Neurotrophin 3
Chaperonin containing TCP1, subunit 7 (eta)
Mitogen-activated protein kinase kinase kinase 4 ∗
U22178_s_at Microseminoprotein, beta-
Anaplastic lymphoma kinase (Ki-1)
ATP-binding cassette, sub-family D (ALD), member 2

From Tables 3, 4, 5, 6, 7 and 8, many genes selected by the SC-IPSO-ELM method were also selected by one or more
methods proposed in [15, 16, 22, 23, 26–31]. On the Leukemia data, gene U05259, a B lymphocyte antigen receptor, encodes cell surface proteins for which monoclonal antibodies have been demonstrated to be useful in distinguishing lymphoid from myeloid lineage cells [18]. Gene M63138 is a member of the peptidase C1 family involved in the pathogenesis of breast cancer and possibly Alzheimer's disease [18]. A muscle index can be calculated based on an average intensity of 17 ESTs in the array that are homologous to smooth muscle genes, which include gene H20709 in the Colon data. Although the SC-IPSO-ELM method does not encode gene-to-class sensitivity (GCS) information extracted from the microarray data, it can also select some genes with comparatively high GCS values that were selected by the GCSI-based methods. Since the expression levels of all genes in the Brain cancer and Lymphoma data are not distinct between the two classes, different approaches considering different factors may select different discriminative gene subsets. Thus, the genes selected by the SC-IPSO-ELM differ from those selected by other gene selection methods, which is verified by Tables 4 and 8.

Figure 2 shows the heatmap with the top ten frequently selected genes for the six data. It can be found that most of the frequently selected genes' expression levels clearly differentiate between/among the two/multiple classes on all data but the Brain cancer and Lymphoma data. From Fig. 2b and e, there is no single gene whose expression levels are distinct between the two classes, which was verified in [15, 16]. Hence, the proposed method is capable of selecting predictive genes whose expression levels are distinct among different classes in most cases.

Fig. 2 The heatmap of expression levels based on the top ten frequently selected genes on the six data

Comparison with the GCSI based gene selection methods

In [15, 16], two effective gene selection methods considering GCS information were proposed. Experimental results on several public microarray data verified that the two methods, the KMeans-GCSI-MBPSO-ELM and BPSO-GCSI-ELM methods, outperformed some PSO-based methods and other classical gene selection methods such as GS2, GS1, Cho's and F-test. To avoid repetition of the comparison with the PSO-based and other classical gene selection methods, the SC-IPSO-ELM method is compared with only the KMeans-GCSI-MBPSO-ELM and BPSO-GCSI-ELM methods on the six data by using ELM and support vector machine (SVM), and the corresponding results, averaged over 100 trials, are listed in Tables 9 and 10.

Table 9 The 5-fold CV classification accuracies of ELM based on the three gene selection methods on the six microarray data
KMeans-GCSI-MBPSO-ELM BPSO-GCSI-ELM SC-IPSO-ELM
5-fold CV Accuracy(%) ± std
100.00 ±0.00
88.63 ±0.0216

Table 10 The classification accuracies of SVM based on the three gene selection methods on the six microarray data

From Tables 9 and 10, the SC-IPSO-ELM method selects almost the same number of genes as the two GCSI based methods on the Leukemia, Brain cancer, SRBCT, LUNG and Lymphoma data, while it selects the largest number of genes on the Colon data among the three methods. ELM achieves 100% 5-fold CV accuracy on the Leukemia and SRBCT data with the genes selected by the three methods, and SVM achieves the same 5-fold CV accuracy on the Leukemia data with the genes selected by the three methods.
ELM and SVM both obtain the highest 5-fold CV accuracy on the Brain cancer, Colon and Lymphoma data with the genes selected by the SC-IPSO-ELM method, SVM obtains a slightly higher 5-fold CV accuracy on the SRBCT data with the SC-IPSO-ELM than with the two GCSI based methods, and SVM obtains the highest 5-fold CV accuracy on the LUNG data with the BPSO-GCSI-ELM. On the whole, the SC-IPSO-ELM could select more predictive gene subsets than the two GCSI based methods.

Discussion on the parameter selection

To establish the second-level gene pool, it is critical to determine the value of the parameter θac. Figure 3 shows the relationship between the classification accuracy on the training data obtained by ELM and the parameter θac. On the Leukemia, Colon, LUNG and Lymphoma data, the 5-fold CV and test accuracy both have an upward trend as the value of the parameter increases, while they have a downward trend as the value of the parameter increases on the Brain cancer data. On the SRBCT data, the test accuracy decreases as the value of the parameter increases, while the 5-fold CV accuracy increases as the value of the parameter increases.

Fig. 3 The parameter θac versus the classification accuracy on the training dataset obtained by ELM

For using the improved PSO to select the gene subset, the dimension of the particle is the number of the selected genes. Figure 4 shows the effect of different numbers of selected genes. The 5-fold CV accuracy obtained by ELM has an upward trend as the number of the selected genes increases on the six data except the Colon data, while the curves of the test accuracy obtained by ELM fluctuate as the number of the selected genes increases on the six data.

Fig. 4 The number of the selected genes versus the classification accuracy on the training dataset obtained by ELM

Figures 3 and 4 provide a guide on how to select the values of the parameter θac and the number of the selected genes in the SC-IPSO-ELM. In general, these parameters should be selected empirically in particular applications.

Conclusions

To obtain predictive genes with lower redundancy, a hybrid gene selection method based on a gene scoring strategy and improved PSO was proposed in this paper. To decrease the computational cost, irrelevant genes are filtered out through different strategies to establish a more compact gene pool for further gene selection. Then, the improved PSO was proposed to select the most predictive gene subsets from the gene pool. Experimental results verified that the proposed method could select highly predictive and compact gene subsets and outperformed other PSO-based and GCSI-based gene selection methods. However, the genes selected by the proposed method lack interpretability. Future work will include how to encode some prior information into the proposed method for gene selection and apply it to RNA-Seq data analysis.

The authors would like to thank the anonymous reviewers for their time and their valuable comments. Publication costs are funded by the National Natural Science Foundation of China [Nos. 61572241 and 61271385], the National Key R&D Program of China [No. 2017YFC0806600], the Foundation of the Peak of Six Talents of Jiangsu Province [No. 2015-DZXX-024], and the Fifth "333 High Level Talented Person Cultivating Project" of Jiangsu Province [No. (2016) III-0845]. This article has been published as part of BMC Bioinformatics Volume 20 Supplement 8, 2019: Decipher computational analytics in digital health and precision medicine.
The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-8.

FH proposed the frame and wrote the manuscript. DT and YS conducted the experiments. ZC, JJ and QL designed the experiments. All authors read and approved the final manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Author affiliations: School of Computer Science and Communication Engineering, Jiangsu University, Xuefu Road, Zhenjiang, Jiangsu, China; Jiangsu Key Laboratory of Security Technology for Industrial Cyberspace, Zhenjiang, Jiangsu, China; School of Engineering, Nanjing Agricultural University, Weigang Road, Nanjing, Jiangsu, China.

References
1. Maulik U. Analysis of gene microarray data in a soft computing framework. Appl Soft Comput. 2011; 11:4152–60.
2. Cao HB, Lei SF, Deng HW, Wang YP. Identification of genes for complex diseases using integrated analysis of multiple types of genomic data. Plos One. 2012; 7(9):42755.
3. Kohavi R, John GH. Wrappers for feature subset selection. Artif Intell. 1997; 97(1-2):273–324.
4. Saeys Y, Inza I, Larranaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007; 23(19):2507–17.
5. Maldonado S, Weber R. A wrapper method for feature selection using support vector machines. Inf Sci. 2009; 179(13):2208–17.
6. Bermejo P, Puerta JM. A GRASP algorithm for fast hybrid (filter-wrapper) feature subset selection in high-dimensional datasets. Pattern Recog. 2011; 32:701–11.
7. Lee CM, Ko CN. Time series prediction using RBF neural networks with a nonlinear time-varying evolution PSO algorithm. Neurocomputing. 2009; 73(1):449–60.
8. Yu JB, Wang SJ, Xi LF. Evolving artificial neural networks using an improved PSO and DPSO. Neurocomputing. 2008; 71(4):1054–60.
9. Kennedy J, Eberhart R. Particle swarm optimization. In: IEEE International Conference on Neural Networks. Perth: IEEE; 1995. p. 1942–8.
10. Shi YH, Eberhart RC. A modified particle swarm optimizer. In: IEEE World Congress on Computational Intelligence. Anchorage: IEEE; 1998. p. 69–73.
11. Saraswathi S, Sundaram S, Sundararajan N, Zimmermann M, Nilsen-Hamilton M. ICGA-PSO-ELM approach for accurate multiclass cancer classification resulting in reduced gene sets in which genes encoding secreted proteins are highly represented. IEEE/ACM Trans Comput Biol & Bioinforma. 2011; 8(2):452–63.
12. Yang C, Chuang LY, Ke CH, Yang C. A hybrid feature selection method for microarray classification. Int J Comput Sci. 2008; 35(3):285–90.
13. Shen Q, Shi WM, Kong W, Ye BX. A combination of modified particle swarm optimization algorithm and support vector machine for gene selection and tumor classification. Talanta. 2007; 71(4):1679–83.
14. Yang S, Han F, Guan J. A hybrid gene selection and classification approach for microarray data based on clustering and PSO. Commun Comput & Inf Sci. 2013; 375:88–93.
15. Han F, Yang C, Wu YQ, Zhu JS, Ling QH, Song YQ, Huang DS. A gene selection method for microarray data based on binary PSO encoding gene-to-class sensitivity information. IEEE/ACM Trans Comput Biol & Bioinforma. 2017; 14(1):85–96.
16. Han F, Sun W, Ling QH. A novel strategy for gene selection of microarray data based on gene-to-class sensitivity information. Plos One. 2014; 9(5):97530.
17. Huang GB, Zhu QY, Siew CK. Extreme learning machine: a new learning scheme of feedforward neural networks. In: IEEE International Joint Conference on Neural Networks. Budapest: IEEE; 2004. p. 985–990.
18. Soria-Olivas E, Gomez-Sanchis J, Martin JD, Vila-Frances J, Martinez M, Magdalena JR, Serrano AJ. BELM: Bayesian extreme learning machine. IEEE Trans Neural Netw. 2011; 22(3):505–9.
19. Han F, Huang DS. Improved extreme learning machine for function approximation by encoding a priori information. Neurocomputing. 2006; 69(16–18):2369–73.
20. Li YX. Feature selection for cancer classification based on support vector machine. J Comput Res & Dev. 2005; 42(10):1796–1801.
21. Strobl MA, Barker D. On simulated annealing phase transitions in phylogeny reconstruction. Mol Phylogenet Evol. 2016; 101:46–55.
22. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science. 1999; 286(2):531–6.
23. Khan J, Wei JS, Ringner M, Lao HS, Ladanyi M, Westermann F, Berthold F, Schwab M, Antonescu CR, Peterson C. Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat Med. 2001; 7(6):673–9.
24. Yang K, Li J, Cai Z. A stable gene selection in microarray data analysis. BMC Bioinformatics. 2006; 7:228–43.
25. Bhattacharjee A, Staunton J, Richards WG. Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proc Natl Acad Sci. 2001; 98:13790–5.
26. Tong DL. Hybridising genetic algorithm-neural network (GANN) in marker genes detection. In: International Conference on Machine Learning and Cybernetics. Warsaw: Springer; 2009. p. 1082–7.
27. Lee KE, Sha N, Dougherty ER, Vannucci M, Mallick BK. Gene selection: a Bayesian variable selection approach. Bioinformatics. 2003; 19(1):90–7.
28. Alon U, Barkai N, Notterman DA, Gish K, Ybarra S, Mack D, Levine AJ. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc Natl Acad Sci U S A. 1999; 96(12):6745–50.
29. Huang TM, Kecman V. Gene extraction for cancer diagnosis by support vector machines. In: International Conference on Artificial Neural Networks. Baoding: IEEE; 2005. p. 617–24.
30. Kar S, Sharma KD, Maitra M. Gene selection from microarray gene expression data for classification of cancer subgroups employing PSO and adaptive K-nearest neighborhood technique. Expert Syst Appl. 2015; 42(1):612–27.
31. Chu F, Wang L. Applications of support vector machines to cancer classification with microarray data. Int J Neural Syst. 2005; 15(6):475.
Thomas Gerald Room

Thomas Gerald Room FRS FAA (10 November 1902 – 2 April 1986) was an Australian mathematician who is best known for Room squares. He was a Foundation Fellow of the Australian Academy of Science.[1][2]

Biography

Thomas Room was born on 10 November 1902, near London, England. He studied mathematics at St John's College, Cambridge, and was a wrangler in 1923. He continued at Cambridge as a graduate student, and was elected as a fellow in 1925, but instead took a position at the University of Liverpool. He returned to Cambridge in 1927, at which time he completed his PhD, with a thesis supervised by H. F. Baker.[3][4] Room remained at Cambridge until 1935, when he moved to the University of Sydney, where he accepted the position of Chair of the Mathematics Department, a position he held until his retirement in 1968.[5] During World War II he worked for the Australian government, helping to decrypt Japanese communications. In January 1940, with the encouragement of the Australian Army, he, together with some colleagues at the University of Sydney, began to study Japanese codes. The others were the mathematician Richard Lyons and the classicists Arthur Dale Trendall and Athanasius Treweek. By this time Room had already begun learning Japanese under Margaret Ethel Lake (1883–?) at the University of Sydney. In May 1941 Room and Treweek attended a meeting at the Victoria Barracks in Melbourne with the Director of Naval Intelligence of the Royal Australian Navy, several Australian Army intelligence officers and Eric Nave, an expert Japanese cryptographer with the Royal Australian Navy. As a result, it was agreed that Room's group, with the agreement of the University of Sydney, would move in August 1941 to work under Nave at the Special Intelligence Bureau in Melbourne. On 1 September 1941, Room was sent to the Far East Combined Bureau in Singapore to study the codebreaking techniques used there. After the outbreak of war they were working for FRUMEL (Fleet Radio Unit Melbourne), a joint American-Australian intelligence unit, but when Lieutenant Rudolph Fabian took over command of FRUMEL and particularly when, in October 1942, FRUMEL was placed under the direct control of the US Navy, civilians such as the members of Room's group were found surplus to requirements and returned to their academic posts.[6][7] After the war, Room served as dean of the faculty of science at the University of Sydney from 1952 to 1956 and again from 1960 to 1965.[8] He also held visiting positions at the University of Washington in 1948, and the Institute for Advanced Study and Princeton University in 1957.[9][10][11] He retired from Sydney in 1968 but took short-term positions afterwards at Westfield College in London and the Open University before returning to Australia in 1974. He died on 2 April 1986.
Room married Jessica Bannerman, whom he met in Sydney, in 1937; they had one son and two daughters.[12][13]

Research

Room's PhD work concerned generalizations of the Schläfli double six, a configuration formed by the 27 lines on a cubic algebraic surface.[1][4] In 1938 he published the book The geometry of determinantal loci through the Cambridge University Press.[1] Nearly 500 pages long, the book combines methods of synthetic geometry and algebraic geometry to study higher-dimensional generalizations of quartic surfaces and cubic surfaces. It describes many infinite families of algebraic varieties, and individual varieties in these families, following a unifying principle that nearly all loci arising in algebraic geometry can be expressed as the solution to an equation involving the determinant of an appropriate matrix.[1][14] In the postwar period, Room shifted the focus of his work to Clifford algebra and spinor groups.[1] Later, in the 1960s, he also began investigating finite geometry, and wrote a textbook on the foundations of geometry.[1]

Room invented Room squares in a brief note published in 1955.[15] A Room square is an n × n grid in which some of the cells are filled by sets of two of the numbers from 0 to n in such a way that each number appears once in each row or column and each two-element set occupies exactly one cell of the grid. Although Room squares had previously been studied by Robert Richard Anstice,[16] Anstice's work had become forgotten and Room squares were named after Room. In his initial work on the subject, Room showed that, for a Room square to exist, n must be odd and cannot equal 3 or 5. It was later shown by W. D. Wallis in 1973 that these are necessary and sufficient conditions: every other odd value of n has an associated Room square. The nonexistence of a Room square for n = 5 and its existence for n = 7 can both be explained in terms of configurations in projective geometry.[1]

Despite retiring in 1968, Room remained active mathematically for several more years, and published the book Miniquaternion geometry: An introduction to the study of projective planes in 1971 with his student Philip B. Kirkpatrick.[1]
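The defining conditions of a Room square translate directly into a short validity check. The following is a minimal sketch; the grid representation (None for empty cells, a pair of symbols for filled ones) is an assumed convention:

```python
from itertools import combinations

def is_room_square(grid):
    """Check an n-by-n grid whose cells are None or 2-element sets of
    symbols from {0, ..., n}: each symbol must appear exactly once in
    each row and each column, and each unordered pair of symbols must
    fill exactly one cell of the grid."""
    n = len(grid)
    symbols = list(range(n + 1))
    for lines in (grid, list(zip(*grid))):          # rows, then columns
        for line in lines:
            seen = sorted(s for cell in line if cell is not None for s in cell)
            if seen != symbols:
                return False
    filled = sorted(tuple(sorted(cell))
                    for row in grid for cell in row if cell is not None)
    return filled == list(combinations(symbols, 2))
```

Counting cells also shows why the side must be odd: each row has to present all n + 1 symbols in (n + 1)/2 filled cells, which is an integer only when n is odd.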
Awards and honours

In 1941, Room won the Thomas Ranken Lyle Medal of the Australian National Research Council and was elected as a Fellow of the Royal Society.[1][17][18] He was one of the Foundation Fellows of the Australian Academy of Science, chartered in 1954.[1][2] From 1960 to 1962, he served as president of the Australian Mathematical Society and he later became the first editor of its journal.[1] The T. G. Room award of the Mathematical Association of New South Wales, awarded to the student with the best score in the NSW Higher School Certificate Mathematics Extension 2 examination, is named in Room's honour.[1][19]

References

1. Hirschfeld, J. W. P.; Wall, G. E. (1987). "Thomas Gerald Room. 10 November 1902 – 2 April 1986". Biographical Memoirs of Fellows of the Royal Society. 33: 575–601. doi:10.1098/rsbm.1987.0020. JSTOR 769963. S2CID 73328766. Also published in Historical Records of Australian Science 7 (1): 109–122, doi:10.1071/HR9870710109. An abridged version is online at the web site of the Australian Academy of Science.
2. John Mack. Room, Thomas Gerald (1902–1986). First published in Australian Dictionary of Biography, Volume 18 (MUP), 2012.
3. Hirschfeld, J. W. P.; Wall, G. E. (1987). "Thomas Gerald Room. 10 November 1902 – 2 April 1986". Biographical Memoirs of Fellows of the Royal Society 33: 574. doi:10.1098/rsbm.1987.0020. JSTOR 769963. Also published in Historical Records of Australian Science 7 (1): 109–122, doi:10.1071/HR9870710109. An abridged version is online at the web site of the Australian Academy of Science.
4. Thomas Gerald Room at the Mathematics Genealogy Project.
5. "The University. Chair of Mathematics. Professor T. G. Room", The Sydney Morning Herald, 21 December 1934.
6. Peter Kornicki, Eavesdropping on the Emperor: Interrogators and Codebreakers in Britain's War with Japan (London: Hurst & Co., 2021), pp. 209–211, 216–7.
7. Peter Donovan and John Mack, 'Sydney University, T. G. Room and codebreaking in WW II', Australian Mathematical Society Gazette 29 (2002): 76–85, 141–8.
8. Hirschfeld, J. W. P.; Wall, G. E. (1987). "Thomas Gerald Room. 10 November 1902 – 2 April 1986". Biographical Memoirs of Fellows of the Royal Society 33: 574. doi:10.1098/rsbm.1987.0020. JSTOR 769963. Also published in Historical Records of Australian Science 7 (1): 109–122, doi:10.1071/HR9870710109. An abridged version is online at the web site of the Australian Academy of Science.
9. Hirschfeld, J. W. P.; Wall, G. E. (1987). "Thomas Gerald Room. 10 November 1902 – 2 April 1986". Biographical Memoirs of Fellows of the Royal Society 33: 574. doi:10.1098/rsbm.1987.0020. JSTOR 769963. Also published in Historical Records of Australian Science 7 (1): 109–122, doi:10.1071/HR9870710109. An abridged version is online at the web site of the Australian Academy of Science.
10. "Princeton Appoints 17 Guest Professors", The New York Times, 4 September 1957.
11. "Institute Names 128 For Research; Scholars Will Do Advanced Study on Historical Topics And in Mathematics", The New York Times, 15 September 1957.
12. Hirschfeld, J. W. P.; Wall, G. E. (1987). "Thomas Gerald Room. 10 November 1902 – 2 April 1986". Biographical Memoirs of Fellows of the Royal Society 33: 574. doi:10.1098/rsbm.1987.0020. JSTOR 769963. Also published in Historical Records of Australian Science 7 (1): 109–122, doi:10.1071/HR9870710109. An abridged version is online at the web site of the Australian Academy of Science.
13. "Professor and Bride Dodge Rice", The Sydney Morning Herald, 8 November 1937.
14. Review of The geometry of determinantal loci by Virgil Snyder (1939), Bulletin of the AMS 45: 499–501, doi:10.1090/S0002-9904-1939-07011-0.
15. Room, T. G. (1955), "A new type of magic square", Mathematical Gazette, 39: 307, doi:10.2307/3608578, JSTOR 3608578, S2CID 125711658.
16. O'Connor, John J.; Robertson, Edmund F., "Robert Anstice", MacTutor History of Mathematics Archive, University of St Andrews.
17. "Lyle Medals Awarded", The Sydney Morning Herald, 10 July 1941.
18. Thomas Ranken Lyle Medal, Archived 28 November 2010 at the Wayback Machine, Australian Academy of Science. Retrieved 6 June 2010.
19. The T G Room Award, Archived 4 August 2012 at archive.today, Mathematical Association of New South Wales. Retrieved 1 June 2010.
Agreement between two large pan-cancer CRISPR-Cas9 gene dependency data sets

Joshua M. Dempster1, Clare Pacini2,3, Sasha Pantel1, Fiona M. Behan2,3, Thomas Green1, John Krill-Burger1, Charlotte M. Beaver2, Scott T. Younger1, Victor Zhivich1, Hanna Najgebauer2,3, Felicity Allen2, Emanuel Gonçalves2, Rebecca Shepherd2, John G. Doench1, Kosuke Yusa2,7, Francisca Vazquez1, Leopold Parts2,4, Jesse S. Boehm1, Todd R. Golub1,5, William C. Hahn1,5, David E. Root1, Mathew J. Garnett2,3, Aviad Tsherniak1 & Francesco Iorio2,3,6

Nature Communications volume 10, Article number: 5817 (2019)

Genome-scale CRISPR-Cas9 viability screens performed in cancer cell lines provide a systematic approach to identify cancer dependencies and new therapeutic targets. As multiple large-scale screens become available, a formal assessment of the reproducibility of these experiments becomes necessary. We analyze data from recently published pan-cancer CRISPR-Cas9 screens performed at the Broad and Sanger Institutes. Despite significant differences in experimental protocols and reagents, we find that the screen results are highly concordant across multiple metrics, with both common and specific dependencies jointly identified across the two studies. Furthermore, robust biomarkers of gene dependency found in one data set are recovered in the other. Through further analysis and replication experiments at each institute, we show that batch effects are driven principally by two key experimental parameters: the reagent library and the assay length. These results indicate that the Broad and Sanger CRISPR-Cas9 viability screens yield robust and reproducible findings.

Despite recent advances in cancer research, most cancer patients still have no clinical indications for approved targeted therapies1. Expanding precision oncology to the general patient population will require identifying and exploiting many new genomic targets. To tackle this problem, large-scale pharmacogenomic screens have been performed across panels of human cancer cell lines2,3. The advent of genome editing by CRISPR-Cas9 technology has allowed extending these studies beyond currently druggable targets with precision and scale4,5. Pooled CRISPR-Cas9 screens employing genome-scale libraries of single-guide RNAs (sgRNAs) are being performed on growing numbers of cancer in vitro models6,7,8,9,10,11,12. The output of these screens can be used to identify and prioritize new cancer therapeutic targets13. However, fully characterizing genetic vulnerabilities in cancers is estimated to require thousands of genome-scale screens14.

We present a comparative analysis of data sets derived from the two largest independent CRISPR-Cas9 based gene-dependency screening studies in cancer cell lines published to date13,15,16, part of the Cancer Dependency Map effort17,18. The analysis aims to assess the concordance of these data sets and that of the analytical outcomes they yield when investigated individually.
To this aim, our computational strategy includes comparisons at different levels of data processing and abstraction: from gene-level dependencies, to molecular markers of dependencies, to genome-scale cell line profiles of dependencies. Lastly, we shed light on the differences in the experimental settings that give rise to batch effects across independent studies of this kind, discerning between biological and technical confounding factors.

Overview of data sets and comparison strategy

We compared two sets of pooled genome-scale CRISPR-Cas9 dropout screens in cancer cell lines, generated at the Broad Institute and the Sanger Institute through independently designed experimental pipelines (detailed in Fig. 1a, Supplementary Data 1 and Supplementary Methods), considering 147 cell lines and 16,773 genes screened independently by both institutes (Supplementary Data 2). We performed comparisons of individual gene scores, quantifying the reduction of cell viability upon gene inactivation via CRISPR-Cas9 targeting; of profiles of such scores across cell lines (gene dependency profiles); and of profiles of such scores across genes in individual cell lines (cell line dependency profiles).

Fig. 1: Comparison of experimental protocols and gene score results. a Experimental settings and reagents used in the experimental pipelines underlying the two compared data sets. b Densities of individual gene scores in individual cell lines, in the Broad and Sanger data sets, across processing levels. The distributions of gene scores for previously identified essential genes12 are shown in red. c Examples of the relationship between a gene's score rank in a cell line and the cell line's rank for that gene using Broad unprocessed gene scores, with gene ranks in their 90th percentile of least dependent lines highlighted. Cell lines in the 90th percentile of least dependent lines on RPS8 (a common essential gene) still rank this gene among the strongest of their dependencies. d Distribution of gene ranks for the 90th percentile of least dependent cell lines for each gene in both data sets. Black dotted lines indicate natural thresholds at the minimum gene density along each axis. The y-axis is equivalent to the y-axis in (c) at the 90th percentile mark, as indicated by the arrows.

We calculated gene scores using three different strategies. First, we considered fully processed gene scores, available for download from the Broad17 and Sanger13,18 Cancer Dependency Map web-portals (processed data). Because data processing pipelines vary significantly between the two data sets, we also examined minimally processed gene scores, generated by computing median sgRNA abundance fold changes for each gene (unprocessed data). Lastly, we applied the established batch correction method ComBat19 to the unprocessed gene scores to remove experimental batch effects between the data sets. ComBat achieves this by aligning gene means and variances between the data sets using an empirical Bayes framework. We refer to this form of the data as the batch-corrected gene scores.

Agreement of gene scores

We found concordant gene scores across all genes and cell lines, with Pearson correlation = 0.658, 0.627, and 0.765, respectively, for processed, unprocessed, and batch-corrected data (p-values below machine precision in all cases, N = 2,465,631, Fig. 1b). Spearman correlations across the different comparisons were 0.347, 0.411, and 0.551, respectively, again significant below machine precision.
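These agreement statistics are straightforward to reproduce from the released gene-score matrices. The following is a minimal sketch, assuming two gene-by-cell-line pandas DataFrames already restricted to the shared genes and cell lines; variable and function names here are illustrative, not taken from the released analysis code:

    import numpy as np
    import pandas as pd
    from scipy import stats

    def score_agreement(broad: pd.DataFrame, sanger: pd.DataFrame):
        """Pearson and Spearman correlation between two gene-by-cell-line
        score matrices, computed over all shared gene/cell-line pairs."""
        genes = broad.index.intersection(sanger.index)
        lines = broad.columns.intersection(sanger.columns)
        x = broad.loc[genes, lines].to_numpy().ravel()
        y = sanger.loc[genes, lines].to_numpy().ravel()
        ok = ~(np.isnan(x) | np.isnan(y))  # drop pairs with missing scores
        pearson_r, _ = stats.pearsonr(x[ok], y[ok])
        spearman_r, _ = stats.spearmanr(x[ok], y[ok])
        return pearson_r, spearman_r

Flattening the matrices before correlating, as above, reproduces the "all genes and cell lines" comparison; correlating row by row instead yields the per-gene profile comparisons discussed below.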
The reproducibility of gene scores between the two data sets can be considered a function of two variables: the mean dependency across all cell lines for each gene (relevant to infer common dependencies), and the pattern of scores across cell lines for each gene (relevant to predict selective oncology therapeutic targets). Mean gene scores among all cell lines showed excellent agreement (Supplementary Fig. 1a), with Pearson correlation = 0.784 and 0.818, respectively, for processed and unprocessed data (p below machine precision in both cases using SciPy's beta distribution test; N = 16,773). The effect of ComBat correction on our data is to align gene means and variances (Supplementary Fig. 1b). As expected, after ComBat correction the Pearson correlation of gene means was 0.9997, and the correlation of gene standard deviations (SDs) was 0.957.

We further tested whether it was possible to recover consistent sets of common dependencies. To this end, we defined as common dependencies those genes that rank among the top dependencies when considering only their 90th percentile of least dependent cell lines, with the score threshold for top dependencies determined by the local minimum in the data (Fig. 1c). For the unprocessed data, the Broad and Sanger jointly identify 1,031 common dependency genes (Supplementary Data 3). 260 putative common dependencies were identified only by the Sanger and 397 only by the Broad (Cohen's kappa = 0.737, Fisher's exact test p-value below machine precision, N = 16,773, Fig. 1d).

Agreement of selective gene score profiles across cell lines

In both studies, most genes show little variation in their scores across cell lines. Thus we expect low shared variance even if most scores are numerically similar between the data sets20. Accordingly, we focused on a group of genes for which the score variance across lines is of potential biological interest. These are genes whose dependency profile suggests a strong biological selectivity in at least one of the two unprocessed data sets, identified using the likelihood-ratio test (NormLRT) introduced in McDonald et al.21. We call these 49 genes Strongly Selective Dependencies (SSDs) (Supplementary Data 4). We evaluated the agreement between gene score patterns across cell lines using Pearson's correlations to test the reproducibility of selective viability phenotypes. Figure 2a illustrates the score patterns for the example cancer genes MDM4 (R = 0.820, beta test p = 6.91 × 10–37), KRAS (R = 0.765, p = 1.66 × 10–29), CTNNB1 (R = 0.803, p = 1.92 × 10–34), and SMARCA4 (R = 0.664, p = 4.61 × 10–20) with unprocessed data (N = 147). For SSDs and unprocessed data, the median correlation was 0.633, and 84% of SSDs showed a correlation greater than 0.4. Five SSDs showed a correlation below 0.2 (ABHD2, CDC62, HIF1A, HSPA5, C17orf64), and are discussed further below. As expected, correlation across data sets for all genes was lower (median R = 0.187, 8.34% of genes with R > 0.4).

Fig. 2: Reproducibility of gene and cell line dependency profiles. a Examples of gene score pattern comparisons for selected known cancer genes. b Distribution of correlations of scores for individual genes in unprocessed data. c Gene scores for strongly selective dependencies across all cell lines, with the threshold for calling a line dependent set at an FDR of 0.05. d tSNE visualization of cell lines in unprocessed data based on the correlation between cell line profiles of gene scores.
Colors represent the cell line while shape denotes the study of origin. e The same as in (d) but for data batch-corrected using ComBat. f Recovery of a cell line's counterpart in the other data set before (Uncorrected) and after correction (Corrected). Values on the y-axis show the percentage of cell lines whose matching counterpart in the other data set is within its k-nearest cell lines, i.e. the k-neighborhood on the x-axis, based on a Pearson correlation distance metric. nAUC values are shown in brackets. Three different gene sets were considered to calculate the correlation between cell lines: first, all genes (uncorrected and corrected all); second, genes that are dependencies for at least one cell line (corrected variable); and third, strongly selective dependency (corrected SSD) genes.

One important use of these screens is to consistently classify cells as dependent or not dependent on selective dependencies. Therefore, we evaluated the agreement of the Broad and Sanger data sets on identifying cell lines that are dependent on each SSD gene. We classified cell lines as dependent on a given gene if its gene score represents a false discovery rate (FDR) less than 0.05 (see the Methods section). Gene scores with FDR greater than 5% are dominated by a large group of scores near zero (Fig. 2c). The area under the receiver operating characteristic curve (AUROC) for recovering binary Sanger dependency on SSDs using Broad gene scores was 0.940 in processed data, 0.963 in unprocessed data, and 0.971 in corrected data; to recover Broad binary dependency from Sanger scores, AUROC scores were 0.918, 0.870, and 0.968, respectively. The recall of Sanger-identified dependent cell lines in Broad data was 0.781 with precision equal to 0.255 for processed data, 0.775 and 0.258 for unprocessed data, and 0.754 and 0.587 for batch-corrected data (Supplementary Fig. 1c). Agreement is higher than could be expected by chance under all processing regimes (Fisher's exact test p = 8.99 × 10–43 in processed, 9.65 × 10–44 in unprocessed, and 5.29 × 10–198 in batch-corrected data; N = 7,203). A large proportion of Broad-exclusive dependent cell lines (53.4% in processed data and 47.7% in unprocessed data) were due to the single gene HSPA5, which is an SSD in Sanger data but a common dependency in Broad data. Examining SSDs individually, we found a median Cohen's kappa for sensitivity to individual SSDs of 0.461 in processed, 0.609 in unprocessed, and 0.758 in batch-corrected data. In unprocessed data, 59.2% of SSDs had Cohen's kappa greater than 0.4, as opposed to 0.03% expected by chance (Supplementary Fig. 1c).
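The binarization and per-gene agreement statistics can be sketched as follows: scores are compared against an empirical null built from nonessential genes, left-tailed p-values are converted to FDRs with Benjamini–Hochberg, and agreement on each SSD is summarized with Cohen's kappa. This is a simplified sketch of the procedure detailed in Methods; names are illustrative:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.multitest import multipletests
    from sklearn.metrics import cohen_kappa_score

    def dependency_calls(scores: pd.DataFrame, nonessential) -> pd.DataFrame:
        """Call a cell line dependent on a gene when the score's left-tailed
        empirical p-value (vs the nonessential-gene null) gives FDR < 0.05."""
        null = scores.loc[scores.index.intersection(nonessential)].to_numpy().ravel()
        null = np.sort(null[~np.isnan(null)])
        flat = scores.to_numpy().ravel()
        # fraction of null scores at or below each observed score (left tail)
        pvals = np.searchsorted(null, flat, side="right") / null.size
        fdr = multipletests(pvals, method="fdr_bh")[1]
        calls = (fdr < 0.05).reshape(scores.shape)
        return pd.DataFrame(calls, index=scores.index, columns=scores.columns)

    # per-SSD agreement between the two studies' binary calls, e.g.:
    # kappa = cohen_kappa_score(broad_calls.loc[gene], sanger_calls.loc[gene])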
To quantify agreement, for each cell line dependency profile in one data set, we ranked all the others (from both data sets) based on their correlation distance to the profile under consideration. For batch-corrected data, 175 of 294 (60%) cell line dependency profiles from one study have their counterpart in the other study as the closest (first) neighbor, and 209 of 294 (71%) of cell lines have it among the five closest neighbors (area under the normalized Recall curve, nAUC, averaged across all profiles = 0.91 for batch-corrected data and 0.53 for uncorrected data, Fig. 2f). Similar results were obtained across dependency profiles restricted to different sets of genes, with the best performance obtained when considering SSD genes only (nAUC = 0.94) and the worst performance when considering all genes (nAUC = 0.90). The percentage of cell lines matching closest to their counterparts in the other study was 57% when considering all genes and 43% when considering SSD genes. Further, the tSNE plots for each tested gene set showed similar improvement after correction (Supplementary Fig. 2a–b). The batch correction also aligned the numbers of significant (at 5% FDR) dependencies across cell lines between the two data sets (median number of dependencies 2,109 and 1,717 before, and 2,053 and 1,950 after correction, for Broad and Sanger respectively, Supplementary Fig. 3a). The average proportion of dependencies detected in both studies over those detected in at least one study also increased across cell lines, from 47.75% to 59.14%. Furthermore, the correlation between cell lines after correction rose above the correlation within each individual screen for each gene set considered (Supplementary Fig. 3b).

We finally examined whether the residual disagreement in corrected data might be related to screen quality and if there are tissues for which corresponding cell lines showed a consistently higher/lower agreement across the two studies. We assessed screen quality by computing true positive rates (TPRs) for recovering common essential genes in each cell line with a fixed 5% FDR, determined from the distribution of nonessential genes in the cell line. We found that mean screen quality is a strong predictor of screen agreement for both the uncorrected and batch-corrected data sets (t-test p-values 2.06 × 10–35, 4.74 × 10–35, N = 147 and adjusted R-squared 0.65, 0.64 for uncorrected and batch-corrected respectively; Supplementary Fig. 3c). In addition, we observed no differences in screen agreement when stratifying cell lines based on their tissue of origin (Supplementary Fig. 3d), with screen quality being highly correlated with screen agreement invariantly across tissues (Supplementary Fig. 3e and Supplementary Data 5).

Agreement of gene dependency biomarkers

A selective dependency is of limited therapeutic value unless it can be reliably associated with an informative molecular feature of cancer (biomarker). Following a similar approach to that presented by the Cancer Cell Line Encyclopedia and Drug Sensitivity in Cancer consortia20, we performed a systematic test for molecular-feature/dependency associations on the two data sets.
To this aim, we considered a set of Cancer Functional Events consisting of 578 molecular features selected in Iorio et al.26 based on their clinical relevance and encompassing mutations in high-confidence cancer driver genes, amplifications/deletions of chromosomal segments recurrently altered in cancer, hypermethylated gene promoters, microsatellite instability status, and the tissue of origin of the cell lines (Supplementary Data 5). We considered each of these features in turn and observed its status in the cell lines screened at both Sanger and Broad. Based on this, cell lines were split into two groups (respectively with negative/positive feature) and each of the SSD genes was t-tested for significant differences in gene scores across the obtained two groups of cell lines. These tests yielded 71 out of 29,350 possible significant associations (FDR < 5%, ΔFC < −1) between molecular features and gene dependency when using the Broad unprocessed data, and 90 when using the Sanger unprocessed data (Supplementary Data 6). Of these, 55 (77% of the Broad associations and 61% of the Sanger ones) were found in both data sets (FET p-value = 9.08 × 10–133, Fig. 3a and Supplementary Data 6). The concordance between the associations identified by each study was proportional to the threshold used to define significance (Supplementary Data 7). This was assessed by first considering the associations found significant (FDR < 5%) in one study as positive controls and calculating precision, recall, and sensitivity using a rank predictor based on the p-values obtained in the other study for all associations. We then tested how performance changed when considering increasingly stringent subsets of significant associations as positive controls and found that the most significant associations in one study were the most likely to be recovered in the other (Fig. 3b). Further, the overall correlation between differences in gene depletion FCs between cell lines with and without a specified molecular feature was equal to 0.763, and 99.2% of associations had the same sign of differential dependency across the two studies (Fig. 3a). This indicates that the studies agree not only on the existence of specific biomarkers but also on their robustness. Fig. 3: Reproducibility of biomarkers. a Results from a systematic association test between molecular features and differential gene dependencies (of the SSD genes) across the two studies. Each point represents a test for differential dependency on a given gene (on the second line of the point label) based on the status of a molecular feature (on the first line). b Precision/Recall and Recall/Specificity curves obtained when considering as positives controls the top significant molecular-feature/gene-dependency associations found in one of the studies and ranking all the tested molecular-feature/gene-dependency associations based on their p-values in the other study. To define top-significant associations different significance thresholds matching the quantile threshold specified in the legend are considered, where 100% includes all associations with FDR less than 5%. c Examples of significant statistical associations between genomic features and differential gene dependencies across the two studies. The box covers the interquartile range with the median line drawn within it. The whiskers of the boxplot extend to a maximum of 1.5 times the size of the interquartile range. 
d Comparison of results of a systematic correlation test between gene expression and dependency of SSD genes across the two studies. The gray dashed lines indicate the thresholds of significant correlations at a 5% false discovery rate identified for each study. Labeled points show the gene expression marker on the first line and gene dependency on the second line. Each tested association between gene expression and SSD dependency is represented by a single purple point. Regions with higher density of points are shown in white. e Examples of significant correlations between gene expression and dependencies consistently identified in both studies.

Gene dependency associations identified with both data sets included expected as well as potentially novel hits. Examples of expected associations included increased dependency on ERBB2 in ERBB2-amplified cell lines, increased dependency on beta-catenin in APC mutant cell lines, and increased dependency on MYCN in peripheral nervous system cell lines. A potentially novel association between FAM72B promoter hypermethylation and beta-catenin was also consistently identified across data sets (Fig. 3c).

We also considered gene expression to mine for possible biomarkers of gene dependency using RNA-seq data sets maintained at the Broad and Sanger institutes. To this aim, we considered as potential biomarkers 1,987 genes, obtained by intersecting the top 2,000 most variable gene expression levels measured by either institute. Clustering the RNA-seq profiles revealed that each cell line's transcriptome matched closest to its counterpart from the other institute (Supplementary Fig. 4a). We correlated the gene expression level for the most variably expressed genes to the gene dependency profiles of the SSD genes. Systematic tests of each correlation identified significant associations between gene expression and dependency. Further, as with the genomic biomarkers, we found significant overlap between gene expression biomarker associations identified in each data set, with 4,459 (52% of Broad and 66% of Sanger gene expression biomarkers) found significant for both studies, out of 97,363 tested (Fisher's exact test p-value below machine precision), and strong overall agreement of correlation scores between gene expression markers and SSD gene dependency across data sets (Pearson's correlation 0.804, Fig. 3d). We observed both positive and negative correlations consistently across data sets; for example, ERBB2 gene score was positively correlated with its expression, while ATP6V0E1 showed significant dependency when its paralog ATP6V0E2 had low expression (Fig. 3e).

Elucidating sources of disagreement between the two data sets

Despite the concordance observed between the Broad and Sanger data sets, we found batch effects in the unprocessed data both in individual genes and across cell lines. Although the bulk of these effects are mitigated by applying an established correction procedure27, their cause is an important experimental question. We conducted gene set enrichment analysis of genes sorted according to the loadings of the first two principal components of the combined unprocessed gene scores, using a comprehensive collection of 186 KEGG pathway gene sets from the Molecular Signatures Database (MSigDB)28. We found significant enrichment for genes involved in spliceosome and ribosome in the first principal component, indicating that screen quality likely explains some variability in the data (Supplementary Fig. 5a, b).
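The loading-based enrichment analysis just described can be sketched with a PCA over the combined scores followed by a hypergeometric test on the top-loading genes. This is a simplified sketch; the gene sets are assumed to be parsed from a locally downloaded MSigDB .gmt file, and all names are illustrative:

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from scipy.stats import hypergeom

    def pc_loading_enrichment(combined: pd.DataFrame, gene_sets: dict, top_n=500):
        """combined: gene-by-screen matrix of concatenated Broad+Sanger scores.
        gene_sets: dict of set name -> list of member genes (e.g. KEGG sets).
        Tests over-representation of each set among top |loading| genes of PC1."""
        pca = PCA(n_components=2)
        pca.fit(combined.T.fillna(0))  # screens as samples, genes as features
        loadings = pd.Series(pca.components_[0], index=combined.index)
        top = set(loadings.abs().nlargest(top_n).index)
        universe = set(combined.index)
        pvals = {}
        for name, members in gene_sets.items():
            members = set(members) & universe
            k = len(top & members)
            # P(X >= k) when drawing top_n genes from the universe
            pvals[name] = hypergeom.sf(k - 1, len(universe), len(members), top_n)
        return pd.Series(pvals).sort_values()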
We then enumerated the experimental differences between data sets (Fig. 1a) to identify likely causes of batch effects. The choice of sgRNA can significantly influence the observed phenotype in CRISPR-Cas9 experiments, implicating the differing sgRNA libraries as a likely source of batch effect29. Additionally, previous studies have shown that some gene inactivations result in cellular fitness reduction only in lengthy experiments11. Accordingly, we selected the sgRNA library and the time point of viability readout for primary investigation as causes of major batch effects across the two compared studies.

To elucidate the role of the sgRNA library, we examined the data at the level of individual sgRNA scores. The correlation between fold change patterns of reagents targeting the same gene (co-targeting) across studies was related to the selectivity of that gene's dependency (as quantified by the NormLRT score21, Fig. 4a): a reminder that most co-targeting reagents show low correlation because they target genes exerting little phenotypic variation. However, even among SSDs there was a clear relationship between sgRNA correlations within and between data sets (beta test p = 4.9 × 10–10, N = 49; Fig. 4b). In particular, we note that the five SSDs (ABHD2, CDC62, HIF1A, HSPA5, C17orf64) identified earlier as having poor agreement between data sets have poor sgRNA correlation within data sets, thus indicating that this metric can be used to assess the reliability of a selective dependency.

Fig. 4: Influence of reagent library on gene score. a Distributions of sgRNA depletion score correlations for sgRNAs targeting genes with varying NormLRT scores within each data set (left) and between them (right). Each gene is binned according to the mean of its NormLRT score across the two data sets. The x-axis defines the color gradient. The y-axis reports the average of all correlations between pairs of sgRNAs that belong to the same data set and target that gene. Boxes cover the interquartile range with the median indicated by a horizontal line. Whiskers extend up to 1.5 times the interquartile range with outliers shown as fliers. b Relationship between sgRNA correlation within data sets and gene correlation between data sets. The linear trend is shown for SSD genes. c The mean depletion of guides targeting common dependencies across all replicates vs Azimuth estimates of guide efficacy. The x-axis defines the color gradient. d Comparison of Broad and Sanger unprocessed gene scores for three genes: the SSD with the highest minimum median estimated sgRNA efficacy (MESE) across both libraries (left, TFAP2C), the common dependency in either data set with the greatest difference between KY and Avana MESE (center, EIF3F), and the SSD with the worst KY MESE (right, MDM2).

One possible explanation of gene score disagreement is that sgRNAs in one of the two data sets had poor on-target efficacy. To identify such cases, we need an independent assessment of sgRNA efficacy. We estimated the efficacy of each sgRNA in both libraries using Azimuth 2.0 (ref. 29), which uses only information about the genome in the region targeted by the sgRNA. We found that among genes identified as common dependencies in either data set, mean sgRNA depletion indeed had a strong relationship to the sgRNA's Azimuth-estimated efficacy (Fig. 4c). Thus, for genes where Azimuth estimates are quite different between data sets, observed phenotype differences are probably due to differences in sgRNA efficacy.
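The within-study guide-consistency metric used above (the mean pairwise correlation of co-targeting sgRNAs, detailed in Methods) can be sketched as follows, assuming an sgRNA-by-cell-line log fold-change matrix and an sgRNA-to-gene map; names are illustrative:

    import numpy as np
    import pandas as pd

    def mean_guide_correlation(lfc: pd.DataFrame, guide_map: pd.Series) -> pd.Series:
        """lfc: sgRNA-by-cell-line log fold changes for one study.
        guide_map: Series mapping sgRNA id -> target gene.
        Returns, per gene, the mean pairwise Pearson correlation between its
        co-targeting guides (genes with a single guide are skipped)."""
        out = {}
        for gene, guides in guide_map.groupby(guide_map).groups.items():
            sub = lfc.loc[lfc.index.intersection(guides)]
            if len(sub) < 2:
                continue
            corr = np.corrcoef(sub.to_numpy())       # guide-by-guide correlations
            upper = corr[np.triu_indices_from(corr, k=1)]
            out[gene] = np.nanmean(upper)            # ignore degenerate guides
        return pd.Series(out)

The between-study analogue averages correlations over pairs with one guide drawn from each library instead of pairs within one library.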
For each gene in each library, we calculated the median estimated sgRNA efficacy (MESE) and found cases where differing MESE values appear to explain gene score differences. Some examples of this effect are EIF3F (common essential in Sanger screens with MESE 0.613, non-scoring in Broad screens with MESE 0.398) and MDM2 (strongly selective in Broad screens with MESE 0.585, correlated but not strongly selective in Sanger screens with MESE 0.402) (Fig. 4d).

We next investigated the role of different experimental time points on the screens' agreement. Given that the Broad used a longer assay length (21 days versus 14 days), we expected differences to be observed for late dependencies between the data sets. Therefore, we compared the distribution of gene scores for genes known to exert a loss-of-viability effect upon inactivation at an early or late time (early or late dependencies)11. While early dependencies have similar score distributions in both data sets (median average score −0.781 at the Sanger and −0.830 at the Broad), late dependencies are more depleted at the Broad, with median average score −0.402 compared to −0.269 for the Sanger screens (Fig. 5a). The probability of observing a difference at least this extreme for a random set of genes of the same size is 2.57 × 10–78.

Fig. 5: Influence of time point. a Distribution of early and late common dependency gene scores in the Broad and Sanger data sets averaged across cell lines. Boxes cover the interquartile range with the median indicated by a horizontal line. Whiskers extend up to 1.5 times the interquartile range with outliers shown as fliers. b Distribution of corrected gene scores for asparagine synthetase (ASNS) by media and institute. Blue and orange lines indicate the median of nonessential and essential gene scores, respectively. c GO terms significantly enriched in Broad-exclusive dependencies. For each GO term the bar length indicates the ratio of cell lines showing Broad-exclusive dependencies with a statistically significant enrichment of that GO term.

Many other experimental differences may also contribute to differences in reported response. For example, Lagziel et al. showed that many metabolic gene dependency profiles in Achilles are related to screening media, with, e.g., dependency on asparagine synthetase (ASNS) notably stronger in media lacking asparagine30. The Broad Institute used provider-recommended media for all Achilles screens, while the Sanger Institute adapted cells to either RPMI or a fifty-percent mix of DMEM and F12. While DMEM lacks asparagine, both RPMI and F12 contain it; thus, ASNS is expected to be a strong dependency only in Broad screens, and only in DMEM or other asparagine-deficient media. We confirmed this result (Fig. 5b). The difference between ASNS dependency in DMEM and either RPMI or DMEM:F12 in Broad screens is significant (Student's t-test p = 1.52 × 10–10, N = 100 and p = 0.0173, N = 80). In contrast, the difference between the RPMI and DMEM:F12 media conditions is not significant in either the Broad (p = 0.961, N = 34) or the Sanger (p = 0.964, N = 147). Although ASNS is the strongest example, it is likely that some of the differences in other metabolic genes between institutes are explained by media. Unlike differences in sgRNA efficacy, both time point and media effects are expected to relate to the biological role of late dependencies.
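A media comparison of this kind reduces to a two-sample t-test on one gene's scores, grouped by a media annotation. A minimal sketch, assuming a corrected gene-by-cell-line score matrix and a Series mapping each cell line to its screening medium (the annotation column and names are hypothetical, not the actual DepMap identifiers):

    import pandas as pd
    from scipy import stats

    def media_effect(scores: pd.DataFrame, media: pd.Series, gene="ASNS"):
        """Compare one gene's dependency scores between two media conditions
        with an equal-variance two-sample t-test, as in the text."""
        g = scores.loc[gene]
        dmem = g[media == "DMEM"].dropna()
        rpmi = g[media == "RPMI"].dropna()
        t, p = stats.ttest_ind(dmem, rpmi, equal_var=True)
        return t, p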
As the Broad Institute uses longer screens and includes a greater variety of media, Broad-exclusive dependencies are likely to be enriched for particular functional gene sets. We confirmed this by using gene ontology (GO) analysis to functionally characterize genes that were detected as depleted (at 5% FDR) in individual cell lines in only one of the two studies, excluding genes with significantly different sgRNA efficacies between libraries. Results showed 29 GO categories significantly enriched in the Broad-exclusive dependency genes (Broad-exclusive GO terms) for more than 50% of cell lines (Fig. 5c and Supplementary Data 8). The Broad-exclusive enriched GO terms included classes related to mitochondrial and RNA processing gene categories and other gene categories previously characterized as late dependencies11. In contrast, no GO terms were significantly enriched in the Sanger-exclusive common dependencies in more than 30% of cell lines.

Batch effect sources: experimental verification

To verify that batch effects between the data sets can be removed by changing the library and the readout time point, we undertook replication experiments independently at the Broad and Sanger institutes, where these factors were systematically permuted. The Broad sequenced cells collected from its original HT-29 and JIMT-1 screens at the 14-day time point and conducted an additional screen of these cell lines using the KY1.1 library with readouts at days 14 and 21. The Sanger used both the Broad's and the Sanger's clones of HT-29 to conduct a new KY screen and an Avana screen with readouts at days 14 and 21. Principal component analysis (PCA) of the concatenated unprocessed gene scores, including replication screens, showed a clear institute batch effect dominating the first principal component. By highlighting replication screens, we found that this effect is chiefly due to library choice, with time point playing a smaller role (Fig. 6a, Supplementary Fig. 6a). Changing from Sanger to Broad clones of HT-29 had minimal impact. We examined the change in gene score profile for each screen caused by changing either the library or time point while keeping other conditions constant. Gene score changes induced by either library or time point alterations were consistent across multiple conditions (Fig. 6b). Sanger-exclusive common dependencies were strongly enriched for genes that became more depleted with the KY library, and Broad-exclusive common dependencies were enriched among genes more depleted with the Avana library (Supplementary Fig. 6b). Late dependencies were strongly enriched among genes that became more depleted at the later time points, while early dependencies were not (Supplementary Fig. 6c). We compared the deviations in gene score between Broad and Sanger screens under different conditions, first comparing Broad original and replication screens of HT-29 (Fig. 6c) and JIMT-1 (Supplementary Fig. 6d) to the original Sanger screens of the same cell line. Matching library and time point removed most of the average gene score change (batch effect) between institutes, as indicated by the low correlation of the remaining gene score differences in the replication screens with the average gene score change. Specifically, matching Sanger's library and time point reduces the variance of gene scores in HT-29 from 0.0486 to 0.0252 and in JIMT-1 from 0.0556 to 0.0260. We next compared Sanger original and replication screens of HT-29 to the Broad original HT-29 screen.
Matching library and time point successfully detrended the data in this case as well; however, the Sanger Avana screens of HT-29 contained considerable excess noise, causing these screens to have a higher overall variance from the Broad than the original screens (0.0486 vs 0.115). Nonetheless, the replication experiments confirm that the majority of batch effects between data sets are driven by the library and time point.

Fig. 6: Results of replication experiments. a Original and replication screens from each institute plotted by their first two principal components. HT-29 screens are highlighted. Axes are scaled to the variance explained by each component. b Correlations of the changes in gene score caused when changing a single experimental condition. c The difference in unprocessed gene scores between Broad screens of HT-29 and the original Sanger screen (Sanger minus Broad), beginning with the Broad's original screen and ending with the Broad's screen using the KY library at the 14-day time point. Each point is a gene. The horizontal axis is the mean difference of the gene's score between the Sanger and Broad original unprocessed data sets. d A similar plot taking the Broad's original screen as the fixed reference and varying the Sanger experimental conditions (Broad minus Sanger).

Providing sufficient experimental data to adequately sample the diversity of human cancers requires high-throughput screens. However, the benefits of large data sets can only be exploited if the underlying experiments are reliable and robustly reproducible. In this work, we survey the agreement between two large, independent CRISPR-Cas9 knock-out data sets, generated at the Broad and Sanger institutes. Our findings illustrate a high degree of consistency in estimating gene dependencies between studies at multiple levels of data processing, albeit with the longer duration of the Broad screens leading to stronger dependencies for a number of genes. The data sets are concordant in identifying common dependencies and in recovering mean dependency signals. Their agreement is also striking in the more challenging task of identifying which cell lines are dependent on selective dependencies. Indeed, when we compared the two data sets at the level of gene dependency markers, we found consistent results at the level of common informative molecular features, as well as with respect to their quantitative strength.

We observed that one source of disagreement between the compared data sets is diffuse batch effects, visible when the whole profiles of individual cell lines are compared. Such effects can be readily corrected with standard methods without compromising data quality, thus making possible the integration and future joint analyses of the two compared data sets. Furthermore, much of this batch effect can be decomposed into a combination of two experimental choices: the sgRNA library and the duration of the screen. The effect of each choice on the mean depletion of genes is readily explicable and reproducible, as shown by screens of two lines performed at the Broad using the Sanger's library and screen duration, and a reciprocal screen performed at the Sanger with the Broad library and duration. Consequently, identifying high-efficacy reagents and choosing the appropriate screen duration should be given high priority when designing CRISPR-Cas9 knock-out experiments.

Unprocessed gene scores

Read counts for the Broad were taken from avana_public_19Q1 (ref. 31)
and filtered so that they contained only replicates corresponding to overlapping cell lines and only sgRNAs with one exact match to a gene. Read counts for Sanger were taken from Behan et al.13 and similarly filtered, then both sets of read counts were filtered to contain only sgRNAs matching genes common to all versions of the data. In both cases, reads per million (RPM) were calculated and an additional pseudo-count of 1 added to the RPM. Log fold change was calculated from the reference pDNA. In the case of the Broad, both pDNA and screen results fall into distinct batches, corresponding to evolving PCR strategies. Cell lines sequenced with a given batch were matched to pDNA profiles belonging to the same batch. Multiple pDNA RPM profiles in each batch were median-collapsed to form a single profile of pDNA reads for each batch. Initial gene scores for each replicate were calculated as the median of the sgRNAs targeting the corresponding gene. Each replicate's initial gene scores for both Broad and Sanger were then shifted and scaled so the median of nonessential genes in each replicate was 0 and the median of essential genes in each replicate was −1 (ref. 12). Replicates were then median-collapsed to produce gene-by-cell-line matrices.

Processed gene scores

Broad gene scores were taken from avana_public_19Q1 gene_effect31 and reflect CERES15 processing. The scores were filtered for genes and cell lines shared between institutes and with the unprocessed data, then shifted and scaled so the median of nonessential genes in each cell line was 0 and the median of essential genes in each cell line was −1 (ref. 12). Sanger gene scores were taken from the quantile-normalized averaged log fold-change scores, post-correction with CRISPRcleanR32, and globally rescaled by a single factor so that the median of essential genes across all cell lines was −1 (ref. 12).

Batch-corrected gene scores

The unprocessed sgRNA log FCs were mean-collapsed by gene and replicates. Data were quantile normalized for each institute separately before processing with ComBat using the R package sva. One batch factor was used in ComBat, defined by the institute of origin. The ComBat-corrected data were then quantile normalized to give the final batch-corrected data set.

Alternate conditions

Screens with alternate libraries, cell lines, and time points were processed similarly to the unprocessed data above.

Gene expression

log2(transcripts per million + 1) data were downloaded for the Broad from the Figshare repository for the Broad data set. For the Sanger data set, we used fragments per kilobase of transcript per million (FPKM) expression data from Cell Model Passports33. We added a pseudo-count of 1 to the FPKM values and transformed to log2. Gene expression values were quantile normalized for each institute separately. For the Sanger data, Ensembl gene ids were converted to Hugo gene symbols using the biomaRt package in R.

Guide efficacy estimates

On-target guide efficacies for the single-target sgRNAs in each library were estimated using Azimuth 2.0 (ref. 29) against GRCh38.

Comparison of all gene scores

Gene scores from the chosen processing method for both Broad and Sanger were raveled and Pearson correlations calculated between the two data sets. 100,000 gene-cell line pairs were chosen at random and density-plotted against each other using a Gaussian kernel with the width determined by Scott's rule34. All gene scores for essential genes were similarly plotted in Fig. 1b.
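The median-fold-change scoring and the essential/nonessential scaling described above can be sketched in a few lines; this assumes an sgRNA-by-replicate log fold-change matrix and an sgRNA-to-gene map, with illustrative names:

    import pandas as pd

    def gene_scores_from_guides(lfc: pd.DataFrame, guide_map: pd.Series) -> pd.DataFrame:
        """Collapse sgRNA log fold changes (sgRNA-by-replicate) to gene level
        by taking the median over the guides targeting each gene."""
        return lfc.groupby(guide_map).median()

    def scale_replicate(scores: pd.Series, essential, nonessential) -> pd.Series:
        """Shift and scale one replicate so nonessential genes have median 0
        and essential genes have median -1, as in both pipelines."""
        non_med = scores.loc[scores.index.intersection(nonessential)].median()
        ess_med = scores.loc[scores.index.intersection(essential)].median()
        return (scores - non_med) / (non_med - ess_med)

After the shift, the nonessential median is 0 by construction, and dividing by (non_med − ess_med), a positive quantity since essential genes are more depleted, places the essential median at exactly −1.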
Comparison of gene means

Cell line scores for each gene in both the Broad and Sanger data sets, with the chosen processing method, were collapsed to the mean score, and a Pearson correlation calculated.

Gene ranking, common essential identification

For each gene in the chosen data set, its score rank among all gene scores in its 90th percentile least depleted cell line was calculated. We call this the gene's 90th percentile ranking. The density of 90th percentile rankings was then estimated using a Gaussian kernel with width 0.1 and the central point of minimum density identified. Genes whose 90th percentile rankings fell below the point of minimum density were classified as common essential.

Identification of selective gene sets

Selective dependency distributions across cell lines were identified using a likelihood-ratio test as described in McDonald et al.21. For each gene, the log-likelihood of the fit to a normal distribution and to a skew-t distribution is computed using the R packages MASS35 and sn36, respectively. In the event that the default fit to the skew-t distribution fails, a two-step fitting process is invoked. This involves keeping the degrees of freedom parameter (ν) fixed during an initial fit and then using the parameter estimates as starting values for a second fit without any fixed values. This process repeats up to 9 times, using ν values in the list (2, 5, 10, 25, 50, 100, 250, 500, 1000) sequentially, until a solution is reached. The reported LRT score is calculated as follows:

$$\mathrm{LRT} = 2\left[\ln\left(\text{likelihood for skew-}t\right) - \ln\left(\text{likelihood for Gaussian}\right)\right]$$

The numerical optimization methods used for the estimates do not guarantee that the maximum of the objective function is reached. In a small number of cases, we failed to find a solution even with multiple attempts. NormLRT scores have been left blank for these genes. Genes with NormLRT scores greater than 100 and mean gene score greater than −0.5 in at least one institute's unprocessed data set were classified as SSDs.

Binarized agreement of SSDs

For each processing method, Broad and Sanger gene scores were concatenated. Scores for nonessential genes across all cell lines and both institutes were taken as the null distribution, and a left-tailed p-value calculated for each score. The resulting p-values for each processing method were converted to FDR using the Benjamini–Hochberg algorithm as implemented in the Python package statsmodels. The gene score threshold corresponding to an FDR of 0.05 or lower was used to binarize gene scores. These thresholds were −1.02 for unprocessed gene scores, −0.633 for processed gene scores, and −0.765 for corrected gene scores. Cohen's kappa was calculated for each gene individually. Fisher's exact test, precision, recall, and AUROC scores were calculated globally for all SSD sensitivities in the three data versions.

Cell line agreement analysis

To obtain the two-dimensional visualisations of the combined data set before and after batch correction, and considering different gene sets, we computed the sample-wise correlation distance matrix and used this as input into the t-distributed stochastic neighbor embedding (tSNE) procedure25, using the tsne function of the tsne R package, with 1000 iterations, a perplexity of 100 and other parameters set to their default value.
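An equivalent embedding can be sketched in Python with scikit-learn, feeding the same correlation-distance matrix to a tSNE with precomputed distances. This is an approximate stand-in for the R call described above, not the code used in the study (and exact parameter names can vary slightly across scikit-learn versions):

    import numpy as np
    import pandas as pd
    from sklearn.manifold import TSNE

    def tsne_embedding(profiles: pd.DataFrame, perplexity=100):
        """profiles: screen-by-gene matrix with Broad and Sanger rows
        concatenated. Embeds screens by pairwise correlation distance."""
        corr = np.corrcoef(profiles.to_numpy())
        dist = 1.0 - corr                       # correlation distance
        np.fill_diagonal(dist, 0.0)
        emb = TSNE(n_components=2, metric="precomputed", init="random",
                   perplexity=perplexity, n_iter=1000).fit_transform(dist)
        return pd.DataFrame(emb, index=profiles.index,
                            columns=["tsne1", "tsne2"])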
To evaluate genome-wide cell line agreement, we considered a simple nearest-neighbor classifier that, for each dependency profile of a given cell line in one of the two studies, predicted its matching counterpart in the other study. This prediction was based on the correlation distance between one profile and all the other profiles. To estimate the performance of this classifier, we computed a Recall curve for each of the 294 dependency profiles in the tested data set. Each of these curves was assembled by computing the number of observed true positives amongst the first k neighbors of the corresponding dependency profile (for k = 1–293). We then averaged the 294 resulting Recall curves into a single curve and converted it to percentages by multiplying by 100/294. Finally, we computed the area under the resulting curve and normalized it by dividing by 293. We considered the area under this curve (nAUC) as a performance indicator of the k-nearest-neighbor classifier.

Cell line profile agreement in relation to data quality

First, to estimate the initial data quality, we calculated true positive rates (TPRs, or Recalls) for the sets of significant dependency genes detected across cell lines within the two studies. To this aim, we used as positive controls a reference set of a priori known essential genes12. We assessed the resulting TPRs for variation before/after batch correction, and for correlations with the inter-study agreement.

Molecular-feature/dependency association tests

We used cell lines' binary event matrices based on mutation data, copy number alterations, the tissue of origin and MSI status, retaining the resulting set of 587 features that were present in at least 3 and in fewer than 144 of the cell lines. We performed a systematic two-sample unpaired Student's t-test (with the assumption of equal variance between compared populations) to assess the differential essentiality of each of the SSD genes across a dichotomy of cell lines defined by the status (present/absent) of each CFE in turn. SSD genes were those with NormLRT values greater than 100 in either institute. From these tests, we obtained p-values against the null hypothesis that the two compared populations had an equal mean, with the alternative hypothesis indicating an association between the tested CFE/gene-dependency pair. P-values were corrected for multiple hypothesis testing using Benjamini–Hochberg. We also estimated the effect size of each tested association by means of Cohen's Delta (ΔFC), i.e., the difference in population means divided by their pooled standard deviation. For gene expression analysis, we calculated the Pearson correlation across the cell lines between the SSD gene dependency profiles and the gene expression profiles from each institute. The significance of the correlation was assessed using the t-distribution (n − 2 degrees of freedom) and p-values were corrected for multiple hypothesis testing using the q-value method. For the agreement assessment via ROC indicators (Recall, Precision and Specificity), for each of the two studies in turn, we picked the most significant 20, 40, 60, 80, and 100% of associations as positive controls and evaluated the performance of a rank classifier based on the corresponding significance p-values obtained in the other study. For the analysis involving transcriptional data, we used the RNA-seq data from each institute for overlapping cell lines, which includes some sequencing files that have been used by both institutes and processed separately.
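The systematic CFE/dependency t-tests, Benjamini–Hochberg correction, and Cohen's Delta effect sizes can be sketched as below, assuming an SSD-gene-by-cell-line dependency matrix and a binary feature-by-cell-line CFE matrix; all names are illustrative:

    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    def cfe_associations(dep: pd.DataFrame, cfes: pd.DataFrame) -> pd.DataFrame:
        """Unpaired equal-variance t-tests of each SSD gene's scores across the
        dichotomy defined by each CFE, with BH-corrected p-values and Cohen's
        Delta (mean difference over pooled standard deviation)."""
        rows = []
        for feat in cfes.index:
            pos = cfes.columns[cfes.loc[feat] == 1]
            neg = cfes.columns[cfes.loc[feat] == 0]
            if len(pos) < 3 or len(neg) < 3:
                continue  # mirror the feature-frequency filter in the text
            for gene in dep.index:
                a = dep.loc[gene, pos].dropna()
                b = dep.loc[gene, neg].dropna()
                t, p = stats.ttest_ind(a, b, equal_var=True)
                pooled = np.sqrt(((len(a) - 1) * a.var() + (len(b) - 1) * b.var())
                                 / (len(a) + len(b) - 2))
                rows.append((feat, gene, p, (a.mean() - b.mean()) / pooled))
        out = pd.DataFrame(rows, columns=["feature", "gene", "p", "delta_fc"])
        out["fdr"] = multipletests(out["p"], method="fdr_bh")[1]
        return out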
Rank-based dependency significance and agreement

To identify significantly depleted genes for a given cell line, we ranked all the genes in the corresponding essentiality profile based on their depletion logFCs (averaged across targeting guides), in increasing order. We used this ranked list to classify genes from two sets of prior known essential (E) and non-essential (N) genes, respectively12. For each rank position k, we determined a set of predicted genes P(k) = {s ∈ E ∪ N : ϱ(s) ≤ k}, with ϱ(s) indicating the rank position of s, and the corresponding precision PPV(k) as:

$$PPV(k) = \left| P(k) \cap E \right| / \left| P(k) \right|$$

Subsequently, we determined the largest rank position k* with PPV(k*) ≥ 0.95 (equivalent to an FDR ≤ 0.05). Finally, a 5% FDR logFC threshold F* was determined as the logFC of the gene s such that ϱ(s) = k*, and we considered all the genes with a logFC < F* as significantly depleted at the 5% FDR level. For each cell line, we determined two sets of significantly depleted genes (at 5% FDR): B and S, for the Broad and Sanger data sets, respectively. We then quantified their agreement using the Jaccard index37 J(B, S) = |B ∩ S| / |B ∪ S|, and defined their disagreement as 1 − J(B, S). Summary agreement/disagreement scores were derived by averaging the agreement/disagreement across all cell lines.

sgRNA correlations

Broad and Sanger log fold-changes for their original screens were median-collapsed to guide-by-cell-line matrices. For each gene present in the unprocessed gene scores, a correlation matrix between all the sgRNAs targeting that gene in each guide-by-cell-line matrix was computed. The mean of the values in this matrix for each institute, excluding the correlations of sgRNAs with themselves, was retained. The mean sgRNA correlation within institutes was then calculated as the mean of the Broad and Sanger sgRNA correlation matrix means. The mean sgRNA correlation between institutes for each gene was calculated as the mean over all possible pairs of sgRNAs targeting that gene with one sgRNA chosen from Sanger and one from Broad.

Relating sgRNA depletion and efficacy

We chose the set of genes found to be essential in at least one unprocessed data set. The log fold-change of guides targeting those genes in each data set was calculated and compared to the guide's estimated on-target efficacy.

Difference in late essential gene scores between data sets

We randomly selected n genes, where n is the number of late essential genes, and calculated the difference in median gene score for those genes between the Broad and Sanger institutes. We repeated this 10,000 times to generate the null distribution for the median difference. No instances of the null were as extreme as the observed difference between median late essential scores. However, the null was well-approximated by a Gaussian distribution, which allowed us to extrapolate a p-value for the observed difference in medians.

Time point gene ontology analysis

We tested for enrichment of GO terms associated with genes showing a significant depletion in only one institute. To rule out differences due to the library, genes with significantly different guide efficacies were filtered from the analysis. Using the Azimuth scores, average (mean) efficacy scores for each gene at each institute were calculated.
A null distribution of differences in gene efficacy was estimated using genes not present in either institute-specific set (which were defined as depleted in at least 25% of cell lines). Institute-specific genes greater than 2 standard deviations from the mean of the null distribution were removed. For the filtered gene set, prior known essential and non-essential gene sets from ref. 32 were used to find significant depletions for each cell line and institute at 5% FDR. For each cell line, the genes identified as significantly depleted in only Broad or only Sanger were functionally characterized using GO enrichment analysis38. To this aim, we downloaded a collection of gene sets (one for each GO category) from the Molecular Signatures Database (MSigDB)28, and performed a systematic hypergeometric test to quantify the over-representation of each GO category for each set of study-exclusive dependency genes, per cell line. We corrected the resulting p-values for all the tests performed within each study using the Benjamini–Hochberg procedure39, and considered a GO category enriched in a cell line if the corrected p-value resulting from the corresponding test was < 0.05.

Principal component analysis of the batch effect

The Broad and Sanger unprocessed gene scores and the gene scores for the alternate conditions tested by both institutes were concatenated into a single matrix with a column for each screen. Principal components were found for the transpose of this matrix, where each row is a screen and each column a pseudogene. Components 1 and 2 were plotted for all original screens and the alternate screens for either HT-29 (Fig. 6a) or JIMT-1 (Supplementary Fig. 6a). The aspect ratio for the plot was set to match the relative variance explained by the first two principal components.

Consistency of time point and library effects on gene scores

To evaluate library differences, we took all screens that had been duplicated in each library with all other conditions (time point, clone, and screen location) kept constant. For each of these screens, we subtracted the gene scores of the version performed with the KY library from the version performed with the Avana library to create library difference profiles. For the case of Sanger's day-14 KY screen of the Sanger HT-29 clone, two versions exist: the original, and an alternative that was eventually grown out to 21 days. We used the alternate version of this screen to be consistent with the day-21 results. A correlation matrix of library difference profiles was then calculated and is plotted to the left of Fig. 6b. The procedure was repeated for time point differences, creating time point difference profiles by subtracting day-14 results from day-21 results for pairs of screen readouts that differed in time point but not library, clone, or screen location.

Matching experimental conditions

For the cell line HT-29, we took Sanger's original screen as a baseline. We then subtracted each of four Broad HT-29 screens from this baseline: the original (Avana library at day 21), the Avana library at day 14, the KY library at day 21, and the KY library at day 14, generating four arrays indexed by gene which form the y-axes in the succession of plots in Fig. 6c. We also computed the mean score of each gene across all original Broad screens and subtracted it from the mean score of each gene across all the original Sanger screens to form the x-axis of all four plots.
For each condition, the standard deviation of the HT-29 screen differences (y-axes) was computed along with the correlation of the HT-29 screen differences with the mean differences (x-axis). The plots themselves are Gaussian kernel density estimates. We repeated this process for JIMT-1 (Supplementary Fig. 6d) and then for HT-29 while swapping the roles of Broad and Sanger (Fig. 6d). For the Sanger alternate condition screens we used the Sanger clone of HT-29, and for its day 14 KY screen we used the Sanger's original HT-29 screen. Replication experiments The replication screens at Broad and Sanger were performed using the normal current protocol of the respective institution13,15 except with respect to the specifically noted changes to the library (and the associated primer sequences required for post-screen amplification of the sgRNA barcodes) and the time point. See Supplementary Methods for details. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. The data used for this paper have been posted to Figshare (https://doi.org/10.6084/m9.figshare.7970993.v1). Code availability Scripts to perform all analyses and generate figures are available at https://github.com/DepMap-Analytics/Comparative-Analysis. Prasad, V. Perspective: the precision-oncology illusion. Nature 537, S63 (2016). Barretina, J. et al. The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity. Nature 483, 603–607 (2012). Garnett, M. J. et al. Systematic identification of genomic markers of drug sensitivity in cancer cells. Nature 483, 570–575 (2012). Evers, B. et al. CRISPR knockout screening outperforms shRNA and CRISPRi in identifying essential genes. Nat. Biotechnol. 34, 631–633 (2016). Morgens, D. W., Deans, R. M., Li, A. & Bassik, M. C. Systematic comparison of CRISPR/Cas9 and RNAi screens for essential genes. Nat. Biotechnol. 34, 634–636 (2016). Shalem, O. et al. Genome-scale CRISPR-Cas9 knockout screening in human cells. Science 343, 84–87 (2014). Koike-Yusa, H., Li, Y., Tan, E.-P., Velasco-Herrera, M. D. C. & Yusa, K. Genome-wide recessive genetic screening in mammalian cells with a lentiviral CRISPR-guide RNA library. Nat. Biotechnol. 32, 267–273 (2014). Wang, T., Wei, J. J., Sabatini, D. M. & Lander, E. S. Genetic screens in human cells using the CRISPR-Cas9 system. Science 343, 80–84 (2014). Wang, T. et al. Gene essentiality profiling reveals gene networks and synthetic lethal interactions with oncogenic Ras. Cell 168, 890–903.e15 (2017). Shi, J. et al. Discovery of cancer drug targets by CRISPR-Cas9 screening of protein domains. Nat. Biotechnol. 33, 661–667 (2015). Tzelepis, K. et al. A CRISPR dropout screen identifies genetic vulnerabilities and therapeutic targets in acute myeloid leukemia. Cell Rep. 17, 1193–1205 (2016). Hart, T. et al. High-resolution CRISPR screens reveal fitness genes and genotype-specific cancer liabilities. Cell 163, 1515–1526 (2015). Behan, F. M. et al. Prioritisation of oncology therapeutic targets using CRISPR-Cas9 screening. Nature 568, 511–516 (2019). Tsherniak, A. et al. Defining a Cancer Dependency Map. Cell 170, 564–576.e16 (2017). Meyers, R. M. et al. Computational correction of copy number effect improves specificity of CRISPR-Cas9 essentiality screens in cancer cells. Nat. Genet. 49, 1779–1784 (2017). DepMap Achilles 19Q1 Public. https://doi.org/10.6084/m9.figshare.7655150.v1 (2019). DepMap at Broad Institute. Cancer Dependency Map. 
DepMap https://depmap.org/portal/ (2018).
18. DepMap at Sanger Institute. Project Score, part of the Sanger Cancer Dependency Map. https://score.depmap.sanger.ac.uk/; Sanger DepMap Portal, https://depmap.sanger.ac.uk/ (2019). Accessed 9 April 2019.
19. Johnson, W. E., Li, C. & Rabinovic, A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics 8, 118–127 (2007).
20. Cancer Cell Line Encyclopedia Consortium; Genomics of Drug Sensitivity in Cancer Consortium. Pharmacogenomic agreement between two cancer cell line data sets. Nature 528, 84–87 (2015).
21. McDonald, E. R. 3rd et al. Project DRIVE: a compendium of cancer dependencies and synthetic lethal relationships uncovered by large-scale, deep RNAi screening. Cell 170, 577–592.e10 (2017).
22. Haibe-Kains, B. et al. Inconsistency in large pharmacogenomic studies. Nature 504, 389–393 (2013).
23. Geeleher, P., Gamazon, E. R., Seoighe, C., Cox, N. J. & Huang, R. S. Consistency in large pharmacogenomic studies. Nature 540, E1–E2 (2016).
24. Mpindi, J. P. et al. Consistency in drug response profiling. Nature 540, E5–E6 (2016).
25. Bushati, N., Smith, J., Briscoe, J. & Watkins, C. An intuitive graphical visualization technique for the interrogation of transcriptome data. Nucleic Acids Res. 39, 7380–7389 (2011).
26. Iorio, F. et al. A landscape of pharmacogenomic interactions in cancer. Cell 166, 740–754 (2016).
27. Leek, J. T., Johnson, W. E., Parker, H. S., Jaffe, A. E. & Storey, J. D. The sva package for removing batch effects and other unwanted variation in high-throughput experiments. Bioinformatics 28, 882–883 (2012).
28. Subramanian, A. et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl Acad. Sci. USA 102, 15545 (2005).
29. Doench, J. G. et al. Optimized sgRNA design to maximize activity and minimize off-target effects of CRISPR-Cas9. Nat. Biotechnol. 34, 184 (2016).
30. Lagziel, S., Lee, W. D. & Shlomi, T. Inferring cancer dependencies on metabolic genes from large-scale genetic screens. BMC Biology 17, 37 (2019).
31. DepMap, B. DepMap Achilles 19Q1 Public. Figshare https://figshare.com/s/362d32844d53eb5753c5 (2019).
32. Iorio, F. et al. Unsupervised correction of gene-independent cell responses to CRISPR-Cas9 targeting. BMC Genomics 19, 604 (2018).
33. Garcia-Alonso, L. et al. Transcription factor activities enhance markers of drug sensitivity in cancer. Cancer Res. 78, 769–780 (2018).
34. Ramsay, P. H. & Scott, D. W. Multivariate density estimation, theory, practice, and visualization. Technometrics 35, 451 (1993).
35. Ripley, B. D. Modern Applied Statistics with S 4th edn (Springer, 2002).
36. Azzalini, A. The R package sn: the skew-normal and related distributions, such as the skew-t (version 1.5). http://azzalini.stat.unipd.it/SN (2017).
37. Jaccard, P. Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bull. Soc. Vaud. Sci. Nat. 37, 547–579 (1901).
38. Ashburner, M. et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat. Genet. 25, 25–29 (2000).
39. Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B Stat. Methodol. 57, 289–300 (1995).

This work was funded by Open Targets (OTAR2-055) to F.I. and (OTAR015) to M.J.G. and K.Y., by the Wellcome Trust grant no.
206194 to M.J.G., by Wellcome and the Estonian Research Council (IUT 34-4) to L.P., by grants U01 CA176058 and U01 CA199253 to W.C.H. and by the HL Snyder Foundation (W.C.H.).

Kosuke Yusa. Present address: Stem Cell Genetics, Institute for Frontier Life and Medical Sciences, Kyoto University, Kyoto, 606-8507, Japan

Broad Institute of MIT and Harvard, Cambridge, MA, 02142, USA: Joshua M. Dempster, Sasha Pantel, Thomas Green, John Krill-Burger, Scott T. Younger, Victor Zhivich, John G. Doench, Francisca Vazquez, Jesse S. Boehm, Todd R. Golub, William C. Hahn, David E. Root & Aviad Tsherniak

Wellcome Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridge, CB10 1SA, UK: Clare Pacini, Fiona M. Behan, Charlotte M. Beaver, Hanna Najgebauer, Felicity Allen, Emanuel Gonçalves, Rebecca Shepherd, Kosuke Yusa, Leopold Parts, Mathew J. Garnett & Francesco Iorio

Open Targets, Wellcome Genome Campus, Hinxton, Cambridge, CB10 1SA, UK

Department of Computer Science, University of Tartu, 50090, Tartu, Estonia: Leopold Parts

Dana-Farber Cancer Institute, Boston, MA, 02215, USA: Todd R. Golub & William C. Hahn

Human Technopole, 20157, Milano, Italy: Francesco Iorio

J.M.D., F.I. and A.T. conceived and designed the study. J.M.D. and C.P. conducted the analyses described under Results. J.M.D., C.P. and F.I. wrote the paper and produced the figures. A.T. wrote the paper. H.N. produced figures and curated data. J.M.D. munged and collated gene scores. C.P. munged and collated cell characterizations. J.K.-B. produced the script used to calculate NormLRT scores. V.Z., S.P., S.T.Y. and D.E.R. conducted the Broad's replications of Sanger screens, while F.M.B., R.S. and C.M.B. conducted Sanger's replications and curated corresponding data. J.G.D. and K.Y. provided ideas and discussed the integration of Avana and KY libraries, and T.G. provided the Azimuth scores for both. F.A., E.G., F.V., L.P., J.S.B., T.R.G., W.C.H. and M.J.G. edited the paper and contributed ideas on some of the analyses. J.S.B., T.R.G., W.C.H. and M.J.G. acquired funds and contributed to study supervision. A.T. and F.I. acquired funds and supervised the study.

Correspondence to Aviad Tsherniak or Francesco Iorio.

C.P., F.M.B., H.N., M.J.G. and F.I. receive funding from Open Targets, a public-private initiative involving academia and industry. K.Y. and M.J.G. receive funding from AstraZeneca. M.J.G. performed consultancy for Sanofi. J.G.D. and A.T. perform consulting for Tango Therapeutics. W.C.H. performs consulting for Thermo Fisher, AdjulB, MBM Capital, and Paraxel, and is a founder and scientific advisory board member of KSQ Therapeutics. T.R.G. performs consulting for GlaxoSmithKline, Sherlock Biosciences, and Foundation Medicine. F.I.
performs consultancy for the joint CRUK-AstraZeneca Functional Genomics Centre. All the other authors declare no competing interests.

Peer review information: Nature Communications thanks Stephane Angers and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Dempster, J.M., Pacini, C., Pantel, S. et al. Agreement between two large pan-cancer CRISPR-Cas9 gene dependency data sets. Nat Commun 10, 5817 (2019). doi:10.1038/s41467-019-13805-y

Received: 20 June 2019
What is meant by `DiracDelta'[t]`?

While calculating an inverse Laplace transform, Wolfram Alpha returned to me the following output:

7 + 2 DiracDelta[-1 + t] + 14 DiracDelta[t] + HeavisideTheta[-1 + t] + 16 DiracDelta'[t]

What does `DiracDelta'[t]` mean? A derivative of the Dirac delta function? Wouldn't that be infinite at $0$ and zero everywhere else? That is, basically the Dirac delta function itself?

dirac-delta wolfram-alpha

"Infinite at zero and zero everywhere else" is a woefully inadequate description of the Dirac delta. The best (and usually literal) definition of the Dirac delta is basically that the notation resembling an integral containing a Dirac delta is defined to mean evaluation: $$ \int_{-\infty}^{\infty} f(x) \delta(x-a) \, \mathrm{d}x := f(a) $$ whenever $f$ is continuous at $a$. Notation involving the derivative is defined by a similar formula: $$ \int_{-\infty}^{\infty} f(x) \delta'(x-a) \, \mathrm{d}x := -f'(a) $$ where $f$ is continuously differentiable at $a$. The idea behind the definition is that it is meant to invoke integration by parts; imagine the hypothetical calculation $$ \int_{-\infty}^{\infty} \left( f(x) \delta'(x-a) + f'(x) \delta(x-a) \right) \, \mathrm{d}x = (f(x) \delta(x-a))\big|_{x=-\infty}^{x=\infty} = 0, $$ so that $\int f \delta' = -\int f' \delta = -f'(a)$. There is a systematic approach to this sort of stuff: they're called distributions. On a suitable space of test functions, this partial integration formula is the definition of the derivative.

Hurkyl
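As a quick numerical sanity check of the defining formula (added here purely for illustration, in Python/numpy rather than Mathematica): approximate $\delta'$ by the derivative of a narrow Gaussian and watch the integral converge to $-f'(a)$.

```python
# Approximate delta' by the derivative of a Gaussian bump of width eps and
# verify that  integral f(x) delta_eps'(x - a) dx  ->  -f'(a)  as eps -> 0.
import numpy as np

def delta_prime(x, eps):
    g = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    return -x / eps**2 * g               # derivative of the Gaussian bump

f, a = np.cos, 1.0                       # here -f'(a) = sin(1) ~ 0.841471
x = np.linspace(a - 1.0, a + 1.0, 400_001)
dx = x[1] - x[0]
for eps in (0.1, 0.01, 0.001):
    approx = np.sum(f(x) * delta_prime(x - a, eps)) * dx
    print(f"eps={eps}: {approx:.6f}")
```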
Hilbert–Poincaré series

In mathematics, and in particular in the field of algebra, a Hilbert–Poincaré series (also known under the name Hilbert series), named after David Hilbert and Henri Poincaré, is an adaptation of the notion of dimension to the context of graded algebraic structures (where the dimension of the entire structure is often infinite). It is a formal power series in one indeterminate, say $t$, where the coefficient of $t^{n}$ gives the dimension (or rank) of the sub-structure of elements homogeneous of degree $n$. It is closely related to the Hilbert polynomial in cases when the latter exists; however, the Hilbert–Poincaré series describes the rank in every degree, while the Hilbert polynomial describes it only in all but finitely many degrees, and therefore provides less information. In particular the Hilbert–Poincaré series cannot be deduced from the Hilbert polynomial even if the latter exists. In good cases, the Hilbert–Poincaré series can be expressed as a rational function of its argument $t$.

See also: Hilbert series and Hilbert polynomial

Definition

Let K be a field, and let $V=\textstyle \bigoplus _{i\in \mathbb {N} }V_{i}$ be an $\mathbb {N} $-graded vector space over K, where each subspace $V_{i}$ of vectors of degree $i$ is finite-dimensional. Then the Hilbert–Poincaré series of V is the formal power series $\sum _{i\in \mathbb {N} }\dim _{K}(V_{i})t^{i}.$[1] A similar definition can be given for an $\mathbb {N} $-graded R-module over any commutative ring R in which each submodule of elements homogeneous of a fixed degree $n$ is free of finite rank; it suffices to replace the dimension by the rank. Often the graded vector space or module whose Hilbert–Poincaré series is considered has additional structure, for instance that of a ring, but the Hilbert–Poincaré series is independent of the multiplicative or other structure.

Example: Since there are ${\binom {n+k}{n}}$ monomials of degree $k$ in the variables $X_{0},\dots ,X_{n}$ (by induction, say), one can deduce that the sum of the Hilbert–Poincaré series of $K[X_{0},\dots ,X_{n}]$ is the rational function $1/(1-t)^{n+1}$.[2]

Hilbert–Serre theorem

Suppose M is a finitely generated graded module over $A[x_{1},\dots ,x_{n}]$, $\deg x_{i}=d_{i}$, with A an Artinian ring (e.g., a field). Then the Poincaré series of M is a polynomial with integral coefficients divided by $\prod (1-t^{d_{i}})$.[3] The standard proof today is an induction on n. Hilbert's original proof made use of Hilbert's syzygy theorem (a projective resolution of M), which gives more homological information.

Here is a proof by induction on the number n of indeterminates. If $n=0$, then, since M has finite length, $M_{k}=0$ if k is large enough. Next, suppose the theorem is true for $n-1$ and consider the exact sequence of graded modules (exact degree-wise), with the notation $N(l)_{k}=N_{k+l}$, $0\to K(-d_{n})\to M(-d_{n}){\overset {x_{n}}{\to }}M\to C\to 0$. Since length is additive over exact sequences, Poincaré series are also additive. Hence, we have: $P(M,t)=-P(K(-d_{n}),t)+P(M(-d_{n}),t)+P(C,t)$. We can write $P(M(-d_{n}),t)=t^{d_{n}}P(M,t)$. Since K is killed by $x_{n}$, we can regard it as a graded module over $A[x_{1},\dots ,x_{n-1}]$; the same is true for C. The theorem now follows from the inductive hypothesis.
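The example above can be checked numerically (this check is not part of the article; plain Python only): the number of degree-$k$ monomials must equal the $k$-th coefficient of $1/(1-t)^{n+1}$, and multiplying a power series by $1/(1-t)=1+t+t^{2}+\cdots$ is just taking prefix sums of its coefficients.

```python
# Compare dim_K K[X_0,...,X_n]_k = C(n+k, n) with the series 1/(1-t)^(n+1).
from math import comb

def inv_one_minus_t_pow(m, terms):
    """First `terms` coefficients of 1/(1-t)^m: m repeated prefix sums,
    since convolving with the all-ones geometric series is a prefix sum."""
    coeffs = [1] + [0] * (terms - 1)
    for _ in range(m):
        for k in range(1, terms):
            coeffs[k] += coeffs[k - 1]
    return coeffs

n, terms = 3, 8                                # K[X_0,...,X_3], degrees 0..7
dims = [comb(n + k, n) for k in range(terms)]  # number of degree-k monomials
print(dims == inv_one_minus_t_pow(n + 1, terms))  # True
```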
Chain complex

An example of a graded vector space is associated to a chain complex, or cochain complex, C of vector spaces; the latter takes the form

$0\to C^{0}{\stackrel {d_{0}}{\longrightarrow }}C^{1}{\stackrel {d_{1}}{\longrightarrow }}C^{2}{\stackrel {d_{2}}{\longrightarrow }}\cdots {\stackrel {d_{n-1}}{\longrightarrow }}C^{n}\longrightarrow 0.$

The Hilbert–Poincaré series (here often called the Poincaré polynomial) of the graded vector space $\bigoplus _{i}C^{i}$ for this complex is

$P_{C}(t)=\sum _{j=0}^{n}\dim(C^{j})t^{j}.$

The Hilbert–Poincaré polynomial of the cohomology, with cohomology spaces $H^{j}=H^{j}(C)$, is

$P_{H}(t)=\sum _{j=0}^{n}\dim(H^{j})t^{j}.$

A famous relation between the two is that there is a polynomial $Q(t)$ with non-negative coefficients, such that

$P_{C}(t)-P_{H}(t)=(1+t)Q(t).$

References

1. Atiyah & Macdonald 1969, Ch. 11.
2. Atiyah & Macdonald 1969, Ch. 11, an example just after Proposition 11.3.
3. Atiyah & Macdonald 1969, Ch. 11, Theorem 11.1.

• Atiyah, Michael Francis; Macdonald, I.G. (1969). Introduction to Commutative Algebra. Westview Press. ISBN 978-0-201-40751-8.
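As a small numerical sanity check of the relation $P_{C}(t)-P_{H}(t)=(1+t)Q(t)$ above (illustrative only, not part of the article; numpy assumed), take a random three-term complex $0\to C^{0}\to C^{1}\to C^{2}\to 0$: with $r_{j}=\operatorname{rank} d_{j}$, the difference polynomial has coefficients $(r_{0},\,r_{0}+r_{1},\,r_{1})=(1+t)(r_{0}+r_{1}t)$.

```python
# Build d0 with image inside ker(d1) so that d1 @ d0 = 0, then verify
# the coefficients of P_C - P_H equal those of (1+t)(r0 + r1 t).
import numpy as np

rng = np.random.default_rng(0)
d1 = rng.standard_normal((4, 6))                   # d1 : C^1 -> C^2
_, _, vt = np.linalg.svd(d1)
ker = vt[np.linalg.matrix_rank(d1):].T             # basis of ker d1
d0 = ker @ rng.standard_normal((ker.shape[1], 3))  # d0 : C^0 -> C^1
assert np.allclose(d1 @ d0, 0)

dims = [3, 6, 4]                                   # dim C^0, C^1, C^2
r0, r1 = np.linalg.matrix_rank(d0), np.linalg.matrix_rank(d1)
h = [3 - r0, (6 - r1) - r0, 4 - r1]                # dim H^0, H^1, H^2
diff = [c - hh for c, hh in zip(dims, h)]          # coeffs of P_C - P_H
print(diff == [r0, r0 + r1, r1])                   # True
```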
A method for small-area estimation of population mortality in settings affected by crises

Francesco Checchi (ORCID: 0000-0001-9030-5382), Adrienne Testa, Amy Gimma, Emilie Koum-Besson & Abdihamid Warsame

Population Health Metrics volume 20, Article number: 4 (2022)

Populations affected by crises (armed conflict, food insecurity, natural disasters) are poorly covered by demographic surveillance. As such, crisis-wide estimation of population mortality is extremely challenging, resulting in a lack of evidence to inform humanitarian response and conflict resolution. We describe here a 'small-area estimation' method to circumvent these data gaps and quantify both total and excess (i.e. crisis-attributable) death rates and tolls, both overall and for granular geographic (e.g. district) and time (e.g. month) strata. The method is based on analysis of data previously collected by national and humanitarian actors, including ground survey observations of mortality, displacement-adjusted population denominators and datasets of variables that may predict the death rate. We describe the six sequential steps required for the method's implementation and illustrate its recent application in Somalia, South Sudan and northeast Nigeria, based on a generic set of analysis scripts. Descriptive analysis of ground survey data reveals informative patterns, e.g. concerning the contribution of injuries to overall mortality, or household net migration. Despite some data sparsity, for each crisis that we have applied the method to thus far, available predictor data allow the specification of reasonably predictive mixed effects models of crude and under 5 years death rate, validated using cross-validation. Assumptions about values of the predictors in the absence of a crisis provide counterfactual and excess mortality estimates. The method enables retrospective estimation of crisis-attributable mortality with considerable geographic and period stratification, and can therefore contribute to better understanding and historical memorialisation of the public health effects of crises. We discuss key limitations and areas for further development.

Mortality estimation in crisis-affected populations

In populations exposed to conditions of crisis (armed conflict, food insecurity, natural disasters, etc.), estimates of population mortality provide a basis on which to predicate an appropriate humanitarian response [1, 2], and support advocacy and historical documentation [3, 4]. Over the past two decades, estimates of mortality have informed war crime prosecution in the former Yugoslavia [5], illuminated the toll of armed conflict in Darfur [6, 7], the Democratic Republic of Congo [8] and Iraq [9, 10], documented the impact of famine in Somalia [11] and, most recently, demonstrated the direct and indirect health impacts of the SARS-CoV-2 pandemic [12,13,14]. Crisis-attributable mortality is difficult to estimate, even in high-income countries [15, 16]. In low-income and/or insecure settings, additional challenges [4, 17] arise, including (i) lack of robust vital events registration; (ii) unfeasibility of representative primary data collection due to insecurity, lack of authorisations, funding constraints or other factors; and (iii) inability to collect robust retrospective data due to having to elicit information on demographic events over a long period in the past (e.g. > 2 years).
Response bias as questionnaires probe farther back in time, plus survival and selection biases caused by households disintegrating due to high mortality or migration, challenge survey validity [17]. Establishing a counterfactual (i.e. non-crisis) death rate presents a further challenge, particularly in very protracted crises (e.g. Afghanistan or the eastern Democratic Republic of Congo) where such a baseline has been unobservable for decades. Scope of this paper Here, we describe the design and implementation of a method that addresses the above challenges, and estimates crisis-attributable death rates and tolls based on previously collected data. Applications of previous iterations of the method in Somalia (2010–2012) [11] and South Sudan (2013–2018) [18] have been published elsewhere. Further applications in Somalia (2014–2018), Nigeria and the Democratic Republic of the Congo will be published separately. South Sudan, Somalia and Nigeria examples are however used here to illustrate the application and constraints of the method. General design Why a small-area estimation approach? Small-area estimation was developed in the United States to estimate characteristics of interest, e.g. smoking prevalence or poverty levels, for small geographical units (e.g. counties) without having to conduct primary data collection within each such unit [19]. Our method is designed to deliver estimates for small geographical and time strata based solely on existing data. General framework Crisis-attributable mortality can be defined conceptually as the difference between the number (or rate) of deaths that has actually occurred during the crisis and the number (rate) that would have occurred in the absence of the crisis. As illustrated hypothetically in Fig. 1, in a counterfactual (i.e. no-crisis) scenario it is plausible that the pre-crisis secular decline would have continued; the crisis has negated these improvements and effectively returned the population to a 'higher' baseline than pre-crisis; moreover, excess, crisis-attributable mortality may occur even years after crisis conditions (e.g. armed conflict) resolve (e.g. increased tuberculosis mortality due to higher transmission of M. tuberculosis when people lived in displacement camps years earlier, or the multi-generational effects of psychological stress). Illustration of actual and counterfactual mortality during and after a hypothetical crisis We wish to estimate excess mortality for the entire 'person-time' at risk during the crisis, but also for specific sub-periods and geographic units (these could be administrative level 2 entities such as counties or districts; they could also however be geographical units whose boundaries may correlate more closely with mortality risk, such as settlements for internally displaced persons (IDPs) or 'livelihood zones', namely areas characterised by a dominant economic activity, e.g. pastoralism or agriculture). Information on where and when mortality is highest may be useful to identify gaps in the humanitarian response or to better understand the dynamics of an armed conflict. More generally, we can write $$D_{E,kt} = D_{A,kt} - D_{C,kt} = y_{A,kt} N_{A,kt} - y_{C,kt} N_{C,kt}$$ where \(D\) is the death toll, \(y\) is the mean death rate and \(N\) the population at risk; \(E\), \(A\) and \(C\) denote excess, actual (i.e. what truly happened) and counterfactual (what would have happened in the absence of a crisis) levels; \(k\) is any geographic unit (e.g. a district), and \(t\) any time unit (e.g. 
a month) within the crisis period (thus, \(kt\), the smallest analysis stratum, could be a district-month). Note that \({N}_{C}\) may differ from \({N}_{A}\), for example because in a no-crisis counterfactual forced displacement would not have occurred. If the quantities on the right-hand side of Eq. (1) are all estimated, we can sum results for any \(kt\) strata for different aggregations of interest or to compute the overall death toll. Equation (1) also applies for age- or cause-specific mortality (e.g. among children under 5 years old; due to intentional injury), provided these stratifications are available or can also be estimated. Estimation steps Our adaptation of small-area estimation consists of using available data to fit and validate a statistical model (specific to each crisis) that predicts the death rate \({y}_{kt}\) as a function of several predictor variables; and applying this model to project \({y}_{A,kt}\) and \({y}_{C,kt}\) under actual (observed) and assumed counterfactual conditions. Separately, \({N}_{A,kt}\) and \({N}_{C,kt}\) are reconstructed based on growth rates and displacement patterns. Excess deaths are then computed by applying Eq. (1). Table 1 summarises the steps involved in the full application of the method. Data management details are omitted here, but annotated on R statistical scripts (see Declarations and Additional file 1: pages 11–13). Step 2, namely reconstructing population denominators, will be detailed in a separate paper. Table 1 Summary of estimation steps Defining the analysis person-time and strata Specifying the population and period for which estimates are sought, and the granularity with which these may be computed, determines most of the subsequent steps. In some scenarios, this will be straightforward (e.g. an entire country or a specific region is affected by armed conflict with a clear start and end date). In other cases, the analysis may be conducted to estimate mortality up to a certain time point in the crisis. The definition of 'crisis' also needs to be made explicit: for example, Somalia has experienced 30 years of armed conflict; against this backdrop, drought and flooding emergencies have repeatedly occurred. Our analyses to date in Somalia have aimed to estimate mortality attributable to exceptional food insecurity events (2010–2012, 2017–2018) [20] triggered by drought, i.e. above and beyond any excess deaths caused by the protracted conflict alone. Accordingly, we have defined the period of analysis as that over which key food security indicators and other markers of crisis conditions were reported to be unusually poor. In Nigeria, we wished to estimate mortality attributable to the armed conflict between the government and Boko Haram, which affects three states (Borno, Yobe, Adamawa) in the northeast: this is a more straightforward scenario in which a relatively recent baseline of no conflict precedes the crisis. Refugees who leave the crisis-affected region should also be considered within the study population. However, this bears several complexities: for example, refugees will be exposed to different risk factors and may paradoxically experience lower mortality than if they had remained in their country of origin, implying a negative excess mortality: this has been documented for South Sudanese refugees in Uganda [21], and could plausibly apply to the large Syrian refugee population now living in Europe. 
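As a sketch of how Eq. (1) is applied stratum by stratum and then aggregated over any grouping of interest (all numbers invented for illustration; pandas assumed here, whereas the authors' actual implementation is in R):

```python
# Eq. (1) per kt stratum: D_E = y_A * N_A - y_C * N_C, then summed.
# Units must match: here y is deaths per person-day and N is person-days.
import pandas as pd

strata = pd.DataFrame({
    "district": ["A", "A", "B", "B"],
    "month": [1, 2, 1, 2],
    "y_A": [0.0008, 0.0012, 0.0005, 0.0006],  # actual death rate
    "y_C": [0.0004, 0.0004, 0.0004, 0.0004],  # counterfactual death rate
    "N_A": [9.0e5, 8.5e5, 4.0e5, 4.0e5],      # actual person-days at risk
    "N_C": [9.2e5, 9.2e5, 4.0e5, 4.0e5],      # counterfactual person-days
})
strata["D_E"] = strata.y_A * strata.N_A - strata.y_C * strata.N_C
print(strata.groupby("district").D_E.sum())   # excess toll per district
print(strata.D_E.sum())                       # crisis-wide excess toll
```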
In practice, the person-time boundaries of the analysis and the smallest level of stratification (\(kt\)) may be constrained by data availability. However, if possible a 'buffer' period (e.g. 6–12 months pre-crisis) should be included in the analysis to allow exploration of lagged effects of predictors on mortality and to use 'baseline' observations to set counterfactual values for the predictors (see below, 'Excess mortality estimation' section). Furthermore, stratification should be as granular as possible to maximise observations available for model fitting and the utility of estimates. As detailed below, sample surveys conducted by various humanitarian actors are the commonest source of mortality ground data with which to fit and validate models. In a Somalia study (2010–2012) [11] we conducted in the aftermath of a severe famine, nearly all such surveys had as their sampling universe the intersection of regional and livelihood zone boundaries: for example, within Gedo region some surveys were designed to represent communities that predominantly relied on pastoralism, while other surveys covered IDPs or riverine agriculturalists. Most of the predictors and demographic estimates were also collected at or could be aggregated to this stratification level, and by month. Our chosen \(kt\) was thus regional livelihood zones and months (Table 2). Table 2 Geographic analysis strata, Somalia (2010–2012) [11] In more recent work, available data have supported stratification by level 2 administrative unit (counties and districts, respectively). Implementation of specific steps Data collection and management steps Mortality data Ground mortality observations are required to train and validate a predictive model. The Standardised Monitoring and Assessment of Relief and Transitions (SMART) initiative [22] has developed a globally applicable protocol for rapid surveys that primarily aim to estimate the prevalence of acute malnutrition, but often also include a questionnaire module that elicits information from sampled households on their demographic experience over a retrospective 'recall' period, typically 3–6 months long [23]. SMART surveys are highly standardised and conducted routinely in most humanitarian responses [24], typically at administrative level 2 or similarly small scale. Surveys mostly rely on two-stage cluster sampling, though some, e.g. in IDP camps, are exhaustive or use systematic random selection. Sample sizes of 300–1000 households and 20–30 clusters are typical, i.e. sampled households are only a small fraction of the total. Survey design and analysis are automated by Emergency Nutrition Assessment (ENA) software, reducing the potential for surveyor error [25]. We identified 205 analysis-eligible SMART surveys in Somalia (2010–2012), 210 in South Sudan (2013–2018), 91 in Somalia (2014–2018) and 70 in Nigeria (2016–2018). Despite these substantial numbers, geographic and period data coverage can be sparse, as illustrated in Fig. 2 for South Sudan. Coverage of SMART mortality surveys, by state and month, South Sudan, 2013–2018. Heat colours denote the percentage of the state's population that fell within the sampling frame of at least one survey After cleaning datasets to resolve errors (e.g. 
values out of the allowed range), the crude death rate (CDR), under 5 years death rate (U5DR or CDR among the population aged under 5 years), crude birth rate, in-, out- and net migration rate, and, for individual questionnaire surveys only, cause- and gender-specific death rates may be computed (Additional file 1: page 2 and Table S1). The CDR and U5DR in particular are widely used by humanitarian actors to benchmark the severity of a crisis in health terms [1]. Inspection of crude patterns in survey indicators may be informative: for example, in South Sudan many surveys indicated high injury-attributable death rates and relative risks of dying among males, compared to females (Fig. 3). Trends in selected survey-estimated indicators, South Sudan, 2013–2018. Each dot-line segment denotes the recall period of one survey. Panel A death rate due to injury trauma per 10,000 person-days. Panel B net household migration rate per 1000 person-years Humanitarian surveys have varying robustness [26, 27]. While SMART survey reports do not systematically report quality issues, they should nonetheless be scrutinised to identify potential biases, particularly any restriction of the effective sampling frame to only a fraction of the intended sampling universe, due for example to insecurity or inaccessibility. We attribute to each survey \(s\) a weight \({w}_{s}={{w}_{B,s}w}_{Q,s}\), where\({w}_{B,s}\), a representativeness weight, is the approximate fraction of the sampling universe that was actually included in the sample, as per the survey's report (for example, if a report states that the sampling frame excluded 3 out of 5 districts, we set \({w}_{B,s}=0.4\); where an unspecified number of sampling units are excluded from the sampling frame, we assume \({w}_{B,s}= 0.5\)); and \({w}_{Q,s}\) is a quality weight derived from the dataset (see Additional file 1: page 3). Predictor data If the statistical objective of analysis is merely to predict the death rate, any set of predictor variables that does so accurately, whatever their causal relationship with mortality, may be appropriate. However, choosing predictors that are causally related to mortality, or proxies for mortality risk determinants, is likely to enhance predictive power and help assess the model's internal validity. To this end, we have defined a generic framework of factors leading to crisis mortality (Additional file 1: Figure S1). At least some of the selected predictors should be related to plausible drivers of excess mortality risk: for example, in a drought-triggered food security crisis these might include rainfall, food purchasing power, burden of malnutrition and the incidence of epidemics (cholera, measles); in an armed conflict, the intensity of violence and disruptions to public health services might be more relevant. Identifying such 'crisis-specific' predictors is critical, as the method defines no-crisis scenarios by specifying counterfactual values for these very predictors. In armed conflict settings and humanitarian responses, data collection is often unsystematic and disrupted [28]. In our experience to date, data are available for only few causal factors, and negotiation with agencies and humanitarian coordination mechanisms holding non-public datasets occupies a large share of analyst time. 
Such datasets generally have poor integrity; they are typically entered onto spreadsheet software without standardisation of geographical nomenclature, value cell or formula protections, variable dictionaries or automatic error checking—thus necessitating extensive curation. Missingness is a common problem (Additional file 1: Figures S2 and S3). We retain potential predictor datasets by applying a '70–70–70' rule, namely ≥ 70% complete for ≥ 70% of \(k\) and ≥ 70% of \(t\). Remaining missingness is resolved through imputation, either statistical or manual (i.e. based on contextual knowledge). In order to reduce the influence of outliers (some of which may be data entry errors), where appropriate we apply moderate smoothing or running means to time series. Details of predictors considered are presented in crisis-specific papers; Table 3 shows predictors included in the final models for each of the crises studied thus far. Table 3 Predictors included in the final models of CDR, by crisis Analysis steps Predictive model fitting If the raw datasets of mortality surveys are mostly unavailable, only stratum-level regression is feasible (Additional file 1: page 9). If raw data for most mortality surveys are available, household-level regression may be undertaken. SMART surveys do not report the exact date of deaths within the recall period: therefore, we merge predictor with survey data by computing the former's weighted mean over the survey's recall period. The data structure is partly longitudinal: for example, in Nigeria, five consecutive survey rounds took place during 2016–2019. While each survey round drew an independent sample, most Local Government Areas (LGAs; administrative level 2 units) hosted survey clusters during each round. In Somalia, some surveys were only representative of IDP settlements or urban areas within districts: we assume simplistically that district-wide predictor values also apply to these populations. We use a generalised linear model with weights \({w}_{s}\) (see above) and a quasi-Poisson distributional assumption to account for overdispersion in the death count outcome. 
The model's formula is thus: $$\log d_{{i,j,k,T_{r,s} }} = x_{{1,k,T_{r,s} }} \beta_{1} + x_{{2,k,T_{r,s} }} \beta_{2} + x_{{3,k,T_{r,s} }} \beta_{3} \ldots + x_{{p,k,T_{r,s} }} \beta_{p} + u_{j} + u_{k} + \log \Pi_{{i,j,k,T_{r,s} }} + \epsilon_{i,j,k}$$ where \({d}_{i,j,k,{T}_{r,s}}\) is the number of deaths in household \(i\) within survey cluster \(j\) and geographic stratum \(k\) occurring during the recall period \({T}_{r}\) of survey \(s\), where \(r\) means recall; \({x}_{1,k,{T}_{r,s}}, {x}_{2,k,{T}_{r,s}}, {x}_{3,k,{T}_{r,s}}\dots {x}_{p,k,{T}_{r,s}}\) are the values of predictors \({x}_{1}\),\({x}_{2}\), \({x}_{3}\dots {x}_{p}\) averaged over the survey's recall period, and for stratum\(k\); \({\beta }_{1}\), \({\beta }_{2}\), \({\beta }_{3}\dots {\beta }_{p}\) etc., are the corresponding fixed-effect linear coefficients; \({u}_{j}\) and \({u}_{k}\) are, respectively, random effects for cluster \(j\) and stratum \(k\), assumed to follow a normal distribution with mean 0 (\({u}_{j}\sim \mathcal{N}(0,{{\sigma }_{{u}_{j}}}^{2}\)) and \({u}_{k}\sim \mathcal{N}(0,{{\sigma }_{{u}_{k}}}^{2}\))), and capturing a plausible hierarchy of data as well as the repeated nature of observations; \(\mathrm{log}{\Pi }_{i,j,k,{T}_{r,s}}\) is an offset to account for varying household person-time \(\Pi\) at risk (Additional file 1: Table S1); and \(\epsilon_{i,j,k}\) is the residual error not explained by the model. We validate candidate models for out-of-sample prediction through k-fold cross-validation (CV; partition of data into folds is at the \({k,T}_{r,s}\) level given predictors are not specified below this level). We use the mean Dawid–Sebastiani score (\(\mathrm{DSS}\)) [29] as a proper scoring rule appropriate for count outcomes to evaluate model fit on the training data and on CV (in the latter case, we take the mean \(\mathrm{DSS}\) across all folds). After exploratory analysis, where possible we select between maintaining the continuous version of the predictor or categorising into bins, as well as alternative lags, based on the lowest \({\mathrm{DSS}}_{\mathrm{CV}}\), and screen out predictors that are not significantly better-fitting than the null model based on an F-test p-value threshold. We fit each possible combination of remaining predictors (\(X\)predictors = \({2}^{X}\) possible combinations) and shortlist candidate models whose \(\mathrm{DSS}\) is within a given bottom quantile. We select the final set of predictors based on \({\mathrm{DSS}}_{\mathrm{CV}}\), plausibility considerations and whether they are crisis-specific (see above). We test for plausible interactions and, lastly, add random effects, retaining the mixed model if its \({\mathrm{DSS}}_{\mathrm{CV}}\) improves on the fixed-effects alternative. In practice, a mixed model may be of limited utility if most prediction happens for person-time with new levels of the random effect (e.g. in geographic strata not covered by any survey used to train the model on). As an example, we provide in Table 4 model coefficients and performance metrics for South Sudan, all computed based on observations and predictions aggregated at the \({k,T}_{r,s}\) level; predictive accuracy on cross-validation is shown in Fig. 4. As shown, the DSS, which, like other prediction scores, quantifies the error between observations and predictions, increases only slightly on CV, indicating that the model only marginally overfits data and is valid when used out-of-sample. There is also little evidence of predictive bias. 
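For concreteness, a minimal sketch of the two scoring ingredients described above: the mean Dawid–Sebastiani score under a quasi-Poisson predictive variance, and cross-validation folds that hold out whole strata. This is an illustration in Python with invented numbers, not the authors' R implementation.

```python
import numpy as np

def mean_dss(y, mu, dispersion=1.0):
    """Mean Dawid-Sebastiani score, ((y-mu)/sigma)^2 + 2 log sigma, with a
    quasi-Poisson predictive variance sigma^2 = dispersion * mu."""
    var = dispersion * mu
    return float(np.mean((y - mu) ** 2 / var + np.log(var)))

def grouped_folds(groups, k=10, seed=1):
    """Boolean test masks holding out whole strata (the k, T_r,s level)."""
    rng = np.random.default_rng(seed)
    uniq = rng.permutation(np.unique(groups))
    for fold in np.array_split(uniq, k):
        yield np.isin(groups, fold)

y = np.array([0, 2, 1, 4, 3])             # observed death counts
mu = np.array([0.5, 1.8, 1.2, 3.5, 2.9])  # model-predicted means
print(mean_dss(y, mu, dispersion=1.3))
for mask in grouped_folds(np.array(["A", "A", "B", "C", "C"]), k=3, seed=0):
    print(mask)                            # fit on ~mask, score on mask
```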
Aside from moderately good performance, model coefficients support model validity: mortality increases with insecurity and where measles epidemics are present, but decreases if people are living in Protection of Civilians camps (in South Sudan, these places afforded relative safety and more intense humanitarian services) and as purchasing power improves. Table 4 Final model to predict crude death rate, South Sudan (2013–2018) Predicted versus observed numbers of deaths per stratum (county), South Sudan, 2013–2018, based on ten-fold cross-validation. The red line indicates perfect fit Excess mortality estimation In our framework, excess mortality estimation requires projecting the death toll in counterfactual no-crisis scenarios. These scenarios should specify counterfactual values for all crisis-specific predictors included in the final models, and for the population denominators. Several approaches to set counterfactual values may be used: (i) in the absence of a crisis, it may be assumed that certain predictors or types of displacement would have taken a zero value: for example, epidemics (e.g. cholera, measles) that are known to be associated with extreme food insecurity crises might not have occurred; similarly, no war-related displacement would have happened; (ii) pre-crisis values of the predictors, if available, may be adopted as counterfactuals: for some predictors (e.g. market prices), we use the local average (e.g. the district median prior to the crisis' start); for others (e.g. rainfall), seasonality should also be considered; (iii) if no pre-crisis data are available, levels from reasonably comparable regions within the country that are not affected by the crisis may instead be considered. Table 5 shows 'most likely' counterfactual assumptions for the South Sudan analysis we previously conducted. To explore uncertainty in these assumptions, we also define reasonable best- and worst-case scenarios. Table 5 Most likely scenario counterfactual assumptions, South Sudan (2013–2018) To propagate error in the model predictions of \({y}_{A,k,t}\) and \({y}_{C,k,t}\) into final estimates, we can set up a bootstrap simulation that, for a large number of iterations and each \(kt\) stratum, implements Eq. (1) by drawing random values from the models' normal distribution of log standard errors. Outputs of each iteration are then summed across all \(kt\) or for specific aggregations of interest (e.g. a single year within the crisis period), and point estimates and 95% confidence intervals are computed as the median, 2.5th and 97.5th percentiles of the resulting distribution of iteration sums. Note that if counterfactual population denominators are considerably different from the actuals (e.g. if large-scale displacement outside the region of interest has occurred), comparing actual and counterfactual mortality is fraught due to the difference in at-risk populations: we therefore scale excess death rates to the actual population denominators. Sensitivity analyses While a number of sensitivity analyses may be conducted to explore estimate uncertainty, we focus here on two particularly important issues. Population denominator uncertainty Most displacement data in crisis settings do not arise from statistically robust estimation methods. Over-reporting of population figures may occur if population counts are perceived as registration for relief allocation [30]. Conversely, insecurity and lack of connectivity may result in undetected population movements. 
We thus explore combinations of sensitivity values for both displacement and demographic estimates (as a ratio of true to reported values, where values < 1 indicate over-reporting, and vice versa), and re-run analysis accordingly. Under-estimation of mortality in surveys In previous South Sudan work, possible under-estimation of deaths among children under 5 years has been noted, as indicated by a low ratio of under 5 years to all-age deaths and low proportion of infant deaths (Table 6). Similar concerns have been raised in Yemen [31]. Under-reporting of infant and particularly neonatal deaths is plausible, due to stigma and/or emotional trauma associated with losing a young child or insufficient probing during questionnaire administration. We thus re-run analysis after augmenting the model training data (number of deaths and person-time within surveyed households) based on a varying assumed proportion of all deaths that are unobserved (Additional file 1: page 10). Table 6 Average survey-estimated crude death rate per 10,000 person-days, under 5 years death rate per 10,000 person-days and percentage of infant deaths among all deaths below 5 years of age, by country Advantages of the method The approach we have described can efficiently reconstruct the evolution of mortality across long retrospective periods and large areas, including where ground data collection would be unfeasible due to inaccessibility or the difficulty of asking households to recall events over a long recall period; in South Sudan, a setting with virtually no vital events registration, our application of the method generated evidence supporting a large excess death toll (about 380,000, half attributable to intentional injuries) attributable to 5 years of war, that might otherwise have evaded historical documentation forever. Somalia estimates (2010–2012) documented the impact of one of the worst famines in the past decades. Predictive models underlying the estimates have quantifiable external validity. While predictive power is ultimately their most important attribute, observing the directionality of coefficients can help to appraise internal validity, particularly if dose–response associations are noted. To our knowledge, no other studies have developed statistical models that predict with reasonable accuracy the crude or under 5 years death rate among some of the world's most vulnerable populations. A known challenge of crisis-attributable mortality estimation is defining an appropriate counter-factual: our method achieves this by generating non-crisis death tolls through the same statistical processes that result in the estimate of actual mortality, yielding meaningful confidence intervals. It explicitly links the definition of the crisis with the choice of counterfactual predictors and values, drawing upon a causal framework of how excess mortality comes about and contextual understanding of the crisis itself. Lastly, the method does not require any primary data collection. The method's main limitations reflect sources of unknown error in input data: (i) error in the predictor data, for example arising from differences in the way predictors are measured over time or in different locations; random error would result in underestimation of associations between predictors and mortality, or 'regression dilution' in predictive terms; bias could cause over- or underestimation; (ii) bias in mortality data, e.g. 
due to problems with under-ascertainment of deaths (see above), which survey quality weights may reduce but not eliminate; (iii) nonparametric uncertainty around population and displacement estimates; (iv) demographic projections based on inaccurate assumed growth rates (both (iii) and (iv) will be discussed in a separate paper); (v) inappropriate assumptions on counterfactual conditions; and (vi) omission of excess mortality among people who migrate out of the affected region (e.g. refugees), or due to long-term impacts of the crisis beyond its resolution. These limitations imply that estimates should be interpreted with caution, with reference to confidence intervals and after thorough exploration of uncertainty through alternative counterfactual scenarios and sensitivity analyses. Perhaps the most important limitation among the above concerns how counterfactual conditions are specified. Varying predictor values to represent no-crisis conditions presents analogies with both interrupted time series [32] and growth models [33]. However, our approach quantifies the effects of multi-factorial and dynamic crises rather than a single public health intervention implemented in a fairly stable setting: as such, our estimates rely heavily on a few model predictors faithfully representing a more complex system; moreover, counterfactual values for many predictors (e.g. food security, vaccination coverage) are not simply zero, as in the case of a counterfactually absent intervention, but rather some quantity relative to the actual levels. The method's applicability is limited by the following data requirements: (i) at least some ground mortality information arising from a population-based method of recognised validity, e.g. a survey or prospective surveillance system. Such data should be granular in nature, i.e. representative of small geographic units and time periods (alternatively, one could use large-area surveys as long as the location of surveyed communities is reported in the dataset). Some documentation (e.g. survey reports) should be available to scrutinise methods; (ii) data covering the entirety or most of the person-time of interest for at least a few variables that may plausibly be expected to predict mortality. The system for measuring these predictors should have remained consistent over time. The pattern of data missingness should be mostly random: missingness clustered in specific areas or periods (particularly at the start or end of the time series, or where mortality data are also least available) makes imputation harder and more bias-prone; (iii) reasonable demographic estimates based on a census or similarly robust data collection exercise, performed no more than a few years prior to the analysis; in addition, data on displacement (including both the geographic unit of origin and that of arrival) covering most or all of the person-time should be available, or composable from existing reports and databases. Minimal data requirements, e.g. how many ground surveys or predictor variables are needed, are difficult to establish a priori: the predictive power of the model is a function not just of the amount of data, but also of the extent to which these data capture population variability and the local strength of correlation between predictors and mortality. As such, an additional limitation of the method is that the precision, and thus interpretability, of estimates arising from it may only become clear a posteriori. 
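To make the error-propagation bootstrap described under 'Excess mortality estimation' concrete, here is a minimal sketch (invented inputs; Python/numpy here, whereas the authors' implementation is in R). Rates are taken as per 10,000 person-days and denominators as person-days, so each stratum's deaths are divided by 10,000.

```python
# Percentile bootstrap over kt strata: draw actual and counterfactual rates
# from normal distributions on the log scale, apply Eq. (1), sum, and take
# the 50th, 2.5th and 97.5th percentiles of the resulting sums.
import numpy as np

rng = np.random.default_rng(7)
log_y_A = np.log([0.80, 1.20, 0.50]); se_A = np.array([0.10, 0.15, 0.20])
log_y_C = np.log([0.40, 0.40, 0.40]); se_C = np.array([0.12, 0.12, 0.12])
N_A = np.array([9.0e5, 8.0e5, 4.0e5])   # actual person-days at risk
N_C = np.array([9.2e5, 9.2e5, 4.0e5])   # counterfactual person-days

sums = np.empty(10_000)
for b in range(sums.size):
    y_A = np.exp(rng.normal(log_y_A, se_A))
    y_C = np.exp(rng.normal(log_y_C, se_C))
    sums[b] = np.sum((y_A * N_A - y_C * N_C) / 1e4)  # rates per 10,000 p-d
point, lo, hi = np.percentile(sums, [50, 2.5, 97.5])
print(f"excess deaths: {point:.0f} (95% CI {lo:.0f} to {hi:.0f})")
```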
Computational implementation With the exception of step 2 (population denominator reconstruction), for which only crisis-specific analysis methods appear feasible, we have developed generic R analysis scripts that implement estimation steps for any crisis setting and generate output datasets, tables and graphs (see Additional file 1: pages 10–13 and https://github.com/francescochecchi/mortality_small_area_estimation). The analyst interacts with these scripts through Microsoft Excel spreadsheets containing input datasets and various parameters to control the analysis. We are currently testing an extension of the method for forecasting mortality over short time horizons of 3–6 months: this could provide an efficient means to do real-time estimation across the crisis-affected region, thereby generating information for decision-makers tasked with allocating humanitarian resources. Key requirements for such an application would be immediate predictor data sharing and standing capacity to implement analysis. Other improvements to the method are worth exploring. As instances of its use accumulate, a Bayesian estimation framework specifying informative priors for key predictor coefficients (e.g. armed conflict intensity) may be attractive. Improvements to model fitting could include machine learning techniques or Bayesian model averaging; due to limited resources, we have not systematically compared our generalised linear model with any of these alternatives. Indeed, these further developments will require dedicated scientific resources and buy-in from humanitarian stakeholders who hold access to key input data. Disclaimer Geographical names and boundaries presented in this paper are used solely for the purpose of producing scientific estimates, and do not necessarily represent the views or official positions of the authors, the London School of Hygiene and Tropical Medicine, any of the agencies that have supplied data for this analysis, or the donors. The authors are solely responsible for the analyses presented here, and acknowledgment of data sources does not imply that the agencies or individuals providing data endorse the results of the analysis. The data that support the findings of this study are available from various United Nations and non-governmental agencies, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not all publicly available. Data are however available from the authors upon reasonable request and with permission of the above agencies. Furthermore, we have uploaded curated R scripts and complete datasets for Somalia on https://github.com/francescochecchi/mortality_small_area_estimation (also see Additional file 1: pages 10–13). These materials should enable independent replication of all our analysis steps. Data will be made available to the extent possible as part of the publication of country-specific papers. CDR: Crude death rate CV: Cross-validation DSS: Dawid–Sebastiani score ENA: Emergency Nutrition Assessment IDP: Internally displaced person LGA: Local Government Area (Nigeria) SMART: Standardised Monitoring of Relief and Transitions initiative U5DR: Under 5 years death rate Checchi F. Estimation of population mortality in crisis-affected populations: guidance for humanitarian coordination mechanisms. Geneva: World Health Organization; 2018. https://www.who.int/health-cluster/resources/publications/LSHTM-Mortality-Estimation-Options-oct2018.pdf. Heudtlass P, Speybroeck N, Guha-Sapir D. 
\begin{document} \title{On the tame kernels of imaginary cyclic quartic fields with class number one} \author{Zhang Long} \address{School of Mathematics and Statistics, Qingdao University, Qingdao 266071, P.R. China; Institute of Applied Mathematics of Shandong, Qingdao University, Qingdao 266071, P.R. China} \curraddr{} \email{zhanglong\[email protected]} \thanks{} \author{Xu Kejian} \address{School of Mathematics and Statistics, Qingdao University, Qingdao 266071, P.R. China; Institute of Applied Mathematics of Shandong, Qingdao University, Qingdao 266071, P.R. China} \curraddr{} \email{[email protected]} \thanks{} \subjclass[2010]{Primary 19C99, 19F15.} \date{} \dedicatory{} \keywords{tame kernel, cyclic quartic field, multi-threaded parallel computing, Object-Oriented Programming} \begin{abstract} Tate first proposed a method to determine $K_2\mathcal{O}_F,$ the tame kernel of $F,$ and gave concrete computations for some special quadratic fields with small discriminants. After that, many examples of quadratic fields with larger discriminants were given, and similar work has been done for cubic fields and for some special quartic fields with small discriminants. In the present paper, we investigate the more general imaginary cyclic quartic fields $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ with class number one and large discriminants. The key problem is how to decrease the huge theoretical bound appearing in the computation to a manageable one, and the main difficulty is how to deal with the large-scale data emerging in the process of computation. To solve this problem we have established a general architecture for the computation; in particular, we have done the following: (1) the PARI library functions are invoked in C++ code; (2) the parallel programming approach is used in the C++ code; (3) in the design of the algorithms and the code, the object-oriented viewpoint is used, so an extensible program is obtained. As an application of our program, we prove that $K_2\mathcal{O}_F$ is trivial in the following three cases: $B=1,D=2$ or $B=2, D=13$ or $B=2, D=29.$ In the last case, the discriminant of $F$ is 24389; hence, we can claim that our architecture also works for the computation of the tame kernel of a number field with discriminant less than 25000. \end{abstract} \maketitle \begin{section}{Introduction} Let $F$ be a number field and $\mathcal{O}_{F}$ the ring of algebraic integers of $F,$ and let $K_{2}\mathcal{O}_{F}$ denote the $K_2$ of $\mathcal{O}_{F}.$ Garland \cite{Garland001} proved that $K_{2}\mathcal{O}_{F}$ is a finite abelian group. Moreover, $K_{2}\mathcal{O}_{F}$ can be regarded as a tame kernel. In fact, let $K_2F$ be the Milnor $K_2$-group, and let $k_v=\mathcal{O}_{F}/\mathcal{P}_v$ and $k^{*}_v$ the multiplicative group of $k_v$, where $\mathcal{P}_v$ is the prime ideal corresponding to a finite prime place $v.$ Then we have the well-known tame homomorphism: \begin{equation*} \begin{split} \partial_{v}:K_{2}F\rightarrow{k^{*}_v} \end{split} \end{equation*} which is defined by \begin{equation*} \begin{split} \partial_{v}(\{x,y\})={(-1)^{v(x)v(y)}\frac{x^{v(y)}}{y^{v(x)}}(\mbox{mod}\, \mathcal{P}_{v})}, \end{split} \end{equation*} where $v(x), v(y)$ denote the valuations of $x,y$ with respect to the prime $v$ respectively, and thus we have $$\partial=\bigoplus_{v}\partial_{v}:K_{2}F\rightarrow{\bigoplus_{v}k^{*}_{v}},$$ where $v$ runs over all finite places.
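For instance, to illustrate the definition in the simplest case $F=\mathbb{Q}$: if $v$ corresponds to an odd prime $p$ and $u\in\mathbb{Z}$ is prime to $p$, then $v(p)=1$ and $v(u)=0,$ so $$\partial_{p}(\{p,p\})=(-1)^{1\cdot 1}\frac{p^{1}}{p^{1}}\,(\mbox{mod}\, p)=-1\,(\mbox{mod}\, p),\qquad \partial_{p}(\{u,p\})=(-1)^{0}\frac{u^{1}}{p^{0}}\,(\mbox{mod}\, p)=u\,(\mbox{mod}\, p).$$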
The kernel ker$\partial$ is called the tame kernel of the field $F.$ D. Quillen \cite{Quillen} proved that ker$\partial=K_2\mathcal{O}_F.$ There is no effective algorithm for determining the tame kernel of a given number field directly, because it is defined non-effectively. The first method of determining the tame kernel of a given number field was proposed by J. Tate \cite{Tate001}. Now, we describe Tate's method in more detail. Let $Nv$ be the number $|k_v|,$ which is called the norm of $v$, and let $v_{1},v_{2},v_{3}, \ldots,v_n,\ldots$ be all finite places of $F$ ordered in such a way that $Nv_{i}\leq{Nv_{i+1}}$, for $i=1,2,3,\cdots.$ Let $ S_{m}=\{v_{1},\cdots,v_{m}\}$ ($S_{0}= \emptyset$), and let $$ \mathcal{O}_{m}=\{a\in{F}: v(a)\geq0,v\not\in{S_{m}}\},$$ \begin{equation*} \begin{split} U_{m}=\{a\in{F}: v(a)=0,v\not\in{S_{m}}\}. \end{split} \end{equation*} Thus $ \mathcal{O}_{0}$ and $U_{0}$ are just the ring of algebraic integers and the group of units respectively. \par Let $K_{2}^{S_m}F$ be the subgroup of $K_{2}F$ generated by symbols $\{x,y\}$, where $x,y\in{U_{m}}$. Then we have $K_{2}F=\bigcup_{m=1}^{\infty}K_{2}^{S_m}F.$ Clearly, $\partial_{v_{m}}$ induces the homomorphism \begin{equation*} \begin{split} \partial_{v_{m}}:\frac{K_{2}^{S_m}F}{K_{2}^{S_{m-1}}F} \longrightarrow {k^{*}_{v_{m}}}. \end{split} \end{equation*} Bass and Tate \cite{Bass001} proved that for sufficiently large $m,$ $\partial_{v_{m}}$ is an isomorphism, which implies \begin{equation*} \begin{split} K_{2}\mathcal{O}_{F}=\mbox{ker}\Big(\partial: K_2^{S_m}F\longrightarrow \coprod_{v\in S_m}k^*_v\Big). \end{split} \end{equation*} Thus, if we can make such an $m$ as small as possible and get sufficiently many relations satisfied by elements of $K_{2}^{S_{m}}F$, then we may determine the tame kernel $K_{2}\mathcal{O}_{F}.$ So the problem is reduced to finding conditions for $\partial_{v_{m}}$ to be an isomorphism for sufficiently large $m.$ The conditions were found by Tate. Assume that the prime ideal $\mathcal{P}_{m}$ of $\mathcal{O}_{m-1}$ corresponding to $v_{m}$ is generated by $\pi_{m}.$ Define the morphisms: $$\alpha: U_m\longrightarrow \frac{K_{2}^{S_{m}}F}{K_{2}^{S_{m-1}}F}, \ \ \ \ \alpha(u)=\{u,\pi_{m}\}(\mbox{mod}\, K_{2}^{S_{m-1}}F),$$ $$\beta: U_m\longrightarrow k^{*}_{v_{m}}, \ \ \ \ \beta(u)=u\,(\mbox{mod}\,\pi_{m}).$$ Then the conditions found by Tate are presented in the following theorem. \begin{theorem}\cite{Tate001} Suppose that the prime ideal $\mathcal{P}$ corresponding to a finite place $v\not\in{S_{m}}$ is generated by $\pi\in{\mathcal{O}_{F}}$ and that $U_{1}'$ is the group generated by $(1+\pi{U_{m}})\bigcap{U_{m}}$. If there are subsets $W_{m},C_{m},G_{m}$ of $U_{m}$ satisfying the following conditions: (i) $W_{m}\subseteq{C_{m}U_{1}'}$ and $U_{m}$ is generated by $W_{m}$, (ii) $C_{m}G_{m}\subseteq{C_{m}U_{1}'}$ and $k^{*}_v$ is generated by $\beta{(G_{m})}$, (iii) $1\in{C_{m}\bigcap{\mbox{ker}\beta}}\subseteq{U_1'},$\\ then $\partial_{v}$ is an isomorphism. \end{theorem} Hence, according to Tate's above method, to determine the tame kernel of a given number field, it suffices to construct suitable subsets $W_{m},C_{m},G_{m}$ of $U_{m}$ and determine the bound of $m.$ Using his method, Tate could give the analysis for the first six imaginary quadratic cases because in these cases the bound of $m$ is very small.
More precisely, let $F=\mathbb{Q}(\sqrt{-d}).$ Then Tate proved that $K_2\mathcal{O}_F$ is trivial if $d=1,2,3,11,$ and $K_2\mathcal{O}_F\cong \mathbb{Z}/2\mathbb{Z}$ if $d=7,15.$ Subsequently, Qin \cite{Qin001,Qin002} investigated the cases $d=6$ and $35$ with a modification of the choice of the subset $C_m$ in Tate's method, and nearly at the same time, Ska\l ba \cite{Skalba001} gave the computations of the cases $d=5$ and $19$ with the help of his generalized Thue theorem (GTT); essentially, it is also a modification of the choice of $C_m.$ After that, for quadratic fields Browkin improved Ska\l ba's method to get a more accurate bound of $m,$ which allowed him to compute the cases $d=23$ and $31$ \cite{Browkin001,Browkin002}. It should be pointed out that all of these works were done by hand. The further computations for quadratic fields are due to Belabas and Gangl, who used computers and determined the tame kernel for all $d$ up to $1000$ with only $7$ exceptions \cite{Belabas001}. The tame kernels of cubic fields have been investigated by Browkin in \cite{Browkin004}. His numerical computations were performed using the package PARI/GP. The cases of quartic fields are more complicated. Using Ska\l ba's GTT, Guo proved that $K_2\mathcal{O}_F$ is trivial when $F=\mathbb{Q}(\zeta_8)$ (see \cite{XuejunGuo001}). He also did it by hand. When $F=\mathbb{Q}(\zeta_5)=\mathbb{Q}(\sqrt{-(5+2\sqrt{5})}),$ under the assumption of the Lichtenbaum conjecture, Browkin once conjectured in \cite{Browkin003} that the tame kernel $K_2\mathcal{O}_F$ is trivial. In a recent paper, we confirmed Browkin's conjecture \cite{zx}. However, the arithmetic properties of the field $\mathbb{Q}(\zeta_5)$ are much more complicated than those of quadratic fields and biquadratic fields. Therefore the discussion is longer, and more cases are considered. Actually, we have to use PARI/GP and some other algorithms. For further computations, the bound $m$ should be determined theoretically. This was solved by R. Groenewegen \cite{GROENEWEGEN001}, who gave a theoretical bound of $m$ for a general number field. In this paper, for the cyclic quartic field we also find a way to obtain the theoretical bound, and in some cases our bounds are better than Groenewegen's (Remark 3.7). Thus, for a given number field, if the theoretical bound is good enough, that is, if it is manageable, in other words, if the computation can be done by hand, then through constructing enough relations, we may determine the tame kernel of the given number field. But unfortunately, these theoretical bounds may be very large, far from being manageable. This weak point makes the concrete computation nearly impossible for a higher degree number field, even for a cyclic quartic field. Hence, a new problem arises: {\bf Problem:} {\it Can one give a practical method to decrease the theoretical bound to a manageable one?} Belabas and Gangl \cite{Belabas001} considered this problem. In order to get a manageable bound of $m,$ they proceed as follows. Let $T=S\cup \{v\}$ and assume that $K_2\mathcal{O}_F\subseteq K_2^{T}F.$ They want to prove that, in fact, we already have $K_2\mathcal{O}_F\subseteq K_2^{S}F.$ This will be used in the following situation: starting from the initial $S$ determined by the theoretical bound, we iterate this process, successively truncating $S$ by deleting its last element with respect to the given ordering, hoping to reduce the set of places to a manageable size.
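This iteration can be summarized by the following minimal sketch; the predicate \texttt{canTruncate} is a hypothetical placeholder, not part of any library, for the verification that $K_2\mathcal{O}_F\subseteq K_2^{S}F$ still holds after the last place of $S$ is deleted.
\begin{lstlisting}[language=C++]
#include <vector>

/* A minimal sketch of the truncation strategy. S holds the
 * places v_1 <= ... <= v_m ordered by norm; canTruncate(S) is
 * a hypothetical predicate checking that the last place of S
 * may be deleted without losing the containment above. */
std::vector<long> truncatePlaces(std::vector<long> S,
        bool (*canTruncate)(const std::vector<long>&))
{
    while (S.size() > 1 && canTruncate(S))
        S.pop_back();   /* delete the last place of S */
    return S;           /* hopefully of manageable size */
}
\end{lstlisting}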
This truncation strategy is a very natural way to decrease the theoretical bound to a manageable one, and it has been used by many authors. But again, unfortunately, its concrete realization is not easy in general. In fact, if the discriminant of the given number field is not large, then the difference between the theoretical bound and the manageable one is not large either, so it is easy for one to do the work by writing a simple program or computing manually; but, if the discriminant is large, then the difference between the theoretical bound and the manageable one is also large, so the workload must increase exponentially, as Belabas told us in a private letter; hence, in this case, we must face the challenges of dealing with computation-intensive tasks in the process of solving this complex problem. In order to realize their plan, in particular in the construction of the set $C,$ which is one of the main difficulties to overcome, Belabas and Gangl \cite{Belabas001} use the following three algorithms: a) Fincke and Pohst's algorithm; b) the method of lattices; c) the LLL algorithm. Belabas and Gangl's plan was eventually adapted for arbitrary number fields and implemented in the PARI/GP scripting language, but so far, as they pointed out \cite{Belabas001}, parts of the program remain specific to the imaginary quadratic case. In the present paper, we give a completely different and new approach. The key idea is that we use Object-Oriented Programming (OOP) and multi-threaded parallel technology. It is well known that the idea of Object-Oriented Programming (OOP), which developed into the dominant programming methodology in the early and mid 1990s, is to design data forms that correspond to the essential features of a problem. So OOP brings a new approach to the challenge of large-scale programming \cite{prata}. In this paper, to compute the tame kernels of imaginary cyclic quartic fields of class number one with large discriminants, we develop, from the object-oriented viewpoint, a program whose software framework is extensible and reusable and can serve as a base on which more tame kernels of number fields can be computed. Moreover, to visualize the program's architectural blueprint, we also use the Unified Modeling Language (UML) \cite{JCO}, which is a general-purpose, developmental and modeling language in the field of software engineering. More precisely, in order to establish the software framework and visualize the architectural blueprint, we need to do the following. Firstly, we need to reduce Tate's theorem to a software engineering version, so as to give a main use case of a user's interaction with the system. The use case is not only the beginning of building the software framework but also its main driving force. In order to visualize the use case, we give the use case diagram (see Figure 1), which can be regarded as a UML description of Tate's theorem. \begin{figure} \caption{the use case diagram} \end{figure} Secondly, we design three classes, {\it CquarField}, {\it Cideal} and {\it Ccheck}, as the structure of our program, since from the OOP point of view the classes of a piece of software constitute the core of its framework. Moreover, using UML we represent the relationships among the classes as a static class diagram, which is generally used for conceptual modelling of the structure of a program and for detailed modelling, translating the models into programming code.
The relationships among the three classes are represented in the following static class diagram (see Figure 2), and the detailed design of the three classes and of the static class diagram is introduced in Subsection 4.3. \begin{figure} \caption{the static class diagram} \end{figure} Finally, from the point of view of software engineering, it is not enough to provide the use case diagram and the static class diagram to represent the program's architecture; in other words, we must also show how objects operate with one another and in what order. Since, in UML, a sequence diagram, which is an interaction diagram and also a construct of a message sequence chart, shows object interactions arranged in time sequence, we design the sequence diagram (see Figure 3) according to the relationships among the objects represented in Tate's theorem, and in view of the difficulties, such as large-scale computing, that we must face during the construction of the program computing the tame kernel of an imaginary cyclic quartic field. \lstset{breaklines=true} \hoffset=0cm \voffset=-3.0cm \begin{figure} \caption{the sequence diagram} \end{figure} This is what we have done in this paper in the design of the program framework and of the program architecture in UML. However, while building the program, we met two difficulties. One difficulty is how to create the code which can be used to compute the invariants of a number field. Though some authors have designed excellent algorithms for this computation, the workload is so burdensome that it is almost impossible to implement so many algorithms for the computation of tame kernels. So a viable option is to use third-party libraries to obtain the invariants. Hence, the PARI library, regarded as a reliable component, provides powerful support to our program. The other difficulty is how to deal with the large-scale data emerging in the process of computation. In this study, we find that the amount of computation of tame kernels grows explosively as the discriminant and the degree of extension of the number field get larger. In \cite{Belabas001} Belabas and Gangl have computed some tame kernels of quartic fields whose discriminants have small absolute values, and the workloads in the computation of the tame kernels of these quartic fields are nearly equal to that of $F=\mathbb{Q}(\zeta_{5})$. But now, as an example, we compute the tame kernel of $F=\mathbb{Q}(\sqrt{-(13+2\sqrt{13})})$, whose discriminant is 2197, and we find that the workloads for the computation of the tame kernels of $F=\mathbb{Q}(\zeta_{5})$ and $F=\mathbb{Q}(\sqrt{-(13+2\sqrt{13})})$ are not to be mentioned in the same breath. In fact, in the case of $F=\mathbb{Q}(\sqrt{-(13+2\sqrt{13})})$, we once wrote some script code with PARI/GP to compute its tame kernel. After deploying the code on a PC and running it for about 24 hours, we made a rough estimate of the total running time: it would need at least one year! So these scripts are not feasible in terms of running time. It is this that motivated us to design, in order to decrease the running time, the above architecture, an extensible, reusable and component-based application, by associating multi-threaded parallel technology and the PARI library with the implemented architecture. And at last, after deploying the application and running it for about 2 hours, we obtained the tame kernel of $F=\mathbb{Q}(\sqrt{-(13+2\sqrt{13})})$.
After that, we took about 3 months to compute the tame kernel of the number field $F=\mathbb{Q}(\sqrt{-(29+2\sqrt{29})})$, whose discriminant is $24389.$ In a private letter, Belabas told us that it took about 8 hours to obtain the tame kernel of $F=\mathbb{Q}(\sqrt{-(13+2\sqrt{13})})$ by a program implementing the algorithms in the paper \cite{Belabas001}. The program has been published in \url{https://www.math.u-bordeaux.fr/~kbelabas/research/software/K2-1.1.tgz.} We also tried to use the same program to compute the tame kernel of $F=\mathbb{Q}(\sqrt{-(29+2\sqrt{29})})$. But, after running the program for about 2 hours, a bug emerged and the program was interrupted. This story implies that although some kinds of problems can be solved efficiently by using an existing program without difficulty, the computation of large-scale problems may be a nontrivial task, even if a long running time is acceptable, because of memory and CPU limitations. Therefore, the design of a program, as well as its efficiency and reasonableness, may essentially depend on the scale of the computation. Hence, as an application of our program, we are now sure from the above computation that our architecture also works for the computation of the tame kernel of a number field with discriminant less than 25000. In particular, as concrete examples, we have proved the following theorem. \begin{theorem} Let $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ be a cyclic quartic field. Then the tame kernel $K_{2}\mathcal{O}_{F}$ is trivial in the following cases: (i) \cite{zx} $B=2, D=5,$ i.e., $F=\mathbb{Q}(\zeta_5);$ (ii) $B=1, D=2;$ (iii) $B=2, D=13;$ (iv) $B=2, D=29.$ \end{theorem} \begin{remark}\quad (i) We have $h_F=1$ in all four cases (see the list in Section 2). (ii) By the present algorithms, the computation of the tame kernel of $\mathbb{Q}(\zeta_5)$ is quite easy. \end{remark} In the following, the conditions $W_{m}\subseteq{C_{m}U_{1}'}$ and $C_{m}G_{m}\subseteq{C_{m}U_{1}'}$ in Theorem 1.1 will be referred to as condition I and condition II, respectively. \end{section} \begin{section}{The cyclic quartic fields} The following explicit representation of a cyclic quartic field is proved in \cite{HARDY001}. \begin{lemma}\quad If $F$ is a real or imaginary cyclic quartic extension of $\mathbb{Q}$, then there are integers $A,B,C$ and $D$ such that \begin{equation} F=\mathbb{Q}\Big(\sqrt{A(D+B\sqrt{D})}\Big)=\mathbb{Q}\Big(\sqrt{A(D-B\sqrt{D})}\Big) \end{equation} where \begin{equation} \begin{cases} A~ is~ squarefree~ and~ odd,\\ D=B^2+C^2~is ~squarefree,~B>0,~C>0,\\ A~and~D~are~relatively~prime. \end{cases} \end{equation} \end{lemma} Moreover, any field satisfying (2.1) and (2.2) is a cyclic quartic extension of $\mathbb{Q},$ and the representation of $F$ is unique in the sense that if we have another representation, say $F=\mathbb{Q}(\sqrt{A_1(D_1+B_1\sqrt{D_1})}),$ where $A_1, ~B_1,~C_1$ and $D_1$ are integers satisfying the conditions of (2.2), then $A=A_1, ~B=B_1,~C=C_1$ and $D=D_1$. On the other hand, a table of all the imaginary cyclic quartic fields $F=\mathbb{Q}\Big(\sqrt{A(D+B\sqrt{D})}\Big)$, where $A, ~B,~C$ and $D$ are integers satisfying the condition (2.2), is given in \cite{HARDY001}. Now, we can list all imaginary cyclic quartic fields with class number one as follows.
\begin{equation*} \begin{split} \mbox{Case}\, 1:& ~F=\mathbb{Q}\Big(\sqrt{-(5+2\sqrt{5})}\Big), ~\mbox{where} ~A=-1, B=2, C=1, D=5;\\ \mbox{Case}\, 2:& ~F=\mathbb{Q}\Big(\sqrt{-(13+2\sqrt{13})}\Big), ~\mbox{where}~ A=-1, B=2, C=3, D=13;\\ \mbox{Case}\, 3:& ~F=\mathbb{Q}\Big(\sqrt{-(2+\sqrt{2})}\Big), ~\mbox{where}~ A=-1, B=1, C=1, D=2;\\ \mbox{Case}\, 4:& ~F=\mathbb{Q}\Big(\sqrt{-(29+2\sqrt{29})}\Big), ~\mbox{where}~ A=-1, B=2, C=5, D=29;\\ \mbox{Case}\, 5:& ~F=\mathbb{Q}\Big(\sqrt{-(37+6\sqrt{37})}\Big), ~\mbox{where}~ A=-1, B=6, C=1, D=37;\\ \mbox{Case}\, 6:& ~F=\mathbb{Q}\Big(\sqrt{-(53+2\sqrt{53})}\Big), ~\mbox{where}~ A=-1, B=2, C=7, D=53;\\ \mbox{Case}\, 7:& ~F=\mathbb{Q}\Big(\sqrt{-(61+6\sqrt{61})}\Big), ~\mbox{where}~ A=-1, B=6, C=5, D=61. \end{split} \end{equation*} In \cite{HUDSON001}, the integral basis of the cyclic quartic field $F=\mathbb{Q}\Big(\sqrt{A(D+B\sqrt{D})}\Big)$ is given as follows. \begin{lemma}\quad Let $F=\mathbb{Q}\Big(\sqrt{A(D+B\sqrt{D})}\Big)$ be a cyclic quartic extension of $\mathbb{Q},$ where $A,B,C$ and $D$ satisfy the condition (2.2) in Lemma 2.1. Set $$a'=\sqrt{A(D+B\sqrt{D})},\ \ ~b'=\sqrt{A(D-B\sqrt{D})}.$$ Then an integral basis for $F$ is given as follows. \begin{equation*} \begin{split} (i)&\ \ \ \{1,\sqrt{D},a',b'\}~ if~ D\equiv{0}(mod\, 2);\\ (ii)&\ \ \ \{1,\frac{1}{2}(1+\sqrt{D}),a',b'\}~ if~ D\equiv{B}\equiv{1}(mod \, 2);\\ (iii)&\ \ \{1,\frac{1}{2}(1+\sqrt{D}),\frac{1}{2}(a'+b'),\frac{1}{2}(a'-b')\}\\ & if~ D\equiv{1}(mod \, 2),B\equiv{0}(mod \, 2),A+B\equiv{3}(mod \, 4);\\ (iv)&\ \ \{1,\frac{1}{2}(1+\sqrt{D}),\frac{1}{4}(1+\sqrt{D}+a'+b'),\frac{1}{4}(1-\sqrt{D}+a'-b')\}\\ & if~ D\equiv{1}(mod \, 2),B\equiv{0}(mod \, 2),A+B\equiv{1}(mod \, 4),A\equiv{C}(mod \, 4);\\ (v)&\ \ \{1,\frac{1}{2}(1+\sqrt{D}),\frac{1}{4}(1+\sqrt{D}+a'-b'),\frac{1}{4}(1-\sqrt{D}+a'+b')\}\\ & if~ D\equiv{1}(mod \, 2),B\equiv{0}(mod \, 2),A+B\equiv{1}(mod \, 4),A\equiv{-C}(mod \, 4);\\ \end{split} \end{equation*} \end{lemma} Hence, the integral bases in Case 2; in Cases 1, 3, 4, 5 and 7; and in Case 6 are respectively $$\{1,\sqrt{D},a',b'\};$$ $$\{1,\frac{1}{2}(1+\sqrt{D}),\frac{1}{4}(1+\sqrt{D}+a'-b'),\frac{1}{4}(1-\sqrt{D}+a'+b')\};$$ $$\{1,\frac{1}{2}(1+\sqrt{D}),\frac{1}{4}(1+\sqrt{D}+a'+b'),\frac{1}{4}(1-\sqrt{D}+a'-b')\}.$$ \begin{lemma}\quad Let $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ be a cyclic quartic extension of $\mathbb{Q}$ with class number $h(F)=1,$ where $B,C$ and $D$ satisfy the condition (2.2) in Lemma 2.1.
Set $\beta=i\sqrt{D+B\sqrt{D}},$ so that $F=\mathbb{Q}(\beta).$ Then the following statements hold.\\ (i) The minimal polynomial of $\beta$ over $\mathbb{Q}$ is $$f(x)=x^4+2Dx^2+(D^2-DB^2).$$ (ii) The four conjugated roots of $\beta$ are $$\beta_1=\beta=ia,~\beta_2=\bar{\beta}=-ia,~\beta_3=ib,~\beta_4=-ib,$$ where $a=\sqrt{D+B\sqrt{D}}$ and $b=\sqrt{D-B\sqrt{D}}.$\\ (iii) The Galois group $Gal(F/\mathbb{Q})$ equals $\langle \sigma\rangle$ with $\sigma$ satisfying $$\sigma(\beta_1)=\beta_4,~\sigma(\beta_2)=\beta_3,~\sigma(\beta_3)=\beta_1,~\sigma(\beta_4)=\beta_2.$$ (iv) The rank $r(U)$ of the unit group $U$ of $F$ is $1.$ We denote the fundamental unit by $\xi.$\\ (v) In Case 1, Case 3, Case 4, Case 5, Case 7, the field $F$ has the same integral basis, which is $$\gamma_0=1,\gamma_1=\frac{1}{2}(1+\sqrt{D}),\gamma_2=\frac{1}{4}(1+\sqrt{D}+\beta-\beta_3),\gamma_3=\frac{1}{4}(1-\sqrt{D}+\beta+\beta_3).$$ Moreover, the transition matrix from $1,\beta,\beta^2,\beta^3$ to $\gamma_0,\gamma_1,\gamma_2,\gamma_3$ is \begin{equation} M_1= \begin{pmatrix} 1 &0 &0 &0\\ \frac{B-D}{2B} &0 &-\frac{1}{2B} &0\\ \frac{B-D}{4B} &\frac{BC-B^2-D}{4BC} &-\frac{1}{4B} &-\frac{1}{4BC}\\ \frac{B+D}{4B} &\frac{BC+B^2+D}{4BC} &\frac{1}{4B} &\frac{1}{4BC} \end{pmatrix} \end{equation} (vi) In Case 2 and Case 6, the field $F$ has the same integral basis, which is $$\gamma_0=1,\gamma_1=\frac{1}{2}(1+\sqrt{D}),\gamma_2=\frac{1}{4}(1+\sqrt{D}+\beta+\beta_3),\gamma_3=\frac{1}{4}(1-\sqrt{D}+\beta-\beta_3).$$ Moreover, the transition matrix from $1,\beta,\beta^2,\beta^3$ to $\gamma_0,\gamma_1,\gamma_2,\gamma_3$ is \begin{equation} M_2= \begin{pmatrix} 1 &0 &0 &0\\ \frac{B-D}{2B} &0 &-\frac{1}{2B} &0\\ \frac{B-D}{4B} &\frac{BC+B^2+D}{4BC} &-\frac{1}{4B} &\frac{1}{4BC}\\ \frac{B+D}{4B} &\frac{BC-B^2-D}{4BC} &\frac{1}{4B} &-\frac{1}{4BC} \end{pmatrix} \end{equation} \end{lemma} \begin{proof}\quad The proofs of (i), (ii), (iii) and (iv) are easy, so we only prove (v) and (vi). We will express $\beta_3=ib$ by $1,\beta,\beta^2,\beta^3.$ Assume that \begin{equation} \begin{split} \beta_3=ib=x_0+x_1ia+x_2(ia)^2+x_3(ia)^3, \end{split} \end{equation} where $a=\sqrt{D+B\sqrt{D}}$, $b=\sqrt{D-B\sqrt{D}}$ and $x_0,x_1,x_2,x_3 \in{\mathbb{Q}}.$ Then the following equations hold: \begin{equation} \begin{split} x_0-a^2x_2=0 \end{split} \end{equation} \begin{equation} \begin{split} b-ax_1+a^3x_3=0 \end{split} \end{equation} From (2.6), we have $x_0=x_2=0.$ However, from (2.7), we get \begin{equation*} \begin{split} b^2&=a^2(x_1-a^2x_3)^2\\ &=a^2(x^2_1+a^4x_3^2-2a^2x_1x_3)\\ &=(D+B\sqrt{D})[x_1^2+(D+B\sqrt{D})^2x_3^2-2(D+B\sqrt{D})x_1x_3]\\ &=(D+B\sqrt{D})[x_1^2+(D^2+B^2D)x_3^2-2Dx_1x_3+(2BDx_3^2-2Bx_1x_3)\sqrt{D}]\\ &=[Dx_1^2+(D^3+3B^2D^2)x_3^2-2(D^2+B^2D)x_1x_3]\\ &+[Bx_1^2+(3BD^2+B^3D)x_3^2-4BDx_1x_3]\sqrt{D}. \end{split} \end{equation*} By comparing both sides of the equality, we get the system of equations in $x_1$ and $x_3$ \begin{equation} \begin{split} D=Dx_1^2+(D^3+3B^2D^2)x_3^2-2(D^2+B^2D)x_1x_3 \end{split} \end{equation} \begin{equation} \begin{split} -B=Bx_1^2+(3BD^2+B^3D)x_3^2-4BDx_1x_3. \end{split} \end{equation} i.e. \begin{equation} \begin{split} 1=x_1^2+(D^2+3B^2D)x_3^2-2(D+B^2)x_1x_3 \end{split} \end{equation} \begin{equation} \begin{split} -1=x_1^2+(3D^2+B^2D)x_3^2-4Dx_1x_3. \end{split} \end{equation} Adding the two equations, we have \begin{equation} \begin{split} x_1^2+(2D^2+2B^2D)x_3^2-(3D+B^2)x_1x_3=0. \end{split} \end{equation} If $x_3=0$, clearly we have $x_1=0,$ which is impossible.
Thus $\frac{x_1}{x_3}$ is a root of the equation: \begin{equation} \begin{split} x^2-(3D+B^2)x+2D(D+B^2)=0. \end{split} \end{equation} Clearly, $2D, D+B^2$ are the two roots of (2.13), so $$\frac{x_1}{x_3}=2D\ \ \mbox{or} \ \ \frac{x_1}{x_3}=D+B^2.$$ If $x_1=2Dx_3,$ then from (2.10), we can get that $(D^2-B^2D)x_3^2=1.$ So we have $$x_3=\pm\frac{\sqrt{D}}{CD},\ \ \ x_1=\pm\frac{2\sqrt{D}}{C}.$$ However, putting these expressions in (2.11), we immediately get a contradiction. Hence, we must have $x_1=(D+B^2)x_3$. Therefore from (2.11) we get $$-1=(B^2+D)^2x_3^2+(3D^2+B^2D)x_3^2-4D(D+B^2)x_3^2=-B^2C^2x_3^2.$$ Thus $x_3=\pm\frac{1}{BC}.$ We can check that $x_0=x_2=0,x_1=\frac{D+B^2}{BC}$ and $x_3=\frac{1}{BC}$ satisfy the equation (2.5), which means $$\beta_3=\frac{D+B^2}{BC}\cdot \beta+\frac{1}{BC}\cdot \beta^3.$$ Note that $\sqrt{D}=\frac{-D}{B}-\frac{\beta^2}{B}.$ Then, in Case 1, Case 3, Case 4, Case 5, Case 7, we can express $\gamma_0,\gamma_1,\gamma_2,\gamma_3$ by $1,\beta,\beta^2,\beta^3$ as follows. \begin{equation*} \begin{pmatrix} \gamma_0\\ \gamma_1\\ \gamma_2\\ \gamma_3 \end{pmatrix}= \begin{pmatrix} 1 &0 &0 &0\\ \frac{B-D}{2B} &0 &-\frac{1}{2B} &0\\ \frac{B-D}{4B} &\frac{BC-B^2-D}{4BC} &-\frac{1}{4B} &-\frac{1}{4BC}\\ \frac{B+D}{4B} &\frac{BC+B^2+D}{4BC} &\frac{1}{4B} &\frac{1}{4BC} \end{pmatrix} \begin{pmatrix} 1\\ \beta\\ \beta^2\\ \beta^3 \end{pmatrix} =M_1 \begin{pmatrix} 1\\ \beta\\ \beta^2\\ \beta^3 \end{pmatrix} \end{equation*} Similarly, in Case 2 and Case 6, we get \begin{equation*} \begin{pmatrix} \gamma_0\\ \gamma_1\\ \gamma_2\\ \gamma_3 \end{pmatrix}= \begin{pmatrix} 1 &0 &0 &0\\ \frac{B-D}{2B} & 0 &-\frac{1}{2B} &0\\ \frac{B-D}{4B} &\frac{BC+B^2+D}{4BC} &-\frac{1}{4B} &\frac{1}{4BC}\\ \frac{B+D}{4B} &\frac{BC-B^2-D}{4BC} &\frac{1}{4B} &-\frac{1}{4BC} \end{pmatrix} \begin{pmatrix} 1\\ \beta\\ \beta^2\\ \beta^3 \end{pmatrix} =M_2 \begin{pmatrix} 1\\ \beta\\ \beta^2\\ \beta^3 \end{pmatrix}. \end{equation*} \end{proof} \end{section} \begin{section}{The tame kernel of an imaginary cyclic quartic field} \begin{subsection}{Lemmas} \begin{lemma} Let $F=\mathbb{Q}(\sqrt{-(D+B\sqrt{D})})$ be a cyclic quartic field with the class number $h(F)=1$ and let $\beta=ia$ with $a=\sqrt{D+B\sqrt{D}}.$ Then, for any prime ideal $\mathcal{P}$ of $\mathcal{O}_F$, there exists an element $\alpha\in{\mathcal{O}_F}$ satisfying\\ (i) $\mathcal{P}=(\alpha)$;\\ (ii) $|\sigma(\xi)|\le{\Big|\frac{\sigma(\alpha)}{\alpha}\Big|\le|\xi|},$ where $\xi$ is the fundamental unit of $F.$ Moreover, we have $$\frac{|N(\alpha)|^{\frac{1}{4}}}{|\xi|^{\frac{1}{2}}}\le|\alpha|\le\frac{|N(\alpha)|^{\frac{1}{4}}}{|\sigma(\xi)|^{\frac{1}{2}}}.$$ \end{lemma} \begin{proof} Because the class number $h_F$ is $1,$ the prime ideal $\mathcal{P}$ of $\mathcal{O}_F$ is a principal ideal, i.e. $\mathcal{P}=(y)$ for some $y\in{\mathcal{O}_{F}}.$\\ i). If $|\sigma(\xi)|\le\Big|\frac{\sigma(y)}{y}\Big|\le|\xi|,$ let $\alpha=y.$ Then the lemma is true. ii). If $\Big|\frac{\sigma(y)}{y}\Big|>|\xi|,$ since $\Big|\frac{\sigma(\xi)}{\xi}\Big|<1,$ there is a positive integer $k$ satisfying \begin{equation} \begin{split} \Big|\frac{\sigma(y)}{y}\Big|\Big|\frac{\sigma(\xi)}{\xi}\Big|^{k}\le|\xi|<\Big|\frac{\sigma(y)}{y}\Big|\Big|\frac{\sigma(\xi)}{\xi}\Big|^{k-1}. \end{split} \end{equation} Let $\alpha=y\xi^k.$ Then, we get $\Big|\frac{\sigma(\alpha)}{\alpha}\Big|\le|\xi|.$ However $\Big|\frac{\sigma(\alpha)}{\alpha}\Big|=\frac{|\sigma(\xi)|}{|\xi|}\Big|\frac{\sigma(y\xi^{k-1})}{y\xi^{k-1}}\Big|>\frac{|\sigma(\xi)|}{|\xi|}|\xi|=|\sigma(\xi)|.$ iii).
If $\Big|\frac{\sigma(y)}{y}\Big|<|\sigma(\xi)|,$ as in ii), there is a positive integer $k$ such that \begin{equation} \begin{split} \Big|\frac{\sigma(y)}{y}\Big|\Big|\frac{\xi}{\sigma(\xi)}\Big|^{k-1}<|\sigma(\xi)|\leq \Big|\frac{\sigma(y)}{y}\Big|\Big|\frac{\xi}{\sigma(\xi)}\Big|^{k}. \end{split} \end{equation} Let $\alpha=\frac{y}{\xi^k}.$ Thus by (3.2), we have \begin{equation*} \begin{split} |\sigma(\xi)|\le\Big|\frac{\sigma(\alpha)}{\alpha}\Big|\le|\xi|. \end{split} \end{equation*} So \begin{equation*} \begin{split} |\sigma(\xi)||\alpha|^2\le|\sigma(\alpha)||\alpha|\le|\xi||\alpha|^2. \end{split} \end{equation*} Hence \begin{equation*} \begin{split} |\sigma(\xi)|^2|\alpha|^4\le|N(\alpha)|\le|\xi|^2|\alpha|^4. \end{split} \end{equation*} Therefore \begin{equation} \begin{split} \frac{|N(\alpha)|^{\frac{1}{4}}}{|\xi|^{\frac{1}{2}}}\le|\alpha|\le\frac{|N(\alpha)|^{\frac{1}{4}}}{|\sigma(\xi)|^{\frac{1}{2}}}. \end{split} \end{equation} \end{proof} We denote by $[t]$ the nearest integer to $t$. Let $\{t\}=t-[t].$ So $\{t\}\in{[-\frac{1}{2},\frac{1}{2}]}.$ \begin{lemma} For any $0\neq \alpha, x\in{\mathcal{O}_F}$, there is a $y\in{\mathcal{O}_F}$ such that $$|x-\alpha y|\le c_1|\alpha|,$$ $$|\sigma(x-\alpha y)|\le c_2|\sigma(\alpha)|,$$ where $c_{1},c_2$ are constants depending only on the field $F,$ i.e., on $A,B,C$ and $D.$ So $$N(x-\alpha y)\le c_1^2c_2^2N(\alpha).$$ \end{lemma} \begin{proof} Assume that $\frac{x}{\alpha}=k_0\gamma_0+k_1\gamma_1+k_2\gamma_2+k_3\gamma_3$ where $\gamma_0,\gamma_1,\gamma_2,\gamma_3$ form the integral basis of $F$ and $k_i\in{\mathbb{Q}},i=0,1,2,3.$ Let $$y=[k_0]\gamma_0+[k_1]\gamma_1+[k_2]\gamma_2+[k_3]\gamma_3\in{\mathcal{O}_F}.$$ We will show that $y$ satisfies the requirement. Suppose that \begin{equation*} \begin{split} z=&x-y\alpha=\Big(\sum_{i=0}^{3}{k_i\gamma_{i}}\Big)\alpha-\Big(\sum_{i=0}^{3}{[k_i]\gamma_{i}}\Big)\alpha=\Big(\sum_{i=0}^{3}{\{k_i\}\gamma_{i}}\Big)\alpha= \Big(\sum_{i=0}^{3}{z_i\gamma_{i}}\Big)\alpha, \end{split} \end{equation*} where $z_i=\{k_i\}\in[-\frac{1}{2},\frac{1}{2}]\cap \mathbb{Q}.$ Let $M=M_1$ or $M_2,$ and let $z'=\sum_{i=0}^{3}{z_i\gamma_{i}}.$ We can compute the maximal value of $|z|.$ Let $g=|z|^2$. Then \begin{equation*} \begin{split} g&=|z|^2=|z'|^2|\alpha|^2\\ &=(z_0,z_1,z_2,z_3)M \begin{pmatrix} 1 &-ia &-a^2 &ia^3\\ ia &a^2 &-ia^3 &-a^4\\ -a^2 &ia^3 &a^4 &-ia^5\\ -ia^3 &-a^4 &ia^5 &a^6 \end{pmatrix} M^{T} \begin{pmatrix} z_0\\ z_1\\ z_2\\ z_3\\ \end{pmatrix}|\alpha|^2\\ &=(z_0,z_1,z_2,z_3)M H_1M^{T} \begin{pmatrix} z_0\\ z_1\\ z_2\\ z_3 \end{pmatrix}|\alpha|^2, \end{split} \end{equation*} where $$H_1:=\begin{pmatrix} 1 &0 &-a^2 &0\\ 0 &a^2 &0 &-a^4\\ -a^2 &0 &a^4 &0\\ 0 &-a^4 &0 &a^6 \end{pmatrix}.$$ Let \begin{equation*} \begin{split} h_1(z_0,z_1,z_2,z_3):&=(z_0,z_1,z_2,z_3)M H_1M^{T} \begin{pmatrix} z_0\\ z_1\\ z_2\\ z_3 \end{pmatrix}. \end{split} \end{equation*} By PARI/GP, we can check that the values of $h_1(z_0,z_1,z_2,z_3)$ at its stationary points are zero. Thus $h_1(z_0,z_1,z_2,z_3)$ reaches its maximal value on the boundary.
Hence, for any $A,B,C$ and $D,$ we have $$|x-y\alpha|= |z|\le |z'||\alpha| \le c_{1}'^{\frac{1}{2}}|\alpha|,$$ where $$c_{1}'=\mbox{max}\{h_1(z_0,z_1,z_2,z_3): z_i=-\frac{1}{2}~\mbox{or} ~\frac{1}{2},i=0,1,2,3\}.$$ Similarly, let $$h_2 (z_0,z_1,z_2,z_3)=(z_0,z_1,z_2,z_3)M H_2M^{T} \begin{pmatrix} z_0\\ z_1\\ z_2\\ z_3 \end{pmatrix}.$$ with $$H_2:=\begin{pmatrix} 1 &0 &-b^2 &0\\ 0 &b^2 &0 &-b^4\\ -b^2 &0 &b^4 &0\\ 0 &-b^4 &0 &b^6 \end{pmatrix}.$$ Then we have $$|\sigma(x-y\alpha)|=|\sigma(z)|\le |\sigma(z')||\sigma(\alpha)|\le c_{2}'^{\frac{1}{2}}|\sigma(\alpha)|,$$ where $$c_{2}'=\mbox{max}\{h_2 (z_0,z_1,z_2,z_3): z_i=-\frac{1}{2}~\mbox{or} ~\frac{1}{2},i=0,1,2,3\}.$$ However both $$|z'|^2=(z_0,z_1,z_2,z_3)M H_1M^{T} \begin{pmatrix} z_0\\ z_1\\ z_2\\ z_3 \end{pmatrix} $$ and $$ |\sigma(z')|^2=(z_0,z_1,z_2,z_3)M H_2M^{T} \begin{pmatrix} z_0\\ z_1\\ z_2\\ z_3 \end{pmatrix} $$ are positive definite quadratic forms determined by $a>0$ and $b>0$. So $|z'|$ and $|\sigma(z')|$ reach their maximal values at the same point. Let $c_i= {c_{i}'}^{\frac{1}{2}}, i=1,2.$ Then the proof is completed. \end{proof} \end{subsection} \begin{subsection}{Construction of $W_m,C_m,G_m$} Let $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ be a cyclic quartic field with the class number $h(F)=1,$ and let $S_{m+1}=\{v_1,v_2,\cdots,v_{m+1}\}$, where $v_{i}$ corresponds to the prime ideal $\mathcal{P}_i:=\mathcal{P}_{v_i}$ for $i=1,2,\cdots,m+1.$ In order to use Theorem 1.1 to compute the tame kernel $K_2\mathcal{O}_F,$ we construct $W_m,C_m$ and $G_m$ as follows. Firstly, by Lemma 3.1, for each $i$ there exists an $\alpha_i\in{\mathcal{O}_F}$ satisfying $\mathcal{P}_{i}=(\alpha_i)$ and $|\sigma(\xi)|\le \Big|\frac{\sigma(\alpha_i)}{\alpha_i}\Big|\le |\xi|,$ where $i=1,2,\cdots,m+1.$ Thus we define $$W_{m}=\{\alpha_1,\alpha_2,\cdots,\alpha_m\}\bigcup \{-1,\xi\}.$$ Clearly, from the construction of $W_m,$ we know immediately that $U_m$ can be generated by $W_m.$ Secondly, let \begin{equation*} \begin{split} C'_{m}=\{c\in{\mathcal{O}_F}: \ |c|\le c_1|\alpha_{m+1}|, |\sigma(c)|\le c_2|\sigma(\alpha_{m+1})|\}. \end{split} \end{equation*} Then the set $C_{m}$ is defined to be a subset of $C'_{m}$ such that $1\in{C_{m}},~0\notin{C_{m}}$ and $c-\tilde{c}\notin \mathcal{P}_{m+1}$ for any distinct $c,\tilde{c}\in{C_m}.$ Clearly, we have $1\in C_m\cap \mbox{ker}\beta\subseteq U'_1,$ which implies that condition (iii) of Theorem 1.1 is satisfied. In the following, we will prove that there must exist a $C_m$ which further satisfies condition I and condition II. Finally, let $\delta:=\big(\frac{2}{\pi}\big)^\frac{1}{2}|D|^\frac{1}{8}$ and define \begin{equation*} \begin{split} G'_{m}=\Big\{g\in{\mathcal{O}_F}: |g|\le \delta N(\mathcal{P}_{m+1})^{\frac{1}{8}}, |\sigma(g)|\le \delta N(\sigma(\mathcal{P}_{m+1}))^{\frac{1}{8}}\Big\}. \end{split} \end{equation*} When $N(\mathcal{P}_{m+1})>\delta^8,$ by the GTT theorem and the proof of Lemma 1.2 in [10], there exists a subset $G_m\subseteq U_m$ with $G_m\subseteq G'_m$ such that $k^{*}_v$ can be generated by $\beta(G_m),$ which means the second part of condition (ii) in Theorem 1.1 is satisfied. \end{subsection} \begin{subsection}{Theoretical bounds} \begin{subsubsection}{The bounds in the imaginary cyclic quartic field case} The following lemma is very helpful.
\begin{lemma}\quad Suppose that the elements $a,b\in\mathcal{O}_F \bigcap U_{m}$ satisfy the conditions $a\equiv b(mod\, \mathcal{P}_{m+1})$ and $N(a-b)<N^2(\mathcal{P}_{m+1}).$ Then $\frac{a}{b}\in{U'_1}.$ \end{lemma} \begin{proof}\quad See Claim 2 in the proof of Lemma 3.4 in [12]. \end{proof} Define $$c'=\mbox{max} \Big\{c_1\frac{|\sigma(\xi)|}{|\xi|}+c_2\frac{|\xi|}{|\sigma(\xi)|}, c_2\frac{|\sigma(\xi)|}{|\xi|}+c_1\frac{|\xi|}{|\sigma(\xi)|} \Big\}.$$ \begin{lemma}\quad If $N(\mathcal{P}_{m+1})\ge \Big(1+c_1c_2+c'\Big)^2,$ then $W_m\subseteq{C_mU'_1},$ i.e., condition I is satisfied. \end{lemma} \begin{proof}\quad By Lemma 3.3, if for any $w\in W_m$ there always exists a $c\in C_m$ satisfying $c\equiv w(\mbox{mod}\, \mathcal{P}_{m+1})$ and $N(w-c)<N^2(\mathcal{P}_{m+1}),$ then we have $W_m\subseteq C_mU'_1.$ So it suffices to investigate when the inequality $N(w-c)<N^2(\mathcal{P}_{m+1})$ holds. However, we have \begin{equation*} \begin{split} N(w-c)&=|w-c||\sigma(w)-\sigma(c)||\sigma^2(w)-\sigma^2(c)||\sigma^3(w)-\sigma^3(c)|\\ &=|w-c||\bar w-\bar c||\sigma(w)-\sigma(c)||\overline{\sigma(w)}-\overline{\sigma(c)}|\\ &=(|w-c||\sigma(w)-\sigma(c)|)^2. \end{split} \end{equation*} We will estimate the term $|w-c||\sigma(w)-\sigma(c)|.$ First we have \begin{equation*} \begin{split} &|w-c||\sigma(w)-\sigma(c)| =|w\sigma(w)-w\sigma(c)-c\sigma(w)+c\sigma(c)|\\ \le&|w\sigma(w)|+|w\sigma(c)|+|c\sigma(w)|+|c\sigma(c)|\\ =&N^{\frac{1}{2}}(w)+|w\sigma(c)|+|c\sigma(w)|+N^{\frac{1}{2}}(c). \end{split} \end{equation*} From the construction of $W_m$ and $C_m$, we have $N^{\frac{1}{2}}(w)\le N^{\frac{1}{2}}(\mathcal{P}_{m+1})$ and $N^{\frac{1}{2}}(c)\le c_1c_2N^{\frac{1}{2}}(\mathcal{P}_{m+1}).$ Now we estimate the term $|w\sigma(c)|+|c\sigma(w)|=|c\sigma(w)|+\frac{N^{\frac{1}{2}}(w)N^{\frac{1}{2}}(c)}{|c\sigma(w)|}.$ By Lemma 3.1, for any $c\in C_m$ we have \begin{equation*} \begin{split} |c|\le c_1|\alpha_{m+1}|\le c_1\frac{N^{\frac{1}{4}}(\alpha_{m+1})}{|\sigma(\xi)|^{\frac{1}{2}}}. \end{split} \end{equation*} By virtue of $\Big|\frac{\sigma(\alpha_{m+1})}{\alpha_{m+1}}\Big|\le |\xi|$ and $|N(\alpha_{m+1})|=|\alpha_{m+1}|^2|\sigma(\alpha_{m+1})|^2,$ we have \begin{equation*} \begin{split} |c|=&\frac{N^{\frac{1}{2}}(c)}{|\sigma(c)|}\ge \frac{N^{\frac{1}{2}}(c)}{c_2|\sigma(\alpha_{m+1})|} =\frac{N^{\frac{1}{2}}(c)|\alpha_{m+1}|}{c_2|N^{\frac{1}{2}}(\alpha_{m+1})|}\\ \ge& \frac{N^{\frac{1}{2}}(c)N^{\frac{1}{4}}(\alpha_{m+1})}{c_2|\xi|^{\frac{1}{2}}N^{\frac{1}{2}}(\alpha_{m+1})} =\frac{N^{\frac{1}{2}}(c)}{c_2|\xi|^{\frac{1}{2}}N^{\frac{1}{4}}(\alpha_{m+1})}. \end{split} \end{equation*} So we have \begin{equation*} \begin{split} \frac{N^{\frac{1}{2}}(c)}{c_2|\xi|^{\frac{1}{2}}N^{\frac{1}{4}}(\alpha_{m+1})}\le |c|\le \frac{c_1N^{\frac{1}{4}}(\alpha_{m+1})}{|\sigma(\xi)|^{\frac{1}{2}}}. \end{split} \end{equation*} When $w\in W_m$ and $w \ne -1,\xi,$ from the construction of $W_m,$ we have \begin{equation*} \begin{split} \frac{|\sigma(\xi)|}{|\xi|^\frac{1}{2}}N^{\frac{1}{4}}(w)\le |\sigma(\xi)||w|\le |\sigma(w)| \le|\xi||w| \le \frac{|\xi|}{|\sigma(\xi)|^{\frac{1}{2}}}N^{\frac{1}{4}}(w). \end{split} \end{equation*} When $w=-1$ or $\xi,$ clearly the inequality above also holds. Thus we get \begin{equation*} \begin{split} \frac{|\sigma(\xi)|N^{\frac{1}{2}}(c)N^{\frac{1}{4}}(w)}{c_2|\xi|N^{\frac{1}{4}}(\alpha_{m+1})}\le |c\sigma(w)|\le c_1 \frac{|\xi|}{|\sigma(\xi)|}N^{\frac{1}{4}}(w)N^{\frac{1}{4}}(\alpha_{m+1}).
\end{split} \end{equation*} It is easy to show that the function $f(x)=x+\frac{N^{\frac{1}{2}}(w)N^{\frac{1}{2}}(c)}{x}$ meets its maximal value on the boundary. The values of $f(x)$ at the boundary points $x=\frac{|\sigma(\xi)|N^{\frac{1}{2}}(c)N^{\frac{1}{4}}(w)}{c_2|\xi|N^{\frac{1}{4}}(\alpha_{m+1})}$ and $x=c_1 \frac{|\xi|}{|\sigma(\xi)|}N^{\frac{1}{4}}(w)N^{\frac{1}{4}}(\alpha_{m+1})$ can be estimated as follows. \begin{equation*} \begin{split} &\frac{|\sigma(\xi)|N^{\frac{1}{2}}(c)N^{\frac{1}{4}}(w)}{c_2|\xi|N^{\frac{1}{4}}(\alpha_{m+1})}+\frac{c_2|\xi|}{|\sigma(\xi)|}N^{\frac{1}{4}}(w)N^{\frac{1}{4}}(\mathcal{P}_{m+1})\\ &\le c_1\frac{|\sigma(\xi)|}{|\xi|}\frac{N^{\frac{1}{2}}(\alpha_{m+1})N^{\frac{1}{4}}(\alpha_{m+1})}{N^{\frac{1}{4}}(\alpha_{m+1})}+\frac{c_2|\xi|}{|\sigma(\xi)|}N^{\frac{1}{2}}(\alpha_{m+1})\\ &=\Big(c_1\frac{|\sigma(\xi)|}{|\xi|}+c_2\frac{|\xi|}{|\sigma(\xi)|}\Big)N^{\frac{1}{2}}(\alpha_{m+1}) \end{split} \end{equation*} and \begin{equation*} \begin{split} &\frac{c_1|\xi|}{|\sigma(\xi)|}N^{\frac{1}{4}}(w)N^{\frac{1}{4}}(\mathcal{P}_{m+1})+\frac{|\sigma(\xi)|N^{\frac{1}{2}}(c)N^{\frac{1}{4}}(w)}{c_1|\xi|N^{\frac{1}{4}}(\alpha_{m+1})}\\ &\le \frac{c_1|\xi|}{|\sigma(\xi)|}N^{\frac{1}{2}}(\alpha_{m+1})+c_1c_2\frac{|\sigma(\xi)|}{c_1|\xi|}\frac{N^{\frac{1}{2}}(\alpha_{m+1})N^{\frac{1}{4}}(\alpha_{m+1})}{N^{\frac{1}{4}}(\alpha_{m+1})}\\ &=\Big(c_2\frac{|\sigma(\xi)|}{|\xi|} + c_1\frac{|\xi|}{|\sigma(\xi)|}\Big)N^{\frac{1}{2}}(\alpha_{m+1}). \end{split} \end{equation*} So we have \begin{equation*} \begin{split} |c\sigma(w)|+|w\sigma(c)|=|c\sigma(w)|+\frac{N^{\frac{1}{2}}(w)N^{\frac{1}{2}}(c)}{|c\sigma(w)|} \le c'N^{\frac{1}{2}}(\alpha_{m+1}), \end{split} \end{equation*} where $c'=\mbox{max} \Big\{c_1\frac{|\sigma(\xi)|}{|\xi|}+c_2\frac{|\xi|}{|\sigma(\xi)|}, c_2\frac{|\sigma(\xi)|}{|\xi|}+c_1\frac{|\xi|}{|\sigma(\xi)|} \Big\}.$ In summary, we get \begin{equation*} \begin{split} |N(w-c)|&=(|w-c||\sigma(w)-\sigma(c)|)^2\\ &\le(N^{\frac{1}{2}}(w)+|w\sigma(c)|+|c\sigma(w)|+N^{\frac{1}{2}}(c))^2\\ &\le\Big(1+c_1c_2+c'\Big)^2N(\alpha_{m+1}). \end{split} \end{equation*} So when $N(\alpha_{m+1})=N(\mathcal{P}_{m+1})>\Big(1+c_1c_2+c'\Big)^2$, we have $W_m\subseteq C_mU'_1$. \end{proof} \begin{lemma}\quad If $N(\alpha_{m+1})>\Big(\frac{\delta\sqrt{c_1c_2}}{2}+\sqrt{\frac{c_1c_2\delta^2}{4}+\sqrt{c_1c_2}}\Big)^8,$ then $C_mG_m\subseteq C_mU'_1,$ i.e., condition II is satisfied. \end{lemma} \begin{proof}\quad By Lemma 3.3, in order to prove $C_mG_m\subseteq C_mU'_1$, it is sufficient to prove that for any $c\in C_m$ and $g\in G_m$ there exists a $\tilde{c}\in C_m$ such that $cg\equiv \tilde{c}~(\mbox{mod}\,\mathcal{P}_{m+1})$ and $N(cg-\tilde{c})<N^2(\mathcal{P}_{m+1}).$ So we should investigate when the inequality $N(cg-\tilde{c})<N^2(\mathcal{P}_{m+1})$ holds. Let $c,\tilde{c}\in C_m,g\in G_m,$ and let $M_1,M_2\in \mathbb{R}$ with the conditions: \begin{equation*} \begin{split} N(c)\le M_1,~ ~N(\tilde{c})\le M_1, ~ ~ |g|\le M_2, ~ ~|\sigma(g)|\le M_2. \end{split} \end{equation*} Then \begin{equation*} \begin{split} N^{\frac{1}{2}}(cg-\tilde{c})=&|cg-\tilde{c}||\sigma(c)\sigma(g)-\sigma(\tilde{c})|\\ \le&(|cg|+|\tilde{c}|)(|\sigma(c)\sigma(g)|+|\sigma(\tilde{c})|)\\ \le&|c\sigma(c)||g\sigma(g)|+M_2(|c\sigma(\tilde{c})|+|\tilde{c}\sigma(c)|)+|\tilde{c}\sigma(\tilde{c})|\\ =&N^{\frac{1}{2}}(c)N^{\frac{1}{2}}(g)+M_2(|c\sigma(\tilde{c})|+|\tilde{c}\sigma(c)|)+N^{\frac{1}{2}}(\tilde{c}).
\end{split} \end{equation*} Let us estimate the term $|c\sigma(\tilde{c})|+|\tilde{c}\sigma(c)|=|c\sigma(\tilde{c})|+\frac{N^{\frac{1}{2}}(c)N^{\frac{1}{2}}(\tilde{c})}{|c\sigma(\tilde{c})|}.$ By the definition of $C_m$, it is obvious that $|c|\le c_1|\alpha_{m+1}|$ and $|c|=\frac{N^{\frac{1}{2}}(c)}{|\sigma(c)|}\ge \frac{N^{\frac{1}{2}}(c)}{c_2|\sigma(\alpha_{m+1})|}.$ So we have \begin{equation*} \begin{split} \frac{N^{\frac{1}{2}}(c)}{c_2|\sigma(\alpha_{m+1})|}\le |c| \le c_1|\alpha_{m+1}|. \end{split} \end{equation*} Similarly, we have \begin{equation*} \begin{split} \frac{N^{\frac{1}{2}}(\tilde{c})}{c_1|\alpha_{m+1}|}\le \frac{N^{\frac{1}{2}}(\tilde{c})}{|\tilde{c}|}\le |\sigma(\tilde{c})|\le c_2|\sigma(\alpha_{m+1})|. \end{split} \end{equation*} Therefore \begin{equation*} \begin{split} \frac{N^{\frac{1}{2}}(c)N^{\frac{1}{2}}(\tilde{c})}{c_{1}c_{2}N^{\frac{1}{2}}(\alpha_{m+1})}\le |c\sigma(\tilde{c})|\le c_1c_2N^{\frac{1}{2}}(\alpha_{m+1}). \end{split} \end{equation*} Let $f(x)=x+\frac{N^{\frac{1}{2}}(c)N^{\frac{1}{2}}(\tilde{c})}{x}.$ It is easy to show that $f(x)$ meets its maximal value at $x=c_{1}c_{2}N^{\frac{1}{2}}(\alpha_{m+1}).$ So \begin{equation*} \begin{split} |c\sigma(\tilde{c})|+|\tilde{c}\sigma(c)|&=|c\sigma(\tilde{c})|+\frac{N^{\frac{1}{2}}(c)N^{\frac{1}{2}}(\tilde{c})}{|c\sigma(\tilde{c})|}\\ &\le c_{1}c_{2}N^{\frac{1}{2}}(\alpha_{m+1})+\frac{N^{\frac{1}{2}}(c)N^{\frac{1}{2}}(\tilde{c})}{ c_{1}c_{2}N^{\frac{1}{2}}(\alpha_{m+1})}\\ &\leq2c_1c_2N^{\frac{1}{2}}(\alpha_{m+1}). \end{split} \end{equation*} Then \begin{equation*} \begin{split} N^{\frac{1}{2}}(cg-\tilde{c})&\le N^{\frac{1}{2}}(c)N^{\frac{1}{2}}(g)+M_2(|c\sigma(\tilde{c})|+|\tilde{c}\sigma(c)|)+N^{\frac{1}{2}}(\tilde{c})\\ &\le M_1^\frac{1}{2}M_2^2+2c_1c_2M_2N^{\frac{1}{2}}(\mathcal{P}_{m+1})+M_1^{\frac{1}{2}} \end{split} \end{equation*} By the definition of $C_m$ and $G_m,$ we can take $M_1=c_1^2c_2^2N(\alpha_{m+1})$ and $M_2=\delta N^{\frac{1}{8}}(\alpha_{m+1}).$ Hence we have \begin{equation*} \begin{split} N^{\frac{1}{2}}(cg-\tilde{c})&\le c_1c_2\delta^2N^{\frac{3}{4}}(\alpha_{m+1})+2\delta c_1c_2N^{\frac{5}{8}}(\alpha_{m+1})+c_1c_2N^{\frac{1}{2}}(\alpha_{m+1}). \end{split} \end{equation*} So it is sufficient to consider the inequality \begin{equation*} \begin{split} c_1c_2\delta^2N^{\frac{3}{4}}(\alpha_{m+1})+2\delta c_1c_2N^{\frac{5}{8}}(\alpha_{m+1})+c_1c_2N^{\frac{1}{2}}(\alpha_{m+1})<N(\alpha_{m+1}), \end{split} \end{equation*} i.e. $$N^{\frac{1}{2}}(\alpha_{m+1})-c_1c_2\delta^2N^{\frac{1}{4}}(\alpha_{m+1})-2\delta c_1c_2N^{\frac{1}{8}}(\alpha_{m+1})-c_1c_2>0.$$ This implies that when $N(\alpha_{m+1})>\Big(\frac{\delta\sqrt{c_1c_2}}{2}+\sqrt{\frac{c_1c_2\delta^2}{4}+\sqrt{c_1c_2}}\Big)^8,$ we have $N^{\frac{1}{2}}(cg-\tilde{c})<N(\alpha_{m+1}),$ as required. \end{proof} \end{subsubsection} \begin{subsubsection}{Groenewegen's general bound} For any $m\in{\mathbb{Z}^{+}},$ we denote $$K_2U_m:=(U_{m} \otimes U_{m})/\langle a\otimes b |a, b\in{U_{m}},\ a+b=1\ \mbox{or} \ a+b=0\rangle$$ and $$K_{2}^{(m)}\mathcal{O}_F=\mbox{ker}\Big(K_2U_m\rightarrow \bigoplus_{Nv\leq{Nv_{m}}}k_{v}^{*}\Big).$$ It is clear that there is a natural map $K_2U_m\rightarrow K_{2}F.$ Moreover we write $$c_{F}=\mbox{max}\{2^{2n}\rho d^{2},2^{2n/3},\rho^{1/3}(d\tilde{d}^2)^{2/3},\rho d^3\}$$ where $$d=\frac{2^{n}\Gamma{(\frac{n+2}{2})}}{(\pi n)^{n/2}}|\Delta|^{1/2} , \ \ \tilde{d}=\Big(\frac{2}{\pi}\Big)^s |\Delta|^{1/2}$$ and $\rho$ is the packing density of an $n$-dimensional sphere.
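For the imaginary cyclic quartic fields considered here we have $n=4$ and $s=2,$ so these constants specialize to $$d=\frac{2^{4}\Gamma(3)}{(4\pi)^{2}}|\Delta|^{1/2}=\frac{2}{\pi^{2}}|\Delta|^{1/2},\qquad \tilde{d}=\Big(\frac{2}{\pi}\Big)^{2}|\Delta|^{1/2}=\frac{4}{\pi^{2}}|\Delta|^{1/2}.$$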
In \cite{GROENEWEGEN001}, Groenewegen proved the following theorem. \begin{theorem}\quad For every number field $F$, for $N v_{m}>c_{F},$ the image of $K_{2}^{(m)}\mathcal{O}_F$ in $K_{2}F$ is equal to the tame kernel of $F$. \end{theorem} \begin{remark} \quad For an imaginary cyclic quartic field $\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ of class number one (see Section 2), by Theorem 3.6 we can get a common bound of $m$ for both condition I and condition II. But, from the computation of the next section, we know that for condition I the bound obtained by Lemma 3.4 is better than that obtained by Theorem 3.6 except for the cases $B=6,D=37$ and $B=2,D=61$, and for condition II, the bound obtained by Theorem 3.6 is better than that obtained by Lemma 3.5 except for the case $B=1,D=2.$ The comparison of the results is listed in the following table. \end{remark} \begin{table}[!hbp] \caption{}\label{eqtable} \centering \ \ \ \ \ \ \ \ \ \ \ \begin{tabular}{|c|c|c|c|} \hline number field $F$ & Lemma 3.4 & Lemma 3.5 & Theorem 3.6\\ \hline $B=1,D=2$ & $172.525$ & $3253.539$ & $16146.993$ \\ \hline $B=2,D=13$ & $1173.677$ & $45879.279$ & $17321.1$ \\ \hline $B=2,D=29$ & $48710.067$ & $1867701099.860$ & $192289.567$ \\ \hline $B=6,D=37$& $5284749.383$ & $61546835.003$ & $399362.147$ \\ \hline $B=2,D=53$ & $114166.647$ & $4086894943.478$ & $1173787.115$ \\ \hline $B=2,D=61$ & $180648285.891$ & $1680328728.448$ & $1789580.481$ \\ \hline \end{tabular} \end{table} \end{subsubsection} \end{subsection} \end{section} \begin{section}{Decreasing the value $m$} \begin{subsection}{The general idea} Let $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ be an imaginary cyclic quartic field with class number $h_F =1$ and $\xi$ the fundamental unit. As Belabas and Gangl did, we also aim at decreasing the theoretical bound of $m$ practically. The general idea is as follows. At first, by Lemma 3.2, we get the constants $c_1,c_2,c'.$ Let $c''=\min\{(1+c_1c_2+c')^2, c_F\}.$ If $c''\leq c_{F},$ there exists an $m_{1}\in{\mathbb Z^{+}}$ satisfying $N(\mathcal{P}_{m_{1}})\leq c''$ and $N(\mathcal{P}_{m_{1}+1})> c''.$ Thus, by Lemma 3.4, for $m\in{\mathbb Z^{+}}$ satisfying $m>m_{1}$ and $c''<N(\mathcal{P}_{m_{1}+1})\leq N(\mathcal{P}_{m}),$ condition I holds for $m_1+1$. We want to show that it holds also for $m_1.$ If $c_{F}\leq c'',$ there exists an $m_{1}'\in{\mathbb Z^{+}}$ satisfying $N(\mathcal{P}_{m_{1}'})\leq c''$ and $N(\mathcal{P}_{m_{1}'+1})> c''.$ By Theorem 3.6, the image of $K_{2}^{(m_{1}'+1)}\mathcal{O}_F$ in $K_{2}F$ is equal to the tame kernel of $F$. However, it is obvious that the image of $K_{2}^{(m_{1}'+1)}\mathcal{O}_F$ in $K_{2}F$ is $\mbox{ker}\Big(\partial: K_2^{S_{m_{1}'+1}}(F)\longrightarrow \coprod_{v\in S_{m_{1}'+1}}k^*_v\Big).$ So it is necessary to show that condition I holds for $m_{1}'.$ Without loss of generality, we denote $m_{1}'$ also by $m_{1}.$ Similarly, from Lemma 3.5 or Theorem 3.6, there exists an $m_{2}\in{\mathbb Z^{+}}$ such that condition II holds for $m_2+1$. We want to show that condition II holds also for $m_2.$ Then, for $m=m_1$ (resp. $m_{2}$), we will construct the subsets $G_{m-1},W_{m-1}$ and $C_{m-1}$ satisfying condition I (resp. condition II). In this way, the value of $m$ can be decreased step by step. \end{subsection} \begin{subsection}{Checking that $\partial_{m}$ is an isomorphism} Our idea for checking that $\partial_{m}$ is an isomorphism is described as follows. \textbf{(I) Constructing the subset $W_{m-1}.$} Let $\xi$ be the fundamental unit.
By the construction in Subsection 3.2, the subset $$W_{m-1}=\{\alpha_1,\alpha_2,\cdots,\alpha_{m-1}\}\bigcup \{-1,\xi\}$$ needs to be determined, where $\alpha_i\in{\mathcal{O}_F}$ satisfies $\mathcal{P}_{i}=(\alpha_i)$ and $|\sigma(\xi)|\le \Big|\frac{\sigma(\alpha_i)}{\alpha_i}\Big|\le |\xi|$ for each $i=1,2,\cdots,m-1.$ However, firstly, for some fixed $i\in\{1,2,\cdots,m-1\},$ we must confirm that the generator $\alpha_i$ of the prime ideal $\mathcal{P}_i$ satisfies that $\Big|\frac{\sigma(\alpha_i)}{\alpha_i}\Big|$ nearly equals $1.$ Fortunately, in the PARI library the function \textbf{GEN bnfisprincipal0(GEN bnf, GEN x, long flag)} can return such a generator $\alpha_{i}$ for the prime ideal $\mathcal{P}_{i}$. In fact, in the algorithm implemented by the above function, the generator has been reduced, which means that $\Big|\frac{\sigma(\alpha_i)}{\alpha_i}\Big|$ nearly equals $1.$ Secondly, we must get such an $\alpha_i$ for each $i\leq m-1.$ Thus, we must first get the prime ideals whose norms are less than or equal to the bound determined by Lemma 3.4 (resp. Lemma 3.5). In fact, for each prime number $p\in{\mathbb{Z}}$, it is easy to determine its residue class degree $f_{p}$ and to obtain the prime ideals above it by the PARI function \textbf{GEN idealprimedec(GEN nf, GEN p, long f)}. So by iterating through the prime numbers which can be factored into prime ideals with norm less than the bound, we can get the required $\alpha_i\in W_{m-1}$ for each $i=1,2,\cdots,m-1.$ \textbf{(II) Constructing the subset $G_{m-1}.$} For the only element $g_{m-1}\in G_{m-1}$, we know that (i) $g_{m-1}(mod\, \mathcal{P}_{m})$ is a generator of the multiplicative cyclic group $k^{*}_{v_{m}}$ of the residue class field $k_{v_{m}}$, by the second part of condition (ii) in Theorem 1.1; (ii) the value $\big|\frac{g_{m-1}}{\sigma{(g_{m-1})}}\big|$ should nearly equal $1,$ by the proof of Lemma 3.5. In the case of $f_{v_{m}}=1$, it is obvious that $$\langle g'_{m-1}\,(mod\, \mathcal{P}_{m}) \rangle =k^{*}_{v_{m}}\cong{(\mathbb{Z}/(\mathcal{P}_{m}\cap{\mathbb{Z}}))^{*}}= \langle g'_{m-1}\,(mod\, \mathcal{P}_{m}\cap{\mathbb{Z}}) \rangle,$$ where $g'_{m-1}\in{\mathbb{Z}}.$ Set $g_{m-1}=g'_{m-1}.$ Then we can get $G_{m-1}=\{g_{m-1}\}$ with $\big|\frac{g_{m-1}}{\sigma{(g_{m-1})}}\big|=1.$ In the case of $f_{v_{m}}\neq 1,$ by the PARI function \textbf{GEN Idealstar(GEN nf, GEN ideal, long flag)}, a generator $g_{m-1}(mod\, \mathcal{P}_{m})$ of the cyclic group $k^{*}_{v_{m}}$ can be obtained. So we can set $G_{m-1}=\{g_{m-1}\}.$ It is easy to show that the above conditions (i) and (ii) are satisfied for the only element $g_{m-1}$ of the set $G_{m-1}$.
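As an illustration of step (I), the following fragment is a minimal sketch of how the generators $\alpha_i$ can be collected with the libpari functions mentioned above; the helper name \texttt{collectW}, the variable \texttt{bound} and the surrounding initialization (\texttt{pari\_init}, \texttt{bnfinit}) are our assumptions and not part of the PARI library, and we use the two-argument form of \texttt{idealprimedec}.
\begin{lstlisting}[language=C++]
#include <pari/pari.h>
#include <vector>

/* A minimal sketch (assuming pari_init has been called and
 * bnf has been computed by bnfinit for the field F): collect
 * a reduced generator alpha_i for every prime ideal of norm
 * at most bound. The helper name collectW is ours, and PARI
 * stack garbage collection is elided for clarity. */
std::vector<GEN> collectW(GEN bnf, GEN bound)
{
    std::vector<GEN> W;
    GEN nf = bnf_get_nf(bnf);
    forprime_t T;
    forprime_init(&T, gen_2, bound); /* rational primes p <= bound */
    GEN p;
    while ((p = forprime_next(&T)) != NULL)
    {
        GEN dec = idealprimedec(nf, p); /* prime ideals above p */
        for (long i = 1; i < lg(dec); i++)
        {
            GEN pr = gel(dec, i);
            GEN Np = powiu(pr_get_p(pr), pr_get_f(pr)); /* N(P) = p^f */
            if (cmpii(Np, bound) > 0) continue; /* norm too large */
            /* flag nf_GEN: also return a reduced generator; the
             * result is a pair [e, alpha] with P = (alpha), since
             * the class number is one */
            GEN isp = bnfisprincipal0(bnf, pr, nf_GEN);
            W.push_back(gel(isp, 2)); /* the generator alpha_i */
        }
    }
    return W; /* -1 and the fundamental unit are added separately */
}
\end{lstlisting}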
\textbf{(III) Constructing the subset $C_{m-1}.$} By the construction in Subsection 3.2, the subset $C_{m-1}$ contains liftings of all elements of the multiplicative group $k^{*}_{v_{m}}$ together with $1\in{F}.$ Moreover, by the proofs of Lemma 3.4 and Lemma 3.5, each element $c_{m-1}$ of the set $C_{m-1}$ should satisfy that the value $\big|\frac{c_{m-1}}{\sigma{(c_{m-1})}}\big|$ nearly equals $1.$ We can get the generator $g_{m-1}(mod\, \mathcal{P}_{m})$ of the group $k^{*}_{v_{m}},$ so each element $c_{m-1,i}(mod\, \mathcal{P}_{m})$ of the group $k^{*}_{v_{m}}$ can be expressed as $$c_{m-1,i}(mod\, \mathcal{P}_{m})=(g_{m-1}(mod\, \mathcal{P}_{m}))^{i}$$ where $i=1,2,\cdots,N(v_m)-1.$ But it is difficult to find a lifting $c_{m-1,i}$ of the element $c_{m-1,i}(mod\, \mathcal{P}_{m})$ which satisfies that the value $\big|\frac{c_{m-1,i}}{\sigma{(c_{m-1,i})}}\big|$ nearly equals $1.$ The method we use to get a suitable lifting is as follows. Firstly, let $c'_{m-1,i}=g^{i}_{m-1}-\beta\xi^{k}$ for each $i=1,2,\cdots,N(v_m)-1,$ where $\beta\in{\mathcal{O}_F}$ and $k$ is a nonnegative integer. Secondly, when $\beta$ runs through the elements of $\mathcal{O}_F$ in increasing order by norm and $k$ runs through all nonnegative integers in increasing order, we can determine whether $c'_{m-1,i}\in{\mathcal{P}_{m}}$ holds. Thus we can get the minimal $\beta$ and $k$ such that $c'_{m-1,i}\in{\mathcal{P}_{m}}$ for each $i=1,2,\cdots,N(v_m)-1,$ and therefore $\beta\xi^{k}$ is a lifting of $c_{m-1,i}(mod\, \mathcal{P}_{m}).$ Hence, we can let $c_{m-1,i}=\beta\xi^{k}.$ Lastly, we can obtain the set $C_{m-1}=\{c_{m-1,i}|i=1,2,\cdots,N(v_m)-1\}\bigcup{\{1\}}.$ \textbf{(IV) Checking condition I (II).} After obtaining the subsets $W_{m-1},G_{m-1}$ and $C_{m-1}$, we can now check condition I (II). Fortunately for us, the PARI function \textbf{GEN bnfissunit(GEN bnf, GEN sfu, GEN x)} can help us to check whether $\gamma\in{U_{m}}$ holds for a given $\gamma\in{\mathcal{O}_F}.$ Thus it is easy to write a program to check condition I (II) for the finite prime place $v_{m}.$ Using the above ideas, we can design the software architecture and algorithms and write a program to compute some tame kernels $K_2\mathcal{O}_F$ for the cyclic quartic fields $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ with class number one. \end{subsection} \begin{subsection}{Designing the classes} It is well known that Tate's theorem holds for any number field. Thus we can build a software architecture that is extensible and reusable for computing the tame kernel of a general number field, with the cases of imaginary cyclic quartic fields with class number one as examples. So in the following computation, we will first focus on the objects instead of the process.
\end{subsection} \begin{subsection}{Designing the classes} It is well known that Tate's theorem holds for any number field. Thus we can build a software architecture that is extensible and reusable for computing the tame kernel of a general number field, with the imaginary cyclic quartic fields with class number one as examples. So, in the following, we first focus on the objects instead of the process. All of the objects are as follows: (1) the cyclic quartic field $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$; (2) the prime ideal $v_m$ of the ring of algebraic integers $\mathcal{O}_{F};$ (3) the verification method used in this section; (4) the group of $S_{m}$-units $U_{m}=\{a\in{F}\,|\,v(a)=0\ \mbox{for all}\ v\not\in{S_{m}}\};$ (5) the three subsets $C_{m-1}$, $W_{m-1}$ and $G_{m-1}$ of $U_{m-1}$ corresponding to $v_m$; (6) the constants $c_1$, $c_2$ corresponding to $F.$ Then, according to the objects and the relationships among them, we design the following three classes: (1) {\it CquarField} (an abstract description of the field $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$); (2) {\it Cideal} (an abstract description of the prime ideal $v_m$); (3) {\it Ccheck} (an abstract description of the verification method). Moreover, the constants $c_1$, $c_2$ are regarded as attributes of {\it CquarField}, and the sets $C_{m-1}$, $W_{m-1}$, $G_{m-1}$ as attributes of {\it Cideal}; an object of {\it CquarField} is regarded as an attribute of {\it Cideal}, which is an abstract description of the fact that ``the prime ideal belongs to the cyclic quartic field $F$''; an object of {\it CquarField} is also regarded as an attribute of the class {\it Ccheck}, which means that ``the verification method corresponds to a given cyclic quartic field''. In summary, the relations described above are indicated by the static class diagram given in Figure 2. \begin{remark} \quad The reason why we use Object-Oriented Programming (OOP) is that the architecture can be extended. For example, if we find a way to compute the tame kernel $K_{2}\mathcal{O}_{F_1}$ for another number field $F_1$, the only things we must do are: (1) creating a class $CF_1$ corresponding to $F_1$; (2) creating a class $CF$ as the parent class of $CF_1$ and {\it CquarField}; (3) making an object of $CF$ an attribute of {\it Cideal} and {\it Ccheck}. \end{remark} Thus, we have completed the creation of the embryonic form of the architecture. The last task is to implement the classes.
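For concreteness, the class diagram of Figure 2 can be summarized by the following attribute-level skeleton (member names are illustrative; the methods are listed in the next subsection).
\begin{lstlisting}[language=C++]
#include <pari/pari.h>

/* Attribute-level skeleton of the three classes (illustrative). */
class CquarField {        // the field F = Q(sqrt(-(D+B*sqrt(D))))
public:
    GEN bnf;              // PARI object bnfinit(F)
    GEN c_1, c_2;         // the constants of Lemma 3.2
};

class Cideal {            // a prime ideal v_m of O_F
public:
    CquarField* field;    // "v_m is a prime ideal of F"
    GEN pr;               // the PARI prime-ideal structure
    GEN setC, setW, setG; // the subsets C_{m-1}, W_{m-1}, G_{m-1}
};

class Ccheck {            // the verification method
public:
    CquarField* field;    // "the check is attached to a given F"
};
\end{lstlisting}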
\end{subsection} \begin{subsection}{The methods of the three classes} By the theory of Object-Oriented Programming, a class is partitioned into three parts: the name, the attributes and the methods. For the above three classes, we have designed their methods, which are listed as follows (the algorithms implemented by these methods are described below): (i) The methods of {\it CquarField}:
\begin{lstlisting}[language=C++]
/*return the constant c_1*/
GEN getc_1();
/*return the constant c_2*/
GEN getc_2();
/*return the transition matrix between the bases*/
GEN transMatrix();
/*return the bound determined by Lemma 3.4*/
GEN getBoundOne();
/*return the bound determined by Lemma 3.5*/
GEN getBoundTwo();
/*return all prime ideals whose norms are less than
 *the bound which is determined by Lemma 3.4 (resp. Lemma 3.5)
 *and corresponds to the parameter num_condition 1 (resp. 2)*/
GEN getPrimeTable(int num_condition);
\end{lstlisting}
(ii) The methods of {\it Cideal}:
\begin{lstlisting}[language=C++]
/*return the set W_{m-1} corresponding to the prime
 *ideal represented by the class Cideal*/
GEN getSetInitW();
/*return the set G_{m-1} corresponding to the prime
 *ideal represented by the class Cideal*/
GEN getSetInitG();
/*return all ideals whose norms are less than
 *the norm of the prime ideal represented by
 *the class Cideal*/
GEN fgetAllideal();
/*return the set C_{m-1} corresponding to the prime
 *ideal represented by the class Cideal;
 *this is the parent thread function*/
GEN Para_getSetInitC();
/*this is the child thread function*/
static void* Part_getSetInitC(void *arg);
/*check condition I corresponding to the prime ideal
 *represented by the class Cideal*/
bool checkConditionOne();
/*check condition II corresponding to the prime ideal
 *represented by the class Cideal*/
bool checkConditionTwo();
/*return the set U_{m} corresponding to the prime
 *ideal represented by the class Cideal*/
GEN getUm();
\end{lstlisting}
(iii) The methods of {\it Ccheck}:
\begin{lstlisting}[language=C++]
/*the parent thread function to check condition I*/
bool Para_checkConditionOne(int num_thread);
/*the parent thread function to check condition II*/
bool Para_checkConditionTwo(int num_thread);
/*the child thread function to check condition I*/
static void* Part_checkConditionOne(void *arg);
/*the child thread function to check condition II*/
static void* Part_checkConditionTwo(void *arg);
\end{lstlisting}
\end{subsection} \begin{subsection}{Creating the sequence diagram that shows the expected workflow} In order to show the process of computing the tame kernel of the number field $F=\mathbb{Q}\Big(\sqrt{A(D+B\sqrt{D})}\Big),$ we create the sequence diagram given in Figure 3. \begin{remark}\quad Some remarks on the sequence diagram. Firstly, we create an object of the class {\it Ccheck}, named checker, by calling the constructor \textbf{Ccheck::Ccheck(int a, int b, int c, int d)} of the class {\it Ccheck}, where the formal parameters a, b, c and d indicate the four parameters $A,B,C$ and $D$ of the cyclic quartic field $F=\mathbb{Q}\Big(\sqrt{A(D+B\sqrt{D})}\Big),$ respectively. In this process, we create an object qfCom of the class {\it CquarField}, which represents the cyclic quartic field $F=\mathbb{Q}\Big(\sqrt{A(D+B\sqrt{D})}\Big),$ and some important invariants of the cyclic quartic field $F$, such as the fundamental unit, the discriminant of the number field $F$ and so on, are obtained. Secondly, after many tests we found the following facts about the subset $C_{s-1}$ of $U_{s-1}$ corresponding to the prime place $v_s$ of the number field $F$:
(1) In the process of obtaining the subsets $C_{s-1},$ $G_{s-1}$ and $W_{s-1}$ of $U_{s-1}$, the most difficult one to obtain is $C_{s-1}$; (2) the value of the theoretical bound determined by Lemma 3.4, Lemma 3.5 and Theorem 3.6 is very large, so the sets $C_{s-1}$ obtained by computation are also very large; (3) we expect that important information about the tame kernel of the number field $F$ is hidden in the subset $C_{s-1}$ of the set $U_{s-1}$ for every prime ideal $v_{s}$ of the number field $F.$\\ Thus, it must take a long time to obtain the set $C_{s-1}$ for every prime place $v_{s}$ of the number field $F$ whose norm $N(v_{s})$ is less than the theoretical bound, and we think it a good idea to obtain the sets $C_{s-1}$ prior to the sets $G_{s-1}$ and $W_{s-1}$. Moreover, in order to extract more information on the tame kernel of the number field $F$ from those sets, we also keep all of the obtained sets $C_{s-1}$ in persistent storage. Then, based on the above ideas, for every prime place $v_s$ of the number field $F$ whose norm $N(v_{s})$ is less than the theoretical bound, after the object checker has been created, the sets $C_{s-1}$ are obtained as follows. (Step 1) In order to obtain all of the sets $C_{s-1}$, the method \textbf{bool Ccheck::Para\_getSetC(int num\_thread, int num\_threadf)} is called, where the first (resp. second) parameter specifies how many threads are used for obtaining the sets $C_{s-1}$ corresponding to the prime ideals $v_{s}$ with residue class degree $f_{v_{s}}=1$ (resp. $f_{v_{s}}\neq1$). (Step 2) Since the number of prime ideals $v_{s}$ with norm $N(v_{s})$ less than the theoretical bound is very large, parallel computing is needed; the method \textbf{void* Ccheck::Part\_getSetC(void *arg)} is the child thread function. (Step 3) In the process of calling the method \textbf{Para\_getSetC(int num\_thread, int num\_threadf)} to obtain all sets $C_{s-1}$, we must complete the following two tasks. On the one hand, it is necessary to get all prime numbers corresponding to the prime ideals of the number field $F$ whose norms are less than the theoretical bound, for which the method \textbf{GEN CquarField::getPrimeTable(int num\_condition)} is designed. On the other hand, by the definition (3.2) of $C'_{s}$, we must also obtain all ideals of the number field $F$ whose norms are less than $c_1c_2N(v_{s+1}).$ However, using the PARI library function \textbf{GEN ideallist0(GEN nf, long bound, long flag)} takes a very long time because the value $c_1c_2N(v_{s+1})$ is too large. For example, in the case $F=\mathbb{Q}\Big(\sqrt{-(13+2\sqrt{13})}\Big)$ the theoretical bound is $45879$ and it takes about $2.5$ hours to obtain the ideals mentioned above, while in the case $F=\mathbb{Q}\Big(\sqrt{-(29+2\sqrt{29})}\Big)$ the theoretical bound is $192289$ and it takes about $60$ hours. Moreover, the PARI library provides no function that returns all ideals of a given norm $n\in{\mathbb{Z}}.$ So, in order to minimize the time consumed, we must make use of parallel computing in this procedure. Thus the methods \textbf{GEN CquarField::para\_getAllideal(long num\_thread, long num\_condition)} and \textbf{void* CquarField::Part\_getAllideal(void *arg)} are designed as the parent thread function and the child thread function, respectively.
The parameter \textbf{num\_thread} specifies how many threads can be used for computing those ideals, and the parameter \textbf{num\_condition} specifies under which condition the ideals are computed. With these two methods, it takes only about $10$ minutes (resp. $1.5$ hours) to obtain the ideals when $F=\mathbb{Q}\Big(\sqrt{-(13+2\sqrt{13})}\Big)$ (resp. $F=\mathbb{Q}\Big(\sqrt{-(29+2\sqrt{29})}\Big)$). (Step 4) In this step, by going through all prime ideals with norm less than the theoretical bound, we obtain all of the sets $C_{s-1}.$ In every loop we must complete the following tasks. Firstly, calling the constructor \textbf{Cideal::Cideal(CquarField* quarf, GEN gen\_prime, int i\_th)}, we create the object of the class {\it Cideal} corresponding to a prime ideal of the number field $F$ whose norm is less than the theoretical bound; secondly, calling the method \textbf{GEN Cideal::getSetInitG()}, we obtain a generator of the cyclic group $k_{v_{s}}^{*}$; lastly, calling the parent thread function \textbf{GEN Cideal::Para\_getSetInitC()} and the child thread function \textbf{void* Cideal::Part\_getSetInitC(void *arg)}, we obtain the set $C_{s-1}$ and save it as a text file. Finally, to check condition I (resp. II), we use POSIX threads to design the parent thread function \textbf{bool Ccheck::Para\_checkConditionOne(int num\_thread)} (resp. \textbf{bool Ccheck::Para\_checkConditionTwo(int num\_thread)}) and the child thread function \textbf{void* Ccheck::Part\_checkConditionOne(void *arg)} (resp. \textbf{void* Ccheck::Part\_checkConditionTwo(void *arg)}). \end{remark}
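To make the parent/child split of the remark concrete, here is a minimal free-function sketch. The helper \texttt{idealsOfNorm} is hypothetical and stands for the per-norm step of Algorithm 4.5 below (PARI itself offers no such primitive, as noted above); the per-thread stack size is a placeholder, and each child thread obtains its own PARI stack through \textbf{pari\_thread\_alloc}/\textbf{pari\_thread\_start}.
\begin{lstlisting}[language=C++]
#include <pthread.h>
#include <pari/pari.h>
#include <vector>
#include <algorithm>

/* idealsOfNorm(nf, n): the list of ideals of norm exactly n
 * (hypothetical helper implementing the per-norm step of
 * Algorithm 4.5; PARI offers no such primitive). */
GEN idealsOfNorm(GEN nf, long n);

struct Task { pari_thread pth; long lo, hi; GEN out; };

/* Child thread: enumerate the ideals with norm in [lo, hi). */
static void* Part_getAllideal(void *arg)
{
    Task *t = (Task*) arg;
    GEN nf = pari_thread_start(&t->pth);  /* thread-local PARI stack */
    GEN res = cgetg(1, t_VEC);
    for (long n = t->lo; n < t->hi; n++)
        res = shallowconcat(res, idealsOfNorm(nf, n));
    t->out = gclone(res);                 /* survives the thread stack */
    pari_thread_close();
    return NULL;
}

/* Parent thread: split [1, bound] into num_thread chunks and join. */
void Para_getAllideal(GEN nf, long bound, long num_thread)
{
    std::vector<Task> tasks(num_thread);
    std::vector<pthread_t> tid(num_thread);
    long step = bound / num_thread + 1;
    for (long i = 0; i < num_thread; i++) {
        tasks[i].lo = i * step + 1;
        tasks[i].hi = std::min((i + 1) * step + 1, bound + 1);
        pari_thread_alloc(&tasks[i].pth, 1L << 28, nf);
        pthread_create(&tid[i], NULL, Part_getAllideal, &tasks[i]);
    }
    for (long i = 0; i < num_thread; i++) pthread_join(tid[i], NULL);
    /* ...concatenate the tasks[i].out and free them with gunclone... */
}
\end{lstlisting}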
\end{subsection} \begin{subsection}{The algorithms implemented by the methods} \begin{subsubsection}{Some frequently-used algorithms} In the process of decreasing the value $m$, there are three things we must compute from time to time: the first is to decompose a (positive) prime number $p$ into prime ideals of the cyclic quartic field $F$, the second is to obtain the generators of ideals of the cyclic quartic field $F$, and the third is to decide whether an element of $\mathcal{O}_F$ lies in the group $U_{m-1}.$ These three things can be done by using Lemma 4.1, Algorithm 4.1 and Algorithm 4.2 below, respectively. Moreover, as is well known, Lemma 4.1 is implemented by the PARI function \textbf{GEN idealprimedec(GEN nf, GEN p)}, and Algorithm 4.1 and Algorithm 4.2 are implemented by \textbf{GEN bnfisprincipal0(GEN bnf, GEN x, long flag)} and \textbf{GEN bnfissunit(GEN bnf, GEN sfu, GEN x)}, respectively. \begin{lemma}[Theorem 4.8.13 (\cite{cohen138})]\quad Let $F=\mathbb{Q}(\theta)$ be a number field, where $\theta$ is an algebraic integer whose minimal polynomial is denoted by $T(X)$. Let $f$ be the index of $\theta$. Then for any prime $p$ not dividing $f$ one can obtain the prime decomposition of $p\mathcal{O}_{F}$ as follows.
Let $$T(X)\equiv \prod_{i=1}^{g}T_{i}(X)^{e_{i}}\pmod p$$ be the decomposition of $T$ into irreducible factors in $\mathbb{F}_{p}[X],$ where the $T_{i}(X)$ are taken to be monic. Then $$p\mathcal{O}_{F}=\prod_{i=1}^{g}{\mathcal{P}_{i}}^{e_{i}},$$ where $$\mathcal{P}_{i}=(p,T_{i}(\theta))=p\mathcal{O}_{F}+T_{i}(\theta)\mathcal{O}_{F}.$$ Furthermore, the residual index $f_i$ is equal to the degree of $T_i(X).$ \end{lemma} \begin{algorithm}[!htb] \caption{(Algorithm 6.5.10 (\cite{cohen138}))} \begin{algorithmic} \REQUIRE Given an ideal $I$ of $\mathcal{O}_F$ for a number field $F=\mathbb{Q}(\theta).$\\ \ENSURE Test whether $I$ is a principal ideal, and if it is, compute an $\alpha\in{F}$ such that $I=\alpha\mathcal{O}_F.$\\ \STATE \textbf{1.[Reduce to primitive]} If $I$ is not a primitive integral ideal, compute a rational number $a$ such that $I/(a)$ is primitive integral, and set $I\leftarrow I/(a).$\\ \STATE \textbf{2.[Small norm]} If $N(I)$ is divisible only by prime numbers below the prime ideals in the factor base, set $v_{i}\leftarrow 0$ for $i<s$, $\beta\leftarrow a$ and go to step $4.$\\ \STATE \textbf{3.[Generate random relations]} Choose random nonnegative integers $v_{i}<20$ for $i<s$, compute the ideal $I_{1}\leftarrow I\prod_{1\leq{i}\leq{s}}S_{i}^{v_i},$ and let $J=I_{1}/(\gamma)$ be the ideal obtained by LLL-reducing $I_1$ along the direction of the zero vector. If $N(J)$ is divisible only by the prime numbers less than or equal to $L_1$, set $I\leftarrow J$, $\beta\leftarrow a\gamma$ and go to step 4. Otherwise, go to step 3.\\ \STATE \textbf{4.[Factor $I$]} Using Algorithm 4.8.17 in \cite{cohen138}, factor $I$ on the factor base FB, say $I=\prod_{1\leq{i}\leq{k}}p_{i}^{x_i}.$ Let $X$ (resp. $Y$) be the column vector of the $x_i-v_i$ for $i\leq r$ (resp. $i>r$), where $r$ is the number of rows of the matrix $B$ as above, and where we set $v_i=0$ for $i>s.$\\ \STATE \textbf{5.[Check if principal]} Let $Z\leftarrow D^{-1}U(X-BY)$ (since $D$ is a diagonal matrix, no matrix inversion is needed here). If some entry of $Z$ is not integral, output a message saying that the ideal $I$ is not a principal ideal and terminate the algorithm.\\ \STATE \textbf{6.[Use Archimedean information]} Let $A$ be the $(c_1+k)$-column vector whose first $c_1$ elements are zero, whose next $r$ elements are the elements of $Z$, and whose last $k-r$ elements are the elements of $Y.$ Let $A_{C}=(a_{i})_{1\leq{i}\leq{r_{u}}}\leftarrow{M_{C}^{''}A}.$\\ \STATE \textbf{7.[Restore correct information]} Set $s\leftarrow(\ln N(I))/n,$ and let $A'=(a_{i}')_{1\leq{i}\leq{n}}$ be defined by $a_{i}'\leftarrow \exp(s+a_i)$ if $i\leq{r_1}$ and $a_{i}'\leftarrow \exp(s+\overline{a_{i-r_2}})$ if $r_{u}<i\leq n.$\\ \STATE \textbf{8.[Round]} Set $A''\leftarrow{\Omega^{-1}A'}$, where $\Omega=(\sigma_j(\omega_i))$ as in Algorithm 6.5.8 in \cite{cohen138}. The coefficients of $A''$ must be close to rational integers. If this is not the case, then either the precision used in the computation was insufficient or the desired $\alpha$ is too large. Otherwise, round the coefficients of $A''$ to the nearest integers.\\ \STATE \textbf{9.[Terminate]} Let $\alpha '$ be the element of $\mathcal{O}_F$ whose coordinates in the integral basis are given by the vector $A''.$ Set $\alpha\leftarrow{\beta\alpha '}.$ If $I\neq\alpha\mathcal{O}_F,$ output an error message stating that the accuracy is not sufficient to compute $\alpha.$ Otherwise, output $\alpha$ and terminate the algorithm.
\end{algorithmic} \end{algorithm} \begin{algorithm}[!htb] \caption{(Algorithm 7.4.8 (\cite{cohen193}))} \begin{algorithmic} \STATE Let $\mathrm{Cl}(F)=(B,D_{B})$ be the SNF of the class group of $F$, where $B=(\bar{\mathfrak{b}_{i}})$ and the $\mathfrak{b}_i$ are ideals of $F.$ The algorithm computes algebraic integers $\gamma_{i}$ for $1\leq{i}\leq{s}$ such that $U_{S}(F)=U(F)\oplus_{1\leq{i}\leq{s}}{\mathbb{Z}\gamma_{i}}.$ We let $\mathfrak{p}_{i}$ be the prime ideals of $S.$\\ \STATE\textbf{1.[Compute discrete logarithms]} Using the principal ideal algorithm, compute the matrix $P$ whose columns are the discrete logarithms of $\bar{\mathfrak{p}}$ with respect to $B,$ for each $\mathfrak{p}\in{S}.$\\ \STATE\textbf{2.[Compute big HNF]} Using one of the algorithms for HNF computations, compute a unimodular matrix $U=\Big(\begin{matrix}U_1 & U_2\\U_3 & U_4\end{matrix}\Big)$ such that $(P|D_{B})U=(0|H)$ with $H$ in HNF.\\ \STATE\textbf{3.[Compute the $a_{j}$]} Compute the HNF $W$ of the matrix $U_1,$ and set $[a_1,a_2,\cdots,a_s]\leftarrow{[\mathfrak{p}_{1},\cdots,\mathfrak{p}_{s}]W}.$\\ \STATE\textbf{4.[Find generators]} (Here the $a_{j}$ are principal ideals.) Using the principal ideal algorithm again, for each $j$ find $\gamma_{j}$ such that $a_j=\gamma_{j}\mathcal{O}_{F}.$ Output the $\gamma_{j}$ and terminate the algorithm. \end{algorithmic} \end{algorithm} \end{subsubsection} \begin{subsubsection}{The algorithms implemented by the methods of CquarField} In the class {\it CquarField}, by Lemma 2.3 it is easy to design the algorithm implemented by the method \textbf{GEN transMatrix()}, which returns the transition matrix from the basis $1,\beta,\beta^2,\beta^3$ to the basis $\gamma_0,\gamma_1,\gamma_2,\gamma_3$; similarly, by Lemma 3.2 it is easy to design the algorithms implemented by \textbf{GEN getc\_1()} and \textbf{GEN getc\_2()}, which return the constants $c_1$, $c_2$ corresponding to the field $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big);$ by Lemma 3.4 and Lemma 3.5, it is also easy to design the algorithms implemented by the methods \textbf{GEN getBoundOne()} and \textbf{GEN getBoundTwo()}, which compute the bounds for condition I and condition II. We must obtain all prime ideals whose norms are less than the bounds for condition I and condition II, which is realized by the method \textbf{GEN getPrimeTable(int num\_condition)} of {\it CquarField}. In fact, let $b_1=N(v_{m_{0}}),$ $b_2=N(v_{m_{0}'}),$ and let $$T_{F,i}=\{p=\mathfrak{P}\cap\mathbb{Z}\in{\mathbb{Z}}\,|\,N(\mathfrak{P})< b_i,\mathfrak{P}\in{\mbox{spec}\,\mathcal{O}_{F}}\},$$ $$T_{F,i}'=\{\mathfrak{P}\in{\mbox{spec}\,\mathcal{O}_{F}}\,|\,N(\mathfrak{P})<b_i\},\ \ i=1,2,$$ where $\mbox{spec}\,\mathcal{O}_{F}$ denotes the set of prime ideals of the cyclic quartic field $F.$ To obtain the above sets, we design the following Algorithm 4.3, which is implemented by the method \textbf{GEN getPrimeTable(int num\_condition)} of {\it CquarField}. \begin{algorithm}[!htb] \caption{(The algorithm on {\bf getPrimeTable()})} \begin{algorithmic} \STATE \textbf{1.[Obtain the bound on the norm]} By Lemma 3.4, Lemma 3.5 and Theorem 3.6, the bound $b_1$ (resp. $b_2$) on the norm for condition I (resp. condition II) can be obtained.\\ \STATE \textbf{2.[Obtain the set $T_{F,1}$ (resp. $T_{F,2}$)]} Using the PARI function \textbf{GEN factoru(ulong n)}, the factorization of $n$ can be returned.
The result is a 2-component vector $[P, E]$, where $P$ contains the prime divisors of $n$ and $E$ the corresponding valuations of $n.$\\ \STATE \textbf{3.[Obtain the set $T_{F,1}^{'}$ (resp. $T_{F,2}^{'}$)]} Using Lemma 4.1, for any prime power $k=p^{s}<b_1$ (resp. $b_2$), the prime ideals $\mathcal{P}_{i}$ above $p$ can be obtained. By comparing $p^{s}$ with the norms of the ideals $\mathcal{P}_{i}$, the elements of the set $T_{F,1}'$ (resp. $T_{F,2}'$) can be obtained. \end{algorithmic} \end{algorithm} \end{subsubsection} \begin{subsubsection}{The algorithms implemented by the methods of Cideal} In the class {\it Cideal}, the method \textbf{GEN getSetInitG()} is used to compute the set $G_{m-1}$ corresponding to the prime ideal $v_m$ represented by the object of {\it Cideal}. We can easily write the code of the method \textbf{GEN getSetInitG()} by using the PARI functions \textbf{GEN znstar(GEN n)} and \textbf{GEN idealstar0(GEN nf, GEN I, long flag)}, because these two functions implement the following well-known Algorithm 4.4. \begin{algorithm}[!htb] \caption{(Algorithm 4.2.2 (\cite{cohen193}))} \begin{algorithmic} \STATE Let $m_0=\prod_{\mathfrak{P}}\mathfrak{P}^{v_{\mathfrak{P}}}$ be an integral ideal, and assume that we are given the SNF of $(\mathcal{O}_{F}/\mathfrak{P}^{v_{\mathfrak{P}}})^{*}=(G_{\mathfrak{P}},D_{\mathfrak{P}}).$ The algorithm computes the SNF of $(\mathcal{O}_{F}/m_{0})^{*}.$\\ \STATE\textbf{1.[Compute $\alpha_{\mathfrak{P}}$ and $\beta_{\mathfrak{P}}$]} Using the extended Euclidean algorithm in Dedekind domains (Algorithm 1.3.2 (\cite{cohen193})), compute $\alpha_{\mathfrak{P}}$ and $\beta_{\mathfrak{P}}$ such that $\alpha_{\mathfrak{P}}\in{m_{0}/\mathfrak{P}^{v_{\mathfrak{P}}}},\ \beta_{\mathfrak{P}}\in{\mathfrak{P}^{v_{\mathfrak{P}}}}$ and $\alpha_{\mathfrak{P}}+\beta_{\mathfrak{P}}=1.$\\ \STATE\textbf{2.[Terminate]} Let $G$ be the concatenation of the $\beta_{\mathfrak{P}}1_{\mathcal{O}_{F}}+\alpha_{\mathfrak{P}}G_{\mathfrak{P}}$ and let $D$ be the diagonal concatenation of the SNF matrices $D_\mathfrak{P}.$ Using the SNF algorithm for finite groups (Algorithm 4.1.3 (\cite{cohen193})) on the system of generators and relations $(G,D),$ output the SNF of the group $(\mathcal{O}_{F}/m_{0})^{*}$ and the auxiliary matrix $U_{\alpha},$ and terminate the algorithm. \end{algorithmic} \end{algorithm} In order to obtain the set $W_{m-1}$ corresponding to the prime ideal $v_{m}$, we design the method \textbf{GEN getSetInitW()} of {\it Cideal}. In implementing this method we use the PARI function \textbf{GEN ideallist0(GEN nf, long bound, long flag)}, because it implements the following Algorithm 4.5 and returns all ideals whose norms are less than the value {\bf bound}. We also give Algorithm 4.6, which outputs the set $W_{m-1}$ and is implemented by the method \textbf{GEN getSetInitW()}. \begin{algorithm}[!htb] \caption{(Algorithm 2.3.23 (\cite{cohen193}))} \begin{algorithmic} \STATE Let $K$ be a number field and $B$ a positive integer.
The algorithm outputs a list $\mathcal{L}$ such that for each $n\leq{B},$ $\mathcal{L}_{n}$ is the list of all integral ideals of absolute norm equal to $n.$\\ \STATE \textbf{1.[Initialize]} For $2\leq{n}\leq{B}$ set $\mathcal{L}_{n}\leftarrow{\emptyset},$ then set $\mathcal{L}_{1}\leftarrow{\{\mathcal{O}_{K}\}}$ and $p\leftarrow{0}.$\\ \STATE \textbf{2.[Next prime]} Replace $p$ by the smallest prime strictly larger than $p.$ If $p>B,$ output $\mathcal{L}$ and terminate the algorithm.\\ \STATE \textbf{3.[Factor $p\mathcal{O}_{K}$]} Using Algorithm 6.2.9 in \cite{cohen138}, factor $p\mathcal{O}_K$ as $p\mathcal{O}_K=\prod_{1\leq{i}\leq{g}}\mathcal{P}_{i}^{e_{i}}$ with $e_{i}\geq{1},$ and let $f_{i}=f(\mathcal{P}_{i}|p).$ Set $j\leftarrow{0}.$\\ \STATE \textbf{4.[Next prime ideal]} Set $j\leftarrow{j+1}.$ If $j>g,$ go to step 2. Otherwise, set $q\leftarrow{p^{f_{j}}},\ n\leftarrow{0}.$\\ \STATE \textbf{5.[Loop through all multiples of $q$]} Set $n\leftarrow{n+q}.$ If $n>B,$ go to step 4. Otherwise, set $\mathcal{L}_{n}\leftarrow{\mathcal{L}_{n}\cup \mathcal{P}_j\mathcal{L}_{n/q}},$ where $\mathcal{P}_j\mathcal{L}_{n/q}$ is the list of products of the elements of $\mathcal{L}_{n/q}$ by the ideal $\mathcal{P}_j$, and go to step 5.\\ \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{(The algorithm on {\bf getSetInitW()})} \begin{algorithmic} \STATE\textbf{1.[Initialize]} For the prime ideal $v_m$ set $W_{m-1}\leftarrow{\emptyset}$ and $\mathcal{I}\leftarrow{\emptyset}.$\\ \STATE\textbf{2.[Obtain the ideals whose norms are less than $N(v_{m})$]} Using the PARI function {\bf GEN ideallist0(GEN nf, long bound, long flag)}, all ideals whose norms are less than $N(v_{m})$ can be obtained. Put them into the set $\mathcal{I}.$\\ \STATE\textbf{3.[Obtain all prime ideals whose norms are less than $N(v_{m})$]} By looping through the set $\mathcal{I}$ and checking the structure of each ideal returned by the PARI function, we get all prime ideals $\mathcal{P}_{i}$ whose norms are less than $N(v_{m})$.\\ \STATE\textbf{4.[Obtain the set $W_{m-1}$]} For each prime ideal $\mathcal{P}_{i},$ the generator $\alpha_{i}$ is returned by the PARI function {\bf GEN bnfisprincipal0(GEN bnf, GEN x, long flag)}. Set $W_{m-1}\leftarrow{W_{m-1}\cup\{\alpha_{i}\}}$ for $i=1,2,\cdots,m-1,$ and finally set $W_{m-1}\leftarrow{W_{m-1}\cup\{-1,\xi\}}.$ \end{algorithmic} \end{algorithm} In the process of obtaining the sets $G_{m-1}$, $W_{m-1}$ and $C_{m-1}$, the most difficult task is the computation of $C_{m-1}$, because we meet the following two difficulties: (i) the set $C_{m-1}$ is very large when $N(v_{m})$ is large, since $|C_{m-1}|=N(v_{m})-1$; (ii) we know that the set $C_{m-1}$ consists of representatives of some elements of $k_{v_{m}}^{*}$, but we cannot ensure that the set $C_{m-1}$ satisfies condition I and condition II for arbitrarily chosen representatives. To overcome these difficulties, we use the method of traversal, but with the representatives chosen in a conjecturally right way. Firstly, we find that an element $c\in{C_{m-1}}$ should be a ``shortest distance point'' of the set $c+v_{m}$ in a suitable distance; so, in order to look for the right $c\in{C_{m-1}}$, we start the search from an element whose norm is one. Secondly, we choose to take full advantage of multi-core processors to reduce the computation time. Thus, we use multi-threaded parallel computing to improve the speed of Algorithm 4.7, which obtains the set $C_{m-1}$, as follows.
\begin{algorithm} \caption{(The algorithm on {\bf getSetInitC()})} \begin{algorithmic} \STATE Let $v_{m}$ be a prime ideal of $\mathcal{O}_F.$ The algorithm outputs a set $C_{m-1}$ satisfying the following conditions:\\ (i) the set $C_{m-1}$ consists of representatives of some elements of $(k_{v_{m}})^{*}$;\\ (ii) for any element $c\in{C_{m-1}},$ the equation $N(c)=\min\{N(c+t\alpha_{m})\,|\,t\in{\mathcal{O}_F}\}$ holds.\\ \STATE\textbf{1.[Initialize]} Set num\_good$\leftarrow0$ and $C_{m-1}\leftarrow{\emptyset}$. Invoking the methods \textbf{GEN getc\_1()} and \textbf{GEN getc\_2()}, we get the constants $c_1$ and $c_2,$ respectively. Moreover, invoking the PARI functions \textbf{GEN ideallist0(GEN nf, long bound, long flag)} and \textbf{long pr\_get\_f(GEN pr)}, we get the set $C'_{m-1}$ of all ideals whose norms are less than or equal to $(c_1c_2)^2N(v_m)$ and the residue class degree $f_m$ of $v_m,$ respectively. Invoking the method \textbf{GEN Cideal::getSetInitG()}, we get the unique element $g\in{G_{m-1}}.$ Lastly, set num\_C$'\leftarrow|C'_{m-1}|$ and num\_C$\leftarrow{N(v_m)-1}.$\\ \STATE\textbf{2.[Compare num\_good with num\_C]} If num\_good$\,=\,$num\_C holds, the algorithm returns $C_{m-1}$ and terminates.\\ \STATE\textbf{3.[Set the germs of $C_{m-1}$]} For $1\leq{i}\leq{\mbox{num\_C}}$, if $f_m=1$ set $c_i\leftarrow{i}$; otherwise, set $c_i\leftarrow{g^i}.$\\ \STATE\textbf{4.[Look for the appropriate elements of $C_{m-1}$]} For $1\leq{j}\leq{\mbox{num\_C}'}$ and $1\leq{k}\leq{70},$ set $c'_j\leftarrow C'_{m-1}[j].$ Then invoke the PARI function \textbf{long idealval(GEN nf, GEN x, GEN pr)} to compute the valuation $b$ of $c_i-c'_j\xi^k$ at $v_m.$ If $b>0$ (i.e., $c_i-c'_j\xi^k\in{v_m}$), set $c_i\leftarrow{c'_j\xi^k}$, num\_good$\leftarrow$num\_good$+1,$ and go to step 2; otherwise, continue with the next pair $(j,k).$ \end{algorithmic} \end{algorithm} In the class {\it Cideal}, the methods \textbf{GEN Para\_getSetInitC()} and \textbf{static void* Part\_getSetInitC(void *arg)} are the parent thread function and the child thread function, respectively. Using these methods, we obtain the set $C_{m-1}$ corresponding to the prime ideal $v_{m}$; moreover, we can change the number of child threads according to the computer's hardware. In the class {\it Cideal}, the two remaining methods are \textbf{bool Cideal::checkConditionOne()} and \textbf{bool Cideal::checkConditionTwo()}; condition I and condition II can be verified for the prime ideal $v_{m}$ by these two methods, which implement Algorithm 4.8 and Algorithm 4.9 below, respectively. \begin{algorithm} \caption{(The algorithm on {\bf checkConditionOne()})} \begin{algorithmic} \STATE Let $v_{m}$ be a prime ideal of $\mathcal{O}_F.$ The algorithm checks whether or not condition I holds for $v_m.$\\ \STATE\textbf{1.[Initialize]} Set num\_good$\leftarrow0$. Invoking the methods \textbf{GEN getSetInitW()}, \textbf{GEN getSetInitC()} and \textbf{GEN getUm()}, we get the sets $W_{m-1}$, $C_{m-1}$ and $U_m,$ respectively.
Moreover, we get the cardinalities of the sets $W_{m-1}$ and $C_{m-1},$ denoted by num\_W and num\_C, respectively.\\ \STATE\textbf{2.[Compare num\_good with num\_W]} If num\_good$\,=\,$num\_W holds, the algorithm returns true and terminates.\\ \STATE\textbf{3.[Loop through all elements of $W_{m-1}$]} For $1\leq{i}\leq{\mbox{num\_W}}$, set $w_i\leftarrow{W_{m-1}[i]}$.\\ \STATE\textbf{4.[Look for an appropriate element $c$ of $C_{m-1}$]} For $1\leq{j}\leq{\mbox{num\_C}},$ set $c_j\leftarrow{C_{m-1}[j]}$. Invoking the PARI function \textbf{GEN bnfissunit(GEN bnf, GEN sfu, GEN x)} and reading its returned value $b$, we decide whether or not $\frac{w_i}{c_j}-1$ is in $U_m.$ More precisely, if $b>0$, set num\_good$\leftarrow$num\_good$+1$ and go to step 2; otherwise, set $j\leftarrow j+1.$ \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{(The algorithm on {\bf checkConditionTwo()})} \begin{algorithmic} \STATE Let $v_{m}$ be a prime ideal of $\mathcal{O}_F.$ The algorithm checks whether or not condition II holds for $v_m.$\\ \STATE\textbf{1.[Initialize]} Set num\_good$\leftarrow0$. Invoking the methods \textbf{GEN getSetInitG()}, \textbf{GEN getSetInitC()} and \textbf{GEN getUm()}, we get the sets $G_{m-1}=\{g\}$, $C_{m-1}$ and $U_m,$ respectively. Moreover, we get the cardinality of the set $C_{m-1},$ denoted by num\_C.\\ \STATE\textbf{2.[Compare num\_good with num\_C]} If num\_good$\,=\,$num\_C holds, the algorithm returns true and terminates.\\ \STATE\textbf{3.[Loop through all elements of $C_{m-1}$]} For $1\leq{i}\leq{\mbox{num\_C}}$, set $c_i\leftarrow{C_{m-1}[i]}$.\\ \STATE\textbf{4.[Look for an appropriate element $c'$ of $C_{m-1}$]} For $1\leq{j}\leq{\mbox{num\_C}},$ set $c'_j\leftarrow{C_{m-1}[j]}$. Invoking the PARI function \textbf{GEN bnfissunit(GEN bnf, GEN sfu, GEN x)} and reading its returned value $b$, we decide whether or not $\frac{c_i}{gc'_j}-1$ is in $U_m$, where $g$ is the unique element of the set $G_{m-1}.$ More precisely, if $b>0$, set num\_good$\leftarrow$num\_good$+1$ and go to step 2; otherwise, set $j\leftarrow j+1.$ \end{algorithmic} \end{algorithm} \end{subsubsection} \begin{subsubsection}{The algorithms implemented by the methods of Ccheck} The class {\it Ccheck} contains no new computational algorithms: its methods \textbf{bool Para\_checkConditionOne(int num\_thread)} and \textbf{bool Para\_checkConditionTwo(int num\_thread)} are the parent thread functions that distribute, over num\_thread POSIX threads, the checks of condition I and condition II for all prime ideals whose norms are below the theoretical bounds, while the child thread functions \textbf{static void* Part\_checkConditionOne(void *arg)} and \textbf{static void* Part\_checkConditionTwo(void *arg)} call the methods \textbf{bool Cideal::checkConditionOne()} and \textbf{bool Cideal::checkConditionTwo()} (Algorithm 4.8 and Algorithm 4.9) on the prime ideals assigned to them.
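For illustration, the pattern behind \textbf{Para\_checkConditionOne} can be sketched as follows (a free-function simplification: \texttt{checkOne} is a hypothetical stand-in for \textbf{bool Cideal::checkConditionOne()}, and the per-thread PARI stack setup of the earlier sketch is omitted).
\begin{lstlisting}[language=C++]
#include <pthread.h>
#include <pari/pari.h>
#include <vector>
#include <algorithm>

/* checkOne(pr): stands for Cideal::checkConditionOne() applied to
 * one prime ideal (Algorithm 4.8); hypothetical helper. */
bool checkOne(GEN pr);

struct CheckTask { GEN *ideals; long lo, hi; bool ok; };

/* Child thread: check condition I on the sub-range [lo, hi).
 * (Each child would also need its own PARI stack in practice.) */
static void* Part_checkConditionOne(void *arg)
{
    CheckTask *t = (CheckTask*) arg;
    t->ok = true;
    for (long i = t->lo; i < t->hi && t->ok; i++)
        t->ok = checkOne(t->ideals[i]);
    return NULL;
}

/* Parent thread: split the list of prime ideals among num_thread
 * POSIX threads and AND the results together. */
bool Para_checkConditionOne(GEN *ideals, long n, int num_thread)
{
    std::vector<CheckTask> tasks(num_thread);
    std::vector<pthread_t> tid(num_thread);
    long step = n / num_thread + 1;
    for (int i = 0; i < num_thread; i++) {
        tasks[i].ideals = ideals;
        tasks[i].lo = i * step;
        tasks[i].hi = std::min<long>((i + 1) * step, n);
        pthread_create(&tid[i], NULL, Part_checkConditionOne, &tasks[i]);
    }
    bool ok = true;
    for (int i = 0; i < num_thread; i++) {
        pthread_join(tid[i], NULL);
        ok = ok && tasks[i].ok;
    }
    return ok;  /* true iff condition I holds for every ideal checked */
}
\end{lstlisting}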
\end{subsubsection} \end{subsection} \end{section} \begin{section}{The proof of Theorem 1.2} Let $F=\mathbb{Q}\Big(\sqrt{-(D+B\sqrt{D})}\Big)$ be an imaginary cyclic quartic field. For the case $B=1,D=2,$ invoking the method \textbf{GEN CquarField::getBoundOne()} (resp. \textbf{GEN CquarField::getBoundTwo()}), we know that condition I (resp. condition II) holds for the prime ideals whose norms are greater than or equal to $172.525$ (resp. $3253.529$). Moreover, by invoking the method \textbf{bool Ccheck::Para\_checkConditionOne(int num\_thread)} (resp. \textbf{bool Ccheck::Para\_checkConditionTwo(int num\_thread)}), it is proved that condition I (resp. condition II) also holds for the prime ideals whose norms are less than $172.525$ (resp. $3253.529$). Similarly, for the case $B=2,D=13$ we can show that\\ (i) the bound determined by Lemma 3.4 (resp. Theorem 3.6) is $1173.7$ (resp. $17321.1$);\\ (ii) for the prime ideals whose norms are less than $1173.7$ (resp. $17321.1$), condition I (resp. condition II) holds. And for the case $B=2,D=29$ we can show that\\ (i) the bound determined by Lemma 3.4 (resp. Theorem 3.6) is $48710.1$ (resp. $192289.6$);\\ (ii) for the prime ideals whose norms are less than $48710.1$ (resp. $192289.6$), condition I (resp. condition II) holds. For $F=\mathbb{Q}\Big(\sqrt{-(2+\sqrt{2})}\Big),$ $\mathbb{Q}\Big(\sqrt{-(13+2\sqrt{13})}\Big)$ or $\mathbb{Q}\Big(\sqrt{-(29+2\sqrt{29})}\Big),$ we know by PARI/GP that $-1$ is the only nontrivial root of unity in $F.$ Hence, it is easy to show that $K_2\mathcal{O}_F$ can be generated by the two elements of order at most 2: $\{-1,-1\}$ and $\{-1,\xi\},$ where $\xi$ is a fundamental unit of $F.$ However, in \cite{Browkin000} Browkin proved the following formula: $$\mbox{2-rank}\,K_2\mathcal{O}_F=r_1(F)+g(2)-1+\mbox{2-rank}\Big(\mbox{Cl}(F)/\mbox{Cl}_2(F)\Big),$$ where $r_1(F)$ is the number of real places of $F,$ $g(2)$ is the number of primes over $2$, and $\mbox{Cl}(F)$ is the class group of $F.$ It is well known that $\mbox{Cl}(F)=1$, and by PARI/GP there is only one prime of $\mathcal{O}_F$ lying over $2.$ So the formula takes the form: $$\mbox{2-rank}\,K_2\mathcal{O}_F=0+1-1+0=0.$$ Hence there is no element of order 2. Thus the tame kernel $K_2\mathcal{O}_F$ is trivial, and the proof is completed. \begin{remark}\quad For $F=\mathbb{Q}\Big(\sqrt{-(13+2\sqrt{13})}\Big)$ and $F=\mathbb{Q}\Big(\sqrt{-(29+2\sqrt{29})}\Big)$, we keep a record of every $C_{m-1}$ in text files, which can be found at \url{http://pan.baidu.com/s/1kVnSOCn} and \url{https://pan.baidu.com/s/1dFRn8ch}, respectively. \end{remark} \end{section} \end{document}
Urban spatial order: street network orientation, configuration, and entropy

Geoff Boeing (ORCID: 0000-0003-1851-6411)

Abstract

Street networks may be planned according to clear organizing principles or they may evolve organically through accretion, but their configurations and orientations help define a city's spatial logic and order. Measures of entropy reveal a city's streets' order and disorder. Past studies have explored individual cases of orientation and entropy, but little is known about broader patterns and trends worldwide. This study examines street network orientation, configuration, and entropy in 100 cities around the world using OpenStreetMap data and OSMnx. It measures the entropy of street bearings in weighted and unweighted network models, along with each city's typical street segment length, average circuity, average node degree, and the network's proportions of four-way intersections and dead-ends. It also develops a new indicator of orientation-order that quantifies how a city's street network follows the geometric ordering logic of a single grid. A cluster analysis is performed to explore similarities and differences among these study sites in multiple dimensions. Significant statistical relationships exist between city orientation-order and other indicators of spatial order, including street circuity and measures of connectedness. On average, US/Canadian study sites are far more grid-like than those elsewhere, exhibiting less entropy and circuity. These indicators, taken in concert, help reveal the extent and nuance of the grid. These methods demonstrate automatic, scalable, reproducible tools to empirically measure and visualize city spatial order, illustrating complex urban transportation system patterns and configurations around the world.

Spatial networks such as streets, paths, and transit lines organize the human dynamics of complex urban systems. They shape travel behavior, location decisions, and the texture of the urban fabric (Jacobs 1995; Levinson and El-Geneidy 2009; Parthasarathi et al. 2015). Accordingly, researchers have recently devoted much attention to street network patterns, performance, complexity, and configuration (Barthelemy et al. 2013; Batty 2005a; Boeing 2018a; Buhl et al. 2006; Chan et al. 2011; Ducruet and Beauguitte 2014; Jiang et al. 2014; Jiang and Claramunt 2004; Marshall 2004; Masucci et al. 2013; Nilsson and Gil 2019; Tsiotas and Polyzos 2018; Wang 2015). In these spatial networks, entropy has deep theoretical connections with complexity (Batty 2005b; Batty et al. 2014). One research stream has explored the nature of entropy and order in urban street networks, seeking to quantify patterns of spatial order and disorder in urban circulation systems (Gudmundsson and Mohajeri 2013; Li et al. 2018; Mohajeri et al. 2013a, 2013b; Mohajeri and Gudmundsson 2012, 2014; Yeh and Li 2001).

Theories of urban order span sociological frameworks of physical-social disorder (e.g., "broken windows" theory), to public health goals of opening-up and sanitizing pathogenic urban spaces, to city planners' pursuit of functional differentiation and regulation (Boyer 1983; Hatuka and Forsyth 2005; Mele 2017; O'Brien et al. 2019; Park and Burgess 1925; Xu 2008). This study considers the spatial logic and geometric ordering that arise through street network orientation.
A city's development eras, design paradigms, underlying terrain, culture, and local economic conditions influence the pattern, topology, and grain of its street networks (Jackson 1985; Kostof 1991). These networks in turn structure the human interactions and transportation processes that run along them, forming an important pillar of city planners' quest for spatial order (Rose-Redwood and Bigon 2018). In particular, network orientation and geometry have played an outsized role in urban planning since its earliest days (Smith 2007). Measuring these network patterns can help researchers, planners, and community members understand local histories of urban design, transportation planning, and morphology; evaluate existing transportation system patterns and configurations; and explore new infrastructure proposals and alternatives. It also furthers the science of cities by providing a better understanding of urban patterns and how they correspond to evolutionary mechanisms, planning, and design.

However, due to traditional data gathering challenges, this research literature has necessarily relied on small samples, limited geographies, and abstract indicators. Past studies have typically explored circuity and entropy in individual or paired case studies—less is known about broader cross-sectional trends worldwide. How do street network configurations organize and order urban space in cities around the world? This paper addresses this gap by empirically modeling and measuring order and configuration in 100 city street networks around the world, comprising over 4.8 million nodes and 3.3 million edges. It measures street network orientation entropy, circuity, connectedness, and grain. It also develops a straightforward new indicator, the orientation-order φ, to quantify the extent to which a street network follows the spatial ordering logic of a single grid. It finds significant statistical relationships between city orientation and other indicators of spatial order (including street circuity and connectedness). The most common orientation worldwide, even among cities lacking a strong grid, tends toward north-south-east-west. It also finds that American cities tend to be far more grid-like and less circuitous than cities elsewhere. Considered jointly, this collection of indicators helps reveal the extent and nuance of the grid around the world.

Street network planning

The orthogonal grid, the most common planned street pattern, is often traced back to Hippodamus of Miletus (Mazza 2009; Paden 2001)—whom Aristotle labeled the father of city planning for his orthogonal design of Piraeus in ancient Greece—but archaeologists have found vestiges in earlier settlements around the world (Burns 1976; Stanislawski 1946). Mohenjo-Daro in the Indus Valley, dating to 2500 BCE, featured a north-south-east-west orthogonal grid (McIntosh 2007). Ancient Chinese urban design organized capital cities around gridded patterns codified in the Kao Gong Ji, a scientific text from c. 500 BCE (Elman and Kern 2009). Teotihuacan featured an offset grid, dating to 100 BCE, that aligned with the Valley of Mexico's zenith sunrise (Peterson and Chiu 1987; Sparavigna 2017). The Roman Empire used standardized street grids to efficiently lay out new towns and colonies during rapid imperial expansion (Kaiser 2011). Many medieval towns were even planned around approximate, if distorted, grids possibly to maximize sun exposure on east-west streets during winter market days (Lilley 2001).
In 1573, King Philip II of Spain issued the Law of the Indies, systematizing how colonists sited new settlements and designed rectilinear gridded street networks around central plazas (Low 2009; Rodriguez 2005). In the US, many east coast cities planned their expansions around street grids, including Philadelphia in 1682, Savannah in 1733, Washington in 1791, and New York in 1811 (Jackson 1985; Sennett 1990). The subsequent US Homestead Act sweepingly organized the American interior according to the spatial logic of the gridiron (Boeing 2018b).

In the context of urban form, the concept of "spatial order" is fuzzy. Street networks that deviate from griddedness inherently possess different spatial logics and ordering principles (Karimi 1997; Southworth and Ben-Joseph 1995, 2004). Cities planned without a grid—as well as unplanned cities that grew through accretion—may lack clearly defined orientation order, but can still be well-structured in terms of complex human dynamics and land use (Hanson 1989). Specific visual/geometric order should not be confused for functional/social order (Roy 2005; Salingaros 1998; Smith 2007). Different design logics support different transportation technologies and appeal to different cultures and eras (Jackson 1985). The grid has been used to express political power, promote military rule, improve cadastral legibility, foster egalitarianism, and encourage land speculation and development (Ellickson 2013; Groth 1981; Low 2009; Martin 2000; Mazza 2009; Rose-Redwood 2011; Sennett 1990). Many cities spatially juxtapose planned and unplanned districts or non-binarily intermingle top-down design with bottom-up self-organized complexity. Old cores may comprise organic patterns adjacent to later gridirons, in turn adjacent to later winding suburbs. Even previously highly-ordered urban cores can grow in entropy as later generations carve shortcuts through blocks, reorganize space through infill or consolidation, and adapt to shifting points of interest—all of which occurred in medieval Rome and Barcelona, for instance (Kostof 1991).

Street network modeling

Street networks are typically modeled as graphs where nodes represent intersections and dead-ends, and edges represent the street segments that link them (Barthelemy and Flammini 2008; Cardillo et al. 2006; Lin and Ban 2013; Marshall et al. 2018; Porta et al. 2006). These edges are spatially embedded and have both a length and a compass bearing (Barthelemy 2011). The present study models urban street networks as undirected nonplanar multigraphs with possible self-loops. While directed graphs most-faithfully represent constraints on flows (such as vehicular traffic on a one-way street), undirected graphs better model urban form by corresponding 1:1 with street segments (i.e., the linear sides of city blocks). While many street networks are approximately planar (having relatively few overpasses or underpasses), nonplanar graphs provide more accurate models by accommodating those bridges and tunnels that do often exist (Boeing 2018c; Eppstein and Goodrich 2008).

The data to study these networks typically come from shapefiles of digitized streets. In the US, the Census Bureau provides TIGER/Line shapefiles of roads nationwide. In other countries, individual municipal, state, or federal agencies may provide similar data; however, digitization standards and data availability vary.
Accordingly, cross-sectional research on street network orientation and entropy has tended to be limited to individual geographical regions or small samples. However, today, OpenStreetMap presents a new alternative data source. OpenStreetMap is a collaborative worldwide mapping project that includes streets, buildings, amenities, and other spatial features. Although its data quality varies somewhat between countries, in general its streets data are high quality, especially in cities (Barrington-Leigh and Millard-Ball 2017; Barron et al. 2014; Zielstra et al. 2013). This data source offers the opportunity to conduct cross-sectional research into street network form and configuration around the world.

Recently, scholars have studied street network order and disorder through circuity and orientation entropy. The former measures street curvature and how this relates to other urban patterns and processes (Boeing 2019; Giacomin and Levinson 2015; Levinson and El-Geneidy 2009). The latter quantifies and visualizes the entropy of street orientations to assess how ordered they are (Courtat et al. 2011; Gudmundsson and Mohajeri 2013; Mohajeri et al. 2013a, 2013b; Mohajeri and Gudmundsson 2012, 2014), as entropy quantifies the fundamentally related concepts of disorder, uncertainty, and dispersion. Louf and Barthelemy (2014) explore city block geometries around the world as a function of block size and form factor, clustering them to identify differences between US and European cities. However, less is known about cross-sectional trends in the spatial orientation and ordering of street networks worldwide. This study builds on this prior research into circuity, order, and entropy by drawing on OpenStreetMap data to examine cities around the world and explore their patterns and relationships.

To better understand urban spatial order and city street network entropy, we analyze 100 large cities across North America, South America, Europe, Africa, Asia, and Oceania. Our sampling strategy emulates Louf and Barthelemy's (2014) to select cities through a balance of high population, regional significance, and some stratification to ensure geographical diversity within regions. Accordingly, this sample comprises a broad cross-section of different histories, cultures, development eras, and design paradigms. Of course, no single consistent definition of "city" or its spatial jurisdiction exists worldwide, as these vary between countries for historical and political reasons. We aim for consistency by trying to use each study site's closest approximation of a "municipality" for the city limits. The lone exception is Manhattan, where we focus on one borough's famous grid instead of the amalgam of boroughs that compose New York City. Once these study sites are defined, we use the OSMnx software to download the street network within each city boundary and then calculate several indicators, as sketched below. OSMnx is a free, open-source, Python-based toolkit to automatically download spatial data (including municipal boundaries and streets) from OpenStreetMap and construct graph-theoretic objects for network analysis (Boeing 2017).
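A minimal sketch of this pipeline follows, assuming OSMnx's pre-2.0 API (function names have shifted somewhat across versions, and the place query and network type are arbitrary illustrations rather than the study's configuration):

```python
# Hedged sketch: download one city's drivable network with OSMnx and
# extract edge compass bearings. Illustrative, not the paper's code.
import math

import osmnx as ox

# Geocode the municipal boundary and download the street network within it
G = ox.graph_from_place("Manhattan, New York, USA", network_type="drive")

# Attach a compass bearing (degrees clockwise from north) to each edge
G = ox.add_edge_bearings(G)

# Collect bearings; self-loops have undefined bearings and are skipped
bearings = [
    d["bearing"]
    for _, _, d in G.edges(data=True)
    if d.get("bearing") is not None and not math.isnan(d["bearing"])
]
```

The same graph object also yields the node and edge counts, node degrees, and street segment lengths used by the other indicators.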
For each city, we calculate the street network's edges' individual compass bearings with OSMnx using two different methods. The first method simplifies the topology of each graph such that nodes exist only at intersections and dead-ends; edges thus represent street segments (possibly curving, as full spatial geometry is retained) between them (ibid.). In this method, each edge $e_{uv}$ takes the compass heading from $u$ to $v$ together with its reciprocal (e.g., if the bearing from $u$ to $v$ is 90° then we additionally add a bearing of 270°, since the one-dimensional street centerline points in both directions). This captures the orientation of street segments but ignores the nuances of mid-block curvature. To address this, the second method does not simplify the topology: edges represent OpenStreetMap's raw straight-line street segments, either between intersections or in chunks approximating curving streets. This method weights each edge's bearing by length to adjust for extremely short edges in these curve-approximations. In both methods, self-looping edges have undefined bearings, which are ignored.

Once we have calculated all of the bearings (and their reciprocals) for all the edges in a city, we divide them into 36 equal-sized bins (i.e., each bin represents 10°). To avoid extreme bin-edge effects around common values like 0° and 90°, we shift each bin by −5° so that these values sit at the centers of their bins rather than at their edges. This allows similar common bearings such as 359.9° and 0.1° to fall in the same bin as each other. Once the bearings are binned, we calculate the Shannon entropy, Η, of the distribution of the city's orientations (Shannon 1948). For each city's graph, we first calculate the entropy of the unweighted/simplified street orientations, Ηo, as:

$$ H_o = -\sum_{i=1}^{n} P(o_i) \log_e P(o_i) $$

where n represents the total number of bins, i indexes the bins, and P(oi) represents the proportion of orientations that fall in the ith bin. We similarly calculate the entropy of the weighted/unsimplified street orientations, Ηw, as:

$$ H_w = -\sum_{i=1}^{n} P(w_i) \log_e P(w_i) $$

where P(wi) represents the proportion of length-weighted orientations that fall in the ith bin. While Ηw is biased by the city's shape (due to length-weighting), Ηo is not. The natural logarithm means the value of Η is in dimensionless units called "nats," or the natural unit of information.

The maximum entropy, Ηmax, that any city could have equals the logarithm of the number of bins: 3.584 nats. This represents the maximum entropy distribution, a perfectly uniform distribution of street bearings across all bins. If all the bearings fell into a single bin, entropy would be minimized and equal 0. However, given the undirected graph, the minimal theoretical entropy a street network could have (e.g., if all of its streets ran only north-south, thus falling evenly into two bins) would be 0.693 nats. But given the nature of the real world, a more plausible minimum would instead be an idealized city grid with all streets in four equal proportions (e.g., north-south-east-west). This perfect grid entropy, Ηg, would equal 1.386 nats. Therefore, we can calculate a normalized measure of orientation-order, φ, to indicate where a city stands on a linear spectrum from completely disordered/uniform to perfectly ordered/grid-like as:

$$ \varphi = 1 - \left(\frac{H_o - H_g}{H_{\max} - H_g}\right)^2 $$

Thus, a φ value of 0 indicates low order (i.e., perfect disorder and maximum entropy with a uniform distribution of streets in every direction) and a φ value of 1 indicates high order (i.e., a single perfectly-ordered idealized four-way grid and minimal possible entropy).
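The binning, entropy, and φ computations above can be sketched in a few lines of NumPy (an illustration of the definitions, not the paper's implementation; the idealized grid at the end is made-up input):

```python
# Hedged sketch of the orientation entropy H_o and orientation-order phi
# as defined above; illustrative code, not the paper's implementation.
import numpy as np

N_BINS = 36             # 36 bins of 10 degrees each
H_MAX = np.log(N_BINS)  # 3.584 nats: perfectly uniform bearings
H_G = np.log(4)         # 1.386 nats: idealized four-way grid

def orientation_entropy(bearings_deg):
    """Shannon entropy (in nats) of binned street bearings."""
    b = np.asarray(bearings_deg, dtype=float)
    b = np.concatenate([b, (b + 180) % 360])  # add each bearing's reciprocal
    # Shifting the data by +5 degrees is equivalent to shifting each bin by
    # -5 degrees, centering the bins on 0, 10, ..., 350 degrees
    counts, _ = np.histogram((b + 5) % 360, bins=N_BINS, range=(0, 360))
    p = counts / counts.sum()
    p = p[p > 0]  # empty bins contribute 0 (treat 0 * log 0 as 0)
    return -np.sum(p * np.log(p))

def orientation_order(h_o):
    """Phi: 1 for a single perfect grid, 0 for uniform bearings."""
    return 1 - ((h_o - H_G) / (H_MAX - H_G)) ** 2

# An idealized N-S-E-W grid recovers H_o = 1.386 nats and phi = 1.0
h = orientation_entropy([0, 90] * 1000)
print(round(h, 3), round(orientation_order(h), 3))  # 1.386 1.0
```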
Note that the value is squared to linearize its already normalized scale between 0 and 1, allowing us to interpret it as the extent to which a city is ordered according to a single grid.

All remaining indicators' formulae use the (unweighted) simplified graph for the most faithful model of the urban form, geographically and topologically. We calculate each city's median street segment length ĩ, average node degree k̅ (i.e., how many edges are incident to the nodes on average), proportion of nodes that are dead-ends Pde, and proportion of nodes that are four-way intersections P4w. We then calculate each city street network's average circuity, ς, as:

$$ \varsigma = \frac{L_{\mathrm{net}}}{L_{\mathrm{gc}}} $$

where Lnet represents the sum of all edge lengths in the graph and Lgc represents the sum of all great-circle distances between all pairs of adjacent nodes. Thus, ς represents how much more circuitous a city's street network is than it would be if all its edges were straight-line paths between nodes (Boeing 2019; Qureshi et al. 2002). We visualize these characteristics and examine their statistical relationships to explore the nature of spatial order/disorder in the street networks' orientations, hypothesizing that more-gridded cities (i.e., higher φ values) have higher connectedness (i.e., higher node degrees, more four-way intersections, fewer dead-ends) and less-winding street patterns.

Finally, to systematically interpret city similarities and differences, we cluster the study sites in a four-dimensional feature space of the key indicators of interest (k̅, φ, ĩ, and ς), representing a cross-section of street network character. We first standardize the features for appropriate scaling, then perform hierarchical agglomerative clustering using the Ward linkage method with a Euclidean metric, as sketched below.
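Both remaining steps can be sketched as follows; the feature table is random stand-in data and the column names are illustrative assumptions, not the study's actual indicator values (the eight-cluster cut anticipates the results discussed below):

```python
# Hedged sketch of the circuity indicator and the clustering step;
# random stand-in data, not the study's indicator table.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.preprocessing import StandardScaler

def average_circuity(edge_lengths, great_circle_lengths):
    """Sum of edge lengths over sum of node-to-node great-circle distances."""
    return np.sum(edge_lengths) / np.sum(great_circle_lengths)

# Stand-in for the real 100 x 4 table of k_avg, phi, segment length, circuity
rng = np.random.default_rng(0)
cities = pd.DataFrame(rng.normal(size=(100, 4)),
                      columns=["k_avg", "phi", "median_seg_len", "circuity"])

X = StandardScaler().fit_transform(cities)         # standardize the features
Z = linkage(X, method="ward", metric="euclidean")  # hierarchical clustering
labels = fcluster(Z, t=8, criterion="maxclust")    # cut the tree: 8 clusters
```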
Table 1 presents the indicators' values for each of the cities studied. We find that Ηo and Ηw are very strongly correlated (Pearson product-moment correlation coefficient r > 0.99, p < 0.001) and thus provide essentially redundant statistical information about these networks. Therefore, the remainder of these findings focuses on Ηo unless otherwise explicitly stated. Three American cities (Chicago, Miami, and Minneapolis) have the lowest orientation entropies of all the cities studied, indicating that their street networks are the most ordered. In fact, all 16 cities with the lowest entropies are in the US and Canada. Outside of the US/Canada, Mogadishu, Kyoto, and Melbourne have the lowest orientation entropies. Surprisingly, the city with the highest entropy, Charlotte, is also in the US. São Paulo and Rome immediately follow it as the next highest cities. Chicago, the most ordered city, has a φ of 0.90, while Charlotte, the most disordered, has a φ of 0.002. Recall that a φ of 0 indicates a uniform distribution of streets in every direction and a φ of 1 indicates a single perfectly-ordered grid. Charlotte's and São Paulo's street orientations are nearly perfectly disordered.

Table 1: Resulting indicators for the 100 study sites

Venice, Mogadishu, Helsinki, Jerusalem, and Casablanca have the shortest median street segment lengths (indicating fine-grained networks) while Kiev, Moscow, Pyongyang, Beijing, and Shanghai have the longest (indicating coarse-grained networks). Due to their straight gridded streets, Buenos Aires, Detroit, and Chicago have the least circuitous networks (only 1.1%–1.6% more circuitous than straight-line distances), while Caracas, Hong Kong, and Sarajevo have the most circuitous networks (13.3%–14.8% more circuitous than straight-line distances) due largely to topography. Helsinki and Bangkok have the lowest average node degrees, each with fewer than 2.4 streets per node. Buenos Aires and Manhattan have the greatest average node degrees, both over 3.5 streets per node. Buenos Aires and Manhattan similarly have the largest proportions of four-way intersections and the smallest proportions of dead-end nodes.

Figure 1 and Table 2 aggregate these results by world region (though note that the regional aggregation sample sizes are relatively small and thus the usual caveats apply). On average, the US/Canadian cities exhibit the lowest street orientation entropy, circuity, and proportions of dead-ends as well as the highest median street segment lengths, average node degrees, and proportions of four-way intersections. They are also by far the most grid-like in terms of φ. On average, the European cities exhibit the highest street orientation entropy and proportion of dead-ends as well as the lowest average node degrees. They are the least grid-like in terms of φ.

Fig. 1: Probability densities of cities' φ, Ηo, and Ηw, by region, estimated with kernel density estimation. The area under each curve equals 1

Table 2: Mean values of indicators aggregated by world region

To illustrate the geography of these order/entropy trends, Fig. 2 maps the 100 study sites by φ terciles. As expected, most of the sites in the US and Canada fall in the highest tercile (i.e., they have low entropy and highly-ordered, grid-like street orientations), but the notable exceptions of high-entropy Charlotte, Boston, and Pittsburgh fall in the lowest tercile. Most of the sites in Europe fall in the lowest tercile (i.e., they have high entropy and disordered street orientations). Most of the sites across the Middle East and South Asia fall in the middle tercile.

Fig. 2: Map of study sites in terciles of orientation-order, φ

To better visualize spatial order and entropy, we plot polar histograms of each city's street orientations. Each polar histogram contains 36 bins, matching the description in the methods section. Each histogram bar's direction represents the compass bearings of the streets (in that histogram bin) and its length represents the relative frequency of streets with those bearings. The two examples in Fig. 3 demonstrate this. On the left, Manhattan's 29° angled grid originates from the New York Commissioners' Plan of 1811, which laid out its iconic 800-ft × 200-ft blocks (Ballon 2012; Koeppel 2015). Broadway weaves diagonally across it, revealing the path dependence of the old Wickquasgeck Trail's vestiges, by which Native Americans traversed the island long before the first Dutch colonists arrived (Holloway 2013). On the right, Boston features a grid in some neighborhoods like the Back Bay and South Boston, but they tend not to align with one another, resulting in the polar histogram's jumble of competing orientations. Furthermore, the grids are not ubiquitous and Boston's other streets wind in various directions, resulting from its age (old by American standards), terrain (relatively hilly), and historical annexation of various independent towns with their own pre-existing street networks.
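A rough matplotlib recipe for one such polar histogram is below (an assumed reconstruction for illustration; the paper's published figures come from OSMnx's own plotting routines, and the uniform random bearings in the demo are made-up input):

```python
# Hedged sketch of the polar histograms described above; an assumed
# reconstruction, not the code behind the paper's figures.
import matplotlib.pyplot as plt
import numpy as np

def plot_polar_histogram(bearings_deg):
    """36-bin polar histogram of bearings (degrees clockwise from north).

    Bearings are assumed to already include their reciprocals, which is
    what gives the plots their 180-degree rotational symmetry.
    """
    counts, edges = np.histogram((np.asarray(bearings_deg) + 5) % 360,
                                 bins=36, range=(0, 360))
    ax = plt.subplot(projection="polar")
    ax.set_theta_zero_location("N")  # put 0 degrees (north) at the top
    ax.set_theta_direction(-1)       # compass bearings increase clockwise
    # Undo the +5 degree shift so each bar spans its original bearings
    ax.bar(np.deg2rad(edges[:-1] - 5), counts,
           width=np.deg2rad(10), align="edge")
    return ax

plot_polar_histogram(np.random.default_rng(1).uniform(0, 360, 5000))
plt.show()
```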
Fig. 3: Street networks and corresponding polar histograms for Manhattan and Boston

Figures 4 and 5 visualize each city's street orientations as a polar histogram. Figure 4 presents them alphabetically to correspond with Table 1, while Fig. 5 presents them in descending order of φ values to better illustrate the connection between entropy, griddedness, and statistical dispersion. The plots exhibit perfect 180° rotational symmetry and, typically, approximate 90° rotational symmetry as well. About half of these cities (49%) have at least an approximate north-south-east-west orientation trend (i.e., 0°-90°-180°-270° are their most common four street bearing bins). Another 14% have the adjacent orientations (i.e., 10°-100°-190°-280° or 80°-170°-260°-350°) as their most common. Thus, even cities without a strong grid orientation often still demonstrate an overall tendency favoring north-south-east-west orientation (e.g., as seen in Berlin, Hanoi, Istanbul, and Jerusalem).

Fig. 4: Polar histograms of 100 world cities' street orientations, sorted alphabetically corresponding with Table 1

Fig. 5: Polar histograms from Fig. 4, resorted by descending φ from most to least grid-like (equivalent to least to greatest entropy)

Straightforward orthogonal grids can be seen in the histograms of Chicago, Miami, and others. Detroit presents an interesting case, as it primarily comprises two separate orthogonal grids, one a slight rotation of the other. While Seattle's histogram looks fairly grid-like, it is not fully so: most of Seattle is indeed on a north-south-east-west grid, but its downtown rotates by both 32° and 49° (Speidel 1967). Accordingly, there are observations in all of its bins and its Ηo = 2.54 and φ = 0.72, whereas a perfect grid would have Ηo = 1.39 and φ = 1. Thus, it is about 72% of the way between perfect disorder and a single perfect grid. However, its rotated downtown comprises a relatively small number of streets, such that the rest of the city's much larger volume swamps the histogram's relative frequencies. The same effects are true of similar cities, such as Denver and Minneapolis, that have downtown grids at an offset from the rest of the city (Goodstein 1994). If an entire city is on a grid except for one relatively small district, the primary grid tends to overwhelm the fewer offset streets (cf. Detroit, with its two distinct and more evenly-sized separate grids).

Figures 4 and 5 put Chicago's low entropy and Charlotte's high entropy in perspective. Of these 100 cities, Chicago exhibits the closest approximation of a single perfect grid, with the majority of its streets falling into just four bins centered on 0°, 90°, 180°, and 270°. Its φ = 0.90, suggesting it is 90% of the way between perfect disorder and a single perfect grid, somewhat remarkable for such a large city. Most American cities' polar histograms similarly tend to cluster in at least a rough, approximate way. Charlotte, Rome, and São Paulo, meanwhile, have nearly uniform distributions of street orientations around the compass. Rather than one or two primary orthogonal grids organizing city circulation, their streets run more evenly in every direction. As discussed earlier, orientation entropy and weighted orientation entropy are strongly correlated.
Additionally, φ moderately and negatively correlates with average circuity (r(φ, ς) = −0.432, p < 0.001) and the proportion of dead-ends (r(φ, Pde) = −0.376, p < 0.001), and moderately and positively correlates with the average node degree (r(φ, k̅) = 0.518, p < 0.001) and proportion of four-way intersections (r(φ, P4w) = 0.634, p < 0.001). As hypothesized, cities with more grid-like street orientations tend to also have more streets per node, more four-way junctions, fewer winding street patterns, and fewer dead-ends. Besides these relationships, φ also has a weak but significant correlation with median street segment length (r(φ, ĩ) = 0.27, p < 0.01), concurring with previous findings examining the UK alone (Gudmundsson and Mohajeri 2013). Average circuity moderately strongly and negatively correlates with the average node degree (r(ς, k̅) = −0.672, p < 0.001) and the proportion of four-way intersections (r(ς, P4w) = −0.689, p < 0.001). Cities with more winding street patterns tend to have fewer streets per node and fewer grid-like four-way junctions.

Figure 6 presents the dendrogram obtained from the cluster analysis, allowing us to systematically explore cities that are more- or less-similar to each other. The dendrogram's structure suggests three high-level superclusters of cities, but for further analysis, we cut its tree at an intermediate level (eight clusters) for better external validity and more nuanced insight into those larger structures. To visualize these clusters another way, we map their four-dimensional feature space to two dimensions using t-SNE, a manifold learning approach for nonlinear dimensionality reduction that is well-suited for embedding higher-dimensional data in a plane for visualization (van der Maaten and Hinton 2008). Figure 7 scatterplots the cities in these two dimensions: the t-SNE projection preserves their cluster structure relatively well despite inherent information loss, but, given the global density-equalizing nature of the algorithm, the relative distances within and between clusters are not preserved in the embedding and should not be interpreted otherwise.

Fig. 6: Cluster analysis dendrogram. Cluster colors correspond to Fig. 7

Fig. 7: Scatterplot of cities in two dimensions via t-SNE. Cluster colors correspond to Fig. 6. Triangles represent US/Canadian cities and circles represent other cities

Most of the North American cities lie near each other in three adjacent clusters (red, orange, and blue), which contain grid-like—and almost exclusively North American—cities. The orange cluster represents relatively dense, gridded cities like Chicago, Portland, Vancouver, and Manhattan. The blue cluster contains less-perfectly gridded US cities, typified by San Francisco and Washington (plus, interestingly, Buenos Aires). The red cluster represents sprawling but relatively low-entropy cities like Los Angeles, Phoenix, and Las Vegas. Sprawling, high-entropy Charlotte is in a separate cluster (alongside Honolulu) dominated by cities that developed at least in part under the auspices of twentieth century communism, including Moscow, Kiev, Warsaw, Prague, Berlin, Kabul, Pyongyang, and Ulaanbaatar. Beijing and Shanghai are alone in their own cluster, more dissimilar from the other study sites. The dark gray cluster comprises the three cities with the most circuitous networks: Caracas, Hong Kong, and Sarajevo.
While the US cities tend to group together in the red, orange, and blue clusters, the other world regions' cities tend to distribute more evenly across the green, purple, and light gray clusters.

The urban design historian Spiro Kostof once said: "We 'read' form correctly only to the extent that we are familiar with the precise cultural conditions that generated it… The more we know about cultures, about the structure of society in various periods of history in different parts of the world, the better we are able to read their built environment" (Kostof 1991, p. 10). This study does not identify whether or how a city was planned. Specific spatial logics cannot be conflated with planning itself, which takes diverse forms and embodies innumerable patterns and complex structures, as do informal settlements and organic urban fabrics. In many cities, centrally planned and self-organized spatial patterns coexist, as the urban form evolves over time or as a city expands to accrete new heterogeneous urban forms through synoecism. Yet these findings do, in concert, illustrate different urban spatial ordering principles and help explain some nuances of griddedness. For example, gridded Buenos Aires has a φ value suggesting it only follows a single grid to a 15% extent. However, its low circuity and high average node degree values demonstrate how it actually comprises multiple competing grids—which can indeed be seen in Figs. 4 and 5—and it clusters accordingly in Figs. 6 and 7 with gridded American cities. Jointly considered, the φ indicator, average circuity, average node degree, and median street segment length tell us about the extent of griddedness and its character (curvilinear, straight-line, monolithic, heterogeneous, coarse-grained, etc.).

Charlotte further illustrates the importance of taking these indicators together. Although its φ and orientation entropy are more similar to European cities' than American cities', it is of course an oversimplification to claim that Charlotte is therefore the US city with the most "European" street network—in fact, its median street segment length is about 50% longer than that of the average European city, and among European cities it clusters primarily with those of the Communist Bloc. Pittsburgh, on the other hand, sits alone in a small sub-cluster with Munich and Vienna. We find that cities with higher φ values also tend to have higher node degrees, more four-way intersections, fewer dead-ends, and less-winding street patterns. That is, cities that are more consistently organized according to a grid tend to exhibit greater connectedness and less circuity. Interestingly, the Ηo and Ηw orientation entropies are extremely similar and strongly correlated: the weighted curvatures (versus straight-line orientation) of individual street segments have little impact on citywide orientation entropy, but the average circuity of the city network as a whole positively correlates with orientation entropy. This finding deserves further exploration.

These results also demonstrate substantial regional differences around the world. Across these study sites, US/Canadian cities have an average φ value nearly thirteen times greater than that of European cities, alongside nearly double the average proportion of four-way intersections. Meanwhile, these European cities' streets on average are 42% more circuitous than those of the US/Canadian cities. These findings illustrate the differences between North American and European urban patterns.
However, likely due to such regional heterogeneity, this study finds statistical relationships somewhat weaker (though still significant) than prior findings examining cities in the UK exclusively. Accordingly, given the heterogeneity of these world regions, future research can estimate separate statistical models for individual regions or countries—or even the neighborhoods of a single city—to draw these findings closer to the scale of planning/design practice. The methods and indicators developed here offer planners and designers a toolbox to quantify urban form patterns and compare their own cities to those elsewhere in the world. Our preliminary results suggest trends and patterns, but future work should introduce additional controls to clarify relationships and make these findings more actionable for researchers and practitioners. For instance, topography likely constrains griddedness and influences circuity and orientation entropy: a study of urban elevation change and hilliness in conjunction with entropy and circuity would help clarify these relationships. Additionally, further research can unpack the relationship between development era, design paradigm, city size, transportation planning objectives, and street network entropy to explore how network growth and evolution affect spatial order. Finally, given the importance of taking multiple indicators in concert, future work can develop a grid-index to unify them and eventually include streetscape and width attributes as further enrichment to explore walkability and travel behavior.

Street networks organize and constrain a city's transportation dynamics according to a certain spatial logic—be it planned or unplanned, ordered or disordered. Past studies of this spatial order have been challenged by small samples, limited geographies, and abstract entropy indicators. This study accordingly looked at a larger sample of cities around the world, empirically examining street network configuration and entropy across 100 cities for the first time. It measured network orientation entropy, circuity, connectedness, and grain. It also developed an orientation-order indicator, φ, to quantify the extent to which a network is ordered according to a single grid. This study found significant correlations between φ and other indicators of spatial order, including street circuity and measures of connectedness. It empirically confirmed that the cities in the US and Canada are more grid-like (exhibiting far less entropy and circuity) than was typical elsewhere. It is noteworthy that Chicago—the foremost theoretical model of twentieth century city growth and development in urban studies (Dear 2001; Park and Burgess 1925; Wirth 1928)—is an extreme outlier among world cities in terms of spatial orientation-order. In sum, these methods and indicators demonstrate scalable techniques to empirically measure and visualize the complexity of spatial order, illustrating patterns in urbanization and transportation around the world.

All data used in this study are publicly available from https://www.openstreetmap.org/.

References

Ballon H (ed) (2012) The greatest grid: the master plan of Manhattan, 1811–2011. Columbia University Press, New York
Barrington-Leigh C, Millard-Ball A (2017) The world's user-generated road map is more than 80% complete. PLoS One 12:e0180698. https://doi.org/10.1371/journal.pone.0180698
Barron C, Neis P, Zipf A (2014) A comprehensive framework for intrinsic OpenStreetMap quality analysis. Trans GIS 18:877–895. https://doi.org/10.1111/tgis.12073
Barthelemy M (2011) Spatial networks. Phys Rep 499:1–101. https://doi.org/10.1016/j.physrep.2010.11.002
Barthelemy M, Bordin P, Berestycki H, Gribaudi M (2013) Self-organization versus top-down planning in the evolution of a city. Sci Rep 3. https://doi.org/10.1038/srep02153
Barthelemy M, Flammini A (2008) Modeling urban street patterns. Phys Rev Lett 100. https://doi.org/10.1103/PhysRevLett.100.138702
Batty M (2005a) Network geography: relations, interactions, scaling and spatial processes in GIS. In: Unwin DJ, Fisher P (eds) Re-presenting GIS. Wiley, Chichester, pp 149–170
Batty M (2005b) Cities and complexity: understanding cities with cellular automata, agent-based models, and fractals. MIT Press, Cambridge
Batty M, Morphet R, Masucci P, Stanilov K (2014) Entropy, complexity, and spatial information. J Geogr Syst 16:363–385. https://doi.org/10.1007/s10109-014-0202-2
Boeing G (2017) OSMnx: new methods for acquiring, constructing, analyzing, and visualizing complex street networks. Comput Environ Urban Syst 65:126–139. https://doi.org/10.1016/j.compenvurbsys.2017.05.004
Boeing G (2018a) Measuring the complexity of urban form and design. Urban Des Int 23:281–292. https://doi.org/10.1057/s41289-018-0072-1
Boeing G (2018b) A multi-scale analysis of 27,000 urban street networks: every US city, town, urbanized area, and Zillow neighborhood. Environ Plan B: Urban Anal City Sci:1–19. https://doi.org/10.1177/2399808318784595
Boeing G (2018c) Planarity and street network representation in urban form analysis. Environ Plan B: Urban Anal City Sci:1–13. https://doi.org/10.1177/2399808318802941
Boeing G (2019) The morphology and circuity of walkable and drivable street networks. In: D'Acci L (ed) The mathematics of urban morphology. Birkhäuser, Basel. https://doi.org/10.1007/978-3-030-12381-9_12
Boyer CM (1983) Dreaming the rational city. The MIT Press, Cambridge
Buhl J, Gautrais J, Reeves N, Solé RV, Valverde S, Kuntz P, Theraulaz G (2006) Topological patterns in street networks of self-organized urban settlements. Eur Phys J B 49:513–522. https://doi.org/10.1140/epjb/e2006-00085-1
Burns A (1976) Hippodamus and the planned city. Historia: Zeitschrift für Alte Geschichte 25:414–428
Cardillo A, Scellato S, Latora V, Porta S (2006) Structural properties of planar graphs of urban street patterns. Phys Rev E 73. https://doi.org/10.1103/PhysRevE.73.066107
Chan SHY, Donner RV, Lämmer S (2011) Urban road networks — spatial networks with universal geometric features? Eur Phys J B 84:563–577. https://doi.org/10.1140/epjb/e2011-10889-3
Courtat T, Gloaguen C, Douady S (2011) Mathematics and morphogenesis of cities: a geometrical approach. Phys Rev E 83:1–12. https://doi.org/10.1103/PhysRevE.83.036106
Dear M (2001) From Chicago to L.A.: making sense of urban theory. Sage Publications, Thousand Oaks
Ducruet C, Beauguitte L (2014) Spatial science and network science: review and outcomes of a complex relationship. Netw Spat Econ 14:297–316. https://doi.org/10.1007/s11067-013-9222-6
Ellickson RC (2013) The law and economics of street layouts: how a grid pattern benefits a downtown. Alabama Law Rev 64:463–510
Elman B, Kern M (eds) (2009) Statecraft and classical learning: the rituals of Zhou in East Asian history. Brill Academic Pub, Boston
Eppstein D, Goodrich MT (2008) Studying (non-planar) road networks through an algorithmic lens. In: Proceedings of the 16th ACM SIGSPATIAL international conference on advances in geographic information systems, GIS '08. Presented at the SIGSPATIAL '08, Irvine, California, p 16. https://doi.org/10.1145/1463434.1463455
Giacomin DJ, Levinson DM (2015) Road network circuity in metropolitan areas. Environ Plan B: Plan Des 42:1040–1053. https://doi.org/10.1068/b130131p
Goodstein P (1994) Denver streets: names, numbers, locations, logic. New Social Publications, Denver
Groth P (1981) Streetgrids as frameworks for urban variety. Harvard Archit Rev 2:68–75
Gudmundsson A, Mohajeri N (2013) Entropy and order in urban street networks. Sci Rep 3. https://doi.org/10.1038/srep03324
Hanson J (1989) Order and structure in urban design. Ekistics 56:22–42
Hatuka T, Forsyth L (2005) Urban design in the context of glocalization and nationalism. Urban Des Int 10:69–86. https://doi.org/10.1057/palgrave.udi.9000142
Holloway M (2013) The measure of Manhattan: the tumultuous career and surprising legacy of John Randel, Jr., cartographer, surveyor, inventor. W. W. Norton & Company, New York
Jackson KT (1985) Crabgrass frontier: the suburbanization of the United States. Oxford University Press, New York
Jacobs A (1995) Great streets. MIT Press, Cambridge
Jiang B, Claramunt C (2004) Topological analysis of urban street networks. Environ Plan B: Plan Des 31:151–162. https://doi.org/10.1068/b306
Jiang B, Duan Y, Lu F, Yang T, Zhao J (2014) Topological structure of urban street networks from the perspective of degree correlations. Environ Plan B: Plan Des 41:813–828. https://doi.org/10.1068/b39110
Kaiser A (2011) Roman urban street networks: streets and the organization of space in four cities. Routledge, London
Karimi K (1997) The spatial logic of organic cities in Iran and the United Kingdom. Presented at the Space Syntax First International Symposium, London, England
Koeppel G (2015) City on a grid: how New York became New York. Da Capo Press, Boston
Kostof S (1991) The city shaped: urban patterns and meanings through history. Bulfinch Press, New York
Levinson D, El-Geneidy A (2009) The minimum circuity frontier and the journey to work. Reg Sci Urban Econ 39:732–738. https://doi.org/10.1016/j.regsciurbeco.2009.07.003
Li W, Hu D, Liu Y (2018) An improved measuring method for the information entropy of network topology. Trans GIS 22:1632–1648. https://doi.org/10.1111/tgis.12487
Lilley KD (2001) Urban planning and the design of towns in the middle ages. Plan Perspect 16:1–24. https://doi.org/10.1080/02665430010000751
Lin J, Ban Y (2013) Complex network topology of transportation systems. Transp Rev 33:658–685. https://doi.org/10.1080/01441647.2013.848955
Louf R, Barthelemy M (2014) A typology of street patterns. J R Soc Interface 11:1–7. https://doi.org/10.1098/rsif.2014.0924
Low SM (2009) Indigenous architecture and the Spanish American plaza in Mesoamerica and the Caribbean. Am Anthropol 97:748–762. https://doi.org/10.1525/aa.1995.97.4.02a00160
Marshall S (2004) Streets and patterns. Spon Press, New York
Marshall S, Gil J, Kropf K, Tomko M, Figueiredo L (2018) Street network studies: from networks to models and their representations. Netw Spat Econ. https://doi.org/10.1007/s11067-018-9427-9
Martin L (2000) The grid as generator. Archit Res Q 4:309–322. https://doi.org/10.1017/S1359135500000403
Masucci AP, Stanilov K, Batty M (2013) Limited urban growth: London's street network dynamics since the 18th century. PLoS One 8:e69469. https://doi.org/10.1371/journal.pone.0069469
Mazza L (2009) Plan and constitution - Aristotle's Hippodamus: towards an "ostensive" definition of spatial planning. Town Plan Rev 80:113–141. https://doi.org/10.3828/tpr.80.2.2
McIntosh JR (2007) The ancient Indus Valley: new perspectives. ABC-CLIO, Santa Barbara
Mele C (2017) Spatial order through the family: the regulation of urban space in Singapore. Urban Geogr 38:1084–1108. https://doi.org/10.1080/02723638.2016.1187372
Mohajeri N, French J, Gudmundsson A (2013a) Entropy measures of street-network dispersion: analysis of coastal cities in Brazil and Britain. Entropy 15:3340–3360. https://doi.org/10.3390/e15093340
Mohajeri N, French JR, Batty M (2013b) Evolution and entropy in the organization of urban street patterns. Ann GIS 19:1–16. https://doi.org/10.1080/19475683.2012.758175
Mohajeri N, Gudmundsson A (2012) Entropies and scaling exponents of street and fracture networks. Entropy 14:800–833. https://doi.org/10.3390/e14040800
Mohajeri N, Gudmundsson A (2014) The evolution and complexity of urban street networks. Geogr Anal 46:345–367. https://doi.org/10.1111/gean.12061
Nilsson L, Gil J (2019) The signature of organic urban growth: degree distribution patterns of the city's street network structure. In: D'Acci L (ed) The mathematics of urban morphology. Springer, Cham, pp 93–121. https://doi.org/10.1007/978-3-030-12381-9_5
O'Brien DT, Farrell C, Welsh BC (2019) Looking through broken windows: the impact of neighborhood disorder on aggression and fear of crime is an artifact of research design. Ann Rev Criminol 2:53–71. https://doi.org/10.1146/annurev-criminol-011518-024638
Paden R (2001) The two professions of Hippodamus of Miletus. Philos Geogr 4:25–48. https://doi.org/10.1080/10903770124644
Park RE, Burgess EW (eds) (1925) The city. University of Chicago Press, Chicago
Parthasarathi P, Hochmair H, Levinson D (2015) Street network structure and household activity spaces. Urban Stud 52:1090–1112. https://doi.org/10.1177/0042098014537956
Peterson CW, Chiu BC (1987) On the astronomical origin of the offset street grid at Teotihuacan. J Hist Astron 18:13–18. https://doi.org/10.1177/002182868701801102
Porta S, Crucitti P, Latora V (2006) The network analysis of urban streets: a primal approach. Environ Plan B: Plan Des 33:705–725. https://doi.org/10.1068/b32045
Qureshi M, Hwang H-L, Chin S-M (2002) Comparison of distance estimates for commodity flow survey: great circle distances versus network-based distances. Transp Res Rec:212–216. https://doi.org/10.3141/1804-28
Rodriguez R (2005) The foundational process of cities in Spanish America: the law of indies as a planning tool for urbanization in early colonial towns in Venezuela. Focus 2:47–58. https://doi.org/10.15368/focus.2005v2n1.8
Rose-Redwood R (2011) Mythologies of the grid in the Empire City, 1800–2011. Geogr Rev 101:396–413. https://doi.org/10.1111/j.1931-0846.2011.00103.x
Rose-Redwood R, Bigon L (2018) Gridded worlds: an urban anthology. Springer, Cham
Roy A (2005) Urban informality: toward an epistemology of planning. J Am Plan Assoc 71:147–158. https://doi.org/10.1080/01944360508976689
Salingaros NA (1998) Theory of the urban web. J Urban Des 3:53–71. https://doi.org/10.1080/13574809808724416
Sennett R (1990) American cities: the grid plan and the Protestant ethic. Int Soc Sci J 42:269–285. https://doi.org/10.1007/978-3-319-76490-0_11
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423, 623–656. https://doi.org/10.1145/584091.584093
Smith ME (2007) Form and meaning in the earliest cities: a new approach to ancient urban planning. J Plan Hist 6:3–47. https://doi.org/10.1177/1538513206293713
Southworth M, Ben-Joseph E (1995) Street standards and the shaping of suburbia. J Am Plan Assoc 61:65–81. https://doi.org/10.1080/01944369508975620
Southworth M, Ben-Joseph E (2004) Reconsidering the cul-de-sac. Access 24:28–33. https://doi.org/10.1179/chi.2008.28.1.135
Sparavigna AC (2017) The zenith passage of the sun and the architectures of the tropical zone. Mech Mater Sci Eng 10:239–250. https://doi.org/10.2412/mmse.20.89.933
Speidel WC (1967) Sons of the profits. Nettle Creek Publishing, Seattle
Stanislawski D (1946) The origin and spread of the grid-pattern town. Geogr Rev 36:105–120. https://doi.org/10.2307/211076
Tsiotas D, Polyzos S (2018) The complexity in the study of spatial networks: an epistemological approach. Netw Spat Econ 18:1–32. https://doi.org/10.1007/s11067-017-9354-1
van der Maaten L, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9:2579–2605
Wang J (2015) Resilience of self-organised and top-down planned cities—a case study on London and Beijing street networks. PLoS One 10:e0141736. https://doi.org/10.1371/journal.pone.0141736
Wirth L (1928) The ghetto. University of Chicago Press, Chicago
Xu Y (2008) Urban communities, state, spatial order, and modernity: studies of imperial and republican Beijing in perspective. China Rev Int 15:1–38. https://doi.org/10.1353/cri.0.0139
Yeh AG-O, Li X (2001) Measuring and monitoring of urban sprawl in a rapidly growing region using entropy. Photogramm Eng Remote Sens 67:83–90
Zielstra D, Hochmair HH, Neis P (2013) Assessing the effect of data imports on the completeness of OpenStreetMap – a United States case study. Trans GIS 17:315–334. https://doi.org/10.1111/tgis.12037

Department of Urban Planning and Spatial Analysis, Sol Price School of Public Policy, University of Southern California, 201B Lewis Hall, Los Angeles, California, 90089-0626, USA

Geoff Boeing

The author designed and conducted the study and wrote, revised, and approved the final manuscript. Correspondence to Geoff Boeing. The author declares that he has no competing interests.

Boeing, G. Urban spatial order: street network orientation, configuration, and entropy. Appl Netw Sci 4, 67 (2019). https://doi.org/10.1007/s41109-019-0189-1
CommonCrawl
\begin{definition}[Definition:Continuous Real Function/Closed Interval] Let $f$ be a real function defined on a closed interval $\closedint a b$. $f$ is '''continuous on $\closedint a b$''' {{iff}} it is: :$(1): \quad$ continuous at every point of the open interval $\openint a b$ :$(2): \quad$ continuous on the right at $a$ :$(3): \quad$ continuous on the left at $b$. That is, if $f$ is to be continuous over the ''whole'' of a closed interval, it needs to be continuous at the end points. Because we only have "access" to the function on one side of each end point, all we can do is insist on continuity on the side of the end points on which the function is defined. \end{definition}
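A short worked example (added here for illustration; it is not part of the ProofWiki entry, and it uses the entry's $\closedint$/$\openint$ macros, which are ProofWiki-specific LaTeX commands):

```latex
% Worked example, using ProofWiki's interval macros:
Let $f(x) = \sqrt x$ on $\closedint 0 1$.
Continuity on $\openint 0 1$ and continuity on the left at $b = 1$ are routine.
At the left end point $a = 0$ only continuity on the right is demanded:
given $\epsilon > 0$, choose $\delta = \epsilon^2$; then
\[
  0 \le x < \delta \implies \sqrt x < \sqrt \delta = \epsilon,
\]
so $\lim_{x \to 0^+} \sqrt x = 0 = f(0)$, and $f$ is continuous on $\closedint 0 1$.
```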
ProofWiki
Project acronym AMAREC
Project Amenability, Approximation and Reconstruction
Researcher (PI) Wilhelm WINTER
Host Institution (HI) WESTFAELISCHE WILHELMS-UNIVERSITAET MUENSTER
Summary Algebras of operators on Hilbert spaces were originally introduced as the right framework for the mathematical description of quantum mechanics. In modern mathematics the scope has much broadened due to the highly versatile nature of operator algebras. They are particularly useful in the analysis of groups and their actions. Amenability is a finiteness property which occurs in many different contexts and which can be characterised in many different ways. We will analyse amenability in terms of approximation properties, in the frameworks of abstract C*-algebras, of topological dynamical systems, and of discrete groups. Such approximation properties will serve as bridging devices between these setups, and they will be used to systematically recover geometric information about the underlying structures. When passing from groups, and more generally from dynamical systems, to operator algebras, one loses information, but one gains new tools to isolate and analyse pertinent properties of the underlying structure. We will mostly be interested in the topological setting, and in the associated C*-algebras. Amenability of groups or of dynamical systems then translates into the completely positive approximation property. Systems of completely positive approximations store all the essential data about a C*-algebra, and sometimes one can arrange the systems so that one can directly read off such information. For transformation group C*-algebras, one can achieve this by using approximation properties of the underlying dynamics. To some extent one can even go back, and extract dynamical approximation properties from completely positive approximations of the C*-algebra. This interplay between approximation properties in topological dynamics and in noncommutative topology carries a surprisingly rich structure. It connects directly to the heart of the classification problem for nuclear C*-algebras on the one hand, and to central open questions on amenable dynamics on the other.
Project acronym BeyondA1
Project Set theory beyond the first uncountable cardinal
Researcher (PI) Assaf Shmuel Rinot
Host Institution (HI) BAR ILAN UNIVERSITY
Summary We propose to establish a research group that will unveil the combinatorial nature of the second uncountable cardinal. This includes its Ramsey-theoretic, order-theoretic, graph-theoretic and topological features. Among others, we will be directly addressing fundamental problems due to Erdos, Rado, Galvin, and Shelah. While some of these problems are old and well-known, an unexpected series of breakthroughs from the last three years suggests that now is a promising point in time to carry out such a project. Indeed, through a short period, four previously unattainable problems concerning the second uncountable cardinal were successfully tackled: Aspero on a club-guessing problem of Shelah, Krueger on the club-isomorphism problem for Aronszajn trees, Neeman on the isomorphism problem for dense sets of reals, and the PI on the Souslin problem. Each of these results was obtained through the development of a completely new technical framework, and these frameworks could now pave the way for the solution of some major open questions. A goal of the highest risk in this project is the discovery of a consistent (possibly, parameterized) forcing axiom that will (preferably, simultaneously) provide structure theorems for stationary sets, linearly ordered sets, trees, graphs, and partition relations, as well as the refutation of various forms of club-guessing principles, all at the level of the second uncountable cardinal. In comparison, at the level of the first uncountable cardinal, a forcing axiom due to Foreman, Magidor and Shelah achieves exactly that. To approach our goals, the proposed project is divided into four core areas: Uncountable trees, Ramsey theory on ordinals, Club-guessing principles, and Forcing Axioms. There is a rich bilateral interaction between any pair of the four different cores, but the proposed division will allow an efficient allocation of manpower, and will increase the chances of parallel success.
Project acronym CURVATURE
Project Optimal transport techniques in the geometric analysis of spaces with curvature bounds
Researcher (PI) Andrea MONDINO
Host Institution (HI) THE UNIVERSITY OF WARWICK
Summary The unifying goal of the CURVATURE project is to develop new strategies and tools in order to attack fundamental questions in the theory of smooth and non-smooth spaces satisfying (mainly Ricci or sectional) curvature restrictions/bounds. The program involves analysis and geometry, with strong connections to probability and mathematical physics. The problems will be attacked by an innovative merging of geometric analysis and optimal transport techniques that already enabled the PI and collaborators to solve important open questions in the field. The project is composed of three inter-connected themes: Theme I investigates the structure of non-smooth spaces with Ricci curvature bounded below and their link with Alexandrov geometry. The goal of this theme is two-fold: on the one hand to get a refined structural picture of non-smooth spaces with Ricci curvature lower bounds, on the other hand to apply the new methods to make progress in some long-standing open problems in Alexandrov geometry. Theme II aims to achieve a unified treatment of geometric and functional inequalities for both smooth and non-smooth, finite and infinite dimensional spaces satisfying Ricci curvature lower bounds. The approach will be used also to establish new quantitative versions of classical geometric/functional inequalities for smooth Riemannian manifolds and to make progress in long standing open problems for both Riemannian and sub-Riemannian manifolds. Theme III will investigate optimal transport in a Lorentzian setting, where the Ricci curvature plays a key role in Einstein's equations of general relativity. The three themes together will yield a unique unifying insight of smooth and non-smooth structures with curvature bounds.
Project acronym EffectiveTG
Project Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics
Researcher (PI) Gal BINYAMINI
Summary Tame geometry studies structures in which every definable set has a finite geometric complexity. The study of tame geometry spans several interrelated mathematical fields, including semialgebraic, subanalytic, and o-minimal geometry. The past decade has seen the emergence of a spectacular link between tame geometry and arithmetic following the discovery of the fundamental Pila-Wilkie counting theorem and its applications in unlikely diophantine intersections. The P-W theorem itself relies crucially on the Yomdin-Gromov theorem, a classical result of tame geometry with fundamental applications in smooth dynamics. It is natural to ask whether the complexity of a tame set can be estimated effectively in terms of the defining formulas. While a large body of work is devoted to answering such questions in the semialgebraic case, surprisingly little is known concerning more general tame structures - specifically those needed in recent applications to arithmetic. The nature of the link between tame geometry and arithmetic is such that any progress toward effectivizing the theory of tame structures will likely lead to effective results in the domain of unlikely intersections. Similarly, a more effective version of the Yomdin-Gromov theorem is known to imply important consequences in smooth dynamics. The proposed research will approach effectivity in tame geometry from a fundamentally new direction, bringing to bear methods from the theory of differential equations which have until recently never been used in this context.
Toward this end, our key goals will be to gain insight into the differential algebraic and complex analytic structure of tame sets; and to apply this insight in combination with results from the theory of differential equations to effectivize key results in tame geometry and its applications to arithmetic and dynamics. I believe that my preliminary work in this direction amply demonstrates the feasibility and potential of this approach.
Project acronym EFMA
Project Equidistribution, fractal measures and arithmetic
Researcher (PI) Peter Pal VARJU
Host Institution (HI) THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Summary The subject of this proposal lies at the crossroads of analysis, additive combinatorics, number theory and fractal geometry exploring equidistribution phenomena for random walks on groups and group actions and regularity properties of self-similar, self-affine and Furstenberg boundary measures and other kinds of stationary measures. Many of the problems I will study in this project are deeply linked with problems in number theory, such as bounds for the separation between algebraic numbers, Lehmer's conjecture and irreducibility of polynomials. The central aim of the project is to gain insight into and eventually resolve problems in several main directions including the following. I will address the main challenges that remain in our understanding of the spectral gap of averaging operators on finite groups and Lie groups and I will study the applications of such estimates.
I will build on the dramatic recent progress on a problem of Erdős from 1939 regarding Bernoulli convolutions. I will also investigate other families of fractal measures. I will examine the arithmetic properties (such as irreducibility and Galois groups) of generic polynomials with bounded coefficients and in other related families of polynomials. While these lines of research may seem unrelated, both the problems and the methods I propose to study them are deeply connected.
Project acronym Emergence
Project Emergence of wild differentiable dynamical systems
Researcher (PI) Pierre Berger
Host Institution (HI) CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE CNRS
Summary Many physical or biological systems display time-dependent states which can be mathematically modelled by a differentiable dynamical system. The state of the system consists of a finite number of variables, and the short time evolution is given by a differentiable equation or the iteration of a differentiable map. The evolution of a state is called an orbit of the system. The theory of dynamical systems studies the long time evolution of the orbits. For some systems, called chaotic, it is impossible to predict the state of an orbit after a long period of time. However, in some cases, one may predict the probability that an orbit has a certain state. A paradigm is given by the Boltzmann ergodic hypothesis in thermodynamics: over long periods of time, the time spent by a typical orbit in some region of the phase space is proportional to the "measure" of this region. The concept of ergodicity was mathematically formalized by Birkhoff. It was then successfully applied (in particular) by the schools of Kolmogorov and Anosov in the USSR, and Smale in the USA, to describe the statistical behaviours of typical orbits of many differentiable dynamical systems. For some systems, called wild, infinitely many possible statistical behaviours coexist. These are spread over a huge space of different ergodic measures, as initially discovered by Newhouse in the 70's. Such systems remain poorly understood.
In 2016, contrary to the general belief, it was discovered that wild systems form a rather typical set of systems (in some categories). This project proposes the first global, ergodic study of wild dynamics, by focusing on dynamics which are too complex to be well described by means of finitely many statistics, as recently quantified by the notion of Emergence. Paradigmatic examples will be investigated and shown to be typical in many senses and among many categories. They will be used to construct a theory of wild dynamics around the concept of Emergence.
Project acronym FHiCuNCAG
Project Foundations for Higher and Curved Noncommutative Algebraic Geometry
Researcher (PI) Wendy Joy Lowen
Host Institution (HI) UNIVERSITEIT ANTWERPEN
Summary With this research programme, inspired by open problems within noncommutative algebraic geometry (NCAG) as well as by recent developments in algebraic topology, it is our aim to lay out new foundations for NCAG. On the one hand, the categorical approach to geometry put forth in NCAG has seen a wide range of applications both in mathematics and in theoretical physics. On the other hand, algebraic topology has received a vast impetus from the development of higher topos theory by Lurie and others. The current project is aimed at cross-fertilisation between the two subjects, in particular through the development of "higher linear topos theory". We will approach the higher structure on Hochschild-type complexes from two angles.
Firstly, focusing on intrinsic incarnations of spaces as large categories, we will use the tensor products developed jointly with Ramos González and Shoikhet to obtain a "large version" of the Deligne conjecture. Secondly, focusing on concrete representations, we will develop new operadic techniques in order to endow complexes like the Gerstenhaber-Schack complex for prestacks (due to Dinh Van-Lowen) and the deformation complexes for monoidal categories and pasting diagrams (due to Shrestha and Yetter) with new combinatorial structure. In another direction, we will move from Hochschild cohomology of abelian categories (in the sense of Lowen-Van den Bergh) to Mac Lane cohomology for exact categories (in the sense of Kaledin-Lowen), extending the scope of NCAG to "non-linear deformations". One of the mysteries in algebraic deformation theory is the curvature problem: in the process of deformation we are brought to the boundaries of NCAG territory through the introduction of a curvature component which disables the standard approaches to cohomology. Eventually, it is our goal to set up a new framework for NCAG which incorporates curved objects, drawing inspiration from the realm of higher categories.
Project acronym GTBB
Project General theory for Big Bayes
Researcher (PI) Judith Rousseau
Summary In the modern era of complex and large data sets, there is a stringent need for flexible, sound and scalable inferential methods to analyse them.
Bayesian approaches have been increasingly used in statistics and machine learning and in all sorts of applications such as biostatistics, astrophysics, social science, etc. Major advantages of Bayesian approaches are their ability to build complex models in a hierarchical way, their coherency, and their ability to deliver not only point estimators but also measures of uncertainty from the posterior distribution, a probability distribution on the parameter space which is at the core of all Bayesian inference. The increasing complexity of the data sets raises huge challenges for Bayesian approaches, both theoretical and computational. The aim of this project is to develop a general theory for the analysis of Bayesian methods in complex and high (or infinite) dimensional models which will cover not only a fine understanding of the posterior distributions but also an analysis of the output of the algorithms used to implement the approaches. The main objectives of the project are (briefly):
1. Asymptotic analysis of the posterior distribution of complex high dimensional models.
2. Interactions between the asymptotic theory of high dimensional posterior distributions and computational complexity.
We will also enrich these theoretical developments with three strongly related domains of application, namely neuroscience, terrorism and crime, and ecology.
Project acronym HiCoShiVa
Project Higher coherent cohomology of Shimura varieties
Researcher (PI) Vincent Hubert Pilloni
Summary One can attach certain complex analytic functions to algebraic varieties defined over the rational numbers, called Zeta functions. They are a vast generalization of Riemann's zeta function. The Hasse-Weil conjecture predicts that these Zeta functions satisfy a functional equation and admit a meromorphic continuation to the whole complex plane. This follows from the conjectural Langlands program, which aims in particular at proving that Zeta functions of algebraic varieties are products of automorphic L-functions.
Automorphic forms belong to the representation theory of reductive groups, but certain automorphic forms actually appear in the cohomology of locally symmetric spaces, and in particular in the cohomology of automorphic vector bundles over Shimura varieties. This is a bridge towards arithmetic geometry. There has been tremendous activity in this subject, and the Hasse-Weil conjecture is known for proper smooth algebraic varieties over totally real number fields with regular Hodge numbers. This covers in particular the case of genus one curves. Nevertheless, lots of basic examples fail to have this regularity property: higher-genus curves, Artin motives, and so on. The project HiCoShiVa is focused on this irregular situation. On the Shimura variety side we will have to deal with higher cohomology groups and torsion. The main innovation of the project is to construct p-adic variations of the coherent cohomology. We are able to consider higher coherent cohomology classes, while previous works in this area have been concerned with degree 0 cohomology. The applications will be the construction of automorphic Galois representations, the modularity of irregular motives and new cases of the Hasse-Weil conjecture, and the construction of p-adic L-functions.
Project acronym HomDyn
Project Homogenous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems. One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions.
We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\mathbb{R}/\mathbb{Z}$. A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term. One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics. Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
How to Figure Out Percentages
By Lisa Maloney
Percentages are everywhere in life: you use them to figure out how much to tip at a restaurant, how much of a work goal you've met, and how much that dress that's on sale will cost. No matter what the context, remember that percentages are actually fractions and proportions in disguise, which makes them a great tool for gauging the relative size of one thing against another.
Understanding Percentages
Here's why percentages are fractions in disguise: "percent" actually means "one part out of every hundred." So one percent is one part out of 100, or the fraction 1/100. Two percent is two parts out of 100, or the fraction 2/100, and so on. Because percentages are always gauged against a common scale (out of 100), they're very easy to compare to one another. They're also easy to convert in and out of decimal form, which makes calculations easy.
Calculating Percentages
Convert the Percentage to a Decimal
Divide the percentage by 100 to convert it into a decimal. So if you're calculating 20 percent, you have: 20/100 = 0.2
Multiply the Original Quantity by the Percentage
Multiply the original quantity by the percentage you want to figure out. For example, if you ate out at a nice restaurant, ended up with a bill of $90 and now want to tip 20 percent of that bill, you'd multiply $90 by 20 percent expressed as a decimal: $90 × 0.2 = $18
$18 is 20 percent of $90, so if you received good service, that's how much you'd tip.
Working Backward to Find Percentages
What if, after that nice meal at the restaurant, you get a bill for $120 and hear it already has an 18 percent gratuity? You can use the percentage of the gratuity to work backwards and find out how much the bill was before the tip.
Total the Percentage Paid
Add the percentage of the initial meal cost you originally paid (100 percent, which in plain English means "the whole thing") and the percentage of gratuity paid – in this case, 18 percent. So you paid 100 + 18 = 118 percent of the total meal cost.
Convert the Percentage to a Decimal
Divide the percentage by 100 to convert it to a decimal. In this case, you have: 118/100 = 1.18
Divide the Total Paid by the Percentage Paid
Divide the total amount you paid by the total percentage that represents. The result will be the cost of the original meal, before the extra percentage was added on. In this case, that means: $120 / 1.18 ≈ $101.69
So your meal cost about $101.69 before the 18 percent gratuity was added.
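These two procedures are easy to script. Here's a minimal Python sketch (ours, not from the article; the function names are just illustrative):

def percent_of(amount, percent):
    # Step 1: convert the percentage to a decimal. Step 2: multiply.
    return amount * (percent / 100)

def original_before_markup(total, percent_added):
    # Work backward: total the percentage paid, convert it to a decimal,
    # then divide the amount paid by it.
    return total / ((100 + percent_added) / 100)

print(percent_of(90, 20))                         # 18.0 -> a 20 percent tip on a $90 bill
print(round(original_before_markup(120, 18), 2))  # 101.69 -> the pre-gratuity meal cost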
Why don't electrons crash into the nuclei they "orbit"?
I'm having trouble understanding the simple "planetary" model of the atom that I'm being taught in my basic chemistry course. In particular, I can't see how a negatively charged electron can stay in "orbit" around a positively charged nucleus. Even if the electron actually orbits the nucleus, wouldn't that orbit eventually decay? I can't reconcile the rapidly moving electrons required by the planetary model with the way atoms are described as forming bonds. If electrons are zooming around in orbits, how do they suddenly "stop" to form bonds? I understand that certain aspects of quantum mechanics were created to address these problems, and that there are other models of atoms. My question here is whether the planetary model itself addresses these concerns in some way (that I'm missing) and whether I'm right to be uncomfortable with it.
quantum-mechanics electrons atoms models
orome
One more reference - Why doesn't orbital electron fall into the nucleus of Rb85, but falls into the nucleus of Rb83? – voix Jan 25 '12 at 15:40
To 1: they are on the lowest energy level; they can't decay to lower ones. To 2: they don't stop; the planetary model is just that, a model (and a pretty bad one). – P3trus Jan 25 '12 at 17:13
A similar question on mathoverflow, with some detailed answers: mathoverflow.net/q/119495 – user4552 May 26 '13 at 17:34
The planetary model is pretty bogus, don't trust it too much. – DanielSank Jun 20 '15 at 7:36
Because of its wave nature, the electron in its ground state is actually smeared symmetrically about the proton (ignoring spin-spin effects), and spherically symmetric charge distributions do not radiate. See also physics.stackexchange.com/q/264123 – jim Jun 27 '16 at 15:13
You are right, the planetary model of the atom does not make sense when one considers the electromagnetic forces involved. The electron in an orbit is accelerating continuously and would thus radiate away its energy and fall into the nucleus. One of the reasons for "inventing" quantum mechanics was exactly this conundrum. The Bohr model was proposed to solve this, by stipulating that the orbits were closed and quantized and no energy could be lost while the electron was in orbit, thus creating the stability of the atom necessary to form solids and liquids. It also explained the lines observed in the spectra of excited atoms as transitions between orbits. If you study further into physics you will learn about quantum mechanics and the axioms and postulates that form the equations whose solutions give exact numbers for what was the first guess at a model of the atom. Quantum mechanics is accepted as the underlying level of all physical forces at the microscopic level, and sometimes quantum mechanics can be seen macroscopically, as with superconductivity, for example. Macroscopic forces, like those due to classical electric and magnetic fields, are limiting cases of the real forces which reign microscopically.
anna v
anna v: Follow-up question (excuse me if it's silly): why would the electron fall into the nucleus? It'd be losing its charge, but how would that affect its kinetic energy? – Fingolfin Nov 15 '16 at 12:44
@xci13 A rotating charge does not lose its charge; the electron keeps its charge. A rotating electron accelerates, and classically an accelerating or decelerating charge emits radiation, losing its kinetic energy. As it loses energy it spirals in and falls on the nucleus, classically that is. – anna v Nov 15 '16 at 14:02
Thank you! Can you explain a bit further how the radiation affects the kinetic energy? I still don't grasp why the radiation would affect the kinetic energy at all. Again, sorry for the novice question. – Fingolfin Nov 15 '16 at 14:09
Classically, radiation carries energy away via the Poynting vector, and energy conservation assures that the accelerating electron loses it (in the system where the nucleus is at rest). en.wikipedia.org/wiki/Poynting_vector#Interpretation – anna v Nov 15 '16 at 14:20
At a basic level, without the existence of acceleration radiation, the planetary model could work. So the OP is not right about why the model fails. Otherwise, it should be surprising that the Moon doesn't crash into the Earth. – G. Bergeron Dec 8 '16 at 10:18
Yes. What you've given is a proof that the classical, planetary model of the atom fails. Right. There are even simpler objections of this type. For example, the planetary model of hydrogen would be confined to a plane, but we know hydrogen atoms aren't flat. "My question here is whether the planetary model itself addresses these concerns in some way (that I'm missing) [...]" No, the planetary model is simply wrong. The Bohr model, which was an early attempt to patch up the planetary model, is also wrong (e.g., it predicts a flat hydrogen atom with nonzero angular momentum in its ground state). The quantum-mechanical resolution of this problem can be approached at a variety of levels of mathematical and physical sophistication. For a sophisticated discussion, see this mathoverflow question and the answers and references therein: https://mathoverflow.net/questions/119495/mathematical-proof-of-the-stability-of-atoms At the very simplest level, the resolution works like this. We have to completely abandon the idea that subatomic particles have well-defined trajectories in space. We have the de Broglie relation $|p|=h/\lambda$, where $p$ is the momentum of an electron, $h$ is Planck's constant, and $\lambda$ is the wavelength of the electron. Let's limit ourselves to one dimension. Suppose an electron is confined to a region of space with width $L$, and there are impenetrable walls on both sides, so the electron has zero probability of being outside this one-dimensional "box." This box is a simplified model of an atom. The electron is a wave, and when it's confined to a space like this, it's a standing wave. The standing-wave pattern with the longest possible wavelength has $\lambda=2L$, corresponding to a superposition of two traveling waves with momenta $p=\pm h/2L$. This maximum wavelength imposes a minimum on $|p|$, which corresponds to a minimum kinetic energy. Although this model is wrong in detail (and, in fact, agrees with the actual description of the hydrogen atom even more poorly than the Bohr model), it has the right ingredients in it to explain why atoms don't collapse. Unlike the Bohr model, it has the right conceptual ingredients to allow it to be generalized, expanded, and made more rigorous, leading to a full mathematical description of the atom. Unlike the Bohr model, it makes clear what is fundamentally going on: when we confine a particle to a small space, we get a lower limit on its energy, and therefore once it's in the standing-wave pattern with that energy, it can't collapse; it's already in the state of lowest possible energy.
Just to add, there's a calculation here of how long a hydrogen atom would last with the planetary model. It works out at $1.6\times 10^{-11}$ s. See page 3. That's without relativistic corrections, which reduce the lifespan of the atom. physics.princeton.edu/~mcdonald/examples/orbitdecay.pdf – Robert Walker Dec 9 '17 at 17:46
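To put rough numbers on this standing-wave argument, here is a back-of-the-envelope sketch (not part of the original answer; it assumes a box about one ångström wide, roughly the size of an atom):

# Minimum kinetic energy of an electron confined to a 1 Å box,
# using p = h/(2L) from the longest standing wave, lambda = 2L.
h = 6.626e-34     # Planck's constant, J*s
m = 9.109e-31     # electron mass, kg
eV = 1.602e-19    # joules per electron volt
L = 1e-10         # box width, m

p = h / (2 * L)      # minimum momentum magnitude
E = p**2 / (2 * m)   # corresponding minimum kinetic energy
print(p)             # ~3.3e-24 kg*m/s
print(E / eV)        # ~38 eV

The result, a few tens of electron volts, is indeed the scale of atomic binding energies (13.6 eV for hydrogen), which is why confinement alone is enough to stop the collapse.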
The treatment of electrons as waves, combined with the spherical harmonics, forms the foundation for a modern understanding of how electrons "orbit." (The original answer illustrated this with images of the spherical harmonics and of the hydrogen orbitals, not reproduced here.) Tweaks to the spherical harmonic differential equations yield the Schrodinger equation, which yields the accepted models of electron orbital structures. The only element for which the Schrodinger equation may be solved exactly (approximation is necessary for the rest) is hydrogen. These models predict essentially zero probability that an electron will enter the nucleus for most orbitals. In the orbitals where the electron does spend some time in the nucleus, it is believed to be energetically unfavourable for the electron to bind to the proton. If electrons were merely point charges this would not be possible, but the wave nature of electrons creates phenomena such as the Pauli exclusion principle that predict otherwise.
Steven Lu
All the s-states have an anti-node at the center, and predict that the electron spends a small but non-negligible fraction of the time in the nucleus. – dmckee --- ex-moderator kitten Jun 14 '13 at 21:27
When it is energetically favorable, they do. It's called "electron capture". See physics.stackexchange.com/q/4481 for a slightly longer discussion. Or physics.stackexchange.com/a/9418/520. – dmckee --- ex-moderator kitten Jun 14 '13 at 22:03
Would also be interesting to compare with why positronium is unstable and magic isotope numbers. – Ciro Santilli TRUMP BAN IS BAD Dec 27 '19 at 17:59
Briefly:
1. The Bohr planetary model doesn't really address these issues. Bohr, a genius, just asserted that the phenomena at the atomic level were a combination of stationarity while being in an orbit, and discrete quantum jumps between the orbits. It was a postulate that yielded some agreement with experiment and was very helpful for the future development of quantum mechanics solely because it got people to think about stationarity and discreteness.
2. It is totally useless for discussing chemical bonds. You are quite right to be uncomfortable with it.
3. It would be stretching a point, but you could see the quantum mechanics of Heisenberg and Schroedinger as the only way to salvage the planetary model of Bohr, by finally coming up with an explanation for the stationarity of an electron's state around (but no longer considered as «orbiting») the nucleus and an explanation for discrete jumps as a response to perturbations from outside. But this required seeing the electron more as a wave and hence not having any definite location along the orbit.
joseph f. johnson
Bohr did not just assert it; this just shows you never read Bohr. Bohr created the correspondence principle to explain how to quantize.
– Ron Maimon Oct 7 '12 at 14:12
Here's an answer from Dr. Richard Feynman (http://www.feynmanlectures.caltech.edu/II_01.html#Ch1-S1): You know, of course, that atoms are made with positive protons in the nucleus and with electrons outside. You may ask: "If this electrical force is so terrific, why don't the protons and electrons just get on top of each other? If they want to be in an intimate mixture, why isn't it still more intimate?" The answer has to do with the quantum effects. If we try to confine our electrons in a region that is very close to the protons, then according to the uncertainty principle they must have some mean square momentum which is larger the more we try to confine them. It is this motion, required by the laws of quantum mechanics, that keeps the electrical attraction from bringing the charges any closer together.
good_ole_ray
From the asker's perspective, the explanatory powers of most of these answers seem pretty bad. I prefer Emilio Pisanty's answer here: Why isn't Hydrogen's electron pulled into the nucleus? because it explains exactly how the uncertainty principle dictates the facts of this atomic reality. The summarized problem is that, if the charged and attracted electron and proton fell into each other, we would know their position exactly, and by the Heisenberg uncertainty principle our knowledge of the momentum would be immensely small: it could be anything. The chances of the momentum being large enough to "escape" this essentially electrostatic attraction are therefore very large. Thus the electrons recede to an average distance from the nucleus. The electron is in the position it is (or rather its average position) to keep these two opposing forces in balance. The Heisenberg uncertainty acts as a force of repulsion, similar to the effect of compressing a gas: more compression, more pushback.
Andres Salas
I also prefer John Rennie's answer: physics.stackexchange.com/q/88441 – Andres Salas Jul 30 '14 at 15:36
Sometimes electrons do "crash into the nucleus" - it's called electron capture and is a mode of decay for some unstable isotopes.
There is no orbit around the nucleus, since the expectation value of the angular momentum in the ground state $\psi_0$ is zero: $\langle\psi_0|\hat{L}|\psi_0\rangle=0$. That is why we cannot speak of a classical planetary model, as Bohr did. Heisenberg's uncertainty principle also prevents electrons from having well-defined orbits; the electron is just somewhere outside the nucleus. Since the proton is positively charged and the electron is negatively charged, they attract each other by the Coulomb force. But tiny quantum particles such as electrons behave as waves, and they cannot be compressed into too small a volume without increasing their kinetic energy. So an electron in its ground state $\psi_0$ is in an equilibrium state between the Coulomb force and this strange quantum pressure.
Electrons don't crash into the nucleus of an atom. The reason is deep-rooted in quantum mechanics. According to Heisenberg's uncertainty principle, $$\Delta x\Delta p_x\geqslant\hbar/2.$$ When the electron approaches the nucleus, it gets confined within a smaller region of space, so that the uncertainty in position $\Delta x$ of the electron decreases. Accordingly, the uncertainty in momentum $\Delta p_x$ increases. This means that the electrons have a higher energy on average, and thereby the system deviates from equilibrium.
If the electron fell into the nucleus, i.e., $\Delta x\rightarrow0$, then $\Delta p_x\rightarrow\infty$, which implies infinite energy. So, in order to maintain the stability of the system, the electrons remain away from the nucleus. If an electron did manage to crash into the nucleus, it would have to gain an unbounded amount of energy according to the uncertainty principle, which cannot occur in nature.
Richard
This is not exactly correct, as the width of the nucleus is a known finite number, i.e. $\Delta x \neq 0$. – Mathews24 May 17 '20 at 1:11
@Mathews24 Yes, the size of the nucleus is known and it can't be equal to zero according to quantum mechanics. That's the reason I used $\Delta x\rightarrow 0$. This doesn't mean $\Delta x=0$. You can refer to the first chapter of Quantum Mechanics by Landau and Lifshitz for a more rigorous explanation. – Richard May 17 '20 at 2:29
A planet orbiting a star with eccentricity smaller than unity would have to lose kinetic energy in order to spiral into the star. This could happen in the long run for a planetary system, due to emission of gravitational radiation and due to tidal forces heating up the star or the planet followed by radiative cooling. In quantum mechanics this cannot happen. If the planet has eccentricity equal to unity, analogous to an s orbital, it crashes straight into the star, where its kinetic energy is converted into heat. Again, in quantum mechanics this cannot happen. Whether quantum mechanics explains why, or only how, by construction, such an atomic collapse does not happen is a matter of interpretation. Note that electron capture by some nuclei, as discussed in other replies, requires that the weak interaction be taken into account. I interpret the original question as being about any nucleus, not just the ones susceptible to electron capture.
my2cts
I get negatives and delete votes but no arguments. Is physics.stackexchange.com about popularity or physics? Give me some arguments instead of anonymous, emotional negativity. – my2cts Jul 26 '19 at 8:46
While all these answers are fundamentally correct, especially with regard to Schrodinger and the shell model of electrons, there is one very basic means of radioactive decay, that of electron capture, which has not yet been discussed. Yes indeed, electrons orbiting around the atom can be captured into the nucleus. (For reference, see http://en.wikipedia.org/wiki/Electron_capture) Electron capture is a process in which a proton-rich nuclide absorbs an inner atomic electron, thereby changing a nuclear proton to a neutron and simultaneously causing the emission of an electron neutrino. Various photon emissions follow, as the energy of the atom falls to the ground state of the new nuclide. Electron capture is a common decay mode for isotopes with an over-abundance of protons in the nucleus. What is interesting about the phenomenon of electron capture is that it depends not on the electrons in the electron cloud of the atom, but rather on the nucleus. Thus, one cannot ignore the fact that the behavior of electron capture depends solely on the nucleus, not the electrons. For example, if the nucleus is Carbon-9, 100% of this isotope will decay via electron capture to Boron-9. Yet Carbon-14, which has the same electric charge and the same number of electrons in an identically configured electron cloud, never decays via electron capture.
Quantum physics, especially when the answer focuses on the electrons of the atom, has trouble explaining the behavior of electron capture with sufficient credibility. So in answer to your question, electrons do indeed fall into the nucleus, via the phenomenon of electron capture, yet that behavior cannot be explained by examining the quantum physics of the electrons alone.
The quantum mechanics of electron capture is very well understood. – Brandon Enright Mar 5 '14 at 1:31
I am very aware of the explanations offered by quantum mechanics, and no, they do not answer my questions. – user41827 Mar 5 '14 at 23:04
That's fine, but don't provide an answer saying "Quantum physics, especially when the answer is focusing on the electrons of the atom, has trouble explaining the behavior of Electron Capture with a sufficient credibility." just because you have questions about the process. – Brandon Enright Mar 5 '14 at 23:09
You must examine the quantum physics of the nucleus, not the electrons. The quantum physics of the electrons says the phenomenon can't happen, yet it happens all the time. That's why the existence of the electron neutrino was determined, the particle that allows this to happen. Anyone who says that an electron can't fall into a nucleus, because quantum physics prevents it, is incorrect. The electron neutrino is the mediator of this process, and this allows it. – user41827 Mar 5 '14 at 23:30
Do not misunderstand me. I have no questions. Let me clarify. What I am saying is that the answer will not be found by examining the quantum physics of the electrons. It is not the electrons that regulate this process. It is the quantum physics of the nucleus, which has been very much ignored in these previous answers. – user41827 Mar 5 '14 at 23:31
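A rough numerical companion to the uncertainty-principle answers above (a sketch under the crude assumption that the electron's kinetic energy is $\hbar^2/(2mr^2)$ when it is confined to a region of size $r$):

# Balance "quantum pressure" against Coulomb attraction:
# E(r) = hbar^2/(2 m r^2) - k e^2 / r, minimized over r.
hbar = 1.055e-34   # J*s
m = 9.109e-31      # electron mass, kg
e = 1.602e-19      # elementary charge, C
k = 8.988e9        # Coulomb constant, N*m^2/C^2

def E(r):
    return hbar**2 / (2 * m * r**2) - k * e**2 / r

rs = [i * 1e-12 for i in range(10, 500)]  # scan r from 0.01 nm to 0.5 nm
r_best = min(rs, key=E)
print(r_best)         # ~5.3e-11 m: the Bohr radius
print(E(r_best) / e)  # ~ -13.6 eV: the hydrogen ground-state energy

The minimum lands at the Bohr radius with energy near -13.6 eV, which is exactly the equilibrium between attraction and "quantum pressure" that the answers above describe.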
CommonCrawl
\begin{document} \title{On the rank of the distance matrix of graphs} \begin{abstract} Let $G$ be a connected graph with $V(G)=\{v_1,\ldots,v_n\}$. The $(i,j)$-entry of the distance matrix $D(G)$ of $G$ is the distance between $v_i$ and $v_j$. In this article, using the well-known theorem of Ramsey, we prove that for each integer $k\ge 2$ there is only a finite number of graphs whose distance matrices have rank $k$. We exhibit the list of graphs with distance matrices of rank $2$ and $3$. In addition, we study the rank of the distance matrices of graphs belonging to a family of graphs of diameter at most two, the trivially perfect graphs. We show that for each $\eta\ge 1$ there exists a trivially perfect graph with nullity $\eta$. We also show that for threshold graphs, which form a subfamily of the trivially perfect graphs, the nullity is bounded by one. \end{abstract} \keywords{Distance Matrix, Distance Rank, Threshold Graph, Trivially Perfect Graph.} \section{Introduction} All graphs mentioned in this article are finite and have neither loops nor multiple edges. Let $G$ be a connected graph on $n$ vertices with vertex set $V=\{v_1,\dots,v_n\}$. The distance in $G$ between vertices $v_i$ and $v_j$, denoted $d_G(v_i,v_j)$, is the number of edges of a shortest path linking $v_i$ and $v_j$. When the graph $G$ is clear from the context we write $d(v_i,v_j)$. The distance matrix of $G$, denoted $D(G)$, is the $n\times n$ symmetric matrix having its $(i,j)$-entry equal to $d(v_i,v_j)$. The distance matrix has attracted the attention of many researchers. The interest in this matrix was motivated by its connection with a communication problem (see~\cite{GrahamLovasz1978,GraphamandPollak1973} for more details). In an early article, Graham and Pollack \cite{GraphamandPollak1973} presented a remarkable result, proving that the determinant of the distance matrix of a tree $T$ on $n$ vertices only depends on $n$, being equal to $(-1)^{n-1}(n-1)2^{n-2}$. More recently, formulas for the determinant of the distance matrix of connected graphs on $n$ vertices with $n$ edges \cite{BapatKirklandandNeumann2005} (unicyclic graphs) and $n+1$ edges~\cite{Dratmanetal2021} (bicyclic graphs) have been computed. Determining the family of graphs with a given nullity for some associated matrix is a problem of interest for the graph-theoretic community. For instance, it is well known that the nullity of the Laplacian matrix $L(G)$ of a given graph $G$ coincides with the number of connected components of $G$ (see \cite{Merris1994}). Cheng and Liu considered graphs whose adjacency matrix has rank two or three~\cite{ChengandLiu2007}; i.e., graphs with nullity $n-2$ and $n-3$, where $n$ is the number of vertices of the graph. Later, Chang et al. characterized graphs whose adjacency matrix has rank four~\cite{CHY2011} and five~\cite{CHY2012}. The remainder of this article is organized as follows. In Section~\ref{sec: general concepts} we present some definitions and preliminary results. Section~\ref{sec: distance rank of general graphs} is devoted to proving that for any integer $k\ge 2$ there exists only a finite number of graphs with distance rank $k$. Section~\ref{sec: twins and null space} presents a collection of results connecting the distance rank of a graph with a partition of its vertex set into sets of twins. In Section~\ref{sec: threshold graphs} we prove that the nullity of any threshold graph is at most one, and we also present an infinite family of threshold graphs with nullity one.
Finally, Section~\ref{sec: trivially perfect graphs} contains a sufficient condition for a trivially perfect graph to have a nonsingular distance matrix and a result that guarantees the existence of a trivially perfect graph with nullity $\eta$ for each positive integer $\eta\ge 2$. In Section~\ref{sec: conclusions}, we close the article with some conclusions and open questions. \section{General concepts}\label{sec: general concepts} Let $G$ be a graph. We use $V(G)$ and $E(G)$ to denote the set of vertices of $G$ and the set of edges of $G$, respectively. We use $N_G(v)$ to denote the set of neighbors of a vertex $v\in V(G)$ and $N_G[v]=N_G(v)\cup\{v\}$; we omit the subscript when the context is clear. A vertex $v$ is a \emph{universal vertex} if $N_G[v]=V(G)$. Let $S\subseteq V(G)$. We use $N_G(S)$ to denote the set of those vertices with at least one neighbor in $S$, and $N_G[S]=N_G(S)\cup S$, again omitting the subscript when the context is clear. Two vertices $u$ and $v$ are \emph{true twins} (resp. \emph{false twins}) if $N[u]=N[v]$ (resp. $N(u)=N(v)$). Let $X\subseteq V(G)$. We use $G[X]$ to denote the subgraph of $G$ induced by $X$. A \emph{stable set} (or \emph{independent set}) of a graph is a set of pairwise nonadjacent vertices. By $\overline G$ we denote the \emph{complement graph} of $G$. The \emph{independence number}, denoted $\alpha(G)$, is the maximum cardinality of an independent set of $G$. A \emph{clique} is a set of pairwise adjacent vertices. A \emph{split graph} is a graph whose vertices can be partitioned into an independent set and a clique. A \emph{complete graph} is a graph whose vertices are pairwise adjacent. We use $C_n$, $K_n$, $K_{1,n-1}$ and $P_n$ to denote the isomorphism classes of cycles, complete graphs, stars and paths, all of them on $n$ vertices, respectively. Let $\mathcal H$ be a set of graphs. A graph is said to be \emph{$\mathcal H$-free} if it does not contain any graph in $\mathcal H$ as an induced subgraph. In the case in which $\mathcal H=\{H\}$, we use \emph{$H$-free} for short. Let $G$ and $H$ be two graphs. We use $G+H$ (resp. $G\vee H$) to denote the disjoint union of $G$ and $H$ (resp. the join of $G$ and $H$; i.e., $G+H$ plus all edges having an endpoint in $V(G)$ and the other one in $V(H)$). A \emph{cograph} is a $P_4$-free graph. If $G$ is a cograph on at least two vertices, then $G$ or $\overline G$ is disconnected~\cite{Corneil81}. Thus, if $G$ is a connected cograph on at least two vertices, then $G=H\vee J$ for two cographs $H$ and $J$. A graph is \emph{trivially perfect} if, for each induced subgraph, the maximum cardinality of an independent set agrees with the number of maximal cliques. Indeed, trivially perfect graphs are precisely the $\{P_4,C_4\}$-free graphs~\cite{Gol78}. In addition, a graph is trivially perfect if and only if every connected induced subgraph has a universal vertex (see~\cite{Chang96}). A graph is \emph{threshold} if it is $\{2K_2,P_4,C_4\}$-free. Observe that threshold graphs are precisely the split cographs. For more details about the graph classes described above, we refer the reader to~\cite{Golumbic2004}. \section{Distance rank of general graphs}\label{sec: distance rank of general graphs} The \emph{rank} of a graph $G$, denoted $\emph{rank}(G)$, is the rank of its adjacency matrix. For each integer $k\ge 2$ there exists an infinite family of graphs having rank $k$ (see~\cite{CHY2011}); for instance, the complete bipartite graphs form an infinite family of graphs with rank $2$.
The rank of $D(G)$, denoted $\emph{rank}_d(G)$, is called the \emph{distance rank} of $G$. Unlike what happens with the rank of a graph, as a consequence of Ramsey's Theorem, for every integer $k\ge 2$ there exists only a finite family of graphs having distance rank equal to $k$. Recall that given two integers $r,t\ge 2$ there exists a positive integer $R(r,t)$ such that every graph $G$ with $|V(G)|\ge R(r,t)$ contains either a clique with at least $r$ vertices or an independent set with at least $t$ vertices~\cite{Ramsey1929}. When $r=t$, $R(t)$ stands for $R(t,t)$. For bounds on $R(r,t)$ see for instance~\cite{Spencer1975}. \subsection{General characteristics} Let $n\ge 2$. If $G=K_n$, clearly $n=\emph{rank}(G)=\emph{rank}_d(G)$. Besides, if $G$ is a tree on $n$ vertices, then $\emph{rank}_d(G)=n$~\cite{GraphamandPollak1973}, and thus $\emph{rank}_d(K_{1,n-1})=\emph{rank}_d(P_n)=n$. Let $G$ and $H$ be two graphs. The graph $H$ is said to be an \emph{isometric subgraph} of $G$ if $H$ is a subgraph of $G$ such that $d_H(u,v)=d_G(u,v)$ for every $u,v\in V(H)$. We state the following immediate lemma without proof. \begin{lemma}\label{lem: isometric subgraphs} If $H$ is an isometric subgraph of $G$, then $\emph{rank}_d(H)\le \emph{rank}_d(G)$. \end{lemma} The diameter of a graph $G$, denoted $\emph{diam}(G)$, is the maximum distance between two vertices. An induced path $P$ of $G$ on $\emph{diam}(G)+1$ vertices is called a \emph{diameter path}. By Lemma~\ref{lem: isometric subgraphs} and~\cite{GraphamandPollak1973}, since every graph contains a diameter path as an isometric subgraph, the lemma below follows. \begin{lemma}\label{lem: diameter lower bound} If $G$ is a connected graph, then $\emph{diam}(G)+1\le \emph{rank}_d(G)$. \end{lemma} It is well known that the number of vertices of a graph $G$ is upper-bounded by a function of its maximum degree $\Delta(G)$ and $\emph{diam}(G)$. \begin{lemma}~\cite[Exercise 2.1.60]{west2001}\label{lem: upper bound diameter and delta} Let $G$ be a graph. If $\emph{diam}(G)=d$ and $\Delta(G)=r$, then \[|V(G)|\le 1+\frac{[(r-1)^d-1]r}{r-2}=f(d,r).\] \end{lemma} As a consequence of Ramsey's theorem we prove the main result of this section. \begin{theorem}\label{thm: finite number of graphs with rank k} If $k$ is an integer with $k\ge 2$, then there is a finite number of connected graphs $G$ such that $\emph{rank}_d(G)=k$. \end{theorem} \begin{proof} Consider a connected graph $G$ such that $\emph{rank}_d(G)=k$. On the one hand, if $\emph{diam}(G)\ge k$, then, by Lemma~\ref{lem: diameter lower bound}, $\emph{rank}_d(G)>k$. On the other hand, if $\Delta(G)\ge R(k)$, then, applying Ramsey's Theorem to the neighborhood of a vertex of maximum degree, $G$ contains either a complete subgraph $K_{k+1}$ or a star $K_{1,k}$ as an isometric subgraph. Thus, by Lemma~\ref{lem: isometric subgraphs}, $\emph{rank}_d(G)>k$. Hence, if $\emph{rank}_d(G)=k$, then $\emph{diam}(G)<k$ and $\Delta(G)<R(k)$. Therefore, by Lemma~\ref{lem: upper bound diameter and delta}, $|V(G)|\le f(k,R(k))$ and the result holds. \end{proof} \subsection{Graphs with distance rank $k\in\{2,3\}$} A connected graph $G$ with at least three vertices contains either $P_3$ or $K_3$ as an isometric subgraph and thus $\emph{rank}_d(G)\ge 3$. For the graphs used throughout this section, see Figure~\ref{fig: graphs}. In particular, it is easy to check that $\emph{rank}_d(Pa)=\emph{rank}_d(Di)=\emph{rank}_d(Hou)=4$.
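For instance, labelling the vertices of $Pa$ so that the pendant vertex comes last, a direct computation gives
\[ D(Pa)=\begin{pmatrix}
0&1&1&2\\
1&0&1&2\\
1&1&0&1\\
2&2&1&0
\end{pmatrix}
\quad\text{and}\quad
\det(D(Pa))=-7,
\]
so $D(Pa)$ is nonsingular and $\emph{rank}_d(Pa)=4$; the verifications for $Di$ and $Hou$ are analogous.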
\begin{figure} \caption{$Pa$, the paw graph; $Di$, the diamond graph; and $Hou$, the house graph.} \label{fig: graphs} \end{figure} \begin{remark} A connected graph $G$ has $\emph{rank}_d(G)=2$ if and only if $G=K_2$. \end{remark} The following lemma is a consequence of the definition of isometric subgraph. \begin{lemma}\label{lem: distance two} If $H$ is a connected induced subgraph of a connected graph $G$ such that $d_H(u,v)\le 2$ for every $u,v\in V(H)$, then $H$ is an isometric subgraph of $G$. \end{lemma} As a consequence of the above lemma, the graphs with distance rank equal to three are cographs. \begin{lemma}\label{lem: distance rank three implies cograph} If $G$ is a connected graph with $\emph{rank}_d(G)=3$, then $G$ is a cograph. \end{lemma} \begin{proof} We prove the contrapositive statement. Assume that $G$ contains a path with four vertices $P:\;a,b,c,d$ as an induced subgraph. If $P$ is an isometric subgraph, then $\emph{rank}_d(G)\ge 4$ by Lemma~\ref{lem: isometric subgraphs}. Hence, assume that $d_G(a,d)=2$, the only remaining possibility since $a$ and $d$ are nonadjacent. Consequently, there exists a vertex $v$ in $G$ that is adjacent to $a$ and $d$. Thus $G[\{a,b,c,d,v\}]$ contains a diamond as an induced subgraph or is isomorphic to $C_5$ or the house. Since the diamond and the house have distance rank $4$ and $C_5$ has distance rank $5$, it follows from Lemma~\ref{lem: distance two} that $\emph{rank}_d(G)\ge 4$. Thus, if $G$ is not a cograph, then $\emph{rank}_d(G)\geq 4$. Therefore, the result follows. \end{proof} \begin{theorem} If $G$ is a connected graph with $\emph{rank}_d(G)=3$, then $G$ is one of the following graphs: $K_3$, $P_3$, or $C_4$. \end{theorem} \begin{proof} Let $G$ be a graph with $\emph{rank}_d(G)=3$. By Lemma~\ref{lem: distance rank three implies cograph}, $G$ is a cograph. As $G$ is also connected and has at least $3$ vertices, we have $G=F\lor H$, where $F$ and $H$ are two non-empty cographs. Notice that, by Lemma~\ref{lem: distance two}, $G$ contains neither a paw nor a diamond as an induced subgraph, because the distance rank of each of them is equal to $4$. Since $G$ contains neither a diamond nor a paw as induced subgraphs, $H$ (resp. $F$) contains neither $P_3$ nor $K_2+P_1$ as induced subgraphs. Hence $H$ (resp. $F$) is either a complete graph or isomorphic to $nK_1$. Assume first that one of $H$ and $F$ is a complete graph with at least two vertices, say $H$. By Lemma~\ref{lem: distance two}, since $\emph{rank}_d(K_4)=4$, $H$ has exactly two vertices (a triangle in $H$ together with any vertex of $F$ would induce an isometric $K_4$). Since $G$ contains neither a diamond nor $K_4$ as induced subgraphs, $F$ contains only one vertex, and thus $G$ is isomorphic to $K_3$. We can assume now that $F$ and $H$ are isomorphic to $rK_1$ and $sK_1$, respectively. Since $G$ does not contain $K_{1,3}$ as an induced subgraph, we conclude that $r\le 2$ and $s\le 2$. Therefore, $G$ is isomorphic to $P_3$ or $C_4$. \end{proof} \section{Twins and null space}\label{sec: twins and null space} Let $G$ be a graph with vertices $v_1,v_2,\ldots, v_n$, and assume that $v_1$ and $v_2$ are either true twins or false twins. Notice that if $j\not\in \{1,2\}$, then $d_G(v_1,v_j)=d_G(v_2,v_j)$. Let $D$ be the distance matrix of $G$ and $\vec{x}$ a vector in the null space of $D$. We denote the coordinate of $\vec{x}$ that corresponds to vertex $v_i$ by $\vec{x}_{v_i}$. Notice that the coordinate of $D\vec{x}$ corresponding to $v_i$ satisfies \[ [D\vec{x}]_{v_i}=\sum_{j=1}^nd_G(v_i,v_j)\vec{x}_{v_j}, \] for every $1\le i\le n$.
Hence
\begin{align*}
[D\vec{x}]_{v_1}-[D\vec{x}]_{v_2}=&\sum_{j=1}^nd_G(v_1,v_j)\vec{x}_{v_j}-\sum_{j=1}^nd_G(v_2,v_j)\vec{x}_{v_j}\\
=&d_G(v_1,v_1)\vec{x}_{v_1}+d_G(v_1,v_2)\vec{x}_{v_2}-d_G(v_1,v_2)\vec{x}_{v_1}-d_G(v_2,v_2)\vec{x}_{v_2}\\
=&d_G(v_1,v_2)(\vec{x}_{v_2}-\vec{x}_{v_1}).
\end{align*}
Since $\vec{x}$ is in the null space of $D$, $[D\vec{x}]_{v_1}=[D\vec{x}]_{v_2}=0$. Thus $d_G(v_1,v_2)(\vec{x}_{v_2}-\vec{x}_{v_1})=0$, which implies $\vec{x}_{v_2}=\vec{x}_{v_1}$. From the preceding discussion we obtain the following result. \begin{lemma}\label{lem:twinsiguales} Let $G$ be a graph with distance matrix $D$. If $v_i$ and $v_j$ are either true twins or false twins and $\vec{x}$ is in the null space of $D$, then $\vec{x}_{v_i}=\vec{x}_{v_j}$. \end{lemma} Lemma \ref{lem:twinsiguales} allows us to use a smaller matrix to study the null space of $D$. To do that, we introduce some notation. We say that a partition $\mathcal{W}=\{W_1,W_2,\ldots,W_k\}$ of the set of vertices is a \textit{twin partition} of a graph $G$ if $W_i$ is either a set of true twins or a set of false twins for every $i$. Notice that we allow $|W_i|=1$. If $W_i$ is a set of true (false) twins for every $i$, then we say that $\mathcal{W}$ is a \textit{true (false) twin partition} of $G$. Let $\mathcal{W}=\{W_1,W_2,\ldots,W_k\}$ be a twin partition of $G$ and $w_1,\ldots, w_k$ a set of vertices with $w_i\in W_i$ for each $1\le i\le k$. We define the \textit{quotient matrix} $D/\mathcal{W}$ by
\[ (D/\mathcal{W})_{i,j}= \begin{cases} |W_j|d_G(w_i,w_j)& \text{ if $i\neq j$,}\\ |W_i|-1&\text{ if $i=j$ and $W_i$ is a set of true twins,}\\ 2(|W_i|-1)&\text{ if $i=j$ and $W_i$ is a set of false twins.}\\ \end{cases} \]
Let $\vec{x}\in \mathbb{R}^n$ be a vector such that $\vec{x}_{v_i}=\vec{x}_{v_j}$ whenever $v_i$ and $v_j$ are twin vertices, and let $\vec{y}\in\mathbb{R}^k$ be such that $\vec{y}_{i}=\vec{x}_{w_i}$. We have
\begin{align*} [D/\mathcal{W}\, \vec{y}]_{i}=\sum_{j=1,j\neq i}^{k}d_G(w_i,w_j)|W_j|\vec{x}_{w_j} +c_i(|W_i|-1)\vec{x}_{w_i}, \end{align*}
where $c_i=1$ if $W_i$ consists of true twins and $c_i=2$ if $W_i$ consists of false twins. On the other hand,
\begin{align*} [D\vec{x}]_{w_i}=&\sum_{v_j\in V}d_G(w_i,v_j)\vec{x}_{v_j}\\ =&\sum_{\ell=1,\ell \neq i}^k\sum_{v_j\in W_\ell}d_G(w_i,v_j)\vec{x}_{v_j}+\sum_{v_j\in W_i, v_j\neq w_i}d_G(w_i,v_j)\vec{x}_{v_j}\\ =&\sum_{\ell=1,\ell\neq i}^k|W_\ell|d_G(w_i,w_\ell)\vec{x}_{w_\ell}+c_i(|W_i|-1)\vec{x}_{w_i}\\ =&[D/\mathcal{W}\, \vec{y}]_{i}. \end{align*}
Thus, $\vec{x}$ is in the null space of $D$ if and only if $\vec{y}$ is in the null space of $D/\mathcal{W}$. Combined with Lemma \ref{lem:twinsiguales}, this implies that the nullity of $D$ equals the nullity of $D/\mathcal{W}$. \begin{lemma}\label{lem:matrizcociente} Let $G$ be a graph, $D$ the distance matrix of $G$ and $\mathcal{W}=\{W_1,\ldots,W_k\}$ a partition of the vertices of $G$ into sets of twins, each of them consisting of either true twins or false twins. For each $i$, let $w_i$ be a vertex in $W_i$. If $D/\mathcal{W}$ is the matrix defined as
\[ (D/\mathcal{W})_{i,j}=\begin{cases} |W_j|d_G(w_i,w_j)&\text{if $i\neq j$,}\\ |W_i|-1&\text{if $i=j$ and $W_i$ consists of true twins, and}\\ 2(|W_i|-1)&\text{if $i=j$ and $W_i$ consists of false twins,}\\ \end{cases} \]
then the nullity of $D$ is equal to the nullity of $D/\mathcal{W}$. \end{lemma}
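As a sanity check of Lemma~\ref{lem:matrizcociente}, the following short script (an illustration only; it relies on NumPy and hard-codes the twin partition of $C_4$ into its two pairs of false twins) compares the nullity of $D$ with that of $D/\mathcal{W}$.
\begin{verbatim}
import numpy as np

# Distance matrix of C_4 with vertices v_1, v_2, v_3, v_4 in cyclic order.
D = np.array([[0, 1, 2, 1],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [1, 2, 1, 0]])

# Twin partition W_1 = {v_1, v_3}, W_2 = {v_2, v_4}: both classes are
# pairs of false twins, so the diagonal entries are 2(|W_i| - 1) = 2
# and the off-diagonal entries are |W_j| d(w_i, w_j) = 2.
DW = np.array([[2, 2],
               [2, 2]])

nullity = lambda M: M.shape[0] - np.linalg.matrix_rank(M)
print(nullity(D), nullity(DW))  # prints: 1 1
\end{verbatim}
In particular, $\emph{rank}_d(C_4)=3$, in accordance with the classification of Section~\ref{sec: distance rank of general graphs}.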
\end{lemma} \section{Threshold graphs}~\label{sec: threshold graphs} It is well-known that we can obtain any threshold graph by repeatedly adding either isolated vertices or universal vertices. Thus, a threshold graph can be represented by a finite sequence $(a_i)_{i=1}^n$, with $a_i\in\{0,1\}$, whose edges are exactly the pairs $\{v_i,v_j\}$ with $a_i=1$ and $i>j$. We are going to assume $a_n=1$, as otherwise the graph is not connected. Notice that $a_1$ can be assumed to be $0$, since otherwise we would obtain the same graph. Since the sequence $(a_i)$ consists of some consecutive zeros, followed by consecutive ones, and so on, we can write it as $[0^{n_1},1^{n_2},0^{n_3},\ldots ,1^{n_{2k-2}},0^{n_{2k-1}},1^{n_{2k}}]$, where $a^b$ represents $b$ consecutive copies of the number $a$. Notice that in $[0^{n_1},1^{n_2},0^{n_3},\ldots ,1^{n_{2k-2}},0^{n_{2k-1}},1^{n_{2k}}]$ the number $0$ appears in every odd position and $1$ in every even position; thus the only values providing information are the $n_i$. We can represent $(a_i)$ by the sequence $[n_1,n_2,n_3,\ldots,n_{2k-2},n_{2k-1},n_{2k}]$, called the \emph{power sequence} of the threshold graph $G$. As every $0$-vertex is at distance $2$ from all previous vertices and every $1$-vertex is at distance $1$ from all previous vertices, if $[n_1,n_2,n_3,\ldots,n_{2k-2},n_{2k-1},n_{2k}]$ is the power sequence of a threshold graph $G$, then the distance matrix $D$ of $G$ is \[ \begin{pmatrix} 2(J-I) & J & 2J & J & \ldots & J & 2J & J \\ J & J-I & 2J & J & \ldots &J & 2J &J \\ 2J & 2J & 2(J-I) & J &\ldots &J & 2J &J \\ J & J & J & J-I & \ldots & J & 2J & J\\ \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots\\ J & J & J & J & \ldots &J-I & 2J & J\\ 2J & 2J & 2J & 2J & \ldots & 2J & 2(J-I) & J\\ J & J & J & J & \ldots & J & J & (J-I) \end{pmatrix}, \] where each $J$ in position $(i,j)$ stands for a block of $n_i\times n_j$ ones, and each $I$ in position $(i,i)$ for an $n_i\times n_i$ identity matrix. Notice that consecutive zeros produce false twins, whereas consecutive ones produce true twins. We can partition the vertices of $G$ into $\mathcal{W}=\{W_1,\ldots, W_{2k}\}$, where $W_i$ consists of $n_i$ false twins if $i$ is odd and $n_i$ true twins if $i$ is even. Consequently, $D/ \mathcal W$ equals \[ \begin{pmatrix} 2n_1-2&n_2& 2n_3 & n_4 & \ldots & n_{2k-2} & 2n_{2k-1} & n_{2k}\\ n_1&n_2-1& 2n_3 & n_4 & \ldots & n_{2k-2} & 2n_{2k-1} & n_{2k}\\ 2n_1&2n_2& 2n_3-2 & n_4 & \ldots & n_{2k-2} & 2n_{2k-1} & n_{2k}\\ n_1&n_2& n_3 & n_4-1 & \ldots & n_{2k-2} & 2n_{2k-1} & n_{2k}\\ \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots\\ n_1&n_2& n_3 & n_4 & \ldots & n_{2k-2}-1 & 2n_{2k-1} & n_{2k}\\ 2n_1&2n_2& 2n_3 & 2n_4 & \ldots & 2n_{2k-2} & 2n_{2k-1}-2 & n_{2k}\\ n_1&n_2& n_3 & n_4 & \ldots & n_{2k-2} & n_{2k-1} & n_{2k}-1\\ \end{pmatrix}. \] Lemma \ref{lem:matrizcociente} allows us to use $D/ \mathcal W$ instead of $D$ to study its nullity. Given a matrix $A$ having $m$ rows, we denote by $r_i(A)$ the $i$-th row of $A$ for each $1\le i\le m$. When the context is clear enough, we use $r_i$ for short. We proceed to apply row operations to $D/\mathcal{W}$.
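Before carrying out the reduction, we remark that Lemma~\ref{lem:matrizcociente} and the matrices above are easy to check numerically. The following sketch is not part of the argument; it assumes Python with \texttt{numpy} and \texttt{networkx} available, and the helper names are ours. It builds the threshold graph of a power sequence, forms $D$ and $D/\mathcal{W}$, and compares their nullities. \begin{verbatim}
import numpy as np
import networkx as nx

def threshold_graph(power_seq):
    """Threshold graph of [n_1, n_2, ...]: n_1 isolated vertices,
    then n_2 universal vertices, then n_3 isolated, and so on."""
    G = nx.Graph()
    for block, n_i in enumerate(power_seq):
        universal = (block % 2 == 1)  # even (1-based) positions are 1-vertices
        for _ in range(n_i):
            v = G.number_of_nodes()
            old = list(G.nodes)
            G.add_node(v)
            if universal:
                G.add_edges_from((v, u) for u in old)
    return G

def quotient_matrix(power_seq):
    """The matrix D/W of the text for the twin partition W_1,...,W_{2k}."""
    m = len(power_seq)
    Q = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i == j:
                Q[i, i] = (power_seq[i] - 1) if i % 2 == 1 else 2 * (power_seq[i] - 1)
            else:
                # distance between representatives: 1 if the later class
                # consists of universal (1-)vertices, 2 otherwise
                d = 1 if max(i, j) % 2 == 1 else 2
                Q[i, j] = power_seq[j] * d
    return Q

def nullity(M, tol=1e-9):
    return M.shape[0] - np.linalg.matrix_rank(M, tol=tol)

seq = [4, 1, 3, 2]                  # a power sequence [n_1,...,n_{2k}]
D = nx.floyd_warshall_numpy(threshold_graph(seq))  # distance matrix of G
assert nullity(D) == nullity(quotient_matrix(seq))
\end{verbatim}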
We begin by doing $r_i-r_{i+1} \to r_i$ for $i$ moving from $1$ to $2k-1$: \[ \begin{pmatrix} n_1-2&1& 0 & 0 & \ldots & 0 & 0 & 0\\ -n_1&-n_2-1& 2 & 0 & \ldots & 0 & 0 & 0\\ n_1&n_2& n_3-2 & 1 & \ldots & 0 & 0 & 0\\ -n_1&-n_2& -n_3 & -n_4-1 & \ldots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots\\ -n_1&-n_2& -n_3 & -n_4 & \ldots & -n_{2k-2}-1 & 2 & 0\\ n_1&n_2& n_3 & n_4 & \ldots & n_{2k-2} & n_{2k-1}-2 & 1\\ n_1&n_2& n_3 & n_4 & \ldots & n_{2k-2} & n_{2k-1} & n_{2k}-1\\ \end{pmatrix}. \] Next, we multiply every even row, except the last one, by $-1$: \[ \begin{pmatrix} n_1-2&1& 0 & 0 & \ldots & 0 & 0 & 0\\ n_1&n_2+1& -2 & 0 & \ldots & 0 & 0 & 0\\ n_1&n_2& n_3-2 & 1 & \ldots & 0 & 0 & 0\\ n_1&n_2& n_3 & n_4+1 & \ldots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots\\ n_1&n_2& n_3 & n_4 & \ldots & n_{2k-2}+1 & -2 & 0\\ n_1&n_2& n_3 & n_4 & \ldots & n_{2k-2} & n_{2k-1}-2 & 1\\ n_1&n_2& n_3 & n_4 & \ldots & n_{2k-2} & n_{2k-1} & n_{2k}-1\\ \end{pmatrix}. \] Finally, we do $r_{2k-i}-r_{2k-i-1} \to r_{2k-i}$ for $i$ moving from $0$ to $2k-2$: \[ \begin{pmatrix} n_1-2&1& 0 & 0 & \ldots & 0 & 0 & 0\\ 2&n_2& -2 & 0 & \ldots & 0 & 0 & 0\\ 0&-1& n_3 & 1 & \ldots & 0 & 0 & 0\\ 0&0& 2 & n_4 & \ldots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots\\ 0&0&0&0& \ldots & n_{2k-2} & -2 & 0\\ 0&0&0&0& \ldots & -1 & n_{2k-1} & 1\\ 0&0&0&0& \ldots & 0 & 2 & n_{2k}-2\\ \end{pmatrix}. \] The first $2k-1$ rows are linearly independent. Thus the nullity of $D/\mathcal{W}$ is at most $1$. Lemma \ref{lem:matrizcociente} yields the following. \begin{theorem}\label{thm: nulity of threshold graphs} If $D$ is the distance matrix of a connected threshold graph, then the nullity of $D$ is at most $1$. \end{theorem} We now want to find precisely which threshold graphs have nullity $1$. Dividing the even rows of the last matrix by $-2$, we obtain \[ \begin{pmatrix} n_1-2&1& 0 & 0 & \ldots & 0 & 0 & 0\\ -1&-n_2/2& 1 & 0 & \ldots & 0 & 0 & 0\\ 0&-1& n_3 & 1 & \ldots & 0 & 0 & 0\\ 0&0& -1 & -n_4/2 & \ldots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots\\ 0&0&0&0& \ldots & -n_{2k-2}/2 & 1 & 0\\ 0&0&0&0& \ldots & -1 & n_{2k-1} & 1\\ 0&0&0&0& \ldots & 0 & -1 & (2-n_{2k})/2\\ \end{pmatrix}, \] which has the same nullity as $D/\mathcal{W}$. Notice that if we let \[ \alpha_i=\begin{cases} n_1-2&\text{if $i=1$}\\ n_i&\text{if $i>1$ is odd}\\ -n_i/2&\text{if $i<2k$ is even}\\ (2-n_{2k})/2&\text{if $i=2k$} \end{cases} \] the last matrix is of the form \[ \begin{pmatrix} \alpha_1&1& 0 & 0 & \ldots & 0 & 0 & 0\\ -1&\alpha_2& 1 & 0 & \ldots & 0 & 0 & 0\\ 0&-1&\alpha_3& 1 & \ldots & 0 & 0 & 0\\ 0&0& -1 & \alpha_4 & \ldots & 0 & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots\\ 0&0&0&0& \ldots & \alpha_{2k-2} & 1 & 0\\ 0&0&0&0& \ldots & -1 & \alpha_{2k-1} & 1\\ 0&0&0&0& \ldots & 0 & -1 & \alpha_{2k}\\ \end{pmatrix}. \] We can compute the determinant of this last matrix inductively. Denote the matrix by $A$, let $A_i$ be the leading principal submatrix of $A$ formed by its first $i$ rows and columns, and let $d_i$ be the determinant of $A_i$. It is not hard to prove that $d_1=\alpha_1$, $d_2=1+\alpha_1\alpha_2$, and \begin{equation*} d_i=\alpha_id_{i-1}+d_{i-2}, \end{equation*} for each integer $3\le i\le 2k$ (expand along the last row; it is convenient to set $d_0=1$, so that the recursion also produces $d_2$). Thanks to the recursion, we can find some infinite families of threshold graphs with nullity $1$.
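The recursion is also convenient for experimentation. The following sketch is ours and not part of the argument; it is plain Python using exact arithmetic via \texttt{fractions}, and it evaluates $d_{2k}$ from a power sequence, so one can search for power sequences with $d_{2k}=0$, i.e., for threshold graphs with nullity $1$. The two assertions correspond to the families exhibited below. \begin{verbatim}
from fractions import Fraction

def alphas(power_seq):
    """The alpha_i attached to a power sequence [n_1,...,n_{2k}]."""
    m = len(power_seq)
    a = []
    for i, n in enumerate(power_seq, start=1):
        if i == 1:
            a.append(Fraction(n - 2))
        elif i == m:
            a.append(Fraction(2 - n, 2))
        elif i % 2 == 1:
            a.append(Fraction(n))
        else:
            a.append(Fraction(-n, 2))
    return a

def d_last(power_seq):
    """d_{2k}, computed via d_i = alpha_i d_{i-1} + d_{i-2}, d_0 = 1."""
    a = alphas(power_seq)
    d_prev, d = Fraction(1), a[0]
    for alpha in a[1:]:
        d_prev, d = d, alpha * d + d_prev
    return d

assert d_last([4, 1, 5, 2]) == 0        # the family [4,1,n_3,2]
assert d_last([3, 2, 7, 1, 4, 3]) == 0  # the family [3,2,eps,1,4,3]
\end{verbatim}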
For example, if $\alpha_1,\ldots,\alpha_{2k-2}$ are such that $d_{2k-2}=0$, then $\alpha_{2k}=0$ implies $d_{2k}=0$ regardless of the value of $\alpha_{2k-1}$. As a way to apply this, notice that both $(\alpha_1,\alpha_2)=(2,-1/2)$ and $(\alpha_1,\alpha_2)=(1,-1)$ imply $d_2=0$. In addition, if $\alpha_4=0$, then $[4,1,n_3,2]$ and $[3,2,n_3,2]$ are power sequences of threshold graphs with distance nullity $1$ for every $n_3$, meaning that $K_2\vee (n_3K_1+(K_1\vee 4K_1))$ and $K_2\vee(n_3K_1+(K_2\vee 3K_1))$ are threshold graphs whose distance matrices have nullity one. Unfortunately, if we wanted to keep applying this construction as is to yield a power sequence of length $6$, we would need to take $[4,1,n_3,0,n_5,2]=[4,1,n_3+n_5,2]$, because of the difference between $\alpha_i$ when $i<2k$ and $\alpha_{2k}$. What we can do instead is use the fact that, when $d_{i-2}=0$, we have \begin{align*} d_i&=\alpha_id_{i-1}\\ d_{i+1}&=\alpha_{i+1}d_i+d_{i-1}=(\alpha_{i+1}\alpha_i+1)d_{i-1}\\ \end{align*} which is similar to how the recursion begins, multiplying by $d_{i-1}$ and replacing $(\alpha_1,\alpha_2)$ with $(\alpha_i,\alpha_{i+1})$. Thus, if $\alpha_1,\ldots,\alpha_{i}$ yield $d_{i}=0$ and $\bar{\alpha}_1,\ldots,\bar{\alpha}_j$ yield $\bar{d}_j=0$, then, after choosing $\alpha_{i+1}$ arbitrarily, setting $\alpha_{i+1+m}=\bar{\alpha}_m$ for $1\le m\le j$ implies $d_{i+1+j}=0$; indeed, $d_{i+1}=\alpha_{i+1}d_i+d_{i-1}=d_{i-1}$ no matter how $\alpha_{i+1}$ is chosen, and an induction on $m$ then gives $d_{i+1+m}=d_{i-1}\bar{d}_m$. As a way to apply this, we can use $(\alpha_1,\alpha_2)=(1,-1)$, a free value $\alpha_3=\epsilon$, and $(\bar{\alpha}_1,\bar{\alpha}_2,\bar{\alpha}_3)=(-1/2,4,-1/2)$, for which $\bar{d}_3=\bar{\alpha}_3(\bar{\alpha}_1\bar{\alpha}_2+1)+\bar{\alpha}_1=0$. This yields that threshold graphs with power sequences $[3,2,\epsilon,1,4,3]$ have nullity $1$, with $\epsilon$ any positive integer. And repeatedly applying this construction, we get that threshold graphs with power sequences of the form \[ [3,2,\epsilon_1,1,4,1,\epsilon_2,1,4,1,\ldots,\epsilon_m,1,4,3] \] have nullity $1$. \section{Trivially perfect graphs}~\label{sec: trivially perfect graphs} In this section, we give sufficient conditions for a trivially perfect graph to have a nonsingular distance matrix. Let $G$ be a trivially perfect graph and let $\mathcal{W}$ be a true twin partition of $G$. There exists a tree $T=(\mathcal W, E)$, called a \emph{rooted clique tree of $G$}, such that if $W,W'\in \mathcal{W}$, $w\in W$ and $w'\in W'$, then $w$ and $w'$ are adjacent if and only if $W=W'$, or $W'$ is a descendant of $W$ in $T$, or vice versa. By $T_W$ we denote the subtree of $T$ rooted at $W$ containing all descendants of $W$. The \emph{arrow matrix of $T$} is recursively defined as follows. If $\mathcal{W}=\{R\}$, then $A_T=(|R|+1)$. Assume now that $|\mathcal{W}|\ge 2$, write $\mathcal{W}=\{W_1,\ldots,W_k\}$, and let the elements of $\mathcal{W}$ be numbered as follows: \begin{itemize} \item if $i<j$ then $W_i$ is not a descendant of $W_j$; \item if $i<j<l$ and $W_l$ is a descendant of $W_i$, then $W_j$ is a descendant of $W_i$. \end{itemize} See Fig.~\ref{fig: tp graph}. Further, let $W_{h_1},W_{h_2},\ldots,W_{h_\ell}$ be the children of $R=W_1$, renumbered so that if $i<j$, $W_i=W_{h_m}$ and $W_j=W_{h_n}$, then $h_m<h_n$. We define the \emph{arrow matrix} of $T$ as \[A_T= \begin{pmatrix} |R|+1&|W_2|&|W_3|&\cdots&|W_{k}|\\ |R|\\ |R|\\ \vdots&&\text{\huge $B_T$}\\ |R|\\ |R|\\ \end{pmatrix} ,\] where \[ B_T=\begin{pmatrix} A_1&\mathbb{0}&\cdots& \mathbb{0}\\ \mathbb{0} &A_2&\cdots&\mathbb{0}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbb{0}&\mathbb{0}&\cdots&A_{\ell}\\ \end{pmatrix} \] and $A_i$ is the arrow matrix of $T_{W_{h_i}}$. The ordering of $\mathcal W$ induced by the rows of $A_T$ is called an \emph{arrow ordering}.
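As a small worked instance of the definition, added here for the reader's convenience: if $T$ consists of a root $R$ with exactly two children $V_1$ and $V_2$, both of them leaves, then \[ A_T=\begin{pmatrix} |R|+1 & |V_1| & |V_2|\\ |R| & |V_1|+1 & 0\\ |R| & 0 & |V_2|+1 \end{pmatrix}; \] matrices of exactly this shape reappear as the blocks $A_i$ in the proof of Theorem~\ref{thm: nullity} below.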
\begin{theorem}\label{thm: nonsingular trivially perfect graphs} Let $G$ be a trivially perfect graph having a true twin partition $\mathcal{W}=\{W_1,W_2,\ldots,W_k\}$ such that $|W_i|\ge 6$ for each $i=1,\ldots,k$. Then $D(G)$ has an inverse, i.e., $\eta(G)=0$. \end{theorem} As the proof of Theorem~\ref{thm: nonsingular trivially perfect graphs} is a bit technical, we give an illustration of how it works before proceeding with the actual proof. \subsection{Illustration of Theorem~\ref{thm: nonsingular trivially perfect graphs}}~\label{subsec: example} Consider the trivially perfect graph $G=K_6\vee((K_7\vee(K_9+K_8))+(K_9\vee((K_8\vee(K_6+K_7))+K_6)))$ with the vertex set partition $\mathcal W=\{W_1,W_2,\ldots,W_9\}$ (see Fig.~\ref{fig. trivially perfect example}), whose rooted clique tree appears in Figure \ref{fig: tp graph}. Notice that the quotient matrix $D/\mathcal W$ is \[ \begin{pmatrix} 6-1& 7 & 9& 8& 9& 8& 6& 7& 6\\ 6 & 7-1& 9& 8& 2\cdot 9& 2\cdot 8& 2\cdot 6& 2 \cdot 7& 2 \cdot 6\\ 6& 7 & 9-1& 2\cdot 8& 2\cdot 9& 2\cdot 8& 2\cdot 6& 2 \cdot 7& 2 \cdot 6\\ 6& 7& 2\cdot 9& 8-1& 2\cdot 9& 2\cdot 8& 2\cdot 6& 2 \cdot 7& 2 \cdot 6\\ 6& 2\cdot 7& 2\cdot 9& 2\cdot 8& 9-1& 8& 6& 7& 6\\ 6& 2\cdot 7& 2\cdot 9& 2\cdot 8& 9& 8-1& 6& 7& 2\cdot 6\\ 6& 2\cdot 7& 2\cdot 9& 2\cdot 8& 9& 8& 6-1& 2\cdot 7& 2\cdot 6\\ 6& 2\cdot 7& 2\cdot 9& 2\cdot 8& 9& 8& 2\cdot 6& 7-1& 2\cdot 6\\ 6& 2\cdot 7& 2\cdot 9& 2\cdot 8& 9& 2\cdot 8& 2\cdot 6& 2\cdot 7& 6-1\\ \end{pmatrix}, \] where the $i$-th row represents $W_i$. We denote such a row by $r_{W_i}$. \begin{figure} \caption{Rooted clique tree of $G=K_6\vee((K_7\vee(K_9+K_8))+(K_9\vee((K_8\vee(K_6+K_7))+K_6)))$.} \label{fig. trivially perfect example} \label{fig: tp graph} \end{figure} Now we apply to $D/\mathcal W$ the following elementary operations: first $r_{W_i}-2r_{W_1}\to r_{W_i}$ and then $-r_{W_i}\to r_{W_i}$, for each $i\geq 2$, obtaining the following matrix \[M_1= \begin{pmatrix} 5 & 7 & 9 & 8 & 9 & 8& 6& 7& 6\\ 4 & 8 & 9 & 8 & 0 & 0& 0& 0& 0\\ 4 & 7 & 10& 0 & 0 & 0& 0& 0& 0\\ 4 & 7 & 0 & 9 & 0 & 0& 0& 0& 0\\ 4 & 0 & 0 & 0 & 10& 8& 6& 7& 6\\ 4 & 0 & 0 & 0 & 9 & 9& 6& 7& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 7& 0& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 0& 8& 0\\ 4 & 0 & 0 & 0 & 9 & 0& 0& 0& 7\\ \end{pmatrix}. \] To make some more $0$'s, we do the following row operations. First we subtract from the row corresponding to the root the rows corresponding to its children, i.e., $r_{W_1}-r_{W_2}-r_{W_5}\to r_{W_1}$. We do the same for $r_{W_5}$: $r_{W_5}-r_{W_6}-r_{W_9}\to r_{W_5}$. This was done because $W_5$ has grandchildren (i.e., it has a child who has children of its own). This yields the matrix \[M_2'= \begin{pmatrix} -3& -1& 0 & 0 & -1& 0& 0& 0& 0\\ 4 & 8 & 9 & 8 & 0 & 0& 0& 0& 0\\ 4 & 7 & 10& 0 & 0 & 0& 0& 0& 0\\ 4 & 7 & 0 & 9 & 0 & 0& 0& 0& 0\\ -4& 0 & 0 & 0 & -8&-1& 0& 0&-1\\ 4 & 0 & 0 & 0 & 9 & 9& 6& 7& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 7& 0& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 0& 8& 0\\ 4 & 0 & 0 & 0 & 9 & 0& 0& 0& 7\\ \end{pmatrix}. \] We keep making $0$'s appear as follows. We take every vertex that has children but no grandchildren, and use its children's rows to make $0$'s. This means we do $r_{W_2}-\frac{9}{10}r_{W_3}-\frac{8}{9}r_{W_4} \to r_{W_2} $ and $r_{W_6}-\frac{6}{7}r_{W_7}-\frac{7}{8}r_{W_8} \to r_{W_6}$; note that the coefficient multiplying the row of a leaf $V$ is $|V|/(|V|+1)$, the quotient of the two nonzero entries in column $V$.
This gives the matrix \[M_2= \begin{pmatrix} -3& -1& 0 & 0 & -1& 0& 0& 0& 0\\ \frac{-142}{45} & \frac{-407}{90} & 0 & 0 & 0 & 0& 0& 0& 0\\ 4 & 7 & 10& 0 & 0 & 0& 0& 0& 0\\ 4 & 7 & 0 & 9 & 0 & 0& 0& 0& 0\\ -4& 0 & 0 & 0 & -8&-1& 0& 0&-1\\ \frac{-41}{14} & 0 & 0 & 0 & \frac{-369}{56} & \frac{-34}{7}& 0& 0& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 7& 0& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 0& 8& 0\\ 4 & 0 & 0 & 0 & 9 & 0& 0& 0& 7\\ \end{pmatrix}. \] We can now do something similar for $r_{W_5}$, although we need to multiply $r_{W_6}$ by a different value. We do $r_{W_5}-\frac{7}{34}r_{W_6}\to r_{W_5}$ and then $r_{W_5} +\frac{1}{7}r_{W_9} \to r_{W_{5}}$. Notice that in this case we have $\frac{7}{34}=\left(\frac{34}{7}\right)^{-1}=\frac{-1}{(M_2)_{6,6}}$, and $\frac{1}{7}=\frac{1}{|W_9|+1}$. This yields the matrix \[N= \begin{pmatrix} -3& -1& 0 & 0 & -1& 0& 0& 0& 0\\ \frac{-142}{45} & \frac{-407}{90} & 0 & 0 & 0 & 0& 0& 0& 0\\ 4 & 7 & 10& 0 & 0 & 0& 0& 0& 0\\ 4 & 7 & 0 & 9 & 0 & 0& 0& 0& 0\\ \frac{-1345}{476}& 0 & 0 & 0 & \frac{-10201}{1904}& 0& 0& 0& 0\\ \frac{-41}{14} & 0 & 0 & 0 & \frac{-369}{56} & \frac{-34}{7}& 0& 0& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 7& 0& 0\\ 4 & 0 & 0 & 0 & 9 & 8& 0& 8& 0\\ 4 & 0 & 0 & 0 & 9 & 0& 0& 0& 7\\ \end{pmatrix}. \] Finally, we can do the same process for $r_{W_1}$, using $r_{W_2}$ and $r_{W_5}$. Thus we do $r_{W_1}-\frac{90}{407}r_{W_2}\to r_{W_1}$ and then $r_{W_1}-\frac{1904}{10201}r_{W_5} \to r_{W_1}$. In this case, as neither $W_2$ nor $W_5$ is a leaf, we are just using $\frac{-90}{407}=N_{2,2}^{-1}$ and $\frac{-1904}{10201}=N_{5,5}^{-1}$. Thus, we obtain the following lower triangular matrix, which is non-singular because it does not have any zeros on the main diagonal. \[ \begin{pmatrix} \frac{-7368677}{4151807} & 0& 0 & 0 & 0& 0& 0& 0& 0\\[0.75ex] \frac{-142}{45} & \frac{-407}{90} & 0 & 0 & 0 & 0& 0& 0& 0\\[0.75ex] 4 & 7 & 10& 0 & 0 & 0& 0& 0& 0\\[0.75ex] 4 & 7 & 0 & 9 & 0 & 0& 0& 0& 0\\[0.75ex] \frac{-1345}{476}& 0 & 0 & 0 & \frac{-10201}{1904}& 0& 0& 0& 0\\[0.75ex] \frac{-41}{14} & 0 & 0 & 0 & \frac{-369}{56} & \frac{-34}{7}& 0& 0& 0\\[0.75ex] 4 & 0 & 0 & 0 & 9 & 8& 7& 0& 0\\[0.75ex] 4 & 0 & 0 & 0 & 9 & 8& 0& 8& 0\\[0.75ex] 4 & 0 & 0 & 0 & 9 & 0& 0& 0& 7\\[0.75ex] \end{pmatrix}. \] \subsection{Proof of Theorem~\ref{thm: nonsingular trivially perfect graphs}}\label{subsec: proof} Before proceeding with the proof, we need to define the height of the vertices of a rooted tree. This definition is given inductively. If $v$ has no children, we define the \emph{height of $v$} as $h(v)=0$. If $v$ has children, and the height of every child of $v$ has been defined, we define the \emph{height of $v$} as \[ h(v)=1+\max_{w|\text{$w$ is a child of $v$}}{h(w)}. \] Thus, for the vertices of the rooted tree in Figure~\ref{fig: tp graph} we have \begin{align*} h(W_3)=&h(W_4)=h(W_7)=h(W_8)=h(W_9)=0,\\ h(W_2)=&h(W_6)=1,\\ h(W_5)=&2,\\ h(W_1)=&3. \end{align*} We are now ready to present the proof of Theorem~\ref{thm: nonsingular trivially perfect graphs}. \begin{proof}[Proof of Theorem~\ref{thm: nonsingular trivially perfect graphs}] Let $T=(\mathcal W, E)$ be a rooted clique tree of $G$. Consider now an arrow ordering $W_1,\ldots,W_{|\mathcal W|}$. The quotient matrix of $G$, under this ordering, has the following structure.
\[D/\mathcal{W}= \begin{pmatrix} |R|-1 &\vec{x}_1^t& \vec{x}_2^t&\cdots & \vec{x}_k^t\\ |R|\cdot\mathbbm{1} & B_1 & 2\cdot\mathbbm{1}\vec{x}_2^t&\cdots& 2\cdot\mathbbm{1}\vec{x}_k^t\\ |R|\cdot\mathbbm{1} & 2\cdot\mathbbm{1} \vec{x}_1^t&B_2&\cdots& 2\cdot\mathbbm{1}\vec{x}_k^t\\ \vdots & \vdots &\vdots&\ddots&\vdots\\ |R|\cdot\mathbbm{1} & 2\cdot\mathbbm{1} \vec{x}_1^t& 2\cdot\mathbbm{1}\vec{x}_2^t&\cdots&B_k\\ \end{pmatrix} ,\] where $R$ is the root of $T$, $B_i$ is the quotient matrix of the distance matrix induced by those vertices of $G$ belonging to some vertex of $T_{W_i}$, where $W_i$ is the $i$-th child of $R$ under the considered ordering of $\mathcal W$, and the vector $\vec{x}_i$ has $|W|$ in each entry corresponding to $W \in V (T_{W_i})$, for each $1\le i\le k$. Now we apply to $D/\mathcal W$ the following elementary operations: first $r_W-2r_R\to r_W$ and then $ -r_W\to r_W$, for each $W\in\mathcal W\setminus\{R\}$, obtaining the following matrix \[M_1= \begin{pmatrix} |R|-1&\vec{x}_1^t&\vec{x}_2^t&\cdots&\vec{x}_k^t\\ (|R|-2)\cdot\mathbbm{1}&A_1&\mathbb{0}&\cdots&\mathbb{0}\\ (|R|-2)\cdot\mathbbm{1}&\mathbb{0}&A_2&\cdots&\mathbb{0}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ (|R|-2)\cdot\mathbbm{1}&\mathbb{0}&\mathbb{0}&\cdots&A_k\\ \end{pmatrix} ,\] where the $A_i$'s are the arrow matrices of the subtrees $T_{W_i}$ of $T$. We can transform $M_1$ into \[M_2= \begin{pmatrix} |R|-1&-\vec{a}_1^t&-\vec{a}_2^t&\cdots&-\vec{a}_k^t\\ \vec{b}_1&C_1&\mathbb{0}&\cdots&\mathbb{0}\\ \vec{b}_2&\mathbb{0}&C_2&\cdots&\mathbb{0}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \vec{b}_k&\mathbb{0}&\mathbb{0}&\cdots&C_k\\ \end{pmatrix} ,\] such that, for each $i$, the first entry of $\vec{b}_i$ is $k_i(|R|-2)$ with $k_i\le \frac {-1} 2$, \[C_i= \begin{pmatrix} 1+k_i|W_i|&-\vec{d}_1^t&-\vec{d}_2^t&\cdots&-\vec{d}_\ell^t\\ \vec{c}_1^i&C_1^i&\mathbb{0}&\cdots& \mathbb{0}\\ \vec{c}_2^i&\mathbb{0} &C_2^i&\cdots&\mathbb{0}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \vec{c}_{\ell}^i&\mathbb{0}&\mathbb{0}&\cdots&C_{\ell}^i\\ \end{pmatrix} ,\] and $\vec{a}_i$ stands for the vector having as many rows as $B_i$, with a $1$ in its first entry and $0$'s in the rest of its entries. The vector $\vec{d}_j$ has as many rows as $C_j^i$, with a $1$ in its first entry and $0$'s in the rest of its entries. The block $C_j^i$ is a lower triangular matrix, $(C_j^i)_{11}=1+m_j|S_j|$ with $m_j\le\frac{-1} 2$, where $S_j$ is the child of $W_i$ corresponding to the first row of $C_j^i$, and the first entry of $\vec{c}_j^i$ is $m_j|W_i|$. We will prove that there exists a sequence of elementary row operations leading from $M_1$ to $M_2$. First, we do $r_R-\sum_ W r_W\to r_R$, where the sum is taken over all $W\in V(T)$ such that $W$ is a child of $R$. We repeat this procedure on each $T_W$ such that $h(T_W)\ge 2$ and $W$ is a child of $R$; then we proceed with every child of the $T_W$'s, and so on as long as possible (here $h(T_W)$ denotes the height of the root $W$ of $T_W$). Let us call this new matrix $M_2^\prime$. Notice that, compared with $M_1$, the entries of $M_2^\prime$ have been modified as follows: for each modified row $W\neq R$, $(M_2^\prime)_{WW}=|W|+1-\alpha_W |W|$, $(M_2^\prime)_{WV}=(1-\alpha_W)|V|$ for each ancestor $V\neq R$ of $W$, $(M_2^\prime)_{WR}=(1-\alpha_W)(|R|-2)$, and in the columns of the descendants of $W$ row $W$ now has $-1$ at each child of $W$ and $0$ elsewhere, where $\alpha_W$ is the number of children of $W$ in $T_W$; moreover $(M_2^\prime)_{RR}=|R|-1-\alpha_R (|R|-2)$, where $\alpha_R$ is the number of children of $R$ in $T$. We proceed by applying induction on $h(T_W)$, the height of $T_W$. Base case: $h(T_W)=1$. We do $ r_W -\sum_{V}\frac{|V|}{|V|+1}\cdot r_V \to r_W$, where the sum is taken over all children $V$ of $W$.
Under this row operation we obtain a matrix $N$ such that $N_{WV}=0$ for every descendant $V$ of $W$, $N_{WW}=1+m_W |W|$, $N_{WV}=m_W|V|$ for each ancestor $V$ of $W$ distinct from $R$, and $N_{WR}=m_W(|R|-2)$, where $m_W=1-s_W+\sum_V\frac 1 {|V|+1}$ and $s_W$ is the number of children of $W$. Hence, since $|V|\ge 6>3$, $m_W< 1-s_W+\frac{s_W} 4$. Therefore, $s_W\ge 2$ implies $m_W<\frac{-1}{2}$. Assume now, by inductive hypothesis, that we can obtain a matrix $N$ from $M'_2$, by means of elementary row operations, such that if $1\le h(T_W)<k<h(T_R)$ with $W\neq R$, then $N_{WV}=0$ for each descendant $V$ of $W$, $N_{WW}=1+m_W|W|$ with $m_W\le\frac{-1} 2$, $N_{WV}=m_W |V|$ for each ancestor $V$ of $W$ distinct from $R$, and $N_{WR}=m_W(|R|-2)$. These are the only entries that differ from those of $M'_2$. Let $W'$ be a vertex of $T$ such that $1<h(T_{W'})=k$. We modify row $W'$ according to $r_{W'}+\sum_{V}\frac{1}{m_V|V|+1}r_V\to r_{W'}$, where the sum is taken over all children $V$ of $W'$ such that $h(T_V)\ge 1$; and then we do $r_{W'}+\sum_{V'}\frac{1}{|V'|+1}\cdot r_{V'}\to r_{W'}$, where the sum is taken over all children $V'$ of $W'$ such that $h(T_{V'})=0$. Hence the new matrix $N'$ satisfies \begin{align*} N'_{W'W'}&=\left(|W'|+1-s_{W'}|W'|+\sum_{h(T_V)\ge 1}\frac{m_V |W'|}{m_V|V|+1}+\sum_{h(T_{V'})=0}\frac{|W'|}{|V'|+1}\right)\\ &\le 1+|W'|\left(1-s_{W'}+\sum_{h(T_V)\ge 1}\frac{1}{|V|-2}+\sum_{h(T_{V'})=0}\frac{1}{|V'|+1}\right)\\ &\le 1+|W'|\left(1-\frac 3 4 s_{W'}\right).\\ \end{align*} By the inductive hypothesis, $m_V\le \frac{-1}{2}$ for each child $V$ of $W'$ such that $h(T_V)<k$, and thus the first inequality holds (note that $\frac{m_V}{m_V|V|+1}=\frac{1}{|V|+1/m_V}\le\frac{1}{|V|-2}$). The last one follows from $|V|\ge 6$ for each vertex $V$ of $T$. We conclude that $N'_{W'W'}<0$. Using the inductive hypothesis and reasoning as in the base case, it follows that $N'_{W'V}=0$ for each descendant $V$ of $W'$, $N'_{W'V}=m_{W'}|V|$ with $m_{W'}\le\frac{-1}2$ for each ancestor $V$ of $W'$ distinct from $R$, and $N'_{W'R}=m_{W'}(|R|-2)$. In particular, the result holds for each child $W$ of $R$. Hence $M_2$ can be obtained from $D/\mathcal{W}$ through elementary row operations. Finally, the same strategy applies to the root: if we do $r_R+\sum_{V}\frac{1}{m_V|V|+1}r_V\to r_{R}$, where the sum is taken over all children $V$ of $R$ such that $h(T_V)\ge 1$, and then $r_{R}+\sum_{V'}\frac{1}{|V'|+1}\cdot r_{V'} \to r_{R}$, where the sum is taken over all children $V'$ of $R$ with $h(T_{V'})=0$, we obtain a lower triangular matrix whose main diagonal has no zero entry. \end{proof} \subsection{Nullity} Trivially perfect graphs are a superclass of threshold graphs but, unlike threshold graphs, for every $k\ge 2$ there exists a trivially perfect graph with nullity $k$. \begin{theorem}\label{thm: nullity} Let $N=3k+r$, where $r\in\{1,2,3\}$ and $k\ge 2$ is an integer. Let $n \in \mathbb{N}$ with $n\geq 7k+r$. Then, there exists a trivially perfect graph $G$ that has a true twin partition into $N$ sets and $|V(G)|=n$ such that the distance matrix of $G$ has nullity $\ell$, where \begin{itemize} \item $\ell=k-1$ if $n= 7k+r$ or if $r=1$ and $n\ge 7k+3$, \item $\ell=k$ if $r=1$ and $n=7k+2$, \item $\ell\in \{k-1,k\}$ otherwise. \end{itemize} \end{theorem} \begin{proof} Let $N$ be an integer such that $N=3k+r$ with $r\in\{1,2,3\}$ and $k\ge2$.
Let $G$ be a trivially perfect graph with $|V(G)|=n\ge 7k+r$, having a true twin partition $\mathcal{W}=\{R_1, \cdots, R_r, W_1,W_{1,1}, W_{1,2},W_2,W_{2,1}, W_{2,2},\ldots,W_k,W_{k,1}, W_{k,2}\}$ such that \begin{itemize} \item $|W_i|=3$ for $1\le i \le k$, \item $|W_{i,1}|=|W_{i,2}|=2$ for $1\le i \le k$, \item $|R_i|\ge 1$ for $1\le i \le r$, \item $|R_1|+\cdots+|R_r|=n-7k$. \end{itemize} Let $T=(\mathcal W, E)$ be a rooted clique tree of $G$, where \begin{itemize} \item $R_1$ is the root, \item $R_i$ is a child of $R_1$ for $ 2 \le i\le r$, if $r\ge2$, \item $W_i$ is a child of $R_1$ for $ 1 \le i\le k$, \item $W_{i,1}$ and $W_{i,2}$ are children of $W_i$ for $1 \le i\le k$. \end{itemize} Consider the distance matrix $D$ of $G$. Lemma \ref{lem:matrizcociente} allows us to use $D/ \mathcal W$ instead of $D$ to study its nullity. Using the same transformations as in the proof of Theorem \ref{thm: nonsingular trivially perfect graphs}, we obtain the matrix \[M= \begin{pmatrix} |R_1|-1&\vec{x}_1^t&\vec{x}_2^t&\cdots&\vec{x}_k^t\\ (|R_1|-2)\cdot\mathbbm{1}&A_1&\mathbb{0}&\cdots&\mathbb{0}\\ (|R_1|-2)\cdot\mathbbm{1}&\mathbb{0}&A_2&\cdots&\mathbb{0}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ (|R_1|-2)\cdot\mathbbm{1}&\mathbb{0}&\mathbb{0}&\cdots&A_k\\ \end{pmatrix} ,\] if $r=1$, or \[M= \begin{pmatrix} |R_1|-1 & |R_2| & \cdots & |R_r|& \vec{x}_1^t&\vec{x}_2^t&\cdots&\vec{x}_k^t\\ |R_1|-2 & |R_2|+1 & \mathbb{0}^t & 0 & \mathbb{0}^t& \mathbb{0}^t &\cdots& \mathbb{0}^t\\ \vdots & \mathbb{0} & \ddots & \mathbb{0} & \mathbb{0}^t& \mathbb{0}^t &\cdots& \mathbb{0}^t\\ |R_1|-2 & 0 & \mathbb{0}^t &|R_r|+1& \mathbb{0}^t& \mathbb{0}^t &\cdots& \mathbb{0}^t\\ (|R_1|-2)\cdot\mathbbm{1} & \mathbb{0} &\cdots& \mathbb{0} & A_1&\mathbb{0}&\cdots&\mathbb{0}\\ (|R_1|-2)\cdot\mathbbm{1} & \mathbb{0} &\cdots& \mathbb{0} & \mathbb{0} &A_2&\cdots&\mathbb{0}\\ \vdots &\vdots & \vdots & \vdots &\vdots&\vdots&\ddots&\vdots\\ (|R_1|-2)\cdot\mathbbm{1} & \mathbb{0} &\cdots& \mathbb{0} & \mathbb{0}&\mathbb{0}&\cdots&A_k\\ \end{pmatrix} ,\] if $r\ge2$, where $$ A_i= \begin{pmatrix} |W_i|+1 & |W_{i,1}| & |W_{i,2}|\\ |W_i| & |W_{i,1}|+1 & 0\\ |W_i| & 0 & |W_{i,2}|+1 \end{pmatrix} = \begin{pmatrix} 4 & 2 & 2\\ 3 & 3 & 0\\ 3 & 0 & 3 \end{pmatrix}, $$ and $$\vec{x}_i^t = (|W_i|, |W_{i,1}|, |W_{i,2}|)=(3,2,2) ,$$ for $1\le i\le k$.
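Since the elementary row operations used in the next step are left implicit, we record one possible choice (our reconstruction, which the reader may verify directly): $r_{W_i}-\frac{2}{3}\left(r_{W_{i,1}}+r_{W_{i,2}}\right)\to r_{W_i}$ for each $1\le i\le k$, together with \[ r_{R_1}-\sum_{i=1}^{k}\frac{2}{3}\left(r_{W_{i,1}}+r_{W_{i,2}}\right)-\sum_{j=2}^{r}\frac{|R_j|}{|R_j|+1}\,r_{R_j}\to r_{R_1}, \] the second sum being empty if $r=1$.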
By elementary row operations, we obtain \[\hat{M}= \begin{pmatrix} \hat{R} &\vec{\hat{x}}^t&\vec{\hat{x}}^t&\cdots&\vec{\hat{x}}^t\\ (|R_1|-2)\cdot\vec{\hat{y}}&\hat{A}&\mathbb{0}&\cdots&\mathbb{0}\\ (|R_1|-2)\cdot\vec{\hat{y}}&\mathbb{0}&\hat{A}&\cdots&\mathbb{0}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ (|R_1|-2)\cdot\vec{\hat{y}}&\mathbb{0}&\mathbb{0}&\cdots&\hat{A}\\ \end{pmatrix} ,\] if $r=1$, or \[\hat{M}= \begin{pmatrix} \hat{R} & 0 & \cdots & 0& \vec{\hat{x}}^t&\vec{\hat{x}}^t&\cdots&\vec{\hat{x}}^t\\ |R_1|-2 & |R_2|+1 & \mathbb{0}^t & 0 & \mathbb{0}^t& \mathbb{0}^t &\cdots& \mathbb{0}^t\\ \vdots & \mathbb{0} & \ddots & \mathbb{0} & \mathbb{0}^t& \mathbb{0}^t &\cdots& \mathbb{0}^t\\ |R_1|-2 & 0 & \mathbb{0}^t &|R_r|+1& \mathbb{0}^t& \mathbb{0}^t &\cdots& \mathbb{0}^t\\ (|R_1|-2)\cdot\vec{\hat{y}} & \mathbb{0} &\cdots& \mathbb{0} & \hat{A}&\mathbb{0}&\cdots&\mathbb{0}\\ (|R_1|-2)\cdot\vec{\hat{y}} & \mathbb{0} &\cdots& \mathbb{0} & \mathbb{0} &\hat{A}&\cdots&\mathbb{0}\\ \vdots &\vdots & \vdots & \vdots &\vdots&\vdots&\ddots&\vdots\\ (|R_1|-2)\cdot\vec{\hat{y}} & \mathbb{0} &\cdots& \mathbb{0} & \mathbb{0}&\mathbb{0}&\cdots&\hat{A}\\ \end{pmatrix} ,\] if $r\ge2$, where $$ \hat{R}= \begin{cases} (|R_1|-1) - \frac{4k}{3}\cdot(|R_1|-2) & \text{if } r=1\\ (|R_1|-1) - (\frac{4k}{3} + \frac{|R_2|}{|R_2|+1}+\cdots+\frac{|R_r|}{|R_r|+1})\cdot(|R_1|-2) & \text{if } r\ge2, \end{cases} $$ $$ \hat{A}= \begin{pmatrix} 0 & 0 & 0\\ 3 & 3 & 0\\ 3 & 0 & 3 \end{pmatrix}, \ \ \vec{\hat{x}}^t = (-1,0,0), \text{ and} \ \ \vec{\hat{y}}^t = \left(-\frac{1}{3},1,1\right).$$ Notice that, for each $0\le i\le k-1$, the $(r+1+3i)$-th row corresponds to the first row of $\vec{\hat{y}}$ and $\hat{A}$, and thus it equals \[ \begin{pmatrix} -\frac{|R_1|-2}{3}&0&0&\cdots &0 \end{pmatrix}. \] Thus, the nullity of $\hat{M}$ is at least $k$ if $|R_1|=2$, and at least $k-1$ otherwise. Furthermore, it is easy to check that the remaining rows form a linearly independent set, and that this set does not generate the $(r+1+3i)$-th rows if $|R_1|\neq 2$. The result now follows from the fact that $1\leq |R_1|\leq n-7k-r+1$, and that if $r=1$, then $|R_1|=n-7k-r+1$. \end{proof} In the proof of Theorem \ref{thm: nullity}, we assign values to $(|W_i|, |W_{i,1}|, |W_{i,2}|)$ so that $A_i$ has nullity $1$, for all $1\le i \le k$. Writing $(a,b,c)=(|W_i|, |W_{i,1}|, |W_{i,2}|)$, a direct computation gives \[\det A_i=(b+1)(c+1)+a(1-bc),\] and this expression vanishes for positive integers exactly when $(a,b,c)$ is one of $(2,3,3)$, $(2,2,5)$, $(2,5,2)$, $(3,2,2)$, $(3,1,5)$, $(3,5,1)$, $(4,1,3)$, $(4,3,1)$, $(6,1,2)$, and $(6,2,1)$; that is, the matrix $A_i$ has nullity $1$ if and only if $(|W_i|, |W_{i,1}|, |W_{i,2}|)$ is one of these triples. In particular, we use $(3,2,2)$ because with this choice we obtain the minimum lower bound for the number of vertices. \section{Conclusion and further research}\label{sec: conclusions} The proof of Theorem~\ref{thm: finite number of graphs with rank k} presents an upper bound for the number of graphs with distance rank equal to $k$ in terms of the Ramsey number $R(k)$. Nevertheless, this upper bound seems to be far from tight. Indeed, $\lfloor f(3,R(3))\rfloor= 186$, while the number of connected graphs with distance rank $3$ is equal to three. It would be interesting to find a tighter upper bound for the number of connected graphs with distance rank $k$. In Theorem~\ref{thm: nulity of threshold graphs}, we prove that a connected threshold graph has nullity at most one. We also present an infinite family of power sequences giving rise to an infinite family of connected threshold graphs with nullity one.
A challenging problem is to characterize the connected threshold graphs with nullity equal to zero or one. Unlike threshold graphs, for each integer $k\ge 2$ there exists a trivially perfect graph with nullity equal to $k$; see Theorem~\ref{thm: nullity}. Notice that Theorem~\ref{thm: nonsingular trivially perfect graphs} guarantees that if each set of the twin partition of a trivially perfect graph is big enough, then its distance matrix is nonsingular. Consequently, connected threshold graphs with nullity one have a small set in their twin partition, as they form a subclass of trivially perfect graphs. \section*{Acknowledgments} Ezequiel Dratman and Luciano N. Grippo acknowledge partial support from ANPCyT PICT 2017-1315. The first two authors and Ver\'onica Moyano were partially supported by Universidad Nacional de General Sarmiento, grant UNGS-30/1135. Adri\'{a}n Pastine acknowledges partial support from Universidad Nacional de San Luis, Argentina, grants PROICO 03-0918 and PROIPRO 03-1720, and from ANPCyT grants PICT-2020-SERIEA-04064 and PICT-2020-SERIEA-00549. This article was conceived during a visit of the fourth author to Universidad Nacional de General Sarmiento, and he would like to thank them for their hospitality. \end{document}
where $y(t)$ is the output and $x(t)$ is the input given to the system. $$y_1(t) + y_2(t) = x_1(e^t) + x_2(e^t)$$ How will the system respond to this input? Additivity requires a little more than the direct addition of some $x_1$ and $x_2$: it should involve an arbitrary linear combination of $x_1$ and $x_2$, that is, $a_1 x_1 + a_2 x_2$ where $a_1$ and $a_2$ are scalars. This can lead into complicated territory, as in the question "Does scaling property imply superposition?".
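A quick numerical check of superposition for the system $y(t) = x(e^t)$ (a sketch, assuming Python with numpy is available; the function names are ours):

    import numpy as np

    def T(x):
        # the system y(t) = x(e^t) applied to a signal x
        return lambda t: x(np.exp(t))

    t = np.linspace(-2.0, 2.0, 1001)
    x1, x2 = np.sin, np.cos
    a1, a2 = 2.5, -0.7

    combined = T(lambda u: a1 * x1(u) + a2 * x2(u))(t)
    separate = a1 * T(x1)(t) + a2 * T(x2)(t)
    assert np.allclose(combined, separate)  # superposition holds

The check passes for any choice of a1 and a2, because composing with $e^t$ acts linearly on the input signal; the system is linear but time-varying.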
\begin{document} \title [Fourier coefficients of the Duke-Imamoglu-Ikeda lift]{Estimates for the Fourier coefficients of the Duke-Imamoglu-Ikeda lift } \author{Tamotsu IKEDA} \address{Graduate school of mathematics, Kyoto University, Kitashirakawa, Kyoto, 606-8502, Japan} \email{[email protected]} \author{Hidenori KATSURADA} \address{Department of Mathematics, Hokkaido, University, Kita 10, Nishi 8, Kitaku, Sapporo, Hokkaido, 060-0810, Japan, and Muroran Institute of Technology 27-1 Mizumoto, Muroran, 050-8585, Japan} \email{[email protected]} \thanks{The research was partially supported by JSPS KAKENHI Grant Number 17H02834, 16H03919, 22K03228 and 21K03152. } \date {July 5, 2022} \maketitle \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{\delta}{\delta} \newcommand{\epsilon}{\epsilon} \newcommand{\zeta}{\zeta} \newcommand{\theta}{\theta} \newcommand{\iota}{\iota} \newcommand{\kappa}{\kappa} \newcommand{\lambda}{\lambda} \newcommand{\sigma}{\sigma} \newcommand{\upsilon}{\upsilon} \newcommand{\omega}{\omega} \newcommand{\varepsilon}{\varepsilon} \newcommand{\vartheta}{\vartheta} \newcommand{\varpi}{\varpi} \newcommand{\varrho}{\varrho} \newcommand{\varsigma}{\varsigma} \newcommand{\varphi}{\varphi} \newcommand{\Gamma}{\Gamma} \newcommand{\Delta}{\Delta} \newcommand{\Theta}{\Theta} \newcommand{\Lambda}{\Lambda} \newcommand{\Sigma}{\Sigma} \newcommand{\Upsilon}{\Upsilon} \newcommand{\Omega}{\Omega} \newcommand{\frka}{{\mathfrak a}} \newcommand{\frkA}{{\mathfrak A}} \newcommand{\frkb}{{\mathfrak b}} \newcommand{\frkB}{{\mathfrak B}} \newcommand{\frkc}{{\mathfrak c}} \newcommand{\frkC}{{\mathfrak C}} \newcommand{\frkd}{{\mathfrak d}} \newcommand{\frkD}{{\mathfrak D}} \newcommand{\frke}{{\mathfrak e}} \newcommand{\frkE}{{\mathfrak E}} \newcommand{\frkf}{{\mathfrak f}} \newcommand{\frkF}{{\mathfrak F}} \newcommand{\frkg}{{\mathfrak g}} \newcommand{\frkG}{{\mathfrak G}} \newcommand{\frkh}{{\mathfrak h}} \newcommand{\frkH}{{\mathfrak H}} \newcommand{\frki}{{\mathfrak i}} \newcommand{\frkI}{{\mathfrak I}} \newcommand{\frkj}{{\mathfrak j}} \newcommand{\frkJ}{{\mathfrak J}} \newcommand{\frkk}{{\mathfrak k}} \newcommand{\frkK}{{\mathfrak K}} \newcommand{\frkl}{{\mathfrak l}} \newcommand{\frkL}{{\mathfrak L}} \newcommand{\frkm}{{\mathfrak m}} \newcommand{\frkM}{{\mathfrak M}} \newcommand{\frkn}{{\mathfrak n}} \newcommand{\frkN}{{\mathfrak N}} \newcommand{\frko}{{\mathfrak o}} \newcommand{\frkO}{{\mathfrak O}} \newcommand{\frkp}{{\mathfrak p}} \newcommand{\frkP}{{\mathfrak P}} \newcommand{\frkq}{{\mathfrak q}} \newcommand{\frkQ}{{\mathfrak Q}} \newcommand{\frkr}{{\mathfrak r}} \newcommand{\frkR}{{\mathfrak R}} \newcommand{\frks}{{\mathfrak s}} \newcommand{\frkS}{{\mathfrak S}} \newcommand{\frkt}{{\mathfrak t}} \newcommand{\frkT}{{\mathfrak T}} \newcommand{\frku}{{\mathfrak u}} \newcommand{\frkU}{{\mathfrak U}} \newcommand{\frkv}{{\mathfrak v}} \newcommand{\frkV}{{\mathfrak V}} \newcommand{\frkw}{{\mathfrak w}} \newcommand{\frkW}{{\mathfrak W}} \newcommand{\frkx}{{\mathfrak x}} \newcommand{\frkX}{{\mathfrak X}} \newcommand{\frky}{{\mathfrak y}} \newcommand{\frkY}{{\mathfrak Y}} \newcommand{\frkz}{{\mathfrak z}} \newcommand{\frkZ}{{\mathfrak Z}} \newcommand{\bfa}{{\mathbf a}} \newcommand{\bfA}{{\mathbf A}} \newcommand{\bfb}{{\mathbf b}} \newcommand{\bfB}{{\mathbf B}} \newcommand{\bfc}{{\mathbf c}} \newcommand{\bfC}{{\mathbf C}} \newcommand{\bfd}{{\mathbf d}} \newcommand{\bfD}{{\mathbf D}} \newcommand{\bfe}{{\mathbf e}} \newcommand{\bfE}{{\mathbf E}} 
\newcommand{\bff}{{\mathbf f}} \newcommand{\bfF}{{\mathbf F}} \newcommand{\bfg}{{\mathbf g}} \newcommand{\bfG}{{\mathbf G}} \newcommand{\bfh}{{\mathbf h}} \newcommand{\bfH}{{\mathbf H}} \newcommand{\bfi}{{\mathbf i}} \newcommand{\bfI}{{\mathbf I}} \newcommand{\bfj}{{\mathbf j}} \newcommand{\bfJ}{{\mathbf J}} \newcommand{\bfk}{{\mathbf k}} \newcommand{\bfK}{{\mathbf K}} \newcommand{\bfl}{{\mathbf l}} \newcommand{\bfL}{{\mathbf L}} \newcommand{\bfm}{{\mathbf m}} \newcommand{\bfM}{{\mathbf M}} \newcommand{\bfn}{{\mathbf n}} \newcommand{\bfN}{{\mathbf N}} \newcommand{\bfo}{{\mathbf o}} \newcommand{\bfO}{{\mathbf O}} \newcommand{\bfp}{{\mathbf p}} \newcommand{\bfP}{{\mathbf P}} \newcommand{\bfq}{{\mathbf q}} \newcommand{\bfQ}{{\mathbf Q}} \newcommand{\bfr}{{\mathbf r}} \newcommand{\bfR}{{\mathbf R}} \newcommand{\bfs}{{\mathbf s}} \newcommand{\bfS}{{\mathbf S}} \newcommand{\bft}{{\mathbf t}} \newcommand{\bfT}{{\mathbf T}} \newcommand{\bfu}{{\mathbf u}} \newcommand{\bfU}{{\mathbf U}} \newcommand{\bfv}{{\mathbf v}} \newcommand{\bfV}{{\mathbf V}} \newcommand{\bfw}{{\mathbf w}} \newcommand{\bfW}{{\mathbf W}} \newcommand{\bfx}{{\mathbf x}} \newcommand{\bfX}{{\mathbf X}} \newcommand{\bfy}{{\mathbf y}} \newcommand{\bfY}{{\mathbf Y}} \newcommand{\bfz}{{\mathbf z}} \newcommand{\bfZ}{{\mathbf Z}} \newcommand{{\mathcal A}}{{\mathcal A}} \newcommand{{\mathcal B}}{{\mathcal B}} \newcommand{{\mathcal C}}{{\mathcal C}} \newcommand{{\mathcal D}}{{\mathcal D}} \newcommand{{\mathcal E}}{{\mathcal E}} \newcommand{{\mathcal F}}{{\mathcal F}} \newcommand{{\mathcal G}}{{\mathcal G}} \newcommand{{\mathcal H}}{{\mathcal H}} \newcommand{{\mathcal I}}{{\mathcal I}} \newcommand{{\mathcal J}}{{\mathcal J}} \newcommand{{\mathcal K}}{{\mathcal K}} \newcommand{{\mathcal L}}{{\mathcal L}} \newcommand{{\mathcal M}}{{\mathcal M}} \newcommand{{\mathcal N}}{{\mathcal N}} \newcommand{{\mathcal O}}{{\mathcal O}} \newcommand{{\mathcal P}}{{\mathcal P}} \newcommand{{\mathcal Q}}{{\mathcal Q}} \newcommand{{\mathcal R}}{{\mathcal R}} \newcommand{{\mathcal S}}{{\mathcal S}} \newcommand{{\mathcal T}}{{\mathcal T}} \newcommand{{\mathcal U}}{{\mathcal U}} \newcommand{{\mathcal V}}{{\mathcal V}} \newcommand{{\mathcal W}}{{\mathcal W}} \newcommand{{\mathcal X}}{{\mathcal X}} \newcommand{{\mathcal Y}}{{\mathcal Y}} \newcommand{{\mathcal Z}}{{\mathcal Z}} \newcommand{{\mathscr A}}{{\mathscr A}} \newcommand{{\mathscr B}}{{\mathscr B}} \newcommand{{\mathscr C}}{{\mathscr C}} \newcommand{{\mathscr D}}{{\mathscr D}} \newcommand{{\mathscr E}}{{\mathscr E}} \newcommand{{\mathscr F}}{{\mathscr F}} \newcommand{{\mathscr G}}{{\mathscr G}} \newcommand{{\mathscr H}}{{\mathscr H}} \newcommand{{\mathscr I}}{{\mathscr I}} \newcommand{{\mathscr J}}{{\mathscr J}} \newcommand{{\mathscr K}}{{\mathscr K}} \newcommand{{\mathscr L}}{{\mathscr L}} \newcommand{{\mathscr M}}{{\mathscr M}} \newcommand{{\mathscr N}}{{\mathscr N}} \newcommand{{\mathscr O}}{{\mathscr O}} \newcommand{{\mathscr P}}{{\mathscr P}} \newcommand{{\mathscr Q}}{{\mathscr Q}} \newcommand{{\mathscr R}}{{\mathscr R}} \newcommand{{\mathscr S}}{{\mathscr S}} \newcommand{{\mathscr T}}{{\mathscr T}} \newcommand{{\mathscr U}}{{\mathscr U}} \newcommand{{\mathscr V}}{{\mathscr V}} \newcommand{{\mathscr W}}{{\mathscr W}} \newcommand{{\mathscr X}}{{\mathscr X}} \newcommand{{\mathscr Y}}{{\mathscr Y}} \newcommand{{\mathscr Z}}{{\mathscr Z}} \newcommand{{\mathbb A}}{{\mathbb A}} \newcommand{{\mathbb B}}{{\mathbb B}} \newcommand{{\mathbb C}}{{\mathbb C}} \newcommand{{\mathbb D}}{{\mathbb D}} \newcommand{{\mathbb 
E}}{{\mathbb E}} \newcommand{{\mathbb F}}{{\mathbb F}} \newcommand{{\mathbb G}}{{\mathbb G}} \newcommand{{\mathbb H}}{{\mathbb H}} \newcommand{{\mathbb I}}{{\mathbb I}} \newcommand{{\mathbb J}}{{\mathbb J}} \newcommand{{\mathbb K}}{{\mathbb K}} \newcommand{{\mathbb L}}{{\mathbb L}} \newcommand{{\mathbb M}}{{\mathbb M}} \newcommand{{\mathbb N}}{{\mathbb N}} \newcommand{{\mathbb O}}{{\mathbb O}} \newcommand{{\mathbb P}}{{\mathbb P}} \newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\mathbb S}}{{\mathbb S}} \newcommand{{\mathbb T}}{{\mathbb T}} \newcommand{{\mathbb U}}{{\mathbb U}} \newcommand{{\mathbb V}}{{\mathbb V}} \newcommand{{\mathbb W}}{{\mathbb W}} \newcommand{{\mathbb X}}{{\mathbb X}} \newcommand{{\mathbb Y}}{{\mathbb Y}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{\tta}{\hbox{\tt a}} \newcommand{\ttA}{\hbox{\tt A}} \newcommand{\ttb}{\hbox{\tt b}} \newcommand{\ttB}{\hbox{\tt B}} \newcommand{\ttc}{\hbox{\tt c}} \newcommand{\ttC}{\hbox{\tt C}} \newcommand{\ttd}{\hbox{\tt d}} \newcommand{\ttD}{\hbox{\tt D}} \newcommand{\tte}{\hbox{\tt e}} \newcommand{\ttE}{\hbox{\tt E}} \newcommand{\ttf}{\hbox{\tt f}} \newcommand{\ttF}{\hbox{\tt F}} \newcommand{\ttg}{\hbox{\tt g}} \newcommand{\ttG}{\hbox{\tt G}} \newcommand{\tth}{\hbox{\tt h}} \newcommand{\ttH}{\hbox{\tt H}} \newcommand{\tti}{\hbox{\tt i}} \newcommand{\ttI}{\hbox{\tt I}} \newcommand{\ttj}{\hbox{\tt j}} \newcommand{\ttJ}{\hbox{\tt J}} \newcommand{\ttk}{\hbox{\tt k}} \newcommand{\ttK}{\hbox{\tt K}} \newcommand{\ttl}{\hbox{\tt l}} \newcommand{\ttL}{\hbox{\tt L}} \newcommand{\ttm}{\hbox{\tt m}} \newcommand{\ttM}{\hbox{\tt M}} \newcommand{\ttn}{\hbox{\tt n}} \newcommand{\ttN}{\hbox{\tt N}} \newcommand{\tto}{\hbox{\tt o}} \newcommand{\ttO}{\hbox{\tt O}} \newcommand{\ttp}{\hbox{\tt p}} \newcommand{\ttP}{\hbox{\tt P}} \newcommand{\ttq}{\hbox{\tt q}} \newcommand{\ttQ}{\hbox{\tt Q}} \newcommand{\ttr}{\hbox{\tt r}} \newcommand{\ttR}{\hbox{\tt R}} \newcommand{\tts}{\hbox{\tt s}} \newcommand{\ttS}{\hbox{\tt S}} \newcommand{\ttt}{\hbox{\tt t}} \newcommand{\ttT}{\hbox{\tt T}} \newcommand{\ttu}{\hbox{\tt u}} \newcommand{\ttU}{\hbox{\tt U}} \newcommand{\ttv}{\hbox{\tt v}} \newcommand{\ttV}{\hbox{\tt V}} \newcommand{\ttw}{\hbox{\tt w}} \newcommand{\ttW}{\hbox{\tt W}} \newcommand{\ttx}{\hbox{\tt x}} \newcommand{\ttX}{\hbox{\tt X}} \newcommand{\tty}{\hbox{\tt y}} \newcommand{\ttY}{\hbox{\tt Y}} \newcommand{\ttz}{\hbox{\tt z}} \newcommand{\ttZ}{\hbox{\tt Z}} \newcommand{\phantom}{\phantom} \newcommand{\displaystyle }{\displaystyle } \newcommand{\vphantom{\vrule height 3pt }}{\vphantom{\vrule height 3pt }} \def\bdm #1#2#3#4{\left( \begin{array} {c|c}{\displaystyle {#1}} & {\displaystyle {#2}} \\ \hline {\displaystyle {#3}\vphantom{\displaystyle {#3}^1}} & {\displaystyle {#4}} \end{array} \right)} \newcommand{\widetilde }{\widetilde } \newcommand{\backslash }{\backslash } \newcommand{{\mathrm{GL}}}{{\mathrm{GL}}} \newcommand{{\mathrm{SL}}}{{\mathrm{SL}}} \newcommand{{\mathrm{GSp}}}{{\mathrm{GSp}}} \newcommand{{\mathrm{PGSp}}}{{\mathrm{PGSp}}} \newcommand{{\mathrm{Sp}}}{{\mathrm{Sp}}} \newcommand{{\mathrm{SO}}}{{\mathrm{SO}}} \newcommand{{\mathrm{SU}}}{{\mathrm{SU}}} \newcommand{\mathrm{Ind}}{\mathrm{Ind}} \newcommand{{\mathrm{Hom}}}{{\mathrm{Hom}}} \newcommand{{\mathrm{Ad}}}{{\mathrm{Ad}}} \newcommand{{\mathrm{Sym}}}{{\mathrm{Sym}}} \newcommand{\mathrm{M}}{\mathrm{M}} \newcommand{\mathrm{sgn}}{\mathrm{sgn}} \newcommand{\,^t\!}{\,^t\!} \newcommand{\sqrt{-1}}{\sqrt{-1}} \newcommand{\hbox{\bf 0}}{\hbox{\bf 0}} 
\newcommand{\hbox{\bf 1}}{\hbox{\bf 1}} \newcommand{\lower .3em \hbox{\rm\char'27}\!}{\lower .3em \hbox{\rm\char'27}\!} \newcommand{\bA_{\hbox{\eightrm f}}}{\bA_{\hbox{\eightrm f}}} \newcommand{{\textstyle{\frac12}}}{{\textstyle{\frac12}}} \newcommand{\hbox{\rm\char'43}}{\hbox{\rm\char'43}} \newcommand{\operatorname{Gal}}{\operatorname{Gal}} \newcommand{{\boldsymbol{\delta}}}{{\boldsymbol{\delta}}} \newcommand{{\boldsymbol{\chi}}}{{\boldsymbol{\chi}}} \newcommand{{\boldsymbol{\gamma}}}{{\boldsymbol{\gamma}}} \newcommand{{\boldsymbol{\omega}}}{{\boldsymbol{\omega}}} \newcommand{{\boldsymbol{\psi}}}{{\boldsymbol{\psi}}} \newcommand{\mathrm{GK}}{\mathrm{GK}} \newcommand{\mathrm{EGK}}{\mathrm{EGK}} \newcommand{\mathrm{ord}}{\mathrm{ord}} \newcommand{\mathrm{diag}}{\mathrm{diag}} \newcommand{{\underline{a}}}{{\underline{a}}} \newcommand{\ZZ_{\geq 0}^n}{{\mathbb Z}_{\geq 0}^n} \theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{{\bf Remark}} \newcommand{\mathrm{supp}}{\mathrm{supp}} \def\mattwono(#1;#2;#3;#4){\begin{array}{cc} #1 & #2 \\ #3 & #4 \end{array}} \def\mattwo(#1;#2;#3;#4){\left(\begin{matrix} #1 & #2 \\ #3 & #4 \end{matrix}\right)} \def\smallmattwo(#1;#2;#3;#4){\left(\begin{smallmatrix} #1 & #2 \\ #3 & #4 \end{smallmatrix}\right)} \def\matthree(#1;#2;#3;#4;#5;#6;#7;#8;#9){\left(\begin{matrix} #1 & #2 & #3\\ #4 & #5 & #6\\ #7 & #8 &#9 \end{matrix}\right)} \def\mattwo(#1;#2;#3;#4){\left(\begin{matrix} #1 & #2 \\ #3 & #4 \end{matrix}\right)} \def\rowthree(#1;#2;#3){\begin{matrix} #1 \\ #2 \\ #3 \end{matrix}} \def\columnthree(#1;#2;#3){\begin{matrix} #1 & #2 & #3 \end{matrix}} \def\rowfive(#1;#2;#3;#4;#5){\begin{array}{lllll} #1 \\ #2 \\ #3 \\ #4 \\ #5 \end{array}} \def\columnfive(#1;#2;#3;#4;#5){\begin{array}{lllll} #1 & #2 & #3 & #4 & #5 \end{array}} \def\mattwothree(#1;#2;#3;#4;#5;#6){\begin{matrix} #1 & #2 & #3 \\ #4 & #5 & #6 \end{matrix}} \def\matthreetwo(#1;#2;#3;#4;#5;#6){\begin{array}{lc} #1 & #2 \\ #3 & #4 \\ #5 & #6 \end{array}} \def\columnthree(#1;#2;#3){\begin{matrix} #1 & #2 & #3 \end{matrix}} \def\rowthree(#1;#2;#3){\begin{matrix} #1 \\ #2 \\ #3 \end{matrix}} \def\smallddots{\mathinner {\mskip1mu\raise3pt\vbox{\kern7pt\hbox{.}} \mskip1mu\raise0pt\hbox{.} \mskip1mu\raise-3pt\hbox{.}\mskip1mu}} \begin{abstract} Let $k$ and $n$ be positive even integers. For a Hecke eigenform $h$ in the Kohnen plus subspace of weight $k-n/2+1/2$ for $\varGamma_0(4)$, let $I_n(h)$ be the Duke-Imamoglu-Ikeda lift of $h$ to the space of cusp forms of weight $k$ for $Sp_n({\mathbb Z})$. We then give an estimate of the Fourier coefficients of $I_n(h)$. It is better than the usual Hecke bound for the Fourier coefficients of a Siegel cusp form. \end{abstract} \section{Introduction} Let $\varGamma^{(n)}=Sp_n({\mathbb Z})$ be the Siegel modular group of genus $n$, and $S_k(\varGamma^{(n)})$ the space of cusp forms of weight $k$ for $\varGamma^{(n)}$. Then we have the following Fourier expansion: \[F(Z)=\sum_{B} c_F(B) \exp(2\pi \sqrt{-1} \mathrm{tr}(BZ)),\] where $B$ runs over all positive definite half-integral matrices of size $n$. It is an interesting problem to estimate $c_F(B)$. 
By the standard method we obtain the following estimate, called the Hecke bound, of $c_F(B)$ for $B \in {\mathcal H}_n({\mathbb Z})_{>0}$: \[c_F(B) \ll_F (\det (B))^{k/2}.\] However, it is weak in general. In the case $n=1$, we have Deligne's estimate (cf. \cite{De74}), which is best possible, and from now on we consider the case $n \ge 2$. Then there are several improvements of the Hecke bound (cf. \cite{BK93}, \cite{BR88}, \cite{Breulmann96}, \cite{Bringmann}, \cite{F87},\cite{Ki84},\cite{Ko93},\cite{RW89}). Among others, B\"ocherer and Kohnen \cite{BK93} proved that \begin{align*} c_F(B) \ll_{\varepsilon,F} \det (B)^{{k \over 2}-{1 \over 2n} -(1-{1 \over n}) \alpha_n+\varepsilon} \ (\varepsilon>0) \tag{$*$} \end{align*} if $k >n+1$. Here $$\alpha_n=\Big(4(n-1)+4\Big[{n-1 \over 2}\Big]+{2 \over n+2}\Big)^{-1}.$$ In this paper, we improve this bound for the Duke-Imamoglu-Ikeda lift $I_n(h)$ of a cuspidal Hecke eigenform $h$ in $S_{k-n/2+1/2}^+(\varGamma_0(4))$ to $S_k(\varGamma^{(n)})$. (For a precise definition of the Duke-Imamoglu-Ikeda lift, see Section 3.) That is, we prove the following estimate (Theorem \ref{th.main-result}): we have \begin{align*} c_{I_n(h)}(B) \ll_{\varepsilon,I_n(h)} |\frkd_B|^{-n/4+5/12}(\det (2B))^{(k-1)/2+\varepsilon} \ (\varepsilon>0) \tag{**} \end{align*} for any $B \in {\mathcal H}_n({\mathbb Z})_{>0}$, where $\frkd_B$ is the discriminant of ${\mathbb Q}(\sqrt{(-1)^{n/2} \det B})/{\mathbb Q}$. From the above result we have \begin{align*} c_{I_n(h)}(B)\ll_{\varepsilon,I_n(h) } (\det (2B))^{(k-1)/2+\varepsilon} \ (\varepsilon>0) \tag{***} \end{align*} for any $B \in {\mathcal H}_n({\mathbb Z})_{>0}$, since $|\frkd_B|\ge 1$ and the exponent $-n/4+5/12$ is negative for $n\ge 2$. We note that our estimate is slightly stronger than (*). We explain how to obtain the estimate (**). By definition, $c_{I_n(h)}(B)$ is expressed in terms of the $|\frkd_B|$-th Fourier coefficient $c_h(|\frkd_B|)$ of $h$ and $\prod_p \widetilde F_p(B,\alpha_p)$, where, for a prime number $p$, $\widetilde F_p(B,X)$ is the polynomial in $X$ and $X^{-1}$ defined in \cite{IK22}, and $\alpha_p$ is a certain complex number such that $|\alpha_p|=1$ (cf. Section 3). In view of Corollary \ref{cor.estimate-of-FH} and Theorem \ref{th.H-and-Siegel-series}, we can estimate $\widetilde F_p(B,a)$ for any $a \in {\mathbb C}$ by a purely combinatorial method (cf. Theorem \ref{th.estimate-of-F}), and therefore we obtain the following estimate (cf. Theorem \ref{th.refined-estimate} (1)): \begin{align*} &|c_{I_n(h)}(B)| \le |c_h(|\frkd_B|)|\frkf_B^{k-1} \prod_{i=1}^n \prod_{p | \frkf_B}(1+\frke_{i,B}^{(p)}), \end{align*} where $\frkf_B=\sqrt{\det (2B)/|\frkd_B|}$, and $\frke_{i,B}^{(p)}$ is defined after Remark \ref{rem.unstability-of-Kohnen-plus-space}. On the other hand, by \cite{CI00}, we obtain a reasonable estimate of $c_h(|\frkd_B|)$. Combining these two estimates, we obtain the estimate (**). We also obtain another estimate for $c_{I_n(h)}(B)$ (cf. Theorem \ref{th.main-result2}). It is expected that we can obtain a similar estimate for the Fourier coefficients of the lift constructed in \cite{IY20}. This paper is organized as follows. In Section 2, we review the Siegel series. In Section 3, we state our main result. In Section 4, we review the Gross-Keating invariant. In Section 5, we give an estimate of $\widetilde F_p(B,\alpha_p)$, and in Section 6, we prove our main result. We thank Valentin Blomer for many valuable discussions, which motivated this paper. We also thank him for many useful comments, by which our main result has been greatly improved.
{\bf Notation} Let $R$ be a commutative ring. We denote by $R^{\times}$ the group of units in $R$. We denote by $M_{mn}(R)$ the set of $(m,n)$ matrices with entries in $R$, and especially write $M_n(R)=M_{nn}(R)$. We often identify an element $a$ of $R$ with the matrix $(a)$ of degree 1 whose component is $a$. If $m$ or $n$ is 0, we understand that an element of $M_{mn}(R)$ is the {\it empty matrix} and denote it by $\emptyset$. Let $GL_n(R)$ be the group consisting of all invertible elements of $M_n(R)$, and $\mathrm{Sym}_n(R)$ the set of symmetric matrices of degree $n$ with entries in $R$. For a semigroup $S$ we put $S^{\Box}=\{s^2 \ | \ s \in S \}$. Let $R$ be an integral domain of characteristic different from $2$, and $K$ its quotient field. We say that an element $A$ of $\mathrm{Sym}_n(K)$ is non-degenerate if the determinant $\det A$ of $A$ is non-zero. For a subset $S$ of $\mathrm{Sym}_n(K)$, we denote by $S^{{\rm{nd}}}$ the subset of $S$ consisting of non-degenerate matrices. We say that a symmetric matrix $A=(a_{ij})$ of degree $n$ with entries in $K$ is half-integral over $R$ if $a_{ii} \ (i=1,...,n)$ and $2a_{ij} \ (1 \le i \not= j \le n)$ belong to $R$. We denote by ${\mathcal H}_n(R)$ the set of half-integral matrices of degree $n$ over $R$. We note that ${\mathcal H}_n(R)=\mathrm{Sym}_n(R)$ if $R$ contains the inverse of $2$. We denote by ${\mathbb Z}_{> 0}$ and ${\mathbb Z}_{\ge 0}$ the set of positive integers and the set of non-negative integers, respectively. For an $(m,n)$ matrix $X$ and an $(m,m)$ matrix $A$, we write $A[X] ={}^tXAX$, where $^t X$ denotes the transpose of $X$. Let $G$ be a subgroup of $GL_n(K)$. Then we say that two elements $B$ and $B'$ in $\mathrm{Sym}_n(K)$ are $G$-equivalent if there is an element $g$ of $G$ such that $B'=B[g]$. We denote by $1_m$ the unit matrix of degree $m$ and by $O_{m,n}$ the zero matrix of type $(m,n)$. We sometimes abbreviate $O_{m,n}$ as $O$ if there is no fear of confusion. For two square matrices $X$ and $Y$ we write $X \bot Y =\mattwo(X;O;O;Y)$. We often write $x \bot Y$ instead of $(x) \bot Y$ if $(x)$ is a matrix of degree 1. For an $m \times n$ matrix $B=(b_{ij})$ and sequences ${\bf i}=(i_1,\ldots,i_r)$, ${\bf j}=(j_1,\ldots,j_r)$ of integers such that $1 \le i_1, \ldots, i_r \le m$, $1 \le j_1,\ldots,j_r \le n$, we put \[B\begin{pmatrix}{\bf i} \\ {\bf j}\end{pmatrix}= (b_{i_k,j_l})_{1 \le k,l \le r}.\] \section{Siegel series} Let $F$ be a non-archimedean local field of characteristic $0$, and $\frko=\frko_F$ its ring of integers. The maximal ideal and the residue field of $\frko$ are denoted by $\frkp$ and $\frkk$, respectively. We fix a prime element $\varpi$ of $\frko$ once and for all. The cardinality of $\frkk$ is denoted by $q$. Let $\mathrm{ord}=\mathrm{ord}_{\frkp}$ denote the additive valuation on $F$ normalized so that $\mathrm{ord}(\varpi)=1$. We write $\mathrm{ord}(0)=\infty$, and we make the convention that $\mathrm{ord}(0) > \mathrm{ord}(b)$ for any $b \in F^{\times}$. We also denote by $|*|_{\frkp}$ the valuation on $F$ normalized so that $|\varpi|_{\frkp}=q^{-1}$. We put $e_0=\mathrm{ord}_{\frkp}(2)$. For a non-degenerate element $B\in{\mathcal H}_n(\frko)$, we put $D_B=(-4)^{[n/2]}\det B$. If $n$ is even, we denote the discriminant ideal of $F(\sqrt{D_B})/F$ by $\frkD_B$.
We also put \[ \xi_B= \begin{cases} 1 & \text{ if $D_B\in F^{\times 2}$,} \\ -1 & \text{ if $F(\sqrt{D_B})/F$ is unramified quadratic,} \\ 0 & \text{ if $F(\sqrt{D_B})/F$ is ramified quadratic.} \end{cases} \] Put $$\frke_B= \begin{cases} \mathrm{ord}(D_B)-\mathrm{ord}(\frkD_B) & \text{ if $n$ is even,} \\ \mathrm{ord}(D_B) & \text{ if $n$ is odd.} \end{cases}$$ We make the convention that $\xi_B=1$ and $\frke_B=0$ if $B$ is the empty matrix. Once and for all, we fix an additive character $\psi$ of $F$ of order zero, that is, a character such that $$\frko =\{ a \in F \ | \ \psi(ax)=1 \ \text{ for any} \ x \in \frko \}.$$ For a half-integral matrix $B$ of degree $n$ over $\frko$, define the local Siegel series $b_{\frkp}(B,s)$ by $$b_{{\frkp}}(B,s)= \sum_{R} \psi(\mathrm{tr}(BR))\mu(R)^{-s},$$ where $R$ runs over a complete set of representatives of $\mathrm{Sym}_n(F)/\mathrm{Sym}_n(\frko)$ and $\mu(R)=[R\frko^n+\frko^n:\frko^n]$. The series $b_{\frkp}(B,s)$ converges absolutely if the real part of $s$ is large enough, and it has a meromorphic continuation to the whole $s$-plane. Now, for a non-degenerate half-integral matrix $B$ of degree $n$ over $\frko$, define a polynomial $\gamma_q(B,X)$ in $X$ by $$\gamma_q(B,X)= \begin{cases} (1-X)\prod_{i=1}^{n/2}(1-q^{2i}X^2)(1-q^{n/2}\xi_B X)^{-1} & \text{ if $n$ is even,} \\ (1-X)\prod_{i=1}^{(n-1)/2}(1-q^{2i}X^2) & \text{ if $n$ is odd.} \end{cases}$$ Then it is shown in \cite{Sh1} that there exists a polynomial $F_{\frkp}(B,X)$ in $X$ such that $$F_{\frkp}(B,q^{-s})={b_{\frkp}(B,s) \over \gamma_q(B,q^{-s})}.$$ We define a symbol $X^{1/2}$ so that $(X^{1/2})^2=X$. We define $\widetilde F_{\frkp}(B,X)$ as $$\widetilde F_{\frkp}(B,X)=X^{-\frke_B/2}F_{\frkp}(B,q^{-(n+1)/2}X).$$ We note that $\widetilde F_{\frkp}(B,X) \in {\mathbb Q}(q^{1/2})[X,X^{-1}]$ if $n$ is even, and $\widetilde F_{\frkp}(B,X) \in {\mathbb Q}[X^{1/2},X^{-1/2}]$ if $n$ is odd. \section{The Duke-Imamoglu-Ikeda lift and main result} Put $J_n=\begin{pmatrix}O_n&-1_n\\1_n&O_n\end{pmatrix}$. Furthermore, put $$\varGamma^{(n)}=Sp_n({{\mathbb Z}})=\{M \in GL_{2n}({{\mathbb Z}}) \ | \ J_n[M]=J_n \}. $$ Let ${\Bbb H}_n$ be Siegel's upper half-space of degree $n$. We define $j(\gamma,Z)=\det (CZ+D)$ for $\gamma = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$ and $Z \in {\Bbb H}_n$. We note that $\varGamma^{(1)}=SL_2({\mathbb Z})$. Let $l$ be an integer or a half-integer. For a congruence subgroup $\varGamma$ of $\varGamma^{(n)}$, we denote by $M_{l}(\varGamma)$ the space of Siegel modular forms of weight $l$ with respect to $\varGamma$, and by $S_{l}(\varGamma)$ its subspace consisting of cusp forms. Let $T$ be an element of ${{\mathcal H}_n}({\mathbb Z})_{>0}$ with $n$ even. Let $\frkd_T$ be the discriminant of ${\mathbb Q}(\sqrt{(-1)^{n/2} \det (T)})/{\mathbb Q}$. Then we have $(-1)^{n/2} \det (2T)/\frkd_T=\frkf_T^2$ with $\frkf_T \in {\mathbb Z}_{>0}$. Now let $k$ be a positive even integer, and $\varGamma_0(4)=\Bigl\{\bigl( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \bigr) \in SL_2({\mathbb Z}) \ | \ c \equiv 0 \bmod 4 \bigr\}$. Let $$h(z)=\sum_{m \in {{\mathbb Z}}_{>0} \atop (-1)^{n/2}m \equiv 0, 1 \ {\rm mod} \ 4 }c_h(m){\bf e}(mz)$$ be a Hecke eigenform in the Kohnen plus space $S_{k-n/2+1/2}^+(\varGamma_0(4))$ and $f(z)=\sum_{m=1}^{\infty}c_f(m){\bf e}(mz)$ be the primitive form in $S_{2k-n}(SL_2(\Bbb Z))$ corresponding to $h$ under the Shimura correspondence (cf. Kohnen \cite{Ko}).
We define a Fourier series $I_n(h)(Z)$ in $Z \in {\Bbb H}_n$ by $$I_n(h)(Z)= \sum_{T \in {{\mathcal H}_n}({\mathbb Z})_{> 0}} c_{I_n(h)}(T){\bf e}({\rm tr}(TZ))$$ where $c_{I_n(h)}(T)=c_h(|\frkd_T|) \frkf_T^{k-(n+1)/2} \prod_p\widetilde F_p(T,\alpha_f(p))$, and $\alpha_f(p)$ is the Satake parameter of $f$ at $p$, normalized so that $c_f(p)=p^{k-(n+1)/2}(\alpha_f(p)+\alpha_f(p)^{-1})$. Then the first named author \cite{I01} showed that $I_n(h)(Z)$ is a Hecke eigenform in $S_k(\varGamma^{(n)})$ whose standard $L$-function coincides with $\zeta(s)\prod_{i=1}^n L(s+k-i,f)$ (see also \cite{IY20}). We call $I_n(h)$ the Duke-Imamoglu-Ikeda lift (D-I-I lift for short) of $h$. \begin{theorem} \label{th.main-result} Let the notation be as above. Then we have \begin{align*} c_{I_n(h)}(B) \ll_{\varepsilon,I_n(h)} |\frkd_B|^{-n/4+5/12}(\det (2B))^{(k-1)/2+\varepsilon} \ (\varepsilon >0) \end{align*} for any $B \in {\mathcal H}_n({\mathbb Z})_{>0}$. In particular, we have \begin{align*} c_{I_n(h)}(B) \ll_{\varepsilon,I_n(h)} (\det (2B))^{(k-1)/2+\varepsilon} \ (\varepsilon >0) \end{align*} for any $B \in {\mathcal H}_n({\mathbb Z})_{>0}$. \end{theorem} We give another estimate of $c_{I_n(h)}(B)$ in terms of minors of $B$. We denote by ${\mathcal I}_r$ the set of sequences $(i_1,\ldots,i_r)$ of integers such that $1 \le i_1<\cdots<i_r \le n$. Let $R$ be an integral domain of characteristic different from $2$. For an element $B =(b_{ij}) \in {\mathcal H}_n(R)$ and ${\bf i}=(i_1,\ldots,i_r), {\bf j}=(j_1,\ldots,j_r) \in {\mathcal I}_r$, we define $b_{{\bf i},{\bf j}}^{(r)}=b_{{\bf i},{\bf j}}^{(r)}(B)$ as \[ b_{{\bf i},{\bf j}}^{(r)}=2^{2[r/2]+1 -\delta_{{\bf i},{\bf j}}}\det B\begin{pmatrix} {\bf i} \\ {\bf j} \end{pmatrix},\] where \[\delta_{{\bf i},{\bf j}}=\begin{cases} 1 & \text{ if } {\bf i}={\bf j}, \\ 0 & \text{ otherwise}.\end{cases}\] For $B \in {\mathcal H}_n({\mathbb Z})_{>0}$ put \[G_r(B)=\mathrm{GCD}_{({\bf i},{\bf j}) \in {\mathcal I}_r \times {\mathcal I}_r} b_{{\bf i},{\bf j}}^{(r)}.\] \begin{theorem} \label{th.main-result2} Let the notation be as above. Then we have \begin{align*} &c_{I_n(h)}(B) \\ &\ll_{\varepsilon,I_n(h)} |\frkd_B|^{1/6}(\det (2B))^{k/2-(n+1)/4+\varepsilon}\prod_{i=1}^{n-1}G_i(B)^{1/2} \quad (\varepsilon >0) \end{align*} for any $B \in {\mathcal H}_n({\mathbb Z})_{>0}$. In particular, we have \begin{align*} &c_{I_n(h)}(B) \\ &\ll_{\varepsilon,I_n(h)} (\det (2B))^{k/2-n/4-1/12+\varepsilon} \prod_{i=1}^{n-1}G_i(B)^{1/2}\quad (\varepsilon >0) \end{align*} for any $B \in {\mathcal H}_n({\mathbb Z})_{>0}$. \end{theorem} \section{The Gross-Keating invariant} \label{sec:1} We first recall the definition of the Gross-Keating invariant of a quadratic form over $\frko$, following \cite{IK18}. For two matrices $B, B'\in{\mathcal H}_n(\frko)$, we sometimes write $B\sim B'$ if $B$ and $B'$ are $GL_n(\frko)$-equivalent. The $GL_n(\frko)$-equivalence class of $B$ is denoted by $\{B\}$. Let $B=(b_{ij}) \in {\mathcal H}_n(\frko)^{\rm nd}$. Let $S(B)$ be the set of all non-decreasing sequences $(a_1, \ldots, a_n)\in\ZZ_{\geq 0}^n$ such that \begin{align*} \mathrm{ord}(b_{ii})&\geq a_i, \\ \mathrm{ord}(2 b_{ij})&\geq (a_i+a_j)/2\qquad (1\leq i,j\leq n). \end{align*} Set \[ S(\{B\})=\bigcup_{B'\in\{B\}} S(B')=\bigcup_{U\in{\mathrm{GL}}_n(\frko)} S(B[U]). \] The Gross-Keating invariant (or the GK-invariant for short) ${\underline{a}}=(a_1, a_2, \ldots, a_n)$ of $B$ is the greatest element of $S(\{B\})$ with respect to the lexicographic order $\succ$ on $\ZZ_{\geq 0}^n$. Here, the lexicographic order $\succ$ is, as usual, defined as follows.
For $(y_1, y_2, \ldots, y_n), (z_1, z_2, \ldots, z_n)\in {\mathbb Z}_{\geq 0}^n$, let $j$ be the largest integer such that $y_i=z_i$ for $i<j$. Then $(y_1, y_2, \ldots, y_n)\succ (z_1, z_2, \ldots, z_n)$ if $y_j>z_j$. The Gross-Keating invariant is denoted by $\mathrm{GK}(B)$. A sequence of length $0$ is denoted by $\emptyset$. When $B$ is a matrix of degree $0$, we understand $\mathrm{GK}(B)=\emptyset$. By definition, the Gross-Keating invariant $\mathrm{GK}(B)$ depends only on the $GL_n(\frko)$-equivalence class of $B$. We say that $B\in{\mathcal H}_n(\frko)$ is an optimal form if $\mathrm{GK}(B)\in S(B)$. Any $B \in {\mathcal H}_n(\frko)$ is $GL_n(\frko)$-equivalent to an optimal form $B'$, and we then say that $B$ has an optimal decomposition $B'$. We say that $B \in {\mathcal H}_n(\frko)$ is a diagonal Jordan form if $B$ is expressed as $$B=\varpi^{a_1} u_1 \bot \cdots \bot \varpi^{a_n}u_n$$ with $a_1 \le \cdots \le a_n$ and $u_1,\ldots,u_n \in \frko^{\times}$. Then, in the non-dyadic case, the diagonal Jordan form $B$ above is optimal, and $\mathrm{GK}(B)=(a_1,\ldots,a_n)$. Therefore, the diagonal Jordan decomposition is an optimal decomposition. However, in the dyadic case, not all half-integral symmetric matrices have a diagonal Jordan decomposition, and the Jordan decomposition is not necessarily an optimal decomposition. Let $B \in {\mathcal H}_n(\frko)^{\rm nd}$, and let $\mathrm{GK}(B)=(a_1,\ldots,a_n)$. For $1 \le i \le n$ put \[\frke_i =\frke_i(B)=\begin{cases} a_1+\cdots+a_i & \text{ if } i \text{ is odd} \\ 2[(a_1+\cdots+a_i)/2] & \text{ if } i \text{ is even.} \end{cases}\] The following result is due to \cite[Theorem 0.1]{IK18}, and plays an important role in proving our main result: \begin{theorem} \label{th.GK-invariant} Let $B \in {\mathcal H}_n(\frko)^{\mathrm{nd}}$. Then \begin{align*} \frke_n(B)=\frke_B . \end{align*} \end{theorem} For our later purpose, we give an estimate of $\frke_i$. We put \[{\bf f}_r(B)=\min_{X \in M_{nr}(\frko) \atop \det B[X] \not=0} \frke_{B[X]},\] and \[{\bf d}_r(B)=\min_{X \in M_{nr}(\frko)} \mathrm{ord}_{\frkp}(\det B[X]).\] Clearly we have ${\bf f}_r(B) \le {\bf d}_r(B)$. \begin{theorem}\label{th.estimate-of-GK} Let $B \in {\mathcal H}_n(\frko)^{\rm nd}$, and let $r \le n-1$ be a positive integer. Then, \[\frke_r \le {\bf f}_r(B),\] and in particular \[\frke_r \le {\bf d}_r(B).\] \end{theorem} \begin{proof} The assertion follows from \cite[Lemma 3.8]{IK18}. \end{proof}
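To illustrate the bound (a simple worked example, recorded only for orientation), suppose that $F$ is non-dyadic and that $B=\varpi^{a_1} u_1 \bot \cdots \bot \varpi^{a_n}u_n$ is a diagonal Jordan form as above, so that $\mathrm{GK}(B)=(a_1,\ldots,a_n)$. Taking $X=\begin{pmatrix} 1_r \\ O_{n-r,r} \end{pmatrix}$ gives $B[X]=\varpi^{a_1} u_1 \bot \cdots \bot \varpi^{a_r}u_r$, whence \[\frke_{B[X]}=\frke_r(B), \qquad \mathrm{ord}_\frkp(\det B[X])=a_1+\cdots+a_r.\] Hence ${\bf f}_r(B)=\frke_r$ in this case; that is, the bound of Theorem \ref{th.estimate-of-GK} is attained.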
We express ${\bf d}_r(B)$ in terms of minors of $B$. For a sequence ${\bf i}=(i_1,\ldots,i_r)$ of integers, let $\mathrm{supp} ({\bf i})$ denote the set $\{i_1,\ldots,i_r\}$. For $B \in {\mathcal H}_n(\frko)$ put \[{\bf g}_r(B)=\min_{({\bf i},{\bf j}) \in {\mathcal I}_r \times {\mathcal I}_r} \mathrm{ord}_{\frkp}(b_{{\bf i},{\bf j}}^{(r)}).\] For $X=(x_{ij}) \in M_{nr}(R)$ and ${\bf i}=(i_1,\ldots,i_r) \in {\mathcal I}_r$, let $X({\bf i})=\det X\begin{pmatrix} 1,\ldots, r \\ i_1,\ldots,i_r \end{pmatrix}$. \begin{lemma} \label{lem.det-of-H[X]} Let $B \in {\mathcal H}_n(R)$. Then, for any $X,Y \in M_{nr}(R)$, we have \begin{align*} 2^{2[r/2]+1-\delta_{X,Y}}\det ({}^tXBY)=\sum_{({\bf i},{\bf j}) \in {\mathcal I}_r \times {\mathcal I}_r} 2^{\delta_{{\bf i},{\bf j}}-\delta_{X,Y}}b_{{\bf i},{\bf j}}^{(r)} X({\bf i})Y({\bf j}), \end{align*} where $\delta_{X,Y}=\begin{cases} 1 & \text{ if } X=Y \\ 0 & \text{otherwise}\\\end{cases}$. In particular, we have \begin{align*} &2^{2[r/2]}\det B[X] =\sum_{({\bf i},{\bf j}) \in {\mathcal I}_r \times {\mathcal I}_r \atop {\bf j} \succeq {\bf i}} b_{{\bf i},{\bf j}}^{(r)} X({\bf i})X({\bf j}). \end{align*} \end{lemma} \begin{proof} The first assertion can be proved by using \cite[II, Theorem 9 (ii)]{Satake75} twice, and the second assertion can be proved by remarking that $ \det B\begin{pmatrix} {\bf j} \\ {\bf i} \end{pmatrix} =\det B\begin{pmatrix} {\bf i} \\ {\bf j} \end{pmatrix}$. \end{proof} Let $B$ and $B'$ be elements of ${\mathcal H}_n(\frko)^{\rm nd}$ and suppose that $B'$ is $GL_n(\frko)$-equivalent to $B$. Then clearly we have ${\bf d}_r(B')={\bf d}_r(B)$. Moreover, by the above lemma, we have ${\bf g}_r(B)={\bf g}_r(B')$. \begin{theorem}\label{th.estimate-of-GK2} Let $B \in {\mathcal H}_n(\frko)^{\rm nd}$. Then, for any positive integer $r \le n$, we have \[{\bf d}_r(B)={\bf g}_r(B).\] \end{theorem} \begin{proof} The assertion clearly holds if $r=n$, and therefore we assume $r \le n-1$. By Lemma \ref{lem.det-of-H[X]}, we have ${\bf g}_r(B) \le {\bf d}_r(B)$. For each ${\bf i}=(i_1,\ldots,i_r), {\bf j}=(j_1,\ldots,j_r) \in {\mathcal I}_r$ put $s({\bf i},{\bf j})=\#(\mathrm{supp}({\bf i}) \cup \mathrm{supp}({\bf j}))$, and \[s(B)=\min_{({\bf i},{\bf j}) \in {\mathcal I}_r \times {\mathcal I}_r \atop \mathrm{ord}_\frkp(b_{{\bf i},{\bf j}}^{(r)})={\bf g}_r(B)} s({\bf i},{\bf j}).\] Then we have $r \le s(B) \le \min(n,2r)$. First let $F$ be a non-dyadic field. In view of the remark before this theorem, we may assume that $B$ is a diagonal matrix. Let $({\bf i},{\bf j}) \in {\mathcal I}_r \times {\mathcal I}_r$ be such that $\mathrm{ord}_\frkp(b_{{\bf i},{\bf j}}^{(r)})={\bf g}_r(B)$. Clearly we have $s(B)=r$, and hence ${\bf i}={\bf j}$. Permuting the rows and columns of $B$ appropriately, we may assume ${\bf i}={\bf j}=(1,\ldots,r)$. Then by Lemma \ref{lem.det-of-H[X]}, we have \[2^{2[r/2]}\det B \Big[\begin{pmatrix} 1_r \\ O_{n-r,r} \end{pmatrix}\Big]=b_{{\bf i},{\bf i}}^{(r)}.\] Hence we have ${\bf d}_r(B) \le {\bf g}_r(B)$. This proves the assertion. Next let $F$ be a dyadic field, and put $e_0=\mathrm{ord}_\frkp(2)$ as in Section 2. Then, by \cite[Section 2]{IK18} and \cite[Section 93]{Omeara73}, we may assume that \begin{align*} B=\varpi^{a_1} K_1 \bot \cdots \bot \varpi^{a_{n_1}}K_{n_1} \bot \varpi^{a_{n_1+1}} u_{n_1+1} \bot \cdots \bot \varpi^{a_{n_2}} u_{n_2}, \tag{$\bullet$} \end{align*} where $a_i \in {\mathbb Z}_{\ge 0} \ (i=1,\ldots,n_2), u_i \in \frko^\times \ (i=n_1+1,\ldots,n_2)$ and \[K_i=\begin{pmatrix} \alpha_i & \varpi^{f_i}/2 \\\varpi^{f_i}/2 & \beta_i \end{pmatrix}\] with $\alpha_i \in \frko^\times, \beta_i \in \frko, 0 \le f_i \le e_0-1$ ($i=1,\ldots,n_1)$. We claim that $s(B) \le r+1$. Suppose that $s(B) \ge r+2$. Then, clearly we have $n_1 \ge 2$. Let $({\bf i},{\bf j}) \in {\mathcal I}_r \times {\mathcal I}_r$ be such that $s({\bf i},{\bf j})=s(B)$ and $\mathrm{ord}_\frkp(b_{{\bf i},{\bf j}}^{(r)})={\bf g}_r(B)$. Let $i_k$ be the least integer such that $i_k \in \mathrm{supp} ({\bf i}) \setminus (\mathrm{supp} ({\bf i}) \cap \mathrm{supp}({\bf j}))$. Then, we have $i_k \le 2n_1$. By ($\bullet$), we have $j_k=i_k+1$ or $j_k=i_k-1$. Without loss of generality, we may assume $j_k=i_k+1$. Then, $i_k=2i-1$ and $j_k=2i$ with some $1 \le i \le n_1$. Let $i_{l}$ be the least integer such that $i_{l} >i_k$ and $i_{l} \in \mathrm{supp} ({\bf i}) \setminus (\mathrm{supp} ({\bf i}) \cap \mathrm{supp}({\bf j}))$.
Again by ($\bullet$), we have $(i_{l},j_{l})=(2j-1,2j)$ or $(i_{l},j_{l})=(2j,2j-1)$ with some $i<j \le n_1$. In the former case, \begin{align*} \det B \begin{pmatrix} {\bf i} \\ {\bf j} \end{pmatrix} &=\det B\begin{pmatrix} 2i-1, 2j-1 \\ 2i,2j\end{pmatrix} \det B\begin{pmatrix} {\bf i}'' \\ {\bf j}'' \end{pmatrix} \\ &=4^{-1}\varpi^{a_i+a_j +f_i +f_j}\det B\begin{pmatrix} {\bf i}'' \\ {\bf j}'' \end{pmatrix}, \end{align*} where $({\bf i}'',{\bf j}'') \in {\mathcal I}_{r-2} \times {\mathcal I}_{r-2}$ is such that $\mathrm{supp}({\bf i}'')=\mathrm{supp}({\bf i}) \setminus \{2i-1,2j-1\}$ and $\mathrm{supp}({\bf j}'')=\mathrm{supp}({\bf j}) \setminus \{2i,2j\}$. Without loss of generality, we may assume $a_i+f_i \le a_j+f_j$. Let $({\bf i}',{\bf j}')$ be an element of ${\mathcal I}_r \times {\mathcal I}_r$ such that $\mathrm{supp} ({\bf i}')=\mathrm{supp} ({\bf i}'') \cup \{2i-1,2i \}$ and $\mathrm{supp} ({\bf j}')=\mathrm{supp} ({\bf j}'') \cup \{2i-1,2i \}$. Then, $s({\bf i}',{\bf j}')=s({\bf i},{\bf j})-2$ and \begin{align*} \det B \begin{pmatrix} {\bf i}' \\ {\bf j}' \end{pmatrix}= \det B\begin{pmatrix} 2i-1, 2i \\ 2i-1,2i\end{pmatrix} \det B\begin{pmatrix} {\bf i}'' \\ {\bf j}'' \end{pmatrix} =\det (\varpi^{a_i} K_i) \det B\begin{pmatrix} {\bf i}'' \\ {\bf j}'' \end{pmatrix} , \end{align*} and hence \[\mathrm{ord}_\frkp(b_{{\bf i}',{\bf j}'}^{(r)}) \le \mathrm{ord}_\frkp( b_{{\bf i},{\bf j}}^{(r)}).\] In the latter case, we also obtain a similar inequality. Since $s({\bf i}',{\bf j}')<s(B)$, this contradicts the definition of $s(B)$, and the claim is proved. Suppose that $s(B)=r$. Then, in the same way as in the non-dyadic case, we prove ${\bf d}_r(B) \le {\bf g}_r(B)$. Next suppose that $s(B)=r+1$. Then, we may assume ${\bf i}=(1,\ldots,r)$ and ${\bf j}=(1,\ldots,r-1,r+1)$. If $\mathrm{ord}_{\frkp}(b_{{\bf j},{\bf j}}^{(r)})={\bf g}_r(B)$, then the assertion can be proved in the same manner as above, so we may assume that $\mathrm{ord}_{\frkp}(b_{{\bf j},{\bf j}}^{(r)}) >{\bf g}_r(B)$. Put $X=\begin{pmatrix} 1_{r-1} & 0 \\ 0 &1 \\ 0 & 1 \\ O_{n-r-1,r-1} & 0 \end{pmatrix}$. Then, again by Lemma \ref{lem.det-of-H[X]}, we have \[2^{2[r/2]}\det B[X]=b_{{\bf i},{\bf i}}^{(r)} +b_{{\bf i},{\bf j}}^{(r)} +b_{{\bf j},{\bf j}}^{(r)},\] and hence \[\mathrm{ord}_\frkp(2^{2[r/2]}\det B[X])=\mathrm{ord}_\frkp(b_{{\bf i},{\bf j}}^{(r)}).\] Hence we have ${\bf d}_r(B) \le {\bf g}_r(B)$ also in this case. This completes the proof. \end{proof} \begin{remark} For $B=(b_{ij}) \in {\mathcal H}_n(\frko)^{\rm nd}$, we have \[\frke_1=\min_{1 \le i,j \le n} \mathrm{ord}_\frkp(b_{i,j}^{(1)}).\] This has been proved in the case $F={\mathbb Q}_p$ (cf. \cite{Y04}), and can be proved in the same manner in the general case. \end{remark}
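For $r=1$ the quantities above are completely explicit (we record this only as an illustration): by definition $b_{i,j}^{(1)}=2^{1-\delta_{i,j}}b_{ij}$, so that \[{\bf g}_1(B)=\min\Bigl(\min_{1 \le i \le n} \mathrm{ord}_\frkp(b_{ii}),\ \min_{i \neq j} \mathrm{ord}_\frkp(2b_{ij})\Bigr),\] in agreement with the defining conditions of $S(B)$ and with the above description of $\frke_1$.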
\section{Estimate of $\widetilde F_\frkp(B,\alpha)$} In this section we estimate $\widetilde F_\frkp(B,\alpha)$ for $B \in {\mathcal H}_n(\frko)^\mathrm{nd}$ with $n$ even and $\alpha \in {\mathbb C}^\times$. This is one of the key ingredients in the proof of our main result. We recall the definition of a naive $\mathrm{EGK}$ datum (cf. \cite{IK18}). Let ${\mathcal Z}_3=\{0,1,-1 \}$. \begin{definition} \label{def.NEGK} An element $(a_1,\ldots,a_n;\varepsilon_1,\ldots,\varepsilon_n)$ of ${\mathbb Z}_{\ge 0}^n \times {\mathcal Z}_3^n$ is said to be a naive $\mathrm{EGK}$ datum of length $n$ if the following conditions hold: \begin{itemize} \item [(N1)] $a_1 \le \cdots \le a_n$. \item [(N2)] Assume that $i$ is even. Then $\varepsilon_i \not=0$ if and only if $a_1+\cdots+a_i$ is even. \item [(N3)] Assume that $i$ is odd. Then $\varepsilon_i \not=0$. \item[(N4)] $\varepsilon_1=1$. \item[(N5)] Let $i \ge 3$ be an odd integer and assume that $a_1+\cdots + a_{i-1}$ is even. Then $\varepsilon_i=\varepsilon_{i-1}^{a_i+a_{i-1}}\varepsilon_{i-2}$. \end{itemize} We denote by $\mathcal{NEGK}_n$ the set of all naive $\mathrm{EGK}$ data of length $n$. \end{definition} \begin{definition} \label{def.mono-ass-NEGK} For integers $e,\widetilde e$ and a real number $\xi$, define rational functions $C(e,\widetilde e,\xi;Y,X)$ and $D(e,\widetilde e,\xi;Y,X)$ in $Y^{1/2}$ and $X^{1/2}$ by \[C(e,\widetilde e,\xi;Y,X)={Y^{\widetilde e/2}X^{-(e- \widetilde e)/2-1}(1-\xi Y^{-1} X) \over X^{-1}-X} \] and \[D(e,\widetilde e,\xi;Y,X)= {Y^{\widetilde e/2}X^{-(e-\widetilde e)/2} \over 1- \xi X} .\] \end{definition} For a positive integer $i$ put $$C_i(e,\widetilde e,\xi;Y,X)= \begin{cases} C(e,\widetilde e,\xi;Y,X) & \text { if $i$ is even } \\ D(e,\widetilde e,\xi;Y,X) & \text{ if $i$ is odd.} \end{cases}$$ \begin{definition} \label{def.integer-ass-sequence} For a sequence $\underline a=(a_1,\ldots,a_n)$ of integers and an integer $1 \le i \le n$, we define $\frke_i=\frke_i(\underline a)$ as $$\frke_i= \begin{cases} a_1+\cdots +a_i & \text{ if $i$ is odd} \\ 2[(a_1+\cdots+a_i)/2] & \text{ if $i$ is even.} \end{cases}$$ We also put $\frke_0=0$. \end{definition} For a naive $\mathrm{EGK}$ datum $(a_1,\ldots,a_n;\varepsilon_1,\ldots,\varepsilon_n)$ and an integer $1 \le m\le n$, put $H_m=(a_1,\ldots,a_m;\varepsilon_1,\ldots,\varepsilon_m)$. Then $H_m$ is also a naive $\mathrm{EGK}$ datum of length $m$. \begin{definition} \label{def.pol-ass-NEGK} For a naive $\mathrm{EGK}$ datum $H=(a_1,\ldots,a_n;\varepsilon_1,\ldots,\varepsilon_n)$ we define a rational function ${\mathcal F}(H;Y,X)$ in $X^{1/2}$ and $Y^{1/2}$ as follows: First we define $${\mathcal F}(H;Y,X)=X^{-a_1/2}+X^{-a_1/2+1}+\cdots+X^{a_1/2-1}+X^{a_1/2}$$ if $n=1$. Let $n>1$. Then $H'= (a_1,\ldots,a_{n-1};\varepsilon_1,\ldots,\varepsilon_{n-1})$ is a naive $\mathrm{EGK}$ datum of length $n-1$. Assume that ${\mathcal F}(H';Y,X)$ is defined for $H'$. Then, we define ${\mathcal F}(H;Y,X)$ as \begin{align*} &{\mathcal F}(H;Y,X)=C_n(\frke_n,\frke_{n-1},\xi;Y,X){\mathcal F}(H';Y,YX)\\ &+\zeta C_n(\frke_n,\frke_{n-1},\xi;Y,X^{-1}){\mathcal F}(H';Y,YX^{-1}), \end{align*} where $\xi=\varepsilon_n$ or $\varepsilon_{n-1}$ according as $n$ is even or odd, and $\zeta=1$ or $\varepsilon_n$ according as $n$ is even or odd. \end{definition} The following result is due to \cite[Proposition 4.1]{IK22}. \begin{proposition} \label{prop.fc} Let $H$ be a naive $\mathrm{EGK}$ datum of length $n$. Then we have \begin{align*} {\mathcal F}(H;Y,X^{-1})=\zeta {\mathcal F}(H;Y,X), \end{align*} where $\zeta=\varepsilon_n$ or $1$ according as $n$ is odd or even. \end{proposition} For a naive $\mathrm{EGK}$ datum $H$, let ${\mathcal G}(H;Y,X)=X^{\frke_n/2}{\mathcal F}(H;Y,X)$. It follows from the proof of \cite[Proposition 4.2]{IK22} that ${\mathcal G}(H;Y,X)$ is a polynomial in $X$ of degree $\frke_{n}$ with coefficients in $ {\mathbb Q}[Y,Y^{-1}]$; we write \begin{align*} {\mathcal G}(H;Y,X)=\sum_{i=0}^{\frke_n} a_i(H,Y)X^i. \end{align*}
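For orientation, we note the simplest case (an immediate consequence of Definition \ref{def.pol-ass-NEGK}): if $n=1$, then \[{\mathcal G}(H;Y,X)=X^{a_1/2}{\mathcal F}(H;Y,X)=1+X+\cdots+X^{a_1},\] so that $a_i(H,Y)=1$ for all $i$; this trivial case already exhibits the bound of Theorem \ref{th.estimate-of-CF-of-GH} below, in which the empty products are understood to be $1$.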
We give an induction formula for ${\mathcal G}(H;Y,X)$. \begin{theorem}\label{th.induction-formula-for-G} Let $H=(a_1,\ldots,a_n;\varepsilon_1,\ldots,\varepsilon_n)$ be a naive $\mathrm{EGK}$ datum of length $n$, and put $a_i(Y)=a_i(H,Y)$ and $b_i(Y)=a_i(H_{n-1};Y)$. \begin{itemize} \item[(1)] Let $n$ be an even integer such that $n \ge 2$. Then, for any $l=0,\ldots,\frke_n$, $a_l(Y)$ is expressed as \begin{align*} a_l(Y)=&\sum_{\max((l-\frke_{n-1})/2,0) \le j \le l/2} b_{l-2j}(Y)Y^{l-2j}\\ &-\varepsilon_n\sum_{\max((l-1-\frke_{n-1})/2,0) \le j \le (l-1)/2} b_{l-1-2j}(Y)Y^{l-2-2j}\\ &-\sum_{0 \le j \le (\frke_{n-1}+l-\frke_n-2)/2} b_{\frke_n-l-2j+2}(Y)Y^{\frke_n-l+2j+2}\\ &+\varepsilon_n \sum_{0 \le j \le (\frke_{n-1}+l-\frke_n-1)/2} b_{\frke_n-l-2j+1}(Y)Y^{\frke_n-l+2j}. \end{align*} \item[(2)] Let $n$ be odd such that $n \ge 3$. \begin{itemize} \item[(2.1)] Assume that $\varepsilon_{n-1} \not=0$. Then, for any $l=0,\ldots,\frke_n$, $a_l(Y)$ is expressed as \begin{align*} a_l(Y)& =\sum_{\max(l-\frke_{n-1},0) \le j \le l } b_{l-j}(Y)Y^{l-j}\\ &-\varepsilon_n \sum_{0 \le j \le \frke_{n-1}+l-\frke_n-1} b_{\frke_n-l-2j+2}(Y)Y^{\frke_n-l+j+1}\varepsilon_{n-1}^j. \end{align*} \item[(2.2)] Assume that $\varepsilon_{n-1} =0$. Then, for any $l=0,\ldots,\frke_n$, $a_l(Y)$ is expressed as \[a_l(Y)=b_l(Y) Y^l +\varepsilon_n b_{\frke_n-l}(Y)Y^{\frke_n-l}.\] \end{itemize} \end{itemize} Throughout (1) and (2), we make the convention that the sum $\sum_{0 \le j \le a} (*) $ is zero if $a<0$. We also understand $b_j=0$ if $j<0$ or $j >\frke_{n-1}$. \end{theorem} \begin{proof} Let $n$ be even. Then \begin{align*} &{\mathcal G}(H;Y,X) \tag{A}\\ &={(1-Y^{-1}\varepsilon_n X){\mathcal G}(H_{n-1};Y,YX) \over 1-X^2} \\ &-{X^{\frke_n+2} (1-Y^{-1}\varepsilon_n X^{-1}){\mathcal G}(H_{n-1};Y,YX^{-1}) \over 1-X^2}. \end{align*} Let ${\mathcal H}(Y,X)$ denote the right-hand side of (A). Then, as a formal power series in $X$, ${\mathcal H}(Y,X)$ can be written as \begin{align*} &{\mathcal H}(Y,X)\\ &=(1-Y^{-1}\varepsilon_n X){\mathcal G}(H_{n-1};Y,YX) \sum_{j=0}^{\infty} X^{2j}\\ &-X^{\frke_n+2} (1-Y^{-1}\varepsilon_n X^{-1}){\mathcal G}(H_{n-1};Y,YX^{-1})\sum_{j=0}^{\infty} X^{2j}\\ &=(1-Y^{-1}\varepsilon_nX)\sum_{i=0}^{\frke_{n-1}} b_i(Y)(YX)^i \sum_{j=0}^{\infty} X^{2j}\\ &-X^{\frke_n+2}(1-Y^{-1}\varepsilon_n X^{-1})\sum_{i=0}^{\frke_{n-1}} b_i(Y)(YX^{-1})^i \sum_{j=0}^{\infty} X^{2j}\\ &=\sum_{l=0}^{\infty} (\sum_{0 \le i \le \frke_{n-1}, j \ge 0 \atop i+2j=l} b_i(Y)Y^i)X^l -Y^{-1}\varepsilon_n\sum_{l=0}^{\infty} (\sum_{0 \le i \le \frke_{n-1}, j \ge 0 \atop i+2j=l-1} b_i(Y)Y^i)X^l \\ &-\sum_{l=0}^{\infty} (\sum_{0 \le i \le \frke_{n-1}, j \ge 0 \atop \frke_n-i+2j=l} b_i(Y)Y^i)X^l+Y^{-1}\varepsilon_n\sum_{l=0}^{\infty} (\sum_{0 \le i \le \frke_{n-1}, j \ge 0 \atop \frke_n-i+2j=l+1} b_i(Y)Y^i)X^l. \end{align*} Since ${\mathcal G}(H;Y,X)$ is a polynomial in $X$ of degree $\frke_n$, the $l$-th coefficient of ${\mathcal H}(Y,X)$ as a power series in $X$ is $a_l(Y)$ or $0$ according as $l \le \frke_n$ or $l >\frke_n$, and by a simple computation we prove the assertion. Let $n$ be odd. Then we have \begin{align*} {\mathcal G}(H;Y,X)= {{\mathcal G}(H_{n-1};Y,YX) \over 1-\varepsilon_{n-1} X}-\varepsilon_n {X^{\frke_n+1}{\mathcal G}(H_{n-1};Y,YX^{-1}) \over 1-\varepsilon_{n-1}X}.\tag{B} \end{align*} Assume that $\varepsilon_{n-1}\not=0$. Then, as a formal power series in $X$, the right-hand side of (B) can be written as $${\mathcal G}(H_{n-1};Y,YX) \sum_{j=0}^{\infty} (\varepsilon_{n-1} X)^j -\varepsilon_n X^{\frke_n+1}{\mathcal G}(H_{n-1};Y,YX^{-1}) \sum_{j=0}^{\infty} (\varepsilon_{n-1}X)^j.$$ Then the assertion can be proved in the same manner as above. Let $n$ be odd, and $\varepsilon_{n-1}=0$.
Then $${\mathcal G}(H;Y,X)={\mathcal G}(H_{n-1};Y,YX)+\varepsilon_n X^{\frke_n}{\mathcal G}(H_{n-1};Y,YX^{-1}).$$ That is, \[{\mathcal G}(H;Y,X)=\sum_{i=0}^{\frke_{n-1}} b_i (Y)(YX)^i+\varepsilon_nX^{\frke_n}\sum_{i=0}^{\frke_{n-1}}b_i(Y)(YX^{-1})^i.\] Thus the assertion directly follows. \end{proof} \begin{theorem} \label{th.estimate-of-CF-of-GH} Let $H=(a_1,\ldots,a_n;\varepsilon_1,\ldots,\varepsilon_n)$ be a naive $\mathrm{EGK}$ datum of length $n$. Let $q$ be a positive integer. Then \begin{align*} |a_i(q^{1/2})| \le \prod_{l=1}^{n-1} (\frke_{l}+1) \prod_{l=1}^{n-1} q^{\frke_{l}/2} \tag{C} \end{align*} for any $i=0,\ldots,\frke_n$. \end{theorem} \begin{proof} We prove the assertion by induction on $n$. The assertion clearly holds if $n=1$. Let $n \ge 2$ and assume that the assertion holds for any naive $\mathrm{EGK}$ datum of length $n'<n$. By the functional equation of ${\mathcal F}(H;Y,X)$, it suffices to prove the assertion for $0 \le i \le \frke_n/2$. Put $b_i=b_i(q^{1/2})$. Let $n$ be even. Then, by Theorem \ref{th.induction-formula-for-G}, (1), for $0 \le i \le \frke_n/2$, $a_i(q^{1/2})$ is given by \begin{align*} a_i(q^{1/2}) &=\sum_{\max((i-\frke_{n-1})/2,0) \le j \le i/2} b_{i-2j}q^{(i-2j)/2} \\ &-\varepsilon_n \sum_{\max((i-1-\frke_{n-1})/2,0) \le j \le (i-1)/2} b_{i-2j-1}q^{(i-2j-2)/2} \\ &-\sum_{0 \le j \le (i-\frke_n+\frke_{n-1}-2)/2} b_{\frke_n+2-2j-i}q^{(\frke_n+2-2j-i)/2} \\ &+\varepsilon_n \sum_{0 \le j \le (i-\frke_n+\frke_{n-1}-1)/2} b_{\frke_n+1-2j-i}q^{(\frke_n-2j-i)/2}. \end{align*} Here we make the convention that the sum $\sum_{0 \le j \le a} (*) $ is zero if $a<0$. We also understand $b_j=0$ if $j<0$ or $j >\frke_{n-1}$. For each $i=0,\ldots,\frke_n/2$, put \begin{align*} {\mathbb B}_{i,1}&=\#(\{ j \in {\mathbb Z} \ | \ \max((i-\frke_{n-1})/2,0) \le j \le i/2 \}),\\ {\mathbb B}_{i,2}&=\#(\{ j \in {\mathbb Z} \ | \ \max((i-1-\frke_{n-1})/2,0) \le j \le (i-1)/2 \}),\\ {\mathbb B}_{i,3}&=\#(\{ j \in {\mathbb Z} \ | \ 0 \le j \le (i-\frke_n+\frke_{n-1}-2)/2 \}),\\ {\mathbb B}_{i,4}&=\#(\{ j \in {\mathbb Z} \ | \ 0 \le j \le (i-\frke_n+\frke_{n-1}-1)/2 \}), \end{align*} and ${\mathbb B}_i={\mathbb B}_{i,1}+{\mathbb B}_{i,2}+{\mathbb B}_{i,3}+{\mathbb B}_{i,4}$. By the induction assumption, we have \[|b_i| \le \prod_{l=1}^{n-2} (\frke_{l}+1) \prod_{l=1}^{n-2} q^{\frke_{l}/2}.\] Hence we have \begin{align*} |a_i(q^{1/2})| & \le {\mathbb B}_i q^{\frke_{n-1}/2} \prod_{l=1}^{n-2} (\frke_{l}+1) \prod_{l=1}^{n-2} q^{\frke_{l}/2}. \end{align*} We claim that \begin{align*} {\mathbb B}_i \le \frke_{n-1}+1. \tag{D} \end{align*} To prove this we note that \begin{align*} \#(\{ j \in {\mathbb Z} \ | \ \alpha/2 \le j \le \beta/2 \}) = \begin{cases} (\beta-\alpha+2)/2 & \text{ if } \alpha,\beta \text { are even} \\ (\beta-\alpha)/2 & \text{ if } \alpha,\beta \text{ are odd} \\ (\beta-\alpha+1)/2 & \text{ otherwise} \end{cases} \tag{E} \end{align*} for any integers $0 \le \alpha \le \beta$. Assume $i \ge \frke_{n-1}+1$. Then, $\max((i-\frke_{n-1})/2,0)=(i-\frke_{n-1})/2$ and $\max((i-1-\frke_{n-1})/2,0)=(i-1-\frke_{n-1})/2$, and by (E), we easily see that we have ${\mathbb B}_{i,1}+{\mathbb B}_{i,2}=\frke_{n-1}+1$. Moreover, since we have $\frke_{n-1}+1 \le i \le \frke_n/2$, we have \[(i-\frke_n+\frke_{n-1}-2)/2 <(i-\frke_n+\frke_{n-1}-1)/2 \le (i-\frke_n/2-2)/2<0,\] and hence ${\mathbb B}_{i,3}+{\mathbb B}_{i,4}=0$. This proves (D). Assume that $i \le \frke_{n-1}$. Then, $\max((i-\frke_{n-1})/2,0)=\max((i-1-\frke_{n-1})/2,0)=0$, and by (E), we have ${\mathbb B}_{i,1}+{\mathbb B}_{i,2}=i+1$.
Assume that $i-\frke_n+\frke_{n-1}-2 \ge 0$. Then, by (E), we have ${\mathbb B}_{i,3}+{\mathbb B}_{i,4}=i-\frke_n+\frke_{n-1}$, and hence ${\mathbb B}_i=2i-\frke_n+\frke_{n-1}+1\le \frke_{n-1}+1$, which proves (D). Assume that $i-\frke_n+\frke_{n-1}-1 \le 0$. Then, we have ${\mathbb B}_{i,3}+{\mathbb B}_{i,4} \le 1$. Moreover, since we have \[i \le \frke_n-\frke_{n-1}+1 \le \frke_{n-1},\] and $\frke_n/2$ is an integer, we have $\frke_n/2 \le \frke_{n-1}-1$. Hence we have \[{\mathbb B}_i \le i+2 \le \frke_n/2+2 \le \frke_{n-1}+1.\] This proves (D), and hence (C). Let $n$ be odd and assume that $\varepsilon_{n-1} \not=0$. Then, by Theorem \ref{th.induction-formula-for-G}, (2.1), for $0 \le i \le \frke_n/2$, $a_i(q^{1/2})$ is given by \begin{align*} a_i(q^{1/2})&=\sum_{\max(i-\frke_{n-1},0) \le j \le i } b_{i-j}q^{(i-j)/2}\\ &-\varepsilon_n \sum_{0 \le j \le \frke_{n-1}+i-\frke_n-1} b_{\frke_n-i-2j+2}q^{(\frke_n-i+j+1)/2}\varepsilon_{n-1}^j. \end{align*} Then the assertion can be proved in the same manner as above. Finally let $n$ be odd and assume that $\varepsilon_{n-1}=0$. Then, we have \begin{align*} {\mathcal G}(H;q^{1/2},X)=\sum_{i=0}^{\frke_{n-1}} b_i (q^{1/2}X)^i +\varepsilon_n X^{\frke_n}\sum_{i=0}^{\frke_{n-1}} b_i (q^{1/2}X^{-1})^i. \end{align*} Then, for $i \le \frke_n/2$, $a_i(q^{1/2})$ is given by \begin{align*} a_i(q^{1/2})=q^{i/2}b_i +\varepsilon_n q^{(\frke_n-i)/2}b_{\frke_n-i}. \end{align*} Since $a_1+\cdots+a_{n-1}$ is odd, we have $\frke_{n-1}+1 \ge 2$. Thus the assertion can be proved by the induction assumption. This completes the induction. \end{proof} \begin{theorem} \label{th.estimate-of-FH} Let $H$ be as in Theorem \ref{th.estimate-of-CF-of-GH}. Let $s \in {\mathbb C}$ be such that $\mathrm{Re}(s) \le r_0$. Then \[|{\mathcal F}(H;q^{1/2},q^s)| \le q^{\frke_n r_0/2} \prod_{i=1}^n (\frke_i+1) \prod_{i=1}^{n-1} q^{\frke_{i}/2}.\] \end{theorem} \begin{proof} We have \begin{align*} {\mathcal F}(H;q^{1/2},q^s)=\sum_{i=-\frke_n/2}^{-1} a_{i+\frke_n/2}(q^{1/2})q^{si} +\sum_{i=0}^{\frke_n/2} a_{i+\frke_n/2}(q^{1/2})q^{si}. \end{align*} Thus the assertion follows from Theorem \ref{th.estimate-of-CF-of-GH}. \end{proof} \begin{corollary} \label{cor.estimate-of-FH} Let the notation be as above. Let $n$ be even. Then \[|{\mathcal F}(H;q^{1/2},q^s)| \le q^{r_0\frke_n/2}q^{(n-1)\frke_n/4}\prod_{i=1}^n (\frke_i+1) .\] \end{corollary} \begin{proof} By definition, \begin{align*} \sum_{i=1}^{n-1} \frke_{i} \le \sum_{i=1}^{n-1} (n-i)a_i. \end{align*} For $\underline a=(a_1,\ldots,a_n)$, put $|\underline a|=a_1+\cdots+a_n$. First suppose that $|\underline a|$ is even. Then, by definition, we have $\frke_n=|\underline a|$, and hence \begin{align*} &2\sum_{i=1}^{n-1} (n-i)a_i -(n-1) \frke_n =\sum_{i=1}^n (n-2i+1)a_i \\ & = \sum_{i=1}^{n/2}(n-2i+1)(a_i-a_{n-i+1}) \le 0. \end{align*} This implies that we have \begin{align*} \sum_{i=1}^{n-1} (n-i)a_i \le \frac{n-1} 2 \frke_n,\end{align*} and hence \begin{align*} \sum_{i=1}^{n-1} \frke_{i} \le \frac{n-1} 2 \frke_n. \end{align*} Next suppose that $|\underline a|$ is odd. Then, again by definition, we have $\frke_n=|\underline a|-1$, and $a_n \ge a_1+1$. Hence we have \begin{align*} &2\sum_{i=1}^{n-1} (n-i)a_i -(n-1) \frke_n =\sum_{i=1}^n (n-2i+1)a_i +(n-1)\\ & = \sum_{i=2}^{n/2}(n-2i+1)(a_i-a_{n-i+1}) +(n-1)(a_1+1-a_n) \le 0. \end{align*} Thus the assertion can be proved in the same manner as above. \end{proof} \begin{theorem} \label{th.H-and-Siegel-series} Let $q=\#(\frko/\frkp)$. Let $B \in {\mathcal H}_n(\frko)^{\mathrm{nd}}$.
Then there is a naive $\mathrm{EGK}$ datum $H=(a_1,\ldots,a_n;\varepsilon_1,\ldots,\varepsilon_n)$ of length $n$ such that $(a_1,\ldots,a_n)$ is the Gross-Keating invariant of $B$ and \[{\mathcal F}(H;q^{1/2},X)=\widetilde F_{\frkp}(B,X).\] \begin{proof} The assertion follows from \cite[Theorem 4.3]{IK18}, \cite[Corollary 5.1]{IK18}, \cite[Theorem 1.1]{IK22}, and \cite[Proposition 4.5]{IK22}. \end{proof} By Corollary \ref{cor.estimate-of-FH} and Theorems \ref{th.estimate-of-FH} and \ref{th.H-and-Siegel-series}, we immediately have the following theorem. \begin{theorem}\label{th.estimate-of-F} Let $B \in {\mathcal H}_n(\frko)^{\mathrm{nd}}$ with $n$ even, and let $\alpha=q^s$ with $s \in {\mathbb C}$ such that $\mathrm{Re}(s) \le r_0$. Let $\frke_i=\frke_i(B)$ be as in Section 4. Then we have the following estimates. \begin{itemize} \item[(1)] \begin{align*} |\widetilde F_\frkp(B,\alpha)| \le q^{r_0\frke_n/2}q^{(n-1)\frke_n/4} \prod_{i=1}^n (\frke_i +1). \end{align*} \item[(2)] \begin{align*} |\widetilde F_\frkp(B,\alpha)| \le q^{r_0\frke_n/2} \prod_{i=1}^{n-1} q^{\frke_{i}/2} \prod_{i=1}^n (\frke_i +1). \end{align*} \end{itemize} \end{theorem} \section{Proofs of Theorems \ref{th.main-result} and \ref{th.main-result2}} In this section we prove our main results. \begin{lemma} \label{lem.Conrey-Iwaniec} Let $\kappa \in {1 \over 2} + {\mathbb Z}$ be such that $\kappa \ge 13/2$. Let $g(z)=\sum_{m=1}^\infty c_g(m){\bf e}(mz) \in S_\kappa(\varGamma_0(4))$. Then, for any fundamental discriminant $D$ we have \begin{align*} c_g(|D|) \ll_{\varepsilon,g} |D|^{\kappa/2-1/3+\varepsilon} \ (\varepsilon>0). \end{align*} \end{lemma} \begin{proof} It is known that $D$ can be expressed as $D=2^sD'$, with $s=0,2$ or $3$ and $D'$ a squarefree odd integer. Suppose that $s=0$. Then, by \cite[Corollary 1.3]{CI00}, we have \begin{align*} c_g(|D|) \ll_{\varepsilon, g} |D|^{\kappa/2-1/3+\varepsilon} \ (\varepsilon>0). \end{align*} Suppose that $s \ge 2$. Let $T_{\kappa,1}^4(4)$ be the operator on $S_{\kappa}(\varGamma_0(4))$ in \cite[page 450]{Sh73}. Let \[\widetilde g(z):=g|T_{\kappa,1}^4(4)(z)=\sum_{m=1}^\infty b(m){\bf e}(mz).\] Then, by \cite[Theorem 1.7]{Sh73}, we have $c_g(|D|)=b(2^{s-2}|D'|)$. Since $\widetilde g$ belongs to $S_\kappa(\varGamma_0(4))$ and $2^{s-2}|D'|$ is squarefree, again by \cite[Corollary 1.3]{CI00}, we have \begin{align*} c_g(|D|) =b(2^{s-2}|D'|) \ll_{\varepsilon,\widetilde g} |2^{s-2}D'|^{\kappa/2-1/3+\varepsilon} \ll_{\varepsilon,g} |D|^{\kappa/2-1/3+\varepsilon} \ (\varepsilon >0). \end{align*} This completes the proof. \end{proof} \begin{remark}\label{rem.unstability-of-Kohnen-plus-space} For $g \in S_\kappa^+(\varGamma_0(4))$, $g|T_{\kappa,1}^4(4)$ does not necessarily belong to $S_\kappa^+(\varGamma_0(4))$. \end{remark} For an element $C \in {\mathcal H}_r({\mathbb Z})_{>0}$, put \[\boldsymbol{\Delta}(C)=\begin{cases} \frkf_C^2 & \text{ if } r \text{ is even} \\ 2^{r-1}\det C & \text{ if } r \text{ is odd.} \end{cases}\] Let $B =(b_{ij}) \in {\mathcal H}_n({\mathbb Z})_{>0}$. For each prime number $p$, we denote by $\mathrm{GK}(B)^{(p)}=\underline a^{(p)}=(a_1^{(p)},\ldots,a_n^{(p)})$ the Gross-Keating invariant of $B$ viewed as an element of ${\mathcal H}_n({\mathbb Z}_p)$, and for $r=1,\ldots,n$ put \[\frke_{r,B}^{(p)}=\begin{cases} 2[(a_1^{(p)}+\cdots+a_r^{(p)})/2] & \text{ if } r \text{ is even}\\ a_1^{(p)}+\cdots+a_r^{(p)} & \text{ if } r \text{ is odd}, \end{cases}\] and in particular put $\frke_B^{(p)}=\frke_{n,B}^{(p)}$.
Moreover, for $r \le n$, put \[\mathscr{E}_r(B)=\mathrm{GCD}_{X \in M_{nr}({\mathbb Z}) \atop B[X] >0} \boldsymbol{\Delta}(B[X]),\] \[D_r(B)=\mathrm{GCD}_{X \in M_{nr}({\mathbb Z}) \atop B[X] >0} 2^{2[r/2]} \det B[X].\] Clearly ${\mathscr E}_r(B)$ divides $D_r(B)$. \begin{lemma}\label{lem.estimate-of-GGK} Let $B \in {\mathcal H}_n({\mathbb Z})_{>0}$ with $n$ even. \begin{itemize} \item[(1)] For any positive integer $r \le n$, the product $\prod_{p | \frkf_B} p^{\frke_{r,B}^{(p)}} $ divides ${\mathscr E}_r(B)$. In particular, we have \[\prod_{p | \frkf_B} p^{\frke_{n,B}^{(p)}} =\frkf_B^2.\] \item[(2)] For any positive integer $r \le n$, $G_r(B)=D_r(B)$. \end{itemize} \end{lemma} \begin{proof} The assertion (1) follows from Theorems \ref{th.GK-invariant} and \ref{th.estimate-of-GK}. We prove (2). By Lemma \ref{lem.det-of-H[X]}, $G_r(B)$ divides $D_r(B)$. To prove that $D_r(B)$ divides $G_r(B)$, for each prime number $p$, we denote by ${\bf g}_r(B)^{(p)}$ and ${\bf d}_r(B)^{(p)}$ the quantities ${\bf g}_r(B)$ and ${\bf d}_r(B)$ defined in Section 3, respectively, viewing $B$ as an element of ${\mathcal H}_n({\mathbb Z}_p)^{\rm nd}$. Let $X \in M_{nr}({\mathbb Z}_p)$ and suppose that $\det B[X] \not=0$. Then we can take $X_0 \in M_{nr}({\mathbb Z})$ such that $X_0 \equiv X \text{ mod } p^eM_{nr}({\mathbb Z}_p)$ with $e > \mathrm{ord}_p(\det B[X])$. Then, by definition, we have \[\mathrm{ord}_p(D_r(B)) \le \mathrm{ord}_p(\det B[X_0])=\mathrm{ord}_p(\det B[X]).\] By Theorem \ref{th.estimate-of-GK2}, this implies that \[\mathrm{ord}_p(D_r(B)) \le {\bf d}_r(B)^{(p)} ={\bf g}_r(B)^{(p)},\] and hence $D_r(B)$ divides $\prod_p p^{{\bf g}_r(B)^{(p)}}=G_r(B)$. This proves the assertion (2). \end{proof} The following theorem is a refined version of Theorem \ref{th.main-result}. \begin{theorem}\label{th.refined-estimate} Let the notation and the assumption be as in Theorem \ref{th.main-result}. Let $B \in {\mathcal H}_n({\mathbb Z})_{>0}$. Then we have the following estimates. \begin{itemize} \item[(1)] We have \begin{align*} &|c_{I_n(h)}(B)| \le |c_h(|\frkd_B|)|\frkf_B^{k-1} \prod_{i=1}^n \prod_{p | \frkf_B}(1+\frke_{i,B}^{(p)}). \end{align*} \item[(2)] We have \begin{align*} &|c_{I_n(h)}(B)| \le |c_h(|\frkd_B|)|\frkf_B^{k-(n+1)/2} \prod_{i=1}^{n-1} {\mathscr E}_i(B)^{1/2} \prod_{i=1}^n \prod_{p | \frkf_B}(1+\frke_{i,B}^{(p)}), \end{align*} and in particular we have \begin{align*} &|c_{I_n(h)}(B)| \le |c_h(|\frkd_B|)|\frkf_B^{k-(n+1)/2} \prod_{i=1}^{n-1} G_i(B)^{1/2} \prod_{i=1}^n \prod_{p | \frkf_B}(1+\frke_{i,B}^{(p)}). \end{align*} \end{itemize} \end{theorem} \begin{proof} From now on we write $\widetilde F_p(B,X)$ instead of $\widetilde F_\frkp(B,X)$ if $\frkp=(p)$. We have \begin{align*} c_{I_n(h)}(B)=c_h(|\frkd_B|)\frkf_B^{k-(n+1)/2} \prod_{p | \frkf_B} \widetilde F_p(B,\alpha_p). \end{align*} We have $|\alpha_p|=1$ for any prime number $p$. Hence, by Theorem \ref{th.estimate-of-F} (1) and Lemma \ref{lem.estimate-of-GGK}, we have \begin{align*} &|\prod_{p | \frkf_B} \widetilde F_p(B,\alpha_p)| \\ &\le \prod_{p | \frkf_B}p^{(n-1)\frke_B^{(p)}/4} \prod_{i=1}^n \prod_{p | \frkf_B}(1+\frke_{i,B}^{(p)})\\ &=\frkf_B^{(n-1)/2}\prod_{i=1}^n \prod_{p | \frkf_B}(1+\frke_{i,B}^{(p)}). \end{align*} This proves the assertion (1). Similarly, by Theorem \ref{th.estimate-of-F} (2), we have \begin{align*} &|\prod_{p | \frkf_B} \widetilde F_p(B,\alpha_p)| \\ &\le \prod_{i=1}^{n-1} \prod_{p | \frkf_B} p^{\frke_{i,B}^{(p)}/2} \prod_{i=1}^n \prod_{p | \frkf_B}(1+\frke_{i,B}^{(p)}).
\end{align*} By Lemma \ref{lem.estimate-of-GGK} (1) and the remark before it, we have \[\prod_{p | \frkf_B} p^{\frke_{i,B}^{(p)}/2} \le {\mathscr E}_i(B)^{1/2} \le D_i(B)^{1/2}=G_i(B)^{1/2}.\] This proves the assertion (2). \end{proof} {\bf Proofs of Theorems \ref{th.main-result} and \ref{th.main-result2}.} By Theorem \ref{th.GK-invariant}, we have $\frke_B^{(p)}=2\mathrm{ord}_p(\frkf_B)$ and $\frke_{i,B}^{(p)} \le \mathrm{ord}_p(\det (2B))$ for any $i=1,\ldots,n$. Hence we have \[\prod_{p | \frkf_B} (1+\frke_{i,B}^{(p)}) \le d(\det (2B))\] for any $i=1,\ldots,n$, where $d(a)$ is the number of positive divisors of $a$ for $a \in {\mathbb Z}_{>0}$. Hence, by Theorem \ref{th.refined-estimate} (1), we have \begin{align*} |c_{I_n(h)}(B)| & \le |c_h(|\frkd_B|)|\frkf_B^{k-1} d(\det (2B))^n\\ &= |c_h(|\frkd_B|)| \, |\frkd_B|^{-k/2 +1/2}d(\det (2B))^n(\det (2B))^{(k-1)/2}. \end{align*} By Lemma \ref{lem.Conrey-Iwaniec} we have \begin{align*} c_h(|\frkd_B|) \ll_{\varepsilon,h} |\frkd_B|^{k/2-n/4-1/12+\varepsilon} \quad (\varepsilon >0). \end{align*} Hence we have \begin{align*} &c_{I_n(h)}(B) \\ &\ll_{\varepsilon, I_n(h)} |\frkd_B|^{-n/4+5/12+\varepsilon} d(\det (2B))^n(\det (2B))^{(k-1)/2}. \end{align*} We have $|\frkd_B|^{\varepsilon} \le \det (2B)^{\varepsilon}$ and \begin{align*} d(\det (2B))^n \ll_{\varepsilon} (\det (2B))^\varepsilon \quad (\varepsilon >0) \end{align*} for any $B \in {\mathcal H}_n({\mathbb Z})_{>0}$. Thus we complete the proof of Theorem \ref{th.main-result}. Similarly, we can prove Theorem \ref{th.main-result2}. \end{document}
\begin{document} \preprint{} \title{ Bound-state eigenenergy outside and inside the continuum \\ for unstable multilevel systems } \author{Manabu Miyamoto} \email{[email protected] } \affiliation{ Department of Physics, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan } \date{\today} \begin{abstract} The eigenvalue problem for the dressed bound-state of unstable multilevel systems is examined both outside and inside the continuum, based on the $N$-level Friedrichs model, which describes the couplings between the discrete levels and the continuous spectrum. It is shown that a bound-state eigenenergy always exists below each of the discrete levels that lie outside the continuum. Furthermore, by strengthening the couplings gradually, the eigenenergy corresponding to each of the discrete levels inside the continuum finally emerges. On the other hand, the absence of the eigenenergy inside the continuum is proved in weak but finite coupling regimes, provided that each of the form factors that determine the transition between some definite level and the continuum does not vanish at that energy level. An application to the spontaneous emission process for the hydrogen atom interacting with the electromagnetic field is demonstrated. \end{abstract} \pacs{42.50.Vk, 42.50.Md} \maketitle \section{Introduction} \label{sec:0} The theoretical description of an unstable quantum system often refers to a finite-level system coupled with a spectral continuum. In weak coupling regimes, the initial state localized at the finite level undergoes exponential decay \cite{Nakazato(1996)}. However, by changing the couplings to stronger regimes, instead of total decay a partial one can occur \cite{Gaveau(1995)}. This means that the superposition of the states localized at the finite level and in the continuum forms a dressed bound state, that is, a bound eigenstate extended over the total Hilbert space. The formation of the bound eigenstate is of great interest in the study of various systems having to do with such matters as the photodetachment of electrons from negative ions \cite{Rzazewski(1982),Nakazato(2003)} and the spontaneous emission of photons from atoms in photonic crystals \cite{John(1990),Kofman(1994),Wang(2003)}. It has been clarified that the energy of the bound eigenstate depends not only on the strength of the couplings but also on the relative location between the electron bound-energy and the detachment threshold \cite{Rzazewski(1982),Nakazato(2003)}, or between the energy of the atomic frequency and the continuum edge of the radiation frequency \cite{Kofman(1994),Wang(2003)}. Further research has been directed to those eigenstates, aiming at decoherence control \cite{Antoniou(2004),Pellegrin(2005)}. In these analyses, however, single-level systems are often treated, while multilevel systems are less examined. In the latter, some peculiar time evolutions are theoretically observed: steplike decay \cite{Frishman(2001)}, decaying oscillation \cite{Antoniou(2003)}, and various long-time nonexponential decays \cite{Miyamoto(2004),Miyamoto(2005)}. These peculiarities are never found in single-level approaches. Furthermore, to the author's knowledge, the possibility of a bound-state eigenenergy ``inside'' the continuum has not been studied except in a special multilevel case where all form factors are assumed to be identical \cite{Antoniou(2004),Davies(1974)}.
In the present paper, we examine the eigenvalue problem for the dressed bound-state in multilevel cases, based on the $N$-level Friedrichs model \cite{Friedrichs(1948),Exner(1985)}, allowing for a class of form factors that includes the identical case. We show that for the discrete energy levels lying outside the continuum, the bound-state eigenenergy always remains below each of them. Moreover, by increasing the couplings, the bound-state eigenenergy corresponding to each of the discrete levels inside the continuum can emerge out of that continuum. For the bound-state eigenenergy inside the continuum, we can only prove its absence in weak coupling cases under the condition that the form factors do not vanish at the energy of each level. This result is just an extension of the lemma already proved for a system with identical form factors \cite{Davies(1974)}. An explicit upper bound on the coupling constant below which no bound-state eigenenergy lies inside the continuum is also obtained. We apply this result to the spontaneous emission process for the hydrogen atom under the four-level approximation. In the next section, we introduce the $N$-level Friedrichs model and its eigenvalue problem. In Sec. \ref{sec:3}, we consider the eigenvalues outside the continuum, by resorting to perturbation theory for the eigenvalues of hermitian matrices. The discussion developed there helps us to undertake the problem for the inside case, which is discussed in Sec. \ref{sec:4}. Concluding remarks are given in Sec. \ref{sec:5}. We also present an appendix where both the small and large energy behaviors of the energy shift are studied in detail. \section{The $N$-level Friedrichs model and the eigenvalue problem} \label{sec:2} The $N$-level Friedrichs model describes the $N$-level system coupled with the continuum system. The total Hamiltonian $H$ is defined by \begin{equation} H=H_0 + \lambda V, \label{eqn:2.60} \end{equation} where $\lambda \in \mathbb{R}$ is the coupling constant. We here define the free Hamiltonian $H_0$ as \begin{equation} H_0 =\sum_{n=1}^N \omega_n \ketbra{n}{n} +\int_\Omega \omega \ketbra{\omega}{\omega} \rho(\omega) d\omega , \label{eqn:2.40} \end{equation} where we assume that $\omega_1 \leq \omega_2 \leq \ldots \leq \omega_N$. $\ket{n}$ and $\ket{\omega}$ satisfy the orthonormality conditions $\langle n | n^{\prime} \rangle = \delta_{n n^{\prime}}$, $\braket{ \omega }{ \omega^{\prime} } =\delta (\omega - \omega^{\prime})/\rho (\omega )$, and $\langle n | \omega \rangle = 0 $, where $\delta_{n n^{\prime}}$ is Kronecker's delta and $\delta (\omega - \omega^{\prime})$ is Dirac's delta function. $\rho (\omega )$ is a nonnegative function interpreted as, e.g., an electromagnetic density of modes, and $\Omega=\{\omega \ | \ \rho (\omega)\neq 0 \}$ is a specific region, like the energy band allowed by the electromagnetic modes. The interaction Hamiltonian $V$ describing the couplings between $\ket{n}$ and $\ket{\omega}$ is \begin{equation} V = \sum_{n=1}^{N} \int_\Omega \left[ v_n (\omega ) | \omega \rangle \langle n | + v_n^* (\omega ) | n \rangle \langle \omega | \right] \rho(\omega) d \omega , \label{eqn:2.70} \end{equation} where ($^*$) denotes the complex conjugate and $v_n (\omega )$ is the form factor characterizing the transition between $| n \rangle$ and $| \omega \rangle$. We here assume that $v_n \in L^2 (0, \infty )$, i.e., \begin{equation} \int_\Omega |v_n (\omega )|^2 \rho(\omega) d\omega <\infty. \label{eqn:2.80} \end{equation}
For clarity of the discussion below, we assume that $\rho (\omega )=1$ for $\omega \geq 0$ and $0$ otherwise, so that $\Omega =[0, \infty)$. Then we simply write $\int_\Omega$ as $\int_{0}^{\infty}$, and the outside of the continuum means the half line $(-\infty, 0)$. An extension of $\Omega$ to more general cases, such as gap structures, is not difficult; this simplification, however, allows us to extract the essence of the matter. Let us next set up the eigenvalue problem for this model. We suppose that the eigenstate corresponding to the eigenvalue $E$ is of the form $\ket{u_E} =\sum_{n=1}^N c_n \ket{n} +\int_0^{\infty} f(\omega) \ket{\omega} d\omega $, and that it is normalizable, i.e., \cite{pointspectrum} \begin{equation} \braket{u_E}{u_E} = \sum_{n=1}^{N} |c_n |^2 + \int_0^{\infty} |f(\omega) |^2 d\omega < \infty . \label{eqn:2.25} \end{equation} Then, the eigenequation $H\ket{u_E} =E \ket{u_E}$ is equivalent to the following ones, \begin{equation} \omega_n c_n + \lambda \int_0^{\infty} v_n^* (\omega) f(\omega) d\omega =Ec_n , ~~~ \forall n =1, \ldots, N, \label{eqn:3.10} \end{equation} \begin{equation} \omega f(\omega ) + \lambda \sum_{n=1}^{N} c_n v_n (\omega ) =E f(\omega ) . \label{eqn:3.20} \end{equation} Equation (\ref{eqn:3.20}) immediately implies \begin{equation} f(\omega)=- \lambda \frac{\sum_{n=1}^{N} c_n v_n (\omega )}{\omega -E}. \label{eqn:3.25} \end{equation} By setting this into Eq. (\ref{eqn:2.25}), we have the normalization condition \begin{equation} \int_{0}^{\infty } |f(\omega)|^2 d\omega =\lambda^2 \int_{0}^{\infty } \frac{|\sum_{n=1}^{N} c_n v_n (\omega ) |^2}{|\omega -E|^2} d\omega < \infty , \label{eqn:3.30} \end{equation} which expresses the localization of the dressed bound state. \section{Bound-state eigenenergy outside the continuum} \label{sec:3} We first review the results on the negative-eigenvalue problem for $N=1$, the single-level case \cite{Horwitz(1971)}. If $E<0$, the integral in Eq. (\ref{eqn:3.30}) always converges under the condition (\ref{eqn:2.80}). In fact, \begin{equation} |c_1|^2 \int_{0}^{\infty } \frac{|v_1 (\omega ) |^2}{|\omega +|E| |^2} d\omega \leq \frac{|c_1|^2}{|E|^2} \int_{0}^{\infty } |v_1 (\omega ) |^2 d\omega < \infty. \label{eqn:3.33} \end{equation} Thus, the substitution of $f(\omega)$ into Eq. (\ref{eqn:3.10}) is allowed. By introducing the function $\kappa (E)$ as \begin{equation} \kappa(E)=\omega_1 - \lambda^2 \int_{0}^{\infty } \frac{|v_1 (\omega ) |^2}{\omega -E} d\omega, \label{eqn:3.34} \end{equation} Eq. (\ref{eqn:3.10}) reads \cite{c_1} \begin{equation} \kappa (E)=E, \label{eqn:3.35} \end{equation} which is either an algebraic or transcendental equation of $E$, depending on $v_1 (\omega)$. $\kappa(E)$ has the following two important properties: \begin{equation} \kappa(E') \geq \kappa(E), ~~~ \mbox{and} ~~~ \kappa(E)\leq \omega_1, \label{eqn:3.36} \end{equation} for all $E$ and $E'$ satisfying $E' \leq E<0$. The former means that $\kappa(E)$ is monotonically decreasing in $E$. Therefore, there is exactly one solution (negative eigenvalue) $E$ of Eq. (\ref{eqn:3.35}) if and only if \begin{equation} \lim_{E\uparrow 0} \kappa(E)= \omega_1 - \lim_{E\uparrow 0} \lambda^2 \int_{0}^{\infty } \frac{|v_1 (\omega ) |^2}{\omega -E} d\omega < 0. \label{eqn:3.37} \end{equation} When $E>0$, $E$ should be a zero of $v_1 (\omega)$ so that Eq. (\ref{eqn:3.30}) holds. This is discussed in detail in Sec. \ref{sec:4}.
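As a simple closed-form illustration of the criterion (\ref{eqn:3.37}) (our own example, taking the $n=1$ member of the family (\ref{eqn:3.1.200}) below with $\Lambda=1$ and $a_1=0$), let $v_1 (\omega)=\sqrt{\omega}/(1+\omega^2)^2$. Then \begin{equation*} \lim_{E\uparrow 0} \int_{0}^{\infty } \frac{|v_1 (\omega ) |^2}{\omega -E} d\omega =\int_{0}^{\infty } \frac{d\omega}{(1+\omega^2)^4} =\frac{5\pi}{32}, \end{equation*} so that Eq. (\ref{eqn:3.37}) holds, and hence a negative eigenvalue exists, precisely when $\omega_1 < 5\pi\lambda^2/32$.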
Let us now turn to the $N$-level case. Corresponding to Eq. (\ref{eqn:3.33}), this time we have that \begin{equation} \int_{0}^{\infty } \frac{|\sum_{n=1}^{N} c_n v_n (\omega ) |^2}{|\omega +|E| |^2} d\omega \leq \frac{\sum_{n=1}^{N} \int_{0}^{\infty} | v_n (\omega)|^2 d\omega }{|E|^2 } < \infty , \label{eqn:3.1.10} \end{equation} and Eq. (\ref{eqn:3.30}) is satisfied again, where we used that $\sum_{n=1}^{N} |c_n |^2 \leq1$. Substituting Eq. (\ref{eqn:3.25}) into (\ref{eqn:3.10}), one obtains \begin{equation} \sum_{n^{\prime }=1}^{N} \bigl[ \omega_n \delta_{nn^{\prime }} -\lambda^2 S_{nn^{\prime }}(E) \bigr] c_{n^{\prime }} =E c_n, \label{eqn:3.1.20} \end{equation} where \begin{equation} S_{nn^{\prime }}(z) =\int_{0}^{\infty } \frac{v_n^* (\omega ) v_{n^{\prime}} (\omega )}{\omega -z} d\omega , \label{eqn:3.1.30} \end{equation} with $z \in \mathbb{C}\backslash [0,\infty)$. For later convenience, we here introduce an $N \times N$ matrix $S(z)$ with the components $S_{nn^{\prime }}(z) $. Note that $S(E)$ for $E<0$ turns out to be a Gram matrix \cite{MatrixAnalysis}, which is positive semidefinite. One obtains the following property of $S(E)$: \begin{lm}\label{pp:4.1} $S(E^{\prime}) \leq S(E)$ for $E^{\prime} \leq E <0$. \end{lm} {\sl Proof} : We have that \begin{equation} S_{nn^{\prime}}(E) - S_{nn^{\prime}}(E^{\prime}) = (E -E^{\prime} ) T_{nn^{\prime}}(E,E^{\prime}) , \label{eqn:3.1.60} \end{equation} for all $E$ and $E^{\prime}$ satisfying $E^{\prime} \leq E <0$. We here introduce the matrix $T(E ,E^{\prime})$ whose components are \begin{equation} T_{nn^{\prime}}(E,E^{\prime}) := \int_{0}^{\infty } \frac{v_n^* (\omega ) v_{n^{\prime}} (\omega )} {(\omega -E)(\omega -E^{\prime})} d\omega . \label{eqn:3.1.70} \end{equation} Note that since $T(E ,E^{\prime})$ is a Gram matrix, it is positive semidefinite. Therefore the proof is completed. \qed We also introduce the matrices $K_0$ and $K(E)=K_0-\lambda^2S(E)$ with components \begin{equation} {K_0}_{nn^{\prime}} := \omega_n \delta_{nn^{\prime }} , \label{eqn:3.1.35} \end{equation} and \begin{equation} K_{nn^{\prime}}(E) := \omega_n \delta_{nn^{\prime }} -\lambda^2 S_{nn^{\prime }}(E) , \label{eqn:3.1.40} \end{equation} respectively. For any $E<0$, $K(E)$ becomes a hermitian matrix, and thus there are $N$ real eigenvalues of $K(E)$. We denote them by $\{\kappa_n (E) \}_{n=1}^{N}$, where $\kappa_1 (E) \leq \kappa_2 (E) \leq \ldots \leq \kappa_N (E)$. The existence of a nontrivial solution $\{ c_n \}$ of Eq. (\ref{eqn:3.1.20}) is guaranteed if and only if there exists a negative $E$ satisfying \begin{equation} \kappa_n (E) =E , \label{eqn:3.1.50} \end{equation} for a certain integer $n$. As in the former part of Eq. (\ref{eqn:3.36}), $\kappa_n(E)$ has the following property: \begin{lm}\label{pp:4.2} For any fixed $n$, $\kappa_n (E^{\prime}) \geq \kappa_n (E)$ for $E^{\prime} \leq E <0$. \end{lm} {\sl Proof} : We see from Eq. (\ref{eqn:3.1.60}) that \begin{equation} K(E) - K(E^{\prime}) = -(E -E^{\prime} )\lambda^2 T(E ,E^{\prime}) \leq 0, \label{eqn:3.1.80} \end{equation} for $E^{\prime} \leq E <0$. Then, by using Theorem 4.3.1 in Ref. \cite{MatrixAnalysis}, the following inequality between the eigenvalues of $K(E)$, $K(E')$, and $T(E ,E^{\prime})$ holds \cite{inequality}, \begin{eqnarray} &&\kappa_n (E^{\prime}) -(E -E^{\prime} )\lambda^2 \tau_N (E ,E^{\prime}) \nonumber \\ &&\leq \kappa_n (E) \leq \kappa_n (E^{\prime}) -(E -E^{\prime} )\lambda^2 \tau_1 (E ,E^{\prime}), \label{eqn:3.1.100} \end{eqnarray} where $\tau_n (E ,E^{\prime})$ denotes the $n$-th eigenvalue of $T(E ,E^{\prime})$.
Note that since $T(E ,E^{\prime})\geq 0$, all $\tau_n (E ,E^{\prime})\geq0$. Then, $-(E -E^{\prime} )\tau_1 (E ,E^{\prime}) \leq 0$ for $E \geq E^{\prime} $, and the inequality \begin{equation} \kappa_n (E) \leq \kappa_n (E^{\prime}), \label{eqn:3.3.110} \end{equation} immediately follows from the last part of Eq. (\ref{eqn:3.1.100}). \qed We also have the statement below, which corresponds to the latter part of Eq. (\ref{eqn:3.36}). \begin{lm}\label{pp:4.3} For any fixed $n$, $ \kappa_n (E) \leq \omega_n$ for all $E<0$, and $\displaystyle \lim_{E \to -\infty} \kappa_n (E) =\omega_n$. \end{lm} {\sl Proof} : From Eq. (\ref{eqn:3.1.40}) and Theorem 4.3.1 in Ref. \cite{MatrixAnalysis} again, one obtains that \begin{equation} \omega_n - \lambda^2 \sigma_N (E) \leq \kappa_n (E) \leq \omega_n - \lambda^2 \sigma_1 (E) , \label{eqn:3.1.140} \end{equation} where $\sigma_n (E)$ denotes the $n$-th eigenvalue of $S(E)$. If we recall the fact that $S(E)\geq 0$ implies $\sigma_n (E) \geq 0$ for every $n$, the above inequality reads \begin{equation} 0\leq \lambda^2 \sigma_1 (E) \leq \omega_n - \kappa_n (E) \leq \lambda^2 \sigma_N (E) . \label{eqn:3.1.150} \end{equation} The asymptotic behavior of the right-hand side can be evaluated from Eq. (\ref{eqn:3.1.30}) as \begin{equation} \sigma_N (E) \leq \mathrm{tr} (S(E)) \leq \frac{1}{|E| } \sum_{n=1}^{N} \int_{0}^{\infty} | v_n (\omega)|^2 d\omega \rightarrow 0 , \label{eqn:3.1.160} \end{equation} as $E \rightarrow -\infty$, and thus the lemma is proved. \qed Therefore, summarizing Lemmas \ref{pp:4.2} and \ref{pp:4.3}, we obtain \begin{thm}\label{thm:4.1} If $\lim_{E\uparrow 0} \kappa_n (E) <0$ up to $n=M$, then each of the curves $\kappa_n (E)$ for $n=1, \ldots, M$ intersects the line $E$ exactly once, so that $M$ negative eigenenergies of $H$ exist. In particular, if $H_0$ has $N_-$ negative eigenenergies, i.e., $\omega_n <0$ up to $n=N_-$, then $N_-$ negative eigenenergies of $H$, denoted by $E_n$, exist and satisfy $E_n \leq \omega_n$. \end{thm} We also see from Eq. (\ref{eqn:3.1.140}) that \begin{equation} \kappa_n (E) \leq \omega_n - \lambda^2 \sigma_1 (E). \label{eqn:3.1.170} \end{equation} This means that when $|\lambda |$ is large enough, every $\kappa_n (E)$, even one originating from a positive $\omega_n$, becomes negative, unless $\sigma_1 (E)=0$, i.e., unless the $v_n (\omega)$'s are linearly dependent \cite{MatrixAnalysis}. More precisely, the following statement holds. \begin{pp}\label{pp:4.4} Suppose that only $N_{\rm ind}$ of the form factors are linearly independent. Then it follows that for any $E<0$, \begin{equation} -\lambda^2 \sigma_{N+1-n} (E) +\omega_1 \leq \kappa_n (E) \leq -\lambda^2 \sigma_{N+1-n} (E) +\omega_N , \label{eqn:3.1.180} \end{equation} and $\sigma_{N+1-n} (E)\neq 0$ for $n=1, \ldots, N_{\rm ind}$, while \begin{equation} \omega_1 \leq \kappa_n (E) \leq \omega_N , \label{eqn:3.1.190} \end{equation} for $n=N_{\rm ind} +1, \ldots, N$. Therefore, only the first $N_{\rm ind}$ eigenvalues of $K(E)$ are ensured to be negative as $|\lambda|$ goes to infinity, regardless of the location of $\{ \omega_n \}_{n=1}^N$. \end{pp} {\sl Proof} : Taking $-\lambda^2 S(E)$ as the unperturbed part of $K(E)$, we obtain Eq. (\ref{eqn:3.1.180}) for all $n$. Note that if only $N_{\rm ind}$ form factors are linearly independent, it holds that $\sigma_{m} (E)=0$ for $m=1, \ldots, N-N_{\rm ind}$ and otherwise does not vanish. Then, the assertion is proved straightforwardly. \qed
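As an extreme example of Proposition \ref{pp:4.4} (a remark of ours, recovering the identical-form-factor case of Refs. \cite{Antoniou(2004),Davies(1974)} mentioned in the Introduction), suppose that $v_n (\omega)=v(\omega)$ for all $n$, so that $N_{\rm ind}=1$. Then every component $S_{nn^{\prime}}(E)$ is equal to $\int_0^{\infty} |v(\omega)|^2 (\omega -E)^{-1} d\omega$, and hence \begin{equation*} \sigma_1 (E)=\cdots=\sigma_{N-1} (E)=0, \qquad \sigma_N (E)=N\int_{0}^{\infty } \frac{|v (\omega ) |^2}{\omega -E} d\omega , \end{equation*} so that, however large $|\lambda|$ becomes, only $\kappa_1 (E)$ is guaranteed to become negative.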
\begin{figure} \caption{ $\omega_3-\kappa_n(0)$ for $n=1$, $2$, $3$ (three solid lines) for a three-level system with the form factors (\ref{eqn:3.1.200}), plotted against $\lambda$, and $\omega_3-\omega_1$, $\omega_3-\omega_2$ (two dashed lines), and $\omega_3$ (dot-dashed line), for reference, where $\omega_3-\omega_1 >\omega_3 > \omega_3-\omega_2$. Three different regions are distinguished, corresponding to the number of solid lines satisfying $\omega_3-\kappa_n(0)>\omega_3$, that is, just the number of negative eigenenergies of $H$, by Theorem \ref{thm:4.1}. } \label{fig:figure1} \end{figure} \begin{figure} \caption{ $\omega_3-\kappa_n(E)$ for $n=1$, $2$, $3$ (three solid lines) for a three-level system with the form factors (\ref{eqn:3.1.200}), $\omega_3-E$ (short-dashed line), and $\omega_3-\omega_1$ and $\omega_3-\omega_2$ (two dashed lines). We plot them for $\lambda=0.1$, $0.7$, and $10.0$ in (a), (b), and (c), respectively, choosing the parameters as $\omega_1/\Lambda =-0.01$, $\omega_2/\Lambda=0.01$, and $\omega_3/\Lambda=0.02$. (a) For a relatively small $\lambda$, only $\omega_3-\kappa_1(E)$ intersects $\omega_3-E$, at $E/\Lambda \simeq-0.02$, as predicted by Theorem \ref{thm:4.1}. Thus there is one negative eigenenergy. One also sees that $\omega_3-\kappa_1(E)$ and $\omega_3-\kappa_2(E)$ still lie closely above the asymptotes $\omega_3-\omega_1$ and $\omega_3-\omega_2$, respectively. (b) Both $\omega_3-\kappa_1(E)$ and $\omega_3-\kappa_2(E)$ intersect $\omega_3-E$, in the vicinity of $E=-0.3$ and $0.0$, respectively, so that there are two negative eigenenergies. (c) All $\omega_3-\kappa_n(E)$ for $n=1$, $2$, and $3$ intersect $\omega_3-E$, and thus three negative eigenenergies exist. In this figure, only the two intersections for $n=2$ and $3$ are depicted. } \label{fig:figure2} \end{figure} To illustrate the emergence of the negative eigenenergies, described in Theorem \ref{thm:4.1} and Proposition \ref{pp:4.4}, let us consider the three-level system, especially in the case where $\omega_1 <0$ while $\omega_2 >0$ and $\omega_3>0$. We also choose three form factors, such as \begin{equation} v_n (\omega)=\Lambda^{1/2} \frac{\sqrt{\omega/\Lambda} [1+a_n (\omega/\Lambda)^{2(n-1)}]} {[1+(\omega/\Lambda)^2]^{1+n}} , \label{eqn:3.1.200} \end{equation} where $\Lambda$ is the cutoff constant and $a_n$ is a parameter. The form factors described by such algebraic functions are often found in various systems involving the process of the spontaneous emission of photons from the hydrogen atom \cite{Seke(1994),Facchi(1998)}, the photodetachment of electrons from negative ions \cite{Rzazewski(1982),Nakazato(2003),Haan(1984)}, and quantum dots \cite{Antoniou(2001)}. In the calculation depicted in Figs. \ref{fig:figure1} and \ref{fig:figure2}, we have chosen the set of parameters $\omega_1 /\Lambda =-0.01$, $\omega_2 /\Lambda =0.01$, $\omega_3 /\Lambda =0.02$, and $a_1=0.0$, $a_2=2.0$, $a_3=1.0$. These choices of $a_n$ guarantee linear independence among the $v_n$'s, so that $N_{\rm ind}=3$. Figure \ref{fig:figure1} shows $\omega_3-\kappa_n(0)$ for $n=1,2,3$, with $\lambda$ varied from $0.1$ to $10.0$, and $\omega_3 -\omega_1$, $\omega_3 -\omega_2$ (two dashed lines), and $\omega_3$ (dot-dashed line) for reference. The latter satisfy the relation $\omega_3 -\omega_1>\omega_3>\omega_3 -\omega_2>0$. One may recognize three different regions in this figure: for small $\lambda \lesssim 0.2$, one inequality $\omega_3-\kappa_1 (0)>\omega_3$, i.e., $\kappa_1 (0)<0$, holds.
In the next region, $0.2 \lesssim \lambda \lesssim 1.0$, two inequalities, $\omega_3-\kappa_1 (0)>\omega_3$ and $\omega_3-\kappa_2 (0)>\omega_3$, hold. In the last region, $\lambda \gtrsim 1.0$, three inequalities, $\omega_3-\kappa_n (0)>\omega_3$ for all $n=1, 2, 3$, are satisfied. Therefore, according to Theorem \ref{thm:4.1}, one sees that one, two, and three negative eigenenergies of $H$ exist in the first, second, and third regions, respectively. It is worth noting that the appearance of the negative eigenenergy in the first region merely stems from the fact that $\omega_1<0$ (see the latter part of Theorem \ref{thm:4.1}), whereas that in the other regions could be understood as a strong-coupling effect (Proposition \ref{pp:4.4}). Figure \ref{fig:figure2} shows three curves of $\omega_3-\kappa_n(E)$ for $n=1, 2, 3$ (three solid lines) and $\omega_3-E$ (short-dashed line), plotted against $E$. An intersection of the former and the latter means an emergence of a negative eigenenergy. We also plot the asymptotes $\omega_3 -\omega_1$ and $\omega_3 -\omega_2$ (two dashed lines), which $\omega_3-\kappa_1(E)$ and $\omega_3-\kappa_2(E)$ approach from above as $E\to -\infty$, respectively (see Lemma \ref{pp:4.3}). Figures \ref{fig:figure2} (a), \ref{fig:figure2} (b), and \ref{fig:figure2} (c) correspond to $\lambda =0.1$, which belongs to the first region, $\lambda=0.7$, to the second one, and $\lambda=10.0$, to the last one, respectively. See Fig. \ref{fig:figure1}. It is seen in Fig.\ \ref{fig:figure2} (a) that $\omega_3-E$ intersects $\omega_3-\kappa_1(E)$ only, so that there is one negative eigenenergy. In Fig. \ref{fig:figure2} (b), one distinguishes the two intersections, between $\omega_3-E$ and $\omega_3-\kappa_1(E)$, and between $\omega_3-E$ and $\omega_3-\kappa_2(E)$. Thus two negative eigenenergies appear. The intersection for the latter pair still lies around $E=0.0$. In Fig.\ \ref{fig:figure2} (c), where a relatively large $\lambda$ was chosen, $\omega_3-E$ finally intersects all three lines $\omega_3-\kappa_n(E)$ for $n=1, 2, 3$, which tells us that three negative eigenenergies exist. \section{Absence of bound-state eigenenergy inside the continuum} \label{sec:4} Let us next examine the nonnegative-eigenvalue problem for Eqs. (\ref{eqn:3.10}) and (\ref{eqn:3.20}). In this case, the normalization condition (\ref{eqn:3.30}) does not hold automatically, unlike the case where $E<0$, because of a possible divergence of $f(\omega)$ at $\omega=E$. Before going to the $N$-level case, let us first observe the single-level one. Except in the trivial case where $c_1 =0$, the condition (\ref{eqn:3.30}) for an eigenvalue $E \geq0$, if any, imposes the nontrivial constraint that \begin{equation} v_1 (E)=0, \label{eqn:3.2.55a} \end{equation} where we assume some degree of smoothness of $v_1 (\omega )$ \cite{zeros}. Then, $f(\omega)=-\lambda c_1 v_1 (\omega )/(\omega-E)$ is ensured to be square integrable, and Eq. (\ref{eqn:3.10}) reads \begin{equation} \omega_1 -\lambda^2 \int_0^{\infty }\frac{|v_1 (\omega)|^2}{\omega -E} d\omega =E. \label{eqn:3.2.55b} \end{equation} To find the solution $E$ of Eq. (\ref{eqn:3.2.55b}), one may attempt to interpret it as an intersection between the left-hand and the right-hand sides, as in Eq. (\ref{eqn:3.35}). However, this approach seems impossible at first, because the left-hand side of Eq. (\ref{eqn:3.2.55b}) is not well defined for general $E$, except at the points satisfying Eq. (\ref{eqn:3.2.55a}).
\section{Absence of bound-state eigenenergy inside the continuum} \label{sec:4}

Let us next examine the nonnegative-eigenvalue problem for Eqs. (\ref{eqn:3.10}) and (\ref{eqn:3.20}). In this case, the normalization condition (\ref{eqn:3.30}) does not hold automatically, unlike the case where $E<0$, because of a possible divergence of $f(\omega)$ at $\omega=E$. Before going to the $N$-level case, let us first consider the single-level one. Except in the trivial case where $c_1 =0$, the condition (\ref{eqn:3.30}) for an eigenvalue $E \geq0$, if any, imposes the nontrivial constraint \begin{equation} v_1 (E)=0, \label{eqn:3.2.55a} \end{equation} where we assume a certain degree of smoothness of $v_1 (\omega )$ \cite{zeros}. Then, $f(\omega)=-\lambda c_1 v_1 (\omega )/(\omega-E)$ is ensured to be square integrable, and Eq. (\ref{eqn:3.10}) reads \begin{equation} \omega_1 -\lambda^2 \int_0^{\infty }\frac{|v_1 (\omega)|^2}{\omega -E} d\omega =E. \label{eqn:3.2.55b} \end{equation} To find the solution $E$ of Eq. (\ref{eqn:3.2.55b}), one may attempt to interpret it as an intersection between the left-hand and right-hand sides, as in Eq. (\ref{eqn:3.35}). At first sight, however, this approach seems impossible, because the left-hand side of Eq. (\ref{eqn:3.2.55b}) is not well defined for general $E$, except at points satisfying Eq. (\ref{eqn:3.2.55a}). This difficulty can be resolved by considering instead the equation \begin{equation} \omega_1 -\lambda^2 P\int_0^{\infty }\frac{|v_1 (\omega)|^2}{\omega -E} d\omega =E, \label{eqn:3.2.55c} \end{equation} which is obtained from Eq. (\ref{eqn:3.2.55b}) by replacing $\int_0^{\infty }\frac{|v_1 (\omega)|^2}{\omega -E} d\omega$ with its principal value $P\int_0^{\infty }\frac{|v_1 (\omega)|^2}{\omega -E} d\omega$. Now the left-hand side makes sense for general $E$, and we can treat $E$ as an independent variable. If we find a solution $E$ of Eq. (\ref{eqn:3.2.55c}), and if it furthermore satisfies Eq. (\ref{eqn:3.2.55a}), it becomes a true solution of the original equation (\ref{eqn:3.2.55b}). Indeed, in such a situation, we have $\int_0^{\infty }\frac{|v_1 (\omega)|^2}{\omega -E} d\omega =P\int_0^{\infty }\frac{|v_1 (\omega)|^2}{\omega -E} d\omega$, and thus Eq. (\ref{eqn:3.2.55c}) just reproduces Eq. (\ref{eqn:3.2.55b}). In the $N$-level case, the condition (\ref{eqn:3.30}) for an eigenvalue $E \geq0$, if any, can be translated into the equivalent condition on both the coefficients $\{ c_n \}_{n=1}^N$ and $E$, \begin{equation} \sum_{n=1}^{N} c_n v_n (E) =0. \label{eqn:3.2.10} \end{equation} Under this condition, we can safely substitute Eq. (\ref{eqn:3.25}) into Eq. (\ref{eqn:3.10}). However, similarly to Eq. (\ref{eqn:3.2.55c}), we instead consider the alternative equation \begin{equation} \sum_{n^{\prime} =1}^{N} [\omega_n \delta_{nn^\prime} -\lambda^2 D_{nn^\prime}(E)] c_{n^{\prime} }=Ec_n , \label{eqn:3.2.50} \end{equation} for $n=1, 2, \ldots, N$, where \begin{equation} D_{n n^{\prime}} (E) := P \int_0^{\infty } \frac{v_n^* (\omega ) v_{n^{\prime}} (\omega)}{\omega -E} d\omega \label{eqn:3.2.30} \end{equation} are the components of the hermitian matrix $D(E)$, defined for all $E\geq 0$. One sees that Eq. (\ref{eqn:3.2.50}) has the same form as Eq. (\ref{eqn:3.1.20}), except that $S(E)$ ($E<0$) is replaced by $D(E)$ ($E\geq 0$). We can then proceed in matrix form, just as in the preceding section. In fact, the solutions of Eq. (\ref{eqn:3.2.50}) can be connected with those of Eq. (\ref{eqn:3.10}) under the condition (\ref{eqn:3.2.10}). We first note that \begin{eqnarray} &&\hspace{-5mm} P \int_0^{\infty } \frac{v_n^* (\omega ) \sum_{{n^{\prime}}=1}^{N} c_{n^{\prime}} v_{n^{\prime}} (\omega)}{\omega -E}d\omega \nonumber \\ &&= \sum_{{n^{\prime}}=1}^{N} c_{n^{\prime}} P \int_0^{\infty } \frac{v_n^* (\omega ) v_{n^{\prime}} (\omega)}{\omega -E} d\omega, \label{eqn:3.2.20b} \end{eqnarray} which is valid for all $E$. Substituting this relation into Eq. (\ref{eqn:3.2.50}), we have \begin{equation} \omega_n c_n -\lambda^2 P \int_0^{\infty } \frac{v_n^* (\omega ) \sum_{{n^{\prime}}=1}^{N} c_{n^{\prime}} v_{n^{\prime}} (\omega)}{\omega -E}d\omega=Ec_n, \label{eqn:3.2.25} \end{equation} for $n=1, 2, \ldots, N$; for comparison, see Eq. (\ref{eqn:3.10}) again. Therefore, if the solutions $E$ and $\{ c_n \}_{n=1}^N$ of Eq. (\ref{eqn:3.2.25}), i.e., of Eq. (\ref{eqn:3.2.50}), satisfy the condition (\ref{eqn:3.2.10}), then Eq. (\ref{eqn:3.2.25}) reproduces Eq. (\ref{eqn:3.10}), so that these solutions become true solutions of Eq. (\ref{eqn:3.10}). Our procedure for finding the coefficients $\{ c_n \}_{n=1}^N$ and the nonnegative eigenvalue $E$ of $H$ that satisfy Eqs. (\ref{eqn:3.10}) and (\ref{eqn:3.20}) thus consists of two steps: we first solve Eq. (\ref{eqn:3.2.50}), and then we check whether the solutions satisfy the condition (\ref{eqn:3.2.10}).
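Numerically, the first step requires the principal-value integrals (\ref{eqn:3.2.30}). A minimal Python sketch of one way to evaluate them (ours; it uses the same subtraction device, with window half-width $\delta=E/2$, as in the appendix, and the function name is hypothetical):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pv_integral(eta, E):
    # P int_0^inf eta(w)/(w - E) dw for E > 0.  On the symmetric
    # window [E/2, 3E/2] the constant eta(E) integrates to zero in
    # the principal-value sense, so the subtracted integrand is
    # regular there (it tends to eta'(E) as w -> E).
    d = 0.5 * E
    inner = quad(lambda w: (eta(w) - eta(E)) / (w - E),
                 E - d, E + d, points=[E], limit=400)[0]
    left = quad(lambda w: eta(w) / (w - E), 0.0, E - d)[0]
    right = quad(lambda w: eta(w) / (w - E), E + d, np.inf,
                 limit=400)[0]
    return left + inner + right
\end{verbatim}
Applied to $\eta(\omega)=v_n^*(\omega) v_{n'}(\omega)$, this gives $D_{nn'}(E)$; Eq. (\ref{eqn:3.2.50}) can then be scanned over $E\geq 0$ for nontrivial solutions, and the condition (\ref{eqn:3.2.10}) tested at each of them.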
For later convenience, we introduce the hermitian matrix $K(E)$ for $E\geq 0$, whose components are defined by \begin{equation} K_{n n^{\prime}} (E) := \omega_n \delta_{n n^{\prime}} -\lambda^2 D_{n n^{\prime}}(E). \label{eqn:3.2.40} \end{equation} Then, a nontrivial solution of Eq. (\ref{eqn:3.2.50}) exists if and only if there exists a nonnegative $E$ satisfying \begin{equation} \kappa_n (E, \lambda) =E , \label{eqn:3.2.53} \end{equation} for some integer $n$, where $\{\kappa_n (E, \lambda) \}_{n=1}^{N}$ are the eigenvalues of $K(E)$, arranged in increasing order. To summarize again: if an eigenvalue of $K(E)$ equals $E$, then it is an eigenvalue of $H$, provided that it also satisfies the condition (\ref{eqn:3.2.10}). It is worth noting that the condition (\ref{eqn:3.2.10}) does not seem to require the existence of a zero of $v_n (\omega)$, unlike the single-level condition (\ref{eqn:3.2.55a}). However, the following statement shows that if $v_n (\omega_n )\neq 0$ for all $\omega_n >0$, the weak-coupling condition strictly excludes positive eigenvalues of $H$. \begin{thm}\label{thm:4.2} Suppose that $H_0$ has $N_+$ positive eigenvalues without any degeneracy, and each $v_n (\omega )$ is an $L^2$-function of the form $v_n (\omega )=\omega^{p_n} f_n (\omega )$, where $p_n >0$ and $f_n (\omega )$ is a $C^1$-function on $[0, \infty)$. Furthermore, assume that there is some $\delta_0 >0$ such that $\sup_{\omega >\delta_0} |v_n^* (\omega ) v_{n'} (\omega )| <\infty$ and $\sup_{\omega >\delta_0} |d[v_n^* (\omega ) v_{n'} (\omega )]/d\omega |<\infty$ for all $n$ and $n'$. Then, if $\lambda$ is sufficiently small but nonzero and the condition $v_n(\omega_n)\neq 0$ holds for all $n\geq N-N_+ +1$, $H$ has no positive eigenvalues. \end{thm} {\sl Proof} : Under the assumption that $E>0$, we first consider the eigenvalue problem \begin{equation} \sum_{j=1}^{N} K_{ij} (E) c_{n j} =\kappa_n (E, \lambda) c_{n i} , \label{eqn:3.2.70} \end{equation} for $i=1, 2, \ldots, N$, where $\{ c_{n i} \}_{i=1}^N$ is the normalized eigenvector corresponding to the $n$-th eigenvalue $\kappa_n (E, \lambda)$ of $K(E)$. Then, by Theorem 4.3.1 in Ref. \cite{MatrixAnalysis}, one sees that \begin{eqnarray} \hspace{-5mm} |\kappa_n (E,\lambda) -\omega_n | &\leq& \lambda^2 \max\{|\delta_1 (E) |, |\delta_N (E) | \} \nonumber \\ &=& \lambda^2 \| D(E) \| \leq \lambda^2 \sup_{E>0} \| D(E) \| , \label{eqn:3.2.80} \end{eqnarray} for all $n$, where $\delta_n (E)$ is the $n$-th eigenvalue of $D(E)$. Note that, from the assumption of the theorem and the propositions in the appendix, it holds that $\sup_{E>0} \| D(E) \| <\infty $. Therefore, by choosing $\lambda$ such that $|\lambda | < \lambda_a $, we have $|\kappa_n (E, \lambda)-\omega_n |<R_a$ for all $E>0$ and all $n$, and in particular $\kappa_n (E, \lambda) >0$ for all $E>0$ and all $n\geq N-N_+ +1$, where $\lambda_a = ( R_a /\sup_{E>0} \| D(E) \| )^{1/2}$ and \begin{equation} R_a = \min \left\{ \omega_{N-N_+ +1}/3 , \min_{n, m} \{|\omega_n -\omega_m |/3 ~|~ n \neq m \} \right\} . \label{eqn:3.2.85} \end{equation} The latter means that only the $\kappa_n (E,\lambda)$ with $n\geq N-N_+ +1$ are candidates for positive eigenvalues of $H$. Note that $\kappa_{N-N_+ }$ cannot be such a candidate even if $\omega_{N-N_+} =0$.
Indeed, in such a case, putting $\lambda_b = ( R_b /\sup_{E>0} \| D(E) \| )^{1/2}$, we find from Eq. (\ref{eqn:3.2.80}) that for $|\lambda |<\lambda_b $, $|\kappa_{N-N_+} (E,\lambda) | < R_b$ for all $E>0$. Here we choose $R_b$ such that $D(E') \geq 0 $ for all positive $E'< R_b$; the existence of such an $R_b$ is ensured by Eq. (\ref{eqn:a20}) in Proposition \ref{pp:a.1}. Then, from Theorem 4.3.1 in Ref. \cite{MatrixAnalysis} again, we have the estimate \begin{equation} -\lambda^2 \delta_N (E) \leq \kappa_{N-N_+} (E,\lambda) \leq -\lambda^2 \delta_1 (E) \leq 0, \label{eqn:3.2.90} \end{equation} for all $E< R_b$. Hence, we conclude that if $|\lambda| <\lambda_b$, then $\kappa_{N-N_+} (E,\lambda)<E$ for all $E>0$ \cite{E=0in multilevel}. However, we can show that if we choose $\lambda$ sufficiently small, no such $\kappa_n (E,\lambda)$ and eigenvector $\sum_i c_{ni} \ket{i}$ can satisfy Eq. (\ref{eqn:3.2.10}), even if they satisfy Eqs. (\ref{eqn:3.2.50}) and (\ref{eqn:3.2.53}). To this end, let us look at Eq. (\ref{eqn:3.2.10}), which is rewritten as \begin{eqnarray} &&\hspace*{-10mm} \left| \sum_{i=1}^{N} c_{ni} v_i (\kappa_n ) \right|^2 = \left\| P_n (E, \lambda) \sum_{i=1}^{N} v_i^* (\kappa_n ) \ket{i} \right\|^2 \label{eqn:3.2.100a} \\ &&\hspace*{-10mm} = |v_n^* (\kappa_n ) |^2 \nonumber \\ &&\hspace*{-10mm} ~~~+ \sum_{i=1}^{N} \bra{i} v_i (\kappa_n ) \bigl[ P_n (E, \lambda) -\ketbra{n}{n} \bigr] \sum_{i'=1}^{N} v_{i'}^* (\kappa_n ) \ket{i'} , \label{eqn:3.2.100b} \end{eqnarray} where $P_n (E, \lambda)$ denotes the projection operator associated with the $n$-th eigenvalue $\kappa_n (E,\lambda)$. One sees that the first term on the right-hand side of Eq. (\ref{eqn:3.2.100b}) behaves as \begin{equation} \lim_{\lambda \to 0} |v_n^* (\kappa_n (E,\lambda)) |^2 = |v_n^* (\omega_n ) |^2, \label{eqn:3.2.120} \end{equation} uniformly for all $E>0$, because of Eq. (\ref{eqn:3.2.80}). By the assumption of the theorem, $|v_n^* (\omega_n ) |^2$ does not vanish. For the second term on the right-hand side of Eq. (\ref{eqn:3.2.100b}), we can use the result on the perturbation of the projection operator \cite{Kato(1966)}, which leads to \begin{equation} P_n (E, \lambda) = \ketbra{n}{n} + \sum_{j=1}^{\infty} \lambda^{2j} P_n^{(j)}, \label{eqn:3.2.140} \end{equation} with \begin{equation} P_n^{(j)}:= -\frac{1}{2\pi i} \oint_{\Gamma_n} (K_0 -\zeta)^{-1} [D(E) (K_0 -\zeta)^{-1} ]^j d\zeta, \label{eqn:3.2.150} \end{equation} where $\Gamma_n$ is the closed, positively oriented circle around $\zeta=\omega_n$ with radius $\min_{m(\neq n)} \{| \omega_n - \omega_m | /3 \}$. The series (\ref{eqn:3.2.140}) is ensured to converge uniformly for all $\lambda $ with $|\lambda |< \min \{ \lambda_a, \lambda_b \}$, because \begin{equation} \sup_{\zeta \in \Gamma_n} |\lambda|^2 \|D(E) \| \| (K_0 -\zeta)^{-1} \| < \lambda_a^2 /\lambda_n^2 \leq 1 , \label{eqn:3.2.160} \end{equation} where \begin{equation} \lambda_n = \biggl[ \min_{m(\neq n)} \{| \omega_n - \omega_m | /3 \} \bigg/ \sup_{E>0} \|D(E) \| \biggr]^{1/2}. \label{eqn:3.2.165} \end{equation} From the assumption of no degeneracy among $\{ \omega_n \}_{n=1}^N$ and the discussion after Eq. (\ref{eqn:3.2.80}), for such a $\lambda$ all the $\Gamma_n$'s are disjoint from each other, and there is only one eigenvalue of $K$ in each circle. This leads to $\dim [P_n (E, \lambda) \mathbb{C}^N ] = \dim [\ketbra{n}{n} \mathbb{C}^N ] =1$, so that $\lambda =0$ is not an exceptional point \cite{Kato(1966)}.
It is worth noting that $\lambda_n$ does not depend on $E$. Thus, the second term on the right-hand side of Eq. (\ref{eqn:3.2.100b}) is estimated as \begin{eqnarray} &&\hspace{-5mm} \sum_{i=1}^{N} \bra{i} v_i (\kappa_n ) \bigl[ P_n (E, \lambda) -\ketbra{n}{n} \bigr] \sum_{i'=1}^{N} v_{i'}^* (\kappa_n ) \ket{i'} \nonumber \\ &&\leq \left\| \sum_{i=1}^{N} v_i^* (\kappa_n ) \ket{i} \right\|^2 \| P_n (E, \lambda) -\ketbra{n}{n} \| \\ &&\leq \left[ \sum_{i=1}^{N} \sup_{|\omega -\omega_n | <R_a}|v_i (\omega)|^2 \right] \frac{(\lambda/ \lambda_n )^2 }{1-(\lambda/\lambda_n )^2 } \to 0, \label{eqn:3.2.110c} \end{eqnarray} as $\lambda \to 0$, uniformly for all $E>0$, where we used $\sup_{\zeta \in \Gamma_n} \oint_{\Gamma_n} \| (K_0 -\zeta)^{-1} \| |d\zeta| \leq 2\pi$. Equation (\ref{eqn:3.2.100b}), together with the results (\ref{eqn:3.2.120}) and (\ref{eqn:3.2.110c}), means that Eq. (\ref{eqn:3.2.10}) is never satisfied for sufficiently small $\lambda$ with $|\lambda | < \min \{ \lambda_a, \lambda_b \}$, even if $\kappa_n (E,\lambda)=E$ holds. \qed

It is worth considering the opposite condition, $v_n (\omega_n )=0$. In this case, one could infer the existence of an eigenvalue inside the continuum from the decay process arising from the pole $z_{{\rm p}, n}$. Indeed, recalling the explicit form of the decay rate \cite{decayrate}, if this condition holds, the decay rate becomes small, so that a much slower decay occurs. One may then associate such behavior with the presence of a bound state \cite{example}, though it is not obvious whether this pole actually becomes an eigenenergy of $H$. Let us now evaluate an explicit value of $\lambda$ below which there is no positive eigenvalue of $H$. Under the assumption of the analyticity of $v_n$, one sees that if $|\lambda | < \min \{ \lambda_a, \lambda_b \}$, Eq. (\ref{eqn:3.2.120}) is quantified by using Eqs. (\ref{eqn:3.2.80}) and (\ref{eqn:3.2.165}) as \begin{eqnarray} &&\hspace{-5mm}\left| |v_n^* (\kappa_n (E,\lambda)) |^2 - |v_n^* (\omega_n ) |^2 \right| \nonumber \\ &&\leq \sup_{|\omega -\omega_n | <R_a} \left| \frac{d |v_n (\omega)|^2 }{d\omega} \right| |\kappa_n (E,\lambda) - \omega_n | \label{eqn:3.2.200a}\\ &&\leq \frac{\lambda^2}{ \lambda_n^2} \min_{m (\neq n)} \{ |\omega_n-\omega_m|/3 \} \sup_{\omega >0} \left| \frac{d |v_n (\omega)|^2 }{d\omega} \right| . \label{eqn:3.2.200b} \end{eqnarray} Therefore, by inserting Eqs. (\ref{eqn:3.2.110c}) and (\ref{eqn:3.2.200b}) into Eq. (\ref{eqn:3.2.100a}), the left-hand side of Eq. (\ref{eqn:3.2.100a}) is ensured to be positive, and no positive eigenenergy of $H$ exists, provided that $\lambda$ is chosen to satisfy the $N_+ +1$ inequalities \begin{equation} |\lambda | < \min \{ \lambda_a, \lambda_b \}, \label{eqn:3.2.205} \end{equation} and \begin{eqnarray} \hspace{-5mm}|v_n^* (\omega_n ) |^2 &>& \frac{\lambda^2}{\lambda_n^2} \min_{m (\neq n)} \{ |\omega_n-\omega_m|/3 \} \sup_{\omega >0} \left| \frac{d |v_n (\omega)|^2 }{d\omega} \right| \nonumber \\ &&+ \left[ \sum_{i=1}^{N} \sup_{|\omega -\omega_n | <R_a} |v_i (\omega)|^2 \right] \frac{\lambda^2/\lambda_n^2 }{1-\lambda^2/\lambda_n^2 } , \label{eqn:3.2.210} \end{eqnarray} for $n = N-N_+ +1, \ldots, N$. By solving Eq. (\ref{eqn:3.2.210}) for $\lambda$ explicitly, Eqs.
(\ref{eqn:3.2.205}) and (\ref{eqn:3.2.210}) are reduced to the single inequality \begin{equation} |\lambda | < \min \{ \lambda_a, \lambda_b, \bar{\lambda}_{N-N_+ +1}, \ldots, \bar{\lambda}_N \}, \label{eqn:3.2.220} \end{equation} with \begin{eqnarray} &&\hspace*{-3mm} \bar{\lambda}_n \hspace*{-1mm} = \hspace*{-1mm} \sqrt{ \frac{\lambda_n^2 }{2\beta_n}[ \alpha_n +\beta_n+\gamma_n - \sqrt{(\alpha_n +\beta_n+\gamma_n )^2-4\alpha_n \beta_n}] } \nonumber \\ && \hspace*{1mm} < \lambda_n , \label{eqn:3.2.230} \end{eqnarray} where $\alpha_n =|v_n^* (\omega_n ) |^2 $, $\beta_n =\min_{m (\neq n)} \{ |\omega_n-\omega_m|/3 \} \sup_{\omega >0} \left| d |v_n (\omega)|^2 /d\omega \right| $, and $\gamma_n =\sum_{i=1}^{N} \sup_{|\omega -\omega_n | <R_a} |v_i (\omega)|^2$. Indeed, writing $x=\lambda^2/\lambda_n^2$, Eq. (\ref{eqn:3.2.210}) becomes the quadratic inequality $\beta_n x^2 -(\alpha_n +\beta_n +\gamma_n )x +\alpha_n >0$ on $0\leq x<1$; since the quadratic equals $-\gamma_n <0$ at $x=1$, its smaller root lies below $1$ and yields Eq. (\ref{eqn:3.2.230}). In order to demonstrate Theorem \ref{thm:4.2}, we apply it to the spontaneous-emission process of the hydrogen atom interacting with the electromagnetic field \cite{Facchi(1998)}. We suppose that $|n\rangle$ is the product state of the $(n+1)p$ state of the atom and the vacuum state of the field, and $|\omega \rangle$ the product state of the $1s$ state of the atom and a one-photon state. Then, an initially excited atom is expected to make a transition to the ground state by emitting a photon. We treat the atom as a four-level system composed of the ground state and the three excited states $2p$, $3p$, and $4p$. The form factors corresponding to the $2p$-$1s$, $3p$-$1s$, and $4p$-$1s$ transitions were obtained as follows \cite{Seke(1994),Facchi(1998),Miyamoto(2005)}: \begin{eqnarray} \hspace*{-5mm} v_1^* (\omega) &=& i \Lambda_1^{1/2} \frac{(\omega/\Lambda_1 )^{1/2}}{[1+(\omega/\Lambda_1 )^2]^2} , \label{eqn:3.2.320a} \\ v_2^* (\omega) &=& i 81 \Lambda_1^{1/2} \frac{(\omega/\Lambda_2 )^{1/2} [1+2(\omega/\Lambda_2 )^2]} {128\sqrt{2} [1+(\omega/\Lambda_2 )^2]^3} , \label{eqn:3.2.320b} \\ v_3^* (\omega) &=& i 54 \sqrt{3} \Lambda_1^{1/2} (\omega/\Lambda_3 )^{1/2} \nonumber \\ && \times \frac{45+146(\omega/\Lambda_3 )^2 +125(\omega/\Lambda_3 )^4} {15625 [1+(\omega/\Lambda_3 )^2]^4} , \label{eqn:3.2.320c} \end{eqnarray} where $\Lambda_1=8.498 \times 10^{18} \ {\rm s}^{-1}$, $\Lambda_2=(8/9)\Lambda_1$, and $\Lambda_3=(10/12)\Lambda_1$ are the cutoff constants. One sees that these form factors satisfy all the conditions required in Theorem \ref{thm:4.2}. The coupling constant is given by $\lambda^2 =6.435 \times 10^{-9}$. The eigenvalues of $H_0$ are given by $\omega_n = \frac{4}{3}\Omega [1-(n+1)^{-2}]$ with $\Omega = 1.55 \times 10^{16} \ {\rm s}^{-1}$, all of which are embedded in the energy continuum. The Hamiltonian (\ref{eqn:2.60}) is then derived under the four-level approximation (i.e., $N=N_+=3$) and the rotating-wave approximation. The various parameters are numerically obtained as follows: $R_a=| \omega_2 - \omega_3 | /3 =(7/324)\Omega$, $\sup_{E>0} \|D(E) \|=-\delta_1 (E) =11.332 \Lambda_1$ at $E=0.6145 \Lambda_1$, $\lambda_1^2=5.45 \times 10^{-3}\Omega/\Lambda_1$, $\lambda_2^2=\lambda_3^2=\lambda_a^2=1.91 \times 10^{-3}\Omega/\Lambda_1$, $\alpha_1=1.82 \times 10^{-3}\Lambda_1$, $\alpha_2=4.87 \times 10^{-4}\Lambda_1$, $\alpha_3=1.99 \times 10^{-4}\Lambda_1$, $\beta_1=6.17 \times 10^{-2}\Omega$, $\beta_2=4.87 \times 10^{-3}\Omega$, $\beta_3=1.88 \times 10^{-3}\Omega$, $\gamma_1=2.45 \times 10^{-3}\Lambda_1$, $\gamma_2=3.04 \times 10^{-3}\Lambda_1$, $\gamma_3=2.45 \times 10^{-3}\Lambda_1$, from which Eq.
(\ref{eqn:3.2.230}) reads $\bar{\lambda}_1^2=4.18 \times 10^{-6}$, $\bar{\lambda}_2^2=5.01 \times 10^{-7}$, and $\bar{\lambda}_3^2=2.14 \times 10^{-7}$. Then, it follows that \begin{equation} \min \{ \lambda_a^2, \bar{\lambda}_{1}^2, \bar{\lambda}_{2}^2, \bar{\lambda}_3^2 \} = \bar{\lambda}_3^2 >\lambda^2 , \label{eqn:3.2.240} \end{equation} and thus Eq. (\ref{eqn:3.2.220}) holds. This conclusion indicates that the intrinsic values of the parameters characterizing the system do not allow any bound state; in fact, we have not observed any such state. It is worth noting that the upper bound estimated in Eq. (\ref{eqn:3.2.240}) is dominated by the factor $\lambda_3^2$, roughly speaking, the minimum level spacing over the maximum cutoff constant.

\section{Concluding remarks} \label{sec:5}

We have considered the eigenvalue problem for unstable multilevel systems, on the basis of the $N$-level Friedrichs model, where the eigenenergies may lie outside or possibly inside the continuum. The outside case is essentially determined by the location of the discrete level $\omega_n$ of the free Hamiltonian and the strength of the coupling constant $\lambda$. If $\omega_n$ lies outside the continuum, the corresponding eigenvalue always lies below $\omega_n$. If $\omega_n$ lies inside the continuum, the eigenvalue originating from $\omega_n$ can emerge from the continuum if $\lambda$ is chosen large enough. Such behaviors are similar to those seen in single-level cases; however, this is not the case if the form factors $v_n$ are linearly dependent. On the other hand, we have shown the absence of an eigenvalue lying inside the continuum in the weak-coupling case, under the condition that $v_n (\omega_n )\neq 0$ if $\omega_n $ lies inside the continuum. This statement is an extension of Lemma 2.1 in Ref. \cite{Davies(1974)}, where only identical form factors were considered and the upper bound for $|\lambda |$ required in the lemma was not estimated. We have evaluated this upper bound in our case, and it proves to be proportional to the minimum level spacing over the maximum cutoff constant. Hence, comparing this value with the actual $\lambda$, one can at least check the absence of such an eigenvalue, even when one cannot evaluate the reduced resolvent explicitly. At first sight, the normalization condition, i.e., Eq. (\ref{eqn:3.2.10}), does not seem to require zeros of the form factors for the presence of a bound-state eigenenergy inside the continuum, though this expectation fails in weak-coupling regimes. However, we still do not have a definite answer in other coupling regimes, where the multilevel effect may allow the presence of a bound-state eigenenergy inside the continuum without zeros of the form factors.

\section*{Acknowledgments} The author would like to thank Professor I.\ Ohba and Professor H.\ Nakazato for useful comments. He would also like to thank the Yukawa Institute for Theoretical Physics at Kyoto University, where this work was initiated during the YITP-04-15 workshop, Fundamental Problems and Applications of Quantum Field Theory. This work is partly supported by a Grant for the 21st Century COE Program at Waseda University from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

\appendix* \section{} In this appendix, we present Propositions \ref{pp:a.1} and \ref{pp:a.4}.
They state that the behavior of the energy shift $D(E)$ at small and large energies, respectively, is quite regular, without any divergence, under form-factor conditions that are often satisfied by actual systems. \begin{pp}\label{pp:a.1} Suppose that the function $\eta (\omega)$ belonging to $L^1 ([0, \infty ))$ is of the form \begin{equation} \eta (\omega ) := \omega^p r(\omega ), \label{eqn:a10} \end{equation} where $p>0$ and $r(\omega)$ is a $C^1$-function defined on $[0,\infty)$. It then holds that $\eta (\omega)/\omega \in L^1 ([0, \infty ))$ and \begin{equation} \int_{0}^{\infty} \frac{\eta (\omega)}{\omega} d\omega = \lim_{E \uparrow 0} \int_{0}^{\infty} \frac{\eta (\omega)}{\omega -E} d\omega =\lim_{E \downarrow 0} P\int_{0}^{\infty} \frac{\eta (\omega)}{\omega -E} d\omega . \label{eqn:a20} \end{equation} \end{pp} {\sl Proof} : From the proof of Proposition 3.2.2 in Ref. \cite{Exner(1985)}, the principal value on the right-hand side can be written in terms of an absolutely integrable function as \begin{equation} P\int_{0}^{\infty} \frac{\eta (\omega)}{\omega -E} d\omega =\int_{0}^{\infty} \frac{\eta (\omega)-\eta (E)\varphi_{\delta}(\omega-E)}{\omega -E} d\omega , \label{eqn:a30} \end{equation} for all $E>0$, where $\varphi_{\delta}(\omega)$ is a $C_0^{\infty}$-function with support $[-\delta , \delta ]$ ($0< \delta < E$), even with respect to the origin, and such that $\varphi_{\delta}(0)=1$. In the following, we choose $\varphi_{\delta}(\omega)=\exp[1-1/(1-(\omega/\delta)^2)]$ for $\omega \in (-\delta , \delta )$ and $0$ otherwise, and set $\delta=E/2$. On the other hand, since by the assumption (\ref{eqn:a10}) $\eta (\omega) /\omega$ is absolutely integrable, the first equality in Eq. (\ref{eqn:a20}) is obvious. Therefore, it is sufficient to show that \begin{equation} \lim_{E \downarrow 0} \int_{0}^{\infty} \left[ \frac{\eta (\omega)}{\omega} -\frac{\eta (\omega)-\eta (E)\varphi_{\delta}(\omega-E)}{\omega -E} \right] d\omega =0. \label{eqn:a40} \end{equation} Note that the integrand can be rewritten as \begin{eqnarray} &&\hspace*{-10mm}\frac{\eta (\omega)}{\omega} -\frac{\eta (\omega)-\eta (E)\varphi_{\delta}(\omega-E)}{\omega -E} \nonumber \\ &&\hspace*{-10mm}= -E\frac{\eta (\omega)}{\omega(\omega-E)} +\frac{\eta (E)\varphi_{\delta}(\omega-E)}{\omega -E} \label{eqn:a50} \\ &&\hspace*{-10mm}= \frac{\eta (E)\varphi_{\delta}(\omega-E)}{\omega} -E\frac{\eta (\omega)-\eta (E)\varphi_{\delta}(\omega-E)}{\omega(\omega -E)} . \label{eqn:a60} \end{eqnarray} Let us first consider the case where $\omega \in I:= (0, E/2] \cup [3E/2, \infty)$. Then, since $\varphi_{\delta} (\omega-E)=0$, we can use Eq. (\ref{eqn:a50}) to estimate the integrand: \begin{equation} \left| E\frac{\eta (\omega)}{\omega(\omega-E)} \right| \leq 2 \left| \frac{\eta (\omega)}{\omega} \right| , \label{eqn:a70} \end{equation} where the right-hand side is absolutely integrable and independent of $E$. Furthermore, it follows that $\lim_{E \downarrow 0} E \chi_{I} (\omega ) \eta (\omega)/[\omega(\omega-E)] =0$ for every $\omega \in (0, \infty )$, where $ \chi_{I} (\omega ) = 1$ ($\omega \in I$) or $0$ ($\omega \notin I$) is the characteristic function of $I$. Thus, by the dominated convergence theorem, we see that \begin{equation} \lim_{E \downarrow 0} \left( \int_{0}^{E/2} + \int_{3E/2}^{\infty} \right) E\frac{\eta (\omega)}{\omega(\omega-E)} d\omega = 0. \label{eqn:a80} \end{equation} For $\omega \in (E/2, 3E/2 )$, we can use Eq. (\ref{eqn:a60}).
The integral of the first term of Eq. (\ref{eqn:a60}) is estimated as \begin{equation} \begin{array}{l} {\displaystyle \left| \int_{E/2}^{3E/2} \frac{\eta (E)\varphi_{\delta}(\omega-E)}{\omega} d\omega \right|} \\ {\displaystyle \leq \frac{|\eta (E)|}{E/2} \int_{E/2}^{3E/2} \varphi_{\delta}(\omega-E) d\omega = |\eta (E)| \int_{-1}^{1} \varphi_{1}(x) dx \to 0, } \end{array} \label{eqn:a85} \end{equation} as $E \downarrow 0$. The second term of Eq. (\ref{eqn:a60}) is estimated by means of \begin{eqnarray} && |\eta(\omega)-\eta(E)\varphi_{\delta}(\omega-E)| \nonumber \\ && \leq |\eta(\omega) -\eta(E)|+|\eta(E)||1-\varphi_{\delta}(\omega-E)| . \label{eqn:a90} \end{eqnarray} The integral corresponding to the first term on the right-hand side of the above is evaluated as \begin{eqnarray} && \int_{E/2}^{3E/2} E\frac{|\eta (\omega)-\eta (E)|}{\omega|\omega -E|} d\omega \nonumber \\ &&\leq (\ln 3) E \sup_{E/2 \leq \omega \leq 3E/2} \left| \eta^{\prime}(\omega) \right| \label{eqn:a100} \\ &&\leq (\ln 3) E \left[ p E^{p-1} \max\{ ({\textstyle \frac{1}{2}})^{p-1}, ({\textstyle \frac{3}{2}})^{p-1}\} \sup_{\omega \in [0, 3E/2]} \left| r(\omega) \right| \right. \nonumber \\ &&~~~ + \left. \left(\frac{3E}{2}\right)^{p} \sup_{\omega \in [0, 3E/2]} \left| r^{\prime}(\omega) \right| \right] \to 0 \mbox{ as } E \downarrow 0, \label{eqn:a120} \end{eqnarray} where the prime denotes differentiation with respect to $\omega$. The integral corresponding to the last term on the right-hand side of Eq. (\ref{eqn:a90}) is estimated as \begin{eqnarray} &&\hspace*{-5mm} \int_{E/2}^{3E/2} E\frac{|\eta (E)||1-\varphi_{\delta}(\omega-E)|}{\omega|\omega -E|} d\omega \nonumber \\ &&\hspace*{-5mm}\leq (\ln 3) E |\eta (E)| \sup_{E/2 \leq \omega \leq 3E/2} \left| \varphi_{\delta}^{\prime} (\omega-E) \right| \label{eqn:a130} \\ &&\hspace*{-5mm}= 2 (\ln 3) |\eta (E)| \sup_{|x| \leq 1} \left| \varphi_{1}^{\prime}(x) \right| \rightarrow 0 ~~~(E \downarrow 0). \label{eqn:a140} \end{eqnarray} Thus, we obtain \begin{equation} \lim_{E\downarrow 0} \int_{E/2}^{3E/2} E\frac{\eta (\omega)-\eta (E)\varphi_{\delta}(\omega-E)}{\omega(\omega -E)} d\omega=0. \label{eqn:a150} \end{equation} Equations (\ref{eqn:a80}), (\ref{eqn:a85}), and (\ref{eqn:a150}) complete the proof of (\ref{eqn:a40}). \qed

\begin{pp}\label{pp:a.4} Suppose that the function $\eta (\omega)$ belongs to $L^1 ([0, \infty )) \cap C^1 ([0, \infty ))$, and satisfies $\sup_{\omega \geq \delta_0} |\eta (\omega)| <\infty$ and $\sup_{\omega \geq \delta_0} |\eta' (\omega)|<\infty $ for some $\delta_0 >0$. Then, \begin{equation} \sup_{E>\delta_0} \left| P\int_0^\infty \frac{\eta (\omega)}{\omega -E} d\omega \right| <\infty . \label{eqn:a290} \end{equation} \end{pp} {\sl Proof} : To examine this integral, we again use the expression (\ref{eqn:a30}) and divide the interval $[0, \infty )$ into $I_{\delta, E} = [E-\delta , E+\delta ] $ and $\overline{I_{\delta, E}} = [0, \infty )\setminus I_{\delta, E} $, where we assume $\delta_0 >\delta>0$. In the latter interval, we have the estimate $\chi_{\overline{I_{\delta, E}}} (\omega) |\eta (\omega)/(\omega -E)| \leq |\eta (\omega)|/\delta \in L^1 ([0, \infty ))$. Then, \begin{equation} \sup_{E>\delta_0} \left| \int_0^\infty \chi_{\overline{I_{\delta, E}}} (\omega) \frac{\eta (\omega)}{\omega -E} d\omega \right| \leq \frac{1}{\delta} \int_0^\infty |\eta (\omega)| d\omega <\infty . \label{eqn:a300} \end{equation} In the former interval, the integrand in Eq.
(\ref{eqn:a30}) is evaluated as \begin{eqnarray} \left| \frac{\eta (\omega)-\eta (E) \varphi_\delta (\omega-E)}{\omega -E} \right| &\leq& \sup_{\omega \in I_{\delta, E} } |\eta' (\omega)| \nonumber \\ && +|\eta (E)| \sup_{|\omega |\leq \delta } |{\varphi_\delta}' (\omega)| , \label{eqn:a310} \end{eqnarray} which results in \begin{eqnarray} && \sup_{E>\delta_0} \left| \int_0^\infty \chi_{I_{\delta, E}} (\omega) \frac{\eta (\omega)-\eta (E) \varphi_\delta (\omega-E)}{\omega -E} d\omega \right| \nonumber \\ &&\leq 2\delta \left[ \sup_{E>\delta_0} |\eta' (E)| +\sup_{E>\delta_0} |\eta (E)| \sup_{|\omega |\leq \delta } |{\varphi_\delta}' (\omega)| \right] <\infty, \nonumber \\ && \label{eqn:a320} \end{eqnarray} where we used the assumptions on $\eta (\omega)$ in the statement. Combining Eq. (\ref{eqn:a300}) with Eq. (\ref{eqn:a320}), Eq. (\ref{eqn:a290}) is obtained. \qed

\end{document}
arXiv
Viggo Brun

Viggo Brun (13 October 1885 – 15 August 1978) was a Norwegian professor, mathematician and number theorist.[1]

Viggo Brun Born: 13 October 1885, Lier, Norway Died: 15 August 1978, Drøbak, Norway Citizenship: Norway Known for: Brun's theorem, Brun sieve Scientific career Fields: Number theory

Contributions

In 1915, he introduced a new method, based on Legendre's version of the sieve of Eratosthenes, now known as the Brun sieve, which addresses additive problems such as Goldbach's conjecture and the twin prime conjecture. He used it to prove that there exist infinitely many integers n such that n and n+2 have at most nine prime factors, and that all large even integers are the sum of two numbers with at most nine prime factors.[2] He also showed that the sum of the reciprocals of twin primes converges to a finite value, now called Brun's constant: by contrast, the sum of the reciprocals of all primes is divergent. He developed a multi-dimensional continued fraction algorithm in 1919–1920 and applied this to problems in musical theory. He also served as praeses of the Royal Norwegian Society of Sciences and Letters in 1946.[3]
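The partial sums defining Brun's constant are easy to compute directly. A minimal Python sketch (ours, not from the article; the cutoff N and helper names are illustrative):

    def primes_upto(n):
        # Sieve of Eratosthenes, the classical starting point of Brun's method.
        sieve = bytearray([1]) * (n + 1)
        sieve[0:2] = b"\x00\x00"
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
        return [i for i in range(2, n + 1) if sieve[i]]

    N = 10 ** 6
    ps = primes_upto(N + 2)
    pset = set(ps)
    partial = sum(1.0 / p + 1.0 / (p + 2)
                  for p in ps if p <= N and p + 2 in pset)
    print(partial)  # roughly 1.71 for N = 10**6

The convergence is very slow: the full sum is about 1.902, so Brun's theorem guarantees finiteness, while good numerical values of the constant come from extrapolation rather than direct summation.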
Biography

Brun was born at Lier in Buskerud, Norway. He studied at the University of Oslo and began research at the University of Göttingen in 1910. In 1923, Brun became a professor at the Technical University in Trondheim and in 1946 a professor at the University of Oslo.[4] He retired in 1955 at the age of 70 and died in 1978, at the age of 92, at Drøbak in Akershus, Norway.[5]

See also

• Brun's theorem
• Brun-Titchmarsh theorem
• Brun sieve
• Sieve theory

References

1. "Viggo Brun". numbertheory.org. 18 June 2003. Retrieved January 1, 2017.
2. J. J. O'Connor; E. F. Robertson. "Viggo Brun". School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved January 1, 2017.
3. Bratberg, Terje (1996). "Vitenskapsselskapet". In Arntzen, Jon Gunnar (ed.). Trondheim byleksikon. Oslo: Kunnskapsforlaget. pp. 599–600. ISBN 82-573-0642-8.
4. "Viggo Brun". Store norske leksikon. Retrieved January 1, 2017.
5. Bent Birkeland. "Viggo Brun". Norsk biografisk leksikon. Retrieved January 1, 2017.

Other sources

• H. Halberstam and H. E. Richert, Sieve Methods, Academic Press (1974). ISBN 0-12-318250-6. Gives an account of Brun's sieve.
• C. J. Scriba, Viggo Brun, Historia Mathematica 7 (1980), 1–6.
• C. J. Scriba, Zur Erinnerung an Viggo Brun, Mitt. Math. Ges. Hamburg 11 (1985), 271–290.

External links

• Brun's Constant
• Brun's Pure Sieve
• Viggo Brun personal archive at the NTNU University Library, Dorabiblioteket

Wikipedia
\begin{definition}[Definition:Infinity] Informally, the term '''infinity''' is used to mean '''some infinite number''', but this concept falls very far short of a usable definition. The symbol $\infty$ (supposedly invented by {{AuthorRef|John Wallis}}) is often used in this context to mean '''an infinite number'''. However, outside of its formal use in the definition of limits, its use is strongly discouraged until you know what you're talking about. It is defined as having the following properties:
:$\forall n \in \Z: n < \infty$
:$\forall n \in \Z: n + \infty = \infty$
:$\forall n \in \Z: n \times \infty = \infty$
:$\infty^2 = \infty$
Similarly, the quantity written as $-\infty$ is defined as having the following properties:
:$\forall n \in \Z: -\infty< n$
:$\forall n \in \Z: -\infty + n = -\infty$
:$\forall n \in \Z: -\infty \times n = -\infty$
:$\paren {-\infty}^2 = -\infty$
The latter result seems wrong when you recall that the square of a negative number is positive, but remember that infinity is not exactly a number as such. \end{definition}
ProofWiki
\begin{definition}[Definition:Group Relation on Set] Let $X$ be a set. A '''group relation''' on $X$ is a pair $(u, v)$ where $u$ and $v$ are group words on $X$. A '''group relation''' $(u, v)$ is also denoted $u=v$. \end{definition}
ProofWiki
De Groot dual

In mathematics, in particular in topology, the de Groot dual (after Johannes de Groot) of a topology τ on a set X is the topology τ* whose closed sets are generated by the compact saturated subsets of (X, τ).

References

• R. Kopperman, Asymmetry and duality in topology, Topology and its Applications 66(1) (1995), 1–39.
Wikipedia
Stochastic modeling of multiwavelength variability of the classical BL Lac object OJ 287 on timescales ranging from decades to hours (1709.04457) A. Goyal, L. Stawarz, S. Zola, V. Marchenko, M. Soida, K. Nilsson, S. Ciprini, A. Baran, M. Ostrowski, P. J. Wiita, Gopal-Krishna, A. Siemiginowska, M. Sobolewska, S. Jorstad, A. Marscher, M. F. Aller, H. D. Aller, T. Hovatta, D. B. Caton, D. Reichart, K. Matsumoto, K. Sadakane, K. Gazeas, M. Kidger, V. Piirola, H. Jermak, F. Alicavus, K. S. Baliyan, A. Baransky, A. Berdyugin, P. Blay, P. Boumis, D. Boyd, Y. Bufan, M. Campas Torrent, F. Campos, J. Carrillo Gomez, J. Dalessio, B. Debski, D. Dimitrov, M. Drozdz, H. Er, A. Erdem, A. Escartin Perez, V. Fallah Ramazani, A. V. Filippenko, E. Gafton, F. Garcia, V. Godunova, F. Gomez Pinilla, M. Gopinathan, J. B. Haislip, S. Haque, J. Harmanen, R. Hudec, G. Hurst, K. M. Ivarsen, A. Joshi, M. Kagitani, N. Karaman, R. Karjalainen, N. Kaur, D. Koziel-Wierzbowska, E. Kuligowska, T. Kundera, S. Kurowski, A. Kvammen, A. P. LaCluyze, B. C. Lee, A. Liakos, J. Lozano de Haro, I. Mohammed, J. P. Moore, M. Mugrauer, R. Naves Nogues, A. W. Neely, W. Ogloza, S. Okano, U. Pajdosz, J. C. Pandey, M. Perri, G. Poyner, J. Provencal, T. Pursimo, A. Raj, B. Rajkumar, R. Reinthal, T. Reynolds, J. Saario, S. Sadegi, T. Sakanoi, J. L. Salto Gonzalez, Sameer, A. Heung, O. Simon, M. Siwak, T. Schweyer, F. C. Soldan Alfaro, E. Sonbas, J. Strobl, L. O. Takalo, L. Tremosa Espasa, J. R. Valdes, V. V. Vasylenko, F. Verrecchia, J. R. Webb, M. Yoneda, M. Zejmo, W. Zheng, P. Zielinski, J. Janik, V. Chavushyan, C. C. Cheung, M. Giroletti July 10, 2018 astro-ph.HE We present the results of our power spectral density analysis for the BL Lac object OJ 287, utilizing the Fermi-LAT survey at high-energy gamma-rays, Swift-XRT in X-rays, several ground-based telescopes and the Kepler satellite in the optical, and radio telescopes at GHz frequencies. The light curves are modeled in terms of continuous-time auto-regressive moving average (CARMA) processes. Owing to the inclusion of the Kepler data, we were able to construct for the first time the optical variability power spectrum of a blazar without any gaps across ~6 dex in temporal frequencies. Our analysis reveals that the radio power spectra are of a colored-noise type on timescales ranging from tens of years down to months, with no evidence for breaks or other spectral features. The overall optical power spectrum is also consistent with a colored noise on the variability timescales ranging from 117 years down to hours, with no hints of any quasi-periodic oscillations. The X-ray power spectrum resembles the radio and optical power spectra on the analogous timescales ranging from tens of years down to months. Finally, the gamma-ray power spectrum is noticeably different from the radio, optical, and X-ray power spectra of the source: we have detected a characteristic relaxation timescale in the Fermi-LAT data, corresponding to ~150 days, such that on timescales longer than this, the power spectrum is consistent with uncorrelated (white) noise, while on shorter variability timescales there is correlated (colored) noise.

Radio and optical intra-day variability observations of five blazars (1705.00124) X. Liu, P.P. Yang, J. Liu, B.R. Liu, S.M. Hu, O.M.
Kurtanidze, S. Zola, A. Kraus, T.P. Krichbaum, R.Z. Su, K. Gazeas, K. Sadakane, K. Nilson, D.E. Reichart, M. Kidger, K. Matsumoto, S. Okano, M. Siwak, J.R. Webb, T. Pursimo, F. Garcia, R. Naves Nogues, A. Erdem, F. Alicavus, T. Balonek, S.G. Jorstad April 29, 2017 astro-ph.GA We carried out a pilot campaign of radio and optical band intra-day variability (IDV) observations of five blazars (3C66A, S5 0716+714, OJ287, B0925+504, and BL Lacertae) on December 18-21, 2015, using the radio telescope in Effelsberg (Germany) and several optical telescopes in Asia, Europe, and America. After calibration, the light curves from both the 5 GHz radio band and the optical R band were obtained, although the data were not smoothly sampled over the sampling period of about four days. We tentatively analyse the amplitudes and time scales of the variabilities, and any possible periodicity. The blazars vary significantly in the radio (except 3C66A and BL Lacertae, with only marginal variations) and optical bands on intra- and inter-day time scales, and the source B0925+504 exhibits a strong quasi-periodic radio variability. No significant correlation between the radio- and optical-band variability appears in the five sources, which we attribute to the radio IDV being dominated by interstellar scintillation whereas the optical variability comes from the source itself. However, the radio- and optical-band variations appear to be weakly correlated in some sources and should be investigated with well-sampled data from future observations.

Detection of Possible Quasi-periodic Oscillations in the Long-term Optical Light Curve of the BL Lac Object OJ 287 (1609.02388) G. Bhatta, S. Zola, Ł. Stawarz, M. Ostrowski, M. Winiarski, W. Ogłoza, M. Dróżdz, M. Siwak, A. Liakos, D. Kozieł-Wierzbowska, K. Gazeas, B. Debski, T. Kundera, G. Stachowski, V. S. Paliya Sept. 8, 2016 astro-ph.HE Detection of periodicity in the broad-band non-thermal emission of blazars has so far proven elusive. However, there are a number of scenarios which could lead to quasi-periodic variations in blazar light curves. For example, orbital or thermal/viscous periods of accreting matter around central supermassive black holes could, in principle, be imprinted in the multi-wavelength emission of small-scale blazar jets, carrying crucial information about plasma conditions within the jet-launching regions. In this paper, we present the results of our time series analysis of the ~9.2 year-long, exceptionally well-sampled optical light curve of the BL Lac OJ 287. The study primarily uses data from our own observations performed at the Mt. Suhora and Kraków Observatories in Poland and at the Athens Observatory in Greece. Additionally, SMARTS observations were used to fill in some of the gaps in the data. The Lomb-Scargle Periodogram and the Weighted Wavelet Z-transform methods were employed to search for possible QPOs in the resulting optical light curve of the source. Both methods consistently yielded a possible quasi-periodic signal around periods of ~400 and ~800 days, the former with a significance (over the underlying colored noise) of >= 99%. A number of likely explanations are discussed, with preference given to a modulation of the jet production efficiency by highly magnetized accretion disks. This supports the previous findings and the interpretation reported recently in the literature for OJ 287 and other blazar sources.
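As an illustration of the kind of periodogram search described in the abstract above, here is a minimal Python sketch (ours, not from the paper) that recovers a ~400-day signal from irregularly sampled data with astropy's Lomb-Scargle implementation; the sampling, noise level, and seed are invented for the example:

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 9.2 * 365.25, 900))   # ~9.2 yr of uneven epochs [days]
    y = np.sin(2 * np.pi * t / 400.0) + rng.normal(0, 0.5, t.size)  # toy 400 d signal + noise

    ls = LombScargle(t, y)
    freq, power = ls.autopower(maximum_frequency=1.0 / 50.0)  # ignore periods below 50 days
    print("best period [d]:", 1.0 / freq[np.argmax(power)])
    print("white-noise FAP:", ls.false_alarm_probability(power.max()))

Note that the built-in false-alarm probability assumes white noise; as the abstract stresses, a claimed blazar QPO must instead be assessed against the colored-noise background, for example by simulating light curves with the observed power spectrum.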
Multifrequency Photo-polarimetric WEBT Observation Campaign on the Blazar S5 0716+714: Source Microvariability and Search for Characteristic Timescales (1608.03531) G. Bhatta, Ł. Stawarz, M. Ostrowski, A. Markowitz, H. Akitaya, A. A. Arkharov, R. Bachev, E. Benítez, G. A. Borman, D. Carosati, A. D. Cason, R. Chanishvili, G. Damljanovic, S. Dhalla, A. Frasca, D. Hiriart, S-M. Hu, R. Itoh, D. Jableka, S. Jorstad, M. D. Jovanovic, K. S. Kawabata, S. A. Klimanov, O. Kurtanidze, V. M. Larionov, D. Laurence, G. Leto, A. P. Marscher, J. W. Moody, Y. Moritani, J. M. Ohlert, A. Di Paola, C. M. Raiteri, N. Rizzi, A. C. Sadun, M. Sasada, S. Sergeev, A. Strigachev, K. Takaki, I. S. Troitsky, T. Ui, M. Villata, O. Vince, J. R. Webb, M. Yoshida, S. Zola Aug. 11, 2016 astro-ph.HE Here we report on the results of the WEBT photo-polarimetric campaign targeting the blazar S5 0716+71, organized in March 2014 to monitor the source simultaneously in BVRI and near-IR filters. The campaign resulted in an unprecedented dataset spanning ~110 h of nearly continuous, multi-band observations, including two sets of densely sampled polarimetric data, mainly in the R filter. During the campaign, the source displayed pronounced variability with peak-to-peak variations of about 30% and "bluer-when-brighter" spectral evolution, consisting of a day-timescale modulation with superimposed hour-long microflares characterized by ~0.1 mag flux changes. We performed an in-depth search for quasi-periodicities in the source light curve; hints of oscillations on timescales of ~3 h and ~5 h do not represent highly significant departures from a pure red-noise power spectrum. We observed that, at a certain configuration of the optical polarization angle relative to the positional angle of the innermost radio jet in the source, changes in the polarization degree led the total flux variability by about 2 h; meanwhile, when the relative configuration of the polarization and jet angles altered, no such lag could be noted. The microflaring events, when analyzed as separate pulse emission components, were found to be characterized by a very high polarization degree (>30%) and polarization angles which differed substantially from the polarization angle of the underlying background component, or from the radio jet positional angle. We discuss the results in the general context of blazar emission and energy dissipation models.

Photometric, Spectroscopic and Orbital Period Study of Three Early Type Semi-detached Systems: XZ Aql, UX Her and AT Peg (1607.06930) S. Zola, O. Basturk, A. Liakos, K. Gazeas, H.V. Senavci, R.H. Nelson, I. Ozavci, B. Zakrzewski, M. Yilmaz July 23, 2016 astro-ph.SR In this paper we present a combined photometric, spectroscopic and orbital period study of three early-type eclipsing binary systems: XZ Aql, UX Her, and AT Peg. As a result, we have derived the absolute parameters of their components and, on that basis, we discuss their evolutionary states. Furthermore, we compare their parameters with those of other binary systems and with the theoretical models. An analysis of all available up-to-date times of minima indicated that all three systems studied here show cyclic orbital changes; their origin is discussed in detail. Finally, we performed a frequency analysis for possible pulsational behavior, and as a result we suggest that XZ Aql hosts a δ Scuti component.

Primary black hole spin in OJ287 as determined by the General Relativity centenary flare (1603.04171) M.
J. Valtonen, S. Zola, S. Ciprini, A. Gopakumar, K. Matsumoto, K. Sadakane, M. Kidger, K. Gazeas, K. Nilsson, A. Berdyugin, V. Piirola, H. Jermak, K. S. Baliyan, F. Alicavus, D. Boyd, M. Campas Torrent, F. Campos, J. Carrillo Gomez, D. B. Caton, V. Chavushyan, J. Dalessio, B. Debski, D. Dimitrov, M. Drozdz, H. Er, A. Erdem, A. Escartin Perez, V. Fallah Ramazani, A. V. Filippenko, S. Ganesh, F. Garcia, F. Gomez Pinilla, M. Gopinathan, J. B. Haislip, R. Hudec, G. Hurst, K. M. Ivarsen, M. Jelinek, A. Joshi, M. Kagitani, N. Kaur, W. C. Keel, A. P. LaCluyze, B. C. Lee, E. Lindfors, J. Lozano de Haro, J. P. Moore, M. Mugrauer, R. Naves Nogues, A. W. Neely, R. H. Nelson, W. Ogloza, S. Okano, J. C. Pandey, M. Perri, P. Pihajoki, G. Poyner, J. Provencal, T. Pursimo, A. Raj, D. E. Reichart, R. Reinthal, S. Sadegi, T. Sakanoi, J. L. Salto Gonzalez, T. Schweyer, M. Siwak, F. C. Soldan Alfaro, E. Sonbas, I. Steele, J. T. Stocke, J. Strobl, L. O. Takalo, T. Tomov, L. Tremosa Espasa, J. R. Valdes, J. Valero Perez, F. Verrecchia, J. R. Webb, M. Yoneda, M. Zejmo, W. Zheng, J. Telting, J. Saario, T. Reynolds, A. Kvammen, E. Gafton, R. Karjalainen, J. Harmanen, P. Blay March 14, 2016 astro-ph.HE OJ287 is a quasi-periodic quasar with roughly 12 year optical cycles. It displays prominent outbursts which are predictable in a binary black hole model. The model predicted a major optical outburst in December 2015. We found that the outburst did occur within the expected time range, peaking on 2015 December 5 at magnitude 12.9 in the optical R-band. Based on Swift/XRT satellite measurements and optical polarization data, we find that it included a major thermal component. Its timing provides an accurate estimate for the spin of the primary black hole, chi = 0.313 +- 0.01. The present outburst also confirms the established general relativistic properties of the system, such as the loss of orbital energy to gravitational radiation at the 2% accuracy level, and it opens up the possibility of testing the black-hole no-hair theorem with 10% accuracy during the present decade.

Precursor flares in OJ 287 (1212.5206) P. Pihajoki, M. Valtonen, S. Zola, A. Liakos, M. Drozdz, M. Winiarski, W. Ogloza, D. Koziel-Wierzbowska, J. Provencal, K. Nilsson, A. Berdyugin, E. Lindfors, R. Reinthal, A. Sillanpää, L. Takalo, M.M.M. Santangelo, H. Salo, S. Chandra, S. Ganesh, K.S. Baliyan, S.A. Coggins-Hill, A. Gopakumar Dec. 20, 2012 astro-ph.HE We have studied the three most recent precursor flares in the light curve of the blazar OJ 287, invoking the presence of a precessing binary black hole in the system to explain the nature of these flares. Precursor flare timings from the historical light curves are compared with theoretical predictions from our model, which incorporates the effects of an accretion disk and a post-Newtonian description of the binary black hole orbit. We find that the precursor flares coincide with the secondary black hole descending towards the accretion disk of the primary black hole from the observed side, with a mean z-component of approximately z_c = 4000 AU. We use this model of precursor flares to predict that a precursor flare of similar nature should happen around 2020.96, before the next major outburst in 2022.

Spectroscopic and Photometric Study of the Contact Binary BO CVn (1204.3584) S. Zola, R. H. Nelson, V. Senavci, T. Szymanski, A. Kuzmicz, M. Winiarski, D. Jableka April 16, 2012 astro-ph.SR We present the results of the study of the contact binary system BO CVn.
We have obtained the physical parameters of the components based on a combined analysis of new, multi-color light curves and the spectroscopic mass ratio. This is the first time the latter has been determined for this object. We derived a contact configuration for the system with a very high filling factor of about 88 percent. We were able to reproduce the observed light curve, namely the flat bottom of the secondary minimum, only when a third light was added to the list of free parameters. The resulting third-light contribution is significant, about 20-24 percent, while the absolute parameters of the components are: M1=1.16, M2=0.39, R1=1.62 and R2=1.00 (in solar units). The O-C diagram shows an upward parabola which, under the conservative mass transfer assumption, would correspond to a mass transfer rate of dM/dt = 6.3 x 10^{-8} Msun/yr, matter being transferred from the less massive component to the more massive one. No cyclic, short-period variations have been found in the O-C diagram (but longer-term variations remain a possibility).

Empirical Determination of Convection Parameters in White Dwarfs I: Whole Earth Telescope Observations of EC14012-1446 (1204.2558) J. L. Provencal, M. H. Montgomery, A. Kanaan, S. E. Thompson, J. Dalessio, H. L. Shipman, D. Childers, J. C. Clemens, R. Rosen, P. Henrique, A. Bischoff-Kim, W. Strickland, D. Chandler, B. Walter, T. K. Watson, B. Castanheira, S. Wang, G. Handler, M. Wood, S. Vennes, P. Nemeth, S. O. Kepler, M. Reed, A. Nitta, S. J. Kleinman, T. Brown, S.-L. Kim, D. Sullivan, Wen-Ping Chen, M. Yang, Chia-You Shih, X. J. Jiang, A. V. Sergeev, A. Maksim, R. Janulis, K. S. Baliyan, H. O. Vats, S. Zola, A. Baran, M. Winiarski, W. Ogloza, M. Paparo, Z. Bognar, P. Papics, D. Kilkenny, R. Sefako, D. Buckley, N. Loaring, A. Kniazev, R. Silvotti, S. Galleti, T. Nagel, G. Vauclair, N. Dolez, J. R. Fremy, J. Perez, J. M. Almenara, L. Fraga
The radio galaxy both exhibits a highly complex, X-like structure and shows signs of recurrent activity in the form of double-double morphology. The outer lobes of CGCG 292-057 are characterized by low radio power, P_{1400MHz} \simeq 2 * 10^{24} W\Hz^{-1}, placing this source below the FRII/FRI luminosity threshold, and are highly polarized (almost 20 per cent at 1400 MHz) as is typical of X-shaped radio sources. The host is a LINER-type galaxy with a relatively low black hole mass and double-peaked narrow emission lines. These features make this galaxy a primary target for studies of merger-triggered radio activity. Whole Earth Telescope Observations of the subdwarf B star KPD 1930+2752: A rich, short period pulsator in a close binary (1011.0387) M. D. Reed, S.L. Harms, S. Poindexter, A.-Y. Zhou, J.R. Eggen, M.A. Morris, A.C. Quint, S. McDaniel, A. Baran, N. Dolez, S. D. Kawaler, D. W. Kurtz, P. Moskalik, R. Riddle, S. Zola, R. H. Ostensen, J.-E. Solheim, S.O. Kepler, A. F. M. Costa, J. L. Provencal, F. Mullally, D. W. Winget, M. Vuckovic, R. Crowe, D. Terry, R. Avila, B. Berkey, S. Stewart, J. Bodnarik, D. Bolton, P.-M. Binder, K. Sekiguchi, D. J. Sullivan, S.-L. Kim, W.-P. Chen, C.-W. Chen, H.-C. Lin, X.-J. Jian, H. Wu, J.-P. Gou, Z. Liu, E. Leibowitz, Y. Lipkin, C. Akan, O. Cakirli, R. Janulis, R. Pretorius, W. Ogloza, G. Stachowski, M. Paparo, R. Szabo, Z. Csubry, D. Zsuffa, R. Silvotti, S. Marinoni, I. Bruni, G. Vauclair, M. Chevreton, J.M. Matthews, C. Cameron, H. Pablo Nov. 10, 2010 astro-ph.SR KPD 1930+2752 is a short-period pulsating subdwarf B (sdB) star. It is also an ellipsoidal variable with a known binary period just over two hours. The companion is most likely a white dwarf and the total mass of the system is close to the Chandresakhar limit. In this paper we report the results of Whole Earth Telescope (WET) photometric observations during 2003 and a smaller multisite campaign from 2002. From 355 hours of WET data, we detect 68 pulsation frequencies and suggest an additional 13 frequencies within a crowded and complex temporal spectrum between 3065 and 6343 $\mu$Hz (periods between 326 and 157 s). We examine pulsation properties including phase and amplitude stability in an attempt to understand the nature of the pulsation mechanism. We examine a stochastic mechanism by comparing amplitude variations with simulated stochastic data. We also use the binary nature of KPD 1930+2752 for identifying pulsation modes via multiplet structure and a tidally-induced pulsation geometry. Our results indicate a complicated pulsation structure that includes short-period ($\approx 16$ h) amplitude variability, rotationally split modes, tidally-induced modes, and some pulsations which are geometrically limited on the sdB star. Pulsational Mapping of Calcium Across the Surface of a White Dwarf (1003.3374) Susan E. Thompson, M. H. Montgomery, T. von Hippel, A. Nitta, J. Dalessio, J. Provencal, W. Strickland, J. A. Holtzman, A. Mukadam, D. Sullivan, T. Nagel, D. Koziel-Wierzbowska, S. Zola, T. Kundera, M. Winiarski, M. Drozdz, E. Kuligowska, W. Ogloza, Zs. Bognar, G. Handler, A. Kanaan, T. Ribeira, R. Rosen, D. Reichart, J. Haislip, B. N. Barlow, B. H. Dunlap, K. Ivarsen, A. LaCluyze, F. Mullally March 26, 2010 astro-ph.SR We constrain the distribution of calcium across the surface of the white dwarf star G29-38 by combining time series spectroscopy from Gemini-North with global time series photometry from the Whole Earth Telescope. G29-38 is actively accreting metals from a known debris disk. 
Since the metals sink significantly faster than they mix across the surface, any inhomogeneity in the accretion process will appear as an inhomogeneity of the metals on the surface of the star. We measure the flux amplitudes and the calcium equivalent width amplitudes for two large pulsations excited on G29-38 in 2008. The ratio of these amplitudes best fits a model for polar accretion of calcium and rules out equatorial accretion. Multi-ring structure of the eclipsing disk in EE Cep - possible planets? (0910.0432) C. Galan, M. Mikolajewski, T. Tomov, E. Swierczynski, M. Wiecek, T. Brozek, G. Maciejewski, P. Wychudzki, M. Hajduk, P. T. Rozanski, E. Ragan, B. Budzisz, P. Dobierski, S. Frackowiak, M. Kurpinska-Winiarska, M. Winiarski, S. Zola, W. Ogloza, A. Kuzmicz, M. Drozdz, E. Kuligowska, J. Krzesinski, T. Szymanski, M. Siwak, T. Kundera, B. Staels, J. Hopkins, J. Pye, L. Elder, G. Myers, D. Dimitrov, V. Popov, E. Semkov, S. Peneva, D. Kolev, I. Iliev, I. Barzova, I. Stateva, N. Tomov, S. Dvorak, I. Miller, L. Brat, P. Niarchos, A. Liakos, K. Gazeas, A. Pigulski, G. Kopacki, A. Narwid, A. Majewska, M. Steslicki, E. Niemczura, Y. Ogmen, A. Oksanen, H. Kucakova, T. A. Lister, T. A. Heras, A. Dapergolas, I. Bellas-Velidis, R. Kocian, A. Majcher Oct. 2, 2009 astro-ph.SR The photometric and spectroscopic observational campaign organized for the 2008/9 eclipse of EE Cep revealed features, which indicate that the eclipsing disk in the EE Cep system has a multi-ring structure. We suggest that the gaps in the disk can be related to the possible planet formation. The chromospherically--active binary CF Tuc revisited (0905.2905) D. Dogru, A. Erdem, S. S. Dogru, S. Zola May 18, 2009 astro-ph.SR New high-resolution spectra, of the chromospherically active binary system CF Tuc, taken at the Mt. John University Observatory in 2007, were analyzed using two methods: cross-correlation and Fourier--based disentangling. As a result, new radial velocity curves of both components were obtained. The resulting orbital elements of CF Tuc are: $a_{1}{\sin}i$=$0.0254\pm0.0001$ AU, $a_{2}{\sin}i$=$0.0228\pm0.0001$ AU, $M_{1}{\sin}i$=$0.902\pm0.005$ $M_{\odot}$, and $M_{2}{\sin}i$=$1.008\pm0.006$ $M_{\odot}$. The cooler component of the system shows H$\alpha$ and CaII H & K emissions. Our spectroscopic data and recent $BV$ light curves were solved simultaneously using the Wilson-Devinney code. A dark spot on the surface of the cooler component was assumed to explain large asymmetries observed in the light curves. The following absolute parameters of the components were determined: $M_{1}$=$1.11\pm0.01$ $M_{\odot}$, $M_{2}$=$1.23\pm0.01$ $M_{\odot}$, $R_{1}$=$1.63\pm0.02$ $R_{\odot}$, $R_{2}$=$3.60\pm0.02$ $R_{\odot}$, $L_{1}$=$3.32\pm0.51$ $L_{\odot}$ and $L_{2}$=$3.91\pm0.84$ $L_{\odot}$. The orbital period of the system was studied using the O-C analysis. The O-C diagram could be interpreted in terms of either two abrupt changes or a quasi-sinusoidal form superimposed on a downward parabola. These variations are discussed by reference to the combined effect of mass transfer and mass loss, the Applegate mechanism and also a light-time effect due to the existence of a massive third body (possibly a black hole) in the system. The distance to CF Tuc was calculated to be $89\pm6$ pc from the dynamic parallax, neglecting interstellar absorption, in agreement with the Hipparcos value. Detection of a tertiary brown dwarf companion in the sdB-type eclipsing binary HS 0705+6700 (0903.1357) S. Qian, L. Zhu, S. Zola, W. 
Liao, L. Liu, L. Li, M. Winiarski, E. Kuligowska, J. Kreiner HS 0705+6700 is a short-period (P=2.3 hours), close binary containing a hot sdB-type primary and a fully convective secondary. We have monitored this eclipsing binary for more than 2 years and, as a result, 32 times of light minimum were obtained. Based on our new eclipse times together with those compiled from the literature, we find that the O-C curve of HS 0705+6700 shows a cyclic variation with a period of 7.15 years and a semiamplitude of 92.4 s. The periodic change was analyzed in terms of the light-travel time effect that may be due to the presence of a tertiary companion. The mass of the third body is determined to be M3 sin i = 0.0377 (+/-0.0043) Msun when a total mass of 0.617 Msun for HS 0705+6700 is adopted. For orbital inclinations i >= 32.8 degrees, the mass of the tertiary component would be below the stable hydrogen-burning limit of M3 ~ 0.072 Msun, and thus it would be a brown dwarf. The third body orbits the sdB-type binary at a distance shorter than 3.6 astronomical units (AU). HS 0705+6700 was formed through common-envelope evolution after the primary became a red giant. The detection of a sub-stellar companion at this distance from the binary places constraints on stellar evolution in such systems and on the interactions between red giants and their companions.
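The quoted companion mass can be checked with a standard light-travel-time calculation. The sketch below is an editorial illustration, not the authors' analysis: it assumes the usual mass-function relation $f(m_3) = (a_{12}\sin i)^3/P^2$ (in AU, years, and solar masses) with $a_{12}\sin i = c\,A$, where $A = 92.4$ s is the O-C semi-amplitude quoted in the abstract.

```python
# Hedged sketch: recover m3 sin(i) for HS 0705+6700 from the quoted
# O-C semi-amplitude and period; not the authors' code.
C_LIGHT_AU_PER_S = 1.0 / 499.005  # speed of light in AU per second

A = 92.4        # O-C semi-amplitude in seconds (from the abstract)
P = 7.15        # period of the cyclic O-C variation in years
M_BIN = 0.617   # adopted total mass of the sdB binary in solar masses

a12_sini = C_LIGHT_AU_PER_S * A   # projected orbit of the binary, in AU
f_m = a12_sini**3 / P**2          # mass function in solar masses

# Solve f_m = (m3 sin i)^3 / (M_BIN + m3)^2 by fixed-point iteration,
# taking sin i = 1 to obtain the minimum companion mass.
m3 = 0.05
for _ in range(100):
    m3 = (f_m * (M_BIN + m3) ** 2) ** (1.0 / 3.0)

print(f"mass function f(m3) = {f_m:.3e} Msun")
print(f"minimum companion mass m3 sin i ~ {m3:.4f} Msun")  # ~0.038, cf. 0.0377
```

Running this reproduces the reported 0.0377 Msun to within rounding, which is a useful consistency check on the quoted orbital parameters.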
Physical parameters of close binary systems: VI (0903.1364) K.D. Gazeas, P.G. Niarchos, S. Zola, J.M. Kreiner, S.M. Rucinski March 7, 2009 astro-ph.SR New high-quality CCD photometric light curves for the W UMa-type systems V410 Aur, CK Boo, FP Boo, V921 Her, ET Leo, XZ Leo, V839 Oph, V2357 Oph, AQ Psc and VY Sex are presented. The new multicolor light curves, combined with the spectroscopic data recently obtained at the David Dunlap Observatory, are analyzed with the Wilson-Devinney code to yield the physical parameters (masses, radii and luminosities) of the components. Our models for all ten systems resulted in a contact configuration. Four binaries (V921 Her, XZ Leo, V2357 Oph and VY Sex) have low fill-out factors, while two (V410 Aur and CK Boo) have high ones; FP Boo, ET Leo, V839 Oph and AQ Psc have intermediate values. Three of the systems (FP Boo, V921 Her and XZ Leo) have very bright primaries as a result of their high temperatures and large radii. Physical parameters of components in close binary systems: V (0903.1365) S. Zola, J.M. Kreiner, B. Zakrzewski, D.P. Kjurkchieva, D.V. Marchev, A. Baran, S.M. Rucinski, W. Ogloza, M. Siwak, D. Koziel, M. Drozdz, B. Pokrzywka The paper presents combined spectroscopic and photometric orbital solutions for ten close binary systems: CN And, V776 Cas, FU Dra, UV Lyn, BB Peg, V592 Per, OU Ser, EQ Tau, HN UMa and HT Vir. The photometric data consist of new multicolor light curves, while the spectroscopy was recently obtained within the radial velocity program at the David Dunlap Observatory (DDO). Absolute parameters of the components of these binary systems are derived. Our results confirm that CN And is not a contact system; its configuration is semi-detached, with the secondary component filling its Roche lobe. The configuration of the nine other systems is contact. Three systems (V776 Cas, V592 Per and OU Ser) have high (44-77%) fill-out factors, and six (FU Dra, UV Lyn, BB Peg, EQ Tau, HN UMa and HT Vir) have low or intermediate (8-32%) ones. 2006 Whole Earth Telescope Observations of GD358: A New Look at the Prototype DBV (0811.0768) J. L. Provencal, M. H. Montgomery, A. Kanaan, H. L. Shipman, D. Childers, A. Baran, S. O. Kepler, M. Reed, A. Zhou, J. Eggen, T. K. Watson, D. E. Winget, S. E. Thompson, B. Riaz, A. Nitta, S. J. Kleinman, R. Crowe, J. Slivkoff, P. Sherard, N. Purves, P. Binder, R. Knight, S. -L. Kim, Wen-Ping Chen, M. Yang, H. C. Lin, C. C. Lin, C. W. Chen, X. J. Jiang, A. V. Sergeev, D. Mkrtichian, E. Janiashvili, M. Andreev, R. Janulis, M. Siwak, S. Zola, D. Koziel, G. Stachowski, M. Paparo, Zs. Bognar, G. Handler, D. Lorenz, B. Steininger, P. Beck, T. Nagel, D. Kusterer, A. Hoffman, E. Reiff, R. Kowalski, G. Vauclair, S. Charpinet, M. Chevreton, J. E. Solheim, E. Pakstiene, L. Fraga, J. Dalessio Nov. 5, 2008 astro-ph We report on the analysis of 436.1 hrs of nearly continuous high-speed photometry of the pulsating DB white dwarf GD358, acquired with the Whole Earth Telescope (WET) during the 2006 international observing run, designated XCOV25. The Fourier transform (FT) of the light curve contains power between 1000 and 4000 microHz, with the dominant peak at 1234 microHz. We find 27 independent frequencies distributed in 10 modes, as well as numerous combination frequencies. Our discussion focuses on a new asteroseismological analysis of GD358, incorporating the 2006 data set and drawing on 24 years of archival observations. Our results reveal that, while the general frequency locations of the identified modes are consistent throughout the years, the multiplet structure is complex and cannot be interpreted simply as l=1 modes in the limit of slow rotation. The high-k multiplets exhibit significant variability in structure, amplitude and frequency, and any identification of the m components for the high-k multiplets is highly suspect. The k=9 and 8 modes typically do show triplet structure more consistent with theoretical expectations; their frequencies and amplitudes exhibit some variability, but much less than the high-k modes. Analysis of the k=9 and 8 multiplet splittings from 1990 to 2008 reveals a long-term change in multiplet splittings coinciding with the 1996 "sforzando" event, during which GD358 dramatically altered its pulsation characteristics on a timescale of hours. We explore potential implications, including possible connections between convection and/or magnetic fields and pulsations, and we suggest future work, including theoretical investigations of the relationship between magnetic fields, pulsation, growth rates, and convection. The pulsating hot subdwarf Balloon 090100001: results of the 2005 multisite campaign (0810.4010) A. Baran, R. Oreiro, A. Pigulski, F. Perez Hernandez, A. Ulla, M. D. Reed, C. Rodriguez-Lopez, P. Moskalik, S.-L. Kim, W.-P. Chen, R. Crowe, M. Siwak, L. Armendarez, P. M. Binder, K.-J. Choo, A. Dye, J. R. Eggen, R. Garrido, J. M. Gonzalez Perez, S. L. Harms, F.-Y. Huang, D. Koziel, H.-T. Lee, J. MacDonald, L. Fox Machado, T. Monserrat, J. Stevick, S. Stewart, D. Terry, A.-Y. Zhou, S. Zola Oct. 22, 2008 astro-ph We present the results of a multisite photometric campaign on the pulsating sdB star Balloon 090100001. The star is one of the two known hybrid hot subdwarfs, showing both long- and short-period oscillations. The campaign involved eight telescopes: three obtained UBVR data, four B-band data, and one Stromgren uvby photometry. The campaign covered 48 nights, providing a temporal resolution of 0.36 microHz with a detection threshold of about 0.2 mmag in the B-filter data. 
Balloon 090100001 has the richest pulsation spectrum of any known pulsating subdwarf B star and our analysis detected 114 frequencies including 97 independent and 17 combination ones. The strongest mode (f_1) in the 2.8mHz region is most likely radial while the remaining ones in this region form two nearly symmetric multiplets: a triplet and quintuplet, attributed to rotationally split \ell=1 and 2 modes, respectively. We find clear increases of splitting in both multiplets between the 2004 and 2005 observing campaigns, amounting to 15% on average. The observed splittings imply that the rotational rate in Bal09 depends on stellar latitude and is the fastest on the equator. We use a small grid of models to constrain the main mode (f_1), which most likely represents the radial fundamental pulsation. The groups of p-mode frequencies appear to lie in the vicinity of consecutive radial overtones, up to the third one. Despite the large number of g-mode frequencies observed, we failed to identify them, most likely because of the disruption of asymptotic behaviour by mode trapping. The observed frequencies were not, however, fully exploited in terms of seismic analysis which should be done in the future with a larger grid of reliable evolutionary models of hot subdwarfs. A massive binary black-hole system in OJ287 and a test of general relativity (0809.1280) M. J. Valtonen, H. J. Lehto, K. Nilsson, J. Heidt, L. O. Takalo, A. Sillanpää, C. Villforth, M. Kidger, G. Poyner, T. Pursimo, S. Zola, J.-H. Wu, X. Zhou, K. Sadakane, M. Drozdz, D. Koziel, D. Marchev, W. Ogloza, C. Porowski, M. Siwak, G. Stachowski, M. Winiarski, V.-P. Hentunen, M. Nissinen, A. Liakos, S. Dogru Sept. 8, 2008 astro-ph Tests of Einstein's general theory of relativity have mostly been carried out in weak gravitational fields where the space-time curvature effects are first-order deviations from Newton's theory. Binary pulsars provide a means of probing the strong gravitational field around a neutron star, but strong-field effects may be best tested in systems containing black holes. Here we report such a test in a close binary system of two candidate black holes in the quasar OJ287. This quasar shows quasi-periodic optical outbursts at 12 yr intervals, with two outburst peaks per interval. The latest outburst occurred in September 2007, within a day of the time predicted by the binary black-hole model and general relativity. The observations confirm the binary nature of the system and also provide evidence for the loss of orbital energy in agreement (within 10 per cent) with the emission of gravitational waves from the system. In the absence of gravitational wave emission the outburst would have happened twenty days later. Whole Earth Telescope observations of the hot helium atmosphere pulsating white dwarf EC 20058-5234 (0803.1638) WET Collaboration: D.J. Sullivan, T.S. Metcalfe, D. O'Donoghue, D.E. Winget, D. Kilkenny, F. van Wyk, A. Kanaan, S.O. Kepler, A. Nitta, S.D. Kawaler, M.H. Montgomery, R.E. Nather, M.S. O'Brien, A. Bischoff-Kim, M. Wood, X.J. Jiang, E.M. Leibowitz, P. Ibbetson, S. Zola, J. Krzesinski, G. Pajdosz, G. Vauclair, N. Dolez, M. Chevreton March 11, 2008 astro-ph We present the analysis of a total of 177h of high-quality optical time-series photometry of the helium atmosphere pulsating white dwarf (DBV) EC 20058-5234. The bulk of the observations (135h) were obtained during a WET campaign (XCOV15) in July 1997 that featured coordinated observing from 4 southern observatory sites over an 8-day period. 
The remaining data (42h) were obtained in June 2004 at Mt John Observatory in NZ over a one-week observing period. This work significantly extends the discovery observations of this low-amplitude (few percent) pulsator by increasing the number of detected frequencies from 8 to 18, and employs a simulation procedure to confirm the reality of these frequencies to a high level of significance (1 in 1000). The nature of the observed pulsation spectrum precludes identification of unique pulsation mode properties using any clearly discernible trends. However, we have used a global modelling procedure employing genetic algorithm techniques to identify the n, l values of 8 pulsation modes, and thereby obtain asteroseismic measurements of several model parameters, including the stellar mass (0.55 M_sun) and T_eff (~28200 K). These values are consistent with those derived from published spectral fitting: T_eff ~ 28400 K and log g ~ 7.86. We also present persuasive evidence from apparent rotational mode splitting for two of the modes that indicates this compact object is a relatively rapid rotator with a period of 2 h. In direct analogy with the corresponding properties of the hydrogen (DAV) atmosphere pulsators, the stable low-amplitude pulsation behaviour of EC 20058 is entirely consistent with its inferred effective temperature, which indicates it is close to the blue edge of the DBV instability strip. (abridged) The pulsation modes of the pre-white dwarf PG 1159-035 (0711.2244) J. E. S. Costa, S. O. Kepler, D. E. Winget, M. S. O'Brien, S. D. Kawaler, A. F. M. Costa, O. Giovannini, A. Kanaan, A. S. Mukadam, F. Mullally, A. Nitta, J. L. Provençal, H. Shipman, M. A. Wood, T. J. Ahrens, A. Grauer, M. Kilic, P. A. Bradley, K. Sekiguchi, R. Crowe, X. J. Jiang, D. Sullivan, T. Sullivan, R. Rosen, J. C. Clemens, R. Janulis, D. O'Donoghue, W. Ogloza, A. Baran, R. Silvotti, S. Marinoni, G. Vauclair, N. Dolez, M. Chevreton, S. Dreizler, S. Schuh, J. Deetjen, T. Nagel, J.-E. Solheim, J. M. Gonzalez Perez, A. Ulla, Martin Barstow, M. Burleigh, S. Good, T.S. Metcalfe, S.-L. Kim, H. Lee, A. Sergeev, M.C. Akan, Ö. Çakirli, M. Paparo, G. Viraghalmy, B. N. Ashoka, G. Handler, Özlem Hürkal, F. Johannessen, S. J. Kleinman, R. Kalytis, J. Krzesinski, E. Klumpe, J. Larrison, T. Lawrence, E. Meištas, P. Martinez, R. E. Nather, J.-N. Fu, E. Pakštienė, R. Rosen, E. Romero-Colmenero, R. Riddle, S. Seetha, N. M. Silvestri, M. Vučković, B. Warner, S. Zola, L. G. Althaus, A. H. Córsico, M. H. Montgomery Dec. 18, 2007 astro-ph PG 1159-035, a pre-white dwarf with T_eff=140,000 K, is the prototype of two classes: the PG1159 spectroscopic class and the DOV pulsating class. Previous studies of PG 1159-035 photometric data obtained with the Whole Earth Telescope (WET) showed a rich frequency spectrum allowing the identification of 122 pulsation modes. In this work, we used all available WET photometric data from 1983, 1985, 1989, 1993 and 2002 and identified 76 additional pulsation modes, increasing to 198 the number of known pulsation modes in PG 1159-035, the largest number of modes detected in any star besides the Sun. From the period spacing we estimated a mass M = 0.59 +/- 0.02 solar masses for PG 1159-035, with the uncertainty dominated by the models, not the observations. Deviations in the regular period spacing suggest that some of the pulsation modes are trapped, even though the star is a pre-white dwarf and gravitational settling is still ongoing. 
The position of the transition zone that causes the mode trapping was calculated at r_c = 0.83 +/- 0.05 stellar radius. From the multiplet splitting, we calculated the rotational period P_rot = 1.3920 +/- 0.0008 days and an upper limit for the magnetic field, B < 2000 G. The total power of the pulsation modes at the stellar surface changed by less than 30% for l=1 modes and less than 50% for l=2 modes. We find no evidence of linear combinations between the 198 pulsation mode frequencies. PG 1159-035 models do not have significant convection zones, supporting the hypothesis that nonlinearity arises in the convection zones of cooler pulsating white dwarf stars. Follow-up observations of pulsating subdwarf B stars: Multisite campaigns on PG 1618+563B and PG 0048+091 (0704.1496) M.D. Reed, S.J. O'Toole, D.M. Terndrup, J.R. Eggen, A.-Y. Zhou, D. An, C.-W. Chen, W.P. Chen, H.-C. Lin, C. Akan, O. Cakirli, H. Worters, D. Kilkenny, M. Siwak, S. Zola, Seung-Lee Kim, G.A. Gelven, S.L. Harms, G.W. Wolf April 11, 2007 astro-ph We present follow-up observations of pulsating subdwarf B (sdB) stars as part of our efforts to resolve the pulsation spectra for use in asteroseismological analyses. This paper reports on multisite campaigns of the pulsating sdB stars PG 1618+563B and PG 0048+091. Data were obtained from observatories placed around the globe for coverage from all longitudes. For PG 1618+563B, our five-site campaign uncovered a dichotomy of pulsation states: early in the campaign the amplitudes and phases (and perhaps frequencies) were quite variable, while data obtained late in the campaign fully resolved five stable pulsation frequencies. For PG 0048+091, our five-site campaign uncovered a plethora of frequencies with short pulsation lifetimes. Their observed properties are consistent with stochastically excited oscillations, an unexpected result for subdwarf B stars. We discuss our findings and their impact on subdwarf B asteroseismology. The new sample of giant radio sources II. Update of optical counterparts, further spectroscopy of identified faint host galaxies, high-frequency radio maps, and polarisation properties of the sources (astro-ph/0605002) J. Machalski, M. Jamrozy, S. Zola, D. Koziel (Astronomical Observatory, Jagiellonian University) Our sample of giant radio-source candidates, published in Paper I (Machalski et al. 2001), is updated and supplemented with further radio and optical data. In this paper we present: (i) newly detected host galaxies, their photometric magnitudes, and redshift estimates for the sample sources not yet identified, (ii) optical spectra and spectroscopic redshifts for the host galaxies fainter than about 18.5 mag, taken with the Apache Point Observatory 3.5m telescope, and (iii) the VLA 4.9 GHz total-intensity and polarised-intensity radio maps of the sample members. In a few cases these maps reveal extremely faint, previously undetected radio cores, which confirm previously uncertain optical identifications. The radio maps are analysed and the polarisation properties of the sample sources are summarised. A comparison of our updated sample with three samples published by other authors implies that all four samples probe the same part of the population of extragalactic radio sources. There is no significant difference between the distributions of intrinsic size and radio power among these samples. 
The median redshift of 0.38 +/- 0.07 in our sample is the highest among the corresponding values in the four samples, indicating that the angular-size and flux-density limits of our sample, lower than those of the other three samples, result in effective detections of more distant giant-size galaxies than in the other samples. This sample and a comparison sample of `normal'-size radio galaxies will be used in Paper III (Machalski & Jamrozy 2006) to investigate a number of trends and correlations in the entire data set. Resolving the pulsations of the subdwarf B star KPD 2109+4401 (astro-ph/0511827) A.-Y. Zhou, M.D. Reed, S. Harms, D.M. Terndrup, D. An, S. Zola, K. D. Gazeas, P. G. Niarchos, W. Ogloza, A. Baran, G. W. Wolf Nov. 30, 2005 astro-ph We present the results of extensive time series photometry of the pulsating subdwarf B star KPD 2109+4401. Our data set consists of 29 runs with a total length of 182.6 hours over 31 days, collected at five observatories in 2004. These data comprise high signal-to-noise observations acquired with larger telescopes and wider time coverage observations obtained with smaller telescopes. They are sufficient to resolve the pulsation structure to 0.4 $\mu$Hz and constitute the most extensive data set for this star to date. With these data, we identify eight pulsation frequencies extending from 4701 to 5481 $\mu$Hz, corresponding to periods of 182 to 213 s. The pulsation frequencies and their amplitudes are examined over several time-scales, with some frequencies showing amplitude variability.
CommonCrawl
Let $a$, $b$, $c$, $d$, and $e$ be positive integers with $a+b+c+d+e=2010$ and let $M$ be the largest of the sums $a+b$, $b+c$, $c+d$ and $d+e$. What is the smallest possible value of $M$? We have that \[M = \max \{a + b, b + c, c + d, d + e\}.\]In particular, $a + b \le M,$ $b + c \le M,$ and $d + e \le M.$ Since $b$ is a positive integer and $b + c \le M,$ we have $c < M.$ Hence, \[(a + b) + c + (d + e) < 3M.\]Then $2010 < 3M,$ so $M > 670.$ Since $M$ is an integer, $M \ge 671.$ Equality occurs if $a = 669,$ $b = 1,$ $c = 670,$ $d = 1,$ and $e = 669,$ so the smallest possible value of $M$ is $\boxed{671}.$
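A quick numerical check of this answer (an editorial illustration, not part of the original solution): verify that the exhibited witness attains $M = 671$ and that the counting bound rules out $M = 670$.

```python
# Hedged sketch: check the witness and the lower bound for the 2010 problem.
a, b, c, d, e = 669, 1, 670, 1, 669
assert a + b + c + d + e == 2010
M = max(a + b, b + c, c + d, d + e)
assert M == 671

# Impossibility of M = 670: b >= 1 forces c <= M - 1, so
# 2010 = (a + b) + c + (d + e) <= M + (M - 1) + M = 3*M - 1.
assert 3 * 670 - 1 < 2010  # M = 670 would cap the total at 2009 < 2010
print("smallest possible M:", M)
```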
Math Dataset
u-chart

In statistical quality control, the u-chart is a type of control chart used to monitor "count"-type data where the sample size is greater than one, typically the average number of nonconformities per unit.

u-chart
• Originally proposed by: Walter A. Shewhart
Process observations
• Rational subgroup size: n > 1
• Measurement type: Number of nonconformances per unit
• Quality characteristic type: Attributes data
• Underlying distribution: Poisson distribution
Performance
• Size of shift to detect: ≥ 1.5σ
Process variation chart
• Not applicable
Process mean chart
• Center line: ${\bar {u}}={\frac {\sum _{i=1}^{m}\sum _{j=1}^{n}{\mbox{no. of defects for }}x_{ij}}{mn}}$
• Control limits: ${\bar {u}}\pm 3{\sqrt {\frac {\bar {u}}{n_{i}}}}$
• Plotted statistic: ${\bar {u}}_{i}={\frac {\sum _{j=1}^{n}{\mbox{no. of defects for }}x_{ij}}{n}}$

The u-chart differs from the c-chart in that it accounts for the possibility that the number or size of inspection units for which nonconformities are to be counted may vary. Larger samples may be an economic necessity or may be necessary to increase the area of opportunity in order to track very low nonconformity levels.[1]

Examples of processes suitable for monitoring with a u-chart include:
• Monitoring the number of nonconformities per lot of raw material received where the lot size varies
• Monitoring the number of new infections in a hospital per day
• Monitoring the number of accidents for delivery trucks per day

As with the c-chart, the Poisson distribution is the basis for the chart and requires the same assumptions. The control limits for this chart type are ${\bar {u}}\pm 3{\sqrt {\frac {\bar {u}}{n_{i}}}}$ where ${\bar {u}}$ is the estimate of the long-term process mean established during control-chart setup. The observations $u_{i}={\frac {x_{i}}{n_{i}}}$ are plotted against these control limits, where xi is the number of nonconformities for the ith subgroup and ni is the number of inspection units in the ith subgroup.

See also
• c-chart

References
1. Montgomery, Douglas (2005). Introduction to Statistical Quality Control. Hoboken, New Jersey: John Wiley & Sons, Inc. p. 294. ISBN 978-0-471-65631-9. OCLC 56729567. Archived from the original on 2008-06-20.
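To make the control-limit formulas above concrete, here is a minimal sketch (not from the article) that computes the center line and the per-subgroup 3-sigma limits of a u-chart; the defect and inspection-unit counts are invented sample data.

```python
# Minimal u-chart sketch: center line u-bar and variable 3-sigma limits.
# The defect/unit data below are invented for illustration only.
import math

defects = [12, 15, 8, 10, 4, 7, 16, 9]   # x_i: nonconformities per subgroup
units   = [10, 12, 8, 10, 6, 8, 12, 10]  # n_i: inspection units per subgroup

u_bar = sum(defects) / sum(units)  # long-term process mean (center line)

for x_i, n_i in zip(defects, units):
    u_i = x_i / n_i                       # plotted statistic u_i = x_i / n_i
    sigma = math.sqrt(u_bar / n_i)        # Poisson-based standard error
    ucl = u_bar + 3 * sigma
    lcl = max(0.0, u_bar - 3 * sigma)     # a nonconformity rate cannot be < 0
    flag = "ok" if lcl <= u_i <= ucl else "out of control"
    print(f"u={u_i:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  {flag}")
```

Note how the limits widen for subgroups with fewer inspection units, which is exactly the feature that distinguishes the u-chart from the c-chart.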
Wikipedia
Evaluate $\log_{3}{81}-\log_{3}{\frac{1}{9}}$. Let $\log_{3}{81}=a$. Then $3^a=81=3^4$, so $a=4$. Let $\log_{3}{\frac{1}{9}}=b$. Then $\frac{1}{9}=3^b$. Express $\frac{1}{9}$ as a power of $3$: $\frac{1}{9}=\frac{1}{3^2}=3^{-2}$. Thus $3^b=3^{-2}$ and $b=-2$. We want to find $\log_{3}{81}-\log_{3}{\frac{1}{9}}=a-b=(4)-(-2)=\boxed{6}$.
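The same value also follows in one line from the quotient rule for logarithms:
\[
\log_{3}{81}-\log_{3}{\tfrac{1}{9}}=\log_{3}\frac{81}{1/9}=\log_{3}{729}=\log_{3}{3^{6}}=6.
\]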
Math Dataset
\begin{document} \title{Generalized solution of a mixed problem for linear hyperbolic system} \author{Lalla Saadia Chadli, Said Melliani and Aziz Moujahid} \date{{\small Laboratoire de Mod\'elisation et Calcul (LMC), Facult\'e des Sciences et Techniques} \\ {\small Universit\'e Sultan Moulay Slimane, BP 523, B\'eni Mellal, Morocco} } \fancyhf{} \addtolength{\headwidth}{\marginparsep} \addtolength{\headwidth}{\marginparwidth} \fancyhead{} \fancyhead[RO,LE]{\thepage} \fancyhead[CE]{Generalized solution of a mixed problem for linear hyperbolic system} \fancyhead[CO]{L. S. Chadli, S. Melliani and A. Moujahid} \renewcommand{\headrulewidth}{0.0pt} \maketitle \begin{abstract} In the first part of this article we prove an existence and uniqueness result for generalized solutions of a mixed problem for a linear hyperbolic system in the Colombeau algebra. In the second part we apply this result to a wave propagation problem in a discontinuous medium. \end{abstract} \section{Introduction} In 1982, Colombeau introduced an algebra $\mathcal{G}$ of generalized functions to deal with the multiplication problem for distributions; see Colombeau \cite{Colombeau1, Colombeau2}. This algebra $\mathcal{G}$ is a differential algebra which contains the space $\mathcal{D}'$ of distributions. Furthermore, nonlinear operations more general than multiplication make sense in the algebra $\mathcal{G}$. The algebra $\mathcal{G}$ is therefore a very convenient setting in which to find and study solutions of nonlinear differential equations with singular data and coefficients. \noindent Consider the mixed problem for the linear hyperbolic system in two variables \begin{equation} \begin{cases} \Bigl( \partial _{t} + \Lambda \left( x,t\right) \partial _{x} \Bigr) U = F(x,t) U + A(x,t) & (x,t) \in (\mathbb{R}_{+}^{*})^{2} \\ U\left( x,0\right) = U_{0}\left( x\right) & x\in \mathbb{R}_{+} \\ U_{i}\left( 0,t\right) =\sum\limits_{k=r+1}^{n}\:v _{ik}\left( t\right) U_{k}\left( 0,t\right) +H_{i}\left( t\right) & i=1,\ldots ,r\hspace{0.3in} t\geq 0 \\ + \text{ Compatibility conditions} & \end{cases} \label{SYS1} \end{equation} where $\Lambda $, $F$ and $V$ are $(n\times n)$ matrices whose entries are discontinuous functions. The matrix $\Lambda$ is real and diagonal, with \[ \Lambda _{1} > \Lambda _{2} > \cdots >\Lambda _{r}>0 > \Lambda _{r+1} > \cdots > \Lambda _{n} \] In the case where $\Lambda \in L^{\infty }\left(\mathbb{R}_{+}^{2}\right)$ and $F\in W_{\mbox{loc}}^{-1,\infty }\left(\mathbb{R}_{+}^{2} \right)$, multiplicative products of distributions appear in system~\eqref{SYS1}, so there is no general way of giving a meaning to system \eqref{SYS1} in the sense of distributions. Indeed, this hyperbolic system does not in general admit distributional solutions, even when it is written as a system of conservation laws; see \cite{Hurd}. Our approach is to study \eqref{SYS1} in the Colombeau algebra \cite{Colombeau1, Colombeau2}: under suitable hypotheses on $\Lambda $, $F$, $\nu $ and $H$, the system \eqref{SYS1} admits a unique solution in $\mathcal{G}\left( \mathbb{R}_{+}^{2}\right)$. This result complements work already done in the global case by M. Oberguggenberger \cite{Oberguggenberger1}. \noindent In the second part of this article we apply this result to a wave propagation problem in a discontinuous medium, governed by 
the following system \begin{equation} \begin{cases} \Bigl( \partial _{t}+c\left( x\right) \partial _{x}\Bigr) u\left(x,t\right) =0 & \left( x,t\right) \in (\mathbb{R}_{+}^{*})^{2} \\ \Bigl( \partial _{t}-c\left( x\right) \partial _{x}\Bigr) v\left(x,t\right) =0 & \left( x,t\right) \in (\mathbb{R}_{+}^{*})^{2} \\ u\left( x,0\right) =u_{0}\left( x\right) & x\geq 0 \\ v\left( x,0\right) =v_{0}\left( x\right) & x\geq 0 \\ u\left( 0,t\right) =h\left( t\right) v\left( 0,t\right) +b\left( t\right) & t\geq 0 \\ + \text{ Compatibility conditions} & \end{cases} \label{SYS2} \end{equation} with \begin{equation*} c(x) = \begin{cases} c_{R} & \text{if $x > x_0$} \\ c_{L} & \text{if $0 < x < x_0$} \end{cases} \end{equation*} where $c_{R}$ and $c_{L}$ are real constants and $u_{0}$ and $v_{0}$ are continuous almost everywhere. \noindent For this problem one can find a classical solution on \( \left\{0\leq x<x_{0} : t\geq 0\right\} \) and \( \left\{ x>x_{0} : t\geq 0\right\} \); imposing a transmission condition at $x=x_{0}$, namely the continuity of $u$ and $v$, one then obtains a classical solution on $\left\{ x\geq 0,\; t\geq 0 \right\}$. \noindent Furthermore, if $\left(u_{0},v_{0}\right)$ are generalized functions, one can show that problem \eqref{SYS2} has a unique solution $\left(U,V\right) \in \mathcal{G}\left( \mathbb{R}_{+}^{2}\right) \times \mathcal{G}\left( \mathbb{R}_{+}^{2}\right)$, without any need for transmission conditions; in the same way one shows that this solution admits an associated distribution equal to the classical solution above. \section{Existence and uniqueness} We recall some definitions from the theory of generalized functions which we need in the sequel. \noindent We define the algebra $\mathcal{G}\left( \mathbb{R}^{m}\right)$ as follows: \[ A_q \left(\mathbb{R}\right) = \Bigl\{ \chi \in \mathcal{D}\left(\mathbb{R}\right) : \int_{\mathbb{R}} \chi (x)\:dx=1 \ \textrm{ and } \ \int_{\mathbb{R}} x^k\:\chi (x)\:dx=0 \quad \textrm{ for } \quad 1\leq k\leq q \Bigr\} \] and \[ A_q \left(\mathbb{R}^m\right) = \Bigl\{ \varphi \left(x_1,\ldots,x_m\right)=\prod^{m}_{j=1}\chi \left(x_j\right) : \chi \in A_q\left(\mathbb{R}\right) \Bigr\} \] Let $\mathcal{E}\left[ \mathbb{R}^{m}\right]$ be the set of functions on $\mathcal{A}_1\left( \mathbb{R}^{m}\right)\times \mathbb{R}^{m}$ with values in $\mathbb{C}$ which are $\mathcal{C}^{\infty}$ in the second variable. Obviously $\mathcal{E}\left[ \mathbb{R}^{m}\right]$ with pointwise multiplication is an algebra, but $\mathcal{C}^{\infty}\left( \mathbb{R}^{m}\right)$ is not a subalgebra. \noindent Given $\varphi \in \mathcal{A}_1\left( \mathbb{R}^{m}\right)$ and $\varepsilon \in \left]0\:,\:1\right[$, we define a function $\varphi_{\varepsilon}$ by \[ \varphi_{\varepsilon}\left(x\right)=\varepsilon^{-m}\varphi \left(\frac{x}{\varepsilon}\right) \ \ \ \textrm{ for } \ \ x\in \mathbb{R}^{m} \] An element of $\mathcal{E}\left[ \mathbb{R}^{m}\right]$ is called ``moderate'' if for every compact subset $K$ of $\mathbb{R}^{m}$ and every differential operator $D=\partial^{k_1}_{x_1}\cdots\partial^{k_m}_{x_m}$ there is $N\in \mathbb{N}$ such that the following holds \begin{equation} \left\{ \begin{array}{ll} \forall\varphi\in \mathcal{A}_N\left( \mathbb{R}^{m}\right), \ \exists C, \exists \eta>0 \ \ \ & \textrm{such that} \\ \sup\limits_{x\in K}\left|D\:u\left(\varphi_{\varepsilon},x\right)\right|\leq C\varepsilon^{-N} \hspace{0.3in} & \textrm{if} \ 0<\varepsilon<\eta \end{array} \right. 
\end{equation} $\mathcal{E}_M\left[ \mathbb{R}^{m}\right]$ denotes the subset of moderate elements, where the index $M$ stands for ``moderate''. We define an ideal $\mathcal{N}\left[ \mathbb{R}^{m}\right]$ of $\mathcal{E}_M\left[ \mathbb{R}^{m}\right]$ as follows: $u\in \mathcal{N}\left[ \mathbb{R}^{m}\right]$ if for every compact subset $K$ of $\mathbb{R}^{m}$ and every differential operator $D$, there is $N\in \mathbb{N}$ such that \begin{equation} \left\{ \begin{array}{ll} \forall q\geq N, \ \forall\varphi\in \mathcal{A}_q\left( \mathbb{R}^{m}\right), \ \exists C, \exists \eta>0 \ \ \ & \textrm{ such that} \\ \sup\limits_{x\in K}\left|D\:u\left(\varphi_{\varepsilon},x\right)\right|\leq C\varepsilon^{q-N} \hspace{0.3in} & \textrm{if} \ 0<\varepsilon<\eta \end{array} \right. \end{equation} Finally the algebra $\mathcal{G}\left( \mathbb{R}^{m}\right)$ is defined as the quotient of $\mathcal{E}_M\left[ \mathbb{R}^{m}\right]$ by $\mathcal{N}\left[ \mathbb{R}^{m}\right]$. In what follows, elements of $\mathcal{G}\left( \mathbb{R}^{2}\right)$ will be written with capital letters and their representatives in $\mathcal{E}_M\left[ \mathbb{R}^{2}\right]$ with lowercase letters. Furthermore we use the simplified notation \[ u\left(\varphi_{\varepsilon},x\right)=u^{\varepsilon}\left(x\right) \] \noindent In our work we need the subset of $\mathcal{E}_M\left[ \mathbb{R}_{+}^{2}\right]$ consisting of elements $u$ satisfying the following properties: \begin{itemize} \item[(a)] $\exists N\in \mathbb{N}$ such that for all $\varphi \in \mathcal{A}_{N}\left( \mathbb{R}_{+}^{2}\right)$ \[ \exists c>0,\ \exists \eta >0 : \sup\limits_{y\in \mathbb{R}_{+}^{2}}\left| u\left( \varphi _{\varepsilon },y\right) \right| \leq c \hspace{0.3in}\textrm{if} \ 0<\varepsilon <\eta \] \item[(b)] For every compact subset $K$ of $\mathbb{R}^{2}_{+}$, $\exists N\in \mathbb{N}$ such that $\forall \varphi \in \mathcal{A}_{N}\left( \mathbb{R}_{+}^{2}\right)$ \[ \exists c>0,\ \exists \eta >0 : \sup\limits_{y\in K}\left| u\left( \varphi _{\varepsilon },y\right) \right| \leq N \log \left( \frac{c}{\varepsilon} \right) \hspace{0.3in}\textrm{if} \ 0 <\varepsilon < \eta \] \end{itemize} \begin{definition} A generalized function $U\in \mathcal{G}\left( \mathbb{R}_{+}^{2}\right) $ admitting a representative $u$ with property (a) (respectively (b)) is called globally bounded (respectively locally of logarithmic growth). \end{definition} \begin{definition} The system \eqref{SYS1} satisfies the compatibility conditions in $\mathcal{G}\left( \mathbb{R}_{+}^{2}\right) $ if there exist representatives $u_{0}^{\varepsilon}$, $\lambda ^{\varepsilon }$, $f^{\varepsilon }$, $h^{\varepsilon }$, $v ^{\varepsilon }$ and $a^{\varepsilon }$ of $U_{0}$, $\Lambda$, $F$, $H$, $V$ and $A$ that satisfy the classical compatibility conditions guaranteeing a $\mathcal{C}^{\infty }$ solution of the classical problem. 
\end{definition} \begin{theorem} Let $F$, $\Lambda$ and $A$ be $n\times n$ matrices with coefficients in $\mathcal{G}\left( \mathbb{R}_{+}^{2}\right) $, and suppose that there exists $r$ such that \[ \Lambda _{1} > \Lambda _{2} > \cdots >\Lambda _{r} > 0 > \Lambda _{r+1} > \cdots > \Lambda _{n} \] where the $\Lambda _{i}$ ($i=1,\ldots ,n$) are globally bounded and the $\partial _{x}\Lambda _{i}$ and $F_{i}$ are locally of logarithmic growth. Then for initial data $U_{0}$ in $\mathcal{G}\left( \mathbb{R}_{+}\right) $, globally bounded elements $V _{i}$ of $\mathcal{G}\left( \mathbb{R}_{+}\right) $ and $H_{i}$ in $\mathcal{G}\left( \mathbb{R}_{+}\right) $, the problem \ref{SYS1} has a unique solution in $\mathcal{G}\left( \mathbb{R}_{+}^{2}\right) $. \end{theorem} \noindent \textbf{Proof}: The proof is an adaptation of the proof of Theorem 1.2 in \cite{Oberguggenberger1}, so we only sketch the main lines. \noindent Let $\lambda$ be a representative of $\Lambda$ in $\mathcal{G} \left( \mathbb{R}_{+}^{2}\right)$ such that \[ \lambda _{1} > \lambda _{2} > \cdots > \lambda _{r} > 0 > \lambda _{r+1} > \cdots > \lambda _{n} \] with each $\lambda _{i}$ satisfying property (a) and each $\partial_{x}\lambda _{i}$ satisfying property (b). \\ \noindent Let $f$ and $a$ be representatives of $F$ and $A$ in $\mathcal{G}\left( \mathbb{R}_{+}^{2}\right) $, with $f$ satisfying (b), and let $v$, $h$ and $u_{0}$ be representatives of $V$, $H$ and $U_{0}$ in $\mathcal{G}\left( \mathbb{R}_{+}\right) $, with $v$ satisfying (a). \noindent Consider the following problem \begin{equation} \begin{cases} \Bigl( \partial_{t}+\lambda_{i}^{\varepsilon} (x,t) \partial_{x}\Bigr) u_{i}^{\varepsilon} = \sum\limits_{k=1}^{n} f_{ik}^{\varepsilon} (x,t) u_{k}^{\varepsilon} (x,t) + a_{i}^{\varepsilon} (x,t) & (x,t) \in (\mathbb{R}_{+}^{*})^{2} \\ u_{i}^{\varepsilon} (x,0) = u_{0_{i}}^{\varepsilon} (x) & i=1, \ldots, n\quad x\in \mathbb{R}_{+} \\ u_{i}^{\varepsilon} (0,t) = \sum\limits_{k=r+1}^{n} \nu_{ik}^{\varepsilon} (t) u_{k}^{\varepsilon} (0,t) + h_{i}^{\varepsilon} (t) & i=1, \ldots, r \quad t\geq 0 \end{cases} \label{sys5}\tag*{ $\textbf{(I}_{\varepsilon} \textbf{)}$ } \end{equation} If we denote by $\gamma _{i}^{\varepsilon}$ the characteristic curve corresponding to $\lambda _{i}^{\varepsilon}$, then the problem $\bigl( \textbf{I}_{\varepsilon} \bigr)$ admits a unique solution $u^{\varepsilon}$ with $u_{i}^{\varepsilon}\in \mathcal{C}^{\infty}\left( \mathbb{R}_{+}^{2}\right)$, given by \ \\ \noindent for $i=r+1,\ldots ,n$ \begin{eqnarray*} u_{i}^{\varepsilon }\left( x,t\right) =u_{0_{i}}^{\varepsilon }\left(\gamma ^{\varepsilon}_{i}(x,t,0) \right) & + & \int_{0}^{t}\Bigl[ \sum\limits_{k=1}^{n} f_{ik}^{\varepsilon} \Bigl( \gamma _{i}^{\varepsilon }\left( x,t,\tau \right) ,\tau \Bigr) u_{k}^{\varepsilon }\Bigl( \gamma _{i}^{\varepsilon }\left( x,t,\tau \right) ,\tau \Bigr) \\ & + & a_{i}^{\varepsilon }\Bigl( \gamma _{i}^{\varepsilon }\left( x,t,\tau \right) ,\tau \Bigr) \Bigr] d\tau \end{eqnarray*} and for $i=1,\ldots ,r$ \begin{eqnarray*} u_{i}^{\varepsilon }\left( x,t\right) & = & \sum\limits_{k=r+1}^{n}v _{ik}^{\varepsilon }\left( t_{0}\right) \int_{0}^{t_{0}}\sum\limits_{s=1}^{n}\biggl[ f_{ks}^{\varepsilon }\Bigl( \gamma _{k}^{\varepsilon }\left( 0,t_{0},\tau \right) ,\tau \Bigr) u_{s}^{\varepsilon }\Bigl( \gamma _{k}^{\varepsilon }\left( 0,t_{0},\tau \right) ,\tau \Bigr) \biggr] d\tau \\ & + & \int_{t_{0}}^{t}\sum\limits_{k=1}^{n}\biggl[ f_{ik}^{\varepsilon }\Bigl( \gamma _{i}^{\varepsilon }\left( 
x,t,\tau \right) ,\tau \Bigr) u_{k}^{\varepsilon }\Bigl( \gamma _{i}^{\varepsilon }\left( x,t,\tau \right) ,\tau \Bigr) \biggr] d\tau \\ & + & \int_{t_{0}}^{t}a_{i}^{\varepsilon }\Bigl( \gamma _{i}^{\varepsilon }\left( x,t,\tau \right) ,\tau \Bigr) d\tau \\ & + & \sum\limits_{k=r+1}^{n}v _{ik}^{\varepsilon }\left( t_{0}\right) \int_{0}^{t_{0}}a_{k}^{\varepsilon } \Bigl( \gamma _{k}^{\varepsilon }\left( 0,t_{0},\tau \right) ,\tau \Bigr) d\tau \\ & + & \sum\limits_{k=r+1}^{n}v _{ik}^{\varepsilon }\left( t_{0}\right) u_{0_{k}}^{\varepsilon }\Bigl( \gamma _{k}^{\varepsilon }\left( 0,t_0,0\right) \Bigr) +h_{i}^{\varepsilon }\left( t_{0}\right) \end{eqnarray*} where $t_{0}$ is such that the curve $\gamma _{i}$ cuts the axis $\left( Ot\right)$ at the point $P_{i}\left( 0,t_{0}\right)$. Each $u_{i}^{\varepsilon }$ is a $\mathcal{C}^{\infty }$ function, so it remains to show that $u_{i}^{\varepsilon }$ is of moderate growth. \noindent From the assumptions we have \[ \begin{array}{l} \exists M>0\quad \mbox{such that } \displaystyle \left| \frac{d\gamma _{i}^{\varepsilon }\left( x,t,\tau \right) }{d\tau }\right| <M\quad \forall (x,t)\in \mathbb{R}_{+}^{2} \quad \forall i=1,\ldots ,n \\ \exists M_{1}>0\quad \mbox{such that } \max\limits_{i,j}\left| v _{i,j}^{\varepsilon }\left( y\right) \right| <M_{1}\quad \forall y\in \mathbb{R}_{+} \end{array} \] Let $K_{0}$ be a compact subset of $\mathbb{R}_{+}$. Drawing the straight line of slope $-M$, we see that the domain of determination $K_{T}$ of the solution $u_{i}^{\varepsilon }$ does not depend on $\varepsilon$. % Figure: domain of determination $K_T$ (figure omitted in source). \begin{lemma} Let $u^{\varepsilon }$ be the solution of problem $(\mathrm{I}_{\varepsilon})$; then $u_{i}^{\varepsilon }$ satisfies \begin{eqnarray*} \sup\limits_{\left( x,t\right) \in K_{T}}\left| u_{i}^{\varepsilon }\left( x,t\right) \right| &\leq &M_{2}\left[ \sup\limits_{k}\sup\limits_{\left( x,t\right) \in K_{T}}\left| a_{k}^{\varepsilon}\left( x,t\right) \right| .T+\right. \\ &&\left. \sup\limits_{k}\sup\limits_{x\in K_{0}}\left| u_{0_{k}}^{\varepsilon} \left(x\right) \right| +\sup\limits_{k}\sup\limits_{t\in \left[ 0,T\right] }\left| h_{k}^{\varepsilon} \left(t\right) \right| \right] \times \\ &&\exp \left( nM_{2}\sup\limits_{i,k}\sup\limits_{\left( x,t\right) \in K_{T}}\left| f_{ik}^{\varepsilon}\left( x,t\right) \right| .T\right) \end{eqnarray*} with \[ M_{2}=\max \left( nM_{1},1\right) \] \end{lemma} \textbf{Proof}: For $i=1,\ldots ,r$, from the integral equation satisfied by $u_{i}^{\varepsilon }$ we have \begin{eqnarray*} \sup\limits_{\left( x,t\right) \in K_{T}}\left| u_{i}^{\varepsilon }\left( x,t\right) \right| &\leq &M_{2}\left[ T\sup\limits_{\left( x,t\right) \in K_{T}}\left| a_{k}^{\varepsilon }\left( x,t\right) \right| +\sup\limits_{k}\sup\limits_{x \in K_{0}}\left| u_{0_{k}}^{\varepsilon }\left( x\right) \right| +\right. \\ &&\left. \sup\limits_{k}\sup\limits_{t\in \left[ 0,T\right] }\left| h_{k}^{\varepsilon }\left( t\right) \right| \right] + \\ &&nM_{2}\int_{0}^{T}\sup\limits_{\left( x,t\right) \in K_{\tau}}\left| f^{\varepsilon }\left( x,t\right) \right| \sup\limits_{k}\sup\limits_{\left( x,t \right) \in K_{\tau }}\left| u_{k}^{\varepsilon }\left( x,t \right) \right| d\tau \end{eqnarray*} and the proof is completed by applying Gronwall's lemma to the function \[ s\rightarrow \max\limits_{k}\sup\limits_{\left( x,t\right) \in K_{s}}\left| u_{k}^{\varepsilon }\left( x,t\right) \right| \] \noindent For $i=r+1,\ldots ,n$ the argument is the same, with $t_{0}=0$, $v =0$, $h=0$. 
$\Box$ \ \\ \noindent Returning to the proof of Theorem 1, we have: \\ \noindent \hspace{.5in} $\exists N_1 \in \mathbb{N}$ such that: $\forall \phi \in \mathcal{A}_{N_1} (\mathbb{R}^{2}_{+})$ \[ \exists C_1 > 0, \ \exists \eta > 0: \quad \sup_{(x,t)\in K_T}\left|a^{\varepsilon}(x,t) \right| \leq C_1 \varepsilon^{-N_1} \quad \textrm{if} \ 0<\varepsilon<\eta \] \hspace{.5in} $\exists N_2 \in \mathbb{N}$ such that: $\forall \phi \in \mathcal{A}_{N_2} (\mathbb{R}_+)$ \[ \exists C_2 > 0, \ \exists \eta > 0: \quad \sup_{x\in K_0}\left|u_0^{\varepsilon}(x) \right| \leq C_2 \varepsilon^{-N_2} \quad \textrm{if} \ 0<\varepsilon<\eta \] \hspace{.5in} $\exists N_3 \in \mathbb{N}$ such that: $\forall \phi \in \mathcal{A}_{N_3} (\mathbb{R}_+)$ \[ \exists C_3 > 0, \ \exists \eta > 0: \quad \sup_{t\in [0,T]}\left|h^{\varepsilon}(t) \right| \leq C_3 \varepsilon^{-N_3} \quad \textrm{if} \ 0<\varepsilon<\eta \] \hspace{.5in} $\exists N_4 \in \mathbb{N}$ such that: $\forall \phi \in \mathcal{A}_{N_4} (\mathbb{R}^{2}_{+})$ \[ \exists C_4 > 0, \ \exists \eta > 0: \quad \sup_{(x,t)\in K_T}\left|f^{\varepsilon}(x,t) \right| \leq N_4 \log\left(\frac{C_4}{\varepsilon}\right) \quad \textrm{if} \ 0<\varepsilon<\eta \] Therefore, by the lemma, \[ \forall \phi \in \mathcal{A}_{N_5}, \ \exists C_5>0, \ \exists \eta >0: \quad \sup_{(x,t)\in K_T}\left|u_{i}^ {\varepsilon}(x,t) \right| \leq C_5 \varepsilon^{-N_5} \quad \textrm{if} \ 0<\varepsilon<\eta \] with \[ N_5 = E\left(N_1 +N_2 +N_3 +NTC_4 N_4 \right)+1 \] where $E(\cdot)$ denotes the integer part. For the other derivatives, differentiating the system $(\mathrm{I}_{\varepsilon})$ with respect to $x$, for example, one gets a system similar to the first; and since $\partial_x \Lambda$ is locally of logarithmic growth, one gets the same estimate as before. Hence \[ u_{i}^ {\varepsilon} \in \mathcal{E}_{M}(\mathbb{R}^{2}_{+}) \quad i=1,\ldots,n \] which proves the existence of a solution of problem (1) in $\mathcal{G}(\mathbb{R}^{2}_{+})$. \noindent \textbf{Uniqueness} \noindent Let $U$, $V$ be two solutions in $\mathcal{G}(\mathbb{R}^{2}_{+})$ of the problem, with the same initial data and the same boundary values. One must show that if $u^{\varepsilon}$ is a representative of $U$ in $\mathcal{G}(\mathbb{R}^{2}_{+})$ and $v^{\varepsilon}$ is a representative of $V$ in $\mathcal{G}(\mathbb{R}^{2}_{+})$, then $u^{\varepsilon} - v^{\varepsilon} \in \mathcal{N}(\mathbb{R}^{2}_{+})$; see [2]. \noindent Indeed, $u^{\varepsilon} - v^{\varepsilon}$ satisfies the same problem as before, with zero data, so the same argument gives \[ u^{\varepsilon} - v^{\varepsilon} = O \left( \varepsilon^q \right)\quad \forall q \] $\Box$ \begin{remark} To obtain the solution in the case where $\Lambda \in \textbf{L}^{\infty} (\mathbb{R}^{2}_{+})$ and $F \in \textbf{W}^{-1,\infty}(\mathbb{R}^{2}_{+})$, one uses the following result; see [4, Proposition 2]. \end{remark} \begin{proposition} \textbf{a)} \ Let $\omega\in \textbf{W}^{-1,\infty}_{loc}(\mathbb{R}^{2}_{+})$. Then there exists $U\in \mathcal{G}(\mathbb{R}^{2})$ such that $U$ is associated to $\omega$ and $U$ is locally of logarithmic growth. \textbf{b)} \ Let $\omega\in \textbf{L}^{\infty}(\mathbb{R}^{2})$. Then there exists $U\in \mathcal{G}(\mathbb{R}^{2})$ such that $U$ is associated to $\omega$, $U$ is globally bounded, and $\partial^{\alpha}U$ is locally of logarithmic growth 
for \( \alpha = \left(\alpha_1,\alpha_2 \right) \ \ \mbox{ such that } \ \ \left| \alpha \right| = \alpha_1 +\alpha_2 = 1 \). \end{proposition} \begin{remark} For $g \in \textbf{L}^{\infty}(\mathbb{R}_{+})$ one can find $G\in \mathcal{G}(\mathbb{R}_{+})$ such that \[ G \approx g \] and a representative $g^{\varepsilon}$ of $G$ such that $g^{\varepsilon}$ vanishes in a neighborhood of $0$ for all $\varepsilon$. \end{remark} \noindent \textbf{Application} Consider the problem \eqref{SYS2}: \begin{equation} \left\{ \begin{array}{ll} \Bigl( \partial _{t}+c\left( x\right) \partial _{x}\Bigr) u\left( x,t\right) =0 & \left( x,t\right) \in (\mathbb{R}_{+}^{*})^{2} \\ \Bigl( \partial _{t}-c\left( x\right) \partial _{x}\Bigr) v\left( x,t\right) =0 & \left( x,t\right) \in (\mathbb{R}_{+}^{*})^{2} \\ u\left( x,0\right) =u_{0}\left( x\right) & x\geq 0 \\ v\left( x,0\right) =v_{0}\left( x\right) & x\geq 0 \\ u\left( 0,t\right) = v\left( 0,t\right) & t\geq 0 \\ + \textrm{ Compatibility conditions} & \end{array} \right. \nonumber \label{SYS3} \end{equation} with \[ c\left( x\right) =\left\{ \begin{array}{ll} c_{R} & \mbox{if }x>x_{0} \\ c_{L} & \mbox{if }0<x<x_{0} \end{array} \right. \] \noindent For initial data $u_0$, $v_0$ continuous almost everywhere and vanishing in a neighborhood of $0$, problem (2) admits a classical solution on \begin{eqnarray*} \bigl\{ 0<x<x_0:t\geq0 \bigr\} & \mbox{ and } & \bigl\{ x>x_0 : t\geq0 \bigr\} \end{eqnarray*} and, imposing a transmission condition at $x_0$ (continuity of $u$ and $v$ at the point $x_0$), one obtains a solution on \[ \bigl\{ x\geq0 : t\geq0 \bigr\} \] defined by \begin{eqnarray*} v(x,t) &=& v_0 \left( \gamma_2 (x,t,0) \right) \\ u(x,t) &=& \left\{ \begin{array}{ll} u_0 \left( \gamma_1 (x,t,0) \right) & \mbox{ on (I)} \\ v(0,t) & \mbox{ on (II)} \end{array} \right. \end{eqnarray*} Here $\Gamma$ denotes the characteristic curve issued from $\left(0,0\right)$; region (I) is the set of $\left(x,t\right)\in \mathbb{R}^{2}_{+}$ below $\Gamma$, and region (II) is the set of points $\left(x,t\right)$ above $\Gamma$ (see Figure 2). $\gamma_1$ is the broken characteristic curve corresponding to $c$, and $\gamma_2$ the one corresponding to $-c$. % Figure 2: regions (I) and (II) separated by $\Gamma$ (figure omitted in source). \begin{proposition} Given $u_0$, $v_0$ two bounded, almost everywhere continuous functions vanishing in a neighborhood of $0$, problem (2) admits a unique solution $U$, $V$ in $\mathcal{G}(\mathbb{R}^{2}_{+})$; moreover \[ U\approx u \hspace{.2in} \mbox{and} \hspace{.2in} V\approx v \] where $u$ and $v$ are the distributional solutions of the same problem obtained by imposing the transmission condition. \end{proposition} \noindent \textbf{Proof} Since $c\in \textbf{L}^{\infty}\left( \mathbb{R}_{+} \right)$, by Proposition 1 there exists $C\in \mathcal{G}(\mathbb{R}_{+})$ such that \[ C \approx c \] with $C$ globally bounded and $\partial_x C$ locally of logarithmic growth. Then, by Theorem 1, problem (2) has a unique solution $U$, $V$ in $\mathcal{G}(\mathbb{R}^{2}_{+})$. \ \\ \noindent To show that \[ U\approx u \] we suppose that $\left(x,t\right)$ belongs to the region bounded by the broken characteristic curve $\Gamma$ issued from the origin and the axis $(Ox)$, which we denote region (I). 
\noindent If $(x,t)$ lies above this curve, the argument is identical up to a reflection (region II), and the set of points $\left(x,t\right)\in \Gamma$ (the characteristic curve issued from the origin) is negligible. \ \\ \noindent Let $c^{\varepsilon}$ be a representative of $C$ in $\mathcal{G}(\mathbb{R}_{+})$, $u_{0}^{\varepsilon}$ a representative of $U_0$ in $\mathcal{G}(\mathbb{R}_{+})$, and $v_{0}^{\varepsilon}$ a representative of $V_0$ in $\mathcal{G}(\mathbb{R}_{+})$, and consider the following problem \[ \left\{ \begin{array}{ll} \left(\partial_t + c^{\varepsilon} \partial_x\right)u^{\varepsilon} = 0 & \left(x,t\right)\in (\mathbb{R}^{*}_{+})^{2} \\ \left(\partial_t - c^{\varepsilon} \partial_x\right)v^{\varepsilon} = 0 & \left(x,t\right)\in (\mathbb{R}^{*}_{+})^{2} \\ u^{\varepsilon} (x,0) = u_{0} ^{\varepsilon}(x) & x\in \mathbb{R}_{+} \\ v^{\varepsilon} (x,0) = v_{0} ^{\varepsilon}(x) & x\in \mathbb{R}_{+} \\ u^{\varepsilon}\left( 0,t\right) = v^{\varepsilon}\left( 0,t\right) & t\in \mathbb{R}_{+} \end{array} \right. \] This problem admits a unique solution $u^{\varepsilon}$, $v^{\varepsilon}$ in $\mathcal{C}^{\infty}(\mathbb{R}^{2}_{+})$. \ \\ \noindent Take $$\gamma_{1} ^{\varepsilon} = \gamma_1*\phi_{\eta_{\varepsilon}} $$ with $\phi\in \mathcal{D}(\mathbb{R}^{+})$ such that $$\int_{\mathbb{R}^{+}} \phi(\lambda)d\lambda = 1 \quad\quad \operatorname{supp} \phi_{\eta_{\varepsilon}} \subset \left]x_{0}-\eta_{\varepsilon},x_{0}+\eta_{\varepsilon} \right[ \quad\quad \eta_{\varepsilon} = \left| \log \varepsilon \right|^{-1}$$ \ \\ It is evident that for all $(x,t)$ in region (I) \[ u^{\varepsilon} (x,t) = u_{0} ^{\varepsilon} \bigl(\gamma_{1} ^{\varepsilon}(x,t,0) \bigr) \] so to show that $U\approx u$ it is necessary and sufficient to show that, for all $\psi\in \mathcal{D}(\mathbb{R}^{2}_{+})$, \[ \lim_{\varepsilon \rightarrow 0} \int_{\mbox{region I}} \Bigl( u_{0} ^{\varepsilon}\bigl(\gamma_{1} ^{\varepsilon}(x,t,0)\bigr) - u_{0}\bigl(\gamma_{1}(x,t,0)\bigr) \Bigr)\psi(x,t) dx dt = 0 \] We have \begin{eqnarray*} \int \Bigl( u_{0}^{\:\varepsilon} \bigl( \gamma_{1} ^{\varepsilon} (x,t,0) \bigr) - u_{0} \bigl( \gamma_{1} (x,t,0) \bigr) \Bigr) \psi (x,t) dx\:dt = \hspace{2.0in} \\ \int \Bigl( u_{0}^{\varepsilon} \bigl( \gamma_{1} ^{\varepsilon} (x,t,0) \bigr) - u_{0} \bigl( \gamma_{1}^{\:\varepsilon} (x,t,0) \bigr) \Bigr) \psi (x,t) dx\:dt \hspace{1.0in} \\ + \int \Bigl( u_{0} \bigl( \gamma_{1} ^{\varepsilon} (x,t,0) \bigr) - u_{0} \bigl( \gamma_{1} (x,t,0) \bigr) \Bigr) \psi (x,t) dx\:dt \hspace{1.0in} \end{eqnarray*} and \begin{eqnarray*} \int \Bigl( u_{0}^{\varepsilon} \bigl( \gamma_{1} ^{\varepsilon} (x,t,0) \bigr) - u_{0} \bigl( \gamma_{1}^{\:\varepsilon} (x,t,0) \bigr) \Bigr) \psi (x,t) dx\:dt \hspace{2.0in} \\ = \int \bigl( u_{0}^{\varepsilon} - u_0 \bigr) \bigl( \gamma_{1} ^{\varepsilon} (x,t,0) \bigr) \psi (x,t) dx\:dt \hspace{1.0in} \\ \leq \sup\limits_{x\in \mathbb{R}_+} \left| u_0 \ast \phi_{\varepsilon} - u_0 \right| \left| \int_{\mathbb{R}^{2}_{+}} \psi (x,t) dx\:dt \right| \hspace{1.0in} \end{eqnarray*} so \[ \lim_{\varepsilon \rightarrow 0} \int \Bigl( u_{0}^{\varepsilon} \bigl( \gamma_{1} ^{\varepsilon} (x,t,0) \bigr) - u_{0} \bigl( \gamma_{1}^{\:\varepsilon} (x,t,0) \bigr) \Bigr) \psi (x,t) dx\:dt = 0 \] To show that \[ \lim_{\varepsilon \rightarrow 0} \int \Bigl( u_{0} \bigl( \gamma_{1} ^{\varepsilon} (x,t,0) \bigr) - u_{0} \bigl( \gamma_{1} (x,t,0) \bigr) \Bigr) \psi (x,t) dx\:dt = 0 \] it is sufficient to show that \[ \lim_{\varepsilon \rightarrow 0} \Bigl( 
\gamma_{1} ^{\varepsilon} (x,t,0) - \gamma_{1} (x,t,0) \Bigr) = 0 \] Now $c$ is globally bounded, so \[ \exists M >0 \quad\quad \sup\limits_{x\in \mathbb{R}_+} \left| c^{\varepsilon}(x) \right| < M \] and we can therefore enclose the curve $\gamma_{1} ^{\varepsilon}$ between two broken curves (see Figure 3). \noindent Taking the intersections of these two curves with the axis $(Ox)$ gives two points \begin{eqnarray*} x_1 & = & c_L \Bigl( - \frac{2 \eta_{\varepsilon}}{M} - \frac{x_0 + \eta_{\varepsilon} - x}{c_R} - t \Bigr) - \eta_{\varepsilon} + x_0 \\ x_2 & = & - c_L \Bigl( - \frac{2 \eta_{\varepsilon}}{M} + \frac{x_0 + \eta_{\varepsilon} - x}{c_R} + t \Bigr) - \eta_{\varepsilon} + x_0 \end{eqnarray*} such that \[ x_1 \leq \gamma_{1} ^{\varepsilon} (x,t,0) \leq x_2 \] % Figure 3: the curve $\gamma_1^{\varepsilon}$ enclosed between two broken curves (figure omitted in source). Hence \begin{eqnarray*} \lim_{\varepsilon \rightarrow 0} \gamma_{1} ^{\varepsilon} (x,t,0) &=& - c_L t + \frac{c_L}{c_R} \bigl( x - x_0 \bigr) + x_0 \\ &=& \gamma_{1} (x,t,0) \end{eqnarray*} and therefore \[ U \approx u \] For $v$ the proof is the same. $\Box$ \end{document}
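As an editorial illustration of the regularization used in this paper (not part of the source), the following sketch mollifies the piecewise-constant speed $c(x)$ over a window of width $\eta_\varepsilon=|\log\varepsilon|^{-1}$, traces the characteristic $\gamma_1^\varepsilon$ backwards by Euler integration, and checks numerically that its foot $\gamma_1^\varepsilon(x,t,0)$ approaches the classical broken characteristic as $\varepsilon\to 0$. Smoothing $c$ itself by linear interpolation (rather than convolving $\gamma_1$ with $\phi_{\eta_\varepsilon}$ as in the text) and all numerical values are simplifying assumptions.

```python
# Illustrative sketch of the regularization idea (not the authors' code):
# mollify a piecewise-constant speed and trace the characteristic backwards.
import math

c_L, c_R, x0 = 1.0, 2.0, 1.0

def c_eps(x, eps):
    """Smoothed speed: linear interpolation over a window of width eta_eps."""
    eta = 1.0 / abs(math.log(eps))
    if x <= x0 - eta:
        return c_L
    if x >= x0 + eta:
        return c_R
    s = (x - (x0 - eta)) / (2 * eta)
    return (1 - s) * c_L + s * c_R

def gamma1_eps(x, t, eps, steps=20000):
    """Backward Euler integration of dX/ds = c_eps(X) from (x, t) to s = 0."""
    X, ds = x, t / steps
    for _ in range(steps):
        X -= c_eps(X, eps) * ds
    return X

def gamma1_classical(x, t):
    """Foot of the classical broken characteristic through (x, t), x > x0."""
    t_cross = (x - x0) / c_R  # time spent travelling at speed c_R
    return x0 - c_L * (t - t_cross) if t > t_cross else x - c_R * t

x, t = 2.0, 1.0  # a point in region (I); classical foot is 0.5
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, gamma1_eps(x, t, eps), "->", gamma1_classical(x, t))
```

As $\varepsilon$ shrinks, the computed foot converges to the classical value, which is the numerical counterpart of the association $U \approx u$ proved above.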
arXiv
Treatment outcomes of laryngectomy compared to non-surgical management of T3 laryngeal carcinomas: a 10-year multicentre audit of 179 patients in the northeast of England D J Lin, M Goodfellow, J Ong, M Y Chin, L Lazarova, H C Cocks Journal: The Journal of Laryngology & Otology, First View Published online by Cambridge University Press: 12 January 2021, pp. 1-5 Wide-ranging outcomes have been reported for surgical and non-surgical management of T3 laryngeal carcinomas. This study compared the outcomes of T3 tumours treated with laryngectomy or (chemo)radiotherapy in the northeast of England. The outcomes of T3 laryngeal carcinoma treatment at three centres (2007–2016) were retrospectively analysed using descriptive statistics and survival curves. Of 179 T3 laryngeal carcinomas, 68 were treated with laryngectomies, 57 with chemoradiotherapy and 32 with radiotherapy. There was no significant five-year survival difference between treatment with laryngectomy (34.1 per cent) and chemoradiotherapy (48.6 per cent) (p = 0.184). The five-year overall survival rate for radiotherapy (12.5 per cent) was significantly inferior compared to laryngectomy and chemoradiotherapy (p = 0.003 and p < 0.001, respectively). The recurrence rates were 22.1 per cent for laryngectomy, 17.5 per cent for chemoradiotherapy and 50 per cent for radiotherapy. There were significant differences in recurrence rates when laryngectomy (p = 0.005) and chemoradiotherapy (p = 0.001) were compared to radiotherapy. Laryngectomy and chemoradiotherapy had significantly higher five-year overall survival and lower recurrence rates compared with radiotherapy alone. Laryngectomy should be considered in patients unsuitable for chemotherapy, as it may convey a significant survival advantage over radiotherapy alone. Estimated Number of N95 Respirators Needed for Healthcare Workers in Acute Care Hospitals During the COVID-19 Coronavirus Pandemic Patrick T. Wedlock, Kelly J. O'Shea, Madellena Conte, Sarah M. Bartsch, Samuel L. Randall, Marie C. Ferguson, Sarah N. Cox, Sheryl S. Siegmund, Sarah Kulkarni, Denis Nash, Michael Y. Lin, Bruce Y. Lee Journal: Infection Control & Hospital Epidemiology / Accepted manuscript Due to shortages of N95 respirators during the COVID-19 pandemic, it is necessary to estimate the number of N95s required for healthcare workers (HCWs) to inform manufacturing targets and resource allocation. We developed a model to determine the number of N95 respirators needed for HCWs both in a single acute care hospital and in the United States. 
For an acute care hospital with 400 all-cause monthly admissions, the number of N95 respirators needed to manage COVID-19 patients admitted during a month ranges from 113 (95% IPR: 50-229) if 0.5% of admissions are COVID-19 patients to 22,101 (95% IPR: 5,904-25,881) if 100% of admissions are COVID-19 patients (assuming single use per respirator and 10 encounters between HCWs and each COVID-19 patient per day). The number of N95s needed decreases (22 [95% IPR: 10-43] to 4,445 [95% IPR: 1,975-8,684]) if each N95 is used for five patient encounters. Varying monthly all-cause admissions to 2,000 requires 6,645-13,404 respirators with a 60% COVID-19 admission prevalence, 10 HCW-patient encounters per day, and N95s reused 5-10 times. Nationally, the number of N95 respirators needed over the course of the pandemic ranges from 86 million (95% IPR: 37.1-200.6 million) to 1.6 billion (95% IPR: 0.7-3.6 billion) as 5-90% of the population is exposed (single use), and from 17.4 million (95% IPR: 7.3-41 million) to 312.3 million (95% IPR: 131.5-737.3 million) using each respirator for five encounters. Our study quantifies the number of N95 respirators needed for a given acute care hospital and nationally during the COVID-19 pandemic under varying conditions.
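The point estimates above can be approximated with a deterministic back-of-envelope calculation. The sketch below is not the authors' model (theirs is stochastic, which is where the interval ranges come from); the mean length of stay of 5.5 days is an assumed value, chosen only to show how the quantities multiply.

```python
# Back-of-envelope N95 demand sketch; NOT the authors' stochastic model.
def n95_per_month(admissions, covid_share, encounters_per_day,
                  mean_los_days=5.5, uses_per_respirator=1):
    """Respirators needed for one month of COVID-19 admissions.

    mean_los_days is an assumed average length of stay (a guess);
    the published model samples it from a distribution instead.
    """
    covid_patients = admissions * covid_share
    encounters = covid_patients * encounters_per_day * mean_los_days
    return encounters / uses_per_respirator

# Reproduce the ballpark of the quoted point estimates (400 admissions/month):
print(n95_per_month(400, 0.005, 10))                         # ~110 vs reported 113
print(n95_per_month(400, 1.00, 10))                          # ~22,000 vs reported 22,101
print(n95_per_month(400, 1.00, 10, uses_per_respirator=5))   # ~4,400 vs reported 4,445
```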
The Relationship between Negative Symptoms and Both Emotion Management and Non-social Cognition in Schizophrenia Spectrum Disorders
Caitlin O. B. Yolland, Sean P. Carruthers, Wei Lin Toh, Erica Neill, Philip J. Sumner, Elizabeth H. X. Thomas, Eric J. Tan, Caroline Gurvich, Andrea Phillipou, Tamsyn E. Van Rheenen, Susan L. Rossell
Journal: Journal of the International Neuropsychological Society, First View
Published online by Cambridge University Press: 21 December 2020, pp. 1-13
There is ongoing debate regarding the relationship between clinical symptoms and cognition in schizophrenia spectrum disorders (SSD). The present study aimed to explore the potential relationships between symptoms, with an emphasis on negative symptoms, and social and non-social cognition. Hierarchical cluster analysis with k-means optimisation was conducted to characterise clinical subgroups using the Scale for the Assessment of Negative Symptoms and the Scale for the Assessment of Positive Symptoms in n = 130 SSD participants. Emergent clusters were compared on the MATRICS Consensus Cognitive Battery, which measures non-social cognition and emotion management, as well as on demographic and clinical variables. Spearman's correlations were then used to investigate potential relationships between specific negative symptoms and emotion management and non-social cognition. Four distinct clinical subgroups were identified: (1) high hallucinations, (2) mixed symptoms, (3) high negative symptoms, and (4) relatively asymptomatic. The high negative symptom subgroup was found to have significantly poorer emotion management than the high hallucination and relatively asymptomatic subgroups. No further differences between subgroups were observed. Correlation analyses revealed that avolition-apathy and anhedonia-asociality were negatively correlated with emotion management, but not with non-social cognition. Affective flattening and alogia were not associated with either emotion management or non-social cognition. The present study identified associations between negative symptoms and emotion management within social cognition, but not with any domain of non-social cognition. This relationship may be specific to motivation, anhedonia and apathy, but not expressive deficits. This suggests that targeted interventions for social cognition may also result in parallel improvement in some specific negative symptoms.

Evidence, and replication thereof, that molecular-genetic and environmental risks for psychosis impact through an affective pathway
Jim van Os, Lotta-Katrin Pries, Margreet ten Have, Ron de Graaf, Saskia van Dorsselaer, Philippe Delespaul, Maarten Bak, Gunter Kenis, Bochao D. Lin, Jurjen J. Luykx, Alexander L. Richards, Berna Akdede, Tolga Binbay, Vesile Altınyazar, Berna Yalınçetin, Güvem Gümüş-Akay, Burçin Cihan, Haldun Soygür, Halis Ulaş, Eylem Şahin Cankurtaran, Semra Ulusoy Kaymak, Marina M. Mihaljevic, Sanja Andric Petrovic, Tijana Mirjanic, Miguel Bernardo, Gisela Mezquida, Silvia Amoretti, Julio Bobes, Pilar A. Saiz, María Paz García-Portilla, Julio Sanjuan, Eduardo J. Aguilar, José Luis Santos, Estela Jiménez-López, Manuel Arrojo, Angel Carracedo, Gonzalo López, Javier González-Peñas, Mara Parellada, Nadja P. Maric, Cem Atbaşoğlu, Alp Ucok, Köksal Alptekin, Meram Can Saka, Celso Arango, Michael O'Donovan, Bart P. F. Rutten, Sinan Guloksuz
Published online by Cambridge University Press: 19 October 2020, pp. 1-13
There is evidence that environmental and genetic risk factors for schizophrenia spectrum disorders are transdiagnostic and mediated in part through a generic pathway of affective dysregulation. We analysed to what degree the impact of schizophrenia polygenic risk (PRS-SZ) and childhood adversity (CA) on psychosis outcomes was contingent on co-presence of affective dysregulation, defined as significant depressive symptoms, in (i) NEMESIS-2 (n = 6646), a representative general population sample interviewed four times over nine years, and (ii) EUGEI (n = 4068), a sample of patients with schizophrenia spectrum disorder, the siblings of these patients and controls. The impact of PRS-SZ on psychosis showed significant dependence on co-presence of affective dysregulation in NEMESIS-2 [relative excess risk due to interaction (RERI): 1.01, p = 0.037] and in EUGEI (RERI = 3.39, p = 0.048). This was particularly evident for delusional ideation (NEMESIS-2: RERI = 1.74, p = 0.003; EUGEI: RERI = 4.16, p = 0.019) and not for hallucinatory experiences (NEMESIS-2: RERI = 0.65, p = 0.284; EUGEI: RERI = −0.37, p = 0.547). A similar and stronger pattern of results was evident for CA (RERI delusions and hallucinations: NEMESIS-2: 3.02, p < 0.001; EUGEI: 6.44, p < 0.001; RERI delusional ideation: NEMESIS-2: 3.79, p < 0.001; EUGEI: 5.43, p = 0.001; RERI hallucinatory experiences: NEMESIS-2: 2.46, p < 0.001; EUGEI: 0.54, p = 0.465). The results, and internal replication, suggest that the effects of known genetic and non-genetic risk factors for psychosis are mediated in part through an affective pathway, from which early states of delusional meaning may arise.

A replication study of JTC bias, genetic liability for psychosis and delusional ideation
Cécile Henquet, Jim van Os, Lotta K. Pries, Christian Rauschenberg, Philippe Delespaul, Gunter Kenis, Jurjen J. Luykx, Bochao D. Lin, Alexander L. Richards, Berna Akdede, Tolga Binbay, Vesile Altınyazar, Berna Yalınçetin, Güvem Gümüş-Akay, Burçin Cihan, Haldun Soygür, Halis Ulaş, Eylem S. Cankurtaran, Semra U. Kaymak, Marina M. Mihaljevic, Sanja S. Petrovic, Tijana Mirjanic, Miguel Bernardo, Gisela Mezquida, Silvia Amoretti, Julio Bobes, Pilar A. Saiz, Maria P. García-Portilla, Julio Sanjuan, Eduardo J. Aguilar,
Jose L. Santos, Estela Jiménez-López, Manuel Arrojo, Angel Carracedo, Gonzalo López, Javier González-Peñas, Mara Parellada, Nadja P. Maric, Cem Atbaşoğlu, Alp Ucok, Köksal Alptekin, Meram C. Saka, Celso Arango, Michael O'Donovan, Bart P.F. Rutten, Sinan Gülöksüz
Published online by Cambridge University Press: 13 October 2020, pp. 1-7
This study attempted to replicate whether a bias in probabilistic reasoning, or 'jumping to conclusions' (JTC) bias, is associated with being a sibling of a patient with schizophrenia spectrum disorder and, if so, whether this association is contingent on subthreshold delusional ideation. Data were derived from the EUGEI project, a 25-centre, 15-country effort to study psychosis spectrum disorder. The current analyses included 1261 patients with schizophrenia spectrum disorder, 1282 siblings of patients and 1525 healthy comparison subjects, recruited in Spain (five centres), Turkey (three centres) and Serbia (one centre). The beads task was used to assess JTC bias. Lifetime experience of delusional ideation and hallucinatory experiences was assessed using the Community Assessment of Psychic Experiences. General cognitive abilities were taken into account in the analyses. JTC bias was positively associated not only with patient status but also with sibling status [adjusted relative risk (aRR) ratio: 4.23, 95% CI 3.46–5.17 for siblings and aRR: 5.07, 95% CI 4.13–6.23 for patients]. The association between JTC bias and sibling status was stronger in those with higher levels of delusional ideation (aRR interaction in siblings: 3.77, 95% CI 1.67–8.51, and in patients: 2.15, 95% CI 0.94–4.92). The association between JTC bias and sibling status was not stronger in those with higher levels of hallucinatory experiences. These findings replicate earlier findings that JTC bias is associated with familial liability for psychosis and that this is contingent on the degree of delusional ideation but not hallucinations.

Overview of the SPARC tokamak
Status of the SPARC Physics Basis
A. J. Creely, M. J. Greenwald, S. B. Ballinger, D. Brunner, J. Canik, J. Doody, T. Fülöp, D. T. Garnier, R. Granetz, T. K. Gray, C. Holland, N. T. Howard, J. W. Hughes, J. H. Irby, V. A. Izzo, G. J. Kramer, A. Q. Kuang, B. LaBombard, Y. Lin, B. Lipschultz, N. C. Logan, J. D. Lore, E. S. Marmar, K. Montes, R. T. Mumgaard, C. Paz-Soldan, C. Rea, M. L. Reinke, P. Rodriguez-Fernandez, K. Särkimäki, F. Sciortino, S. D. Scott, A. Snicker, P. B. Snyder, B. N. Sorbom, R. Sweeney, R. A. Tinguely, E. A. Tolman, M. Umansky, O. Vallhagen, J. Varje, D. G. Whyte, J. C. Wright, S. J. Wukitch, J. Zhu, the SPARC Team
Journal: Journal of Plasma Physics / Volume 86 / Issue 5 / October 2020
Published online by Cambridge University Press: 29 September 2020, 865860502
The SPARC tokamak is a critical next step towards commercial fusion energy. SPARC is designed as a high-field ($B_0 = 12.2$ T), compact ($R_0 = 1.85$ m, $a = 0.57$ m), superconducting, D-T tokamak with the goal of producing fusion gain $Q>2$ from a magnetically confined fusion plasma for the first time. Currently under design, SPARC will continue the high-field path of the Alcator series of tokamaks, utilizing new magnets based on rare earth barium copper oxide high-temperature superconductors to achieve high performance in a compact device. The goal of $Q>2$ is achievable with conservative physics assumptions ($H_{98,y2} = 0.7$) and, with the nominal assumption of $H_{98,y2} = 1$, SPARC is projected to attain $Q \approx 11$ and $P_{\textrm{fusion}} \approx 140$ MW. SPARC will therefore constitute a unique platform for burning plasma physics research with high density ($\langle n_{e} \rangle \approx 3 \times 10^{20}\ \textrm{m}^{-3}$), high temperature ($\langle T_e \rangle \approx 7$ keV) and high power density ($P_{\textrm{fusion}}/V_{\textrm{plasma}} \approx 7\ \textrm{MW}\,\textrm{m}^{-3}$) relevant to fusion power plants. SPARC's place in the path to commercial fusion energy, its parameters and the current status of SPARC design work are presented. This work also describes the basis for global performance projections and summarizes some of the physics analysis that is presented in greater detail in the companion articles of this collection.
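The quoted volumetric power density can be sanity-checked against the other headline parameters: 140 MW at about 7 MW m⁻³ implies a plasma volume of roughly 20 m³, which is what an elongated torus of the stated major and minor radii gives. The short check below (Python) makes that arithmetic explicit; the plasma elongation is not given in the abstract, so the value used here is an illustrative assumption.

```python
import math

# Cross-check of the SPARC headline numbers quoted above.
# Plasma volume approximated as an elongated torus, V ~ 2*pi^2 * R0 * a^2 * kappa;
# the elongation kappa is NOT given in the abstract and is assumed for illustration.
R0 = 1.85          # major radius [m] (from the abstract)
a = 0.57           # minor radius [m] (from the abstract)
kappa = 1.75       # assumed elongation (illustrative)
P_fusion = 140.0   # projected fusion power [MW] (from the abstract)

volume = 2 * math.pi**2 * R0 * a**2 * kappa
print(f"plasma volume ~ {volume:.1f} m^3")                # ~20.8 m^3
print(f"power density ~ {P_fusion / volume:.1f} MW/m^3")  # ~6.7 MW/m^3, consistent with ~7
```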
Can elevated concentrations of ALT and AST predict the risk of 'recurrence' of COVID-19?
L. Z. Chen, Z. H. Lin, J. Chen, S. S. Liu, T. Shi, Y. N. Xin
'Recurrence' of coronavirus disease 2019 (COVID-19) has triggered numerous discussions among scholars at home and abroad. A total of 44 recurrent cases of COVID-19 and 32 control cases, admitted from 11 February to 29 March 2020 to the Guanggu Campus of Tongji Hospital, affiliated to Tongji Medical College, Huazhong University of Science and Technology, were enrolled in this study. All 44 recurrent cases were classified as mild to moderate when the patients were admitted for the second time. The gender distribution and mean age in the two groups (recurrent and control) were similar. At least one concomitant disease was observed in 52.27% of recurrent cases and 34.38% of control cases. The most prevalent comorbidity among them was hypertension. Fever and cough were the most prevalent clinical symptoms in both groups. On comparison, recurrent cases had markedly elevated concentrations of alanine aminotransferase (ALT) (P = 0.020) and aspartate aminotransferase (AST) (P = 0.007). Moreover, subgroup analysis showed mildly to moderately abnormal concentrations of ALT and AST in recurrent cases. The elevated concentrations of ALT and AST may be recognised as predictive markers for the risk of 'recurrence' of COVID-19, which may provide insights into the prevention and control of COVID-19 in the future.

Familial coaggregation of major psychiatric disorders in first-degree relatives of individuals with autism spectrum disorder: a nationwide population-based study
Hohui E. Wang, Chih-Ming Cheng, Ya-Mei Bai, Ju-Wei Hsu, Kai-Lin Huang, Tung-Ping Su, Shih-Jen Tsai, Cheng-Ta Li, Tzeng-Ji Chen, Bennett L. Leventhal, Mu-Hong Chen
Familial coaggregation of attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), bipolar disorder (BD), major depressive disorder (MDD) and schizophrenia has been reported in previous studies. The shared genetic and environmental factors among psychiatric disorders remain elusive. This nationwide population-based study examined familial coaggregation of major psychiatric disorders in first-degree relatives (FDRs) of individuals with ASD. Taiwan's National Health Insurance Research Database was used to identify 26 667 individuals with ASD and 67 998 FDRs of individuals with ASD. The cohort was matched in a 1:4 ratio to 271 992 controls.
The relative risks (RRs) and 95% confidence intervals (CIs) of ADHD, ASD, BD, MDD and schizophrenia were assessed among FDRs of individuals with ASD and of individuals with ASD with intellectual disability (ASD-ID). FDRs of individuals with ASD had higher RRs of major psychiatric disorders compared with controls: ASD 17.46 (CI 15.50–19.67), ADHD 3.94 (CI 3.72–4.17), schizophrenia 3.05 (CI 2.74–3.40), BD 2.22 (CI 1.98–2.48) and MDD 1.88 (CI 1.76–2.00). Higher RRs of schizophrenia (4.47, CI 3.95–5.06) and ASD (18.54, CI 16.18–21.23) were observed in FDRs of individuals with ASD-ID compared with ASD only. The risk for major psychiatric disorders was consistently elevated across all types of FDRs of individuals with ASD. FDRs of individuals with ASD-ID are at even higher risk for ASD and schizophrenia. Our results provide leads for future investigation of shared etiologic pathways of ASD, ID and major psychiatric disorders and highlight the importance of mental health care delivered to at-risk families for early diagnosis and intervention.

Recommendations for Patients with Complex Nerve Injuries during the COVID-19 Pandemic
Kristine M. Chapman, Michael J. Berger, Christopher Doherty, Dimitri J. Anastakis, Heather L. Baltzer, Kirsty Usher Boyd, Sean G. Bristol, Brett Byers, K. Ming Chan, Cameron J.B. Cunningham, Kristen M. Davidge, Jana Dengler, Kate Elzinga, Jennifer L. Giuffre, Lisa Hadley, A Robertson Harrop, Mahdis Hashemi, J. Michael Hendry, Kristin L. Jack, Emily M. Krauss, Timothy J. Lapp, Juliana Larocerie, Jenny C. Lin, Thomas A. Miller, Michael Morhart, Christine B. Novak, Russell O'Connor, Jaret L. Olsen, Benjamin R. Ritsma, Lawrence R. Robinson, Douglas C. Ross, Christiaan Schrag, Alexander Seal, David T. Tang, Jessica Trier, Gerald Wolff, Justin Yeung
Journal: Canadian Journal of Neurological Sciences, First View
Published online by Cambridge University Press: 27 August 2020, pp. 1-6

Developing a Rheological Relation for Transient Dense Granular Flows via Discrete Element Simulation in a Rotating Drum
C.-C. Lin, M.-Z. Jiang, F.-L. Yang
Journal: Journal of Mechanics / Volume 36 / Issue 5 / October 2020
This work examines the μ(I) relation that describes the effective friction coefficient μ of a dense granular flow as a function of the flow inertial number I, at the center of a rotating drum, from flow onset to steady state, using the discrete element method (DEM). We want to see how the internal friction coefficient of an accelerating flow may be predicted, so that the associated tangential stress can be estimated given proper knowledge of the normal stress. Under the three investigated drum speeds (3, 4.5 and 6 rpm), the bulk normal stress, σn(y), is found to maintain a linear depth profile throughout the flow development, with a slope degraded from the hydrostatic value, Ph(y), due to lateral wall friction. With the discovery of a non-constant, depth-decaying effective wall friction coefficient, we derive analytically a wall-degradation factor K(h) to give σn(y) = K(h)Ph(y). The depth profile of tangential stress, however, varies in time from a concave shape upon acceleration, τa(y), to a more linear trend at the steady state, τss(y). Hence, the μa-Ia profile (with μa = τ/σn) upon flow acceleration offsets from the steady μss(Iss) relation. A pseudo-steady acceleration modification number, ΔI, is proposed to shift the inertial number in the acceleration phase to I* = Ia + ΔI, so that the μa-I* data converge to μss(Iss). This finding shall allow us to predict the transient tangential stress by τa(y) = μss(I*)K(y)Ph(y) using the well-accepted knowledge of steady flow rheology, hydrostatic pressure, and the currently developed wall-degradation factor.
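The closing relation τa(y) = μss(I*)K(y)Ph(y) can be read as a recipe: take a steady-state rheology, shift the inertial number by ΔI, and degrade the hydrostatic pressure by the wall factor. The sketch below (Python) shows that structure; the functional forms used for μss(I) (the common Jop-type fit) and for K(y) are generic placeholders, not the expressions fitted in the paper.

```python
import numpy as np

# Illustration of tau_a(y) = mu_ss(I*) * K(y) * Ph(y) with I* = I_a + dI.
# mu_ss(I) below is the widely used Jop-et-al. form and K(y) is a generic
# exponential placeholder -- both stand in for the paper's fitted expressions.

def mu_ss(I, mu1=0.38, mu2=0.64, I0=0.3):
    """Steady-state mu(I) rheology (generic parameter values)."""
    return mu1 + (mu2 - mu1) / (I0 / I + 1.0)

def K(y, c=2.0):
    """Assumed wall-degradation factor, decaying with depth y [m]."""
    return np.exp(-c * y)

def tau_transient(y, I_a, dI, rho=1500.0, g=9.81):
    """Transient tangential stress via the shifted inertial number I* = I_a + dI."""
    Ph = rho * g * y                  # hydrostatic normal stress [Pa]
    return mu_ss(I_a + dI) * K(y) * Ph

y = np.linspace(0.005, 0.05, 5)       # depths below the free surface [m]
print(tau_transient(y, I_a=0.05, dI=0.02))
```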
The social and emotional wellbeing of Indigenous LGBTQA+ young people: a global perspective
K. Spurway, K. Soldatic, L. Briskman, B. Uink, S. Liddelow-Hunt, B. Hill, A. Lin
Journal: Irish Journal of Psychological Medicine, First View
There has been scant exploration of the social and emotional wellbeing (SEWB) of young Indigenous populations that identify as LGBTQA+ (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning, Asexual +). Given the vulnerability of this cohort living in Western settler colonial societies, wider investigation is called for to respond to their needs, experiences and aspirations. This paper summarises existing research on the topic, highlighting the lack of scholarship on the intersection of youth, Indigeneity, LGBTQA+ identity and SEWB. The paper takes a holistic approach to provide a global perspective that draws on an emerging body of literature and research driven by Indigenous scholars in settler colonial societies. The paper points to the importance of understanding converging colonial influences and ongoing contemporary elements, such as racism and marginalization, that impact on the wellbeing of young Indigenous LGBTQA+ people.

4070 Association of Interpersonal Processes of Care and Health Outcomes in Patients with Type II Diabetes
Hadley Reid, Olivia M Lin, Rebecca L Fabbro, Kimberly S Johnson, Laura P. Svetkey, Bryan C Batch
OBJECTIVES/GOALS: 1. Understand the association between patient perceptions of care, measured by the Interpersonal Processes of Care (IPC) Survey, and glycemic control, appointment no-shows/cancellations and medication adherence in patients with type II diabetes. 2. Determine how these relationships differ by race for non-Hispanic White and Black patients.
METHODS/STUDY POPULATION: This is a cross-sectional study of a random sample of 100 White and 100 Black type II diabetic patients followed in Duke primary care clinics and prescribed antihyperglycemic medication. We will recruit through email and phone calls. Enrolled patients will complete the Interpersonal Processes of Care Short Form and the Extent of Medication Adherence survey to measure patient perceptions of care (predictor) and medication adherence (secondary outcome). No-show appointments and cancellations (secondary outcomes) and the most recent hemoglobin A1c (primary outcome) will be collected from the electronic medical record. We will also collect basic demographic information, insurance status, financial security, significant co-morbidities, and the number and type (subcutaneous vs oral) of antihyperglycemic medications.
RESULTS/ANTICIPATED RESULTS: - The study is powered to detect a 0.6% difference in HbA1c, our primary outcome, between high and low scorers on the Interpersonal Processes of Care subdomains. - We expect that higher patient scores in the positive domains of the IPC survey and lower
DISCUSSION/SIGNIFICANCE OF IMPACT: This study will provide information to develop and implement targeted interventions to reduce racial and ethnic disparities in patients with type II diabetes. We hope to gain information on potentially modifiable factors in patient-provider interactions that can be intervened upon to improve prevention and long-term outcomes in these populations.
Effects of dietary incorporation of linseed oil with soybean isoflavone on fatty acid profiles and lipid metabolism-related gene expression in breast muscle of chickens
Z. Y. Gou, X. Y. Cui, L. Li, Q. L. Fan, X. J. Lin, Y. B. Wang, Z. Y. Jiang, S. Q. Jiang
The meat quality of chicken is an important factor affecting consumer health. It was hypothesised that n-3 polyunsaturated fatty acids (n-3 PUFA) could be effectively deposited in chicken by incorporating the antioxidant soybean isoflavone (SI), leading to improved meat quality for the benefit of human health. The effects of partial or complete dietary substitution of lard (LA) with linseed oil (LO), with or without SI, on growth performance, biochemical indicators, meat quality, fatty acid profiles, lipid-related health indicators and gene expression in breast muscle were examined in chickens. A total of 900 males were fed a corn–soybean meal diet supplemented with 4% LA, 2% LA + 2% LO or 4% LO, the latter two also tested with the inclusion of 30 mg SI/kg (2% LA + 2% LO + SI and 4% LO + SI), from 29 to 66 days of age; each of the five dietary treatments included six replicates of 30 birds. Compared with the 4% LA diet, dietary 4% LO significantly increased feed efficiency and had no negative effect on objective indices of meat quality; LO significantly decreased plasma triglycerides and total cholesterol (TCH); abdominal fat percentage was significantly decreased in birds fed the 4% LO and 4% LO + SI diets. The LO diets resulted in higher contents of α-linolenic acid (C18:3n-3), EPA (C20:5n-3) and total n-3 PUFA, together with lower contents of palmitic acid (C16:0), lignoceric acid (C24:0) and saturated fatty acids and a lower n-6:n-3 ratio in breast muscle, compared with the 4% LA diet (P < 0.05); they also significantly decreased the atherogenic and thrombogenic indices and increased the hypocholesterolemic to hypercholesterolemic ratio. Adding SI to the LO diets enhanced the contents of EPA and DHA (C22:6n-3), plasma total superoxide dismutase, the reduced glutathione (GSH)/oxidized glutathione ratio and muscle GSH content, while decreasing plasma total triglyceride and TCH and the malondialdehyde content in plasma and breast muscle, compared with its absence (P < 0.05). Expression in breast muscle of the fatty acid desaturase 1 (FADS1), FADS2, elongase 2 (ELOVL2) and ELOVL5 genes was significantly higher with the LO diets including SI than with the 4% LA diet. Significant interactions existed between LO level and inclusion of SI on EPA and TCH contents. These findings indicate that a diet supplemented with LO combined with SI is an effective alternative when optimising the nutritional value of chicken meat for human consumers.

Role of DRD2 and ALDH2 genes in bipolar II disorder with and without comorbid anxiety disorder
Y.-S. Wang, S.-Y. Lee, S.-L. Chen, Y.-H. Chang, T.-Y. Wang, S.-H. Lin, C.-L. Wang, S.-Y. Huang, I.H. Lee, P.S. Chen, Y.K. Yang, R.-B. Lu
Journal: European Psychiatry / Volume 29 / Issue 3 / March 2014
The presence of comorbid anxiety disorders (AD) in bipolar II disorder (BP-II) compounds disability, complicates treatment, worsens prognosis, and has been understudied. The genes involved in metabolizing dopamine and encoding dopamine receptors, such as the aldehyde dehydrogenase 2 (ALDH2) and dopamine D2 receptor (DRD2) genes, may be important to the pathogenesis of BP-II comorbid with AD. We aimed to clarify the contribution of the ALDH2 and DRD2 genes to predisposition to BP-II with and without comorbid AD.
The sample consisted of 335 subjects with BP-II without AD, 127 subjects with BP-II with AD and 348 healthy subjects as normal controls. The genotypes of the ALDH2 and DRD2 Taq-IA polymorphisms were determined using polymerase chain reaction plus restriction fragment length polymorphism analysis. Logistic regression analysis showed a statistically significant association between the DRD2 Taq-I A1/A2 genotype and BP-II with AD (OR = 2.231, P = 0.021). Moreover, a significant interaction of the DRD2 Taq-I A1/A1 and the ALDH2*1*1 genotypes in BP-II without AD was revealed (OR = 5.623, P = 0.001) compared with normal controls. Our findings support the hypothesis of a unique genetic distinction between BP-II with and without AD, and suggest a novel association between the DRD2 Taq-I A1/A2 genotype and BP-II with AD. Our study also provides further evidence that the ALDH2 and DRD2 genes interact in BP-II, particularly BP-II without AD.

2205 – Working Memory Dependent Prefrontal-Parietal Connectivity and Model-Based Diagnostic Classification in Schizophrenia
L. Deserno, K. Brodersen, Z. Lin, W.D. Penny, A. Heinz, K.E. Stephan, F. Schlagenhauf
Journal: European Psychiatry / Volume 28 / Issue S1 / 2013
Published online by Cambridge University Press: 15 April 2020, p. 1
Impaired working memory (WM) is among the best-established findings in schizophrenia. Nevertheless, functional neuroimaging studies on WM have yielded inconsistent results. Disrupted functional integration in the WM network may explain neural inefficiency more precisely. This so-called 'dysconnectivity' hypothesis of schizophrenia focuses on abnormal synaptic plasticity. In a step towards pathophysiologically informed diagnostic classification schemes, the recent introduction of "generative embedding" procedures to neuroimaging combines neurobiologically interpretable generative models (e.g. DCMs) with support vector machines (SVMs) for diagnostic classification. This fMRI study in 41 schizophrenia patients and 42 healthy controls presents four major results: (1) across controls and patients, prefrontal activation is modulated by WM performance, resulting in an inverted U-curve; (2) DCM of the prefrontal-parietal WM network demonstrated that WM-dependent prefrontal-to-parietal connectivity is reduced in all patients, independent of WM performance; (3) classification in a supervised setting using generative embedding yielded 78% accuracy, while model-based clustering in an unsupervised fashion performed almost equally well (71% accuracy); (4) subclustering of schizophrenia patients revealed three distinct subgroups, which exhibited different profiles of prefrontal-parietal connectivity and, critically, were found to differ significantly in clinical symptoms. This study reveals putative mechanisms underlying prefrontal inefficiency and cognitive deficits in schizophrenia, providing direct experimental evidence for the dysconnectivity hypothesis. A novel model-based clustering approach revealed three distinct subgroups of patients with unique connectivity profiles and significant differences in clinical ratings. This translational approach may help to identify specific factors underlying the variability of treatment responses and to develop subgroup-specific treatment approaches.
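Generative embedding, as used here, means fitting a generative model (a DCM) per subject and then classifying in the space of fitted model parameters rather than raw voxel data. A schematic version of that pipeline is sketched below (Python, scikit-learn) on simulated data: the feature matrix standing in for per-subject DCM connectivity parameters is synthetic, and the SVM configuration is a generic choice, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Simulated "generative embedding": each row mimics one subject's fitted DCM
# connectivity parameters (6 here); patients get a reduced prefrontal-to-parietal
# coupling in one parameter, echoing the finding reported above.
controls = rng.normal(0.5, 0.15, size=(42, 6))
patients = rng.normal(0.5, 0.15, size=(41, 6))
patients[:, 0] -= 0.25              # reduced WM-dependent forward connection

X = np.vstack([controls, patients])
y = np.array([0] * 42 + [1] * 41)   # 0 = control, 1 = patient

# Supervised classification in model-parameter space (generic linear SVM).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```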
Dysfunctional default mode network and executive control network in people with Internet gaming disorder: independent component analysis under a probability discounting task
L Wang, L Wu, X Lin, Y Zhang, H Zhou, X Du, G Dong
Journal: European Psychiatry / Volume 34 / April 2016
Published online by Cambridge University Press: 23 March 2020, pp. 36-42
The present study identified the neural mechanism of risky decision-making in Internet gaming disorder (IGD) under a probability discounting task. Independent component analysis was used on the functional magnetic resonance imaging data from 19 IGD subjects (22.2 ± 3.08 years) and 21 healthy controls (HC, 22.8 ± 3.5 years). For the behavioural results, IGD subjects preferred the risky over the fixed options and showed shorter reaction times compared to HC. For the imaging results, the IGD subjects showed higher task-related activity in the default mode network (DMN) and less engagement of the executive control network (ECN) than HC when making risky decisions. We also found that activity in the DMN correlated negatively with reaction time and that activity in the ECN correlated positively with the probability discounting rates. The results suggest that people with IGD show altered modulation in the DMN and a deficit in executive control function, which might be why IGD subjects continue to play online games despite the potential negative consequences.

Rama Kiblawi, Andreana N. Holowatyj, Biljana Gigic, Stefanie Brezina, Anne J. M. R. Geijsen, Jennifer Ose, Tengda Lin, Sheetal Hardikar, Caroline Himbert, Christy A. Warby, Jürgen Böhm, Martijn J. L. Bours, Fränzel J. B. van Duijnhoven, Tanja Gumpenberger, Dieuwertje E. Kok, Janna L. Koole, Eline H. van Roekel, Petra Schrotz-King, Arve Ulvik, Andrea Gsur, Nina Habermann, Matty P. Weijenberg, Per Magne Ueland, Martin Schneider, Alexis Ulrich, Cornelia M. Ulrich, Mary Playdon
Journal: British Journal of Nutrition / Volume 123 / Issue 10 / 28 May 2020
B vitamins involved in one-carbon metabolism have been implicated in the development of inflammation- and angiogenesis-related chronic diseases, such as colorectal cancer (CRC). Yet, the role of one-carbon metabolism in inflammation and angiogenesis among CRC patients remains unclear. The objective of this study was to investigate associations of components of one-carbon metabolism with inflammation and angiogenesis biomarkers among newly diagnosed CRC patients (n 238) in the prospective ColoCare Study, Heidelberg. We cross-sectionally analysed associations between twelve B vitamins and one-carbon metabolites and ten inflammation and angiogenesis biomarkers from pre-surgery serum samples using multivariable linear regression models. We further explored associations among novel biomarkers in these pathways with Spearman partial correlation analyses. We hypothesised that pyridoxal-5'-phosphate (PLP) is inversely associated with inflammatory biomarkers. We observed that PLP was inversely associated with C-reactive protein (CRP) (r –0·33, Plinear < 0·0001), serum amyloid A (SAA) (r –0·23, Plinear = 0·003), IL-6 (r –0·39, Plinear < 0·0001), IL-8 (r –0·20, Plinear = 0·02) and TNFα (r –0·12, Plinear = 0·045). Similar findings were observed for 5-methyl-tetrahydrofolate and CRP (r –0·14), SAA (r –0·14) and TNFα (r –0·15) among CRC patients.
The folate catabolite acetyl-para-aminobenzoylglutamic acid (pABG) was positively correlated with IL-6 (r 0·27, Plinear < 0·0001) and with IL-8 (r 0·21, Plinear < 0·0001), indicating higher folate utilisation during inflammation. Our data support the hypothesis of inverse associations between PLP and inflammatory biomarkers among CRC patients. A better understanding of the role and inter-relation of PLP and other one-carbon metabolites with inflammatory processes in colorectal carcinogenesis and prognosis could identify targets for future dietary guidance for CRC patients.

OVERPARTITIONS RELATED TO THE MOCK THETA FUNCTION $V_{0}(q)$
Additive number theory; partitions
BERNARD L. S. LIN
Journal: Bulletin of the Australian Mathematical Society / Volume 102 / Issue 3 / December 2020
Recently, Brietzke, Silva and Sellers ['Congruences related to an eighth order mock theta function of Gordon and McIntosh', J. Math. Anal. Appl. 479 (2019), 62–89] studied the number $v_{0}(n)$ of overpartitions of $n$ into odd parts without gaps between the nonoverlined parts, whose generating function is related to the mock theta function $V_{0}(q)$ of order 8. In this paper we first present a short proof of the 3-dissection for the generating function of $v_{0}(2n)$. Then we establish three congruences for $v_{0}(n)$ along certain progressions which are subsequences of the integers $4n+3$.

Alterations of the fatty acid composition and lipid metabolome of breast muscle in chickens exposed to dietary mixed edible oils
X. Y. Cui, Z. Y. Gou, K. F. M. Abouelezz, L. Li, X. J. Lin, Q. L. Fan, Y. B. Wang, Z. G. Cheng, F. Y. Ding, S. Q. Jiang
Journal: animal / Volume 14 / Issue 6 / June 2020
The fatty acid composition of chicken meat is largely influenced by dietary lipids, which are often used as supplements to increase dietary caloric density. The key metabolites and pathways influenced by dietary oils remain poorly known in chickens. The objective of this study was to explore the underlying metabolic mechanisms by which diets supplemented with mixed oils or a single oil with distinct fatty acid compositions influence the fatty acid profile in breast muscle of Qingyuan chickens. Birds were fed a corn–soybean meal diet supplemented with either soybean oil (control, CON) or equal amounts of mixed edible oils (MEO; soybean oil : lard : fish oil : coconut oil = 1 : 1 : 0.5 : 0.5) from 1 to 120 days of age. Growth performance and the fatty acid composition of muscle lipids were analysed. LC-MS was applied to investigate the effects of the CON v. MEO diets on lipid-related metabolites in the muscle of chickens at day 120. Compared with the CON diet, chickens fed the MEO diet had a lower feed conversion ratio (P < 0.05); higher proportions of lauric acid (C12:0), myristic acid (C14:0), palmitoleic acid (C16:1n-7), oleic acid (C18:1n-9), EPA (C20:5n-3) and DHA (C22:6n-3); and a lower linoleic acid (C18:2n-6) content in breast muscle (P < 0.05). Muscle metabolome profiling showed that the most differentially abundant metabolites were phospholipids, including phosphatidylcholines (PC) and phosphatidylethanolamines (PE), which were enriched in glycerophospholipid metabolism (P < 0.05). These key differentially abundant metabolites – PC (14:0/20:4), PC (18:1/14:1), PC (18:0/14:1), PC (18:0/18:4), PC (20:0/18:4), PE (22:0/P-16:0), PE (24:0/20:5), PE (22:2/P-18:1), PE (24:0/18:4) – were closely associated with the contents of C12:0, C14:0, DHA and C18:2n-6 in muscle lipids (P < 0.05).
The content of the glutathione metabolite was higher with the MEO than with the CON diet (P < 0.05). Based on these results, it can be concluded that the diet supplemented with MEO reduced the feed conversion ratio, enriched the content of n-3 fatty acids and modified the related metabolites (including PC, PE and glutathione) in breast muscle of chickens.

INFINITE FAMILIES OF CONGRUENCES FOR OVERPARTITIONS WITH RESTRICTED ODD DIFFERENCES
BERNARD L. S. LIN, JIAN LIU, ANDREW Y. Z. WANG, JIEJUAN XIAO
Journal: Bulletin of the Australian Mathematical Society / Volume 102 / Issue 1 / August 2020
Let $\overline{t}(n)$ be the number of overpartitions in which (i) the difference between successive parts may be odd only if the larger part is overlined and (ii) if the smallest part is odd then it is overlined. Ramanujan-type congruences for $\overline{t}(n)$ modulo small powers of $2$ and $3$ have been established. We present two infinite families of congruences modulo $5$ and $27$ for $\overline{t}(n)$, the first of which generalises a recent result of Chern and Hao ['Congruences for two restricted overpartitions', Proc. Math. Sci. 129 (2019), Article 31].
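Conditions (i) and (ii) are concrete enough to check by brute force for small n. The sketch below (Python) enumerates overpartitions under a direct reading of that definition, with the convention, assumed here, that overlining attaches to a part value (equivalently, to its last occurrence). It is a naive cross-check, not the generating-function machinery of the paper, and the convention chosen may differ from the authors'.

```python
from itertools import combinations

def partitions(n, max_part=None):
    """Yield partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def t_bar(n):
    """Count overpartitions of n satisfying conditions (i) and (ii) above."""
    count = 0
    for p in partitions(n):
        for r in range(len(set(p)) + 1):
            for overlined in combinations(sorted(set(p)), r):
                o = set(overlined)
                # (ii) an odd smallest part must be overlined
                if p[-1] % 2 == 1 and p[-1] not in o:
                    continue
                # (i) an odd difference requires the larger part to be overlined
                if any((a - b) % 2 == 1 and a not in o
                       for a, b in zip(p, p[1:])):
                    continue
                count += 1
    return count

# e.g. t_bar(1) == 1: the single admissible overpartition of 1 is the overlined 1
print([t_bar(n) for n in range(1, 8)])
```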
For mathematical questions about Octave; questions purely about the language, syntax, or runtime errors would likely be better received on Stack Overflow. Octave is a high-level interpreted language for numerical computations. Use either the (octave) tag or the (matlab) tag, unless your question involves both packages.

Converting a system from continuous time to discrete time with restricted time?
How do I complete the steps of finding the Jordan form of this $5\times 5$ matrix (with Octave)?
Solve 2 equations in 2 unknowns in Octave?
How to make a "sigma" summation of a function over the variable i in GNU Octave?
Why does the multiplication in a division algebra depend on every component?
Octave tf2ss: no way to build a system with multiple outputs?
Conversion from state space back to transfer function in Octave.
How do I plot this graph in Octave?
The decision boundary found by your classifier?
Histogram: What is wrong with this code?
Octave - Why $0.6-0.2-0.2-0.2 \neq 0$, but $0.4-0.2-0.2 = 0$?
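The last question is a classic binary floating-point artifact rather than an Octave bug: 0.6 and 0.2 are not exactly representable as IEEE 754 doubles, so rounding error can survive one chain of subtractions and happen to cancel in another. The demonstration below uses Python, but Octave's doubles behave identically.

```python
# Neither 0.6 nor 0.2 is exactly representable as a binary double, so the
# rounding errors in the two subtraction chains need not cancel the same way.
print(0.6 - 0.2 - 0.2 - 0.2)   # -5.551115123125783e-17, not 0.0
print(0.4 - 0.2 - 0.2)         # 0.0 (here the errors happen to cancel)

# The stored values differ slightly from the decimals they display as:
from decimal import Decimal
print(Decimal(0.6))   # 0.59999999999999997779553950749686919152736663818359375
print(Decimal(0.2))   # 0.200000000000000011102230246251565404236316680908203125

# The practical fix is to compare against a tolerance instead of exact zero:
print(abs(0.6 - 0.2 - 0.2 - 0.2) < 1e-12)   # True
```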
Improving the differential diagnosis between myelodysplastic syndromes and reactive peripheral cytopenias by multiparametric flow cytometry: the role of B-cell precursors
Suiellen C Reis-Alves, Fabiola Traina, Konradin Metze & Irene Lorand-Metze
Diagnostic Pathology, volume 10, Article number: 44 (2015)

Immunophenotyping is a valuable ancillary technique for the differential diagnosis between myelodysplastic syndromes (MDS) with low bone marrow (BM) blast counts and a normal karyotype, and reactive peripheral (PB) cytopenias. Our aim was to search for the most important variables for this purpose. We also analysed the age variation of BM B-cell precursors (BCP) and its differences in reactive and clonal cytopenias. Immunophenotypic analyses were performed in the BM of 54 patients with MDS (76% with BM blasts <5%) and 35 cases of reactive cytopenias. Healthy allogeneic BM transplantation donors (n = 41) were used as controls. We used a four-color panel of antibodies analysing 9 granulocytic, 8 monocytic and 6 CD34+ cell features. An asynchronous shift to the left in maturing granulocytes and an increase in CD16+ monocytes were also found in reactive PB cytopenias, but the most important aberrancies in MDS were seen in myeloid CD34+ cells. A decrease in BCP, which is a hallmark of MDS, could also be found in reactive cytopenias, especially in patients >55 years. The percentage of BM BCP could be calculated by the formula: % BCP = (−7.97 × log age) + (4.24 × log % CD34+ cells) − (0.22 × number of alterations in CD34+ cells) + 0.577; corrected R² = 0.467. Analysis of myelomonocytic precursors and CD34+ cells was satisfactory for the differential diagnosis between reactive PB cytopenias and MDS. The most specific alterations were found in CD34+ cells. Comparison of the values obtained with those of normal age-matched controls is recommended.

Virtual slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1975931809154663

In recent years, numerous studies have confirmed the utility of multiparametric flow cytometry (FCM) in the diagnosis of myelodysplastic syndromes, especially in cases with a normal karyotype, and in its differential diagnosis with peripheral cytopenias of non-clonal origin [1-10]. FCM of BM hemopoietic precursors has focused mainly on myelomonocytic precursors and CD34+ progenitors. There is no single specific abnormality, but the presence of three or more aberrancies may strongly support the diagnosis of MDS [1,2]. Several kinds of phenotypic abnormalities have been described in MDS, such as a low SSC in granulocytic precursors, loss of antigen expression, asynchronous maturation or maturation block, aberrant cross-lineage co-expressions, and quantitative and qualitative abnormalities of CD34+ cells, along with the decrease of precursor B cells (BCP) [9,11-21]. Many phenotypic abnormalities found in CD34+ cells have been associated with disease progression and are able to predict a shorter survival of the patients [5,8,10,15,17,19,20,22-31]. According to the European LeukemiaNet Working Group (ELN) standardization [3,6,31], BM immunophenotyping in MDS should at least focus on the maturation of myelo-monocytic precursors as well as the enumeration of hemopoietic progenitors and BCP. So, a minimal panel should be designed to detect all these abnormalities [3,5,8,22,31]. Furthermore, comparison with the normal pattern of antigen expression of each lineage and maturation step should be performed.
Besides, several scores based on phenotypic findings have been described to support the differential diagnosis between MDS and reactive PB cytopenias, but there is no general consensus indicating the best one for application in daily routine [5,8,10,15,17,23,25-31]. In our previous studies [10,19,23], we analyzed the utility of a four-color panel that was able to detect several phenotypic abnormalities in the myelomonocytic series and CD34+ cells. We also found that maturation abnormalities of myelomonocytic precursors are similarly present in all WHO types of MDS, while those detected in CD34+ cells are the most important to predict a shorter survival of the patients [19,20]. Recently, comparing the prognostic value of IPSS, IPSS-R and WPSS with that obtained by flow cytometry, we found that CD34+/CD13+ cells and the total number of phenotypic alterations found in the myelomonocytic series and CD34+ cells were independent prognostic factors additional to the clinical scores [23]. Here, our aim was to examine which abnormalities detected by our panel were more important for the differential diagnosis between reactive PB cytopenias and cases of MDS with a normal karyotype. As the number of BM B-cell precursors is age-dependent, we also examined the impact of this variation on the utility of this feature in the differential diagnosis.

Patients and samples
Since December 2009, immunophenotyping has been included in the diagnostic work-up of peripheral cytopenias in our Institution, together with PB counts, BM cytology and karyotyping. The WHO criteria were used for the diagnosis of MDS, and deficiency anemias, viral infections, auto-immune diseases and renal or hepatic insufficiency were excluded [1,2]. During the period of the study (December 2009 – February 2013), we could confirm the diagnosis of MDS in 56 cases, while in 35 cases the final diagnosis of reactive cytopenias was made (Table 1). Twenty-five patients were excluded because of lack of complete clinical data or uncertain diagnosis.
Table 1: Clinical and hematological features of MDS patients
The classification of the MDS cases was made according to the WHO criteria, and the risk category according to IPSS, IPSS-R [32] and WPSS (using hemoglobin values instead of "transfusion dependency") [33] was assessed. Cytogenetic analysis of BM was performed after 24 hours of culture according to standard methods. In each case, at least 20 mitoses were analyzed and the karyotypes were reported according to the International System for Human Cytogenetic Nomenclature [34]. Normal BM samples were obtained from 41 healthy donors for allogeneic bone marrow transplantation (age: 15–69 years) in order to standardize a normal immunophenotypic profile for our laboratory. All samples were collected between July 2009 and January 2013. All BM samples were obtained after informed consent was given by each person, according to the recommendations of the local Ethics Committee (proc. Nr. 0652.0.146.000-08).

Flow cytometry analyses
Immunophenotyping was performed as previously described [10,19]. Briefly, the EDTA-anticoagulated BM sample (5–7 × 10⁶ cells in 100 μl per test) was processed using a standardized direct lyse-and-wash technique within 24 hours after bone marrow aspiration [3]. Quality control, calibration and compensation with FACSComp were performed daily on our equipment.
Antigenic expression was studied using four-color combinations of monoclonal antibodies (MoAbs) conjugated with fluorescein isothiocyanate (FITC), phycoerythrin (PE), peridinin chlorophyll protein (PerCP) and allophycocyanin (APC) fluorochromes. The following combinations were used to study the myelomonocytic maturation and progenitor cell populations: HLA-DR/CD14/CD45/CD33; CD16/CD11b/CD45/CD13; CD13/CD34/CD45/CD117; CD10/CD19/CD45/CD34; CD7/CD56/CD45/CD34. The specificity and source of each reagent have already been described in detail [23]. Immediately after staining, samples were acquired in a FACSCalibur flow cytometer (Becton Dickinson – BD Biosciences) using the CellQuest software (BD Biosciences). Instrument quality control, calibration using the FACSComp™ (BD) software and spectral compensation were performed daily. Information about at least 100,000 nucleated BM cells was acquired for each sample. Data analysis was made using the Infinicyt software (Cytognos). The strategies of analysis were standardized as previously described [10,19,23]. These variables were also assessed in normal BM.
The analysis of the myelomonocytic series was made as previously described [23], according to the ELN standardization [3,27,31]. Briefly, the maturation pattern of neutrophils was analyzed by their expression of CD13, CD16, CD11b, CD33 and HLA-DR. The side-scatter (SSC) of the granulocytic population and the antigen expressions were considered abnormal if the values of the mean fluorescence intensity (MFI) were above or below the benchmark values obtained for the normal cases (Table 2). Monocytes were analyzed by their expression of HLA-DR, CD64 and CD14. The combination CD16/CD11b/CD45/CD13 was used to quantify the CD16+ monocytes. The aberrant expression of CD34, CD7 and CD19 was investigated in each population, being considered abnormal when at least 10% of the cells expressed these antigens. For expression of CD56 in the myelomonocytic cell line, only values of >20% for granulocytes and >50% for monocytes were considered abnormal.
Table 2: Comparison of flow cytometric features among normal, non-clonal cytopenias and MDS
CD34+ cells were separated in the SSC/CD34 dot plot [10,19] (Figure 1) and their co-expression of CD19, CD10, CD13, CD117, CD7 and CD56 was examined. The B-cell precursors (CD34+/CD19+/CD10+) were measured as a percentage of the total nucleated cells.
Figure 1: CD34+ subsets in normal, reactive cases (idiopathic thrombocytopenia) and MDS (RCMD) analyzed in the CD13/CD34/CD45/CD117 combination. Red: CD34+/CD117−/CD13− cells, representing the immature precursors and B-cell precursors. Cyan: CD34+/CD117−/CD13+ cells, characterizing the immature myeloid precursors. Green: CD34+/CD117+/CD13− cells, representing early myeloblasts and early proerythroblasts. Yellow: CD34+/CD117+/CD13+ cells (myeloblasts). Purple: CD34−/CD117+/CD13+ cells (promyelocytes). Blue: CD34−/CD117+/CD13− cells (proerythroblasts). The maturation patterns can be analyzed in the CD34/CD117 combination (A), the CD13/CD117 combination (B) and the CD13/CD34 combination (C). The myeloblasts (yellow) are increased in MDS.
All immunophenotypic features were compared with the values found in normal BM. We computed the total number of granulocytic and monocytic abnormalities, as well as those of CD34+ cells. We also computed the sum of all abnormalities found. The mean values and standard deviation were obtained for all variables analyzed. The difference between groups was assessed by analysis of variance.
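Operationally, the abnormality totals used throughout the paper reduce to flagging each measured feature against a benchmark range derived from the normal donors and summing the flags per patient (the benchmarks themselves are defined in the next section: mean ± 2 SD, or the 5th–95th percentiles for the two non-normally distributed features). A minimal sketch of that logic follows (Python); the feature names and all numeric values are illustrative stand-ins, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normal-donor measurements (illustrative values, not study data).
normal_donors = {
    "MFI_CD13_granulocytes": rng.normal(100, 10, 41),
    "pct_CD16pos_monocytes": rng.normal(8, 2, 41),
    "MFI_SSC_granulocytes":  rng.normal(450, 60, 41),
}
# Features assessed by 5th-95th percentiles instead of mean +/- 2 SD:
percentile_features = {"MFI_SSC_granulocytes"}

def benchmark(values, use_percentiles):
    """Normal range: mean +/- 2 SD, or the 5th-95th percentiles."""
    v = np.asarray(values, float)
    if use_percentiles:
        return np.percentile(v, 5), np.percentile(v, 95)
    return v.mean() - 2 * v.std(ddof=1), v.mean() + 2 * v.std(ddof=1)

ranges = {name: benchmark(v, name in percentile_features)
          for name, v in normal_donors.items()}

def count_abnormalities(patient):
    """Flag each feature falling outside its normal range; sum the flags."""
    return sum(1 for name, x in patient.items()
               if not (ranges[name][0] <= x <= ranges[name][1]))

# A hypothetical patient with a low SSC and an excess of CD16+ monocytes:
print(count_abnormalities({"MFI_CD13_granulocytes": 98,
                           "pct_CD16pos_monocytes": 15,
                           "MFI_SSC_granulocytes": 280}))   # -> 2
```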
Statistical analysis: The normal distribution of the features of normal hemopoiesis was examined by the Kolmogorov-Smirnov test, and those with a non-normal distribution were submitted to a log transformation. The relation of B-cell precursors with age and among the groups studied was examined using the Spearman rank order correlation and multiple regressions. Values were considered significant when p < 0.05. The WinStat and SPSS 15 software packages were used for the calculations.

Patients' characteristics
Concerning patients with MDS, the majority were RCMD (Table 1). In 5 cases no mitoses were available for karyotyping. As some of the patient groups in the IPSS-R and WPSS classifications were rather small, we grouped the cases with <5% BM blasts (low risk) and MDS with >5% BM blasts (high risk) for analysis. A major part of the patients were low and intermediate risk in all clinical scores analyzed and had BM blasts <5%. The cases with non-clonal cytopenias included deficiency anemias (n = 11), drug-induced cytopenias (n = 6), aplastic anemia (n = 3), idiopathic thrombocytopenic purpura (n = 4), auto-immune diseases (n = 4), thyroid dysfunction (n = 3) and infection-associated leucopenia (n = 4). There were 15 men and 20 women, with a median age of 60 years (14–86). The normal values were established by the analysis of 41 normal donors for allogeneic bone marrow transplantation with a median age of 32 years (15–69); 25 males and 16 females. All features examined, except for the MFI of SSC in the granulocytic precursors and total B-cell precursors, presented a normal distribution. So we used the mean values obtained ± 2 standard deviations, except for the MFI of SSC of the myelomonocytic cell lines and the percentage of total B-cell precursors. For these two features we used the 5th and 95th percentiles, due to the large variation observed in the normal controls.

Immunophenotypic analysis
Reactive cytopenias: Flow cytometric data of non-clonal cytopenias are shown in Table 2. There was no abnormality concerning the SSC of granulocytes and monocytes. A shift to the left, with asynchrony of antigen expression, was found in 7 cases (Table 3). In two cases, an abnormal maturation pattern of CD13 or CD16 was observed in maturing granulocytes. There was one case with an autoimmune disorder presenting expression of CD34 in maturing granulocytes without any other abnormality, and one with hepatitis C at diagnosis presenting expression of CD7 in maturing granulocytes. In 9 cases (26%), increased percentages of monocytes were found.
Table 3: Frequencies of several abnormalities detected in non-clonal cytopenias and MDS
There was an increase of CD16+ monocytes in 8 cases of reactive cytopenias. A statistically significant difference was found among normal subjects, reactive cytopenias and MDS. CD56 was expressed in monocytes in 2 cases; CD7 and CD19 were expressed in one case each (cases with deficiency anemia). The percentage of total CD34+ cells and of CD34+/CD117+/CD13+ cells (myeloid blasts) was increased in only one case of deficiency anemia, associated with a shift to the left in the granulopoiesis. CD56 was expressed in 2 cases (0.06% and 0.09% of the cells). B-cell precursors were below the normal value in 14 cases (41%). All these alterations resulted in an increased number of total abnormalities.
Abnormalities observed in MDS: The abnormalities found in MDS are also shown in Tables 2 and 3. Most of the aberrancies studied were more common in MDS than in reactive cases.
Concerning CD34+ cells, they were increased in 26% of the cases of MDS with a BM blast count <5% in cytology and in 83% of the cases with RAEB. The same was true for the cells with the phenotype CD34+/CD13+/CD117+ (Table 3). Abnormal co-expressions were rare in reactive cytopenias and common in MDS (Table 3).

Relation between B-cell precursors and age
Considering all the subjects studied, the median age was 55 years. Within the groups, normal subjects had a median age of 32 years (range: 15–69); reactive cytopenias: 60 years (range: 14–86); and MDS: 69 years (range: 15–84). In the Spearman test, there was a negative correlation between age and BCP in all three groups: r = −0.328, p = 0.023 for healthy donors; r = −0.321, p = 0.032 for reactive cytopenias (Figure 2A); and r = −0.412, p = 0.002 for MDS cases (Figure 2B). These cells were absent in none of the normal donors, in 8/35 cases of reactive cytopenias and in 33/54 cases of MDS.
Figure 2: Distribution of bone marrow B-cell precursors according to the age of the patients. A: Variation observed in normal controls and reactive peripheral cytopenias. The decrease is more pronounced in subjects >55 years old. B: B-cell precursors in MDS. The number of cells is very low, and they are frequently absent, in patients >55 years old, but they can be present in younger persons.
Among the subjects with age <55 years, mean B-cell precursors were 0.18%, 0.14% and 0.17% in normal controls, reactive cytopenias and MDS cases respectively, which was not significantly different. Among subjects >55 years old, mean values were 0.28%, 0.075% and 0.015% respectively. These values were significantly different (p < 0.005). In the multiple regressions considering age, total % CD34+ cells, BCP and the number of alterations in CD34+ cells, the relation could be described in non-clonal subjects (normals and reactive cytopenias) by the equation:

$$\% \mathrm{BCP} = (-7.3 \times \log \mathrm{age}) + (8.2 \times \log \% \mathrm{CD34^{+}\ cells}) + 0.46, \qquad \text{corrected } R^2 = 0.265$$

For the MDS cases the equation was:

$$\% \mathrm{BCP} = (-7.97 \times \log \mathrm{age}) + (4.24 \times \log \% \mathrm{CD34^{+}\ cells}) - (0.22 \times \text{no. of alterations in } \mathrm{CD34^{+}\ cells}) + 0.577, \qquad \text{corrected } R^2 = 0.467$$
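Both the age trend and the fitted equations lend themselves to a compact numerical illustration. The sketch below (Python, scipy) first runs a Spearman rank correlation on synthetic age/%BCP data constructed to decline with age (simulated, not the study's measurements), then transcribes the MDS regression literally. The paper does not state the logarithm base, nor whether %BCP entered the model log-transformed (it was one of the non-normally distributed features), so the function shows the model's form rather than serving as a validated calculator.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

# Synthetic cohort: %BCP declining with age plus noise (illustrative only,
# not the study's measurements).
age = rng.uniform(15, 85, 54)
bcp = np.clip(0.3 - 0.003 * age + rng.normal(0, 0.05, 54), 0.0, None)

rho, p = spearmanr(age, bcp)
print(f"Spearman r = {rho:.3f}, p = {p:.4g}")   # negative, as in all three groups

# Literal transcription of the fitted MDS equation quoted above. The log base
# and any transformation of the response are not specified in the text, so
# treat this as a sketch of the model's form, not a validated calculator.
def bcp_mds(age, pct_cd34, n_cd34_alterations, log=np.log10):
    return (-7.97 * log(age) + 4.24 * log(pct_cd34)
            - 0.22 * n_cd34_alterations + 0.577)
```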
Discussion
Several immunophenotypic abnormalities have been described in MDS. According to the WHO 2008 recommendations [1], the finding of three or more abnormalities is considered highly suggestive of MDS. Thus, immunophenotyping has been considered a useful adjuvant method for the diagnosis of MDS in cases with low BM blast counts and a normal karyotype [2]. This technique is also widely used for the diagnosis and assessment of minimal residual disease in several other hematological neoplasms [35,36]. Knowledge of the normal antigen expression of the several hemopoietic cell types and its changes during normal cell maturation is essential to assess abnormalities and to define leukemia-associated immunophenotypes (LAIPs) [3,8]. It is also advisable for each laboratory to establish its own reference normal values for each panel of monoclonal antibodies used. In healthy individuals, normal maturation of BM precursors is genetically tightly controlled, leading to predictable patterns of antigen expression at different stages of cell maturation. Neoplastic cells are characterized by a deviation from this pattern, as well as by the presence of aberrant cross-lineage antigen expressions [4,5,8,12,37].
In previous studies [10,12,19,23], we have shown that a rather small four-color panel of monoclonal antibodies analyzing the myelomonocytic lineage and the subsets of CD34+ cells was suitable to confirm the diagnosis of MDS and allowed us to detect independent prognostic features [23]. Among the phenotypic abnormalities found, those concerning CD34+ cells were the most important to predict a shorter survival of the patients. Finally, we substituted the combination used for the analysis of dendritic cells and basophils with a combination (CD7/CD56/CD45/CD34) assessing the most frequent leukemia-associated phenotypes. Both minor populations are altered in MDS, but are less important for the differential diagnosis of clonal and non-clonal cytopenias. In the present work, our aim was to examine which variables detected by our panel were more important to discriminate between reactive PB cytopenias and MDS, especially in those cases with few BM blasts and a normal karyotype. Concerning the variables analyzed, our reactive cases never presented a decreased SSC or a maturation block in the myeloid series. A shift to the left was seen in 7 cases and abnormal expression of CD13 was seen in 2 cases. This is in keeping with the fact that, although variation in antigen expression may occur in reactive cytopenias, the rupture of the normal pattern of maturation should never be seen, as this is indicative of a clonal disorder. Concerning monocytes, an increase in number and in the expression of CD16 and CD56, which are indicative of cell activation, was observed, although this was more frequent in MDS, as has also been observed by others [31]. The main immunophenotypic features distinguishing low-risk MDS from reactive cytopenias were the increase of CD34+ cells, especially in the presence of normal blast counts in cytology, the increase in CD34+/CD117+/CD13+ cells, the decrease in B-cell precursors, and aberrant co-expressions in CD34+ cells (CD7 or CD56, or a decrease in CD13). The presence of anomalous expression of CD7 in CD34+ cells was more frequent in high-risk MDS and might reflect progression to leukemia. The total number of phenotypic abnormalities was also significantly higher in clonal disorders, confirming previous results of our group and others [12,19,24-28]. Overall, in patients with reactive cytopenias, if phenotypic abnormalities are found, a close follow-up of the patient should be made in order to detect a possible evolution to an overt MDS. The decrease of B-cell precursors has been considered a hallmark of MDS [6,9,13,21,31], even in children. B-cell precursors may be assessed as CD34+ cells with a low SSC or by their phenotype, which is the way to obtain more reliable results. We also assessed these cells as their percentage among all cells examined, and not as a percentage of all CD34+ cells, as has been recommended by several authors [11,24,30,37]. In MDS it is expected that myeloid progenitors may be increased, provoking a false relative decrease of B-lymphoid precursors. So, the best way to evaluate their number would be to use their percentage among all cells. On the other hand, it is well known that the number of BM B-cell precursors has a strong variation with age [38]. This was also the case in the present study. We could show that, in a multiple regression, their number was dependent on age and the total number of CD34+ cells.
In MDS, the total alterations observed in CD34+ progenitors also entered the equation. In subjects with age below 55 years the difference in the number of BCP was not so pronounced, but in older patients their number was below normal in reactive cytopenias, and this was more pronounced in MDS. This is in keeping with an ageing process of the immune system, which is highly variable with age and with the amount of exposure to antigen stimulation. The pathophysiology of the decrease of B-cell precursors observed in MDS is not well understood. However, it has been described that these cells may also present abnormalities in antigen expression in MDS [21] that are more pronounced in cases with a higher number of BM blasts. This could be due to a more pronounced dysfunction of hemopoietic progenitors that lose their capacity to produce the B-cell line. In conclusion, an antibody panel focused on the analysis of the myelomonocytic cell line and CD34+ cells was satisfactory for the differential diagnosis between reactive PB cytopenias and MDS with low BM blast counts and a normal karyotype. The most specific alterations were found in CD34+ cells. The number of BCP was more discriminative in older patients. For young patients it is necessary to compare their number with that of normal age-matched subjects.

Abbreviations
MDS: Myelodysplastic syndromes; BCP: B-cell precursors; BM: Bone marrow; PB: Peripheral blood; ELN: European LeukemiaNet Working Group; IPSS: International Prognostic Scoring System; IPSS-R: Revised International Prognostic Scoring System; WPSS: WHO classification-based Prognostic Scoring System; MoAbs: Monoclonal antibodies; FITC: Fluorescein isothiocyanate; PE: Phycoerythrin; PerCP: Peridinin chlorophyll protein; APC: Allophycocyanin; BD: Becton Dickinson; SSC: Side-scatter; MFI: Mean fluorescence intensity; RA: Refractory anemia; RCMD: Refractory cytopenia with multilineage dysplasia; RAEB: Refractory anemia with excess blasts; ANC: Absolute neutrophil count; LAIPs: Leukemia-associated immunophenotypes

References
Swerdlow S, Campo E, Harris NL, Jaffe ES, Pileri SA, Stein H, et al. WHO classification of tumours of haematopoietic and lymphoid tissues. Lyon: IARC; 2008.
Valent P, Horny HP, Bennett JM, Fonatsch C, Germing U, Greenberg P, et al. Definitions and standards in the diagnosis and treatment of the myelodysplastic syndromes: consensus statements and report from a working conference. Leuk Res. 2007;31:727–39.
van de Loosdrecht AA, Alhan C, Béné MC, Della Porta MG, Dräger AM, Feuillard J, et al. Standardization of flow cytometry in myelodysplastic syndromes: report from the first European LeukemiaNet working conference on flow cytometry in myelodysplastic syndromes. Haematologica. 2009;94:1124–34.
Stetler-Stevenson M, Yuan CM. Myelodysplastic syndromes: the role of flow cytometry in diagnosis and prognosis. Int J Lab Hematol. 2009;31:479–83.
Ossenkoppele GJ, van de Loosdrecht AA, Schuurhuis GJ. Review of the relevance of aberrant antigen expression by flow cytometry in myeloid neoplasms. Br J Haematol. 2011;153:421–36.
Della Porta MG, Picone C, Pascutto C, Malcovati L, Tamura H, Handa H, et al. Multicenter validation of a reproducible flow cytometric score for the diagnosis of low-grade myelodysplastic syndromes: results of a European LeukemiaNET study. Haematologica. 2012;97:1209–17.
Loken MR, van de Loosdrecht A, Ogata K, Orfao A, Wells DA. Flow cytometry in myelodysplastic syndromes: report from a consensus working conference. Leuk Res. 2008;32:5–17.
Identification of distinct prognostic subgroups in low- and intermediate-1-risk myelodysplastic syndromes by flow cytometry. Blood. 2008;111:1067–77.
9. Sandes AF, Kerbauy DM, Matarraz S, Chauffaille ML, Lopez A, Orfao A, et al. Combined flow cytometric assessment of CD45, HLA-DR, CD34, and CD117 expression is a useful approach for reliable quantification of blast cells in myelodysplastic syndromes. Cytometry B Clin Cytom. 2013;84:157–66.
10. Reis SC, Traina F, Saad STO, Lorand-Metze I. Variation of bone marrow CD34+ cell subsets in myelodysplastic syndromes according to WHO types. Neoplasma. 2009;56:435–40.
11. Satoh C, Dan K, Yamashita T, Jo R, Tamura H, Ogata K. Flow cytometric parameters with little interexaminer variability for diagnosing low-grade myelodysplastic syndromes. Leuk Res. 2008;32:699–707.
12. Lorand-Metze I, Ribeiro E, Lima CSP, Batista LS, Metze K. Detection of hematopoietic maturation abnormalities by flow cytometry in myelodysplastic syndromes and its utility for the differential diagnosis with non-clonal disorders. Leuk Res. 2007;31:147–55.
13. Aalbers AM, van den Heuvel-Eibrink MM, Baumann I, Dworzak M, Hasle H, Locatelli F, et al. Bone marrow immunophenotyping by flow cytometry in refractory cytopenia of childhood. Haematologica. 2015;100:315–23.
14. van Lochem EG, Velden VHJ, Wind JG, Marvelde JG, Westerdaal NAC, Dongen JJM. Immunophenotypic differentiation patterns of normal hematopoiesis in human bone marrow: reference patterns for age-related changes and disease-induced shifts. Cytometry B Clin Cytom. 2004;60B:1–13.
15. Wells DA, Benesch M, Loken MR, Vallejo C, Myerson D, Leisenring WM, et al. Myeloid and monocytic dyspoiesis as determined by flow cytometric scoring in myelodysplastic syndromes correlates with the IPSS and with outcome after hemopoietic stem cell transplantation. Blood. 2003;102:394–405.
16. Stachurski D, Smith BR, Pozdnyakova O, Andersen M, Xiao Z, Raza A, et al. Flow cytometric analysis of myelomonocytic cells by a pattern recognition approach is sensitive and specific in diagnosing myelodysplastic syndrome and related marrow diseases: emphasis on a global evaluation and recognition of diagnostic pitfalls. Leuk Res. 2008;32:215–24.
17. Chu SC, Wang TF, Li CC, Kao RH, Li DK, Su YC, et al. Flow cytometric scoring system as a diagnostic and prognostic tool in myelodysplastic syndromes. Leuk Res. 2011;35:868–73.
18. Tang G, Jorgensen JL, Zhou Y, Hu Y, Kersh M, Garcia-Manero G, et al. Multi-color CD34(+) progenitor-focused flow cytometric assay in evaluation of myelodysplastic syndromes in patients with post cancer therapy cytopenia. Leuk Res. 2012;36:974–81.
19. Reis-Alves SC, Traina F, Saad ST, Metze K, Lorand-Metze I. The impact of several phenotypic features at diagnosis on survival of patients with myelodysplastic syndromes. Neoplasma. 2010;57:530–6.
20. Lorand-Metze I, Califani SM, Ribeiro E, Lima CS, Metze K. The prognostic value of maturation-associated phenotypic abnormalities in myelodysplastic syndromes. Leuk Res. 2008;32:211–3.
21. Ribeiro E, Matarraz Sudón S, Santiago M, Lima CSP, Metze K, Giralt M, et al. Maturation-associated immunophenotypic abnormalities in bone marrow B-lymphocytes in myelodysplastic syndromes. Leuk Res. 2006;30:9–16.
22. Westers TM, Ireland R, Kern W, Alhan C, Balleisen JS, Bettelheim P, et al. Standardization of flow cytometry in myelodysplastic syndromes: a report from an international consortium and the European LeukemiaNet Working Group. Leukemia. 2012;26:1730–41.
23. Reis-Alves SC, Traina F, Harada G, Campos PM, Saad ST, Metze K, et al.
Immunophenotyping in myelodysplastic syndromes can add prognostic information to well-established and new clinical scores. PLoS One. 2013;8:e81048.
24. Kern W, Haferlach C, Schnittger S, Haferlach T. Clinical utility of multiparameter flow cytometry in the diagnosis of 1013 patients with suspected myelodysplastic syndrome: correlation to cytomorphology, cytogenetics, and clinical data. Cancer. 2010;116:4549–63.
25. Matarraz S, Lopez A, Barrena S, Fernandez C, Jensen E, Flores J, et al. The immunophenotype of different immature, myeloid and B-cell lineage-committed CD34+ hematopoietic cells allows discrimination between normal/reactive and myelodysplastic syndrome precursors. Leukemia. 2008;22:1175–83.
26. Xu F, Guo J, Wu LY, He Q, Zhang Z, Chang CK, et al. Diagnostic application and clinical significance of FCM progress scoring system based on immunophenotyping in CD34+ blasts in myelodysplastic syndromes. Cytometry B Clin Cytom. 2013;84:267–78.
27. Westers TM, van der Velden VH, Alhan C, Bekkema R, Bijkerk A, Brooimans RA, et al. Implementation of flow cytometry in the diagnostic work-up of myelodysplastic syndromes in a multicenter approach: report from the Dutch Working Party on Flow Cytometry in MDS. Leuk Res. 2012;36:422–30.
28. Lorand-Metze I, Pinheiro MP, Ribeiro E, de Paula EV, Metze K. Factors influencing survival in myelodysplastic syndromes in a Brazilian population: comparison of FAB and WHO classifications. Leuk Res. 2004;28:587–94.
29. Ribeiro E, Lima CSP, Metze K, Lorand-Metze I. Flow cytometric analysis of the expression of Fas/FasL in bone marrow CD34+ cells in myelodysplastic syndromes: relation to disease progression. Leuk Lymphoma. 2004;45:309–13.
30. Falco P, Levis A, Stacchini A, Ciriello MM, Geuna M, Notari P, et al. Prognostic relevance of cytometric quantitative assessment in patients with myelodysplastic syndromes. Eur J Haematol. 2011;87:409–18.
31. Porwit A, van de Loosdrecht AA, Bettelheim P, Brodersen LE, Burbury K, Cremers E, et al. Revisiting guidelines for integration of flow cytometry results in the WHO classification of myelodysplastic syndromes—proposal from the International/European LeukemiaNet Working Group for Flow Cytometry in MDS. Leukemia. 2014;28:1793–8.
32. Greenberg PL, Tuechler H, Schanz J, Sanz G, Garcia-Manero G, Solé F, et al. Revised international prognostic scoring system for myelodysplastic syndromes. Blood. 2012;120:2454–65.
33. Malcovati L, Della Porta MG, Strupp C, Ambaglio I, Kuendgen A, Nachtkamp K, et al. Impact of the degree of anemia on the outcome of patients with myelodysplastic syndrome and its integration into the WHO classification-based Prognostic Scoring System (WPSS). Haematologica. 2011;96:1433–40.
34. Schanz J, Tuchler H, Sole F, Mallo M, Luno E, Cervera J, et al. New comprehensive cytogenetic scoring system for primary myelodysplastic syndromes (MDS) and oligoblastic acute myeloid leukemia after MDS derived from an international database merge. J Clin Oncol. 2012;30:820–9.
35. Nakayama S, Yokote T, Hirata Y, Iwaki K, Akioka T, Miyoshi T, et al. An approach for diagnosing plasma cell myeloma by three-color flow cytometry based on kappa/lambda ratios of CD38-gated CD138+ cells. Diagn Pathol. 2012;7:31.
36. Rimsza LM, Day WA, McGinn S, Pedata A, Natkunam Y, Warnke R, et al. Kappa and lambda light chain mRNA in situ hybridization compared to flow cytometry and immunohistochemistry in B cell lymphomas. Diagn Pathol. 2014;9:144.
37. Ogata K, Kakumoto K, Matsuda A, Tohyama K, Tamura H, Ueda Y, et al.
Differences in blast immunophenotypes among disease types in myelodysplastic syndromes: a multicenter validation study. Leuk Res. 2012;36:1229–36.
38. Chantepie SP, Cornet E, Salaun V, Reman O. Hematogones: an overview. Leuk Res. 2013;37:1404–11.

Financial support: FAPESP, CNPq (INCTS 2008-57895/1), FAEPEX (proc. 1208/11, research fund of the University of Campinas) and the MDS Foundation (Tito Bastianello Young Investigator grant 2009). Konradin Metze and Irene Lorand-Metze hold research grants from CNPq (proc. 307270/2010-6 and 302277/2009-9, respectively).

We thank Fernanda G.P. Cunha and Felipe F. Rocha for their technical assistance.

Author affiliations: Hematology and Hemotherapy Center, University of Campinas, Carlos Chagas Street 480, 13083-878 Campinas, São Paulo, Brazil (Suiellen C Reis-Alves, Irene Lorand-Metze); Faculty of Medicine of Ribeirão Preto, University of São Paulo, Vila Monte Alegre, 14048-900 Ribeirão Preto, São Paulo, Brazil (Fabiola Traina); Faculty of Medicine, University of Campinas, Tessália Vieira de Camargo Street 126, 13083-887 Campinas, São Paulo, Brazil (Konradin Metze, Irene Lorand-Metze).

Correspondence to Irene Lorand-Metze.

Authors' contributions: SCRA was responsible for the flow cytometric analysis and participated in the statistical analysis and manuscript writing. FT contributed to the selection of patients, their clinical follow-up, and the cytogenetic analysis. KM performed the statistical analysis and participated in the data interpretation. ILM was responsible for the study design, the interpretation of the results and the final revision of the manuscript. This work is part of the PhD thesis of SCR-A, with ILM as the advisor (Post-graduate Course of Internal Medicine, University of Campinas). All authors read and approved the final manuscript.

Cite this article: Reis-Alves, S.C., Traina, F., Metze, K. et al. Improving the differential diagnosis between myelodysplastic syndromes and reactive peripheral cytopenias by multiparametric flow cytometry: the role of B-cell precursors. Diagn Pathol 10, 44 (2015). https://doi.org/10.1186/s13000-015-0259-3

Received: 23 December 2014
\begin{document} \newtheorem{thm}{\noindent Theorem}[section] \newtheorem{lem}{\noindent Lemma}[section] \newtheorem{cor}{\noindent Corollary}[section] \newtheorem{prop}{\noindent Proposition}[section] \newtheorem{conj}{\noindent Conjecture}[section] \newtheorem{assert}{\noindent Assertion}[section] \makeatletter\renewcommand{\theequation}{ \thesection.\arabic{equation}} \@addtoreset{equation}{section}\makeatother \newcommand{\qed}{\hbox{\rule[0pt]{3pt}{6pt}}} \newcommand{\mathop{\overline{\lim}}}{\mathop{\overline{\lim}}} \newcommand{\mathop{\underline{\lim}}}{\mathop{\underline{\lim}}} \newcommand{\mathop{\mbox{Av}}}{\mathop{\mbox{Av}}} \newcommand{{\rm spec}}{{\rm spec}} \newcommand{\subarray}[2]{\stackrel{\scriptstyle #1}{#2}} \setlength{\baselineskip}{14pt} \def\rm{\rm} \def\({(\!(} \def\){)\!)} \def{\bf R}{{\bf R}} \def{\bf Z}{{\bf Z}} \def{\bf N}{{\bf N}} \def{\bf C}{{\bf C}} \def{\bf T}{{\bf T}} \def{\bf E}{{\bf E}} \def{\bf H}{{\bf H}} \def{\bf P}{{\bf P}} \def{\cal M}{{\cal M}} \def{\cal F}{{\cal F}} \def{\cal G}{{\cal G}} \def{\cal D}{{\cal D}} \def{\cal X}{{\cal X}} \def{\cal A}{{\cal A}} \def{\cal B}{{\cal B}} \def{\cal L}{{\cal L}} \def\alpha{\alpha} \def\beta{\beta} \def\varepsilon{\varepsilon} \def\delta{\delta} \def\gamma{\gamma} \def\kappa{\kappa} \def\lambda{\lambda} \def\varphi{\varphi} \def\theta{\theta} \def\sigma{\sigma} \def\tau{\tau} \def\omega{\omega} \def\Delta{\Delta} \def\Gamma{\Gamma} \def\Lambda{\Lambda} \def\Omega{\Omega} \def\Theta{\Theta} \def\langle{\langle} \def\rangle{\rangle} \def\left({\left(} \def\right){\right)} \def\;\operatorname{const}{\;\operatorname{const}} \def\operatorname{dist}{\operatorname{dist}} \def\operatorname{Tr}{\operatorname{Tr}} \def\qquad\qquad{\qquad\qquad} \def\noindent{\noindent} \def\begin{eqnarray*}{\begin{eqnarray*}} \def\end{eqnarray*}{\end{eqnarray*}} \def\mbox{supp}{\mbox{supp}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def{\bf p}{{\bf p}} \def{\rm sign\,}{{\rm sign\,}} \def{\bf r}{{\bf r}} \def{\bf 1}{{\bf 1}} \def\vskip2mm{\vskip2mm} \def\noindent{\noindent} \def{\it Proof.~}{{\it Proof.~}} \begin{center} {\Large One dimensional lattice random walks with absorption \\ at a point / on a half line} \vskip6mm {K\^ohei UCHIYAMA} \\ \vskip2mm {Department of Mathematics, Tokyo Institute of Technology} \\ {Oh-okayama, Meguro Tokyo 152-8551\\ e-mail: \,[email protected]} \end{center} \vskip6mm \begin{abstract} This paper concerns a random walk that moves on the integer lattice and has zero mean and a finite variance. We obtain first an asymptotic estimate of the transition probability of the walk absorbed at the origin, and then, using the obtained estimate, that of the walk absorbed on a half line. The latter is used to evaluate the space-time distribution for the first entrance of the walk into the half line. \footnote{ {\it key words}: absorption, transition probability, asymptotic estimate, one dimensional random walk\\ {\it ~~~~~ AMS Subject classification (2009)}: Primary 60G50, Secondary 60J45.} \end{abstract} \vskip6mm \noindent {\Large \bf Introduction} \vskip2mm Let $S^x_n=x+Y_1+\cdots+Y_n$ be a random walk on the integer lattice ${\bf Z}$ starting at $x$ where the increments $Y_j$ are independent and identically distributed random variables defined on some probability space $(\Omega, {\cal F}, P)$ and taking values in ${\bf Z}$. Let $Y$ be a random variable having the same law as $Y_1$. 
We suppose throughout the paper that the walk $S^x_n$ is irreducible and satisfies \begin{equation}\label{mom} EY=0~~~~\mbox{and}~~~~\sigma^2:=E|Y|^{2}<\infty, \end{equation} where $E$ indicates the expectation by $P$. In this paper we compute an asymptotic form as $n\to\infty$ of the probability \begin{equation}\label{q0} q^n(x,y)=P_x[S_n=y, S_1\neq 0,S_2\neq 0,\ldots, S_n\neq 0], \end{equation} the transition probability of the walk absorbed at the origin, where (and in what follows) $P_x$ denotes the law of the walk $(S^x_n)_{n=0}^\infty$ and under $P_x$ we simply write $S_n$ for $S_n^x$. The result on $q^n$ will be used to evaluate $q_{(-\infty,0]}^n(x,y)$, the transition probability of the walk that is absorbed when it enters the negative half line, and the result on the latter in turn to evaluate the space-time distribution for the first entrance of $S^x_n$ into the negative half line. The local central limit theorem, which gives a precise asymptotic form of the transition probabilities $p^n(y-x):= P_x[S_n=y]$, plays a fundamental role in both theory and application of random walks, whereas for its analogue for $q_{(-\infty,0]}^n(x,y)$ or $q^n(x,y)$, for all its significance, detailed results that provide their precise asymptotic forms seem to be lacking, except in the case of the simple random walk [but see `Note added in proof' at the end of the paper]. In this paper we observe that the asymptotic forms of both $q^n$ and $q_{(-\infty,0]}^n$ are given by the corresponding density of the Brownian motion if the space variables $x, y$ as well as $n$ become large in a suitable way, but obviously they fail to do so if $x$ and/or $y$ remain in a finite set. In the latter case the order of magnitude of the decay (as $n\to\infty$) is the same as in the Brownian case, but the coefficients differ from the Brownian ones. These coefficients are expressed by means of either the potential function of the walk or a pair of \lq harmonic' and \lq conjugate harmonic' functions on the positive half line (renewal functions of ladder-height processes) according as the absorption is made at the origin or on the negative half line. A primary estimate of $q^n$ is derived by a Fourier analytic method; afterwards we refine it by applying the result on the entrance distribution into $(-\infty,0]$ mentioned above (under an additional moment condition). Our results concerning $q_{(-\infty,0]}^n$ partly but significantly rest on a profound theory of the random walk on the half line as found in Spitzer's book \cite{S}. The transition probability $q^n$ may be viewed as the Green function of the space-time walk, an extremal case of two dimensional walks, absorbed on the coordinate axis of the time variable. In a separate paper \cite{U3} we study the corresponding problem for two-dimensional random walks with zero mean and finite variances. With the help of some of the results obtained here and in \cite{U3} the asymptotic estimates of the Green functions of the walks restricted on the upper half space are computed in \cite{U4}. A closely related issue concerning the hitting distribution of a line for two-dimensional walks is studied in \cite{Us}. We illustrate how fine the estimates obtained are by applying them to a problem on a system of independent random walks. Suppose that the particles are initially placed on the positive half line of ${\bf Z}$ (one on each site) and independently move according to the substochastic transition law $q^n$.
Then how does the total number of particles on the negative half line at time $n$ behave for large $n$? We shall prove that the expected number of such particles converges to a positive constant if $E[|Y|^3; Y<0]<\infty$ and diverges to infinity otherwise, provided that the walk is not left continuous; an analytic expression of the constant will be given. \section{Statements of Results} Let $S^x_n$ be the random walk described in the Introduction and $P_x$ its probability law. Put $p^n(x)=P[S^0_n=x]$, $p(x)=p^1(x)$ and define the potential function \begin{equation}\label{a_def} a(x)=\sum_{n=0}^\infty[p^n(0)-p^n(-x)]; \end{equation} the series on the right side is convergent and $a(x)/|x|\to 1/\sigma^2$ as $|x|\to\infty$ (cf. Spitzer \cite{S}: Propositions P28.8 and P29.2). Denote by $d_\circ$ the period of the walk (namely $d_\circ$ is the smallest positive integer such that $p^{d_\circ n}(0)>0$ for all sufficiently large $n$). Put $${\sf g}_n(u)=\frac{e^{-u^2/2n_*}}{\sqrt {2\pi n_*}} ~~~~~~\mbox{where}~~~~~~n_*=\sigma^2 n.$$ The following notation will also be used: $a\wedge b =\min\{a,b\}$, $a\vee b=\max \{a,b\}$ ($a,b\in {\bf R})$; for functions $g$ and $G$ of a variable $\xi$, $g(\xi) = O(G(\xi))$ means that there exists a constant $C$ such that $|g(\xi)| \leq C|G(\xi)|$ whenever $\xi$ ranges over a specified set; ${\bf 1}({\cal S})$ denotes the indicator of a statement ${\cal S}$, i.e., ${\bf 1}({\cal S})=1~\mbox{ or} ~~0~~\mbox{ according as ${\cal S}$ is true or not.}$ We shall denote by $a_{\circ}$ an arbitrarily chosen constant that is greater than unity (as in the items ${\bf (i)}$ and ${\bf (ii)}$ below), whereas positive constants that are independent of the variables $x, y, n$ etc. but may depend on the law of $Y$ are denoted by $C, C_1, C_2,\ldots$, whose values do not have to be the same at different occurrences even though the same letter may be used. \vskip2mm\vskip2mm\noindent {\bf 1.1.}~~ Let $q^n(x,y)$ denote the transition probability of the walk $S^x_n$ that is absorbed when it hits the origin as defined by (\ref{q0}) (which entails that $q^n(x,y)=0$ if $y=0, x\neq 0$ and $q^0(x,y)={\bf 1}(x=y)$). For convenience we put $$a^*(x)={\bf 1}(x=0)+a(x).$$ \begin{thm}\label{thm1.1} ~ The following asymptotic estimates of $q^n(x,y)$ as $n\to \infty$, given in three cases of constraints on $x$ and $y$, hold true uniformly for $x$ and $y$ subject to the respective constraints. \vskip2mm {\bf (i)}~ Under $|x|\vee|y|< a_\circ \sqrt n$ and $|x|\wedge |y|=o(\sqrt n)$, \begin{equation}\label{(i)} q^n(x,y)=\frac{\sigma^4a^*(x)a(-y)+xy}{n_*}\,p^n(y-x)+o\bigg(\frac{(|x|\vee1)|y|}{n^{3/2}}\bigg). \end{equation} \vskip2mm {\bf (ii)}~ Under $a_\circ^{-1}\sqrt n < |x|,\,|y|< a_\circ \sqrt n$ (both $|x|$ and $|y|$ are between the two extremes), \begin{eqnarray}\label{q(ii)} q^n(x,y)&=&d_\circ {\bf 1}\Big(p^n(y-x)\neq 0\Big)\Big[{\sf g}_{n}(y-x)-{\sf g}_{n}(y+x)\Big]+o\bigg(\frac{1}{\sqrt n}\bigg)~~~\mbox{if}~~xy>0,\\ q^n(x,y)&=&o\bigg(\frac{1}{\sqrt n}\bigg)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\mbox{if}~~~~~xy<0. \label{(ii)} \end{eqnarray} \vskip2mm {\bf (iii)}~ Let $0< |x|\wedge |y| <\sqrt n< |x|\vee|y|$. ~ Then, if $E|Y|^{2+\delta}<\infty$ for some $ \delta\geq 0$, $$q^n(x,y)=O\bigg(\frac{|x|\wedge |y|}{|x|\vee|y|}{\sf g}_{4n}(|x|\vee|y|)\bigg)+o\bigg(\frac{|x|\wedge |y|}{(|x|\vee|y|)^{2+\delta}}\bigg).
$$ ({\it ${\sf g}_{4n}$ on the right side can be replaced by ${\sf g}_{(1+\varepsilon)n}$ with any $\varepsilon>0$.}) \end{thm} \vskip2mm\vskip2mm As a simple consequence of {\bf (i)} and {\bf (iii)} of Theorem \ref{thm1.1} we have the bound \begin{equation}\label{iv} q^{n}(x,y)\leq C\frac{(|x|+1)|y|}{n^{3/2}}, \end{equation} valid for all $n, x$ and $y$. It is noted that if the walk is {\it left continuous}, i.e. $P[Y\le -2]=0$, then $\sigma^2 a(x)=x$ for $x>0$, hence the leading term in the formula of {\bf (i)} vanishes for $x>0, y<0$ in agreement with the trivial fact that $q^n$ itself does. If $E[|Y|^3]<\infty$ and $xy<0$, then the assertion {\bf (i)} can be refined in two ways: the error term in (\ref{(i)}) may be replaced by $o((|x|+|y|)n^{-3/2})$ and the resulting formula is valid uniformly for $|x|\vee |y|<a_\circ \sqrt n$. Let $C^+$ be the constant given by \begin{equation}\label{C^+0} C^{+}:= \lim_{x\to \infty}(\sigma^2 a(x)-x) \leq \infty. \end{equation} We shall show (Corollary \ref{lem2.5} in Section 2; see also Corollary \ref{cor7.5}) that the limit exists and that it is finite if and only if $E[|Y|^3; Y<0]<\infty$ and positive unless the walk is left continuous. It follows that $$\sigma^4a^*(x)a(-y)+xy= C^+(x-y)(1+o(1))~~~\mbox{ as}~~ x\wedge (-y)\to \infty,$$ provided $E[|Y|^3; Y<0]<\infty$. In view of this relation and duality the refined version of {\bf (i)} mentioned above may read as follows. \begin{thm}\label{thm1.5}~ Suppose that $E[|Y|^3;Y<0]<\infty$. Let $y<0<x$. Then uniformly for $x\vee|y|\le a_\circ\sqrt n$, as $x\wedge |y|\to \infty$ \begin{equation} q^n(x,y)=C^+\frac{x+|y|}{n_*}p^n(y-x)+ o\bigg(\frac{x\vee|y|}{n^{3/2}}\bigg). \end{equation} \end{thm} \vskip2mm The proof of Theorem \ref{thm1.5} requires more delicate analysis than that of Theorem \ref{thm1.1}; it rests on Theorem \ref{thm1.4} below and will be given after the proof of the latter. Given a constant $\alpha\in (0,1)$, one may consider the absorption which is not absolute but takes place with probability $\alpha$ each time the walk is about to visit the origin. Denote by $q_\alpha^n(x,y)$ the transition probability of the process subject to such absorption. In Section 6 we shall obtain the asymptotic estimate \begin{equation}\label{qqq} q_\alpha^n(x,y)-q^n(x,y)=\frac{(1-\alpha)\sigma^2}{\alpha}\cdot\frac{a^*(x)+a^*(-y)}{n}p^n(y-x)(1+o(1)) \end{equation} valid uniformly for $|x|\vee|y|<a_\circ \sqrt n$. Note that as $|x|\wedge |y|\to\infty$ under the same constraint on $x, y$, the right side divided by $q^n(x,y)$ tends to zero for $xy>0$, while it is asymptotically a positive constant for $xy<0$, provided $E[|Y|^3]<\infty$, according to Theorem \ref{thm1.5}. \vskip2mm\vskip2mm\noindent {\bf 1.2.}~~ Here we consider the walk absorbed when it enters $(-\infty,0]$. Let $T$ denote the first entrance time into $(-\infty,0]$: $$T=\inf\{n\geq 1: S_n\leq 0\},$$ and $q_{(-\infty,0]}^n(x,y)$ the transition probability of the absorbed walk: $$q_{(-\infty,0]}^n(x,y)=P_x[S_n=y, S_1>0,\ldots, S_n > 0]= P_x[S_n=y, n<T]~~~~ ~~(x,y >0).$$ The next result states that $q_{(-\infty,0]}^n(x,y)$ behaves similarly to $q^n(x,y)$ within any parabolic region if both $x$ and $y$ get large. \begin{prop}\label{thm1.2}~ Uniformly for $n\geq (x\vee y)^2/a_\circ$, as $x\wedge y\to \infty$ $$q_{(-\infty,0]}^n(x,y)=q^n(x,y)(1+o(1)).$$ \end{prop} Let $f_+(x)$ (resp.
$f_-(x)$) ($x= 1,2,\ldots$) be the positive function on $x> 0$ that is asymptotic to $x$ as $x\to\infty$ and harmonic with respect to the walk $S_n$ (resp.\ $-S_n$) absorbed on $(-\infty, 0]$: \begin{equation}\label{f_def} ~~f_{\pm}(x)=E[f_{\pm}(x\pm Y);\, x\pm Y >0]~~(x\geq 1)~~~\mbox{and}~~~ \lim_{x\to\infty} f_{\pm}(x)/x=1, \end{equation} each of which exists uniquely (Spitzer \cite{S}:P19.5). (It is warned that it is not $[1,\infty)$ but $[0,\infty)$ on which the harmonic function is considered in \cite{S}.) \begin{thm}\label{thm1.3}~ Uniformly for $0< x, y\leq a_\circ\sqrt n$, as $xy/n\to 0$ $$q_{(-\infty,0]}^n(x,y)=\frac{2f_+(x)f_-(y)}{n_*}p^n(y-x)(1+o(1)).$$ \end{thm} \vskip2mm From Theorem \ref{thm1.3} one derives an asymptotic form of the space-time distribution of the first entrance into $(-\infty,0]$, which we denote by $h_x(n,y)$: for $y\leq 0$ $$h_x(n,y) =P_x[S_T=y, T=n].$$ Put \begin{equation}\label{q} H_{\infty}^+(y)=\frac2{\sigma^2}E[f_-(y-Y);Y<y]=\frac{2}{\sigma^{2}}\sum_{j=1}^\infty f_-(j)p(y-j)~~~~~ (y\leq 0). \end{equation} \begin{thm}\label{thm1.4}~ Suppose $E[|Y|^{2+\delta};Y<0]<\infty$ for some $\delta\ge 0$ and $d_\circ=1$. Then, uniformly for $y\leq 0< x\leq a_\circ\sqrt n$, as $n\to \infty$ \begin{equation}\label{h0} h_x(n,y)=\frac{f_+(x){\sf g}_n(x)}{n}H_{\infty}^+(y)(1+o(1))+\frac{x}{n^{3/2}}\alpha_n(x,y), \end{equation} with $$\alpha_n(x,y)=o\Big((|y|\vee\sqrt n\,)^{-1-\delta}\,\Big),~~~\sum_{y\leq 0}|\alpha_n(x,y)|=o(n^{-\delta/2}) ~~~~\mbox{and}~~~\sum_{y\leq 0}|\alpha_n(x,y)||y|^\delta=o(1);$$ and for $x\geq \sqrt n$ and $y\leq 0$ \begin{equation}\label{upb-h20} h_x(n,y)\leq C\bigg[\frac{{\sf g}_{4n}(x)}{\sqrt n}+o\bigg(\frac1{x^{2+\delta}}\bigg)\bigg]H_{\infty}^+(y) +\frac{C}{\sqrt n}P[Y<y-{\textstyle \frac12}x], \end{equation} and in particular \begin{equation}\label{eq1.4} h_x(n,y)\leq C{H_\infty^+(y)}x^{-1} n^{-1/2}. \end{equation} \end{thm} Since $P_x[\,T=n]=\sum_{y\leq 0}h_x(n,y)$ we have the following corollary of Theorem \ref{thm1.4}. \begin{cor}\label{cor1.1}~Uniformly in $x\geq 1$ $$P_x[\,T=n]= \frac{f_+(x){\sf g}_n(x)}{n}(1+o(1)) + o\bigg(\frac{x}{n^{3/2}}\wedge \frac1{x\sqrt n}\bigg).$$ \end{cor} \vskip2mm {\sc Remark.}~ (a)~ $H_{\infty}^+$ is the probability on $(-\infty,0]$ that arises as the limit as $x\to\infty$ of the first entrance distribution $H^+_x(\cdot)=\sum_n h_x(n,\cdot)$ (\cite{S}, P19.4). This in particular gives the identity $\sum_{j=1}^\infty f_-(j)P[Y\leq -j]=\sigma^2/2.$ (b) ~ If the walk is right continuous (i.e., $P[Y\geq 2]=0$), as well as in the case when it is left continuous, we have $q^n_{(-\infty,0]}(x,y)=q^n(x,y)$ for $x, y>0$. (c) ~ The formula (\ref{h0}) holds true also in the periodic case (i.e., $d_\circ>1$), if the leading term on its right side is multiplied by $d_\circ {\bf 1}\Big(p^n(y-x)\neq 0\Big)$ as in (\ref{q(ii)}). (d)~ The function $f_-$ may be given by the formula $$f_-(x)=f_-(1)\Big (1+ E_0[\,\mbox{the number of ascending ladder points} \in [1, x-1] \,]\Big)$$ and its dual formula for $f_+$ (\cite{S}:pp.201-203). Under our normalization of $f_{\pm}$ the initial value $f_-(1)$ (resp. $f_+(1)$) equals the expectation of the strictly ascending (resp. descending) ladder height: \begin{equation}\label{f(1)} f_-(1)=E_0[S_{\tau([1,\infty))}]~~~ \mbox{and}~~~ f_+(1)=-E_0[S_{\tau((-\infty,-1])}], \end{equation} where $\tau(B)$ denotes the first entrance time into a set $B$, in view of the renewal theorem.
(e) ~If the starting point is 1, the Baxter-Spitzer identity gives \begin{equation}\label{h_1} \sum_{n=0}^\infty r^n\sum_{y\leq 0} z^{1-y}h_1(n,y)=1-\exp \bigg(-\sum_{k=1}^\infty \frac{r^k}{k} E_0[z^{-S_k}; S_k < 0]\bigg) ~~~~~~(|z|\leq 1, |r|<1) \end{equation} and a similar formula for $q_{(-\infty,0]}^k(1,y)$ (\cite{C}:Theorem 8.4.2, \cite{F}: Lemmas 1 and 2 of Section XVIII.3, \cite{S}:P17.5). We shall use these identities not directly but via certain fundamental results (including those on $f_{\pm}$ found in \cite{S}) that are based on them. Taking $z=1$, the above formula reduces to $$1-E_1[r^T]=\sqrt{1-r} \exp \bigg[\sum_{k=1}^\infty \frac{r^k}{k} \bigg(\frac12-P_0[S_k<0] \bigg)\bigg],$$ and, applying Karamata's Tauberian theorem, one can readily find an asymptotic formula for $P_1[T\geq n]$, which is also obtained from Corollary \ref{cor1.1} and (\ref{f(1)}). It would however be difficult to derive directly from the formula (\ref{h_1}) such fine estimates of $h_1(n,y)$ as given in Theorem \ref{thm1.4}. \vskip2mm\vskip2mm\noindent {\bf 1.3.}~~ For $x\in {\bf Z}$, let $Q^+_x(n)$ denote the probability that the walk starting at $x$ is found in the negative half line at time $n$ without having hit the origin before $n$: $$Q_x^+(n)=\sum_{y=-\infty}^{-1}q^n(x,y).$$ \begin{prop} \label{prop1.3.1}~~As $x/\sqrt n\to 0$ \begin{equation}\label{Q+} Q_{x}^+(n)=\frac{\sigma^2a^*(x)-x}{\sqrt{2\pi n_*}}+o\bigg(\frac{|x|+1}{\sqrt n}\bigg); \end{equation} and uniformly in $n$, as $x\to\infty$ \begin{equation}\label{Q+-} Q_{-x}^+(n)=\int_{-x}^x {\sf g}_n(u)du\Big[1+o(1)\Big]. \end{equation} If $E[|Y|^{3}; Y<0]<\infty$, then for $x>0$, the error term in (\ref{Q+}) can be replaced by $o(1/\sqrt n)$. \end{prop} The formula (\ref{Q+-}) follows from (\ref{Q+}) if $x/\sqrt n\to 0$, so that it is significant only in the case $x>a_\circ^{-1}\sqrt n$. Let $C^+$ be the same constant as introduced in subsection 1.1 (just before Theorem \ref{thm1.5}). $C^+$ is finite if and only if $E[|Y|^3;Y<0]<\infty$, as remarked there. \vskip2mm \begin{thm} \label{thm1.3.2}~ Let $\nu_n= \sum_{x=1}^\infty Q^+_{x}(n)$. Then ${\displaystyle \lim_{n\to\infty} } \nu_n=\frac12 C^+.$ \end{thm} One can extend Theorem \ref{thm1.3.2} as follows. We are concerned with particles each of which performs a random walk according to the transition law $q^n(x,y)$ independently of the others. Consider an experiment in which, at a time $n$ determined prior to the experiment, the experimenter counts the number of particles lying in an interval of the negative half line. At time 0 the particles are randomly placed on the sites $x>0$, the mean number of particles at site $x$, denoted by $m_n(x)$, being allowed to depend on $n$ as well as on $x$. Let $N_n(\ell)$, $\ell>0,$ denote the number of particles that are found in the interval $[-\ell \sqrt {n_*},-1]$ at time $n$. The following extension is a corollary of the proofs of Proposition \ref{prop1.3.1} and Theorem \ref{thm1.3.2}. \begin{cor}\label{cor1.2} ~ Suppose $m_n(x)=0$ for $x<0$, $m_n(x)$ is uniformly bounded and for each $K>0$, $m_n(x)=1+o(1)$ as $n\to\infty$ uniformly for $0<x<K\sqrt n$. Then for each positive number $\ell$, \begin{equation}\label{experiment} \lim_{n\to\infty} E[N_n(\ell)]=\frac{C^+}{\sqrt{2\pi}}\int_0^\ell e^{-t^2/2}dt. \end{equation} \end{cor} \vskip2mm\vskip2mm That $\nu_n= \sum_{x=1}^\infty Q^+_{x}(n)$ is bounded if and only if $E[|Y|^3; Y<0]<\infty$ is easy to prove.
Indeed $$\nu_n = \sum_{k=1}^n \sum_{w=1}^\infty\sum_{y=-\infty}^{-1}Q^{*+}_w(k-1)p(y-w)Q^+_y(n-k),$$ where $Q^{*+}_w(k)=\sum_{x=1}^\infty q_{(-\infty,0]}^k(x,w)$. Crude applications of Theorem \ref{thm1.1} and Proposition \ref{thm1.2} give \begin{equation}\label{Q^-} C_1{\bf 1}\bigg(-1<\frac{y}{\sqrt n} <0\bigg)\frac{|y|}{\sqrt n} \leq Q^+_y(n) \leq C_2\frac{|y|}{\sqrt n}~~~~~\mbox{for}~~~y<0 \end{equation} and similar bounds for $Q^{*+}_w(n)$, respectively, whereupon, noting $\sum (k(n-k))^{-1/2} \sim \int_0^1 (t(1-t))^{-1/2}dt$, one finds that $\nu_n$ is bounded if and only if $\sum_{w=-\infty}^{-1} \sum_{y=-\infty}^{-1} p(y+w) wy<\infty$, but the latter condition is equivalent to $E[|Y|^3; Y<0]<\infty$. \vskip2mm The rest of the paper is organized as follows. In Section 2 we give some preliminary lemmas. The estimation of $q^n$ and that of $q^n_{(-\infty,0]}$ and $h_x(n,y)$ are carried out in Section 3 and Section 4, respectively. Further detailed estimation of $q^n(x,y)$ for $xy<0$, which leads to the proof of Theorem \ref{thm1.5}, is made at the end of Section 4. In Section 5 $Q^+_x(n)$ is dealt with. In Section 6 we briefly discuss $q^n_\alpha(x,y)$ and prove (\ref{qqq}). \section{ Preliminary Lemmas } This section is divided into four subsections. In the first one we give some terminology and notation as well as some fundamental results from Spitzer's book \cite{S} in addition to those given in Section 1. Both the second and the third ones depend in an essential way on the classical results given in the first subsection but are self-contained otherwise. \vskip4mm\noindent {\bf 2.0.}~ Let $B$ be a subset of ${\bf Z}$. Denote by $\tau_B$ the first time when $S_n$ enters $B$ after time $0$; $\tau_B=\inf\{n\geq 1: S_n\in B\}$. For a point $x\in {\bf Z}$ write $\tau_x$ for $\tau_{\{x\}}$. For typographical reasons we sometimes write $\tau(B)$ for $\tau_B$. A function $\varphi(x)$ on ${\bf Z}\setminus B$ that is bounded from below is said to be {\it harmonic} on ${\bf Z}\setminus B$ if $E_x[\varphi(S_1); S_1\notin B]=\varphi(x)$ for all $x \notin B$. From this property with the help of Fatou's lemma one infers that for any Markov time $\tau$, $E_x[\varphi(S_\tau); \tau<\tau_B]\leq \varphi(x)$~ ($x \notin B$). The functions $f_+(x)$ and $a(x)$ introduced in Section 1 (see (\ref{f_def}) and (\ref{a_def})) are harmonic on $[1,\infty)$ and on ${\bf Z}\setminus \{0\}$, respectively (\cite{S}, T29.1). The function $f_-(x)$ is harmonic also on $[1,\infty)$ but for the {\it dual} walk, namely the walk determined by the probability law $p^*(x)=p(-x)$. Let $g_{\,(-\infty,0]}(x,y)$ ($\,x,y>0\,$) denote the Green function of the walk $S_n$ absorbed on $(-\infty,0]$: $g_{\,(-\infty,0]}(x,y)=\sum_{n=0}^\infty q^n_{(-\infty,0]}(x,y)=\sum_{n=0}^\infty P_x[S_n=y, n<T]$, where $T=\tau_{(-\infty,0]}$ as in Section 1. It follows from the propositions P18.8, P19.3, P19.5 of \cite{S} that the increments $$u^{\pm}(y):=f_{\pm}(y)-f_{\pm}(y-1)~~~ (y=1, 2, \ldots), ~u^\pm(0):=0$$ are all positive and have limits $\lim_{y\to\infty}u^{\pm}(y)= 1$ and with them the function $g_{\,(-\infty,0]}$ is expressed as \begin{equation}\label{g2} g_{\,(-\infty,0]}(x,y)= \frac{2}{\sigma^2}\sum_{z=0}^{x\wedge y} u^+(x-z)u^-(y-z)~~~~~ (x,y >0). \end{equation} Similarly let $g_{\{0\}}(x,y)$ be the Green function of the process $S_n$ absorbed at the origin: $g_{\{0\}}(x,y)=\sum_{n=0}^\infty q^n(x,y)$.
Then, according to Spitzer \cite{S} (P29.4) \begin{equation}\label{g} g_{\{0\}}(x,y)=a(x)+a(-y)-a(x-y)~~~~~~(x, y \in {\bf Z}\setminus \{0\}). \end{equation} The results given in the following subsections ${\bf 2.1}$ and ${\bf 2.2}$, though easy consequences of (\ref{g2}) and (\ref{g}), do not seem to appear in the existing literature. \vskip2mm\noindent {\bf 2.1.}~ Let $H_x^+(y)$ denote (as in {\sc Remark} (a)) the hitting distribution of $(-\infty,0]$ for the walk $S_n^x$: \begin{equation}\label{hd} H_x^+(y):=P_x[S_{T}=y]~~~~~ (x>0, y\leq 0), \end{equation} which may be expressed as \begin{equation}\label{h^-} H_x^+(y)=\sum_{w=1}^\infty g_{(-\infty,0]}(x,w)p(y-w). \end{equation} In view of (\ref{g2}) we have $g_{(-\infty,0]}(x,w)\leq C f_-(w)$, hence \begin{equation}\label{17} H_x^+(y)\leq CH_{\infty}^+(y). \end{equation} \begin{lem} \label{lem2.50} ~~For $x>0$ and $y\leq 0$, $${\rm (a)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\sum_{z=-\infty}^{0} H_x^+(z)a(z-y)=a(x-y)-\sigma^{-2}f_+(x).~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ $${\rm (b)}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\sum_{z=-\infty}^{0} H_{\infty}^+(z)a(z)=\lim_{x\to\infty}\Big[a(x)-\sigma^{-2}f_+(x)\Big].~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ (Both sides of {\rm (b)} may be infinite simultaneously.) \end{lem} \vskip2mm\noindent {\it Proof.~} ~ With $y\leq 0$ fixed define $\varphi(x)=\sum_{z=-\infty}^{0} H_x^+(z)a(z-y)$ for $x>0$ and $\varphi(x)=a(x-y)$ for $x\le 0$. Owing to (\ref{g2}) and (\ref{h^-}) \begin{equation}\label{eq2.5} H_x^+(z)\leq C\sum_{w=1}^\infty (x\wedge w)p(z-w)\leq CxP[Y<z]~~~~~~~(z\leq 0< x), \end{equation} which combined with $\sum_{z\leq 0} |z|P[Y<z]\leq\sigma^2<\infty$ shows that $\varphi(x)$ takes a finite value; moreover, by dominated convergence, $\varphi(x)/x\to 0$ as $x\to \infty$. It is observed that $\sum_{z=-\infty}^{\infty}p(z-x)\varphi(z)=\varphi(x)$ for $x>0$ and $\sum_{z=-\infty}^{\infty}p(z-x)a(z-y)=a(x-y)$ for $x\neq y$. Hence $a(x-y)-\varphi(x)$, vanishing on $x\leq 0$, is harmonic on $x>0$ and asymptotic to $x/\sigma^2$ as $x\to\infty$. We may now conclude that $a(x-y)-\varphi(x)$ agrees with $\sigma^{-2}f_+(x)$ for $x>0$ since a harmonic function on $x>0$ that is bounded below is unique apart from a constant factor. Thus (a) has been verified. Let $y=0$ and $x\to\infty$ in (a). If the left side of (b) is infinite, so is the right side in view of Fatou's lemma. If it is finite, the dominated convergence theorem may apply owing to (\ref{17}). ~~~\qed \vskip2mm The proof of Lemma \ref{lem2.50} may be repeated word for word but with $a(z-y)$ replaced by $z$ to yield \begin{equation}\label{eq2.51} \sum_{z=-\infty}^{-1} H_x^+(z)|z|= f_+(x)-x \end{equation} and \begin{equation}\label{eq2.50} \sum_{z=-\infty}^{-1} H_{\infty}^+(z)|z|=\lim_{x\to\infty}\Big[f_+(x)-x\Big]. \end{equation} Taking $y=0$ in (a) of Lemma \ref{lem2.50} and combining it with (\ref{eq2.51}) we obtain \begin{equation}\label{22} \sum_{z=-\infty}^{-1} H_x^+(z)(\sigma^2 a(z)-z)=\sigma^2a(x)-x~~~~~~(x>0). \end{equation} Here we advance a corollary of Lemma \ref{lem2.50} that involves the constant $C^+$ introduced in subsection 1.1. It is convenient to define it by $$C^+=\sum_{y=-\infty}^{-1}H_{\infty}^+(y)\Big[\sigma^2a(y)+|y|\Big]$$ rather than by (\ref{C^+0}). The relation (\ref{C^+0}) then ensues as stated in the corollary below. According to this definition it is clear that $C^+$ is finite if and only if $E[|Y|^3;Y<0]<\infty$, and positive unless the walk is left continuous.
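\vskip2mm\noindent {\sc Example.}~ For a simple illustration of the quantities introduced so far, consider the simple symmetric walk, $p(1)=p(-1)=1/2$ (all the facts below are classical and easily checked; cf.\ \cite{S}). In this case $\sigma^2=1$ and $a(x)=|x|$; also $f_{\pm}(x)=x$, since $\frac12[(x+1)+(x-1)]=x$ verifies (\ref{f_def}) for $x\geq 1$ (the summand corresponding to $x\pm Y=0$ contributes nothing). Hence $\sigma^2 a(x)-x=0$ for $x>0$ and $C^+=0$, in agreement with the corollary below and with the fact that this walk is left continuous.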
Define $H_{-\infty}^-(y)$ and $C^-$ analogously to $H_{\infty}^+$ and $C^+$: $$H_{-\infty}^-(y)=\frac2{\sigma^2}E[f_+(Y-y);Y>y]~~~ (y\geq 0)~~~\mbox{and}~~~C^-=\sum_{y=1}^\infty H_{-\infty}^-(y)(\sigma^2 a(y)+y).$$ It holds that $C^-<\infty$ if and only if $E[|Y|^3; Y>0]<\infty$. \begin{cor} \label{lem2.5} ~ ${\displaystyle C^+=\lim_{x\to+\infty}(\sigma^2 a(x)-x)~\mbox{~and~}~C^-=\lim_{x\to -\infty}(\sigma^2 a(x)-|x|).}$~ \end{cor} \vskip2mm\noindent {\it Proof}.~ From (\ref{eq2.50}) and (b) of Lemma \ref{lem2.50} one deduces the first relation of the corollary. The second one is its dual. ~~~\qed \vskip4mm\noindent {\bf 2.2.}~ The results in this subsection are somewhat different in nature from and independent of those of the preceding one (except for the use of (\ref{17})), although the machinery of the proofs is essentially the same. Recall that $T$ is written for $\tau_{(-\infty,0]}$. We shall show that $P_x[\tau_{[N,\infty)}<\tau_{0}]$ and $P_x[\tau_N<\tau_0]$ are asymptotically equivalent (see Proposition \ref{lem2.2}). For the moment we obtain the following \begin{lem}\label{lem2.1}~~Uniformly in $0<x<N$, as $N\to\infty$ $$\frac{a(x)}{a(N)}\geq P_x[\tau_{[N,\infty)}<\tau_{0}]\geq P_x[\tau_N<\tau_0]=\frac{\sigma^2a(x)+x}{2N}(1+o(1)).$$ \end{lem} \vskip2mm\noindent {\it Proof.} We have $P_x[\tau_N<\tau_0]=g_{\{0\}}(x,N)/g_{\{0\}}(N,N)$ and the last relation of the lemma follows from (\ref{g}) together with $$g_{\{0\}}(N,N)=a(N)+a(-N)=\frac{2N}{\sigma^2}(1+o(1)),~~~~\lim_{y\to\infty}[a(-y)-a(1-y)]=1/\sigma^2$$ (\cite{S}: P29.2). Since $a(x)$ is positive and harmonic on ${\bf Z}\setminus\{0\}$ (i.e. $E_x[a(Y+x)]=a(x),~x\neq 0$) and non-decreasing for $x>0$ large enough, $$a(x)\geq E_x[a(S_{\tau([N,\infty))});\tau_{[N,\infty)}< \tau_0]\geq {a(N)}P_x[\tau_{\,[N,\infty)}< \tau_0].$$ Thus $P_x[\tau_{\,[N,\infty)}<\tau_0]\leq a(x)/a(N)$ provided $N$ is large enough, verifying the first inequality of the lemma. The second one is trivial. ~~~\qed \begin{lem}\label{unif_bd}~ As $N\to\infty$ \[\label{x} \sup_{z>N}P_{z}[ \tau_N> T] \asymp \bigg[ N^{-1}\sum_{y=1}^{N-1}yH^+_{\infty}(-y)+\sum_{y=N}^\infty H^+_{\infty}(-y)\bigg] \longrightarrow 0. \] \end{lem} \vskip2mm {\it Proof.}~ We use the decomposition $$ P_z[ \tau_N> T]=\sum_{w<N}P_z[S_{\tau((-\infty,N] )}=w]\Big(P_w[T<\tau_N]{\bf 1}(w>0)+{\bf 1}(w\leq 0)\Big). $$ Writing the first probability under the summation sign by means of $H_x^+(y)$ (defined in (\ref{hd})) and using the bound $H_x^+(y) \leq CH_\infty^+(y)$ (see (\ref{17})) together with Lemma \ref{lem2.1} we obtain $$P_z[ \tau_N> T] \leq\frac{C}{a(-N)}\sum_{-N<y<0}a(y)H_{\infty}^+(y) +\sum_{y\leq -N}H_{\infty}^+(y),$$ the right side approaching zero. The lower bound is obtained by an application of Fatou's lemma.~~\qed \begin{prop}\label{lem2.2}~~Uniformly in $0<x<N$, as $N\to\infty$ \vskip3mm\noindent {\rm (a)}~$\displaystyle~~~~~~~~~~~~~~~~~~~~~~~~ P_x[\tau_{\,[N,\infty)}<T\,] - P_x[\tau_N<T\,]=o(x/N);~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ $${\rm (b)}~~~~~~~~~~~~~~~~~~~~~~~~~ P_x[\tau_{\,[N,\infty)}<\tau_0\,] - P_x[\tau_N<\tau_0\,]=o(x/N).~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ \end{prop} \vskip2mm {\it Proof.}~~ The difference on the left side of (a) is expressed as \begin{equation}\label{eq-1} \sum_{z>N} P_x[\tau_{\,[N,\infty)}< T,\, S_{\tau([N,\infty))} =z]P_z[ \tau_N> T], \end{equation} and hence (a) follows from the preceding two lemmas (and the inequality $T\leq \tau_0$). The proof of (b) is similar: one has only to replace $T$ by $\tau_0$ in (\ref{eq-1}).
~~~\qed \vskip2mm The next proposition refines Theorem 22.1 of \cite{S}, where the problem is treated by a quite different method from the present one. \begin{prop}\label{lem2.3}~~Uniformly for $1\leq x<N$, as $N\to \infty$ $$ P_x[ \tau_{\,[N,\infty)}<T\,] = \frac{f_+(x)}{N} +o\bigg(\frac{x}{N}\bigg). $$ \end{prop} \vskip2mm {\it Proof.~}~~ By Proposition \ref{lem2.2} $$P_x[\tau_{\,[N,\infty)}<T\,]=P_x[\tau_N<T\,]+o\bigg(\frac{x}{N}\bigg)=\frac{g_{\,(-\infty,0]}(x,N)}{g_{\,(-\infty,0]}(N,N)}+o\bigg(\frac{x}{N}\bigg).$$ It is readily inferred that as $N\to \infty$ $$g_{\,(-\infty,0]}(x,N)=f_+(x)\bigg(\frac{2}{\sigma^2} +o(1)\bigg) ~~~\mbox{uniformly for}~ ~1\leq x\leq N,$$ in particular, $~ g_{\,(-\infty,0]}(N,N)=2\sigma^{-2}N+o(N)$ and substitution leads to the desired relation. ~~~\qed \begin{prop}\label{lem2.4}~~Uniformly for $1\leq x<N$, as $N\to \infty$ $$\frac1{x}E_x[\,S_{\tau([N,\infty))};\, \tau_{\,[N,\infty)}<T\,]=\frac{N}{x}P_x[ \tau_{\,[N,\infty)}<T\,] + o(1), $$ or, what is the same thing, ~$E_x[\,S_{\tau([N,\infty))}-N|\, \tau_{\,[N,\infty)}<T\,]=o(N)$. \end{prop} \vskip2mm {\it Proof.}~~That $f_+$ is non-negative and harmonic on $[1,\infty)$ implies that for $x>0$, $$E_x[f_+(S_{\tau([N,\infty))}); \tau_{\,[N,\infty)}<T\,]\leq f_+(x).$$ Hence, employing Proposition \ref{lem2.3}, one first observes that uniformly for $1\leq x<N$ \begin{eqnarray*} 0\leq E_x[\,f_+(S_{\tau([N,\infty))})-f_+(N)\,; \tau_{\,[N,\infty)}<T\,] &\leq& f_+(x)-f_+(N)P_x[\tau_{\,[N,\infty)}<T\,]\\ &=& f_+(x)(1- f_+(N)/N)+o(x) \end{eqnarray*} and then uses $\lim f_+(x)/x=1$ to find the formula of the proposition.~~~\qed \vskip2mm \begin{lem}\label{lem2.9}~ Uniformly for $0<x<N$, as $N\to\infty$ \begin{equation}\label{23} P_x[\tau_{ [N,\infty)}<\tau_0] =\frac{\sigma^2 a(x)+x}{2N}(1+o(1));~~\mbox{and} \end{equation} $$\frac{\sigma^2 [a(x)+a(N)-a(x+N)]}{2N}(1+o(1))\leq P_x[\tau_{ (-\infty,-N]}<\tau_0] \leq\frac{\sigma^2 a(x)-x}{2N}(1+o(1)).~~~~~~ $$ \end{lem} \vskip2mm {\it Proof.}~ The first relation (\ref{23}) follows from (b) of Proposition \ref{lem2.2} and the last equality in Lemma \ref{lem2.1}. The lower bound of the second relation is obtained in the same way as is the second inequality in Lemma \ref{lem2.1}. For the upper bound we apply (\ref{23}) (or rather its dual) and (\ref{22}) in turn to see \begin{eqnarray*} P_x[\tau_{ (-\infty,-N]}<\tau_0] &=& \sum_{y= -N+1}^{-1}H_x^+(y)P_y[\tau_{(-\infty,-N]}<\tau_0]+\sum_{y\leq-N} H_x^+(y)\\ ~~~~~&&\leq \sum_{y=-\infty}^{-1} H_x^+(y)\frac{\sigma^2 a(y)-y}{2N}(1+o(1)) = \frac{\sigma^2 a(x)-x}{2N}(1+o(1)). \end{eqnarray*} The proof of the lemma is complete. ~~ \qed \begin{prop}\label{prop2.4}~~ Uniformly for $0< |x|<N$, as $N\to\infty$ $$P_x[ \tau_{{\bf Z}\setminus (-N, N)} <\tau_0]= \frac{\sigma^2 a(x)}{N}(1+o(1)). $$ \end{prop} \vskip2mm {\it Proof.~}~ Use Lemma \ref{lem2.9} first to infer that for $0<|x|<N$, $$P_x[ \tau_{(-\infty, -N]} \vee \tau_{[N,\infty)}<\tau_0]\leq C\frac{|x|}{N}\sup_{z>N} \Big(P_z[\tau_{(-\infty, -N]}<\tau_0]+P_{-z}[\tau_{[N,\infty)}<\tau_0]\Big)=o\bigg(\frac{x}{N}\bigg);$$ and then, by employing the inclusion-exclusion formula, to obtain the relation of the proposition. ~~ \qed \vskip4mm\noindent {\bf 2.3.} ~In the following two lemmas we suppose that the walk $S_n$ is aperiodic (i.e., $d_\circ =1$). \begin{lem}\label{lem2.6}~ Let $d_\circ=1$.
Then uniformly for $x, y\in {\bf Z}$, as $n\to\infty$ \begin{eqnarray}\label{eq2.6} &&p^n(y-x)-p^n(-x)-p^n(y)+p^n(0) \nonumber\\ &&={\sf g}_{n}(y-x)-{\sf g}_{n}(-x)-{\sf g}_{n}(y)+{\sf g}_{n}(0)+o({xy}{n^{-3/2}}). ~~~~~~~~~~~~ \end{eqnarray} \end{lem} \vskip2mm\noindent {\it Proof.~}~~Let $\phi(l)$ denote the characteristic function of $Y$: $\phi(l)=E e^{ilY},$ $l\in {\bf R}$. As in the usual proof of the local central limit theorem choose a positive constant $\varepsilon$ so that $|\phi(l)-1|\geq \sigma^2 l^2/4$ for $|l|<\varepsilon$ and set $\eta=\sup_{\varepsilon\leq |l|\leq \pi}|\phi(l)|<1$. Then the error in (\ref{eq2.6}) that we are to show to be $o(xyn^{-3/2})$ is written as \begin{eqnarray*} \,\frac1{2\pi}\int_{-\varepsilon}^\varepsilon \Big([\phi(l)]^n-e^{-n_* l^2/2}\Big)K_{x,y} (l)dl+O(e^{-n_*\varepsilon^2/2}+\eta^n) \end{eqnarray*} where $K_{x,y} (l)=e^{-i(y-x)l}-e^{ixl}-e^{-iyl}+1.$ Since $K_{x,y} (l)=(e^{ixl}-1)(e^{-iyl}-1)$, we have $|K_{x,y} (l)|\leq |xy|l^2$ and, scaling $l$ by $\sqrt{n_*}$ and applying the dominated convergence theorem, we deduce that the integral above is $o(xyn^{-3/2})$ as required. ~~~\qed \begin{lem}\label{lem2.7}~ Let $d_\circ=1$. Then uniformly in $y\in {\bf Z}$, $p^n(y)-p^n(0)={\sf g}_{n}(y)- {\sf g}_{n}(0)+o(y/{n})$ as $n\to\infty$; in particular $|p^n(y)-p^n(0)|\leq C|y|/{n}$. \end{lem} \vskip2mm\noindent {\it Proof.~}~ The proof is similar to the preceding one. We have only to use $|1-e^{ixl}|\leq |xl|$ in place of the bound of $K_{x,y}(l)$. ~~\qed \section{Estimation of $q^n(x,y)$} In this section we prove Theorem \ref{thm1.1}. The proof relies on the asymptotic estimate of the hitting-time distribution $$f_x^{\{0\}}(k)= P_x[\tau_{0}=k]~~~~~(k=1,2,\ldots)$$ as $k\to\infty$, where $\tau_0$ denotes, as in Section 2, the first time that $S^x_n$ hits the origin after time 0. The following theorem is essentially proved in \cite{U}. \vskip2mm\noindent {\bf Theorem A} ~ ~{\it Under the basic assumption of this paper, as $|x|\vee k\to \infty$} \begin{eqnarray}\label{eqA} f_x^{\{0\}}(k)&=&\frac{\sigma a^*(x)e^{- x^2/2\sigma^2k}}{\sqrt {2\pi}\, k^{3/2}} +o\bigg( \frac{|x|+1}{k^{3/2}}\wedge \frac{1}{|x|^{2}+1}\bigg). \end{eqnarray} \vskip2mm\noindent {\it Proof.~} Immediate from Theorems 1.1 and 1.2 of \cite{U}.~~~ \qed \vskip2mm We have only to consider the case $0< |y|\leq x$ in view of the duality of $q^n(x,y)$ and $q^n(y,x)$ (i.e., they are transformed to each other by time reversal). Let $\phi(l)$ denote the characteristic function of $Y$ as in the proof of Lemma \ref{lem2.6}. In what follows we suppose that the walk $S_n$ is aperiodic so that $|\phi(l)|<1$ for $0<|l|\leq \pi$. We shall use the representation \begin{equation}\label{eq3.1} q^n(x,y)=p^n(y-x)-\sum_{k=1}^n f_x^{\{0\}}(n-k)p^k(y) \end{equation} and its Fourier version \begin{equation}\label{eq3.2} q^n(x,y)=\frac1{2\pi}\int_{-\pi}^\pi \Big[\pi_{y-x}(t)-\rho(t)\pi_{-x}(t)\pi_y(t)\Big]e^{-int}dt~~~~~(x\neq 0) \end{equation} and \begin{equation}\label{eq3.20} q^n(0,y)=\frac1{2\pi}\int_{-\pi}^\pi \rho(t)\pi_y(t)e^{-int}dt, \end{equation} where $$\pi_x(t)=\lim_{r\uparrow 1}\sum_{n=0}^\infty p^n(x)e^{itn}r^n=\frac1{2\pi}\int_{-\pi}^\pi \frac{e^{-ix l}}{1-e^{it}\phi(l)}dl~~~~~~(t\neq 0)$$ and $\rho(t)=1/\pi_0(t)$; it holds that $$\rho(t)= \sigma\sqrt{-2it}(1+ o(1))~~~~~~\mbox{as} ~~~t\to 0$$ (cf. \cite{U}, Section 2). Note that $q^n(0,y)=f^{\{0\}}_{-y}(n)$ by duality (or by coincidence of the Fourier coefficients), so that Theorem \ref{thm1.1} in the case $x=0$ is immediate from Theorem A.
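\vskip2mm Let us recall, at least formally, where (\ref{eq3.1}) and (\ref{eq3.2}) come from (this is the standard first-passage decomposition, recorded here for the reader's convenience). For $x\neq 0$ and $y\neq 0$, a path from $x$ to $y$ of length $n$ either avoids the origin at the times $1,\ldots,n$ or visits it for the first time at some time $j\leq n-1$, so that by the Markov property $$p^n(y-x)=q^n(x,y)+\sum_{j=1}^{n-1} f_x^{\{0\}}(j)\,p^{n-j}(y),$$ which is (\ref{eq3.1}) after the substitution $k=n-j$ (the term $k=n$ may be included in the sum under the natural convention $f_x^{\{0\}}(0)=0$). Taking Fourier coefficients in $n$ and using the renewal identity $\sum_{n\geq 1} f_x^{\{0\}}(n)e^{int}r^n\to \rho(t)\pi_{-x}(t)$ as $r\uparrow 1$ ($t\neq 0$) for the generating function of the first hitting time of the origin, one obtains (\ref{eq3.2}).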
The supposition that the walk $S_n$ is aperiodic gives rise to no essential loss of generality. To see this let $d_\circ>1$ and put $\omega=2\pi/d_\circ$. Then one can find a number $\xi$ among $1,\ldots, d_\circ-1$ such that $p(x+\xi)=0$ for all $x\notin d_\circ{\bf Z}$. Hence for all $l\in (-\pi,\pi]$, $$\phi(l+\omega)=\sum e^{i(x+\xi)(l+\omega)}p(x+\xi)=e^{i\xi\omega}\phi(l).$$ Owing to the irreducibility of the walk there exists an integer $k$ such that $k\xi \equiv 1~({\rm mod}~d_\circ)$. Noting $\phi(l+k\omega)=e^{i\omega}\phi(l)$, one observes that $$\pi_x(t- \omega)=\frac1{2\pi}\int_{-\pi}^\pi\frac{e^{-ix(l+k \omega)}}{1-e^{it}e^{-i\omega}\phi(l+k\omega)}dl=\pi_x(t)e^{-ixk\omega};$$ in particular $\rho(t-\omega)=\rho(t)$. It accordingly follows that the integrand of the integral on the right side of (\ref{eq3.2}) is invariant under a shift of $t$ by $\omega$ if (and only if) $(y-x)k\omega -n\omega\in 2\pi {\bf Z}$, namely $(y-x)k \equiv n~({\rm mod}~d_\circ)$ (the only case when $q^n(x,y)\neq 0$), hence the general case is reduced to the case $d_\circ=1$ since all our estimation of $q^n(x,y)$ is based on (\ref{eq3.2}). \begin{thm}\label{thm3.1} Uniformly for $0<|y|\leq x< a_\circ \sqrt n$, as $n\to\infty$ and $|y|/\sqrt n\to 0$ $$q^n(x,y)={\sf g}_{n}(x)\frac{\sigma^4a(x)a(-y)+xy}{n_*}+o\bigg(\frac{xy}{n^{3/2}}\bigg).$$ \end{thm} \vskip2mm {\it Proof.~} First consider the case when not only $y$ but also $x$ is $o(\sqrt n)$. Of the integrand in (\ref{eq3.2}) make the decomposition \begin{eqnarray*} \pi_{y-x}(t)-\rho(t)\pi_{-x}(t)\pi_y(t)&=&\pi_{y-x}-\pi_{-x}-\pi_{y}+\pi_{0} +a(x)a(-y)\rho\\ &&-\, \rho\,{\rm e}_{x}\,{\rm e}_{-y}+a(x)\rho\,{\rm e}_{-y}+a(-y)\rho\,{\rm e}_{x}, \end{eqnarray*} where $$\,{\rm e}_{x}=\,{\rm e}_x(t)=\pi_{-x}(t)-\pi_0(t)+a(x).$$ Noting that $e^{-(\xi-\eta)^2}-e^{-\xi^2}-e^{-\eta^2}+1=e^{-\xi^2-\eta^2}(e^{2\xi\eta}-1)+O(\xi^2\eta^2)=2\xi\eta+o(\xi\eta)$ as $\xi, \eta\to 0$, we apply Theorem A and Lemma \ref{lem2.6} to see \begin{eqnarray*} &&\frac1{2\pi}\int_{-\pi}^\pi \Big[\pi_{y-x}-\pi_{-x}-\pi_{y}+\pi_{0} +a(x)a(-y)\rho\Big]e^{-int}dt\\ &&=p^n(y-x)-p^n(-x)-p^n(y)+p^n(0)+a(x)a(-y)f_0^{\{0\}}(n)\\ &&={\sf g}_{n}(0)\frac{\sigma^4a(x)a(-y)+xy}{n_*}+o\bigg(\frac{xy}{n^{3/2}}\bigg). \end{eqnarray*} In \cite{U} (Section 3) we have made the decomposition $(2\pi)\,{\rm e}_x(t)=\,{\rm c}_x(t)+i\,{\rm s}_x(t)$, where $${\rm c}_x(t)=\int_{-\pi}^\pi \bigg(\frac1{1-e^{it}\phi(l)}-\frac1{1-\phi(l)}\bigg)(\cos xl -1)dl$$ $${\rm s}_x(t)=\int_{-\pi}^\pi \bigg(\frac1{1-e^{it}\phi(l)}-\frac1{1-\phi(l)}\bigg)\sin xl\,dl$$ and verified the estimates given in the following two lemmas. \vskip2mm\noindent {\bf Lemma B1}~ {\it There exists a constant $C$ such that} $$~~~~|{\rm c}_x(t)|\leq Cx^2\sqrt{|t|}, ~~~|{\rm c}'_x(t)|\leq C x^2/\sqrt{|t|},~~~|{\rm c}''_x(t)|\leq Cx^2/|t|^{3/2}.$$ \vskip2mm\noindent {\bf Lemma B2} ~~{\it Suppose that $E |Y|^{2+\delta}<\infty$ for some $0\leq \delta<1$. Then, uniformly in $x\neq 0$, as $t\to 0$} $$~~~~|{\rm s}_x(t)|/|x|=o(|t|^{\delta/2}), ~~~|{\rm s}'_x(t)|/|x|=o(|t|^{\delta/2}/|t|), ~~~|{\rm s}''_x(t)|/|x|=o(|t|^{\delta/2}/|t|^2).$$ \vskip2mm By a simple change of variables we derive the bounds \begin{equation}\label{114} |\pi_{-x}^{(j)}(t)| \leq C|t|^{-\frac12 -j}~~~~~~~(j=0, 1, 2), \end{equation} which in particular give $|\rho^{(j)}(t)|\leq C|t|^{\frac12 -j}$, where the superscript $(j)$ indicates the derivative of $j$-th order.
With the help of the bounds given above as well as of Lemmas B1 and B2 we can readily infer that each of the contributions of $\rho\,{\rm e}_{x}\,{\rm e}_{-y}$, $a(x)\rho\,{\rm e}_{-y}$, and $a(-y)\rho\,{\rm e}_{x}$ to the integral in (\ref{eq3.2}) is $o(xy/n^{3/2})$. E.g., writing $g(t)=(\rho\,{\rm c}_{x}\,{\rm c}_{-y})(t)$, integrating by parts and observing that $|g'(t)|\leq Cx^2y^2 \sqrt{|t|}$ we obtain \begin{equation}\label{lem} \int_{-\pi}^\pi g(t)e^{-int}dt= \frac{1}{in}\int_{-\pi}^\pi g'(t) e^{-int}dt= O\bigg(\frac{x^2y^2}{n^2\sqrt n}\bigg)+\int_{1/n<|t|<\pi} g'(t) e^{-int}dt. \end{equation} Integrate by parts once more and apply the bound $|g''(t)|\leq Cx^2y^2/ \sqrt{|t|}$ to evaluate the last integral to be $O(x^2y^2/n^2\sqrt n)$, which is $o(xy/n^{3/2})$ since $x=o(\sqrt n\,)$. It remains to consider the case $\varepsilon\sqrt n <x<a_\circ \sqrt n$. This time we use the decomposition $$ \pi_{y-x}-\rho\pi_{-x}\pi_y=\pi_{y-x}-\pi_{-x}+a(-y)\rho\,{\pi}_{-x}+\, \rho\,{\pi}_{-x}\,{\rm e}_{-y}. $$ Owing to Lemma \ref{lem2.7} and the present assumption on $x, y$, $p^n(y)-p^n(0)=o(y/n)=o(xy/n^{3/2})$. Hence, again by Lemma \ref{lem2.6} and Theorem A, \begin{eqnarray*} \frac1{2\pi}\int_{-\pi}^\pi \Big[\pi_{y-x}-\pi_{-x} +a(-y)\rho\,{\pi}_{-x}\Big]e^{-int}dt &=&p^n(y-x)-p^n(-x)+a(-y)f_x^{\{0\}}(n)\\ &=&{\sf g}_{n}(x)\frac{\sigma^4a(x)a(-y)+xy}{n_*}+o\bigg(\frac{xy}{n^{3/2}}\bigg). \end{eqnarray*} In the same way as is argued at (\ref{lem}) the contribution of $\rho\,{\pi}_{-x}\,{\rm e}_{-y}$ to the integral in (\ref{eq3.2}) can be evaluated to be $O(y^2/n^{3/2})+o(y/n)$, which is $o(xy/n^{3/2})$. Theorem \ref{thm3.1} has been proved.~~~\qed \vskip2mm Theorem \ref{thm3.1} implies {\bf (i)} of Theorem \ref{thm1.1} (in view of the local central limit theorem). \vskip2mm \begin{prop}\label{prop3.1}~ Uniformly for $x, y$ such that both $x$ and $|y|$ are between $a_\circ^{-1}\sqrt n$ and $a_\circ \sqrt n$, as $n\to\infty$ \begin{eqnarray*} q^n(x,y)&=&{\sf g}_{n}(y-x)-{\sf g}_{n}(y+x)+o({1}/{\sqrt n}) ~~~~~~\mbox{if}~~~~~y>0,\\ &=&o({1}/{\sqrt n}) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\mbox{if}~~~~~y<0. \end{eqnarray*} \end{prop} \vskip2mm\noindent {\it Proof.~} We prove the second relation first. To this end we introduce an auxiliary walk. Let $\tilde p(x)$ be any probability law on ${\bf Z}$ of zero mean and variance $\sigma^2$ such that its third absolute moment is finite and the random walk determined by $\tilde p$ is left continuous, namely $\tilde p(y)=0$ for $y\leq - 2$. Let $\tilde p^n(y-x)$ and $\tilde q^n(x,y)$ denote the corresponding $n$-th step transition probabilities and let $y<0<x$. From the assumed left continuity it follows that $\tilde q^n(x,y)=0$, and hence \begin{eqnarray}\label{I-III} q^n(x,y)&=& p^n(y-x)-\tilde p^n(y-x)-\sum_{k=1}^n \Big(f_x^{\{0\}}(k)p^{n-k}(y)-\tilde f_x^{\{0\}}(k)\tilde p^{n-k}(y)\Big) \nonumber \\ &=& U_n(x,y)+V_n(x,y)+W_n(x,y), \end{eqnarray} where $$U_n(x,y)=p^n(y-x)-\tilde p^n(y-x), $$ $$ V_n(x,y)=-\sum_{k=1}^{n-1} \Big( f_x^{\{0\}}(k)- \tilde f_x^{\{0\}}(k)\Big) p^{n-k}(y) $$ and $$W_n(x,y)=-\sum_{k=1}^{n-1} \tilde f_x^{\{0\}}(k)\Big( p^{n-k}(y)- \tilde p^{n-k}(y)\Big).$$ \vskip2mm\noindent By the local limit theorem $U_n=o(1/\sqrt n)$. We apply Theorem A to see that $\sup_{k\ge 1}|f_x^{\{0\}}(k)-\tilde f_x^{\{0\}}(k)|=o(1/x^2)=o(1/n)$, which combined with the crude bound $\sup_z p^k(z)\leq C/\sqrt k$ shows $V_n=o(1/\sqrt n)$. The bound $W_n=o(1/\sqrt n)$ is verified, e.g.,
by observing that $\sup_{1\leq k\le n}|p^k(y)-\tilde p^k(y)|\sqrt k\to 0$ as $n\to \infty$ uniformly in $y$. For the proof of the first relation we write for $y>0$ \begin{eqnarray*} q^n(x,y)&=&p^n(y-x)-\sum_{k=1}^n f_{-x}^{\{0\}}(k)p^{n-k}(y)+ \sum_{k=1}^n [f_{-x}^{\{0\}}(k)- f_x^{\{0\}}(k)]p^{n-k}(y)\\ &=&p^n(y-x)-p^n(y+x) +q^n(-x,y)+r_n(x,y), \end{eqnarray*} where $r_n(x,y)=\sum_{k=1}^n [f_{-x}^{\{0\}}(k)- f_x^{\{0\}}(k)]p^{n-k}(y)$. In view of what has been shown above as well as the local central limit theorem it suffices to show $r_n(x,y)=o(1/\sqrt n)$. There exists a positive integer $N$ such that for $0<\varepsilon<1/2$ and $n>N$, $$\sum_{1\le k<\varepsilon n} f_{\pm x}^{\{0\}}(k)p^{n-k}(y)\le P_{\pm x}[\tau_0 <\varepsilon n]/\sqrt{\pi n_*}$$ and in view of Donsker's invariance principle the probability on the right side above tends to zero as $\varepsilon\downarrow 0$ uniformly for $x>\sqrt n/a_\circ$. Now the required estimate follows from Theorem A, according to which $f_{-x}^{\{0\}}(k)- f_x^{\{0\}}(k)=o(x/k^{3/2})$ as $x\wedge k\to\infty$. ~~\qed \vskip2mm\noindent \vskip2mm \begin{prop}\label{prop3.2}~ Suppose $E|Y|^{2+\delta}<\infty$ for some $\delta\geq 0$. Then, uniformly for $|x| <a_\circ\sqrt n$ and $|y|> a_\circ^{-1} \sqrt n$, as $n\to\infty$ $$q^n(x,y)=O\bigg(\frac{x}{y}{\sf g}_{4n}(y)\bigg)+o\bigg(\frac{x}{|y|^{2+\delta}}\bigg). $$ \end{prop} \vskip2mm\noindent {\it Proof.~} Suppose $y/2>\sqrt {n_*}$ for simplicity. ~Put $\tau=\tau_0\wedge \tau_{(y/4,\infty)}$. Then \begin{eqnarray*} q^n(x,y)&=&P_x[\tau\le n <\tau_0, S_n=y]\\ &=&P_x[y/4<S_\tau<y/2, n <\tau_0, S_n=y]+P_x[S_{\tau} \ge y/2, n <\tau_0, S_n=y]\\ &=& I+ II~~~~\mbox{(say).} \end{eqnarray*} We employ the inequality $$I\leq \sum_{k=1}^n\sum_{y/4<z<y/2} P_x[\tau=n-k, S_\tau=z]P_z[S_k=y].$$ The following less familiar version of the local central limit theorem is found in \cite{U0} (see its Corollary 6): under the assumption of Proposition \ref{prop3.2} \begin{equation}\label{eqLLT} P_0[S_n=x] ={\sf g}_n(x)\left[1+ P^{n,\nu}(x) \right] +o\left(\frac1{\sqrt n^{1+\delta}}\wedge\frac{\sqrt n }{|x|^{2+\delta}}\right), \end{equation} $(n+|x|\to \infty)$, where $\nu=\lfloor \delta \rfloor$ (the largest integer that does not exceed $\delta$), $P^{n,0}\equiv 0$ and $P^{n,\nu}(x)= \frac1{\sqrt n} P_1\left(\frac{x}{\sqrt n}\right)+\cdots +\frac1{\sqrt n^{\,\nu}} P_{\nu}\left(\frac{x}{\sqrt n}\right)$ if $\nu\geq 1$ with the same real polynomials $P_j$ of degree $j$ as those associated with the Edgeworth expansion. From (\ref{eqLLT}) one deduces $$\max_{1\leq k\leq n}\, \max_{y/4<z<y/2}P_z[S_k=y]=O\bigg({\sf g}_{4n}(y)\bigg)+o\bigg(\frac{\sqrt n}{y^{2+\delta}}\bigg)$$ (use $(y/2)^2> n_*$ for the evaluation of the maximum over $k$). On the other hand $$\sum_{k=1}^n\sum_{y/4<z<y/2} P_x[\tau=n-k, S_\tau=z]\le P_x[\tau_{(y/4,\infty)}<\tau_0]=O\bigg(\frac{x}{y} \bigg).$$ Hence \begin{equation}\label{eqI} I= O\bigg(\frac{x}{y}{\sf g}_{4n}(y)\bigg)+o\bigg(\frac{x\sqrt n}{|y|^{3+\delta}}\bigg). \end{equation} For the evaluation of $II$ we begin with \[ II\leq \sum_{k=1}^n P_x[S_{k} \ge y/2, \tau=k, S_n=y]. \] On the event $\{S_k\geq y/2\}$ we have $\{\tau=k\}=\{\tau>k-1\}$, hence the sum on the right side equals $$ \sum_{k=1}^n E_x\Big[P_{S_{k-1}}[S_{1} \ge y/2, S_{n-k+1}=y];~ \tau >k-1\, \Big].
$$ Since $P_x[\tau> k-1]\le P_x[\tau_0> k-1]=O(x/\sqrt k\,)$ and for $z<y/4$, $$P_{z}[S_{1} \ge y/2, S_{n-k+1}=y]\leq \sum_{w\geq y/2}p(w-z)p^{n-k}(y-w)= o\bigg(\frac{1}{y^{2+\delta}\sqrt{n-k+1}}\bigg),$$ we get $$II=\sum_{k=1}^n P_x[\tau>k-1]\times o\bigg(\frac{1}{y^{2+\delta}\sqrt{n-k+1}}\bigg)= o\bigg(\frac{x}{y^{2+\delta}}\bigg). $$ This together with (\ref{eqI}) shows the estimate of the proposition. ~~~\qed \section{Estimation of $q^n_{(-\infty,0]}$ and $h_x(n,y)$} {\it Proof of Proposition \ref{thm1.2}.}~ In view of {\bf (i)} and {\bf (ii)} of Theorem \ref{thm1.1} it suffices to prove that uniformly in $n$, $$q^n(x,y)- q_{(-\infty,0]}^n(x,y)= o\bigg(\frac{xy}{n^{3/2}}\bigg)~~~~~~~\mbox{as}~~~~~~~x\wedge y\to\infty.$$ This difference may be written as \begin{equation}\label{4.1} \sum_{k=1}^n\sum_{z<0} h_x(k,z) q^{n-k}(z,y). \end{equation} Employing the identity (\ref{eq2.51}) one observes that $$\sum_{1\leq k<n/2}\,\sum_{z<0} h_x(k,z)|z|\leq \sum_{z<0} H_x^+(z)|z|=f_+(x)-x=o(x)~~~~\mbox{as}~~~x\to\infty.$$ Combined with the simple bound (\ref{iv}) this shows that the sum over $k\leq n/2$ in (\ref{4.1}) is $o(xy/n^{3/2})$. The other half of the sum is at most the probability that the time-reversed walk starting at $y$ enters $(-\infty,0]$ by time $n/2$ and ends at $x$ at time $n$, and hence is also estimated to be $o(xy/n^{3/2})$. ~~~\qed \vskip2mm\noindent \begin{lem}\label{lem4.1}~~For each $x=1, 2, \ldots$, uniformly for $n\geq y^2/a_\circ$, as $y\to\infty$ $$q_{(-\infty,0]}^n(x,y)=\frac{2f_+(x)y}{n_*}{\sf g}_n(y)(1+o(1)).$$ \end{lem} \vskip2mm\noindent {\it Proof.}~ Given a positive integer $x$, take an integer $N>x$ and put $\tau=T\wedge \tau_{[N,\infty)}$, the first exit time from $[1, N-1]$. Then \begin{equation}\label{eq4.0} q_{(-\infty,0]}^n(x,y)= E_{x}[ q_{(-\infty,0]}^{n-\tau}(S_{\tau},y); \tau<T\wedge (n+1)] . \end{equation} Let $\alpha$ be any positive number less than 1. For each $\varepsilon>0$ we can choose $N$ large enough that for all $ k, n, z$ and $y$ that satisfy $0\leq k< n^{\alpha}$, $2N<y\leq \sqrt{a_\circ n}$ and $ N\leq z\leq \sqrt{n}/N$, the following three bounds hold: \begin{equation}\label{eq4.1} \bigg|q_{(-\infty,0]}^{n-k}(z,y)-\frac{2zy}{n_*}{\sf g}_n(y)\bigg|< \frac{\varepsilon zy}{n_*^{3/2}}, \end{equation} \begin{equation}\label{eq4.2} |P_{x}[\tau<T\,]-f_+(x)/N|\leq \varepsilon x/N, \end{equation} \begin{equation}\label{eq4.21} E_{x}[ S_{\tau}-N; \tau<T] \leq \varepsilon x, \end{equation} according to {\bf (i)} of Theorem \ref{thm1.1} and Proposition \ref{thm1.2} for (\ref{eq4.1}), to Proposition \ref{lem2.3} for (\ref{eq4.2}) and to Proposition \ref{lem2.4} for (\ref{eq4.21}). Since $\tau$ equals the sum over the sites $w$ of the interval $[1,N-1]$ of the sojourn times spent by the walk at $w$ before leaving the interval, we have $E_{x}[\tau] =\sum_{w=1}^{N-1}\sum_{k=0}^\infty P_x[ S_k=w, k<\tau]\leq \sum_{w=1}^{N-1}g_{(-\infty,0]}(x,w)\leq CxN$, and on using this \begin{eqnarray}\label{ppp} P_{x}[ S_{\tau}> \sqrt{n}/N, \tau< T]&\leq &\sum_{k=1}^\infty P_{x}[ S_{k}>\sqrt{n}/N, \tau = k] \nonumber \\ &\leq&\sum_{k=1}^\infty P_{x}[ Y_{k}> \sqrt{n}/N -S_{k-1},\, \tau > k-1] \nonumber \\ &\leq& \sum_{k=0}^{\infty}P_{x}[\tau > k]P[Y>\sqrt{n}/N -N] = xN^3\times o\bigg( \frac{1}{n}\bigg) ~~~~~~ \end{eqnarray} as $n \to\infty$.
Since $ q_{(-\infty,0]}^{n-k}(\cdot, y)\leq C'/\sqrt n$ if $k<n^\alpha$, this entails that \begin{equation}\label{eq4.3} E_{x}[ q_{(-\infty,0]}^{n-\tau}(S_{\tau},y);S_{\tau}>\sqrt{n}/N, \tau<T\wedge n^\alpha\,] =o(n^{-3/2}) \end{equation} as $n\to \infty$ (with $N, x$ fixed). On the other hand, using (\ref{eq4.1}) we obtain \begin{eqnarray}\label{pppp} &&E_{x}[q_{(-\infty,0]}^{n-\tau}(S_{\tau},y);S_{\tau}\leq \sqrt{n}/N, \tau<T\wedge n^\alpha\, ] \nonumber\\ &&=E_{x}\bigg[\frac{2Ny}{n_*}{\sf g}_n(y); S_{\tau}\leq \frac{\sqrt{n}}{N}, \tau<T\wedge n^\alpha\,\bigg] + E_{x}\bigg[\frac{2(S_{\tau}-N)y}{n_*}{\sf g}_n(y);S_{\tau}\leq \frac{\sqrt{n}}{N}, \tau<T\wedge n^\alpha\, \bigg] \nonumber \\ &&~~+r(n,x,y) \end{eqnarray} with $|r(n,x,y)|\leq \varepsilon y E_x[S_\tau ; \tau<T]/n_*^{3/2}$. Writing $E_x[S_\tau ; \tau<T] = NP_x[\tau<T] +E_x[S_\tau -N; \tau<T]$ we apply (\ref{eq4.21}) and (\ref{eq4.2}) to see that the remainder $r$ as well as the second expectation on the right side in (\ref{pppp}) is dominated in absolute value by $3\varepsilon xy/n_*^{3/2}$. We also have $P_x[\tau> n^\alpha] = O(e^{-\kappa n^\alpha})$ with some $\kappa=\kappa_N>0,$ and hence, owing to (\ref{ppp}), $$P_x[S_\tau\leq \sqrt{n}/N, \tau< T\wedge n^\alpha]=P_x[\tau<T]+o(1/n).$$ Combining these bounds with (\ref{eq4.3}) and (\ref{eq4.0}) shows that for all sufficiently large $y$ and for $n>y^2/a_\circ$, $$\bigg|q_{(-\infty,0]}^n(x,y)-\frac{2Ny}{n_*}{\sf g}_n(y)P_{x}[\tau<T\,]\bigg|< \frac{7\varepsilon xy}{n_*^{3/2}}$$ and substitution from (\ref{eq4.2}) completes the proof of Lemma \ref{lem4.1}.~~~\qed \begin{lem}\label{lem4.2}~~For each $x, y=1, 2, \ldots$, as $n\to\infty$ $$q_{(-\infty,0]}^n(x,y)=\frac{2f_+(x)f_-(y)}{n_*}{\sf g}_n(0)(1+o(1)).$$ \end{lem} \vskip2mm\noindent {\it Proof.}~ Applying Lemma \ref{lem4.1} to the time-reversed walk we have $$ \bigg|q_{(-\infty,0]}^n(z,y)-\frac{2z{\sf g}_n(z)f_-(y)}{n_*}\bigg|< \frac{\varepsilon z{y}}{n^{3/2}} $$ (valid for all $z\geq N$) in place of (\ref{eq4.1}) and we can proceed as in the proof of Lemma \ref{lem4.1}.~~~\qed \vskip2mm Theorem \ref{thm1.3} follows from Proposition \ref{thm1.2} and Lemmas \ref{lem4.1} and \ref{lem4.2} given above. \vskip2mm\vskip2mm\noindent {\it Proof of Theorem \ref{thm1.4}.} ~ The probability $h_x(n,y)$ is represented as \begin{equation}\label{upb-h} h_x(n,y)= \sum_{z>0} q_{(-\infty,0]}^{n-1}(x,z)p(y-z). \end{equation} Write $F(x,n)=2f_+(x){\sf g}_n(x)/n_*$. In view of Theorem \ref{thm1.1} and a local limit theorem, for each $\varepsilon>0$ we can then choose $\eta>0$ such that for all sufficiently large $n$, $$\Big|q_{(-\infty,0]}^{n-1}(x,z)-F(x,n)f_-(z)\Big|\le \varepsilon F(x,n)f_-(z)$$ whenever $0<z\le \eta \sqrt n$ and $0< x<a_\circ\sqrt n$. 
Hence, on using the second expression of $H_{\infty}^+$ in (\ref{q}), the difference $|h_x(n,y)-F(x,n)H_{\infty}^+(y)|$ is at most $$ \varepsilon F(x,n)H_{\infty}^+(y) +\sum_{z>\eta \sqrt n} \Big|q_{(-\infty,0]}^{n-1}(x,z)-F(x,n)f_-(z)\Big|p(y-z).$$ Owing to (\ref{iv}) the summand of the last sum is at most a constant multiple of $n^{-3/2}xzp(y-z)$, so that if $\alpha_n(y)=\sum_{z> \eta \sqrt n}zp(y-z)$, then this sum is at most a constant multiple of $n^{-3/2}x\alpha_n(y)$, hence $$|h_x(n,y)-F(x,n)H_{\infty}^+(y)|\le \varepsilon F(x,n)H_{\infty}^+(y)+Cn^{-3/2}x\alpha_n(y).$$ The proof is now finished by observing that if $E[|Y|^{2+\delta};Y<0]<\infty$, then $$\alpha_n(y)=o\bigg(\frac1{(\sqrt n +|y|)^{1+\delta}}\bigg),~~~ \sum_{y\leq 0}\alpha_n(y)=o(n^{-\delta/2})~~~\mbox{ and}~~~ \sum_{y\leq 0}\alpha_n(y)|y|^\delta=o(1).$$ The first half of Theorem \ref{thm1.4} has been verified. For the second half we verify that for $y\leq 0$ and $x\geq \sqrt n$, \begin{equation}\label{upb-h2} \sum_{z=1}^\infty q^{n}(x,z)p(y-z)\leq C\bigg[\frac{{\sf g}_{4n}(x)}{n^{1/2}}+o\bigg(\frac1{x^{2+\delta}}\bigg)\bigg]H_{\infty}^+(y) +\frac{C}{n^{1/2}}P[Y<y-{\textstyle \frac12}x]. \end{equation} Since $h_x(n,y)$ is not larger than the sum on the left side, this implies (\ref{upb-h20}). For verification of (\ref{upb-h2}) we break the range of summation into three parts $0<z\leq \sqrt n \wedge \frac12 x$, $\sqrt n <z\leq x/2$ and $z>x/2$, and denote the corresponding sums by $I$, $II$ and $I\!I\!I$, respectively. It is immediate from {\bf (iii)} of Theorem \ref{thm1.1} that $I =\big(O( {\sf g}_{4n}(x)x^{-1})+o(x^{-2-\delta})\big)H_{\infty}^+(y)$. The local limit theorem estimate (\ref{eqLLT}) gives that $q^{n}(x,z)\leq p^n(z-x)\leq 2{\sf g}_{n}(x/2)+ o(n^{1/2}x^{-2-\delta})$ for $\sqrt n<z\leq x/2$ and we apply this bound as well as the bound \begin{equation}\label{eq4.-1} \sum_{z>\sqrt n}p(y-z) \leq \frac1{\sqrt n}\sum_{z=1}^\infty zp(y-z) \leq \frac2{\sigma^2\sqrt n}\bigg[\sup_{z\geq 1}\frac{f_-(z)}{z}\bigg] H_{\infty}^+(y) \end{equation} to have $II=[O( {\sf g}_{4n}(x)n^{-1/2}) +o(x^{-2-\delta})]H_{\infty}^+(y)$. Finally $I\!I\!I \leq C n^{-1/2}\sum_{z<y-x/2}p(z)$. These estimates together verify (\ref{upb-h2}). As in (\ref{eq4.-1}) we derive $P[Y<y-{\textstyle \frac12}x]\leq C_1 H^+_{\infty}(y)/x$. We also have ${\sf g}_{4n}(x)\leq C_1/x$. Hence the last relation of the theorem follows from (\ref{upb-h20}). The proof of Theorem \ref{thm1.4} is complete. ~~~\qed \vskip2mm\vskip2mm The proof of Theorem \ref{thm1.5} is based on Theorem \ref{thm1.4}. Put $$\Phi_\xi(t)=\frac{|\xi| e^{-\xi^2/2t}}{\sqrt{2\pi}\, t^{3/2}}~~~~~~~(t>0, \xi \neq 0).$$ Then Theorem \ref{thm1.5} (under $d_\circ =1$) may be restated as follows. \vskip2mm\noindent {\bf Theorem 1.2} ~~{\it Suppose that $E[|Y|^3;Y<0]<\infty$. Let $y<0<x$. Then uniformly for $x,|y|\le a_\circ\sqrt n$, as $n\to\infty$ and $x\wedge|y|\to \infty$} \begin{equation}\label{prop-est} q^n(x,y)=C^+\Phi_{x+|y|}(n_*)+ o\bigg(\frac{x +|y|}{n^{3/2}}\bigg). \end{equation} \vskip2mm\noindent {\it Proof.~} ~ We use the representation $$q^n(x,y)=\sum_{k=1}^n \sum_{z<0}h_x(k,z)q^{n-k}(z,y).$$ Break the right side into three parts by partitioning the range of the first summation as follows \begin{equation}\label{eq4.5} 1\leq k< \varepsilon n; ~~ \varepsilon n \leq k \leq (1-\varepsilon)n;~~ (1-\varepsilon)n<k\leq n \end{equation} and call the corresponding sums $I,~II $ and $I\!I\!I$, respectively. Here $\varepsilon$ is a positive constant that will be chosen small.
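\vskip2mm We note for later use that $\Phi_\xi$ is the density of the first passage time $T_{|\xi|}=\inf\{t>0: B_t=|\xi|\}$ of a standard Brownian motion $B$ (notation introduced only for this remark); by the strong Markov property $T_{|\xi|+|\eta|}$ has the same law as the sum of $T_{|\xi|}$ and an independent copy of $T_{|\eta|}$, whence the convolution identity $$\int_0^1 \Phi_{\xi}(t)\Phi_{\eta}(1-t)dt=\Phi_{|\xi|+|\eta|}(1)$$ that is invoked in the course of the proof.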
Consider the limit procedure as indicated in the theorem. First suppose that $(x\wedge |y|)/\sqrt n$ is bounded away from zero. Then by (\ref{eq1.4}), the last relation of Theorem \ref{thm1.4}, and (\ref{iv}) $$I\leq \sum_{1\leq k <\varepsilon n}\frac C {k^{1/2}\,x}\sum_{z<0} H_{\infty}^+(z)\frac {|zy|}{n^{3/2}} \leq C' \frac{\sqrt{\varepsilon}}n$$ and similarly $I\!I\!I \leq C'' \sqrt{\varepsilon}/n$ (see also (\ref{upb-h})). By the first half of Theorem \ref{thm1.4} and (\ref{iv}) $$II=\sum_{ \varepsilon n\leq k \le (1-\varepsilon)n}\frac{f_+(x){\sf g}_{k}(x)}{k}\,\sum_{z=-x}^{-1}H_{\infty}^+(z) q^{n-k}(z,y)(1+o_\varepsilon(1))+o_\varepsilon\bigg( \frac{1}{n}\bigg).$$ Here (and in the rest of the proof) the estimate indicated by $o_\varepsilon$ may depend on $\varepsilon$ but is uniform in the limit under consideration once $\varepsilon$ is fixed. We substitute from {\bf (i)} of Theorem \ref{thm1.1} for $q^{n-k}$ and observe, on replacing $f_+(x)$ and $a(-y)$, respectively, by $x$ and $-y/\sigma^2$, \[ II=\sum_{ \varepsilon n\leq k \le (1-\varepsilon)n}\frac{x|y|{\sf g}_{k}(x){\sf g}_{n-k}(y)}{\sigma^2 k(n-k)}\,\sum_{z=-x}^{-1}H_{\infty}^+(z)(\sigma^2a(z)-z)(1+o_\varepsilon(1))+o_\varepsilon\bigg( \frac{1}{n}\bigg) \nonumber. \] Noting $x{\sf g}_k(x)/ k= \Phi_{x/\sigma}(k)$, we see $$\sum_{ \varepsilon n\leq k \le (1-\varepsilon)n}\frac{x|y|{\sf g}_{k}(x){\sf g}_{n-k}(y)}{\sigma^2 k(n-k)}=\frac1{n\sigma^2}\int_0^1 \Phi_{x/\sqrt{n_*}}(t)\Phi_{y/\sqrt{n_*}}(1-t)dt + O\bigg(\frac{\varepsilon}{n}\bigg) +o\bigg( \frac{1}{n}\bigg).$$ Here we have used the assumption that $(x\wedge |y|)/\sqrt n$ is bounded away from zero as well as from infinity. Since $\Phi_\xi$ is the density of a Brownian passage-time distribution, we have $$\int_0^1 \Phi_{\xi}(t)\Phi_{\eta}(1-t)dt=\Phi_{|\xi|+|\eta|}(1).$$ Hence \begin{equation}\label{II} II= \frac1{n\sigma^2} \Phi_{(x+|y|)/\sqrt{n_*}\,}(1)\sum_{z=-x}^{-1}H_{\infty}^+(z)(\sigma^2a(z)-z)+ O\bigg(\frac{\varepsilon}{n}\bigg)+ o_\varepsilon\bigg( \frac{1}{n}\bigg). \end{equation} Recalling that $C^+=\sum _{z<0}H_{\infty}^+(z)(\sigma^2 a(z)-z)$ we then see $\sigma^2 nII-C^+ \Phi_{(x+|y|)/\sqrt{n_*}}(1) \to 0$ (as well as $nI+n I\!I\!I \to 0$) as $n\to\infty$ and $\varepsilon\to 0$ in this order. Thus (\ref{prop-est}) is obtained. Next suppose $x\wedge |y|= o(\sqrt n\,)$. By duality one may suppose that $x=o(\sqrt n)$. From Theorem \ref{thm1.4} (with $\delta=1$) and from the bound $H^+_x(y)\leq CH_{\infty}^+(y)$ (see (\ref{17})) one deduces, respectively, \begin{equation}\label{hh} {\rm i)}~~~\sum_{k\geq \varepsilon n} \sum_{z<0} h_x(k,z)|z|\le M_\varepsilon x/\sqrt{ n}~~~~~\mbox{and}~~~~~{\rm ii)}~~~ \sum_{z<-x} H_x^+(z)z =o(1).~~~ \end{equation} Here (and below) $M_\varepsilon$ indicates a constant that may depend on $\varepsilon$ but not on the other variables. On using i) above with the help of the bound $q^k(z,y)\leq C|zy|k^{-3/2}$, $$II \leq M_\varepsilon xy/n^2= o_\varepsilon( {y}{n^{-3/2}})$$ (as $n\to\infty$ under the supposed constraints on $x, y$); similarly on using Theorem \ref{thm1.1} {\bf(i)} together with ii) above $$I=\sum_{1\le k< \varepsilon n}\,\sum_{z=-x}^{-1}h_x(k,z)\cdot \frac{\sigma^4a(z)a(-y)+zy}{\sigma^2(n-k)}{\sf g}_{n-k}(y)(1+o_\varepsilon(1))+o\bigg( \frac{y}{n^{3/2}}\bigg).$$ For the evaluation of the last double sum we may replace $(n-k)^{-1}{\sf g}_{n-k}$ by $n^{-1}{\sf g}_{n} (1+O(\varepsilon))$.
Since $x\wedge|y|$ is supposed to go to infinity, we may also replace $a(-y)$ by $|y|/\sigma^2$ and in view of (\ref{hh}) we may extend the range of the double summation in the above expression of $I$ to the whole quadrant $k\geq 1, z<0$; moreover the sum $\sum_{z=-\infty}^{-1}H^+_x(z)[\sigma^2 a(z)-z]$ that accordingly comes out and equals $\sigma^2 a(x)-x$ may be replaced by $C^+$ (see Lemma \ref{lem2.50}, Corollary \ref{lem2.5} and (\ref{eq2.51})). This leads to $$ I= C^+|y|{\sf g}_{n}(y)n_*^{-1}(1+O(\varepsilon))+o_\varepsilon(y{n^{-3/2}}).$$ As to $I\!I\!I$, first observe that $$\sum_{k=1}^{\varepsilon n} q^k(z,y)=g_{\{0\}}(z, y)-r_n\leq C(|z|\wedge |y|)~~~\mbox{with}~~~0\leq r_n\leq C|zy|/\sqrt{\varepsilon n},$$ as follows from (\ref{iv}) and (\ref{g}). If $y/\sqrt n$ is bounded away from zero, then $ I\!I\!I=O(x/n^{3/2})=o(y/n^{3/2})$. On the other hand, applying Theorem \ref{thm1.4} we find that if $y=o(\sqrt n)$, $$ I\!I\!I= f_+(x){\sf g}_{n}(x)n_*^{-1}\sum_{z<0}H_{\infty}^+(z)g_{\{0\}}(z, y)(1+O(\varepsilon))+ o_\varepsilon\Big(x{n^{-3/2}}\Big),$$ hence in view of $g_{\{0\}}(z, y)=a(z)-\sigma^{-2}z(1+o(1))$ (as $y\to -\infty$ uniformly for $z<0$) $$ I\!I\!I= C^+x{\sf g}_{n}(x)n_*^{-1}(1+O(\varepsilon))+o_\varepsilon\Big(x{n^{-3/2}}\Big).$$ Adding these contributions yields the desired formula. ~~~\qed \section{Estimation of $Q^+_n$} {\it Proof of Proposition \ref{prop1.3.1}.}~ We apply {\bf (i)} and {\bf (iii)} of Theorem \ref{thm1.1}. On noting that $\sum_{y<0}(\sigma^2 a(-y)+y){\sf g}_n(y)=o(\sqrt n)$, as $|x|/\sqrt n\to 0$, \[ Q_x^+(n) = \frac{\sigma^2a(x)-x}{n_*}\sum_{-a_\circ\sqrt n<y<0} (-y){\sf g}_n(y)\Big[1+o(1)\Big] +O\bigg(x\sum_{y\le -a_\circ\sqrt n}\frac {{\sf g}_{4n}(y)}{-y} \bigg) +o\bigg(\frac{x}{n^{1/2}}\bigg), \] which shows the first assertion of Proposition \ref{prop1.3.1} since $a_\circ$ is arbitrary and $\int_0^\infty ue^{-u^2/2}du=1$. For the second one, the case $x=o(\sqrt n)$ follows from what has just been proved. In view of Proposition \ref{thm1.2} and {\bf (i)} of Theorem \ref{thm1.1}, it therefore suffices to show that $\sum_y q^n_{(-\infty, 0]}(x,y)=\int_{-x}^x {\sf g}_n(t)dt (1+o(1))$ uniformly for $|x|\geq a_\circ^{-1}\sqrt n $, which, however, follows from Donsker's invariance principle together with the reflection principle. The last assertion of the proposition follows from Theorem \ref{thm1.5}, {\bf (iii)} of Theorem \ref{thm1.1} and the fact that $\int_0^{a_\circ} \Phi_{|\xi|+\eta}(1)d\eta\to (2\pi)^{-1/2}$ as $\xi\to 0, a_\circ \to\infty$. ~~~\qed \vskip2mm \begin{lem} \label{lem5.1}~ ~~Suppose $E[|Y|^{3}; Y<0]<\infty$. Then uniformly for $x> a_\circ^{-1}\sqrt n$, \begin{equation}\label{Q-4} Q_x^+(n)\leq C{\sf g}_{4n}(x)+o\Big(\sqrt n /x^{3}\Big) +C\sum_{y<0}|y|P[Y<y-{\textstyle \frac12}x]; \end{equation} in particular $\sum_{x> M \sqrt n} Q^+_x(n)\to 0$ as $M\to\infty$ uniformly in $n$. \end{lem} \vskip2mm {\it Proof.~}~ As in the proof of Theorem \ref{thm1.5} we use the representation \begin{equation}\label{Q-2} Q_x^+(n)=\sum_{k=1}^n \sum_{z<0}h_x(k,z)Q^+_z({n-k}). \end{equation} By Proposition \ref{prop1.3.1}, $Q_z^+(n-k)\leq C|z|/\sqrt{n-k}$ for all $z$, since $Q^+_x(n)\leq 1$. From the assumption $E[|Y|^{3}; Y<0]<\infty$ it follows that $\sum_{z<0}H_\infty^+(z)|z|<\infty$. Note that $\sum_{k=1}^{n-1} k^{-1/2}(n-k)^{-1/2}$ is bounded. Substitution from (\ref{upb-h20}) then leads to the first estimate (\ref{Q-4}).
The second relation is immediate from it if one notes that $\sum_{x=0}^\infty\sum_{y<0}|y|P[Y<y-{\textstyle \frac12}x]$ is dominated by a constant multiple of $E[|Y|^3; Y<0]$ (as one sees by performing the summation first over $y$ and then over $x$). ~~~\qed \vskip2mm\noindent {\it Proof of Theorem \ref{thm1.3.2}.} ~Suppose $E[|Y|^{3}; Y<0]<\infty$. Then in view of Lemma \ref{lem5.1} we have only to evaluate $\sum_{1\leq x\leq M \sqrt n} Q^+_x(n)$ for each $M$. Now apply Theorem \ref{thm1.5} with the observation that $\int_{\xi>0}d\xi \int_{\eta>0} \Phi_{\xi+\eta}(1) d\eta = (2\pi)^{-1/2}\int_0^\infty e^{-\xi^2/2}d\xi=1/2$, and one immediately finds the first formula of the theorem. If $E[|Y|^{3}; Y<0]=\infty$, one has only to look at the relation (\ref{II}), which is valid without the third moment condition; in fact, $\sum_{x=1}^\infty Q_x^+(n)$ is bounded below by the sum of $II$ in (\ref{II}) over $1\leq x<\sqrt n$, $-\sqrt n<y<0$, and the latter diverges to $+\infty$. ~~~\qed \vskip2mm\noindent {\it Proof of Corollary \ref{cor1.2} }.~ Lemma \ref{lem5.1} shows that if one writes $$E[N_n(\ell)]=\sum_{1\leq x\leq M\sqrt {n_*}}\,\,\sum_{-\ell\sqrt {n_*}\leq y\leq -1}m_n(x) q^n(x,y)+ \varepsilon_M(n),$$ then $ \varepsilon_M(n)\to 0$ as $M\to\infty$ uniformly in $n$. According to Theorem \ref{thm1.5} and the assumption on $m_n(x)$ the double sum on the right side is asymptotically equal to $$C^+\int_0^M d\xi\int_0^{\ell} \Phi_{\xi+\eta}(1)d\eta=C^+\frac1{\sqrt{2\pi}}\int_0^M\big(e^{-\xi^2/2}-e^{-(\ell+\xi)^2/2}\big)d \xi.$$ Since $M$ is arbitrary, we may let $M\to\infty$ to find the desired formula.~~\qed \section{Absorption at the origin with probability $\alpha\in (0,1)$} Let $\alpha\in (0,1)$ and consider the walk that, each time it is about to visit the origin, is absorbed with probability $\alpha$ and continues to walk with probability $1-\alpha$ (thus the walk visits the origin if it is not absorbed, while if it is absorbed it disappears without making the visit). Let $q^n_\alpha(x,y)$ be the $n$-th step transition probability of this walk (set $q_\alpha^0(x,y)={\bf 1}(x=y)$ as usual) and denote by $r^n_{\alpha}(x,y)$ the probability that the walk, starting at $x$, has visited the origin but has not been absorbed by the time $n$, at which time it is at $y$, so that $$q_\alpha^n(x,y)=q^n(x,y)+ r_\alpha^n(x,y).$$ \begin{prop}\label{prop6.1}~~ Let $d_\circ=1$. Then uniformly for $|x|\vee |y|<a_\circ \sqrt n$,~ as $n\to\infty$ \begin{equation}\label{rr} r_\alpha^n(x,y)=\frac{1-\alpha}{\alpha}\cdot \frac{\sigma^2 [a^*(x)+ a^*(-y)]}{ n}\,{\sf g}_n(|x|+|y|)(1+o(1)). \end{equation} \end{prop} \vskip2mm\noindent {\it Proof.~}~~ Set $f_x^{(1)}(k)=f_x^{\{0\}}(k)$ and, for $j=2, 3,\ldots$, inductively define $f_x^{(j)}(k) =\sum_{l=1}^k f_x^{(j-1)}(k-l)f_0^{(1)}(l)$ (the probability that the $j$-th visit to the origin occurs at time $k$). Then for $n=1,2,\ldots$, $$r_\alpha^n(x,y)=\sum_{j=1}^\infty\sum_{k=1}^n(1-\alpha)^j f_x^{(j)}(k)q^{n-k}(0,y).$$ (This is valid even for $y=0$, when the second sum concentrates on $k=n$.) We have $\hat f_x^{(j)}(t):=\sum_{k} f_x^{(j)}(k) e^{ikt}=\hat f_x^{\{0\}}(t)[\hat f_0^{\{0\}}(t)]^{j-1}.$ One can readily derive \begin{equation}\label{111} \hat f_x^{\{0\}}(t)=\pi_{-x}(t)\rho(t) ~~(x\neq 0)~~~~\mbox{and}~~~\hat f_0^{\{0\}}(t)=1-\rho(t) \end{equation} (cf. \cite{U}). We also have $ q^k(0,y) =f^{\{0\}}_{-y}(k)$ as previously noted.
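\vskip2mm Note that the factor $\gamma/(1+\gamma\rho(t))$ appearing in the next display arises from summing a geometric series: $$\sum_{j=1}^\infty(1-\alpha)^j[1-\rho(t)]^{j-1}=\frac{1-\alpha}{1-(1-\alpha)(1-\rho(t))}=\frac{\gamma}{1+\gamma \rho(t)}~~~~\mbox{with}~~~\gamma=\frac{1-\alpha}{\alpha}.$$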
Now, employing the second identity in (\ref{111}) we see \[ \sum_{k} r^{k}_\alpha(x,y) e^{ikt}=\frac{\gamma\hat f^{\{0\}}_x(t)\hat f^{\{0\}}_{-y}(t)}{1+\gamma \rho(t)}~~~~\mbox{with}~~~\gamma=\frac{1-\alpha}{\alpha}.\] Hence, for all $x, y\in {\bf Z}$ $$r_\alpha^n(x,y)=\frac{\gamma}{2\pi}\int_{-\pi}^\pi \frac{ \hat f^{\{0\}}_x(t)\hat f^{\{0\}}_{-y}(t)}{1+\gamma \rho(t)}e^{-int}dt.$$ Since $\hat f^{\{0\}}_x(t)\hat f^{\{0\}}_{-y}(t)$ is the characteristic function of the convolution $f_x^{\{0\}}*f_{-y}^{\{0\}}$, $$r_\alpha^n(x,y)=\gamma f_x^{\{0\}}*f_{-y}^{\{0\}}(n)- \frac{\gamma^2}{2\pi}\int_{-\pi}^\pi \frac{ \hat f^{\{0\}}_x(t)\hat f^{\{0\}}_{-y}(t)\rho(t)}{1+\gamma \rho(t)}e^{-int}dt.$$ Under the constraints $|x|\vee|y|<a_\circ\sqrt n$ and $\varepsilon\sqrt n< |x|\wedge |y| $ with $\varepsilon>0$ we apply Theorem A and, making a scaling argument, infer that the first term on the right side above agrees with that of the formula (\ref{rr}) multiplied by the factor $(|x|+|y|)\sigma^2 [a^*(x)a^*(-y)]/(a^*(x)+a^*(-y))|xy|$, which factor may be replaced by 1 since $|x|\wedge |y|\to \infty$. The second term is readily evaluated to be negligible (see (\ref{113}) below and the argument following it as well as (\ref{114}), which provides relevant properties of $\rho$). By the first identity in (\ref{111}) we have \begin{equation}\label{113} \hat f^{\{0\}}_x(t)\hat f^{\{0\}}_{-y}(t)=\rho^2(t)\pi_{-x}(t)\pi_y(t)~~~~~\mbox{ if}~~~ x\neq 0, y\neq 0. \end{equation} As in Section 3 we write $\pi_{-x}(t)={\rm e}_x(t) +\pi_0(t)-a(x)$. Then \begin{eqnarray*} \rho^2(t)\pi_{-x}(t)\pi_y(t)&=&\rho^2{\rm e}_{-x}{\rm e}_y+a(x)a(-y)\rho^2-(a(-y){\rm e}_x+a(x){\rm e}_{-y})\rho^2-1\\ && +~ (\pi_{-x}+\pi_y)\rho. \end{eqnarray*} The contribution to $r_\alpha^n(x,y)$ of the last term is $\gamma(f_x^{\{0\}}(n)+f_{-y}^{\{0\}}(n))$. Those of the other terms are all $o((|x|\vee|y|)/n^{3/2})$ if $|x|\wedge |y|=o(\sqrt n)$: for the proof we need the estimates of ${\rm c}_x'''(t)$ and ${\rm s}_x'''(t)$ (which require no further moment condition) in addition to those given in Lemmas B1 and B2. Now we may conclude that for $x\neq 0, y\neq 0$, \begin{equation}\label{112} r_\alpha^n(x,y)=\frac{ 1-\alpha}{\alpha}(f_x^{\{0\}}(n)+f_{-y}^{\{0\}}(n))(1+o(1))~~~~~\mbox{as}~~~~\frac{|x|\wedge |y|}{\sqrt n}\to 0, \end{equation} which agrees with (\ref{rr}) owing to Theorem A again. If $x= 0$, then $\hat f^{\{0\}}_x\hat f^{\{0\}}_{-y}=\rho(1-\rho)\pi_y=\rho\pi_y-\rho-\rho^2{\rm e}_{y}+\rho^2a(-y)$ and we readily obtain (\ref{112}). The case $y=0$ is similar. ~~~\qed \section{Appendix} ~ Under the basic assumption of this paper Hoeffding \cite{H} shows that $\Re\{(1-e^{ixl})\phi^k(l)\}$ is summable on $\{k=1,2,\ldots\}\times \{-\pi<l<\pi\}$ and hence the series that defines $a(x)$ is absolutely convergent; and, as a consequence of it, that \begin{equation}\label{a(n)} a(x)= \frac1{2\pi} \int_{-\pi}^\pi \Re\bigg\{ \frac{1-e^{ixl}}{1-\phi(l)}\bigg\}dl \end{equation} (see \cite{K} and the references contained in it for more information on $a(x)$). In this appendix we derive an asymptotic estimate of $a(n)$ under the moment condition $E|Y|^{2+\delta}<\infty$ ($0\leq \delta\leq 2$). We also include a proof of (\ref{a(n)}) for the reader's convenience. Put for $0<|l|\leq \pi$, $$ \phi_c(l)=\Re \phi (l)=E[\cos lY]~~~\mbox{ and}~~~~ \phi_s(l)=\Im \phi(l)=E[\sin lY].$$ Then \begin{equation}\label{a-1} \Re\Bigg\{ \frac{1-e^{ixl}}{1-\phi(l)} \Bigg\}=\frac{1-\phi_c(l)}{|1-\phi(l)|^2} (1-\cos xl)+ \frac{\phi_s(l)}{|1-\phi(l)|^2} \sin xl.
\end{equation} Noticing $\phi_s(l)=E[\sin Yl -Yl]$, one infers that \begin{equation}\label{s} \int_{-\pi}^\pi \frac{|\phi_s(l)l|}{|1-\phi(l)|^2}\,dl \leq E\int_{-\pi}^\pi \frac{|\sin Yl -Yl|}{|1-\phi(l)|^2}|l| dl\leq CE\bigg[Y^2\int_{0}^{\pi|Y|}\frac{|\sin u -u|}{u^3}du\bigg]<\infty; \end{equation} hence by the dominated convergence theorem \begin{equation}\label{a-4} \lim_{|x|\to\infty}\frac1{x}\int_{-\pi}^\pi \frac{|\phi_s(l)|}{|1-\phi(l)|^2} |\sin xl |dl =0. \end{equation} From this together with the equality $\int_{-\infty}^\infty (1-\cos u)u^{-2}du =\pi$ we conclude the following result. \begin{lem}\label{lem_a-1}~As $|x|\to\infty$ $$ \int_{-\pi}^\pi \Bigg|\Re\Bigg\{ \frac{1-e^{ixl}}{1-\phi(l)} \Bigg\}\Bigg|dl =O(x) ~~~\mbox{and} ~~~ \frac1{2\pi}\int_{-\pi}^\pi \Re\Bigg\{ \frac{1-e^{ixl}}{1-\phi(l)}\Bigg\}dl = \frac{|x|}{\sigma^2}+o(x).$$ \end{lem} \vskip2mm\noindent The next proposition in particular implies the identity (\ref{a(n)}). \begin{prop}\label{thm_a-2}~ With a uniformly bounded term $o_b(1)$ that tends to zero as $K\to\infty$ $$\sum_{k=0}^K\Big[p^k(0)-p^k(-x)\Big] = \frac1{2\pi} \int_{-\pi}^\pi \Re\bigg\{ \frac{1-e^{ixl}}{1-\phi(l)}\bigg\}dl (1+o_b(1)).$$ \end{prop} \vskip2mm {\it Proof.} ~ Since we have the Tauberian condition $[p^k(0)-p^k(-x)]=o(1/k)$ (with $x$ fixed) as is assured by Lemma \ref{lem2.7}, owing to the corresponding Tauberian theorem it suffices (apart from the boundedness of convergence) to show that the Abelian sum \begin{equation}\label{AS} \frac1{x}\sum_{k=0}^\infty [p^k(0)-p^k(-x)]r^k = \frac1{2\pi x}\int_{-\pi}^\pi\Re\Bigg\{\frac{1-e^{ixl}}{1-r\phi(l)}\Bigg\}dl \end{equation} converges as $r\uparrow 1$ to $1/x$ times the right side of (\ref{a(n)}). The proof of this convergence, however, is routine and omitted. From (\ref{46}) it is clear that the convergence above is bounded uniformly in $x$, which combined with the bound $|p^k(0)-p^k(-x)|\leq C|x/k|$ (cf. Lemma \ref{lem2.7}) implies that the error term $o_b(1)$ is uniformly bounded (see the proof of Tauber's theorem given in $\S$1.23 of \cite{T}). ~~ \qed \vskip2mm\noindent {\sc Remark.}~ The remainder term $o_b(1)$ in Proposition \ref{thm_a-2} does not uniformly (in $x$) approach zero since the Abelian sum in (\ref{AS}) tends to zero as $x\to\infty$ for each $r<1$. \vskip2mm\vskip2mm If $E|Y|^3<\infty$, put \begin{equation}\label{b&C} \lambda_3=\frac1{3\sigma^2} E[Y^3]~~~~\mbox{and}~~~~C^*=\frac1{2\pi}\int_{-\pi}^\pi \bigg[\frac{\sigma^2}{1-\phi(l)}-\frac1{1-\cos l}\bigg]dl, \end{equation} where the integral of the imaginary part is understood to vanish because of skew symmetry. Since $\int_0^\pi\big[(1-\cos l)^{-1}-2l^{-2}\big]dl=2/\pi$, the constant $C^*$ may alternatively be given by \begin{equation}\label{C_def2} C^*=\frac{\sigma^2}{2\pi}\int_{-\pi}^\pi\bigg[\Re\bigg\{\frac1{1-\phi(l)}\bigg\}-\frac2{\sigma^2l^2}\bigg]dl -\frac2{\pi^2}. \end{equation} The integral in (\ref{C_def2}) as well as the real part of the integral in (\ref{b&C}) is absolutely convergent (under $E[|Y|^3]<\infty$). In fact, in the expression \begin{equation}\label{1-psi} \Re\bigg\{\frac1{1-\phi(l)}\bigg\}=\frac{-\phi_s^2}{|1-\phi|^2(1-\phi_c)}+\frac1{1-\phi_c} \end{equation} the first term on the right side is bounded and, since $\phi_c-1+\frac12\sigma^2 l^2=E[\cos Yl-1+\frac12(Yl)^2]$, \begin{equation}\label{1-psi2}\int_{-\pi}^\pi \bigg|\frac1{1-\phi_c(l)}-\frac2{\sigma^2l^2}\bigg|dl \leq CE\bigg[|Y|^3\int_{0}^{\pi|Y|} \frac{\cos u-1+\frac12 u^2 }{u^4}du\bigg]<\infty.
\end{equation} We write ${\rm sign\,} t\,= t/|t|$ $(t\neq 0)$. Suppose that $E |Y|^{2+\delta}<\infty$ for some $0\leq \delta\leq 2$. \vskip2mm \begin{prop}\label{cor6.1}~~ ~ If $0\leq\delta<1$, then $\sigma^2a(x)=|x|+o(|x|^{1-\delta})$ (as $|x|\to\infty$) where the error term is bounded if and only if $E[|Y|^3]<\infty$. If $1\leq\delta<2$, then $$\sigma^2a(x)=|x|+C^*-({\rm sign\,} x)\lambda_3+o(|x|^{1-\delta}).$$ If $\delta=2$, this formula is valid with $o(|x|^{1-\delta})$ replaced by $O(1/x)$. \end{prop} \vskip2mm\noindent {\it Proof.}~ ~ In the case $\delta=0$ the assertion is proved both in \cite{H} and in \cite{S}, and is in fact immediate from (\ref{a(n)}) and Lemma \ref{lem_a-1}. The integral in (\ref{a(n)}) may be written as \begin{equation}\label{a000} \sigma^2a(x)=\frac{\sigma^2}{2\pi}\int_{-\pi}^\pi\bigg[\frac1{1-\phi(l)}-\frac2{\sigma^2l^2}\bigg](1-e^{ixl})dl+ \frac{1}{2\pi}\int_{-\pi}^\pi\frac2{l^2}(1-e^{ixl})dl. \end{equation} The second term on the right side of (\ref{a000}) equals \begin{equation}\label{2ndterm} \frac{4}{2\pi}\bigg[|x|\int_{0}^\infty \frac{1-\cos l}{l^2}dl-\int_{\pi}^\infty \frac{1-\cos xl}{l^2}dl\bigg]=|x|-\frac2{\pi^2}+O(1/x^2). \end{equation} Let $0<\delta<1.$ The first term is then $o(|x|^{1-\delta})$. In fact, since $\phi(l)-1+\frac12 \sigma^2 l^2=o(|l|^{2+\delta})$, for any $\varepsilon>0$ there exists a constant $M$ such that $$\int_0^\pi\frac{|\phi(l)-1+\frac12 \sigma^2l^2|}{|1-\phi(l)|l^2}|1-e^{ixl}|dl\leq \varepsilon |x|^{1-\delta}\int_0^\infty u^{-2+\delta}|1-e^{iu}|du+M.$$ The assertion concerning boundedness follows from Corollary \ref{lem2.5}. Let $1\leq \delta <2$. Then by (\ref{a000}), (\ref{C_def2}) and (\ref{2ndterm}) $$\sigma^2 a(x)=|x|+C^*+I_c+I_s+O(x^{-2}),$$ where $$I_c=-\frac{\sigma^2}{2\pi}\int_{-\pi}^\pi \bigg[\Re\bigg\{\frac1{1-\phi(l)}\bigg\}-\frac2{\sigma^2l^2}\bigg]\cos xl\, dl~~~~~~ \mbox{and}~~~~~~I_s=\frac{\sigma^2}{2\pi}\int_{-\pi}^\pi \frac{\phi_s}{|1-\phi|^2}\sin xl\, dl.$$ Recall $\phi_s(l)= -\frac12\sigma^2\lambda_3l^3+o(|l|^{2+\delta})$. Then, employing a truncation argument with a smooth cut-off function $w(t)$ (i.e., $w$ vanishes outside $(-\pi,\pi)$ and equals $1$ in a neighborhood of zero) along with integration by parts one infers that \begin{eqnarray*} I_s&=&-\lambda_3\frac{\sigma^4}{2\pi}\int_{0}^\infty \frac{\sin xl}{\sigma^4l/4}dl+\int_{-\pi}^\pi {r(l)w(l)\sin xl}\,dl+o(1/x^3)\\ &=&-{\lambda_3\,{\rm sign}\,x}+o(|x|^{1-\delta}). \end{eqnarray*} Here the remainder term $r(l)=o(|l|^{\delta-2})$ with $ r'(l)=o(|l|^{\delta-3})$. Similarly we obtain $I_c=o(|x|^{1-\delta})$ if $\delta>1$. If $\delta=1$, an application of the Riemann-Lebesgue lemma with (\ref{1-psi}) and (\ref{1-psi2}) taken into account shows $I_c =o(1)$. The case $\delta=2$ is similar and omitted. ~~ \qed \vskip2mm From Subsection {\bf 2.1} we extract the following result. \begin{prop} \label{cor2.2} ~~ Suppose that the walk is not left continuous. Then both $f_+(x)-x$ and $\sigma^2a(x)-f_+(x)$ are positive for all $x>0$ and tend to extended positive numbers as $x\to\infty$, which are finite if and only if $E[Y^3; Y>0]<\infty$. Moreover if $E[Y^3; Y>0]=\infty$, then $$\lim_{x\to\infty}\; \frac{f_+(x)-x}{\sigma^2a(x)- x}=\frac12.$$ \end{prop} \vskip2mm\noindent {\it Proof}.~ The assertions follow on combining Lemma \ref{lem2.50} with (\ref{eq2.51}) and (\ref{eq2.50}). ~\qed \begin{cor} \label{cor7.5} ~ Suppose that $E|Y|^3<\infty$. Then $\lim_{x\to \pm \infty} (\sigma^2 a(x)-|x|)= C^*\mp \lambda_3\geq 0\,;$ in particular $C^*=\lambda_3$ (resp.
$-\lambda_3$) if and only if the walk is left (resp. right) continuous. \end{cor} \vskip2mm\noindent {\it Proof}.~ The first half follows from Proposition \ref{cor6.1} and the second one from the proposition above. ~\qed \vskip2mm\noindent \vskip4mm\noindent {\bf Acknowledgments.}~ The author wishes to thank the anonymous referee for carefully reading the original manuscript and pointing out several significant errors therein. \vskip2mm\vskip2mm\noindent {\bf Note added in proof.}~ Some asymptotic estimates of the transition probability of the walk killed on a half line are obtained in \cite{BD}, \cite{Car}, \cite{D}, \cite{VW} and \cite{A-D}. In the first four papers the problem is considered for wider classes of random walks: the variance may be infinite, with the law of $Y$ in the domain of attraction of a stable law \cite{VW}, \cite{D} or of the normal law \cite{BD}, \cite{Car}; and $Y$ is not necessarily restricted to arithmetic variables. The very recent paper \cite{D} describes the asymptotic behavior of $p^n_{(-\infty,0]}(x,y)$ valid uniformly within the region of stable deviation of the space-time variables, and it in particular contains Theorem \ref{thm1.3} and Corollary \ref{cor1.1} as special cases. The result for the region $a_\circ^{-1}\sqrt n<x, y <a_\circ \sqrt n$ is also contained in \cite{BD}. Theorem \ref{thm1.3} with $x=1$ and $y=o(\sqrt n)$ is also a special case of Proposition 1 of \cite{BD} and Theorem 5 of \cite{VW}, and is readily derived from Theorem 2 of \cite{Car}. Similar results are proved in \cite{A-D}, where Corollary \ref{cor1.1} is also obtained. The methods used in these papers rest fully on the Wiener-Hopf factorization or its recently developed extension, and ours is quite different from them. \end{document}
Tensor product of fields In mathematics, the tensor product of two fields is their tensor product as algebras over a common subfield. If no subfield is explicitly specified, the two fields must have the same characteristic and the common subfield is their prime subfield. The tensor product of two fields is sometimes a field, and often a direct product of fields; in some cases, it can contain non-zero nilpotent elements. The tensor product of two fields expresses in a single structure the different ways to embed the two fields in a common extension field. Compositum of fields First, one defines the notion of the compositum of fields. This construction occurs frequently in field theory. The idea behind the compositum is to make the smallest field containing two other fields. In order to formally define the compositum, one must first specify a tower of fields. Let k be a field and L and K be two extensions of k. The compositum, denoted K.L, is defined to be $K.L=k(K\cup L)$ where the right-hand side denotes the extension generated by K and L. This assumes some field containing both K and L. Either one starts in a situation where an ambient field is easy to identify (for example if K and L are both subfields of the complex numbers), or one proves a result that allows one to place both K and L (as isomorphic copies) in some large enough field. In many cases one can identify K.L as a vector space tensor product, taken over the field N that is the intersection of K and L. For example, if one adjoins √2 to the rational field $\mathbb {Q} $ to get K, and √3 to get L, it is true that the field M obtained as K.L inside the complex numbers $\mathbb {C} $ is (up to isomorphism) $K\otimes _{\mathbb {Q} }L$ as a vector space over $\mathbb {Q} $. (This type of result can be verified, in general, by using the ramification theory of algebraic number theory.) Subfields K and L of M are linearly disjoint (over a subfield N) when in this way the natural N-linear map of $K\otimes _{N}L$ to K.L is injective.[1] Naturally enough this isn't always the case, for example when K = L. When the degrees are finite, injectivity is equivalent here to bijectivity. Hence, when K and L are linearly disjoint finite-degree extension fields over N, $K.L\cong K\otimes _{N}L$, as with the aforementioned extensions of the rationals. A significant case in the theory of cyclotomic fields is that for the $n$th roots of unity, for n a composite number, the subfields generated by the $p^{k}$th roots of unity for prime powers dividing n are linearly disjoint for distinct p.[2] The tensor product as ring To get a general theory, one needs to consider a ring structure on $K\otimes _{N}L$. One can define the product $(a\otimes b)(c\otimes d)$ to be $ac\otimes bd$ (see Tensor product of algebras). This formula is multilinear over N in each variable, and so defines a ring structure on the tensor product, making $K\otimes _{N}L$ into a commutative N-algebra, called the tensor product of fields. Analysis of the ring structure The structure of the ring can be analysed by considering all ways of embedding both K and L in some field extension of N. The construction here assumes the common subfield N, but does not assume a priori that K and L are subfields of some field M (thus getting round the caveats about constructing a compositum field).
Whenever one embeds K and L in such a field M, say using embeddings α of K and β of L, there results a ring homomorphism γ from $K\otimes _{N}L$ into M defined by: $\gamma (a\otimes b)=(\alpha (a)\otimes 1)\star (1\otimes \beta (b))=\alpha (a).\beta (b).$ The kernel of γ will be a prime ideal of the tensor product; and conversely any prime ideal of the tensor product will give a homomorphism of N-algebras to an integral domain (inside a field of fractions) and so provides embeddings of K and L in some field as extensions of (a copy of) N. In this way one can analyse the structure of $K\otimes _{N}L$: there may in principle be a non-zero nilradical (intersection of all prime ideals) – and after taking the quotient by that one can speak of the product of all embeddings of K and L in various M, over N. In case K and L are finite extensions of N, the situation is particularly simple since the tensor product is of finite dimension as an N-algebra (and thus an Artinian ring). One can then say that if R is the radical, one has $(K\otimes _{N}L)/R$ as a direct product of finitely many fields. Each such field is a representative of an equivalence class of (essentially distinct) field embeddings for K and L in some extension M. Examples For example, if K is generated over $\mathbb {Q} $ by the cube root of 2, then $K\otimes _{\mathbb {Q} }K$ is the product of (a copy of) K, and a splitting field of $X^{3}-2$, of degree 6 over $\mathbb {Q} $. One can prove this by calculating the dimension of the tensor product over $\mathbb {Q} $ as 9, and observing that the splitting field does contain two (indeed three) copies of K, and is the compositum of two of them. That incidentally shows that R = {0} in this case. An example leading to a non-zero nilpotent: let $P(X)=X^{p}-T$ with K the field of rational functions in the indeterminate T over the finite field with p elements (see Separable polynomial: the point here is that P is not separable). If L is the field extension $K(T^{1/p})$ (the splitting field of P) then L/K is an example of a purely inseparable field extension. In $L\otimes _{K}L$ the element $T^{1/p}\otimes 1-1\otimes T^{1/p}$ is nilpotent: by taking its pth power one gets $(T^{1/p}\otimes 1-1\otimes T^{1/p})^{p}=T\otimes 1-1\otimes T=0$, using the characteristic-p binomial theorem together with K-linearity (T lies in K). Classical theory of real and complex embeddings In algebraic number theory, tensor products of fields are (implicitly, often) a basic tool. If K is an extension of $\mathbb {Q} $ of finite degree n, $K\otimes _{\mathbb {Q} }\mathbb {R} $ is always a product of fields isomorphic to $\mathbb {R} $ or $\mathbb {C} $. The totally real number fields are those for which only real fields occur: in general there are $r_{1}$ real and $r_{2}$ complex fields, with $r_{1}+2r_{2}=n$ as one sees by counting dimensions. The field factors are in 1–1 correspondence with the real embeddings, and pairs of complex conjugate embeddings, described in the classical literature. This idea applies also to $K\otimes _{\mathbb {Q} }\mathbb {Q} _{p},$ where $\mathbb {Q} _{p}$ is the field of p-adic numbers. This is a product of finite extensions of $\mathbb {Q} _{p}$, in 1–1 correspondence with the completions of K for extensions of the p-adic metric on $\mathbb {Q} $. Consequences for Galois theory This gives a general picture, and indeed a way of developing Galois theory (along lines exploited in Grothendieck's Galois theory). It can be shown that for separable extensions the radical is always {0}; therefore the Galois theory case is the semisimple one, of products of fields alone.
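For example, if K is a Galois extension of $\mathbb {Q} $ of degree n with Galois group G, then $K\otimes _{\mathbb {Q} }K$ is isomorphic to a product of n copies of K, one factor for each element of G, via the map $a\otimes b\mapsto (a\,\sigma (b))_{\sigma \in G}$. In the smallest case, $\mathbb {Q} (i)\otimes _{\mathbb {Q} }\mathbb {Q} (i)\cong \mathbb {Q} (i)\times \mathbb {Q} (i)$, the two factors corresponding to the identity embedding and to complex conjugation.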
See also • Extension of scalars—tensor product of a field extension and a vector space over that field Notes 1. "Linearly-disjoint extensions", Encyclopedia of Mathematics, EMS Press, 2001 [1994] 2. "Cyclotomic field", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
\begin{document} \maketitle \begin{abstract} In this article, we consider the representation of $m$-gonal forms over ${\mathbb{N}}_0$. We show that any $m$-gonal form of rank $\ge 5$ is almost regular over ${\mathbb{N}}_0$ and examine which sufficiently large integers, among those that are locally represented, are indeed represented over ${\mathbb{N}}_0$. As a consequence, we prove a finiteness theorem for universal (original polygonal number version) $m$-gonal forms over ${\mathbb{N}}_0$. \end{abstract} \section{Introduction} A {\it polygonal number}, a notion with a long and splendid history, is a number defined as the total number of dots in a regular polygonal arrangement. In particular, the total number of dots of a regular $m$-gon with $x$ dots on each side follows the formula \begin{equation} \label{m number} P_m(x)=\frac{m-2}{2}(x^2-x)+x \end{equation} and we call this the {\it $x$-th $m$-gonal number}. When we talk about representations by polygonal numbers, a typically mentioned reference is Fermat's famous polygonal number conjecture, which states that every positive integer may be written as a sum of at most $m$ $m$-gonal numbers. It was resolved by Lagrange for $m=4$ in 1770, by Gauss for $m=3$ in 1796, and finally by Cauchy for all $m \ge 3$ in 1813. As a generalization of Fermat's conjecture, the question of classifying the tuples $(a_1,\cdots,a_n) \in {\mathbb{N}}^n$ for which, for any $N \in {\mathbb{N}}$, there is a solution $(x_1,\cdots,x_n) \in {\mathbb{N}}_0^n$ satisfying $$N=a_1P_m(x_1)+\cdots+a_nP_m(x_n)$$ was raised. As a generalization of Gauss's work, for $m=3$, Liouville classified the tuples $(a_1,a_2,a_3) \in {\mathbb{N}}^3$ as above. And as a generalization of Lagrange's work, for $m=4$, Ramanujan classified the tuples $(a_1,a_2,a_3,a_4) \in {\mathbb{N}}^4$ as above, though one mistake was later found in Ramanujan's list. We call a weighted sum of $m$-gonal numbers, i.e., \begin{equation} \label{m form} a_1P_m(x_1)+\cdots+a_nP_m(x_n) \end{equation} where $(a_1,\cdots,a_n) \in {\mathbb{N}}^n$, an {\it $m$-gonal form}. We simply write the $m$-gonal form of \eqref{m form} as $\left<a_1,\cdots,a_n\right>_m$. In the case that $m=4$, we conventionally adopt the notation $\left<a_1,\cdots,a_n\right>$ for the diagonal quadratic form (i.e., square form) $\left<a_1,\cdots,a_n\right>_4$. Even though the original definition of an $m$-gonal number admits only positive integers $x$ (the number of dots on each side of a regular polygon) in \eqref{m number}, recently, as a kind of generalization of the $m$-gonal number, many authors also admit negative integers for the $x$ in \eqref{m number}. We call $P_m(x)$ with $x \in {\mathbb{Z}}$ a {\it generalized $m$-gonal number}. For a positive integer $N \in {\mathbb{N}}$, if the diophantine equation \begin{equation} \label{rep N} F_m(\mathbf x)=a_1P_m(x_1)+\cdots+a_nP_m(x_n)=N \end{equation} has a solution $\mathbf x \in {\mathbb{Z}}^n$ (resp. ${\mathbb{N}}_0^n$), then we say that the $m$-gonal form {\it (globally) represents $N$ over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$)}. And when an $m$-gonal form represents every positive integer $N$ over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$), we say that the $m$-gonal form is {\it universal over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$)}. Determining the universality (i.e., the representability of every positive integer) of a given $m$-gonal form is not easy in general.
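\vskip 0.8em For example, \eqref{m number} gives the triangular numbers $P_3(x)=\frac{x^2+x}{2}=1,3,6,10,\cdots$, the squares $P_4(x)=x^2=1,4,9,16,\cdots$, and the pentagonal numbers $P_5(x)=\frac{3x^2-x}{2}=1,5,12,22,\cdots$ for $x=1,2,3,4,\cdots$. Note that $P_m(1)=1$ and $P_m(2)=m$ for every $m \ge 3$, so that the smallest $m$-gonal number other than $0$ and $1$ is $m$; this simple fact will be used in Section 4.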
As a step toward the problem, we suggest considering the weaker alternative congruence equation \begin{equation} \label{loc rep N} F_m(\mathbf x)=a_1P_m(x_1)+\cdots+a_nP_m(x_n) \equiv N \pmod{r}. \end{equation} If the congruence equation has a solution $\mathbf x \in {\mathbb{Z}}^n$ (resp. ${\mathbb{N}}_0^n$) for every $r \in {\mathbb{Z}}$, then we say that $F_m(\mathbf x)$ {\it locally represents $N$ over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$)}. As one may notice, the local representability over ${\mathbb{Z}}$ agrees with the local representability over ${\mathbb{N}}_0$. Unlike the global situation, one may completely classify all the $N \in {\mathbb{N}}$ which are locally represented by an arbitrary given form $F_m(\mathbf x)$ without difficulty, based on the very useful fact that a positive integer $N \in {\mathbb{N}}$ is locally represented by $\left<a_1,\cdots,a_n\right>_m$ if and only if the equation \eqref{rep N} has a $p$-adic integer solution $\mathbf x \in {\mathbb{Z}}_p^n$ for every prime $p$. For a prime $p$ and a $p$-adic integer $N \in {\mathbb{Z}}_p$, when the equation $N=a_1P_m(x_1)+\cdots+a_nP_m(x_n)$ has a $p$-adic integer solution $\mathbf x \in {\mathbb{Z}}_p^n$, we say that $\left<a_1,\cdots,a_n\right>_m$ {\it represents $N$ over ${\mathbb{Z}}_p$}. Obviously, global representability implies local representability, but the converse does not hold in general. So this approach does not work perfectly for our purpose; nonetheless, the local-to-global approach has achieved some success over ${\mathbb{Z}}$. When the converse also holds, in other words, when an $m$-gonal form $F_m(\mathbf x)$ (globally) represents every positive integer $N \in {\mathbb{N}}$ which is locally represented by $F_m(\mathbf x)$ over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$), we say that $F_m(\mathbf x)$ is {\it regular over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$)}. When an $m$-gonal form $F_m(\mathbf x)$ represents every positive integer $N \in {\mathbb{N}}$ which is locally represented by $F_m(\mathbf x)$ except possibly finitely many over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$), we say that $F_m(\mathbf x)$ is {\it almost regular over ${\mathbb{Z}}$ (resp. ${\mathbb{N}}_0$)}. By Theorem 4.9 (1) in \cite{CO}, any $m$-gonal form (more precisely, any quadratic polynomial whose quadratic part is positive definite) of rank $\ge 5$ represents all sufficiently large integers which are locally represented by the given form over ${\mathbb{Z}}$. In other words, any $m$-gonal form of rank $\ge 5$ represents almost all integers among the integers which are locally represented by the form. But such a local-to-global principle over ${\mathbb{N}}_0$ still remains hidden to us. Because an $m$-gonal form over ${\mathbb{N}}_0$ admits much more restricted variables than an $m$-gonal form over ${\mathbb{Z}}$, one may naturally expect more difficulty in considering the representation over ${\mathbb{N}}_0$ than over ${\mathbb{Z}}$. In this article, we first consider a local-to-global principle for $m$-gonal forms over ${\mathbb{N}}_0$. The following theorem is our first main goal in this article.
\vskip 0.8em \begin{thm} \label{main thm} Any $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ of rank $n \ge 5$ represents every positive integer $N \in {\mathbb{N}}_0$ over ${\mathbb{N}}_0$ provided that \begin{equation} \label{main eq} \begin{cases} N \ge N(a_1,\cdots,a_n)\cdot(m-2)^3 \\ N \text{ is locally represented by }F_m(\mathbf x) \end{cases} \end{equation} where $N(a_1,\cdots,a_n)>0$ is a constant depending only on $a_1,\cdots,a_n$. The cubic in $m$ in \eqref{main eq} is optimal in this sense. \end{thm} \vskip 0.8em We prove the above theorem in Section $3$. Theorem \ref{main thm} says that any $m$-gonal form of rank $ \ge 5$ is almost regular over ${\mathbb{N}}_0$. In other words, any $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ of rank $n \ge 5$ (globally) represents all sufficiently large integers ($\ge N(a_1,\cdots,a_n)\cdot(m-2)^3$) which are locally represented by the form. Moreover, Theorem \ref{main thm} quantifies how large is sufficiently large. For a given $n$-tuple $(a_1,\cdots,a_n) \in {\mathbb{N}}^n$ with $n\ge 5$, let $N_{(a_1,\cdots,a_n);m}>0$ be the optimal (i.e., minimal) integer such that the $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ represents every positive integer $N$ over ${\mathbb{N}}_0$ provided that \begin{equation} \begin{cases} N \ge N_{(a_1,\cdots,a_n);m} \\ N \text{ is locally represented by }F_m(\mathbf x). \end{cases} \end{equation} From the simple observation that the smallest $m$-gonal number other than $1$ is $m$ (which increases as $m$ increases), one may infer that $N_{(a_1,\cdots,a_n);m}$ is asymptotically increasing in $m$. By virtue of Theorem \ref{main thm}, we know exactly the growth of $N_{(a_1,\cdots,a_n);m}$: it is cubic in $m$. Based on Bhargava's escalator tree method, one may easily derive a finiteness theorem for universal $m$-gonal forms over ${\mathbb{Z}}$ for all $m \ge 3$ by using Theorem 4.9 (1) in \cite{CO}, which implies that any $m$-gonal form of rank $\ge5$ is almost regular. A finiteness theorem for universal $m$-gonal forms states that there is a unique minimal $\gamma_m$ for which the universality (i.e., the representability of every positive integer) of an $m$-gonal form over ${\mathbb{Z}}$ is characterized by the representability of the finitely many positive integers $1,2,\cdots,\gamma_m$ by the form. In other words, if an $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ represents every positive integer up to $\gamma_m$ over ${\mathbb{Z}}$, then $\left<a_1,\cdots,a_n\right>_m$ is universal over ${\mathbb{Z}}$. Since determining the universality of a given form is not easy in general, Conway and Schneeberger's announcement of {\it the fifteen theorem}, the first appearance of a finiteness theorem for universal forms, was stunning: such a theorem provides a surprisingly simple criterion to determine universality. An interesting problem on the growth of $\gamma_m$ (which is asymptotically increasing as $m$ increases) was first raised by Kane and Liu \cite{KL}, and they showed that for any $\epsilon >0$, there is a constant $C_{\epsilon}>0$ for which $$m-4 \le \gamma_m \le C_{\epsilon}m^{7+\epsilon}$$ holds for any $m \ge 3$. Subsequently, Kim and the author \cite{KP'} improved their result by showing that the growth of $\gamma_m$ is exactly linear in $m$, i.e., there is a constant $C>0$ for which $$m-4\le \gamma_m \le C(m-2)$$ for any $m \ge 3$.
In the same stream as the above, in Section $4$, we consider the above results over ${\mathbb{N}}_0$. Theorem \ref{main thm} quickly yields a finiteness theorem for universal $m$-gonal forms over ${\mathbb{N}}_0$ for all $m \ge 3$ and also gives a (not necessarily optimal) answer on the growth of the size of the finite set of positive integers whose representability by an $m$-gonal form over ${\mathbb{N}}_0$ classifies the universality over ${\mathbb{N}}_0$. The resulting theorem is the following. \vskip 0.8em \begin{thm} \label{main thm'} For any $m \ge 3$, there is a unique minimal $\gamma_{m;{\mathbb{N}}_0}>0$ for which if an $m$-gonal form represents every positive integer up to $\gamma_{m;{\mathbb{N}}_0}$ over ${\mathbb{N}}_0$, then the $m$-gonal form represents every positive integer over ${\mathbb{N}}_0$. Moreover, the growth of $\gamma_{m;{\mathbb{N}}_0}$ (which is asymptotically increasing as $m$ increases) is bounded by a cubic in $m$, i.e., there is an absolute constant $C>0$ for which $$m \le \gamma_{m;{\mathbb{N}}_0} \le C(m-2)^3.$$ \end{thm} \vskip 0.8em \begin{rmk} Theorem \ref{main thm'} does not give the exact growth of $\gamma_{m;{\mathbb{N}}_0}$ in $m$. Determining the exact growth of $\gamma_{m;{\mathbb{N}}_0}$ in $m$ (or giving a better bound) could be an interesting problem. \end{rmk} \vskip 0.8em In this article, we adopt the arithmetic theory of quadratic forms. Any unexplained notation and terminology can be found in \cite{O}. \vskip 0.8em \section{Preliminaries} The $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ represents an integer $A(m-2)+B$ over ${\mathbb{Z}}$, ${\mathbb{Z}}_p$, or ${\mathbb{N}}_0$, namely, there is $\mathbf x \in {\mathbb{Z}}^n$, ${\mathbb{Z}}_p^n$, or ${\mathbb{N}}_0^n$ for which $$A(m-2)+B=\frac{m-2}{2}((a_1x_1^2+\cdots+a_nx_n^2)-(a_1x_1+\cdots+a_nx_n))+(a_1x_1+\cdots+a_nx_n)$$ holds, if and only if there is $k \in {\mathbb{Z}}$, ${\mathbb{Z}}_p$, or ${\mathbb{N}}_0$ for which the system \begin{equation} \label{main system} \begin{cases} a_1x_1^2+\cdots+a_nx_n^2=2A+B+k(m-4)\\ a_1x_1+\cdots+a_nx_n=B+k(m-2)\\ \end{cases} \end{equation} is solvable over ${\mathbb{Z}}$, ${\mathbb{Z}}_p$, or ${\mathbb{N}}_0$, respectively. In order to consider the representations of $m$-gonal forms over ${\mathbb{Z}}$ or ${\mathbb{Z}}_p$, the author already developed a discussion of the $k \in {\mathbb{Z}}$ or ${\mathbb{Z}}_p$ of \eqref{main system} in \cite{rank 5}. As an extension of the work in \cite{rank 5}, in this article we again consider the system \eqref{main system} and the $k$ in \eqref{main system} which admit not only an integer solution $\mathbf x \in {\mathbb{Z}}^n$ but also a non-negative integer solution $\mathbf x \in {\mathbb{N}}_0^n$, in order to understand the representation of $m$-gonal forms over ${\mathbb{N}}_0$. Very roughly speaking, when $2A+B+k(m-4)$ is large, if $B+k(m-2)$ is not large enough, then there will not exist a non-negative integer solution $\mathbf x$ of \eqref{main system}. On the other hand, if $B+k(m-2)$ is sufficiently large, then there is a good chance that the system \eqref{main system} has a non-negative integer solution (but, because of the Cauchy-Schwarz inequality, $B+k(m-2)$ cannot be arbitrarily large). The following proposition makes it possible to extend the discussion which was used to treat the representation of $m$-gonal forms over ${\mathbb{Z}}$ to our present work on the representation of $m$-gonal forms over ${\mathbb{N}}_0$.
\vskip 0.8em \begin{prop} \label{prop 1} Suppose that $\max\limits_{1\le i \le n}\sqrt{a_i}\sqrt{\alpha} \le \beta$. Then for any $(x_1,\cdots,x_n) \in \mathbb R^n$ satisfying \begin{equation} \begin{cases} a_1x_1^2+\cdots+a_nx_n^2=\alpha\\ a_1x_1+\cdots+a_nx_n=\beta, \end{cases} \end{equation} we have that $x_i\ge 0$ for all $1 \le i \le n$. \end{prop} \begin{proof} Note that the hyperplane $a_1x_1+\cdots+a_nx_n=\beta$ intersects the sphere $a_1x_1^2+\cdots+a_nx_n^2=\alpha$ only in the region $(\mathbb R^{+}\cup \{0\})^n$. \end{proof} \vskip 0.8em We recall the simple observation that the system \eqref{main system} holds for $\mathbf x \in {\mathbb{Z}}^n$, ${\mathbb{N}}_0^n$, or ${\mathbb{Z}}_p^n$ if and only if the equation \begin{equation} \label{eq2} \left(B+k(m-2)-\left(\sum \limits_{i=2}^na_ix_i\right)\right)^2+\sum\limits_{i=2}^na_1a_ix_i^2=2Aa_1+Ba_1+k(m-4)a_1 \end{equation} holds with $x_1=\frac{1}{a_1}\left(B+k(m-2)-\left(\sum \limits_{i=2}^na_ix_i\right)\right)$. The equation \eqref{eq2} may be organized as \begin{equation} \label{Qaa} Q_{a_1 ; \mathbf a}(\mathbf x-(B+k(m-2))\mathbf r)=(2A+B+k(m-4))a_1-(B+k(m-2))^2 \cdot \left(1-\sum \limits _{i=2}^na_ir_i\right) \end{equation} where $Q_{a_1 ; \mathbf a}(x_2,\cdots,x_n):=\sum_{i=2}^n(a_1a_i+a_i^2)x_i^2+\sum_{2\le i<j \le n}2a_ia_jx_ix_j$ is a positive definite quadratic form and $r_2,\cdots,r_n \in {\mathbb{Q}}$ are the solution of $$\begin{cases} (a_1a_2+a_2^2)r_2+a_2a_3r_3+\cdots+a_2a_nr_n=a_2\\ a_2a_3r_2+(a_1a_3+a_3^2)r_3 +\cdots + a_3a_nr_n=a_3 \\ \quad \quad \quad \quad \quad \quad \quad \quad \quad \vdots \\ a_2a_nr_2+a_3a_nr_3+\cdots +(a_1a_n+a_n^2)r_n=a_n \end{cases}$$ (in practice, $r_2=\cdots=r_n=\frac{1}{a_1+\cdots+a_n}$). The above simple observation allows us to consider the diophantine quadratic equation \eqref{Qaa} of rank $n-1$ instead of the diophantine system \eqref{main system} of rank $n$, and so from now on we mainly do. \section{Representation of $m$-gonal form over ${\mathbb{N}}_0$} Lemma \ref{3 lem} is the most critical argument in this article. When an integer $A(m-2)+B$ is locally represented by an $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$, there are integers $k \in {\mathbb{Z}}$ for which the system \eqref{main system} is locally solvable (i.e., the system \eqref{main system} has a $p$-adic integer solution $\mathbf x_p \in {\mathbb{Z}}_p^n$ for every prime $p$). Moreover, such $k$'s have a very regular distribution (precisely, the set of all $k \in {\mathbb{Z}}$ for which the diophantine system \eqref{main system} is locally solvable is a finite union of arithmetic progressions). The merit of this regular distribution is that it makes it easy to pick a fitting $k \in {\mathbb{Z}}$ in \eqref{main system} which meets our purpose. \vskip 0.8em \begin{lem} \label{3 lem} For an $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ of rank $n\ge5$, let an integer $A(m-2)+B$ with $0\le B \le m-3$ be locally represented by $\left<a_1,\cdots,a_n\right>_m$.
Then for some residue $$k(A,B) \in {\mathbb{Z}}/K(\mathbf a){\mathbb{Z}}$$ and $$P=\prod \limits_{p \in T(\mathbf a)\cup\{2\}}p^{s(p)} $$ with $0\le s(p) \le \frac{1}{2}\text{ord}_p(4a_1)$, the quadratic equation \begin{equation} \label{lem eq} Q_{a_1;\mathbf a}(P\mathbf x-(B+k'(m-2))\mathbf r)=(2A+B+k'(m-4))a_1-(B+k'(m-2))^2\cdot\left(1-\sum_{i=2}^na_ir_i\right) \end{equation} is locally primitively solvable for any $k' \equiv k \pmod{K(\mathbf a)}$, where $K(\mathbf a)=K(a_1,\cdots,a_n)$ is a constant which depends only on $a_1,\cdots,a_n$ and $T(\mathbf a)=T(a_1,\cdots,a_n)$ is the finite set of all odd primes $p$ for which there are at most four units of ${\mathbb{Z}}_p$ in $\{a_1,\cdots,a_n\}$, counted with repetitions. \end{lem} \begin{proof} One may use Propositions 3.2, 3.4, 3.6, 3.8, and 3.9 in \cite{rank 5} to prove this lemma. \end{proof} \vskip 0.8em Now we are ready to prove our first main goal. \vskip 0.8em \begin{proof}[proof of Theorem \ref{main thm}] Without loss of generality, we assume that $a_1 \le \cdots \le a_n$. By Theorem 4.9 (2) in \cite{CO}, we may obtain a constant $C_{a_1;\mathbf a}>0$ for which $$Q_{a_1;\mathbf a}(P\mathbf x-(B+k(m-2))\mathbf r)=N$$ has an integer solution $\mathbf x \in {\mathbb{Z}}^{n-1}$ with $n-1 \ge 4$, where $P|\prod \limits_{p \in T(\mathbf a)\cup\{2\}}p^{\frac{1}{2}\text{ord}_p(4a_1)}$, provided that $$\begin{cases} Q_{a_1;\mathbf a}(P\mathbf x-(B+k(m-2))\mathbf r)=N \text{ is primitively locally solvable} \\ N>C_{a_1;\mathbf a}. \end{cases}$$ Note that such a constant $C_{a_1;\mathbf a}$ depends only on $a_1,\cdots,a_n$. Now for an integer $A(m-2)+B$ which is locally represented by $\left<a_1,\cdots,a_n\right>_m$, our attention goes to $k \in {\mathbb{N}}_0$ satisfying \begin{equation} \label{k} \begin{cases} k\equiv k(A,B) \pmod{K(\mathbf a)} \\ (2A+B+k(m-4))a_1-(B+k(m-2))^2\cdot\left(1-\sum_{i=2}^na_ir_i\right)>C_{a_1;\mathbf a} \\ \sqrt{a_n}\sqrt{2A+B+k(m-4)} \le B+k(m-2) \end{cases} \end{equation} where $k(A,B)$ is the residue in ${\mathbb{Z}}/K(\mathbf a) {\mathbb{Z}}$ of Lemma \ref{3 lem}. The second inequality of \eqref{k} yields $$\alpha_{A,B;m}^-<k<\alpha_{A,B;m}^+$$ where $r_i=\frac{1}{a_1+\cdots+a_n}$ and $2(m-2)^2\alpha_{A,B;m}^{\pm}:= \left(\sum \limits_{i=1}^na_i\right)(m-4)-2B(m-2) \pm$ $ \sqrt{\left\{\left(\sum \limits_{i=1}^na_i\right)(m-4)-2B(m-2)\right\}^2+4(m-2)^2\left\{\left(\sum \limits_{i=1}^na_i\right)(2A+B)-B^2-\frac{C_{a_1;\mathbf a}\sum \limits_{i=1}^na_i}{a_1}\right\}}$. And the third inequality of \eqref{k} yields $$k<\beta_{A,B;m}^- \ \text{ or } \ \beta_{A,B;m}^+<k$$ where $\beta_{A,B;m}^{\pm}:=\frac{a_n(m-4)-2B(m-2)\pm\sqrt{\{a_n(m-4)-2B(m-2)\}^2+4(m-2)^2\{a_n(2A+B)-B^2\}}}{2(m-2)^2}$. Therefore if $$\beta_{A,B;m}^++K(\mathbf a)<\alpha_{A,B;m}^+,$$ then we may get a $k \in {\mathbb{N}}_0$ satisfying the three conditions in \eqref{k}. Through an elementary but messy calculation, one may obtain a constant $N(a_1,\cdots,a_n)>0$ depending only on $a_1,\cdots,a_n$ such that $$\beta_{A,B;m}^++K(\mathbf a)<\alpha_{A,B;m}^+$$ with $0 \le B \le m-3$ holds for any $A \ge N(a_1,\cdots,a_n) \cdot (m-2)^2$ and $m \ge 3$. Consequently, we may conclude that for an integer $A(m-2)+B$ which is locally represented by $\left<a_1,\cdots,a_n\right>_m$ with $$\begin{cases} A \ge N(a_1,\cdots,a_n) \cdot(m-2)^2\\ 0 \le B \le m-3, \end{cases}$$ we may take $k \in {\mathbb{N}}_0$ satisfying \eqref{k}.
Then from the first and second conditions of \eqref{k}, we obtain that $A(m-2)+B$ is represented by $\left<a_1,\cdots,a_n\right>_m$ over ${\mathbb{Z}}$, i.e., there is $\mathbf x \in {\mathbb{Z}}^n$ for which $$A(m-2)+B=a_1P_m(x_1)+\cdots+a_nP_m(x_n)$$ holds; more precisely, the diophantine system $$ \begin{cases} a_1x_1^2+\cdots+a_nx_n^2=2A+B+k(m-4)\\ a_1x_1+\cdots+a_nx_n=B+k(m-2)\\ \end{cases} $$ holds. And from the third condition of \eqref{k}, we obtain by Proposition \ref{prop 1} that this $\mathbf x$ is in fact in ${\mathbb{N}}_0^n$. This completes the first argument. The Cauchy-Schwarz inequality gives that $$(a_1x_1+\cdots+a_nx_n)^2 \le (a_1+\cdots+a_n)(a_1x_1^2+\cdots+a_nx_n^2),$$ which implies that for a given $(a_1,\cdots,a_n) \in {\mathbb{N}}^n$, the cubic bound in $m$ is optimal in this argument. \end{proof} \vskip 0.8em \begin{rmk} Note that the local representability of an $m$-gonal form over ${\mathbb{N}}_0$ coincides with its local representability over ${\mathbb{Z}}$. The global representability of an $m$-gonal form over ${\mathbb{N}}_0$ obviously implies its global representability over ${\mathbb{Z}}$, but the converse does not hold. In \cite{non alm reg}, the author considered infinitely many $m$-gonal forms of rank $4$ which are not almost regular over ${\mathbb{Z}}$. So we may conclude that there are also infinitely many $m$-gonal forms of rank $4$ which are not almost regular over ${\mathbb{N}}_0$. Hence the rank condition $n \ge 5$ in Theorem \ref{main thm} is optimal. \end{rmk} \section{Finiteness theorem for universal $m$-gonal forms (original polygonal number version)} We start this section by recalling Bhargava's escalator tree. We call the smallest positive integer which is not represented by a non-universal $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ over ${\mathbb{N}}_0$ the {\it truant of $\left<a_1,\cdots,a_n\right>_m$}. The {\it escalator tree of $m$-gonal forms over ${\mathbb{N}}_0$} is a tree with root $\emptyset_m$. Conventionally, the truant of the $m$-gonal form $\emptyset_m$ is taken to be the smallest positive integer $1$. If the $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ of a node is not universal (i.e., has a truant), then we connect the node with the nodes $\left<a_1,\cdots,a_{n+1}\right>_m$, the superforms of $\left<a_1,\cdots,a_n \right>_m$ escalated to represent the truant of $\left<a_1,\cdots,a_n \right>_m$. Note that by construction, the rank of every $m$-gonal form of depth $n$ of the escalator tree is $n$. In order to avoid the appearance of the same forms, we assume that $a_1\le \cdots \le a_n \le a_{n+1}$. If an $m$-gonal form on a node is universal (i.e., has no truant), then the node becomes a leaf of the tree. So universal forms appear only on the leaves, and the universal $m$-gonal forms on the leaves may be regarded as proper universal forms. One may also notice that a universal $m$-gonal form contains at least one (proper universal) $m$-gonal form on a leaf of the escalator tree as its subform. Once one completes the escalator tree of $m$-gonal forms over ${\mathbb{N}}_0$, one obtains $\gamma_{m;{\mathbb{N}}_0}$ as the largest truant of the tree. Since the smallest positive $m$-gonal number other than $1$ is $m$, if $a_1+\cdots+a_n<m-1$, then the $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ does not represent $a_1+\cdots+a_n+1$ (more precisely, any of the integers between $a_1+\cdots+a_n+1$ and $m-1$).
So for an $m$-gonal form $\left<a_1,\cdots,a_n\right>_m$ on a node of the escalator tree with $a_1+\cdots+a_n<m-1$, the truant is $a_1+\cdots+a_n+1$, and so its children are the forms $$\left<a_1,\cdots,a_n,a_{n+1}\right>_m$$ where $a_n \le a_{n+1} \le a_1+\cdots+a_n+1$. This yields the special observation that, for a fixed depth $d$, the nodes of depth $d$ of the escalator tree carry the same forms for all sufficiently large $m$. For example, when $m \ge 8$, the escalator tree of $m$-gonal forms over ${\mathbb{N}}_0$ up to depth $3$ appears as follows: $$\begin{tikzpicture} \tikzstyle{level 1}=[sibling distance=80mm] \tikzstyle{level 2}=[sibling distance=60mm] \tikzstyle{level 3}=[sibling distance=20mm] \node {$\emptyset_m$} child {node {$\left<1\right>_m$} child {node {$\left<1,1\right>_m$} child {node {$\left<1,1,1\right>_m$} child {node {$\vdots$}} } child {node {$\left<1,1,2\right>_m$} child {node {$\vdots$}} } child {node {$\left<1,1,3\right>_m$} child {node {$\vdots$}} } } child {node {$\left<1,2\right>_m$} child {node {$\left<1,2,2\right>_m$} child {node {$\vdots$}} } child {node {$\left<1,2,3\right>_m$} child {node {$\vdots$}} } child {node {$\left<1,2,4\right>_m$} child {node {$\vdots$}} } } }; \end{tikzpicture}$$ \vskip 0.8em Now we prove a finiteness theorem for universal $m$-gonal forms over ${\mathbb{N}}_0$ based on Bhargava's escalator tree idea. Before that, we present a method to determine the local representability of an $m$-gonal form by adapting already well-known local representability results for quadratic forms. \vskip 0.8em \begin{prop} \label{loc.rep} Let $F_m(\mathbf x)=a_1P_m(x_1)+\cdots+a_nP_m(x_n)$ be a primitive $m$-gonal form. \begin{itemize} \item [(1) ] When $p$ is an odd prime with $p|m-2$, $F_m(\mathbf x)$ is universal over ${\mathbb{Z}}_p$. \item [(2) ] When $m \not\equiv 0 \pmod 4$, $F_m(\mathbf x)$ is universal over ${\mathbb{Z}}_2$. \item [(3) ] When $p$ is an odd prime with $(p,m-2)=1$, an integer $N$ is represented by $F_m(\mathbf x)$ over ${\mathbb{Z}}_p$ if and only if the integer $8(m-2)N+(a_1+\cdots+a_n)(m-4)^2$ is represented by the diagonal quadratic form $\left<a_1,\cdots,a_n \right>$ over ${\mathbb{Z}}_p$. \item [(4) ] When $m \equiv 0 \pmod 4$, an integer $N$ is represented by $F_m(\mathbf x)$ over ${\mathbb{Z}}_2$ if and only if the integer $\frac{m-2}{2}N+(a_1+\cdots+a_n)\left(\frac{m-4}{4}\right)^2$ is represented by the diagonal quadratic form $\left<a_1,\cdots,a_n \right>$ over ${\mathbb{Z}}_2$. \end{itemize} \end{prop} \begin{proof} See Proposition 3.1 in \cite{rank 5}. \end{proof} \vskip 0.8em \begin{rmk} \label{loc rep rmk} By Proposition \ref{loc.rep}, when a diagonal quadratic form $\left<a_1,\cdots,a_n\right>$ is locally universal, the $m$-gonal forms $$\left<a_1,\cdots,a_n\right>_m$$ whose coefficients coincide with the coefficients of the quadratic form $\left<a_1,\cdots,a_n\right>$ are also locally universal for all $m \ge 3$. \end{rmk} \vskip 0.8em \begin{proof}[proof of Theorem \ref{main thm'}] Note that for $m \ge 31$, the escalator tree of $m$-gonal forms over ${\mathbb{N}}_0$ up to depth $5$ is independent of $m$, and all the candidates for the coefficients of $m$-gonal forms of depth $5$ are the tuples in the finite set $$T_{d=5}:=\{(a_1,\cdots,a_5) \in {\mathbb{N}}^5|a_1=1, a_i \le a_{i+1} \le a_1+\cdots+a_i+1\}.$$ One may check case by case that the quadratic form $\left<a_1,\cdots,a_5\right>$ is locally universal for each $(a_1,\cdots,a_5) \in T_{d=5}$.
And then by Remark \ref{loc rep rmk}, we obtain that all the $m$-gonal forms on the nodes of depth $5$ of the escalator tree of $m$-gonal forms are locally universal for all $m \ge 31$. Theorem \ref{main thm} then implies that for all $m \ge 31$, the $m$-gonal forms $\left<a_1,\cdots,a_5\right>_m$ on the nodes of depth $5$ of the escalator tree represent every integer $$N\ge C_{\ge31}(m-2)^3$$ where $C_{\ge31}:=\max\{N(a_1,\cdots,a_5)|(a_1,\cdots,a_5)\in T_{d=5}\}$. This says that the truants of the escalator tree of $m$-gonal forms over ${\mathbb{N}}_0$ cannot exceed $C_{\ge 31}(m-2)^3$ for all $m \ge 31$, giving that $$\gamma_{m;{\mathbb{N}}_0} \le C_{\ge 31}(m-2)^3$$ for all $m \ge 31$. For $3 \le m \le 30$, too, the local-to-global theorem (Theorem \ref{main thm}) directly implies the existence of $\gamma_{m;{\mathbb{N}}_0}$. Consequently, we conclude that the theorem holds for $$C:=\max \left(\left\{\frac{\gamma_{m;{\mathbb{N}}_0}}{(m-2)^3}\,\middle|\,3 \le m \le 30 \right\}\cup \{C_{\ge31}\}\right). $$ \end{proof} \end{document}
\begin{document} \twocolumn[ \icmltitle{PULSNAR - Positive unlabeled learning selected not at random: class proportion estimation when the SCAR assumption does not hold} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Praveen Kumar}{yyy,equal} \icmlauthor{Christophe G. Lambert}{sch,equal} \end{icmlauthorlist} \icmlaffiliation{yyy}{Department of Computer Science, University of New Mexico, Albuquerque, NM, USA} \icmlaffiliation{sch}{Department of Internal Medicine, University of New Mexico, Albuquerque, NM, USA} \icmlcorrespondingauthor{Christophe G. Lambert}{[email protected]} \icmlkeywords{Machine Learning, SCAR, SNAR, PU learning, Positive and Unlabeled learning} \vskip 0.3in ] \printAffiliationsAndNotice{\icmlEqualContribution} \begin{abstract} Positive and Unlabeled (PU) learning is a type of semi-supervised binary classification where the machine learning algorithm differentiates between a set of positive instances (labeled) and a set of both positive and negative instances (unlabeled). PU learning has broad applications in settings where confirmed negatives are unavailable or difficult to obtain, and there is value in discovering positives among the unlabeled (e.g., viable drugs among untested compounds). Most PU learning algorithms make the \emph{selected completely at random} (SCAR) assumption, namely that positives are selected independently of their features. However, in many real-world applications, such as healthcare, positives are not SCAR (e.g., severe cases are more likely to be diagnosed), leading to a poor estimate of the proportion, $\alpha$, of positives among unlabeled examples and poor model calibration, resulting in an uncertain decision threshold for selecting positives. PU learning algorithms can estimate $\alpha$ or the probability of an individual unlabeled instance being positive or both. We propose two PU learning algorithms to estimate $\alpha$, calculate calibrated probabilities for PU instances, and improve classification metrics: i) PULSCAR (positive unlabeled learning selected completely at random), and ii) PULSNAR (positive unlabeled learning selected not at random). PULSNAR uses a divide-and-conquer approach that creates and solves several SCAR-like sub-problems using PULSCAR. In our experiments, PULSNAR outperformed state-of-the-art approaches on both synthetic and real-world benchmark datasets. \end{abstract} \section{Introduction} \label{introduction} In a standard binary supervised classification problem, the classifier (e.g., decision trees, logistic regression, support vector machines, etc.) is given training instances with features x and their labels y=0 (negative) or y=1 (positive). The classifier learns a model $f:x \rightarrow y$, which can classify an unlabeled instance as positive or negative based on its features. It is often challenging, expensive, and even impossible to annotate large datasets in real-world applications \cite{pulsnar_1}, and frequently only positive instances are labeled. Unlabeled instances with their features can be classified via positive and unlabeled (PU) learning \cite{pulsnar_1, pulsnar_2}. Some of the PU learning literature focuses on improving classification metrics, and others focus on the problem of estimating the fraction, $\alpha$, of positives among the unlabeled instances. Although this work focuses on the latter, calibration and enhancing classification performance are also addressed. PU learning problems abound in many domains. 
For example, in electronic healthcare records, diagnosis codes can establish patients have been evaluated and labeled positive for a given disease, yet the absence of a diagnosis code does not establish a patient is negative for a disease, and such patients are elements of an unlabeled mix of positives and negatives. Importantly, confirmed negatives are generally not codable, and thus traditional supervised learning is infeasible. Much medical literature is dedicated to estimating disease incidence and prevalence but contends with incomplete medical assessment and recording. The potential to assess the incidence of a given disease (without costly in-person assessment or chart reviews) in large unlabeled populations could have substantial public health benefits. In market research, one typically has a modest set of positives, say of customers or buyers of a product, has a set of attributes over both the positives and a large population of unlabeled people of size $N$, and wishes to establish the size of the addressable market, $\alpha N$. The majority of PU learning algorithms use the \emph{selected completely at random} (SCAR) assumption, which states that the labeled positive examples are randomly selected from the universe of positives. That is, the labeling probability of any positive instance is constant \cite{pulsnar_2}. This assumption may fail in real-world applications. For example, in email spam detection, positive instances labeled from an earlier time period could differ from later spam due to adaptive adversaries. Although some PU learning algorithms have shown promising performance on different machine learning (ML) benchmark datasets, to our knowledge, none have been tested on large and highly imbalanced \emph{selected not at random} (SNAR) data. Class imbalance in a PU setting generally means the number of unlabeled instances is large compared to the labeled positive examples. Also, none have explored how to calculate well-calibrated probabilities for PU examples. In addition, few algorithms have been assessed when $\alpha$ is small ($\leq 5\%$), where performance is expected to suffer. In this paper, we propose a PU learning approach to estimate $\alpha$ when positives are SCAR or SNAR, and compare our estimation error in simulated and real data versus other algorithms. We assess the performance with class imbalance in both modest and large datasets and over a rigorous $\alpha$ range. Our contributions are summarized as follows: \begin{enumerate} \itemsep0em \item We propose PULSCAR, a PU learning algorithm for estimating $\alpha$ when the SCAR assumption holds. It uses kernel density estimates of the positive and unlabeled distributions of ML probabilities to estimate $\alpha$. Innovations include: \begin{enumerate} \item Using the beta distribution for density estimation where the $[0\ldots 1]$ support of the beta function matches that of the probability distributions generated from ML classifiers \cite{pulsnar_38}. A bandwidth heuristic is introduced for the beta distribution, with good empirical performance for $\alpha$ estimation. \item Introducing an error function whose derivative maximum provides a rapid, robust estimate of $\alpha$. \end{enumerate} \item We propose PULSNAR, a PU learning algorithm for estimating $\alpha$ when the positives are SNAR, that uses a novel clustering approach to divide the positives into several subsets that can have separate $\alpha$ estimates versus the unlabeled. These sub-problems are more SCAR-like and are solved with PULSCAR. 
\item We propose a method to calibrate the probabilities of PU examples to their true (unknown) labels in SCAR and SNAR settings. \item We propose a method to improve the classification performance in SCAR and SNAR settings. \item We simulate PU learning data with large deviations from the SCAR assumption, and generate SCAR and SNAR problems from fully-labeled ML repository data. \item We release an open-source Python package for our PULSNAR and PULSCAR algorithms that fully integrates with scikit-learn, enabling multiple state-of-the-art ML algorithms (e.g., XGBoost, catboost, etc.). \end{enumerate} \iffalse This paper is organized as follows. Section \ref{sectionRelatedWork} covers related work. Section \ref{sectionProblemFormulation} explains the PU learning with the SCAR assumption; presents our two algorithms: PULSCAR and PULSNAR; describes how the number of clusters is determined; explains how we choose the probable positive examples in the unlabeled set and how the probabilities are calibrated. Section \ref{sectionExperiments} details the datasets we used for the experiments and how the experiments were performed using PULSCAR and PULSNAR algorithms. Section 4 shows the results of the experiments on different datasets. Section \ref{sectionResults} summarizes our findings, Section \ref{sectionConclusion} provides discussion and concludes. \fi \section{Related work} \label{sectionRelatedWork} Early PU learning \cite{pulsnar_12, pulsnar_13, pulsnar_14} generally followed a two-step heuristic: i) identify strong negative examples from the unlabeled set, and then ii) apply an ML algorithm to given positive and identified negative examples. Some recent work iteratively identifies better negatives \cite{pulsnar_47}, or combines negative-unlabeled learning with unlabeled-unlabeled learning \cite{pulsnar_44}. Instead of extracting only strong negative examples from the unlabeled set, \cite{pulsnar_15} extracted high-quality positive and negative examples from the unlabeled set and then applied classifiers to those data. \cite{pulsnar_16, pulsnar_17} assigned weights to the unlabeled examples to train a classifier. \cite{pulsnar_2} introduced the SCAR assumption. By partially matching the class-conditional density of the positive class to the input density under Pearson divergence, \cite{pulsnar_20} estimated the class prior. \cite{pulsnar_21} proposed a nonparametric class prior estimation technique, AlphaMax, using two-component mixture models. The kernel embedding approaches KM1 and KM2 \cite{pulsnar_22} showed that the algorithm for mixture proportion estimation converges to the true prior under certain assumptions. Estimating the class prior through decision tree induction \cite{pulsnar_23} provides a lower bound for label frequency under the SCAR assumption. DEDPUL \cite{pulsnar_24} assumes SCAR and uses probability densities to estimate $\alpha$ with a compute-intensive EM-algorithm; the method does not produce calibrated probabilities. Confident learning (CL) \cite{pulsnar_25} combines the principle of pruning noisy data, probabilistic thresholds to estimate noise, and sample ranking. Multi-Positive and Unlabeled Learning \cite{pulsnar_26} extends PU learning to multi-class labels. Oversampling the minority class \cite{pulsnar_27,pulsnar_28} or undersampling the majority class are not well-suited approaches for PU data due to contamination in the unlabeled set; \cite{pulsnar_31} uses a re-weighting strategy for imbalanced PU learning. 
Recent studies have focused on labeling/selection bias to address the SCAR assumption not holding. \cite{pulsnar_42, pulsnar_43} used propensity scores to address labeling bias and improve classification. Using the propensity score, based on a subset of features, as the labeling probability for positive examples, \cite{pulsnar_42} reduced the Selected At Random (SAR) problem into the SCAR problem to learn a classification model in the PU setting. The ``Labeling Bias Estimation'' approach was proposed by \cite{pulsnar_48} to label the data by establishing the relationship among the feature variables, ground-truth labels, and labeling conditions. \section{Problem Formulation and Algorithms} \label{sectionProblemFormulation} In this section, we explain: i) the SCAR and SNAR assumptions, ii) our PULSCAR algorithm for SCAR data and PULSNAR algorithm for SNAR data, iii) bandwidth estimation techniques, and iv) method to find the number of clusters in the labeled positive set. Our method to calibrate probabilities and enhance classification performance using PULSCAR/PULSNAR is in Appendix \ref{appendix1} and \ref{appendix2}, respectively. \subsection{SCAR assumption and SNAR assumption} In PU learning settings, a positive or unlabeled example can be represented as a triplet (x, y, s) where ``x'' is a vector of the attributes, ``y'' the actual class, and ``s'' a binary variable representing whether or not the example is labeled. If an example is labeled (s=1), it belongs to the positive class (y=1) i.e., $p(y=1|s=1)=1$. If an example is not labeled (s=0), it can belong to either class. Since only positive examples are labeled, $p(s=1|x, y=0)=0$ \cite{pulsnar_2}. Under the SCAR assumption, a labeled positive is an independent and identically distributed (i.i.d) example from the positive distribution, i.e., positives are selected independently of their attributes. Therefore, $p(s=1|x, y=1) = p(s=1|y=1)$ \cite{pulsnar_2}. For a given dataset, $p(s=1|y=1)$ is a constant and is the fraction of labeled positives. If $|P|$ is the number of labeled positives, $|U|$ is the number of unlabeled examples, and $\alpha$ is the unknown fraction of positives in the unlabeled set, then \begin{gather} p(s=1) = \frac{|P|}{|P|+|U|} \quad \text{and} \quad p(y=1) = \frac{|P|+\alpha |U|}{|P|+|U|} \nonumber\\ \quad \text{Using Bayes' theorem:} \nonumber\\ p(s=1|y=1) = \frac{p(y=1|s=1)p(s=1)}{p(y=1)} \nonumber\\ = \frac{p(s=1)}{p(y=1)} \quad \text{, since $p(y=1|s=1) = 1$} \nonumber\\ = \frac{|P|}{|P|+\alpha |U|} \text{, which is a constant.} \end{gather} On the contrary, under the SNAR assumption, the probability that a positive example is labeled is not independent of its attributes. Stated formally, the assumption is that $p(s=1|x, y=1) \neq p(s=1|y=1)$ i.e. $p(s=1|x, y=1)$ is not a constant, which can be proved by Bayes' rule (Appendix \ref{appendix0}). The SCAR assumption can hold when: a) both labeled and unlabeled positives are not a mixture of subclasses (i.e. they have similar attributes); b) both labeled and unlabeled positives are from \emph{k} subclasses ($1 \dots k$), and the relative proportion of those subclasses is the same in positive and unlabeled sets. Intra-subclass examples will have similar attributes, whereas the inter-subclass examples may not have similar attributes. E.g., in a dataset of patients positive for diabetes, type 1 patients will be in one subclass, and type 2 patients will be in another subclass. 
The SCAR assumption can fail when labeled and unlabeled positives are from \emph{k} subclasses, and the proportion of those subclasses is different in positive and unlabeled sets. Suppose both positive and unlabeled sets have subclass 1 and subclass 2 positives, and in the positive set, their ratio is 30:70. If the ratio is also 30:70 in the unlabeled set, the SCAR assumption will hold. If it were different, say 80:20, the SCAR assumption would not hold. \subsection{Positive and Unlabeled Learning Selected Completely At Random (PULSCAR) Algorithm} Using the rule of total probability: if $Y_1$, $Y_0$ are two disjoint measurable events of a probability space $\mathcal{S}$ such that $\mathcal{S}=Y_1 \cup Y_0$, then for any event $X\in\mathcal{S}$: \begin{gather} p(X) = p(X|Y_1)p(Y_1) + p(X|Y_0)(1-p(Y_1)) \end{gather} \noindent Let $p(Y_1) = a$, then \begin{gather}\label{eq:total_prob} p(X) = a\, p(X|Y_1) + (1-a) p(X|Y_0) \end{gather} Given any ML algorithm, $\mathcal{A}(x)$, that generates [0\dots1] probabilities for the data based on covariates x, let $f_p(x)$, $f_n(x)$, and $f_u(x)$ be probability density functions (PDFs) corresponding to the probability distribution of positives, negatives, and unlabeled respectively. Let $\alpha$ be the unknown proportion of positives in the unlabeled, then \begin{gather} f_u(x) \equiv \alpha f_p(x) + (1-\alpha) f_n(x) \quad \text{(using } \ref{eq:total_prob} \text{).} \end{gather} A key observation is that $\alpha f_p(x)$ cannot exceed $f_u(x)$ anywhere, allowing one to place an upper bound on $\alpha$. PULSCAR estimates $\alpha$ as the value at which the slope of the following error function changes maximally: \begin{gather} f(\alpha) = \log(|\min(f_u(x) - \alpha f_p(x))|+ \epsilon) \text{, where} \nonumber\\ \epsilon = |\min(f_p(x))| \text{, if } \min(f_p(x)) \neq 0 \text{, else } \epsilon = 10^{-10}. \end{gather} We add $\epsilon$ to avoid taking the logarithm of zero, if $\min(f_u(x) - \alpha f_p(x))=0$. We use beta kernel density estimates on ML-predicted class 1 probabilities of positives and unlabeled to estimate $f_p(x)$ and $f_u(x)$. We use a finite difference approximation of the slope of $f(\alpha)$ and locate where that slope changes most. The value of $\alpha$ can also be determined visually by plotting the error function (Figure \ref{fig:density_error_plot}D); the sharp inflection point in the plot represents the value of $\alpha$. Algorithm \ref{alg:pulscar} shows the pseudocode of the PULSCAR algorithm to estimate $\alpha$ using the error function based on probability densities. Algorithm \ref{alg:bandwidthestimate} is a subroutine to compute the beta kernel bandwidth. Full source code for all algorithms is provided within the following GitHub repository: \ \\ \url{https://github.com/unmtransinfo/PULSNAR}. \begin{figure} \caption{\textbf{PULSCAR algorithm visual intuition}. PULSCAR finds the largest $\alpha$ such that $f_u(x) - \alpha f_p(x)$ remains nonnegative on [0 \dots 1]. A) Kernel density estimates for simulated data with $\alpha=10\%$ positives in the unlabeled set -- estimated negative density (blue) nearly equals the ground truth (green). B) Overweighting the positive density by $\alpha=15\%$ results in the estimated negative density (blue), $f_u(x) - \alpha f_p(x)$ dropping below zero. C) Underweighting the positive density by $\alpha=5\%$ results in the estimated negative density (blue) being higher than the ground truth (green).
D) Error function with estimated $\alpha=10.68\%$ selected where the finite-difference estimate of the slope changes maximally -- very close to ground truth $\alpha=10\%$.} \label{fig:density_error_plot} \end{figure} \begin{algorithm} \caption{PULSCAR Algorithm} \label{alg:pulscar} \textbf{Input}: X ($X_p \cup X_u$), y ($y_p \cup y_u$), n\_bins \\ \textbf{Output}: estimated $\alpha$ \begin{algorithmic}[1] \STATE predicted\_probabilities (p) $\leftarrow$ $\mathcal{A}(X,y)$ \STATE p0 $\leftarrow$ p[y == 0] \STATE p1 $\leftarrow$ p[y == 1] \STATE estimation\_range $\leftarrow$ [0, 0.0001, 0.0002, ..., 1.0] \STATE bw $\leftarrow$ estimate\_bandwidth\_pu(p, n\_bins) \STATE $D_u$ $\leftarrow$ beta\_kernel(p0, bw, n\_bins) \STATE $D_p$ $\leftarrow$ beta\_kernel(p1, bw, n\_bins) \STATE $\epsilon \leftarrow |\min(D_p)|$ \IF{$\epsilon$ = 0} \STATE $\epsilon \leftarrow 10^{-10}$ \ENDIF \STATE len $\leftarrow$ length(estimation\_range) \STATE selected\_range $\leftarrow$ estimation\_range[2:len] \STATE $\alpha \leftarrow$ estimation\_range \STATE f($\alpha$) $\leftarrow \log(|\min(D_u - \alpha D_p)| + \epsilon$) \STATE d $\leftarrow$ f'($\alpha$) \STATE i $\leftarrow$ where the value of d changes maximally \STATE \textbf{return} selected\_range[i] \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{estimate\_bandwidth\_pu} \label{alg:bandwidthestimate} \textbf{Input}: predicted\_probabilities, n\_bins \\ \textbf{Output}: bandwidth \begin{algorithmic}[1] \STATE preds $\leftarrow$ predicted\_probabilities \STATE bw $\in$ [0.001, 0.5] \STATE $D_{hist}$ $\leftarrow$ histogram(preds, n\_bins, density=True) \STATE $D_{beta}$ $\leftarrow$ beta\_kernel(preds, bw, n\_bins) \STATE \textbf{return} optimize(MeanSquaredError($D_{hist}$, $D_{beta}$)) \end{algorithmic} \end{algorithm} \subsection{Kernel Bandwidth estimation} A beta kernel estimator is used to create a smooth density estimate of both the positive and unlabeled ML probabilities, generating distributions over [$0 \dots 1$], free of the problematic boundary biases of kernels (e.g., Gaussian) whose range extends outside that interval, adopting the approach of \cite{pulsnar_38}. Another problem with (faster) Gaussian kernel density implementations is that they often use polynomial approximations that can generate negative values in regions of low support, dramatically distorting $\alpha$ estimates. The beta PDF is as follows \cite{pulsnar_41}: \begin{gather} f(x, a, b) = \frac{\Gamma(a+b) x^{a-1} (1-x)^{b-1}} {\Gamma(a) \Gamma(b)}, \end{gather} for x $\in$ [0,1], where $\Gamma$ is the gamma function, $a=1+\frac{z}{bw}$ and $b=1+\frac{1-z}{bw}$, with $z$ the bin location, and $bw$ the bandwidth. Kernel bandwidth selection can also significantly influence $\alpha$ estimates: too narrow of a bandwidth can result in outliers driving poor estimates, and too wide of a bandwidth prevents distinguishing between distributions. We use a histogram bin count heuristic to generate a histogram density, then optimize the beta distribution bandwidth to best fit that histogram density. \subsubsection{Bin count} Our implementation supports five well-known methods to determine the number of histogram bins: square root, Sturges' rule, Rice's rule, Scott's rule, and Freedman–Diaconis (FD) rule \cite{pulsnar_35}. \subsubsection{Bandwidth estimation} We compute a histogram density using a bin count heuristic and a beta kernel density estimate at those bin centers using the ML probabilities of both the positive and unlabeled examples.
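For concreteness, the density and error-function steps of Algorithm \ref{alg:pulscar} can be sketched in a few lines of Python (an illustrative rendering, not the reference implementation in our released package; the helper names and equispaced bin centers are our choices, and the bandwidth \texttt{bw} is assumed to have been chosen as described next):

\begin{verbatim}
import numpy as np
from scipy.stats import beta

def beta_kernel(preds, bw, n_bins):
    # Density of `preds` at n_bins bin centers: at center z, average the
    # beta PDF with a = 1 + z/bw, b = 1 + (1-z)/bw over the data points.
    centers = (np.arange(n_bins) + 0.5) / n_bins
    return np.array([beta.pdf(preds, 1 + z/bw, 1 + (1 - z)/bw).mean()
                     for z in centers])

def estimate_alpha(p1, p0, bw, n_bins=100, step=1e-4):
    # PULSCAR core: return alpha where the slope of
    # f(alpha) = log(|min(D_u - alpha*D_p)| + eps) changes maximally.
    D_p, D_u = beta_kernel(p1, bw, n_bins), beta_kernel(p0, bw, n_bins)
    eps = abs(D_p.min()) if D_p.min() != 0 else 1e-10
    alphas = np.arange(0.0, 1.0 + step, step)
    m = (D_u[None, :] - alphas[:, None] * D_p[None, :]).min(axis=1)
    f = np.log(np.abs(m) + eps)
    d = np.diff(f) / step            # finite-difference slope f'(alpha)
    return alphas[2 + np.argmax(np.diff(d))]
\end{verbatim}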
We find the global minimum of the mean squared error (MSE) between the histogram and beta kernel densities using the scipy \textit{differential\_evolution()} optimizer \cite{pulsnar_41}, solving for the best bandwidth in the range [0.001...0.5]. That bandwidth is chosen for kernel density estimation in the PULSCAR algorithm. All experiments herein use MSE as the error metric, but alternatively, the Jensen-Shannon distance can be employed. \subsection{Positive and Unlabeled Learning Selected Not At Random (PULSNAR) Algorithm} We propose a new PU learning algorithm (PULSNAR) to estimate the $\alpha$ in SNAR data, i.e., labeled positives are not selected completely at random. PULSNAR uses a divide-and-conquer strategy for the SNAR data. It converts a SNAR problem into several sub-problems using an unsupervised learning method (clustering), each of which better satisfies the SCAR assumption; then applies the PULSCAR algorithm to those sub-problems. The final alpha is computed by summing the alpha returned by the PULSCAR algorithm for each cluster. \begin{gather} \alpha = (|U| \alpha_1 + |U| \alpha_2 + ... + |U| \alpha_c)/|U| \nonumber\\ \alpha = \alpha_1 + \alpha_2 + ... + \alpha_c, \quad \text{$c=$ number of clusters} \end{gather} Figure \ref{fig:PULSNAR_flowchart} visualizes the PULSNAR algorithm, and Algorithm \ref{alg:pulsnar} provides its pseudocode. \begin{algorithm}[tb] \caption{PULSNAR Algorithm} \label{alg:pulsnar} \textbf{Input}: X ($X_p \cup X_u$), y ($y_p \cup y_u$), n\_bins \\ \textbf{Output}: estimated $\alpha$ \begin{algorithmic}[1] \STATE feature\_importance ($v_1...v_k$), imp\_features ($x_1...x_k$) $\leftarrow$ $\mathcal{A}(X,y)$ \\ \STATE $x'_1...x'_k$ $\leftarrow$ $x_1 v_1...x_k v_k$ \STATE $X'_p$ $\leftarrow$ $X_p$[$x'_1...x'_k$] \STATE clusters $s_1...s_c$ $\leftarrow$ GMM($X'_p$) \STATE $\alpha$ $\leftarrow$ 0 \FOR{c in $s_1...s_c$} \STATE X' $\leftarrow$ $X_p[c] \cup X_u$ \STATE y' $\leftarrow$ $y_p[c] \cup y_u$ \STATE $\alpha$ $\leftarrow$ $\alpha$ + PULSCAR(X', y', n\_bins) \ENDFOR \STATE \textbf{return} $\alpha$ \end{algorithmic} \end{algorithm} \begin{figure} \caption{\textbf{Schematic of PULSNAR algorithm}. An ML model is trained and tested with 5-fold CV on all positive and unlabeled examples. The important covariates that the model used are scaled by their importance value. Positives are divided into c clusters using the scaled important covariates. c ML models are trained and tested with 5-fold CV on the records from a cluster and all unlabeled records. We estimate the proportions ($\alpha_1...\alpha_c$) of each subtype of positives in the unlabeled examples using PULSCAR. The sum of those estimates gives the overall fraction of positives in the unlabeled set. P = positive set, U = Unlabeled set.} \label{fig:PULSNAR_flowchart} \end{figure} \subsubsection{Clustering rationale} Suppose both positive and unlabeled sets contain positives from $k$ subclasses ($1\dots k$). With selection bias (SNAR), the subclass proportions will differ between the sets, and thus the PDF of the labeled positives cannot be scaled by a uniform $\alpha$ to estimate positives among the unlabeled. The smallest subclass would drive an $\alpha$ underestimate with PULSCAR. To address this, we apply clustering to the labeled positives to split them into $c$ clusters. 
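In code, the per-cluster loop of Algorithm \ref{alg:pulsnar} reduces to a short sketch (assuming the \texttt{estimate\_alpha} helper from the previous sketch, a user-supplied \texttt{probs\_fn} that trains a classifier on one cluster of positives plus all unlabeled examples and returns their class 1 probabilities, and a precomputed bandwidth; the GMM settings follow those described below):

\begin{verbatim}
from sklearn.mixture import GaussianMixture

def pulsnar_alpha(probs_fn, X_p, X_u, c, bw, seed=0):
    # Cluster the labeled positives into c groups, estimate alpha_j for
    # each cluster vs. all unlabeled examples, and sum the estimates.
    labels = GaussianMixture(n_components=c, covariance_type="full",
                             max_iter=250,
                             random_state=seed).fit_predict(X_p)
    alpha = 0.0
    for j in range(c):
        p1, p0 = probs_fn(X_p[labels == j], X_u)
        alpha += estimate_alpha(p1, p0, bw)
    return alpha
\end{verbatim}

(In the full algorithm, the positives are clustered on the important features scaled by their \emph{gain} scores, as described below.)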
Clustering separates subclasses of positives, and if the assumption that subclass membership drives selection bias holds, PU models comprising examples from one cluster and the unlabeled set will be more likely to follow the SCAR assumption. Applying PULSCAR to each cluster of positives versus the unlabeled results in better estimates of the proportions of similar unlabeled positives (Figure \ref{fig:PULSNAR_flowchart}). \subsubsection{Determining the number of clusters in the positive set} We build an XGBoost \cite{pulsnar_5} model on all positive and unlabeled examples to determine the important features and their \emph{gain} scores. A \emph{gain} score measures the magnitude of the feature's contribution to the model. We select all labeled positives and then cluster them on those features scaled by their corresponding \emph{gain} score, using scikit-learn's Gaussian mixture model (GMM) method. To establish the number of clusters (n\_components), we iterate n\_components over $1\ldots25$ and compute the Bayesian information criterion (BIC)\cite{pulsnar_6} for each clustering model. We use max\_iter=250 and covariance\_type=``full''. The other parameters are used with their default values. We implemented the ``Knee Point Detection in BIC'' algorithm to find the number of clusters in the labeled positives \cite{pulsnar_36}. \subsection{Calculating calibrated probabilities} The approach to calibrate the ML-predicted probabilities of positive and unlabeled examples in the SCAR and SNAR data is explained in Appendix \ref{appendix1}. \subsection{Improving classification performance} An approach to improving PULSCAR and PULSNAR classification, based on flipping the highest probability $\alpha |U|$ unlabeled examples to 1, is explained in Appendix \ref{appendix2}. \section{Experiments} \label{sectionExperiments} We evaluated our proposed PU learning algorithms in terms of $\alpha$ estimates, six classification performance metrics, and probability calibration. We used real-world ML benchmark datasets and synthetic data for our experiments. For real-world data, we used Bank \cite{pulsnar_7} and KDD Cup 2004 particle physics \cite{pulsnar_8} datasets as SCAR data and Statlog (Shuttle) \cite{pulsnar_9} and Firewall datasets \cite{pulsnar_10} as SNAR data. The synthetic (SCAR and SNAR) datasets were generated using the scikit-learn function \emph{make\_classification()} \cite{pulsnar_11}. We used XGBoost as a binary classifier in our proposed algorithms. To train the classifier on the imbalanced data, we used the \emph{scale\_pos\_weight} parameter of XGBoost to scale the weight of the labeled positive examples by the factor $s=\frac{|U|}{|P|}$. We also compared our methods with five recently published methods for PU learning: KM1 and KM2 \cite{pulsnar_22}, TICE \cite{pulsnar_23}, DEDPUL \cite{pulsnar_24}, and CleanLab \cite{pulsnar_25}. KM1, KM2, and TICE algorithms were not scalable and failed to execute on large datasets, so we used smaller synthetic datasets to compare against these methods. We compared PULSNAR only with DEDPUL on large synthetic datasets (Appendix \ref{appendix3}). Also, \cite{pulsnar_24} previously demonstrated that DEDPUL outperformed KM and TICE algorithms on several UCI (University of California Irvine) ML benchmark and synthetic datasets. \subsection{Synthetic data} We generated SCAR and SNAR PU datasets with different fractions of positives (1\%, 5\%, 10\%, 20\%, 30\%, 40\%, and 50\%) among the unlabeled examples to test the effectiveness of our proposed algorithms.
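As an illustration, a single SCAR instance of such a PU dataset can be assembled as follows (a sketch with illustrative sizes; \texttt{n\_informative} is our choice here, and the exact settings used in each experiment are given below):

\begin{verbatim}
import numpy as np
from sklearn.datasets import make_classification

def make_scar_pu(n_pos=2000, n_unl=6000, frac=0.10, seed=0):
    # PU dataset: labeled positives plus an unlabeled set containing
    # frac positives (labels flipped to 0) and 1 - frac negatives.
    n_hidden = int(frac * n_unl)
    X, y = make_classification(n_samples=2 * (n_pos + n_unl),
                               n_features=50, n_informative=25,
                               class_sep=0.3, random_state=seed)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    keep = np.concatenate([pos[:n_pos + n_hidden],
                           neg[:n_unl - n_hidden]])
    X = X[keep]
    s = np.zeros(len(keep), dtype=int)  # s = 1 iff labeled positive
    s[:n_pos] = 1                       # hidden positives keep s = 0
    return X, s
\end{verbatim}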
For each fraction, we generated 40 datasets using sklearn's \emph{make\_classification()} function with random seeds 0-39. The \emph{class\_sep} parameter of the function was used to specify the separability of data classes. Values nearer to 1.0 make the classification task easier; we used class\_sep=0.3 to create difficult classification problems. \subsubsection{SCAR data} The datasets contained 2,000 positives (class 1) and 6,000 unlabeled (class 0) examples with 50 continuous features. The unlabeled set comprised $k\%$ positive examples with labels flipped to 0 and $(100-k)\%$ negative examples. \subsubsection{SNAR data} We generated datasets with 6 labels (0-5), defining `0' as negative and 1-5 as positive subclasses. These datasets contained 2,000 positives (400 from each positive subclass) and 6,000 unlabeled examples with 50 continuous features. The unlabeled set comprised k\% positive examples with labels (1-5) flipped to 0 and (100-k)\% negative examples. The unlabeled positives were markedly SNAR, with the 5 subclasses comprising 1/31, 2/31, 4/31, 8/31, and 16/31 of the unlabeled positives. (e.g., in the unlabeled set with 20\% positives, negative: 4,800, label 1 positive: 39, label 2 positive: 77, label 3 positive: 155, label 4 positive: 310, label 5 positive: 619). \subsection{SCAR ML Benchmark Datasets} \subsubsection{UCI Bank dataset} The dataset has 45,211 records (class 1: 5,289, class 0: 39,922) with 16 features. This dataset is a good example of data with class imbalance and mixed features. Since the features contain both numerical and categorical values, they were one-hot encoded \cite{pulsnar_49} using the scikit-learn function \emph{OneHotEncoder()} \cite{pulsnar_11}. The encoder derives the categories based on the unique values in each feature, resulting in 9,541 features. The ML classifier was applied to the encoded features. \subsubsection{KDD Cup 2004 Particle Physics dataset} The dataset contains two types of particles generated in high-energy collider experiments; 50,000 examples (class 1: 24,861, class 0: 25,139) with 78 numerical attributes. This dataset is a good example of balanced data. In both datasets, class 1 records were used as positive, and class 0 records were used as unlabeled for the ML model. To add k\% positive examples to the unlabeled set, the labels of $m$ randomly selected positive records were flipped from 1 to 0, where $m = \frac{k |U|}{100-k}$. \subsection{SNAR ML Benchmark Datasets} \subsubsection{UCI Statlog (Shuttle) Dataset} The dataset contains 43,500 records (class 1: 34,108, class 2: 37, class 3: 132, class 4: 6,748, class 5: 2,458, class 6: 6, class 7: 11) with 9 numerical attributes. This dataset is an example of data with multiclass and class imbalance. We used class 1 as unlabeled examples and the rest of the records as subclasses of positive examples for the ML model (positive: 9,392, unlabeled: 34,108). \subsubsection{UCI Firewall dataset} It is a multiclass dataset containing 65,532 records (`allow': 37,640, `deny': 14,987, `drop': 12,851, `reset-both': 54) with 12 numerical attributes. Class `allow' was used as unlabeled examples, and the others (`deny', `drop', `reset-both') were used as subclasses of positive examples for the ML model (positive: 27,892, unlabeled: 37,640). In both datasets, the majority of positives are from two classes (\emph{shuttle: class 4, 5; firewall: `deny', `drop'}). 
So, to add $k\%$ positive examples to the unlabeled set, we randomly selected some examples from the minor positive classes and the remaining examples from two major positive classes in equal proportion. Thus, the proportion of positives in the positive set differed from the unlabeled set. \subsection{Estimation of fraction of positives among unlabeled examples, $\alpha$} We applied the PULSCAR algorithm to both SCAR and SNAR data, and the PULSNAR algorithm only to SNAR data, to estimate $\alpha$. \subsubsection{Using the PULSCAR algorithm} To find the 95\% confidence interval (CI) on estimation, we ran XGBoost with 5-fold cross-validation (CV) for 40 random instances of each dataset generated (or selected from benchmark data) using 40 random seeds. Each iteration's class 1 predicted probabilities of positives and unlabeled were used to calculate the value of $\alpha$. \subsubsection{Using the PULSNAR algorithm} The labeled positives were divided into \emph{c} clusters to get homogeneous subclasses of labeled positives. The XGBoost ML models were trained and tested with 5-fold CV on data from each cluster and all unlabeled records. For each cluster, $\alpha$ was estimated by applying the PULSCAR method to class 1 predicted probabilities of positives from the cluster and all unlabeled examples. The overall proportion was calculated by summing the estimated $\alpha$ for each cluster. To compute the 95\% CI on the estimation, PULSNAR was repeated 40 times on data generated/selected using 40 random seeds. \section{Results} \label{sectionResults} \subsection{Synthetic datasets} Figure \ref{fig:synthetic_scar_non_scar_data} shows the $\alpha$ estimated by PU learning algorithms for synthetic datasets. TICE overestimated $\alpha$ for all fractions in both SCAR and SNAR datasets. For SCAR datasets, only PULSCAR returned close estimates for all fractions; DEDPUL overestimated for 1\%; KM1 and KM2 underestimated for 50\%; CleanLab underestimated for larger $\alpha$ (10-50\%). For SNAR datasets, only PULSNAR's estimates were close to the true $\alpha$; other algorithms underestimated for larger $\alpha$ (40-50\%). Figure \ref{fig:dedpulVSpulsnar} in Appendix \ref{appendix3} shows the $\alpha$ estimated by DEDPUL and PULSNAR on large SNAR datasets with different class imbalances. As the class imbalance increased, the performance of DEDPUL dropped, especially for larger fractions. The estimated $\alpha$ by the PULSNAR method was close to the true $\alpha$ for all fractions and sample sizes. \begin{figure} \caption{\textbf{KM1, KM2, TICE, CleanLab, DEDPUL, PULSCAR, and PULSNAR evaluated on SCAR and SNAR synthetic datasets}. The bar represents the mean value of the estimated $\alpha$, with 95\% confidence intervals for estimated $\alpha$.} \label{fig:synthetic_scar_non_scar_data} \end{figure} \subsection{ML Benchmark datasets} \subsubsection{SCAR data} Figure \ref{fig:uci_scar_data} shows the $\alpha$ estimated by PU learning algorithms for the KDD Cup 2004 particle physics and UCI bank datasets. For KDD Cup, estimates by PULSCAR and DEDPUL were close to the true answers for all fractions; TICE overestimated for all fractions; CleanLab overestimated for 1-30\%. For Bank, only PULSCAR returned correct estimates for all fractions; other algorithms overestimated for all fractions. \subsubsection{SNAR data} Figure \ref{fig:uci_non_scar_data} shows the $\alpha$ estimated by PU learning algorithms for UCI Shuttle and UCI Firewall datasets. 
For the Shuttle dataset, only PULSNAR's estimates were close to the true fractions; other algorithms either overestimated or underestimated. For the Firewall dataset, TICE overestimated, and CleanLab underestimated for all fractions; PULSNAR's estimates were within $\pm 20\%$ of the true $\alpha$; DEDPUL and PULSCAR underestimated for 40\%. \begin{figure} \caption{\textbf{TICE, CleanLab, DEDPUL, and PULSCAR evaluated on SCAR KDD Cup 2004 particle physics and UCI Bank datasets}. The bar represents the mean value of the estimated $\alpha$, with 95\% confidence intervals for estimated $\alpha$. KM1 and KM2 failed to execute on both datasets. As TICE was taking several hours to finish one iteration on the Bank dataset, the mean $\alpha$ was computed using 5 iterations, and the standard error was set to 0.} \label{fig:uci_scar_data} \end{figure} \begin{figure} \caption{\textbf{KM1, KM2, TICE, CleanLab, DEDPUL, PULSCAR, and PULSNAR evaluated on SNAR UCI Shuttle and Firewall datasets}. The bar represents the mean value of the estimated $\alpha$, with 95\% confidence intervals for estimated $\alpha$. KM1 and KM2 failed to execute on the Firewall dataset. As KM1 and KM2 were taking several hours to finish one iteration on the Shuttle dataset, the mean $\alpha$ was computed using 5 iterations, and the standard error was set to 0.} \label{fig:uci_non_scar_data} \end{figure} \subsection{Probability calibration} Appendix \ref{appendix1results} shows the calibration curves generated using the unblinded labels and isotonically calibrated probabilities of positive and unlabeled examples or only unlabeled examples in the SCAR and SNAR data. \subsection{Classification performance metrics} Appendix \ref{appendix2results} shows substantial improvements in six classification performance metrics when applying PULSCAR and PULSNAR versus XGBoost alone. \section{Discussion and Conclusion} \label{sectionConclusion} This paper presented novel PU learning algorithms to estimate the proportion of positives among unlabeled examples in both SCAR and SNAR data with/without class imbalance. Preliminary work (not shown) suggests PULSNAR $\alpha$ estimation is robust to overestimating the number of clusters. Our synthetic experiments were run on difficult classification tasks with low separability. For SNAR data, with true $\alpha=1\%$, when we increased \textit{class\_sep} from 0.3 to 0.5, the PULSNAR $\alpha$ estimate improved from $1.6\%$ (Figure \ref{fig:synthetic_scar_non_scar_data}) to 0.98\% (data not shown). Experimentally, we showed that our proposed methods outperformed state-of-the-art methods on synthetic and real-world SCAR and SNAR datasets. PU learning methods based on the SCAR assumption generally give poor $\alpha$ estimates on SNAR data. We demonstrated that after applying PULSCAR/PULSNAR, classifier performance, including calibration, improved significantly. Better $\alpha$ estimates open up new horizons in PU Learning. \appendix \onecolumn \section{Proof: positives are not independent of their attributes under the SNAR Assumption} \label{appendix0} Under the SNAR assumption, the probability that a positive example is labeled is not independent of its attributes. Stated formally, the assumption is that $p(s=1|x, y=1) \neq p(s=1|y=1)$, i.e., $p(s=1|x, y=1)$ is not a constant.
\textbf{Proof:} \begin{equation} \begin{split} p(s=1|x, y=1) & = \frac{p(s=1, y=1|x)}{p(y=1|x)} = \frac{p(s=1|x)}{p(y=1|x)} \nonumber \text{ , since $s=1$ implies $y=1$} \\ & = \frac{p(x|s=1)p(s=1)}{p(x)\,p(y=1|x)} \nonumber \text{ , using Bayes' rule on $p(s=1|x)$} \\ & = \text{a function of $x$ in general.} \end{split} \end{equation} \section{Algorithm for calibrating probabilities} \label{appendix1} Algorithm \ref{alg:calibprobs} shows the complete pseudocode to calibrate the machine learning (ML) model predicted probabilities. Once $\alpha$ is known, we seek to transform the original class 1 probabilities so that their sum is equal to $\alpha |U|$ among the unlabeled or $|P| + \alpha |U|$ among positive and unlabeled, and that they are well-calibrated. Our approach is to probabilistically flip $\alpha |U|$ labels of unlabeled to positive (from 0 to 1) in such a way as to match the PDF of labeled positives across 100 equispaced bins over $[0\ldots1]$, then fit a logistic or isotonic regression model on those labels versus the probabilities to generate the transformed probabilities. To determine the number of unlabeled examples that need to be flipped in each bin, we compute the normalized histogram density, $D\_hist$, for the labeled positives with 100 bins and then multiply $\alpha |U|$ by $D\_hist$. The unlabeled examples are also divided into 100 bins based on their predicted probabilities. Starting from the bin with the highest probability ($p=1$), we randomly select \emph{k} examples and flip their labels from 0 to 1, where \emph{k} is the number of unlabeled examples that need to be flipped in the bin. If the number of records ($n_1$) that need to be flipped in a bin is more than the number of records ($n_2$) present in the bin, the difference ($n_1-n_2$) is added to the number of records to be flipped in the next bin, resulting in $\alpha |U|$ flips. After flipping the labels of $\alpha |U|$ unlabeled examples from 0 to 1, we fit an isotonic or sigmoid regression model on the ML-predicted class 1 probabilities with the updated labels to obtain calibrated probabilities. The above calibration approach applies to both SCAR and SNAR data. For the SNAR data, the PULSNAR algorithm divides labeled positive examples into \emph{k} clusters and estimates the $\alpha$ for each cluster. For each cluster, the ML-predicted class 1 probabilities of the examples (positives from the cluster and all unlabeled examples or only unlabeled examples) are calibrated using the estimated $\alpha$ for the cluster. Since, for each cluster, PULSNAR uses all unlabeled examples, each unlabeled example has \emph{k} ML-predicted/calibrated probabilities. The final ML-predicted/calibrated probability of an unlabeled example is calculated using Equation \ref{eq:combined_probs}: \begin{equation}\label{eq:combined_probs} p = 1 - (1-p_1)(1-p_2)\dots(1-p_k) \end{equation} where $p_k$ is the probability of an unlabeled example from cluster \emph{k}.
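In code, Equation \ref{eq:combined_probs} is a one-line reduction over the per-cluster probabilities (sketch):

\begin{verbatim}
import numpy as np

def combine_cluster_probs(P):
    # P has shape (n_unlabeled, k): one probability per cluster.
    return 1.0 - np.prod(1.0 - np.asarray(P), axis=1)
\end{verbatim}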
\begin{algorithm} \caption{calibrate\_probabilities} \label{alg:calibprobs} \textbf{Input}: predicted\_probs, labels, n\_bins, calibration\_method, calibration\_data, $\alpha$ \\ \textbf{Output}: calibrated\_probs \begin{algorithmic}[1] \STATE p0 $\leftarrow$ predicted\_probs[labels == 0] \STATE p1 $\leftarrow$ predicted\_probs[labels == 1] \STATE y0 $\leftarrow$ labels[labels == 0] \STATE y1 $\leftarrow$ labels[labels == 1] \STATE $D_{hist}$ $\leftarrow$ histogram(p1, n\_bins, density=True) \STATE unlab\_pos\_count\_in\_bin $\leftarrow$ $\alpha$ $|p0|$ $D_{hist}$ \STATE p0\_bins $\leftarrow$ split unlabeled examples into n\_bins using p0 \FOR{k $\leftarrow$ [n\_bins $\dots$ 1]} \STATE $n_1$ $\leftarrow$ unlab\_pos\_count\_in\_bin[k] \STATE $n_2$ $\leftarrow$ p0\_bins[k] \IF{$n_1 > n_2$} \STATE $\hat{y0}$ $\leftarrow$ flip labels (y0) of $n_2$ examples from 0 to 1 in bin k \STATE unlab\_pos\_count\_in\_bin[k-1] $\leftarrow$ unlab\_pos\_count\_in\_bin[k-1] + ($n_1-n_2$) \ELSE \STATE $\hat{y0}$ $\leftarrow$ flip labels (y0) of random $n_1$ examples from 0 to 1 in bin k \ENDIF \ENDFOR \IF{calibration\_data == `PU'} \STATE p, y $\leftarrow$ $p1 \cup p0$, $y1 \cup \hat{y0}$ \ELSIF{calibration\_data == `U'} \STATE p, y $\leftarrow$ p0, $\hat{y0}$ \ENDIF \IF{calibration\_method is `sigmoid'} \STATE $\hat{p}$ $\leftarrow$ LogisticRegression(p, y) \ELSIF{calibration\_method is `isotonic'} \STATE $\hat{p}$ $\leftarrow$ IsotonicRegression(p, y) \ENDIF \STATE \textbf{return} $\hat{p}$ \end{algorithmic} \end{algorithm} \subsection{Experiments and Results} \label{appendix1results} We used synthetic SCAR and SNAR datasets and KDD Cup SCAR dataset to test our calibration algorithm. \textbf{SCAR datasets: }After estimating the $\alpha$ using the PULSCAR algorithm, we applied Algorithm \ref{alg:calibprobs} to calibrate the ML-predicted probabilities. To calculate the calibrated probabilities for both positive and unlabeled (PU) examples, we applied isotonic regression to the ML-predicted class 1 probabilities of PU examples with labels of positives and updated labels of unlabeled (of which $\alpha|U|$ were flipped per Algorithm \ref{alg:calibprobs}). We applied isotonic regression to the unlabeled's predicted probabilities with their updated labels to calculate the calibrated probabilities only for the unlabeled. \textbf{SNAR datasets: } Using the PULSNAR algorithm, the labeled positive examples were divided into \emph{k} clusters. For each cluster, after estimating the $\alpha$, Algorithm \ref{alg:calibprobs} was used to calibrate the ML-predicted probabilities. To calculate the calibrated probabilities for positives from a cluster and all unlabeled examples, we applied isotonic regression to their ML-predicted class 1 probabilities with labels of positives from the cluster and updated labels of unlabeled (of which $\alpha_j |U|$ were flipped for cluster $j=1\dots k$, see Algorithm \ref{alg:calibprobs}). We applied isotonic regression to the unlabeled's predicted probabilities with their updated labels to calculate the calibrated probabilities only for the unlabeled. Thus, each unlabeled example had \emph{k} calibrated probabilities. We computed the final calibrated probability for each unlabeled example using Formula \ref{eq:combined_probs}. 
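For reference, a condensed Python rendering of the unlabeled-only branch of Algorithm \ref{alg:calibprobs} might look as follows (illustrative: it matches bin counts rather than normalized densities, and the packaged implementation additionally handles the PU branch and the sigmoid option):

\begin{verbatim}
import numpy as np
from sklearn.isotonic import IsotonicRegression

def calibrate_unlabeled(p1, p0, alpha, n_bins=100, seed=0):
    # Spread alpha*|U| flips over bins in proportion to the histogram
    # of the labeled positives' scores, highest-probability bin first.
    rng = np.random.default_rng(seed)
    hist, edges = np.histogram(p1, bins=n_bins, range=(0, 1))
    target = alpha * len(p0) * hist / hist.sum()
    bin_idx = np.clip(np.digitize(p0, edges) - 1, 0, n_bins - 1)
    y0 = np.zeros(len(p0), dtype=int)
    carry = 0.0
    for b in range(n_bins - 1, -1, -1):
        members = np.flatnonzero(bin_idx == b)
        want = target[b] + carry
        k = max(0, min(int(round(want)), len(members)))
        carry = want - k                 # overflow rolls to the next bin
        y0[rng.choice(members, size=k, replace=False)] = 1
    iso = IsotonicRegression(out_of_bounds="clip")
    return iso.fit_transform(p0, y0)     # calibrated probabilities
\end{verbatim}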
Figures \ref{fig:PU_scar_syn_pu_data_calibration}, \ref{fig:PU_scar_syn_u_data_calibration}, \ref{fig:PU_snar_syn_pu_data_calibration}, \ref{fig:PU_snar_syn_u_data_calibration}, \ref{fig:PU_scar_kdd_pu_data_calibration} and \ref{fig:PU_scar_kdd_u_data_calibration} show the calibration curves generated using the unblinded labels and isotonically calibrated (red)/ uncalibrated (blue) probabilities. When both positive and unlabeled examples were used to calculate calibrated probabilities, the calibration curve followed the y=x line (well-calibrated probabilities). When only unlabeled examples were used, the calibration curve for 1\% did not follow the y=x line, presumably due to the ML model being biased toward negatives, given the small $\alpha$. Also, the calibration curves for the SCAR data followed the y=x line more closely than the calibration curves for the SNAR data. It is due to the fact that the final probability of an unlabeled example in the SNAR data is computed using its \emph{k} probabilities from \emph{k} clusters. So, a poor probability estimate from even one cluster can influence the final probability of an unlabeled example. \begin{figure} \caption{\textbf{Calibration curves for Synthetic SCAR datasets (both positive and unlabeled examples)}. Synthetic datasets were generated with different fractions of positives (1\%, 5\%, 10\%, 20\%, 30\%, and 50\%) among the unlabeled examples. class\_sep=0.3, number of attributes=100, $|P| = 5,000$ and $|U| = 50,000$. Calibration curves were generated using both positive and unlabeled examples (Uncalibrated probabilities - blue, calibrated probabilities - red).} \label{fig:PU_scar_syn_pu_data_calibration} \end{figure} \begin{figure} \caption{\textbf{Calibration curves for Synthetic SCAR datasets (only unlabeled examples)}. Synthetic datasets were generated with different fractions of positives (1\%, 5\%, 10\%, 20\%, 30\%, and 50\%) among the unlabeled examples. class\_sep=0.3, number of attributes=100, $|P| = 5,000$ and $|U| = 50,000$. Calibration curves were generated using only unlabeled examples (Uncalibrated probabilities - blue, calibrated probabilities - red).} \label{fig:PU_scar_syn_u_data_calibration} \end{figure} \begin{figure} \caption{\textbf{Calibration curves for Synthetic SNAR datasets (both positive and unlabeled examples)}. Synthetic datasets were generated with different fractions of positives (1\%, 5\%, 10\%, 20\%, 30\%, and 50\%) among the unlabeled examples. class\_sep=0.3, number of attributes=100, number of positive subclasses=5, $|P|$ = 20,000 (4,000 from each subclass) and $|U|$ = 50,000. Calibration curves were generated using both positive and unlabeled examples (Uncalibrated probabilities - blue, calibrated probabilities - red).} \label{fig:PU_snar_syn_pu_data_calibration} \end{figure} \begin{figure} \caption{\textbf{Calibration curves for Synthetic SNAR datasets (only unlabeled examples)}. Synthetic datasets were generated with different fractions of positives (1\%, 5\%, 10\%, 20\%, 30\%, and 50\%) among the unlabeled examples. class\_sep=0.3, number of attributes=100, number of positive subclasses=5, $|P|$ = 20,000 (4,000 from each subclass) and $|U|$ = 50,000. Calibration curves were generated using only unlabeled examples (Uncalibrated probabilities - blue, calibrated probabilities - red).} \label{fig:PU_snar_syn_u_data_calibration} \end{figure} \begin{figure} \caption{\textbf{Calibration curves for SCAR KDD Cup 2004 particle physics dataset (both positive and unlabeled examples)}. 
Unlabeled sets contained 1\%, 5\%, 10\%, 20\%, 30\%, and 40\% positive examples. Calibration curves were generated using both positive and unlabeled examples (Uncalibrated probabilities - blue, calibrated probabilities - red).} \label{fig:PU_scar_kdd_pu_data_calibration} \end{figure} \begin{figure} \caption{\textbf{Calibration curves for SCAR KDD Cup 2004 particle physics dataset (only unlabeled examples)}. Unlabeled sets contained 1\%, 5\%, 10\%, 20\%, 30\%, and 40\% positive examples. Calibration curves were generated using only unlabeled examples (Uncalibrated probabilities - blue, calibrated probabilities - red).} \label{fig:PU_scar_kdd_u_data_calibration} \end{figure} \section{Improving classification performance with PULSCAR and PULSNAR} \label{appendix2} Algorithm \ref{alg:classification_metrics} shows the complete pseudocode to improve classification performance with PULSCAR and PULSNAR. The algorithm returns the following six classification metrics: \emph{Accuracy, AUC-ROC, Brier score (BS), F1, Matthews correlation coefficient (MCC)}, and \emph{Average precision score (APS)}. The approach to enhancing the classification performance is as follows: \textbf{Using PULSCAR: } After estimating the $\alpha$, the class 1 predicted probabilities of only unlabeled examples are calibrated using Algorithm \ref{alg:calibprobs}. The calibrated probabilities of the unlabeled examples are sorted in descending order, and the labels of the top $\alpha |U|$ unlabeled examples with the highest calibrated probabilities are flipped from 0 to 1 (probable positives). We then train and test an ML classifier (XGBoost) with 5-fold CV using the labeled positives, probable positives, and the remaining unlabeled examples. The classification performance metrics are calculated using the ML predictions and the true labels of the data. \textbf{Using PULSNAR: } The PULSNAR algorithm divides the labeled positive examples into \emph{k} clusters. For each cluster, after estimating $\alpha_j$ for $j$ in $1\dots k$, the class 1 predicted probabilities of only unlabeled examples are calibrated using Algorithm \ref{alg:calibprobs}. Since each unlabeled example has \emph{k} calibrated probabilities, we compute the final calibrated probability for each unlabeled example using Formula \ref{eq:combined_probs}. The final $\alpha$ is calculated by summing the $\alpha_j$ values over the $k$ clusters. The final calibrated probabilities of the unlabeled examples are sorted in descending order, and the labels of the top $\alpha |U|$ unlabeled examples with the highest calibrated probabilities are flipped from 0 to 1 (probable positives). We then train and test an ML classifier (XGBoost) with 5-fold CV using the labeled positives, probable positives, and the remaining unlabeled examples. The classification performance metrics are calculated using the ML predictions and the true labels of the data.
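The flipping step shared by both procedures amounts to a few lines (sketch; \texttt{p\_cal} holds calibrated probabilities aligned with the label vector \texttt{y}):

\begin{verbatim}
import numpy as np

def flip_probable_positives(y, p_cal, alpha):
    # Relabel the top alpha*|U| unlabeled examples (highest calibrated
    # probabilities) as probable positives before retraining.
    y = np.asarray(y).copy()
    unl = np.flatnonzero(y == 0)
    n_flip = int(round(alpha * len(unl)))
    top = unl[np.argsort(p_cal[unl])[::-1][:n_flip]]
    y[top] = 1
    return y
\end{verbatim}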
\begin{algorithm} \caption{calculate\_classification\_metrics} \label{alg:classification_metrics} \textbf{Input}: X ($X_p \cup X_u$), y ($y_p \cup y_u$), y\_true, calibration\_method, n\_bins, predicted\_probabilities, $\alpha$ \\ \textbf{Output}: classification\_metrics (accuracy, roc auc, brier score, f1, Matthews correlation coefficient, average precision) \begin{algorithmic}[1] \STATE p $\leftarrow$ predicted\_probabilities \STATE $\hat{p}$ $\leftarrow$ calibrate\_probabilities(p, y, n\_bins, calibration\_method, `U', $\alpha$) \STATE sort $\hat{p}$ in descending order \STATE $\hat{y_u}$ $\leftarrow$ flip labels of the top $\alpha |U|$ unlabeled examples with highest $\hat{p}$ \STATE y $\leftarrow$ $y_p \cup \hat{y_u}$ \STATE predicted\_probabilities (p) $\leftarrow$ $\mathcal{A}(X,y)$ \STATE \textbf{return} accuracy(p, y\_true), auc(p, y\_true), bs(p, y\_true), f1(p, y\_true), mcc(p, y\_true), aps(p, y\_true) \end{algorithmic} \end{algorithm} \subsection{Experiments and Results} \label{appendix2results} We applied Algorithm \ref{alg:classification_metrics} to synthetic SCAR and SNAR datasets to obtain the performance metrics for the XGBoost model with PULSCAR and PULSNAR, respectively. The classification performance metrics were also calculated without applying the PULSCAR or PULSNAR algorithm, in order to determine the improvement in the classification performance of the model. The experiment was repeated 40 times by selecting different train and test sets using 40 random seeds to compute the 95\% confidence interval (CI) for the metrics. Figures \ref{fig:scar_syn_classification_metrics} and \ref{fig:snar_syn_classification_metrics} show the classification performance of the XGBoost model with/without the PULSCAR or PULSNAR algorithm on synthetic SCAR and SNAR data, respectively. The classification performance using PULSCAR or PULSNAR increased significantly over XGBoost alone. As the proportion of positives among the unlabeled examples increased, the performance of the model without PULSCAR or PULSNAR (blue) worsened significantly more than when using PULSCAR or PULSNAR. \begin{figure} \caption{\textbf{Classification performance of XGBoost model on synthetic SCAR datasets with and without the PULSCAR algorithm}. Synthetic datasets were generated with different fractions of positives (1\%, 5\%, 10\%, 20\%, 30\%, 40\%, and 50\%) among the unlabeled examples. class\_sep=0.3, number of attributes=100, $|P| = 5,000$ and $|U| = 50,000$. \emph{``no PULSCAR''} (blue): XGBoost model was trained and tested with 5-fold CV on the given data; the classification metrics were calculated using the model predictions and true labels. \emph{``PULSCAR''} (red): PULSCAR algorithm was used to find the proportion of positives among unlabeled examples ($\alpha$); using $\alpha$, probable positives were identified; XGBoost model was trained and tested with 5-fold CV on labeled positives, probable positives, and the remaining unlabeled examples; classification metrics were calculated using the model predictions and true labels. The error bars represent 95\% CIs for the performance metrics.} \label{fig:scar_syn_classification_metrics} \end{figure} \begin{figure} \caption{\textbf{Classification performance of XGBoost model on synthetic SNAR datasets with and without the PULSNAR algorithm}. Synthetic datasets were generated with different fractions of positives (1\%, 5\%, 10\%, 20\%, 30\%, 40\%, and 50\%) among the unlabeled examples.
class\_sep=0.3, number of attributes=100, number of positive subclasses=5, $|P|$ = 20,000 (4,000 from each subclass) and $|U|$ = 50,000. \emph{``no PULSNAR''} (blue): XGBoost model was trained and tested with 5-fold CV on the given data; the classification metrics were calculated using the model predictions and true labels. \emph{``PULSNAR''} (red): PULSNAR algorithm was used to find the proportion of positives among unlabeled examples ($\alpha$); using $\alpha$, probable positives were identified; XGBoost model was trained and tested with 5-fold CV on labeled positives, probable positives, and the remaining unlabeled examples; classification metrics were calculated using the model predictions and true labels. The error bars represent 95\% CIs for the performance metrics.} \label{fig:snar_syn_classification_metrics} \end{figure} \section{DEDPUL vs. PULSNAR: Alpha estimation} \label{appendix3} Public implementations of the PU learning methods KM1, KM2, and TICE were not scalable; they either failed to execute or would have taken weeks to run the multiple iterations required to obtain confidence estimates for large datasets. We thus could not compare our method with KM1, KM2, and TICE on large datasets and used only DEDPUL for comparison. Importantly, it was previously demonstrated that the DEDPUL method outperformed these three methods on several UCI ML benchmark and synthetic datasets \cite{pulsnar_24}. We compared our algorithm with DEDPUL on synthetic SNAR datasets with different fractions (1\%, 5\%, 10\%, 20\%, 30\%, 40\%, and 50\%) of positives among unlabeled examples. In our experiments, we observed that class imbalance (ratio of majority class to minority class) could affect the $\alpha$ estimates. So, we used 4 different sample sizes: 1) positive: 5,000 and unlabeled: 5,000; 2) positive: 5,000 and unlabeled: 25,000; 3) positive: 5,000 and unlabeled: 50,000; 4) positive: 5,000 and unlabeled: 100,000. For each sample size and fraction, we generated 20 datasets using sklearn's \emph{make\_classification()} method with random seeds 0-19 to compute the 95\% CI. We used class\_sep=0.3 for each dataset to create difficult classification problems. All datasets were generated with 100 attributes and 6 labels (0-5), defining `0' as negative and 1-5 as positive subclasses. The positive set contained 1000 examples from each positive subclass in all datasets. The unlabeled set comprised k\% positive examples with labels (1-5) flipped to 0 and (100-k)\% negative examples. The unlabeled positives were markedly SNAR, with the 5 subclasses comprising 1/31, 2/31, 4/31, 8/31, and 16/31 of the unlabeled positives. Figure \ref{fig:dedpulVSpulsnar} shows the $\alpha$ estimates by DEDPUL and PULSNAR on synthetic SNAR data. For smaller true fractions (1\%, 5\%, 10\%), DEDPUL returned $\alpha$ estimates close to the true values, but for larger fractions (20\%, 30\%, 40\%, and 50\%), it underestimated $\alpha$. Also, as the class imbalance increased, the performance of DEDPUL dropped, especially for larger true fractions. The $\alpha$ estimated by the PULSNAR method was close to the true $\alpha$ for all fractions and sample sizes. \begin{figure} \caption{\textbf{PULSNAR and DEDPUL evaluated on synthetic SNAR datasets}. The bar represents the mean value of the estimated $\alpha$, with 95\% CI for estimated $\alpha$.} \label{fig:dedpulVSpulsnar} \end{figure} \end{document}
Electric field-controlled transformation of the eigenmodes in a twisted-nematic Fabry–Pérot cavity
V. A. Gunyakov1, I. V. Timofeev1,2, M. N. Krakhalev1,3, W. Lee4 & V. Ya. Zyryanov1
Scientific Reports volume 8, Article number: 16869 (2018)
Liquid crystals; Microresonators
The polarized optical states in the transmission spectrum of a twisted-nematic Fabry–Pérot cavity with the distinctly broken Mauguin's waveguide regime have been theoretically and experimentally investigated. Specific features of the electric field-induced transformation of the polarization and spectral characteristics of eigenmodes of neighboring series at overlapping resonant frequencies have been examined. It is demonstrated that the linear polarizations of eigenmodes at the cavity boundaries remain nearly orthogonal and their frequency trajectories reproduce the avoided crossing phenomenon. The experimental data are confirmed analytically and by numerical simulation of light transmission through the investigated anisotropic multilayer using the Berreman matrix method. The results obtained can be generalized to any material with a helical response.
One of the promising directions in modern photonics is the development of controlled devices on the basis of structures with the permittivity periodically modulated in one, two or three dimensions on a spatial scale comparable to the light wavelength. Such structures are called photonic crystals (PCs)1,2. The Fabry–Pérot microcavities with distributed Bragg mirrors, i.e., layered structures with the refractive index periodically changing in one spatial direction, are, in fact, one-dimensional PC structures with a defect layer. A specific feature of the electromagnetic eigenstate spectrum in a layered structure is the presence of photonic band gaps (PBGs) that almost totally reflect the incident radiation1,2,3. The defect layer breaks the periodicity of the dielectric properties and thereby leads to the localization of light with certain wavelengths inside the band gap. The optical properties of the Fabry–Pérot cavity can be effectively controlled by using an electric field-sensitive medium as the defect layer. Here, the highly promising materials are liquid crystals (LCs), which exhibit a great variety of electrooptical effects useful for controlling the refractive index by changing the LC director configuration under low voltages4. Researchers have paid close attention to the wave processes in optically anisotropic materials, including twisted-nematic LCs placed inside a Fabry–Pérot cavity. In such structures, the ease of controlling LCs by low voltages is combined with the high spectral resolution of the cavity5,6,7,8,9. This allows governing the intensity, phase, and polarization of the transmitted or reflected light10,11. It was analytically established that twisting of the optical axis of a nematic LC and the difference between the propagation constants of the extraordinary (e) and ordinary (o) waves in such a medium cause their coupling and form a new class of eigenmodes called twist extraordinary (te) and twist ordinary (to) waves12. These waves are elliptically polarized. The ellipticity of polarization is retained; the semimajor axis of the ellipse is directed along (te) or across (to) the local director.
As was demonstrated using the theory of coupled modes, a pair of te and to waves at the same frequency is coupled by reflection in a twisted-nematic Fabry–Pérot cavity (TN-FPC). This coupling produces a cavity mode pair, re and ro. The polarization and, consequently, the mode type, re or ro, depend on the ratio between the te and to mode amplitudes13. In this case, despite the ellipticity of the cavity modes, they remain linearly polarized at the TN-FPC boundaries14,15. In a previous study15, the effect of mode coupling on the polarization states of eigenmodes of the TN-FPC containing a thin nematic layer with the distinctly broken Mauguin's waveguide regime16 was investigated. The spectra were measured and calculated for unpolarized incident light. It was shown that the device can be used as an electric field-controlled rotating linear polarizer. However, specific features of the TN-FPC polarized transmission spectra, where peaks are accompanied by satellites, remain little studied. At first glance, such spectra seem to be random sets of peaks with arbitrary intensities. Experimental data that would reflect the correlation between the field-effect dynamics of the spectral positions of eigenmodes and changes in their polarization state are lacking. In particular, it is important to clarify how the mode couplings manifest themselves at the field-effect transition through the Gooch–Tarry spectral point17, where the te and to elliptic modes are maximally coupled. These problems have to be solved to systematize the understanding of mode behavior in twisted structures and to optimize the design of tunable TN-FPCs for telecommunication applications18. For this purpose, a modified experimental approach is needed. The aim of this study is to investigate the spectral features of the polarization components of the modes in a TN-FPC with a thin twisted-nematic layer within the photonic band gap. We discuss the polarization and spectral behavior of selected modes in the vicinity of the Gooch–Tarry maximum under the field-effect dynamics. The spectral position of this point is governed by a low (~1 V) voltage, and the spectra and polarization states of the re and ro cavity eigenmodes are detected by the rotating polarizer technique under their independent excitation. The experimental data are compared with the results of numerical simulation using the 4 × 4 transfer matrix method. The TN-FPC is a sandwiched structure (see the Methods for details) and consists of two dielectric multilayer mirrors separated by a thin twist-nematic liquid-crystalline film. Spectral properties of this device can be controlled with an electric field applied normal to the LC layer. The superimposed polarization components T||,⊥(λ) of the TN-FPC transmission spectrum measured at zero voltage are presented in Fig. 1a. Each component consists of two intervals that divide the PBG approximately in half. In the short-wave spectral range in the vicinity of a wavelength of λmin = 458 nm, one can see a band of well-resolved peaks, which correspond to the re cavity modes in the T||-component and the ro modes in the T⊥-component. For the parameters of the investigated twisted-nematic structure, this wavelength corresponds to the Gooch–Tarry minimum condition17. At the wavelength of λ = 458 nm the values of the refractive indices ne = 1.763 and no = 1.552 of the 5CB liquid crystal (t = 25 °C) and thickness d = 4.15 µm yield 2Δnd/λ = 3.82. Thus, λ = 458 nm corresponds to the second Gooch–Tarry minimum.
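To make the last statement concrete, the minimum index can be checked numerically. The short Python sketch below assumes the standard Gooch–Tarry minimum condition for a 90° twist cell, u_m = sqrt(4m^2 − 1) for the Mauguin parameter u = 2Δnd/λ; the values are those quoted above, and the snippet is our illustration rather than code from the study.

import numpy as np

ne, no, d_nm, lam_nm = 1.763, 1.552, 4150.0, 458.0
u = 2 * (ne - no) * d_nm / lam_nm                       # Mauguin parameter, ~3.82
minima = {m: np.sqrt(4 * m**2 - 1) for m in (1, 2, 3)}  # sqrt(3), sqrt(15), sqrt(35)
print(u, minima)  # u lies closest to sqrt(15) ~ 3.873, i.e., the second minimum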
The Gooch–Tarry minimum condition simulates Mauguin's regime in the LC layer for transmitted light linearly polarized along the director or orthogonally to it on the input mirror. In contrast to Mauguin's regime, the propagation of waves in the TN-FPC is not of the waveguide type, since the modes in the bulk of the LC remain elliptically polarized13,19. The wavelength λmax = 560 nm shown by the arrow in Fig. 1a is the center of the mixed peak band. In particular, in the T||-component, along with the well-defined re modes, the lower-intensity ro modes are observed as satellites and, vice versa, the re modes are visible in the T⊥-component. For the parameters of the investigated twisted-nematic structure, the wavelength λmax corresponds to the Gooch–Tarry maximum condition17. The electric field applied along the sample normal unwinds the nematic helix. The director field deformation is related to the weakening of the optical anisotropy of the LC medium, which, in turn, allows the spectral positions of the modes to be controlled. For example, above some critical voltage applied to the sample, the mode λre = 493 nm will shift toward the mode λro = 484 nm (Fig. 1a) and experience the avoided crossing phenomenon6,8,13. Figure 1b shows the calculated TN-FPC transmission spectrum. It can be seen that the experimental and calculated spectral positions of the cavity modes agree well within the photonic band gap.
TN-FPC transmission spectra at the longitudinal (T||) and transverse (T⊥) polarizer orientations measured (a) and calculated (b) using the 4 × 4 transfer matrix method with regard to the mode decay (Im nLC = 3.9·10^−4). Arrows indicate the wavelengths corresponding to the Gooch–Tarry minimum (458 nm) and maximum (560 nm) conditions. The inset on the top shows a homogeneous twist-nematic structure.
The spectral features are explained by the essential difference between the states of polarization (SOP) of the optical modes in the vicinity of the Gooch–Tarry minimum and maximum. The SOP can be characterized by the angle ξ or θ of deviation of the linear polarization of the modes from the LC director on the input (ξ) or output (θ) cavity mirror, respectively. According to the approach described in ref.19, the angle ξ is determined as

$$\xi = \frac{1}{2}\tan^{-1}\left[-\frac{\varphi}{\upsilon}\,\tan\upsilon\right], \qquad (1)$$

where \(\upsilon=\sqrt{\delta^{2}+\varphi^{2}}\) is the twisted anisotropy phase, φ is the LC director twist angle, δ = Δndk0/2 is the anisotropy phase (angle), Δn = ne − no is the difference between the refractive indices of the e and o waves, and k0 is the wavenumber in vacuum. In addition, the analytical solution of Eq. (1) contains the LC frequency dispersion in implicit form. For the investigated structure the angles ξ and θ are complementary. In particular, in the configuration presented in Fig. 8 below, the angle θ can be determined experimentally as the angle of deviation of the transmission direction of analyzer A from the y axis at which the transmission of the given resonance is maximal. The θ values are taken to be positive upon deviation of A toward the positive direction of the x axis and negative upon deviation to the opposite side. Note that rotation of the analyzer by 90° relative to the desired θ value leads to quenching of the peak, which is indicative of the nearly linear polarization of the radiation at the cavity output. Figure 2 shows experimental and numerically simulated angles θ for all the resonant peaks of the TN-FPC spectra.
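Equation (1) is straightforward to evaluate. The following Python sketch computes ξ for the parameters quoted above with dispersion neglected (the refractive indices are held fixed), so it only approximates the curves in Fig. 2; the function name and defaults are ours.

import numpy as np

def xi_angle(lam_nm, d_nm=4150.0, phi=np.pi / 2, ne=1.763, no=1.552):
    """Deviation angle xi of Eq. (1), in radians (dispersion neglected)."""
    k0 = 2 * np.pi / lam_nm              # vacuum wavenumber, 1/nm
    delta = (ne - no) * d_nm * k0 / 2    # anisotropy phase
    ups = np.hypot(delta, phi)           # twisted anisotropy phase
    return 0.5 * np.arctan(-(phi / ups) * np.tan(ups))

# Near the Gooch-Tarry minimum the modes are polarized along the axes:
print(np.degrees(xi_angle(458.0)))   # close to 0 (mod 90 degrees)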
Angles θ of deviation of the linear polarization of the modes as a function of the LC director orientation at the output cavity mirror. Triangles show experimental values for the ro (Δ) and re (∇) cavity modes. Open circles show the numerical simulation data, and the solid and dashed lines are built using Eq. (1).
The θ(λ) functional dependence in Fig. 2 according to Eq. (1) allows us to follow the SOP evolution in the spectrum, starting from the λmin wavelength. As expected, in the vicinity of the Gooch–Tarry minimum, the re and ro modes are linearly polarized along the x || n (−90°) and y (0°) axes, respectively. As the λ value increases, the linear polarization smoothly and unidirectionally deviates from the coordinate axes up to critical angles of −45° for the re modes and +45° for the ro modes at the Gooch–Tarry maximum point. It can be seen that the transition through this point leads to a change in the sign of the angle θ for the modes of both types. Thus, the modes, in fact, exchange their SOPs. Moving further to the long-wave region, the polarization directions monotonically approach the corresponding coordinate axes: θ → +90° (x axis) for the re modes and θ → 0° (y axis) for the ro modes. The presented θ(λ) dependence elucidates the reasons for the essential difference between the structures of the polarization components T||,⊥(λ) of the transmission spectrum (Fig. 1a) in the band of pure peaks and in the band of mixed ones. It is noteworthy that the above-mentioned trends in the evolution of the TN-FPC mode SOP across the spectrum are determined only by the strict alternation of the Gooch–Tarry extrema upon variation of the Mauguin parameter16 u ~ 2Δnd/λ and, in this sense, are general. The number of bands in the PBG and their spectral positions can be different, since they are determined by the specific parameters of the investigated structure, including cavity thickness d, anisotropy value Δn, and twist angle φ. Figure 3 presents experimental polarized (T⊥-component) and unpolarized (T) TN-FPC spectra in the region of resolved peaks under applied voltages of 0.74 V (Fig. 3a) and 0.97 V (Fig. 3b). The Freedericksz transition voltage is Uc = 0.76 V. For clarity, the field-effect dynamics of the spectral positions of the modes is shown against the background of the spectra of the T⊥-component measured at zero voltage. The transmission peak position is determined by the cavity eigenmode frequencies. They satisfy the phase matching condition15, which requires the total phase incursion of the eigenmodes over a cycle to be a multiple of 2π:

$$2\sigma \pm 2\sin^{-1}(\cos\Theta\cdot\sin\upsilon) = 2\pi N. \qquad (2)$$

Here, the quantity 2σ = (ne + no)k0d is the mean mode phase over a cycle and the ellipticity parameter Θ = tan−1(φ/δ) reflects the smoothness of twisting relative to the nematic layer anisotropy. The integer N = 1, 2, 3,… in Eq. (2) unambiguously numbers each resonant series of two peaks with close frequencies, which correspond to the re and ro modes. In Fig. 3a, one can see four such series. A remarkable property of the modes of one series is that they can cross each other, i.e., resonate at the same frequency, only when the parameter υ amounts to an integer multiple of π. Coincidence of the re and ro modes at a wavelength of λmin = 458 nm is an example of such a crossing, which allows us to determine, using the refractometric data on 5CB from ref.20, the number N = 30 of this series from Eq. (2).
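The series number quoted above can be recovered from Eq. (2). The Python sketch below neglects dispersion and uses the layer parameters given earlier, so the computed resonance positions are approximate; the bracketing interval and names are our choices.

import numpy as np
from scipy.optimize import brentq

ne, no, d_nm, phi = 1.763, 1.552, 4150.0, np.pi / 2   # 5CB layer, 90-deg twist

def phase_mismatch(lam_nm, N, sign):
    """Left-hand side of Eq. (2) minus 2*pi*N (dispersion neglected)."""
    k0 = 2 * np.pi / lam_nm
    sigma = (ne + no) * k0 * d_nm / 2
    delta = (ne - no) * k0 * d_nm / 2
    ups = np.hypot(delta, phi)
    Theta = np.arctan2(phi, delta)
    return (2 * sigma
            + sign * 2 * np.arcsin(np.cos(Theta) * np.sin(ups))
            - 2 * np.pi * N)

# Both modes of series N = 30 resonate near 458 nm:
for sign in (+1, -1):
    print(brentq(phase_mismatch, 440.0, 480.0, args=(30, sign)))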
As the wavelength increases, the series number decreases by unity; thus, the series in Fig. 3a from left to right have the numbers N = 30, 29, 28, and 27. Another remarkable property of the twisted-nematic cavity is that the avoided crossing phenomenon can only be observed between crossing modes of neighboring series whose numbers have different parity13.
TN-FPC mode spectra measured for unpolarized incident light Tunpol (solid lines) at (a) an under-threshold voltage of U = 0.74 V and (b) a voltage of U = 0.97 V, at which the Gooch–Tarry maximum shifts to a wavelength of 484.3 nm from the initial position of 560 nm (see Fig. 1a). The dashed line shows the spectrum of polarized ro modes (T⊥-component) measured at zero voltage.
The simulated director field n(r) of the 5CB TN layer as a function of the applied voltage within the 0.75–2.0 V range is presented in Fig. 4. The electric field changes the distribution of the tilt ψ(z, U) and twist φ(z, U) angles of the local nematic director. This leads to smooth shifting of the re modes, while the position of the ro modes is almost field-insensitive. In particular, above the threshold voltage Uc, the re modes (468.8, 480.5, and 493.1 nm) shown by horizontal arrows in Fig. 3a start shifting from their initial positions toward the nearest short-wave ro modes (457.9, 470.4, and 484.3 nm) of the neighboring series, forming mixed pairs with the latter (Fig. 3b). At a certain voltage, each pair experiences the avoided crossing phenomenon, which indicates a new position of the Gooch–Tarry maximum for the structure under study. As an example, Fig. 3b shows the spectrum corresponding to the avoided crossing of the ro mode at 484.3 nm of the 28th series and the re mode at 493.1 nm of the 27th series shifted toward the former under a voltage of U = 0.97 V. In the vicinity of the point λ = 484.3 nm, to which the Gooch–Tarry maximum has shifted at this voltage (shown by the arrow), the spectrum has the form of a doublet with the peaks symmetrically repulsed by ∼0.75 nm each relative to this point. Note that the mode spectra are analogous for unpolarized incident light and in the presence of the polarizer, for both the T|| and T⊥ components. A further increase in the voltage to 1.05 V will lead to shifting of the Gooch–Tarry maximum to a wavelength of 470.4 nm and the occurrence of the avoided crossing phenomenon for the next mixed pair of modes, and so on.
The simulated director field of the twisted nematic liquid crystal 5CB as a function of the applied voltage, described by the tilt angle ψ(z, U) (a) and the twist angle φ(z, U) (b).
It is interesting to follow the field-effect evolution of the SOP of a pair of modes that experiences the avoided crossing phenomenon. To do that, the SOP spectra for the pair of modes at 484.3 and 493.1 nm were detected independently by the rotating polarizer method in the voltage range of 0.86–1.10 V with a step of 0.01 V. In this method, for each voltage, a polarizer position is found at which the transmittance of the investigated resonant peak attains its maximum value Tmax. In this case, the angle between the transmission direction of polarizer P and the y axis, taken as the reference point, corresponds to the angle ξ in Eq. (1). When the SOPs are orthogonal, the mode of the pair selected in such a way looks like a single peak without satellites. Figure 5 shows the experimental and calculated SOP spectra obtained by the rotating polarizer method for the modes at 484.3 nm and 493.1 nm as a function of the applied voltage.
The dependences provide evidence not only for the mode orthogonality at the avoided crossing point, but also for the synchronous evolution of the eigenstates in the vicinity of this point upon monotonic variation of the voltage applied to the sample. The field-effect dependence of the angle ξ (Fig. 6a), measured with the polarizer set in the desired position before spectrum detection, provides evidence for the not quite obvious fact that during the SOP evolution the modes remain orthogonally polarized at least in the range of 0.94–1.0 V. This is nontrivial because of the different field-effect dynamics of the spectral positions of the modes with increasing voltage (Fig. 6b). In particular, upon approaching U ≈ 0.97 V, the ro mode at 484.3 nm, which is initially insensitive to the field, starts shifting to the blue spectral range, while the active re mode at 493.1 nm slows down. In this case, each mode in the pair resonates at its own frequency (Fig. 6b). Nevertheless, in view of the frequency closeness, the key parameters υ determining the direction of linear polarization of the cavity mode on the mirror13,19 differ insignificantly even at the end points of the λ(U) dependence in Fig. 6b, when the modes start diverging. In particular, at U = 0.94 V, the ratio between the anisotropy phases, δro/δre = (1 + Δλ/λro), and, consequently, between the parameters υro/υre, differs from unity by only 0.4%. Here, Δλ = λre − λro is the spectral interval between the combining modes. Such a discrepancy is noncritical for the ξ(υ) dependence of the polarization angles of the re and ro modes in the range of small values of the υ parameter13, which results in the observed mode orthogonality effect despite the essentially different field-effect dynamics of the spectral positions of the modes. A specific feature of the behavior of the modes in the TN-FPC is that such otherwise independent characteristics as spectral position and polarization state are correlated. Comparison of the ξ(U) and λ(U) dependences shows that, e.g., the ro mode reacts synchronously to the approach of the Gooch–Tarry maximum point by rotation of the angle ξ and by shifting to the short-wave spectral region.
Experimental (a,b) and calculated (c,d) SOP spectra for the TN-FPC ro mode at 484.3 nm (a,c) and re mode at 493.1 nm (b,d) as a function of the applied voltage.
Experimental field-effect dependences of (a) the SOP and (b) the spectral positions of the maxima of the modes at 484.3 nm (blue branches) and 493.1 nm (red branches), reproducing the avoided crossing phenomenon. The voltage U = 0.97 V is marked by the vertical line; solid lines show the interpolation.
Coincidence of the mode trajectories obtained by the different detection methods directly indicates their independence, despite the matched rearrangement of the polarization angles ξ under the action of the electric field. Indeed, the cavity modes represent a mixture of te and to waves in a certain ratio and do not couple, by definition12. As an example, Fig. 7 shows the transformation of the SOP of the resonant peak at the ro-mode frequency, combined from two elliptically polarized waves, under the action of the electric field when no resonance is excited at the re-mode frequency.
Transformation of the coupled elliptically polarized to (dashed lines) and te (solid lines) modes at the field-effect transition (a → b → c) through the Gooch–Tarry maximum spectral point. The nematic director n on the input mirror is aligned parallel to the y axis, light propagates along the z axis, and double-headed arrows show the orientation of the linear polarization plane of the ro mode: the angle ξ relative to the y axis is (a) 65°, (b) 45°, and (c) 25°.
The observed rotation of the linear polarization plane indicates that upon approaching the Gooch–Tarry maximum, the coupling between the elliptical waves strengthens and the periodic energy flow from the to wave to the te wave and back increases. The resulting phase shift, which moves the mode frequency toward the blue spectral range, grows. The transition through the Gooch–Tarry maximum leads to the transformation ro → o2e → re, and the short-wave mode becomes field-sensitive (the lower branch in Fig. 6b). The synchronous character of the mixed-pair evolution suggests the analogous SOP transformation re → e2o → ro; therefore, the long-wave re mode, which was earlier field-sensitive, transforms to the ro mode and occupies a fixed spectral position, in which the ro mode of the neighboring series was localized at the lower voltage (the upper branch in Fig. 6b). Electric field-controlled polarization rotation is thus realized within the peak profile of the ro modes. High-speed defect mode switching in an optical cavity with a nematic LC can be realized using a narrow peak shift and the fast component of the molecular reorientation21,22. The response time of the presented rotation is expected to be on the order of milliseconds; it can be improved to microseconds with ferro- or antiferroelectric liquid-crystal materials23 and has the potential for submicrosecond response24,25. We anticipate that the presented polarization rotation principle will operate for other helix materials with faster response at the expense of cavity thickness or absorption. The LC material extinction is critical for obtaining a cavity mode with a high quality factor and a sharp linewidth. The quality factor drops for plasmonic materials and anisotropic metamaterials, which provide a stronger optical response, for example, in steering absorption26 and polarization properties27, as well as in optical harvesting28,29.
The polarization components of the TN-FPC transmission spectrum with the distinctly broken Mauguin's waveguide regime were experimentally and theoretically investigated. The correlation between the polarization and spectral characteristics of both the re and ro modes at the field-effect transition through the Gooch–Tarry maximum critical point was demonstrated. The observed double response of the spectral peaks to the electric field-induced change in the phase shift between the elliptic waves forming the cavity mode is typical of the TN-FPC.
At the critical point, each cavity mode transforms into the opposite one. In this case, the linear polarizations of the re and ro modes at the TN-FPC boundaries remain nearly orthogonal, and the trajectories of their superimposed frequencies reproduce the avoided crossing phenomenon observed under sample illumination by unpolarized light. It was established that the mode transformation, accompanied by a change in both mode polarization and spectral position, is determined only by the coupling strength of the elliptic waves and is independent of the excitation of the other mode in the cavity. The experimental results were confirmed analytically and by numerical simulation of light transmission through the investigated multilayered structure using the 4 × 4 transfer matrix method. Diverse applications of the examined twisted cavity are anticipated in sensing, filtering, switching, and optical modulation in photonic, optoelectronic and telecommunication devices, with the advantage of high resolution in both wavelength and polarization. We stress that the reported results can be generalized to materials with the fastest response and to any helix structures30.
The SOP of the transmission peaks in the TN-FPC spectrum with and without the control electric field were experimentally studied on a setup schematically shown in Fig. 8. The cavity with the distributed Bragg mirrors had the (ZrO2/SiO2)5ZrO2 (TN) ZrO2(SiO2/ZrO2)5 layered structure. The ZrO2 and SiO2 layers alternately deposited onto fused quartz substrates had refractive indices of 2.04 and 1.45 and thicknesses of 55 and 102 nm, respectively. The transmission spectrum of such a structure is a PBG in the spectral range of 425–625 nm with a set of resonant peaks corresponding to the modes localized on the twisted-nematic defect layer (Fig. 1). Thin indium tin oxide (ITO) electrodes predeposited onto the quartz substrates made it possible to apply an electric field along the mirror surface normal. The gap between the mirrors, with an actual thickness of d = 4.15 µm, was filled with the nematic LC 4-n-pentyl-4′-cyanobiphenyl (5CB). To form the twisted structure of the LC director n, the mirrors were coated with polyvinyl alcohol (PVA) films and then unidirectionally rubbed. The crossed rubbing directions of the output and input cavity mirrors, where the director n is parallel to the x and y axes of the laboratory system of coordinates (x, y, z), respectively, ensured homogeneous twisting of the nematic director n across the LC layer by an angle of φ = 90°.
A schematic view of the experimental setup. The ZrO2/SiO2 multilayer mirrors of the TN-FPC are formed on substrates with transparent ITO electrodes. The cavity is filled with the 5CB twisted-nematic LC disturbed by the applied voltage (inset on the top). Polarizer P and analyzer A are Glan prisms.
The peculiarities of cavity assembly, with regard to the features of the rubbed polymer films used for the planar alignment of the LC director with a slight surface pretilt31, formed a uniform right-handed twisting of the nematic structure. An ac electric field with a frequency of 1 kHz was applied to the sample to ensure smooth untwisting of the director n up to quasi-homeotropic alignment (twist effect). Transmission spectra of the TN-FPC were recorded on an Ocean Optics HR4000 spectrometer under polarized and unpolarized illumination at a fixed sample temperature of t = 23.5 °C. There are four (i–iv) principal configurations of the setup used to study the polarized optical states of the TN-FPC modes.
(i) A single polarizer P was used to detect the polarization components T||,⊥ of the transmission spectrum (Fig. 1). Here the subscripts (||) and (⊥) indicate the parallel and perpendicular orientations of P relative to the n direction on the input mirror, respectively. (ii) A single analyzer A placed after the sample served to determine the mode polarization angles θ at the cavity output for all resonance peaks of the TN-FPC spectra (Fig. 2). (iii) Unpolarized incident light was used to demonstrate the shift of the spectral positions of the modes when the voltage is turned on (Fig. 3). (iv) Finally, the rotating polarizer technique was used to determine the evolution of the SOP of the cavity eigenmodes depending on the applied voltage (Figs 5 and 6). The polarizers used are Glan prisms equipped with a dial, and both can freely rotate in the (x, y) plane. Radiation was introduced into the sample and extracted from it using optical fibers. The simulations were carried out using MATLAB to verify the results observed in the experimental spectra. None of the calculated quantities depends on the x or y coordinate, so the simulation is one-dimensional. In the first step, the nematic orientational structure within the cell is calculated by means of free energy minimization with a rigid anchoring potential. The Frank elastic energy density fk is expressed as

$$f_k = \frac{1}{2}k_{11}(\nabla\cdot\mathbf{n})^2 + \frac{1}{2}k_{22}(\mathbf{n}\cdot\nabla\times\mathbf{n})^2 + \frac{1}{2}k_{33}(\mathbf{n}\times\nabla\times\mathbf{n})^2.$$

Here n is the director and k11, k22 and k33 are the splay, twist and bend elasticity coefficients, respectively. At a fixed voltage the total electric contribution to the free energy density fe is expressed as32

$$f_e = \frac{1}{2}\mathbf{D}\cdot\mathbf{E} - \mathbf{D}\cdot\mathbf{E} = -\frac{1}{2}\mathbf{D}\cdot\mathbf{E} = \frac{1}{2\varepsilon_0}\,\frac{-D_z^2}{\varepsilon_\perp\cos^2\psi(z) + \varepsilon_{||}\sin^2\psi(z)}.$$

E is the vector of the electric field applied to the LC layer, D is the electric induction in the bulk of the LC, ψ(z) is the polar angle of the LC director deflection from the substrate plane, and ε⊥ and ε|| are the LC permittivities transverse and longitudinal relative to the director. The electric induction D is constant across the cell as long as the divergence of the electric displacement is zero. The total electrostatic energy Fe of the cell can be expressed through the voltage U as follows:

$$F_e = \int_0^d f_e\,dz = \frac{-\varepsilon_0 U^2}{2\int_0^d \left(\varepsilon_\perp\cos^2\psi(z) + \varepsilon_{||}\sin^2\psi(z)\right)^{-1} dz}.$$

The integral requires a self-consistent solution for all the sublayers and makes the calculations rather involved. We use a method of gradient descent to the free energy minimum. The approach is described in detail in ref.8. In the second step, we simulate the TN-FPC optical response, taking the optical extinction and dispersion of the materials into account. The case of normal light incidence is considered. The optical response is found using the Berreman method—the transfer-matrix method generalized for an anisotropic medium33. In the case of birefringent layered media, the electromagnetic radiation consists of four partial waves. Mode coupling takes place at the interfaces, where an incident plane wave produces waves with different polarization states due to the anisotropy of the layers. As a result, 4 × 4 matrices are required.
Joannopoulos, J. D., Meade, R. D. & Winn, J. N.
Photonic crystals: Molding the flow of light (Princeton University Press, 1995). Busch, K. et al. Periodic nanostructures for photonics. Phys. Rep. 444, 101–202 (2007). Shabanov, V. F., Vetrov, S. Ya. & Shabanov, A. V. Optics of real photonic crystals: Liquid crystal defects, irregularities (SB RAS Publisher, 2005). Blinov, L. M. Structure and properties of liquid crystals, topics in applied physics (Springer, 2010). Patel, J. S. et al. Electrically tunable optical filter for infrared wavelength using liquid crystals in a Fabry-Pérot etalon. Appl. Phys. Lett. 57, 1718–1720 (1990). Patel, J. S. & Silberberg, Y. Anticrossing of polarization modes in liquid-crystal etalons. Opt. Lett. 16, 1049–1051 (1991). Zhu, X., Hong, Q., Huang, Y. & Wu, S.-T. Eigenmodes of a reflective twisted-nematic liquid-crystal cell. J. Appl. Phys. 94, 2868–2873 (2003). Timofeev, I. V. et al. Voltage-induced defect mode coupling in a one-dimensional photonic crystal with a twisted-nematic defect layer. Phys. Rev. E 85, 011705 (2012). Bugaychuk, S., Iljin, A., Lytvynenko, O., Tarakhan, L. & Karachevtseva, L. Enhanced nonlinear optical effect in hybrid liquid crystal cells based on photonic crystal. Nanoscale Res. Lett. 12, 449 (2017). Ozaki, R., Ozaki, M. & Yoshino, K. Electrically rotatable polarizer using one-dimensional photonic crystal with a nematic liquid crystal defect layer. Crystals 5, 394–404, https://doi.org/10.3390/cryst5030394 (2015). Zhu, X., Choi, W.-K. & Wu, S.-T. A simple method for measuring the cell gap of a reflective twisted nematic LCD. IEEE Trans. Electron Devices 49, 1863–1867 (2002). Yeh, P. & Gu, C. Optics of liquid crystal displays (Wiley, 1999). Ohtera, Y., Yoda, H. & Kawakami, S. Analysis of twisted nematic liquid crystal Fabry–Pérot interferometer (TN-FPI) filter based on the coupled mode theory. Opt. Quantum Electron. 32, 147–167 (2000). Yoda, H., Ohtera, Y., Hanaizumi, O. & Kawakami, S. Analysis of polarization-insensitive tunable optical filter using liquid crystal: connection formula and apparent paradox. Opt. Quantum Electron. 29, 285–299 (1997). Gunyakov, V. A., Timofeev, I. V., Krakhalev, M. N. & Zyryanov, V. Ya. Polarization exchange of optical eigenmode pair in twisted-nematic Fabry-Pérot resonator. Phys. Rev. E 96, 022711 (2017). Mauguin, C. V. Sur les cristaux liquides de Lehmann. Bull. Soc. Fr. Miner. 34, 71–117 (1911). Gooch, C. H. & Tarry, H. A. The optical properties of twisted nematic liquid crystal structures with twist angles ≤ 90°. J. Phys. D: Appl. Phys. 8, 1575–1584 (1975). Mallinson, S. R. Wavelength-selective filters for single-mode fiber WDM systems using Fabry-Pérot interferometers. Appl. Opt. 26, 430–436 (1987). Timofeev, I. V. et al. Geometric phase and o-mode blueshift in a chiral anisotropic medium inside a Fabry-Pérot cavity. Phys. Rev. E 92, 052504 (2015). Wu, S.-T., Wu, C.-S., Warenghem, M. & Ismaili, M. Refractive index dispersions of liquid crystals. Opt. Engineering 32, 1775–1780 (1993). Ozaki, R., Moritake, H., Yoshino, K. & Ozaki, M. Analysis of defect mode switching response in one-dimensional photonic crystal with a nematic liquid crystal defect layer. J. Appl. Phys. 101, 033503 (2007). Ozaki, R., Ozaki, M. & Yoshino, K. Defect mode switching in one-dimensional photonic crystal with nematic liquid crystal as defect layer. Jpn. J. Appl. Phys. 42, L669–L671 (2003). Pozhidaev, E. P. et al. Ultrashort helix pitch antiferroelectric liquid crystals based on chiral esters of terphenyldicarboxylic acid. J. Mater. Chem. C. 4, 10339–10346 (2016).
Li, B.-X., Shiyanovskii, S. V. & Lavrentovich, O. D. Nanosecond switching of micrometer optical retardance by an electrically controlled nematic liquid crystal cell. Opt. Express 24, 29477 (2016). Khoo, I. C., Chen, C.-W., Ho, T.-J. & Lin, T.-H. Femtoseconds-picoseconds nonlinear optics with nearly-mm thick cholesteric liquid crystals. Proc. of SPIE 10125, 1012507 (2017). Shrekenhamer, D., Chen, W.-C. & Padilla, W. J. Liquid crystal tunable metamaterial absorber. Phys. Rev. Lett. 110, 177403 (2013). Chin, J. Y., Lu, M. & Cui, T. J. A transmission polarizer by anisotropic metamaterials. 2008 IEEE Antennas and Propagation Society International Symposium, 1–4 (IEEE, 2008). Yu, P. et al. Metamaterial perfect absorber with unabated size-independent absorption. Opt. Express 26, 20471 (2018). Yu, P. et al. Giant optical pathlength enhancement in plasmonic thin film solar cells using core-shell nanoparticles. J. Phys. D: Appl. Phys. 51, 295106 (2018). Faryad, M. & Lakhtakia, A. The circular Bragg phenomenon. Adv. Opt. Photonics 6, 225–292 (2014). Kutty, T. R. N. & Fisher, A. G. Planar orientation of nematic liquid crystals by chemisorbed polyvinyl alcohol surface layers. Mol. Cryst. Liq. Cryst. 99, 301–318 (1983). Deuling, H. J. Deformation of nematic liquid crystals in an electric field. Mol. Cryst. Liq. Cryst. 19, 123–131 (1972). Berreman, D. W. Optics in stratified and anisotropic media: 4 × 4-matrix formulation. J. Opt. Soc. Am. 62, 502–510 (1972).
The work of W.L. was supported by the Ministry of Science and Technology, Taiwan, through Grant No. 106-2923-M-009-002-MY3.
Kirensky Institute of Physics, Federal Research Center KSC SB RAS, Krasnoyarsk, 660036, Russia: V. A. Gunyakov, I. V. Timofeev, M. N. Krakhalev & V. Ya. Zyryanov. Laboratory for Nonlinear Optics and Spectroscopy, Siberian Federal University, Krasnoyarsk, 660041, Russia: I. V. Timofeev. Institute of Engineering Physics and Radio Electronics, Siberian Federal University, Krasnoyarsk, 660041, Russia: M. N. Krakhalev. Institute of Imaging and Biomedical Photonics, College of Photonics, National Chiao Tung University, Guiren Dist., Tainan, 71150, Taiwan: W. Lee.
V.A.G. and M.N.K. conducted the experiment and analyzed the spectra of the TN-FPC; I.V.T. and W.L. numerically simulated the spectral properties of the photonic structure with the twisted-nematic defect layer; V.Y.Z. checked, revised and finalized the paper. All authors wrote and reviewed the manuscript. Correspondence to V. A. Gunyakov. The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Gunyakov, V.A., Timofeev, I.V., Krakhalev, M.N. et al. Electric field-controlled transformation of the eigenmodes in a twisted-nematic Fabry–Pérot cavity. Sci Rep 8, 16869 (2018). https://doi.org/10.1038/s41598-018-35095-y
The power of randomization by sex in multilocus genetic evolution
Liudmyla Vasylenko1, Marcus W. Feldman2 & Adi Livnat1
Biology Direct volume 15, Article number: 26 (2020)
Many hypotheses have been proposed for how sexual reproduction may facilitate an increase in the population mean fitness, such as the Fisher-Muller theory, Muller's ratchet and others. According to the recently proposed mixability theory, however, sexual recombination shifts the focus of natural selection away from favoring particular genetic combinations of high fitness towards favoring alleles that perform well across different genetic combinations. Mixability theory shows that, in finite populations, because sex essentially randomizes genetic combinations, if one allele performs better than another across the existing combinations of alleles, that allele will likely also perform better overall across a vast space of untested potential genotypes. However, this superiority has been established only for a single-locus diploid model. We show that, in both haploids and diploids, the power of randomization by sex extends to the multilocus case, and becomes substantially stronger with increasing numbers of loci. In addition, we make an explicit comparison between the sexual and asexual cases, showing that sexual recombination is the cause of the randomization effect. That the randomization effect applies to the multilocus case and becomes stronger with increasing numbers of loci suggests that it holds under realistic conditions. One may expect, therefore, that in nature the ability of an allele to perform well in interaction with existing genetic combinations is indicative of how well it will perform in a far larger space of potential combinations that have not yet materialized and been tested. Randomization plays a similar role in a statistical test, where it allows one to draw an inference from the outcome of the test in a small sample about its expected outcome in a larger space of possibilities—i.e., to generalize. Our results are relevant to recent theories examining evolution as a learning process. This article was reviewed by David Ardell and Brian Golding.
Theory concerning the evolution of sex and recombination has developed along two main lines. One, modifier theory, examines the evolutionary change in the frequencies of alleles that control the rate of recombination [1–16]. The other focuses on the role of sex in evolution assuming that sex is already present (e.g., [17–20]). According to the mixability theory for the role of sex in evolution, in the presence of sexual reproduction, natural selection favors not the best specific combinations of genes, i.e., not those genotypes of highest fitness, but rather alleles that perform well in interaction with a wide variety of different genetic combinations — "mixable alleles" [21]. This theory offers an alternative view on the role of sex in evolution to the more familiar lines of work on this topic from the 20th century, such as the Fisher-Muller theory [17, 18], the deterministic mutation hypothesis [20], the parasite hypothesis [19, 22] and other approaches [23–27], as well as newer lines of theory (e.g., [11, 28, 29]). Mixability theory has already had an unexpected consequence in the interdisciplinary realm: it has served as a motivation in the development of a key advance [30, 31] that contributed to the phenomenal leap of deep learning in 2012 [32, p.440] and thus to the global artificial intelligence revolution (e.g., [33]).
Previous theory in evolution and in particular on the role of sexual reproduction has inspired developments in computing through the genetic algorithm work of John Holland [34], while mixability theory has inspired innovation in the science of deep learning. Mixability theory has drawn a connection between sex and genetic evolutionary modularity [35], and has inspired work on the connection between the population genetic equations for the updates of allele frequencies in the presence of sex and natural selection with the powerful Multiplicative Weight Updates Algorithm [36], known in multiple fields under different names [37]. The mixability effect, shown initially through numerical iterations [21, 35], has also been demonstrated in a simple analytical model [38]. However, while our previous studies [21, 35, 38] have focused mostly on mixability in an infinite population context, in finite populations, an intriguing effect emerges: even though the current, finite population represents just a small sample of the space of potential genotypes [39], how well an allele performs overall in interaction with various different combinations of genetic partners in this population is indicative of how well it will perform overall in potential combinations that have not yet materialized and been tested. In other words, the interaction of natural selection and sexual recombination makes it possible for an observer to draw an inference from the success in terms of an allele's mixability in the finite population about its potential success in an untested space of many potential genotypes [40]. Central to this effect is the idea of sex as randomization: while natural selection tests the performance of an allele as an interactant across different genetic combinations in the finite population, sexual recombination entails that the genotypes carrying that allele constitute an essentially random and thus unbiased sample of the vastly larger space of potential genotypes. Hence the outcome of natural selection in the finite population is indicative of which allele will be more mixable in a vast number of yet unseen genetic combinations [40]. Randomization plays a related role in statistical tests. In the evolutionary models described here and in statistical tests, randomization makes an outcome that is based on a small sample indicative of an outcome that would have been based on a much larger space of possibilities. In statistical testing randomization is viewed as allowing for inference-making and generalization. This power of randomization has to date been demonstrated only in a one-locus diploid model, where interaction is between two alleles at one locus [40]. Here, we test this effect using numerical analysis of both haploid and diploid multi-locus models and demonstrate the power of randomization. In both haploid and diploid models, as the number of loci increases, selection acting on an ever smaller fraction of the space of potential genotypes suffices to infer with ever increasing accuracy which allele has the greater mixability in the space of untested potential genotypes. Since in reality sexual species have many recombining loci—many more than can be iterated on the computer while keeping track of the space of all potential genotypes—our present results suggest that in nature alleles that are favored due to the interaction of sex and natural selection are expected to perform better as interactants in the space of yet untested genotypes. 
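To convey the statistical flavor of this effect before turning to the models, consider a caricature in which the fitness values of the genotypes carrying each of two alleles form two distributions differing slightly in their mean (their mixability). The Python snippet below (purely illustrative, not part of the simulations reported here, with all numbers chosen arbitrarily) estimates how often a small random sample suffices to identify the distribution with the higher mean:

import numpy as np

rng = np.random.default_rng(1)
mu_j = 0.70
mu_i = 1.01 * mu_j              # the more mixable allele: a 1% advantage
sd, sample, trials = 0.15, 500, 1000

wins = sum(rng.normal(mu_i, sd, sample).mean() >
           rng.normal(mu_j, sd, sample).mean()
           for _ in range(trials))
print(wins / trials)            # fraction of trials picking the better allele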
Multilocus models allow us to examine the sexual shuffling of the genes due to recombination and/or segregation and independent assortment of chromosomes. In the haploid case, the mixability of an allele depends on its ability to interact well with a wide variety of combinations of alleles at other loci. In the diploid case, it can depend also on its ability to interact with a variety of alleles in the same locus. Here, we will consider multilocus haploid and diploid models with discrete generations, panmixia and no mutation. We will examine change across one generation only. Consider N individuals, L loci and n alleles per locus. The number of possible genotypes is \(n^{L}\) in the haploid case and \(\left(\frac{n(n+1)}{2}\right)^{L}\) in the diploid case. Let the fitness of genotype G, \(w_G\), be its probability of survival (we assume here for simplicity that viability, but not fertility, is genetic). Our simulations start with uniform allele frequencies, as in [40]. A starting population of N parents is generated by drawing at random one (haploid) or two (diploid) alleles per locus. Unless stated otherwise, the individuals can be thought of as hermaphrodites capable of selfing. The mixability of an allele is defined as the average fitness of the genotypes carrying this allele, unweighted by their genotypic frequencies (in contrast to the marginal fitness). A formal definition of allelic mixability is given below; it contrasts with fitness measures as shown in [21]. We expect that under an assumption of different mixabilities of alleles, the allele that is more mixable across all possible genotypes will increase in frequency more than the other allele, even though only a small fraction of all possible genotypes is materialized and tested by selection.
Multilocus haploid model
Let \(w_{i_1, i_2, \ldots, i_L}\) be the fitness of a genotype with alleles \(i_1\) at locus 1, \(i_2\) at locus 2, etc. For the \(n^{L}\) genotypes of the haploid multi-locus model with L loci and n alleles per locus, for each trial of the simulation, we randomized the fitness values \(w_{i_1, i_2, \ldots, i_L}\) such that the two alleles of interest \(\hat{i}\) and \(\hat{j}\) at the first locus, with mixabilities defined as \(\mu_{\hat{i}} = \frac{1}{n^{L-1}} \sum_{i_2, \ldots, i_L} w_{\hat{i}, i_2, \ldots, i_L}\) and \(\mu_{\hat{j}} = \frac{1}{n^{L-1}} \sum_{i_2, \ldots, i_L} w_{\hat{j}, i_2, \ldots, i_L}\), respectively, had a mixability ratio \(\mu_{\hat{i}} / \mu_{\hat{j}}\) equal to a pre-chosen value \(d_{\hat{i}\hat{j}}\), following [40]. In this case, the mixabilities of alleles are equivalent to their marginal fitnesses because the allele frequency distribution is uniform, although allelic mixability in general is not equivalent to marginal fitness (for details, see [21]). First, fitness values \(\tilde{w}\) were drawn from the normal distribution \(\mathcal{N}(E,\,\sigma)\) with average E=0.7 and standard deviation σ=0.15. Thus, almost all fitness values fell in the interval [0,1]. Values not in that interval were replaced with new random numbers from the same distribution until all values were between 0 and 1. We refer to the resulting distribution as the truncated normal distribution of fitness values.
Next, the fitness values of alleles \(\hat{i}\) and \(\hat{j}\) were adjusted as follows:

$$ w_{\hat{i}, i_{2}, \ldots, i_{L}} = \tilde{w}_{\hat{i}, i_{2}, \ldots, i_{L}}\sqrt{\frac{d_{\hat{i}\hat{j}} \tilde{\mu}_{\hat{j}}}{\tilde{\mu}_{\hat{i}}}} $$ (1)

$$ w_{\hat{j}, i_{2}, \ldots, i_{L}} = \tilde{w}_{\hat{j}, i_{2}, \ldots, i_{L}}\sqrt{\frac{\tilde{\mu}_{\hat{i}}}{d_{\hat{i}\hat{j}} \tilde{\mu}_{\hat{j}}}}, $$ (2)

where \(\tilde{\mu}_{\hat{i}} = \frac{1}{n^{L-1}} \sum\limits_{i_{2}, \ldots, i_{L}} \tilde{w}_{\hat{i}, i_{2}, \ldots, i_{L}}\) and \(\tilde{\mu}_{\hat{j}} = \frac{1}{n^{L-1}} \sum\limits_{i_{2}, \ldots, i_{L}} \tilde{w}_{\hat{j}, i_{2}, \ldots, i_{L}}\). The adjusted values w have a mixability ratio \(\frac{\mu_{\hat{i}}}{\mu_{\hat{j}}}=d_{\hat{i}\hat{j}}\) (this adjustment is sketched in code below).

Each trial of the simulation consisted of a single generation of recombination and selection. At the start of each trial, an initial population of parents was generated by drawing alleles at each of the L loci at random for each parent, without replacement, from a store of alleles at equal frequencies. Next, an offspring was generated from two random parents using the Poisson model of recombination [41, 42], according to which a crossover occurs between neighboring positions with probability p≤1/2, independently of crossovers at other positions. Finally, an offspring survived with probability \(w_{i_{1},i_{2},\dots,i_{L}}\). This procedure was repeated until N surviving individuals were obtained. At the same time, the number of unique genotypes that materialized in the process—namely the number of genotypes that were tested at least once, whether they survived or not—was recorded.

Finally, for each mixability ratio \(d_{\hat{i}\hat{j}}\), number of alleles n, number of loci L and population size N, multiple independent trials were run, and the following two measurements were made: a) the across-trials average fraction of all possible genotypes that materialized and were tested by the population, g(N,L,n), and b) the fraction of trials in which, of the particular allele pair \(\hat{i}\) and \(\hat{j}\), the allele that was more mixable (had a higher μ) across all possible genotypes increased in frequency more than the allele that was less mixable across all possible genotypes, P(N,L,n) (ties in this measure were counted as "half a point" for each allele). For clarity, we note that our results capture the fact that sex promotes the ability of alleles to perform well in the many combinations of alleles across loci that have not yet materialized, where these combinations are composed of present alleles. They do not capture the ability of alleles to perform well in interaction with alleles that have not yet been created through mutation.

Figure 1 shows the results of such a simulation for a population size of N=2000 haploids, \(d_{\hat{i}\hat{j}}\) values ranging from about 1.01 to 1.11, n=2 alleles per locus and 100 independent trials for each parameter combination. As expected, in each panel we see that the allele that is more mixable across all possible genotypes is the one more likely to win, even though only a small fraction of all possible genotypes is actually tested. This effect increases with \(d_{\hat{i}\hat{j}}\) (P rises across panels) and remains at the same strength when the space of potential genotypes is increased (P is flat within panels).
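A sketch of the adjustment in Eqs. (1) and (2), reusing the hypothetical helpers from the sketch above; the indexing scheme is an illustrative assumption:

```python
def adjust_ratio(w, i_hat, j_hat, d, locus=0):
    """Rescale the fitness table as in Eqs. (1)-(2): genotypes carrying
    i_hat at `locus` are multiplied by f and those carrying j_hat are
    divided by f, with f chosen so that the new mixability ratio is d."""
    w = w.copy()
    f = np.sqrt(d * mixability(w, j_hat, locus) / mixability(w, i_hat, locus))
    idx_i = (slice(None),) * locus + (i_hat,)
    idx_j = (slice(None),) * locus + (j_hat,)
    w[idx_i] *= f  # Eq. (1): scale the i_hat slice up
    w[idx_j] /= f  # Eq. (2): scale the j_hat slice down
    return w

w = adjust_ratio(w, 0, 1, d=1.0465)
assert np.isclose(mixability(w, 0) / mixability(w, 1), 1.0465)
```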
Random sampling in a multi-locus haploid model. Fitness values were drawn from the normal distribution \(\mathcal{N}(0.7, \, 0.15)\). In each panel, results are shown for a population size of 2000, a varying number of loci from 2 to 20 and 2 alleles per locus. In each panel, for each number of loci, based on 100 independent trials, the red line shows g, the average fraction of all possible genotypes that actually materialized and were tested by the population. For each such genotype, at least one individual was born with that genotype and either survived or did not. The blue line shows P, the fraction of trials in which the allele that is more mixable across all possible genotypes increased in frequency more than the allele that is less mixable across all possible genotypes. Bars for the 4, 8, 12 and 16 loci cases represent a 95% confidence interval for P based on 80 values, each of which was obtained based on 100 independent trials. To facilitate comparisons, in all panels the green line highlights the results for the 16-locus case.

On the left side of each panel, the number of loci is small, and all possible genotypes get tested. On the right side, the number of possible genotypes is large relative to the population size (\(2^{20} \approx 1.05 \times 10^{6}\)) and only a small fraction of all possible genotypes is tested. Thus, the distance between P and g increases both with \(d_{\hat{i}\hat{j}}\) (owing to the increase in P) and with L (owing to the decrease in g). This demonstrates that sex enables random sampling in selecting for mixability: in reality, the number of loci (L) is large, and thus the population size becomes small relative to the number of possible genotypes, while the probability of correct evaluation remains high. From the statistical point of view, we are comparing two distributions of fitness values (one for allele \(\hat{i}\) and one for allele \(\hat{j}\)). Sex and natural selection perform the non-trivial task of distinguishing between these distributions correctly at a high probability with only a small fraction of observations drawn from these distributions and for any number of loci.

In the above, we assumed free recombination in hermaphrodites capable of selfing. To examine the case of two mating types, we divided the starting population into two separate types, "type 1" and "type 2," and allowed mating only between types. Since the 95% confidence intervals for the two-mating-types results overlap with those of Fig. 1 almost entirely, we conclude that there is no substantial difference between hermaphrodites capable of selfing and two mating types (see Appendix Fig. 7), as in [40].

An important cause of random deviations from correct inference of mixabilities is random genetic drift due to the sampling of parents and of alleles within parents with replacement. This sampling creates random variation in the parents' fertilities as well as in the transmission success of alleles within a parent. For pedagogical purposes, to observe the pure effect of random sampling of genotypes by sex (which is our focus here), free of these effects of drift, one can remove drift by running the same simulations while ensuring that each haploid individual appears in exactly one mating event and produces two offspring, and that each allele is transmitted exactly once. To keep the simulation simple, this scenario forces us to forgo the constant population size: instead of generating new individuals until N of them survive, we now repeat the simulation until N parents (N even) have appeared in N/2 mating events, where each of these events creates two offspring that are complementary to each other in terms of allele transmission.
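The following sketch shows one way to implement offspring generation under the Poisson recombination model and the complementary transmission used in the drift-free variant just described. Which parent contributes the first locus is randomized here, an assumption not spelled out in the text; all names are illustrative.

```python
def mate(p1, p2, r=0.5):
    """One offspring under the Poisson model of recombination: a crossover
    occurs between neighboring loci with probability r (r = 0.5 is free
    recombination; r = 0 copies a whole parental genotype, i.e. asex).
    Also returns the complementary offspring, which receives exactly the
    parental alleles the first offspring did not -- the pairing used in
    the drift-free variant, where each allele is transmitted exactly once."""
    L = len(p1)
    from_p2 = rng.random() < 0.5        # which parent contributes locus 1
    child = np.empty(L, dtype=int)
    comp = np.empty(L, dtype=int)
    for l in range(L):
        if l > 0 and rng.random() < r:  # crossover between loci l-1 and l
            from_p2 = not from_p2
        child[l] = p2[l] if from_p2 else p1[l]
        comp[l] = p1[l] if from_p2 else p2[l]
    return child, comp

def survives(w, genotype):
    """Viability selection: survival with probability w[genotype]."""
    return rng.random() < w[tuple(genotype)]
```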
The results with random genetic drift removed (Fig. 2) are clearly stronger than those of Fig. 1. For example, with a population size of 2000, 2 alleles, 16 loci, and mixability ratio \(d_{\hat{i}\hat{j}} = 1.0112\) (green line, top left panel), while in Fig. 1 selection makes the correct mixability evaluation 58% of the time by testing 4.3% of all possible genotypes, in Fig. 2 selection makes the correct evaluation 59% of the time by testing 3.0% of all possible genotypes. This evaluation reaches a rate of 98−100% correct with \(d_{\hat{i}\hat{j}} \ge 1.08\) (all bottom panels of Fig. 2).

Random sampling in the multi-locus haploid model without random genetic drift. The simulation conditions are as described in Fig. 1, except that now parents are divided into two mating types, mating can occur only between type 1 and type 2 individuals, each parent participates in exactly one reproductive event that creates two offspring, and each allele in each parent is transmitted exactly once. The difference between the present figure and Fig. 1 shows the importance of drift due to the sampling of parents and of alleles with replacement.

All loci

Above we have tracked two alleles at one locus. How do the results change if we track two alleles at all L loci simultaneously? Let us initialize the fitness matrix with random values as before from \(\mathcal{N}(0.7,0.15)\) and change these values using Eqs. (1) and (2) for all loci, one after the other from the 1st to the L-th, in order to obtain mixability ratios between two particular alleles at each locus nearly equal to some predefined value \(d_{\hat{i}\hat{j}}\) that is, for simplicity, equal across loci. Namely, let \(\hat{i}_{l}\) and \(\hat{j}_{l}\) be pairs of alleles at the l-th locus, where 1≤l≤L. Eqs. (1) and (2) are first applied to the first locus, where \(\tilde{w}\) and w are rewritten as \(w^{0}\) and \(w^{1}\), respectively (and similarly for the μs). Then the same transformation is applied to the second locus:

$$\begin{array}{*{20}l} w^{2}_{i_{1}, \hat{i}_{2}, \ldots, i_{L}}&=w^{1}_{i_{1}, \hat{i}_{2}, \ldots, i_{L}}\sqrt{\frac{d_{\hat{i}\hat{j}} \sum\limits_{i_{1}, i_{3}, \ldots, i_{L}} w^{1}_{i_{1}, \hat{j}_{2}, \ldots, i_{L}}}{\sum\limits_{i_{1}, i_{3}, \ldots, i_{L}} w^{1}_{i_{1}, \hat{i}_{2}, \ldots, i_{L}}}}\\ &= w^{1}_{i_{1}, \hat{i}_{2}, \ldots, i_{L}}\sqrt{\frac{d_{\hat{i}\hat{j}} \mu^{1}_{\hat{j}_{2}}}{\mu^{1}_{\hat{i}_{2}}}} \end{array} $$

$$w^{2}_{i_{1}, \hat{j}_{2}, \ldots, i_{L}}=w^{1}_{i_{1}, \hat{j}_{2}, \ldots, i_{L}}\sqrt{\frac{\mu^{1}_{\hat{i}_{2}}}{d_{\hat{i}\hat{j}} \mu^{1}_{\hat{j}_{2}}}}. $$

This procedure is repeated until finally the fitness values at the last locus are adjusted. The mixability ratio for alleles \(\hat{i}_{L}\) and \(\hat{j}_{L}\) at the last locus is now precisely equal to the predefined value \(d_{\hat{i}\hat{j}}\), and it has been verified by simulation that the mixability ratios for alleles at the other loci are approximately equal to this value. We now let P be the sum across loci of the number of trials in which, for the particular allele pair \(\hat{i}_{l}\) and \(\hat{j}_{l}\) at each locus, the allele that was more mixable across all possible genotypes increased in frequency more than the other allele, divided by the product of L and the number of trials. The results further underscore the power of the mixability effect: it is obtained for all loci simultaneously (see Appendix Fig. 8).
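Under the assumptions of the earlier sketches, the locus-by-locus transformation amounts to a single sweep:

```python
def adjust_all_loci(w, d):
    """Apply the Eq. (1)-(2) rescaling at each locus in turn (tracking the
    allele pair (0, 1) at every locus). After the sweep, the ratio at the
    last locus is exactly d, and the ratios at earlier loci remain close
    to d, as the text notes was verified by simulation."""
    for locus in range(w.ndim):
        w = adjust_ratio(w, 0, 1, d, locus=locus)
    return w

w = adjust_all_loci(truncated_normal_fitness((2,) * 12), d=1.0465)
```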
Sex vs. asex in the haploid model

Previously it was shown that selection for mixability occurs in sexual and not in asexual populations [21, 35]. However, to actually observe this difference properly in a simulation is not a trivial task. That is, to draw a comparison one must start the sexual and asexual populations from the same initial conditions. Then, if mixability is measured in a multigenerational process, it takes time for the populations to diverge and begin to show a consistent difference in mixability, while at the same time the mixability measure becomes a proxy that loses power over time. Thus, the difference in mixability between sex and asex is best observed during the evolutionary transient [21]. Here and in [40] we use a different method that is based on a single-generation analysis, in which starting the populations from equal beginnings poses a different but related problem: the usual way to generate an initial population would be to draw genotypes at random, but randomness is precisely the element that is supposed to be controlled for. In other words, starting at linkage equilibrium makes the asexual population, when observed through a time window of one generation, essentially a sexual one (one that has just lost the ability to reproduce sexually, and hence is still at linkage equilibrium). One way of overcoming this problem is to start at perfect linkage disequilibrium—start with several clones, and in the sexual case allow only for mating between clones (see the code sketch below). In the asexual case, reproduction will copy the genotypes of the given initial clones. In the sexual case, the shuffling of the genes will produce more combinations than the initial ones, with g increasing with the recombination rate.

Figure 3 demonstrates the result of such a simulation for a population size of 2000 haploids, \(d_{\hat{i}\hat{j}}\) values ranging from approximately 1.01 to 1.11, n=2 alleles per locus, 12 loci and 100 independent trials for each parameter combination. The starting population consists of two clones; that is, let \(0_{l}\) and \(1_{l}\) be the first and second alleles, respectively, at locus l∈{1,…,L}. In this notation, the first clone is \((0_{1},0_{2},\ldots,0_{L})\) and the second is \((1_{1},1_{2},\ldots,1_{L})\). In the top-left panel, where \(d_{\hat{i}\hat{j}}=1.0112\), the alleles are almost equally mixable, and P varies from 0.52 in the asexual case (no recombination; left end of panel) to 0.60 in the free-recombination sexual case (right end of panel). The difference stands out in the central-left panel, where \(d_{\hat{i}\hat{j}}=1.0465\) (P increases from 0.56 in the asexual case to 0.82 in the free-recombination case), and reaches its maximum in the bottom-right panel, where \(d_{\hat{i}\hat{j}}=1.1111\) (from 0.64 in the asexual case to 0.99 in the sexual one). Understandably, the number of tested genotypes, g, increases with the recombination rate.

Comparison of sampling made by sex and asex in a multi-locus haploid model. Fitness values were drawn from the normal distribution \(\mathcal{N}(0.7, \, 0.15)\) as described in the text. The starting population consists of two clones. In each panel, for each recombination rate from 0 (asex) to 0.5 (sex, free-recombination case) on the x-axis, for a population size of 2000, 12 loci and 2 alleles per locus, based on 100 independent trials, the red line shows g, the blue solid line shows P, and the blue dashed line demarcates the 95% confidence interval of P, as in Fig. 1.
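A sketch of the two-clone initialization and between-clone mating, under the same illustrative assumptions as the earlier sketches:

```python
def two_clone_population(N, L):
    """Perfect linkage disequilibrium: half the parents are the clone
    (0, ..., 0) and half are the clone (1, ..., 1)."""
    pop = np.zeros((N, L), dtype=int)
    pop[N // 2:] = 1
    return pop

pop = two_clone_population(2000, 12)
# Sexual case: mate parents from opposite clones at recombination rate r;
# with r = 0 the offspring simply reproduces one of the two clones (asex).
child, _ = mate(pop[0], pop[-1], r=0.3)
```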
As the population size increases, P increases for the sexual population but remains the same for the asexual one (Fig. 4). As the standard deviation of the fitness distributions is increased, P decreases much faster for the asexual than for the sexual population (Appendix Fig. 9). These results clearly demonstrate the power of randomization due to sex.

Comparison of sampling by sex and by asex in the multi-locus haploid model for different population sizes. The simulation conditions are as described in Fig. 3, except that now the population size varies on the x-axes and only two recombination rate values are used, r=0 (asex, cyan solid line; 95% C.I. cyan dashed lines) and r=0.5 (sex, blue solid line; 95% C.I. blue dashed lines). The probability that the allele that is more mixable across all possible genotypes was favored, P, is markedly higher in the sexual case. Furthermore, as the population size is increased, P increases in the sexual population but not in the asexual one. This figure shows that with increasing population size, selection for mixability becomes stronger only in the sexual population.

Comparison of the simulation with theoretical probabilities in the haploid model

The probability P can also be examined from a statistical perspective, where we are dealing with two distributions: one for the fitness values of the genotypes carrying one allele, and another for those carrying the other allele (distributions that would partly overlap in the diploid case). The question then is how well natural selection can distinguish correctly which distribution has the higher mean: in the sex case, based on comparing small random samples from these distributions, and in the asex case simulated above, based on one observation from each distribution. In the latter case, the correct evaluation is made if the observation (clone) with higher fitness belongs to the distribution with the higher mean. Thus, the asexual probability of correct evaluation, P, can be directly calculated if the joint distribution of the random variables is known, a calculation which is greatly simplified when those random variables are independent.

Thus, let X and Y be independent random variables with probability density functions \(f_{X}(x)\) and \(f_{Y}(y)\), representing the fitness value distributions of genotypes carrying allele \(\hat{i}\) and allele \(\hat{j}\), respectively. Since they are independent, their joint probability density function is the product of their individual probability density functions, \(f_{X,Y}(x,y)=f_{X}(x) \cdot f_{Y}(y)\), and the probability that one random variable is greater than another is

$$ P(X<Y) = \iint \limits_{x< y} f_{X,Y}(x,y) \, dx \, dy = \iint \limits_{x< y} f_{X}(x) f_{Y}(y) \, dx \, dy. $$ (3)

In the sexual case, in contrast, averages of N points from each distribution are compared. Specifically, let \(X_{1},X_{2},\ldots,X_{N}\) be independent random variables with the common density function \(f_{X}\) and \(Y_{1},Y_{2},\ldots,Y_{N}\) be independent random variables with the common density function \(f_{Y}\). Let \(E_{X},E_{Y},\sigma_{X},\sigma_{Y}\) be the expectations and standard deviations of X and Y, respectively, \(A_{N} = \frac{X_{1} + X_{2} + \cdots + X_{N}}{N}\), \(B_{N} = \frac{Y_{1} + Y_{2} + \cdots + Y_{N}}{N}\), and \(f_{A}\) and \(f_{B}\) be the probability density functions of the normal distributions \(\mathcal{N}_{A}=\mathcal{N}\left(E_{X}, \sigma_{X} / \sqrt{N}\right)\) and \(\mathcal{N}_{B}=\mathcal{N}\left(E_{Y}, \sigma_{Y} / \sqrt{N}\right)\), respectively.
Then, by the central limit theorem (CLT), for sufficiently large N, the random variable \(A_{N}\) has approximately the distribution \(\mathcal{N}_{A}\) and the random variable \(B_{N}\) has approximately the distribution \(\mathcal{N}_{B}\), and the probability that the average of N randomly selected points from one distribution is bigger than the average of N randomly selected points from the other can be calculated as follows:

$$ \begin{aligned} P(A_{N} < B_{N})&= \iint \limits_{a< b} f_{A,B}(a,b) \, da \, db\\ &= \iint \limits_{a< b} f_{A}(a) f_{B}(b) \, da \, db. \end{aligned} $$ (4)

A comparison shows that, as the population size is increased (2000 and larger), the simulated P comes closer to the theoretical P in Eq. (4) (Fig. 4 and Table 1).

Table 1 Comparison of theoretical and simulated probabilities, haploid case
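When both distributions are taken as normal (the truncated-normal tails make this an approximation), the double integrals in Eqs. (3) and (4) reduce to a single normal CDF, since the difference of two independent normals is itself normal. A minimal sketch, with illustrative parameter values:

```python
from scipy.stats import norm

def p_correct(E_x, sd_x, E_y, sd_y, N=1):
    """P(A_N < B_N) for independent normal X and Y (Eqs. 3-4): B_N - A_N is
    normal with mean E_y - E_x and variance (sd_x**2 + sd_y**2) / N, so the
    double integral reduces to a normal CDF. N = 1 is the asexual
    single-clone comparison of Eq. (3)."""
    return norm.cdf((E_y - E_x) / np.sqrt((sd_x**2 + sd_y**2) / N))

# Illustrative means roughly matching d ~ 1.01 (0.7 / 0.693 ~ 1.0101):
print(p_correct(0.693, 0.15, 0.7, 0.15, N=1))     # one draw each: ~0.51
print(p_correct(0.693, 0.15, 0.7, 0.15, N=2000))  # averages of N draws: ~0.93
```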
Multilocus diploid model

In our diploid model, there are no position effects; hence a genotype with alleles (i,j) at the l-th locus has the same fitness value as a genotype with alleles (j,i) at that locus, yielding \(\left(\frac{n(n+1)}{2}\right)^{L}\) different genotypes. For each trial of the simulation, we randomize the fitness values \(w_{i_{1} j_{1}, i_{2} j_{2}, \ldots, i_{L} j_{L}}\) such that the two alleles of interest \(\hat{i}\) and \(\hat{j}\) at the first locus, with mixabilities defined as \(\mu_{\hat{i}} = \frac{1}{n\cdot \left(\frac{n(n+1)}{2}\right)^{L-1}} \times \sum\limits_{k, i_{2}, j_{2}, \ldots, i_{L}, j_{L}} w_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}\) and \(\mu_{\hat{j}} = \frac{1}{n\cdot \left(\frac{n(n+1)}{2}\right)^{L-1}} \times \sum\limits_{k, i_{2}, j_{2}, \ldots, i_{L}, j_{L}} w_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}\), respectively, have a mixability ratio \(\mu_{\hat{i}} / \mu_{\hat{j}}\) almost equal to a pre-chosen value \(d_{\hat{i}\hat{j}}\). Due to computational restrictions, the simulation was performed for n=2 alleles per locus. As in the haploid model, fitness values \(\tilde{w}\) were first drawn from \(\mathcal{N}(0.7, \, 0.15)\) and then truncated. Then, the fitness values of alleles \(\hat{i}\) and \(\hat{j}\) were adjusted as follows:

$$ \begin{aligned} w_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} = \tilde{w}_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} \sqrt{\frac{\left(2d_{\hat{i}\hat{j}}-1\right)\sum\limits_{l\neq\hat{i}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \tilde{w}_{\hat{j}l, i_{2} j_{2}, \ldots, i_{L} j_{L}}}{\sum\limits_{l\neq\hat{j}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \tilde{w}_{\hat{i}l, i_{2} j_{2}, \ldots, i_{L} j_{L}}}} \end{aligned} $$ (5)

$$ \begin{aligned} w_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} = \tilde{w}_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} \sqrt{\frac{\sum\limits_{l\neq\hat{j}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \tilde{w}_{\hat{i}l, i_{2} j_{2}, \ldots, i_{L} j_{L}}}{\left(2d_{\hat{i}\hat{j}}-1\right)\sum\limits_{l\neq\hat{i}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \tilde{w}_{\hat{j}l, i_{2} j_{2}, \ldots, i_{L} j_{L}}}} \end{aligned} $$ (6)

(see the Appendix "Obtaining mixability ratios in the diploid case" section).

Figure 5 shows that the diploid case results are stronger than the haploid ones. For example, the 95% confidence interval for P over all loci tested here is included in 0.56−0.72 for the diploid vs. 0.52−0.68 for the haploid for \(d_{\hat{i}\hat{j}} = 1.0112\); 0.92−0.98 diploid vs. 0.84−0.95 haploid for \(d_{\hat{i}\hat{j}} = 1.0588\); and 0.99−1 vs. 0.96−1 for \(d_{\hat{i}\hat{j}} = 1.1111\). P increases with d and varies little across panels. Results for two mating types are similar to Fig. 5 (see Appendix Fig. 10), and much stronger with random genetic drift removed (Appendix Fig. 11). However, the reason that the diploid results are stronger appears to be that, for the same mixability ratio, the fitness difference between the homozygotes at a given locus in the diploid model is bigger than the fitness difference between two alleles at a given haploid locus, because of the existence of the heterozygote genotype in the former, and only the homozygotes \(\hat{i}\hat{i}\) and \(\hat{j}\hat{j}\) contribute to P (the P that relates to \(\hat{i}\) and \(\hat{j}\) at the given locus). This effect decreases with the number of alleles (Appendix Fig. 12).

Random sampling in a multi-locus diploid model. The results were produced and presented in a manner analogous to Fig. 1, the difference being that this model is diploid and the number of loci ranges from 2 to 16. Results are much stronger than in the haploid case.

Sex vs. asex in the diploid model

Given the two alleles \(0_{l}\) and \(1_{l}\) at each locus 1≤l≤L and the homozygous clones \((0_{1}0_{1};0_{2}0_{2};\ldots;0_{L}0_{L})\) and \((1_{1}1_{1};1_{2}1_{2};\ldots;1_{L}1_{L})\), any sexual mating between clones will produce the same F1 genotype \((0_{1}1_{1};0_{2}1_{2};\ldots;0_{L}1_{L})\). Therefore, to compare sex and asex in the diploid case, we simulated two generations and compared the starting population with the second generation's population. Results of this simulation for a population size of 2000 diploid individuals, \(d_{\hat{i}\hat{j}}\) values ranging from 1.01 to 1.11, n=2 alleles per locus, 8 loci and 100 independent trials for each parameter combination are presented in Fig. 6. In comparison to Fig. 3, P is larger for both the asex and free-recombination cases across panels. The difference in P between sex and asex in Fig. 6 increases faster with d than in Fig. 3 (first 5 panels) and then decreases due to a ceiling effect. As in the haploid case, P increases with the population size for the sexual population but remains the same for the asexual one (Appendix Fig. 13). Again as in the haploid case, as the standard deviation of the fitness distributions is increased, P decreases much faster for the asexual than for the sexual population (Appendix Fig. 14).

Comparison of sampling by sex and by asex in a multi-locus diploid model. Fitness values were drawn from the normal distribution \(\mathcal{N}(0.7, \, 0.15)\). In each panel, results are shown for a population size of 2000, 8 loci and 2 alleles per locus. The simulation conditions are as described in Fig. 3, the difference being that this model is diploid.

Comparison of the simulation with theoretical probabilities for the diploid case

The probability P in the asexual case can be calculated theoretically if the joint distribution of the random variables X and Y from expression (3) is known. However, the fitness value distributions for the two alleles of interest overlap in the diploid case; hence they are not independent, and the simplification in the second equation of (3) cannot be used. Consider the distributions of fitness values for homozygotes at the first locus for genotypes with two alleles per locus. Let \(\tilde{X}\) and \(\tilde{Y}\) be random variables from the distributions \(f_{\tilde{X}}\) and \(f_{\tilde{Y}}\) of genotypes with \(\hat{i}\hat{i}\) and \(\hat{j}\hat{j}\), respectively, at the first locus, which are independent. Now,
$$ \begin{aligned} P\left(\tilde{X} < \tilde{Y}\right) = \iint \limits_{x< y} f_{\tilde{X},\tilde{Y}}(x,y) \, dx \, dy = \iint \limits_{x< y} f_{\tilde{X}}(x) f_{\tilde{Y}}(y) \, dx \, dy \end{aligned} $$ (7)

(see the Appendix "Derivation of expression (7)" section). The ratio between the expectations of \(\tilde{X}\) and \(\tilde{Y}\) is equal to \(2d_{\hat{i}\hat{j}}-1\) (see Eqs. (5), (6)). It is greater than the ratio between the expectations of X and Y, which is equal to \(d_{\hat{i}\hat{j}}\), because of the heterozygous genotype (see the Appendix "Obtaining mixability ratios in the diploid case" section for details). Therefore, the difference between sex and asex in the diploid case is greater than in the haploid case. Appendix Fig. 14 shows that it increases with σ. By the CLT, the mean of N points from one distribution has the distribution \(\mathcal{N}(E,\sigma / \sqrt{N})\) for large enough N. Therefore, Eq. (4) can be used here. Table 2 shows that the simulated P value is close to the theoretical one.

Table 2 Comparison of theoretical and simulated probabilities, diploid case

In both haploid and diploid cases, we find that sex has the power of randomization: by essentially randomizing genetic combinations, the allele that is favored by natural selection in its interactions with the existing genetic combinations in a current, finite population is also likely to perform better overall across the much larger space of untested, potential genotypes. The results extend our previous study [40] to the multilocus case. Indeed, increasing the number of loci substantially strengthens the effect: as the number of loci increases, an ever smaller fraction of the space of potential genotypes needs to be tested in order for selection to favor, with ever increasing accuracy, the allele that will most likely also be mixable across the many untested potential genotypes. In addition, we demonstrate the power of randomization due to sex by directly comparing sex and asex, showing that selection favors the more mixable alleles substantially more in the sexual population, more so for larger populations and intermediate fitness variance. For sufficiently small σ, even one randomly selected point is sufficient to distinguish the two distributions, i.e., the accuracy in the asexual case is high and is therefore close to that of the sexual one. For sufficiently large σ, the distributions are very close, and even many randomly selected points are not enough to distinguish them, i.e., the accuracy in the sexual case is low, close to that of the asexual one.

To better understand the idea of sex as randomization, it is useful to contrast it with previous theories, such as the Fisher-Muller theory of the benefit of sex [17, 18]. In the latter, sex allows for parallel as opposed to serial accumulation of beneficial mutations: beneficial mutations at different loci that originated in different individuals can be combined into one individual, whereas in an asexual population, such mutations must occur serially in the same clone in order to accumulate under natural selection [17, 18, 43, 44]. However, that theory assumes a priori that a beneficial allele is favored over the wild-type no matter what genetic combination it is in—it is "beneficial" in the sense that it has a value of its own, independent of alleles at other loci, and once it arises, it spreads to fixation because of this rather independent effect [17, 18, 43, 44].
In that framework, there is no need for selection to explore the value of an allele over the generations, because the allele is understood to be beneficial from the start, independently of the genetic context. In the present analysis, our focus is on the mixability of alleles as the measure of interest, rather than the population mean fitness, while allowing for genetic interactions.

Another surprising implication of the results is as follows. Both in the case where genes do not interact, and in the case where they do interact but in a random fashion (where the fitness of a genotype is a random function of its constituent alleles—the most complex function in the Kolmogorov complexity sense [45]), there is no information to be gained on the mixability of alleles by random sampling of potential genotypes and their fitnesses [40]. Therefore, if the power of randomization by sex is important in nature, then genetic interactions must be common and structured—they must not be overly complex [40]. This implication further underscores the difference between the idea of sex as harnessing the power of randomness [40] and previous theories on the role of sexual reproduction in evolution. For example, the deterministic mutation hypothesis requires a more restrictive form of genetic interaction [20].

In evolutionary theory, randomness has been seen as a force that leads directly to new genetic information: random mutation represents random change in the information that is stored in the genome, a change that may sometimes contribute to a beneficial phenotypic change. A recombination event can result in a beneficial change to the extent that it can create a lasting beneficial combination of alleles, for example as in the Shifting-Balance Theory [46, 47]. Here we show a very different way by which randomness can be important in evolution: it can be harnessed in an effective way, not as a force that leads to new genetic information directly, but as an element of a larger system. In our case, it makes natural selection in a finite population act in a manner indicative of the ability of alleles to perform well as interactants in the space of untested potential genotypes [40]. Indeed, the fact that randomization can be harnessed very effectively as a part of a bigger system is well known. In the experimental sciences, it is used for random sampling or random assignment to conditions. In computer science, many algorithms have been created that use randomization in an effective way, from testing whether an algebraic identity is correct, to encrypting messages, to testing software, to sorting large files, and more [48, 49]. However, this well-known effect of randomization has not previously been proposed as a possibly important element in the process of evolution.

A better understanding of sexual reproduction may be relevant not only to population genetics but also to computer science. It is well known that a simple hypothesis that explains many different facts is a good hypothesis [45]—it does not suffer from over-fitting and is more likely to be correct. Mixable alleles are alleles that work well in many different genetic contexts, and can be viewed as simple modules [21, 30, 31, 35]. In this light, sex may be seen as a phenomenon that decomposes the genome into recombining loci, where a mixable allele represents a good, simple "hypothesis" about what genetic information at a given locus will work well in interaction with the genetic information at other loci.
Viewing mixability as nature's way of simplifying interactions between genes, Hinton and colleagues designed an analogous method for the training of deep learning neural networks [30, 31] called "dropout," where 50% of the units in the network are chosen at random and temporarily dropped out of the network at each instance of training. This prevents the appearance over time of units that work well only in the context of specific other units, and favors instead the appearance of units that perform well across different contexts, as in the mixability effect of sex on alleles [30, 31]. This serves as a form of simplification of the interactions between units and as a means of preventing over-fitting while creating robust units [30, 31]. The resulting algorithm was described as one of four breakthroughs that allowed for the comeback of artificial neural networks through deep learning in 2012 [32, p. 440], which in turn has been a key part of the recent global artificial intelligence revolution (e.g., [33]). Relatedly, because the interaction of sex and natural selection acts in a manner that is indicative of the performance of alleles in future genetic combinations, and because inference-making is a central aspect of learning, our finding naturally connects with recent work proposing that evolution can be viewed as a learning process (e.g., [36, 50–54]). Both the theory of Interaction-based Evolution [51, 54] and Evolutionary Connectionism [52, 53] recognize the importance of simplification in learning processes but approach simplification in biological evolution in different ways. According to Interaction-based Evolution (IBE) theory, simplification can be implemented directly by mechanisms of genetic change [54]. Therefore, parsimony can serve as a central force in evolution, and natural selection on the one hand and genetic mechanisms of simplification on the other can interact and allow for evolution by the combination of parsimony and fit [54]. In contrast, the evolutionary connectionist approach took methods of simplification known in machine-learning and introduced them into evolutionary simulations of gene-regulatory networks (GRNs) to some beneficial effect [55] but did not ground this simplification biologically in a way that would explore its relevance and importance to biological evolution. The present results exemplify this difference. One way by which simplification is forced into the simulations of the evolutionary connectionist approach is by introducing Gaussian noise to the "target phenotype" at each generation [55]. Interestingly, however, the authors of ref. [55] connect their approach to dropout: "Masking spurious details in the training set by adding noise to the training samples during the training phase is a general method to combat the problem of over-fitting in learning systems. This technique is known as 'training with noise' or 'jittering'... and is closely related to the use of intrinsic noise in deep neural networks; a technique known as 'dropout' " [55, p.9]. However, note that a) dropout was motivated by mixability theory [31]; b) from the point of view of mixability, sexual recombination was seen as nature's way of simplifying interactions between loci [21, 30, 31, 35]; and c) sexual recombination is a quintessential example of a mechanism of genetic change. Thus, the quote from ref. [55] actually returns us to the position of IBE [51, 54], which focuses on the centrality of mechanisms of genetic change. 
That is, we have demonstrated that randomization is directly inserted into the evolutionary process in nature by sexual recombination itself. Sexual recombination decomposes the genome into simple units or modules [21, 30, 31, 35], where an allele will be favored at a focal locus if it is a better, simple, generalizable hypothesis about what information will work well with other pieces of information at other loci. Thus, simplification in evolution can be implemented by mechanisms of genetic change. Indeed, in both evolution and statistical tests, randomization allows an outcome based on a small sample to be indicative of the outcome that would emerge from a far larger space of possibilities. In the case of evolution, it allows selection to act as a signal of the mixability of an allele in future genetic combinations. In the case of statistical tests, randomization allows for inference and generalization, which are key aspects of learning processes.

The theoretical study of the role of sex in evolution has traditionally focused on the question of how sex might facilitate the increase in population mean fitness. However, this focus is insufficient to explain the evolution of this complex adaptation, because the mean fitness does not necessarily capture complex biological structure. Mixability theory takes an alternative approach that focuses on the ability of alleles to perform well as interactants across a wide variety of different genetic combinations and on how the sexual shuffling of the genes affects this performance. We found that in both haploid and diploid multilocus systems, alleles that perform better across existing genetic combinations are also the ones most likely to perform better across the much larger space of untested genotypes. Thus, under realistic conditions, the interaction of sex and natural selection makes the success of an allele due to its mixability in the current finite population indicative of its success as an interactant in future genetic combinations.

Obtaining mixability ratios in the diploid case

In the diploid multi-locus model, for the \(\left(\frac{n(n+1)}{2}\right)^{L}\) genotypes with n alleles and L loci, for each trial of the simulation, fitness values \(\widetilde{w}_{i_{1} j_{1}, i_{2} j_{2}, \ldots, i_{L} j_{L}}\) are drawn from the normal distribution \(\mathcal{N}(E, \sigma)\) with average E=0.7 and standard deviation σ=0.15 and then truncated as described in the main text. If the fitness values of alleles \(\hat{i}\) and \(\hat{j}\) were adjusted as follows,

$$ w_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} = \widetilde{w}_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} \sqrt{\frac{d_{\hat{i}\hat{j}}\sum\limits_{k\neq\hat{i}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}{\sum\limits_{k\neq\hat{j}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}} $$ (8)

$$ w_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} = \widetilde{w}_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} \sqrt{\frac{\sum\limits_{k\neq\hat{j}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}{d_{\hat{i}\hat{j}}\sum\limits_{k\neq\hat{i}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}}, $$ (9)

then the mixability ratio between \(\hat{i}\) and \(\hat{j}\) would have been nearly equal to \(d_{\hat{i}\hat{j}}\) if the number of alleles n were sufficiently large.
Due to computational restrictions, however, we ran simulations for n=2; hence Eqs. (8) and (9) need to be changed to make the mixability ratio between \(\hat{i}\) and \(\hat{j}\) equal to \(d_{\hat{i}\hat{j}}\). In this case, for L loci, the fitness matrix has \(\left(\frac{n(n+1)}{2}\right)^{L} = 3^{L}\) values. Notice that, for alleles \(\hat{i}\) and \(\hat{j}\) at the first locus, expression (8) increases the fitness values of genotypes with \(\hat{i}\hat{i}\) and \(\hat{i}\hat{j}\) at the first locus at some given rate, and expression (9) decreases the fitness values of genotypes with \(\hat{i}\hat{j}\) and \(\hat{j}\hat{j}\) at the first locus at the same rate. Thus, the fitness values of genotypes with \(\hat{i}\hat{j}\) at the first locus will be the same as in the beginning, and the ratio between the mixabilities of the pairs \(\hat{i}\hat{i}\) and \(\hat{j}\hat{j}\) (see [35] for the definition of mixability for k-tuples of interacting alleles) will be precisely \(d_{\hat{i}\hat{j}}\). However, we need the ratio between the mixabilities of \(\hat{i}\hat{i} + \hat{i}\hat{j}\) and \(\hat{i}\hat{j} + \hat{j}\hat{j}\) to be equal to \(d_{\hat{i}\hat{j}}\). Thus, to get the predefined mixability ratio \(d_{\hat{i}\hat{j}}\), the fitness values of genotypes with \(\hat{i}\hat{i}\) and \(\hat{j}\hat{j}\) at the first locus should be adjusted differently.

Let \(a_{i}\) be the fitness values of genotypes with \(\hat{i}\hat{i}\), \(b_{i}\) be the fitness values of genotypes with \(\hat{i}\hat{j}\), and \(c_{i}\) be the fitness values of genotypes with \(\hat{j}\hat{j}\) at the first locus, and let \(d_{\hat{i}\hat{j}}\) be a predefined mixability ratio. From Eqs. (8) and (9) it follows that

$$\frac{\sum a_{i}}{\sum c_{i}} = d_{\hat{i}\hat{j}}. $$

We would like to obtain

$$\frac{\sum a_{i} + \sum b_{i}}{\sum b_{i} + \sum c_{i}} = d_{\hat{i}\hat{j}}. $$

This is equivalent to

$$\frac{\sum a_{i}}{\sum c_{i}} = d_{\hat{i}\hat{j}} + \frac{\sum b_{i}}{\sum c_{i}} \left(d_{\hat{i}\hat{j}} - 1\right). $$

Since the \(b_{i}\) and \(c_{i}\) are drawn from the same distribution \(\mathcal{N}(0.7, 0.15)\), we have \(\sum b_{i} \approx \sum c_{i}\), and we get

$$\frac{\sum a_{i}}{\sum c_{i}} = 2d_{\hat{i}\hat{j}} - 1. $$

Hence, (8) and (9) are adjusted to

$$ w_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} = \widetilde{w}_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} \sqrt{\frac{\left(2d_{\hat{i}\hat{j}}-1\right)\sum\limits_{k\neq\hat{i}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}{\sum\limits_{k\neq\hat{j}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}} $$ (10)

$$ w_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} = \widetilde{w}_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}} \sqrt{\frac{\sum\limits_{k\neq\hat{j}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{i}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}{\left(2d_{\hat{i}\hat{j}}-1\right)\sum\limits_{k\neq\hat{i}; i_{2}, j_{2}, \ldots, i_{L}, j_{L}} \widetilde{w}_{\hat{j}k, i_{2} j_{2}, \ldots, i_{L} j_{L}}}}. $$ (11)
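A sketch of the diploid adjustment in Eqs. (10) and (11) for n=2, reusing the hypothetical helpers above. The encoding of the three first-locus genotypes as indices 0, 1, 2 is an illustrative assumption:

```python
def adjust_diploid(w3, d):
    """Eqs. (10)-(11) for n = 2 alleles: axis 0 indexes the first-locus
    genotype (0 = i-hat i-hat, 1 = i-hat j-hat, 2 = j-hat j-hat). Scaling
    the two homozygote slices by f and 1/f makes sum(a)/sum(c) = 2d - 1;
    since sum(b) is close to sum(c), the full mixability ratio
    (sum(a) + sum(b)) / (sum(b) + sum(c)) comes out close to d."""
    w3 = w3.copy()
    f = np.sqrt((2 * d - 1) * w3[2].sum() / w3[0].sum())
    w3[0] *= f   # i-hat i-hat genotypes scaled up
    w3[2] /= f   # j-hat j-hat genotypes scaled down; heterozygotes untouched
    return w3

# L = 6 diploid loci, 3 locus-genotypes each -> a 3**6-entry fitness table.
w3 = adjust_diploid(truncated_normal_fitness((3,) * 6), d=1.05)
mu_i = w3[0].sum() + w3[1].sum()   # proportional to the mixability of i-hat
mu_j = w3[2].sum() + w3[1].sum()
print(mu_i / mu_j)                 # close to 1.05
```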
Derivation of expression (7)

As before, the theoretical probability of correct inference in the diploid multilocus model can be calculated as follows:

$$P(X<Y) = \iint \limits_{x< y} f_{X,Y}(x,y) \, dx \, dy, $$

where X and Y are random variables with joint probability density function \(f_{X,Y}(x,y)\). Recall that X is drawn from the distribution of fitness values of genotypes that contain one allele of interest, \(\hat{i}\), and Y is drawn from the distribution of fitness values of genotypes that contain the other one, \(\hat{j}\). The difficulty here is that these distributions are not independent, because of the existence of a genotype that contains both alleles of interest. Considering the case of two alleles per locus, let the genotypes at the first locus be \(\hat{i}\hat{i}, \hat{j}\hat{j}\) and \(\hat{i}\hat{j}\), and let the fitness value distributions for each be the same as those of \(\tilde{X}\) (for \(\hat{i}\hat{i}\)), \(\tilde{Y}\) (for \(\hat{j}\hat{j}\)) and \(\tilde{Z}\) (for \(\hat{i}\hat{j}\)). These three random variables are pairwise independent. We have \(X = \tilde{X} + \tilde{Z}\) and \(Y = \tilde{Y} + \tilde{Z}\). Therefore,

$$ P(X<Y) = P\left(\tilde{X} + \tilde{Z} < \tilde{Y} + \tilde{Z}\right) = P\left(\tilde{X} < \tilde{Y}\right), $$

where \(\tilde{X}\) and \(\tilde{Y}\) are independent.

Multilocus binary models

Multilocus haploid binary model

One of the causes of random deviations from "correct evaluation" of mixabilities in the multilocus haploid model was the probabilistic nature of survival. Here, we carry out a similar simulation with binary fitness values, such that the fitness value of each genotype is either 0 or 1. Now the mixability of an allele is calculated by dividing the number of genotypes of fitness 1 that carry this allele by the total number of genotypes that carry this allele, and \(d_{\hat{i}\hat{j}}\) is equal to the ratio of these fractions for the two alleles of interest. The starting population consists of concrete genotypes as in the main haploid model, but here we must ensure that all parents that survived to replicate have fitness 1 (i.e., there is no genotype with fitness 0 in the starting population), and that the mixabilities of the two alleles of interest (\(\hat{i}\) and \(\hat{j}\)) at the first locus closely approximate some predefined values. One way to do so is to place zeros and ones in the fitness matrix at the rate that would lead to the mixability ratio, \(d_{\hat{i}\hat{j}}\), chosen for the given simulation (see the sketch below). Note that, if the number of loci L is small relative to the population size and the starting allele frequencies are equal, then the set of parents may include too many possible genotypes, leaving no room for enough zero values in the fitness matrix. Therefore, we can run the simulation with discrete fitness values for the haploid multilocus model only for sufficiently large values of L, depending on the population size N and the number of alleles per locus n.
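A sketch of one way to fill the binary fitness matrix at the prescribed rates (the additional constraint that every starting parent must have fitness 1 is omitted here for brevity; names are illustrative):

```python
def binary_fitness(L, frac_i=0.9, frac_j=0.89):
    """Binary fitness table for n = 2 alleles: genotypes carrying allele 0
    at the first locus get fitness 1 at rate frac_i, genotypes carrying
    allele 1 at rate frac_j, so d = frac_i / frac_j (approximately, up to
    rounding of the counts of ones)."""
    half = 2 ** (L - 1)                 # genotypes per first-locus allele
    w = np.zeros((2, half))
    w[0, rng.choice(half, round(frac_i * half), replace=False)] = 1.0
    w[1, rng.choice(half, round(frac_j * half), replace=False)] = 1.0
    return w.reshape((2,) * L)

w = binary_fitness(L=12)
print(mixability(w, 0) / mixability(w, 1))  # ~ 0.9 / 0.89 ~ 1.0112
```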
Appendix Fig. 15 shows the result of such a simulation for a population size of 2000 haploids, \(d_{\hat{i}\hat{j}}\) values ranging from approximately 1.01 to 1.11, n=2 alleles per locus and 100 independent trials for each parameter combination. For the selected population size and number of alleles, this requires L≥11. Results are analogous to those of Fig. 1: the number of tested genotypes is similar across all panels, P increases with \(d_{\hat{i}\hat{j}}\) but not with L, and the confidence intervals are similar. For example, for 16 loci, the confidence interval of P increases from (0.47,0.67) for \(d_{\hat{i}\hat{j}} = 1.0112\) to (0.95,1) for \(d_{\hat{i}\hat{j}} = 1.1111\) in Appendix Fig. 15, and from (0.49,0.67) for \(d_{\hat{i}\hat{j}} = 1.0112\) to (0.96,1) for \(d_{\hat{i}\hat{j}} = 1.1111\) in Fig. 1. Bar plots of the distributions of fitness values for the two alleles of interest in the case of 18 loci and three values of \(d_{\hat{i}\hat{j}}\) (1.0112, 1.0227 and 1.0345) are shown in Appendix Fig. 16. We see that the difference between these distributions increases with the mixability ratio. Also, the mixabilities of the given alleles are precisely equal to the predefined values (0.9 for the more mixable allele and from 0.89 to 0.87 for the less mixable one).

As in the main multilocus haploid model, we have examined not only hermaphrodites capable of selfing but also the case of two mating types. Results are very similar to those of Appendix Fig. 15 (not shown). Additionally, we have performed the sampling of parents and of alleles within parents without replacement to observe the "pure" effect of random sampling by sex free of the effects of drift (Appendix Fig. 17). Results are much stronger than those in Appendix Fig. 15 and are very similar to those in Fig. 2.

Sex vs. asex in the haploid binary model

Here we examine the mixability prediction for a range of recombination values and thus are able to compare asex (r=0) to sex (r=0.5 in the free-recombination case), starting at linkage disequilibrium, where the initial population consists of distinct clones, with discrete fitness values, 0 and 1. Because of this, offspring in the asexual case always survive. Therefore, all alleles of the population created in the asexual case during the simulation have equal frequencies, there is no difference between alleles \(\hat{i}\) and \(\hat{j}\), and the fraction of trials, P, in which the allele that is more mixable across all possible genotypes increases in frequency more than the allele that is less mixable across all possible genotypes, is precisely 0.5. In contrast, in the sexual population, any shuffling of alleles produces more combinations than existed originally (and the number of tested genotypes increases with the recombination rate). Thus P increases substantially even for small values of r (Appendix Fig. 18).

Random sampling in a multi-locus haploid model with two mating types. The simulation conditions are as described in Fig. 1, except that now the parents are divided into two mating types, so that mating can occur only between type 1 and type 2 individuals.

Multilocus diploid binary model

For the multilocus diploid binary model with L loci and n alleles per locus, each trial run of the simulation is similar to the haploid binary model. In the beginning, the starting population of random parents is created in such a way that all alleles in this population have equal frequencies. Then a fitness matrix with binary values is created. This step has some conceptual differences from the haploid binary model. We have \(\left(\frac{n(n+1)}{2}\right)^{L}\) genotypes, as in the general diploid case. However, at the stage of filling the fitness matrix with zeros and ones at some rate, we should take into account that, for the two alleles of interest \(\hat{i}\) and \(\hat{j}\) at the first locus, in the diploid model we have genotypes that contain both these alleles. As in the haploid binary model, the simulation can be run if the number of loci, L, is relatively large.
Appendix Fig. 19 shows the results of such a simulation with population size N=2000, number of loci L from 7 to 15 and two alleles per locus, presented in a manner analogous to Fig. 5. The results are the same as in the multilocus diploid model, i.e., stronger than in any of the haploid models. Examine the green lines positioned at 16 loci for the haploid model in Appendix Fig. 15 and at 12 loci for the diploid model in Appendix Fig. 19. The 95% confidence interval increases from (0.47,0.67) in the haploid and (0.56,0.73) in the diploid models in the top-left panel, to (0.82,0.94) in the haploid and (0.92,0.99) in the diploid models in the central panel, to (0.95,1.00) in the haploid and (0.99,1.00) in the diploid models in the bottom-right panel.

Sex vs. asex in the diploid binary model

We compared sex and asex for the binary fitness matrix in the diploid case. The starting population consists of two clones, for example, \((0_{1}0_{1};0_{2}0_{2};\ldots;0_{L}0_{L})\) and \((1_{1}1_{1};1_{2}1_{2};\ldots;1_{L}1_{L})\), where \(0_{l}\) and \(1_{l}\), 1≤l≤L, are two alleles, and two generations are computed and tracked. The results of this simulation are presented in Appendix Fig. 20 in a manner analogous to Appendix Fig. 18 and Fig. 6. In comparison to the haploid case (see Appendix Fig. 18), the confidence interval is narrower and higher. Take, for example, the green line drawn in each panel at a recombination rate of 0.3. For the small mixability ratio d=1.0112, the difference between the haploid and diploid cases is only in the size of the 95% confidence interval: while for the haploid case it is (0.47,0.66), for the diploid case it is (0.49,0.64). This difference becomes stronger in the central panel for d=1.0588: from (0.67,0.84) for the haploid case to (0.77,0.90) for the diploid case. Finally, in the bottom-right panel, the confidence intervals for the haploid and diploid binary models are different: (0.80,0.94) for the haploid vs. (0.93,0.99) for the diploid. The reason for this difference lies in Eqs. (5) and (6), as explained earlier.

Appendix figures

Random sampling in a multi-locus haploid model when all loci are tracked simultaneously. In each panel, results are shown for a population size of 2000, a varying number of loci from 2 to 20, and 2 alleles per locus, based on 100 independent trials. Bars for the 4, 8, 12 and 16 loci cases represent a 95% confidence interval for P based on 80 values, each of which was obtained based on 100 independent trials. P now refers to all loci rather than one (see main text). In comparison to the analysis of the first-locus case in Fig. 1, L times more transformations are applied here to the fitness matrix. This leads to a decrease in both its variance (the reason for the thinner confidence interval of P) and its average (the reason for the increase in g, because more genotypes need to be created to obtain N surviving individuals). To facilitate comparison, the green line highlights the results for the 16-locus case in Figs. 1 and 8.

Comparison of sampling by sex and asex in the multi-locus haploid model for different standard deviations of the initial fitness value distribution. The simulation conditions are as described in Fig. 4, except that now the population size is fixed (N=2000) and the standard deviation of the fitness distribution varies. This figure shows that as the standard deviation increases, P decreases rapidly to almost 0.5 in the asexual population, while in the sexual population it decreases far more slowly in an apparently linear fashion.

Random sampling in a multi-locus diploid model with two mating types.
The simulation conditions are as described in Fig. 5, except that now the parents are divided into two mating types, so that mating can occur only between type 1 and type 2 individuals.

Random sampling in the multi-locus diploid model without random genetic drift: fitness values from the normal distribution \(\mathcal{N}(0.7, \, 0.15)\), two mating types, and no replacement of parents and alleles. The simulation conditions are as described in Fig. 5, except that now the parents are divided into two mating types, each parent participates in exactly two reproductive events, and each allele in each parent is transmitted exactly once. The difference between the present figure and Fig. 5 shows the importance of drift due to the sampling of parents and of alleles with replacement.

Random sampling in a diploid model with two loci and different numbers of alleles. The simulation conditions are as described in Fig. 5, except that now the number of loci is fixed (L=2) and the number of alleles per locus, n, varies. This figure shows that as n increases, P decreases.

Comparison of sampling made by sex and by asex in the multi-locus diploid model for different population sizes. The simulation conditions are as described in Fig. 6, except that now the population size varies on the x-axes and only two recombination rate values are used, r=0 (asex, cyan solid line; 95% C.I. cyan dashed lines) and r=0.5 (sex, blue solid line; 95% C.I. blue dashed lines). This figure shows that P is much higher in the sexual than in the asexual population, and that as the population size is increased, P increases further in the sexual but not in the asexual population.

Comparison of sampling made by sex and by asex in the multi-locus diploid model for different standard deviations of the initial fitness distribution. The simulation conditions are as described in Appendix Fig. 13, except that now the population size is fixed (N=2000) and the standard deviation of the fitness values, σ, varies. This figure shows that as σ increases, P decreases rapidly in the asexual population to 0.52−0.63, while in the sexual population it decreases slowly in an apparently linear fashion. The maximum difference between P in the sex and asex cases is for σ of approximately 0.3−0.4.

Random sampling in a multi-locus haploid model with binary fitness values. The fraction of genotypes of fitness 1 out of all possible genotypes for the more mixable allele is 0.9 for each panel, whereas the fraction of genotypes of fitness 1 out of all possible genotypes for the less mixable allele decreases from 0.89 to 0.81, producing a range of d values (the ratio between the fractions of genotypes of fitness 1) from 1.0112 to 1.1111.

Distribution of fitness values in the haploid multi-locus model with binary fitness values. Each pair of bars shows the fraction of zeros or ones in the fitness matrix for one of the alleles of interest, for a population size of 2000, 18 loci and 2 alleles per locus. The fraction of genotypes of fitness 1 for the more mixable allele is 0.9, whereas the fraction of genotypes of fitness 1 for the less mixable allele decreases from 0.89 to 0.87, producing three d values. The left bar-chart shows two pairs of bars for alleles whose fractions of genotypes with fitness 1 are equal to 0.9 and 0.89, respectively, producing a mixability ratio \(d = \frac{0.9}{0.89} \approx 1.0112\).
The right bar-chart represents two pairs of bars for alleles whose fractions of genotypes with fitness 1 are equal to 0.9 and 0.87, respectively, producing a mixability ratio \(d = \frac{0.9}{0.87} \approx 1.0345\). The first pair of bars in each panel shows the fraction of zero values in the fitness matrix and the second pair shows the fraction of ones. Note that the difference between the more mixable allele and the less mixable allele increases with d.

Random sampling in the multi-locus haploid model with binary fitness values, two mating types and without replacement of parents and alleles. The simulation conditions are as described in Appendix Fig. 15, except that now the fitness values are binary and parents are divided into two mating types, so that mating can occur only between type 1 and type 2 individuals. Each parent participates in exactly one reproductive event, which creates two offspring, such that each allele in each parent is transmitted exactly once. The difference between the present figure and Appendix Fig. 15 shows the importance of drift due to the sampling of parents and of alleles with replacement.

Comparison of sampling made by sex and by asex in a multi-locus haploid model with binary fitness values. The simulation process is similar to that of Fig. 3, except that now the fitness values are binary.

Random sampling in a multi-locus diploid model with binary fitness values. The results are produced in a manner analogous to Appendix Fig. 15, the difference being that this model is diploid and the number of loci ranges from 7 to 15. The results in this model are much stronger than in the haploid case.

Comparison of sampling made by sex and by asex in the multi-locus diploid model with binary fitness values. In each panel, results are shown for a population size of 2000, 8 loci and 2 alleles per locus. The simulation conditions are as described in Fig. 6, except that now fitness values are either 0 or 1.

References

Nei M. Modification of linkage intensity by natural selection. Genetics. 1967; 57:625–41.
Feldman MW. Selection for linkage modification: I. Random mating populations. Theor Popul Biol. 1972; 3:324–46.
Feldman MW, Christiansen FB, Brooks LD. Evolution of recombination in a constant environment. Proc Natl Acad Sci. 1980; 77:4838–41.
Feldman MW, Liberman U. An evolutionary reduction principle for genetic modifiers. Proc Natl Acad Sci. 1986; 83:4824–7.
Altenberg L, Feldman MW. Selection, generalized transmission and the evolution of modifier genes: I. The reduction principle. Genetics. 1987; 117:559–72.
Barton N. A general model for the evolution of recombination. Genet Res. 1995; 65:123–44.
Bergman A, Feldman MW. Recombination dynamics and the fitness landscape. Phys D Nonlinear Phenom. 1992; 56(1):57–67.
Charlesworth B. Directional selection and the evolution of sex and recombination. Genet Res. 1993; 61:205–24.
Korol A, Preygel I, Preygel S. Recombination Variability and Evolution. London: Chapman & Hall; 1994.
Otto SP, Lenormand T. Evolution of sex: resolving the paradox of sex and recombination. Nat Rev Genet. 2002; 3(4):252–61.
Hadany L, Beker T. On the evolutionary advantage of fitness-associated recombination. Genetics. 2003; 165(4):2167–79.
Otto SP, Nuismer SL. Species interactions and the evolution of sex. Science. 2004; 304(5673):1018–20.
Keightley PD, Otto SP. Interference among deleterious mutations favours sex and recombination in finite populations. Nature. 2006; 443(7107):89–92.
Hadany L, Otto SP. The evolution of condition-dependent sex in the face of high costs. Genetics. 2007; 176(3):1713–27.
Funding. AL was supported by the Israel Science Foundation (grant no. 1986/16). MWF was supported in part by the Center for Computational, Evolutionary and Human Genetics at Stanford. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Affiliations. Liudmyla Vasylenko and Adi Livnat: Department of Evolutionary and Environmental Biology and Institute of Evolution, University of Haifa, 199 Aba Khoushy Ave, Haifa, 3498838, Israel. Marcus W. Feldman: Department of Biology, Stanford University, 371 Jane Stanford Way, Stanford, 94305-5020, CA, USA.

Contributions. LV carried out the simulations, analyzed the data, participated in the design of the study, drafted the manuscript and critically revised the manuscript; MWF participated in the design of the study, drafted the manuscript and critically revised the manuscript; AL conceived of the study, designed the study, coordinated the study, drafted the manuscript and critically revised the manuscript. All authors gave final approval for publication and agree to be held accountable for the work performed therein. Correspondence to Adi Livnat.
\begin{document}

\title{A Geometric Proof of Calibration}

\begin{abstract}
We provide yet another proof of the existence of calibrated forecasters; it has two merits. First, it is valid for an arbitrary finite number of outcomes. Second, it is short and simple and it follows from a direct application of Blackwell's approachability theorem to a carefully chosen vector-valued payoff function and convex target set. Our proof captures the essence of existing proofs based on approachability (e.g., the proof by Foster~\cite{Fo99} in the case of binary outcomes) and highlights the intrinsic connection between approachability and calibration. \\
Received: July 17, 2010; revised: September 7, 2010; final version: September 16, 2010
\end{abstract}

\section{Motivation.}

Foster~\cite{Fo99} stated that:
\begin{quote}
{\small Over the past few years many proofs of the existence of calibration have been discovered. Each of the following provides a different algorithm and proof of convergence: Foster and Vohra~\cite{FoVo91,FoVo98}, Hart~\cite{Ha95}, Fudenberg and Levine~\cite{FuLe99}, Hart and Mas-Colell~\cite{HaMa97}. Does the literature really need one more? Probably not. }
\end{quote}
In spite of this, he argued, successfully, that his new proof of the existence of calibrated forecasters in the case of binary outcomes, based on Blackwell's approachability theorem (Blackwell~\cite{Bla56}), was shorter and more direct than most of the previous proofs.

In this paper, we consider the general case of finitely many outcomes and exhibit an even shorter (ten-line-long) proof of the existence of calibrated forecasters based on approachability. We show therefore that calibration is a straightforward consequence of approachability. As we realized by browsing on the web, approachability and calibration are well-taught matters and we are confident that this new proof will become a standard example in the list of direct applications of approachability (as is already the case for the existence of no-regret forecasters). Since calibration is a central tool in learning in games (see, e.g., Kakade and Foster~\cite{KakadeFoster08}) and in online learning (see, e.g., Mannor, Tsitsiklis, and Yu~\cite{MannorTY09}), the simplicity of the proof and the guaranteed convergence rates open up new opportunities to use calibration in practical learning algorithms.

Foster~\cite{Fo99} mentions that his approachability-based proof of the existence of a calibrated forecaster was obtained by first considering a modification of an intuitive forecaster already stated in Foster and Vohra~\cite{FoVo91} and then working out the proof of its guarantees. We proceed the other way round and start directly from the statement of Blackwell's approachability theorem for convex sets \cite[Theorem~3]{Bla56} but, as a drawback, can only exhibit a forecaster which has to solve a linear program at each step. Taking a closer look at Foster~\cite{Fo99}, one can see that we indeed capture the essence of his previous proof. His algorithm is a clever modification, in the case of binary outcomes, of the general approachability-based forecaster presented below; the former has a nice, explicit, and simple statement.

We now recall the informal definition and consequences of calibration. Consider a finite set of possible outcomes and suppose we obtain random forecasts about future events; these forecasts are each given by probability distributions over the outcomes.
Now, such a sequence of forecasts is called calibrated whenever it is consistent in hindsight, that is, when, for all distributions $\mathbf{p}$, the actual empirical distribution of the outcomes on those rounds when the forecast was close to $\mathbf{p}$ is also close to $\mathbf{p}$. Having a calibrated forecasting scheme is beneficial in several ways. On the one hand, it allows some agent to choose the best responses to the predicted forecasts or to consider other risk measures which might be more valuable than greedily choosing the best action leading to highest reward. On the other hand, calibrated forecasting rules enable multiple agents to converge to a reasonable joint play in some situations. For instance, if all players use calibrated forecasts of other players' actions, then the empirical distribution of action profiles converges to the set of correlated equilibria; see Foster and Vohra~\cite{FoVo97}. We refer to Sandroni, Smorodinsky, and Vohra~\cite{SaSmVo03} for further discussion on calibrated forecasting as well as its generalizations.

\section{Setup and formal definition of calibration.}
\label{sec:caldef}

We consider a finite set $\mathcal{A}$ of outcomes, with cardinality denoted by $A$, and denote by $\mathcal{P} = \Delta(\mathcal{A})$ the set of probability distributions over $\mathcal{A}$. We equip $\mathcal{P}$, which can be considered a subset of $\mathbb{R}^{A}$, with some\footnote{The precise nature of this norm, e.g., $\ell^1$, Euclidean $\ell^2$, or $\ell^{\infty}$ supremum norm, is irrelevant at this stage, since all norms are equivalent on finite-dimensional spaces.} norm $\norm$, to be referred to as the calibration norm. In particular, the Dirac probability distribution on some outcome $a \in \mathcal{A}$ will be referred to as $\delta_a$. A forecaster plays a game against Nature. At each step, it outputs a probability distribution $P_t \in \mathcal{P}$ while Nature chooses simultaneously an outcome $a_t \in \mathcal{A}$. We make no assumption on Nature's strategy. The goal of the forecaster is to ensure the following property, known as calibration: for all strategies of Nature,
\begin{equation} \label{def:cal} \forall \, \varepsilon > 0, \ \ \forall \, \mathbf{p} \in \mathcal{P}, \qquad \quad \lim_{T \to +\infty} \norm[\frac{1}{T} \sum_{t=1}^T \mathbb{I}_{ \bigl\{ \norm[P_t - \mathbf{p}] \leqslant \varepsilon \bigr\} } \bigl( P_t - \delta_{a_t} \bigr)] = 0 \qquad \quad \mbox{a.s.} \end{equation}
The {a.s.} statement accounts for randomized forecasters. (It was shown by Oakes~\cite{Oakes85} and Dawid~\cite{Dawid85} that randomization is essential for calibration.) \\
The literature (e.g., Foster and Vohra~\cite{FoVo98}, Foster~\cite{Fo99}) essentially considers a less ambitious goal, at least in a first step: $\varepsilon$--calibration. (We explain in Section~\ref{sec:cstr} how to get a calibrated forecaster from some sequence of $\varepsilon$--calibrated forecasters with good properties.) Formally, given $\varepsilon > 0$, an $\varepsilon$--calibrated forecaster considers some finite covering of $\mathcal{P}$ by $N_{\varepsilon}$ balls of radius ${\varepsilon}$ and abides by the following constraints. Denoting by $\mathbf{p}_1,\ldots,\mathbf{p}_{N_{\varepsilon}}$ the centers of the balls in the covering (they form what will be referred to later on as an $\varepsilon$--grid), the forecaster chooses only forecasts $P_t \in \bigl\{ \mathbf{p}_1,\ldots,\mathbf{p}_{N_{\varepsilon}} \bigr\}$.
We thus denote by $K_t$ the index in $\bigl\{ 1,\ldots,N_{\varepsilon} \bigr\}$ such that $P_t = \mathbf{p}_{K_t}$. The final condition to be satisfied is then that for all strategies of Nature,
\begin{equation} \label{def:epscal} \limsup_{T \to +\infty} \ \ \sum_{k=1}^{N_\varepsilon} \norm[\frac{1}{T} \sum_{t=1}^T \mathbb{I}_{ \{ K_t = k \} } \bigl( \mathbf{p}_k - \delta_{a_t} \bigr)] \,\, \leqslant \varepsilon \qquad \quad \mbox{a.s.} \end{equation}
When the calibration norm is the $\ell^1$--norm $\norm_1$, the sum appearing in this criterion is usually referred to as the $\ell^1$--calibration score (Foster~\cite{Fo99}). Another popular criterion is the Brier score (Foster and Vohra~\cite{FoVo98}), which we consider in Section~\ref{sec:Brier}; it is bounded, up to a factor of 2, by the $\ell^1$--calibration score.

\section{A geometric construction of $\varepsilon$--calibrated forecasters.}

In this section we prove our main result regarding the existence of an $\varepsilon$--calibrated forecaster based on approachability theory. We recall results from approachability theory, provide the main result (Theorem~\ref{th:main}), and then address the issue of computational complexity.

\subsection{Statement of Blackwell's approachability theorem.}

Consider a vector-valued game between two players, with respective finite action sets $\mathcal{I}$ and $\mathcal{J}$. We denote by $d$ the dimension of the reward vector. The payoff function of the first player is given by a mapping $m : \mathcal{I} \times \mathcal{J} \to \mathbb{R}^d$, which is linearly extended to $\Delta(\mathcal{I}) \times \Delta(\mathcal{J})$, the set of product-distributions over $\mathcal{I} \times \mathcal{J}$. We denote by $I_1,I_2,\ldots$ and $J_1,J_2,\ldots$ the sequences of actions in $\mathcal{I}$ and $\mathcal{J}$ taken by each player (they are possibly given by randomized strategies). Let $C \subset \mathbb{R}^d$ be some set. By definition, $C$ is approachable if there exists a strategy for the first player such that for all strategies of the second player,
\[ \lim_{T \to \infty} \ \ \ \inf_{c \in C} \ \norm[c - \frac{1}{T} \sum_{t=1}^T m \bigl( I_t,J_t \bigr)] \ = 0 \qquad \quad \mbox{a.s.} \]
That is, the first player has a strategy that ensures that the average of his vector-valued payoffs converges to the set $C$. For closed convex sets $C$, there is a simple characterization of approachability that is a direct consequence of the minimax theorem.
\begin{theorem}[Blackwell {\cite[Theorem~3]{Bla56}}] \label{th:appr} A closed convex set $C \subset \mathbb{R}^d$ is approachable if and only if
\[ \forall \, \mathbf{q} \in \Delta(\mathcal{J}), \ \ \exists \, \mathbf{p} \in \Delta(\mathcal{I}), \qquad \quad m(\mathbf{p},\mathbf{q}) \in C~. \]
\end{theorem}

\subsection{Application to the existence of an $\varepsilon$--calibrated forecaster.}

As indicated above, we equip $\mathcal{P}$ with some calibration norm $\norm$ and fix $\varepsilon > 0$; we then consider an associated $\varepsilon$--grid $\bigl\{ \mathbf{p}_1,\ldots,\mathbf{p}_{N_\varepsilon} \bigr\}$ in $\mathcal{P} = \Delta(\mathcal{A})$.
\begin{theorem} \label{th:main} There exists an $\varepsilon$--calibrated forecaster which selects at every stage a distribution from this grid. \end{theorem}
\begin{proof} We apply the results on approachability recalled above. To that end, we consider in our setting the action sets $\mathcal{I} = \{ 1, \ldots, N_\varepsilon \}$ for the first player and $\mathcal{J} = \mathcal{A}$ for the second player.
We define the vector-valued payoff function as follows; it takes values in $\mathbb{R}^{A N_\varepsilon}$. For all $k \in \{ 1,\ldots,N_\varepsilon \}$ and $a \in \mathcal{A}$,
\[ m(k,a) = \bigl( \underline{0}, \, \ldots, \, \underline{0}, \,\, \mathbf{p}_k - \delta_a, \,\, \underline{0}, \, \ldots, \, \underline{0} \bigr)~, \]
which is a vector of $N_\varepsilon$ elements of $\mathbb{R}^A$ composed of $N_\varepsilon-1$ occurrences of the zero element $\underline{0} \in \mathbb{R}^A$ and one non-zero element, located in the $k$--th position and given by the difference of probability distributions $\mathbf{p}_k - \delta_a$. We now define the target set $C$ as the following subset of the $\varepsilon$--ball around $\bigl( \underline{0}, \, \ldots, \, \underline{0} \bigr)$ for the calibration norm $\norm$. We write $(A N_\varepsilon)$--dimensional vectors of $\mathbb{R}^{A N_\varepsilon}$ as $N_\varepsilon$--dimensional vectors with components in $\mathbb{R}^A$, i.e., for all $\uuline{x} \in \mathbb{R}^{A N_\varepsilon}$,
\[ \uuline{x} = \bigl( \underline{x}_1, \, \ldots, \underline{x}_{N_\varepsilon} \bigr)~, \]
where $\underline{x}_k \in \mathbb{R}^A$ for all $k \in \{ 1,\ldots,N_\varepsilon \}$. Then,
\[ C = \left\{ \uuline{x} \in \mathbb{R}^{A N_\varepsilon} : \ \sum_{k=1}^{N_\varepsilon} \norm[\underline{x}_k] \,\, \leqslant \varepsilon \right\}~. \]
Note that $C$ is a closed convex set. The condition (\ref{def:epscal}) of $\varepsilon$--calibration can be rewritten as follows: the sequence of the vector-valued rewards
\[ \overline{m}_T \stackrel{\mbox{\scriptsize def}}{=} \frac{1}{T} \sum_{t=1}^T m \bigl( K_t,a_t \bigr) = \left( \frac{1}{T} \sum_{t=1}^T \mathbb{I}_{ \{ K_t = 1 \} } \bigl( \mathbf{p}_1 - \delta_{a_t} \bigr), \,\, \ldots, \,\, \frac{1}{T} \sum_{t=1}^T \mathbb{I}_{ \{ K_t = N_\varepsilon \} } \bigl( \mathbf{p}_{N_\varepsilon} - \delta_{a_t} \bigr) \right) \]
converges to the set $C$ almost surely. The existence of an $\varepsilon$--calibrated forecaster is thus equivalent to the approachability of $C$, which we now prove by showing that the characterization provided by Theorem~\ref{th:appr} is satisfied. Let $\mathbf{q} \in \Delta(\mathcal{J}) = \mathcal{P}$. By construction, there exists $k \in \{ 1,\ldots,N_\varepsilon \}$ such that $\norm[\mathbf{p}_k - \mathbf{q}] \leqslant \varepsilon$ and thus
\[ m(k,\mathbf{q}) \in C~. \]
(Here, the distribution $\mathbf{p}$ of the approachability theorem can be taken as the Dirac distribution $\delta_k$.) \end{proof}

\subsection{Computation of the exhibited $\varepsilon$--calibrated forecaster.}
\label{sec:cplx}

The proof of the approachability theorem gives rise to an implicit strategy, as indicated in Blackwell~\cite{Bla56}. We denote here by $\Pi_C$ the projection in $\ell^2$--norm onto $C$. At each round $t \geqslant 2$ and with the notation above, the forecaster should pick his action $K_t$ at random according to a distribution $\psi_t = \bigl( \psi_{t,1}, \ldots, \psi_{t,N_\varepsilon} \bigr)$ on $\bigl\{ 1,\ldots,N_\varepsilon \bigr\}$ such that
\begin{equation} \label{eq:Blk} \forall \, a \in \mathcal{A}, \quad \qquad \Bigl( \overline{m}_{t-1} - \Pi_C \bigl( \overline{m}_{t-1} \bigr) \Bigr) \,\cdot\, \Bigl( m \bigl( \psi_t, \, a \bigr) - \Pi_C \bigl( \overline{m}_{t-1} \bigr) \Bigr) \leqslant 0~, \end{equation}
where $\,\cdot\,$ denotes the inner product in $\mathbb{R}^{A N_\varepsilon}$.
The proof of Theorem~\ref{th:appr} (see Blackwell~\cite{Bla56}) shows that such a distribution $\psi_t$ indeed exists; the question is how to efficiently compute it. To do so, we first need to compute the projection $\Pi_C \bigl( \overline{m}_{t-1} \bigr)$ of $\overline{m}_{t-1}$. We address the two computational issues separately. We first indicate how to find the projection efficiently and then explain how to find the distribution $\psi_t$ based on the knowledge of this projection.

\subsubsection{Projecting onto $C$.}

We need to find the closest point in $C$ to $\overline{m}_{t-1}$. Since $C$ is convex and the $\ell^2$--norm is convex, we have to deal with a minimization problem of a convex function over a convex set. Since the question of whether a given point is in $C$ can be answered in time linear in $A N_\varepsilon$, the projection problem can be solved (approximately) in time polynomial in $A N_\varepsilon$. Now, for the special case where the calibration norm is the $\ell^1$--norm $\norm_1$, we can do much better. For $i \in \bigl\{ 1, \ldots, A N_{\varepsilon} \bigr\}$, we denote by $s_{i,t-1} \in \{ -1,1\}$ the sign of the $i$--th component $\overline{m}_{i,t-1}$ of the vector $\overline{m}_{t-1}$. (The value of the sign function is arbitrary at $x=0$, equal to $-1$ when $x <0$ and to $1$ when $x >0$.) Then, $\Pi_C \bigl( \overline{m}_{t-1} \bigr)$ is the solution of the following optimization problem, where the unknown is $\uuline{y} = \bigl( y_1,\ldots,y_{A N_\varepsilon} \bigr)$:
\begin{align} \nonumber \min \quad & \bigl\| \uuline{y} - \overline{m}_{t-1} \bigr\|_2^2 \\ \nonumber \mbox{such that} \ \ & \left\{ \begin{array}{cc} \displaystyle{\sum_{i=1}^{A N_\varepsilon} y_i \, s_{i,t-1} \leqslant \varepsilon} & \\ \quad y_i \, s_{i,t-1} \geqslant 0~, & \forall \, i \in \bigl\{ 1, \ldots, A N_{\varepsilon} \bigr\}~. \end{array} \right. \end{align}
It can be easily shown (as in Gafni and Bertsekas~\cite{GafniBertsekas84} or by an immediate adaptation of Palomar~\cite[Lemma~1]{Palomar05}) that the optimal solution is unique; it is given by $\uuline{y}(\mu^*)$ where, for all $\mu \geqslant 0$ and all $i \in \bigl\{ 1, \ldots, A N_\varepsilon \bigr\}$,
\[ y_i(\mu) = s_{i,t-1} \, \bigl( s_{i,t-1} \, \overline{m}_{i,t-1} - \mu \bigr)^+ \]
and $\mu^*$ is chosen as the minimum nonnegative value such that $\sum_i y_i(\mu) \, s_{i,t-1} \leqslant \varepsilon$. (Note that if $\mu^* > 0$ then $\sum_i y_i(\mu^*) \, s_{i,t-1} = \varepsilon$.) Finding $\mu^*$ can be done by a binary search to an arbitrary precision. In conclusion, when the calibration norm is the $\ell^1$--norm $\norm_1$, projecting onto $C$ can be done in linear time in $A N_{\varepsilon}$ to a desired precision $\delta$ with complexity that depends on $\delta$ like $\log(1/\delta)$.

\subsubsection{Finding the optimal distribution $\psi_t$ in (\ref{eq:Blk}).}

The question that has to be resolved is therefore how to find $\psi_t$ that satisfies condition~(\ref{eq:Blk}). Since we know that such a $\psi_t$ exists, it suffices, for instance, to compute an element of
\[ \mathop{\mathrm{argmin}}_{\psi} \,\, \max_{a \in \mathcal{A}} \,\, \Bigl( \overline{m}_{t-1} - \Pi_C \bigl( \overline{m}_{t-1} \bigr) \Bigr) \,\cdot\, m ( \psi, \, a ) = \mathop{\mathrm{argmin}}_{\psi} \,\, \max_{a \in \mathcal{A}} \,\, \sum_{k=1}^{N_\varepsilon} \, \psi_k \, \gamma_{k,a,t-1} \]
where we denoted $\gamma_{k,a,t-1} = \Bigl( \overline{m}_{t-1} - \Pi_C \bigl( \overline{m}_{t-1} \bigr) \Bigr) \,\cdot\, m ( k, \, a )$.
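For concreteness, here is an illustrative sketch (ours, and only one possible implementation) of a single round of the forecaster when the calibration norm is the $\ell^1$--norm. The helper names are arbitrary; SciPy's \texttt{linprog} is used as a generic linear-programming solver; the projection step implements the soft-thresholding rule displayed above with a binary search for $\mu^*$, and the second step solves the minimax problem above as a linear program, as discussed next.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def project_onto_C(m_bar, eps, tol=1e-10):
    # l2-projection of m_bar onto C = {x : ||x||_1 <= eps}: soft-threshold
    # the entries at level mu*, found by binary search as described above.
    x = m_bar.ravel()
    if np.abs(x).sum() <= eps:
        return m_bar.copy()
    lo, hi = 0.0, np.abs(x).max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(np.abs(x) - mu, 0.0).sum() > eps:
            lo = mu
        else:
            hi = mu
    y = np.sign(x) * np.maximum(np.abs(x) - hi, 0.0)
    return y.reshape(m_bar.shape)

def solve_for_psi(gamma):
    # min over psi in the simplex of max_a sum_k psi_k gamma[k, a],
    # written as an LP in the variables (psi_1, ..., psi_N, v).
    N, A = gamma.shape
    c = np.zeros(N + 1); c[-1] = 1.0                       # minimize v
    A_ub = np.hstack([gamma.T, -np.ones((A, 1))])          # gamma^T psi <= v
    A_eq = np.hstack([np.ones((1, N)), np.zeros((1, 1))])  # sum_k psi_k = 1
    bounds = [(0.0, None)] * N + [(None, None)]            # psi >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(A),
                  A_eq=A_eq, b_eq=np.array([1.0]), bounds=bounds)
    psi = np.clip(res.x[:N], 0.0, None)
    return psi / psi.sum()

def forecaster_step(m_bar, grid, eps, rng):
    # m_bar: average payoff vector, shape (N, A); grid[k] is the point p_k.
    d = m_bar - project_onto_C(m_bar, eps)
    # gamma[k, a] = d_k . (p_k - delta_a) = (d_k . p_k) - d[k, a]
    gamma = (d * grid).sum(axis=1)[:, None] - d
    return rng.choice(len(grid), p=solve_for_psi(gamma))   # draw K_t
\end{verbatim}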
Solving this minimax problem exactly by linear programming leads to a polynomial complexity in $N_\varepsilon$ and $A$. However, if instead of solving the minimax problem exactly we are satisfied with solving it approximately, i.e., allowing a small violation $\delta > 0$ in each of the $A$ constraints given by (\ref{eq:Blk}), we can use the multiplicative weights algorithm as explained in Freund and Schapire~\cite{FrSc99}; see also Cesa-Bianchi and Lugosi~\cite[Section~7.2]{CeLu06}. The complexity of such a solution would be
\[ O \! \left(\frac{A N_\varepsilon}{\delta^2} \ln N_\varepsilon \right)~, \]
since $(\ln N_{\varepsilon})/\delta^2$ steps of complexity $A N_{\varepsilon}$ each have to be performed. The proof of Blackwell's approachability theorem shows that in this case the sequence of the average payoff vectors $\overline{m}_{t}$ converges rather to the $\sqrt{\delta}$--expansion (in $\ell^2$--norm) of $C$; it is easy\footnote{It suffices to note that for all vectors $\Delta$ of a finite-dimensional space, one has $\| \Delta \|_\infty \leqslant \| \Delta \|_2$, so that the inequality $\| \Delta \|_2 \leqslant \sqrt{\| \Delta \|_\infty \, \| \Delta \|_1}$ yields $\sqrt{\| \Delta \|_2} \leqslant \| \Delta \|_1$. } to see that the latter is included in the $\delta$--expansion (in $\ell^1$--norm) of $C$. Putting all things together and taking the $\ell^1$--norm $\norm_1$ as the calibration norm (in particular, to define $C$), we can find a $2\varepsilon$--calibrated forecaster whose complexity is of the order of $A N_\varepsilon \, \varepsilon^{-2} \log N_\varepsilon$ at each step. Since $N_\varepsilon$ behaves like $\varepsilon^{-(A-1)}$, the complexity per stage behaves like $\varepsilon^{-(A+1)}$ (ignoring multiplicative and logarithmic factors). This implies a polynomial dependence in $\varepsilon$ but an exponential dependence in $A$.
\begin{remark} It is worth noting that when choosing a solution $\psi_t$, it is not possible to replace the random draw of $K_t$ according to $\psi_t$ by the mean forecast $\sum_k \psi_{t,k} \, \mathbf{p}_k$ or by an element of $\mathbf{p}_1,\mathbf{p}_2,\ldots, \mathbf{p}_{N_\varepsilon}$ that is close to this mean. The reason is that this would give rise to a deterministic rule, which, as we mentioned in Section~\ref{sec:caldef}, cannot be calibrated. The fact that we have to randomize rather than take the mean is due to our construction of the vector-valued game; therein, playing a mixed action $\psi_t$ over the $\mathbf{p}_i$'s leads to a very different vector-valued reward than playing the (element $\mathbf{p}_k$ closest to the) mean of the mixed action. This is because different indices of the $(A N_\varepsilon)$--dimensional space are involved. \end{remark}

\section{Rates of convergence and construction of a calibrated forecaster.}

In this section we provide rates of convergence and discuss the construction of a calibrated (rather than $\varepsilon$--calibrated) forecaster. We finally compare our results to some existing calibrated forecasters in the literature. The main result of this section is the rates of convergence for a calibrated forecaster given in~(\ref{eq:ratescalibr}). To the best of our knowledge, these are the first rate results for calibration for an alphabet of size $A$ larger than 2. For $A = 2$, (sub)optimal rates follow from the procedure of Foster and Vohra~\cite{FoVo98} as recalled in Section~\ref{sec:FoVo}.
\subsection{Rates of convergence.}
\label{sec:rates}

Approachability theory provides uniform convergence rates of the sequence of empirical payoff vectors to the target set; see Cesa-Bianchi and Lugosi~\cite[Exercise 7.23]{CeLu06}. Formally, denoting by $\norm_2$ the Euclidean $\ell^2$--norm in $\mathbb{R}^{A N_\varepsilon}$, it follows in our context that there exists some absolute constant $\gamma$ (independent of $A$ and $N_\varepsilon$) such that for all strategies of Nature and for all $T$, with probability $1-\delta$,
\[ \norm[ \overline{m}_T - \Pi_C \bigl( \overline{m}_T \bigr) ]_2 \leqslant \gamma \sqrt{\frac{\ln (1/\delta)}{T}}~. \]
Here, it is crucial to state the convergence rates based on the Euclidean norm because of an underlying martingale convergence argument in Hilbert spaces proved by Chen and White~\cite{ChWh96}. The reason why the convergence rate here is independent of $A$ and $N_\varepsilon$ is that the payoff vectors $m(k,a)$ all have a Euclidean norm bounded by an absolute constant, e.g., 2; this happens because most of their components are 0. We now apply this result. However, we underline that the set $C$ can be defined by a different calibration norm $\norm$; below, we will define it based on the $\ell^1$--norm, for instance. But the stated uniform convergence rate can be used since, via a triangle inequality and an application of the Cauchy-Schwarz inequality,
\[ \norm[ \overline{m}_T ]_1 \leqslant \norm[ \Pi_C \bigl( \overline{m}_T \bigr) ]_1 + \norm[ \overline{m}_T - \Pi_C \bigl( \overline{m}_T \bigr) ]_1 \leqslant \varepsilon + \sqrt{A N_\varepsilon} \, \norm[ \overline{m}_T - \Pi_C \bigl( \overline{m}_T \bigr) ]_2~. \]
The cardinality $N_\varepsilon$ is of the order of $\varepsilon^{-(A-1)}$; we let $\gamma'$ be an absolute constant such that $N_\varepsilon \leqslant \gamma' \, \varepsilon^{-(A-1)}$ for all $\varepsilon \leqslant 1$ (say). We have therefore proved that, given $0 < \varepsilon \leqslant 1$, the forecaster defined in the previous section is such that for all strategies of Nature and for all $T$, with probability $1-\delta$,
\[ \norm[ \overline{m}_T ]_1 = \sum_{k=1}^{N_\varepsilon} \norm[\frac{1}{T} \sum_{t=1}^T \mathbb{I}_{ \{ K_t = k \} } \bigl( \mathbf{p}_k - \delta_{a_t} \bigr)]_1 \leqslant \varepsilon + \gamma \gamma' \sqrt{A} \, \sqrt{\frac{\ln (1/\delta)}{\varepsilon^{(A-1)} \, T}} \stackrel{\mbox{\scriptsize def}}{=} U_{\varepsilon,T,\delta}~. \]
This high-probability bound is to be used below as the key ingredient to construct a calibrated forecaster, i.e., a forecaster satisfying~(\ref{def:cal}). Combining the Borel-Cantelli Lemma with the bound above shows that the less ambitious goal~(\ref{def:epscal}) can be achieved.

\subsection{Construction of a calibrated forecaster.}
\label{sec:cstr}

We use a standard approach which is commonly known as the ``doubling trick''; see, e.g., Cesa-Bianchi and Lugosi~\cite{CeLu06}. It consists of defining a meta-forecaster that proceeds in regimes; regime $r$ (where $r \geqslant 1$) lasts $T_r$ rounds and resorts for the forecasts to an $\varepsilon_r$--calibrated forecaster, for some $\varepsilon_r > 0$ to be defined by the analysis.
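Schematically, and anticipating the choices $T_r = 2^r$ and $\varepsilon_r = 2^{-r/(A+1)}$ that the analysis below justifies, the meta-forecaster can be organized as in the following sketch (ours, for illustration only; \texttt{EpsCalibratedForecaster} is a placeholder for the forecaster of Theorem~\ref{th:main} run on a fresh $\varepsilon_r$--grid, and \texttt{nature} is an arbitrary strategy of Nature).
\begin{verbatim}
def run_meta_forecaster(A, nature, total_rounds):
    # Regime r lasts T_r rounds and runs a fresh eps_r-calibrated
    # forecaster on an eps_r-grid of the simplex.
    t, r = 0, 1
    while t < total_rounds:
        T_r = 2 ** r                        # length of regime r
        eps_r = 2.0 ** (-r / (A + 1))       # accuracy within regime r
        forecaster = EpsCalibratedForecaster(A, eps_r)   # placeholder
        for _ in range(min(T_r, total_rounds - t)):
            P_t = forecaster.predict()      # a point of the eps_r-grid
            a_t = nature(t)                 # outcome, chosen simultaneously
            forecaster.update(a_t)
            t += 1
        r += 1
\end{verbatim}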
We now show that for appropriate values of the $T_r$ and $\varepsilon_r$, the resulting meta-forecaster is calibrated in the sense of (\ref{def:cal}), and even uniformly calibrated in the following sense, where $\mathcal{B}$ denotes the Borel sigma-algebra of $\mathcal{P}$:
\begin{equation} \label{def:unifcal} \lim_{T \to +\infty} \ \ \sup_{B \in \mathcal{B}} \ \norm[\frac{1}{T} \sum_{t=1}^T \mathbb{I}_{ \{ P_t \in B \} } \bigl( P_t - \delta_{a_t} \bigr)] \,\, = 0 \qquad \quad \mbox{a.s.} \end{equation}
Of course, uniform calibration (\ref{def:unifcal}) implies calibration (\ref{def:cal}) via the choices for $B$ given by $\varepsilon$--balls around probability distributions $\mathbf{p}$. For concreteness, we focus below on the $\ell^1$--calibration score. Regimes are indexed by $r = 1,2,\ldots$ and the index of the regime corresponding to round $T$ is referred to as $R_T$. The set of the rounds within regime $r \leqslant R_T -1$ is called $\mathcal{T}_r$; rounds in regime $R_T$ with index less than $T$ are gathered in the set $\mathcal{T}_{R_T}$ (we commit here an abuse of notation). We denote by $\mathbf{p}_{k,r}$, where $k \in \{ 1,\ldots,N_{\varepsilon_r} \}$, the finite $\varepsilon_r$--grid considered in the $r$--th regime. By the triangle inequality satisfied by $\norm$, we first decompose the quantity of interest according to the regimes and to the played points of the grids,
\[ \norm[\sum_{t=1}^T \mathbb{I}_{ \{ P_t \in B \} } \bigl( P_t - \delta_{a_t} \bigr)]_1 \leqslant \sum_{r=1}^{R_T} \sum_{k = 1}^{N_{\varepsilon_r}} \mathbb{I}_{ \{ \mathbf{p}_{k,r} \in B \} } \norm[ \sum_{t \in \mathcal{T}_r} \mathbb{I}_{ \{ K_t = k \} } \bigl( \mathbf{p}_{k,r} - \delta_{a_t} \bigr)]_1~. \]
We now substitute the uniform bound obtained in the previous section and get that with probability $1 - (\delta_{1,T} + \ldots + \delta_{R_T,T}) \geqslant 1 - 1/T^2$,
\[ \sup_{B \in \mathcal{B}} \norm[\frac{1}{T}\sum_{t=1}^T \mathbb{I}_{ \{ P_t \in B \} } \bigl( P_t - \delta_{a_t} \bigr)]_1 \leqslant \frac{1}{T} \sum_{r=1}^{R_T} T_r \, U_{\varepsilon_r,T_r,\delta_{r,T}}~, \]
where we defined $\delta_{r,T} = 1/(2^r T^2)$. An application of the Borel-Cantelli Lemma and Cesaro's Lemma shows that for suitable choices of a sequence $\varepsilon_r$ decreasing towards 0 and an increasing sequence $T_r$ such that $\varepsilon_r^{A-1} \, T_r$ tends to infinity fast enough, one then gets the desired convergence (\ref{def:unifcal}). For instance, if $T_r = 2^r$ and $\varepsilon_r$ is chosen such that
\[ \varepsilon_r \qquad \mbox{and} \qquad \sqrt{\frac{1}{\varepsilon_r^{\, (A-1)} \, T_r}} \]
are of the same order of magnitude, e.g., $\varepsilon_r = 2^{-r/(A+1)}$, then
\begin{equation} \label{eq:ratescalibr} \limsup_{T \to \infty} \ \ \frac{T^{1/(A+1)}}{ \sqrt{\ln T}} \,\, \sup_{B \in \mathcal{B}} \norm[\frac{1}{T}\sum_{t=1}^T \mathbb{I}_{ \{ P_t \in B \} } \bigl( P_t - \delta_{a_t} \bigr)]_1 \ \leqslant \Gamma_A \qquad \quad \mbox{a.s.}~, \end{equation}
where the constant $\Gamma_A$ depends only on $A$. As indicated above, to the best of our knowledge, these are the first rate results for calibration for an alphabet of size $A$ larger than 2.

\subsection{Comparison to previous forecasters.}
\label{sec:Brier}

\subsubsection{$\ell^1$--calibration score.}

Foster~\cite{Fo99} first considered the $\ell^1$--calibration score in the context of the prediction of binary outcomes only, i.e., when $A = 2$.
The $\varepsilon$--calibrated forecaster he explicitly exhibited has a computational complexity of the order of $1/\varepsilon$. He did not work out the convergence rates but since his procedure is mostly a clever twist on our general procedure, they should be similar to the ones we proved in Section~\ref{sec:rates}.

\subsubsection{Brier score.}
\label{sec:FoVo}

What follows is extracted from Foster and Vohra~\cite{FoVo98}; see also Cesa-Bianchi and Lugosi~\cite[Section 4.5]{CeLu06}. Given an $\varepsilon$--grid over the simplex $\mathcal{P}$, we define, for all $k \in \{ 1,\ldots,N_\varepsilon \}$, the empirical distribution of the outcomes chosen by Nature at those rounds $t$ when the forecaster used $\mathbf{p}_k$,
\begin{numcases}{\rho_T(k) = } \nonumber \mathbf{p}_k & if $\sum_{t=1}^T \mathbb{I}_{ \{ K_t = k \} } = 0$, \\ \nonumber \frac{1}{\sum_{s=1}^T \mathbb{I}_{ \{ K_s = k \} }} \, \sum_{t=1}^T \mathbb{I}_{ \{ K_t = k \} } \, \delta_{a_t} & if $\sum_{t=1}^T \mathbb{I}_{ \{ K_t = k \} } > 0$. \end{numcases}
The classical Brier score can be shown in our setup to be equal to the following criterion:
\[ \sum_{k=1}^{N_\varepsilon} \norm[\rho_T(k) - \mathbf{p}_k]_2^2 \, \left( \frac{1}{T} \sum_{t=1}^T \mathbb{I}_{ \{ K_t = k \} } \right)~. \]
Since for two probability distributions $\mathbf{p}$ and $\mathbf{q}$ of $\mathcal{P}$, one always has
\[ \norm[\mathbf{p} - \mathbf{q}]_2^2 \leqslant 2 \norm[\mathbf{p} - \mathbf{q}]_1~, \]
the Brier score can be seen to be upper bounded by twice the $\ell^1$--calibration score; it is thus a weaker criterion. Cesa-Bianchi and Lugosi~\cite[Section~4.5]{CeLu06} show, however, that forecasters with Brier scores asymptotically smaller than $\varepsilon$ can be the keystones to construct calibrated forecasters, in a way similar to the construction exhibited in Section~\ref{sec:cstr}. In the case $A = 2$, these forecasters essentially bound the Brier score, with probability at least $1-\delta$, by a term that is of the order of
\[ \varepsilon + \frac{1}{\varepsilon} \sqrt{\frac{\ln(1/\varepsilon) + \ln(1/\delta)}{T}}~, \]
which is worse than the rate we could exhibit in Section~\ref{sec:rates} for the $\ell^1$--calibration score. In addition, the computational complexity of the underlying procedure (based on the minimization of internal regret) is of the order of $1/\varepsilon^2$ per stage and thus is comparable to the complexity $1/\varepsilon^{A+1} = 1/\varepsilon^3$ we derived in Section~\ref{sec:cplx} for our new procedure. The general case of $A \geqslant 3$ is briefly mentioned in Cesa-Bianchi and Lugosi~\cite[Section~4.5]{CeLu06}, which indicates that the case of $A=2$ can be extended to $A \geqslant 3$, without further details. As far as we can say, the computational complexity of such an extension per step would be of the order of $1/\varepsilon^{2(A-1)}$ versus $1/\varepsilon^{(A+1)}$ for the approachability-based procedure we suggested above. The convergence rates, for a straightforward extension, seem to be quite slow. However, based on a draft of the present article, Perchet~\cite{Per10} recently proposed a more efficient extension of the procedure of Foster and Vohra~\cite{FoVo98} and obtained the same rates of convergence as in~(\ref{eq:ratescalibr}); however, he did not work out the complexity of his procedure, which seems to be similar to the one of our construction.
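To make the two criteria concrete, here is a small illustrative computation (ours, not taken from the papers cited above) of both the $\ell^1$--calibration score of~(\ref{def:epscal}) and the Brier score from a finite history of plays; on any input, the returned Brier score is at most twice the returned $\ell^1$--calibration score, in accordance with the inequality just stated.
\begin{verbatim}
import numpy as np

def calibration_scores(K, outcomes, grid):
    # K[t]: index of the forecast p_{K_t}; outcomes[t]: index of a_t;
    # grid: array of shape (N, A) whose k-th row is p_k.
    T, (N, A) = len(K), grid.shape
    l1_score = brier = 0.0
    for k in range(N):
        rounds = [t for t in range(T) if K[t] == k]
        if not rounds:
            continue                  # rho_T(k) = p_k contributes zero
        rho_k = np.bincount([outcomes[t] for t in rounds],
                            minlength=A) / len(rounds)
        l1_score += np.abs(rho_k - grid[k]).sum() * len(rounds) / T
        brier += ((rho_k - grid[k]) ** 2).sum() * len(rounds) / T
    return l1_score, brier
\end{verbatim}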
\section{Acknowledgments.}

Shie Mannor was partially supported by the ISF under contract 890015 and a Horev Fellowship. Gilles Stoltz was partially supported by the French ``Agence Nationale pour la Recherche'' under grant JCJC06-137444 ``From applications to theory in learning and adaptive statistics'' and by the PASCAL Network of Excellence under EC grant {no.} 506778.

\end{document}
Mapping child growth failure across low- and middle-income countries
Local Burden of Disease Child Growth Failure Collaborators
Nature volume 577, pages 231–234 (2020)

Childhood malnutrition is associated with high morbidity and mortality globally1. Undernourished children are more likely to experience cognitive, physical, and metabolic developmental impairments that can lead to later cardiovascular disease, reduced intellectual ability and school attainment, and reduced economic productivity in adulthood2. Child growth failure (CGF), expressed as stunting, wasting, and underweight in children under five years of age (0–59 months), is a specific subset of undernutrition characterized by insufficient height or weight against age-specific growth reference standards3,4,5. The prevalence of stunting, wasting, or underweight in children under five is the proportion of children with a height-for-age, weight-for-height, or weight-for-age z-score, respectively, that is more than two standard deviations below the World Health Organization's median growth reference standards for a healthy population6. Subnational estimates of CGF report substantial heterogeneity within countries, but are available primarily at the first administrative level (for example, states or provinces)7; the uneven geographical distribution of CGF has motivated further calls for assessments that can match the local scale of many public health programmes8. Building from our previous work mapping CGF in Africa9, here we provide the first, to our knowledge, mapped high-spatial-resolution estimates of CGF indicators from 2000 to 2017 across 105 low- and middle-income countries (LMICs), where 99% of affected children live1, aggregated to policy-relevant first and second (for example, districts or counties) administrative-level units and national levels. Despite remarkable declines over the study period, many LMICs remain far from the ambitious World Health Organization Global Nutrition Targets to reduce stunting by 40% and wasting to less than 5% by 2025. Large disparities in prevalence and progress exist across and within countries; our maps identify high-prevalence areas even within nations otherwise succeeding in reducing overall CGF prevalence. By highlighting where the highest-need populations reside, these geospatial estimates can support policy-makers in planning interventions that are adapted locally and in efficiently directing resources towards reducing CGF and its health implications.

Despite improvements in nearly all LMICs, stunting remained the most widespread and prevalent indicator of CGF throughout the study period. Overall, estimated childhood stunting prevalence across LMICs decreased from 36.9% (95% uncertainty interval, 32.8–41.4%) in 2000 to 26.6% (21.5–32.4%) in 2017. Progress was particularly noticeable in Central America and the Caribbean, Andean South America, North Africa, and East Asia regions, and in some coastal central and western sub-Saharan African (SSA) countries, where most areas with estimated stunting prevalence of at least 50% in 2000 had reduced to 30% or less by 2017 (Fig. 1a, b).
By 2017, zones with the highest prevalence of stunting primarily persisted throughout much of the SSA, Central and South Asia, and Oceania regions, where large areas had estimated levels of at least 40%, such as in the first administrative-level units of Nigeria's Jigawa state (60.6% (51.5–69.7%)), Burundi's Karuzi province (60.0% (51.4–67.5%)), India's Uttar Pradesh state (49.0% (48.5–49.5%)), and Laos's Houaphan province (58.3% (50.7–66.8%)) (Extended Data Fig. 1). In 2017, Guatemala (47.0% (40.2–54.6%)), Niger (47.5% (42.2–53.9%)), Burundi (54.2% (46.3–61.2%)), Madagascar (49.8% (43.2–57.2%)), Timor-Leste (49.8% (43.4–56.2%)), and Yemen (45.4% (38.8–51.5%)) had the highest national-level stunting prevalence.

Fig. 1: Prevalence of stunting in children under five in LMICs (2000–2017) and progress towards 2025. a, b, Prevalence of stunting in children under five at the 5 × 5-km resolution in 2000 (a) and 2017 (b). c, Overlapping population-weighted tenth and ninetieth percentiles (lowest and highest) of 5 × 5-km grid cells and AROC in stunting, 2000–2017. d, Overlapping population-weighted quartiles of stunting prevalence and relative 95% uncertainty in 2017. e, f, Number of children under five who were stunted, at the 5 × 5-km (e) and first-administrative-unit (f) levels. g, 2000–2017 annualized decrease in stunting prevalence relative to rates needed during 2017–2025 to meet the WHO GNT. h, Grid-cell-level predicted stunting prevalence in 2025. Maps were produced using ArcGIS Desktop 10.6. Interactive visualization tools are available at https://vizhub.healthdata.org/lbd/cgf.

Even within the aforementioned regions where reductions were most evident, local-level estimates revealed communities in which levels still approached those seen in SSA and South Asia; areas in southern Mexico and central Ecuador had estimated stunting prevalence of at least 40%, and areas in western Mongolia reached at least 30%. Wide within-country disparities were apparent in several instances, indicating large areas left behind by the general pace of progress that require attention (Fig. 1a, b). Although most countries successfully reduced stunting prevalence, subnational inequalities (disparities between second administrative-level units (henceforth 'units')) remained widespread globally—especially evident in Vietnam, Honduras, Nigeria, and India (Extended Data Fig. 2). Among the top quintile of widest disparities, Indonesia experienced a twofold difference in stunting levels in 2017, ranging from 21.0% (16.2–27.0%) in Kota Yogyakarta regency (Yogyakarta province) to 51.5% (40.6–62.3%) in Sumba Barat regency (Nusa Tenggara Timur province). Stunting levels varied fourfold in Nigeria, ranging from 14.7% (9.1–21.0%) in Surulere Local Government Area (Lagos state) to 64.2% (54.2–74.6%) in Gagarawa Local Government Area (Jigawa state) in 2017. Evaluated from estimates of population-weighted prevalence for areas with the highest and lowest estimated prevalence of stunting (ninetieth and tenth percentiles, respectively), locations in central Chad, Pakistan, and Afghanistan, in northeastern Angola, and throughout the Democratic Republic of the Congo and Madagascar had among the lowest annualized rates of change (AROC), indicating stagnation or increase over the study period (Fig. 1c); in 2017, these countries also had large geographical areas among the most highly prevalent for stunting.
By contrast, areas scattered throughout Peru, northwestern Mexico, and eastern Nepal had among the highest stunting levels in 2000, but also the highest rates of decline; by 2017, many of these areas were subsequently no longer in the highest-prevalence decile. The absolute number of children under five who were stunted was also unequally distributed (Fig. 1e, f), with a large proportion concentrated in a few nations in 2017; overall, 85.1% (84.4–85.7%) of all stunted children under five lived in Africa or Asia. Of the 176.1 million (151.6–203.3 million) children who were stunted in 2017, just over half (50.1% (48.5–52.0%)) lived in only four countries: India (51.5 million (47.7–55.3 million) children; 28.6% (27.1–30.4%) of global stunting), Pakistan (10.7 million (9.3–12.1 million); 6.8% (6.7–6.9%)), Nigeria (11.8 million (10.7–13.0 million); 6.6% (6.4–6.8%)), and China (16.2 million (14.0–18.5 million); 9.0% (8.9–9.1%)). Although China had a low prevalence of national stunting (10.8% (9.1–12.6%)) in 2017, the prevalence was high in India (39.3% (39.1–39.6%)), Pakistan (44.0% (38.4–49.9%)), and Nigeria (38.2% (34.5–42.0%)). Even with moderate levels of stunting (10 to <20%)10, these highly populous countries would substantially contribute to the global share owing to their population size, and reducing their levels would markedly decrease the number of stunted children. Childhood wasting was less widespread than stunting (Fig. 2a, b), affecting 8.4% (7.9–9.9%) of children under five in LMICs in 2000, and 6.4% (4.9–7.9%) by 2017. Wasting reached critical levels (at least 15%)11 nationally in 13 LMICs in 2000 and 7 LMICs in 2017, although only in Mauritania (20.7% (16.5–25.6%)) did all units exceed these levels (Extended Data Fig. 3). Critical wasting prevalence was concentrated in few areas across the globe in 2017, including the peri-Sahelian areas of countries stretching from Mauritania to Sudan, as well as areas in South Sudan, Ethiopia, Kenya, Somalia, Yemen, India, Pakistan, Bhutan, and Indonesia. Most LMICs reduced within-country disparities between their highest- and lowest-prevalence units between 2000 and 2017, most notably in Algeria, Uzbekistan, and Egypt (Extended Data Fig. 4). Even against a backdrop of national-level declines, however, broad within-country disparities in wasting remained in countries such as Indonesia, Ethiopia, Nigeria, and Kenya. An estimated ninefold difference in wasting prevalence occurred among Kenya's units in 2017, ranging from 2.9% (1.6–4.9%) in Tetu constituency (Nyeri county) to 28.3% (20.2–37.3%) in Turkana East constituency (Turkana county); higher-resolution estimates reveal areas with a wasting prevalence of at least 25%. High-prevalence areas in 2000 typically remained within the highest population-weighted decile for wasting in 2017, including the units of Rabkona county (Unity state) in northern South Sudan (27.8% (19.8–37.6%) in 2000; 17.3% (8.8–21.9%) in 2017), the Tanout department (Zinder region) in southern Niger (21.6% (17.3–26.7%) in 2000; 16.5% (11.3–23.3%) in 2017), and Alor regency (Nusa Tenggara Timur province) in southeastern Indonesia (16.4% (9.6–25.8%) in 2000; 20.7% (12.8–30.3%) in 2017) (Fig. 2c).

Fig. 2: Prevalence of wasting in children under five in LMICs (2000–2017) and progress towards 2025. a, b, Prevalence of child wasting in children under five at the 5 × 5-km resolution in 2000 (a) and 2017 (b).
c, Overlapping population-weighted tenth and ninetieth percentiles (lowest and highest) of 5 × 5-km grid cells and AROC in wasting, 2000–2017. d, Overlapping population-weighted quartiles of wasting prevalence and relative 95% uncertainty in 2017. e, f, Number of children under five affected by wasting, at the 5 × 5-km (e) and first-administrative-unit (f) levels. g, 2000–2017 annualized decrease in wasting prevalence relative to rates needed during 2017–2025 to meet the WHO GNT. h, Grid-cell-level predicted wasting prevalence in 2025. Maps were produced using ArcGIS Desktop 10.6. Interactive visualization tools are available at https://vizhub.healthdata.org/lbd/cgf.

The absolute number of children affected by wasting was unequal both across and within countries (Fig. 2e, f). Of the 58.3 million (47.6–70.7 million) children affected by wasting in 2017, 57.1% (52.7–61.6%) occurred in four of the most populous countries: India (26.1 million (23.1–29.0 million); 44.7% (41.0–48.6%) of global wasting), Pakistan (3.5 million (2.8–4.3 million); 6.0% (5.8–6.1%)), Bangladesh (1.8 million (1.2–2.4 million); 3.0% (2.6–3.4%)), and Indonesia (2.0 million (1.7–2.3 million); 3.4% (3.3–3.5%)). On the basis of standard thresholds11, these countries had serious levels of national wasting prevalence (10 to <15%), ranging from 12.2% (9.7–14.9%) in Pakistan to 15.7% (15.5–15.9%) in India, and all but Bangladesh had areas with estimated wasting levels above 20%; increased efforts, especially in densely populated areas with high prevalence and absolute numbers, could immensely reduce global child wasting. The prevalence of underweight—a composite indicator of stunting and wasting—followed the scattered pattern of high-stunting areas in SSA and spanning Central Asia to Oceania, and the high prevalence belt of wasting along the African Sahel (Extended Data Fig. 5a, b). Affecting 19.8% (17.3–22.7%) of children under five across LMICs in 2000 and 13.0% (10.4–16.0%) in 2017, reductions in underweight prevalence were most notable for countries in Central and South America, southern SSA, North Africa, and Southeast Asia. For example, by 2017, estimated underweight prevalence had decreased to less than or equal to 20% for nearly all areas in Namibia. By contrast, peri-Sahelian countries stretching from Mauritania to Somalia maintained an estimated underweight prevalence of at least 30% in many areas. Large geographical areas across Central and South Asia also maintained high prevalence of underweight during the study period; in particular, India, Pakistan, and Bangladesh sustained estimated prevalence of at least 30% in most locations. Although levels of child underweight had largely reduced since 2000, within-country disparities remained widespread; 71.4% (75 out of 105) of LMICs experienced at least a twofold difference across units in 2017 (Extended Data Fig. 6). Prospects for reaching 2025 targets We estimate that broad areas across Central America and the Caribbean, South America, North Africa, and East Asia had high probability (>95%) of having already achieved targets for both stunting and wasting in 2017 (Extended Data Fig. 7). Exceptions to these regional patterns exist; areas with stagnated progress and less than 50% probability of having achieved the World Health Organization's Global Nutrition Targets for 2025 (WHO GNTs) in 2017 were found throughout much of Guatemala and Ecuador for stunting and in southern Venezuela for wasting (Figs. 1g, 2g, Extended Data Fig. 7).
Even within countries that had achieved targets, there remain areas with slow progress; locations in central Peru for stunting and southwestern South Africa for wasting had not achieved targets in 2017 (less than 5% probability)—nuances otherwise hidden by aggregated estimates. Owing to stagnation or increases in prevalence, broad areas in SSA and substantial portions across Central Asia, South Asia, and Oceania (for example, in the Democratic Republic of the Congo and Pakistan for stunting; in Yemen and Indonesia for wasting) require reversal of trends or acceleration of declines in order to meet international targets (Figs. 1g, 2g). Despite predicted improvements in AROC for 2017–2025, many highly affected countries are predicted to have areas that maintain estimated stunting levels of at least 40% or wasting levels of at least 15% in 2025 (Figs. 1h, 2h). Accounting for uncertainty in 2000–2017 AROC estimates, and with 2010 national-level estimates as a baseline for the 40% stunting reduction target, 44.8% (47 out of 105) of LMICs are estimated to meet the WHO GNT for stunting nationally (>95% probability) by 2025 (Supplementary Table 13). At finer scales, 17.1% (n = 18) and 7.6% (n = 8) of LMICs will meet the stunting target in all first and second administrative-level units in 2025, respectively (Extended Data Fig. 8a, d, Supplementary Table 13). Similarly, 35.2% (n = 37) of LMICs are estimated to reduce to or maintain less than 5% wasting prevalence by 2025 (>95% probability) based on current trajectories (Supplementary Table 13). Fewer countries were estimated to meet wasting targets in all first administrative-level (16.2% (n = 17)) or second administrative-level (9.5% (n = 10)) units (Extended Data Fig. 8b, e, Supplementary Table 13). Only 26.7% (n = 28) of LMICs will meet national-level targets for both stunting and wasting by 2025, and only 4.8% (n = 5) will achieve both targets in all units (Supplementary Table 13). Although commendable declines in CGF have occurred globally, this progress measured at a coarse scale conceals subnational and local underachievement and variation in achieving the WHO GNTs. Supporting conclusions in the Global Nutrition Report12, our results show that most LMICs will not reach WHO GNTs nationally, and even fewer will meet targets across subnational units. Our mapped results show broad heterogeneity across areas, and reveal hotspots of persistent CGF even within well-performing regions and countries, where increased and targeted efforts are needed. In 2017, one in four children under five across LMICs still suffered at least one dimension of CGF, and the largest numbers of affected children were often in specific within-country locations. Although the national prevalence of CGF was generally lower in Central America and the Caribbean, South American, and East Asian countries, there are communities in these regions in which levels of CGF remain as high as those in SSA and South Asia. Regardless of overall declines, many subnational areas across LMICs maintained high levels of CGF and require substantial acceleration of progress or reversal of increasing trends to meet nutrition targets and leave no populations behind. To our knowledge, this study is the first to estimate CGF comprehensively across LMICs at a fine geospatial scale, providing a precision public health tool to support efficient targeting of local-level interventions to vulnerable populations.
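As an aside, the arithmetic behind these target assessments can be illustrated with a short, self-contained Python sketch. This is our own simplification for illustration, not the study's full draw-level geostatistical procedure: the use of the logit scale for the annualized rate of change (AROC) is an assumption of the sketch, and all function names are ours.

import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def project_to_2025(prev_2000, prev_2017):
    # Annualized rate of change on the logit scale over 2000-2017,
    # carried forward over the eight years from 2017 to 2025.
    aroc = (logit(prev_2017) - logit(prev_2000)) / 17.0
    return expit(logit(prev_2017) + 8.0 * aroc)

def meets_who_gnt(stunt_2010, stunt_2025, waste_2025):
    # WHO GNTs: reduce stunting by 40% relative to the 2010 baseline,
    # and bring wasting below 5%, both by 2025.
    return stunt_2025 <= 0.6 * stunt_2010 and waste_2025 < 0.05

For example, a unit with stunting prevalence of 50% in 2000 and 40% in 2017 would, under this logit-linear extrapolation, be projected to roughly 35% in 2025 and would meet the stunting target only if its 2010 baseline exceeded about 58%.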
Although densely populated areas may have relatively low prevalence of CGF, the absolute number of affected children may still be high; thus, both relative and absolute estimates are important to determine where additional attention is needed. To achieve international goals, more concerted efforts are needed in areas with decreasing or stagnating trends, without diminishing support in areas that demonstrate progress nor contributing to increases in obesity. In future work, we plan to determine how to stratify our estimates of CGF by sex and age, assess the double burden of child undernutrition and overweight, analyse important maternal indicators that affect child nutritional status outcomes (such as anaemia), and continue to monitor progress towards the 2025 WHO GNTs. These mapped estimates enable decision-makers to visualize and compare subnational CGF and nutritional inequalities, and identify populations most in need of interventions13. Building from our previous study of CGF in Africa9, we used Bayesian model-based geostatistics14—which leveraged geo-referenced survey data and environmental and socioeconomic covariates, and the assumption that points with similar covariate patterns and that are closer to one another in space and time would be expected to have similar patterns of CGF—to produce high-spatial-resolution estimates of the prevalence of stunting, wasting, and underweight among children under five across LMICs. Stunting, wasting, and underweight were defined as z-scores that were two or more standard deviations below the WHO healthy population reference median for length/height-for-age, weight-for-length/height, and weight-for-age, respectively, for age- and sex-specific curves6. Using an ensemble modelling framework that feeds into a Bayesian generalized linear model with a correlated space–time error, and 1,000 draws from the fitted posterior distribution, we generated estimates of annual prevalence for each indicator of CGF on a 5 × 5-km grid over 105 LMICs for each year from 2000 to 2017 and mapped results at administrative levels to provide relevant subnational information for policy planning and public health action. For this analysis, we compiled an extensive geo-positioned dataset, using data from 460 household surveys and reports representing 4.6 million children. To ensure comparability with national estimates and to facilitate benchmarking, these local-level estimates were calibrated to those produced by the Global Burden of Disease (GBD) Study 20171, and were subsequently aggregated to the first administrative level (for example, states or provinces) and second administrative level (for example, districts or departments) in each LMIC. We also predict CGF prevalence for 2025 based on 2000–2017 trajectories and estimate the AROC required to meet the WHO GNTs by 2025. In addition, we estimate the 2017 absolute numbers of children under five affected by each CGF indicator in LMICs based on our prevalence estimates and the size of the populations of children under five15,16. Furthermore, we provide figures that demonstrate subnational disparities between each country's second administrative-level units with the highest and lowest estimated prevalence for 2000 and 2017 (Extended Data Figs. 2, 4, 6). We re-estimate CGF prevalence for the 51 African countries included in our previous analysis9 using 28 additional surveys, and extend time trends to model each year from 2000 to 2017. 
We re-estimate CGF prevalence for the 51 African countries included in our previous analysis9 using 28 additional surveys, and extend the time trends to model each year from 2000 to 2017. Owing to these improvements in data availability and methodology, the estimates provided here supersede our previous modelling efforts.

Countries were selected for inclusion in this study using the socio-demographic index (SDI), a summary measure of development that combines education, fertility, and poverty, published in the GBD study1. The analyses reported here include countries in the low, low-middle, and middle SDI quintiles, with several exceptions (Supplementary Table 3). China, Iran, Libya, and Malaysia were included despite their high-middle SDI status in order to create better geographical continuity. Albania and Moldova were excluded owing to geographical discontinuity with other included countries and a lack of available survey data. We did not produce estimates for the island nations of American Samoa, Federated States of Micronesia, Fiji, Kiribati, Marshall Islands, North Korea, Samoa, Solomon Islands, or Tonga, for which no survey data could be sourced. A flowchart of our modelling process is provided in Extended Data Fig. 9.

Surveys and child anthropometry data

We extracted individual-level height, weight, and age data for children under five from household survey series including the Demographic and Health Surveys (DHS), Multiple Indicator Cluster Surveys (MICS), Living Standards Measurement Study (LSMS), and Core Welfare Indicators Questionnaire (CWIQ), among other country-specific child health and nutrition surveys7,17,18,19 (Supplementary Tables 4, 5). Our models included 460 geo-referenced household surveys and reports from 105 countries, representing approximately 4.6 million children under five. Each individual child record was associated with a cluster, a group of neighbouring households or a 'village' that acts as a primary sampling unit. Some surveys included geographical coordinates or precise place names for each cluster (138,938 clusters for stunting, 144,460 for wasting, and 147,624 for underweight). In the absence of geographical coordinates for a cluster, we assigned data to the smallest available administrative areal unit in the survey (termed a 'polygon') while correcting for the survey sample design (16,554 polygons for stunting, 18,833 for wasting, and 19,564 for underweight). Boundary information for these administrative units was obtained as shapefiles, either directly from the surveys or by matching to shapefiles in the Global Administrative Unit Layers (GAUL)20 or the Database of Global Administrative Areas (GADM)21. In select cases, shapefiles provided by the survey administrator were used, or custom shapefiles were created based on survey documentation. These areal data were resampled to point locations using a population-weighted sampling approach over the relevant areal unit, with the number of locations set proportionally to the number of grid cells in the area and the total weights of all the resampled points summing to one16.
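As a rough illustration of this resampling step, the R sketch below draws point locations within one polygon with probability proportional to gridded population, assigning equal weights that sum to one; the objects and the choice of the number of points are hypothetical simplifications.

```r
# Population-weighted resampling of a polygon-matched observation to
# candidate grid-cell locations (hypothetical objects; illustrative only).
resample_polygon <- function(cells, n_points) {
  # cells: data.frame with cell centroids (x, y) and population counts (pop)
  idx <- sample(nrow(cells), size = n_points, replace = TRUE, prob = cells$pop)
  out <- cells[idx, c("x", "y")]
  out$weight <- 1 / n_points  # resampled weights sum to one
  out
}

set.seed(1)
grid_cells <- data.frame(x = runif(200), y = runif(200), pop = rpois(200, 50))
head(resample_polygon(grid_cells, n_points = 20))
```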
Select data sources were excluded for the following reasons: missing survey weights for areal data, a missing sex variable, insufficient age granularity (in months) for calculating length/height-for-age and weight-for-age z-scores in children aged 0–2 years, incomplete sampling (for example, only children aged 0–3 years measured), or untrustworthy data (as determined by the survey administrator or by inspection). We excluded data for children for whom we could not compute age in both months and weeks. Children with height values ≤0 cm or ≥180 cm, and/or weight values ≤0 kg or ≥45 kg, were also excluded from the study. We further excluded observations considered outliers according to the ranges recommended by the 2006 WHO Child Growth Standards: length/height-for-age z-scores below −6 or above 6 for stunting, weight-for-length/height z-scores below −5 or above 5 for wasting, and weight-for-age z-scores below −6 or above 5 for underweight3,4. Details of the survey data excluded for each country are provided in Supplementary Table 6. Data availability plots for all CGF indicators by country, type, and year are included in Supplementary Figs. 2–16.

Child anthropometry

Using the height, weight, age, and sex data for each individual, height-for-age, weight-for-height, and weight-for-age z-scores were calculated using the age-, sex-, and indicator-specific LMS (lambda-mu-sigma) values from the 2006 WHO Child Growth Standards3,4. The LMS methodology allows Gaussian z-score calculations and comparisons to be applied to skewed, non-Gaussian distributions22 (for L ≠ 0, z = [(x/M)^L − 1]/(L × S), where M is the reference median, S the coefficient of variation, and L the Box–Cox power). We classified children as stunted, wasted, or underweight if their length/height-for-age, weight-for-length/height, or weight-for-age z-score, respectively, was more than two standard deviations below the median of the WHO growth reference population6. These individual-level observations were then collapsed to cluster-level totals of the number of children sampled and the number of children under five affected by stunting, wasting, or underweight. We estimated the prevalence of stunting, wasting, and underweight annually from 2000 to 2017 using a model that accounts for data points measured across survey years. As such, the model could also predict at monthly or finer temporal resolutions; however, we are limited both computationally and by the temporal resolution of the covariates.

Seasonality adjustment

Owing to the acute nature of wasting and its relative temporal transience, wasting data were pre-processed to account for seasonality within each year of observation. Across LMICs, large proportions of the population live in rural areas and have livelihoods that rely on agriculture and livestock, and seasonality affects the availability of and access to food, sometimes owing to natural disasters or climate events (for example, floods, monsoons, or droughts) that vary by season. Generalized additive models were fit to the wasting data across time, using the month of interview and a country-level fixed effect as the explanatory variables and the wasting z-score as the response. A 12-month periodic spline for the interview month was used, as well as a spline that smoothed across the whole duration of the dataset. Once the models were fit, individual weight-for-length/height z-score observations were adjusted so that each measurement was consistent with a day that represented a mean day in the periodic spline. The seasonality adjustment had relatively little effect on the raw data9.
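A rough sketch of this adjustment in R, using the mgcv package, is shown below; the data frame wasting_df, its columns, and the reference month are hypothetical, and the published models' long-term smooth and country fixed effect are represented only schematically.

```r
# Fit a GAM to wasting z-scores with a cyclic 12-month spline for the
# interview month, a long-term smooth, and a country fixed effect.
library(mgcv)

fit <- gam(whz ~ s(month, bs = "cc", k = 12) + s(time) + country,
           knots = list(month = c(0, 12)), data = wasting_df)

# Shift each observation's seasonal component to a reference (mean) month
seas_obs <- predict(fit, type = "terms")[, "s(month)"]
ref <- wasting_df
ref$month <- 6.5  # hypothetical mean day of the periodic spline
seas_ref <- predict(fit, newdata = ref, type = "terms")[, "s(month)"]
wasting_df$whz_adj <- wasting_df$whz - seas_obs + seas_ref
```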
Spatial covariates

To leverage strength from locations with observations across the entire spatiotemporal domain, we compiled several 5 × 5-km raster layers of possible socioeconomic and environmental correlates of CGF in the 105 LMICs (Supplementary Table 7, Supplementary Fig. 17). Covariates were selected based on their potential to be predictive for the set of CGF indicators, after reviewing the literature for evidence and plausible hypotheses as to their influence. Where possible, we prioritized the acquisition of temporally dynamic datasets to best match our observations and thus predict the changing dynamics of the CGF indicators. Of the twelve covariates included, eight were temporally dynamic and were reformatted as a synoptic mean over each estimation period or as a mid-period year estimate: average daily mean rainfall (precipitation), average daily mean temperature, enhanced vegetation index, fertility, malaria incidence, educational attainment in women of reproductive age (15–49 years), population, and urbanicity. The remaining four covariate layers were static throughout the study period and were applied uniformly across all modelling years: growing season length, irrigation, nutritional yield for vitamin A, and travel time to the nearest settlement of more than 50,000 inhabitants.

To select covariates and to capture possible nonlinear effects and complex interactions between them, we implemented an ensemble covariate modelling method23. For each region, three sub-models were fit to our dataset using all of our covariate data as explanatory predictors: generalized additive models, boosted regression trees, and lasso regression. Each sub-model was fit using fivefold cross-validation to avoid overfitting, and the out-of-sample predictions from the five holdouts were compiled into a single comprehensive set of predictions from that model. In addition, the same sub-models were run using 100% of the data to create a full set of in-sample predictions. The three sets of out-of-sample sub-model predictions were fed into the full geostatistical model14 as the explanatory covariates when performing the model fit, and the in-sample predictions from the sub-models were used as the covariates when generating predictions from the fitted model. A recent study demonstrated that this ensemble approach can improve predictive validity by up to 25% over an individual model23.
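The sketch below illustrates the out-of-sample stacking step for a single sub-model type (a GAM on two hypothetical covariates); in the full analysis, boosted regression trees and lasso regression were handled analogously, and the three out-of-sample prediction vectors entered the geostatistical model as covariates.

```r
# Fivefold out-of-sample predictions from one hypothetical sub-model.
library(mgcv)

stack_oos <- function(df, k = 5) {
  folds <- sample(rep(1:k, length.out = nrow(df)))
  oos <- numeric(nrow(df))
  for (f in 1:k) {
    m <- gam(cbind(cases, n - cases) ~ s(cov1) + s(cov2),
             family = binomial(), data = df[folds != f, ])
    # Predictions for the held-out fold only, on the prevalence scale
    oos[folds == f] <- predict(m, newdata = df[folds == f, ], type = "response")
  }
  oos
}
```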
Geostatistical model analysis

Binomial count data were modelled within a Bayesian hierarchical modelling framework, using a logit link function and a spatially and temporally explicit hierarchical generalized linear regression model to fit the prevalence of each of our indicators in 14 regions24 of LMICs (North Africa, western SSA, central SSA, eastern SSA, southern SSA, Middle East, Central Asia, East Asia, South Asia, Southeast Asia, Oceania, Central America and the Caribbean, Andean South America, and Tropical South America; see Extended Data Fig. 10). For each region, we explicitly wrote the hierarchy that defines our Bayesian model. For each binomial CGF indicator, we modelled the number of children with stunting, wasting, or underweight in each survey cluster, d. Survey clusters are precisely located by their GPS coordinates and year of observation, which we map to a spatial raster location, i, at time, t. We observed the number of children reported to be stunted, wasted, or underweight as binomial count data, Cd, among an observed sample size, Nd. As several data clusters may be observed at a given location, i, and time, t, we refer to the probability of stunting, wasting, or underweight, p, within a given cluster, d, by its indexed location, i, and time, t, as pi(d),t(d):

$$C_d \mid p_{i(d),t(d)}, N_d \sim \mathrm{Binomial}\big(p_{i(d),t(d)}, N_d\big) \quad \forall\ \text{observed clusters}\ d$$

$$\mathrm{logit}(p_{i,t}) = \beta_0 + \mathbf{X}_{i,t}\boldsymbol{\beta} + Z_{i,t} + \epsilon_{\mathrm{ctr}(i)} + \epsilon_{i,t} \quad \forall\ i \in \text{spatial domain},\ \forall\ t \in \text{time domain}$$

$$\sum_{h=1}^{3} \beta_h = 1$$

$$\epsilon_{\mathrm{ctr}} \sim \text{iid}\ \mathrm{Normal}(0, \gamma^2)$$

$$\epsilon_{i,t} \sim \text{iid}\ \mathrm{Normal}(0, \sigma^2)$$

$$\mathbf{Z} \sim \mathrm{GP}\big(0, \Sigma^{\mathrm{space}} \otimes \Sigma^{\mathrm{time}}\big)$$

$$\Sigma^{\mathrm{space}} = \frac{\omega^2}{\Gamma(\nu)\,2^{\nu-1}} \times (\kappa D)^{\nu} \times \mathrm{K}_{\nu}(\kappa D)$$

$$\Sigma^{\mathrm{time}}_{j,k} = \rho^{|k-j|}$$

For indices d, i, and t, *(index) denotes the value of * at that index. The probabilities, pi,t, represent both the annual prevalence at the space–time location and the probability that an individual child living at that particular location was affected by the indicator. The annual prevalence, pi,t, of each indicator was modelled as a linear combination of the three sub-models (generalized additive model, boosted regression trees, and lasso regression) of rasterized covariate values, Xi,t, a correlated spatiotemporal error term, Zi,t, country random effects, ϵctr(i), with one unstructured random effect fit for each country in the modelling region and all ϵctr sharing a common variance parameter, γ², and an independent nugget effect, ϵi,t, with variance parameter, σ². The coefficients βh of the three sub-models, h = 1, 2, 3, represent their respective predictive weighting in the mean logit link, while the joint error term, Zi,t, accounts for residual spatiotemporal autocorrelation between individual data points that remains after accounting for the predictive effect of the sub-model covariates, the country-level random effect, ϵctr(i), and the independent nugget error term, ϵi,t. The residuals, Zi,t, are modelled as a three-dimensional Gaussian process (GP) in space–time, centred at zero and with a covariance matrix constructed from a Kronecker product of spatial and temporal covariance kernels. The spatial covariance, Σspace, is modelled using an isotropic and stationary Matérn function25, and the temporal covariance, Σtime, as an annual autoregressive (AR1) function over the 18 years represented in the model. In the Matérn function, Γ is the gamma function, Kν is the modified Bessel function of the second kind of order ν > 0, κ > 0 is a scaling parameter, D denotes the Euclidean distance, and ω² is the marginal variance. The scaling parameter, κ, is defined as κ = √(8ν)/δ, in which δ is a range parameter (approximately the distance at which the correlation falls to about 0.1) and ν is a smoothness constant; ν is difficult to fit reliably and, following many other analyses26,28,29, was set to 2 rather than estimated from the data26,27. The spatial Matérn covariance matrix has numbers of rows and columns both equal to the number of spatial mesh points for a given modelling region. In the AR1 function, ρ is the autocorrelation parameter, and k and j are points in the time series whose difference |k − j| defines the lag; the AR1 covariance matrix has numbers of rows and columns both equal to the number of temporal mesh points (18). The space–time covariance matrix, Σspace ⊗ Σtime, for a given modelling region therefore has numbers of rows and columns both equal to the number of spatial mesh points multiplied by the number of temporal mesh points.
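For intuition, the following R sketch builds a small space–time covariance matrix of the form described above, combining the Matérn spatial kernel with an AR1 temporal kernel via a Kronecker product; all parameter values are illustrative, not the fitted ones.

```r
# Matérn spatial covariance (nu = 2), AR1 temporal covariance, and their
# Kronecker product; parameter values are illustrative only.
matern_cov <- function(D, omega2, kappa, nu = 2) {
  C <- omega2 / (gamma(nu) * 2^(nu - 1)) * (kappa * D)^nu * besselK(kappa * D, nu)
  C[D == 0] <- omega2  # the limit at zero distance is the marginal variance
  C
}

set.seed(1)
coords <- cbind(runif(5), runif(5))            # 5 spatial mesh points
D <- as.matrix(dist(coords))                   # Euclidean distance matrix
Sigma_space <- matern_cov(D, omega2 = 1, kappa = 4)

rho <- 0.8
Sigma_time <- rho^abs(outer(1:18, 1:18, "-"))  # AR1 over 18 annual mesh points

Sigma_st <- kronecker(Sigma_space, Sigma_time) # space-time covariance
dim(Sigma_st)                                  # 90 x 90 = (5 x 18) squared
```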
This approach leveraged the residual correlation structure of the data to more accurately predict prevalence in locations without data, while also propagating the dependence in the data through to the uncertainty estimates14. The posterior distributions were fit using computationally efficient and accurate approximations in R-INLA30,31 (integrated nested Laplace approximation), with the stochastic partial differential equation (SPDE)27 approximation to the Gaussian process residuals, using R v.3.5.1. The SPDE approach using INLA has been demonstrated elsewhere, including for the estimation of health indicators, airborne particulate matter, and population age structure9,32,33,34,35. Uncertainty intervals were generated from 1,000 draws (that is, statistically plausible candidate maps)36 created from the posterior-estimated distributions of the modelled parameters. Further details on the model and estimation process are provided in the Supplementary Information.

Post estimation

To leverage national-level data included in the 2017 GBD study1 that were not within the scope of our geospatial modelling framework, and to ensure alignment between these estimates and GBD national-level and subnational estimates, we performed a post hoc calibration of the mean of the 1,000 draws. We calculated population-weighted aggregations to the GBD estimate level, which was either the national or the first administrative level, and compared these aggregations to the corresponding GBD estimates for each year from 2000 to 2017. We defined the calibration factor as the ratio between the GBD estimate and our estimate for each year from 2000 to 2017. For selected countries where GBD estimates were available at the first administrative level (Brazil, China, Ethiopia, India, Indonesia, Iran, Mexico, and South Africa), the calibration factors were also calculated at the lowest available subnational level. Finally, we multiplied each of our estimates in a country-year (or first-administrative-unit-year) by its associated factor. This ensures consistency between our geospatial estimates and those of the 2017 GBD1, while preserving our estimated within-country geospatial and temporal variation. To transform grid-cell-level estimates into a range of information useful to a wide constituency of potential users, estimates were aggregated to first and second administrative-level units specific to each country and to national levels using conditional simulation37.
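These two post-estimation steps can be sketched in a few lines of R: draw-level, population-weighted aggregation to an administrative unit (which preserves uncertainty in the aggregate), followed by a ratio calibration of the draw mean to an external national estimate. All arrays and values here are hypothetical.

```r
# Draw-level aggregation and ratio calibration (hypothetical values).
set.seed(1)
n_cells <- 1000; n_draws <- 1000
draws <- matrix(rbeta(n_cells * n_draws, 2, 6), n_cells, n_draws)  # cell x draw
pop <- rpois(n_cells, 100)                     # under-5 population per cell

# Aggregate each posterior draw with population weights
admin_draws <- colSums(draws * pop) / sum(pop)
quantile(admin_draws, c(0.025, 0.5, 0.975))    # admin-level uncertainty interval

# Ratio calibration of the draw mean to an external (for example, GBD) estimate
gbd_national <- 0.27                           # hypothetical national prevalence
calibration_factor <- gbd_national / mean(admin_draws)
draws_calibrated <- draws * calibration_factor
```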
Although the models can predict at all locations covered by available raster covariates, all final model outputs were masked where land cover was classified as 'barren or sparsely vegetated' on the basis of the most recently available Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data (2013)38. Areas where the total population density was less than ten individuals per 1 × 1-km grid cell were also masked in the final outputs.

Model validation

We assessed the predictive performance of the models using fivefold out-of-sample cross-validation and found that our prevalence estimates closely matched the survey data. To offer a more stringent analysis that respects some of the spatial correlation in the data, holdout sets were created by combining sets of data at different spatial resolutions (for example, the first administrative level). Validation was performed by calculating bias (mean error), variance (root mean square error), 95% coverage of the data within prediction intervals, and the correlation between observed data and predictions. All validation metrics were calculated on the out-of-sample predictions from the fivefold cross-validation. Furthermore, measures of spatial and temporal autocorrelation pre- and post-modelling were examined to verify that the complex spatiotemporal correlation structure in the data was correctly recognized, fitted, and accounted for. All validation procedures and corresponding results are included in Supplementary Tables 14–22 and Supplementary Figs. 24–41.

To compare our estimated rates of improvement in CGF prevalence over the last 18 years with the improvements needed between 2017 and 2025 to meet the WHO GNTs, we performed a simple projection that applies estimated annualized rates of change (AROC) to the final year of our estimates. For each CGF indicator, u, we calculated the AROC at each grid cell, m, between each pair of adjacent years, t, as the difference in logit-transformed prevalence:

$$\mathrm{AROC}_{u,m,t} = \mathrm{logit}(p_{u,m,t}) - \mathrm{logit}(p_{u,m,t-1})$$

We then calculated a weighted AROC for each indicator by taking a weighted average across the years, in which more recent AROCs were given more weight. We defined the weights to be

$$W_t = (t - 2000 + 1)^{\gamma}$$

in which γ may be chosen to give varying amounts of weight across the years. For each indicator, we then calculated the average AROC as

$$\mathrm{AROC}_{u,m} = \sum_{t=2001}^{2017} W_t \times \mathrm{AROC}_{u,m,t}$$

Finally, we calculated the projections, Proj, by applying the AROC to our 2017 mean prevalence estimates over the 8 years from 2017 to 2025. For this set of projections, we selected γ = 1.7 for stunting, γ = 1.9 for wasting, and γ = 1.8 for underweight1.

$$\mathrm{Proj}_{u,m,2025} = \mathrm{logit}^{-1}\big(\mathrm{logit}(p_{u,m,2017}) + \mathrm{AROC}_{u,m} \times 8\big)$$

This projection scheme is analogous to the methods used in the 2017 GBD measurement of progress and projected attainment of health-related Sustainable Development Goals1. Our projections assume that areas will sustain their current AROC, and their precision depends on the level of uncertainty emanating from the estimation of annual prevalence.
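A compact R sketch of this projection scheme for a single grid cell's posterior draws is given below, including the probability of meeting a 40% reduction relative to 2010 (the target definition described in the next paragraph); the simulated draws and the γ value (the stunting setting, 1.7) are illustrative only.

```r
# Weighted AROC in logit space and an 8-year projection to 2025
# (hypothetical draws for one grid cell).
logit <- function(p) log(p / (1 - p))
inv_logit <- function(x) 1 / (1 + exp(-x))

set.seed(1)
years <- 2000:2017
p_draws <- sapply(years, function(t) rbeta(1000, 40 - (t - 2000), 60))  # draw x year

aroc_t <- logit(p_draws[, -1]) - logit(p_draws[, -length(years)])  # adjacent years
w <- ((2001:2017) - 2000 + 1)^1.7
w <- w / sum(w)                               # normalized recency weights
aroc <- aroc_t %*% w                          # weighted AROC, one per draw

proj_2025 <- inv_logit(logit(p_draws[, years == 2017]) + aroc * 8)
target <- 0.6 * mean(p_draws[, years == 2010])  # 40% reduction relative to 2010
mean(proj_2025 < target)                        # probability of meeting the target
```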
Although the WHO GNT for wasting was to reduce prevalence to less than 5%, the WHO GNT for stunting was a 40% relative reduction in prevalence. For our analyses, we defined the target for stunting, and for underweight (for which no WHO GNT was established), as a 40% reduction relative to 2010, the year in which the World Health Assembly requested the development of the WHO GNTs39.

The accuracy of our models depends on the volume, representativeness, quality, and validity of the surveys available for analysis (Supplementary Tables 4, 5, Supplementary Figs. 2–16). Persistent data gaps in national surveys include a lack of CGF data or of household-level characteristics, such as hygiene and sanitation practices. The uncertainty of our estimates is higher in areas where data are missing or less reliable (Figs. 1d, 2d, Extended Data Fig. 5d), and estimates in such areas rely more heavily on covariates and on information borrowed from neighbouring areas (Supplementary Table 7, Supplementary Fig. 17). Investments in improving health surveillance systems, and the inclusion of child anthropometrics in routine data collection for profiling population characteristics, could improve the certainty of our estimates and support better monitoring of progress towards international goals. In addition, measurement error in collecting anthropometric information, including the child's age, height, and weight, could have introduced bias or error into the data across different survey types. The accuracy of age data may be affected by differences in sampling approaches and by self-reporting bias, such as long or selective recall. Weight and height measurements may be inaccurate owing to improper calibration of equipment, device inaccuracy, differing measurement methods, or human error. We did not include a survey random effect to account for between-survey variability in data accuracy; given that most surveys represent a country-year, it would be difficult to distinguish such biases from temporal effects. Our calibration approach in the post-estimation process used only a ratio estimator and did not account for an additive effect, which may have introduced bias. Owing to the complexity of the boosted regression tree sub-model, we were unable to propagate the uncertainty of our three sub-models into our final estimates (see Supplementary Information section 3.2.2 for more detail). Finally, our analyses are descriptive and do not support causal inferences on their own; future research is required to determine the causal pathways for each CGF indicator across and within LMICs.

Further information on research design is available in the Nature Research Reporting Summary linked to this paper.

CGF estimates can be further explored at various spatial scales (national, administrative, and local levels) through our customized online data visualization tools (https://vizhub.healthdata.org/lbd/cgf). The full output of the analyses and the underlying data used in the analyses are publicly available via the Global Health Data Exchange (GHDx; http://ghdx.healthdata.org/record/ihme-data/lmic-child-growth-failure-geospatial-estimates-2000-2017). Some data sources are under special licenses for the current study and are thus not publicly available. Supplementary Tables 4 and 5 list the incorporated data sources; data with restrictions are marked with an obelisk symbol (†). All maps presented in this study were generated by the authors, and no permissions are required to publish them.
The findings of this study are supported by data available in public online repositories, data publicly available upon request from the data provider, and data not publicly available owing to restrictions imposed by the data provider. Non-publicly available data were used under license for the current study but may be available from the authors upon reasonable request and with the permission of the data provider. Detailed tables and figures of data sources and availability can be found in Supplementary Tables 4, 5 and Supplementary Figs. 2–16. Administrative boundaries were retrieved from the Global Administrative Unit Layers (GAUL)20 or the Database of Global Administrative Areas (GADM)21. Land cover was retrieved from the online Data Pool, courtesy of the NASA EOSDIS Land Processes Distributed Active Archive Center (LP DAAC), USGS/Earth Resources Observation and Science (EROS) Center, Sioux Falls, South Dakota40. Lakes were retrieved from the Global Lakes and Wetlands Database (GLWD), courtesy of the World Wildlife Fund and the Center for Environmental Systems Research, University of Kassel41,42. Populations were retrieved from WorldPop15,16. All maps in this study were produced using ArcGIS Desktop 10.6.

Code availability

Our study follows the Guidelines for Accurate and Transparent Health Estimate Reporting (GATHER; Supplementary Table 1). All code used for these analyses is publicly available at http://ghdx.healthdata.org/record/ihme-data/lmic-child-growth-failure-geospatial-estimates-2000-2017 and at http://github.com/ihmeuw/lbd/tree/cgf-lmic-2019.

References

1. Dicker, D. et al. Global, regional, and national age-sex-specific mortality and life expectancy, 1950–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet 392, 1684–1735 (2018).
2. Victora, C. G. et al. Maternal and child undernutrition: consequences for adult health and human capital. Lancet 371, 340–357 (2008).
3. WHO & UNICEF. WHO Child Growth Standards and the Identification of Severe Acute Malnutrition in Infants and Children: A Joint Statement. https://www.who.int/nutrition/publications/severemalnutrition/9789241598163/en/ (2009).
4. Wang, Y. & Chen, H.-J. in Handbook of Anthropometry (ed. Preedy, V. R.) 29–48 (Springer, 2012).
5. Waterlow, J. C. et al. The presentation and use of height and weight data for comparing the nutritional status of groups of children under the age of 10 years. Bull. World Health Organ. 55, 489–498 (1977).
6. WHO Multicentre Growth Reference Study Group. WHO Child Growth Standards based on length/height, weight and age. Acta Paediatr. 450, 76–85 (2006).
7. ICF & USAID. The DHS Program: Demographic and Health Surveys. https://dhsprogram.com/publications/Publication-Search.cfm?shareurl=yes&topic1=15&pubTypeSelected=pubtype_5 (accessed 13 September 2018).
8. Reich, B. J. & Haran, M. Precision maps for public health. Nature 555, 32–33 (2018).
9. Osgood-Zimmerman, A. et al. Mapping child growth failure in Africa between 2000 and 2015. Nature 555, 41–47 (2018).
10. de Onis, M. et al. Prevalence thresholds for wasting, overweight and stunting in children under 5 years. Public Health Nutr. 22, 1–5 (2018).
11. WHO. Nutrition Landscape Information System (NLIS) Country Profile Indicators: Interpretation Guide. https://www.who.int/nutrition/nlis_interpretationguide_isbn9789241599955/en/ (2010).
12. Development Initiatives. The 2018 Global Nutrition Report: Shining a Light to Spur Action on Nutrition. https://globalnutritionreport.org/reports/global-nutrition-report-2018/ (2018).
13. Annan, K. Data can help to end malnutrition across Africa. Nature 555, 7 (2018).
14. Diggle, P. J. & Ribeiro, P. J. Model-Based Geostatistics (Springer, 2007).
15. WorldPop. WorldPop Dataset. http://www.worldpop.org.uk/data/get_data/ (accessed 24 July 2017).
16. Tatem, A. J. WorldPop, open data for spatial demography. Sci. Data 4, 170004 (2017).
17. UNICEF. Multiple Indicator Cluster Surveys (MICS). http://mics.unicef.org (accessed 26 June 2019).
18. World Bank Group. Living Standards Measurement Survey (LSMS). http://surveys.worldbank.org/lsms (accessed 26 June 2019).
19. World Bank Group. Core Welfare Indicators Questionnaire Survey (CWIQ). http://ghdx.healthdata.org/series/core-welfare-indicators-questionnaire-survey-cwiq (accessed 21 April 2017).
20. GeoNetwork. The Global Administrative Unit Layers (GAUL). http://www.fao.org/geonetwork/srv/en/main.home (2015).
21. Global Administrative Areas (GADM). GADM Database of Global Administrative Areas. http://www.gadm.org (2018).
22. Indrayan, A. Demystifying LMS and BCPE methods of centile estimation for growth and other health parameters. Indian Pediatr. 51, 37–43 (2014).
23. Bhatt, S. et al. Improved prediction accuracy for disease risk mapping using Gaussian process stacked generalization. J. R. Soc. Interface 14, 20170520 (2017).
24. Murray, C. J. L. et al. GBD 2010: design, definitions, and metrics. Lancet 380, 2063–2066 (2012).
25. Stein, M. L. Interpolation of Spatial Data (Springer, 1999).
26. Lindgren, F. & Rue, H. Bayesian spatial modelling with R-INLA. J. Stat. Softw. 63(19) (2015).
27. Lindgren, F., Rue, H. & Lindström, J. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. J. R. Stat. Soc. Series B Stat. Methodol. 73, 423–498 (2011).
28. Rozanov, Y. A. Markov Random Fields (Springer, 1982).
29. Whittle, P. On stationary processes in the plane. Biometrika 41, 434–449 (1954).
30. Rue, H., Martino, S. & Chopin, N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J. R. Stat. Soc. Series B Stat. Methodol. 71, 319–392 (2009).
31. Martins, T. G., Simpson, D., Lindgren, F. & Rue, H. Bayesian computing with INLA: new features. Comput. Stat. Data Anal. 67, 68–83 (2013).
32. Golding, N. et al. Mapping under-5 and neonatal mortality in Africa, 2000–15: a baseline analysis for the Sustainable Development Goals. Lancet 390, 2171–2182 (2017).
33. Cameletti, M., Lindgren, F., Simpson, D. & Rue, H. Spatio-temporal modeling of particulate matter concentration through the SPDE approach. AStA Adv. Stat. Anal. 97, 109–131 (2013).
34. Alegana, V. A. et al. Fine resolution mapping of population age-structures for health and development applications. J. R. Soc. Interface 12, 20150073 (2015).
35. Kinyoki, D. K. et al. Assessing comorbidity and correlates of wasting and stunting among children in Somalia using cross-sectional household surveys: 2007 to 2010. BMJ Open 6, e009854 (2016).
36. Patil, A. P., Gething, P. W., Piel, F. B. & Hay, S. I. Bayesian geostatistics in health cartography: the perspective of malaria. Trends Parasitol. 27, 246–253 (2011).
37. Gething, P. W., Patil, A. P. & Hay, S. I. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation. PLOS Comput. Biol. 6, e1000724 (2010).
38. Scharlemann, J. P. W. et al. Global data for ecology and epidemiology: a novel algorithm for temporal Fourier processing MODIS data. PLoS ONE 3, e1408 (2008).
39. de Onis, M. et al. The World Health Organization's global target for reducing childhood stunting by 2025: rationale and proposed actions. Matern. Child Nutr. 9, 6–26 (2013).
40. Friedl, M. & Sulla-Menashe, D. MCD12Q1 v006. MODIS/Terra+Aqua Land Cover Type Yearly L3 Global 500m SIN Grid. https://doi.org/10.5067/MODIS/MCD12Q1.006 (NASA EOSDIS Land Processes DAAC, 2019).
41. Lehner, B. & Döll, P. Development and validation of a global database of lakes, reservoirs and wetlands. J. Hydrol. (Amst.) 296, 1–22 (2004).
42. World Wildlife Fund. Global Lakes and Wetlands Database, Level 3. https://www.worldwildlife.org/pages/global-lakes-and-wetlands-database (2004).

This work was primarily supported by grant OPP1132415 from the Bill & Melinda Gates Foundation. Nicholas J. Kassebaum and Simon I. Hay jointly supervised this work.
Lee , Shanshan Li , Shai Linn , Rakesh Lodha , Hassan Magdy Abd El Razek , Muhammed Magdy Abd El Razek , Marek Majdan , Azeem Majeed , Reza Malekzadeh , Deborah Carvalho Malta , Abdullah A. Mamun , Mohammad Ali Mansournia , Francisco Rogerlândio Martins-Melo , Anthony Masaka , Benjamin Ballard Massenburg , Fabiola Mejia-Rodriguez , Mulugeta Melku , Walter Mendoza , George A. Mensah , Tomasz Miazgowski , Ted R. Miller , G. K. Mini , Erkin M. Mirrakhimov , Aso Mohammad Darwesh , Shafiu Mohammed , Farnam Mohebi , Yoshan Moodley , Ghobad Moradi , Maziar Moradi-Lakeh , Paula Moraga , Shane Douglas Morrison , Seyyed Meysam Mousavi , Ulrich Otto Mueller , Ghulam Mustafa , Mehdi Naderi , Farid Najafi , Vinay Nangia , Ionut Negoi , Josephine W. Ngunjiri , Huong Lan Thi Nguyen , Jing Nie , Chukwudi A. Nnaji , Jean Jacques Noubiap , Malihe Nourollahpour Shiadeh , Peter S. Nyasulu , Felix Akpojene Ogbo , Andrew T. Olagunju , Bolajoko Olubukunola Olusanya , Jacob Olusegun Olusanya , Eduardo Ortiz-Panozo , Stanislav S. Otstavnov , Mahesh P. A. , Adrian Pana , Anamika Pandey , Sanghamitra Pati , Snehal T. Patil , George C. Patton , Norberto Perico , Meghdad Pirsaheb , Ellen G. Piwoz , Maarten J. Postma , Swayam Prakash , Hedley Quintana , Amir Radfar , Alireza Rafiei , Vafa Rahimi-Movaghar , Rajesh Kumar Rai , David Laith Rawaf , Salman Rawaf , Rahul Rawat , Giuseppe Remuzzi , Andre M. N. Renzaho , Carlos Rios-González , Leonardo Roever , Ali Rostami , Amirhossein Sahebkar , Nasir Salam , Payman Salamati , Yahya Salimi , Abdallah M. Samy , Brijesh Sathian , David C. Schwebel , Anbissa Muleta Senbeta , Sadaf G. Sepanlou , Masood Ali Shaikh , Teresa Shamah Levy , Mohammadbagher Shamsi , Kiomars Sharafi , Rajesh Sharma , Aziz Sheikh , Apurba Shil , Diego Augusto Santos Silva , Jasvinder A. Singh , Dhirendra Narain Sinha , Moslem Soofi , Agus Sudaryanto , Mu'awiyyah Babale Sufiyan , Rafael Tabarés-Seisdedos , Birkneh Tilahun Tadesse , Mohamad-Hani Temsah , Abdullah Sulieman Terkawi , Zemenu Tadesse Tessema , Andrew L. Thorne-Lyman , Marcos Roberto Tovani-Palone , Bach Xuan Tran , Khanh Bao Tran , Irfan Ullah , Olalekan A. Uthman , Masoud Vaezghasemi , Afsane Vaezi , Pascual R. Valdez , Yousef Veisani , Francesco S. Violante , Vasily Vlassov , Linh Gia Vu , Yasir Waheed , Judd L. Walson , Yafeng Wang , Yuan-Pang Wang , Elizabeth N. Wangia , Andrea Werdecker , Gelin Xu , Tomohide Yamada , Engida Yisma , Naohiro Yonemoto , Mustafa Z. Younis , Mahmoud Yousefifard , Chuanhua Yu , Sojib Bin Zaman , Mohammad Zamani , Yunquan Zhang S.I.H. and N.J.K. conceived and planned the study. B.V.P., A.L.-A., and D.K.K. obtained, extracted, processed, and geo-positioned CGF data. L.E. constructed covariate data layers. D.K.K., A.E.O.-Z., and M.L.C. wrote the computer code and designed the statistical analyses. D.K.K. carried out the statistical analyses with input from A.E.O.-Z., M.L.C., N.J.H., and N.V.B. D.K.K. and L.E. prepared figures. D.K.K., L.E.S., and L.B.M. wrote the first draft of the manuscript with assistance from S.I.H. and M.F.S., and all authors contributed to subsequent revisions. All authors provided intellectual input into aspects of this study. Additional details on author contributions can be found in the Supplementary Information (section 8.0). Correspondence to Simon I. Hay. This study was funded by the Bill & Melinda Gates Foundation. Co-authors employed by the Bill & Melinda Gates Foundation provided feedback on initial maps and drafts of this manuscript. 
Otherwise, the funders of the study had no role in study design, data collection, data analysis, data interpretation, writing of the final report, or the decision to publish. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Extended data figures and tables Extended Data Fig. 1 Prevalence of stunting in children under five in LMICs at administrative levels 0, 1, 2, and at 5 × 5-km resolution in 2017. Administrative level 0 are national-level estimates; administrative level 1 are first administrative-level (for example, states or provinces) estimates; administrative level 2 are second administrative-level (for example, districts or departments) estimates. Maps reflect administrative boundaries, land cover, lakes, and population; grey-coloured grid cells had fewer than ten people per 1 × 1-km grid cell and were classified as 'barren or sparsely vegetated'15,16,20,21,40,41,42, or were not included in these analyses. Maps were produced using ArcGIS Desktop 10.6. Extended Data Fig. 2 Geographical inequality in the prevalence of child stunting across 105 countries. The bars represent the range of stunting prevalence in children under five in the second administrative-level units in each country. Bars indicating the range in 2017 are coloured according to the regions defined by the Global Burden of Disease (GBD)1. Grey bars indicate the range in 2000. The graph was produced using R project v.3.5.1. Extended Data Fig. 3 Prevalence of wasting in children under five in LMICs at administrative levels 0, 1, 2, and at 5 × 5-km resolution in 2017. Administrative levels are as described in Extended Data Fig. 1. Maps reflect administrative boundaries, land cover, lakes, and population; grey-coloured grid cells had fewer than ten people per 1 × 1-km grid cell and were classified as 'barren or sparsely vegetated'15,16,20,21,40,41,42, or were not included in these analyses. Maps were produced using ArcGIS Desktop 10.6. Extended Data Fig. 4 Geographical inequality in prevalence of child wasting across 105 countries. The bars represent the range of wasting prevalence in children under five in the second administrative-level units in each country. Bars indicating the range in 2017 are coloured according to their GBD-defined1 regions. Grey bars indicate the range in 2000. The graph was produced using R project v.3.5.1. Extended Data Fig. 5 Prevalence of underweight in children under five in LMICs (2000–2017) and progress towards 2025. a, b, Prevalence of underweight in children under five at the 5 × 5-km resolution in 2000 (a) and 2017 (b). c, Overlapping population-weighted tenth and ninetieth percentiles (lowest and highest) of 5 × 5-km grid cells and AROC in underweight, 2000–2017. d, Overlapping population-weighted quartiles of underweight prevalence and relative 95% uncertainty in 2017. e, f, Number of underweight children under five, at the 5 × 5-km (e) and first-administrative-unit (f) levels. g, 2000–2017 annualized decrease in underweight prevalence relative to rates needed during 2017–2025 to meet WHO GNT. h, Grid-cell-level predicted underweight prevalence in 2025. Maps were produced using ArcGIS Desktop 10.6. Interactive visualization tools are available at https://vizhub.healthdata.org/lbd/cgf. Extended Data Fig. 
6 Geographical inequality in prevalence of child underweight across 105 countries. The bars represent the range of underweight prevalence in the second administrative-level units in each country. Bars indicating the range in 2017 are coloured according to their GBD-defined1 regions. Grey bars indicate the range in 2000. The graph was produced using R project v.3.5.1. Extended Data Fig. 7 Probability that WHO GNT had been achieved in 2017 at the first administrative and 5 × 5-km grid-cell levels for stunting, wasting, and underweight. a–f, Probability of WHO GNT achievement in 2017 at the first administrative and 5 × 5-km levels for stunting (a, d), wasting (b, e), and underweight (c, f). Dark-blue and dark-red grid cells indicate >95% and <5% probability, respectively, of having met the WHO GNT in 2017. Given that there was no WHO GNT established for underweight, we based the underweight target on WHO GNT for stunting, as the conditions are similarly widespread and prevalent. Maps were produced using ArcGIS Desktop 10.6. Extended Data Fig. 8 Probability of meeting WHO GNT in 2025 at the first administrative and 5 × 5-km grid-cell levels for stunting, wasting, and underweight. a–f, Probability of WHO GNT achievement in 2025 at the first administrative and 5 × 5-km levels for stunting (a, d), wasting (b, e), and underweight (c, f). Dark-blue and dark-red grid cells indicate >95% and <5% probability, respectively, of meeting WHO GNT in 2025. Given that there was no WHO GNT established for underweight, we based the underweight target on WHO GNT for stunting as the conditions are similarly widespread and prevalent. Maps were produced using ArcGIS Desktop 10.6. Extended Data Fig. 9 Flowchart of CGF prevalence modelling process. The process used to produce CGF prevalence estimates in LMICs involved three main parts. In the data-processing steps (green), data were identified, extracted, and prepared for use in the models. In the modelling phase (red), we used these data and covariates in stacked generalization ensemble models and spatiotemporal Gaussian process models for each CGF indicator. In post-processing (blue), we calibrated the prevalence estimates to match 2017 GBD study1 estimates and aggregated the estimates to the first- and second-administrative-level units in each country. Extended Data Fig. 10 Modelling regions. Modelling regions24 were based on geographical and SDI regions from the GBD study1, defined as: Andean South America, Central America and the Caribbean, central SSA, East Asia, eastern SSA, Middle East, North Africa, Oceania, Southeast Asia, South Asia, southern SSA, Central Asia, Tropical South America, and western SSA. 'High income country' refers to regions not included in our models owing to high-middle or a high SDI. The map was produced using ArcGIS Desktop 10.6. Supplementary Discussion; Supplementary Tables; Supplementary Figures; Supplementary Methods. Additional discussion of associated causes of child growth failure, interventions, and future work. Supplementary Tables 1–22: data sources, fitted parameters, countries estimated to meet WHO GNTs in 2017 and 2025, predictive metrics. Supplementary Figures 1–41: data availability, covariates, seasonal adjustments, validation metrics. Additional methods details. Detailed author contributions. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Kinyoki, D.K., Osgood-Zimmerman, A.E., Pickering, B.V. et al. Mapping child growth failure across low- and middle-income countries. Nature 577, 231–234 (2020). https://doi.org/10.1038/s41586-019-1878-8 Issue Date: 09 January 2020
Stokes and Navier-Stokes equations with perfect slip on wedge type domains

Siegfried Maier and Jürgen Saal, Heinrich-Heine-Universität Düsseldorf, Mathematisches Institut, 40204 Düsseldorf, Germany

Discrete & Continuous Dynamical Systems - S, October 2014, 7(5): 1045-1063. doi: 10.3934/dcdss.2014.7.1045

Received March 2013; Revised June 2013; Published May 2014

Well-posedness of the Stokes and Navier-Stokes equations subject to perfect slip boundary conditions on wedge type domains is studied. Applying the operator sum method we derive an $\mathcal{H}^\infty$-calculus for the Stokes operator in weighted $L^p_\gamma$ spaces (Kondrat'ev spaces), which yields maximal regularity for the linear Stokes system. This in turn implies mild well-posedness for the Navier-Stokes equations, locally in time for arbitrary data and globally in time for small data in $L^p$.

Keywords: Kondrat'ev spaces, perfect slip, $\mathcal{H}^\infty$-calculus, Stokes equations, wedge domains.

Mathematics Subject Classification: Primary: 76D035, 35K65; Secondary: 76D0.

Citation: Siegfried Maier, Jürgen Saal. Stokes and Navier-Stokes equations with perfect slip on wedge type domains. Discrete & Continuous Dynamical Systems - S, 2014, 7(5): 1045-1063. doi: 10.3934/dcdss.2014.7.1045
For how many positive integers $x$ is $100 \leq x^2 \leq 200$? We have $10^2=100$, so $10$ is the smallest positive integer which satisfies the inequalities. From here, we can compute the next few perfect squares: \begin{align*} 11^2 &= 121, \\ 12^2 &= 144, \\ 13^2 &= 169, \\ 14^2 &= 196, \\ 15^2 &= 225. \end{align*} The last $x$ for which $x^2\le 200$ is $x=14$. In all, our solutions in positive integers are $$x=10,11,12,13,14,$$ so there are $\boxed{5}$ such $x$.
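A quick computational check of this count (a sketch in R; the search bound of 100 is arbitrary, since any bound past $\sqrt{200}$ suffices):

```r
# Count positive integers x with 100 <= x^2 <= 200.
x <- 1:100
sum(x^2 >= 100 & x^2 <= 200)   # returns 5 (x = 10, 11, 12, 13, 14)
```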
Uniform bound for Hecke L-functions

Matti Jutila and Yoichi Motohashi

Acta Math., Volume 195, Number 1 (2005), 61-115. Received: 22 November 2004. © 2005 Institut Mittag-Leffler.

The first author was supported by the grant 8205966 from the Academy of Finland, and the second author by KAKENHI 15540047 and a Nihon University research grant (2004).

Citation: Jutila, Matti; Motohashi, Yoichi. Uniform bound for Hecke L-functions. Acta Math. 195 (2005), no. 1, 61-115. doi:10.1007/BF02588051.
Network Analysis of the Multidimensional Symptom Experience of Oncology Nikolaos Papachristou1 na1, Payam Barnaghi1 na1, Bruce Cooper2, Kord M. Kober2, Roma Maguire3, Steven M. Paul2, Marilyn Hammer4, Fay Wright5, Jo Armes1,10, Eileen P. Furlong6, Lisa McCann3, Yvette P. Conley7, Elisabeth Patiraki8, Stylianos Katsaragakis9, Jon D. Levine2 & Christine Miaskowski2 Oncology patients undergoing cancer treatment experience an average of fifteen unrelieved symptoms that are highly variable in both their severity and distress. Recent advances in Network Analysis (NA) provide a novel approach to gain insights into the complex nature of co-occurring symptoms and symptom clusters and identify core symptoms. We present findings from the first study that used NA to examine the relationships among 38 common symptoms in a large sample of oncology patients undergoing chemotherapy. Using two different models of Pairwise Markov Random Fields (PMRF), we examined the nature and structure of interactions for three different dimensions of patients' symptom experience (i.e., occurrence, severity, distress). Findings from this study provide the first direct evidence that the connections between and among symptoms differ depending on the symptom dimension used to create the network. Based on an evaluation of the centrality indices, nausea appears to be a structurally important node in all three networks. Our findings can be used to guide the development of symptom management interventions based on the identification of core symptoms and symptom clusters within a network. Oncology patients undergoing cancer treatment experience an average of fifteen unrelieved symptoms that are highly variable in both their severity and distress1,2,3. In order to advance symptom management science and gain a better understanding of oncology patients' symptom experiences, research has focused on the evaluation of symptom clusters using techniques such as exploratory factor analysis or cluster analysis4,5,6. One of the underlying assumptions of this research is that symptoms that cluster together may share underlying mechanisms that are potential targets for therapeutic interventions. While progress is being made in symptom clusters research4, one of the major gaps in knowledge using standard statistical approaches is that the nature of the relationships among individual symptoms and symptom clusters have not been evaluated. This gap in knowledge prevents the identification of key symptom(s) that exert an influence on other co-occurring symptoms or symptom clusters that may be potential target(s) for therapeutic interventions. In this study, we investigate the application of Network Analysis (NA) methods to better understand and interpret the associations among co-occurring symptoms and symptom clusters in oncology patients receiving chemotherapy (CTX). NA7,8,9 is a graph theory based methodology that is being used to gain new insights into systems biology10,11 depression12,13, post-traumatic stress14, complex bereavement15, quality of life (QOL)16, and identifying high-risk cancer sub-population17. In terms of oncology patients, NA allows one to visualize and interpret quantitatively the relationships among various symptoms and symptom clusters that patients are experiencing. While NA is being used to understand the associations among psychiatric symptoms18,19,20,21,22 and substance abuse and dependence symptoms23, only one study was found that used NA to evaluate symptoms in oncology patients24. 
Using data on the occurrence of 18 symptoms in 665 oncology patients, a force directed layout algorithm was used to visualize a patient-symptom bipartite network. Then four quantitative methods were used to analyse the patterns of symptom occurrence suggested by the network visualizations. The authors concluded that cancer symptoms occur in a nested pattern as opposed to distinct clusters24. While a historic study24, the conclusions regarding the absence of distinct symptom clusters warrants additional exploration because of the limitations and associated implications of the NA methods that were used. For example, modularity optimization has a resolution limit that may prevent it from detecting clusters which are comparatively small with respect to the graph as a whole, even when they are well defined communities25. In addition, during unweighted or weighted one-mode projection, some information is lost and the final models do not hold the complete structural information of bipartite networks26. As mentioned by the authors24, their methods concealed how the groups of symptoms co-occurred, as well as their globally optimal co-occurrence frequencies. In the current study, we explore the complex organisation and interconnectedness of cancer symptoms and associated clusters by using two different models of Pairwise Markov Random Fields (PMRF)27,28,29 on binary symptom occurrence and ordinal symptom severity and distress data. As part of a symptom assessment, oncology patients are asked to rate not only the occurrence of the symptom, but its associated severity and distress30,31,32,33. Two of the unanswered questions in symptom clusters' research is whether the number and types of symptom clusters differ based on the dimension used to create the cluster and how symptoms within and across clusters are related to each other4,5. Our study is the first to use NA to evaluate the relationships among symptoms and symptom clusters using ratings of symptom occurrence, severity, and distress, in a sample of oncology patients undergoing chemotherapy (CTX; n = 1328). We used NA to examine the relationships among 38 common symptoms and to explore if the network structures for occurrence, severity, and distress have different properties. Our analyses show the prevalence, importance, and influence of each symptom within each network and the overall connectivity of cancer symptoms within each symptom dimension network. In addition, the interrelationships among symptoms inside and outside of a symptom cluster are described. Patients and Settings This secondary analysis is part of a longitudinal study of the symptom experience of oncology outpatients receiving CTX. The methods for this study are described in detail in our previous publications34,35,36. For this NA, enrollment assessment data from the parent, longitudinal study were analysed (n = 1328). Patients were eligible to participate if they: were ≥18 years of age; had a diagnosis of breast, gastrointestinal (GI), gynecological (GYN), or lung cancer; had received CTX within the preceding four weeks; were scheduled to receive at least two additional cycles of CTX; were able to read, write, and understand English; and gave written informed consent. Patients were recruited from two Comprehensive Cancer Centers, one Veteran's Affairs hospital, and four community-based oncology programs. This study was approved by the Committee on Human Research at the University of California, San Francisco. 
All methods were performed in accordance with the relevant guidelines and regulations. A written informed consent was obtained from all patients. Cancer Symptom Dimensions A modified version of the Memorial Symptom Assessment Scale (MSAS)33 was used to evaluate the occurrence, severity, and distress of 38 symptoms commonly associated with cancer and its treatment. In addition to the original 32 MSAS symptoms, the following six symptoms were assessed: hot flashes, chest tightness, difficulty breathing, abdominal cramps, increased appetite, and weight gain. The MSAS is a self-report questionnaire designed to measure the multidimensional experience of symptoms. Using the MSAS, patients were asked to indicate whether or not they had experienced each symptom in the past week (i.e., symptom occurrence). If they had experienced the symptom, they were asked to rate its severity and distress. Symptom severity was measured using a 4-point Likert scale (i.e., 1 = slight, 2 = moderate, 3 = severe, 4 = very severe). Symptom distress was measured using a 5-point Likert scale (i.e., 0 = not at all, 1 = a little bit, 2 = somewhat, 3 = quite a bit, 4 = very much). The reliability and validity of the MSAS are well established in studies of oncology inpatients and outpatients33. In general, networks are defined as a collection of interconnected components (i.e., in this paper, symptoms). These components are called nodes and their interaction links are called edges37. A Pairwise Markov Random Field (PMRF)29 is an undirected graphical model of a set of random variables having a Markov property, described by this undirected graph (or network). Its edges indicate the full conditional association between two nodes after conditioning on all of the other nodes in the network. When a relationship exists between two nodes (i.e., symptoms) that cannot be explained by any other node in the network, these two nodes are connected. The absence of an edge between two nodes (i.e., symptoms) indicates that these nodes are conditionally independent of each other given the other nodes in the network (Fig. 1). A Pairwise Markov Random Field (PMRF) or an undirected graphical model with 6 nodes, A to F. The presence of edges between nodes indicates the conditional dependency between them. When estimating a PMRF, the number of parameters that need to be estimated grows quickly with the size of the network38. In our 38-node networks, 741 parameters (i.e., 38 threshold parameters and 38 × 37/2 = 703 pairwise association parameters) needed to be estimated38. To estimate this number of parameters in a reliable fashion, the number of observations in our sample needed to be at least equivalent, which it was given a sample size of 1328 patients. To create the networks, we used the generalization of the Ising model presented in the IsingFit R-package39 for the occurrence data and the polychoric correlation method28 for the severity and distress data, using the R-package qgraph40. Both approaches entailed the application of a statistical regularization technique, which provided an extra penalty for model complexity. The edges that were likely to be spurious or false positives were removed from the models, leading to networks that were more interpretable. The model used in the IsingFit R-package39 is a binary equivalent of the Gaussian approximation method. Its variables can have only two states and interactions are considered pairwise. 
The aforementioned model contains two node-specific parameters: the interaction parameter $\beta_{jk}$, representing the strength of the interaction between variables $j$ and $k$, and the node parameter $\tau_j$, which represents the autonomous disposition of the variable to take the value of one ("1") regardless of neighboring variables. The IsingFit model estimates these parameters using logistic regression. Through repetition, every variable is regressed on all of the other variables. To obtain sparsity, an $\ell_1$-penalty is imposed on the regression coefficients. The level of shrinkage depends on the penalty parameter of the lasso. In the IsingFit method, the Extended Bayesian Information Criterion (EBIC) is used to select the set of neighbor nodes that yields the lowest EBIC and in this way constructs the final "true" network. By viewing $X_j$ as the response variable and all of the other variables $X_{\setminus j}$ as the predictors, the EBIC is represented as:

$$\mathrm{BIC}_{\gamma}(j) = -2\,\ell(\hat{\Theta}_{j}) + |J| \cdot \log(n) + 2\gamma\,|J| \cdot \log(p-1)$$

in which $\ell(\hat{\Theta}_{j})$ is the log likelihood of the conditional probability of $X_j$ given its neighbours $X_{ne(j)}$, $|J|$ is the number of neighbours selected by logistic regression at a certain penalty parameter $\rho$, $n$ is the number of observations, $p-1$ is the number of covariates (predictors), and $\gamma$ is a hyperparameter determining the strength of prior information on the size of the model space. The model with the set of neighbours $J$ that has the lowest EBIC is selected.

For severity and distress, we used the R-package qgraph40 and applied the polychoric correlation method in combination with the graphical "least absolute shrinkage and selection operator" (glasso) algorithm28,41,42. The glasso algorithm, by inverting its input, which is the sample's polychoric correlation matrix, returns a sparse network model in which only a relatively small number of edges are used to explain the covariance structure in the data. More precisely, the graphical lasso estimator is the $\hat{\Theta}$ such that:

$$\hat{\Theta} = \mathop{\mathrm{argmin}}_{\Theta \ge 0}\Big(\mathrm{tr}(S\Theta) - \log\det(\Theta) + \lambda \sum_{j \ne k} |\Theta_{jk}|\Big)$$

where $S$ is the sample's polychoric correlation matrix and $\lambda$ is a penalizing parameter. Glasso utilizes this penalizing parameter to control the degree to which regularization is applied, and the penalizing parameter itself can be selected by minimizing the EBIC. In general, graphical lasso controls the relationships between the variables in a network and gives partial correlations between variables, which increases the parsimony of the final network models28,42. The above-mentioned techniques allowed us to create and construct the networks using the symptom occurrence, severity, and distress data. However, it is crucial to establish robust methods to assess the stability and accuracy of the network. The next section discusses our approach to assess and evaluate the constructed networks.
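Before that, to make the two estimation approaches concrete, the following is a minimal R sketch of how such networks can be fitted with the packages named above. It is illustrative only, not the authors' published code; the data frames `occ` (binary occurrence ratings) and `sev` (ordinal severity ratings) are hypothetical placeholders with one column per symptom.

```r
library(IsingFit)
library(qgraph)

# Occurrence network: regularized nodewise logistic regression (Ising model),
# with EBIC hyperparameter gamma = 0.25 and the OR rule (AND = FALSE), as reported.
occ_fit <- IsingFit(as.matrix(occ), family = "binomial",
                    AND = FALSE, gamma = 0.25, plot = FALSE)

# Severity/distress networks: polychoric correlations + EBIC-regularized glasso.
sev_cor <- cor_auto(sev)                          # polychoric correlation matrix
sev_net <- EBICglasso(sev_cor, n = nrow(sev), gamma = 0.25)

# Plot both with the Fruchterman-Reingold ("spring") layout used in Figs 2-4.
qgraph(occ_fit$weiadj, layout = "spring", labels = colnames(occ))
qgraph(sev_net,        layout = "spring", labels = colnames(sev))
```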
In network model representations, nodes (symptoms) are represented as circles and links between nodes (edges) are represented as lines (see Figs 2a, 3a and 4a). The size of each node (i.e., symptom) is proportional to the occurrence rate, severity rating, or distress rating of each symptom. Each link in the network represents the interconnection between two symptoms after conditioning on all of the other symptoms in the network. Green lines indicate positive inter-connections. Red lines indicate negative inter-connections. Thicker lines indicate stronger inter-connections. Because the strengths of the relationships between symptoms are taken into account, the networks are considered weighted. The layout of these networks is based on the Fruchterman-Reingold algorithm, which estimates the optimal layout so that nodes with stronger and/or more connections are placed closer to each other43.

Fig. 2: The estimated networks of 38 cancer symptoms across the "occurrence" dimension (a) without the identified communities and (b) with the identified communities (walktrap algorithm). Nodes represent symptoms and edges represent pairwise dependencies between the symptoms, after controlling for all of the other correlations of a given node. The 38 cancer symptoms represented in the nodes are coded in the following fashion: difcon: Difficulty Concentrating; pain: Pain; energy: Lack of Energy; cough: Cough; nervous: Feeling Nervous; hotflash: Hot Flashes; drymouth: Dry Mouth; nausea: Nausea; drowsy: Feeling Drowsy; numb: Numbness or Tingling in Hands or Feet; chest: Chest Tightness; difbreath: Difficulty Breathing; difsleep: Difficulty Sleeping; bloat: Feeling Bloated; urinate: Problems with Urination; vomit: Vomiting; sob: Shortness of Breath; diarrhea: Diarrhea; sad: Feeling Sad; sweats: Sweats; sexual: Problems with Sexual Interest or Activity; worry: Worrying; itch: Itching; appetite: Lack of Appetite; abdominal: Abdominal Cramps; increaseapp: Increased Appetite; wtgain: Weight Gain; dizzy: Dizziness; swallow: Difficulty Swallowing; irritable: Feeling Irritable; mouthsore: Mouth Sore; wtloss: Weight Loss; hairloss: Hair Loss; constipat: Constipation; swelling: Swelling; taste: Change in the Way Food Tastes; myself: I Do Not Look Like Myself; skin: Changes in Skin.

Fig. 3: The estimated networks of 38 cancer symptoms across the "severity" dimension (a) without the identified communities and (b) with the identified communities (walktrap algorithm). Nodes represent symptoms and edges represent a partial correlation between the symptoms, after controlling for all of the other correlations of a given node. The 38 cancer symptoms are coded as in Fig. 2.
Fig. 4: The estimated networks of 38 cancer symptoms across the "distress" dimension (a) without the identified communities and (b) with the identified communities (walktrap algorithm). Nodes represent symptoms and edges represent a partial correlation between the symptoms, after controlling for all of the other correlations of a given node. The 38 cancer symptoms are coded as in Fig. 2.

In order to gain additional insights into the structural importance of each node (i.e., symptom) in each of the networks, three centrality indices (i.e., betweenness, closeness, strength) were estimated28,44. Nodes with high centrality indices are considered core nodes in the network. Betweenness measures the number of times a node lies on the shortest path between two other nodes. This index indicates which nodes may act as bridges between other nodes in the network. Closeness summarizes the average distance of a node to all other nodes in the network. Closeness allows for the identification of node(s) (i.e., symptom(s)) that are in a position to have a substantial influence on other node(s) (i.e., other symptom(s)) in the network. Strength indicates which node has the strongest overall connections. It is calculated by summing the absolute edge weights that are connected to a specific node. Strength provides a measure for identifying the most connected node (i.e., symptom) inside a network. Figures S1–S3 in the Appendix illustrate the distribution of each symptom within each dimension (i.e., occurrence, severity, distress). These data are presented to assess whether some of our findings could be due to floor or ceiling effects that affect the properties of our centrality indices45.

Network Accuracy and Stability

Inherent in NA is the problem of obtaining network structures that are sensitive to a specific dataset, to the specific variables included in a study, and/or to the specific estimation methods used. As recommended in the literature38, we used bootstrap confidence regions to examine the certainty of the edges and tested for significance between edge weights with α = 0.05 based on 1000 bootstrap iterations. To estimate the stability of the order of the centrality indices, we used a case- and node-dropping subsetting bootstrap technique together with the correlation stability coefficient (CS-coefficient), which is an index of the stability of the centrality indices. The CS-coefficient quantifies the maximum proportion of cases or nodes, respectively, that can be dropped at random to retain, with 95% certainty, a correlation of at least 0.7 with the centralities of the original network38. While no strict cut-off value exists for the CS-coefficient, its value should be at least 0.25 and preferably higher than 0.5. In addition to the aforementioned analyses, we tested the stability of the centrality indices on four equally divided and randomly assigned subsets. This analysis showed the stability of the identified networks as well as the repeatability of the NA approach on the cancer symptom dimensions.
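These checks can be reproduced in outline with the bootnet R-package, which implements the edge-weight bootstrap and the case-/node-dropping subsetting bootstrap described above. The sketch below is an assumed workflow rather than the authors' code; `sev` is the same hypothetical severity data frame as before.

```r
library(bootnet)
library(qgraph)

net <- estimateNetwork(sev, default = "EBICglasso", tuning = 0.25)

# Accuracy of edge weights: nonparametric bootstrap with 1000 iterations.
boot_edges <- bootnet(net, nBoots = 1000, type = "nonparametric")

# Stability of the centrality order: case-dropping bootstrap and CS-coefficients.
boot_cases <- bootnet(net, nBoots = 1000, type = "case")
corStability(boot_cases)   # CS-coefficient per centrality index

# The three centrality indices themselves (cf. Fig. 5).
centralityPlot(net, include = c("Betweenness", "Closeness", "Strength"))
```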
In order to determine whether and how symptoms clustered together inside our networks, we used the walktrap algorithm46,47. The walktrap algorithm identifies communities (i.e., clusters) of nodes (i.e., symptoms) that are relatively highly connected with each other. Nodes in a community are more likely to connect to other nodes in the same community than to nodes in other communities. Each community corresponds to a connected subgraph. In Figs 2b, 3b and 4b, these communities (i.e., symptom clusters) are visualized with different colors; a minimal code sketch of this step follows.
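This sketch assumes the igraph R-package and a weighted adjacency matrix `W` taken from one of the fitted networks (e.g., `occ_fit$weiadj` above); taking absolute edge weights is an assumption made here, since walktrap expects non-negative weights.

```r
library(igraph)

g <- graph_from_adjacency_matrix(abs(W), mode = "undirected",
                                 weighted = TRUE, diag = FALSE)
wt <- cluster_walktrap(g)   # random-walk-based community detection
membership(wt)              # community (symptom cluster) label for each node
```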
Sample Characteristics
Of the 1328 patients in this study, 77.7% were female and their mean age was 57.2 (±12.4) years. The majority of the patients had breast (40.2%) or gastrointestinal (30.7%) cancer. These patients reported an average of 13.9 (±7.2) symptoms prior to their next dose of CTX. Additional sample characteristics are summarized in Table S1 in the Appendix.

Network Models of Symptom Occurrence, Severity, and Distress
For the occurrence dimension, created using the IsingFit method (see Fig. 2a), we used a gamma value of 0.25 and the OR rule for the nodewise estimation. All of the symptoms were directly or indirectly connected in the network and the network had a medium density (i.e., 36.42% of the potential connections were observed in the network). All of the connections were positive except for the connection between weight gain (wtgain) and weight loss (wtloss). For the severity dimension, created using the polychoric correlation method and the glasso algorithm (Fig. 3a), we used a tuning parameter of 0.25. All of the symptoms were directly or indirectly connected in the network and the network had a medium density (i.e., 54.48% of the potential connections were observed in the network). All of the connections were positive except for the connections between: increased appetite (increaseapp) and lack of appetite (appetite); hair loss (hairloss) and difficulty with urination (urinate); and diarrhea (diarrhea) and constipation (constipat). For the distress dimension, created using the polychoric correlation method and the glasso algorithm (Fig. 4a), we used a tuning parameter of 0.25. All of the symptoms were directly or indirectly connected in the network and the network had a medium density (i.e., 50.92% of the potential connections were observed in the network). All of the connections were positive except for the connections between: increased appetite (increaseapp) and lack of appetite (appetite); weight gain (wtgain) and weight loss (wtloss); diarrhea (diarrhea) and hot flashes (hotflash); and hot flashes and swelling of the arms and legs (swelling).
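To make this estimation step concrete, here is a minimal sketch of fitting a regularized partial-correlation network with Python/scikit-learn on simulated data (an assumption-laden stand-in: the paper fit polychoric correlations with an EBIC-tuned glasso in R, whereas this sketch uses raw Gaussian data and a fixed penalty alpha):

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1328, 38))        # stand-in patients x symptom-ratings matrix

    gl = GraphicalLasso(alpha=0.1).fit(X)  # alpha plays the role of the tuning parameter
    P = gl.precision_                      # sparse estimate of the inverse covariance

    # Edge weights of the network are the partial correlations implied by P
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)
    np.fill_diagonal(pcor, 0.0)
    # zero off-diagonal entries of pcor correspond to absent edges between symptoms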
To inspect the statistical importance and possible role of each symptom inside each of the networks, we calculated their centrality indices (Fig. 5). As shown in Supplemental Table S2 in the Appendix, for the symptom occurrence network, nausea and lack of appetite had the highest scores for all three centrality indices. For the severity network, lack of appetite had the highest scores for all three centrality indices and lack of energy had the highest scores across two centrality indices (betweenness and closeness). For the distress dimension, lack of appetite had the highest scores across all three centrality indices.

Centrality indices for the estimated networks of 38 cancer symptoms shown in Figs 2a to 4a.

Bootstrap confidence regions for the edge weights were mostly overlapping (shown in Appendix Fig. S4). The results of the case- and node-dropping bootstrap techniques that were used to estimate the stability of the centrality indices are shown in Appendix Fig. S5. Robustness analyses of the centrality indices showed the following CS-coefficients for each dimension: 1) Occurrence: 0.517 for strength, 0.128 for closeness, and 0.128 for betweenness; 2) Severity: 0.361 for strength, 0.05 for closeness, and 0.284 for betweenness; and 3) Distress: 0.361 for strength, 0.205 for closeness, and 0.128 for betweenness. Across the three symptom dimensions, node strength was the most reliable centrality index. We also obtained similar results for node strength for the four equally divided and randomly assigned subsets of patients, for each symptom dimension (i.e., occurrence, severity, distress) (see Appendix Figs S6 and S7).

Communities Within Each Symptom Dimension Network
Using the walktrap algorithm on the occurrence network (Fig. 2b), the symptoms appear to group into six main clusters: psychological symptom cluster [shown in gold], hormonal symptom cluster [shown in blue], respiratory symptom cluster [shown in green], nutritional symptom cluster [shown in white, yellow, and brown], CTX-related symptom cluster [shown in red], and pain and abdominal symptom cluster [shown in purple]. Using the walktrap algorithm on the severity network (Fig. 3b), the symptoms appear to group into five main clusters: psychological symptom cluster [shown in gold], hormonal symptom cluster [shown in blue], respiratory symptom cluster [shown in green], nutritional symptom cluster [shown in white and brown], and CTX-related symptom cluster [shown in red]. Using the walktrap algorithm on the distress network (Fig. 4b), the symptoms appear to group into seven main clusters: psychological symptom cluster [shown in gold], hormonal symptom cluster [shown in blue], respiratory symptom cluster [shown in green], nutritional symptom cluster [shown in white and brown], CTX-related symptom cluster [shown in red], GI symptom cluster [shown in pink], and epithelial symptom cluster [shown in purple]. It should be noted that, in the communities (i.e., symptom clusters) constructed using the walktrap algorithm, while a number of the symptom clusters have the same names, the specific symptoms within each of these clusters vary across the three dimensions (Table 1).

Table 1 Symptom Clusters Derived From Network Analyses of Occurrence, Severity, and Distress.

This study is the first to use NA methods to examine the relationships among 38 common symptoms in a large sample of oncology patients undergoing CTX, using ratings of occurrence, severity, and distress. The use of NA to understand the symptom experience of oncology patients has the potential to increase our knowledge of: the structural relationships among co-occurring symptoms and symptom clusters; the core symptoms driving associations between and among symptoms; and how co-occurring symptoms and symptom clusters change based on the dimension of the symptom experience that is used to create the network.

Our hypothesis that the network structure for the distress dimension would differ from the occurrence and severity dimensions was partially supported, based on visual inspection of the network structures and the larger number of symptom clusters identified in the distress network. For over four decades, emphasis has been placed on an evaluation of multiple dimensions of the symptom experience because each dimension provides distinct and useful information30,31,32,33,48,49. Occurrence data are used to identify the most common symptoms in oncology patients. Severity data are used to determine the magnitude of a specific symptom and to guide treatment decisions. An evaluation of symptom distress provides information on "the physical or mental anguish or suffering" associated with a symptom48. While symptom theory50,51,52,53 and data from studies that used the MSAS suggest that these three dimensions are distinct32,33,54,55,56, findings from our study provide the first direct evidence that the connections between and among symptoms differ depending on the symptom dimension that was used to create the network. Because oncology patients experience an average of fifteen unrelieved symptoms that are highly variable in their occurrence, severity, and distress1,2,3, an equally important question in symptom research is to determine which symptom or symptoms are driving the other symptoms. While our NA of cross-sectional data does not demonstrate causality, the centrality indices provide some insights into the structural importance of each of the symptoms within each of the networks. In terms of the occurrence network, nausea had the highest scores for all three centrality indices. In this sample, 47.48% of patients reported nausea prior to their next dose of CTX. While vomiting is well controlled with newer antiemetic regimens, nausea is a persistent symptom that compromises a patient's nutritional status, results in significant psychological distress, has a negative impact on quality of life, and can result in the discontinuation of cancer treatment57,58,59. For both the severity and distress networks, lack of appetite had the highest scores for all three centrality indices, and it was the symptom with the second highest centrality scores for the occurrence dimension. While this symptom was reported by 41.31% of the patients in this study, it is not routinely assessed in oncology patients undergoing cancer treatment. Based on network theory19,60,61,62,63, given their high centrality index scores, these symptoms may be targets for therapeutic interventions that, if successful, would reduce other symptoms in the network. While a tremendous amount of research has focused on the evaluation of symptom clusters in oncology patients4,5, our study is the first to use NA to visualize how one symptom cluster is associated with other symptom clusters. To date, the majority of the work to create symptom clusters was done using cluster analysis or factor analysis. While these approaches identified some of the most common symptom clusters in oncology patients, these symptom clusters are created as independent "factors". Our NA represents a major breakthrough in symptom cluster research. Within each dimension, our graphical representation allows us to visualize how the various symptom clusters within the network are inter-connected with other symptom clusters in the same network.
Based on network theory60,64,65, we can hypothesize that symptoms on the edges of each of the clusters may have an influence on that cluster. For example, in Fig. 2b, difficulty sleeping and hot flashes are on the edges of their respective symptom clusters. While we cannot demonstrate causality, it is known that the occurrence of hot flashes disrupts patients' sleep66,67. If our findings are confirmed in an independent sample, future NAs can evaluate for causality and test interventions to reduce symptoms across clusters. In terms of the specific symptom clusters identified for each of the symptom dimensions, our finding of a psychological symptom cluster across all three dimensions is consistent with findings from a recent review that noted that this cluster is one of the most common clusters identified in oncology patients4. The other four symptom clusters that were common across all three symptom dimensions (i.e., hormonal, respiratory, nutritional, and CTX-related) were reported in previous symptom cluster studies68,69,70,71,72. The fact that two additional and unique symptom clusters were identified within the distress network provides additional support for the hypothesis that symptom distress is a distinct dimension of the oncology patients' symptom experience. Future research will need to evaluate causality among symptoms within each of the dimension networks and whether common or distinct interventions are needed to decrease the severity and distress associated with a specific symptom.

Limitations and Future Directions
Several limitations warrant consideration. While our sample was rather large in comparison to the number of parameters estimated, the heterogeneity introduced by the specific demographic and clinical characteristics of the patients in this study may influence the stability of our estimated networks. Since this study is the first to use NA to examine the relationships among co-occurring symptoms and symptom clusters, our findings warrant replication in an independent sample of oncology patients undergoing CTX. In addition, this analysis of cross-sectional data does not allow for causal inferences on the role of each symptom within each of our networks. Finally, because no standards exist to interpret the significance and robustness of networks, and because visual interpretation of complex networks is inherently subjective, additional research is warranted to confirm our findings. In terms of directions for future research, our findings warrant replication in an independent sample with similar demographic and clinical characteristics. In addition, comparisons of network structures need to be done among different cancer diagnoses, across different stages of disease, and among different cancer treatments. The impact of various demographic (e.g., age, gender) and clinical (e.g., comorbid conditions, functional status) characteristics on the network structure of cancer symptoms warrants evaluation. Using longitudinal data, NA will allow us to explore the causal relationships among co-occurring symptoms and symptom clusters12. In this study, we used NA to investigate the relationships among 38 common symptoms in oncology patients receiving CTX. As the first NA of cancer symptoms, our work provides new insights into the inter-relationships among co-occurring symptoms and symptom clusters. Findings from this study suggest that the connections between and among symptoms may differ depending on the symptom dimension used to create the network.
Our findings suggest that distress may be a distinct dimension of a patient's symptom experience. In addition, this study provides the first visualizations of the inter-relationships among symptom clusters across three dimensions of the patients' symptom experience. While these findings warrant confirmation in an independent sample, we believe that NA has the potential to improve our understanding of the oncology patients' symptom experience so that individualized and targeted interventions can be prescribed to reduce each patient's symptom burden. The data used in this study will be available upon request and subject to ethics approval. All data requests should be sent to Christine Miaskowski ([email protected]). Papachristou, N. et al. Congruence between latent class and k-modes analyses in the identification of oncology patients with distinct symptom experiences. J. Pain Symptom Manage. 55, 318–333 (2018). Miaskowski, C. et al. Latent class analysis reveals distinct subgroups of patients based on symptom occurrence and demographic and clinical characteristics. J. Pain Symptom Manage. 50, 28–37 (2015). Esther Kim, J. E., Dodd, M. J., Aouizerat, B. E., Jahan, T. & Miaskowski, C. A review of the prevalence and impact of multiple symptoms in oncology patients. J. Pain Symptom Manage. 37, 715–736 (2009). Miaskowski, C. et al. Advancing symptom science through symptom cluster research: Expert panel proceedings and recommendations. J. Natl. Cancer Inst. 109 (2017). Miaskowski, C. Future directions in symptom cluster research. Semin. Oncol. Nurs. 32, 405–415 (2016). Barsevick, A. Defining the symptom cluster: How far have we come? Semin. Oncol. Nurs. 32, 334–350 (2016). Boccaletti, S., Latora, V., Moreno, Y., Chavez, M. & Hwang, D. Complex networks: Structure and dynamics. Phys. Rep. 424, 175–308 (2006). Albert, R. & Barabási, A. L. Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47 (2002). Strogatz, S. H. Exploring complex networks. Nature 410, 268 (2001). Wang, R. S., Maron, B. A. & Loscalzo, J. Systems medicine: evolution of systems biology from bench to bedside. Wiley Interdiscip. Rev. Syst. Biol. Med. 7, 141–161 (2015). Loscalzo, J. & Barabasi, A. L. Systems biology and the future of medicine. Wiley Interdiscip. Rev. Syst. Biol. Med. 3, 619–627 (2011). Bringmann, L. F., Lemmens, L. H., Huibers, M. J., Borsboom, D. & Tuerlinckx, F. Revealing the dynamic network structure of the Beck Depression Inventory-II. Psychol. Med. 45, 747–757 (2015). Fried, E. I., Epskamp, S., Nesse, R. M., Tuerlinckx, F. & Borsboom, D. What are 'good' depression symptoms? Comparing the centrality of DSM and non-DSM symptoms of depression in a network analysis. J. Affect. Disord. 189, 314–320 (2016). Frewen, P. A., Schmittmann, V. D., Bringmann, L. F. & Borsboom, D. Perceived causal relations between anxiety, posttraumatic stress and depression: extension to moderation, mediation, and network analysis. Eur. J. Psychotraumatol. 4 (2013). Robinaugh, D. J., LeBlanc, N. J., Vuletich, H. A. & McNally, R. J. Network analysis of persistent complex bereavement disorder in conjugally bereaved adults. J. Abnorm. Psychol. 123, 510–522 (2014). Kossakowski, J. J. et al. The application of a network approach to health-related quality of life (HRQoL): introducing a new method for assessing HRQoL in healthy adults and cancer patients. Qual. Life Res. 25, 781–792 (2016). Zou, J. & Wang, E.
Etumorrisk, an algorithm predicts cancer risk based on co-mutated gene networks in an individual's germline genome. bioRxiv, https://doi.org/10.1101/393090 (2018). McNally, R. J. Can network analysis transform psychopathology? Behav. Res. Ther. 86, 95–104 (2016). Fried, E. I. et al. Mental disorders as networks of problems: a review of recent insights. Soc. Psychiatry Psychiatr. Epidemiol. 52, 1–10 (2017). Boschloo, L., van Borkulo, C. D., Borsboom, D. & Schoevers, R. A. A prospective study on how symptoms in a network predict the onset of depression. Psychother. Psychosom. 85, 183–184 (2016). Boschloo, L. et al. The network structure of symptoms of the Diagnostic and Statistical Manual of Mental Disorders. PLoS One 10, e0137621 (2015). Borsboom, D. & Cramer, A. O. Network analysis: an integrative approach to the structure of psychopathology. Annu. Rev. Clin. Psychol. 9, 91–121 (2013). Rhemtulla, M. et al. Network analysis of substance abuse and dependence symptoms. Drug Alcohol Depend. 161, 230–237 (2016). Bhavnani, S. K. et al. The nested structure of cancer symptoms: implications for analyzing co-occurrence and managing symptoms. Methods Inf. Med. 49, 581–591 (2010). Fortunato, S. Community detection in graphs. Phys. Rep. 486, 75–174 (2010). Qiao, J., Meng, Y. Y., Chen, H., Huang, H. Q. & Li, G. Y. Modeling one-mode projection of bipartite networks by tagging vertex information. Physica A: Statistical Mechanics and its Applications 457, 270–279 (2016). Epskamp, S., Maris, G. K., Waldorp, L. J. & Borsboom, D. Network psychometrics. arXiv preprint arXiv:1609.02818 (2016). Epskamp, S. & Fried, E. I. A tutorial on regularized partial correlation networks. Psychol. Methods (2018). Koller, D. & Friedman, N. Probabilistic Graphical Models: Principles and Techniques (MIT Press, 2009). McCorkle, R. The measurement of symptom distress. Semin. Oncol. Nurs. 3, 248–256 (1987). McCorkle, R. & Young, K. Development of a symptom distress scale. Cancer Nurs. 1, 373–378 (1978). Portenoy, R. K. et al. Symptom prevalence, characteristics and distress in a cancer population. Qual. Life Res. 3, 183–189 (1994). Portenoy, R. K. et al. The Memorial Symptom Assessment Scale: an instrument for the evaluation of symptom prevalence, characteristics and distress. Eur. J. Cancer 30A, 1326–1336 (1994). Miaskowski, C. et al. The symptom phenotype of oncology outpatients remains relatively stable from prior to through 1 week following chemotherapy. Eur. J. Cancer Care (Engl.) 26 (2017). Wright, F. et al. Inflammatory pathway genes associated with inter-individual variability in the trajectories of morning and evening fatigue in patients receiving chemotherapy. Cytokine 91, 187–210 (2017). Kober, K. M. et al. Subgroups of chemotherapy patients with distinct morning and evening fatigue trajectories. Support. Care Cancer 24, 1473–1485 (2016). Barabási, A. L. & Pósfai, M. Network Science (Cambridge University Press, 2016). Epskamp, S., Borsboom, D. & Fried, E. I. Estimating psychological networks and their accuracy: a tutorial paper. Behav. Res. Methods 50, 195–212 (2018). Van Borkulo, C. D. et al. A new method for constructing networks from binary data. Sci. Rep. 4, 5918 (2014). Epskamp, S., Cramer, A. O., Waldorp, L., Schmittmann, V. & Borsboom, D. qgraph: Network visualizations of relationships in psychometric data. J. Stat. Softw. 48, 1–18 (2012). Friedman, J., Hastie, T. & Tibshirani, R.
glasso: Graphical lasso - estimation of Gaussian graphical models, https://cran.r-project.org/web/packages/glasso/ (2014). Friedman, J., Hastie, T. & Tibshirani, R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9, 432–441 (2008). Fruchterman, T. & Reingold, E. Graph drawing by force-directed placement. Software: Practice and Experience 21, 1129–1164 (1991). Opsahl, T., Agneessens, F. & Skvoretz, J. Node centrality in weighted networks: Generalizing degree and shortest paths. Soc. Networks 32, 245–251 (2010). Lewis-Beck, M., Bryman, A. & Liao, T. F. The Sage Encyclopedia of Social Science Research Methods (Sage Publications, 2003). Orman, G. & Labatut, V. A comparison of community detection algorithms on artificial networks. In International Conference on Discovery Science, 242–256 (2009). Yang, Z., Algesheimer, R. & Tessone, C. J. A comparative analysis of community detection algorithms on artificial networks. Sci. Rep. 6, 30750 (2016). Rhodes, V. A., McDaniel, R. W., Homan, S. S., Johnson, M. & Madsen, R. An instrument to measure symptom experience: symptom occurrence and symptom distress. Cancer Nurs. 23, 49–54 (2000). McClement, S. E., Woodgate, R. L. & Degner, L. Symptom distress in adult patients with cancer. Cancer Nurs. 20, 236–243 (1997). Brant, J. M., Beck, S. & Miaskowski, C. Building dynamic models and theories to advance the science of symptom management research. J. Adv. Nurs. 66, 228–240 (2010). Humphreys, J. et al. Middle Range Theory for Nursing, chap. A middle range theory of symptom management, 141–164 (2014). Lenz, E. R., Pugh, L. C., Milligan, R. A., Gift, A. & Suppe, F. The middle-range theory of unpleasant symptoms: an update. ANS Adv. Nurs. Sci. 19, 14–27 (1997). Lenz, E. R., Suppe, F., Gift, A. G., Pugh, L. C. & Milligan, R. A. Collaborative development of middle-range nursing theories: toward a theory of unpleasant symptoms. ANS Adv. Nurs. Sci. 17, 1–13 (1995). Tantoy, I. Y. et al. Differences in symptom occurrence, severity, and distress ratings between patients with gastrointestinal cancers who received chemotherapy alone or chemotherapy with targeted therapy. J. Gastrointest. Oncol. 8, 109–126 (2017). Oksholm, T. et al. Does age influence the symptom experience of lung cancer patients prior to surgery? Lung Cancer 82, 156–161 (2013). Hofsø, K., Miaskowski, C., Bjordal, K., Cooper, B. A. & Rustøen, T. Previous chemotherapy influences the symptom experience and quality of life of women with breast cancer prior to radiation therapy. Cancer Nurs. 35, 167–177 (2012). Farrell, C., Brearley, S. G., Pilling, M. & Molassiotis, A. The impact of chemotherapy-related nausea on patients' nutritional status, psychological distress and quality of life. Support. Care Cancer 21, 59–66 (2013). Molassiotis, A. et al. Validation and psychometric assessment of a short clinical scale to measure chemotherapy-induced nausea and vomiting: the MASCC Antiemesis Tool. J. Pain Symptom Manage. 34, 148–159 (2007). Molassiotis, A., Stricker, C. T., Eaby, B., Velders, L. & Coventry, P. A. Understanding the concept of chemotherapy-related nausea: the patient experience. Eur. J. Cancer Care (Engl.) 17, 444–453 (2008). Borsboom, D. A network theory of mental disorders. World Psychiatry 16, 5–13 (2017). Borsboom, D., Epskamp, S., Kievit, R. A., Cramer, A. O. & Schmittmann, V. D. Transdiagnostic networks: Commentary on Nolen-Hoeksema and Watkins (2011). Perspect. Psychol. Sci.
6, 610–614 (2011). Bringmann, L. F. et al. A network approach to psychopathology: new insights into clinical longitudinal data. PLoS One 8, e60188 (2013). Isvoranu, A. M., Borsboom, D., van Os, J. & Guloksuz, S. A network approach to environmental impact in psychotic disorder: Brief theoretical framework. Schizophr. Bull. 42, 870–873 (2016). Liu, Y. Y., Slotine, J. J. & Barabasi, A. L. Controllability of complex networks. Nature 473, 167–173 (2011). Cramer, A. O., Waldorp, L. J., van der Maas, H. L. & Borsboom, D. Comorbidity: a network perspective. Behav. Brain Sci. 33, 137–150 (2010). Gonzalez, B. D. et al. Sleep disturbance in men receiving androgen deprivation therapy for prostate cancer: The role of hot flashes and nocturia. Cancer 124, 499–506 (2018). Savard, M. H., Savard, J., Caplette-Gingras, A., Ivers, H. & Bastien, C. Relationship between objectively recorded hot flashes and sleep disturbances among breast cancer patients: investigating hot flash characteristics other than frequency. Menopause 20, 997–1005 (2013). Mazor, M. et al. Differences in symptom clusters before and twelve months after breast cancer surgery. Eur. J. Oncol. Nurs. 32, 63–72 (2018). Sullivan, C. W. et al. Stability of symptom clusters in patients with breast cancer receiving chemotherapy. J. Pain Symptom Manage. 55, 39–55 (2018). Wong, M. L. et al. Differences in symptom clusters identified using ratings of symptom occurrence vs. severity in lung cancer patients receiving chemotherapy. J. Pain Symptom Manage. 54, 194–203 (2017). Huang, J. et al. Symptom clusters in ovarian cancer patients with chemotherapy after surgery: A longitudinal survey. Cancer Nurs. 39, 106–116 (2016). Hwang, K. H., Cho, O. H. & Yoo, Y. S. Symptom clusters of ovarian cancer patients undergoing chemotherapy, and their emotional status and quality of life. Eur. J. Oncol. Nurs. 21, 215–222 (2016). We would like to thank Professor Anne Skeldon from the Department of Mathematics, University of Surrey, for her suggestion to sub-divide our sample into four similar groups and cross-check the stability of their centrality indices. Part of this study was funded by the National Cancer Institute (CA134900). In addition, this project received funding from the European Union's Horizon 2020 research and innovation programme (ACTIVAGE project) under grant agreement No. 732679. Nikolaos Papachristou and Payam Barnaghi contributed equally. Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK Nikolaos Papachristou, Payam Barnaghi & Jo Armes University of California, San Francisco, USA Bruce Cooper, Kord M. Kober, Steven M. Paul, Jon D. Levine & Christine Miaskowski University of Strathclyde, Glasgow, Scotland Roma Maguire & Lisa McCann Department of Nursing, Mount Sinai Medical Center, New York, USA Marilyn Hammer School of Nursing, Yale University, New Haven, USA Fay Wright School of Nursing, Midwifery and Health Systems, University College Dublin, Dublin, Ireland Eileen P. Furlong School of Nursing, University of Pittsburgh, Pittsburgh, USA Yvette P. Conley National and Kapodistrian University of Athens, Athens, Greece Elisabeth Patiraki Faculty of Nursing, University of Peloponnese, Sparti, Greece Stylianos Katsaragakis School of Health Sciences, University of Surrey, Guildford, UK Jo Armes Nikolaos Papachristou Payam Barnaghi Bruce Cooper Kord M. Kober Roma Maguire Steven M. Paul Lisa McCann Jon D. Levine Christine Miaskowski Christine Miaskowski, Bruce Cooper, Yvette P.
Conley, Marilyn Hammer, Kord M. Kober, Jon D. Levine, Steven M. Paul, and Fay Wright were responsible for the design and execution of the clinical study including patient recruitment and retention and data collection. Nikolaos Papachristou, Payam Barnaghi and Christine Miaskowski conceived the experiment. Nikolaos Papachristou and Payam Barnaghi conducted and analysed the results. Nikolaos Papachristou, Payam Barnaghi, Christine Miaskowski, Bruce Cooper, Roma Maguire, Yvette P. Conley, Marilyn Hammer, Stylianos Katsaragakis, Kord M. Kober, Jon D. Levine, Jo Armes, Lisa McCann, Elisabeth Patiraki, Eileen P. Furlong, Steven M. Paul and Fay Wright contributed to the interpretation of the study findings and participated in the editing and revision of the final version of the manuscript. Correspondence to Nikolaos Papachristou or Payam Barnaghi. Appendix in PDF format. Papachristou, N., Barnaghi, P., Cooper, B. et al. Network Analysis of the Multidimensional Symptom Experience of Oncology. Sci Rep 9, 2258 (2019). https://doi.org/10.1038/s41598-018-36973-1
CommonCrawl
But where will it all stop? Ambitious parents may start giving mind-enhancing pills to their children. People go to all sorts of lengths to gain an educational advantage, and eventually success might become dependent on access to these mind-improving drugs. No major studies have been conducted on the long-term effects. Some neuroscientists fear that, over time, these memory-enhancing pills may cause people to store too much detail, cluttering the brain. Read more about smart drugs here. But he has also seen patients whose propensity for self-experimentation to improve cognition got out of hand. One chief executive he treated, Ngo said, developed an unhealthy predilection for albuterol, because he felt the asthma inhaler medicine kept him alert and productive long after others had quit working. Unfortunately, the drug ended up severely imbalancing his electrolytes, which can lead to dehydration, headaches, vision and cardiac problems, muscle contractions and, in extreme cases, seizures. Oxiracetam is one of the 3 most popular -racetams; less popular than piracetam but seemingly more popular than aniracetam. Prices have come down substantially since the early 2000s and stand at around 1.2g/$, or roughly 50 cents a dose, which was low enough to experiment with; the key question: does it stack with piracetam, or is it redundant for me? (Oxiracetam can't compete on price with my piracetam stockpile: the latter is now a sunk cost and hence free.) Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to know how well these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier. Therefore, it was the rat's motivation to run the maze, not its increased cognitive ability, that affected the performance. Thus, it is important to be very careful when interpreting changes observed in these types of animal learning and memory experiments. Neuroplasticity, or the brain's ability to change and reorganize itself in response to intrinsic and extrinsic factors, indicates great potential for us to enhance brain function by medical or other interventions. Psychotherapy has been shown to induce structural changes in the brain. Other interventions that positively influence neuroplasticity include meditation, mindfulness, and compassion. I don't believe there's any need to control for training with repeated within-subject sampling, since there will be as many samples on both control and active days drawn from the later trained period as with the initial untrained period. But yes, my D5B scores seem to have pretty much plateaued and now increase only very slowly; you can look at the stats file yourself.
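For what it's worth, the end-of-run comparison implied by this kind of blinded design can be sketched in a few lines (a minimal example with made-up scores; the condition labels are only revealed after the run ends, and a Welch t-test is just one reasonable choice of comparison):

    import numpy as np
    from scipy import stats

    # Made-up daily scores and the conditions revealed at unblinding (True = active day)
    scores = np.array([48.0, 52.0, 50.0, 55.0, 47.0, 53.0, 49.0, 54.0])
    active = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=bool)

    t, p = stats.ttest_ind(scores[active], scores[~active], equal_var=False)
    print(scores[active].mean(), scores[~active].mean(), p)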
"In the hospital and ICU struggles, this book and Cavin's experience are golden, and if we'd have had this book's special attention to feeding tube nutrition, my son would be alive today sitting right here along with me saying it was the cod liver oil, the fish oil, and other nutrients able to be fed to him instead of the junk in the pharmacy tubes, that got him past the liver-test results, past the internal bleeding, past the brain difficulties controlling so many response-obstacles back then. Back then, the 'experts' in rural hospitals were unwilling to listen, ignored my son's unexpected turnaround when we used codliver oil transdermally on his sore skin, threatened instead to throw me out, but Cavin has his own proof and his accumulated experience in others' journeys. Cavin's boxed areas of notes throughout the book on applying the brain nutrient concepts in feeding tubes are powerful stuff, details to grab onto and run with… hammer them! (I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I'm shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.) Take at 11 AM; distractions ensue and the Christmas tree-cutting also takes up much of the day. By 7 PM, I am exhausted and in a bad mood. While I don't expect day-time modafinil to buoy me up, I do expect it to at least buffer me against being tired, and so I conclude placebo this time, and with more confidence than yesterday (65%). I check before bed, and it was placebo. Compared with those reporting no use, subjects drinking >4 cups/day of decaffeinated coffee were at increased risk of RA [rheumatoid arthritis] (RR 2.58, 95% CI 1.63-4.06). In contrast, women consuming >3 cups/day of tea displayed a decreased risk of RA (RR 0.39, 95% CI 0.16-0.97) compared with women who never drank tea. Caffeinated coffee and daily caffeine intake were not associated with the development of RA. Many people prefer the privacy and convenience of ordering brain boosting supplements online and having them delivered right to the front door. At Smart Pill Guide, we have made the process easier, so you can place your order directly through our website with your major credit card or PayPal. Our website is secure, so your personal information is protected and all orders are completely confidential. Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis. Before you try nootropics, I suggest you start with the basics: get rid of the things in your diet and life that reduce cognitive performance first. That is easiest. Then, add in energizers like Brain Octane and clean up your diet. Then, go for the herbals and the natural nootropics. Use the pharmaceuticals selectively only after you've figured out your basics. 
Before taking any supplement or chemical, people want to know if there will be long-term effects or consequences. When Dr. Corneliu Giurgea coined the term "nootropics" in 1972, he also outlined the characteristics that define nootropics. Besides the ability to benefit memory and support the cognitive processes, Dr. Giurgea believed that nootropics should be safe and non-toxic. Caffeine dose-dependently decreased the 1,25(OH)2D3-induced VDR expression, and at concentrations of 1 and 10 mM, VDR expression was decreased by about 50-70%, respectively. In addition, the 1,25(OH)2D3-induced alkaline phosphatase activity was also reduced at similar doses, thus affecting the osteoblastic function. The basal ALP activity was not affected by increasing doses of caffeine. Overall, our results suggest that caffeine affects 1,25(OH)2D3-stimulated VDR protein expression and 1,25(OH)2D3-mediated actions in human osteoblast cells. A study published in Neuropsychopharmacology in August 2002 reported that Bacopa monnieri decreases the rate of forgetting newly acquired information, and affects memory consolidation and the verbal learning rate. It also helps in enhancing nerve impulse transmission, which leads to increased alertness. It is also known to relieve the effects of anxiety and depression. All these benefits are attributed to Bacopa monnieri activating choline acetyltransferase and inhibiting acetylcholinesterase, which enhances the levels of acetylcholine in the brain, a chemical that is associated with improved memory and attention. Absorption of nicotine across biological membranes depends on pH. Nicotine is a weak base with a pKa of 8.0 (Fowler, 1954). In its ionized state, such as in acidic environments, nicotine does not rapidly cross membranes… About 80 to 90% of inhaled nicotine is absorbed during smoking as assessed using C14-nicotine (Armitage et al., 1975). The efficacy of absorption of nicotine from environmental smoke in nonsmoking women has been measured to be 60 to 80% (Iwase et al., 1991)… The various formulations of nicotine replacement therapy (NRT), such as nicotine gum, transdermal patch, nasal spray, inhaler, sublingual tablets, and lozenges, are buffered to alkaline pH to facilitate the absorption of nicotine through cell membranes. Absorption of nicotine from all NRTs is slower and the increase in nicotine blood levels more gradual than from smoking (Table 1). This slow increase in blood and especially brain levels results in low abuse liability of NRTs (Henningfield and Keenan, 1993; West et al., 2000). Only nasal spray provides a rapid delivery of nicotine that is closer to the rate of nicotine delivery achieved with smoking (Sutherland et al., 1992; Gourlay and Benowitz, 1997; Guthrie et al., 1999). The absolute dose of nicotine absorbed systemically from nicotine gum is much less than the nicotine content of the gum, in part, because considerable nicotine is swallowed with subsequent first-pass metabolism (Benowitz et al., 1987). Some nicotine is also retained in chewed gum. A portion of the nicotine dose is swallowed and subjected to first-pass metabolism when using other NRTs, inhaler, sublingual tablets, nasal spray, and lozenges (Johansson et al., 1991; Bergstrom et al., 1995; Lunell et al., 1996; Molander and Lunell, 2001; Choi et al., 2003).
Bioavailability for these products with absorption mainly through the mucosa of the oral cavity and a considerable swallowed portion is about 50 to 80% (Table 1)…Nicotine is poorly absorbed from the stomach because it is protonated (ionized) in the acidic gastric fluid, but is well absorbed in the small intestine, which has a more alkaline pH and a large surface area. Following the administration of nicotine capsules or nicotine in solution, peak concentrations are reached in about 1 h (Benowitz et al., 1991; Zins et al., 1997; Dempsey et al., 2004). The oral bioavailability of nicotine is about 20 to 45% (Benowitz et al., 1991; Compton et al., 1997; Zins et al., 1997). Oral bioavailability is incomplete because of the hepatic first-pass metabolism. Also the bioavailability after colonic (enema) administration of nicotine (examined as a potential therapy for ulcerative colitis) is low, around 15 to 25%, presumably due to hepatic first-pass metabolism (Zins et al., 1997). Cotinine is much more polar than nicotine, is metabolized more slowly, and undergoes little, if any, first-pass metabolism after oral dosing (Benowitz et al., 1983b; De Schepper et al., 1987; Zevin et al., 1997). Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops. As mentioned earlier, cognitive control is needed not only for inhibiting actions, but also for shifting from one kind of action or mental set to another. The WCST taxes cognitive control by requiring the subject to shift from sorting cards by one dimension (e.g., shape) to another (e.g., color); failures of cognitive control in this task are manifest as perseverative errors in which subjects continue sorting by the previously successful dimension. Three studies included the WCST in their investigations of the effects of d-AMP on cognition (Fleming et al., 1995; Mattay et al., 1996, 2003), and none revealed overall effects of facilitation. However, Mattay et al. (2003) subdivided their subjects according to COMT genotype and found differences in both placebo performance and effects of the drug. Subjects who were homozygous for the val allele (associated with lower prefrontal dopamine activity) made more perseverative errors on placebo than other subjects and improved significantly with d-AMP. Subjects who were homozygous for the met allele performed best on placebo and made more errors on d-AMP. Most epidemiological research on nonmedical stimulant use has been focused on issues relevant to traditional problems of drug abuse and addiction, and so, stimulant use for cognitive enhancement is not generally distinguished from use for other purposes, such as staying awake or getting high. As Boyd and McCabe (2008) pointed out, the large national surveys of nonmedical prescription drug use have so far failed to distinguish the ways and reasons that people use the drugs, and this is certainly true where prescription stimulants are concerned. 
The largest survey to investigate prescription stimulant use in a nationally representative sample of Americans, the National Survey on Drug Use and Health (NSDUH), phrases the question about nonmedical use as follows: "Have you ever, even once, used any of these stimulants when they were not prescribed for you or that you took only for the experience or feeling they caused?" (Snodgrass & LeBaron, 2007). This phrasing does not strictly exclude use for cognitive enhancement, but it emphasizes the noncognitive effects of the drugs. In 2008, the NSDUH found a prevalence of 8.5% for lifetime nonmedical stimulant use by Americans over the age of 12 years and a prevalence of 12.3% for Americans between 21 and 25 (Substance Abuse and Mental Health Services Administration, 2009). Most people I talk to about modafinil seem to use it for daytime usage; for me that has never worked out well, but I had nothing in particular to show against it. So, as I was capping the last of my piracetam-caffeine mix and clearing off my desk, I put the 4 remaining Modalert pills into capsules with the last of my creatine powder and then mixed them with 4 of the theanine-creatine pills. Like the previous Adderall trial, I will pick one pill blindly each day and guess at the end which it was. If it was active (modafinil-creatine), take a break the next day; if placebo (theanine-creatine), replace the placebo and try again the next day. We'll see if I notice anything on DNB or possibly gwern.net edits.
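A back-of-the-envelope check of this pick-one-pill-blindly design (a minimal sketch; the 8-day run and the 7-of-8 threshold are made-up numbers for illustration): under the null hypothesis that the pills are indistinguishable, each daily guess is a fair coin flip, so the chance of guessing at least k of n days correctly is a binomial tail probability:

    from math import comb

    def p_at_least(n, k):
        """P(at least k correct out of n blind guesses) under pure chance (p = 0.5)."""
        return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

    print(p_at_least(8, 7))  # ~0.035: guessing 7+ of 8 right would suggest real discrimination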
Yet some researchers point out these drugs may not be enhancing cognition directly, but simply improving the user's state of mind – making work more pleasurable and enhancing focus. "I'm just not seeing the evidence that indicates these are clear cognition enhancers," says Martin Sarter, a professor at the University of Michigan, who thinks they may be achieving their effects by relieving tiredness and boredom. "What most of these are actually doing is enabling the person who's taking them to focus," says Steven Rose, emeritus professor of life sciences at the Open University. "It's peripheral to the learning process itself." Metabolic function smart drugs provide mental benefits by generally facilitating the body's metabolic processes related to the production of new tissues and the release of energy from food and fat stores. Creatine, a long-time favorite performance-enhancement drug for competitive athletes, was in the news recently when it was found in a double-blind, placebo-controlled crossover trial to have significant cognitive benefits – including both general speed of cognition and improvements in working memory. Ginkgo biloba is another metabolic function smart drug used to increase memory and improve circulation – however, news from recent studies raises questions about these purported effects. During the 1920s, amphetamine was being researched as an asthma medication when its cognitive benefits were accidentally discovered. In the years that followed, this enhancer was exploited in a number of medical and nonmedical applications, for instance, to enhance alertness in military personnel, treat depression, improve athletic performance, etc. But, thanks to the efforts of a number of remarkable scientists, researchers and plain-old neurohackers, we are beginning to put together a "whole systems" model of how all the different parts of the human brain work together and how they mesh with the complex regulatory structures of the body. It's going to take a lot more data and collaboration to dial this model in, but already we are empowered to design stacks that can meaningfully deliver on the promise of nootropics "to enhance the quality of subjective experience and promote cognitive health, while having extremely low toxicity and possessing very few side effects." It's a type of brain hacking that is intended to produce noticeable cognitive benefits. Participants (n=205) [young adults aged 18-30 years] were recruited between July 2010 and January 2011, and were randomized to receive either a daily 150 µg (0.15 mg) iodine supplement or a daily placebo supplement for 32 weeks… After adjusting for baseline cognitive test score, examiner, age, sex, income, and ethnicity, iodine supplementation did not significantly predict 32-week cognitive test scores for Block Design (p=0.385), Digit Span Backward (p=0.474), Matrix Reasoning (p=0.885), Symbol Search (p=0.844), Visual Puzzles (p=0.675), Coding (p=0.858), and Letter-Number Sequencing (p=0.408). "There seems to be a growing percentage of intellectual workers in Silicon Valley and Wall Street using nootropics. They are akin to intellectual professional athletes where the stakes and competition is high," says Geoffrey Woo, the CEO and co-founder of nutrition company HVMN, which produces a line of nootropic supplements. Denton agrees. "I think nootropics just make things more and more competitive. The ease of access to Chinese, Russian intellectual capital in the United States, for example, is increasing. And there is a willingness to get any possible edge that's available." I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here, as demonstrated in brain initiatives such as Targeted Neuroplasticity Training (TNT), Augmented Cognition, and high-quality interface systems such as its Next-Generation Nonsurgical Neurotechnology (N3). There is no official data on their usage, but nootropics as well as other smart drugs appear popular in Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, an LA-based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says. The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007).
In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime. While the primary effect of the drug is massive muscle growth, the psychological side effects actually improved his sanity by an absurd degree. He went from barely functional to highly productive. When one observes that the decision to not attempt to fulfill one's CEV at a given moment is a bad decision, it follows that, all else being equal, improved motivation is improved sanity. Low-dose lithium orotate is extremely cheap, ~$10 a year. There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5 mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I'm doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is the NPV of the difference between taking and not taking, times the quality of information, times the expectation: $\frac{10 - 0}{\ln 1.05} \times 0.75 \times 0.40 = 61.4$, which justifies a time investment of less than 9 hours (a quick numerical check of this arithmetic appears at the end of this passage). As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit. The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and it is unclear how successful it was, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. Another Longecity user ran a self-experiment, with some design advice from me, where he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. I analyzed his data, and his scores did seem to improve, but his scores improved so much in the last part of the self-experiment that I found myself dubious as to what was going on - possibly a failure of randomness given too few blocks, and a temporal exogenous factor in the last quarter which was responsible for the improvement. We reviewed recent studies concerning prescription stimulant use specifically among students in the United States and Canada, using the method illustrated in Figure 1. Although less informative about the general population, these studies included questions about students' specific reasons for using the drugs, as well as frequency of use and means of obtaining them. These studies typically found rates of use greater than those reported by the nationwide NSDUH or the MTF surveys. This probably reflects a true difference in rates of usage among the different populations.
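A quick numerical check of the NPV arithmetic above (a minimal sketch; the $7/hour conversion at the end is my assumption, chosen because it reproduces the "less than 9 hours" conclusion, and the 61.4 in the text agrees with this up to rounding):

    from math import log

    npv = (10 - 0) / log(1.05)   # present value of a $10/year perpetuity at a 5% discount rate: ~205.0
    value = npv * 0.75 * 0.40    # times quality of information, times P(large effect): ~61.5
    print(npv, value, value / 7) # 61.5 / $7 per hour ~ 8.8, i.e. "less than 9 hours"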
In support of that conclusion about differing rates of use, the NSDUH data for college-age Americans showed that college students were considerably more likely than nonstudents of the same age to use prescription stimulants nonmedically (odds ratio: 2.76; Herman-Stahl, Krebs, Kroutil, & Heller, 2007). Depending on where you live, some nootropics may not be sold over the counter, but they are usually available online. The law regarding nootropics can vary massively around the world, so be sure to do your homework before you purchase something for the first time. Be particularly cautious when importing smart drugs, because quality control and regulations abroad are not always as stringent as they are in the US. Do not put your health at risk if all you are trying to do is gain an edge in a competitive sport.
CommonCrawl
Student Algebra-Number Theory Seminar
Organizers: Shin Eui Song
When: Fridays @ 4PM
Archives: 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021
Introduction to p-adic Hodge theory
Speaker: Shin Eui Song (University of Maryland, College Park) - https://www.math.umd.edu/~sesong
When: Fri, September 11, 2020 - 4:00pm
Abstract: We introduce p-adic Hodge theory by describing two different aspects of the theory. The arithmetic side of the theory concerns p-adic representations of the absolute Galois group of a p-adic field. We discuss a toy example of how a p-adic representation arising from an elliptic curve comes from a (semi-)linear algebraic object. On the other hand, the geometric side of the theory concerns the comparison of various cohomology theories of proper smooth varieties. If time permits, we discuss Fontaine's formalism connecting the two perspectives.
Dirac operators in representation theory
Speaker: Arghya Sadhukhan (UMD)
Abstract: The Dirac operator was introduced into representation theory in the 1970s by Parthasarathy and Atiyah-Schmid in the context of discrete series representations of real reductive Lie groups. Since then, Dirac operators have played a crucial role in the enormous project of computing the unitary dual of such groups. By work of Harish-Chandra, the study of irreducible unitary representations of G can be reduced to the analysis of their associated (g, K)-modules, thus converting the problem of understanding typically infinite-dimensional analytic objects into a purely algebraic one. To further study these (g, K)-modules via their infinitesimal characters, one is then led to consider the action of the Dirac operator on them. By a conjecture of Vogan settled in the early 2000s, we can identify such infinitesimal characters for (g, K)-modules having non-vanishing Dirac cohomology. I'll survey these developments and, time permitting, discuss some of their ramifications in classical topics, such as the generalized Weyl character formula and the Borel-Weil-Bott theorem.
p-divisible groups, Dieudonné modules and deformations
Speaker: Weimin Jiang (UMD)
Abstract: In this talk, we will first motivate the definition of a p-divisible group by looking at the Tate modules of geometric objects in characteristic p. Then we will try to use Dieudonné modules, certain (semi-)linear algebraic objects, to classify p-divisible groups. Finally, we will talk about the deformation space of 1-dimensional p-divisible groups and its importance in local Langlands (if time permits).
The equivalence of the constructible topology and the ultrafilter topology
Speaker: Do Hoon Kim (UMD)
When: Fri, October 2, 2020 - 4:00pm
Abstract: Let A be a commutative ring. It is well known that the Zariski topology on Spec(A) is generally not Hausdorff. The constructible topology is a refinement of the Zariski topology that is always compact and Hausdorff. In this paper, we define a new topology on Spec(A) using the notion of ultrafilters and show that this topology is equivalent to the constructible topology. In particular, the definition of the ultrafilter topology gives a full description of all the closed sets in the constructible topology.
$P_{L^+G} (\mathrm{Gr}_G)$: The left-hand side of the geometric Satake correspondence
Speaker: Jackson Hopper (UMD)
Abstract: Given a reductive group $G$, the geometric Satake correspondence is an isomorphism between two $\overline{\mathbb{Q}}_{\ell}$-linear tensor categories associated to $G$.
This isomorphism builds a bridge between two interpretations of an important category, allowing us to apply methods of study naturally suited to either description, and it has even been used to prove a form of the Langlands correspondence for function fields. On the right-hand side, we have the familiar category of finite-dimensional $\ell$-adic representations of the Langlands dual group $\hat{G}$, equipped with the usual tensor product $\otimes$. On the left-hand side, we have a more mysterious category whose definition and basic properties I will establish in this talk. This will build on previous talks defining both the affine Grassmannian $\mathrm{Gr}_G$ and perverse sheaves. I will extend the definition of perverse sheaves to apply to ind-(separated finite-type schemes), define group equivariance of perverse sheaves, and classify the simple objects of the category $P_{L^+G} (\mathrm{Gr}_G)$. I will then prove the category is semisimple, and define the convolution product $\star$ on the simple objects. At this point we can state the theorem of the geometric Satake correspondence in full awareness of what both sides mean.
Perfect submonoids of dominant weights + something for classification of reductive monoids
Speaker: Chengze Duan (UMD)
When: Fri, October 16, 2020 - 4:00pm
Abstract: Vinberg introduced the notion of perfect submonoids of dominant weights of G in the study of Vinberg monoids. These perfect submonoids are closely related to tensor product decomposition, which is important in the representation theory of algebraic groups. I'll talk about an explicit description of such submonoids of dominant weights and their relation to the classification of reductive monoids and Vinberg monoids (if time permits).
Local Langlands correspondence and the Fargues-Fontaine curve
Speaker: Shin Eui Song (UMD) - https://www.math.umd.edu/~sesong/index.html
Abstract: An important recent development in the local Langlands correspondence for an arbitrary reductive group $G$ over the $p$-adic numbers $\mathbb{Q}_p$ is the construction, by Fargues and Scholze, of a map that sends a smooth irreducible representation of $G(\mathbb{Q}_p)$ to a semi-simple Langlands parameter. The main strategy is to associate a smooth irreducible representation to a sheaf on $\mathrm{Bun}_G$, the moduli space of $G$-bundles on the Fargues-Fontaine curve. In this survey, we briefly review the general strategy for constructing the Fargues-Scholze map. Then we introduce two constructions of a Fargues-Fontaine curve over a perfectoid field: schematic and adic. In both versions, we uniformize the $p$-adic (resp. perfectoid) punctured unit disk to construct a Fargues-Fontaine curve; the two versions are then related via a GAGA-type theorem.
Of polynomials, representations et al.
Abstract: Introduced in [KL79] in a study of Springer representations of the Weyl group, Kazhdan-Lusztig polynomials have since been ubiquitous in representation theory. Indexed by pairs of elements in Coxeter groups, they have non-negative integer coefficients – a deep fact first proved employing perverse sheaves and the BBD theorem (for Weyl groups), and later by Elias-Williamson via Soergel bimodules. Among other things, they describe the multiplicities of simple modules in Verma modules for $\mathfrak{g}$ (and therefore give character formulas for such simple modules), encode information about the intersection homology of Schubert varieties, and can be used to construct interesting representations of Hecke algebras using Lusztig's idea of cells in the Weyl group.
I'll discuss different facets of this beautiful story and, if time permits, we will see some recent developments in the work of Elias-Williamson.

Deformations of Galois Representations
Speaker: Steven Jin (UMD)
When: Fri, November 6, 2020 - 4:00pm
Abstract: Let $\Pi$ be a profinite group and $h:A_1\to A_0$ a continuous homomorphism of local rings. This induces a map $\tilde{h}:GL_N(A_1)\to GL_N(A_0)$. If $\rho_0:\Pi\to GL_N(A_0)$ is a continuous representation, a deformation of $\rho_0$ to $A_1$ is a certain equivalence class of liftings $\rho_1:\Pi\to GL_N(A_1)$. When $\Pi$ is taken to be the Galois group of a number field (or of the maximal algebraic extension of a number field unramified outside of a finite set of primes), it makes sense to speak of Galois deformations. In this talk, we will survey the theory of Galois deformations. Throughout, we will also touch on a number of arithmetic and geometric applications to motivate the study.

Vinberg monoids and their application to affine Springer fibers
When: Fri, November 13, 2020 - 4:00pm
Abstract: Vinberg monoids have many applications, and one of them is in the theory of affine Springer fibers. Many important features of affine Springer fibers, such as nonemptiness, the dimension formula, and possibly irreducible components, can be extended to generalized affine Springer fibers using Vinberg monoids. In this talk I will first describe the theory and construction of Vinberg monoids and the theory of ordinary affine Springer fibers, and then turn to this application of Vinberg monoids.

Non-archimedean geometry, through Tate, Berkovich and Huber
When: Fri, December 4, 2020 - 4:00pm
Abstract: This will be an introductory talk on non-archimedean geometry in the styles of Tate, Berkovich and Huber. We can do "geometry" over any field (even over any base scheme) using the tools of algebraic geometry, but when the ground field is equipped with a complete absolute value, we hope to get something more. Classical examples and methods from real and complex geometry do not adapt well to the non-archimedean case, which has led to the discovery of various constructions. We will mostly focus on Tate's rigid analytic spaces and Huber's adic spaces.
Braiding statistics of anyons from a non-Abelian Chern-Simons theory

Given a 2+1D Abelian K-matrix Chern-Simons theory (with a multiplet of internal gauge fields $a_I$) with partition function
$$ Z=\exp\left[i\int\Big( \frac{1}{4\pi} K_{IJ}\, a_I \wedge d a_J + a \wedge * j(\ell_m)+ a \wedge * j(\ell_n)\Big)\right] $$
and anyons (Wilson lines) of $j(\ell_m)$ and $j(\ell_n)$: one can integrate out the internal gauge field $a$ to get a Hopf term, which we interpret as the braiding statistics angle, i.e. the phase gained by the full wave function of the system when we do a full braiding between two anyons:
$$ \exp\left[i\theta_{ab}\right]\equiv\exp\left[i 2 \pi\,\ell_{a,I}^{} K^{-1}_{IJ} \ell_{b,J}^{}\right], $$
see also this paper and this paper.

I would like to know the way(s) to obtain the braiding statistics of anyons from a non-Abelian Chern-Simons theory (generically, it should be a matrix). How does one obtain this braiding matrix from a non-Abelian Chern-Simons theory?

topological-field-theory topological-order spin-statistics chern-simons-theory anyons
wonderich

The (unitary) "phase" factor for non-Abelian anyons satisfies the (non-Abelian) Knizhnik-Zamolodchikov equation:
$$\Big(\frac{\partial}{\partial z_{\alpha}} + \frac{1}{2\pi k} \sum_{\beta \neq \alpha} \frac{Q^a_{\alpha}Q^a_{\beta}}{z_{\alpha} - z_{\beta}}\Big)U(z_1, \ldots, z_N) = 0,$$
where $z_{\alpha}$ is the complex-plane coordinate of the particle $\alpha$, $Q^a_{\alpha}$ is the matrix representative of the $a$-th gauge group generator of the particle $\alpha$, and $k$ is the level. Please see the following two articles by Lee and Oh (article-1, article-2). In the first article they explicitly write the solution in the case of the two-body problem:
$$U(z_1, z_2) = \exp\left( i\frac{Q^a_1 Q^a_2}{2\pi k} \ln(z_1-z_2)\right).$$
The articles describe the method of solution: The non-Abelian phase factor can be obtained from a quantum mechanical model of $N$ particles on the plane, each belonging possibly to a different representation of the gauge group, minimally coupled to a gauge field with a Chern-Simons term in the Lagrangian. The classical field equations of the gauge potential can be exactly solved and substituted into the Hamiltonian. The reduced Hamiltonian can also be exactly solved. Its solution is given by the action of a unitary phase factor on a symmetric wave function. This factor satisfies the Knizhnik-Zamolodchikov equation. The unitary phase factor lives in the tensor product Hilbert space of the individual particle representations. The wave function is a holomorphic function of the $N$ points in the plane, valued in this Hilbert space.

David Bar Moshe
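As a concrete foil for the non-Abelian question, the Abelian formula quoted above can be evaluated mechanically once $K$ and the charge vectors are fixed. A minimal numerical sketch (NumPy assumed; the $K$ matrix and charge vectors are illustrative choices, picked to reproduce the familiar $\mathbb{Z}_2$ gauge theory, and are not taken from the question):

```python
import numpy as np

# Abelian K-matrix data: Z2 gauge theory (toric code), with l_e and l_m
# the charge vectors coupling the two Wilson lines to the gauge multiplet.
K = np.array([[0, 2],
              [2, 0]])
l_e = np.array([1, 0])
l_m = np.array([0, 1])

K_inv = np.linalg.inv(K)

def theta(l_a, l_b):
    """Phase angle gained under a full braid of anyon a around anyon b."""
    return 2 * np.pi * (l_a @ K_inv @ l_b)

print(theta(l_e, l_m) / np.pi)  # 1.0 -> e^{i pi} = -1 (mutual semionic statistics)
print(theta(l_e, l_e) / np.pi)  # 0.0 -> e braids trivially around itself
```

In the non-Abelian case this single number is replaced by a unitary matrix acting on a degenerate state space, which is exactly the point stressed in the answer by Xiao-Gang Wen further down.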
$\begingroup$ Thanks David. You offer another way I did not learn before. How to connect this viewpoint to a non-Ab Chern-Simons theory? (the way I knew before was the way in Witten's Jones polynomial paper and the Wilson loop approach. Is there a connection to this Wilson loop approach?) $\endgroup$ – wonderich Dec 17 '13 at 17:47

$\begingroup$ @Idear These two approaches are very similar (but not exactly equivalent). Actually, Witten in his Jones polynomial paper (on page 365) refers to this similarity and asserts that the Wilson loop can be "thought of" as the trajectory of a particle in 2+1 dimensions. Witten refers to a famous paper by Polyakov adopting the strategy that Lee and Oh used later. This is only one of the numerous issues that Witten merely talked about in his Jones polynomial paper (without giving a single formula) which proved to be very fruitful for subsequent research. $\endgroup$ – David Bar Moshe Dec 19 '13 at 10:54

$\begingroup$ @Idear cont. Witten's approach is more "thermodynamical" and he prefers to see the traces of the non-Abelian statistics in the partition function. To be more precise (and this fact was also written in not so many words in Witten's paper): A non-Abelian Wilson loop can be thought of as a particle moving on a group or a flag manifold stuck to the boundary in the limit where its mass vanishes. Then its dynamics restricts upon quantization to the lowest Landau level, producing the correct Wilson loop insertion. $\endgroup$ – David Bar Moshe Dec 19 '13 at 10:55

$\begingroup$ Does the Knizhnik-Zamolodchikov equation also hold for other surfaces? Like the torus or the sphere? $\endgroup$ – Hamurabi Dec 30 '13 at 0:24

$\begingroup$ @Hamurabi There exist generalizations of the Knizhnik-Zamolodchikov equation, for example arXiv:hep-th/9510143, arXiv:hep-th/9410091, for elliptic curves and Riemann surfaces. $\endgroup$ – David Bar Moshe Dec 30 '13 at 16:03

How to obtain this braiding matrix from a non-Abelian Chern-Simons theory? To obtain the braiding matrix $U^{ab}$ for particles $a$ and $b$, we first need to know the dimension of the matrix. However, the dimension of the matrix for a non-Abelian Chern-Simons theory is NOT determined by $a$ and $b$ alone. Say we put four particles $a,b,c,d$ on a sphere; the dimension of the space of degenerate ground states depends on $a,b,c,d$. So even the dimension of the braiding matrix $U^{ab}$ depends on $c$ and $d$. The "braiding matrix" $U^{ab}$ is not determined by the two particles $a$ and $b$. Bottom line: physically, the non-Abelian statistics is not described by the "braiding matrix" of the two particles $a$ and $b$, but by a modular tensor category.

Xiao-Gang Wen
Cognitive Research: Principles and Implications

Lineup fairness: propitious heterogeneity and the diagnostic feature-detection hypothesis

Curt A. Carlson (ORCID: 0000-0002-3909-7773), Alyssa R. Jones, Jane E. Whittington, Robert F. Lockamyeir, Maria A. Carlson & Alex R. Wooten

Cognitive Research: Principles and Implications, volume 4, Article number: 20 (2019)

Researchers have argued that simultaneous lineups should follow the principle of propitious heterogeneity, based on the idea that if the fillers are too similar to the perpetrator even an eyewitness with a good memory could fail to correctly identify him. A similar prediction can be derived from the diagnostic feature-detection (DFD) hypothesis, such that discriminability will decrease if too few features are present that can distinguish between innocent and guilty suspects. Our first experiment tested these predictions by controlling similarity with artificial faces, and our second experiment utilized a more ecologically valid eyewitness identification paradigm. Our results support propitious heterogeneity and the DFD hypothesis by showing that: 1) as the facial features in lineups become increasingly homogenous, empirical discriminability decreases; and 2) lineups with description-matched fillers generally yield higher empirical discriminability than those with suspect-matched fillers.

Mistaken eyewitness identification is one of the primary factors involved in wrongful convictions, and the simultaneous lineup is a common procedure for testing eyewitness memory. It is critical to present a fair lineup to an eyewitness, such that the suspect does not stand out from the fillers (known-innocent individuals in the lineup). However, it is also theoretically possible to have a lineup with fillers that are too similar to the suspect, such that even an eyewitness with a good memory for the perpetrator may struggle to identify him. Our first experiment tested undergraduate participants with a series of lineups containing computer-generated faces so that we could control for very high levels of similarity by manipulating the homogeneity of facial features. In support of two theories of eyewitness identification (propitious heterogeneity and diagnostic feature-detection), the overall accuracy of identifications was worst at the highest level of similarity. Our second and final experiment investigated two common methods of creating fair lineups: selecting fillers based on matching the description of the perpetrator provided by eyewitnesses, or matching a suspect who has already been apprehended. A nationwide sample of participants from a wide variety of backgrounds watched a mock crime video and later made a decision for a simultaneous lineup. We found that description-matched lineups produced higher eyewitness identification accuracy than suspect-matched lineups, which could be due in part to the higher similarity between fillers and suspect for suspect-matched lineups. These results have theoretical importance for researchers and also practical importance for the police when constructing lineups.

Mistaken eyewitness identification (ID) remains the primary contributing factor to the over 350 false convictions revealed by DNA exonerations (Innocence Project, 2019), and is a factor in 29% of the over 2200 exonerations nationally (National Registry of Exonerations, 2018).
As a result, psychological scientists continue to study the problem, researching aspects of the crime as well as the ID procedure and other issues. Here, we investigate how police should select fillers for lineups in order to maximize eyewitness accuracy. A lineup should be constructed so that the suspect does not stand out, with reasonably similar fillers (e.g., Doob & Kirshenbaum, 1973; Lindsay & Wells, 1980; Malpass, 1981; National Institute of Justice, 1999). Often the goal is to reduce bias toward the suspect in a lineup (Lindsay, 1994), but sometimes the issue of too much filler similarity is addressed. For example, Lindsay and Wells (1980) found that using fillers that matched the perpetrator's description, as opposed to matching the suspect, reduced false IDs more than correct IDs (see also Luus & Wells, 1991). They concluded that eyewitness ID accuracy is best if the fillers do not match the suspect too poorly (see also Lindsay & Pozzulo, 1999) and do not match the suspect too well, as they can when matched to the suspect rather than description of the perpetrator. This recommendation to avoid a kind of upper limit of filler similarity is based largely on investigating the impact of different filler selection methods (e.g., match to description versus match to suspect) on correct ID rates separately from false ID rates. Usually the recommended procedure is the one that reduces the false ID rate without significantly reducing the correct ID rate (e.g., Lindsay & Pozzulo, 1999). However, Clark (2012) showed that these kinds of "no cost" arguments do not hold under scrutiny. The true pattern of results that arises when manipulating variables to enhance the performance of eyewitnesses is a tradeoff, such that a manipulation (e.g., unbiased lineup instructions, more similar fillers, sequential presentation of lineup members) tends to lower both false and correct IDs. The best method for determining whether system variable manipulations are producing a tradeoff or actually affecting eyewitness accuracy is receiver operating characteristic (ROC) analysis (Footnote 1) (e.g., Gronlund, Wixted, & Mickes, 2014; Mickes, Flowe, & Wixted, 2012; Wixted & Mickes, 2012). This approach is based on signal detection theory (SDT; see Macmillan & Creelman, 2005), which separates performance into two parameters: response bias versus discriminability. The tradeoff explained by Clark (2012) is best described by SDT as a shift in response bias, whereas the true goal of system variable manipulations is to increase discriminability. Whenever correct and false ID rates are moving in the same direction, even if one is changing to a greater extent, this pattern could be driven by changes in response bias, discriminability, or both. ROC analysis is needed to make this determination, and we will apply this technique to manipulations of lineup composition in order to shed light on the issue of fillers matching the suspect too well. Four recent studies also applied ROC analysis to manipulations of lineup fairness. Wetmore et al. (2015, 2016) were primarily concerned with comparing showups (presenting a suspect alone rather than with fillers) with simultaneous lineups, but tangentially compared biased with fair simultaneous lineups. A lineup is typically considered biased if the suspect stands out in some way from the fillers. They found that fair lineups yielded higher empirical discriminability compared with biased lineups.
Colloff, Wade, and Strange (2016) and Colloff, Wade, Wixted, and Maylor (2017) also found a significant advantage for fair over biased lineups, but defined bias as the presence of a distinctive feature on only one lineup member, and fair as either the presence of the feature on all lineup members or concealed for all members. It is unclear how these distinctive lineups would generalize to more common lineups containing no such obvious distinctive feature. Lastly, Key et al. (2017) found that fair lineups yielded higher empirical discriminability than biased lineups with more realistic stimuli (no distinctive features). However, their target-present and target-absent lineups were extremely biased, containing fillers that matched only one broad characteristic with the suspect (e.g., weight). The official level of fairness was around 1.0 for these biased lineups based on Tredoux's E' (Tredoux, 1998), which ranges from 1 to 6, with 1 representing extreme bias, and 6 representing a very fair lineup. They compared these biased lineups with a target-present and target-absent lineup of intermediate fairness (Tredoux's E' of 3.77 and 3.15, respectively). Our first experiment will add to this literature by evaluating high levels of similarity between fillers and target faces as a test of propitious heterogeneity and the diagnostic feature-detection hypothesis (described below). Our second experiment will contribute at a more practical level as the first comparison of suspect-matched and description-matched lineups with ROC analysis.

Theoretical motivations: propitious heterogeneity and diagnostic feature-detection

Wells, Rydell, and Seelau (1993) argued that lineups should follow the rule of propitious heterogeneity, such that fillers should not be too similar to each other or the suspect (Luus & Wells, 1991; Wells, 1993). At the extreme would be a lineup of identical siblings, such that even a perfect memory of the perpetrator would not help to make a correct ID. Fitzgerald, Oriet, and Price (2015) utilized face morphing software to create lineups with very similar-looking faces. They found that lineups containing highly homogenous faces reduced correct as well as false IDs, thereby creating a tradeoff. More recently, Bergold and Heaton (2018) also found that highly similar lineup members could be problematic, reducing correct IDs and increasing filler IDs. However, neither of these studies applied ROC analysis to address the impact of high similarity among lineup members on empirical discriminability. We will address this issue in the present experiments. Propitious heterogeneity is a concept with testable predictions (e.g., discriminability will decline at very high levels of filler similarity), but it is not a quantitatively specified theory. In contrast, the diagnostic feature-detection (DFD) hypothesis (Wixted & Mickes, 2014) is a well-specified model that can help explain why it is preferable to have some heterogeneity among lineup members. DFD was initially proffered to explain how certain procedures (e.g., simultaneous lineup versus showup) could increase discriminability. According to this theory, presenting all lineup members simultaneously allows an eyewitness to assess facial features they all share, helping them to determine the more diagnostic features on which to focus when comparing the lineup members to their memory of the perpetrator.
However, this should only be useful when viewing a fair lineup in which all members share the general characteristics included in an eyewitness's description of a perpetrator (e.g., Caucasian man in his 20s with dark hair and a beard). Presenting all members simultaneously (as opposed to sequentially or a showup) allows the eyewitness to quickly disregard these shared features in order to focus on features distinctive to their memory for the perpetrator (see also Gibson, 1969). DFD theory also predicts that discriminability will be higher for fair than for biased simultaneous lineups (Colloff et al., 2016; Wixted & Mickes, 2014). All members of a fair lineup should equivalently match the description of the perpetrator, which should allow the eyewitness to disregard these aspects and focus instead on features that could distinguish between the innocent and the guilty. For example, imagine a perpetrator described as a tall heavy-set Caucasian man with dark hair, a beard, and large piercing eyes. Police would likely ensure that all fillers in the lineup match the general characteristics such as height, weight, race, hair color, and that all have a beard. However, the distinctive eyes would be more difficult to replicate. Therefore, when an eyewitness views a simultaneous lineup, he or she should discount the diagnosticity of these broad characteristics, thereby focusing on internal facial features such as the eyes to make their ID. This process, according to DFD theory, should increase discriminability. In contrast, if the only lineup member with a beard is the suspect (innocent or guilty), the lineup would be biased, and an eyewitness might base their ID largely on this distinctive but nondiagnostic feature. Doing so would reduce discriminability. It is important to note the distinction between theoretical and empirical discriminability (see Wixted & Mickes, 2018). DFD predicts changes in theoretical discriminability (i.e., underlying psychological discriminability), which involves latent memory signals affecting decision-making in the mind of an eyewitness. Empirical discriminability is the degree to which eyewitnesses can place innocent and guilty suspects into their appropriate categories. Our experiments will focus on empirical discriminability, which is more relevant for real-world policy decisions (e.g., Wixted & Mickes, 2012, 2018). Empirical discriminability can be used to test the DFD hypothesis because "theoretical and empirical measures of discriminability usually agree about which condition is diagnostically superior" (Wixted & Mickes, 2018, p. 2). In other words, the goal of our experiments is to utilize a theory of underlying psychological discriminability to make predictions about empirical discriminability. Other researchers have noted that it is critical to ground eyewitness ID research in theory (e.g., Clark, Benjamin, Wixted, Mickes, & Gronlund, 2015; Clark, Moreland, & Gronlund, 2014). The four ROC studies mentioned above (Colloff et al., 2016, 2017; Key et al., 2017; Wetmore et al., 2015) have provided some support for DFD theory by comparing biased with fair lineups. We instead test another prediction that can be derived from the theory: lineups at the highest levels of similarity between fillers and suspect will actually reduce empirical discriminability. In other words, when fillers are too similar to the suspect, potentially diagnostic features are eliminated, which will reduce discriminability according to DFD theory.
Similarly, Luus and Wells (1991) predicted that diagnosticity would decline as fillers become more and more similar to each other and the suspect, and Clark, Rush, and Moreland (2013) predicted diminishing returns as filler similarity increases, based on WITNESS model (Clark, 2003) simulations. We addressed this issue of high filler similarity first in an experiment with computer-generated faces for experimental control. We then conducted a more ecologically valid mock-crime experiment with real faces to test the issue of high filler similarity in the context of description-matched versus suspect-matched fillers. Matching fillers to the suspect could increase the overall level of similarity among lineup members too much (Wells, 1993; Wells et al., 1993), reducing empirical discriminability. If this is the case, we would minimally expect that the similarity ratings between match-to-suspect fillers and the target should be higher than those between match-to-description fillers and the target (Tunnicliff & Clark, 2000). As described below (Experiment 2), we addressed this and also compared description-matched and suspect-matched lineups in ROC space to determine effects on empirical discriminability. There is still much debate in the literature regarding the benefits of matching fillers to description versus suspect (see, e.g., Clark et al., 2013; Fitzgerald et al., 2015). To our knowledge, we are the first to investigate which approach yields higher empirical discriminability. Moreover, despite the historical advocacy for a description-matched approach, to date there are few direct tests of description-matched versus suspect-matched fillers. Lastly, Clark et al. (2014) found that the original accuracy advantage for description-matched fillers has declined over time. One of our goals is to determine if the advantage is real.

Experiment 1

We utilized FACES 4.0 (IQ Biometrix, 2003) to tightly control all stimuli in our first experiment (Footnote 2). This program allows for the creation of simple faces based on various combinations of internal (e.g., eyes, nose, mouth) and external (e.g., hair, head shape, chin shape) facial features. The FACES software is commonly used by police agencies (see www.iqbiometrix.com/products_faces_40.html), and has also been used successfully by eyewitness researchers (e.g., Flowe & Cottrell, 2010; Flowe & Ebbesen, 2007), yielding lineup ID results paralleling results from real faces. Moreover, there is some evidence that FACES are processed similarly to real faces, at least to a degree (Wilford & Wells, 2010; but see Carlson, Gronlund, Weatherford, & Carlson, 2012). Regardless of the artificial nature of these stimuli, we argue that the experimental control they allow in terms of both individual FACE creation as well as lineup creation provides an ideal testing ground for theory. Specifically, with FACES we can precisely control the homogeneity of facial features among lineup members, and then work backward from this extreme level to provide direct tests of propitious heterogeneity and the DFD hypothesis. Our participants viewed three types of FACES. In one condition, all FACES in all lineups were essentially target clones, except for one feature that was allowed to vary (the eyes, nose, or mouth; see Fig. 1 for examples). Therefore, participants could base their decision on just one feature rather than the entire FACE. The other two conditions varied two versus three features, respectively.
DFD theory predicts that discriminability should increase as participants can base their ID decision on more features that discriminate between guilty and innocent suspects. Therefore, we predicted that empirical discriminability would be best when three features vary, followed by two features, and worst when only one feature varies across FACES in each lineup.

Fig. 1. Example lineups from Experiment 1 composed of facial stimuli from FACES 4.0. Only the eyes vary in the top left, the eyes and nose vary in the top right, and eyes, nose, and mouth vary in the bottom.

The theoretical rationale is presented in Table 1, which is adapted from Table 1 of Wixted and Mickes (2014). Whereas they were interested in comparing showups with simultaneous lineups, here we present three levels of simultaneous lineups that differ only in the number of features that vary across all fillers. As will be described below, we did not have a designated innocent suspect, but the logic is the same, so we will continue with the "Innocent Suspect" label from Wixted and Mickes. Focus first on the Guilty Suspect rows. Following Wixted and Mickes, and based on signal detection theory, we assume that the target (guilty suspect) was encoded with memory strength values of M = 1 and SD = 1.22 (so, variance approximately = 1.5 in the table). This, of course, is the case regardless of the fillers, so this remains constant for every lineup type and feature manipulated in a lineup (f1, f2, f3). These three features (f1–3) are the only source of variance (i.e., potentially diagnostic information) in the lineup. If only one feature varies, this means that all fillers (for both target-present and target-absent lineups) are identical to the target except for one feature (eyes, nose, or mouth in our experiments). If two features vary, then all fillers are identical to the target except for two features; if three features vary, then all fillers are identical to the target except for three features.

Table 1. Memory strength values of three facial features that are summed to yield an aggregate memory strength value for a face in a simultaneous lineup (adapted from Wixted & Mickes, 2014).

Critically, the Innocent Suspect rows change across these levels of similarity, reflecting featural overlap with the guilty suspect. When only one feature varies in the lineup, only f3 differs between fillers and guilty suspect, and f1 and f2 are identical. For example, this occurs when the participant in this condition sees that the lineup is entirely composed of clones except that all lineup members have a different mouth. This is the case for target-present (TP) and target-absent (TA) lineups, making the mouth diagnostic of suspect guilt (only one lineup member serves as the target with the correct mouth). This is represented by the top rows of Table 1: One Feature Varies. For that feature (f3; e.g., mouth), the memory strength values for the innocent suspect are M = 0 and SD = 1 (see Wixted & Mickes, 2014). Moving down to the next lineup type, two features vary, so now the memory strength values for the innocent suspect are set to M = 0 and SD = 1 for f2 as well as f3. This would be the case if, for example, both the nose and the mouth differ between innocent suspect (i.e., all fillers, as in our experiments) and guilty suspect. Finally, the bottom rows represent lineups in which all three features vary (eyes, nose, and mouth), which decreases the overlap between innocent and guilty suspects even further (i.e., between fillers and the target).
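To make the Table 1 bookkeeping concrete, one can treat the stated values (M = 1, SD = 1.22 for the guilty suspect; M = 0, SD = 1 for the innocent suspect) as per-feature memory strengths, assume that cloned features are discounted entirely, and sum means and variances over the diagnostic features before applying the unequal-variance formula given in the next paragraph. A minimal sketch of this arithmetic (the per-feature reading and the complete discounting of shared features are our simplifying assumptions):

```python
import math

# Per diagnostic feature: (mean, variance) of memory strength.
GUILTY = (1.0, 1.5)    # M = 1, SD = 1.22 -> variance of about 1.5
INNOCENT = (0.0, 1.0)  # M = 0, SD = 1

def d_a(n_diagnostic):
    """Unequal-variance discriminability when n independent features differ
    between fillers/innocent suspect and the guilty suspect; shared (cloned)
    features are assumed to be discounted and contribute nothing."""
    mu_g, var_g = n_diagnostic * GUILTY[0], n_diagnostic * GUILTY[1]
    mu_i, var_i = n_diagnostic * INNOCENT[0], n_diagnostic * INNOCENT[1]
    return (mu_g - mu_i) / math.sqrt((var_g + var_i) / 2)

for n in (1, 2, 3):
    print(n, round(d_a(n), 2))  # 1 -> 0.89, 2 -> 1.26, 3 -> 1.55
```

Under these assumptions, predicted discriminability grows roughly with the square root of the number of diagnostic features, matching the qualitative ordering described next.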
As can be seen in the far-right column, underlying psychological discriminability is expected to increase as more features are diagnostic of suspect guilt in the lineup, based on the unequal variance signal detection model:
$$ d_a=\frac{\mu_{guilty}-\mu_{innocent}}{\sqrt{\left(\sigma_{guilty}^2+\sigma_{innocent}^2\right)/2}} $$
We assessed whether empirical discriminability would increase as more facial features in each of the fillers differ from the target (i.e., as more features are present that are diagnostic of suspect guilt). In other words, as the fillers look less and less like the target (with more features allowed to vary), participants should be better able to identify the target and reject fillers.

Students from the Texas A&M University – Commerce psychology department subject pool served as participants (N = 100). Based on the within-subjects design described below, this sample size allowed us to obtain 300 data points per cell. Although some more recent eyewitness studies applying ROC analysis to lineup data have included around 500 or more participants or data points per cell (e.g., Seale-Carlisle, Wetmore, Flowe, & Mickes, 2019), other studies have shown that 100–200 is sufficient (e.g., 100–130/cell in Carlson & Carlson, 2014; around 150/cell in Mickes et al., 2012), and so both experiments in this paper included at least 200 data points per experimental cell. We obtained approval from the university's institutional review board for both experiments in this paper, and informed consent was provided by each participant at the beginning of the experiment. We utilized the FACES 4.0 software (IQ Biometrix, 2003) to create our stimuli (see Fig. 1 for examples). No face had any hair or other distinguishing external characteristics; all shared the same external features as seen in Fig. 1. The only features that varied were the eyes, nose, and/or mouth. The critical independent variable, manipulated within subjects, was how many of these features varied in a given lineup. Under one condition, only one of these features varied in a given lineup. For example, all members of a given lineup were clones except that each would have different eyes. Therefore, participants could base their lineup decision (for both TP and TA lineups) on the eyes alone. The same logic applied to lineups with only the mouth being different among the lineup members, as well as those in which only the nose varied. However, when encoding each face prior to the lineup, participants did not know which of the three features (or how many features, as this was manipulated within subjects) would vary in the upcoming lineup. Under another condition, two of these three features varied in a given lineup, thereby providing participants with more featural information on which to base their ID decision (again, for both TP and TA lineups). Lastly, all three features varied under the third condition of this independent variable. Each target was randomly assigned to a position during creation of the TP lineups (see Carlson et al., 2019, for the importance of randomizing or counter-balancing suspect position), and there was no designated innocent suspect in TA lineups.

Procedure and design

Participants took part in a face recognition paradigm with 18 blocks, and research has shown that lineup responses across multiple trials are similar to single-trial eyewitness ID paradigms (Mansour, Beaudry, & Lindsay, 2017). Both target presence (TP vs.
TA lineup) and the number of diagnostic features in each lineup (1–3) were manipulated within subjects. Each of the 18 blocks contained the same general procedure: encoding of a single FACE, distractor task, then lineup. For each encoding phase, we simply presented the target FACE for 1 s in the middle of the screen. The distractor task in each block was a word search puzzle on which participants worked for 1 min between the encoding and lineup phase of each block. The final part of each block was the critical element: a simultaneous lineup of six FACES presented in a 2 × 3 array, and participants were instructed to identify the target presented earlier in that block, which may or may not be present. They could choose one of the six lineup members or reject the lineup. After their decision, they entered their confidence on an 11-point scale (0–100% in 10% increments), and then the next block automatically began. There were three blocks dedicated to each of the six experimental cells: 1) TP vs TA lineup with one feature varying; 2) TP vs TA lineup with two features varying; and 3) TP vs TA lineup with three features varying. Each participant viewed a randomized order of these blocks. See Table 2 for all correct, false, and filler IDs, along with lineup rejections. We will first describe the results of ROC analysis, followed by TP versus TA lineup data separately (Gronlund & Neuschatz, 2014). We applied Bonferroni correction (α = .05/3 = .017) to control Type I error rate due to multiple comparisons.

Table 2. Number of identifications and rejections from Experiment 1.

ROC analysis

It is important to determine how our manipulations affected empirical discriminability independently of a bias toward selecting any suspect (whether guilty or innocent), which is what ROC analysis is designed to accomplish (e.g., Gronlund et al., 2014; Rotello & Chen, 2016; Wixted & Mickes, 2012). As shown in Fig. 2, each condition results in a curve in ROC space constructed from correct and false ID rates across levels of confidence. In order to be comparable to the correct ID rates of targets from TP lineups, the total number of false IDs from TA lineups was divided by the number of lineup members (6) to calculate false ID rates, which is a common approach in the literature when there is no designated innocent suspect (e.g., Mickes, 2015). All data from a given condition reside at the far-right end of its curve, and then the curve extends to the left first by dropping participants with low levels of confidence. Thus, the second point from the far right of each curve excludes IDs that were supported by confidence of 0–20%, then the third point excludes these IDs as well as those supported by 30–40% confidence. This process continues for each curve until the far-left point represents only those IDs supported by the highest levels of confidence (here 90–100%). Confidence thereby serves as a proxy for the bias for choosing any suspect (regardless of guilt), with the most conservative suspect IDs residing on the far left, and the most liberal on the far right.

Fig. 2. ROC data from Experiment 1. The curves drawn through the empirical data points are not based on model fits, but rather are simple trendlines drawn in Excel. The correct ID rate on the y axis is the proportion of targets chosen from the total number of target-present lineups in a given condition. The false ID rate on the x axis is the proportion of all filler identifications from the total number of target-absent lineups in a given condition (as we had no designated innocent suspects), divided by the nominal lineup size (six) to provide an estimated innocent suspect ID rate.

The level of empirical discriminability for each curve is determined with the partial area under the curve (pAUC; Robin et al., 2011). The farther a curve resides in the upper-left quadrant of the space, the greater the empirical discriminability. The pAUC rather than full AUC is calculated because TA filler IDs are divided by six, thereby preventing the false ID rate on the x axis from reaching 1.0. Finally, each pair of curves can be compared with D = (pAUC1 – pAUC2)/s, where s is the standard error of the difference between the two pAUCs after bootstrapping 10,000 times (see Gronlund et al., 2014, for a tutorial).
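As a rough sketch of this pipeline, cumulative ROC points can be built from confidence-binned ID counts, pAUC obtained by trapezoidal integration up to a common false ID rate cutoff, and D estimated with a bootstrap standard error. The analyses reported below cite the pROC package of Robin et al. (2011); the simplified version here, with invented counts, is only meant to show the logic:

```python
import numpy as np

rng = np.random.default_rng(0)
LINEUP_SIZE = 6

def roc(tp_ids, ta_ids, n_tp, n_ta):
    """Cumulative (false ID rate, correct ID rate) points from ID counts
    binned by confidence (highest confidence first). TA filler IDs are
    divided by lineup size to estimate an innocent-suspect ID rate."""
    x = np.cumsum(ta_ids) / n_ta / LINEUP_SIZE
    y = np.cumsum(tp_ids) / n_tp
    return x, y

def pauc(x, y, cutoff):
    """Trapezoidal partial area under the ROC from 0 to the cutoff."""
    x = np.concatenate(([0.0], np.clip(x, 0.0, cutoff)))
    y = np.concatenate(([0.0], y))
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))

def D(tp1, ta1, tp2, ta2, n_tp, n_ta, cutoff, B=10_000):
    """D = (pAUC1 - pAUC2) / bootstrap SE of the difference."""
    def resample(ids, n):
        p = np.append(ids, n - ids.sum()) / n  # last cell = trials with no ID
        return rng.multinomial(n, p)[:-1]
    def diff(a1, b1, a2, b2):
        return (pauc(*roc(a1, b1, n_tp, n_ta), cutoff)
                - pauc(*roc(a2, b2, n_tp, n_ta), cutoff))
    boots = [diff(resample(tp1, n_tp), resample(ta1, n_ta),
                  resample(tp2, n_tp), resample(ta2, n_ta)) for _ in range(B)]
    return diff(tp1, ta1, tp2, ta2) / np.std(boots)

# Invented ID counts per confidence bin, from highest to lowest confidence:
tp_a, ta_a = np.array([60, 45, 30, 20, 15]), np.array([20, 30, 40, 50, 60])
tp_b, ta_b = np.array([40, 35, 30, 25, 20]), np.array([35, 45, 50, 55, 60])
print(D(tp_a, ta_a, tp_b, ta_b, n_tp=300, n_ta=300, cutoff=0.1))
```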
As seen in Fig. 2, there was no significant difference between three features (pAUC = .088 [.079–.097]) and two features (pAUC = .086 [.075–.096]), D = 0.46, not significant (ns). However, having multiple diagnostic features boosted empirical discriminability beyond just one feature (pAUC = .061 [.050–.072]): (a) two features were better than one, D = 3.98, p < .001, and (b) three features were better than one, D = 4.58, p < .001. This pattern largely supports both the concept of propitious heterogeneity and the DFD hypothesis.

Separate analyses of TP and TA lineups

The number of diagnostic features in each lineup significantly impacted correct IDs, Wald (2) = 9.48, p = .009. Chi-square analyses revealed that, though there was no difference between two and three diagnostic features (χ2 (1, N = 600) = 0.72, ns), we did confirm that having just one diagnostic feature yielded fewer correct IDs compared with both two features (χ2 (1, N = 600) = 8.79, p = .002, ϕ = .12) and marginally fewer compared with three features (χ2 (1, N = 600) = 4.52, p = .02, ϕ = .09). False IDs (of any lineup member from TA lineups) were affected even more so by the number of diagnostic features, Wald (2) = 159.59, p < .001. Participants were much more likely to choose lineup members from TA lineups when only one feature varied compared with two features (χ2 (1, N = 600) = 81.10, p < .001, ϕ = .37) or three features (χ2 (1, N = 600) = 167.69, p < .001, ϕ = .53). There were also more false alarms when two features varied compared with three, χ2 (1, N = 600) = 19.33, p < .001, ϕ = .18. In summary, unsurprisingly, the more the lineup members matched the target (i.e., with fewer features varying across members), the more participants chose these faces. In support of other research investigating lineups of high filler similarity (e.g., Fitzgerald et al., 2015), these results indicate that lineups containing very similar fillers could be problematic, as they tended to lower ID accuracy (see also simulations by Clark et al., 2013). We went a step beyond the literature to show with ROC analysis that empirical discriminability declines at the upper levels of filler similarity. Allowing more features to vary among lineup members generally increased accuracy. These preliminary findings support the principle of propitious heterogeneity (e.g., Wells et al., 1993) and the DFD hypothesis (Wixted & Mickes, 2014).

Experiment 2

Here, our goal was to extend the logic of the first experiment to an issue of more ecological importance than lineups of extremely high levels of featural homogeneity, which would not occur in the real world.
Instead, we focused on whether police should select fillers based on matching a suspect's description or a suspect himself. Both should lead to fair lineups that yield higher empirical discriminability compared with showups (Wetmore et al., 2015; Wixted & Mickes, 2014) or compared with biased lineups (e.g., Key et al., 2017). However, suspect-matched lineups could have fillers that are more similar to the suspect than description-matched lineups because each filler is selected based directly on the suspect's face. Features that otherwise would be diagnostic of guilt could thereby be replicated in TP lineups, which could reduce correct ID rate. A greater overlap of diagnostic features would also reduce discriminability according to the DFD hypothesis. In this experiment, we compared suspect-matched with description-matched lineups to determine which should be recommended to police. Others have compared these filler selection methods (e.g., Lindsay, Martin, & Webber, 1994; Luus & Wells, 1991; Tunnicliff & Clark, 2000), but we make two contributions beyond this prior research: 1) we will assess which method yields higher empirical discriminability; and 2) we will test a theoretical prediction based on propitious heterogeneity and the DFD hypothesis that higher similarity between fillers and suspect in suspect-matched lineups will contribute to lower empirical discriminability compared with description-matched lineups. As mentioned above, based on eyewitness ID studies utilizing ROC analysis (e.g., Carlson & Carlson, 2014; Mickes et al., 2012), we sought a minimum of 200 participants for each lineup that we created. As described below, we created nine lineups, requiring a minimum of 1800 participants. We utilized SurveyMonkey to offer this experiment to a nationwide sample of participants (N = 2159) in the United States. We dropped 194 participants for providing incomplete data or failing to answer our attention check question correctly, leaving 1965 for analysis (see Table 3 for demographics).

Table 3. Demographics for Experiment 2.

Mock crime video

We used a mock crime video from Carlson et al. (2016), which presents a woman sitting on a bench surrounded by trees in a public park. A male perpetrator (Footnote 3) emerges from behind a large tree in the right of the frame, approaches the woman slowly, and grabs her purse before running away. He is visible for 10 s, and is approximately 3 m from the camera when he emerges from behind the tree, and about 1.5 m away when he reaches the victim. A photo of the perpetrator taken a few days later was used as his lineup mugshot.

Description-matched lineups

In order to create description-matched lineups, we first needed a modal description for the perpetrator. A group of undergraduates (N = 54; Footnote 4) viewed the mock crime video and then answered six questions regarding the perpetrator's physical characteristics. We used the most frequently reported descriptors to create the modal description (white male, 20–30 years old, tall, short hair, stubble-like facial hair). We gave this description to four research assistants (none of whom ever saw the mock crime video or perpetrator mugshot) and asked each of them to pick 20 matches from various public offender mugshot databases (e.g., State of Kentucky Department of Corrections) to create a pool of 80 description-matched fillers. We randomly selected 10 mugshots from the description-matched pool to serve as fillers in the two description-matched TP lineups.
In order to avoid stimulus-specific effects lacking generalizability (Wells & Windschitl, 1999), we used two designated innocent suspects who were randomly selected from the description-matched pool. To further increase generalizability, we then created two TA lineups for each of these two innocent suspects, for a total of four description-matched TA lineups. Twenty additional mugshots were randomly selected from the pool to serve as fillers in these lineups. To assess lineup fairness, we presented an independent group of undergraduates (N = 28) with each lineup and they chose the member that best matched the perpetrator's modal description. We used these data to calculate Tredoux's E' (Tredoux, 1998), which is a statistic ranging from 1 (very biased) to 6 (very fair): TP Lineup 1 (3.09), TP Lineup 2 (4.17), Lineup 1 for Innocent Suspect 1 (4.08), Lineup 2 for Innocent Suspect 1 (5.09), Lineup 1 for Innocent Suspect 2 (4.04), and Lineup 2 for Innocent Suspect 2 (4.36).

Suspect-matched lineups

We started by providing the perpetrator's mugshot to a new group of four research assistants, asking each of them to pick 20 matches from the mugshot databases (e.g., State of Kentucky Department of Corrections) to create a pool of 80 suspect-matched fillers. We randomly selected five mugshots from this pool to serve as fillers in the suspect-matched TP lineup. We then randomly selected 49 mugshots from the description-matched pool, which an independent group of undergraduates (N = 30) rated for similarity to each of the innocent suspects using a 1 (least similar) to 7 (most similar) Likert scale. The five most similar faces to each innocent suspect served as fillers in their respective suspect-matched TA lineup. We therefore had a total of three suspect-matched lineups: one for the perpetrator and one for each innocent suspect (these are the same two innocent suspects as in the description-matched lineups, as police would never apprehend a suspect because he matches a perpetrator). The same group of 28 participants who reviewed the description-matched lineups also evaluated these lineups for fairness, resulting in Tredoux's E' (Tredoux, 1998) of 3.27 for the TP lineup, 4.45 for TA Lineup 1, and 5.16 for TA Lineup 2. These results are comparable to the description-matched lineups. According to the prediction of Luus and Wells (1991) that a suspect-matched procedure could produce fillers that are too similar to the suspect, similarity ratings should be higher for suspect-matched lineups than for description-matched lineups (see also Tunnicliff & Clark, 2000). This is also necessary according to the DFD hypothesis to create a situation that would lower empirical discriminability. To establish the level of similarity, an independent group of participants (N = 50; Footnote 5) rated the similarity of the suspect to each of the five fillers in their respective lineups on a 1 (least similar) to 7 (most similar) Likert scale. Indeed, overall mean similarity between each filler and the suspect was higher for suspect-matched lineups (M = 2.84, SD = 1.26) compared with description-matched lineups (M = 2.11, SD = 1.20), t(49) = 9.05, p < .001. This pattern is consistent across both TP (suspect-matched M = 3.56, SD = 1.39; description-matched M = 2.20, SD = 1.18; t(49) = 9.31, p < .001) and TA lineups (suspect-matched M = 2.48, SD = 1.32; description-matched M = 2.07, SD = 1.22; t(49) = 5.91, p < .001).
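The fairness statistic used in the preceding two subsections is simple to compute from mock-witness choice counts. A minimal sketch, assuming the standard formulation of Tredoux's E' as the reciprocal of the sum of squared choice proportions (Tredoux, 1998):

```python
import numpy as np

def tredoux_E(choice_counts):
    """Tredoux's E' = 1 / sum(p_i^2), where p_i is the proportion of mock
    witnesses choosing lineup member i; ranges from 1 (everyone chooses
    the same member) to k (choices spread evenly over a k-member lineup)."""
    p = np.asarray(choice_counts, dtype=float)
    p /= p.sum()
    return 1.0 / np.sum(p ** 2)

print(tredoux_E([28, 0, 0, 0, 0, 0]))  # 1.0  -> maximally biased
print(tredoux_E([5, 5, 4, 5, 4, 5]))   # ~5.94 -> near the fair ceiling of 6
```

Values between these extremes, such as those reported above, reflect intermediate concentrations of mock-witness choices on particular lineup members.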
These similarity-rating patterns, as well as the overall low similarity ratings (all less than the mid-point of the 7-point Likert scale), are consistent with results from earlier studies (e.g., Tunnicliff & Clark, 2000; Wells et al., 1993).

Design and procedure

This experiment conformed to a 2 (filler selection method: suspect-matched vs. description-matched lineup) × 2 (TP or TA lineup) between-subjects factorial design. After informed consent, participants watched the mock crime video followed by another video (about protecting the environment) serving as a distractor for 3 min. After answering a question about the distractor video to confirm that they watched it, each participant was randomly assigned to view a six-person TP or TA simultaneous lineup, containing either suspect-matched or description-matched fillers. All lineups were formatted in a 2 × 3 array, and the position of the suspect was randomized. Each lineup was accompanied with instructions that stated that the perpetrator may or may not be present. Immediately following their lineup decision, participants rated their confidence on a 0%–100% scale (in 10% increments). Finally, they answered an attention check question ("What crime did the man in the video commit?") as well as demographic questions pertaining to age, sex, and race. As with our earlier experiment, we will first present the results of ROC analysis to determine differences in empirical discriminability, followed by logistic regression and chi-square analyses applied to the TP data separately from the TA data. All reported p values are two-tailed. See Table 4 for all ID decisions across all lineups. Our primary goal was to determine whether description-matched lineups would increase empirical discriminability compared with suspect-matched lineups. To address this, we compared the description-matched ROC curve with the suspect-matched curve, collapsing over individual lineups (specificity = .84; Footnote 6; see Fig. 3). As predicted, matching fillers to description (pAUC = .052 [.045–.059]) increased empirical discriminability compared with matching fillers to suspect (pAUC = .037 [.029–.045]), D = 2.61, p = .009. As for the bias toward choosing any suspect, description-matched lineups overall induced more liberal suspect choosing (as shown by the longer ROC curve in Fig. 3) compared with the suspect-matched lineups. This effect on response bias replicates other research comparing these two methods of filler selection without ROC analysis (Lindsay et al., 1994; Tunnicliff & Clark, 2000; Wells et al., 1993).

Fig. 3. ROC data (with trendlines) from Experiment 2 collapsed over the different description-matched and suspect-matched lineups. The false ID rate on the x axis is the proportion of innocent suspect identifications from the total number of target-absent lineups in a given condition.

In order to address the robustness of the overall effect on empirical discriminability, we then broke down the curves into four description-matched curves and two suspect-matched curves (Fig. 4; specificity = .66). The description-matched curves were based on correct ID rates from the two TP lineups (each with the same target but different description-matched fillers) combined with false alarm rates from four TA lineups (two with fillers matching the description of innocent suspect 1, and two with fillers matching the description of innocent suspect 2).
The two suspect-matched curves are based on the correct ID rate from the one suspect-matched TP lineup and the false alarm rates from the two suspect-matched TA lineups (one for innocent suspect 1 and one for innocent suspect 2). See Table 5 for the pAUC of each curve and Table 6 for the comparison between each description-matched and suspect-matched curve (Bonferroni-corrected α = .05/8 = .006). No suspect-matched curve ever increased discriminability compared with a description-matched curve. Rather, two description-matched curves yielded greater discriminability than both suspect-matched curves (Footnote 7).

Fig. 4. ROC data (with trendlines) for all description-matched and suspect-matched lineups from Experiment 2.

Table 5. Results of receiver operating characteristic analysis for Experiment 2.

Table 6. Comparison of each suspect-matched lineup with each description-matched lineup from Experiment 2.

We begin with the correct IDs. As a reminder, there was one suspect-matched TP lineup and two description-matched TP lineups, so Bonferroni-corrected α = .05/2 = .025. The full logistic regression model was significant, showing that there were more correct IDs for the description-matched lineups compared with the suspect-matched lineup, Wald (2) = 21.57, p < .001. This pattern was supported by follow-up chi-square tests comparing the suspect-matched lineup with: (a) Description-Matched TP1, χ2 (1, N = 426) = 15.03, p < .001, ϕ = .19; and (b) Description-Matched TP2, χ2 (1, N = 425) = 17.79, p < .001, ϕ = .21. As for filler IDs from TP lineups, the full logistic regression model was again significant, Wald (2) = 46.82, p < .001. The filler ID rate was higher for the suspect-matched lineup compared with both Description-Matched TP1, χ2 (1, N = 426) = 24.41, p < .001, ϕ = .24, and Description-Matched TP2, χ2 (1, N = 425) = 42.52, p < .001, ϕ = .32. Lastly, the model for TP lineup rejections was not significant, Wald (2) = 4.89, p = .087. Turning to the TA lineups, there were two suspect-matched (each based on its own innocent suspect) and four description-matched (the same two innocent suspects × 2 sets of fillers each). The full model comparing false IDs across all six lineups was significant, Wald (5) = 36.47, p < .001. A follow-up chi-square found that the false ID rate was lower for the suspect-matched lineups compared with the description-matched lineups overall, χ2 (1, N = 1328) = 4.12, p = .042, ϕ = .06. There was no difference in filler IDs or correct rejections. The next step was to compare each suspect-matched lineup with each description-matched lineup to determine the consistency of the pattern of false IDs (Bonferroni-corrected α = .05/8 = .006). Of the eight comparisons, only two were significant: (a) Suspect-Matched TA1 yielded fewer false IDs than Description-Matched TA1.2, χ2 (1, N = 412) = 15.74, p < .001, ϕ = .20; and (b) Suspect-Matched TA2 yielded fewer false IDs than Description-Matched TA1.2, χ2 (1, N = 422) = 16.89, p < .001, ϕ = .20. As can be seen in Table 4, Description-Matched TA1.2 had a higher false ID rate than any other TA lineup, which drove the overall effect of more false IDs for description-matched over suspect-matched lineups. The more consistent finding was no difference in false IDs between the two filler selection methods. We reviewed these lineups in light of these results, and could not determine why the false ID rate was higher for TA1.2, as the innocent suspect does not appear to stand out from the fillers.
In fact, this lineup had the highest level of fairness (E' = 5.09) compared with the other description-matched TA lineups (4.08, 4.04, and 4.36). This indicates that Tredoux's E', and likely other lineup fairness measures that are based on a perpetrator's description, could inaccurately diagnose a lineup's level of fairness. This point has recently been supported by a large study comparing several methods of evaluating lineup fairness (Mansour, Beaudry, Kalmet, Bertrand, & Lindsay, 2017).

Confidence-accuracy characteristic analysis

Discriminability is an important consideration when it comes to system variables, such as filler selection method, but the reliability of an eyewitness's suspect identification, given their confidence, is also critical. Whereas ROC analysis is ideal for revealing differences in discriminability, some kind of confidence-accuracy characteristic (CAC) analysis is needed to investigate reliability (Mickes, 2015). In other words, to a judge and jury evaluating an eyewitness ID from a given case, one piece of information will be the filler selection method used by police when constructing the lineup. Another piece of information will be the eyewitness's confidence in their lineup decision, which studies have shown has a strong relationship to the accuracy of the suspect ID given that it is immediately recorded after the suspect ID, and the lineup was conducted under good conditions (e.g., double-blind administrator and a fair lineup; see Wixted & Wells, 2017). Recent studies have supported a strong CA relationship across various manipulations, such as weapon presence during the crime (Carlson et al., 2017), amount of time to view the perpetrator during the crime (Palmer, Brewer, Weber, & Nagesh, 2013), and lineup type (simultaneous versus sequential; Mickes, 2015). The present experiment allowed us to test suspect- versus description-matched filler selection methods in terms of the CA relationship. We had no explicit predictions regarding this comparison, but provide the CAC analysis due to its applied importance. As can be seen in Fig. 5, there is a strong CA relationship across both filler selection methods. The x axis represents three levels of confidence (0–60% for low, 70–80% for medium, and 90–100% for high), which is typically broken down in this way for CAC analysis (see Mickes, 2015). The y axis represents the conditional probability (i.e., positive predictive value): given a suspect ID, what is the likelihood that the suspect was guilty, represented as guilty suspect IDs/(guilty suspect IDs + innocent suspect IDs). Two results are of note from Fig. 5: 1) confidence is indicative of accuracy, such that both curves have positive slopes; and 2) suspect IDs supported by high confidence are generally accurate (85% or higher).

Fig. 5. CAC data from Experiment 2. The bars represent standard errors. Proportion correct on the y axis is #correct IDs/(#correct IDs + #false IDs).
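CAC points of this kind reduce to a conditional proportion computed within confidence bins. A minimal sketch (the records are invented; the binning follows the low/medium/high scheme just described):

```python
def cac(records):
    """Proportion correct per confidence bin, computed over suspect IDs only:
    correct IDs / (correct IDs + false IDs). Each record is a pair
    (confidence in 0-100, True for a guilty-suspect ID, False for innocent)."""
    bins = {"low (0-60)": [0, 0], "medium (70-80)": [0, 0], "high (90-100)": [0, 0]}
    for conf, guilty in records:
        key = ("low (0-60)" if conf <= 60
               else "medium (70-80)" if conf <= 80
               else "high (90-100)")
        bins[key][0 if guilty else 1] += 1
    return {k: c / (c + f) if c + f else None for k, (c, f) in bins.items()}

# Invented suspect-ID records: (confidence, was the identified suspect guilty?)
demo = [(100, True), (90, True), (90, False), (80, True), (50, False), (40, True)]
print(cac(demo))  # high bin -> 2/3, medium -> 1.0, low -> 0.5
```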
This description-matched advantage held when collapsing over all individual lineups and, when making all pairwise comparisons between description- and suspect-matched lineups, we found that no suspect-matched lineup ever increased discriminability beyond a description-matched lineup. Rather, description-matched lineups were either better than, or equivalent to, suspect-matched lineups. We discuss the potential reasons for the overall advantage for description-matched lineups below.

We supported two theories from the eyewitness identification literature, propitious heterogeneity (e.g., Wells et al., 1993) and diagnostic feature-detection (DFD; Wixted & Mickes, 2014), by showing that empirical discriminability decreases as fillers become too similar to each other and the suspect. Our first experiment demonstrated this phenomenon with computer-generated faces that we could manipulate to precisely control levels of similarity among lineup members. Experiment 2 extended this effect to the real-world issue of filler selection, showing that police should match fillers to the description of a perpetrator rather than to a suspect. However, this recommendation is not without its caveats, such as the level of detail of a particular eyewitness's description.

This issue of specificity of the description for description-matched lineups is a question ripe for empirical investigation. To our knowledge, there has been no research on the influence of description quality (i.e., number of fine-grained descriptors) on the development of lineups and resulting empirical discriminability. Based on our findings, we would predict an inverted U-shaped function on empirical discriminability, such that eyewitnesses would perform best on description-matched lineups with fillers matched to a description that is neither too vague (see Lindsay et al., 1994) nor too specific. The former could yield biased lineups, whereas the latter could yield lineups with fillers that are too similar to the perpetrator, akin to the suspect-matched lineups that we tested. We encourage researchers to investigate this important issue of descriptor quality and eyewitness ID. Minimally, this research would address the issue of boundary conditions for description- versus suspect-matched lineups. At what point are suspect-matched lineups superior? Surely, if the description of the perpetrator is sufficiently vague, discriminability would be higher for suspect-matched lineups, but this is an empirical question.

Other than filler similarity, there is at least one more explanation for the reduction in empirical discriminability that we found for suspect-matched lineups. In the basic recognition memory literature, within-participant variance in responses has been shown to reduce discriminability (e.g., Benjamin, Diaz, & Wee, 2009). Mickes et al. (2017) found that variance among eyewitness participants can reduce empirical discriminability in a similar manner. Their variance was created by different instructions prior to the lineup (to induce conservative versus liberal choosing), which could have been interpreted or adhered to differently across participants. Similarly, suspect-matched lineups have an additional source of variance compared with description-matched lineups, which could have contributed to the lowering of empirical discriminability for suspect-matched lineups. For description-matched lineups, all fillers are selected based on matching a single description.
Assuming the description is not too vague, this should limit the overall variance across fillers. In contrast, suspect-matched fillers are matched to the target for TP lineups and to a completely different individual (the innocent suspect) for TA lineups. This would likely add variance to the similarity of fillers across these two conditions, thereby lowering empirical discriminability. However, although alternative explanations such as criterial variability are always possible, it is important to note that the DFD theory predicted our results in advance, making it a particularly strong competitor with other potential explanations of the effect of lineup fairness and filler similarity on empirical discriminability. This also illustrates the importance of theory-driven research for the field of eyewitness identification (e.g., Clark et al., 2015).

Conclusion and implications

It is unlikely that a large number of police departments construct highly biased lineups, as most report that they select fillers by matching to the suspect (Police Executive Research Forum, 2013). Therefore, we argue that eyewitness researchers, rather than comparing very biased with fair lineups, should focus on varying levels of reasonably fair lineups that are more like those used by police. Moreover, we acknowledge that it is not always possible to follow a strict match-to-description procedure. When the description of a perpetrator is very vague, or when there is a significant mismatch between the description and the suspect's appearance, matching to the suspect can be acceptable, as can some combination of the two procedures (see Wells et al., 1998). However, only about 10% of police in the United States select fillers according to the match-to-description method recommended by the NIJ (Police Executive Research Forum, 2013; Technical Working Group for Eyewitness Evidence, 1999). This is problematic if additional research supports our finding that suspect-matched lineups reduce empirical discriminability.

However, CAC analysis revealed a strong confidence-accuracy relationship regardless of filler selection method, in agreement with recent research on other variables relevant to eyewitness ID (e.g., Semmler, Dunn, Mickes, & Wixted, 2018; Wixted & Wells, 2017). Therefore, although the ROC results indicate that policy makers should recommend that fillers be selected based on match to (a sufficiently detailed) description, the CAC results indicate that judges and juries should not be concerned with which method was utilized in a given case. If an eyewitness provides immediate high confidence in a suspect ID, this carries weight in gauging the likely guilt of the suspect.

The datasets from these experiments are available from the first author on reasonable request.

Notes

We note that there is still some debate in the literature regarding the applicability of ROC analysis to lineup data, with some opposed (e.g., Lampinen, 2016; Smith, Wells, Smalarz, & Lampinen, 2018; Wells, Smalarz, & Smith, 2015), but many in favor (e.g., Gronlund et al., 2012; Gronlund, Wixted, & Mickes, 2014; National Research Council, 2014; Rotello & Chen, 2016; Wixted & Mickes, 2012, 2018).

We initially conducted three pilot experiments to test our FACES stimuli. See Additional file 1 for information on these experiments.
We will refer to the perpetrator as the target in the results, in order to be consistent with terminology (e.g., target-present and target-absent lineups) from our initial experiments.

Most eyewitness researchers do not go to these lengths when creating lineups, but we needed to follow these steps to carefully establish well-operationalized suspect-matched versus description-matched lineups. Prior research following similar steps to create fair lineups has also started with a modal description of the perpetrator, but based on a much smaller group of participants (e.g., N = 5; e.g., Carlson, Dias, Weatherford, & Carlson, 2017). We had roughly 10 times as many participants (54) provide descriptions because the resulting modal description was so critical to the purpose of our final experiment, and we therefore wanted it to have a stronger empirical foundation. Later we had only 28 participants choose from each of our lineups the person who best matched the modal description, but this has been shown to be a roughly sufficient number in the literature (e.g., Carlson et al., 2017, based their Tredoux's E' calculations on 30 participants).

In order to ensure that we had a sufficient number of participants for similarity ratings, we had a sample size somewhat larger than another eyewitness ID study featuring pairwise similarity ratings (N = 34; Charman, Wells, & Joy, 2011).

This specificity is based on the maximum false alarm rate for the most conservative curve (i.e., the shortest curve) so that no extrapolation is required. We repeated all analyses with specificities based on the most liberal curves so that all data from all conditions could be included. The pattern of results in Table 6 remained the same, and overall, suspect-matched lineups (pAUC = .061 [.050–.072]) still had lower discriminability than description-matched lineups (pAUC = .085 [.076–.094]), D = 3.30, p < .001.

With Bonferroni-corrected alpha of .006, one of these four comparisons (Description Match 4 vs. Suspect Match 2; see Table 6) is marginally significant, with p = .01. When setting specificity based on the most liberal rather than most conservative condition's maximum false alarm rate, this difference is significant at p = .001.

References

Benjamin, A. S., Diaz, M., & Wee, S. (2009). Signal detection with criterion noise: application to recognition memory. Psychological Review, 116, 84–115. https://doi.org/10.1037/a0014351.

Bergold, A. N., & Heaton, P. (2018). Does filler database size influence identification accuracy? Law and Human Behavior, 42, 227. https://doi.org/10.1037/lhb0000289.

Carlson, C. A., & Carlson, M. A. (2014). An evaluation of lineup presentation, weapon presence, and a distinctive feature using ROC analysis. Journal of Applied Research in Memory and Cognition, 3, 45–53. https://doi.org/10.1016/j.paid.2013.12.011.

Carlson, C. A., Dias, J. L., Weatherford, D. R., & Carlson, M. A. (2017). An investigation of the weapon focus effect and the confidence–accuracy relationship for eyewitness identification. Journal of Applied Research in Memory and Cognition, 6(1), 82–92.

Carlson, C. A., Gronlund, S. D., Weatherford, D. R., & Carlson, M. A. (2012). Processing differences between feature-based facial composites and photos of real faces. Applied Cognitive Psychology, 26, 525–540. https://doi.org/10.1002/acp.2824.

Carlson, C. A., Jones, A. R., Goodsell, C. A., Carlson, M. A., Weatherford, D. R., Whittington, J. E., & Lockamyeir, R. L. (2019). A method for increasing empirical discriminability and eliminating top-row preference in photo arrays. Applied Cognitive Psychology, in press. https://doi.org/10.1002/acp.3551.
Carlson, C. A., Young, D. F., Weatherford, D. R., Carlson, M. A., Bednarz, J. E., & Jones, A. R. (2016). The influence of perpetrator exposure time and weapon presence/timing on eyewitness confidence and accuracy. Applied Cognitive Psychology, 30, 898–910. https://doi.org/10.1002/acp.3275.

Charman, S. D., Wells, G. L., & Joy, S. W. (2011). The dud effect: adding highly dissimilar fillers increases confidence in lineup identifications. Law and Human Behavior, 35(6), 479–500.

Clark, S. E. (2003). A memory and decision model for eyewitness identification. Applied Cognitive Psychology, 17(6), 629–654.

Clark, S. E. (2012). Costs and benefits of eyewitness identification reform: psychological science and public policy. Perspectives on Psychological Science, 7, 238–259. https://doi.org/10.1177/1745691612439584.

Clark, S. E., Benjamin, A. S., Wixted, J. T., Mickes, L., & Gronlund, S. D. (2015). Eyewitness identification and the accuracy of the criminal justice system. Policy Insights from the Behavioral and Brain Sciences, 2, 175–186. https://doi.org/10.1177/2372732215602267.

Clark, S. E., Moreland, M. B., & Gronlund, S. D. (2014). Evolution of the empirical and theoretical foundations of eyewitness identification reform. Psychonomic Bulletin & Review, 21(2), 251–267.

Clark, S. E., Rush, R. A., & Moreland, M. B. (2013). Constructing the lineup: law, reform, theory, and data. In B. L. Cutler (Ed.), Reform of eyewitness identification procedures. Washington, DC: American Psychological Association.

Colloff, M. F., Wade, K. A., & Strange, D. (2016). Unfair lineups don't just make witnesses more willing to choose the suspect, they also make them more likely to confuse innocent and guilty suspects. Psychological Science, 27, 1227–1239. https://doi.org/10.1177/0956797616655789.

Colloff, M. F., Wade, K. A., Wixted, J. T., & Maylor, E. A. (2017). A signal-detection analysis of eyewitness identification across the adult lifespan. Psychology and Aging, 32, 243–258. https://doi.org/10.1037/pag0000168.

Doob, A. N., & Kirshenbaum, H. M. (1973). Bias in police lineups - partial remembering. Journal of Police Science and Administration, 1(3), 287–293.

Fitzgerald, R. J., Oriet, C., & Price, H. L. (2015). Suspect filler similarity in eyewitness lineups: a literature review and a novel methodology. Law and Human Behavior, 39, 62–74. https://doi.org/10.1037/lhb00000095.

Flowe, H., & Cottrell, G. W. (2010). An examination of simultaneous lineup identification decision processes using eye tracking. Applied Cognitive Psychology, 25, 443–451. https://doi.org/10.1002/acp.1711.

Flowe, H. D., & Ebbesen, E. B. (2007). The effect of lineup member similarity on recognition accuracy in simultaneous and sequential lineups. Law and Human Behavior, 31, 33–52. https://doi.org/10.1007/s10979-006-9045-9.

Gibson, E. J. (1969). Principles of perceptual learning and development. New York: Appleton-Century-Crofts.

Gronlund, S. D., Carlson, C. A., Neuschatz, J. S., Goodsell, C. A., Wetmore, S. A., Wooten, A., & Graham, M. (2012). Showups versus lineups: an evaluation using ROC analysis. Journal of Applied Research in Memory and Cognition, 1(4), 221–228.

Gronlund, S. D., & Neuschatz, J. S. (2014). Eyewitness identification discriminability: ROC analysis versus logistic regression. Journal of Applied Research in Memory and Cognition, 3, 54–57. https://doi.org/10.1016/j.jarmac.2014.04.008.
Gronlund, S. D., Wixted, J. T., & Mickes, L. (2014). Evaluating eyewitness identification procedures using receiver operating characteristic analysis. Current Directions in Psychological Science, 23, 3–10. https://doi.org/10.1177/0963721413498891.

Innocence Project. (2019). DNA exonerations worldwide. Retrieved from http://www.innocenceproject.org

IQ Biometrix. (2003). FACES, the Ultimate Composite Picture (Version 4.0) [Computer software]. Fremont, CA: IQ Biometrix, Inc.

Key, K. N., Wetmore, S. A., Neuschatz, J. S., Gronlund, S. D., Cash, D. K., & Lane, S. (2017). Line-up fairness affects postdictor validity and 'don't know' responses. Applied Cognitive Psychology, 31, 59–68. https://doi.org/10.1002/acp.3302.

Lampinen, J. M. (2016). ROC analyses in eyewitness identification research. Journal of Applied Research in Memory and Cognition, 5, 21–33. https://doi.org/10.1016/j.jarmac.2015.08.006.

Lindsay, R. C. L. (1994). Biased lineups: where do they come from? In D. Ross, J. D. Read, & M. Toglia (Eds.), Adult eyewitness testimony: current trends and developments. New York: Cambridge University Press.

Lindsay, R. C. L., Martin, R., & Webber, L. (1994). Default values in eyewitness descriptions: a problem for the match-to-description lineup filler selection strategy. Law and Human Behavior, 18, 527–541. https://doi.org/10.1007/BF01499172.

Lindsay, R. C. L., & Pozzulo, J. D. (1999). Sources of eyewitness identification error. International Journal of Law and Psychiatry, 22, 347–360. https://doi.org/10.1016/S0160-2527(99)00014-X.

Lindsay, R. C. L., & Wells, G. L. (1980). What price justice? Exploring the relationship of lineup fairness to identification accuracy. Law and Human Behavior, 4(4), 303–313.

Luus, C. E., & Wells, G. L. (1991). Eyewitness identification and the selection of distracters for lineups. Law and Human Behavior, 15(1), 43–57.

Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: a user's guide. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Malpass, R. S. (1981). Effective size and defendant bias in eyewitness identification lineups. Law and Human Behavior, 5(4), 299–309.

Mansour, J. K., Beaudry, J. L., Kalmet, N., Bertrand, M. I., & Lindsay, R. C. L. (2017). Evaluating lineup fairness: variations across methods and measures. Law and Human Behavior, 41, 103–115. https://doi.org/10.1037/lhb0000203.

Mansour, J. K., Beaudry, J. L., & Lindsay, R. C. L. (2017). Are multiple-trial experiments appropriate for eyewitness identification studies? Accuracy, choosing, and confidence across trials. Behavior Research Methods, 49, 2235–2254. https://doi.org/10.3758/s13428-017-0855-0.

Mickes, L. (2015). Receiver operating characteristic analysis and confidence–accuracy characteristic analysis in investigations of system variables and estimator variables that affect eyewitness memory. Journal of Applied Research in Memory and Cognition, 4, 93–102. https://doi.org/10.1016/j.jarmac.2015.01.003.

Mickes, L., Flowe, H. D., & Wixted, J. T. (2012). Receiver operating characteristic analysis of eyewitness memory: comparing the diagnostic accuracy of simultaneous versus sequential lineups. Journal of Experimental Psychology: Applied, 18, 361–376. https://doi.org/10.1037/a0030609.

Mickes, L., Seale-Carlisle, T. M., Wetmore, S. A., Gronlund, S. D., Clark, S. E., Carlson, C. A., … Wixted, J. T. (2017). ROCs in eyewitness identification: instructions versus confidence ratings. Applied Cognitive Psychology, 31, 467–477. https://doi.org/10.1002/acp.3344.
National Institute of Justice (1999). Eyewitness evidence: a guide for law enforcement. Washington: DIANE Publishing.

National Registry of Exonerations. (2018). The National Registry of Exonerations. Retrieved from http://www.law.umich.edu/special/exoneration/Pages/about.aspx

National Research Council (2014). Identifying the culprit: assessing eyewitness identification. Washington, DC: The National Academies Press.

Palmer, M. A., Brewer, N., Weber, N., & Nagesh, A. (2013). The confidence-accuracy relationship for eyewitness identification decisions: effects of exposure duration, retention interval, and divided attention. Journal of Experimental Psychology: Applied, 19(1), 55–71.

Police Executive Research Forum (2013). A national survey of eyewitness identification processes in law enforcement agencies. Washington, DC: U.S. Department of Justice. Retrieved from https://www.policeforum.org/assets/docs/Free_Online_Documents/Eyewitness_Identification/a%20national%20survey%20of%20eyewitness%20identification%20procedures%20in%20law%20enforcement%20agencies%202013.pdf

Robin, X., Turck, N., Hainard, A., Tiberti, N., Lisacek, F., Sanchez, J. C., & Müller, M. (2011). pROC: an open-source package for R and S+ to analyze and compare ROC curves. BMC Bioinformatics, 12, 77.

Rotello, C. M., & Chen, T. (2016). ROC curve analyses of eyewitness identification decisions: an analysis of the recent debate. Cognitive Research: Principles and Implications, 1, 10. https://doi.org/10.1186/s41235-016-0006-7.

Seale-Carlisle, T. M., Wetmore, S. A., Flowe, H. D., & Mickes, L. (2019). Designing police lineups to maximize memory performance. Journal of Experimental Psychology: Applied (in press).

Semmler, C., Dunn, J., Mickes, L., & Wixted, J. T. (2018). The role of estimator variables in eyewitness identification. Journal of Experimental Psychology: Applied, 24(3), 400–415.

Smith, A. M., Wells, G. L., Smalarz, L., & Lampinen, J. M. (2018). Increasing the similarity of lineup fillers to the suspect improves the applied value of lineups without improving memory performance: commentary on Colloff, Wade, and Strange (2016). Psychological Science, 29, 1548–1551. https://doi.org/10.1177/0956797617698528.

Technical Working Group for Eyewitness Evidence (1999). Eyewitness evidence: a guide for law enforcement. Washington, DC: U.S. Department of Justice, Office of Justice Programs.

Tredoux, C. G. (1998). Statistical inference on measures of lineup fairness. Law and Human Behavior, 22(2), 217–237.

Tunnicliff, J. L., & Clark, S. E. (2000). Selecting fillers for identification lineups: matching suspects or descriptions? Law and Human Behavior, 24(2), 231–258.

Wells, G. L. (1993). What do we know about eyewitness identification? American Psychologist, 48(5), 553–571.

Wells, G. L., Rydell, S. M., & Seelau, E. P. (1993). The selection of distractors for eyewitness lineups. Journal of Applied Psychology, 78, 835–844. https://doi.org/10.1037/0021-9010.78.5.835.

Wells, G. L., Smalarz, L., & Smith, A. M. (2015). ROC analysis of lineups does not measure underlying discriminability and has limited value. Journal of Applied Research in Memory and Cognition, 4, 313–317. https://doi.org/10.1016/j.jarmac.2015.08.008.

Wells, G. L., Small, M., Penrod, S., Malpass, R. S., Fulero, S. M., & Brimacombe, C. A. E. (1998). Eyewitness identification procedures: recommendations for lineups and photospreads. Law and Human Behavior, 22, 603–647.

Wells, G. L., & Windschitl, P. D. (1999). Stimulus sampling and social psychological experimentation. Personality and Social Psychology Bulletin, 25(9), 1115–1125.
Wetmore, S. A., Neuschatz, J. S., Gronlund, S. D., Wooten, A., Goodsell, C. A., & Carlson, C. A. (2015). Effect of retention interval on showup and lineup performance. Journal of Applied Research in Memory and Cognition, 4, 8–14. https://doi.org/10.1016/j.jarmac.2014.07.003.

Wetmore, S. A., Neuschatz, J. S., Gronlund, S. D., Wooten, A., Goodsell, C. A., & Carlson, C. A. (2016). Corrigendum to 'Effect of retention interval on showup and lineup performance'. Journal of Applied Research in Memory and Cognition, 5(1), 94.

Wilford, M. M., & Wells, G. L. (2010). Does facial processing prioritize change detection? Change blindness illustrates costs and benefits of holistic processing. Psychological Science, 21, 1611–1615. https://doi.org/10.1177/0956797610385952.

Wixted, J. T., & Mickes, L. (2012). The field of eyewitness memory should abandon probative value and embrace receiver operating characteristic analysis. Perspectives on Psychological Science, 7, 275–278. https://doi.org/10.1177/1745691612442906.

Wixted, J. T., & Mickes, L. (2014). A signal-detection-based diagnostic-feature-detection model of eyewitness identification. Psychological Review, 121, 262–276. https://doi.org/10.1037/a0035940.

Wixted, J. T., & Mickes, L. (2018). Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification. Cognitive Research: Principles and Implications, 3. https://doi.org/10.1186/s41235-018-0093-8.

Wixted, J. T., & Wells, G. L. (2017). The relationship between eyewitness confidence and identification accuracy: a new synthesis. Psychological Science in the Public Interest, 18(1), 10–65.

The authors thank all research assistants of the Applied Cognition Laboratory for help with stimuli preparation and data collection. Data collection for Experiment 2 via SurveyMonkey was supported by a Criminal Justice and Policing Reform Grant from the Charles Koch Foundation to JEW and CAC.

Texas A&M University – Commerce, PO Box 3011, Commerce, TX, 75429, USA: Curt A. Carlson, Alyssa R. Jones, Jane E. Whittington, Robert F. Lockamyeir, Maria A. Carlson & Alex R. Wooten

CAC, ARJ, JEW, and RFL designed and conducted the experiments. CAC wrote the first draft of the manuscript. MAC assisted with data analysis and some writing. ARW provided valuable feedback on drafts of the manuscript. All authors read and approved the final manuscript.

Correspondence to Curt A. Carlson.

All experiments reported in this paper were approved by the Institutional Review Board of Texas A&M University – Commerce, and all participants provided informed consent prior to participation.

Additional file 1: Supplemental material: pilot experiments. (DOCX 36 kb)

Carlson, C.A., Jones, A.R., Whittington, J.E. et al. (2019). Lineup fairness: propitious heterogeneity and the diagnostic feature-detection hypothesis. Cognitive Research: Principles and Implications, 4, 20. https://doi.org/10.1186/s41235-019-0172-5

Keywords: Simultaneous lineup; Lineup fairness; Diagnostic feature-detection hypothesis; Propitious heterogeneity
\begin{document}

\title{Specification Decomposition for Reactive Synthesis\thanks{This work was partially supported by the German Research Foundation~(DFG) as part of the Collaborative Research Center ``Foundations of Perspicuous Software Systems'' (TRR 248 -- CPEC, 389792660), and by the European Research Council (ERC) Grant OSARES (No. 683300). The authors thank Alexandre Duret-Lutz for providing valuable feedback on the algorithm and for bringing up the idea of extending assumption dropping to non-strict formulas. Moreover, they thank Marvin Stenger for help with the implementation.} }
\author{Bernd Finkbeiner \and Gideon Geier \and Noemi Passing }
\institute{B.~Finkbeiner and N.~Passing \at CISPA Helmholtz Center for Information Security, Germany \\ \email{[email protected], [email protected]} \and G.~Geier \at Saarland University, Germany \\ \email{[email protected]} }
\date{Received: date / Accepted: date}
\maketitle

\begin{abstract}
Reactive synthesis is the task of automatically deriving a correct implementation from a specification. It is a promising technique for the development of verified programs and hardware. Despite recent advances in terms of algorithms and tools, however, reactive synthesis is still not practical when the specified systems reach a certain bound in size and complexity. In this paper, we present a sound and complete modular synthesis algorithm that automatically decomposes the specification into smaller subspecifications. For them, independent synthesis tasks are performed, significantly reducing the complexity of the individual tasks. Our decomposition algorithm guarantees that the subspecifications are independent in the sense that completely separate synthesis tasks can be performed for them. Moreover, the composition of the resulting implementations is guaranteed to satisfy the original specification. Our algorithm is a preprocessing technique that can be applied to a wide range of synthesis tools. We evaluate our approach with state-of-the-art synthesis tools on established benchmarks: The runtime decreases significantly when synthesizing implementations modularly.
\keywords{Reactive Synthesis \and Specification Decomposition \and Modular Synthesis \and Compositional Synthesis \and Preprocessing for Synthesis}
\end{abstract}

\section{Introduction}

Reactive synthesis automatically derives an implementation that satisfies a given specification. It is a push-button method producing implementations which are correct by construction. Therefore, reactive synthesis is a promising technique for the development of provably correct systems since it allows for concentrating on \emph{what} a system should do instead of \emph{how} it should be done. Despite recent advances in terms of efficient algorithms and tools, however, reactive synthesis is still not practical when the specified systems reach a certain bound in size and complexity.

It is long known that the scalability of model checking algorithms can be improved significantly by using compositional approaches, i.e.\@\xspace, by breaking down the analysis of a system into several smaller subtasks~\cite{Compos97,ClarkeLM89}. In this paper, we apply compositional concepts to reactive synthesis: We present and extend a modular synthesis algorithm~\cite{FinalVersion} that decomposes a specification into several subspecifications. Then, independent synthesis tasks are performed for them. The implementations obtained from the subtasks are combined into an implementation for the initial specification.
The algorithm uses synthesis as a black box and can thus be applied to a wide range of synthesis algorithms. In particular, it can be seen as a preprocessing step for reactive synthesis that enables compositionality for existing algorithms and tools.

Soundness and completeness of modular synthesis strongly depend on the decomposition of the specification into subspecifications. We introduce a criterion, \emph{non-contradictory independent sublanguages}, for subspecifications that ensures soundness and completeness: The original specification is equirealizable to the subspecifications and the parallel composition of the implementations for the subspecifications is guaranteed to satisfy the original specification.

The key question is now how to decompose a specification such that the resulting subspecifications satisfy the criterion. Lifting the language-based criterion to the automaton level, we present a decomposition algorithm for nondeterministic Büchi automata that directly implements the independent sublanguages paradigm. Thus, using subspecifications obtained with this decomposition algorithm ensures soundness and completeness of modular synthesis.

A specification given in the standard temporal logic LTL can be translated into an equivalent nondeterministic Büchi automaton and hence the decomposition algorithm can be applied as well. However, while the decomposition algorithm is semantically precise, it utilizes several expensive automaton operations. For large specifications, the decomposition thus becomes infeasible. Therefore, we present an approximate decomposition algorithm for LTL specifications that still ensures soundness and completeness of modular synthesis but is more scalable. It is approximate in the sense that, in contrast to the automaton decomposition algorithm, it does not necessarily find all possible decompositions.

Moreover, we present an optimization of the LTL decomposition algorithm for formulas in a common assume-guarantee format. It analyzes the assumptions and drops those that do not influence the realizability of the rest of the formula, yielding more fine-grained decompositions. We extend the optimization from specifications in a strict assume-guarantee format to specifications consisting of several conjuncts in assume-guarantee format. This allows for applying the optimization to even more of the common LTL synthesis benchmarks.

We have implemented both decomposition procedures as well as the modular synthesis algorithm and used it with the two state-of-the-art synthesis tools BoSy~\cite{BoSy} and Strix~\cite{MeyerStrix}. We evaluate our algorithms on the established benchmarks from the synthesis competition SYNTCOMP~\cite{SYNTCOMP}. As expected, the decomposition algorithm for nondeterministic Büchi automata becomes infeasible when the specifications grow. For the LTL decomposition algorithm, however, the experimental results are excellent: Decomposition terminates in less than 26 milliseconds on all benchmarks. Hence, the overhead of LTL decomposition is negligible, even for non-decomposable specifications. Out of 39 decomposable specifications, BoSy and Strix increase their number of synthesized benchmarks by nine and five, respectively. For instance, on the generalized buffer benchmark~\cite{JacobsB16,Jobstmann07} with three receivers, BoSy is able to synthesize a solution within~28 seconds using modular synthesis while neither the non-compositional version of BoSy nor the non-compositional version of Strix terminates within one hour.
For twelve and nine further benchmarks, respectively, BoSy and Strix reduce their synthesis times significantly, often by an order of magnitude or more, when using modular synthesis instead of their classical algorithms. The remaining benchmarks are too small and too simple for compositional methods to pay off. Thus, decomposing the specification into smaller subspecifications indeed increases the scalability of synthesis on larger systems.

\textbf{Related Work:} Compositional approaches are long known to improve the scalability of model checking algorithms significantly~\cite{Compos97,ClarkeLM89}. The approach that is most related to our contribution is a preprocessing algorithm for compositional model checking~\cite{DurejaR18}. It analyzes dependencies between the properties that need to be checked in order to reduce the number of model checking tasks. We lift this idea from model checking to reactive synthesis. The dependency analysis in our algorithm, however, differs inherently from the one for model checking.

There exist several compositional approaches for reactive synthesis. The algorithm by Filiot~et~al. depends, like our LTL decomposition approach, heavily on dropping assumptions~\cite{FiliotJR10}. They use a heuristic that, in contrast to our criterion, is incomplete. While their approach is more scalable than a non-compositional one, the improvements are not as significant as for our algorithm. The algorithm by Kupferman~et~al. is designed for incrementally adding requirements to a specification during system design~\cite{KupfermanPV06}. Thus, it does not perform independent synthesis tasks but only reuses parts of the already existing solutions. In contrast to our algorithm, both \cite{KupfermanPV06} and \cite{FiliotJR10} do not consider dependencies between the components to obtain prior knowledge about the presence or absence of conflicts in the implementations.

Assume-guarantee synthesis algorithms~\cite{ChatterjeeH07,MajumdarMSZ20,FinkbeinerP21,BloemCJK15} take dependencies between components into account. In this setting, specifications are not always satisfiable by one component alone. Thus, a negotiation between the components is needed. While this yields more fine-grained decompositions, it produces a significant overhead that, as our experiments show, is often not necessary for common benchmarks. Avoiding negotiation, dependency-based compositional synthesis~\cite{FinkbeinerP20} decomposes the system based on a dependency analysis of the specification. The analysis is more fine-grained than the one presented in this paper. Moreover, a weaker winning condition for synthesis, remorsefree dominance~\cite{DammF11}, is used. While this allows for smaller synthesis tasks since the specification can be decomposed further, both the dependency analysis and using a different winning condition produce a larger overhead than our approach.

The reactive synthesis tools Strix~\cite{MeyerStrix}, Unbeast~\cite{Ehlers11}, and Safety-First~\cite{SohailS13} decompose the given specification. Strix uses decomposition to find suitable automaton types for internal representation and to identify isomorphic parts of the specification. Unbeast and Safety-First, in contrast, decompose the specification to identify safety parts. All three tools do not perform independent synthesis tasks for the subspecifications. In fact, our experiments show that the scalability of Strix still improves notably with our algorithm.

Independent of~\cite{FinalVersion}, Mavridou et al.
introduce a compositional realizability analysis of formulas given in FRET~\cite{GiannakopoulouP20a} that is based on ideas similar to our LTL decomposition algorithm~\cite{MavridouKGKPW21}. They only study the realizability of formulas but do not synthesize solutions. Optimized assumption handling cannot easily be integrated into their approach. For a detailed comparison of both approaches, we refer to~\cite{MavridouKGKPW21}.

The first version~\cite{FinalVersion} of our modular synthesis approach is already well-accepted in the synthesis community: Our LTL decomposition algorithm has been integrated into the new version~\cite{LTLsyntOptimized} of the synthesis tool ltlsynt~\cite{LTLsynt}.

\section{Preliminaries}

\paragraph{LTL.} Linear-time temporal logic~(LTL)~\cite{Pnueli77} is a specification language for linear-time properties. For a finite set $\Sigma$ of atomic propositions, the syntax of LTL is given by $ \varphi, \psi ::= a ~ | ~ \mathit{true} ~ | ~ \neg \varphi ~ | ~ \varphi \lor \psi ~ | ~ \varphi \land \psi ~ | ~ \LTLnext \varphi ~ | ~ \varphi \LTLuntil \psi$, where $a \in \Sigma$. We define the operators $\LTLdiamond \varphi := \mathit{true} \LTLuntil \varphi$ and $\LTLsquare \varphi := \neg \LTLdiamond \neg \varphi$ and use standard semantics. The atomic propositions in $\varphi$ are denoted by $\propositions{\varphi}$, where every occurrence of $\mathit{true}$ or $\mathit{false}$ in $\varphi$ does not add any atomic propositions to $\propositions{\varphi}$. The language~$\mathcal{L}(\varphi)$ of~$\varphi$ is the set of infinite words that satisfy $\varphi$.

\paragraph{Automata.} For a finite alphabet $\Sigma$, a nondeterministic Büchi automaton~(NBA) is a tuple $\mathcal{A} = (Q,Q_0,\delta,F)$, where $Q$ is a finite set of states, $Q_0 \subseteq Q$ is a set of initial states, $\delta \subseteq Q \times \Sigma \times Q$ is a transition relation, and $F \subseteq Q$ is a set of accepting states. Given an infinite word $\sigma = \sigma_1\sigma_2 \dots \in \Sigma^\omega$, a run of $\sigma$ on $\mathcal{A}$ is an infinite sequence $q_1 q_2 q_3 \dots \in Q^\omega$ of states where $q_1 \in Q_0$ and $(q_i,\sigma_i,q_{i+1}) \in \delta$ holds for all $i \geq 1$. A run is accepting if it contains infinitely many accepting states. $\mathcal{A}$ accepts a word $\sigma$ if there is an accepting run of~$\sigma$ on~$\mathcal{A}$. The language $\mathcal{L}(\mathcal{A})$ of $\mathcal{A}$ is the set of all accepted words. Two NBAs are equivalent if their languages are. An LTL specification $\varphi$ can be translated into an equivalent NBA $\mathcal{A}_\varphi$ with a single exponential blow-up~\cite{KupfermanV05}.

\paragraph{Implementations and Counterstrategies.} An implementation of a system with inputs~$I$, outputs~$O$, and variables $V = I \cup O$ is a function $f : (2^V)^* \times 2^I \rightarrow 2^O$ mapping a history of variables and the current input to outputs. An infinite word $\sigma = \sigma_1 \sigma_2 \dots \in (2^V)^\omega$ is compatible with an implementation $f$ if for all $n \in \mathbb{N}$, $f(\sigma_1 \dots \sigma_{n-1}, \sigma_n \cap I) = \sigma_n \cap O$ holds. The set of all compatible words of $f$ is denoted by $\compatibleWords{f}$. An implementation~$f$ realizes a specification~$s$ if $\sigma \in \mathcal{L}(s)$ holds for all $\sigma \in \compatibleWords{f}$. A specification is called realizable if there exists an implementation realizing it. If a specification is unrealizable, there is a counterstrategy $f^c:(2^V)^* \rightarrow 2^I$ mapping a history of variables to inputs.
An infinite word $\sigma = \sigma_1 \sigma_2 \dots \in (2^V)^\omega$ is compatible with $f^c$ if $f^c(\sigma_1 \dots \sigma_{n-1}) = \sigma_n \cap I$ holds for all $n \in \mathbb{N}$. All compatible words of $f^c$ violate $s$, i.e.\@\xspace, $\compatibleWords{f^c} \subseteq \overline{\mathcal{L}(s)}$.

\paragraph{Reactive Synthesis.} Given a specification, reactive synthesis derives an implementation realizing it. For LTL specifications, synthesis is 2EXPTIME-complete~\cite{PnueliR89}. In this paper, we use reactive synthesis as a black box procedure and thus we do not go into detail here. Instead, we refer the interested reader to~\cite{Finkbeiner16}.

\paragraph{Notation.} Overloading notation, we use union and intersection on infinite words: For $\sigma = \sigma_1 \sigma_2 \dots \in (2^{\Sigma_1})^\omega$, $\sigma' = \sigma'_1 \sigma'_2 \dots \in (2^{\Sigma_2})^\omega$ with $\Sigma = \Sigma_1 \cup \Sigma_2$, we define $\sigma \cup \sigma' := (\sigma_1 \cup \sigma'_1) (\sigma_2 \cup \sigma'_2) \dots \in (2^{\Sigma})^\omega$. For $\sigma$ as above and a set~$X$, let $\sigma \cap X := (\sigma_1 \cap X) (\sigma_2 \cap X) \dots \in (2^X)^\omega$.

\section{Modular Synthesis}

In this section, we introduce a modular synthesis algorithm that divides the synthesis task into independent subtasks by splitting the specification into several subspecifications. The decomposition algorithm has to ensure that the synthesis tasks for the subspecifications can be solved independently and that their results are non-contradictory, i.e.\@\xspace, that they can be combined into an implementation satisfying the initial specification. Note that when splitting the specification, we assign a set of relevant in- and output variables to every subspecification. The corresponding synthesis subtask is then performed on these variables.

\begin{algorithm}[t]
\SetKwInput{KwData}{Input} \SetKwInOut{KwResult}{Output} \SetKw{KwBy}{by}
\KwData{\texttt{s}: Specification, \texttt{inp}: List Variable, \texttt{out}: List Variable}
\KwResult{\texttt{realizable}: Bool, \texttt{implementation}: $\mathcal{T}$}
\texttt{subspecifications} $\leftarrow$ decompose(\texttt{s}, \texttt{inp}, \texttt{out})\label{alg:compositional_synthesis:decompose} \\
\texttt{sub\_results} $\leftarrow$ map synthesize \texttt{subspecifications}\label{alg:compositional_synthesis:synthesize} \\
\ForEach{\upshape{(\texttt{real},\texttt{strat}) $\in$ \texttt{sub\_results}}}{
\If{\upshape{! \texttt{real}}}{
\texttt{impl} $\leftarrow$ extendCounterStrategy(\texttt{strat}, \texttt{s})\label{alg:compositional_synthesis:counterstrategy} \\
\Return{\upshape{($\bot$, \texttt{impl})}}
}
}
\texttt{impls} $\leftarrow$ map second \texttt{sub\_results} \\
\Return{\upshape{($\top$, compose \texttt{impls})}}
\caption{Modular Synthesis}\label{alg:compositional_synthesis}
\end{algorithm}

\Cref{alg:compositional_synthesis} describes this modular synthesis approach. First, the specification is decomposed into a list of subspecifications using an adequate decomposition algorithm. Then, the synthesis tasks for all subspecifications are solved. If a subspecification is unrealizable, its counterstrategy is extended to a counterstrategy for the whole specification. This construction is given in \Cref{def:counterstrategy}. Otherwise, the implementations of the subspecifications are composed. Intuitively, the behavior of the counterstrategy of an unrealizable subspecification~$s_i$ violates the full specification~$s$ as well.
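To make the control flow of \Cref{alg:compositional_synthesis} concrete, the following Python sketch mirrors its structure. It treats decomposition, synthesis, counterstrategy extension, and composition as abstract callbacks; all names are illustrative and not tied to a concrete synthesis tool.

\begin{verbatim}
from typing import Callable, List, Tuple

Spec = str      # placeholder for a specification
Impl = object   # placeholder for a strategy or counterstrategy

def modular_synthesis(
    spec: Spec,
    decompose: Callable[[Spec], List[Spec]],
    synthesize: Callable[[Spec], Tuple[bool, Impl]],       # black box
    extend_counterstrategy: Callable[[Impl, Spec], Impl],
    compose: Callable[[List[Impl]], Impl],
) -> Tuple[bool, Impl]:
    # One independent synthesis task per subspecification.
    sub_results = [synthesize(sub) for sub in decompose(spec)]
    for realizable, strategy in sub_results:
        if not realizable:
            # A single unrealizable subspecification suffices: lift its
            # counterstrategy to the full specification and return.
            return False, extend_counterstrategy(strategy, spec)
    # All subspecifications are realizable: compose the implementations.
    return True, compose([impl for _, impl in sub_results])
\end{verbatim}

Since the subtasks are fully independent, the calls to the black-box synthesis procedure could also be dispatched in parallel.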
A counterstrategy for the full specification, however, needs to be defined on all variables of~$s$, i.e.\@\xspace, also on the variables that do not occur in~$s_i$. Thus, we extend the counterstrategy for $s_i$ such that it ignores outputs outside of $s_i$ and produces an arbitrary valuation of the input variables outside of $s_i$:

\begin{definition}[Counterstrategy Extension\label{def:counterstrategy}]
Let $s$ be a specification with $\mathcal{L}(s) \subseteq (2^V)^\omega$. Let $V_1, V_2 \subset V$ with $V_1 \cup V_2 = V$ and $V_1 \cap V_2 \subseteq I$. Let $s_1, s_2$ be subspecifications of $s$ with $\mathcal{L}(s_1) \subseteq (2^{V_1})^\omega$, $\mathcal{L}(s_2) \subseteq (2^{V_2})^\omega$ such that $\mathcal{L}(s_1) \pc \mathcal{L}(s_2) = \mathcal{L}(s)$. Let $s_1$ be unrealizable and let $f^c_1: (2^{V_1})^* \rightarrow 2^{I \cap V_1}$ be a counterstrategy for~$s_1$. We construct a counterstrategy $f^c: (2^V)^* \rightarrow 2^I$ from~$f^c_1$ for $s$: $f^c(\sigma) = f^c_1(\sigma \cap V_1) \cup \mu$, where $\mu \in 2^{I \setminus V_1}$ is an arbitrary valuation of the input variables outside of $V_1$.
\end{definition}

The counterstrategy for the full specification constructed as in \Cref{def:counterstrategy} then indeed fulfills the condition of a counterstrategy for the full specification, i.e.\@\xspace, all of its compatible words violate the full specification:

\begin{lemma}\label{lem:extension_counterstrategy}
Let $s$ be a specification with $\mathcal{L}(s) \subseteq (2^V)^\omega\!$. Let $V_1, V_2 \subset V$ with $V_1 \cup V_2 = V$, $V_1 \cap V_2 \subseteq I$. Let $s_1, s_2$ be specifications with $\mathcal{L}(s_1) \subseteq (2^{V_1})^\omega$, $\mathcal{L}(s_2) \subseteq (2^{V_2})^\omega$ and $\mathcal{L}(s_1) \pc \mathcal{L}(s_2) = \mathcal{L}(s)$. Let $f^c_1: (2^{V_1})^* \rightarrow 2^{I \cap V_1}$ be a counterstrategy for $s_1$. The function $f^c$ constructed as in \Cref{def:counterstrategy} from $f^c_1$ is a counterstrategy for $s$.
\end{lemma}

\begin{proof}
Let $\sigma \in \compatibleWords{f^c}$. Then $f^c(\sigma_1 \dots \sigma_{n-1}) = \sigma_n \cap I$ for all $n \in \mathbb{N}$ and hence, by construction of $f^c$, we have $f^c_1(\restrict{\sigma_1 \dots \sigma_{n-1}}{V_1})= \sigma_n \cap (I \cap V_1)$. Thus, $\sigma \cap V_1 \in \compatibleWords{f^c_1}$ follows. Since $f^c_1$ is a counterstrategy for $s_1$, we have $\compatibleWords{f^c_1} \subseteq \overline{\mathcal{L}(s_1)}$. Hence, $\sigma \cap V_1 \in \overline{\mathcal{L}(s_1)}$. By assumption, $\mathcal{L}(s_1) \pc \mathcal{L}(s_2) = \mathcal{L}(s)$ and thus $(\restrict{\sigma}{V_1}) \cup \sigma' \not\in \mathcal{L}(s)$ for any infinite word $\sigma' \in (2^{V_2})^\omega$. Thus, in particular, $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2}) \not\in \mathcal{L}(s)$ holds. Since $V_1 \cup V_2 = V$, $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2}) = \sigma$ follows. Thus, $\sigma \notin \mathcal{L}(s)$. Hence, for all $\sigma \in \compatibleWords{f^c}$, $\sigma \not \in \mathcal{L}(s)$ and thus $\compatibleWords{f^c} \subseteq \overline{\mathcal{L}(s)}$. Therefore, $f^c$ is a counterstrategy for~$s$.\qed
\end{proof}

Soundness and completeness of modular synthesis depend on three requirements: equirealizability of the initial specification and the subspecifications, non-contradictory composability of the subresults, and satisfaction of the initial specification by the parallel composition of the subresults.
Intuitively, these requirements are met if the decomposition algorithm neither introduces nor drops parts of the system specification and if it does not produce subspecifications that allow for contradictory implementations. To obtain composability of the subresults, the implementations need to agree on shared variables. We ensure this by assigning disjoint sets of output variables to the synthesis subtasks: Since every subresult only defines the behavior of the assigned output variables, the implementations are non-contradictory. Since the language alphabets of the subspecifications thus differ, the composition of their languages is non-contradictory:

\begin{definition}[Language Composition]
Let $L_1$, $L_2$ be languages over $2^{\Sigma_1}$ and $2^{\Sigma_2}$, respectively. The \emph{non-contradictory composition of $L_1$ and~$L_2$} is given by $L_1 \! \pc L_2 \! = \! \{ \sigma_1 \cup \sigma_2 \mid \sigma_1 \! \in \! L_1 \land \sigma_2 \! \in \! L_2 \land \restrict{\sigma_1}{\Sigma_2} = \restrict{\sigma_2}{\Sigma_1} \}$.
\end{definition}

The satisfaction of the initial specification by the composed subresults can be guaranteed by requiring the subspecifications to be independent sublanguages:

\begin{definition}[Independent Sublanguages]
Let $L \subseteq (2^\Sigma)^\omega$, $L_1 \subseteq (2^{\Sigma_1})^\omega$, and $L_2 \subseteq (2^{\Sigma_2})^\omega$ be languages with $\Sigma_1, \Sigma_2 \subseteq \Sigma$ and $\Sigma_1 \cup \Sigma_2 = \Sigma$. Then, $L_1$ and $L_2$ are \emph{independent sublanguages} of $L$ if $L_1 \pc L_2 = L$ holds.
\end{definition}

From these two requirements, i.e.\@\xspace, that the subspecifications form non-contradictory and independent sublanguages, equirealizability of the initial specification and the subspecifications follows:

\begin{theorem}\label{thm:equisynthesizeability_independent_sublanguages}
Let $s$, $s_1$, and $s_2$ be specifications with $\mathcal{L}(s) \subseteq (2^V)^\omega$, $\mathcal{L}(s_1) \subseteq (2^{V_1})^\omega$, $\mathcal{L}(s_2) \subseteq (2^{V_2})^\omega$. Recall that $I \subseteq V$ is the set of input variables. If $V_1 \cap V_2 \subseteq I$ and $V_1 \cup V_2 = V$ hold, and $\mathcal{L}(s_1)$ and $\mathcal{L}(s_2)$ are independent sublanguages of~$\mathcal{L}(s)$, then $s$ is realizable if, and only if, both $s_1$ and $s_2$ are realizable.
\end{theorem}

\begin{proof}
First, suppose that $s_1$ and $s_2$ are realizable. Let $f_1: (2^{V_1})^* \times 2^{I \cap V_1} \rightarrow 2^{O \cap V_1}$, $f_2: (2^{V_2})^* \times 2^{I \cap V_2} \rightarrow 2^{O \cap V_2}$ be implementations realizing $s_1$ and $s_2$, respectively. We construct an implementation $f: (2^V)^* \times 2^I \rightarrow 2^O$ from $f_1$ and $f_2$: $f(\sigma,\inp{i}) := f_1(\restrict{\sigma}{V_1},\inp{i}\cap V_1) \cup f_2(\restrict{\sigma}{V_2}, \inp{i} \cap V_2)$. Let $\sigma \in \compatibleWords{f}$. Hence, $f((\sigma_1 \dots \sigma_{n-1}), \sigma_n \cap I) = \sigma_n \cap O$ for all $n \in \mathbb{N}$. Let $\sigma' = \restrict{\sigma}{V_1}$ and $\sigma'' = \restrict{\sigma}{V_2}$. By construction of $f$, we have $\sigma'_n \cap O = f_1((\restrict{\sigma_1 \dots \sigma_{n-1}}{V_1}), \sigma_n \cap (I \cap V_1))$ and $\sigma''_n \cap O = f_2((\restrict{\sigma_1 \dots \sigma_{n-1}}{V_2}), \sigma_n \cap (I \cap V_2))$ for all $n \in \mathbb{N}$. Moreover, since $V_1 \cup V_2 = V$, $\sigma = \sigma' \cup \sigma''$ holds.
Further, $\sigma' \in \compatibleWords{f_1}$ and $\sigma'' \in \compatibleWords{f_2}$ and thus, since~$s_1$ and $s_2$ are realizable by assumption, $\sigma' \in \mathcal{L}(s_1)$ and $\sigma'' \in \mathcal{L}(s_2)$. Since $\mathcal{L}(s_1)$ and $\mathcal{L}(s_2)$ are independent sublanguages by assumption, $\mathcal{L}(s_1) \pc \mathcal{L}(s_2) = \mathcal{L}(s)$ holds. Hence, by definition of language composition, $\sigma' \cup \sigma'' \in \mathcal{L}(s)$ follows and thus, $\sigma \in \mathcal{L}(s)$ holds. Hence, for all $\sigma \in \compatibleWords{f}$, $\sigma \in \mathcal{L}(s)$ and therefore $f$ realizes $s$.

Second, let $s_i$ be unrealizable for some $i \in \{1,2\}$ and let $f^c_i: (2^{V_i})^* \rightarrow 2^{I \cap V_i}$ be a counterstrategy for~$s_i$. We construct a counterstrategy $f^c: (2^V)^* \rightarrow 2^I$ from $f^c_i$ as described in \Cref{def:counterstrategy}. By \Cref{lem:extension_counterstrategy},~$f^c$ is a counterstrategy for $s$. Thus, $s$ is unrealizable.\qed
\end{proof}

The soundness and completeness of \Cref{alg:compositional_synthesis} for adequate decomposition algorithms now follows directly with \Cref{thm:equisynthesizeability_independent_sublanguages} and the properties of such algorithms described above: They produce subspecifications that (1) do not share output variables and that (2) form independent sublanguages of the initial specification.

\begin{theorem} \label{thm:soundness_completeness}
Let $s$ be a specification. Moreover, let $\mathcal{S} = \{s_1, \dots, s_k\}$ be a set of subspecifications of $s$ with $\mathcal{L}(s_i) \subseteq (2^{V_i})^\omega$ such that $\bigcup_{1 \leq i \leq k} V_i = V$, $V_i \cap V_j \subseteq I$ for $1 \leq i,j \leq k$ with $i \neq j$, and such that $\mathcal{L}(s_1), \dots, \mathcal{L}(s_k)$ are independent sublanguages of $\mathcal{L}(s)$. If $s$ is realizable, \Cref{alg:compositional_synthesis} yields an implementation realizing $s$. Otherwise, \Cref{alg:compositional_synthesis} yields a counterstrategy for~$s$.
\end{theorem}

\begin{proof}
First, let $s$ be realizable. Then, by applying \Cref{thm:equisynthesizeability_independent_sublanguages} recursively, it follows that $s_i$ is realizable for all $s_i \in \mathcal{S}$. Since $V_i \cap V_j \subseteq I$ holds for any $s_i,s_j \in \mathcal{S}$ with $i \neq j$, the implementations realizing $s_1, \dots, s_k$ are non-contradictory. Hence, \Cref{alg:compositional_synthesis} returns their composition, an implementation~$f$. Since $V_1 \cup \dots \cup V_k = V$,~$f$ defines the behavior of all outputs. By construction, $f$ realizes all $s_i \in \mathcal{S}$. Since the $\mathcal{L}(s_i)$ are non-contradictory, independent sublanguages of $\mathcal{L}(s)$, $f$ thus realizes~$s$.

Next, let $s$ be unrealizable. Then, by applying \Cref{thm:equisynthesizeability_independent_sublanguages} recursively, $s_i$ is unrealizable for some $s_i \in \mathcal{S}$. Thus, \Cref{alg:compositional_synthesis} returns the extension of $s_i$'s counterstrategy to a counterstrategy for the full specification. Its correctness follows with \Cref{lem:extension_counterstrategy}.\qed
\end{proof}

\section{Decomposition of Büchi Automata}\label{sec:automata}

To ensure soundness and completeness of modular synthesis, a specification decomposition algorithm needs to meet the language-based adequacy conditions of \Cref{thm:equisynthesizeability_independent_sublanguages}.
In this section, we lift these conditions from the language level to nondeterministic Büchi automata and present a decomposition algorithm for specifications given as NBAs on this basis. Since the algorithm works directly on NBAs and not on their languages, we consider their composition instead of the composition of their languages: Let $\mathcal{A}_1 = (Q_1,Q^1_0,\delta_1,F_1)$ and $\mathcal{A}_2 = (Q_2,Q^2_0,\delta_2,F_2)$ be NBAs over $2^{V_1}$, $2^{V_2}$, respectively. The \emph{parallel composition of $\mathcal{A}_1$ and $\mathcal{A}_2$} is defined by the NBA $\mathcal{A}_1 \pc \mathcal{A}_2 = (Q,Q_0,\delta,F)$ over~$2^{V_1 \cup V_2}$ with $Q = Q_1 \times Q_2$, $Q_0 = Q^1_0 \times Q^2_0$, $((q_1,q_2), \inp{i}, (q'_1,q'_2)) \in \delta$ if, and only if, $(q_1,\inp{i} \cap V_1,q'_1) \in \delta_1$ and $(q_2,\inp{i} \cap V_2,q'_2) \in \delta_2$, and $F = F_1 \times F_2$.

The parallel composition of NBAs reflects the composition of their languages:

\begin{lemma}\label{lem:correctness_parallel_composition_automata}
Let $\mathcal{A}_1$ and $\mathcal{A}_2$ be NBAs over alphabets $2^{V_1}\!$ and $2^{V_2}\!$. Then, $\mathcal{L}(\mathcal{A}_1 \pc \mathcal{A}_2) = \mathcal{L}(\mathcal{A}_1) \pc \mathcal{L}(\mathcal{A}_2)$ holds.
\end{lemma}

\begin{proof}
First, let $\sigma \in \mathcal{L}(\mathcal{A}_1 \pc \mathcal{A}_2)$. Then, $\sigma$ is accepted by $\mathcal{A}_1 \pc \mathcal{A}_2$. Hence, by definition of automaton composition, for $i \in \{1,2\}$, $\restrict{\sigma}{V_i}$ is accepted by~$\mathcal{A}_i$. Thus, $\restrict{\sigma}{V_i} \in \mathcal{L}(\mathcal{A}_i)$. Since $\restrict{(\restrict{\sigma}{V_1})}{V_2} = \restrict{(\restrict{\sigma}{V_2})}{V_1}$, we have $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2}) \in \mathcal{L}(\mathcal{A}_1) \pc \mathcal{L}(\mathcal{A}_2)$. By definition of automaton composition, $\sigma \in (2^{V_1 \cup V_2})^\omega$ and thus $\sigma = (\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2})$. Hence, $\sigma \in \mathcal{L}(\mathcal{A}_1) \pc \mathcal{L}(\mathcal{A}_2)$.

Next, let $\sigma \in \mathcal{L}(\mathcal{A}_1) \pc \mathcal{L}(\mathcal{A}_2)$. Then, for $\sigma_1 \in (2^{V_1})^\omega$, $\sigma_2 \in (2^{V_2})^\omega$ with $\sigma = \sigma_1 \cup \sigma_2$, we have $\sigma_i \in \mathcal{L}(\mathcal{A}_i)$ for $i \in \{1,2\}$ and $\restrict{\sigma_1}{V_2} = \restrict{\sigma_2}{V_1}$. Hence,~$\sigma_i$ is accepted by $\mathcal{A}_i$. Thus, by definition of automaton composition and since $\sigma_1$ and $\sigma_2$ agree on shared variables, $\sigma_1 \cup \sigma_2$ is accepted by $\mathcal{A}_1 \pc \mathcal{A}_2$. Thus, $\sigma_1 \cup \sigma_2 \in \mathcal{L}(\mathcal{A}_1 \pc \mathcal{A}_2)$ and hence $\sigma \in \mathcal{L}(\mathcal{A}_1 \pc \mathcal{A}_2)$ holds.\qed
\end{proof}

Using the above lemma, we can formalize the independent sublanguage criterion directly on NBAs: Two automata $\mathcal{A}_1$, $\mathcal{A}_2$ are \emph{independent subautomata} of $\mathcal{A}$ if $\mathcal{A}_1 \pc \mathcal{A}_2$ is equivalent to $\mathcal{A}$. To apply \Cref{thm:equisynthesizeability_independent_sublanguages}, the alphabets of the subautomata may not share output variables. Our decomposition algorithm achieves this by constructing the subautomata from the initial automaton by projecting to disjoint sets of outputs. Intuitively, the projection to a set $X$ abstracts from the variables outside of $X$. Hence, it only captures the parts of the initial specification concerning the variables in $X$.
Formally: Let $\mathcal{A} = (Q,Q_0,\delta,F)$ be an NBA over alphabet~$2^V$ and let $X \subset V$. The \emph{projection of $\mathcal{A}$ to $X$} is the NBA $\project{\mathcal{A}}{X} = (Q,Q_0,\pi_X(\delta),F)$ over~$2^X$ with $\pi_X(\delta) = \{ (q,a,q') \mid \exists~ b \in 2^{V \setminus X}.~(q,a \cup b,q')\in\delta\}$.

\begin{algorithm}[t]
\SetKwInput{KwData}{Input} \SetKwInOut{KwResult}{Output} \SetKw{KwBy}{by}
\KwData{$\mathcal{A}$: NBA, \texttt{inp}: List Variable, \texttt{out}: List Variable}
\KwResult{\texttt{subautomata}: List (NBA, List Variable, List Variable)}
\If{\upshape{isNull \texttt{checkedSubsets}}}{
\texttt{checkedSubsets} $\leftarrow$ $\emptyset$
}
\texttt{subautomata} $\leftarrow$ [($\mathcal{A}$, \texttt{inp}, \texttt{out})] \\
\ForEach{\upshape{\texttt{X} $\subset$ \texttt{out}}}{\label{alg:automaton-based_decomposition:guess}
\texttt{Y} $\leftarrow$ \texttt{out}$\setminus$\texttt{X} \\
\If{\upshape{\texttt{X} $\not\in$ \texttt{checkedSubsets} $\land$ \texttt{Y} $\not\in$ \texttt{checkedSubsets}}}{\label{alg:automaton-based_decomposition:unchecked}
$\mathcal{A}_\texttt{X}$ $\leftarrow$ $\project{\mathcal{A}}{\texttt{X} \cup \texttt{inp}}$ \\
$\mathcal{A}_\texttt{Y}$ $\leftarrow$ $\project{\mathcal{A}}{\texttt{Y} \cup \texttt{inp}}$ \\
\If{\upshape{$\mathcal{L}(\mathcal{A}_\texttt{X}$ $\pc$ $\mathcal{A}_\texttt{Y})$~ $\subseteq$ $\mathcal{L}(\mathcal{A})$}}{\label{alg:automaton-based_decomposition:if}
\texttt{subautomata} $\leftarrow$ decompose($\mathcal{A}_\texttt{X}$, \texttt{inp}, \texttt{X}) $++$ decompose($\mathcal{A}_\texttt{Y}$, \texttt{inp}, \texttt{Y})\label{alg:automaton-based_decomposition:add}\\
break
}
}
\texttt{checkedSubsets} $\leftarrow$ \texttt{checkedSubsets} $\cup$ $\{\texttt{X},\texttt{Y}\}$\label{alg:automaton-based_decomposition:store}
}
\Return{\upshape{\texttt{subautomata}}}
\caption{Automaton Decomposition}\label{alg:automaton-based_decomposition}
\end{algorithm}

The decomposition algorithm for NBAs is described in \Cref{alg:automaton-based_decomposition}. It is a recursive algorithm that, starting with the initial automaton $\mathcal{A}$, guesses a subset~$\texttt{X}$ of the output variables $\texttt{out}$. It abstracts from the output variables outside of $\texttt{X}$ by building the projection $\mathcal{A}_\texttt{X}$ of~$\mathcal{A}$ to $\texttt{X} \cup \texttt{inp}$, where $\texttt{inp}$ is the set of input variables. Similarly, it builds the projection $\mathcal{A}_\texttt{Y}$ of~$\mathcal{A}$ to $\texttt{Y} \cup \texttt{inp}$, where $\texttt{Y} := \texttt{out} \setminus \texttt{X}$. By construction of $\mathcal{A}_\texttt{X}$ and $\mathcal{A}_\texttt{Y}$ and since both $\texttt{X} \cap \texttt{Y} = \emptyset$ and $\texttt{X} \cup \texttt{Y} = \texttt{out}$ hold, we have $\mathcal{L}(\mathcal{A}) \subseteq \mathcal{L}(\mathcal{A}_\texttt{X}$ $\pc$ $\mathcal{A}_\texttt{Y})$. Hence, if $\mathcal{L}(\mathcal{A}_\texttt{X}$ $\pc$ $\mathcal{A}_\texttt{Y}) \subseteq \mathcal{L}(\mathcal{A})$ holds, then $\mathcal{A}_\texttt{X}$ $\pc$ $\mathcal{A}_\texttt{Y}$ is equivalent to $\mathcal{A}$ and therefore $\mathcal{L}(\mathcal{A}_\texttt{X})$ and $\mathcal{L}(\mathcal{A}_\texttt{Y})$ are independent sublanguages of $\mathcal{L}(\mathcal{A})$. Thus, since $\texttt{X}$ and~$\texttt{Y}$ are disjoint and therefore $\mathcal{A}_\texttt{X}$ and $\mathcal{A}_\texttt{Y}$ do not share output variables, $\mathcal{A}_\texttt{X}$ and $\mathcal{A}_\texttt{Y}$ are a valid decomposition of~$\mathcal{A}$. The subautomata are then decomposed recursively.
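The two automaton operations that \Cref{alg:automaton-based_decomposition} builds on, projection and parallel composition, can be implemented directly along the definitions above. The following self-contained Python sketch uses an explicit representation where each letter is the set of variables that hold; it is meant for illustration only, and the language-containment check of \Cref{alg:automaton-based_decomposition} is not shown.

\begin{verbatim}
from itertools import combinations

class NBA:
    """NBA over 2^V: transitions are (q, letter, q') triples,
    where a letter is a frozenset of the variables that hold."""
    def __init__(self, states, initial, trans, accepting):
        self.states, self.initial = states, initial
        self.trans, self.accepting = trans, accepting

def project(aut, keep):
    """Projection to `keep`: restrict each letter to `keep`,
    abstracting from all other variables."""
    return NBA(aut.states, aut.initial,
               {(q, a & keep, p) for (q, a, p) in aut.trans},
               aut.accepting)

def compose(a1, a2, v1, v2):
    """Parallel composition over 2^(v1 | v2): a joint letter is
    allowed iff both automata allow its restrictions."""
    vs = sorted(v1 | v2)
    letters = [frozenset(c) for r in range(len(vs) + 1)
               for c in combinations(vs, r)]
    trans = {((q1, q2), a, (p1, p2))
             for a in letters
             for (q1, b1, p1) in a1.trans if b1 == a & v1
             for (q2, b2, p2) in a2.trans if b2 == a & v2}
    return NBA({(q1, q2) for q1 in a1.states for q2 in a2.states},
               {(q1, q2) for q1 in a1.initial for q2 in a2.initial},
               trans,
               {(q1, q2) for q1 in a1.accepting for q2 in a2.accepting})
\end{verbatim}

A production implementation would represent edge labels symbolically (e.g.\@\xspace, as BDDs) rather than enumerating all letters, since the explicit alphabet grows exponentially in the number of variables; the language-containment check itself, e.g.\@\xspace, via complementation and emptiness checking, is one of the expensive automaton operations mentioned in the introduction.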
If no further decomposition is possible, the algorithm returns the subautomata. By only considering unexplored subsets of output variables, no subset combination $\texttt{X}, \texttt{Y}$ is checked twice. \begin{figure} \caption{NBA $\mathcal{A}$ for $\varphi = \protect\LTLdiamond o_1 \land \protect\LTLsquare(i \rightarrow \protect\LTLdiamond o_2)$. Accepting states are marked with double circles.} \label{fig:initial_automaton_example} \end{figure} \begin{figure} \caption{Minimized subautomata of $\mathcal{A}$ computed by \Cref{alg:automaton-based_decomposition}: the projections of $\mathcal{A}$ to $\{o_1\} \cup I$ and to $\{o_2\} \cup I$.} \label{fig:projection_1_min} \label{fig:projection_2_min} \label{fig:projections_example_minimized} \end{figure} As an example for the specification decomposition algorithm based on NBAs, consider the specification $\varphi=\LTLdiamond o_1 \land \LTLsquare (i \rightarrow \LTLdiamond o_2)$ for inputs $I = \{i\}$ and outputs $O = \{o_1,o_2\}$. The NBA~$\mathcal{A}$ that accepts $\mathcal{L}(\varphi)$ is depicted in \Cref{fig:initial_automaton_example}. The (minimized) subautomata obtained with \Cref{alg:automaton-based_decomposition} are shown in \Cref{fig:projection_1_min,fig:projection_2_min}. Clearly, $V_1 \cap V_2 \subseteq I$ holds. Moreover, their parallel composition is exactly $\mathcal{A}$ depicted in \Cref{fig:initial_automaton_example} and therefore their parallel composition accepts exactly those words that satisfy $\varphi$. For a slightly modified specification $\varphi' = \LTLdiamond o_1 \lor \LTLsquare (i \rightarrow \LTLdiamond o_2)$, however, \Cref{alg:automaton-based_decomposition} does not decompose the NBA $\mathcal{A}'$ with $\mathcal{L}(\mathcal{A}') = \mathcal{L}(\varphi')$ depicted in \Cref{fig:initial_automaton_example_2}: The only possible decomposition is $\texttt{X} = \{o_1\}$, $\texttt{Y} = \{o_2\}$ (or vice-versa), yielding NBAs $\mathcal{A}'_\texttt{X}$ and $\mathcal{A}'_\texttt{Y}$ that accept every infinite word. Clearly, $\mathcal{L}(\mathcal{A}'_\texttt{X} \pc \mathcal{A}'_\texttt{Y}) \not\subseteq \mathcal{L}(\mathcal{A}')$ since $\mathcal{L}(\mathcal{A}'_\texttt{X} \pc \mathcal{A}'_\texttt{Y}) = (2^{I \cup O})^\omega$ and hence $\mathcal{A}'_\texttt{X}$ and $\mathcal{A}'_\texttt{Y}$ are not a valid decomposition. \begin{figure} \caption{NBA $\mathcal{A}'$ for $\varphi' = \protect\LTLdiamond o_1 \lor \protect\LTLsquare(i \rightarrow \protect\LTLdiamond o_2)$. Accepting states are marked with double circles.} \label{fig:initial_automaton_example_2} \end{figure} \Cref{alg:automaton-based_decomposition} ensures soundness and completeness of modular synthesis: The subspecifications do not share output variables and they are equirealizable to the initial specification. This follows from the construction of the subautomata, \Cref{lem:correctness_parallel_composition_automata}, and \Cref{thm:equisynthesizeability_independent_sublanguages}: \begin{theorem}\label{thm:correctness_automaton_decomposition} Let $\mathcal{A}$ be an NBA over alphabet $2^V$. \Cref{alg:automaton-based_decomposition} terminates on $\mathcal{A}$ with a set $\mathcal{S} = \{\mathcal{A}_1, \dots, \mathcal{A}_k\}$ of NBAs with $\mathcal{L}(\mathcal{A}_i) \subseteq (2^{V_i})^\omega$, where $V_i \cap V_j \subseteq I$ for $1 \leq i,j \leq k$ with $i \neq j$, $V = \bigcup_{1 \leq i \leq k} V_i$, and $\mathcal{A}$ is realizable if, and only if, $\mathcal{A}_i$ is realizable for all $\mathcal{A}_i \in \mathcal{S}$. \end{theorem} \begin{proof} Clearly, there are NBAs that cannot be decomposed further, e.g.\@\xspace, automata whose alphabet contains only one output variable.
Thus, since there are only finitely many subsets of $O$, \Cref{alg:automaton-based_decomposition} terminates. We show that the algorithm returns subspecifications that only share input variables, define all output variables of the system, and that are independent sublanguages of the initial specification by structural induction on the initial automaton: For any automaton $\mathcal{A}'$ that is not further decomposable, \Cref{alg:automaton-based_decomposition} returns a list $\mathcal{S}'$ solely containing $\mathcal{A}'$. Clearly, the parallel composition of all automata in $\mathcal{S}'$ is equivalent to $\mathcal{A}'$ and the alphabets of the languages of the subautomata do not share output variables. Next, let $\mathcal{A}'$ be an NBA such that there exists a set \texttt{X} $\subset \texttt{out}$ with $\mathcal{L}(\project{\mathcal{A}'}{\texttt{X}\cup\texttt{inp}} \pc \project{\mathcal{A}'}{\texttt{Y} \cup \texttt{inp}}) \subseteq \mathcal{L}(\mathcal{A}')$, where $\texttt{Y} = \texttt{out} \setminus \texttt{X}$. By construction of $\project{\mathcal{A}'}{\texttt{X}\cup\texttt{inp}}$ and $\project{\mathcal{A}'}{\texttt{Y}\cup\texttt{inp}}$, we have $\{ \restrict{\sigma}{\texttt{Z}\cup\texttt{inp}} \mid \sigma \in \mathcal{L}(\mathcal{A}') \} \subseteq \mathcal{L}(\project{\mathcal{A}'}{\texttt{Z}\cup\texttt{inp}})$ for $\texttt{Z} \in \{ \texttt{X},\texttt{Y} \}$. Since both $\texttt{X} \cap \texttt{Y} = \emptyset$ and $\texttt{X} \cup \texttt{Y} = \texttt{out}$ hold by construction of $\texttt{X}$ and $\texttt{Y}$, $(\texttt{X}\cup\texttt{inp}) \cap (\texttt{Y}\cup\texttt{inp}) \subseteq \texttt{inp}$ as well as $(\texttt{X}\cup\texttt{inp}) \cup (\texttt{Y}\cup\texttt{inp}) = \texttt{inp} \cup \texttt{out}$ follows. Therefore, $\mathcal{L}(\mathcal{A}') \subseteq \mathcal{L}(\project{\mathcal{A}'}{\texttt{X}\cup\texttt{inp}} \pc \project{\mathcal{A}'}{\texttt{Y} \cup \texttt{inp}})$ holds and thus, $\project{\mathcal{A}'}{\texttt{X}\cup\texttt{inp}} \pc \project{\mathcal{A}'}{\texttt{Y} \cup \texttt{inp}} \equiv \mathcal{A}'$ follows. By induction hypothesis, the calls to the algorithm with $\project{\mathcal{A}'}{\texttt{X}\cup\texttt{inp}}$ and $\project{\mathcal{A}'}{\texttt{Y} \cup \texttt{inp}}$ return lists $\mathcal{S}'_\texttt{X}$ and $\mathcal{S}'_{\texttt{Y}}$, respectively, where the parallel composition of all automata in $\mathcal{S}'_\texttt{Z}$ is equivalent to $\project{\mathcal{A}'}{\texttt{Z} \cup \texttt{inp}}$ for $\texttt{Z} \in \{\texttt{X}, \texttt{Y}\}$. Thus, the parallel composition of all automata in the concatenation of $\mathcal{S}'_\texttt{X}$ and~$\mathcal{S}'_{\texttt{Y}}$ is equivalent to $\project{\mathcal{A}'}{\texttt{X}\cup\texttt{inp}} \pc \project{\mathcal{A}'}{\texttt{Y} \cup \texttt{inp}}$ and thus, by construction of \texttt{X}, to~$\mathcal{A}'$. Hence, their languages are independent sublanguages of $\mathcal{A}'$. Furthermore, by induction hypothesis, the alphabets of the automata in $\mathcal{S}'_\texttt{Z}$ do not share output variables for $\texttt{Z} \in \{\texttt{X}, \texttt{Y}\}$ and, by construction, they are subsets of the alphabet of $\project{\mathcal{A}'}{\texttt{Z}\cup\texttt{inp}}$. Hence, since clearly $(\texttt{X} \cup \texttt{inp}) \cap ((\texttt{out} \setminus \texttt{X}) \cup \texttt{inp}) \subseteq \texttt{inp}$ holds, the alphabets of the automata in the concatenation of $\mathcal{S}'_\texttt{X}$ and $\mathcal{S}'_{\texttt{Y}}$ do not share output variables.
Moreover, the union of the alphabets of the automata in $\mathcal{S}'_\texttt{Z}$ equals the alphabet of $\project{\mathcal{A}'}{\texttt{Z} \cup \texttt{inp}}$ for $\texttt{Z} \in \{\texttt{X}, \texttt{Y}\}$ by induction hypothesis. Since clearly $\texttt{X} \cup \texttt{Y} = \texttt{out}$, it follows that the union of the alphabets of the automata in the concatenation of $\mathcal{S}'_\texttt{X}$ and $\mathcal{S}'_{\texttt{Y}}$ equals $\texttt{inp} \cup \texttt{out}$. Thus, $\bigcup_{1 \leq i \leq k} V_i = V$ and $V_i \cap V_j \subseteq I$ for $1 \leq i,j \leq k$ with $i \neq j$. Moreover, $\mathcal{L}(\mathcal{A}_1), \dots, \mathcal{L}(\mathcal{A}_k)$ are independent sublanguages of $\mathcal{L}(\mathcal{A})$. Thus, by \Cref{thm:equisynthesizeability_independent_sublanguages}, $\mathcal{A}$ is realizable if, and only if, all $\mathcal{A}_i \in \mathcal{S}$ are realizable.\qed \end{proof} Since \Cref{alg:automaton-based_decomposition} is called recursively on every subautomaton obtained by projection, it directly follows that the nondeterministic Büchi automata contained in the returned list are not further decomposable: \begin{theorem} Let $\mathcal{A}$ be an NBA and let $\mathcal{S}$ be the set of NBAs that \Cref{alg:automaton-based_decomposition} returns on input $\mathcal{A}$. Then, for each $\mathcal{A}_i \in \mathcal{S}$ over alphabet $2^{V_i}$, there are no NBAs $\mathcal{A}'$, $\mathcal{A}''$ over alphabets $2^{V'}$ and $2^{V''}$ with $V_i = V' \cup V''$ such that $\mathcal{A}_i = \mathcal{A}' \pc \mathcal{A}''$ holds. \end{theorem} Hence, \Cref{alg:automaton-based_decomposition} yields \emph{perfect} decompositions and is semantically precise. Yet, it performs several expensive automaton operations such as projection, composition, and language containment checks. For large automata, this is infeasible. For specifications given as LTL formulas, we thus present an approximate decomposition algorithm in the next section that does not necessarily yield non-decomposable subspecifications but is free of the expensive automaton operations. \section{Decomposition of LTL Formulas} An LTL specification can be decomposed by translating it into an equivalent NBA and by then applying \Cref{alg:automaton-based_decomposition}. To circumvent expensive automaton operations, though, we introduce an approximate decomposition algorithm that, in contrast to \Cref{alg:automaton-based_decomposition}, does not necessarily find all possible decompositions. In the following, we assume that $V = \propositions{\varphi}$ holds for the initial specification~$\varphi$. Note that any implementation for the variables in $\propositions{\varphi}$ can easily be extended to one for the variables in $V$ if $\propositions{\varphi} \subset V$ holds by ignoring the inputs in $I \setminus \propositions{\varphi}$ and by choosing arbitrary valuations for the outputs in $O \setminus \propositions{\varphi}$. The main idea of the decomposition algorithm is to rewrite the initial LTL formula $\varphi$ into a conjunctive form $\varphi=\varphi_1 \land \dots \land \varphi_k$ with as many top-level conjuncts as possible by applying distributivity and pushing temporal operators inwards whenever possible (a sketch of this rewriting step is given below). Then, we build subspecifications $\varphi_i$ consisting of subsets of the conjuncts. Each conjunct occurs in exactly one subspecification. We say that conjuncts are \emph{independent} if they do not share output variables.
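To illustrate the rewriting step, the following sketch applies two of the standard equivalences that expose top-level conjuncts, namely $\LTLsquare(\varphi \land \psi) \equiv \LTLsquare\varphi \land \LTLsquare\psi$ and $\varphi \lor (\psi \land \chi) \equiv (\varphi \lor \psi) \land (\varphi \lor \chi)$. It is our own illustration, not the rewriting engine of the tool (which relies on SyFCo); formulas are modeled as nested tuples.
\begin{verbatim}
def rewrite(f):
    # Formulas are tuples: ('ap', x), ('and'|'or', l, r),
    # ('G'|'X', sub).  G and X distribute over conjunction:
    #   G(a & b) = Ga & Gb,   X(a & b) = Xa & Xb.
    # Note that F (eventually) does NOT distribute over
    # conjunction, so F-subformulas are left untouched here.
    if f[0] in ('G', 'X'):
        sub = rewrite(f[1])
        if sub[0] == 'and':
            return ('and', rewrite((f[0], sub[1])),
                           rewrite((f[0], sub[2])))
        return (f[0], sub)
    if f[0] == 'or':
        l, r = rewrite(f[1]), rewrite(f[2])
        # distributivity lifts conjunctions over disjunctions:
        #   a | (b & c) = (a | b) & (a | c)
        if l[0] == 'and':
            return ('and', rewrite(('or', l[1], r)),
                           rewrite(('or', l[2], r)))
        if r[0] == 'and':
            return ('and', rewrite(('or', l, r[1])),
                           rewrite(('or', l, r[2])))
        return ('or', l, r)
    if f[0] == 'and':
        return ('and', rewrite(f[1]), rewrite(f[2]))
    return f          # atomic propositions, other operators

def conjuncts(f):
    # flatten the rewritten formula into its top-level conjuncts
    return conjuncts(f[1]) + conjuncts(f[2]) if f[0] == 'and' else [f]
\end{verbatim}
Applying \texttt{conjuncts(rewrite(f))} yields the list $\varphi_1, \dots, \varphi_k$ of top-level conjuncts that the decomposition below operates on.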
Given an LTL formula with two independent conjuncts, the languages of the conjuncts are independent sublanguages of the language of the whole formula: \begin{lemma}\label{lem:independent_sublanguages_conjuncts} Let $\varphi = \varphi_1 \land \varphi_2$ be an LTL formula over atomic propositions $V$ with conjuncts $\varphi_1$ and $\varphi_2$ over $V_1$ and $V_2$, respectively, with $V_1 \cup V_2 = V$. Then, $\mathcal{L}(\varphi_1)$ and~$\mathcal{L}(\varphi_2)$ are independent sublanguages of $\mathcal{L}(\varphi)$. \end{lemma} \begin{proof} First, let $\sigma \in \mathcal{L}(\varphi)$. Then, $\sigma \in \mathcal{L}(\varphi_i)$ holds for all $i \in \{1,2\}$. Since $\propositions{\varphi_i} \subseteq V_i$ holds and since the satisfaction of $\varphi_i$ only depends on the valuations of the variables in $\propositions{\varphi_i}$, we have $\restrict{\sigma}{V_i} \in \mathcal{L}(\varphi_i)$. Since clearly $\restrict{(\restrict{\sigma}{V_1})}{V_2} = \restrict{(\restrict{\sigma}{V_2})}{V_1}$ holds, we have $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2}) \in \mathcal{L}(\varphi_1) \pc \mathcal{L}(\varphi_2)$. Since $V_1 \cup V_2 = V$ holds by assumption, we have $\sigma = (\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2})$ and hence $\sigma \in \mathcal{L}(\varphi_1) \pc \mathcal{L}(\varphi_2)$ follows. Next, let $\sigma \in \mathcal{L}(\varphi_1) \pc \mathcal{L}(\varphi_2)$. Then, there are words $\sigma_1 \in \mathcal{L}(\varphi_1)$, $\sigma_2 \in \mathcal{L}(\varphi_2)$ with $\restrict{\sigma_1}{V_2} = \restrict{\sigma_2}{V_1}$ and $\sigma = \sigma_1 \cup \sigma_2$. Since $\sigma_1$ and $\sigma_2$ agree on shared variables, $\sigma \in \mathcal{L}(\varphi_1)$ and $\sigma \in \mathcal{L}(\varphi_2)$. Hence, $\sigma \in \mathcal{L}(\varphi_1 \land \varphi_2)$.\qed \end{proof} Our decomposition algorithm then ensures that different subspecifications share only input variables by merging conjuncts that share output variables into the same subspecification. Then, equirealizability of the initial formula and the subformulas follows directly from \Cref{thm:equisynthesizeability_independent_sublanguages} and \Cref{lem:independent_sublanguages_conjuncts}: \begin{corollary}\label{cor:equisynthesizeability_independent_conjuncts} Let $\varphi = \varphi_1 \land \varphi_2$ be an LTL formula over $V$ with conjuncts $\varphi_1$, $\varphi_2$ over $V_1$, $V_2$, respectively, with $V_1 \cup V_2 = V$ and $V_1 \cap V_2 \subseteq I$. Then, $\varphi$ is realizable if, and only if, both $\varphi_1$ and $\varphi_2$ are realizable. \end{corollary} To determine which conjuncts of an LTL formula $\varphi = \varphi_1 \land \dots \land \varphi_n$ share variables, we build the \emph{dependency graph} $\depGraph{\varphi} = (V,E)$ of the output variables, where $V = O$ and $(a,b) \in E$ if, and only if, $a \in \propositions{\varphi_i}$ and $b \in \propositions{\varphi_i}$ for some $1 \leq i \leq n$. Intuitively, outputs $a$ and $b$ that are contained in the same connected component of $\depGraph{\varphi}$ depend on each other in the sense that they either occur in the same conjunct or that they occur in conjuncts that are connected by other output variables. Hence, to ensure that subspecifications do not share output variables, conjuncts containing $a$ or $b$ need to be assigned to the same subspecification.
Output variables that are contained in different connected components, however, are not linked and therefore implementations for their requirements can be synthesized independently, i.e.\@\xspace, with independent subspecifications. \begin{algorithm}[t] \SetKwInput{KwData}{Input} \SetKwInOut{KwResult}{Output} \SetKw{KwBy}{by} \KwData{$\varphi$: LTL, \texttt{inp}: List Variable, \texttt{out}: List Variable} \KwResult{\texttt{specs}: List (LTL, List Variable, List Variable)} $\varphi$ $\leftarrow$ rewrite$(\varphi)$ \\ \texttt{formulas} $\leftarrow$ removeTopLevelConjunction$(\varphi)$ \\ \texttt{graph} $\leftarrow$ buildDependencyGraph($\varphi$, \texttt{out}) \\ \texttt{components} $\leftarrow$ \texttt{graph}.connectedComponents() \\ \texttt{specs} $\leftarrow$ new LTL[$|$\texttt{components}$|$+1] ~\tcp{initialized with true} \ForEach{\upshape{$\psi$ $\in$ \texttt{formulas}}}{ \texttt{propositions} $\leftarrow$ getProps$(\psi)$ \\ \ForEach{\upshape{(\texttt{spec},\texttt{set}) $\in$ zip \texttt{specs} (\texttt{components} $++$ [\texttt{inp}])}}{\label{alg:rewriting-based_decomposition:zip} \If{\upshape{\texttt{propositions} $\cap$ \texttt{set} $\neq$ $\emptyset$}} { \texttt{spec}.And$(\psi)$\label{alg:rewriting-based_decomposition:add}\\ break\label{alg:rewriting-based_decomposition:break} } } } \Return{\emph{map ($\lambda \varphi \rightarrow$ ($\varphi$, inputs($\varphi$), outputs($\varphi$)))} \upshape{\texttt{specs}}} \caption{LTL Decomposition}\label{alg:rewriting-based_decomposition} \end{algorithm} \Cref{alg:rewriting-based_decomposition} describes how an LTL formula is decomposed into subspecifications. First, the formula is rewritten into conjunctive form. Then, the dependency graph is built and the connected components are computed. For each connected component as well as for all input variables, a subspecification is built by adding the conjuncts containing variables of the respective connected component or an input variable, respectively. Considering the input variables as well is necessary in order to assign every conjunct, including input-only ones, to at least one subspecification. By construction, no conjunct is added to the subspecifications of two different connected components. Yet, a conjunct could be added to both a subspecification of a connected component and the subspecification for the input-only conjuncts. This is circumvented by the \emph{break} in \Cref{alg:rewriting-based_decomposition:break}. Hence, every conjunct is added to exactly one subspecification. To define the input and output variables for the synthesis subtasks, the algorithm assigns the inputs and outputs occurring in~$\varphi_i$ to the subspecification $\varphi_i$. While restricting the inputs is not necessary for correctness, it may improve the runtime of the synthesis task. As an example for the decomposition of LTL formulas, consider the specification $\varphi = \LTLdiamond o_1 \land \LTLsquare(i \rightarrow \LTLdiamond o_2)$ with $I = \{i\}$ and $O = \{o_1,o_2\}$ again. Since $\varphi$ is already in conjunctive form, no rewriting has to be performed. The two conjuncts of $\varphi$ do not share any variables and therefore the dependency graph $\mathcal{D}_\varphi$ does not contain any edges. Therefore, we obtain two subspecifications $\varphi_1 = \LTLdiamond o_1$ and $\varphi_2 = \LTLsquare(i \rightarrow \LTLdiamond o_2)$.
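The core of \Cref{alg:rewriting-based_decomposition} can be sketched in a few lines. Instead of an explicit graph, the sketch uses a union-find structure over the output variables, which computes the same connected components; \texttt{props} is an assumed helper returning the atomic propositions of a conjunct, and all names are our own.
\begin{verbatim}
def decompose_ltl(conjuncts, inputs, outputs, props):
    # union-find over output variables; outputs occurring in a
    # common conjunct end up in the same connected component
    parent = {o: o for o in outputs}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for c in conjuncts:
        outs = [p for p in props(c) if p in outputs]
        for o in outs[1:]:
            parent[find(o)] = find(outs[0])

    # one subspecification per component, plus one for conjuncts
    # over input variables only (the extra [inp] entry above)
    specs, input_only = {}, []
    for c in conjuncts:
        outs = [p for p in props(c) if p in outputs]
        if outs:
            specs.setdefault(find(outs[0]), []).append(c)
        else:
            input_only.append(c)
    return list(specs.values()) + ([input_only] if input_only else [])
\end{verbatim}
As in \Cref{alg:rewriting-based_decomposition}, every conjunct is assigned to exactly one subspecification: either to the component of its output variables or, if it contains none, to the input-only subspecification.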
Soundness and completeness of modular synthesis with \Cref{alg:rewriting-based_decomposition} as a decomposition algorithm for LTL formulas follows directly from \Cref{cor:equisynthesizeability_independent_conjuncts} if the subspecifications do not share any output variables: \begin{theorem} Let $\varphi$ be an LTL formula over $V$. Then, \Cref{alg:rewriting-based_decomposition} terminates on $\varphi$ with a set $\mathcal{S}=\{\varphi_1, \dots, \varphi_k\}$ of LTL formulas with $\mathcal{L}(\varphi_i) \subseteq (2^{V_i})^\omega$ such that $V_i \cap V_j \subseteq I$ for $1 \leq i,j \leq k$ with $i \neq j$, $\bigcup_{1 \leq i \leq k} V_i = V$, and such that $\varphi$ is realizable if, and only if, for all subspecifications $\varphi_i \in \mathcal{S}$, $\varphi_i$ is realizable. \end{theorem} \begin{proof} Since an output variable is part of exactly one connected component and since all conjuncts containing an output are contained in the same subspecification, every output is part of exactly one subspecification. Therefore, $V_i \cap V_j \subseteq I$ holds for $1 \leq i,j \leq k$ with $i \neq j$. Moreover, the last component added in \Cref{alg:rewriting-based_decomposition:zip} contains all inputs. Hence, all variables that occur in a conjunct of $\varphi$ are featured in at least one subspecification. Thus, $\bigcup_{1\leq i \leq k} V_i = \propositions{\varphi}$ holds and hence, since $V = \propositions{\varphi}$ by assumption, $\bigcup_{1\leq i \leq k} V_i = V$ follows. Therefore, equirealizability of $\varphi$ and the formulas in $\mathcal{S}$ directly follows with \Cref{cor:equisynthesizeability_independent_conjuncts}.\qed \end{proof} While \Cref{alg:rewriting-based_decomposition} is simple and ensures soundness and completeness of modular synthesis, it strongly depends on the structure of the formula: When rewriting formulas in assume-guarantee format, i.e.\@\xspace, formulas of the form $\varphi = \bigwedge^m_{i=1} \varphi_i \rightarrow \bigwedge^n_{j=1} \psi_j$, to a conjunctive form, the conjuncts contain both assumptions $\varphi_i$ and guarantees~$\psi_j$. Hence, if $a,b \in O$ occur in assumption~$\varphi_i$ and guarantee $\psi_j$, respectively, they are dependent. Thus, all conjuncts featuring $a$ or $b$ are contained in the same subspecification according to \Cref{alg:rewriting-based_decomposition}. Yet, $\psi_j$ might be realizable even without~$\varphi_i$. An algorithm accounting for this might yield further decompositions and thus smaller synthesis subtasks. In the following, we present a criterion for dropping assumptions while maintaining equirealizability. Intuitively, we can drop an assumption $\varphi_i$ for a guarantee~$\psi_j$ if they do not share any variable. However, if $\varphi_i$ can be violated by the system, i.e.\@\xspace, if $\neg \varphi_i$ is realizable, equirealizability is not guaranteed when dropping $\varphi_i$. For instance, consider the formula $\varphi = \LTLdiamond(i_1 \land o_1) \rightarrow \LTLsquare (i_2 \land o_2)$, where $I = \{i_1,i_2\}$ and $O = \{o_1,o_2\}$. Although assumption and guarantee do not share any variables, the assumption cannot be dropped: An implementation that never sets $o_1$ to $\mathit{true}$ satisfies $\varphi$ but $\LTLsquare(i_2 \land o_2)$ is not realizable. Furthermore, dependencies between input variables may yield unrealizability if an assumption is dropped as information about the remaining inputs might get lost. For instance, in the formula $\varphi \rightarrow \psi$ with $\varphi = (\LTLsquare i_1
\rightarrow i_2) \land (\neg\LTLsquare i_1 \rightarrow i_3) \land (i_2 \leftrightarrow i_4) \land (i_3 \leftrightarrow \neg i_4)$ and $\psi = \LTLsquare i_1 \leftrightarrow o$, where $I = \{i_1,i_2,i_3,i_4\}$ and $O = \{o\}$, no assumption can be dropped: Otherwise the information about the global behavior of $i_1$, which is crucial for the existence of an implementation, is incomplete. These observations lead to the following criterion for safely dropping assumptions. \begin{lemma}\label{lem:assumption_dropping} Let $\varphi = (\varphi_1 \land \varphi_2) \rightarrow \psi$ be an LTL~formula with $\propositions{\varphi_1} \cap \propositions{\varphi_2} = \emptyset$, $\propositions{\varphi_2} \cap \propositions{\psi} = \emptyset$. Let~$\neg\varphi_2$ be unrealizable. Then, $\varphi_1 \rightarrow \psi$ is realizable if, and only if, $\varphi$ is realizable. \end{lemma} \begin{proof} Let $V_1 := \propositions{\varphi_1} \cup \propositions{\psi}$, $I_1 := I \cap V_1$, and $O_1 := O \cap V_1$. First, let $\varphi_1 \rightarrow \psi$ be realizable. Then there is an implementation $f_1: (2^{V_1})^* \times 2^{I_1} \rightarrow 2^{O_1}$ that realizes $\varphi_1 \rightarrow \psi$. From $f_1$, we construct a strategy $f:(2^V)^* \times 2^I \rightarrow 2^O$ as follows: Let $\mu \in 2^{O \setminus O_1}$ be an arbitrary valuation of the outputs outside of $O_1$. Then, let $f(\sigma, \inp{i}) := f_1(\restrict{\sigma}{V_1},\inp{i}\cap I_1) \cup \mu$. Let $\sigma \in \compatibleWords{f}$. Then we have $f(\sigma_1 \dots \sigma_{n-1}, \sigma_n \cap I) = \sigma_n \cap O$ for all $n \in \mathbb{N}$ and thus $f_1(\restrict{(\sigma_1 \dots \sigma_{n-1})}{V_1}, \sigma_n \cap I_1) = \sigma_n \cap O_1$ follows by construction of $f$. Hence, $\restrict{\sigma}{V_1} \in \compatibleWords{f_1}$ holds and thus, since $f_1$ realizes $\varphi_1 \rightarrow \psi$ by assumption, $\restrict{\sigma}{V_1} \in \mathcal{L}(\varphi_1 \rightarrow \psi)$. Since $\propositions{\varphi_1} \cap \propositions{\varphi_2} = \emptyset$ and $\propositions{\varphi_2} \cap \propositions{\psi} = \emptyset$, we have $\propositions{\varphi_2} \cap V_1 = \emptyset$. Hence, the valuations of the variables in $\propositions{\varphi_2}$ do not affect the satisfaction of $\varphi_1 \rightarrow \psi$. Thus, we have $(\restrict{\sigma}{V_1}) \cup \sigma' \in \mathcal{L}(\varphi_1 \rightarrow \psi)$ for any $\sigma' \in (2^\propositions{\varphi_2})^\omega$. In particular, $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{\propositions{\varphi_2}}) \in \mathcal{L}(\varphi_1 \rightarrow \psi)$. Since $\propositions{\varphi} = V$ by assumption, $V = V_1 \cup \propositions{\varphi_2}$ holds and thus $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{\propositions{\varphi_2}}) = \sigma$. Hence, $\sigma \in \mathcal{L}(\varphi_1 \rightarrow \psi)$ holds and thus, since $\varphi_1 \rightarrow \psi$ implies $(\varphi_1 \land \varphi_2) \rightarrow \psi$, $\sigma \in \mathcal{L}(\varphi)$ follows. Hence, $f$ realizes $\varphi$. Next, let $(\varphi_1 \land \varphi_2) \rightarrow \psi$ be realizable. Then, there is an implementation $f: (2^V)^* \times 2^I \rightarrow 2^O$ that realizes $(\varphi_1 \land \varphi_2) \rightarrow \psi$. Since $\neg\varphi_2$ is unrealizable, there is a counterstrategy $f^c_2: (2^{\propositions{\varphi_2}})^* \rightarrow 2^{I \cap \propositions{\varphi_2}}$ for $\neg \varphi_2$ and all words compatible with $f^c_2$ satisfy $\varphi_2$.
Given a finite sequence $\eta \in (2^{V_1})^*$, let $\hat{\eta} \in (2^V)^*$ be the sequence obtained by lifting $\eta$ to $V$ using the output of~$f^c_2$. Formally, let $\hat{\eta} = h(\varepsilon,\eta)$, where $h: (2^V)^* \times (2^{V_1})^* \rightarrow (2^V)^*$ is a function defined by $h(\tau,\varepsilon) = \tau$ for the empty word~$\varepsilon$ and, where $\boldsymbol{\cdot}$ denotes concatenation, $h(\tau,s \boldsymbol{\cdot} \eta) = h(\tau \boldsymbol{\cdot} ((s \cap I) \cup c \cup f(\tau, ((s \cap I) \cup c) \cap I)), \eta)$ with $c = f^c_2(\restrict{\tau}{\propositions{\varphi_2}})$. We construct an implementation $g: (2^{V_1})^* \times 2^{I_1} \rightarrow 2^{O_1}$ based on $f$ and $\hat{\eta}$ as follows: $g(\eta,\inp{i}) := f(\hat{\eta}, \inp{i} \cup (f^c_2(\restrict{\hat{\eta}}{\propositions{\varphi_2}}) \cap I)) \cap O_1$. Let $\sigma \in \compatibleWords{g}$. Let $\sigma_\mathit{f}$ be the corresponding infinite sequence obtained from $g$ when not restricting the output of $f$ to $O_1$. Hence, $\restrict{\sigma_\mathit{f}}{V_1} = \sigma$. Clearly, by construction of $g$, we have $\sigma_\mathit{f} \in \compatibleWords{f}$ and hence, since~$f$ realizes~$\varphi$ by assumption, $\sigma_\mathit{f} \in \mathcal{L}(\varphi)$. Furthermore, we have $\sigma_\mathit{f} \in \mathcal{L}(\varphi_2)$ by construction of $g$ since $\hat{\eta}$ forces~$f$ to satisfy~$\varphi_2$. Hence, $\sigma_\mathit{f} \in \mathcal{L}(\varphi_1 \rightarrow \psi)$. Since $\varphi_2$ neither shares variables with $\varphi_1$ nor with $\psi$ by assumption, the satisfaction of $\varphi_1 \rightarrow \psi$ is not influenced by the variables outside of $V_1$. Thus, since we have $\restrict{\sigma_\mathit{f}}{V_1} = \sigma$ by construction, $\sigma \in \mathcal{L}(\varphi_1 \rightarrow \psi)$ follows. Hence, $g$ realizes $\varphi_1 \rightarrow \psi$.\qed \end{proof} By dropping assumptions, we are able to decompose LTL formulas of the form $\varphi = \bigwedge^m_{i=1} \varphi_i \rightarrow \bigwedge^n_{j=1} \psi_j$ in further cases: We rewrite $\varphi$ to $\bigwedge^n_{j=1}(\bigwedge^m_{i=1} \varphi_i \rightarrow \psi_j)$ and then drop assumptions for the individual guarantees. If the resulting subspecifications only share input variables, they are equirealizable to $\varphi$. \begin{theorem}\label{thm:ltl_decomposition_with_assumptions} Let $\varphi = (\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow (\psi_1 \land \psi_2)$ be an LTL formula over $V$, where $\propositions{\varphi_3} \subseteq I$ and $\propositions{\psi_1} \cap \propositions{\psi_2} \subseteq I$. Let $\propositions{\varphi_i} \cap \propositions{\varphi_j} = \emptyset$ for $i,j \in \{1,2,3\}$ with $i \neq j$, and $\propositions{\varphi_i} \cap \propositions{\psi_{3-i}} = \emptyset$ for $i \in \{1,2\}$. Let $\neg(\varphi_1 \land \varphi_2 \land \varphi_3)$ be unrealizable. Then, $\varphi$ is realizable if, and only if, both $\varphi' = (\varphi_1 \land \varphi_3) \rightarrow \psi_1$ and $\varphi'' = (\varphi_2 \land \varphi_3) \rightarrow \psi_2$ are realizable. \end{theorem} \begin{proof} Define $V_i = \propositions{\varphi_i} \cup \propositions{\varphi_3} \cup \propositions{\psi_i}$ for $i \in \{1,2\}$. Since we have $V = \propositions{\varphi}$ by assumption, $V_1 \cup V_2 = V$ holds. With the assumptions made on $\varphi_1$, $\varphi_2$, $\varphi_3$, $\psi_1$, and $\psi_2$, we obtain $V_1 \cap V_2 \subseteq I$. First, let $\varphi$ be realizable and let $f:(2^V)^* \times 2^I \rightarrow 2^O$ be an implementation that realizes $\varphi$.
Let $\sigma \in \compatibleWords{f}$. Then, $\sigma \in \mathcal{L}(\varphi)$ and thus by the semantics of implication, $\restrict{\sigma}{(V \setminus \propositions{\psi_{3-i}})} \in \mathcal{L}((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_i)$ follows for $i \in \{1,2\}$. Hence, an implementation $f_i$ that behaves as $f$ restricted to $O \setminus \propositions{\psi_{3-i}}$ realizes $(\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_i$. By \Cref{lem:assumption_dropping}, $(\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_i$ and $(\varphi_i \land \varphi_3) \rightarrow \psi_i$ are equirealizable since $\varphi_1$, $\varphi_2$, and~$\varphi_3$ as well as $\varphi_{3-i}$ and $\psi_i$ do not share any variables. Thus, there exist implementations $f_1$ and $f_2$ realizing $(\varphi_1 \land \varphi_3) \rightarrow \psi_1$ and $(\varphi_2 \land \varphi_3) \rightarrow \psi_2$, respectively. Next, let both $(\varphi_1 \land \varphi_3) \rightarrow \psi_1$ and $(\varphi_2 \land \varphi_3) \rightarrow \psi_2$ be realizable and let $f_i: (2^{V_i})^* \times 2^{I \cap V_i} \rightarrow 2^{O \cap V_i}$ be an implementation realizing $(\varphi_i \land \varphi_3) \rightarrow \psi_i$. We construct an implementation $f:(2^V)^* \times 2^I \rightarrow 2^O$ from $f_1$ and $f_2$ as follows: $f(\sigma,\inp{i}) := f_1(\restrict{\sigma}{V_1},\inp{i} \cap V_1) \cup f_2(\restrict{\sigma}{V_2},\inp{i} \cap V_2)$. Let $\sigma \in \compatibleWords{f}$. Since $V_1$ and $V_2$ do not share any output variables, $\restrict{\sigma}{V_i} \in \mathcal{L}((\varphi_i \land \varphi_3) \rightarrow \psi_i)$ follows from the construction of $f$. Moreover, $\restrict{\sigma}{V_1}$ and $\restrict{\sigma}{V_2}$ agree on shared variables and thus $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2}) \in \mathcal{L}(\varphi' \land \varphi'')$ holds. Therefore, we have $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2}) \in \mathcal{L}(\varphi)$ as well by the semantics of conjunction and implication. Since $V_1 \cup V_2 = V$, we have $(\restrict{\sigma}{V_1}) \cup (\restrict{\sigma}{V_2}) = \sigma$ and thus $\sigma \in \mathcal{L}(\varphi)$. Hence, $f$ realizes $\varphi$.\qed \end{proof} Analyzing assumptions thus allows for decomposing LTL formulas in further cases and still ensures soundness and completeness of modular synthesis. In the following, we present an optimized LTL decomposition algorithm that incorporates assumption dropping into the search for independent conjuncts. Intuitively, the algorithm needs to identify variables that cannot be shared safely among subspecifications. If an \emph{assumption} contains such non-sharable variables, we say that it is \emph{bound} to guarantees since it can influence the possible decompositions. Otherwise, it is called \emph{free}. To determine which assumptions are relevant for decomposition, i.e.\@\xspace, which assumptions are \emph{bound assumptions}, we build a slightly modified version of the dependency graph that is only based on assumptions and not on all conjuncts of the formula. Moreover, all variables serve as the nodes of the graph, not only the output variables. An undirected edge between two variables in the modified dependency graph denotes that the variables occur in the same assumption. Variables that are contained in the same connected component as an output variable $o \in O$ are thus connected to $o$ over a path of one or more assumptions.
Therefore, they must not be shared among subspecifications as they might influence $o$ and thus may influence the decomposability of the specification. These variables are then called \emph{decomposition-critical}. Given the modified dependency graph, we can compute the decomposition-critical propositions with a simple depth-first search, as sketched below. \begin{algorithm}[t] \DontPrintSemicolon \KwIn{$\varphi$: LTL, \texttt{inp}: List Variable, \texttt{out}: List Variable} \KwResult{\texttt{specs}: List (LTL, List Variable, List Variable)} \texttt{assumptions} $\leftarrow$ getAssumptions($\varphi$)\; \texttt{guarantees} $\leftarrow$ getGuarantees($\varphi$)\; \texttt{decCritProps} $\leftarrow$ getDecCritProps($\varphi$)\; \texttt{graph} $\leftarrow$ buildDependencyGraph($\varphi$,\texttt{decCritProps})\; \texttt{components} $\leftarrow$ \texttt{graph}.connectedComponents()\; \texttt{specs} $\leftarrow$ new LTL[$|$\texttt{components}$|+1$]\; \texttt{freeAssumptions} $\leftarrow$[\ ]\; \ForEach{\upshape{$\psi \in$ \texttt{assumptions}}}{ \texttt{propositions} $\leftarrow$ \texttt{decCritProps} $\cap$ getProps($\psi$)\; \eIf{\upshape{$|$\texttt{propositions}$| = 0$}}{ \texttt{freeAssumptions}.append($\psi$)\; }{ \ForEach{\upshape{(\texttt{spec}, \texttt{set}) $\in$ zip \texttt{specs} (\texttt{components} $++$ [\texttt{inp}])}}{ \If{\upshape{\texttt{propositions} $\cap$ \texttt{set} $\neq \emptyset$}}{ \texttt{spec}.addAssumption($\psi$)\; break\; } } } } \ForEach{\upshape{$\psi \in$ \texttt{guarantees}}}{ \texttt{propositions} $\leftarrow$ \texttt{decCritProps} $\cap$ getProps($\psi$)\; \ForEach{\upshape{(\texttt{spec}, \texttt{set}) $\in$ zip \texttt{specs} (\texttt{components} $++$ [\texttt{inp}])}}{ \If{\upshape{\texttt{propositions} $\cap$ \texttt{set} $\neq \emptyset$}}{ \texttt{spec}.addGuarantee($\psi$)\; break\; } } } \KwRet{\upshape{addFreeAssumptions \texttt{specs freeAssumptions}}} \caption{Optimized LTL Decomposition Algorithm} \label{alg:optimized_decomposition} \end{algorithm} After computing the decomposition-critical propositions, we create the dependency graph and extract connected components in the same way as in \Cref{alg:rewriting-based_decomposition} to decompose the LTL specification. Instead of using only output variables as nodes of the graph, though, we use all decomposition-critical variables. We then exclude free assumptions and add all other assumptions to their respective subspecification similar to \Cref{alg:rewriting-based_decomposition}. We assign the guarantees to their subspecification in the same manner. Lastly, we add the remaining assumptions. Since all of these assumptions are free, they could be safely added to all subspecifications. Yet, to obtain small subspecifications, we only add them to subspecifications for which they are needed. Note that we have to add all assumptions featuring an input variable that occurs in the subspecification. Therefore, we analyze the assumptions and add them in one step, as a naive approach could have an unfavorable running time. The whole LTL decomposition algorithm with optimized assumption handling is shown in \Cref{alg:optimized_decomposition}. The decomposition algorithm does not check for assumption violations. The unrealizability of the negation of the dropped assumption, however, is an essential part of the criterion for assumption dropping (cf.\ \Cref{thm:ltl_decomposition_with_assumptions}).
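The depth-first search for decomposition-critical propositions mentioned above can be sketched as follows. It is an illustration under our own naming; \texttt{props} is again an assumed helper, and the search is written iteratively with an explicit stack.
\begin{verbatim}
def decomposition_critical(assumptions, outputs, props):
    # Modified dependency graph: nodes are all variables, edges
    # connect variables occurring in the same assumption.  Every
    # variable reachable from an output is decomposition-critical.
    adj = {}
    for a in assumptions:
        vs = list(props(a))
        for v in vs:
            adj.setdefault(v, set()).update(u for u in vs if u != v)

    critical = set(outputs)     # outputs themselves are critical
    stack = list(outputs)
    while stack:
        v = stack.pop()
        for u in adj.get(v, ()):
            if u not in critical:
                critical.add(u)
                stack.append(u)
    return critical
\end{verbatim}
An assumption none of whose propositions are decomposition-critical is free in the sense above and is set aside by \Cref{alg:optimized_decomposition}.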
Since the dropping criterion requires the negated assumptions to be unrealizable, we incorporate the check for assumption violations into the modular synthesis algorithm: Before decomposing the specification, we perform synthesis on the negated assumptions. If synthesis returns that the negated assumptions are realizable, the system is able to violate an assumption. The implementation satisfying the negated assumptions is then extended to an implementation for the whole specification that violates the assumptions and thus realizes the specification. Otherwise, if the negated assumptions are unrealizable, the conditions of \Cref{thm:ltl_decomposition_with_assumptions} are satisfied. Hence, we can use the decomposition algorithm and proceed as in \Cref{alg:compositional_synthesis}. The modified modular synthesis algorithm that incorporates the check for assumption violations is shown in \Cref{alg:compositional_synthesis2}. \begin{algorithm}[t] \DontPrintSemicolon \KwIn{\texttt{s}: Specification, \texttt{inp}: List Variable, \texttt{out}: List Variable} \KwResult{\texttt{realizable}: Bool, \texttt{implementation}: $\mathcal{T}$} (\texttt{real}, \texttt{strat}) $\leftarrow$ synthesize(getNegAss(\texttt{s}), \texttt{inp}, \texttt{out})\; \If{\upshape{\texttt{real}}}{ \KwRet{\upshape{($\top$, \texttt{strat})}} } \texttt{subspecifications} $\leftarrow$ decompose$(\texttt{s},\texttt{inp},\texttt{out})$ \; \texttt{sub\_results} $\leftarrow$ map synthesize \texttt{subspecifications} \; \ForEach{\upshape{(\texttt{real}, \texttt{strat}) $\in$ \texttt{sub\_results}}}{ \If{\upshape{! \texttt{real}}}{ \texttt{implementation} $\leftarrow$ extendCounterStrategy(\texttt{strat}, \texttt{s})\; \KwRet{\upshape{($\bot$, \texttt{implementation})}} } } \texttt{impls} $\leftarrow$ map second \texttt{sub\_results}\; \texttt{implementation} $\leftarrow$ compose \texttt{impls}\; \KwRet{\upshape{($\top$, \texttt{implementation})}} \caption{Modular Synthesis Algorithm with Optimized LTL Decomposition} \label{alg:compositional_synthesis2} \end{algorithm} Note that \Cref{alg:optimized_decomposition} is only applicable to specifications in a strict assume-guarantee format since \Cref{thm:ltl_decomposition_with_assumptions} assumes a top-level implication in the formula. In the next section, we thus present an extension of the LTL decomposition algorithm with optimized assumption handling to specifications consisting of several assume-guarantee conjuncts, i.e.\@\xspace, specifications of the form $\varphi = (\varphi_1 \rightarrow \psi_1) \land \dots \land (\varphi_k \rightarrow \psi_k)$. \section{Optimized LTL Decomposition for Formulas with Several Assume-Guarantee Conjuncts}\label{sec:ltl_optimized} Since \Cref{cor:equisynthesizeability_independent_conjuncts} can be applied recursively, classical LTL decomposition, i.e.\@\xspace, as described in \Cref{alg:rewriting-based_decomposition}, is applicable to specifications with several conjuncts. That is, in particular, it is applicable to specifications with several assume-guarantee conjuncts, i.e.\@\xspace, specifications of the form $\varphi = (\varphi_1 \rightarrow \psi_1) \land \dots \land (\varphi_k \rightarrow \psi_k)$. \Cref{alg:optimized_decomposition}, in contrast, is restricted to LTL specifications consisting of a single assume-guarantee pair since \Cref{thm:ltl_decomposition_with_assumptions}, on which \Cref{alg:optimized_decomposition} relies, assumes a top-level implication in the specification. Hence, we cannot apply the optimized assumption handling to specifications with several assume-guarantee conjuncts directly.
A naive approach to extend assumption dropping to formulas with several assume-guarantee conjuncts is to first drop assumptions for all conjuncts separately and then to decompose the resulting specification using \Cref{alg:rewriting-based_decomposition}. In general, however, this is not sound: The other conjuncts may introduce dependencies between assumptions and guarantees that prevent the dropping of the assumption. When considering the conjuncts during the assumption dropping phase separately, however, such dependencies are not detected. For instance, consider a system with $I = \{i\}$, $O = \{o_1,o_2\}$, and the specification $\varphi = \LTLsquare\neg(o_1 \land o_2) \land \LTLsquare \neg(i \leftrightarrow o_1) \land (\LTLsquare i \rightarrow \LTLsquare o_2)$. Clearly, $\varphi$ is realizable by an implementation that sets $o_1$ to $\neg i$ and $o_2$ to $i$ in every time step. Since the first conjunct contains both $o_1$ and $o_2$, \Cref{cor:equisynthesizeability_independent_conjuncts} is not applicable and thus \Cref{alg:rewriting-based_decomposition} does not decompose~$\varphi$. The naive approach for incorporating assumption dropping described above considers the third conjunct of $\varphi$ separately and checks whether the assumption $\LTLsquare i$ can be dropped. Since the assumptions and guarantees do not share any variables, \Cref{lem:assumption_dropping} is applicable and thus the naive algorithm drops $\LTLsquare i$, yielding $\varphi' = \LTLsquare\neg(o_1 \land o_2) \land \LTLsquare \neg(i \leftrightarrow o_1) \land \LTLsquare o_2$. Yet, $\varphi'$ is not realizable: If $i$ is constantly set to $\mathit{false}$, the second conjunct of $\varphi'$ enforces $o_1$ to be always set to $\mathit{true}$. The third conjunct enforces that $o_2$ is constantly set to $\mathit{true}$ irrespective of the input $i$. The first conjunct, however, requires in every time step one of the output variables to be $\mathit{false}$. Thus, although \Cref{lem:assumption_dropping} is applicable to $\LTLsquare i \rightarrow \LTLsquare o_2$, dropping the assumption safely is not possible in the context of the other two conjuncts. In particular, the first conjunct of $\varphi$ introduces a dependency between $o_1$ and $o_2$ while the second conjunct introduces one between $i$ and $o_1$. Hence, there is a transitive dependency between $i$ and $o_1$ due to which the assumption $\LTLsquare i$ cannot be dropped. This dependency is not detected when considering the conjuncts separately during the assumption dropping phase. In this section, we introduce an optimization of the LTL decomposition algorithm which is able to decompose specifications with several conjuncts (possibly) in assume-guarantee format and which is, in contrast to the naive approach described before, sound. Similar to the naive approach, the main idea is to first check for assumptions that can be dropped in the different conjuncts and to then perform the classical LTL decomposition algorithm. Yet, the assumption dropping phase is not performed completely separately for the individual conjuncts but takes the other conjuncts and thus possible transitive dependencies between the assumptions and guarantees into account. If the other conjuncts do not share any variable with the assumption to be dropped, then there are no transitive dependencies between the assumption and the guarantee due to the other conjuncts.
Thus, the assumption can be dropped safely if the other conditions of \Cref{lem:assumption_dropping} are satisfied: \begin{lemma}\label{lem:assumption_dropping_optimized} Let $\varphi = \psi_1 \land ((\varphi_1 \land \varphi_2) \rightarrow \psi_2)$ be an LTL~formula, where we have $\propositions{\varphi_1} \cap \propositions{\varphi_2} = \emptyset$, $\propositions{\varphi_2} \cap \propositions{\psi_1} = \emptyset$ and $\propositions{\varphi_2} \cap \propositions{\psi_2} = \emptyset$. Let~$\neg\varphi_2$ be unrealizable. Then, $\varphi' = \psi_1 \land (\varphi_1 \rightarrow \psi_2)$ is realizable if, and only if, $\varphi$ is realizable. \end{lemma} \begin{proof} Let $V_1 := \propositions{\psi_1 \land (\varphi_1 \rightarrow \psi_2)}$, $I_1 := I \cap V_1$, and $O_1 := O \cap V_1$. If $\varphi'$ is realizable, then we can construct an implementation $f: (2^V)^* \times 2^I \rightarrow 2^O$ that realizes $\varphi$ from the implementation $f_1: (2^{V_1})^* \times 2^{I_1} \rightarrow 2^{O_1}$ that realizes $\varphi'$ analogous to the proof of \Cref{lem:assumption_dropping}. If $\varphi$ is realizable, then there is an implementation $f: (2^V)^* \times 2^I \rightarrow 2^O$ that realizes $\varphi$. Since $\neg\varphi_2$ is unrealizable by assumption, there is a counterstrategy $f^c_2: (2^{\propositions{\varphi_2}})^* \rightarrow 2^{I \cap \propositions{\varphi_2}}$ for $\neg \varphi_2$ and all words compatible with $f^c_2$ satisfy $\varphi_2$. Let $g: (2^{V_1})^* \times 2^{I_1} \rightarrow 2^{O_1}$ be the implementation constructed from $f$ and $f^c_2$ in the proof of \Cref{lem:assumption_dropping}. We show that $g$ realizes $\varphi'$. Let $\sigma \in \compatibleWords{g}$ and let $\sigma_f$ be the corresponding infinite sequence obtained from $g$ when not restricting the output of $f$ to the variables in $O_1$. As shown in the proof of \Cref{lem:assumption_dropping}, $\sigma_\mathit{f} \in \mathcal{L}(\varphi)$ and $\sigma_\mathit{f} \in \mathcal{L}(\varphi_2)$. Thus, $\sigma_\mathit{f} \in \mathcal{L}(\psi_1 \land (\varphi_1 \rightarrow \psi_2))$. Since $\varphi_2$ neither shares variables with $\varphi_1$ nor with $\psi_1$ or $\psi_2$, the satisfaction of $\psi_1 \land (\varphi_1 \rightarrow \psi_2)$ is not influenced by the variables outside of $V_1$. Hence, since $\restrict{\sigma_\mathit{f}}{V_1} = \sigma$ by construction, $\sigma \in \mathcal{L}(\varphi')$ follows and thus $g$ realizes $\varphi'$.\qed \end{proof} Similar to the optimized assumption handling for specifications in strict assume-guarantee form described in the previous section, we utilize \Cref{lem:assumption_dropping_optimized} for an optimized decomposition for specifications containing several assume-guarantee conjuncts: We rewrite LTL formulas of the form $\varphi = \psi' \land (\bigwedge^m_{i=1} \varphi_i \rightarrow \bigwedge^n_{j=1} \psi_j)$ to $\psi' \land \bigwedge^n_{j=1}(\bigwedge^m_{i=1} \varphi_i \rightarrow \psi_j)$ and then drop assumptions for the individual guarantees $\psi_1, \dots, \psi_n$ according to \Cref{lem:assumption_dropping_optimized}. If the resulting subspecifications only share input variables, they are equirealizable to $\varphi$. \begin{theorem}\label{thm:ltl_decomposition_with_assumptions_optimized} Let $\varphi =
\psi'_1 \land \psi'_2 \land ((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_1 \land \psi_2)$ be an LTL formula over $V$, where $\propositions{\varphi_3} \subseteq I$ and $(\propositions{\psi_1} \cup \propositions{\psi'_1}) \cap (\propositions{\psi_2} \cup \propositions{\psi'_2}) \subseteq I$. Let $\propositions{\varphi_i} \cap \propositions{\varphi_j} = \emptyset$ for $i,j \in \{1,2,3\}$ with $i \neq j$, and let $\propositions{\varphi_i} \cap \propositions{\psi_{3-i}} = \emptyset$ for $i \in \{1,2\}$. Let $\propositions{\psi'_i} \cap \propositions{\varphi_{3-i}} = \emptyset$ for $i \in \{1,2\}$. Moreover, let $\neg(\varphi_1 \land \varphi_2 \land \varphi_3)$ be unrealizable. Then, $\varphi$ is realizable if, and only if, both $\varphi' = \psi'_1 \land ((\varphi_1 \land \varphi_3) \rightarrow \psi_1)$ and $\varphi'' = \psi'_2 \land ((\varphi_2 \land \varphi_3) \rightarrow \psi_2)$ are realizable. \end{theorem} \begin{proof} First, let $\varphi'$ and $\varphi''$ be realizable. Then, there are implementations $f_1$ and $f_2$ realizing $\varphi'$ and~$\varphi''$, respectively. Since $\varphi'$ and $\varphi''$ do not share output variables by assumption, we can construct an implementation realizing $\varphi$ from $f_1$ and $f_2$ as in the proof of \Cref{thm:ltl_decomposition_with_assumptions}. Next, let $\varphi$ be realizable and let $f: (2^V)^* \times 2^I \rightarrow 2^O$ be an implementation realizing $\varphi$. Let $\sigma \in \compatibleWords{f}$. Then, $\sigma \in \mathcal{L}(\varphi)$ holds. Let $V' = \propositions{\varphi'} \cup \propositions{\varphi_2}$ and let $V'' = \propositions{\varphi''} \cup \propositions{\varphi_1}$. Then, since $\sigma \in \mathcal{L}(\varphi)$ holds, $\restrict{\sigma}{V'} \in \mathcal{L}(\psi'_1 \land ((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_1))$ as well as $\restrict{\sigma}{V''} \in \mathcal{L}(\psi'_2 \land ((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_2))$ follow. Thus, an implementation $f_1$ that behaves as $f$ restricted to the variables in $V'$ realizes $\psi'_1 \land ((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_1)$. An implementation $f_2$ that behaves as $f$ restricted to the variables in $V''$ realizes $\psi'_2 \land ((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_2)$. By assumption, for $i \in \{1,2\}$, $\varphi_{i}$ does not share any variables with $\varphi_3$, $\varphi_{3-i}$, $\psi_{3-i}$ and $\psi'_{3-i}$. Therefore, by \Cref{lem:assumption_dropping_optimized}, $\psi'_1 \land ((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_1)$ and $\varphi'$ are equirealizable. Moreover, $\psi'_2 \land ((\varphi_1 \land \varphi_2 \land \varphi_3) \rightarrow \psi_2)$ and $\varphi''$ are equirealizable. Thus, since $f_1$ and $f_2$ realize the former formulas, $\varphi'$ and $\varphi''$ are both realizable.
\qed \end{proof} \begin{algorithm}[t] \DontPrintSemicolon \KwIn{$\varphi$: LTL, \texttt{inp}: List Variable, \texttt{out}: List Variable} \KwResult{\texttt{specs}: List (LTL, List Variable, List Variable)} \texttt{implication} $\leftarrow$ chooseImplication($\varphi$)\; \texttt{assumptions} $\leftarrow$ getAssumptions(\texttt{implication})\; \texttt{guarantees} $\leftarrow$ getGuarantees(\texttt{implication})\; \texttt{decCritProps} $\leftarrow$ getDecCritProps(\texttt{implication})\; \texttt{graph} $\leftarrow$ buildDependencyGraph($\varphi$,\texttt{decCritProps})\; \texttt{components} $\leftarrow$ \texttt{graph}.connectedComponents()\; \texttt{specs} $\leftarrow$ new LTL[$|$\texttt{components}$|+1$]\; \texttt{freeAssumptions} $\leftarrow$[\ ]\; \ForEach{\upshape{$\psi \in$ \texttt{assumptions}}}{ \texttt{propositions} $\leftarrow$ \texttt{decCritProps} $\cap$ getProps($\psi$)\; \eIf{\upshape{$|$\texttt{propositions}$| = 0$}}{ \texttt{freeAssumptions}.append($\psi$)\; }{ \ForEach{\upshape{(\texttt{spec}, \texttt{set}) $\in$ zip \texttt{specs} (\texttt{components} $++$ [\texttt{inp}])}}{ \If{\upshape{\texttt{propositions} $\cap$ \texttt{set} $\neq \emptyset$}}{ \texttt{spec}.addAssumption($\psi$)\; break\; } } } } \ForEach{\upshape{$\psi \in$ \texttt{guarantees}}}{ \texttt{propositions} $\leftarrow$ \texttt{decCritProps} $\cap$ getProps($\psi$)\; \ForEach{\upshape{(\texttt{spec}, \texttt{set}) $\in$ zip \texttt{specs} (\texttt{components} $++$ [\texttt{inp}])}}{ \If{\upshape{\texttt{propositions} $\cap$ \texttt{set} $\neq \emptyset$}}{ \texttt{spec}.addGuarantee($\psi$)\; break\; } } } \ForEach{\upshape{$\psi \in$ getConjuncts($\varphi$)$\setminus$\texttt{implication}}}{ \texttt{propositions} $\leftarrow$ \texttt{decCritProps} $\cap$ getProps($\psi$)\; \ForEach{\upshape{(\texttt{spec}, \texttt{set}) $\in$ zip \texttt{specs} (\texttt{components} $++$ [\texttt{inp}])}}{ \If{\upshape{\texttt{propositions} $\cap$ \texttt{set} $\neq \emptyset$}}{ \texttt{spec}.addConjunct($\psi$)\; break\; } } } \KwRet{\upshape{addFreeAssumptions \texttt{specs freeAssumptions}}} \caption{Optimized LTL Decomposition Algorithm for Specifications with Conjuncts} \label{alg:optimized_decomposition_with_conjuncts} \end{algorithm} Utilizing \Cref{thm:ltl_decomposition_with_assumptions_optimized}, we extend \Cref{alg:optimized_decomposition} to LTL specifications that do not follow a strict assume-guarantee form but consist of multiple conjuncts. The extended algorithm is depicted in \Cref{alg:optimized_decomposition_with_conjuncts}. We assume that the specification is not decomposable by \Cref{alg:rewriting-based_decomposition}, i.e.\@\xspace, we assume that no plain decompositions are possible. In practice, we thus first rewrite the specification and apply \Cref{alg:rewriting-based_decomposition} afterwards before then applying \Cref{alg:optimized_decomposition_with_conjuncts} to the resulting subspecifications. Hence, we assume that the dependency graph built from the output propositions of all given conjuncts consists of a single connected component. \Cref{thm:ltl_decomposition_with_assumptions_optimized} hands us the tools to ``break a link'' in that chain of dependencies. This link has to be induced by a suitable implication. \Cref{alg:optimized_decomposition_with_conjuncts} assumes that at least one of the conjuncts is an implication. In case of more than one implication, the choice of the implication consequently determines whether or not a decomposition is found.
Therefore, it is crucial to reapply the algorithm on the subspecifications after a decomposition has been found and to try all implications if no decomposition is found. Since iterating through all conjuncts does not pose a large overhead in computing time, the choice of the implication is not further specified in the algorithm. The extended algorithm is similar to \Cref{alg:optimized_decomposition}. Note that the dependency graph used for finding the decomposition-critical propositions is built only from the assumptions of the chosen implication as we are only seeking droppable assumptions of this implication. In contrast to \Cref{alg:optimized_decomposition}, the dependency graph in line 5 of \Cref{alg:optimized_decomposition_with_conjuncts} also includes the dependencies induced by the other conjuncts, similarly to the dependency graph in \Cref{alg:rewriting-based_decomposition}. Here, we consider all decomposition-critical variables in the conjuncts, not only output variables, as an assumption can only be dropped if there are no shared variables with the remaining conjuncts. Therefore, the additional conjuncts are treated in the same way as the guarantees. This carries over to the step in which the conjuncts are added to the subspecifications. Lastly, \Cref{alg:optimized_decomposition_with_conjuncts} slightly differs from \Cref{alg:optimized_decomposition} when the free assumptions are added to the subspecifications. Here, the remaining conjuncts have to be considered, too, since we must not drop assumptions that share variables with the remaining conjuncts. Consequently, all free assumptions that share an input with one of the remaining conjuncts need to be added. One detail that has to be taken into account when integrating this LTL decomposition algorithm with extended optimized assumption handling into a synthesis tool is that, like \Cref{alg:optimized_decomposition}, \Cref{alg:optimized_decomposition_with_conjuncts} assumes that all negated assumptions are unrealizable. For formulas in a strict assume-guarantee format, realizable negated assumptions imply that we have already found a strategy for the implementation. This changes when considering formulas with additional conjuncts since they might forbid this strategy. To detect such cases, we can verify the synthesized strategy against the remaining conjuncts and extend it to an implementation for the whole specification only in the positive case. \section{Experimental Evaluation} We implemented the modular synthesis algorithm as well as the decomposition approaches and evaluated them on the 346 publicly available SYNTCOMP~\cite{SYNTCOMP} 2020 benchmarks. Note that only 207 of the benchmarks have more than one output variable and are therefore realistic candidates for decomposition. The automaton decomposition algorithm utilizes Spot's~\cite{Duret-LutzLFMRX16} automaton library (Version 2.9.6). The LTL decomposition relies on SyFCo~\cite{JacobsFS16} for formula transformations (Version 1.2.1.1). We first decompose the specification with our algorithms and then run synthesis on the resulting subspecifications. We compare the CPU time of the synthesis task as well as the number of gates and latches of the synthesized AIGER circuit for the original specification to the sum of the corresponding attributes of all subspecifications. Thus, we calculate the runtime for sequential modular synthesis. Parallelization of the synthesis tasks may further reduce the runtime.
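The following sketch illustrates this sequential measurement setup. It is a hypothetical harness, not our actual evaluation scripts; \texttt{decompose} and \texttt{synthesize} are placeholders for the tool invocations, and \texttt{num\_gates} and \texttt{num\_latches} are assumed accessors on the returned AIGER circuit.
\begin{verbatim}
import time

def modular_run(spec, inputs, outputs, decompose, synthesize):
    subspecs = decompose(spec, inputs, outputs)
    total_time, gates, latches = 0.0, 0, 0
    for sub in subspecs:
        start = time.process_time()   # CPU time, not wall-clock
        circuit = synthesize(*sub)    # AIGER circuit of the sub-task
        total_time += time.process_time() - start
        gates += circuit.num_gates
        latches += circuit.num_latches
    # sequential modular synthesis: attributes are summed over all
    # sub-tasks; a parallel setup would take the maximum of the times
    return total_time, gates, latches
\end{verbatim}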
\subsection{LTL Decomposition} \begin{figure} \caption{Comparison of the performance of modular and non-compositional synthesis with BoSy and Strix on the decomposable SYNTCOMP benchmarks. For the modular approach, the accumulated time for all synthesis tasks is depicted.} \label{comparison_ltl} \end{figure} \begin{table*}[t] \caption{Distribution of the number of subspecifications over all specifications for LTL decomposition.} \label{table:ltl_subspecs} \centering \begin{tabular}{p{2.9cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering\arraybackslash}p{0.5cm}} \# subspecifications& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline \# specifications& 308 & 19 & 8 & 2 & 3 & 2 & 0 & 2 & 0 & 1 & 1 & 1 \\ \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{Synthesis time in seconds of BoSy and Strix for non-compositional and modular synthesis on exemplary SYNTCOMP benchmarks with a timeout of 60 minutes.} \label{ltl_times} \begin{tabular}{p{2.9cm}|>{\centering}p{1.6cm}|>{\centering}p{1.6cm}|>{\centering}p{1.6cm}|>{\centering}p{1.6cm}|>{\centering\arraybackslash}p{1.8cm}} & \multicolumn{2}{c|}{original} & \multicolumn{2}{c|}{modular} & ~\\ Benchmark & BoSy & Strix & BoSy & Strix & \# subspec.\\ \hline Cockpitboard & 1526.32 & 11.06 & \textbf{2.108} & 8.168 & 8\\ Gamelogic & TO & 1062.27 & TO & \textbf{25.292} & 4\\ LedMatrix & TO & TO& TO & \textbf{1156.68} & 3\\ Radarboard & TO & 126.808 & \textbf{3.008}& 11.04 & 11\\ Zoo10 & 1.316 & 1.54 & \textbf{0.884}& 2.744 & 2\\ generalized\_buffer\_2 & 70.71 & 534.732 & \textbf{4.188}& 7.892 & 2\\ generalized\_buffer\_3 & TO & TO& \textbf{27.136}& 319.988 & 3\\ shift\_8 & \textbf{0.404}& 1.336 & 2.168& 3.6 & 8\\ shift\_10 & \textbf{1.172}& 1.896 & 2.692 & 4.464 & 10\\ shift\_12 & 4.336& 6.232 & \textbf{3.244} & 5.428 & 12 \end{tabular} \end{table*} LTL decomposition with optimized assumption handling (cf.\ \Cref{sec:ltl_optimized}) terminates on all benchmarks in less than 26ms. Thus, even for non-decomposable specifications, the overhead of trying to perform decompositions is negligible. The algorithm decomposes 39 formulas into several subspecifications, most of them yielding two or three subspecifications. Only a handful of formulas are decomposed into more than six subspecifications. The full distribution of the number of subspecifications for all specifications is shown in \Cref{table:ltl_subspecs}. We evaluate our modular synthesis approach with two state-of-the-art synthesis tools: BoSy~\cite{BoSy}, a bounded synthesis tool, and Strix~\cite{MeyerStrix}, a game-based synthesis tool, both in their 2019 release. We used a machine with a 3.6GHz quad-core Intel Xeon processor and 32GB RAM as well as a timeout of 60 minutes. \begin{table}[t] \centering \caption{Gates of the synthesized solutions of BoSy and Strix for non-compositional and modular synthesis on exemplary SYNTCOMP benchmarks.
Entry -- denotes that no solution was found within 60 minutes.}
\label{ltl_gat}
\begin{tabular}{p{2.8cm}|>{\centering}p{0.8cm}|>{\centering}p{0.8cm}|>{\centering}p{0.8cm}|>{\centering\arraybackslash}p{0.8cm}}
& \multicolumn{2}{c|}{original} & \multicolumn{2}{c}{modular}\\
Benchmark & BoSy & Strix & BoSy & Strix \\ \hline
Cockpitboard & 11 & \textbf{7} & 25 & 10 \\
Gamelogic & -- & 26 & -- & \textbf{21} \\
LedMatrix & -- & -- & -- & \textbf{97} \\
Radarboard & -- & \textbf{6} & 19 & \textbf{6} \\
Zoo10 & 14 & 15 & 15 & \textbf{13} \\
generalized\_buffer\_2 & \textbf{3} & 12 & \textbf{3} & 11\\
generalized\_buffer\_3 & -- & -- & \textbf{20} & 3772 \\
shift\_8 & 8 & \textbf{0} & 8 & 7 \\
shift\_10 & 10 & \textbf{0} & 10 & 9 \\
shift\_12 & 12 & \textbf{0} & 12 & 11
\end{tabular}
\end{table}

\begin{table}[t]
\centering
\caption{Latches of the synthesized solutions of BoSy and Strix for non-compositional and modular synthesis on exemplary SYNTCOMP benchmarks. Entry -- denotes that no solution was found within 60 minutes.}
\label{ltl_lat}
\begin{tabular}{p{2.8cm}|>{\centering}p{0.8cm}|>{\centering}p{0.8cm}|>{\centering}p{0.8cm}|>{\centering\arraybackslash}p{0.8cm}}
& \multicolumn{2}{c|}{original} & \multicolumn{2}{c}{modular}\\
Benchmark & BoSy & Strix & BoSy & Strix \\ \hline
Cockpitboard & 1 & \textbf{0} & 8 & \textbf{0} \\
Gamelogic & -- & \textbf{2} & -- & \textbf{2} \\
LedMatrix & -- & -- & -- & \textbf{5} \\
Radarboard & -- & \textbf{0} & 11 & \textbf{0} \\
Zoo10 & \textbf{1} & 2 & 2 & 2 \\
generalized\_buffer\_2 & 69 & 47134 & \textbf{14} & 557 \\
generalized\_buffer\_3 & -- & -- & \textbf{3} & 14 \\
shift\_8 & 1 & \textbf{0} & 8 & \textbf{0} \\
shift\_10 & 1 & \textbf{0} & 10 & \textbf{0} \\
shift\_12 & 1 & \textbf{0} & 12 & \textbf{0}
\end{tabular}
\end{table}

\begin{table*}[t]
\centering
\caption{Distribution of the number of subspecifications over all specifications for NBA decomposition. For 79 specifications, the timeout (60min) was reached. For 39 specifications, the memory limit (16GB) was reached.}
\label{table:nba_subspecs}
\begin{tabular}{p{1.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering}p{0.5cm}|>{\centering\arraybackslash}p{0.5cm}}
\# subspec. & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 12 & 14 & 19 & 20 & 24 & 36\\ \hline
\# spec. & 192 & 9 & 8 & 6 & 2 & 3 & 1 & 2 & 1 & 1 & 4 & 1 & 1 & 2 & 1 & 1\\
\end{tabular}
\end{table*}

In \Cref{comparison_ltl}, the comparison of the accumulated runtimes of the synthesis tasks of the subspecifications and of the original formula is shown for the decomposable SYNTCOMP benchmarks. For both BoSy and Strix, decomposition generates a slight overhead for small specifications. For larger and more complex specifications, however, modular synthesis decreases the execution time significantly, often by an order of magnitude or more. Note that, due to the negligible runtime of specification decomposition, the plot looks similar when considering all SYNTCOMP benchmarks. \Cref{ltl_times} shows the running times of BoSy and Strix for modular and non-compositional synthesis on exemplary benchmarks. For modular synthesis, the accumulated running time of all synthesis tasks is depicted.
On almost all of them, both tools notably decrease their synthesis times with modular synthesis compared to the original non-compositional approach. Particularly noteworthy is the benchmark \emph{generalized\_buffer\_3}. In the last synthesis competition, SYNTCOMP~2021, no tool was able to synthesize a solution for it within one hour. With modular synthesis, however, BoSy yields a result in less than 28 seconds.

In \Cref{ltl_gat,ltl_lat}, the number of gates and latches, respectively, of the AIGER circuits~\cite{BiereHW11} corresponding to the implementations computed by BoSy and Strix with modular and non-compositional synthesis are depicted for exemplary benchmarks. For most specifications, the solutions of modular synthesis are of the same size or smaller in terms of gates than the solutions for the original specification. The size of the solutions in terms of latches, however, varies. Note that, in general, BoSy does not generate solutions with fewer than one latch. Hence, the modular solution will always have at least as many latches as subspecifications.

\subsection{Automaton Decomposition}

Besides LTL specifications, Strix also accepts specifications given as deterministic parity automata (DPAs) in extended HOA format~\cite{Perez2019}, an automaton format well-suited for synthesis. Thus, our implementation for decomposing specifications given as NBAs performs \Cref{alg:automaton-based_decomposition}, converts the resulting automata to DPAs, and then synthesizes solutions with Strix. For 235 out of the 346 benchmarks, NBA decomposition terminates within ten minutes, yielding several subspecifications or proving that the specification is not decomposable. In 79 of the other cases, the tool timed out after 60 minutes, and in the remaining 32 cases it reached the memory limit of 16GB or the internal limits of Spot. Note, however, that for 81 specifications even plain DPA generation failed. The distribution of the number of subspecifications for all specifications is shown in \Cref{table:nba_subspecs}. Thus, while automaton decomposition yields more fine-grained decompositions than the approximate LTL approach, it becomes infeasible when the specifications grow. Hence, the advantage of smaller synthesis subtasks cannot pay off. However, the coarser LTL decomposition suffices to reduce the synthesis time on common benchmarks significantly. Thus, LTL decomposition strikes the right balance between small subtasks and a scalable decomposition.

For 43 specifications, the automaton approach yields decompositions, many of which consist of four or more subspecifications. For 22 of these specifications, the LTL approach yields a decomposition as well. Yet, the decompositions differ in most cases, as the automaton approach yields more fine-grained ones. Recall that only 207 SYNTCOMP benchmarks are realistic candidates for decomposition. The automaton approach proves that 90 of those specifications (43.6\%) are not decomposable. Thus, our implementations yield decompositions for 33.33\% (LTL) and 36.75\% (NBA) of the potentially decomposable specifications. We observed that decomposition works exceptionally well for specifications that stem from real system designs, for instance the Syntroids~\cite{GeierH0F19} case study, indicating that modular synthesis is particularly beneficial in practice.

\section{Conclusion}

We have presented a modular synthesis algorithm that applies compositional techniques to reactive synthesis.
It reduces the complexity of synthesis by decomposing the specification in a preprocessing step and then performing independent synthesis tasks for the subspecifications. We have introduced a criterion for decomposition algorithms that ensures soundness and completeness of modular synthesis, as well as two algorithms for specification decomposition satisfying the criterion: a semantically precise one for specifications given as nondeterministic Büchi automata, and an approximate algorithm for LTL specifications. We have further presented optimizations of the LTL decomposition algorithm for formulas in a strict assume-guarantee format and for formulas consisting of several assume-guarantee conjuncts. Both optimizations are based on dropping assumptions that do not influence the realizability of the rest of the formula.

We have implemented the modular synthesis algorithm as well as both decomposition algorithms, and we compared our approach, instantiated with the state-of-the-art synthesis tools BoSy and Strix, to their non-compositional forms. Our experiments clearly demonstrate the significant advantage of modular synthesis with LTL decomposition over traditional synthesis algorithms. While the overhead is negligible, both BoSy and Strix are able to synthesize solutions for more benchmarks with modular synthesis than in their non-compositional forms. Moreover, on large and complex specifications, BoSy and Strix improve their synthesis times notably, demonstrating that specification decomposition is a game-changer for practical LTL synthesis.

Building on the presented approach, we can additionally analyze whether the subspecifications fall into fragments for which efficient synthesis algorithms exist, for instance safety specifications. Since modular synthesis performs independent synthesis tasks for the subspecifications, we can choose, for each synthesis task, an algorithm that is tailored to the fragment the respective subspecification lies in. Moreover, parallelizing the individual synthesis tasks may increase the advantage of modular synthesis over classical algorithms. Since the number of subspecifications computed by the LTL decomposition algorithm highly depends on the rewriting of the initial formula, a further promising next step is to develop more sophisticated rewriting algorithms.

\end{document}
# Understanding the data for retirement planning

To effectively forecast retirement outcomes using machine learning techniques, it's crucial to understand the data you'll be working with. This section will provide an overview of the types of data commonly used in retirement planning and how to preprocess it for machine learning models.

## Exercise

Instructions:
1. List the types of data commonly used in retirement planning.
2. Explain how to preprocess this data for machine learning models.

### Solution

1. Types of data commonly used in retirement planning:
   - Current age
   - Retirement age
   - Income
   - Savings and investments
   - Expenses
   - Taxes
   - Healthcare costs
   - Social Security benefits

2. Preprocessing steps for machine learning models:
   - Clean and format the data: Remove any inconsistencies or errors in the data, and ensure it's in a standardized format.
   - Feature engineering: Create new features from the existing data that may be useful for the machine learning model, such as age at retirement or percentage of income spent on expenses.
   - Handle missing values: If any data points are missing, decide whether to impute them using a statistical method or remove the entire data point.
   - Normalize and scale the data: If the data has different scales, normalize or scale the features to ensure that they have comparable weights in the machine learning model.

# Supervised learning techniques

Supervised learning techniques are a fundamental building block for creating machine learning models to forecast retirement outcomes. This section will introduce key concepts and techniques, such as regression models and decision trees, which can be applied to retirement planning.

## Exercise

Instructions:
1. Explain the difference between supervised and unsupervised learning techniques.
2. Describe the process of training and testing a supervised learning model.

### Solution

1. Supervised vs. unsupervised learning:
   - Supervised learning: The model is trained on a labeled dataset, where each data point is associated with a known output. The goal is to learn a mapping from inputs to outputs.
   - Unsupervised learning: The model is trained on an unlabeled dataset, and the goal is to discover patterns or relationships within the data without any prior knowledge of the desired output.

2. Training and testing a supervised learning model:
   - Training: The model is fit to the training dataset, which consists of labeled examples. The model learns the relationship between inputs and outputs from this data.
   - Testing: The model is evaluated on a separate test dataset, which is unseen during training. The performance of the model is measured using appropriate evaluation metrics, such as accuracy, precision, recall, or mean squared error.

# Linear regression for retirement forecasting

Linear regression is a popular supervised learning technique that can be used to forecast retirement outcomes. This section will explain the mathematical foundation of linear regression and how it can be applied to retirement planning.

## Exercise

Instructions:
1. Derive the equation for a simple linear regression model.
2. Explain how to interpret the coefficients of the linear regression model.

### Solution

1. Simple linear regression equation:

   The equation for a simple linear regression model is:

   $$y = \beta_0 + \beta_1 x + \epsilon$$

   where $y$ is the dependent variable (the variable we want to predict), $x$ is the independent variable, $\beta_0$ is the intercept term, $\beta_1$ is the slope term, and $\epsilon$ is the error term.

2. Interpretation of coefficients:
   - Intercept term ($\beta_0$): This is the predicted value of $y$ when $x = 0$. It represents the baseline level of the dependent variable.
   - Slope term ($\beta_1$): This measures the change in $y$ for a one-unit increase in $x$. It represents the relationship between the independent and dependent variables.
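To make this concrete, here is a minimal runnable sketch using scikit-learn. The data is synthetic and the variable names (age, savings) are illustrative assumptions, not a prescribed retirement dataset:

```python
# Fit a simple linear regression: predict accumulated savings from age.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=42)

# Synthetic data: age in years vs. accumulated savings in dollars.
age = rng.uniform(25, 65, size=200).reshape(-1, 1)
savings = 5_000 * age.ravel() - 80_000 + rng.normal(0, 20_000, size=200)

model = LinearRegression()
model.fit(age, savings)

print(f"Intercept (beta_0): {model.intercept_:,.0f}")
print(f"Slope (beta_1):     {model.coef_[0]:,.0f}")

# The slope is the estimated change in savings per additional year of age.
print(f"Predicted savings at age 50: {model.predict([[50]])[0]:,.0f}")
```

Here the fitted intercept and slope play exactly the roles of $\beta_0$ and $\beta_1$ described above.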
# Decision trees for retirement forecasting

Decision trees are another supervised learning technique that can be used to forecast retirement outcomes. This section will introduce the concept of decision trees and explain how they can be applied to retirement planning.

## Exercise

Instructions:
1. Explain the process of building a decision tree.
2. Describe the advantages and disadvantages of using decision trees for retirement forecasting.

### Solution

1. Building a decision tree:
   - Start with the full dataset and identify the variable that best splits the data into different groups.
   - Continue splitting the data using the best-performing variables until a stopping criterion is met, such as maximum depth or minimum node size.

2. Advantages and disadvantages of decision trees:
   - Advantages:
     - Easy to interpret and visualize.
     - Can handle both numerical and categorical variables.
     - Less prone to overfitting compared to regression models.
   - Disadvantages:
     - Prone to overfitting if the tree is too deep or complex.
     - Less accurate than regression models for continuous outcomes.

# Gradient boosting for retirement forecasting

Gradient boosting is an ensemble learning technique that combines multiple decision tree models to forecast retirement outcomes. This section will explain the concept of gradient boosting and how it can be applied to retirement planning.

## Exercise

Instructions:
1. Explain the process of building a gradient boosting model.
2. Describe the advantages and disadvantages of using gradient boosting for retirement forecasting.

### Solution

1. Building a gradient boosting model:
   - Start with a base model, such as a decision tree.
   - For each subsequent model, fit it to the residuals of the previous models.
   - Combine the models using a weighted sum of their predictions.

2. Advantages and disadvantages of gradient boosting:
   - Advantages:
     - Can achieve high prediction accuracy.
     - Less prone to overfitting compared to individual decision trees.
   - Disadvantages:
     - Computationally expensive, especially for large datasets.
     - Requires careful tuning of hyperparameters, such as the number of trees and the learning rate.

# Clustering for retirement forecasting

Clustering is an unsupervised learning technique that can be used to group similar retirement outcomes. This section will introduce the concept of clustering and explain how it can be applied to retirement planning.

## Exercise

Instructions:
1. Explain the process of building a clustering model.
2. Describe the advantages and disadvantages of using clustering for retirement forecasting.

### Solution

1. Building a clustering model:
   - Partition the dataset into a fixed number of clusters using a distance measure, such as Euclidean distance or cosine similarity.
   - Assign each data point to the cluster with the nearest centroid.
   - Repeat the process until convergence, or until a maximum number of iterations is reached.

2. Advantages and disadvantages of clustering for retirement forecasting:
   - Advantages:
     - Can identify patterns or groups in the data.
     - Can be used for exploratory analysis to understand the structure of the data.
   - Disadvantages:
     - The number of clusters and the distance measure are subjective and require expert knowledge.
     - Less accurate than supervised learning techniques for predicting specific outcomes.
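As a concrete illustration of the clustering idea, here is a minimal k-means sketch with scikit-learn. The profile features and the choice of three clusters are illustrative assumptions only:

```python
# Cluster synthetic retirement profiles with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Synthetic profiles: columns are age, annual income, savings rate.
profiles = np.column_stack([
    rng.uniform(25, 65, 300),        # age in years
    rng.lognormal(11, 0.4, 300),     # annual income in dollars
    rng.uniform(0.0, 0.3, 300),      # fraction of income saved
])

# The features live on very different scales, so standardize first;
# otherwise income would dominate the Euclidean distance.
X = StandardScaler().fit_transform(profiles)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} profiles")
```

Standardizing before clustering is the practical counterpart of the normalization step discussed in the preprocessing section.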
# Evaluating the forecasting models

Evaluating the performance of forecasting models is crucial for determining their effectiveness in retirement planning. This section will introduce key evaluation metrics and techniques for assessing the performance of machine learning models. (A worked example combining evaluation and tuning appears at the end of this chapter, just before the conclusion.)

## Exercise

Instructions:
1. Explain the process of evaluating a forecasting model.
2. Describe the advantages and disadvantages of using accuracy as an evaluation metric.

### Solution

1. Evaluating a forecasting model:
   - Split the dataset into a training set and a test set.
   - Train the model on the training set and make predictions on the test set.
   - Measure the performance of the model using appropriate evaluation metrics, such as accuracy, precision, recall, or mean squared error.

2. Advantages and disadvantages of using accuracy as an evaluation metric:
   - Advantages:
     - Simple and intuitive.
     - Useful for binary classification problems.
   - Disadvantages:
     - Not suitable for imbalanced datasets.
     - Does not consider the trade-off between false positives and false negatives.

# Optimizing the forecasting models

Optimizing forecasting models is essential for improving their accuracy and effectiveness in retirement planning. This section will introduce techniques for tuning the hyperparameters of machine learning models and selecting the best model for a given dataset.

## Exercise

Instructions:
1. Explain the process of hyperparameter tuning for a machine learning model.
2. Describe the advantages and disadvantages of using cross-validation for model selection.

### Solution

1. Hyperparameter tuning:
   - Define a set of possible hyperparameter values.
   - Train and evaluate the model for each combination of hyperparameter values.
   - Select the combination that produces the best performance.

2. Advantages and disadvantages of using cross-validation for model selection:
   - Advantages:
     - Provides a more reliable estimate of model performance.
     - Allows for the selection of a single best model.
   - Disadvantages:
     - Computationally expensive, especially for large datasets.
     - May overfit to the training data if the number of folds is too small.

# Real-world applications of retirement forecasting

Machine learning techniques can have a significant impact on retirement planning by providing accurate forecasts of retirement outcomes. This section will discuss real-world applications of retirement forecasting, such as in financial planning, retirement savings, and Social Security benefits.

## Exercise

Instructions:
1. Explain the role of retirement forecasting in financial planning.
2. Describe how retirement forecasting can be used to optimize retirement savings and investment strategies.

### Solution

1. Role of retirement forecasting in financial planning:
   - Retirement forecasting can help financial planners understand the potential outcomes for a client's retirement savings and investments.
   - This information can be used to make informed decisions about savings goals, investment strategies, and Social Security benefits.

2. Optimizing retirement savings and investment strategies:
   - Retirement forecasting can identify the optimal time to withdraw funds from a retirement account, taking into account factors such as inflation, taxes, and market returns.
   - By optimizing withdrawals, clients can maximize the lifetime income from their retirement savings and investments.
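Pulling together the supervised learning, evaluation, and tuning steps above, here is a minimal end-to-end sketch. It uses a gradient boosting regressor on synthetic data; the features, targets, and parameter grid are illustrative assumptions, not recommendations:

```python
# Hold out a test set, tune a gradient boosting model with
# cross-validated grid search, and report the test error.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(seed=1)
X = rng.uniform(0, 1, size=(500, 4))     # e.g. age, income, expenses, ...
y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)

param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=1),
    param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X_train, y_train)

print("best hyperparameters:", search.best_params_)
pred = search.predict(X_test)            # uses the refit best model
print("test MSE:", mean_squared_error(y_test, pred))
```

Note that the grid search only ever sees the training set; the held-out test set is used once, at the end, to estimate generalization error.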
# Conclusion and further reading

In conclusion, machine learning techniques offer powerful tools for forecasting retirement outcomes. By understanding the data, using supervised learning techniques, evaluating model performance, and optimizing model parameters, financial planners can provide accurate and actionable retirement forecasts to their clients.

For further reading, consider exploring the following resources:

- "Forecasting: Principles and Practice" by Rob J. Hyndman and George Athanasopoulos
- "The Art and Science of Personal Financial Planning" by William J. Bengen
- "The Science of Successful Retirement Planning" by Mark V. Schaefer and John B. Shields

These resources will provide a deeper understanding of the principles and practices behind retirement forecasting and offer practical guidance for financial planners.
Mathematical simulation of temperature distribution in tumor tissue and surrounding healthy tissue treated by laser combined with indocyanine green

Yuanyuan Xu, Shan Long, Yunning Yang, Feifan Zhou, Ning Dong, Kesong Yan, Bo Wang, Yachao Zeng, Nan Du, Xiaosong Li & Wei R. Chen

Theoretical Biology and Medical Modelling volume 16, Article number: 12 (2019)

Photothermal therapy is a local treatment method for cancer in which the generated heat energy destroys tumor cells. This study aimed to investigate the temperature distribution in the tumor tissue and surrounding healthy tissue of tumor-bearing mice using a mathematical simulation model.

Tumor-bearing mice were treated by laser alone or by laser combined with indocyanine green. The Monte Carlo method and the Pennes bio-heat equation were used to calculate the light distribution and heat energy. COMSOL Multiphysics was adopted to construct a three-dimensional temperature distribution model.

This study revealed that the data calculated by the simulation model are in good agreement with the surface temperature monitored by an infrared thermometer. Influenced by the optical parameters and boundary conditions of the tissue, the highest temperature of tissue treated by laser combined with indocyanine green was about 65 °C, located in the tumor tissue, whereas the highest temperature of tissue treated by laser alone was about 43 °C, located under the tumor tissue. The temperature difference was about 20 °C. The temperature distribution in the tissue was not uniform: the temperature difference between different parts of the tumor tissue was up to 15 °C. The temperature of tumor tissue treated by laser combined with indocyanine green was about 20 °C higher than that of the surrounding healthy tissue.

Reasonably good matching between the calculated temperature and the measured temperature was achieved, demonstrating the utility of our modeling method for deepening the understanding of the temperature distribution in tumor tissue and surrounding healthy tissue during treatment with laser combined with a photosensitizer. The simulation model could provide guidance and serve as a reference for predicting the effect of photothermal therapy.

Photothermal therapy is a local treatment method for cancer which applies intense laser energy to targeted tumor cells. The heat energy generated by the absorbed laser energy destroys the tumor cells [1]. A photosensitizer such as indocyanine green (ICG) can enhance the absorption of laser energy when used in conjunction with the laser [2]. The absorption spectrum of ICG spans about 600 to 900 nm [3]. ICG irradiated by a near-infrared laser produces a thermal effect which is severely cytotoxic to tumor cells [4]. Several studies have shown that the thermal effect induced by a near-infrared laser combined with ICG eradicates local tumor cells and prolongs the survival time of mice [5, 6]. A clinical trial demonstrated that the thermal effect induced by laser and ICG combined with an immunoadjuvant effectively treated breast tumors with tolerable side effects [7]. Photothermal therapy is an ideal method for cancer treatment because it can destroy the targeted tumor cells while protecting the surrounding normal tissue. The thermal distribution in the tumor tissue and surrounding healthy tissue is the most important factor influencing the effectiveness of photothermal therapy. Previous work showed that different temperatures induce different biological effects [8].
For example, at a temperature of about 37 °C, only a feeling of warmth is induced. When the temperature ranges from 60 to 100 °C, proteins are denatured. When the temperature ranges between 100 °C and 300 °C, the tissue may even be carbonized. In general, tumor cells are more sensitive to hyperthermia and more vulnerable to heat stress than healthy cells when the temperature rises above 42.5 °C [9, 10]. With the development of infrared thermography [11], the digital infrared thermometer has become a reliable method to monitor the surface temperature of a tumor. To measure the temperature of deep tissue, thermocouples are usually inserted into the tissue; however, this method is invasive.

During photothermal therapy, photons coming from the laser undergo either scattering or absorption when they pass through tissue. The extent of scattering and absorption is related to the scattering coefficient and absorption coefficient of the tissue, respectively. Absorbed photons electronically excite molecules; when these molecules transition from the excited state to a lower energy state, they release energy in some form, for example as heat [12]. The light distribution and temperature distribution during photothermal therapy can be investigated by mathematical simulation, which can display the three-dimensional temperature profile of the whole tissue, not just the surface temperature. Moreover, mathematical simulation is a noninvasive method to analyze temperature distribution. Manuchehrabadi et al. [13] applied a computational Monte Carlo simulation algorithm to simulate the temperature elevation in a prostatic tumor embedded in a mouse body during treatment with laser combined with gold nanorods. In Ganguly's study [14], finite element modeling was used to demonstrate the temperature distribution and heat-affected zone of excised rat skin samples and live anesthetized mouse tissue during laser irradiation. In Paul's study [15], finite element-based commercial software was used to simulate the subsurface thermal behavior of a tissue phantom embedded with large blood vessels during plasmonic photothermal therapy. In Sazgarnia's study [16], the thermal distribution of a tumor and the surrounding tissue was simulated in COMSOL software in a phantom made of agarose and intralipid during treatment with laser combined with gold/gold sulfide nanoshells. In Gnyawali's study [12], a finite difference method for heat distribution in tissue was used to simulate the temperature distribution in a tissue phantom during selective laser photothermal interaction. To our knowledge, there have been few investigations of simulation models of temperature distribution, even in tissue phantoms, during photothermal therapy, and investigations of the temperature distribution in living tissue are even fewer. This paper investigates the mathematical simulation of temperature distribution in tumor tissue and surrounding healthy tissue treated by laser combined with indocyanine green. This study could serve as a reference for the design of mathematical simulations of temperature distribution in tumors and surrounding healthy tissue and provide guidance for the clinical application of photothermal therapy.

Material and method

Tumor cell line

4T1 cells, a breast tumor cell line, were cultured in Roswell Park Memorial Institute 1640 (RPMI-1640) medium (Invitrogen, Carlsbad, CA) with 10% fetal bovine serum, 100 U/ml penicillin, and 100 U/ml streptomycin (Sigma, St. Louis, MO) at 37 °C in a humidified atmosphere of 5% CO2/95% air.
The cells were harvested and prepared in medium (1 million cells per 100 μl) for injection.

Female Balb/c mice (Harlan Sprague Dawley Co., Indianapolis, IN, USA) at the age of 6 to 8 weeks and a weight of 15–25 g were used in our experiment. Mice were anesthetized with a gas mixture of isoflurane (2%) and oxygen before laser irradiation. After the completion of laser irradiation, the mice were allowed to recover. All animal experiments were approved by the Institutional Animal Care and Use Committee and were in compliance with National Institutes of Health guidelines.

All Balb/c mice were depilated on the back; they were then injected subcutaneously with 10^6 4T1 cells suspended in 100 μl of phosphate-buffered saline. Tumors grew predictably in all mice and reached a size of 5 to 10 mm in diameter 8 to 10 days after injection. Tumor growth was assessed twice a week throughout the entire experiment. The orthogonal tumor dimensions (a and b) were measured with a Vernier caliper. The tumor volume was calculated according to the formula V = ab^2/2. The tumor-bearing mice were ready for treatment when the tumor reached 0.2–0.5 cm3. Mice were monitored carefully throughout the study and were preemptively euthanized when they became moribund.

According to the parameters of the elements in the photothermal therapy, the experiment was divided into three groups, as shown in Table 1. In groups 1 and 3, the tumors were injected with 200 μL of ICG, and the laser power densities were 1 W/cm2 and 0.8 W/cm2, respectively. In group 2, 200 μL of PBS (phosphate-buffered saline) was used, and the laser power density was 1 W/cm2.

Table 1 The experimental groups

Photothermal therapy

Before the laser treatment, the 4T1 tumor-bearing mice were anesthetized, and the hairs overlying the tumor were clipped. Before laser irradiation, 200 μL of ICG solution (Akorn Inc., Buffalo Grove, IL) or PBS was injected into the center of the tumors on the backs of the mice. An 805 nm laser was used to irradiate the tumor tissue for 600 s. An infrared thermometer (FLIR E8) was used to measure the surface temperature at irradiation time points of 0, 20 s, 40 s, 60 s, 120 s, 180 s, 240 s, 300 s, 360 s, 420 s, 480 s, 540 s and 600 s.

Method of the temperature distribution simulation model

Monte Carlo methods rely on repeated random sampling to compute their results and can be used to simulate physical and mathematical systems [17]. A Monte Carlo model is capable of simulating light transport in multi-layered tissues [18]. The steps of the Monte Carlo simulation of light distribution are shown in Fig. 1.

Fig. 1 The steps of the Monte Carlo simulation of light distribution

Based on the model of breast tumor-bearing mice, the physiology of the breast tumor area in tumor-bearing mice was represented. The breast tumor model was composed of three parts representing skin, fat and tumor. In the simulation model, the thicknesses of the epidermis and the fat above the tumor tissue were 0.5 mm and 1 mm, respectively. A sphere with a diameter of 8 mm represented the tumor tissue, and a cylinder with a diameter of 2 cm and a height of 2 cm represented the surrounding healthy tissue. The sphere was embedded in the cylinder. The simulated model is shown in Fig. 2.

Fig. 2 The simulation model of the tumor area in the tumor-bearing mice. a) Diagram of the cylindrical modeling domain of the tumor tissue. b) A free tetrahedral mesh of the computation domain

The model simulated the distribution of the absorbed energy from an 805 nm laser with a beam diameter of 1.5 cm.
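As an illustration of this step, the following minimal Python sketch traces photon packets through a homogeneous slab (depth only), sampling free path lengths from the total attenuation coefficient and tallying the absorbed energy per depth bin. The coefficients and the crude isotropic scattering below are simplifying assumptions for demonstration; they are not the tissue parameters of Table 2.

    import numpy as np

    mu_a, mu_s = 0.5, 10.0      # assumed absorption/scattering coefficients, 1/cm
    mu_t = mu_a + mu_s          # total attenuation coefficient
    bins = np.zeros(100)        # absorbed-energy tally, 0.02 cm per bin
    rng = np.random.default_rng(7)

    for _ in range(100_000):    # photon packets
        z, w, uz = 0.0, 1.0, 1.0    # depth, packet weight, direction cosine
        while w > 1e-4:
            z += uz * rng.exponential(1.0 / mu_t)   # sample free path
            if z < 0.0:         # packet escaped through the surface
                break
            absorbed = w * mu_a / mu_t              # partial absorption
            bins[min(int(z / 0.02), 99)] += absorbed
            w -= absorbed
            uz = rng.uniform(-1.0, 1.0)             # crude isotropic scattering

    print("depth of peak energy deposition:", bins.argmax() * 0.02, "cm")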
The optical parameters of the tissue [19] are shown in Table 2.

Table 2 Optical parameters of tissue

In addition to the absorption of light energy by the biological tissue itself, ICG contributed substantially to the absorption of light energy. According to the literature [20], there is a linear relationship between the ICG concentration and its absorption coefficient at 805 nm:

$$ \mathrm{A} = 0.04 \cdot \mathrm{C}_{\mathrm{ICG}} $$

where A is the absorption coefficient of ICG under irradiation by an 805 nm laser and C_ICG (μg/mL) is the concentration of ICG. When tumor tissue was treated by laser combined with the photosensitizer, the absorption coefficient was taken as the sum of the light absorption coefficient of the tumor tissue and that of the photosensitizer.

The heat distribution in the tissues was calculated by the Pennes bio-heat equation, which reads:

$$ \rho C \frac{\partial T}{\partial t} - \nabla \cdot \left( k \nabla T \right) = \rho_b C_b \omega_b \left( T_b - T \right) + Q_{met} + Q_{ext} $$

where ρ (kg/m3), C (J/(kg·K)) and k (W/(m·K)) are the density, specific heat and thermal conductivity of the tissue, respectively; T is the temperature; ωb (1/s), ρb (kg/m3), Cb (J/(kg·K)) and Tb (°C) are the perfusion rate, density, specific heat and temperature of the blood; Qmet (W/m3) is the metabolic heat generation rate per unit volume of the tissue; and Qext (W/m3) is the distributed volumetric heat source due to laser heating. The data for Qext came from the Monte Carlo simulation, which calculated the distribution of light energy in the tissues.

The temperature distribution simulation of the tissues during photothermal therapy was performed via the finite element method available in the COMSOL Multiphysics computational package. The geometry of the thermophysical model was consistent with that of the light distribution model. The set of thermophysical parameters of the tissues used in the simulation is shown in Table 3.

Table 3 Thermal parameters of tissue [21,22,23,24]

The boundary of the epidermis in the simulation was treated as an air convection boundary with a convective heat transfer coefficient of 18 W/(m2·K). The environment temperature was set to 15 °C and considered constant. The temperature at the other boundaries was 37 °C.
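To make the heat-transfer step concrete, the following minimal Python sketch integrates a one-dimensional version of the Pennes equation with an explicit finite-difference scheme. All parameter values, the convective surface boundary, and the exponentially decaying laser source term are simplified placeholders for illustration; they are not the values of Tables 2 and 3 or the Monte Carlo source used in this study.

    import numpy as np

    rho, c, k = 1000.0, 3600.0, 0.5     # tissue density, specific heat, conductivity (assumed)
    rho_b, c_b, w_b, T_b = 1000.0, 3600.0, 0.005, 37.0   # blood properties (assumed)
    Q_met = 400.0                        # metabolic heat, W/m^3 (assumed)
    n, dz = 200, 1e-4                    # 200 nodes, 0.1 mm spacing
    dt = 0.4 * rho * c * dz**2 / k       # stable explicit time step
    z = np.arange(n) * dz
    Q_ext = 5e5 * np.exp(-z / 0.004)     # assumed absorbed laser power, W/m^3

    T = np.full(n, 37.0)
    for _ in range(int(600 / dt)):       # 600 s of heating
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        T += dt / (rho * c) * (k * lap
                               + rho_b * c_b * w_b * (T_b - T)
                               + Q_met + Q_ext)
        T[-1] = 37.0                     # deep boundary held at body temperature
        T[0] = T[1] - dz * 18.0 / k * (T[0] - 15.0)   # crude convective surface condition

    print("peak temperature after 600 s: %.1f C at depth %.1f mm"
          % (T.max(), T.argmax() * dz * 1e3))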
Surface temperature distribution during laser irradiation

The surface temperature of the tumor tissue was monitored by the infrared thermometer and calculated by the simulation model, as shown in Fig. 3. In the first 240 s of photothermal therapy, the temperature rose rapidly; afterwards, the temperature was no longer obviously elevated and became stable. The temperatures of the tumor in group 1 (solid line - square) and group 2 (dash dot line - circle) were about 63 °C and about 39 °C, respectively, at t = 600 s. The maximum temperature difference between the two groups was about 20 °C. These results show that ICG contributed greatly to the temperature elevation. The temperature difference between group 1 (solid line - square) and group 3 (short line - triangle) was about 5 °C. The temperature measured in the experiment was almost consistent with the temperature calculated by the simulation, especially after 240 s.

Fig. 3 Comparison of the experimental and simulated results on the surface tumor temperature in tumor-bearing mice

Monte Carlo simulation of light distribution in tissues

The light distribution in the tumor tissue and surrounding healthy tissue was simulated by the Monte Carlo method, as shown in Fig. 4. When the tumor was irradiated by laser alone (Fig. 4a and b), the light energy absorbed by the tumor tissue was almost equal to that absorbed by the surrounding healthy tissue. The area with the maximum absorbed light energy was located in the tumor tissue, about 1.5–2 mm from the epidermis. The maximum absorbed energy was 5 × 10^5 W/m3.

Fig. 4 The distribution of the absorbed laser energy (W/m3) in tumor and surrounding tissue. a, b The laser power density is 1 W/cm2 and the ICG is 0.0 mg/mL. c, d The laser power density is 1 W/cm2 and the ICG is 0.1 mg/mL

When the tumor had been injected with ICG and irradiated by laser (Fig. 4c and d), the light energy absorbed by the tumor tissue was greater than that absorbed by the surrounding healthy tissue. The largest absorbed light energies in the tumor tissue and the surrounding healthy tissue were 5 × 10^6 W/m3 and 0.5 × 10^6 W/m3, respectively. The area with the maximum absorbed light energy was located in the tumor tissue, about 5–7 mm from the epidermis.

Temperature distribution in tissue at different treatment parameters

The temperature distribution of the tumor tissue and surrounding healthy tissue after 600 s of irradiation at different treatment parameters is shown in Fig. 5 (Additional file 2). When tumor-bearing mice were treated by laser combined with ICG (Fig. 5c, d, e and f), the temperature of the tumor tissue was significantly higher than that of the surrounding healthy tissue. The highest temperatures at t = 600 s (Fig. 5e and f) in the tumor tissue and surrounding healthy tissue were about 70 °C and 50 °C, respectively, when the tumor was treated by laser (1 W/cm2) and ICG (0.1 mg/ml). The position with the highest temperature was located in the tumor tissue, about 5–8 mm from the epidermis. The surface temperature of the tumor tissue was about 65 °C. The difference between the highest and lowest temperatures in the tumor tissue was about 20 °C in Fig. 5e and f, and about 15 °C in Fig. 5c and d.

Fig. 5 Three-dimensional and two-dimensional temperature distributions in tumor tissue and surrounding healthy tissue during photothermal therapy. a, b The laser power density is 1 W/cm2 and the ICG is 0.0 mg/ml. c, d The laser power density is 0.8 W/cm2 and the ICG is 0.1 mg/mL. e, f The laser power density is 1 W/cm2 and the ICG is 0.1 mg/mL

Additional file 1: Temperature evolution in tumor and surrounding tissue treated by laser without ICG. (AVI 3180 kb)

The temperature distribution when tumor-bearing mice were treated by laser without ICG is shown in Fig. 5a and b. The highest temperature was about 41.5 °C, located under the tumor tissue. The temperature of the tumor tissue ranged between 37 °C and 41.5 °C. The temperature of the surrounding healthy tissue was about 38.5 °C at t = 600 s.

Temperature distribution during photothermal therapy at different times

The two-dimensional and three-dimensional temperature distributions of the tumor tissue and surrounding healthy tissue treated by laser without ICG at different times are shown in Fig. 6 (Additional file 1). The body temperature of the mice was about 37 °C. The area of the highest temperature was under the tumor, about 13–18 mm from the epidermis. The highest temperature varied from 37 °C to 41.5 °C.
The surface temperature varied from 32 °C to 38.5 °C.

Fig. 6 Three-dimensional and two-dimensional temperature distribution in tumor tissue and surrounding tissue treated by laser without ICG. a, b t = 120 s; c, d t = 240 s; e, f t = 480 s

Additional file 2: Temperature evolution in tumor and surrounding tissue treated by laser with ICG. (AVI 3310 kb)

The two-dimensional and three-dimensional temperature distributions of the tumor tissue and surrounding healthy tissue treated by laser (1 W/cm2) combined with ICG (0.1 mg/ml) at different times are shown in Fig. 7. The area of the highest temperature was in the tumor, about 5–8 mm from the epidermis. The highest temperature varied from 37 °C to 70 °C. The maximum temperature of the surrounding tissue was about 50 °C.

Fig. 7 Three-dimensional and two-dimensional temperature distribution in tumor tissue and surrounding tissue treated by laser with ICG. a, b t = 120 s; c, d t = 240 s; e, f t = 480 s

Discussion

In this work, the temperature distribution in tumor tissue and surrounding healthy tissue was investigated when tumor-bearing mice were treated by laser with or without ICG. An infrared thermometer was used to measure the surface temperature during photothermal therapy. Based on the model of tumor-bearing mice treated by photothermal therapy, a mathematical simulation of the temperature distribution was constructed. The model coupled the physical light field and heat field. According to the generation principles of the heat and light fields, the simulation model constructed in this study consisted of two parts. First, the light distribution in the tumor and surrounding healthy tissue was simulated by the Monte Carlo method, and the energy distribution of the heat source was then calculated from the light distribution and the absorption coefficients of the tissue and ICG. Second, based on the Pennes bio-heat equation, the temperature field simulation model of the tumor tissue and surrounding healthy tissue was constructed using the direct coupling analysis software COMSOL Multiphysics. The simulated results were compared with the results measured in the in vivo experiment. To our knowledge, this is the first work to investigate the temperature distribution in tumor-bearing mice treated by laser combined with ICG, and the first spatial and temporal temperature simulation model built by combining the Monte Carlo method with the finite element method available in COMSOL Multiphysics.

The simulation results were in good agreement with the experimental results, as shown in Fig. 3. The present results on the temperature distribution in living tissue match well with the results on tissue phantoms reported by Gnyawali SC. In Gnyawali SC's study [12], gelatin phantoms were used to represent normal biological tissue. A spherical ICG-mixed gelatin target buried in the gelatin was used to represent tumor tissue, providing an absorption-enhanced target for selective photothermal interaction. An 805 nm laser was used to irradiate the dye for 600 s, and a Prism DS infrared camera was used to monitor the real-time surface temperature. The Monte Carlo method and a finite difference method were used to simulate the surface temperature profile of the tumor tissue. The simulated results and the experimental results were in good agreement. Compared with results obtained on tissue phantoms, the present experimental results are more valuable for the clinical application of photothermal therapy.
These results show that temperature monitoring using mathematical simulation is feasible. The temperature simulation model comprised the coupling of the light field and the heat field. The light distribution was simulated by the Monte Carlo method, a commonly used statistical random sampling method that has been widely applied to the simulation of various random processes. The light distribution in complex tissue can be regarded as the result of a large number of photons moving randomly and being absorbed in the tissue, which can be investigated by the Monte Carlo method [25, 26]. Xue Lingling's research [27] showed that simulation results for five-layer skin tissue obtained by the Monte Carlo method fit well with experimental results. The heat energy distribution was simulated by the Pennes bio-heat equation, a classical bio-heat equation that considers the effects of blood perfusion and metabolic heat generation of the tissues; in our model, the heat generated by the light absorption of ICG enters through the source term. The Monte Carlo simulation provided the heat source for the Pennes bio-heat equation. COMSOL Multiphysics is a multi-physics coupling software package that was used to couple the light and heat fields. The mathematical simulation model of this study conforms to the heat transfer characteristics of biological tissue, which makes the simulation results agree with the experimental results.

Figure 4 shows the light distribution in the tumor tissue and surrounding healthy tissue. The absorbed energy deposition was affected by the optical parameters of the tissue and the absorption coefficient of ICG. The pattern of light energy distribution in the tissue was largely due to the concave shape of the tumor top surface where the laser is incident and the cylindrical shape of the surrounding tissue. The light energy distribution was similar to the results shown by Manuchehrabadi [13], who applied the Monte Carlo method to simulate photon propagation in a spherical tumor and calculate the laser energy absorption in tumor tissue.

When the tumor tissue was treated by laser without ICG (Fig. 6), the temperatures of the tumor tissue and surrounding tissue did not rise above 42.5 °C, so neither the tumor nor the surrounding healthy tissue would be damaged by the laser. Owing to the optical parameters and boundary conditions of the tissue, the simulation showed that the highest point of the temperature field was under the tumor tissue when the tumor was not treated with ICG, whereas it was within the tumor tissue and close to the skin when the tumor was injected with ICG. This temperature distribution is similar to the results reported by Manuchehrabadi N et al. [13].

The mathematical simulation demonstrated that the temperature of the tumor tissue was higher than that of the surrounding healthy tissue under treatment by laser combined with ICG (Fig. 7). The temperature distribution in the tumor was not uniform: the temperature of different parts of the tumor tissue varied from about 45 °C to 70 °C. In general, the temperature at the tumor periphery was lower than that in the central region. As mentioned in the literature [9, 28], when the temperature of tumor cells rises above 42.5 °C, the number of dead tumor cells increases drastically with increasing temperature. The temperature of the surrounding healthy tissue varied from 37 °C to about 45 °C.
Within this temperature range, the surrounding tissue near the tumor could be slightly damaged, while the tissue far away from the tumor remained relatively safe.

During photothermal therapy, the temperature rose markedly before t = 240 s, while it became stable after 240 s. The same trend in temperature was also observed in Gnyawali's study [12].

The tumors in group 1 and group 3 had the same concentration of ICG and were irradiated by laser with power densities of 1 W/cm2 and 0.8 W/cm2, respectively; the maximum temperature difference was about 5 °C. Compared with that of ICG, the contribution of the laser power density to the temperature elevation was not obvious. Kannadorai et al. [29] also found that there was hardly any increase in the overall temperature of the tumor during photothermal therapy when the laser power density was steadily increased. This suggests that the laser power density contributed comparatively little to the temperature elevation.

There are still some limitations to this study. The geometric structure in this study was fixed and could not represent different tumor sizes, shapes and depths, which caused small inconsistencies between the simulation results and the experimental results. Further studies on this subject will be carried out in the future. In this study, the distribution of ICG was assumed to be uniform. However, ICG is unstable and easily biodegraded. One study [30] showed that a graphene oxide-titanium dioxide nanomaterial/ICG composite (TiO2-GO/ICG) was stable and could increase the tumor accumulation of ICG when TiO2-GO/ICG was used as a photosensitizer for cancer treatment. The temperature distribution of nanomaterial-loaded ICG is a direction for future investigation.

Conclusions

Mathematical simulation is feasible for monitoring the temperature of tissue during photothermal therapy. The simulation model can predict the temperature distribution in tumor tissue and surrounding healthy tissue so as to achieve the ideal treatment effect of selectively destroying tumor cells while avoiding damage to the surrounding healthy tissue. The photosensitizer ICG selectively elevated the temperature of the tumor tissue. The model can guide the research and development of suitable photosensitizers that target tumor cells and distribute uniformly in tumor tissue; such photosensitizers should be further researched and developed. The optimal thermal dose should also be further investigated, and the model of temperature distribution can provide guidance here as well.

Nomenclature

ρ the density, kg/m3
C the specific heat, J/(kg·K)
k the thermal conductivity, W/(m·K)
Qmet the metabolic heat generation rate per unit volume of the tissue, W/m3
Qext the distributed volumetric heat source due to laser heating, W/m3
ρb the blood density, kg/m3
Cb the blood specific heat, J/(kg·K)
ωb the blood perfusion, 1/s
Tb the blood temperature, °C

Availability of data and materials

All data generated or analyzed during this study are included in this published article and its additional files.

Abbreviations

ICG: Indocyanine green
PBS: Phosphate-buffered saline
RPMI-1640: Roswell Park Memorial Institute 1640

References

Chen WR, Adams RL, Heaton S, Dickey DT, Bartels KE, Nordquist RE. Chromophore-enhanced laser-tumor tissue photothermal interaction using 808 nm diode laser. Cancer Lett. 1995;88(1):15–9.

Li XS, Min M, Gu Y, Du N, Hode T, Nordquist RE, et al. Laser immunotherapy: concept, possible mechanism, clinical applications and recent experimental results. IEEE J Sel Top Quant. 2012;18(4):1434–8.
Landsman ML, Kwant G, Mook GA, Zijlstra WG. Light-absorbing properties, stability, and spectral stabilization of indocyanine green. J Appl Physiol. 1976;40(4):575–83.

Hirohashi K, Anayama T, Wada H, Nakajima T, Kato T, Keshavjee S, et al. Photothermal ablation of human lung cancer by low-power near-infrared laser and topical injection of indocyanine green. J Bronchology Interv Pulmonol. 2015;22(2):99–106.

Li XS, Le H, Wolf RF, Chen VA, Sarkar A, Nordquist RE, et al. Long-term effect on EMT6 tumors in mice induced by combination of laser immunotherapy and surgery. Integr Cancer Ther. 2011;10(4):368–73.

Barnes KD, Shafirstein G, Webber JS, Koonce NA, Harris Z, Griffin RJ. Hyperthermia-enhanced indocyanine green delivery for laser-induced thermal ablation of carcinomas. Int J Hyperth. 2013;29(5):474–9.

Li XS, Ferrel GL, Guerra MC, Hode T, Lunn JA, Adalsteinsson O. Preliminary safety and efficacy results of laser immunotherapy for the treatment of metastatic breast cancer patients. Photochem Photobiol Sci. 2011;10(5):817–21.

Zaami A, Baran I, Akkerman R. Experimental and numerical analysis of laser reflection for optical-thermal process modeling of tape winding. In: 21st International Conference on Composite Materials, Xian; 2017.

Lagendijk JJ. Hyperthermia treatment planning. Phys Med Biol. 2000;45(5):61–76.

Luo M, Shi L, Zhang F, Zhou F, Zhang L, Wang B, et al. Laser immunotherapy for cutaneous squamous cell carcinoma with optimal thermal effects to enhance tumour immunogenicity. Int J Hyperthermia. 2018;34(8):1337–50.

Xie W, Pip M, Jakobsen K, Parish C. Evaluation of the ability of digital infrared imaging to detect vascular changes in experimental animal tumors. Int J Cancer. 2004;108(5):790–4.

Gnyawali SC, Chen Y, Wu F, Bartels KE, Wicksted JP, Liu H, et al. Temperature measurement on tissue surface during laser irradiation. Med Biol Eng Comput. 2008;46(2):159–68.

Manuchehrabadi N, Chen Y, Lebrun A, Ma RH, Zhu L. Computational simulation of temperature elevations in tumors using Monte Carlo method and comparison to experimental measurements in laser photothermal therapy. J Biomech Eng. 2013;135(12):121007.

Ganguly M, Miller S, Mitra K. Model development and experimental validation for analyzing initial transients of irradiation of tissues during thermal therapy using short pulse lasers. Lasers Surg Med. 2015;47(9):711–22.

Paul A, Narasimhan A, Das SK, Sengupta S, Pradeep T. Subsurface thermal behaviour of tissue mimics embedded with large blood vessels during plasmonic photo-thermal therapy. Int J Hyperth. 2016;32(7):765–77.

Sazgarnia A, Naghavi N, Mehdizadeh H, Shahamat Z. Investigation of thermal distribution for pulsed laser radiation in cancer treatment with nanoparticle-mediated hyperthermia. J Therm Biol. 2015;47:32–41.

Li X, Cheng G, Huang N, Wang L, Liu FG, Gu Y. Light distribution in intravascular low level laser therapy applying mathematical simulation: a comparative study. J Xray Sci Technol. 2010;18(1):47–55.

Wang L, Jacques SL, Zheng L. MCML--Monte Carlo modeling of light transport in multi-layered tissues. Comput Methods Prog Biomed. 1995;47(2):131–46.

Sandell JL, Zhu TC. A review of in-vivo optical properties of human tissues and its impact on PDT. J Biophotonics. 2011;4(11–12):773–87.

Wang CP, Zeng CC, Guan XY, Zhu ZR, Liu SH, et al. The effects of indocyanine green on the near-infrared optical properties and optical coherence tomography of rat cerebral cortex. Guang Pu Xue Yu Guang Pu Fen Xi. 2012;32(7):1766–70.

Prather SO, Lausch RN. Membrane-associated antigen from the SV40-induced hamster fibrosarcoma, Para-7. I. Role in immune complex formation and effector cell blockade. Int J Cancer. 1976;18(6):820–8.
Chato JC. Fundamentals of bioheat transfer. In: Thermal dosimetry and treatment planning; 1990. p. 1–56. https://doi.org/10.1007/978-3-642-48712-5_1.

Cohen ML. Measurement of the thermal properties of human skin. A review. J Invest Dermatol. 1977;69(3):333–8.

Rossmann C, Haemmerich D. Review of temperature dependence of thermal properties, dielectric properties, and perfusion of biological tissues at hyperthermic and ablation temperatures. Crit Rev Biomed Eng. 2014;42(6):467–92.

Azimipour M, Baumgartner R, Liu Y, et al. Extraction of optical properties and prediction of light distribution in rat brain tissue. J Biomed Opt. 2014;19(7):75001.

Gysbrechts B, Wang L, Trong NN, Cabral H, Navratilova Z, Battaglia F, et al. Light distribution and thermal effects in the rat brain under optogenetic stimulation. J Biophotonics. 2016;9(6):576–85.

Wang S, Zhang JH, Lui H, He Q, Bai J, Zeng H. Monte Carlo simulation of in vivo Raman spectral measurements of human skin with a multi-layered tissue optical model. J Biophotonics. 2014;7(9):703–12.

Moroz P, Jones SK, Gray BN. Magnetically mediated hyperthermia: current status and future directions. Int J Hyperth. 2002;18(4):267–84.

Kannadorai RK, Liu Q. Optimization in interstitial plasmonic photothermal therapy for treatment planning. Med Phys. 2013;40(10):103301.

Li WW, Zhang XG, Zhu X, Zhang HJ. Influence of graphene oxide-titanium dioxide nanomaterial on the stability and in vivo distribution of indocyanine green. J Zhengzhou Univ (Med Sci). 1999;2(3):187–99.

Yuanyuan Xu and Shan Long contributed equally to this work.

Author affiliations:

Jinzhou Medical University, Jinzhou, 121000, China
Yuanyuan Xu & Yunning Yang
Department of Oncology, Graduate Training Base - Fourth Medical Center of Chinese PLA General Hospital of Jinzhou Medical University, Beijing, 100048, China
School of Medicine, Nankai University, Tianjin, 300071, China
Shan Long
Shenzhen University, Shenzhen, 518000, China
Feifan Zhou
Burns Institute, Fourth Medical Center of Chinese PLA General Hospital, Beijing, 100048, China
Ning Dong
Department of Laboratory Animal, Fourth Medical Center of Chinese PLA General Hospital, Beijing, 100048, China
Kesong Yan
Department of Oncology, Fourth Medical Center of Chinese PLA General Hospital, Beijing, 100048, China
Bo Wang, Nan Du & Xiaosong Li
Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian, 116000, Liaoning, China
Yachao Zeng
Biophotonics Research Laboratory, Center for Interdisciplinary Biomedical Education and Research, College of Mathematics and Science, University of Central Oklahoma, Edmond, 73034, USA
Wei R. Chen

All authors developed the model, performed the simulation study and wrote the manuscript. All authors read and approved the final manuscript.

Correspondence to Xiaosong Li.

Xu, Y., Long, S., Yang, Y. et al. Mathematical simulation of temperature distribution in tumor tissue and surrounding healthy tissue treated by laser combined with indocyanine green. Theor Biol Med Model 16, 12 (2019). doi:10.1186/s12976-019-0107-3

Accepted: 10 June 2019

Keywords: Pennes bio-equation; COMSOL Multiphysics
A right triangle is inscribed in a circle with a diameter $100$ units long. What is the maximum area of the triangle, in square units? Let the triangle be $ABC$, with hypotenuse $\overline{AB}$, and let $O$ be the center of the circle. The hypotenuse of a right triangle that is inscribed in a circle is a diameter of the circle, so $\overline{AB}$ is the diameter of the circle. Since point $C$ is on the circle, point $C$ is $100/2=50$ units from the midpoint of $\overline{AB}$ (which is the center of the circle). So, point $C$ cannot be any more than 50 units from $\overline{AB}$. This maximum can be achieved when $\overline{OC}\perp\overline{AB}$. The area of $\triangle ABC$ then is $(50)(100)/2 = \boxed{2500}$ square units. [asy] pair A,B,C,O; A = (-1,0); B=-A; C = (0,1); draw(A--B--C--A); draw(C--(0,0),dashed); O = (0,0); label("$O$",O,S); label("$A$",A,W); label("$B$",B,E); label("$C$",C,N); draw(Circle(O,1)); [/asy]
Background on state-dependent diversification rate estimation

An introduction to inference using state-dependent speciation and extinction (SSE) branching processes

Sebastian Höhna, Will Freyman, and Emma Goldberg

This is a general introduction to character state-dependent branching process models, particularly as they are implemented in RevBayes. Frequently referred to as state-dependent speciation and extinction (SSE) models, these models are a birth-death process where the diversification rates are dependent on the state of an evolving character. The original model of this type considered a binary character (a trait with two discrete state values), called BiSSE (Maddison et al. 2007). Several variants have also been developed for other types of traits (FitzJohn 2010; Goldberg et al. 2011; Goldberg and Igić 2012; Magnuson-Ford and Otto 2012; FitzJohn 2012; Beaulieu and O'Meara 2016; Freyman and Höhna 2018).

RevBayes can be used to specify a wide range of SSE models. For specific examples see these other tutorials:

- BiSSE and MuSSE models: State-dependent diversification with BiSSE and MuSSE
- ClaSSE and HiSSE models: State-dependent diversification with HiSSE and ClaSSE
- ChromoSSE: Chromosome Evolution

Background: The BiSSE Model

The binary state speciation and extinction model (BiSSE) (Maddison et al. 2007) was introduced because of two problems identified by Maddison (2006). First, inferences about character state transitions based on simple transition models [like Pagel (1999)] can be thrown off if the character affects rates of speciation or extinction. Second, inferences about whether a character affects lineage diversification based on sister clade comparisons (Mitter et al. 1988) can be thrown off if the transition rates are asymmetric. BiSSE and related models are now mostly used to assess whether the states of a character are associated with different rates of speciation or extinction. RevBayes implements the extension of BiSSE to any number of discrete states, i.e., the MuSSE model in diversitree (FitzJohn 2012). We will first describe the general theory behind the model.

The theory behind state-dependent diversification models

A schematic overview of the BiSSE model. Each lineage has a binary trait associated with it, so it is either in state 0 (blue) or state 1 (red). When a lineage is in state 0, it can either (a) speciate with rate $\lambda_0$, which results in two descendant lineages both being in state 0; (b) go extinct with rate $\mu_0$; or (c) transition to state 1 with rate $q_{01}$. The same types of events are possible when a lineage is in state 1, but with rates $\lambda_1$, $\mu_1$, and $q_{10}$, respectively.

General approach

The BiSSE model assumes two discrete states (i.e., a binary character), and that the state of each extant species is known (i.e., the discrete-valued character is observed). The general approach adopted by BiSSE and related models is to derive a set of ordinary differential equations (ODEs) that describe how the probability of observing a descendant clade changes along a branch in the observed phylogeny. Each equation in this set describes how the probability of observing a clade changes through time if it is in a particular state over that time period; collectively, these equations are called $\frac{\mathrm{d}D_{N,i}(t)}{\mathrm{d}t}$, where $i$ is the state of a lineage at time $t$ and $N$ is the clade descended from that lineage.

Computing the likelihood proceeds by establishing an initial value problem.
We initialize the procedure by observing the character states of some lineages, generally the tip states. Then starting from those probabilities (e.g., species X has state 0 with probability 1 at the present), we describe how those probabilities change over time (described by the ODEs), working our way back until we have computed the probabilities of observing that collection of lineages at some earlier time (e.g., the root). As we integrate from the tips to the root, we need to deal with branches coming together at nodes. Assuming that the parent and daughter lineages have the same state, we multiply together the probabilities that the daughters are in state $i$ and the instantaneous speciation rate $\lambda_i$ to get the initial value for the ancestral branch subtending that node. Proceeding in this way down the tree results in a set of $k$ probabilities at the root; these $k$ probabilities represent the probability of observing the phylogeny conditional on the root being in each of the states (i.e., the $i^\text{th}$ conditional probability is the probability of observing the tree given that the root is in state $i$). The overall likelihood of the tree is a weighted average of the $k$ probabilities at the root, where the weighting scheme represents the assumed probability that the root was in each of the $k$ states.

As with all birth-death process models, special care must be taken to account for the possibility of extinction. Specifically, the above ODEs must accommodate lineages that may arise along each branch in the tree that subsequently go extinct before the present (and so are unobserved). This requires a second set of $k$ ODEs, $\frac{\mathrm{d}E_{i}(t)}{\mathrm{d}t}$, which define how the probability of eventual extinction from state $i$ changes over time. These ODEs must be solved together with the equations for $\frac{\mathrm{d}D_{N,i}(t)}{\mathrm{d}t}$, since the latter depend on $E_i(t)$. We will derive both sets of equations in the following sections.

Derivation for the binary state birth-death process

The derivation here follows the original description in Maddison et al. (2007). Consider a (time-independent) birth-death process with two possible states (a binary character), with diversification rates $\{\lambda_0, \mu_0\}$ and $\{\lambda_1, \mu_1\}$.

Clade probabilities, $D_{N,i}$

We define $D_{N,0}(t)$ as the probability of observing lineage $N$ descending from a particular branch at time $t$, given that the lineage at that time is in state 0. To compute the probability of observing the lineage at some earlier time point, $D_{N,0}(t + \Delta t)$, we enumerate all possible events that could occur within the interval $\Delta t$. Assuming that $\Delta t$ is small (so that the probability of more than one event occurring in the interval is negligible), there are four possible scenarios within the time interval: (1) nothing happens; (2) a transition occurs, so the state changes $0 \rightarrow 1$; (3) a speciation event occurs and the right descendant subsequently goes extinct before the present; or (4) a speciation event occurs and the left descendant subsequently goes extinct before the present. We are describing events within a branch of the tree (not at a node), so for (3) and (4) we require that one of the descendant lineages go extinct before the present, because we do not observe a node in the tree between $t$ and $t + \Delta t$.

Possible events along a branch in the BiSSE model, used for deriving $D_{N,0}(t + \Delta t)$. This is Figure 2 in Maddison et al. (2007).
We can thus compute $D_{N,0}(t + \Delta t)$ as:

\[\begin{aligned} D_{N,0}(t + \Delta t) = & \;(1 - \mu_0 \Delta t) \times & \text{in all cases, no extinction of the observed lineage} \\ & \;[ (1 - q_{01} \Delta t)(1 - \lambda_0 \Delta t) D_{N,0}(t) & \text{case (1) nothing happens} \\ & \; + (q_{01} \Delta t) (1 - \lambda_0 \Delta t) D_{N,1}(t) & \text{case (2) state change but no speciation} \\ & \; + (1 - q_{01} \Delta t) (\lambda_0 \Delta t) E_0(t) D_{N,0}(t) & \text{case (3) no state change, speciation, extinction} \\ & \; + (1 - q_{01} \Delta t) (\lambda_0 \Delta t) E_0(t) D_{N,0}(t)] & \text{case (4) no state change, speciation, extinction} \end{aligned}\]

A matching equation can be written down for $D_{N,1}(t+\Delta t)$. To convert these difference equations into differential equations, we take the limit $\Delta t \rightarrow 0$. With the notation that $i$ can be either state 0 or state 1, and $j$ is the other state, this yields:

\[\frac{\mathrm{d}D_{N,i}(t)}{\mathrm{d}t} = - \left(\lambda_i + \mu_i + q_{ij} \right) D_{N,i}(t) + q_{ij} D_{N,j}(t) + 2 \lambda_i E_i(t) D_{N,i}(t) \tag{1}\label{eq:one}\]

Extinction probabilities, $E_i$

To solve the above equations for $D_{N,i}$, we see that we need the extinction probabilities. Define $E_0(t)$ as the probability that a lineage in state 0 at time $t$ goes extinct before the present. To determine the extinction probability at an earlier point, $E_0(t+\Delta t)$, we can again enumerate all the possible events in the interval $\Delta t$: (1) the lineage goes extinct within the interval; (2) the lineage neither goes extinct nor speciates, resulting in a single lineage that must eventually go extinct before the present; (3) the lineage neither goes extinct nor speciates, but there is a state change, resulting in a single lineage that must go extinct before the present; or (4) the lineage speciates in the interval, resulting in two lineages that must eventually go extinct before the present.

\[\begin{aligned} E_0(t + \Delta t) = &\; \mu_0\Delta t + & \text{case (1) extinction in the interval} \\ & (1 - \mu_0\Delta t) \times & \text{no extinction in the interval and \dots} \\ & \;[(1-q_{01}\Delta t)(1-\lambda_0 \Delta t) E_0(t) & \text{case (2) nothing happens, but subsequent extinction} \\ & \;+ (q_{01}\Delta t) (1-\lambda_0 \Delta t) E_1(t) & \text{case (3) state change and subsequent extinction} \\ & \;+ (1 - q_{01} \Delta t) (\lambda_0 \Delta t) E_0(t)^2] & \text{case (4) speciation and subsequent extinctions} \end{aligned}\]

Again, a matching equation for $E_1(t+\Delta t)$ can be written down.

Possible events along a branch in the BiSSE model, used for deriving $E_0(t + \Delta t)$. This is Figure 3 in Maddison et al. (2007).

To convert these difference equations into differential equations, we again take the limit $\Delta t \rightarrow 0$:

\[\frac{\mathrm{d}E_i(t)}{\mathrm{d}t} = \mu_i - \left(\lambda_i + \mu_i + q_{ij} \right)E_i(t) + q_{ij} E_j(t) + \lambda_i E_i(t)^2 \tag{2}\label{eq:two}\]

Initial values: tips and sampling

The equations above describe how to get the answer at time $t + \Delta t$ assuming we already have the answer at time $t$. How do we start this process? The answer is with our character state observations, which are generally the tip state values. If species $s$ has state $i$, then $D_{s,i}(0) = 1$ (the probability is 1 at time 0 [the present] because we observed it for sure) and $E_i(0) = 0$ (probability 0 of being extinct at the present). For all states other than $i$, $D_{s,j}(0) = 0$ and $E_j(0) = 1$.
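As a side illustration (not part of the tutorial itself), the two coupled ODE systems together with the tip initial conditions can be integrated numerically along a single branch. The sketch below uses SciPy, and the rate values are arbitrary placeholders rather than estimates from any data set.

```python
# Integrate the BiSSE equations (1) and (2) backward along one branch,
# starting from a tip observed in state 0 under complete sampling.
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([0.2, 0.4])    # speciation rates lambda_0, lambda_1
mu = np.array([0.05, 0.1])    # extinction rates mu_0, mu_1
q = np.array([[0.0, 0.01],    # q[i, j] = transition rate from state i to j
              [0.01, 0.0]])

def bisse_rhs(t, y):
    """Right-hand side for the state vector [E_0, E_1, D_0, D_1]."""
    E, D = y[:2], y[2:]
    dE, dD = np.empty(2), np.empty(2)
    for i in range(2):
        j = 1 - i
        total = lam[i] + mu[i] + q[i, j]
        dE[i] = mu[i] - total * E[i] + q[i, j] * E[j] + lam[i] * E[i] ** 2
        dD[i] = -total * D[i] + q[i, j] * D[j] + 2.0 * lam[i] * E[i] * D[i]
    return np.concatenate([dE, dD])

# Tip in state 0: D_0(0) = 1, D_1(0) = 0; E_i(0) = 0 for complete sampling.
y0 = np.array([0.0, 0.0, 1.0, 0.0])
sol = solve_ivp(bisse_rhs, (0.0, 5.0), y0, rtol=1e-8)
print(sol.y[:, -1])  # E_0, E_1, D_0, D_1 at the rootward end of the branch
```

Here the solver's time variable runs from the tip (t = 0) toward the root, matching the tip-to-root direction of the integration described above.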
We can adjust these initial conditions to allow for incomplete sampling. If a proportion $\rho$ of species are included in the tree, we would instead set $D_{s,i}(0) = \rho$ (the probability of having state $i$ and of being included in the tree) and $E_i(0) = 1-\rho$ (the probability of being absent from the tree due to sampling rather than extinction). This simple form of incomplete sampling assumes that any species is equally likely to be on the tree (FitzJohn et al. 2009).

At nodes

Equations \eqref{eq:one} and \eqref{eq:two} are the BiSSE ODEs, describing probabilities along the branches of a phylogeny. We also need to specify what happens with the clade probabilities (the $D$s) at the nodes of the tree. BiSSE assumes the ancestor (called $A$) and descendants (called $N$ and $M$) have the same state (i.e., there is no cladogenetic character change). The initial value for the ancestral branch going into a node (at time $t_A$) is then the product of the final values for each of the daughter branches coming out of that node, times the instantaneous speciation rate (to account for the observed speciation event):

\[D_{A, i}(t_A) = D_{N, i}(t_A) D_{M, i}(t_A) \lambda_i \tag{3}\label{eq:three}\]

At the root

After we integrate equations \eqref{eq:one} and \eqref{eq:two} from the tips to the root, dealing with nodes along the way via equation \eqref{eq:three}, we arrive at the root with the $D$ values (called $D_{R, i}$), one for each state. These need to be combined somehow to get the overall likelihood of the data:

\[\text{Likelihood(tree, tip states | model)} = \sum_i D_{R, i} \, p_{R, i}\]

What probability weighting, $p_{R, i}$, should be used for the possible root states? Sometimes a fixed approach is used, assuming that the prior root state probabilities are either all equal, or are the same as the observed tip state frequencies, or are the equilibrium state frequencies under the model parameters. These assumptions do not have a real basis, however (unless there is some external data that supports them), and they can cause trouble (Goldberg and Igić 2008). An alternative is to use the BiSSE probabilities themselves to determine the root state weightings, essentially adjusting the weightings to be most consistent with the data and BiSSE parameters (FitzJohn et al. 2009). Perhaps better is to treat the weightings as unknown parameters to be estimated. These estimates are usually quite uncertain, but in a Bayesian framework, one can treat the $p_{R, i}$ as nuisance parameters and integrate over them.

BiSSE model parameters and their interpretation:
$\Psi$: phylogenetic tree with divergence times
$T$: root age
$q_{01}$: rate of transitions from 0 to 1
$\lambda_0$: speciation rate for state 0
$\mu_0$: extinction rate for state 0

Equations for the multi-state birth-death process

The entire derivation above can easily be expanded to accommodate an arbitrary number of states (FitzJohn 2012). The only extra piece is summing over all the possible state transitions. The resulting differential equations within the branches are:

\[\begin{aligned} \frac{\mathrm{d}D_{N,i}(t)}{\mathrm{d}t} &= - \left(\lambda_i + \mu_i + \sum\limits_{j \neq i}^k q_{ij} \right)D_{N,i}(t) + \sum\limits_{j \neq i}^k q_{ij} D_{N,j}(t) + 2\lambda_iE_i(t)D_{N,i}(t) \\ \frac{\mathrm{d}E_i(t)}{\mathrm{d}t} &= \mu_i - \left(\lambda_i + \mu_i + \sum\limits_{j \neq i}^k q_{ij} \right)E_i(t) + \sum\limits_{j \neq i}^k q_{ij} E_j(t) + \lambda_i E_i(t)^2 \end{aligned}\]

References

Beaulieu J.M., O'Meara B.C. 2016.
Detecting hidden diversification shifts in models of trait-dependent speciation and extinction. Systematic Biology. 65:583–601. 10.1093/sysbio/syw022
FitzJohn R.G., Maddison W.P., Otto S.P. 2009. Estimating trait-dependent speciation and extinction rates from incompletely resolved phylogenies. Systematic Biology. 58:595–611. 10.1093/sysbio/syp067
FitzJohn R.G. 2010. Quantitative traits and diversification. Systematic Biology. 59:619–633. 10.1093/sysbio/syq053
FitzJohn R.G. 2012. Diversitree: comparative phylogenetic analyses of diversification in R. Methods in Ecology and Evolution. 3:1084–1092. 10.1111/j.2041-210X.2012.00234.x
Freyman W.A., Höhna S. 2018. Cladogenetic and anagenetic models of chromosome number evolution: a Bayesian model averaging approach. Systematic Biology. 67:195–215.
Goldberg E.E., Lancaster L.T., Ree R.H. 2011. Phylogenetic inference of reciprocal effects between geographic range evolution and diversification. Systematic Biology. 60:451–465. 10.1093/sysbio/syr046
Goldberg E.E., Igić B. 2008. On phylogenetic tests of irreversible evolution. Evolution. 62:2727–2741. 10.1111/j.1558-5646.2008.00505.x
Goldberg E.E., Igić B. 2012. Tempo and mode in plant breeding system evolution. Evolution. 66:3701–3709. 10.1111/j.1558-5646.2012.01730.x
Maddison W.P., Midford P.E., Otto S.P. 2007. Estimating a binary character's effect on speciation and extinction. Systematic Biology. 56:701. 10.1080/10635150701607033
Maddison W.P. 2006. Confounding asymmetries in evolutionary diversification and character change. Evolution. 60:1743–1746. 10.1111/j.0014-3820.2006.tb00517.x
Magnuson-Ford K., Otto S.P. 2012. Linking the investigations of character evolution and species diversification. The American Naturalist. 180:225–245. 10.1086/666649
Mitter C., Farrell B., Wiegmann B. 1988. The phylogenetic study of adaptive zones: has phytophagy promoted insect diversification? The American Naturalist. 132:107–128. 10.1086/284840
Pagel M. 1999. The maximum likelihood approach to reconstructing ancestral character states of discrete characters on phylogenies. Systematic Biology. 48:612–622. 10.1080/106351599260184
CommonCrawl
Standard error

The standard error (SE)[1] of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution[2] or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).[1]

The sampling distribution of a mean is generated by repeated sampling from the same population and recording of the sample means obtained. This forms a distribution of different means, and this distribution has its own mean and variance. Mathematically, the variance of the sampling distribution of the mean is equal to the variance of the population divided by the sample size; as a consequence, sample means cluster more closely around the population mean as the sample size increases. Therefore, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size.[1] In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean.

In regression analysis, the term "standard error" refers either to the square root of the reduced chi-squared statistic or the standard error for a particular regression coefficient (as used in, say, confidence intervals).

Standard error of the sample mean

Exact value

Suppose a statistically independent sample of $n$ observations $x_{1},x_{2},\ldots ,x_{n}$ is taken from a statistical population with a standard deviation of $\sigma$. The mean value calculated from the sample, $\bar{x}$, will have an associated standard error on the mean, $\sigma_{\bar{x}}$, given by:[1]

$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$.

Practically this tells us that when trying to estimate the value of a population mean, due to the factor $1/\sqrt{n}$, reducing the error on the estimate by a factor of two requires acquiring four times as many observations in the sample; reducing it by a factor of ten requires a hundred times as many observations.

Estimate

The standard deviation $\sigma$ of the population being sampled is seldom known. Therefore, the standard error of the mean is usually estimated by replacing $\sigma$ with the sample standard deviation $\sigma_{x}$ instead:

$\sigma_{\bar{x}} \approx \frac{\sigma_{x}}{\sqrt{n}}$.

As this is only an estimator for the true "standard error", it is common to see other notations here such as:

$\widehat{\sigma}_{\bar{x}} := \frac{\sigma_{x}}{\sqrt{n}}$   or alternately   $s_{\bar{x}} := \frac{s}{\sqrt{n}}$.

A common source of confusion occurs when failing to distinguish clearly between:
• the standard deviation of the population ($\sigma$),
• the standard deviation of the sample ($\sigma_{x}$),
• the standard deviation of the mean itself ($\sigma_{\bar{x}}$, which is the standard error), and
• the estimator of the standard deviation of the mean ($\widehat{\sigma}_{\bar{x}}$, which is the most often calculated quantity, and is also often colloquially called the standard error).

Accuracy of the estimator

When the sample size is small, using the standard deviation of the sample instead of the true standard deviation of the population will tend to systematically underestimate the population standard deviation, and therefore also the standard error. With n = 2, the underestimate is about 25%, but for n = 6, the underestimate is only 5%.
Gurland and Tripathi (1971) provide a correction and equation for this effect.[3] Sokal and Rohlf (1981) give an equation of the correction factor for small samples of n < 20.[4] See unbiased estimation of standard deviation for further discussion.

Derivation

The standard error on the mean may be derived from the variance of a sum of independent random variables,[5] given the definition of variance and some simple properties thereof. If $x_{1},x_{2},\ldots ,x_{n}$ is a sample of $n$ independent observations from a population with mean $\mu$ and standard deviation $\sigma$, then we can define the total $T=(x_{1}+x_{2}+\cdots +x_{n})$, which due to the Bienaymé formula will have variance

$\operatorname{Var}(T) = \operatorname{Var}(x_{1}) + \operatorname{Var}(x_{2}) + \cdots + \operatorname{Var}(x_{n}) = n\sigma^{2},$

where we have approximated the standard deviations, i.e., the uncertainties, of the measurements themselves with the best value for the standard deviation of the population. The mean of these measurements $\bar{x}$ is simply given by $\bar{x} = T/n$. The variance of the mean is then

$\operatorname{Var}(\bar{x}) = \operatorname{Var}\left(\frac{T}{n}\right) = \frac{1}{n^{2}}\operatorname{Var}(T) = \frac{1}{n^{2}} n\sigma^{2} = \frac{\sigma^{2}}{n}.$

The standard error is, by definition, the standard deviation of $\bar{x}$, which is simply the square root of the variance:

$\sigma_{\bar{x}} = \sqrt{\frac{\sigma^{2}}{n}} = \frac{\sigma}{\sqrt{n}}$.

For correlated random variables the sample variance needs to be computed according to the Markov chain central limit theorem.

Independent and identically distributed random variables with random sample size

There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size $N$ is a random variable whose variation adds to the variation of $X$ such that

$\operatorname{Var}(T) = \operatorname{E}(N)\operatorname{Var}(X) + \operatorname{Var}(N)\big(\operatorname{E}(X)\big)^{2},$[6]

which follows from the law of total variance. If $N$ has a Poisson distribution, then $\operatorname{E}(N) = \operatorname{Var}(N)$ with estimator $n = N$. Hence the estimator of $\operatorname{Var}(T)$ becomes $nS_{X}^{2} + n\bar{X}^{2}$, leading to the following formula for standard error:

$\operatorname{Standard~Error}(\bar{X}) = \sqrt{\frac{S_{X}^{2} + \bar{X}^{2}}{n}}$

(since the standard deviation is the square root of the variance).

Student approximation when σ value is unknown

Further information: Student's t-distribution § Confidence intervals, and Normal distribution § Confidence intervals

In many practical applications, the true value of σ is unknown. As a result, we need to use a distribution that takes into account the spread of possible σ's. When the true underlying distribution is known to be Gaussian, although with unknown σ, then the resulting estimated distribution follows the Student t-distribution. The standard error is the standard deviation of the Student t-distribution. T-distributions are slightly different from Gaussian, and vary depending on the size of the sample. Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian.
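As a concrete illustration of the quantities discussed so far, the following sketch (with made-up data, not taken from the article) estimates the SEM and builds both a normal-approximation and a t-based 95% confidence interval for the mean; the formulas for the normal-approximation limits are given just below.

```python
# Estimated standard error of the mean and 95% confidence intervals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=2.0, size=50)  # sample with true sigma = 2

n = x.size
xbar = x.mean()
sem = x.std(ddof=1) / np.sqrt(n)  # same value as scipy.stats.sem(x)

# Normal approximation: mean +/- 1.96 * SEM.
print(xbar - 1.96 * sem, xbar + 1.96 * sem)

# For small n, the Student t quantile replaces 1.96.
t975 = stats.t.ppf(0.975, df=n - 1)
print(xbar - t975 * sem, xbar + t975 * sem)
```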
To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation "s" instead of σ, and we could use this value to calculate confidence intervals.

Note: The Student's probability distribution is approximated well by the Gaussian distribution when the sample size is over 100. For such samples one can use the latter distribution, which is much simpler.

Assumptions and usage

Further information: Confidence interval

An example of how $\operatorname{SE}$ is used is to construct confidence intervals for the unknown population mean. If the sampling distribution is normally distributed, the sample mean, the standard error, and the quantiles of the normal distribution can be used to calculate confidence intervals for the true population mean. The following expressions can be used to calculate the upper and lower 95% confidence limits, where $\bar{x}$ is equal to the sample mean, $\operatorname{SE}$ is equal to the standard error for the sample mean, and 1.96 is the approximate value of the 97.5 percentile point of the normal distribution:

Upper 95% limit $= \bar{x} + (\operatorname{SE}\times 1.96)$, and
Lower 95% limit $= \bar{x} - (\operatorname{SE}\times 1.96)$.

In particular, the standard error of a sample statistic (such as the sample mean) is the actual or estimated standard deviation of that statistic in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be SE or SEM (for standard error of measurement or mean).

Standard errors provide simple measures of uncertainty in a value and are often used because:
• in many cases, if the standard error of several individual quantities is known then the standard error of some function of the quantities can be easily calculated;
• when the probability distribution of the value is known, it can be used to calculate an exact confidence interval;
• when the probability distribution is unknown, Chebyshev's or the Vysochanskiï–Petunin inequalities can be used to calculate a conservative confidence interval; and
• as the sample size tends to infinity the central limit theorem guarantees that the sampling distribution of the mean is asymptotically normal.

Standard error of mean versus standard deviation

In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process.
The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.[7] Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.[8] If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases.

Extensions

Finite population correction (FPC)

The formula given above for the standard error assumes that the population is infinite. Nonetheless, it is often used for finite populations when people are interested in measuring the process that created the existing finite population (this is called an analytic study). Though the above formula is not exactly correct when the population is finite, the difference between the finite- and infinite-population versions will be small when the sampling fraction is small (e.g. a small proportion of a finite population is studied). In this case people often do not correct for the finite population, essentially treating it as an "approximately infinite" population. If one is interested in measuring an existing finite population that will not change over time, then it is necessary to adjust for the population size (called an enumerative study). When the sampling fraction (often termed f) is large (approximately 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a "finite population correction" (a.k.a. FPC):[9][10]

$\operatorname{FPC} = \sqrt{\frac{N-n}{N-1}}$

which, for large N:

$\operatorname{FPC} \approx \sqrt{1 - \frac{n}{N}} = \sqrt{1-f}$

to account for the added precision gained by sampling close to a larger percentage of the population. The effect of the FPC is that the error becomes zero when the sample size n is equal to the population size N. This happens in survey methodology when sampling without replacement. If sampling with replacement, then FPC does not come into play.

Correction for correlation in the sample

If values of the measured quantity A are not statistically independent but have been obtained from known locations in parameter space x, an unbiased estimate of the true standard error of the mean (actually a correction on the standard deviation part) may be obtained by multiplying the calculated standard error of the sample by the factor f:

$f = \sqrt{\frac{1+\rho}{1-\rho}},$

where the sample bias coefficient ρ is the widely used Prais–Winsten estimate of the autocorrelation coefficient (a quantity between −1 and +1) for all sample point pairs. This approximate formula is for moderate to large sample sizes; the reference gives the exact formulas for any sample size, and can be applied to heavily autocorrelated time series like Wall Street stock quotes. Moreover, this formula works for positive and negative ρ alike.[11] See also unbiased estimation of standard deviation for more discussion.
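The two corrections just described are simple multiplicative factors, as the following sketch shows (the population size, sample statistics, and autocorrelation value are made-up numbers for illustration).

```python
# Finite population correction and autocorrelation correction of the SE.
import numpy as np

N, n = 1000, 100      # population and sample size (sampling fraction 10%)
s = 2.0               # assumed sample standard deviation

se = s / np.sqrt(n)                 # uncorrected standard error of the mean
fpc = np.sqrt((N - n) / (N - 1))    # finite population correction
print(se * fpc)                     # shrinks toward 0 as n approaches N

rho = 0.3                           # assumed sample autocorrelation estimate
f = np.sqrt((1 + rho) / (1 - rho))  # correction factor for correlated samples
print(se * f)                       # inflated SE for positively correlated data
```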
See also
• Illustration of the central limit theorem
• Margin of error
• Probable error
• Standard error of the weighted mean
• Sample mean and sample covariance
• Standard error of the median
• Variance
• Variance of the mean and predicted responses

References
1. Altman, Douglas G; Bland, J Martin (2005-10-15). "Standard deviations and standard errors". BMJ: British Medical Journal. 331 (7521): 903. doi:10.1136/bmj.331.7521.903. ISSN 0959-8138. PMC 1255808. PMID 16223828.
2. Everitt, B. S. (2003). The Cambridge Dictionary of Statistics. CUP. ISBN 978-0-521-81099-9.
3. Gurland, J; Tripathi RC (1971). "A simple approximation for unbiased estimation of the standard deviation". American Statistician. 25 (4): 30–32. doi:10.2307/2682923. JSTOR 2682923.
4. Sokal; Rohlf (1981). Biometry: Principles and Practice of Statistics in Biological Research (2nd ed.). p. 53. ISBN 978-0-7167-1254-1.
5. Hutchinson, T. P. (1993). Essentials of Statistical Methods, in 41 pages. Adelaide: Rumsby. ISBN 978-0-646-12621-0.
6. Cornell, J R, and Benjamin, C A, Probability, Statistics, and Decisions for Civil Engineers, McGraw-Hill, NY, 1970, ISBN 0486796094, pp. 178–9.
7. Barde, M. (2012). "What to use to express the variability of data: Standard deviation or standard error of mean?". Perspect. Clin. Res. 3 (3): 113–116. doi:10.4103/2229-3485.100662. PMC 3487226. PMID 23125963.
8. Wassertheil-Smoller, Sylvia (1995). Biostatistics and Epidemiology: A Primer for Health Professionals (Second ed.). New York: Springer. pp. 40–43. ISBN 0-387-94388-9.
9. Isserlis, L. (1918). "On the value of a mean as calculated from a sample". Journal of the Royal Statistical Society. 81 (1): 75–81. doi:10.2307/2340569. JSTOR 2340569. (Equation 1)
10. Bondy, Warren; Zlot, William (1976). "The Standard Error of the Mean and the Difference Between Means for Finite Populations". The American Statistician. 30 (2): 96–97. doi:10.1080/00031305.1976.10479149. JSTOR 2683803. (Equation 2)
11. Bence, James R. (1995). "Analysis of Short Time Series: Correcting for Autocorrelation". Ecology. 76 (2): 628–639. doi:10.2307/1941218. JSTOR 1941218.
Wikipedia
\begin{document} \title{The basis of Boole's logical theory} \begin{abstract} {In the present paper we aim to provide a thoughtful and exegetical account of the fundamental ideas at the basis of Boole's theory, with the goal of developing our investigation strictly within the conceptual structure originally introduced by Boole himself. In particular, we will focus on the meaning and the usefulness of the method of developments. We will also consider a slight variation of it that will allow us to present in a new light some important and ingenious aspects of Boole's calculus examined by the author in his work. Finally, considerable attention is devoted to the analysis of the ``neglected'' logical connective of division.} \end{abstract} \section{\emph{Introduction and outline}} Anyone approaching the study of George Boole's \emph{The Laws of Thought}, \cite{B1854}, a classic from the origins of modern logic, would expect to find in the critical literature a clear explanation of the fundamental ideas on which Boole based his theory of logic. In our opinion no such explanation has yet been given, though there are attempts to account for Boole's theory by means of complex algebraic systems, see \cite{H76} and \cite{BR09}. In the present paper we aim to provide a clear and exhaustive account of the fundamental ideas at the basis of Boole's whole theory of logic without appealing to anything other than concepts introduced by Boole himself in the chapters I--VI of \emph{The Laws of Thought}. According to James W. van Evra \cite{VE77}, ``While there is general agreement that his work occupies an important place in the history of logic, the exact nature of that importance remains elusive.'' We believe that we can show that the nature of that importance consists in the ingenious and refined ideas on which his logical theory is based. The calculus that Boole takes as a model for his logical theory is quantitative algebra. Both calculi are built on the same language based on the operations $\times, +, -, /$; Boole goes to great lengths to show that those operations share the same formal properties even when the variables of the logical theory are interpreted on sets. Indeed, according to Boole, the formal correspondences between the two calculi are so strong and deep that we can address a logical problem by using transformations typical of algebra and interpret the result in logical terms. Section 2 of the present paper is devoted to showing aspects of this correspondence\footnote{We are well aware that we here address a much debated and well known topic; still, we think it is important to note certain similarities and/or dissimilarities.}. Section 3 starts out from Boole's turning point: the recognition of a crucial difference between the two calculi, a difference that can be formalized in the \emph{law of duality}: $$x x = x.$$ The product of classes is idempotent, which, as we know, the product of numbers is not. Boole assumes this law as the defining property of classes: if anything is a class, then the law of duality holds for it; on the algebraic side, $0$ and $1$, seen as numbers, fulfill the law of duality. The correspondence can be re-established: the functions of this new (quantitative) algebra will be functions $f:\{0,1\}^n\to\mathbb{Q}$. We call the resulting calculus the \emph{pseudo-binary} calculus.
This is in fact what Boole's calculus really is: a pseudo-binary system, and not a purely binary one, as many might expect after the re-foundation of Boole's algebra pursued by Boole's successors! Now to a central question of Boole's project: given a function in the $\{\times, +, -, /, 0, 1\}$-language, what class does it represent? The answer to this question is of fundamental importance and Boole answers it with \emph{the method of developments}. Let us start by considering the \emph{pseudo-binary} calculus limited to the functions $f:\{0,1\}^n\to \mathbb{Z}$ in the language $\{\times, +, -, 0, 1\}$. We show that in this case the method of developments is equivalent to a variant of it that we call the \emph{method of intersections}. This variant we propose is very helpful in giving a logical interpretation to the pseudo-binary calculus. Just to get an intuitive idea, let us start by observing that given $n$ classes $x_1, \dots, x_n$, the universe can be partitioned into $2^n$ regions: $ 1 = x_1 + (1 - x_1)$ $ 1 = x_1 x_2 + x_1(1-x_2) + (1-x_1) x_2 + (1-x_1)(1-x_2)$ $ 1 = x_1 x_2 x_3 + x_1x_2 (1-x_3) + x_1 (1-x_2) x_3 + (1-x_1) x_2 x_3 + x_1(1-x_2) (1-x_3) + (1-x_1) x_2 (1-x_3) + (1-x_1)(1-x_2) x_3 + (1-x_1)(1-x_2) (1-x_3)$ $\dots$ See Figure \ref{fig:development_1}. \begin{figure} \caption{Development of 1 in two variables} \label{fig:development_1} \end{figure} Then any arbitrary class $a = f(x,y)$ expressed by a function of the variables, say $x$ and $y$, can be represented as a disjoint union as follows: $$ a x y + a x(1-y) + a (1-x) y + a (1-x)(1-y)$$ This representation, given by the method of intersections, is equivalent to what is obtained by the method of developments, but at the same time it represents a class as the disjoint union of its ``shadows'' over the different regions into which the universe is partitioned. See Figure \ref{fig:shadow}. \begin{figure} \caption{The shadow of A} \label{fig:shadow} \end{figure} We immediately see that the set-theoretical union relevant here is the exclusive one. Boole has been severely blamed by Frege for having privileged the exclusive union over the inclusive one, but Boole had very good reasons for his choice. In Section 3.2 we show that if we limit ourselves to considering functions $f^n:\{0,1\}^n\to\{0,1\}$, we end up with nothing but the truth functions. Now, if we interpret the variables on propositions, the product as $\wedge$, $+$ as $\vee$ and $1 - x$ as $\neg x$, then the development of $f^n(x_1, \dots, x_n)$ gives the proposition in disjunctive normal form whose truth table is $f^n(x_1, \dots, x_n)$. In this case too, the use of the exclusive disjunction is the natural one, even if the inclusive one works as well. There is more to the exclusive union and inclusive subtraction than a mere success in applying the method of intersections: these two operations are \emph{partial} operations, they are not defined when $x$ and $y$ are not disjoint or when $y$ is not a subset of $x$, and this, as we will see, corresponds perfectly to the cases in which the algebraic value of $x + y$ or $x - y$ is outside $\{0, 1\}$. The correspondence between the two calculi is amazing! In Section 3.3 we compare Boole's explanation of his strategy for treating non-binary coefficients with our motivations based on the use of the method of intersections. So far so good, but one could object that the logical calculus was intended to be a calculus of arbitrary classes, not just a calculus of the universal and the empty class.
It is well known from the standard propositional calculus and universal algebra that whatever can be said about arbitrary classes is reducible to what can be said about the universal and empty classes; see Section 3.2. Therefore, as far as the logical principles are concerned, nothing is lost. In Section 3.4 we comment on the very controversial solutions proposed by Boole when dealing with functions $f:\{0,1\}^n\to\mathbb{Q}$ in the language also containing the operation of division. The main problem is how to interpret such terms as $0/0$ and $1/0$ possibly occurring in the developments of these functions. We show that these cases too admit of a logical interpretation by extending the equivalence between the method of developments and the method of intersections from an algebraic point of view. \section{\emph{Logic and Algebra: a Formal Correspondence}} In his celebrated work \emph{An Investigation of the Laws of Thought} [1854] George Boole analyzes the \emph{structural} similarities between quantitative algebra and logical reasoning in more detail than he had done in his previous work \emph{The Mathematical Analysis of Logic}. His claim is the existence of evident \emph{formal} analogies shared by the universal laws of algebra and those of logic. Such formal analogies would indeed be so evident as to be expressible in a common mathematical language in terms of equations having the same syntactical form. The semantic interpretation of any equation of this type will of course differ within the two different domains (quantitative algebra and logic); the revealed formal analogy need not in fact correspond to any \emph{conceptual} identity. The \emph{syntactical} match is rather obtained in virtue of the use of the ordinary algebraic formalism as a unifying language between the two fields, and nothing can be said at the ontological level. Boole himself openly expresses his empirical attitude of \emph{hypotheses non fingo} concerning the nature of this correspondence by pointing out that \begin{quote}...it is not affirmed that the process of multiplication in Algebra (...) possesses in itself any analogy with that process of logical combination which $xy$ has been made to represent above; but only that if the arithmetical and logical process are expressed in the same manner, their symbolical expressions will be subject to the same formal law. (p.31) \end{quote} Same formal laws, different semantic interpretations: this is really the essence of the modern conception of formalism! The variables of the language will be interpreted either as numerical quantities, when our investigation concerns numerical algebra, or as classes of objects, when our investigation is oriented to logic, but the form of the respective laws will coincide. \subsection{Logical operations and their interpretative problems} The first example of correspondence between ordinary quantitative algebra and abstract logic that Boole shows concerns the multiplication symbol $\times$, whose logical interpretation turns out to correspond to the set-theoretical operation of \emph{class intersection} $\cap$\footnote{The legitimacy of an extensional approach to the treatment of concepts can be vindicated already from the beginning of Boole's \cite{B1854}: \begin{quote} (...) let $x$ represent ``all men'', or the class ``men''.
By a class is usually meant a collection of individuals, to each of which a particular name or description may be applied (p.28) \end{quote} His theory in fact always admits a set-theoretical interpretation and therefore can legitimately be viewed as a theory of classes. This does not mean that Boole really speaks the language of a contemporary set theorist. In this respect, the specific case of the ``class intersection'' operator even constitutes a particularly delicate case (see the following footnotes 5 and 6). It is nevertheless clear that we can immediately translate all the conceptual operations that he describes into the vocabulary of modern set theory, as we do in this case. We will adopt this approach throughout the paper, since we are mainly interested in investigating the value of Boole's system from a modern perspective.}: \begin{quote} Let it (...) be agreed, that by the combination $xy$ shall be represented that class of things to which the names or descriptions represented by $x$ and $y$ are simultaneously applicable.\footnote{As is customary in algebra, Boole always omits to write explicitly the occurrence of the symbol $\times$.}(p.28) \end{quote} Boole claims that logical multiplication satisfies, \emph{formally}, the same \emph{law of commutativity} that holds of algebraic multiplication, or, more exactly, he affirms that the two laws of commutativity holding in the two different domains can be expressed syntactically in the same way, as $xy=yx$: \begin{quote} In the case of $x$ representing white things, and $y$ sheep, either of the members of this equation will represent the class of ``white sheep''. There may be a difference as to the order in which the conception is formed, but there is none as to the individual things which are comprehended under it. In like manner, if $x$ represent ``estuaries'', and $y$ ``rivers'', the expressions $xy$ and $yx$ will indifferently represent ``rivers that are estuaries'', or ``estuaries that are rivers'', the combination in this case being in ordinary language that of two substantives, instead of that of a substantive and an adjective as in the previous instance. Let there be a third symbol, as $z$, representing the class of things to which the term ``navigable'' is applicable, and any one of the following expressions, \begin{center} $zxy,zyx,xyz,$ etc. \end{center} will represent the class of ``navigable rivers that are estuaries''. (p.29) \end{quote} Consistently with this, Boole observes that this law ``may be characterized by saying that the literal symbols $x,y,z$ are commutative, like the symbols of Algebra'' (p.31)\footnote{Boole is so used to the customary omission of the product symbol in current algebraic notation that he apparently recognizes here no operation at all! He indeed ascribes the commutativity property not to the logical operator in itself, but to the variables alone! Actually, although there might really be some ambiguity in this context, he refers explicitly to the \emph{logical process of combination} as compared to the \emph{arithmetical process of multiplication} (p.31), as we have seen.
Therefore, at least the existence of two analogous ``processes'' in the two respective fields is assumed; and if the arithmetical product is an operator, the same applies then to the logical combination.} \subsubsection{Sum and subtraction as partial operations} The quoted paragraph above is paradigmatic of Boole's peculiar presentation of his logical system: he does not introduce a ``static'' axiomatic calculus built on basic statements conceived as eternal \emph{a priori} truths. He rather investigates the rules of natural language and of the psychological construction of concepts almost as an \emph{experimental scientist}. The formal basic equations of logic are presented in the context of a colloquial and informal discussion which is mostly aimed at \emph{persuading} the reader of the plausibility of the results, and perhaps also at revealing the experience that led Boole to their discovery. For this reason, the main evidence for the fundamental logical laws that he formulates consists in concrete linguistic examples so obvious that nobody could raise any objection to their universal validity. Hence, not surprisingly, the method applied to prove the commutativity of the product is directly extended to the sum: \begin{quote} We are not only capable of entertaining the conceptions of objects, [...] but also of forming the aggregate conceptions of a group of objects consisting of partial groups, each of which is separately named or described. For this purpose we use the conjunctions ``and'', ``or'', etc. ``Trees and minerals,'' ``barren mountains, and fertile vales,'' are examples of this kind. In strictness, the words ``and'', ``or'', interposed between the terms descriptive of two or more classes of objects, imply that those classes are quite distinct, so that no member of one is found in another. In this and in all other respects the words ``and'' ``or'' are analogous with the sign $+$ in algebra, and their laws are identical. Thus the expression ``men and women'' is [...] equivalent with the expression ``women and men''. Let $x$ represent ``men'', $y$, ``women''; and let $+$ stand for ``\emph{and}'' or ``\emph{or}'', then we have $$x+y=y+x,$$ an equation which would equally hold true if $x$ and $y$ represented \emph{numbers}, and $+$ were the sign of arithmetical sum. (pp.32-33) \end{quote} One should not be confused by the use of ``and'' in this context: as is clear from the explanation, the operation in question is not class intersection, but rather the \emph{union} of (disjoint) classes. This is in fact consistent with an idiomatic use of the conjunction ``and'' in our ordinary language, aimed at joining, although incorrectly, \emph{disjoint} notions. We will call this particular set-theoretical form of union ``\emph{disjoint union}''. As is well known, the preference accorded by Boole to the disjunctive interpretation of the union instead of its \emph{inclusive} version (joining sets that may overlap) has met with lively criticism in the literature since Jevons' work (see \cite{J74}, pp.70--71). Even the founder of contemporary mathematical logic, G. Frege, defined Boole's choice a ``retrograde step away from Leibniz''.\footnote{Frege's \emph{Posthumous Writings}: ``On one point indeed Boole has taken a retrograde step away from Leibniz, in adding to the leibnizian meaning of $A + B$ the condition that the classes $A$ and $B$ should have no common element. W. Stanley Jevons, E.
Schr\"oder and others have quite rightly not followed him in this.''( pag 10)} In this paper we rather support the opposite view: arguments defending this and {others} of Boole's original {proposals} will be provided and discussed. It is true that in many circumstances his treatment of logic does not exactly coincide with the expectations of present day logicians, but this does not always entail a defect in its {pristine} design. We believe indeed that Boole should be regarded as ``something more'' than a \emph{clumsy} pioneer of the system {now called} ``Boolean algebra''. He introduced a \emph{prolific} discipline whose goals are often different from those of our modern conception of logic, but with an independent value in themselves. This holds for instance, in our opinion, of the disjoint union. We will not deny that this operation {apparently} carries a very unwelcome disadvantage: the occurrence of terms lacking of interpretations for some variable interpretations. An immediate example is ``$x+y$'', which is semantically undefined unless $x$ and $y$ denote disjoint classes. Nevertheless, there are several arguments in favour of Boole's choice. Firstly, one could observe that symbolic logic is nowadays a subject that has gained a mature status of independence from traditional mathematics, whereas Boole's aim was that of showing that the connections linking logic and algebra were as close as they could be. In this particular respect, it is to be noted {in the first place} that the cardinality of a \emph{finite} set $A\cup B$ is equal to the sum of the cardinalities of $A$ and $B$ \emph{only if} $A$ and $B$ are disjoint sets. Another fundamental similarity with ordinary algebra justifying the use of disjoint union can be found in its connection with its dual operator, introduced by Boole himself: the logical subtraction. The latter constitutes in fact a sort of ``inclusive subtraction'', where the removed set must be completely included in the set from which it is substracted: the term $x-y$ will denote in fact the result of removing the class $y$ from the class $x$, under the assumption that $y$ is a subset of $x$. Again, the term ``$x-y$'' will be undefined unless this condition is fulfilled: \begin{quote} (...) we cannot conceive it possible to collect parts into a whole, and not conceive it also possible to separate a part from a whole. This operation we express in common language by the sign \emph{except}, as, ``All men \emph{except} Asiatics,'' ``All states \emph{except} those which are monarchical.'' Here it is implied that the things excepted form a part of the things from which they are excepted. As we have expressed the operation of aggregation by the sign $+$, so we may express the negative operation above described by $-$ minus. Thus if $x$ be taken to represent men, and $y$, Asiatics, i. e. Asiatic men, then the conception of ``All men except Asiatics'' will be expressed by $x-y$. (pp.33-34) \end{quote} Still, these semantic restrictions make the logical subtraction be the formal inverse of the logical sum, like in integer algebra. 
Firstly, Boole observes that the transformation of $x=y+z$ into $x-z=y$ is valid: \begin{quote} Let us take the Proposition, ``The stars are the suns and the planets'', and let us represent stars by $x$, suns by $y$, and planets by $z$; we have then $$x=y+z.$$ Now if it be true that the stars are the suns and the planets, it will follow that the stars, except the planets, are suns\footnote{Actually, as is clear from the context, Boole means not only that the remainder consists in \emph{some} suns, but, rather, in \emph{exactly all} suns.}. This would give the equation $$x-z=y,$$ (...) Thus a term $z$ has been removed from one side of an equation to the other by changing its sign. This is in accordance with the algebraic rule of transposition. (pp.35--36) \end{quote} The inference from $x=y+z$ to $x-z=y$ is valid in virtue of the disjointness of $y$ and $z$, as the example just mentioned shows (and Euler diagrams prove universally). In contrast, if $y$ and $z$ were allowed to overlap, the validity of this inference would no longer be deducible: by removing from $x$ all the elements which are also in $z$, it might be the case that only some, but \emph{not all}, members of $y$ are preserved. Hence, the restriction imposed on the logical sum justifies the given inference. Secondly, let us consider the opposite direction, from $x - z = y$ to $x = y + z$. Assume $x - z = y$; if the class $z$ is entirely included in $x$, as Boole requires, then by re-adding $z$ to the result of the previous subtraction, $y$, we indeed come back to $x$ (Euler--Venn diagrams can be used for a clarification). But what would happen in modern set theory? The definition of set subtraction used in this theory, as well as that of set union, avoids the occurrence of potentially vacuous terms: the set $A\setminus B$ is in fact defined as $A\cap \overline{B}$, where $\overline{B}$ is the complementary set of $B$, hence it always has a denotation for every interpretation of $A$ and $B$. In particular, $B$ does not need to be contained in $A$. Well, for this notion of subtraction, the inverse of Boole's inference would no longer be justified: since $z$ may overflow extensionally outside $x$, the result of adding $z$ to any given set, in particular to $y$ (i.e. $x-z$), may possibly exceed $x$. The restriction imposed on the subtraction is then necessary to guarantee the inverse of Boole's inference. In any case, the operator $\setminus$ coincides with Boole's subtraction under the same semantical restriction, that is, when $B\subseteq A$. \\ A further argument supporting Boole's conception of subtraction can be inspired by elementary arithmetic: within the realm of natural numbers, $x-y$ has in fact a denotation only when $y$ is a quantity entirely ``contained'' in the quantity $x$. \\ \\ \subsubsection{Commutativity and associativity laws: a rift in the correspondence} \begin{quote} As it is indifferent for all the \emph{essential} purposes of reasoning whether we express excepted cases first or last in the order of speech, it is also indifferent in what order we write any series of terms, some of which are affected by the sign $-$. Thus we have, as in common algebra, $x-y=-y+x$. (p.34) \end{quote} In ordinary algebra the term $x-y$ can be transformed into $-y+x$ by applying the commutative law, but the validity of the equation $x-y=-y+x$ requires the introduction of negative numbers. We read in fact $-y+x$ as the sum of $-y$ with $+x$.
We then apply the commutativity law for the \emph{sum} (and conventional syntactical abbreviations, such as that reducing, say, $(+a)+(-b)$ to $(a-b)$) to get $x-y=(+x)+(-y)=(-y)+(+x)=-y+x$. Without the introduction of negative numbers and terms, the term $-y+x$ would simply be syntactically wrong, hence meaningless. This fact is completely overlooked by Boole, who does not explain what a ``negative concept'' might be\footnote{The most intuitive solution, that is, interpreting negative concepts as ``complementary'' classes, is not acceptable: Boole will define the complement of a class in a different manner, as we shall see later. For the moment, we only point out once again that the complementation operator is subject to no constraint, whereas a very strong constraint regulates the use of the subtraction sign, hence they must differ substantially.}, although, as we have seen, he sometimes explicitly refers to the \emph{signs} of the terms: ``a term $z$ has been removed from one side of an equation to the other by changing its sign'' (p.36), or ``any series of terms, some of which are effected by the sign $-$'' (p.34). To try and solve the dilemma, one may introduce, in virtue of a mere \emph{convention}, the term $-y+x$ as a syntactically well constructed expression denoting the same class as $x-y$. But this simple solution would hardly be applicable to the general case of ``any series of terms, some of which are effected by the sign $-$'' (p.34). For example: what could be said about the term $x+y-z$? Would it have the same denotation as $x-z+y$? Actually, the term $x+y-z$ looks syntactically quite ambiguous: is the class $z$ removed from the class $x+y$ or from the class $y$ only? This ambiguity disappears in ordinary algebra, where the two terms $x+(y-z)$ and $(x+y)-z$ have the same denotation. But what about Boole's logical calculus? A little detour concerning the associative laws is in order, laws that Boole, to the best of our knowledge, never takes into consideration. Actually, the applications of the associative laws in the absence of occurrences of the subtraction symbol look decidedly unproblematic in the logical calculus, too. In fact $(x+y)+z=x+(y+z)$ is valid (for all suitable variable interpretations) as well as $(xy)z=x(yz)$ (always). The discrepancy between the two calculi really arises when negative terms appear, as with the case $x+y-z$ we started with. The equation $(x+y)-z=x+(y-z)$ is not universally valid in Boole's calculus: it fails in fact for those interpretations for which $z$ is a subset of $x+y$ but not of $y$ alone. Consequently, the search for a solution of our question whether \begin{align}\label{star} x+y-z = x-z+y \end{align} \noindent requires at least the use of explicit brackets. \\ (a) Let us start with the reading of the first term as $x+(y-z)$. In this case $z$ is supposed to be a subset of $y$. Is there a reading of $x-z+y$ to justify (\ref{star})? The reading of $x-z+y$ as $(x-z)+y$ would in general not be allowed, since $z$ is not necessarily a subset of $x$. On the other hand, the reading as $x+(-z+y)$, even if potentially correct on the basis of the convention that $-z+y = y-z$, adds a new occurrence of $+$ which is not contained in the original term $x-z+y$, and moreover automatically reads $-z$ and $y$ as paired. This is admissible in algebra, but in the context of Boole's calculus it probably requires some \emph{ad hoc} syntactical convention. (b) Consider now the reading of the first term as $(x+y)-z$.
Is there a reading of $x-z+y$ to justify (\ref{star})? A variable interpretation according to which $z$ is a subset of $x+y$ but not of either $x$ or $y$ would be compatible with the given reading of the first term, but with none of the second, as neither $(x-z)+y$ nor $x+(-z+y)$ would be acceptable. One could then try and fix precise laws for an obligatory explicit use of brackets, but this solution would make Boole's calculus quite clumsy to handle. Boole ignores this and similar problematic aspects in the presentation of his system and instead concentrates, strategically, on those where a perfect correspondence between the two calculi can be exhibited, as in the case of the distributivity laws: \begin{quote} Let $x$ represent ``men'', $y$, ``women'' (...) Let the symbol $z$ stand for the adjective ``European'', then since it is, in effect, the same thing to say ``European men and women'', as to say ``European men and European women'', we have $$z(x+y)=zx+zy.$$ And this equation also would be equally true were $x$, $y$ and $z$ symbols of number (...) (p.33) \end{quote} \begin{quote} Still representing by $x$ the class ``men'', and by $y$ ``Asiatics'', let $z$ represent the adjective ``white''. Now to apply the adjective ``white'' to the collection of men expressed by the phrase ``Men except Asiatics'' is the same as to say, ``White men, except white Asiatics''. Hence we have $$z(x-y)=zx-zy.$$ This is also in accordance with the laws of ordinary algebra (p.34) \end{quote} \section{The duality law and Boole's pseudo-binary calculus} Boole's pivotal move is the recognition of a crucial difference between the two calculi, a difference that can be formalized in the \emph{law of duality}: $$xx = x.$$ An explanation of this law is immediate: intersecting a set $x$ with itself results in the set $x$ itself. Hence, the product among classes is idempotent, while, as we know, the product among numbers is not. As a most striking discrepancy between the two domains,\footnote{See also the equation (\ref{odot}) as far as the division operator is concerned.} this property will characterize very effectively the notion of class in Boole's system: if anything is a class, then the law of duality will hold for it.\footnote{Boole apparently finds another break of the symmetry when he observes the impossibility of inferring, in his system, the truth of $x=y$ from that of $zx=zy$. Such an inference cannot indeed hold universally for all interpretations of $x$, $y$ and $z$ as classes. But after a little hesitation he is able to re-establish the desired analogy through the following remark: this inference is not universally valid in algebra either, although at a first superficial sight it might look to be so; it is in fact not acceptable for a vanishing $z$ (pp.36--37).}\label{impdiv} But what follows now is a revolutionary idea with an enormous long-lasting influence on the future development of logic and computer science.\footnote{Although Boole was in this respect anticipated by Leibniz.} It is true that the law of duality does not hold, in general, over the whole domain of quantities, but, as he himself immediately points out, it can at least hold in some little fragment of it: more precisely on the sub-domain $\{0,1\}$!
In this way, Boole is entitled to give a new life to his beloved correspondence: the match between the logical system and the ordinary algebraic calculus can be re-established \emph{provided that} the interpretation of the numerical variables is limited to $\{0,1\}$: \begin{quote} We have seen (...) that the symbols of Logic are subject to the special law, $$x^2=x.$$ Now of the symbols of Number there are but two, viz. 0 and 1, which are subject to the same formal law. We know that $0^2=0$, and that $1^2=1$; and the equation $x^2=x$, considered as algebraic, has no other roots than 0 and 1. Hence, instead of determining the measure of formal agreement of the symbols of Logic with those of Number generally, it is more immediately suggested to us to compare them with the symbols of quantity \emph{admitting only of the value 0 and 1}. Let us conceive, then, of an Algebra in which the symbols $x,y,z$, etc. admit indifferently of the values 0 and 1, and of these values alone. The laws, the axioms, and the processes, of such an Algebra will be identical in their whole extent with the laws, the axioms, and the processes of an Algebra of Logic. Difference of interpretation will alone divide them. Upon this principle the method of the following work is established. (pp.37--38) \end{quote} This passage is very important for two main reasons. (a) Boole introduces a new calculus, a sort of binary calculus that we will call ``\emph{pseudo-binary}''. It is true that variables can assume only the values 0 and 1, but this does not automatically extend to all terms. For instance, the term $x+y$ will assume the value 2 when both variables $x$ and $y$ are assigned the value 1. In this respect, Boole's logic (with operators $\times,+,-$)\footnote{In Section \ref{division} we will discuss the extension of the language to the division operator.} deals with functions $f:\{0,1\}^n\to\mathbb{Z}$, rather than with functions $f:\{0,1\}^n\to\{0,1\}$. (b) The last statement sets the method that Boole will use throughout his book: the validity of any given logical law or logically correct inference must be provable by a sequence of calculations in the new algebraic calculus, obtained from the usual one by adding instances of the duality law \emph{for all the variables}. The origin of our modern conception of Boolean algebra as a logical binary calculus is to be found exactly in this strategy pursued by Boole!
Still, this fact should not induce the reader to underestimate some irreducible differences between Boole's original view and our modern treatment of what is nowadays called Boolean algebra: (i) Boole's pseudo-binary calculus has a quantitative interpretation, whereas modern Boolean algebra is essentially qualitative and sees 0 and 1 not as numbers but rather as truth values (they can in fact be replaced by ``\emph{true}'' and ``\emph{false}''); logical operators are then free from formal connections with quantitative algebraic operations;\footnote{Although quantitative interpretations of Boolean operators are still possible, as $\hat\sigma(A\wedge B):= \min\{\sigma(A),\sigma(B)\}$, $\hat\sigma(A \vee B):=\max\{\sigma(A),\sigma(B)\}$, $\hat\sigma(\neg A):=1-\sigma(A)$, for $\sigma(A),\sigma(B)\in\{0,1\}\subseteq\mathbb{N}$.} (ii) it is well known that modern Boolean algebra has both a propositional and a set theoretical interpretation; Boole himself foresees the connections between the two fields, but he conceives this relation in a completely different way with respect to our current view, see Section 3.2. We will soon be compelled to admit that Boole's quantitative pseudo-binary calculus works really well, after all. A first motivation for this consists in the possibility of reducing it to a system of equations of the ordinary algebraic calculus. More precisely, the assumption that all variables contained in a given equation $t=s$, say $x_1,...,x_n$, can be assigned only the values 0 and 1 can be expressed by the following system of equations of ordinary algebra: $$x_1^2=x_1$$ $$...$$ $$x_n^2=x_n$$ $$t=s.$$ Of course, the only possible solutions of this system assign to $x_1,...,x_n$ values in $\{0,1\}$. This is therefore a basic step to reduce Boole's pseudo-binary calculus to the ordinary algebraic one. Although Boole never mentions it explicitly, this procedure is suggested by his own words ``the equation $x^2=x$, considered as algebraic, has no other roots than 0 and 1''. (p.37)\\ The double interpretation (quantitative and set-theoretical) proposed by Boole for his calculus does not concern only the binary operations, but also the constant symbols 0 and 1. He lets 0 denote the \emph{empty set}, and 1 the \emph{universal class}. Such interpretations are not merely conventional: there are cogent motivations for them. In particular, an immediate set-theoretical reading of the quantitatively valid equation $1\cdot x=x$ is that the intersection of the whole with any class $x$ results in the class $x$ itself. Besides the possible ontological justifications of interpreting 0 as the empty class (``no quantity=nothing=\emph{no thing}''), the equations $0\cdot x=0$ or $x+0=x$ will provide analogous meaningful set-theoretical motivations. Since 1 is the \emph{whole}, $1-x$ will denote the complementary class of $x$. The law of duality can now be re-written, via the distributivity laws, as $$x(1-x)=0.$$ This equivalent formulation expresses another fundamental property characterizing the notion of class: the intersection of a set with its complement results in the empty set. The law of duality must hold for every term denoting a class: it holds for variables \emph{in virtue of an assumption}, it holds for 0 and 1 \emph{numerically}. It holds as well for the complement of every class, as the next easy argument shows: let $t$ denote a class; then $t^2 = t$ by induction hypothesis, hence $(1-t)^2=(1-t)(1-t)=1-2t+t^2=1-2t+t=1-t$.
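The reduction just described is easy to mechanize. The following sketch (again our own illustration; the names are ours) checks the validity of an equation $t=s$ in the pseudo-binary calculus by brute force, i.e. by solving the above system: both sides are evaluated, as ordinary integer-valued polynomials, over all assignments of 0 and 1 to the variables.

\begin{verbatim}
# A sketch (our illustration): validity in the pseudo-binary calculus.
# Terms are Python functions from integer variables to integers; an
# equation t = s is valid iff it holds under every 0/1 assignment,
# which is exactly what the system x_i^2 = x_i, t = s expresses.
from itertools import product

def boole_valid(t, s, n):
    """True iff t(v) == s(v) for every v in {0,1}^n."""
    return all(t(*v) == s(*v) for v in product((0, 1), repeat=n))

# The law of duality for products: (xy)^2 = xy.
print(boole_valid(lambda x, y: (x*y)**2, lambda x, y: x*y, 2))  # True
# x + y = x is of course not valid:
print(boole_valid(lambda x, y: x + y, lambda x, y: x, 2))       # False
# The complement argument above: (1-x)^2 = 1-x.
print(boole_valid(lambda x: (1-x)**2, lambda x: 1-x, 1))        # True
\end{verbatim}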
Moreover, all terms denoting set intersections of classes, say $t_1 \cdot ... \cdot t_n$, satisfy the duality law: $(t_1 \cdot t_2 \cdot ... \cdot t_n)^2=t_1\cdot t_2 \cdot ... \cdot t_n\cdot t_1\cdot t_2 \cdot ... \cdot t_n=t_1^2 \cdot t_2^2 \cdot ... \cdot t_n^2=t_1\cdot t_2\cdot ... \cdot t_n$. Consequently, the terms that Boole calls ``constituents'' satisfy the same law: given $n$ variables $v_1,...,v_n$, a \emph{constituent} is nothing but a product whose $i$-th factor is either $v_i$ or $(1-v_i)$. For instance, all constituents expressible through $x$ and $y$ are $xy$, $x(1-y)$, $(1-x)y$, $(1-x)(1-y)$. More generally, every term $t$ representing a function $f:\{0,1\}^n\to\{0,1\}$ does denote a class, since $f(x_1,...,x_n)^2=f(x_1,...,x_n)$ holds necessarily. In other words, every expression that can assume only the values 0 and 1 for every possible interpretation $\sigma$ of the variables over $\{0,1\}$ will denote a class.\footnote{This does not mean that for any given term $t$ denoting a class each of its subterms will do the same. For instance $(1+1)-(1+1)$ satisfies the duality law (and can be viewed as a name for the empty set), nevertheless its subterm $1+1$ does not. Of course, one may ask for an \emph{inductive} definition of the notion of \emph{class term}, in which case $(1+1)-(1+1)$ would no longer be admissible. But this requirement, which is not present in Boole's work, would drastically restrict the extension of the concept. A further example: $(x+x)-x$. This is essentially only a redundant re-writing of the basic term $x$, which represents a class for both binary interpretations of the involved variable $x$. Nevertheless, the subterm $x+x$ satisfies the duality law only for the interpretation $\sigma(x)=0$. Our definition of class term as any term satisfying the duality law is more tolerant than the inductive one, and can be very naturally characterized in the following way. We call a term $t'$ \emph{primitive} if it can no longer be algebraically simplified. When a term $t$ satisfies the duality law (possibly with respect to a variable interpretation) and is reducible to the primitive term $t'$, then $t$ has the same denotation as $t'$ in virtue of the equation $t=t'$ (this convention is of course well defined in virtue of the laws of ordinary algebra, which imply the uniqueness of the primitive form $t'$ of $t$). Notice that all subterms in $t'$ will satisfy the duality law (possibly with respect to the considered variable interpretation). This approach looks very natural, but it will require some care in the case of the division operator.} On the other hand, there exist terms whose values lie in $\{0,1\}$ only for some binary interpretations $\sigma$ of their variables. This is, as we know, the case of $x+y$; but this is not a problem, since the pseudo-binary calculus is supposed to go hand in hand with the set theoretical interpretation: according to both, $x+y$ will indeed denote a class not always but only \emph{sometimes}. Even more remarkably, the correspondence between the two interpretations looks even stronger after a more careful inspection. On the one hand, $x+y$ does not satisfy the law of duality exactly for $\sigma(x)=\sigma(y)=1$, with 0 and 1 seen as numbers. Let us now have a deeper look at the set theoretical interpretation. The term $x+y$ denotes a set only when $x$ and $y$ are disjoint classes. The universal class is not disjoint from itself, whereas the empty set is disjoint from everything (even from itself!).
Therefore, the sum of 1 with itself is not acceptable even set theoretically, while any sum involving the empty set as one of its addenda will be automatically admissible.\footnote{In the example just discussed, we find another justification for Boole's use of disjoint union: overlapping sets are not acceptable, just like $1+1=2\notin\{0,1\}$!} This is a very appreciable property of Boole's calculus: whatever can be said for arbitrary classes is reducible to what can be said for the empty and the universal sets alone! Some explanation for this apparently bizarre ``set theoretical collapse'' is needed.\footnote{In the contemporary algebra of logic this fact is a very well known consequence of Stone's Representation Theorem. Still we want to give an intuitive motivation in terms of ordinary propositional calculus.} We know that in modern propositional calculus every formula admits of a double interpretation. According to the first option, variables are seen as \emph{arbitrary} propositions, and the operation symbols $\wedge$, $\vee$, $\neg$ as propositional connectives. In the second option, the variables are seen as \emph{arbitrary} sets and the operation symbols are interpreted as the set operators ``union'' ($\cup$), ``intersection'' ($\cap$) and ``complementation'' ($\overline{\cdot}$). Nevertheless, one could maintain that the first interpretation is somehow \emph{vacuously redundant}. In fact, via the usual binary truth value semantics, a propositional formula is reduced \emph{in reality} to a binary function over truth values. Hence, one could \emph{get rid} of all that huge world of arbitrary propositions and \emph{keep only} a single tautology $\top$ and a single contradiction $\bot$. That is, the restriction of all possible variable interpretations to the domain $\{\top,\bot\}$ suffices to establish whether a given formula is a tautology, a contradiction, whether a formula is a logical consequence of others, and so on. For what concerns this paper, the following remark is of crucial importance: ordinary propositional logic and set theory define the same class of valid and invalid equations (where ``equation'' in the first case must of course be intended as ``semantic equivalence''\footnote{Every formula can obviously be reduced to a semantic equivalence via logical constants: for example, $A\vee\neg A$ can be read as $A\vee\neg A\equiv\top$, which corresponds set theoretically to $A\cup\overline{A}=U$, for $U$ the universal set.}). Therefore, to prove the validity of the equation $A\cap (B\cup C)=(A\cap B)\cup(A\cap C)$, one can convert it into the propositional equivalence $A\wedge (B\vee C)\equiv(A\wedge B)\vee(A\wedge C)$, and then check its validity only for those variable interpretations that assign to $A,B,C$ the statements $\top,\bot$. Correspondingly, the interpretation of $A,B,C$ as the universal set or the empty set must suffice to test the validity of $A\cap (B\cup C)=(A\cap B)\cup(A\cap C)$. We can interpret Boole's operations as connectives whose meaning is fixed by the truth tables given below. For the connective $\times$ nothing new. Its binary interpretation coincides with the ordinary multiplication over $\{0,1\}$, and hence with the truth table of the propositional conjunction $\wedge$: \begin{tabular}{p{0cm}p{0cm}p{0,5cm}p{1cm}} $A$ & $B$ & & $A\times B$\\ 1 & 1 & & 1\\ 1 & 0 & & 0\\ 0 & 1 & & 0\\ 0 & 0 & & 0\\ \end{tabular} This goes on a par with the fact that this operator corresponds to the ordinary set theoretical intersection $\cap$.
As to the operator $+$, its binary truth-table can only be \emph{partial}, and this goes on a par with the fact that this operator corresponds to the disjoint union: \begin{tabular}{p{0cm}p{0cm}p{0,5cm}p{2cm}} $A$ & $B$ & & $A+B$\\ 1 & 1 & & {\small not allowed}\\ 1 & 0 & & 1\\ 0 & 1 & & 1\\ 0 & 0 & & 0\\ \end{tabular} First, observe that the arithmetically non allowed case reflects, set theoretically, the unique case in which the corresponding sets are not disjoint, and hence their union cannot be formed. Second, the table expresses a \emph{partial} truth function which coincides, over its partial domain, with the truth table of \emph{aut}, as well as of \emph{vel}. And here is a curious fact. Although it has often been stated that the propositional interpretation of Boole's sum is the exclusive disjunction,\footnote{See for instance \cite{S82} p.23, \cite{H91} p.12, \cite{B97} p.175.} over its domain of definability it can be read \emph{indifferently} in both manners, as one prefers!\footnote{The acceptability of the inclusive interpretation has been pointed out already by J. Corcoran (\cite{C86} p.71) and C. Badesa (\cite{B04} p.3), who is even inclined to recognize it as the one really underlying Boole's operator. Actually, both versions are correct.} As to the operator $-$, its binary truth-table can again only be \emph{partial}, as well as its set theoretical interpretation as inclusive set subtraction: \begin{tabular}{p{0cm}p{0cm}p{0,5cm}p{2cm}} $A$ & $B$ & & $A-B$\\ 1 & 1 & & $0$\\ 1 & 0 & & 1\\ 0 & 1 & & {\small not allowed}\\ 0 & 0 & & 0\\ \end{tabular} Observe that the arithmetically non allowed case corresponds, set theoretically, to the unique case in which the set to be removed is not contained in the other. \\ The truth table technique shows perfectly well that the empty and the universal set suffice for all semantic purposes, as they can simulate the behaviour of \emph{true} and \emph{false} with respect to the propositional interpretations of $\times,+,-$ and they are even able to distinguish between allowed and non allowed cases. This is enough for Boole's ``set theoretical collapse''. \subsection{The method of the developments} Given an arbitrary functional term\footnote{Coherently with Boole's notation, we will not introduce a notational distinction between functions and functional symbols. The difference will be clear from the context.} defined in the $\{+,\times,-,1,0\}$-language, what class does it represent? The method of the \emph{developments} transforms each functional term in $n$ variables into a term in \emph{canonical form}, i.e. a term expressed by the sum of $2^n$ products, each of which is composed of a numerical value (a \emph{coefficient}) and a \emph{constituent}. Here is the development of $f(x_1, \dots, x_n)$: \begin{align}\label{nabla} f(x_1, \dots, x_n)=\sum_{1 \leq i \leq 2^n}f( \sigma^i(x_1) , \dots, \sigma^i(x_n)) \cdot s_{i_1} \cdot \, \, ... \cdot \, s_{i_n} \end{align} for all $2^n$ possible binary functions $\sigma^i : \{ x_1, \dots, x_n\} \to \{0, 1\}$ and $s_{i_k}, 1 \leq k \leq n,$ where: \\ \\ $s_{i_k}= \left\{ \begin{array}{ll} x_k & \quad \textrm{if} \quad \sigma^i(x_k)=1\\ 1 -x_k & \quad \textrm{if} \quad \sigma^i(x_k)=0 \end{array} \right. $.
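Before turning to a worked symbolic example, here is a small computational companion to the formula above (our own illustration; the names are ours): given a term as an integer-valued Python function, it computes the $2^n$ coefficients $f(\sigma^i(x_1),\dots,\sigma^i(x_n))$ and evaluates the resulting canonical form, which agrees with $f$ on every binary assignment.

\begin{verbatim}
# A sketch (our illustration): Boole's development of a term f in n
# variables. The coefficient of each constituent is f evaluated at the
# corresponding 0/1 tuple; s_i is x_k when sigma(x_k)=1, 1-x_k otherwise.
from itertools import product

def development(f, n):
    """Return the list of (coefficient, sigma) pairs of f's development."""
    return [(f(*sigma), sigma) for sigma in product((0, 1), repeat=n)]

def eval_development(dev, point):
    """Evaluate the canonical form at a 0/1 point."""
    total = 0
    for coeff, sigma in dev:
        constituent = 1
        for s, x in zip(sigma, point):
            constituent *= x if s == 1 else 1 - x
        total += coeff * constituent
    return total

f = lambda x, y: x + y                     # a pseudo-binary term
dev = development(f, 2)                    # coefficients 0, 1, 1, 2
assert all(eval_development(dev, p) == f(*p)
           for p in product((0, 1), repeat=2))
\end{verbatim}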
\\ \\ An example: consider $f(x_1, x_2)$; then $$f(x_1, x_2)=\sum_{1 \leq i \leq 4}f(\sigma^i(x_1), \sigma^i(x_2)) \cdot s_{i_1} \cdot s_{i_2} =$$ $$ f( 1,1 ) \cdot x_1\cdot x_2 \, + \, f(1,0) \cdot x_1\cdot (1 - x_2) \, + \, f(0,1) \cdot (1 - x_1)\cdot x_2 \, + \, f(0,0) \cdot (1 - x_1) \cdot (1 - x_2). $$ \\ We have to show that the equality (\ref{nabla}) is valid, i.e., however we interpret the variables $x_1, \dots ,x_n$, that equality is true. This amounts to saying that for every interpretation $\tau$ of the variables $x_1, \dots, x_n$ into $\{0,1\}$, \begin{align}\label{nabla2} f(\tau(x_1), \dots ,\tau(x_n))=\sum_{1 \leq i \leq 2^n}f( \sigma^i(x_1), \dots , \sigma^i(x_n)) \cdot \hat\tau(s_{i_1}) \cdot \, \, ... \cdot \, \hat\tau(s_{i_n}) \end{align} where $\hat\tau(x_k)=\tau(x_k)$, and $\hat\tau(1-x_k) = 1 - \tau(x_k)$. Notice that the $n$-tuple $(\sigma^i(x_1), \dots , \sigma^i(x_n))$ is an $n$-tuple of 0 and 1, so the function $\tau$ plays no role in $f( \sigma^i(x_1), \dots , \sigma^i(x_n))$. The function $\tau$ is necessarily equal to a $\sigma^j$ for exactly one $j$, $1 \leq j \leq 2^n$, so \\ $f(\tau(x_1), \dots ,\tau(x_n)) = f(\sigma^j(x_1), \dots ,\sigma^j(x_n)) $. Moreover we show that for that $j$: $\hat\tau(s_{j_1}) \cdot \, \,... \, \cdot \, \hat\tau(s_{j_n})$ is equal to 1. In fact for every $k$, $ 1 \leq k \leq n$, either $s_{j_k} = x_k $ or $s_{j_k} = 1 - x_k $. In the first case $\sigma^j(x_k) = 1$, therefore $\hat\tau (s_{j_k} ) =1$; in the second case, $ \sigma^j(x_k) = 0$, therefore $\hat\tau (s_{j_k} ) = 1 - \tau(x_k)=1 - \sigma^j(x_k) =1$. \\ Now we show that for all $h \neq j$, $ 1 \leq h \leq 2^n$, $(\hat\tau(s_{h_1}) \cdot ... \cdot \, \hat\tau(s_{h_n})) = 0$. For, let $(\sigma^h(x_1), \dots , \sigma^h(x_n))\neq (\sigma^j(x_1) , \dots , \sigma^j(x_n))$, say $\sigma^h(x_k) \neq \sigma^j(x_k)$ for some $k$, and consider the constituent $s_{h_1}\cdot ... \cdot \, s_{h_n}$. If $\sigma^h(x_k) =1$, then $s_{h_k} =x_k$ and $\sigma^j(x_k) = 0$ (since $\sigma^h(x_k) \neq \sigma^j(x_k) $), therefore $\hat\tau(s_{h_k})=\tau(x_k)=\sigma^j(x_k) = 0$, and so the whole product $(\hat\tau(s_{h_1}) \cdot ... \cdot \, \hat\tau(s_{h_n}))$ is equal to 0. Analogously, if $\sigma^h(x_k )=0$, then $s_{h_k} = 1-x_k$ and $\sigma^j(x_k) = 1$ (since $\sigma^h(x_k) \neq \sigma^j(x_k) $), therefore $\hat\tau(s_{h_k})=\hat\tau(1-x_k) = 1-\tau(x_k)=1-\sigma^j(x_k) = 0$, and so the whole product $\hat\tau(s_{h_1})\, \cdot ... \cdot \, \hat\tau(s_{h_n})$ is equal to 0 again. In conclusion, for every interpretation $\tau$ of the variables, the equation (\ref{nabla}) reduces to the trivial identity $$f(\tau(x_1),\dots,\tau(x_n))=f(\sigma^j(x_1),\dots,\sigma^j(x_n)) \cdot 1$$ with $\tau(x_1)= \sigma^j(x_1),\dots,\tau(x_n)= \sigma^j(x_n)$. \\ Briefly, the given proof tells us that for any given interpretation $\tau$ the summation in (\ref{nabla2}) reduces to that component whose coefficient is determined by the unique $\sigma^j$ equal to $\tau$. If we use the notation $i_1, \dots, i_n$ instead of $\sigma^i(x_1), \dots , \sigma^i(x_n)$, the equation just proved can be written as \\ \begin{proposition}\label{developments} $$f(x_1, \dots, x_n)=\sum_{i_1, \dots , i_n \in \{0,1\}}f( i_1, \dots , i_n) \cdot s_{i_1} \cdot ... \cdot \, s_{i_n}.$$ \end{proposition} \noindent The proof given by Boole in Chapter V of his book (pp.72--74) is not so general and uniform for all $n$, but we can say that our presentation captures its \emph{hidden} essence. \\ \\
The point here is somewhat delicate. Both Boole's original proof and our generalized version share a \emph{purely algebraic} nature and are indeed formulated within the pseudo-binary interpretation of the calculus. Therefore, one may also ask for an intuitive set theoretical characterization of the development technique for those functional terms $f(x_1,\dots,x_n)$ which allow a set theoretical interpretation\footnote{Also in this case, we will not be pedantic in distinguishing rigorously between terms denoting sets and their corresponding denotations, which alone are properly sets, or between functional symbols and the functions over the realm of sets they denote.}. A similar requirement is in this context even more relevant than in others because of the possible presence of \emph{numerical coefficients} $f(i_1,\dots,i_n)\in\mathbb{Z}$ possibly lying outside $\{0,1\}$, which make a set theoretical interpretation of the whole development particularly problematic. To accomplish this legitimate request, we will first of all prove, \emph{algebraically}, an interesting fact: the development of a term $f(x_1,\dots,x_n)$ \emph{coincides} with the product of that term with the development of 1 (in the same variables). By applying the distributive laws, we will even be able to show that the coincidence holds \emph{addendum by addendum}. The precise meaning of this claim needs a more detailed explanation, and, for the sake of simplicity, we limit our discussion to the case of two variables only (the generalization to any arbitrary number of variables can then be easily deduced). \\ By ordinary algebraic syntactical manipulations it can be shown that \begin{align}\label{box} 1=xy+x(1-y)+(1-x)y+(1-x)(1-y) \end{align} This equation is valid for all possible values of $x$ and $y$, not necessarily in $\{0,1\}$, that is, without any use of the duality law. \\ By the ordinary distributivity laws we then deduce \\ $f(x,y) \,= \, f(x,y)\cdot 1=$ \\ $f(x,y)[xy+x(1-y)+(1-x)y+(1-x)(1-y)]=$ \\ $f(x,y)xy+f(x,y)x(1-y)+f(x,y)(1-x)y+f(x,y)(1-x)(1-y)$. \\ \\ On the other hand, by the method of the developments \\ $f(x,y)=f(1,1)xy+f(1,0)x(1-y)+f(0,1)(1-x)y+f(0,0)(1-x)(1-y)$. \\ \\ Therefore, the two sums \begin{align}\label{clubsuit} f(x,y)xy+f(x,y)x(1-y)+f(x,y)(1-x)y+f(x,y)(1-x)(1-y) \end{align} and \begin{align}\label{clubsuit_clubsuit} f(1,1)xy+f(1,0)x(1-y)+f(0,1)(1-x)y+f(0,0)(1-x)(1-y) \end{align} must denote, \emph{numerically}, the same quantity for each \emph{binary} interpretation $\sigma$ of the variables. We will prove something more, i.e., that \emph{each addendum of the first sum can be transformed, by algebraic calculations, into the corresponding addendum of the second sum} (and vice versa). Before proving it, we want to give a set theoretical interpretation of this crucial result. To this purpose, we notice first of all that the right term of (\ref{box}) coincides with the \emph{development} of 1, since for arbitrary $i$ and $j$ we have $f(i,j)=1$ where $f$ is the constant function 1.\footnote{For each $n\geq 1$, the coefficient 1 can be identified with the $n$-ary function $f(x_1, \dots , x_n)$ whose value is constantly 1.} From the set theoretical point of view, the development of the universal class 1 can be interpreted as its decomposition into four disjoint classes, whose disjoint union results indeed in the universal class itself. This fact can be easily checked by ordinary Euler-Venn diagrams. See figure \ref{fig:development_1}.
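Since two polynomial terms in $x$ and $y$ coincide modulo the duality laws $x^2=x$, $y^2=y$ exactly when they agree on every binary point, the addendum-by-addendum claim can also be spot-checked mechanically. The following sketch (our own illustration) compares $f(x,y)\,c_{ij}$ with $f(i,j)\,c_{ij}$ for a sample term:

\begin{verbatim}
# A sketch (our illustration): each addendum f(x,y)c_ij of the method of
# the intersections agrees with the addendum f(i,j)c_ij of the
# development, for every binary interpretation of x and y.
from itertools import product

def constituent(i, j, x, y):
    return (x if i else 1 - x) * (y if j else 1 - y)

f = lambda x, y: x + y                     # sometimes a class
for i, j in product((0, 1), repeat=2):     # the four constituents
    for x, y in product((0, 1), repeat=2): # all binary interpretations
        assert f(x, y) * constituent(i, j, x, y) == \
               f(i, j) * constituent(i, j, x, y)
\end{verbatim}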
As we know, the set theoretical counterpart of the algebraic product is the set intersection. Therefore, each addendum of (\ref{clubsuit}) can be seen as the \emph{intersection of the class $f(x,y)$ with the corresponding region of the universal set} (whenever $f(x,y)$ is a class). The final sum is of course $f(x,y)$, algebraically, but this is perfectly coherent with the set theoretical interpretation. Let $f(x,y)$ express the class $A$; if we relativize all regions of the universal set to $A$ (i.e. we keep only those portions overlapping with $A$), and then we join them, we obtain exactly the set $A$ itself. Again, Euler-Venn diagrams give a visual image of this fact. See figure \ref{fig:shadow}. In virtue of this set theoretical interpretation, we call the method of analyzing $f(x,y)$ as specified by (\ref{clubsuit}) the \emph{method of the intersections} and we prove that \begin{proposition}\label{developments-intersections-equivalence-a} \emph{For functional terms $f(x,y)$ in the $\{+,\times,-,0,1\}$-language, the} method of the developments \emph{and the} method of the intersections \emph{are equivalent.} \end{proposition} This methodological equivalence makes the method of the developments more intuitive in the domain of sets: \emph{calculating the development of a set $f(x,y)$ coincides with re-constructing that set by joining disjoint ``pieces'' of it, each piece being the ``shadow'' of $f(x,y)$ over one of the four parts into which the universal set is standardly decomposed}\footnote{Of course, properly speaking, such four parts change along with the interpretation of $x$ and $y$.}. Now to the proof of our claim. We simplify the notation for constituents: we will write $c_{11}, c_{10}, c_{01}, c_{00} $ for $xy$, $x(1-y)$, $(1-x)y$ and $(1-x)(1-y)$, respectively. Hence, as announced, we prove that $f(x,y)c_{ij}$ and $f(i,j)c_{ij}$ coincide up to algebraic transformations\footnote{This means that they share the same numerical denotations, therefore we show each step of their reciprocal transformation by using the identity symbol rather than some artificial symbol of syntactical reduction: to manipulate the terms, we use nothing but algebraic \emph{equations}.} for $i,j \in \{0,1\}$ ($f(i,j)$ not necessarily in $\{0,1\}$). The proof is then by induction on the syntactical construction of $f$: \begin{itemize} \item $f(x,y):=0$: then $f(x,y)c_{ij}=0=f(i,j)c_{ij}$; \item $f(x,y):=k\neq 0$: then $f(x,y)c_{ij}=kc_{ij}=f(i,j)c_{ij}$; \item $f(x,y)$ is a projection, for instance $f(x,y):=x$. If $c_{ij}$ is of the type $x\hat y$ for $\hat y\in\{y,(1-y)\}$, then it is associated with the coefficient $f(1,j)=1$. Therefore $f(i,j)c_{ij}=c_{ij}$; on the other hand it holds $f(x,y)c_{ij}=xc_{ij}=x^2\hat y= x\hat y=c_{ij}$. If $c_{ij}$ is of the type $(1-x)\hat y$ for $\hat y\in\{y,(1-y)\}$, then it is associated with the coefficient $f(0,j)=0$. Therefore $f(i,j)c_{ij}=0$; on the other hand it holds $f(x,y)c_{ij}=xc_{ij}=x(1-x)\hat y=0\cdot \hat y=0$; \item $f(x,y):=f_1(x,y)\cdot f_2(x,y)$. Of course $f(i,j)=f_1(i,j)\cdot f_2(i,j)$. By IH $f_l(x,y)c_{ij}= f_l(i,j)c_{ij}$ for $l=1,2$. Then, by commutativity of the product and by the fact that $c_{ij}$ satisfies the duality law, $f(x,y)c_{ij}=(f_1(x,y)\cdot f_2(x,y))c_{ij}=(f_1(x,y)\cdot f_2(x,y))c_{ij}^2= f_1(x,y)c_{ij}\cdot f_2(x,y)c_{ij}= f_1(i,j)c_{ij}\cdot f_2(i,j)c_{ij}=(f_1(i,j)\cdot f_2(i,j))c_{ij}^2=(f_1(i,j)\cdot f_2(i,j))c_{ij}= f(i,j)c_{ij}$; \item $f(x,y):=f_1(x,y)\diamond f_2(x,y)$ for $\diamond\in\{+,-\}$.
Of course $f(i,j)=f_1(i,j)\diamond f_2(i,j)$. By IH $f_l(x,y)c_{ij}= f_l(i,j)c_{ij}$ for $l=1,2$. Then, by the distributivity laws, $f(x,y)c_{ij}=(f_1(x,y)\diamond f_2(x,y))c_{ij}= f_1(x,y)c_{ij}\diamond f_2(x,y)c_{ij}= f_1(i,j)c_{ij}\diamond f_2(i,j)c_{ij}= (f_1(i,j)\diamond f_2(i,j))c_{ij}=f(i,j)c_{ij}$. \end{itemize} The coincidence between the two methods is thus proved for the general case of generic functions that are not necessarily logical. When non logical functions are involved (for instance $f(x,y):=5$, $f(x,y):=-x-y$ or $f(x,y):=x+y+2$), products will be seen as purely algebraic operations and not as set intersections. We nevertheless use in every situation the expression ``method of the intersections'', as the proved algebraic result represents indeed a generalization of the set theoretical perspective. Let us now instantiate some specific cases. In the following calculations, which show the equivalence of the terms $f(x,y)c_{ij}$ and $f(i,j)c_{ij}$ for some concrete examples, we may even go beyond the rigid sequence of steps prescribed by the induction scheme of the proof of Proposition \ref{developments-intersections-equivalence-a}. After all, that induction scheme shows that $f(x,y)c_{ij}$ and $f(i,j)c_{ij}$ denote, numerically, the same quantity; hence we can use all admissible algebraic transformations \emph{freely}: \begin{itemize} \item $f(x,y):=xy$ is always a class. It holds $$f(x,y)c_{11}=xyc_{11}=xyxy=xy=1\cdot xy=1\cdot c_{11}=f(1,1)c_{11}$$ $$f(x,y)c_{10}=xyc_{10}=xyx(1-y)=0=0\cdot c_{10}=f(1,0)c_{10}$$ $$f(x,y)c_{01}=xyc_{01}=xy(1-x)y=0=0\cdot c_{01}=f(0,1)c_{01}$$ $$f(x,y)c_{00}=xyc_{00}=xy(1-x)(1-y)=0=0\cdot c_{00}=f(0,0)c_{00}$$ \item $f(x,y):=x+y$ is sometimes a class. It holds anyway $$f(x,y)c_{11}=(x+y)xy=x^2y+xy^2=xy+xy=2xy=f(1,1)c_{11}$$ $$f(x,y)c_{10}=(x+y)x(1-y)=(x+y)(x-xy)=x^2-x^2y+xy-xy^2=$$ $$=x-xy+xy-xy=x-xy =x(1-y)=1\cdot x(1-y)=f(1,0)c_{10}$$ \\ $$f(x,y)c_{01}=(x+y)(1-x)y=(x+y)(y-xy)=xy-x^2y+y^2-xy^2= $$ $$= xy-xy+y-xy=y-xy=$$ $$=(1-x)y=1\cdot (1-x)y=f(0,1)c_{01}$$ \\ $$f(x,y)c_{00}=(x+y)(1-x)(1-y)=(x+y)(1-y-x+xy)= $$ $$=x-xy-x^2+x^2y+y-y^2-xy+xy^2= $$ $$=x-xy-x+xy+y-y-xy+xy=0=0\cdot(1-x)(1-y)=f(0,0)c_{00}$$ \\ \end{itemize} \subsection{Boole's propositional calculus}\label{Booleprop} Let us now draw our attention to the cases in which $f(x_1,\dots,x_n)$ \emph{always} satisfies the law of duality, that is, for every possible interpretation of the variables; this implies that all coefficients of type $f(i_1, \dots, i_n)$, for $i_1, \dots , i_n\in\{0,1\}$, belong to $\{0,1\}$. In this context a set theoretical interpretation of the whole development of $f(x_1,\dots,x_n)$ is immediate. Let $f(i_1,...,i_n) \cdot c_{i_1...i_n}$ be the $i$-th element of the development. By assumption, $f(i_1,...,i_n)$ must be interpreted either as the universe or as the empty set. Correspondingly, its intersection with the constituent $c_{i_1...i_n}$ will result either in the constituent itself or in the empty set. In other words, the \emph{coefficient} $f(i_1,...,i_n)$ will determine whether the corresponding constituent $c_{i_1...i_n}$ contributes or not to the construction of the set $f(x_1,...,x_n)$ as a member of the development union: $f(x_1,...,x_n)$ will be the set resulting from the union of the ``surviving'' constituents. In such cases, the calculation of developments reflects exactly the standard method of constructing a propositional formula corresponding to a given truth table. As is well known, such a formula is in disjunctive normal form. Take each line of the truth table with value 1 and construct the conjunction of $n$ literals $l_1,\dots,l_n$, with $l_j\equiv A_j$ if in that line $A_j$ is assigned the value 1, and $l_j\equiv \neg A_j$ if it is assigned the value 0; then build the disjunction of all the conjunctions so obtained.
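This construction is easily mechanized; the following sketch (our own illustration, written directly in Boole's notation rather than with $\wedge,\vee,\neg$) builds the development of a 0/1-valued function from its truth table, keeping only the surviving constituents, and anticipates the example worked out just below.

\begin{verbatim}
# A sketch (our illustration): from a truth table (a 0/1-valued function
# f) to Boole's development, keeping only the "surviving" constituents,
# i.e. those whose coefficient f(i1,...,in) equals 1.
from itertools import product

def boole_dnf(f, names):
    terms = []
    for point in product((1, 0), repeat=len(names)):
        if f(*point) == 1:
            factors = [v if b else "(1-%s)" % v
                       for v, b in zip(names, point)]
            terms.append("".join(factors))
    return " + ".join(terms)

# f is true on the lines 111, 101, 100, 001, 000 of its table:
f = lambda x, y, z: int((x, y, z) in
        {(1,1,1), (1,0,1), (1,0,0), (0,0,1), (0,0,0)})
print(boole_dnf(f, ["x", "y", "z"]))
# xyz + x(1-y)z + x(1-y)(1-z) + (1-x)(1-y)z + (1-x)(1-y)(1-z)
\end{verbatim}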
For example, a formula corresponding to the truth table \begin{tabular}{p{0,2cm}p{0,2cm}p{0,2cm}p{0,5cm}p{0,2cm}} $A_1$ & $A_2$ & $A_3$ & \\ 1 & 1& 1& & 1\\ 1 & 1& 0& & 0\\ 1 & 0& 1& & 1\\ 1 & 0& 0& & 1\\ 0 & 1& 1& & 0\\ 0 & 1& 0& & 0\\ 0 & 0& 1& & 1\\ 0 & 0& 0& & 1\\ \end{tabular} is the following: $(A_1\wedge A_2\wedge A_3)\vee(A_1\wedge\neg A_2\wedge A_3)\vee(A_1\wedge\neg A_2\wedge\neg A_3)\vee(\neg A_1\wedge\neg A_2\wedge A_3)\vee(\neg A_1\wedge\neg A_2\wedge\neg A_3)$. In Boole's language, every positive literal $A_j$ is replaced by $x_j$ and every negative literal $\neg A_j$ is replaced by $(1-x_j)$, $\wedge$ is replaced by $\times$ and $\vee$ by $+$. Therefore, if $f(x,y,z)$ is the (binary) function whose graph coincides with the given truth table, we obtain that $f(x,y,z)=xyz+x(1-y)z+x(1-y)(1-z)+(1-x)(1-y)z+(1-x)(1-y)(1-z)$. This is equal to $1\cdot xyz+0\cdot xy(1-z)+1\cdot x(1-y)z+1\cdot x(1-y)(1-z)+0\cdot(1-x)yz+0\cdot(1-x)y(1-z)+1\cdot(1-x)(1-y)z+1\cdot(1-x)(1-y)(1-z)$, that is, the development of $f(x,y,z)$.\footnote{One could argue that in this way $\vee$ is translated into the exclusive disjunction rather than into the inclusive one, but this is irrelevant: the different lines of the truth table are mutually exclusive, hence in this context ``vel'' and ``aut'' coincide!} By interpreting set theoretically the development so obtained, according to the instructions suggested above, we automatically have a set theoretical interpretation of $f(x,y,z)$ as a well determined set. Vice versa, every development with all coefficients $f(i_1,...,i_n)$ in $\{0,1\}$ defines a truth table by retracing the process in the opposite direction. Boole comes very close to a fundamental aspect of the symmetry between propositions and classes, at the base of modern Boolean algebra. Nevertheless, he does not seem to be aware of this fact. The correspondence between the two domains that he sees is of a completely different nature. Well, we could even argue that he sees no correspondence at all! Rather, only a trivial reduction; the idea is to interpret a variable $x$ as the set of moments in which a given proposition $X$ is true: \begin{quote} Let us employ the capital letters $X, Y, Z$, to denote the elementary propositions concerning which we desire to make some assertion touching their truth or falsehood (...) And let us employ the corresponding small letters $x,y,z$, considered as expressive of mental operations, in the following sense, viz.: Let $x$ represent an act of the mind by which we fix our regard upon that portion of time for which the proposition $X$ is true; and let this meaning be understood when it is asserted $x$ \emph{denotes} the time for which the proposition $X$ is true. Let us further employ the connecting signs $+,-,=, \&$ in the following sense, viz.: Let $x+y$ denote the aggregate of those portions of time for which the propositions $X$ and $Y$ are respectively true, those times being entirely separated from each other. Similarly let $x-y$ denote the remainder of time which is left when we take away from the portion of time for which $X$ is true, that (by supposition) included portion for which $Y$ is true.
Also, let $x=y$ denote that the time for which the proposition $X$ is true, is identical with the time for which the proposition $Y$ is true. (pp.164--165) \end{quote} \subsection{Empty constituents} What happens when the coefficients $f(i,j)$ do not belong to $\{0,1\}$? For instance, the \emph{partial} binary tables of $+$ and $-$ presented earlier can be extended to \emph{total} tables by completing the missing arithmetical calculations, so as to obtain \begin{tabular}{p{0cm}p{0cm}p{0,5cm}p{1cm}p{1cm}p{1cm}} $A$ & $B$ & & $A+B$ & & $A-B$\\ 1 & 1 & & $2$ & & $0$\\ 1 & 0 & & 1 & & 1\\ 0 & 1 & & 1 & & -1\\ 0 & 0 & & 0 & & 0\\ \end{tabular} This point is very delicate, since the values 2 and $-1$ have no direct set theoretical meaning (nothing is greater than the whole or smaller than the nothing!). Contrary to expectations, Boole is able to draw from such ``unlogical'' values a kind of \emph{logical information}. The idea is simple: every constituent preceded by a coefficient different from 0 or 1 in the development of a term is to be identified with the empty set! Be careful now: it is the constituent, and not its coefficient, that is identified with such a set. The, so to say, set theoretical interpretation of the coefficient reduces to a mere \emph{warning} of the emptiness of its constituent. Let us calculate for example the development of the term $f(x,y):=x+y$: $$x+y=2xy+x(1-y)+(1-x)y.$$ The constituent $xy$ has coefficient 2. According to Boole's claim, this coefficient indicates that the intersection of $x$ with $y$ is the empty set (as indeed it must be, $x$ and $y$ being disjoint by assumption!). As another example, consider the development of $x-y$. This results in $$x-y=x(1-y)-(1-x)y.$$ Here, the intersection of $(1-x)$ with $y$ will again be the empty set (as it must be, by the assumption that the whole $y$ is included in $x$). In spite of its good behaviour in these two examples, the reader might feel quite perplexed about a suggestion that looks totally arbitrary in itself. But this is not the case, and Boole justifies his claim by proving, algebraically, the following theorem, which opens the 11th paragraph of the sixth chapter: \emph{If a function $V$, intended to represent any class or collection of objects, $w$, be expanded, and if the numerical coefficient, $a$, of any constituent in its development, do not satisfy the law, $a(1-a)=0$, then the constituent in question must be made equal to 0.} In the proof, Boole assumes that \begin{align}\label{w_dev} w=a_1t_1+a_2t_2+ \dots +a_nt_n \end{align} where the right term of this equation is the development of $V$ (which means that the terms $t_i$ are constituents). From (\ref{w_dev}) we immediately obtain that $w^2=(a_1t_1+a_2t_2+...+a_nt_n)^2$, and then $w=(a_1t_1+a_2t_2+...+a_nt_n)^2$, because $w$, as a variable, is subject to the duality law. The distributivity laws allow us to compute the value of the square, i.e., of $(a_1t_1+a_2t_2+...+a_nt_n)(a_1t_1+a_2t_2+...+a_nt_n)$. The computation is very easy because, on the one hand, $a_it_i \cdot a_it_i = a_i^2 t_i$ for all $i$, and on the other, $a_it_i \cdot a_jt_j = 0$ whenever $i \neq j$: for that, observe that for some variable $v$, either $v$ is a factor of $t_i$ and $(1-v)$ of $t_j$, or vice versa. \\ So we obtain \begin{align}\label{star_star} w=a_1^2t_1+a_2^2t_2+...+a_n^2t_n \end{align} Let $a_{i_1},...,a_{i_m}$ be all the coefficients among $a_{1},...,a_{n}$ that do not satisfy the duality law.
For all the other coefficients it holds $a=a^2$; therefore by subtracting (\ref{star_star}) from (\ref{w_dev}) we deduce \\ \\ $0=(a_{i_1}-a_{i_1}^2)t_{i_1}+(a_{i_2}-a_{i_2}^2)t_{i_2}+...+(a_{i_m}-a_{i_m}^2)t_{i_m}$, that is to say \begin{align}\label{star_star_star} 0=a_{i_1}(1-a_{i_1})t_{i_1}+a_{i_2}(1-a_{i_2})t_{i_2}+...+a_{i_m}(1-a_{i_m})t_{i_m} \end{align} By multiplying both sides of (\ref{star_star_star}) by $t_{i_j}$ with $1\leq j\leq m$, one infers in each case $0=a_{i_j}(1-a_{i_j})t_{i_j}$ (all the other addenda disappear for the very same reason as above). But $a_{i_j}(1-a_{i_j})\neq 0$ by assumption, and this yields that $t_{i_j}=0$. Boole's proof is algebraically correct. Notwithstanding, the reader may feel unhappy with it, as, once again, its purely algebraic nature completely hides any set theoretical interpretation. We give two alternative explanations, closely related to each other, which are still of a numerical nature;\footnote{It could not be any different, as long as coefficients with no direct set theoretical interpretation are possibly concerned.} nevertheless they are based on the coincidence of the method of the developments with that of the intersections. We think that this may help, in particular in the second argument, to \emph{perceive} some more evident set theoretical \emph{flavour}. Again, for the sake of simplicity, we deal with the case of two variables only. By (the proof of) Proposition \ref{developments-intersections-equivalence-a}, $f(i,j)c_{ij}=f(x,y)c_{ij}$ (for all possible binary variable interpretations). Moreover we know that Boole's calculus can be simulated within the ordinary algebraic calculus by considering only the variable interpretations which are solutions of the equation system $\bigstar$: \begin{eqnarray*} x^2 &=& x\\ y^2 &=& y\\ f(x,y)^2 &=& f(x,y). \end{eqnarray*} First argument. For all binary variable interpretations satisfying $\bigstar$ it holds $f(x,y)^2=f(x,y)$. For such interpretations, we deduce then: \noindent $f(i,j)c_{ij}=f(x,y)c_{ij}=f(x,y)^2c_{ij}^2=(f(x,y)c_{ij})^2=(f(i,j)c_{ij})^2$. Therefore $f(i,j)c_{ij}=(f(i,j)c_{ij})^2$. If $f(i,j)\neq f(i,j)^2$, then it must be $c_{ij}=0$ (for all binary interpretations allowed by $\bigstar$). Second argument. For all variable interpretations fulfilling $\bigstar$, we can see $f(x,y)$ as a set. Hence, $f(x,y)c_{ij}$ will represent the intersection of two sets, which in turn is a set. Therefore, $f(x,y)c_{ij}$ must satisfy the duality law, and the same must then apply, numerically, to $f(i,j)c_{ij}$. When $f(i,j)\notin\{0,1\}$, the only condition for this to hold is that $c_{ij}$ is algebraically null, i.e., set theoretically seen as the empty set. \subsection{The logical meaning of division}\label{division} According to Michael Dummett, the division operator has \emph{no logical meaning} at all; in \cite{D59} Dummett writes ``He introduced a division sign for the operation inverse to intersection, and never succeeded in unravelling the complicated tangles which resulted from this.'' Let us consider Boole's analysis of the proposition ``Clean beasts are those which both divide the hoof and chew the cud'', formalized as $x=yz$ (pp.86--87). Boole is tempted to infer from this equation the validity of a new one, $z=\frac xy$, through the ordinary algebraic introduction of division; but immediately after he observes that this ``equation is not at present in an interpretable form'' (p.87).
At the same time, he goes on saying: ``If we can reduce it to such a form it will furnish the relation required'' (p.87). \\ The interpretable form Boole is referring to is the development of $\frac xy$, which is: \begin{align}\label{paragrafo} \frac xy \, = \, \frac11xy+\frac10x(1-y)+\frac01(1-x)y+\frac00(1-x)(1-y) \end{align} By that, the problem of providing a meaning to the operation of division is reduced to that of providing a meaning to $\frac 1 0$ and $\frac 00$. According to Boole, the quotient $\frac10$ should be dealt with in the same way as the non binary coefficients seen so far. While commenting on his theorem for the treatment of non binary coefficients, he states: \begin{quote} (...) it may be shown generally that any constituent whose coefficient is not subject to the same fundamental law as the symbols themselves\footnote{Of course, the duality law.} must be separately equated to 0. The usual form under which such coefficients occur is $\frac10$. This is the algebraic symbol of infinity. Now the nearer any number approaches to infinity (allowing such an expression), the more does it depart from the condition of satisfying the fundamental law above referred to. (p.91) \end{quote} Boole treats the ``\emph{infinity}'', as he calls it, as the limit of quantities whose degree of compliance with the duality law is inversely proportional to their magnitude (``the nearer any number approaches to infinity... the more does it depart from the condition of satisfying the fundamental law'': formally, $\lim_{x\to\infty}(x^2-x)=\infty$). Hence, whatever this object might be, it should not satisfy the duality law at the highest degree; by that, its treatment as an \emph{ordinary} number different from 0 and 1 will follow.\footnote{In ordinary algebra, the argument $(\frac10)^2=\frac{1^2}{0^2}=\frac10$, \emph{apparently} proving the duality law for $\frac10$, has no sense, as $\frac10$ is no number. Consequently, the argument does not need to hold in Boole's calculus either. One can also observe that in non standard analysis the sequences $\Big(\frac1{\frac1n}\Big)_n$ and $\Big(\frac1{(\frac1n)^2}\Big)_n$ define two \emph{well distinct} unlimited non standard numbers, the second being larger than the first one.} As for the coefficient $\frac00$ Boole affirms: \begin{quote} The symbol $\frac00$ [...] does not necessarily disobey the law we are here considering, for it admits of the numerical values 0 and 1 indifferently. Its actual interpretation, however, as an indefinite class symbol, cannot, I conceive, except upon the ground of analogy, be deduced from its arithmetical properties, but must be established experimentally. (pp.91--92) \end{quote} \begin{quote} [...] The symbol $\frac00$ indicates that a perfectly \emph{indefinite} portion of the class, i.e. \emph{some}, \emph{none}, or \emph{all} of its members are to be taken. (p.92) \end{quote} Boole claims that the interpretation of $\frac 0 0$ ``must be established experimentally'' (p.92), and starts by considering the statement ``Men who are not mortals do not exist''. By denoting ``men'' as $y$ and ``mortal beings'' as $x$ he comes to the equation $y(1-x)=0$, transformed into $y-yx=0$, and finally into $yx=y$. In ordinary algebra one would then proceed by dividing both sides by $y$ (for a non vanishing $y$), so as to obtain $x = \frac y y $ and conclude $x = 1$. But, of course, the conclusion that all things are mortal is not contained in the assumption and we do not want it!
Boole knows that ``the operation of division cannot be \emph{performed} with the symbols with which we are now engaged'' (p.89) and again he suggests that we calculate the development of $\frac yy$: ``Our resource, then, is to \emph{express} the operation, and develop the result by the method of the preceding chapter'' (i.e. the method of the developments) (p.89). Well, from the equation $x=\frac yy$, by calculating the development of $\frac yy$ we obtain: $$x=y+\frac00(1-y).$$ Simple semantic observations lead Boole to extract some meaningful logical information from it: \begin{quote} This implies that mortals $(x)$ consist of all men $(y)$, together with such a remainder of beings which are not men $(1-y)$, as will be indicated by the coefficient $\frac00$. Now let us inquire what remainder of ``not men'' is implied by the premise. It might happen that the remainder included all the beings who are not men, or it might include only some of them, and not others, or it might include none, and any one of these assumptions would be in perfect accordance with our premiss [...] and therefore the expression $\frac00$ here indicates that \emph{all}, \emph{some} or \emph{none} of the class to whose expression it is affixed must be taken. (pp.89--90) \end{quote} He then quite optimistically and without proof concludes: \begin{quote} Although the above determination of the significance of the symbol $\frac00$ is founded only upon the examination of a particular case, yet the principle involved in the demonstration is general, and there are no circumstances under which the symbol can present itself to which the same mode of analysis is inapplicable. We may properly term $\frac00$ an \emph{indefinite class symbol}, and may, if convenience should require, replace it by an uncompounded symbol $v$, subject to the fundamental law, $v(1-v)=0$. (p.90) \end{quote} The task of providing a clear meaning to the operation of division, so as to include also expressions such as $\frac 0 0 $ and $\frac 1 0 $, remains untouched by Boole. In what follows we try to address this task, and we make a proposal with the effect that the `new' operation of division will coincide with the standard operation whenever the denominator is different from 0, and is in accordance with Boole's \emph{desiderata} as to $\frac k 0 $, $k \geq 0$. Our proposal starts from the obvious consideration that the division operator is the inverse operation of the multiplication. This is essentially what Boole himself requires; in other words, the validity of \begin{align}\label{rombo} \frac xy=z \quad \Leftrightarrow \quad x=yz \end{align} It is important to notice from the start that the equivalence above, read in set theoretical terms, says that the division operation has a value if and only if \begin{align}\label{subset} x \subseteq y. \end{align} A major difference with respect to ordinary algebra is that the values $z$ are (almost) \emph{always} undetermined (not only for null $y$). In fact, $\frac xy$, thus $z$, is any set that intersected with $y$ results in $x$: \begin{align}\label{div_eq} \frac xy \: y=x. \end{align} There are in general infinitely many possible values of $z$ which are suitable for the goal; more precisely, every set $z$ ranging from $x$ to $x+(1-y)$ will work. This reflects exactly Boole's words ``\emph{all}, \emph{some}, or \emph{none}'', concerning the members of $(1-x)(1-y)$ (p.90).\footnote{Of course, which one of the three cases obtains will depend on the different individual examples.
What we give here is a general theory designed by abstraction from single cases.} See the Euler-Venn diagrams in Figure \ref{fig:division} for a visual clarification of this fact.\footnote{T. Hailperin, one of the very few authors, to our knowledge, suggesting a possible rigorous treatment of the division, shares with us the same starting point (\cite{H76b}, pp.70--77). But his approach is very different from ours: he develops a rigorous system with techniques of modern algebra that are external to Boole's conceptual background. We rather prefer to clarify the notion of division Boole worked with by shaping his fundamental intuitions through a suitable extension of the pseudo-binary calculus. In particular, for Hailperin $\frac00$ and $\frac10$ denote abstract algebraic entities, whereas they are ordinary numbers in our approach.} \begin{figure} \caption{$z$ is undetermined.} \label{fig:division} \end{figure} If we want to simulate the \emph{set-theoretical} calculus enriched with the division operator by a suitable \emph{quantitative} algebraic calculus, we must therefore preserve somehow this idea of \emph{multi-valuedness}. A possible way out is to say that any operation satisfying the three requirements below is a good interpretation of the division operation. In the following, $p$ and $q$ are arbitrary rational numbers and $div$ is used to denote the ordinary division between rational numbers (to avoid ambiguity with $/$): \begin{enumerate} \item $\frac pq=div(p,q)$, for $p\neq0\neq q$; \item $\frac p0\in\mathbb{Q}\setminus\{0,1\}$, whenever $p\neq0$; \item $\frac 00\in\{0,1\}$. \end{enumerate} Notice that the second condition is in accordance with Boole's claim that $\frac10$ fails to fulfill the duality law, whereas the third condition meets his idea of $\frac00$ as an undetermined quantity that nevertheless should always represent a class.\footnote{Besides the conditions 1--3, the reader may reasonably ask for a fourth natural requirement, that is, $\frac k0=k\frac10$ for $k\neq 0$. The reader is of course allowed to select a function respecting this further restriction; however, this is not relevant for our discussion here.} In this approach $\frac 00$ and $\frac k0$ are admissible expressions denoting quantities, so the realm of fractions of standard algebra is extended. Consider now the generalization of (\ref{rombo}): \begin{align}\label{rombogen} \frac st=u \quad \Leftrightarrow \quad s=tu \end{align} \noindent for arbitrary terms $s,t,u$. This holds in ordinary algebra only if $t\neq 0$, otherwise it has no meaning at all. On the contrary, in our new calculus this equation is valid also for $s=t=0$, whereas it is not valid for $s \neq 0$ and $t = 0$. This last fact can be motivated in set theoretical terms by noticing that $\frac s 0$ has a meaning only when $s$ is a subset of the empty set, hence $\frac s 0$ is set theoretically not allowed when $s \neq 0$. On the other hand, $\frac 00$ is perfectly acceptable since $\emptyset\subseteq\emptyset$ and it satisfies the fundamental equivalence (\ref{rombo}). A direct consequence valid in the same domain (hence, in particular, for $s=0=t$) is the generalization of (\ref{div_eq}), \begin{align}\label{circledS} \frac st \: t=s \end{align} \noindent (just substitute $u$ by $\frac st$ in virtue of $\frac st=u$): set theoretically, whenever $s$ is a subset of $t$, then $\frac st$ is a set that intersected with $t$ gives $s$.
The involved validity condition is then: \begin{align}\label{triangle} \textrm{\emph{no denominator vanishes unless the corresponding numerator does the same}} \end{align} This condition extends the usual one required in ordinary algebra: \begin{align}\label{triangle_triangle} \textrm{\emph{no denominator vanishes}} \end{align} It is then not difficult to see that under (\ref{triangle}), or in case $u$ satisfies the law of duality (as will be the case below), the following law from ordinary algebra is also preserved: \begin{align}\label{odot} \frac st \: u=\frac{su}{tu} \: u. \end{align} This relation will soon be of fundamental importance. However, there are two cases for which a larger domain of validity is not granted. The first one is particularly interesting: \begin{align}\label{circledS_circledS} \frac{st}t=s \end{align} \noindent It fails to be true for $t=0$, $\frac00:=i$ and $s\neq i$. This failure, however unpleasant at first sight, perfectly agrees with the fact that even if $st$ is always a subset of $t$, this does not imply that $s$ is the only set that intersected with $t$ gives us $st$. Analogous observations apply to the equations $$\frac st=\frac{su}{tu} \qquad \frac st \frac uv=\frac{su}{tv}$$ which do not necessarily hold: neither set-theoretically, nor numerically under (\ref{triangle}). Nevertheless, the given definition of division suffices to prove our main requirement: the extension of Proposition \ref{developments-intersections-equivalence-a} to the division operator. \begin{proposition}\label{developments-intersections-equivalence-b} \emph{For functional terms $f(x,y)$ in the full algebraic language $\{+,\times,-,/,0,1\}$, the} method of the developments \emph{and the} method of the intersections \emph{are equivalent.} \end{proposition} First of all, let us observe that the method of the developments and the proof of Proposition \ref{developments} can be immediately transferred to the division. The proof of that proposition is in fact completely independent of the type of $f$; the only delicate point is to ensure that all possible fractional terms are always admissible, but this is indeed the case in our system. Therefore, it remains just to complete the proof of Proposition \ref{developments-intersections-equivalence-a} with the induction step: \begin{itemize} \item $f(x,y):=\frac{f_1(x,y)}{f_2(x,y)}$. Of course $f(i,j)=\frac{f_1(i,j)}{f_2(i,j)}$. By IH $f_l(x,y)c_{ij}$ and $f_l(i,j)c_{ij}$ coincide up to algebraic transformations for $l=1,2$. Then, by (\ref{odot}), since $c_{ij}$ satisfies the duality law, $f(x,y)c_{ij}:=\frac{f_1(x,y)}{f_2(x,y)}c_{ij}=\frac{f_1(x,y)c_{ij}}{f_2(x,y)c_{ij}}c_{ij}=\frac{f_1(i,j)c_{ij}}{f_2(i,j)c_{ij}}c_{ij}=\frac{f_1(i,j)}{f_2(i,j)}c_{ij}:=f(i,j)c_{ij}$. \end{itemize} The proof is now complete. But this is not the end of the story. It is interesting to notice that by mere algebraic calculations an alternative justification of Boole's interpretation of division can be given. Such a justification relies only on the syntactic rule (\ref{rombo}) and on an application of the method of the intersections to $\frac xy$: \begin{align}\label{paragrafo_2} \frac xy\, = \,\frac xy xy+\frac xy x(1-y)+\frac xy (1-x)y+\frac xy (1-x)(1-y). \end{align} Now let $\frac xy\, = \, z$ for some $z$.
We obtain from (\ref{paragrafo_2}) $$\frac xy\, = \, z xy+z x(1-y)+z (1-x)y+z (1-x)(1-y) .$$ Since (\ref{rombo}) holds by assumption, we have $$\frac xy\, = \, z\,yz\,y+z\,yz(1-y)+z (1-yz)y+z (1-x)(1-y)\, = \, $$ \begin{align}\label{paragrafo2} yz+yz(1-y)+z (1-yz)y+z(1-x)(1-y). \end{align} The variable $z$ is present in all four addenda, but only in the last does it play an essential role, as the following calculations show: \begin{itemize} \item $yz=xy$, since from $\frac xy\, = \, z$ it follows that $x=yz$, and so $xy=y^2z=yz$; \item $yz(1-y)=zy(1-y)=z\cdot 0=0$; \item $z(1-yz)y\, = \, (z-yz)y\, = \, z(1-y)y=0$. \end{itemize} So far each addendum coincides with the corresponding one in the development (\ref{paragrafo}). $z$ remains only in the fourth addendum $z(1-x)(1-y)$. Since $z$ is an unknown parameter, the situation is exactly the one described by Boole's expression ``\emph{all}, \emph{some}, or \emph{none}'' referred to the members of $(1-x)(1-y)$ (p.90). In conclusion, the use of the intersections yields the same intuitive interpretations of $\frac10$ and $\frac00$ suggested by Boole. And since $x$ and $y$ range over all possible classes, this proof has the highest degree of generality. As a side remark, we observe that a hypothetical application of (\ref{circledS_circledS}), i.e. $\frac{st}t=s$, would on the contrary imply syntactically the unpleasant result $\frac xy=x$. First of all, observe that under (\ref{division}) $x=xy$, since $x=xx=\frac xy y x= \frac xy y^2 x = \frac xy y yx = xyx = xy$. Hence, by (\ref{circledS_circledS}), $\frac xy=\frac{xy}y=x$. Although this result would deliver one of the suitable outputs, the multi-valuedness of the division would be lost in this way. But, as we have seen, $\frac{st}t=s$ is not valid under (\ref{triangle}) for every allowed interpretation of the division! We conclude our analysis of the division with an inspection of its pseudo-binary table: \begin{tabular}{p{0cm}p{0cm}p{0.5cm}p{2cm}} $A$ & $B$ & & $A/B$\\ 1 & 1 & & 1\\ 1 & 0 & & $q\neq 0,1$\\ 0 & 1 & & 0\\ 0 & 0 & & 0 or 1\\ \end{tabular} This table shows how the use of $\emptyset$ and $U$ reflects the behaviour of the division over all possible sets $A$ and $B$, analogously to what happens for the other operations! Again according to (\ref{rombo}) and (\ref{circledS}), the idea is that $A/B$ is defined only when $A\subseteq B$, and in this case $A/B$ is such that $$A=\frac AB \cap B.$$ As to the first line of the table above, $U=Z\cap U$ if and only if $Z=U$. The second line instead originates no set, as $U$ is not a subset of $\emptyset$, and is therefore set-theoretically not allowed. As to the third line, the intersection of any set $Z$ with the universe $U$ results in $\emptyset$ if and only if $Z=\emptyset$. For the fourth, we observe that $\emptyset=Z\cap\emptyset$ holds for all sets, in particular for $Z=\emptyset$ or $Z=U$. This is in perfect agreement with the fact that all sets $Z$ ranging from $A$ to $A+(1-B)$ are valid solutions. If only $\emptyset$ and $U$ are available, then $Z=A=\emptyset$ or $Z=A+(1-B)=\emptyset+(U-\emptyset)=U$. \section{Conclusion: the correspondence revisited} John Corcoran, in his introduction to \emph{The Laws of Thought} (Prometheus Books, 2003), says that \begin{quote} Boole is one of the most misunderstood of the major philosophers of logic. He gets criticized for things he did not do, or did not do wrong. He never confused logic with psychology. He gets credit for things he did not do, or did not do right.
He did not write the first book on pure mathematics and he did not devise `Boolean algebra'. Even where there is no question of blame or praise, his ideas are misdescribed or, worse, ignored. (pp. xxix--xxx) \end{quote} We do agree with Corcoran. In the present paper we have tried to present the basis of Boole's logical theory as it emerges from \emph{The Laws of Thought}, with its good and bad features. To this aim we have proved that the method of the developments, once seen through the method of the intersections, reveals its clear set-theoretical meaning and shows how natural and effective Boole's way of decomposing a concept was. In our analysis, it emerges that the operations that Boole denotes by $+$ and $-$ are neither the standard set-theoretical union (between disjoint sets or not) nor the set-theoretical difference. $+$ and $-$ are partial operations.\footnote{Corcoran in his introduction agrees on this point, see pp. xxx--xxxi.} What emerges from our work is that those operations are not defined, set-theoretically, exactly when the corresponding algebraic operations assume values outside $\{0, 1\}$. The division operation in the algebra of logic is again a partial operation, as it is in quantitative algebra. \\ \\ Furthermore, we have tried to provide a mathematical interpretation justifying the logical conception of Boole's division and his intuitive treatment of unusual coefficients such as $\frac10$ and $\frac00$. \\ \\ At each stage of our presentation of Boole's theory we have insisted on checking whether the alleged correspondence between algebra and logic is achieved, and to what degree. Do they share the same universal laws? This point is of fundamental importance, since an essential part of Boole's machinery relies on the perfect formal correspondence between them, so that we should nowadays rather speak of two different interpretations of a formally unique calculus. As Boole himself says, when proving the validity of a logical law or of an argument through symbolic calculations, one would be allowed to \emph{suspend} the logical interpretation in all intermediate steps and rely only on the application of ordinary algebraic rules plus the law of duality. The presumed existence of a perfect formal correspondence between the logical and the quantitative pseudo-binary calculi would always guarantee the correctness of the proof: \begin{quote} It has been seen, that any system of propositions may be expressed by equations involving symbols $x$, $y$, $z$, which, whenever interpretation is possible, are subject to laws identical in form with the laws of a system of quantitative symbols, susceptible only of the values 0 and 1 [...]. But as the formal processes of reasoning depend only upon the laws of the symbols, and not upon the nature of their interpretation, we are permitted to treat the above symbols, $x$, $y$, $z$, as if they were quantitative symbols of the kind above described. \emph{We may in fact lay aside the logical interpretation of the symbols in the given equation; convert them into quantitative symbols, susceptible only of the values 0 and 1; perform upon them as such all the requisite processes of solution; and finally restore to them their logical interpretation.} (pp.69--70) \end{quote} Actually, despite Boole's enthusiastic slogan about the correspondence of the two calculi, he seems perfectly aware of the irreducible discrepancy produced by the need for interpretational restrictions in his logical calculus.
Nevertheless, instead of being worried about it, he thinks (quite unexpectedly, we would say) that he can take advantage of it: \begin{quote} The processes to which the symbols $x,y,z$, regarded as quantitative and of the species above described, are subject, are not limited by those conditions of thought to which they would, if performed upon purely logical symbols, be subject, and a freedom of operation is given to us in the use of them, without which, the inquiry after a general method in Logic would be a hopeless quest. (p.70) \end{quote} We see here somehow a \emph{sudden change of perspective}. Although Boole had insisted, as much as possible, on the perfect correspondence between the algebraic and the logical calculi, in the end his \emph{true} idea becomes that of \emph{substituting} the logical calculus with the quantitative pseudo-binary calculus, taking advantage of the freedom of interpretation in the latter. The subtle point to appreciate here is that such a substitution is recommended because it is not vacuous: the two calculi do not coincide, and one is more efficient than the other! \end{document}
Evaluating organizational change in health care: the patient-centered hospital model
Carlo V. Fiorio, Mara Gorli & Stefano Verzillo
BMC Health Services Research, volume 18, Article number 95 (2018). Published: 08 February 2018
An increasing number of hospitals react to recent demographic, epidemiological and managerial challenges by moving from a traditional organizational model to a Patient-Centered (PC) hospital model. Although the theoretical managerial literature on the PC hospital model is vast, quantitative evaluations of the performance of hospitals that moved from the traditional to the PC organizational structure are scarce. Yet quantitative analysis of the effects of managerial changes is important and can provide additional arguments in support of innovation. We take advantage of a quasi-experimental setting and of a unique administrative data set on the population of hospital discharge charts (HDCs) over a period of 9 years in Lombardy, the richest and one of the most populated regions of Italy. During this period three important hospitals switched to the PC model in 2010, whereas all the others retained the functional organizational model. This allowed us to estimate a difference-in-difference model for selected measures of efficiency and effectiveness of PC hospitals, focusing on the "between-variability" of the 25 major diagnostic categories (MDCs) in each hospital. We contribute to the literature that addresses the evaluation of healthcare and hospital change by providing a quantitative estimation of efficiency and effectiveness changes following the implementation of the PC hospital model. Results show that both efficiency and effectiveness have significantly increased in the average MDC of PC hospitals, thus confirming the need for policy makers to invest in new organizational models close to the principles of PC hospital structures. Although an organizational change towards the PC model can be a costly process, implying a rebalancing of responsibilities and power among hospital personnel (e.g. medical and nursing staff), our results suggest that changing towards a PC model can be worthwhile in terms of both efficacy and efficiency. This evidence can be used to inform and sustain hospital managers and policy makers in their hospital design efforts and to communicate the innovation advantages within the hospital organizations, among the personnel and in the public debate. In recent decades, national health care systems have been dealing with an increased demand for high-quality and patient-centered services, but limited resources have often challenged their sustainability ([1]). New demands and needs are emerging, connected with the growth of chronic pathologies, the ageing of the population, the development of technologies, the scarcity of economic resources and people's emerging awareness of their care and cure rights. With respect to this demographic, epidemiological and social context, health care and hospital systems must innovate to respond to the new care needs. The mandate to "do more with less" encourages policy makers, health care managers and scholars to look for innovative ways to redesign health care services. The need for innovation is often interlaced with processes of organizational redesign in many forms.
There are many examples of health care organizations that have committed to broad changes in response to current social and economic demands. A significant stream of change relates to technological innovations, such as telemedicine ([2]). There is also extensive experience with the activation of new social and integrated care networks, designed to act as community-based care networks ([3, 4]). A major movement in policy making identifies the "patient-centered approach" as the key lever for making the health care delivery system respectful of, and responsive to, current needs and requirements ([5–8]). The patient-centered approach, while resting on clear statements, principles of care and operative practices, also leads to different care model designs within hospitals ([9]). In fact, a growing literature ([10–13]) suggests that innovation in health care should evolve towards a patient-centered (henceforth PC) model, reshaping hospitals with the aim of moving from functional towards process-oriented organizational forms, focusing on the process of care instead of on functional, self-referential departments within the hospital. To innovate towards the PC model, hospitals usually undergo a process of redesign that encompasses several restructuring actions, both in the organizational structure and in the physical building ([14]). Although the theoretical managerial literature on the PC model is vast, evaluations of the performance of hospitals that have moved from the functional to the PC organizational structure are scarce (with a few exceptions, such as [11, 15, 16]). The complexity of the variables at play, the sensitivity of the data, which are not always made available for research, the diversity of pathologies and types of patients and many other elements have so far made the construction of a methodological framework for the evaluation of the PC hospital model extremely challenging. The shift to different hospital models may therefore follow international trends and interests that are not always connected to a clear ex ante impact evaluation ([17]). However, without any evaluative research, any innovation risks being perceived by local communities and by organizations' employees as being driven more by political reasons or managerial trends than by a serious assessment of its benefits in terms of effectiveness and efficiency. In this work, we take up the challenge of embarking on a sound assessment of the efficiency and effectiveness of the PC model as opposed to the traditional functional-based hospital model. To approach the PC model evaluation, we begin by considering and evaluating two assertions that constitute the essential drivers for policy makers innovating towards the PC model: (i) the PC model responds to the need to reduce waste, hence increasing hospital efficiency; (ii) the PC model responds to the need to reshape care delivery processes around the needs of the patients, increasing the effectiveness of the treatment ([12, 18]). Driven by the belief that an assessment of important organizational changes is crucial, we show how this is possible given the availability of a quasi-experiment and of adequate administrative data. Our research study focuses on the provision of health care services in the Lombardy region, the richest and one of the largest regions of Italy. With nearly 10 million inhabitants, Lombardy is larger than the median EU country by population and one of the richest regions of Europe by per capita GDP.
In this context, three important hospitals switched to the PC hospital model at the end of 2010, while the rest of the Lombardy hospitals remained with the traditional functional organizational structure. In this paper, we suggest an empirical strategy for a quantitative evaluation of the overall impact of the PC model relative to the pre-existing one, following traditional evaluation studies, in which the effects of a policy intervention are measured through appropriate econometric techniques (difference-in-difference estimators) on a set of selected outcome indicators (e.g. [19]). The available data for this research, based on an administrative data set, are used to measure effectiveness and efficiency by major diagnostic category (henceforth, MDC). The relevance of this study is not related solely to the evaluation of the PC hospital model's impact, which is the main focus of our analysis. Our research exercise also suggests that the ex post assessment of organizational changes through statistical data is relevant for informing policy and can serve as a driver for future innovations.
The patient-centered hospital model
Hospitals have often been conceived as functional organizational structures, in which patients requiring a similar area of expertise are grouped into independently controlled departments. Although in some countries such an organization seemed for a long time to be the most appropriate to support and foster the knowledge development required by medical science, the functional structure has shown severe shortcomings, consisting mainly of economic and organizational inefficiencies. In fact, the functional organization often lacks the capability to control the work flow across departments and thus the coordination of the care activities within a patient care trajectory. Moreover, in the functional organization, resources tend to be duplicated, causing waste, and autonomy in using the specialty's resources often prevails over accountability, in some cases reducing the effectiveness of treatments ([10, 12, 20]). The inefficiencies and complexities detected in the functional hospital organization have led to many forms of organizational innovation. Examples may be found in process-oriented design ([11, 20]), in the lean philosophy ([21]) or in the experimentation with new hospital settings ([9]). Another planned change process is the one defined as the patient-centered (PC) hospital model, towards which hospitals are converging worldwide, for instance in England ([22]), the Netherlands ([23]), Spain ([24]), Sweden ([25]) and Italy ([26]). The PC model represents an attempt to redesign the care delivery process by shaping the structures and processes involved in delivering hospital care according to the needs of the patients. In the traditional hospital models, patients are admitted under individual specialist clinicians, who keep them or transfer them to the care of another clinician. As summarized in Table 1, to innovate towards the PC model, hospitals undergo a process of redesign that encompasses several restructuring actions that, taking stock of the literature (cf. [10, 20, 27]), we summarize along six dimensions ([28]). The first regards the change of the organizational model, which passes from a functional/divisional model to a process-oriented model ([20]). The second is the transformation of the concept of the organizational unit, necessary for responding to patients' care needs and for managing the relationships among specialties.
The criteria for patients' allocation to hospital units switch from specialty-based units to multi-specialty units, differentiated by the level of patients' clinical and assistance needs instead of by their specific pathologies. In fact, the core principle of the PC model consists of the delivery of the appropriate amount of cure and care to patients in the most suitable setting according to their health conditions. Third, as the PC model requires integrated care, multi-professional and multi-specialty teams are strengthened and required to collaborate. This is consistent with the analyses of patient-centeredness carried out by [29] and [30]. An example of this new integrated effort is the reconfiguration of the nurses' role, in which traditional "functional nursing" (i.e. nurses specializing in a single care activity) becomes "modular nursing" (i.e. nurses responsible for the overall assistance practices required by small groups of patients within the ward). Fourth, hospitals rethink their use of resources, such as beds, operating rooms and equipment, which are shared by all the functional specialties and are regrouped and regulated by a centralized logistical model. Patients are no longer transferred across different units or departments; rather, physicians and technologies move to the patients' beds. Fifth, such re-organization calls for new managerial roles ([10]) responsible for the appropriateness, timeliness, flow and integration of the patients' care delivery process (e.g. the bed manager or case manager). Sixth, the described changes might require a redesign of the physical environment to maximize resource pooling and the grouping of patients based on their clinical severity and on the complexity of the assistance required ([27]).
Table 1 Disentangling the differences between traditional and PC hospitals
The PC organizational model is understandably characterized by local variations depending on the boards' strategic choices, the hospitals' dimensions, the workforce composition, the patients' average characteristics, and so on. While this type of diversity is hardly predictable and is better addressed by case study analyses ([31, 32]), the main common traits of the PC innovation can be identified, provided that a suitable environment and adequate data are available. For the former, one needs a context in which, from a pool of comparable units before treatment, some hospitals have been treated while others have not. For the latter, one needs data characterized by minimal error due to mis-measurement, non-random response rates or improper population coverage. Unsurprisingly, there are very few studies providing ex post analyses of the implementation of the PC model so far. The application of the PC principles is expected to improve quality, increase patient satisfaction, increase job satisfaction for staff and improve efficiency ([33]). Reports on new PC hospitals highlight the positive aspects of patient-friendly and staff-friendly design ([34]). Other authors, however, question the strength of these claims ([18, 22]). A few authors (see for example [10, 20]) present extensive literature reviews on assessing hospital changes and hospital designs (see for example [35]), thus tracing the factors that affect success or failure in the redesign process, but they provide no ex post analysis of PC model adoption.
To the best of our knowledge, there is still little evidence either to support or to refute these claims, notably in the European context ([36]), and there is no quantitative assessment of the efficiency and effectiveness of the PC model as a whole. Considering the relevance of the PC model change with respect to hospital management and policy making, and considering also the extensive implementation of, and debate on, the model in European countries and in the international context, this paper proposes to fill this quantitative assessment gap, with a specific focus on the efficiency and effectiveness of PC implementation.
The empirical model
A key ingredient in assessing the effects of a change from a functional to a PC model is to observe, in a group of comparable hospitals, a change in a group of hospitals (treated units) as opposed to others (control units) over time. The decision to move from a functionally organized to a PC hospital model is typically taken at the hospital level; however, its implementation might differ greatly depending on each major diagnostic category (Footnote 1), as some MDCs are more influenced by the organization, whereas others follow very strict protocols regardless of the organizational model adopted. In our model, we identify the effect of moving from a functional to a PC model of hospital organization by exploiting the variability of health outcomes across MDCs. For such an organizational change, there is no need for high-frequency data (e.g. daily), as it is likely to have an impact on hospital performance over months or years, or for individual data, as the focus is on the average efficiency and effectiveness in MDCs of treated hospital units versus those in untreated ones. However, such an empirical setting requires the availability of large data sets regarding the characteristics of all the MDCs in several hospitals over time. The increasing availability of administrative data about hospital discharge charts (henceforth, HDCs) allows us to overcome this major data requirement. As we have access to administrative data on the full population of all HDCs for all Lombardy hospitals between 2004 and 2012, we managed to build measures of effectiveness and efficiency by MDC. In our empirical model, we organize the data by year of discharge and collapse them to the average HDC for MDC j in hospital h at time t. The reason for keeping the MDC dimension in our collapsed data is that hospitals differ greatly in terms of MDC mix and relative importance, and we aim to exploit this variability for the identification of our main coefficient as well. The basic model is a standard difference-in-difference model: $$ \begin{aligned} y_{j,h,t}&=Z_{j,h}+T_{t}+\alpha_{1}{HDC}_{j,h,t}+\alpha_{2}{Age}_{j,h,t}\\ &\quad+\alpha_{3}{Male}_{j,h,t}+\gamma {PC}_{h,t}+\epsilon_{j,h,t} \end{aligned} $$ where y_{j,h,t} is the logarithmic transformation of the average outcome (Footnote 2) in MDC j of hospital h at time t, Z_{j,h} are fixed effects identifying idiosyncratic characteristics of MDC j in hospital h and T_t are year fixed effects that account for possible common trends, such as technological advancement or a changed demand for certain services. We also control for a set of variables defined at the (j,h,t) cell level, such as the average number of discharges (HDC_{j,h,t}), the average age of patients (Age_{j,h,t}) and the share of male patients (Male_{j,h,t}).
The variable PC_{h,t} is defined as a dummy that is equal to one if the PC model has been adopted in hospital h in year t and zero otherwise (Footnote 3), and ε_{j,h,t} is an error term. By controlling for a set of observables over time, we control for observed differences among the treated and the control group, which allows us to reduce the imbalance of the two samples. The main coefficient of interest is γ, which accounts for the difference in the logarithm of the mean outcome due to the adoption of the PC organizational model. However, the estimate of γ could be biased by a set of omitted variables: hospitals' heterogeneity depends on the know-how developed in each MDC, which typically increases with the number of patients treated, as well as on the morbidity of the average patient in each MDC and on patients' age and gender. The heterogeneity of MDCs within hospitals also contributes to heterogeneity among hospitals that a simple hospital fixed effect, such as the one used in the basic specification (Eq. 1), would be unable to capture. Hence, we also control for a set of interaction terms, which are introduced into the basic model incrementally to reach a saturated one. In particular, we first condition on the interaction of year fixed effects with MDC dummies (I_j × T_t, where I_j is equal to 1 for MDC j and 0 otherwise) and with hospital dummies (I_h × T_t, where I_h is equal to 1 for hospital h and 0 otherwise) to account for possibly different time trends among different MDCs and hospitals. We then control for the interactions of the average number of discharges with MDC dummies (I_j × HDC_{j,h,t}) and with hospital dummies (I_h × HDC_{j,h,t}) to account for heterogeneity in the attractiveness of hospitals and the frequency of diagnostic categories. Finally, to take into account patient complexity and risk-adjustment issues, we also control for the interactions of the average age of patients with MDC dummies (I_j × Age_{j,h,t}) and with hospital dummies (I_h × Age_{j,h,t}) to account for heterogeneity in the age composition of discharges by MDC and hospital, and for the interactions of the share of male patients with MDCs (I_j × Male_{j,h,t}) and hospitals (I_h × Male_{j,h,t}), since different diagnostic categories are characterized by different gender compositions of patients. The saturated model that we finally estimate can be written as follows: $$ \begin{aligned} {}y_{j,h,t}=&Z_{j,h}+T_{t}+\alpha_{1}{HDC}_{j,h,t}+\alpha_{2}{Age}_{j,h,t}+\alpha_{3}{Male}_{j,h,t}\\ &+\beta_{1}I_{j}\times T_{t}+\beta_{2}I_{h}\times T_{t}\\ &+\beta_{3}I_{j}\times {HDC}_{j,h,t}+\beta_{4}I_{h}\times {HDC}_{j,h,t}\\ &+\beta_{5}I_{j}\times {Age}_{j,h,t}+\beta_{6}I_{h}\times {Age}_{j,h,t}\\ &+\beta_{7}I_{j}\times {Male}_{j,h,t}+\beta_{8}I_{h}\times {Male}_{j,h,t}\\ &+\gamma {PC}_{h,t}+\epsilon_{j,h,t} \end{aligned} $$ By including all the possible pairwise interactions, we identify the coefficient of interest by estimating the empirical models outlined above by ordinary least squares, assuming that the remaining variation is explained by the dummy variable that identifies the adoption of the PC model. From a methodological point of view, over-controlling in a linear regression model is similar to statistical matching (e.g. propensity scoring) and the models deliver very similar results (among others, see [37]). To account for the presence of a common random effect at the hospital level, all the models are estimated with clustered standard errors at the hospital level.
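To fix ideas, the following is a minimal sketch of how the basic specification of Eq. (1) could be estimated in Python with statsmodels, using OLS with standard errors clustered at the hospital level. It is not the authors' replication code, and all file and column names are hypothetical; df stands for the MDC-hospital-year panel described above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical panel: one row per (MDC, hospital, year) cell
df = pd.read_csv("mdc_hospital_year_panel.csv")
df["log_los"] = np.log(df["avg_length_of_stay"])  # log of the mean outcome

# C(mdc):C(hospital) produces the Z_{j,h} cell fixed effects; C(year) gives T_t;
# pc is the treatment dummy, so its coefficient is gamma in Eq. (1).
model = smf.ols(
    "log_los ~ C(mdc):C(hospital) + C(year) + hdc + age + male + pc",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["hospital"]})
print(result.params["pc"], result.bse["pc"])  # gamma and its clustered SE

The saturated model of Eq. (2) would be obtained analogously by adding the interaction terms (e.g. C(mdc):C(year), C(hospital):hdc, and so on) to the formula.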
Data and performance measures
We use a large administrative data set covering the full population of patients and hospitals operating in the Lombardy Health Care System. Our data set combines information on more than 17.4 million hospital discharge charts (HDCs), over 25 MDCs, provided by all Lombardy hospitals, concerning 13.3 million patients between 2004 and 2012 (Footnote 4). They are individual records with daily frequency, but since we focus here on the average efficiency and effectiveness of MDCs in hospitals that moved to a PC organization as compared with those in hospitals that maintained the traditional organization, we consider the yearly frequency of the average HDC. The administrative data set that we use is routinely collected by hospitals for both financial and managerial purposes and is relayed regularly to the regional administration. The main advantages of using administrative records consist of full population coverage and the significant reduction of measurement and sampling errors, with plenty of details about the diagnosis and the service provided. Each HDC reports information regarding the patient characteristics (gender, age and province of residence) and the discharge characteristics (e.g. diagnosis-related group (Footnote 5), length of stay in hospital, major diagnostic category, regional reimbursement, number of times the patient was physically or administratively transferred within the same hospital before discharge (Footnote 6), etc.). This data set has been linked with other information, also provided by the Lombardy Health Care Department, regarding several hospital characteristics, such as ownership and geographic location. These data are also matched with the registry office records of the deaths of all residents in the region. According to the international literature ([38, 39]), outcome indicators of hospital care essentially analyze costs in relation to some proxies for the quantity of delivered care. Although these outcomes are not entirely under the control of the hospitals, they deal with the risk of adverse events (effectiveness) as well as with the hospitals' ability to satisfy the care demand (efficiency) ([40]). Moreover, outcome indicators are highly relevant from the viewpoint of both patients and policy makers as reliable proxies for health care quality (Footnote 7). Our data set allows us to define a limited number of efficiency and effectiveness outcomes. Here, as a measure of efficiency, we consider the following index: Average days of stay in hospital: this index counts the average number of days from admission to the hospital to discharge. It provides a measure of efficiency as, by reducing the length of stay (LoS), a hospital reduces its costs. As for the effectiveness measures, we consider the rate at which patients are re-hospitalized in the same major diagnostic category (MDC) within 30 days (both in the same hospital and in different hospitals), as ceteris paribus this might signal an early discharge or unsatisfactory treatment. Related to this, we would also like to test whether patients treated in PC hospitals have different mortality rates from those treated in traditionally organized ones. The literature studying acute care typically focuses on in-hospital mortality, possibly also because of the difficulty of reporting accurately all discharged patients' deaths.
In fact, our administrative data record whether any discharged patient died at any moment after the day of discharge up to the end of 2012, allowing us to construct a mortality rate within 30 days of discharge, which is likely to provide an accurate indication of care effectiveness ([41]). We have no a priori expectation regarding how the PC organization could affect this outcome variable. It might even be that for such an important health care outcome, the PC innovation will be found to have no significant effect. Hence, we consider three effectiveness indexes based on the available information:
Average number of readmissions within 30 days: this index measures the number of readmissions of the same patient to a Lombardy hospital within the same MDC within 30 days of discharge;
Average number of readmissions in the same hospital within 30 days: this index measures the number of readmissions of the same patient to the same hospital and the same MDC within 30 days of discharge;
Average mortality rate within 30 days: this index defines the mortality rate of patients within 30 days of hospital discharge.
In fact, this set of indexes provides only a partial picture of efficiency and effectiveness at the hospital level. For instance, one would also like to measure efficiency by comparing the costs and benefits of treatment and by assessing the incentives provided to medical doctors and nurses, and to measure effectiveness by analysing patients' satisfaction and care quality; however, our data do not provide such information and, given their administrative nature, they cannot be merged with other data sets.
Sample selection and descriptive statistics
Before using the data set to estimate the empirical models outlined above, we discarded the discharge charts belonging to patients with a province of residence outside Lombardy, discharges for hospitalizations shorter than one day and subacute hospital discharge charts (Footnote 8). As all three hospitals that introduced the PC model in the last quarter of 2010 (the Ospedale Civile di Vimercate, the Ospedale S. Anna di Como and the Ospedale di Legnano) are public and non-research-oriented hospitals, we selected only hospitals belonging to the same category. We also dropped a few other hospitals that could not be clearly ascribed to either the treated or the control group, either because they had started the PC model implementation before, or immediately after, our observation period, or because it was not possible to identify a clear starting point for their move to the PC model. As all the PC hospitals considered provide care to patients of any MDC, we also dropped those hospitals that did not present HDCs for all MDCs. Hence, we collapsed the data set by major diagnostic category (MDC) (Footnote 9), hospital and year and dropped all the cells produced by the collapse with fewer than 30 discharges to preserve an acceptable level of precision (Footnote 10). Eventually, we obtained a panel of 25 MDCs belonging to 86 hospitals over at most 9 years (from 2004 to 2012), with a total size of nearly 13 thousand observations. Table 2 shows some summary statistics of the total sample, showing that in the average MDC the average age is 51.77, 47.89% of patients are male and the number of discharges is about 522 per year. Table 3 shows some descriptive statistics of the efficiency and effectiveness outcomes for the PC and functional hospitals before and after the organizational change that took place at the end of 2010.
The average number of days in hospital of the average MDC increased by 0.3 in PC hospitals as opposed to 0.41 in functional ones. The rate of re-hospitalization in the same hospital and the same MDC decreased for all Lombardy hospitals after 2010 compared with the previous period, suggesting an overall increase in effectiveness, but the decrease was slightly larger in PC hospitals (−0.008) than in functional hospitals (−0.003). As we observe the full population of Lombardy hospitals, we can also observe the case of patients who needed re-hospitalization for the same MDC but decided to change hospital, possibly because they did not appreciate the treatment received in the first one. The descriptive statistics suggest that the change in re-hospitalization for the same MDC but in a different hospital is slightly negative for the average MDC of PC hospitals (−0.007) and slightly positive for the group of controls (0.002). As our administrative data are matched with registry office data recording people who passed away, we can also produce a clear estimate of the mortality rate of patients after discharge from a hospital. The average mortality rate for the average MDC is about 6% for PC hospitals and slightly higher for functional ones; however, what matters most for our research focus is that the change between before and after 2010 is very similar for both the PC and the functionally organized hospitals. The differences in the changes between the pre- and post-treatment periods of average MDCs in the control and treated groups for the considered measures of efficiency and effectiveness suggest that some improvement might have been produced by the switch to the PC organizational model, but for a proper statistical assessment of their significance we need the estimation of the empirical model outlined above.
Table 2 Summary statistics of patients' characteristics in average MDCs
Table 3 Summary statistics before and after the organizational change, in average MDCs
At the core of our difference-in-difference identification strategy lies the so-called parallel trends assumption. A graphical representation of the parallel trends assumption is provided in Fig. 1. However, as in some cases the graphical representation is not conclusive, we also tested the internal validity of our identification strategy by checking whether there is any evidence rejecting the assumption of parallel trends for the period before the treatment of PC and traditionally organized hospitals. The results are presented in Table 4, showing that there is no evidence to reject the parallel trends assumption (Footnote 11); hence we proceed to present our main results.
Fig. 1 Parallel trends
Table 4 Test for parallel trends of treated and control hospitals in the period before the PC organizational change
Table 5 shows our main results. This table presents the estimates of the γ coefficients; the empirical model is as outlined in the Empirical Model section above and the list of efficiency and effectiveness measures is as above. Each coefficient estimate comes from a different regression; only the estimate of our coefficient of interest, its standard error in brackets and the total number of observations are presented. This offers an immediate analysis of the overall effect of adopting the PC organization on the average MDC for each outcome analyzed.
Table 5 The effect of the PC organizational change, difference-in-difference estimations
Column (1) presents the results for the basic model (Eq. 1), always including the year fixed effects, the average number of discharged patients, the average age and the share of male patients by MDC j, hospital h and year t. In column (2) we add the interactions of hospital and MDC fixed effects with the number of discharges to capture effects that could be hospital-specific, MDC-specific or size-specific. We also add the interactions of hospital and MDC dummies, in column (3) with the average age and in column (4) with the average gender composition of each cell, to capture the compositional differences of MDC-hospital cells. The estimate of γ for the saturated model of Eq. 2 is then presented in column (4). All the models are estimated with cluster-corrected standard errors at the hospital level. Column (1) of Table 5 shows that, on the one hand, there is no evidence that PC hospitals reach higher levels of efficiency (their coefficients are not statistically different from zero), while, on the other hand, we find significant evidence of higher levels of effectiveness of PC hospitals in terms of the re-hospitalization rate in the same MDC. However, once we control for the interaction of MDCs, year dummies and the number of discharges in each cell (column 2) and eventually reach the fully saturated model (column 4), all the coefficients become statistically significant, suggesting that, taking into account the average heterogeneity among MDCs, the PC organizational model has an effect on both the selected efficiency and the selected effectiveness outcomes. These results suggest the following conclusions. The PC organizational model significantly increases hospitals' efficiency, reducing the length of hospitalization (−4.6%). This estimate rises strongly when heterogeneity in the number of discharges by MDC is taken into consideration in addition to the year-specific interactions, as the γ coefficient estimate jumps from about −0.015 to −0.069 from column (1) to column (4). However, in addition to the predictable higher level of efficiency associated with the PC model, one should also expect an impact in terms of effectiveness, looking at the average re-hospitalization rate within 30 days of discharge for the same MDC, and for the same MDC and hospital, and at the mortality rate at 30 days. We find no statistically significant reduction of the mortality rate (the estimated coefficient is 0) but a relatively more important reduction in both re-hospitalization rates of discharged patients. Column (4) suggests that, having controlled for the average patients' age and the gender composition of the hospital and MDC, the rate of re-hospitalization decreases slightly but significantly, by 0.6% within the same MDC and hospital and by 0.4% within the same MDC only. This is a relevant drop, which immediately affects the welfare of discharged patients. There are, however, some caveats that should be stressed. First, there is the role of potentially confounding factors, which could bias our estimates. For instance, the transition to a PC model from a traditional organizational model involves changing incentives for medical doctors, nurses and managers, but to account for them we would need access to detailed information about the composition of the hospital workforce and its remuneration and incentive policies. This is something that unfortunately we cannot address with the available data. Second, there is the issue of the external validity of our results.
We provide here an empirical analysis using recent data on public hospitals operating in the Italian national health care system. Our results are likely to be relevant to public hospitals operating in national health care systems (i.e. massively funded by public revenues), which are prevalent across Europe. However, we are unable to say whether our estimated effects would be confirmed in countries where there is no similar system. Our evaluation analysis could be criticized for not capturing all the complexities and articulations of the PC model or the specificities of each implementation of the general framework of the model. In fact, we claim that our quantitative approach does not substitute but complements more qualitative analyses based, for instance, on ethnographic approaches or case study analyses ([17, 32, 42]). Our approach allows one to obtain an assessment of the overall average change in a set of outcomes, controlling for a large range of confounding factors, and to measure the overall effect of the switch to the PC model by exploiting the time variation of treated and untreated units and the heterogeneity among MDCs and hospitals.
Robustness checks
As we mentioned above, the adoption of the PC organizational model is not an immediate process but often requires a preparation period as well as a period of adaptation to the new organizational standards. Of the three hospitals that switched all their MDCs to the PC model, two did so in October and one in November 2010. This is the reason why we defined the PC dummy variable for these three hospitals as equal to one for the years 2011 and 2012 only and equal to zero for all the other years. Hence, we tested the robustness of the results by simultaneously dropping the years 2010 and 2011, which allows respectively for a preparation period and an adjustment period with respect to the PC model (Table 6).
Table 6 The effect of the PC organizational change, difference-in-difference estimations excluding the years 2010 and 2011
The results show that the main findings for both the efficiency and the effectiveness of the PC model are broadly confirmed, with only a slightly larger effect of the PC innovation on the average length of hospital stay. The results on effectiveness also show overall robustness to the exclusion of the years 2010–2011 (Table 6). Finally, observing that our sample size is affected by the fact that many MDC-year cells present fewer than 30 HDCs per year and that small denominators (MDCs with very few patients in any one year) may introduce statistical noise into our outcome indicators - and for these reasons have been dropped from the analysis - we estimate the same empirical models allowing for different minimum cell sizes. The results are presented in Tables 7 and 8 and again produce evidence of the overall strong robustness of our estimates.
Table 7 The effect of the PC organizational change, difference-in-difference estimations selecting different minimum cell sizes
Table 8 The effect of the PC organizational change, difference-in-difference estimations selecting different minimum cell sizes excluding the years 2010 and 2011
One can notice that the effects on re-hospitalization rates (both in the same MDC and in the same hospital-MDC) are largely unaffected by the different cell sizes. The signs do not change and the statistical significance of these indicators is roughly constant, between 20 and 40 minimum cell sizes, and equal to the baseline selection of Table 5.
As for the size of the reduction in mortality and in the length of the hospital stay, it is positively correlated with the cell size, suggesting that the stricter the restriction, the stronger and more significant the estimated effect, implying that the adoption of a PC organizational model has stronger effects in relatively larger MDCs. Patient-centered care has been widely embraced by many of the industry's most influential care providers, policymakers, regulatory agencies, research bodies and funders. This profound shift can be traced to a 2001 Institute of Medicine report ([43]) that identified a focus on patient-centered care as one factor constituting high-quality care. This solidified the patient-centered care approach not only as a way of creating a more appealing patient experience, but also as a fundamental practice for the provision of high-quality care, with direct implications for hospital organizational models and processes ([44]). In this paper we took advantage of the fortunate coincidence of a quasi-experimental setting regarding all the MDCs in three hospitals of an important region of Italy and of the availability of a unique administrative data set to develop an ex post evaluation of an innovation from a traditional functional model to a PC organizational model in hospitals. We suggested a quantitative framework for overcoming some of the current challenges in the evaluation of hospital organizational models (for a similar approach to policy analysis in health care see [45]). To the best of our knowledge, this is the first quantitative assessment of such an important and frequently adopted organizational setting in hospitals. We managed to estimate difference-in-difference models that support some of the theoretical claims of the PC model as a whole. In particular, the PC model seems to have an effect on effectiveness, which is a relevant dimension of the quality of health care services. The rate of readmission for PC hospitals decreases slightly, by less than 1%, with no significant effect on the death rate of patients. The strongest effects are found in the efficiency variable measuring the duration of hospitalization. These results are in line with the theoretical framework outlined in the Empirical Model section, which suggested increased efficiency and effectiveness of PC hospitals. In particular, the increase in efficiency emerges from the reduction of the hospitalization duration. As for efficacy, our results, showing a reduction in re-hospitalization, suggest an increased level of efficacy of hospitals that switched to a PC organization. The lack of statistical significance for mortality rates suggests that this organizational innovation is unlikely to have any impact on such an outcome. Considering the PC model change as a relevant turning point with respect to hospital management and policy making, and considering also the extent of its implementation and the related debate in European countries and in the international context (as we have seen, experiments can be found in England ([22]), in the Netherlands ([23]), in Spain ([24]), in Sweden ([25]) and in Italy), we stress the relevance of this paper's attempt in two directions. First, this paper fills the quantitative assessment gap related to the PC hospital model with a specific focus on efficiency and effectiveness.
Such an organizational change towards the PC model can be a costly process, implying a rebalancing of responsibilities and power among hospital personnel, affecting inter-disciplinary and inter-professional relations (e.g. between medical and nursing staff) and possibly affecting individual motivations, generating enthusiasm for or opposition to the change ([28]). Nevertheless, our results confirm the effect of these hospital innovations on efficiency ([11]), adding some robust results, thus suggesting that a change to the PC model can be worthwhile. This evidence can be used to inform and sustain hospital managers and policy makers in their hospital design efforts, and to communicate the innovation advantages within the hospital organizations, among the personnel and in the public debate. On the basis of these data analyses, we believe that this health care innovation can be regarded as an actual improvement in meeting the needs of the community, countering the possible perception that it may have been driven by managerial, international or political trends. As suggested by McKee and Healy ([36]), all that we can be certain of is that the hospital of the future will be different from the hospital of today; the PC model is an interesting innovation, which, however, requires a proper evaluation. Second, this research exercise can also be considered a guiding example for the ex post evaluation of broad interventions. This is a complicated task, although a worthwhile one, as it provides fundamental suggestions to policy makers engaged in important and complex future innovations ([46]). This study refers to the long-standing tradition of program evaluation, which may be used whenever the real world provides data to support hypothesis testing with a counterfactual approach. The availability of administrative data, which is increasing in all developed countries and is characterised by little measurement error and highly detailed information, creates the opportunity for sound quantitative assessments, offering evidence that is useful in planning innovation initiatives and in assessing their policy implications for society as a whole. This paper provides a quantitative estimation of the efficiency and effectiveness changes following the implementation of the PC hospital model in a major region of Italy. Taking advantage of a quasi-experimental setting and a detailed administrative dataset, we perform an ex post evaluation of innovating the hospital organization by switching from a traditional functional model to a PC organizational one. We provide robust evidence, for the average MDC, of a statistically significant and positive effect of the introduction of the PC model on both effectiveness and efficiency. In particular, the increase in efficiency emerges from the reduction of the average length of stay, while for efficacy our results show a reduction in the re-hospitalization rates of hospitals that switched to a PC organization. These results are in line with our theoretical framework, which suggests an increase in the efficiency and effectiveness of PC hospitals, and they provide a sound example of a quantitative evaluation of an organizational intervention adopting a counterfactual approach.
Notes
MDC codes are internationally recognized thanks to their adoption in the United States medical care reimbursement system. They are formed by mapping all the DRG codes into 25 mutually exclusive diagnosis areas.
We estimate log-linear models of the outcome means considering that the outcomes that we use are strictly non-negative
(e.g. means of count variables or rates), not over-dispersed and do not raise zero-inflation concerns ([47], p. 645).
The coefficient of interest, γ, refers to a dummy variable, PC_{h,t}, that is equal to one for those hospitals that adopted a PC model in the years immediately after their organizational change and zero otherwise. This is clearly equivalent to including a standard interaction term between the treatment variable and a post-reform dummy. Also notice that there is no need to include a treatment dummy, as we have the full set of hospital fixed effects, or a post-reform dummy variable, as we have the full set of year fixed effects.
Data are provided by the Health Care Department of the Lombardy Region and are processed in collaboration with CRISP - the Inter-university Research Centre on Public Services at the University of Milan-Bicocca (Italy). Individual HDC records are not publicly available under the Italian privacy law. The Health Care Department of the Lombardy Region must be contacted to discuss the provision of the data.
The diagnosis-related group (DRG) code is a standard classification ([48]) adopted in the Lombardy Region of Italy since 1995. The DRG classifies hospital discharge charts depending on patients' diagnoses, procedures, complications, co-morbidity and demographic factors (such as age and gender).
In fact, HDC data trace the department that is in charge of each patient and record the total number of departmental transfers of each HDC, but not whether a transfer is in fact a bed change within the same hospital or, more simply, a change of the administratively responsible department.
An important efficiency measure that we do not observe is the cost of single HDCs, as we have no information on the composition and cost of the physical and human resources used. In fact, we are provided with the cost of reimbursement by the Lombardy Health Care System to hospitals for each HDC, but this variable is unsuitable for use as a cost measure as it is affected by DRG up-coding practices and by the discretionality of the regional policy makers in deciding the price of the duration and the DRG of each HDC, allowing for strategic behaviour of hospital managers. For an extensive analysis of the reimbursement mechanism adopted in the Lombardy Health Care System, see [49].
The attractiveness of the Lombardy Health Care System is indeed relevant, with a proportion of hospitalized patients from other regions close to 10% ([49]) of the yearly provision. The main reason for dropping the HDCs of patients with residence outside Lombardy is that they might be occasional users of the Lombardy Health Care System and we lack relevant information about them regarding their possible re-hospitalization and death. For instance, as we know the date of death of Lombardy residents only, including non-Lombardy patients would bias the average mortality rate of patients downward by an unpredictable amount. We also dropped one-day-long and subacute HDCs due to comparability issues. A similar approach was used by [50].
Some robustness checks assessing the relevance of this selection rule are provided in Tables 7 and 8.
We developed this test (results in Table 4) for all the models that we estimated in Table 5 (columns 1 to 4), starting from the basic equation (Eq. 1) to the saturated equation (Eq. 2), as follows. First, we computed each outcome variable of interest after partialling out the contribution of all the independent variables except for PC_{h,t}.
Next, we regressed each of them on a fourth-degree polynomial time trend, allowing all the coefficients to differ between the PC and the traditionally organized hospitals (unrestricted model), and we regressed the same dependent variable on a fourth-degree polynomial time trend in which only the intercept is allowed to differ between the two groups considered (restricted model). Finally, we computed the statistic $F = \left(\left(R^{2}_{UR}-R^{2}_{R}\right)/r\right)/\left(\left(1-R^{2}_{UR}\right)/(n-k)\right)$, which follows an F-distribution with $(r, n-k)$ degrees of freedom and in which $R^{2}_{R}$ and $R^{2}_{UR}$ are respectively the $R^2$ of the restricted and unrestricted models, $r$ is the number of restrictions imposed and $n-k$ is the number of degrees of freedom of the unrestricted model.
Abbreviations
PC: Patient-centered
MDC: Major diagnostic categories
HDC: Hospital discharge charts
DRG: Diagnosis-related group
References
1. Pencheon D. Developing a sustainable health and care system: lessons for research and policy. J Health Serv Res Policy. 2013; 18(4):193–4.
2. Mohrman SA, Kanter M. Designing for health: learning from Kaiser Permanente. In: Mohrman SA, Kanter M, Shani ABR, editors. Organizing for Sustainable Healthcare. London: Emerald; 2012. p. 77–111.
3. Burnham JC. Health Care in America: A History. Baltimore: Johns Hopkins University Press; 2015.
4. Gorli M, Galuppo L, Liberati EG. Hospital innovations in the light of patient engagement. Insights from the organizational field. In: Graffigna G, Barello S, Triberti S, editors. Patient engagement: a consumer-centered model to innovate healthcare. Warsaw: De Gruyter Open; 2015.
5. Hernandez SE, Conrad DA, Marcus-Smith MS, Reed P, Watts C. Patient-centered innovation in health care organizations: A conceptual framework and case study application. Health Care Manage Rev. 2013; 38(2):166–75.
6. Rathert C, Wyrwich MD, Boren SA. Patient centered care and outcomes: A systematic review of the literature. Med Care Res Rev. 2013; 70(4):351–79.
7. Berwick DM. What patient-centered should mean: Confessions of an extremist. Health Aff (Millwood). 2009; 28(4):555–65.
8. Gorli M, Galuppo L, Liberati E, Scaratti G. The patient centered organizational model in Italian hospitals: Practical challenges for patient engagement. Healthc Ethics Train Concepts Methodologies Tools Appl. 2017; 1:290–308.
9. Gerteis M, Edgman-Levitan S, Daley J, Delbanco T. Through the Patient's Eyes: Understanding and Promoting Patient-centered Care. San Francisco, California: Jossey-Bass; 1993.
10. Lega F, DePietro C. Converging patterns in hospital organization: beyond the professional bureaucracy. Health Policy. 2005; 74(3):261–81.
11. Vera A, Kuntz L. Process-based organization design and hospital efficiency. Health Care Manage Rev. 2007; 32(1):55–65.
12. Villa S, Barbieri M, Lega F. Restructuring patient flow logistics around patient care needs: implications and practicalities from three critical cases. Health Care Manag Sci. 2009; 12(2):155–65.
13. Cicchetti A. L'organizzazione dell'ospedale. Fra tradizione e strategie per il futuro. Milano: Vita e Pensiero; 2002.
14. Gesler W, Bell M, Curtis S, Hubbard P, Francis S. Therapy by design: evaluating the UK hospital building program. Health Place. 2004; 10(2):117–28.
15. Sikka V, Luke RD, Ozcan YA. The efficiency of hospital-based clusters: Evaluating system performance using data envelopment analysis. Health Care Manage Rev. 2009; 43(3):251–61.
16. Salge TO, Vera A. Hospital innovativeness and organizational performance: Evidence from English public acute care. Health Care Manage Rev. 2009; 34(1):54–67.
17. Gorli M, Kaneklin C, Scaratti G. A multi-method approach for looking inside healthcare practices. Qual Res Organ Manag. 2012; 7(3):290–307.
18. Walston S, Kimberley J. Re-engineering hospitals: experience and analysis from the field. Hosp Health Serv Adm. 1997; 42:143–63.
19. Shetty KD, DeLeire T, White C, Bhattacharya J. Changes in U.S. hospitalization and mortality rates following smoking bans. J Policy Anal Manage. 2011; 30(1):6–28.
20. Vos L, Chalmers S, Duckers M, Groenewegen P, Wagner C, van Merode G. Towards an organisation-wide process-oriented organisation of care: A literature review. Implement Sci. 2011; 6(1):8. https://doi.org/10.1186/1748-5908-6-8.
21. Waring JJ, Bishop S. Lean healthcare: rhetoric, ritual and resistance. Soc Sci Med. 2010; 71(7):1332–40.
22. Hurst K. Progress with Patient Focused Care in the United Kingdom. Leeds: NHS Executive; 1995.
23. Bainton D. Building blocks. Health Serv J. 1995; 105(23):25–7.
24. Coulson-Thomas C. Re-engineering hospitals and health care processes. Br J Health Care Manag. 1996; 2(6):338–42.
25. Brodersen J, Thorwid J. Enabling sustainable change for healthcare in Stockholm. Br J Healthcare Comput Inf Manag. 1997; 14(4):23–6.
26. Lega F. Lights and shades in the managerialization of the Italian national health service. Health Serv Manage Res. 2008; 21:248–61.
27. Edwards N, McKee M. The future role of the hospital. J Health Serv Res Policy. 2002; 7(1):1–2.
28. Liberati EG, Gorli M, Scaratti G. Reorganising hospitals to implement a patient-centered model of care. J Health Organ Manag. 2015; 29:848–73.
29. Scholl I, Zill JM, Härter M, Dirmaier J. An integrative model of patient-centeredness - a systematic review and concept analysis. PLoS ONE. 2014; 9(9).
30. Drupsteen J, van der Vaart T, van Donk DP. Integrative practices in hospitals and their impact on patient flow. Int J Oper Prod Manag. 2013; 33(7):912–33.
31. Radnor Z, Holweg M, Waring J. Lean in healthcare: The unfilled promise? Soc Sci Med. 2012; 74(3):364–71.
32. Liberati EG, Gorli M, Moja L, Galuppo L, Ripamonti S, Scaratti G. Exploring the practice of patient centered care: The role of ethnography and reflexivity. Soc Sci Med. 2015; 133:45–52.
33. Andersen Consulting. Patient Centred Care: Reinventing the Hospital. New York: Andersen Consulting; 1992.
34. Glanville R. Architecture and design. In: Schutyser K, Edwards B, editors. Hospital Healthcare Europe, 1998–1999. The Official HOPE Reference Book. Brussels: Campden Publisher; 1998.
35. Dias C, Escoval A. Improvement of hospital performance through innovation: toward the value of hospital care. Health Care Manag. 2013; 2(32):129–40.
36. McKee M, Healy J. Hospitals in a Changing Europe. Buckingham: Open University Press; 2002.
37. Senn S, Graf E, Caputo A. Stratification for the propensity score compared with linear regression techniques to assess the effect of treatment or exposure. Stat Med. 2007; 26(30):5529–44. https://doi.org/10.1002/sim.3133.
38. Boyce NW. Quality and outcome indicators for acute healthcare services: a research project for the National Hospital Outcomes Program (NHOP), Health Service Outcomes Branch. Canberra: Australian Government Publishing Service; 1997.
39. Ash AS, Fienberg SF, Louis TA, Normand SLT, Stukel TA, Utts J. Statistical issues in assessing hospital performance. Quant Health Sci Publ Presentations. 2012. Paper 1114.
40. Berta P, Seghieri C, Vittadini G. Comparing health outcomes among hospitals: the experience of the Lombardy region. Health Care Manag Sci. 2013; 16(3):245–57. https://doi.org/10.1007/s10729-013-9227-1.
41. Austin C, Tu J. Comparing clinical data with administrative data for producing acute myocardial infarction report cards. J R Stat Soc A. 2006; 169 Part 1:115–26.
42. Renedo A, Marston C. Developing patient-centred care: an ethnographic study of patient perceptions and influence on quality improvement. BMC Health Serv Res. 2015; 15(122).
43. Institute of Medicine, Committee on Quality of Health Care in America. Crossing the quality chasm: a new health system for the 21st century. Washington: National Academies Press; 2001.
44. Charmel P, Frampton SB. Building the business case for patient-centered care. Healthc Financ Manag. 2008; 62(3):80–5.
45. Chatterji P, Decker SL, Markowitz S. The effects of mandated health insurance benefits for autism on out-of-pocket costs and access to treatment. J Policy Anal Manage. 2015; 34(2):328–53.
46. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008; 337:1655.
47. Wooldridge JM. Econometric Analysis of Cross Section and Panel Data. London: MIT Press; 2002.
48. Mayes R. The origins, development, and passage of Medicare's revolutionary prospective payment system. J Hist Med Allied Sci. 2007; 62:21–55.
49. Vittadini G, Berta P, Martini G, Callea G. The effect of a law limiting upcoding on hospital admissions: evidence from Italy. Empir Econ. 2012; 42(2):563–82. https://doi.org/10.1007/s00181-012-0548-6.
50. Mariani L, Cavenago D. Defining hospitals' internal boundaries: an organizational complexity criterion. Health Policy. 2014; 117:239–46.
We would like to thank Marco Albini, Paolo Berta, Massimiliano Bratti, Daniele Checchi, Francesco De Fazio, Corinna Ghirelli, Claudio Jommi, Elisa Giulia Liberati, Marta Marsilio, Catia Nicodemo, Enrico Rettore, Dylan Roby, Chiara Seghieri, Giuseppe Scaratti, Luigi Siciliani and the editor of this journal for their helpful comments and suggestions. We are also grateful to Luca Merlino and the Health Care Department of the Lombardy Region for providing us with the data.
**The final revision of both the text and the empirical strategy of the article was conducted when Stefano Verzillo had taken up service at the European Commission, Joint Research Centre, Competence Centre on Microeconomic Evaluation (CC-ME). The scientific output expressed does not imply a policy position of the European Commission. Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of this publication.
The data are administrative records accessible upon authorization granted by the Health Care Department of the Lombardy Region. The data analyzed in this paper were processed in collaboration with CRISP - the Inter-university Research Centre on Public Services at the University of Milan-Bicocca (Italy). The Health Care Department of the Lombardy Region must be contacted to discuss the provision of the data.
Ethical approval and consent to participate
Access to administrative data was provided by the Health Care Department of the Lombardy Region.
European Commission, Joint Research Centre**, Via E. Fermi, 2749, Ispra (VA), 21027, Italy
Stefano Verzillo
Irvapp-FBK, Via Santa Croce 77, Trento, 38122, Italy
Carlo V. Fiorio
CRISP - Interuniversity Research Centre on Public Services, Università degli Studi di Milano-Bicocca, Piazza dell'Ateneo Nuovo, 1, Milano, 20126, Italy
Università Cattolica del Sacro Cuore, Largo Gemelli, 1, Milano, 20123, Italy
Mara Gorli
Università degli Studi di Milano, Via Conservatorio, 7, Milano, 20121, Italy
Dondena Centre, Bocconi University, Via Rontgen, 1, Milano, 20136, Italy
CERISMAS, Centro di Ricerche e Studi in Management Sanitario c/o Università Cattolica del Sacro Cuore, Via Necchi 7, Milano, 20123, Italy
All the authors have made substantial contributions to the conception, design and drafting of the manuscript. In particular, CVF and SV performed the statistical and data analysis, while MG carried out the literature and background analysis. All authors read and approved the final manuscript.
Correspondence to Stefano Verzillo.
Keywords: Patient-centered model; Hospital change; Ex-post evaluation; Difference-in-differences
Maximum Entropy Inverse Reinforcement Learning
Chelsea Sidrane
In reinforcement learning, we aim to teach computers how to make decisions on their own. Say we want to teach a computer to drive a car. In order to do this, we will write a program that takes in the "world state" — here this is the state of the car itself and the state of the roadway — current velocity, speed limit, surrounding cars, etc. Given this input, the program then outputs what driving "move" to make next. We call this program a policy. If we're doing deep reinforcement learning, our policy will have tunable parameters that we, the programmers, adjust via training. We often start out training by randomly initializing our tunable parameters. Initially, the "moves" that our policy spits out will be bad. We then collect data by trying out our policy and then use this data to tune the parameters until the policy makes good decisions for the driving task. This data is usually in the form of state, action, and reward tuples: $(s,a,r)$. The $s$ is the world state, the $a$ is the action the policy took in that state, say, moving over one lane to the left, and the reward is something that the RL designers construct for the system. Specifically, the designer specifies a reward function that generates a real-valued reward given the state $s$ and action $a$. So if we the RL designers, as experienced drivers, think that moving over one lane to the left when the policy finds itself behind a slow car is a good move, then we might award one point to our system for making that decision. We might also construct negative rewards for actions that could result in bad outcomes. For example, we might assign a negative reward for getting within 1 foot of any other car, as this could cause a crash.
Inverse Reinforcement Learning
Sometimes, it's hard to specify these rewards. For example, we all (mostly) know what it means to "drive well" as Abbeel and Ng describe in their paper from 2004 [1]. Driving well involves keeping a safe distance from the car in front of you and from the cars on either side, driving within some margin of the speed limit, switching lanes to allow faster cars to pass you, but not switching lanes too often, slowing down and speeding up smoothly, making turns with a wide enough radius so as to keep the passengers in your car from feeling sick, and so forth. But how do we assign relative numerical weights to all of those components that ought to play into "reward"? Is keeping a safe distance from the car in front of you equally as important as accelerating smoothly? Or more important? If it's more important, is it twice as important? Or 1.3 times as important?
The Ambiguity Problem
These questions are inherently difficult to answer, so sometimes instead of creating a reward function, we call in an expert to demonstrate good driving behavior, and then learn from them. One way we can learn from the expert is to directly copy their behavior, but something else we can do is to try and learn the expert's reward function, assuming they are performing roughly optimally with respect to it. This is known as Inverse Reinforcement Learning. But why would we want to do this instead of directly copying the expert's behavior? We might choose to learn the expert's underlying reward function because we want to understand why the expert behaves like they do.
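To see concretely why hand-specifying rewards is hard (the problem that motivates IRL), here is a minimal sketch of the kind of hand-crafted reward function an RL designer might write for the driving example. Everything in it (the feature names, the thresholds, and especially the weights) is an illustrative assumption, not taken from any particular paper or system:

def driving_reward(state, action):
    # Hand-picked weights encode relative importance. Is smooth
    # acceleration a tenth as important as keeping a safe distance,
    # or half? These numbers are guesses, which is exactly the problem.
    reward = 0.0
    if state["distance_to_car_ahead_ft"] < 1.0:
        reward -= 10.0  # dangerously close to another car
    if state["car_ahead_is_slow"] and action == "move_left":
        reward += 1.0   # passing a slow car is rewarded
    reward -= 0.1 * abs(state["acceleration"])  # penalize jerky driving
    return reward

The constants 10.0, 1.0, and 0.1 silently answer the "how many times as important?" questions above, and it is hard to justify any particular choice of them by hand.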
Ultimately, we want to achieve the same outcomes as the expert [2], and the reward function more succinctly and directly describes desirable outcomes than does a policy demonstrating expert behavior [1]. Knowing the desired outcome is helpful when the expert has a different action space than our computer learner. For example, an expert human driver has actions available like "scoot forward in the driver's seat to get a better view" but an autonomous car most likely has a camera that is fixed to its mount which cannot "scoot forward in the driver's seat". We may still be able to achieve the same outcome as the expert — driving well — but we may not be able to copy the expert's actual actions to help us get there. Okay so we've called our driving expert in and she's given us some demonstrations of good driving. In one of these examples she stays in the same lane on the highway for several miles. There is an opportunity to move to a faster lane, but the open space that is available would mean that the distance to the car in front would be less than her stated, desired distance of 200 feet. However, as we look at the data, we also realize that switching lanes in that time interval would have put her above her desired lane switching frequency. As we try to reconstruct the relative weights in her reward function, in this scenario it's unclear how to assign credit for her decision. Which was the more important factor in her decision to remain in her lane, lane switching frequency or distance to the car in front? Were the factors equally important? Was one much more important? We can't know by looking at this data point. We could scale the weighting between the two criteria however we'd like, but because both criteria recommend the same behavior in this situation, the behavior wouldn't change as the relative weighting changes. This is known as reward function ambiguity. Here we are talking about how there may be ambiguity within a single example trajectory. While this particular ambiguity may be resolved by looking at additional data, there will always exist multiple reward functions to describe observed behavior [3].
Maximum Entropy IRL
There have been several strategies proposed to choose among this set of reward functions that fit the observed behavior. One such strategy is called Maximum Entropy Inverse Reinforcement Learning. Before we get into any of the technical details, the main idea of maximum entropy inverse reinforcement learning is this: We want to incorporate the expert data into our reconstruction of the reward function while maintaining equal preference (in terms of negative or positive rewards) over stuff (state-action pairs) that we don't have data about. Okay now for the technical details: In maximum entropy inverse reinforcement learning we are going to consider a stochastic policy. In our policy we will execute a certain action with some probability as opposed to definitely executing one action depending on the world state. We have some probability of choosing action $a_1$ at starting state $s_1$ which takes us to state $s_2$ with some probability, … and so forth. In this way, we can compute the probability of the trajectory $\tau = s_1, s_2$ under our policy. In fact, under our policy, we can compute the probability of any possible trajectory $\tau$. These probabilities make up a distribution over trajectories. We want to construct a reward function and then compute an optimal or approximately optimal policy with respect to that reward function.
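To make the distribution over trajectories concrete, here is a small sketch that computes the probability of a single trajectory under a stochastic policy. The two-state environment, the policy table and the transition table are toy assumptions invented purely for illustration:

def trajectory_probability(traj, policy, transition, initial):
    # traj is a list [(s0, a0), (s1, a1), ..., (sT, None)];
    # policy[s][a] is the probability of picking action a in state s;
    # transition[(s, a)][s2] is the probability of landing in s2.
    (s0, _) = traj[0]
    prob = initial[s0]
    for (s, a), (s_next, _) in zip(traj[:-1], traj[1:]):
        prob *= policy[s][a] * transition[(s, a)][s_next]
    return prob

policy = {"s1": {"a1": 0.7, "a2": 0.3}, "s2": {"a1": 1.0}}
transition = {("s1", "a1"): {"s1": 0.2, "s2": 0.8},
              ("s1", "a2"): {"s1": 1.0},
              ("s2", "a1"): {"s2": 1.0}}
print(trajectory_probability([("s1", "a1"), ("s2", None)],
                             policy, transition, {"s1": 1.0}))
# 1.0 * 0.7 * 0.8 = 0.56 (up to floating-point rounding)

Running this for every possible trajectory yields exactly the distribution over trajectories that we want to match to the expert's.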
We want the distribution over trajectories under that policy to match the expert's distribution over trajectories, but we don't have access to the expert's full distribution — only sample demonstrations, which make up our data set. However if all we're worried about is fitting the data set, there are many reward functions that would do so, as we explained earlier. Ziebart proposed that the way to break the "tie" between all of these reward functions was to select the one that was "maximally non-committal" regarding missing information [3]. Specifically, we want the distribution over trajectories from the resulting policy to be "maximally non-committal" with respect to trajectories that it doesn't have data about. This idea of a "maximally non-committal" distribution given the data was formalized by Jaynes in 1957 [4], who defined this distribution using an idea from information theory: entropy. What's entropy? Let's take a quick digression. Entropy measures the "surprise" of a distribution. If you have a random variable $X$ which takes values $x\in\mathcal{X}$, if $x$ occurs with low probability, then $\frac{1}{p(x)}$ is a large number — you can say it is very surprising to see value $x$. If we take a log of this quantity to get $\log_2 \frac{1}{p(x)}$ the behavior is still qualitatively similar — $\log_2 \frac{1}{p(x)}$ increases as $p(x)$ approaches 0. We then take an expectation over the possible values to obtain the entropy, $H$: $H(X) = \mathbb{E}\left[\log_2 \frac{1}{p(x)}\right] = -\sum_{x\in\mathcal{X}} p(x) \log_2 p(x)$. The entropy of a distribution is largest when it is uniformly distributed, meaning that $X$ takes on each $x$ with equal probability: $p(x) = \frac{1}{|\mathcal{X}|}$. Entropy can also be interpreted as measuring the uncertainty in a distribution. A sharply peaked distribution has low entropy and we can be more confident in predicting the value of a sample (we will be less often "surprised") than when compared to a uniform distribution (we will always be "surprised"). Jaynes concluded that the distribution that introduces the least bias with respect to yet unseen data is the distribution that maximizes the likelihood of the data and has the maximum entropy. So in Maximum Entropy IRL we solve the ambiguity problem by selecting a reward function such that the resulting distribution over trajectories maximizes the likelihood of the expert data and also has the maximum entropy. Maximizing the entropy in this way represents an acknowledgement that we want to be uncertain or ambivalent about all trajectories that have the same 'distance' to our dataset. Note that we do have to define the notion of distance. We do this through defining feature functions for the trajectories that turn each trajectory into a feature vector. To wrap things up, we'll give an example of the outcome of using Maximum Entropy IRL. Say I have a driving dataset where I only drive straight on local roads. If I used MaxEnt IRL to extract a reward function for this dataset, I would obtain a reward function that indicated preference for going straight above turning left or right, but would give equal reward values for going both left and right. And that's it! Thanks for reading.
[1] Abbeel, Pieter, and Andrew Y. Ng. "Apprenticeship learning via inverse reinforcement learning." In Proceedings of the twenty-first international conference on Machine learning, p. 1. ACM, 2004.
"Inverse Reinforcement Learning." Lecture video, CS294-112 Deep Reinforcement Learning Sp17. URL [3] Ziebart, Brian D., Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. "Maximum entropy inverse reinforcement learning." In Aaai, vol. 8, pp. 1433-1438. 2008. [4] Jaynes, Edwin T. "Information theory and statistical mechanics." Physical review 106, no. 4 (1957): 620. Other useful references on IRL: Lecture Notes by Professor Katerina Fragkiadaki March 24, 2019 March 26, 2019 The Informaticists Tagged deep learning, inverse reinforcement learning, maximum entropy inverse reinforcement learning, reinforcement learning 1 Comment One thought on "Maximum Entropy Inverse Reinforcement Learning" Pingback: Writing Tutorials for Topics in Information Theory – The Informaticists ← Inferring Connectivity in Neural Spike Train Recordings A Debate: The Information Bottleneck Theory for DNNs →
\begin{document} \title{Zero-one laws for provability logic: Axiomatizing\\ validity in almost all models and almost all frames} \author{\IEEEauthorblockN{Rineke Verbrugge\\Department of Artificial Intelligence, University of Groningen, e-mail [email protected]} \IEEEauthorblockA{ } } \IEEEoverridecommandlockouts \IEEEpubid{\makebox[\columnwidth]{978-1-6654-4895-6/21/\$31.00~ \copyright2021 IEEE } \hspace{\columnsep}\makebox[\columnwidth]{ }} \maketitle \begin{abstract} It was shown in the late 1960s that each formula of first-order logic without constants and function symbols obeys a zero-one law: As the number of elements of finite models increases, every formula holds either in almost all or in almost no models of that size. Therefore, many properties of models, such as having an even number of elements, cannot be expressed in the language of first-order logic. For modal logics, limit behavior for models and frames may differ. Halpern and Kapron proved zero-one laws for classes of models corresponding to the modal logics K, T, S4, and S5. They also proposed zero-one laws for the corresponding classes of frames, but their zero-one law for K-frames has since been disproved. In this paper, we prove zero-one laws for provability logic with respect to both model and frame validity. Moreover, we axiomatize validity in almost all irreflexive transitive finite models and in almost all irreflexive transitive finite frames, leading to two different axiom systems. In the proofs, we use a combinatorial result by Kleitman and Rothschild about the structure of almost all finite partial orders. On the way, we also show that a previous result by Halpern and Kapron about the axiomatization of almost sure frame validity for S4 is not correct. Finally, we consider the complexity of deciding whether a given formula is almost surely valid in the relevant finite models and frames. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} In the late 1960s, Glebskii and colleagues proved that first-order logic without function symbols satisfies a zero-one law, that is, every formula is either almost always true or almost always false in finite models \cite{glebskii1969}. More formally, let $L$ be a language of first-order logic and let $A_n(L)$ be the set of all {\em labelled} $L$-models with universe $\{1, \ldots, n\}$. Now let $\mu_n(\sigma)$ be the fraction of members of $A_n(L)$ in which $\sigma$ is true, i.e., \[\mu_n(\sigma) = \frac{ \mid \{ M \in A_n(L) : M \models \sigma \}\mid}{ \mid A_n(L)\mid}\] Then for every $\sigma \in L$, $\lim_{n\to\infty} \mu_n(\sigma) = 1$ or $\lim_{n\to\infty} \mu_n(\sigma) = 0$.\footnote{The distinction between labelled and unlabelled probabilities was introduced by Compton~\cite{compton1987}. The unlabelled count function counts the number of isomorphism types of size $n$, while the labelled count function counts the number of labelled structures of size $n$, that is, the number of relevant structures on the universe $\{1,\ldots,n\}$. It has been proved both for the general zero-one law and for partial orders that in the limit, the distinction between labelled and unlabelled probabilities does not make a difference for zero-one laws~\cite{fagin1976,compton1987,compton1988b}.
Per finite size $n$, labelled probabilities are easier to work with than unlabelled ones~\cite{goranko2003}, so we will use them in the rest of the article.} This was also proved later but independently by Fagin~\cite{fagin1976}; Carnap had already proved the zero-one law for first-order languages with only unary predicate symbols~\cite{Carnap1950} (see \cite{compton1988,goranko2003} for nice historical overviews of zero-one laws). Later, Kaufmann showed that monadic existential second-order logic does not satisfy a zero-one law~\cite{kaufmann1987}. Kolaitis and Vardi have made the boundary more precise by showing that a zero-one law holds for the fragment of existential second-order logic ($\Sigma^1_1$) in which the first-order part of the formula belongs to the Bernays-Sch\"{o}nfinkel class ($\exists^\ast \forall^\ast$ prefix) or the Ackermann class ($\exists^\ast \forall\exists^\ast$ prefix)~\cite{kolaitis1987,kolaitis1990}; however, no zero-one law holds for any other class, for example, the G\"{o}del class ($\forall^2\exists^\ast$ prefix)~\cite{pacholski1989}. Kolaitis and Vardi proved that a zero-one law does hold for the infinitary finite-variable logic $\mathcal{L}^{\omega}_{\infty\omega}$, which implies that a zero-one law also holds for LFP(FO), the extension of first-order logic with a least fixed-point operator~\cite{kolaitis1992}. The above zero-one laws and other limit laws have found applications in database theory~\cite{gurevich1993,halpern2006,libkin2013} and algebra~\cite{zaid2017}. In AI, there has been great interest in asymptotic conditional probabilities and their relation to default reasoning and degrees of belief~\cite{grove1996,halpern2006}. In this article, we focus on zero-one laws for a modal logic that imposes structural restrictions on its models, namely, provability logic, which is sound and complete with respect to finite strict (irreflexive) partial orders~\cite{segerberg1971}. The zero-one law for first-order logic also holds when restricted to partial orders, both reflexive and irreflexive ones, as proved by Compton \cite{compton1988b}. To prove this, he used a surprising combinatorial result by Kleitman and Rothschild~\cite{kleitman1975} on which we will also rely for our results. Let us give a summary. \subsection{Kleitman and Rothschild's result on finite partial orders} \label{KR} Kleitman and Rothschild proved that with asymptotic probability 1, finite partial orders have a very special structure: There are no chains $u<v<w<z$ of more than three objects and the structure can be divided into three levels: \begin{itemize} \item $L_1$, the set of minimal elements; \item $L_2$, the set of elements immediately succeeding elements in $L_1$; \item $L_3$, the set of elements immediately succeeding elements in $L_2$. \end{itemize} Moreover, the ratios of the expected sizes of $L_1$ and $L_3$ to $n$ are both $\frac{1}{4}$, while the ratio of the expected size of $L_2$ to $n$ is $\frac{1}{2}$. As $n$ increases, each element in $L_1$ has as immediate successors asymptotically half of the elements of $L_2$ and each element in $L_3$ has as immediate predecessors asymptotically half of the elements of $L_2$~\cite{kleitman1975}.\footnote{Interestingly, it was recently found experimentally that for smaller $n$ there are strong oscillations, while the behavior stabilizes only around $n=45$~\cite{Henson2017}.} Kleitman and Rothschild's theorem holds both for reflexive (non-strict) and for irreflexive (strict) partial orders.
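As an aside (this illustration is ours, not part of the original development), labelled counting can be made concrete by brute force for very small $n$, far below the range where the Kleitman-Rothschild structure stabilizes: enumerate all labelled strict partial orders on $\{0,\ldots,n-1\}$ and measure how many already avoid chains of four elements.

\begin{verbatim}
from itertools import product

def strict_partial_orders(n):
    # Brute force over all sets of ordered pairs; irreflexivity holds
    # by construction, and an irreflexive transitive relation is
    # automatically antisymmetric.
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {p for p, b in zip(pairs, bits) if b}
        if all((x, z) in rel
               for (x, y) in rel for (y2, z) in rel if y2 == y):
            yield rel

def height(rel, n):
    # Number of elements on a longest chain.
    memo = {}
    def h(x):
        if x not in memo:
            memo[x] = 1 + max((h(z) for (y, z) in rel if y == x),
                              default=0)
        return memo[x]
    return max(h(x) for x in range(n))

for n in range(1, 5):
    orders = list(strict_partial_orders(n))
    short = sum(1 for rel in orders if height(rel, n) <= 3)
    print(n, len(orders), short / len(orders))
\end{verbatim}

For $n=4$ this reports $219$ labelled strict partial orders, of which only the $4!=24$ linear orders contain a chain of four elements; the theorem above says that the fraction without such chains tends to $1$.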
\subsection{Zero-one laws for modal logics: Almost sure model validity} In order to describe the known results about zero-one laws for modal logics with respect to the relevant classes of models and frames, we first give reminders of some well-known definitions and results. \noindent Let $\Phi=\{p_1,\ldots, p_k\}$ be a finite set of propositional atoms\footnote{In the parts of this paper on almost sure model validity, we take $\Phi$ to be finite, although the results can be extended to enumerably infinite $\Phi$ by the methods described in~\cite{halpern1994,grove1996}.} and let $L(\Phi)$ be the modal language over $\Phi$, inductively defined as the smallest set closed under: \begin{enumerate} \item If $p \in \Phi$, then $p\in L(\Phi)$. \item If $A\in L(\Phi)$ and $B \in L(\Phi)$, then also $\neg A \in L(\Phi)$, $\Box A \in L(\Phi)$, $\Diamond A\in L(\Phi)$, $(A \wedge B) \in L(\Phi)$, $(A \vee B)\in L(\Phi)$, and $(A \rightarrow B) \in L(\Phi)$. \end{enumerate} \noindent A {\em Kripke frame} (henceforth: frame) is a pair $F=(W, R)$ where $W$ is a non-empty set of worlds and $R$ is a binary accessibility relation. A {\em Kripke model} (henceforth: model) $M=(W, R, V)$ consists of a frame $(W, R)$ and a valuation function $V$ that assigns to each atomic proposition in each world a truth value $V_w(p)$, which can be either 0 or 1. The truth definition is as usual in modal logic, including the clause: \[M,w \models \Box \varphi \mbox{ if and only if}\] \[\mbox{for all } w' \mbox{ such that } wRw', M,w'\models \varphi. \] \noindent A formula $\varphi$ is valid in model $M=(W, R, V)$ (notation $M \models \varphi$) iff for all $w\in W$, $M,w\models\varphi$. \noindent A formula $\varphi$ is valid in frame $F=(W, R)$ (notation $F\models\varphi$) iff for all valuations $V$, $\varphi$ is valid in the model $(W, R, V)$.\\ \noindent Let $\mathcal{M}_{n,\Phi}$ be the set of finite models over $\Phi$ with set of worlds $W=\{1, \ldots, n\}$. We take $\nu_{n,\Phi}$ to be the uniform probability distribution on $\mathcal{M}_{n,\Phi}$. Let $\nu_{n,\Phi}(\varphi)$ be the measure in $\mathcal{M}_{n,\Phi}$ of the set of models in which $\varphi$ is valid.\\ \noindent Let $\mathcal{F}_{n,\Phi}$ be the set of finite frames with set of worlds $W=\{1, \ldots, n\}$. We take $\mu_{n,\Phi}$ to be the uniform probability distribution on $\mathcal{F}_{n,\Phi}$. Let $\mu_{n,\Phi}(\varphi)$ be the measure in $\mathcal{F}_{n,\Phi}$ of the set of frames in which $\varphi$ is valid. \\ \noindent Halpern and Kapron proved that every formula $\varphi$ in modal language $L(\Phi)$ is either valid in almost all models (``almost surely true'') or not valid in almost all models (``almost surely false'')~\cite[Corollary 4.2]{halpern1994}: \[ \mbox{Either } \lim_{n\to\infty} \nu_{n,\Phi}(\varphi) = 0 \mbox{ or } \lim_{n\to\infty} \nu_{n,\Phi}(\varphi) = 1.\] \noindent In fact, this zero-one law for models already follows from the zero-one law for first-order logic~\cite{glebskii1969,fagin1976} by Van Benthem's translation method \cite{Benthem1976,Benthem1983}.\label{Benthem} As a reminder, let $^\ast$ be given by: \begin{itemize} \item $p_i^\ast = P_i(x)$ for atomic sentences $p_i\in \Phi$; \item $(\neg \varphi)^\ast = \neg \varphi^\ast$; \item $(\varphi \wedge \psi)^\ast = (\varphi^\ast \wedge \psi^\ast)$ (similar for other binary operators); \item $(\Box \varphi)^\ast = \forall y(Rxy \rightarrow \varphi^\ast [y/x])$.
\end{itemize} Van Benthem mapped each model $M=(W,R,V)$ to a classical model $M^\ast$ whose objects are the worlds in $W$, with the obvious binary relation $R$, while for each atom $p_i\in\Phi$, $P_i=\{ w \in W \mid M,w\models p_i \}= \{ w \in W \mid V_w(p_i)=1 \}$. Van Benthem then proved that for all $\varphi \in L(\Phi)$, $M\models \varphi$ iff $M^\ast \models \forall x \; \varphi^\ast$~\cite{Benthem1983}. Halpern and Kapron~\cite{halpern1992,halpern1994} showed that a zero-one law for modal models immediately follows from Van Benthem's result and the zero-one law for first-order logic. By Compton's above-mentioned result that the zero-one law for first-order logic holds when restricted to the partial orders~\cite{compton1988b}, this modal zero-one law can also be restricted to finite models on reflexive or irreflexive partial orders, so that a zero-one law for finite models of provability logic immediately follows. However, one would like to prove a stronger result and axiomatize the set of formulas $\varphi$ for which $\lim_{n\to\infty} \nu_{n,\Phi}(\varphi) = 1$. Also, Van Benthem's result does not allow proving zero-one laws for classes of frames instead of models: We have $F\models \varphi$ iff $F^\ast \models \forall P_1 \ldots \forall P_k \forall x \varphi^\ast$, but the latter formula is not necessarily a negation of a formula in $\Sigma^1_1$ with its first-order part in one of the Bernays-Sch\"{o}nfinkel or Ackermann classes (see~\cite{halpern1994}). Halpern and Kapron~\cite{halpern1992,halpern1994} aimed to fill in the above-mentioned gaps for the modal logics {\bf K}, {\bf T}, {\bf S4} and {\bf S5} (see~\cite{chellas1980} for definitions). They proved zero-one laws for the relevant classes of finite models for these logics. For all four, they axiomatized the classes of sentences that are almost surely true in the relevant finite models. \subsection{The quest for zero-one laws for frame validity} Halpern and Kapron's paper also contains descriptions of four zero-one laws with respect to the classes of finite frames corresponding to {\bf K}, {\bf T}, {\bf S4} and {\bf S5}~\cite[Theorem 5.1 and Theorem 5.15]{halpern1994}: Either $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 0$ or $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 1$.\\ They proposed four axiomatizations for the sets of formulas that would be almost always valid in the corresponding four classes of frames~\cite{halpern1994}. However, almost 10 years later, Le Bars surprisingly proved them wrong with respect to the zero-one law for {\bf K}-frames~\cite{Bars2002}. By proving that the formula $q \wedge \neg p \wedge \Box\Box((p\vee q) \rightarrow \neg\Diamond(p\vee q)) \wedge \Box\Diamond p$ does {\em not} have an asymptotic probability, he showed that in fact {\em no} zero-one law holds with respect to all finite frames. Doubt had already been cast on the zero-one law for frame validity by Goranko and Kapron, who proved that the formula $\neg \Box\Box(p \leftrightarrow \neg \Diamond p)$ fails in the countably infinite random frame, while it is almost surely valid in {\bf K}-frames~\cite{goranko2003}.
(See also~\cite[Section 9.5]{Goranko2007}.)\footnote{We will show in this paper that for irreflexive partial orders, almost-sure frame validity in the finite {\em does} transfer to validity in the corresponding countable random Kleitman-Rothschild frame, and that the validities are quite different from those for almost all {\bf K} frames (see Section~\ref{Frames}).} Currently, the problem of axiomatizing the modal logic of almost sure frame validities for finite {\bf K}-frames appears to be open.\footnote{For results up to 2006, see~\cite{Goranko2007}; for more recent work, see~\cite{Goranko2020}.} As a reaction to Le Bars' counter-example, Halpern and Kapron~\cite{halpern2003erratum} published an erratum, in which they showed exactly where their erstwhile proof of~\cite[Theorem 5.1]{halpern1994} had gone wrong. It may be that the problem they point out also invalidates their similar proof of the zero-one law with respect to finite reflexive frames, corresponding to {\bf T}~\cite[Theorem 5.15 a]{halpern1994}. However, with respect to frame validity for {\bf T}-frames, as far as we know, no counterexample to a zero-one law has yet been published and Le Bars' counterexample cannot easily be adapted to reflexive frames; therefore, the situation remains unsettled for {\bf T}.\footnote{Joe Halpern and Bruce Kapron (personal communication) and Jean-Marie Le Bars (personal communication) confirmed that the problem for {\bf T} is currently unsettled.} \subsection{Halpern and Kapron's axiomatization for almost sure frame validities for S4 fails} Unfortunately, Halpern and Kapron's proof of the 0-1 law for reflexive, transitive frames and the axiomatization of the almost sure frame validities for reflexive, transitive frames~\cite[Theorem 5.16]{halpern1994} turn out to be incorrect as well, as follows.\footnote{The author of this paper discovered the counter-example after a colleague had pointed out that the author's earlier attempt at a proof of the 0-1 law for frames of provability logic, inspired by Halpern and Kapron's~\cite{halpern1994} axiomatization, contained a serious gap.} Halpern and Kapron introduce the axiom DEP2$'$ and they axiomatize almost-sure frame validities in reflexive transitive frames by {\bf S4}+DEP2$'$~\cite[Theorem 5.16]{halpern1994}, where DEP2$'$ is: $\neg(p_1 \wedge \Diamond(\neg p_1 \wedge \Diamond (p_1 \wedge \Diamond \neg p_1)))$.\\ \noindent The axiom DEP2$'$ precludes $R$-chains $tRuRvRw$ of more than three different states. \begin{proposition} Suppose $\Phi=\{p_1,p_2\}$.
Now take the following sentence $\chi$: \[\chi:= (p_1 \wedge \Diamond (\neg p_1 \wedge \Diamond p_1 \wedge \Box (p_1\rightarrow p_2))) \rightarrow \] \[\Box((\neg p_1 \wedge \Diamond p_1) \rightarrow \Diamond\Box (p_1 \rightarrow p_2)) \] Then {\bf S4}+DEP2$'\not \vdash\chi$ but $\lim_{n\to\infty} \mu_{n,\Phi}(\chi) = 1$. \end{proposition} \begin{proof} It is easy to see that {\bf S4}+DEP2$'\not \vdash\chi$ by taking the five-point reflexive transitive model of Figure~\ref{treecounter}, where \[M,w_0\models p_1 \wedge \Diamond (\neg p_1 \wedge \Diamond p_1 \wedge \Box(p_1\rightarrow p_2)) \] but $M, w_3 \not \models (\neg p_1 \wedge \Diamond p_1) \rightarrow \Diamond \Box(p_1 \rightarrow p_2)$, so \[M,w_0 \not \models \Box((\neg p_1 \wedge \Diamond p_1) \rightarrow \Diamond \Box(p_1 \rightarrow p_2)).\] Now we sketch a proof that $\chi$ is valid in almost all reflexive Kleitman-Rothschild frames.\footnote{Halpern and Kapron~\cite[Theorem 4.14]{halpern1994} proved that almost surely, every reflexive transitive relation is in fact a partial order, so the Kleitman-Rothschild result also holds for finite frames with reflexive transitive relations.} So let $M=(W,R,V)$ be an arbitrary, sufficiently large (with appropriate extension axioms holding) Kleitman-Rothschild frame $(W, R)$ together with an arbitrary valuation $V$. Let $w$ be arbitrary in $W$ and suppose $M,w\models p_1\wedge\Diamond(\neg p_1\wedge\Diamond p_1\wedge \Box(p_1\rightarrow p_2))$. Then there is a $w_1\in W$ with $wRw_1$ and $M,w_1\models \neg p_1\wedge\Diamond p_1\wedge \Box(p_1\rightarrow p_2)$. We want to show that $M,w\models\Box((\neg p_1 \wedge \Diamond p_1) \rightarrow \Diamond \Box(p_1 \rightarrow p_2))$. To do this, suppose $w_2$ is arbitrary in $W$ with $wRw_2$ and $M,w_2\models \neg p_1 \wedge \Diamond p_1$. The above facts imply that both $w_1$ and $w_2$ are in the middle layer and $w$ is in the bottom layer. Then almost surely, there is a $w_3$ in the top layer with $w_1Rw_3$ and $w_2Rw_3$. This confluence follows from Compton's extension axiom (b)~\cite{compton1988b} (similar to (b) in Proposition~\ref{Axalmost} of the current paper). Therefore, by $M,w_1\models \Box(p_1 \rightarrow p_2)$, also $M,w_3\models \Box(p_1 \rightarrow p_2)$, so $M,w_2\models\Diamond\Box(p_1 \rightarrow p_2)$. Therefore $M,w\models\Box((\neg p_1 \wedge \Diamond p_1) \rightarrow \Diamond \Box(p_1 \rightarrow p_2))$. Now, because $w\in W$ was arbitrary, we have $M\models\chi$. \end{proof} \begin{figure} \caption{Counter-model showing that the formula $\chi:=$ $(p_1 \wedge \Diamond (\neg p_1 \wedge \Diamond p_1 \wedge \Box (p_1\rightarrow p_2))) \rightarrow \Box((\neg p_1 \wedge \Diamond p_1) \rightarrow \Diamond\Box (p_1 \rightarrow p_2))$ does not hold in $w_0$ of this three-layer model. The relation in the model is the reflexive transitive closure of the one represented by the arrows. } \label{treecounter} \end{figure} \noindent Therefore, the axiom system given in~\cite[Theorem 5.16]{halpern1994} is {\em not} complete with respect to almost-sure frame validities for finite reflexive transitive orders. Fortunately, it seems possible to mend the situation and still obtain an axiom system that is sound and complete with respect to almost sure $\mathcal{S}$4 frame validity, by adding extra axioms characterizing the umbrella- and diamond properties that we will also use for the provability logic {\bf GL} in Section~\ref{Frames}; the $\mathcal{S}$4 version is future work.
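The failure of $\chi$ can also be checked mechanically. The following sketch (Python) builds one concrete five-point reflexive transitive model consistent with the argument above; since the text and Figure~\ref{treecounter} fix the valuation only partially, the particular valuation below is our own reconstruction:

\begin{verbatim}
W = ["w0", "w1", "w2", "w3", "w4"]
base = {("w0","w1"), ("w1","w2"), ("w0","w3"), ("w3","w4")}
R = set(base) | {(w, w) for w in W}   # reflexive closure
changed = True                        # transitive closure
while changed:
    changed = False
    for (a, b) in list(R):
        for (c, d) in list(R):
            if b == c and (a, d) not in R:
                R.add((a, d)); changed = True

# Valuation: our reconstruction, consistent with the proof sketch.
val = {"w0": {"p1"}, "w1": set(), "w2": {"p1", "p2"},
       "w3": set(), "w4": {"p1"}}

def holds(w, f):
    op = f[0]
    if op == "atom": return f[1] in val[w]
    if op == "not":  return not holds(w, f[1])
    if op == "and":  return holds(w, f[1]) and holds(w, f[2])
    if op == "imp":  return (not holds(w, f[1])) or holds(w, f[2])
    if op == "box":  return all(holds(v, f[1]) for (u, v) in R if u == w)
    if op == "dia":  return any(holds(v, f[1]) for (u, v) in R if u == w)

p1, p2 = ("atom", "p1"), ("atom", "p2")
safe = ("box", ("imp", p1, p2))                     # Box(p1 -> p2)
ante = ("and", p1, ("dia", ("and", ("not", p1),
         ("and", ("dia", p1), safe))))
cons = ("box", ("imp", ("and", ("not", p1), ("dia", p1)),
         ("dia", safe)))
print(holds("w0", ("imp", ante, cons)))   # False: chi fails at w0
\end{verbatim}

Here $w_3$ is the world witnessing the failure of the consequent, exactly as in the proof.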
\subsection{Almost sure model validity does not coincide with almost sure frame validity} Interestingly, whereas for full classes of frames, `validity in {\em all} finite models' coincides with `validity in {\em all} finite frames' of the class, this is not the case for `almost sure validity'. In particular, for both the class of reflexive transitive frames ($\mathcal{S}$4) and the class of reflexive transitive symmetric frames ($\mathcal{S}$5), there are many more formulas that are `valid in {\em almost all} finite models' than `valid in {\em almost all} finite frames' of the appropriate kinds. Our work has been greatly inspired by Halpern and Kapron's paper~\cite{halpern1994} and we also use some of the previous results that they applied, notably the above-mentioned combinatorial result by Kleitman and Rothschild about finite partial orders.\\ \noindent The rest of this paper is structured as follows. In Section~\ref{Provability-intro}, we give a brief reminder of the axiom system and semantics of provability logic. In the central Sections~\ref{GL-models},~\ref{Random} and~\ref{Frames}, we show why provability logic obeys zero-one laws both with respect to its models and with respect to its frames. We provide two axiom systems characterizing the formulas that are almost always valid in the relevant models, respectively almost always valid in the relevant frames. When discussing almost sure frame validity, we will investigate both the almost sure validity in finite irreflexive transitive frames and validity in the countable random Kleitman-Rothschild frame, and show that there is transfer between them. Section~\ref{Complexity} provides a sketch of the complexity of the decidability problems of almost sure model and almost sure frame validity for provability logic. Finally, Section~\ref{Discussion} presents a conclusion and some questions for future work. The result on models in Section~\ref{GL-models} was proved 26 years ago, and presented in \cite{Verbrugge1995,verbrugge2018}, but the proofs have not been published before in an archival venue. The results about almost sure frame validities for {\bf GL} are new, as well as the counter-example against the axiomatization by Halpern and Kapron of almost sure $\mathcal{S}$4 frame validities.\footnote{Due to the length restriction, this paper includes proof sketches of the main results. Full proofs are to be included in an extended version for a journal.} \section{Provability logic} \label{Provability-intro} \label{GL} In this section, a brief reminder is provided about the protagonist of this paper: the provability logic {\bf GL}, named after G\"{o}del and L\"{o}b. As axioms, it contains all axiom schemes from $\mathbf{K}$ and the extra scheme GL. Here follows the full set of axiom schemes of $\mathbf{GL}$: \begin{align} \tag{A1} &\text{All (instances of) propositional tautologies} \label{eq:A1}\\ \tag{A2} &\square(\varphi\rightarrow \psi) \rightarrow (\square\varphi \rightarrow \square\psi) \label{eq:A2}\\ \tag{GL} &\square (\square \varphi \rightarrow \varphi)\rightarrow \square \varphi \label{eq:GL} \end{align} The rules of inference are modus ponens and necessitation: \begin{quote} if $\mathbf{GL}\vdash \varphi\rightarrow \psi$ and $\mathbf{GL}\vdash \varphi$, then $\mathbf{GL}\vdash\psi$.\\ if $\mathbf{GL}\vdash \varphi$, then $\mathbf{GL}\vdash \square\varphi$.
\end{quote} Note that the transitivity axiom $\square \varphi \rightarrow \square \square \varphi$ follows from $\mathbf{GL}$, which was first proved by De Jongh and Sambin~\cite{boolos1993,Verbrugge2017}, but that the reflexivity axiom $\square \varphi \rightarrow \varphi$ does not follow. Indeed, Segerberg proved in 1971 that provability logic is sound and complete with respect to all transitive, {\em converse well-founded} frames (i.e., for each non-empty set $X$, there is an $R$-maximal element of $X$; or equivalently: there is no infinitely ascending sequence $x_1Rx_2Rx_3Rx_4\ldots$). Segerberg also proved completeness with respect to all finite, transitive, irreflexive frames~\cite{segerberg1971}. The latter soundness and completeness result will be relevant for our purposes. For more information on provability logic, see, for example,~\cite{smorynski1985,boolos1993,Verbrugge2017}. \label{Zero-one-provability} In the next three sections, we provide axiomatizations, first for almost sure model validity and then for almost sure frame validity, with respect to the relevant finite frames corresponding to {\bf GL}, namely the irreflexive transitive ones. For the proofs of the zero-one laws for almost sure model and frame validity, we will need completeness proofs of the relevant axiomatic theories -- let us refer to such a theory by $\mathbf{S}$ for the moment -- with respect to almost sure model validity and with respect to almost sure frame validity. Here we will use Lindenbaum's lemma and maximal $\mathbf{S}$-consistent sets of formulas. For such sets, the following useful properties hold, as usual~\cite{segerberg1971,chellas1980}: \begin{proposition} \label{maxconsistent} Let $\Theta$ be a maximal $\mathbf{S}$-consistent set of formulas in $L(\Phi)$. Then for each pair of formulas $\varphi, \psi \in L(\Phi)$: \begin{enumerate} \item $\varphi \in \Theta$ iff $\neg \, \varphi \not\in \Theta$; \item $(\varphi \wedge \psi) \in \Theta \Leftrightarrow \varphi \in \Theta$ and $\psi \in \Theta$; \item if $\varphi \in \Theta$ and $(\varphi \rightarrow \psi) \in \Theta$ then $\psi \in \Theta$; \item if $\Theta \vdash_{\mathbf{S}} \; \varphi$ then $\varphi \in \Theta$. \end{enumerate} \end{proposition} \section{Validity in almost all finite irreflexive transitive models} \label{GL-models} The axiom system $\mathbf{AX^{\Phi,M}_{GL}}$ has the same axioms and rules as {\bf GL} (see Section~\ref{GL}) plus the following axioms: \begin{align} \tag{T3} &\Box\Box\Box\bot \label{Threelayers}\\ \tag{C1} & \Diamond \top \rightarrow \Diamond A \label{Carnap1}\\ \tag{C2} & \Diamond \Diamond \top \rightarrow \Diamond(B \wedge \Diamond C) \label{Carnap2} \end{align} \noindent In the axiom schemes C1 and C2, the formulas $A$, $B$ and $C$ all stand for consistent conjunctions of literals over $\Phi$. These axiom schemes have been inspired by Carnap's consistency axiom: $\Diamond \varphi$ for any $\varphi$ that is a consistent propositional formula~\cite{Carnap1947}, which has been used by Halpern and Kapron~\cite{halpern1994} for axiomatizing almost sure model validities for $\mathcal{K}$-models. Note that $\mathbf{AX^{\Phi,M}_{GL}}$ is not a normal modal logic, because one cannot substitute just any formula for $A, B, C$; for example, substituting $p_1\wedge \neg p_1$ for $A$ in C1 would make that formula equivalent to $\neg\Diamond\top$, which is clearly undesired.
However, even though $\mathbf{AX^{\Phi,M}_{GL}}$ is not closed under uniform substitution, it is still a propositional theory, in the sense that it is closed under modus ponens. \begin{example} For $\Phi=\{p_1, p_2\}$, the axiom scheme C1 boils down to the following four axioms: \begin{align} & \Diamond \top \rightarrow \Diamond (p_1 \wedge p_2)\\ & \Diamond \top \rightarrow \Diamond (p_1 \wedge \neg p_2)\\ &\Diamond \top \rightarrow \Diamond (\neg p_1 \wedge p_2)\\ &\Diamond \top \rightarrow \Diamond (\neg p_1 \wedge \neg p_2) \end{align} The axiom scheme C2 covers 16 axioms, corresponding to the $2^4$ possible choices of positive or negative literals, as captured by the following scheme, where ``$[\neg]$'' is shorthand for a negation being present or absent at the current location: \[\Diamond\Diamond \top \rightarrow \Diamond ([\neg]p_1 \wedge [\neg]p_2 \wedge \Diamond([\neg]p_1 \wedge [\neg]p_2))\] \end{example} \noindent The following definition of the canonical asymptotic model over a finite set of propositional atoms $\Phi$ is based on the set of propositional valuations on $\Phi$, namely, the functions $v$ from the set of propositional atoms $\Phi$ to the set of truth values $\{0,1\}$. As worlds, we introduce for each such valuation $v$ three new distinct objects, for mnemonic reasons called $u_v$ (Upper), $m_v$ (Middle), and $b_v$ (Bottom); see Figure 2. \begin{definition} \label{Canonical-GL} Define $\mathrm{M}^{\Phi}_{GL}= (W, R, V)$, the {\em canonical asymptotic model} over $\Phi$, with $W, R, V$ as follows:\\ $W= \{b_v \mid v \mbox{ a propositional valuation on } \Phi \} \cup \\ \mbox{ \hspace{0.55cm} } \{m_v \mid v \mbox{ a propositional valuation on } \Phi \} \cup \\ \mbox{ \hspace{0.55cm} } \{u_v \mid v \mbox{ a propositional valuation on } \Phi \}$\\ $R=\{\langle b_v, m_{v'}\rangle \mid v, v' \mbox{ propositional valuations on } \Phi \} \cup \\ \mbox{ \hspace{0.55cm} }\{\langle m_v, u_{v'}\rangle \mid v, v' \mbox{ propositional valuations on } \Phi \} \cup \\ \mbox{ \hspace{0.55cm} } \{\langle b_v, u_{v'}\rangle \mid v, v' \mbox{ propositional valuations on } \Phi \}$; \\ and for all $p_i\in\Phi$ and all propositional valuations $v$ on $\Phi$, the modal valuation $V$ is defined by:\\ $V_{b_v} (p_i) = V_{m_v} (p_i) = V_{u_v} (p_i) = v(p_i)$.\footnote{ If $\Phi$ were enumerably infinite, the definition could be adapted so that precisely those propositional valuations are used that make only finitely many propositional atoms true, see also~\cite{halpern1994}.} \end{definition} \begin{figure*} \caption{The canonical asymptotic model $\mathrm{M}^{\Phi}_{GL}= (W, R, V)$ for $\Phi=\{p_1, p_2\}$, defined in Definition~\ref{Canonical-GL}. The accessibility relation is the transitive closure of the relation given by the arrows drawn in the picture. The four relevant valuations are $v_1, v_2, v_3, v_4$, given by $v_1(p_1)=v_1(p_2)=1$; $v_2(p_1)=1, v_2(p_2)=0$; $v_3(p_1)=0, v_3(p_2)=1$; $v_4(p_1)=v_4(p_2)=0$.} \label{canonical16} \end{figure*} \noindent For the proof of the zero-one law for model validity, we will need a completeness proof of $\mathbf{AX^{\Phi,M}_{GL}}$ with respect to almost sure model validity, including use of Lindenbaum's lemma and Proposition~\ref{maxconsistent}, applied to $\mathbf{AX^{\Phi,M}_{GL}}$. 
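For concreteness, the canonical asymptotic model of Definition~\ref{Canonical-GL} is easy to generate programmatically. The following sketch (Python; ours and purely illustrative, with valuations represented as bit tuples) builds $\mathrm{M}^{\Phi}_{GL}$ for $k$ atoms and checks the semantic counterpart of axiom C1, namely that every world with at least one successor sees worlds realizing all $2^k$ valuations:

\begin{verbatim}
from itertools import product

def canonical_model(k):
    # Three layers 'b' < 'm' < 'u', one world per valuation per layer;
    # every world sees every world in every strictly higher layer,
    # so the relation is irreflexive and transitive by construction.
    vals = list(product([0, 1], repeat=k))
    W = [(layer, v) for layer in "bmu" for v in vals]
    rank = {"b": 0, "m": 1, "u": 2}
    R = {(w, w2) for w in W for w2 in W if rank[w[0]] < rank[w2[0]]}
    return W, R

def check_C1(k):
    W, R = canonical_model(k)
    vals = {v for (_, v) in W}
    for w in W:
        succs = [w2 for (u, w2) in R if u == w]
        if succs:  # w satisfies Diamond top
            assert {v for (_, v) in succs} == vals
    return True

print(check_C1(2))   # True; |W| = 12, as in Figure 2
\end{verbatim}

An analogous check for C2 only has to inspect the bottom layer, whose worlds see all pairs of middle- and top-layer valuations by construction.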
The zero-one law for model validity follows directly from the following theorem: \begin{theorem} For every formula $\varphi \in L(\Phi)$, the following are equivalent: \begin{enumerate} \item $\mathrm{M}^{\Phi}_{GL}\models \varphi$; \item $\mathbf{AX^{\Phi,M}_{GL}} \vdash \varphi$; \item $\lim_{n\to\infty} \nu_{n,\Phi}(\varphi) = 1$; \item $\lim_{n\to\infty} \nu_{n,\Phi}(\varphi) \not = 0$. \end{enumerate} \end{theorem} \begin{proof} We show a circle of implications. Let $\varphi\in L(\Phi)$.\\ \noindent {\bf 1 $\Rightarrow$ 2}\\ By contraposition. Suppose that $\mathbf{AX^{\Phi,M}_{GL}} \not \vdash \varphi$, then $\neg \varphi$ is $\mathbf{AX^{\Phi,M}_{GL}}$-consistent. By Lindenbaum's lemma, we can extend $\{\neg \varphi\}$ to a maximal $\mathbf{AX^{\Phi,M}_{GL}}$-consistent set $\Delta$ over $\Phi$. We use a standard canonical model construction; here, we illustrate how that works for the finite set $\Phi = \{p_1, p_2\}$, but the method works for any finite $\Phi=\{p_1, \ldots, p_k\}$.\footnote{For adapting to the enumerably infinite case, see~\cite[Theorem 4.15]{halpern1994}.} It will turn out that the model we define is isomorphic to the model of Definition~\ref{Canonical-GL}. Let us define the model $\mathrm{MC}^{\Phi}_{GL}= (W', R', V')$: \begin{itemize} \item $W'= \{ w_{\Gamma} \mid \Gamma \mbox { is maximal } \mathbf{AX^{\Phi,M}_{GL}}\mbox{-consistent,}$\\ $\mbox{ } \hspace{2cm}\mbox{ based on } \Phi \}$. \item $R' = \{ \langle w_{\Gamma_1}, w_{\Gamma_2}\rangle \mid w_{\Gamma_1}, w_{\Gamma_2}\in W' \mbox{ and } $\\$ \mbox{ }\hspace{2cm}\mbox{ for all } \Box\psi\in \Gamma_1, \mbox{ it holds that } \psi \in \Gamma_2 \}$ \item For each $w_{\Gamma} \in W': V'_{w_{\Gamma}}(p)= 1 \mbox{ iff } p\in \Gamma$ \end{itemize} \noindent Because the worlds of this model correspond to the maximal $\mathbf{AX^{\Phi,M}_{GL}}$-consistent sets, all worlds $w_{\Gamma}\in W'$ can be distinguished into three kinds, exhaustively and without overlap: \begin{description} \item[U] $\Box\bot \in \Gamma$; there are exactly four maximal consistent sets $\Gamma$ of this form, determined by which of the four conjunctions of relevant literals $[\neg] p_1 \wedge [\neg] p_2$ is an element. These comprise the upper level U of the model. \item[M] $\neg \Box\bot \in \Gamma$ and $\Box\Box\bot \in \Gamma$; there are exactly four maximal consistent sets $\Gamma$ of this form, determined by which of the four conjunctions of relevant literals $[\neg] p_1 \wedge [\neg] p_2$ is an element. By axiom C1 and Proposition~\ref{maxconsistent}, all these four maximal consistent sets contain the four formulas of the form $\Diamond([\neg]p_1 \wedge [\neg] p_2)$; by definition of $R'$ and using the fact that $\Box\Box\bot \in \Gamma$, this means that all the four worlds in this middle level M will have access to all the four worlds in the upper level U. \item[B] $\neg \Box\bot \in \Gamma$ and $\neg\Box\Box\bot \in \Gamma$ and $\Box\Box\Box\bot \in \Gamma$; there are exactly four maximal consistent sets $\Gamma$ of this form, determined by which of the four conjunctions of relevant literals $[\neg] p_1 \wedge [\neg] p_2$ is an element. Because $\Diamond\Diamond\top \in \Gamma$, by axiom C2 and Proposition~\ref{maxconsistent}, all these four maximal consistent sets contain the 16 formulas $\Diamond([\neg]p_1 \wedge [\neg] p_2 \wedge \Diamond([\neg]p_1 \wedge [\neg] p_2))$. 
By the definition of $R'$, this means that all four worlds in this bottom level B will have direct access to all the four worlds in middle level M as well as access in two steps to all four worlds in upper level U.\\ \end{description} \noindent Note that $R'$ is transitive because $\mathbf{AX^{\Phi,M}_{GL}}$ extends {\bf GL}, so for all maximal consistent sets $\Gamma$ and all formulas $\psi \in L(\Phi)$, we have that $\Box\psi \rightarrow \Box\Box\psi\in\Gamma$. Also $R'$ is irreflexive. Because each world contains either $\Box\bot$ and $\neg\bot$ (for U), or $\Box\Box\bot $ and $\neg \Box\bot$ (for M), or $\Box\Box\Box\bot$ and $\neg\Box\Box\bot$ (for B), by definition of $R'$, none of the worlds has a relation to itself.\\ \noindent The next step is to prove by induction that a {\em truth lemma} holds: For all $\psi$ in the language $L(\Phi)$ and for all maximal $\mathbf{AX^{\Phi,M}_{GL}}$-consistent sets $\Gamma$, the following holds: \begin{quote} $\mathrm{MC}^{\Phi}_{GL}, w_{\Gamma} \models \psi$ iff $\psi \in \Gamma$.\\ \end{quote} \noindent For atoms, this follows by the definition of $V'$. The steps for the propositional connectives are as usual, using the properties of maximal consistent sets (see Proposition~\ref{maxconsistent}).\\ \noindent For the $\Box$-step, let $\Gamma$ be a maximal $\mathbf{AX^{\Phi,M}_{GL}}$-consistent set and let us suppose as induction hypothesis that for some arbitrary formula $\chi$, for all maximal $\mathbf{AX^{\Phi,M}_{GL}}$-consistent sets $\Pi$, $\mathrm{MC}^{\Phi}_{GL}, w_{\Pi} \models \chi$ iff $\chi \in \Pi$. We want to show that $\mathrm{MC}^{\Phi}_{GL}, w_{\Gamma} \models \Box\chi$ iff $\Box\chi \in \Gamma$. For the direction from right to left, suppose that $\Box\chi\in \Gamma$, then by definition of $R'$, for all $\Pi$ with $w_{\Gamma} R' w_{\Pi}$, we have $\chi\in\Pi$, so by induction hypothesis, $\mathrm{MC}^{\Phi}_{GL}, w_{\Pi}\models \chi$. Therefore, by the truth definition, $\mathrm{MC}^{\Phi}_{GL}, w_{\Gamma}\models \Box\chi$. For the direction from left to right, let us use contraposition and suppose that $\Box\chi\not\in \Gamma$. Now we will show that the set $\{\xi \mid \Box\xi\in \Gamma\}\cup \{\neg \chi\}$ is $\mathbf{AX^{\Phi,M}_{GL}}$-consistent. For otherwise, there would be some $\xi_1,\ldots,\xi_n$ for which $\Box \xi_1, \ldots, \Box \xi_n \in \Gamma$ such that $\xi_1,\ldots,\xi_n \vdash_{\mathbf{AX^{\Phi,M}_{GL}}} \; \chi$, so by necessitation, A2, and propositional logic, $\Box\xi_1,\ldots,\Box\xi_n \vdash_{\mathbf{AX^{\Phi,M}_{GL}}} \; \Box\chi$, therefore by maximal consistency of $\Gamma$ and Proposition~\ref{maxconsistent}(iv), also $\Box\chi\in\Gamma$, contradicting our assumption. Therefore, by Lindenbaum's lemma there is a maximal consistent set $\Pi \supseteq \{\xi \mid \Box\xi\in \Gamma\}\cup \{\neg \chi\}$. It is clear by definition of $R'$ that $w_{\Gamma} R' w_{\Pi}$, and by induction hypothesis, $\mathrm{MC}^{\Phi}_{GL}, w_{\Pi} \models \neg\chi$, i.e., $\mathrm{MC}^{\Phi}_{GL}, w_{\Pi} \not\models\chi$, so by the truth definition, $\mathrm{MC}^{\Phi}_{GL}, w_{\Gamma} \not\models\Box\chi$. 
This finishes the inductive proof of the truth lemma.\\ \noindent Finally, from the truth lemma and the fact stated at the beginning of the proof of 1 $\Rightarrow$ 2 that $\neg\varphi \in \Delta$, we have that $\mathrm{MC}^{\Phi}_{GL}, w_{\Delta} \not \models \varphi$, so we have found our counter-model.\\ \noindent The model $\mathrm{MC}^{\Phi}_{GL}$ constructed in the completeness proof above has three layers (Upper, Middle, and Bottom) of four worlds each, one world for each consistent conjunction of literals. Because every world corresponds to a maximal consistent set containing axioms C1 and C2, each world is related to precisely all the worlds in the layers above it. Hence $\mathrm{MC}^{\Phi}_{GL}$ is isomorphic to the canonical asymptotic model $\mathrm{M}^{\Phi}_{GL}$ of Definition~\ref{Canonical-GL}; for $\Phi=\{p_1, p_2\}$, see Figure 2.\\ \noindent {\bf 2 $\Rightarrow$ 3}\\ Suppose that $\mathbf{AX^{\Phi,M}_{GL}}\vdash \varphi$. We will show that the axioms of $\mathbf{AX^{\Phi,M}_{GL}}$ hold in almost all irreflexive transitive Kleitman-Rothschild models of depth 3 (see Subsection~\ref{KR}). First, it is immediate that {\bf GL} is sound with respect to {\em all} finite irreflexive transitive converse well-founded models, that axiom $\Box\Box\Box\bot$ is sound with respect to those of depth 3, and that almost sure model validity is closed under MP and Necessitation. It remains to show the almost sure model validity of axiom schemes C1 and C2 over finite irreflexive models of the Kleitman-Rothschild variety.\\ \noindent We will now sketch a proof that the `Carnap-like' axiom C1, namely $\Diamond \top \rightarrow \Diamond A$ where $A$ is a consistent conjunction of literals over $\Phi$, is valid in almost all irreflexive transitive models $(W,R,V)$ of depth 3 of the Kleitman-Rothschild variety with an arbitrary $V$. Let us suppose that $\Phi=\{p_1,\ldots,p_k\}$, so there are $2^k$ possible valuations. Let us consider a state $s$ in such a model of $n$ elements where $\Diamond\top$ holds; then, since the model is of Kleitman-Rothschild type, $s$ has as direct successors approximately half of the states in the directly higher layer, which contains asymptotically at least $\frac{1}{4}$ of the model's states. So $s$ has asymptotically at least $\frac{1}{8}\cdot n$ direct successors. The probability that a given state $t$ is a direct successor of $s$ with the right valuation to make $A$ true is therefore at least $\frac{1}{8}\cdot \frac{1}{2^k} = \frac{1}{2^{k+3}}$. Thus, the probability that $s$ does not have {\em any} direct successors in which $A$ holds is at most $(1- \frac{1}{2^{k+3}})^n$. Therefore, the probability that there is at least one $s$ in a Kleitman-Rothschild model not having any direct successors satisfying $A$ is at most $n\cdot (1- \frac{1}{2^{k+3}})^n$. Because $n\cdot (1- \frac{1}{2^{k+3}})^n = e^{\ln n + n\ln(1-\frac{1}{2^{k+3}})}$ and $\ln(1-\frac{1}{2^{k+3}})<0$, the exponent tends to $-\infty$, so $\lim_{n\to\infty} n\cdot (1- \frac{1}{2^{k+3}})^n = 0$; hence C1 is valid in almost all Kleitman-Rothschild models, i.e., $\lim_{n\to\infty} \nu_{n,\Phi}(\Diamond \top \rightarrow \Diamond A)=1$.\\ \noindent Similarly, we sketch a proof that axiom C2, namely $\Diamond\Diamond \top \rightarrow \Diamond (B \wedge \Diamond C)$ where $B, C$ are consistent conjunctions of literals over $\Phi$, is valid in almost all irreflexive transitive Kleitman-Rothschild models $(W,R,V)$ of depth 3 with an arbitrary $V$. Let $\Phi=\{p_1,\ldots,p_k\}$.
Again, let us consider a state $s$ in such a model of $n$ elements where $\Diamond\Diamond\top$ holds; then $s$ is in the bottom of the three layers. Therefore, the model being of Kleitman-Rothschild type, $s$ has as direct successors approximately half of the states in the middle layer, which contains asymptotically at least $\frac{1}{2}$ of the model's states. So $s$ has asymptotically at least $\frac{1}{4}\cdot n$ direct successors. The probability that a given state $t$ is a direct successor of $s$ with the right valuation to make $B$ true is therefore at least $\frac{1}{4}\cdot \frac{1}{2^k} = \frac{1}{2^{k+2}}$. Similarly, given such a $t$, the probability that a given state $t'$ in the top layer is a direct successor of $t$ in which $C$ holds is asymptotically at least $\frac{1}{2^{k+3}}$; hence the probability that a given pair $t, t'$ satisfies $sRtRt'$ with $B$ true at $t$ and $C$ true at $t'$ is asymptotically at least $\frac{1}{2^{k+2}}\cdot \frac{1}{2^{k+3}}= \frac{1}{2^{2k+5}}$. Therefore, the probability that for the given $s$ there are {\em no} $t, t'$ with $sRtRt'$ with $B$ true at $t$ and $C$ true at $t'$ is at most $(1-\frac{1}{2^{2k+5}})^n$. Summing up, the probability that there is at least one $s$ in a Kleitman-Rothschild model not having any pair of successors $sRtRt'$ with $B$ true at $t$ and $C$ true at $t'$ is at most $n\cdot (1-\frac{1}{2^{2k+5}})^n$. Again, $\lim_{n\to\infty} n\cdot (1-\frac{1}{2^{2k+5}})^n=0$, so C2 holds in almost all Kleitman-Rothschild models, i.e., $\lim_{n\to\infty} \nu_{n,\Phi}(\Diamond\Diamond \top \rightarrow \Diamond (B \wedge \Diamond C)) = 1$.\\ \noindent {\bf 3 $\Rightarrow$ 4}\\ Obvious, because $0 \not = 1$.\\ \noindent {\bf 4 $\Rightarrow$ 1}\\ By contraposition. Suppose as before that $\Phi=\{p_1,\ldots,p_k\}$. Now suppose that the canonical asymptotic model $\mathrm{M}^{\Phi}_{GL}\not\models \varphi$ for some $\varphi \in L(\Phi)$, say, $\mathrm{M}^{\Phi}_{GL}, s\not\models \varphi$ for some $s \in W$. We claim that almost surely, for a sufficiently large finite Kleitman-Rothschild type model $M' = (W', R', V')$ of three layers with $V'$ random, there is a bisimulation relation $Z$ from $\mathrm{M}^{\Phi}_{GL}$ to $M'$. We now sketch how to define the bisimulation $Z$. Asymptotically, we will be able to find in $M'$ a world $s'$ that is situated in the same layer (top, middle or bottom) as the layer where $s$ is in $\mathrm{M}^{\Phi}_{GL}$ and that has the same valuation for all atoms $p_1, \ldots, p_k$. First, fix an enumeration of all $2^k$ possible valuations: $v_1, \ldots, v_{2^k}$. For each $b_{v_i}$ with valuation $v_i$ (where $i\in \{1,\ldots,2^k\}$) in the bottom layer of $\mathrm{M}^{\Phi}_{GL}$, there will almost surely be a $b'_i\in W'$ that has the same valuation $v_i$, as well as direct successors $m'_{i,1},\ldots,m'_{i,2^k}$ corresponding to valuations $v_1, \ldots, v_{2^k}$ respectively, where each $m'_{i,j}$ in turn has $2^k$ direct successors $u'_{i,j,1}, \ldots, u'_{i,j,2^k}$ corresponding to valuations $v_1, \ldots, v_{2^k}$ respectively. Take the relation $Z$ given by: for all $i, j, l \in \{1,\ldots,2^k\}$, $b_{v_i} Z b'_i$ and $m_{v_j} Z m'_{i,j}$ and $u_{v_l} Z u'_{i,j,l}$. This $Z$ satisfies the three conditions for bisimulations for all $w\in W, w'\in W'$: if $wZw'$, then $w$ and $w'$ have the same valuation; the `forth' condition that $wZw'$ and $wRv$ together imply that there is a $v'\in W'$ such that $w'R'v'$ and $vZv'$; and the `back' condition that $wZw'$ and $w'R'v'$ together imply that there is a $v\in W$ such that $wRv$ and $vZv'$. Now that the bisimulation $Z$ is given, suppose that $sZs'$ for some $s'\in W'$.
By the bisimulation theorem~\cite{Benthem1983}, we have that for all $\psi\in L(\Phi)$, $\mathrm{M}^{\Phi}_{GL}, s\models\psi \Leftrightarrow M',s'\models\psi$; in particular, $M', s'\not\models\varphi$ and hence $M'\not\models\varphi$. Conclusion: $\lim_{n\to\infty} \nu_{n,\Phi}(\varphi)=0$. \\ \noindent We can now conclude that all of 1, 2, 3, 4 are equivalent. Therefore, each modal formula in $L(\Phi)$ is either almost surely valid or almost surely invalid over finite models in $\mathcal{GL}$. \end{proof} \noindent This concludes our investigation of validity in almost all models. For frame validity, it turns out that there is transfer between validity in the countable random irreflexive Kleitman-Rothschild frame and almost sure validity in finite frames. \section{The countable random irreflexive Kleitman-Rothschild frame} \label{Random} In contrast to the situation for the system {\bf K}~\cite{goranko2003}, it turns out that for logics of transitive (strict) partial orders such as {\bf GL}, we can prove transfer between validity of a sentence in almost all relevant finite frames and validity of the sentence in one specific frame, namely the countably infinite random irreflexive Kleitman-Rothschild frame. Let us start by introducing this frame step by step. \noindent The following definition specifies a first-order theory in the language of strict (irreflexive, asymmetric) partial orders. We have adapted it from Compton's~\cite{compton1988} set of extension axioms $T_\mathrm{as}$ (where the subscript ``{\it as}'' stands for ``almost sure'') for reflexive partial orders of the Kleitman-Rothschild form, which were in turn inspired by Gaifman's and Fagin's extension axioms for almost all first-order models with a binary relation~\cite{gaifman1964,fagin1976}. \begin{definition}[Extension axioms] \label{Extension} The theory $T_\mathrm{as\mbox{-}irr}$\footnote{Here, the subscript {\it as-irr} stands for ``almost sure -- irreflexive''.} includes the axioms for strict partial orders, namely, $\forall x \neg(x < x)$ and $\forall x, y, z((x < y \wedge y < z) \rightarrow x < z)$. In addition, it includes the following: \begin{align} \tag{Depth-at-least-3} & \exists x_0, x_1, x_2\, (\bigwedge_{i\leq 1} x_i < x_{i+1})\label{Threelayers-02} \end{align} \begin{align} \tag{Depth-at-most-3} & \neg \exists x_0, x_1, x_2, x_3\, (\bigwedge_{i\leq 2} x_i < x_{i+1})\label{Threelayers-2} \end{align} Every strict partial order satisfying Depth-at-least-3 and Depth-at-most-3 can be partitioned into the three levels $L_1$ (Bottom), $L_2$ (Middle), and $L_3$ (Upper) as in Subsection~\ref{KR}, and these levels are first-order definable. Let us describe the extension axioms.
\noindent For every $j, k, l \geq 0$ there is an extension axiom saying that for all distinct $x_0,\ldots,x_{k-1}$ and $y_0,\ldots,y_{j-1}$ in $L_2$ and all distinct $z_0,\ldots,z_{l-1}$ in $L_1$, there is an element $z$ in $L_1$ not equal to $z_0,\ldots,z_{l-1}$ such that: \begin{align} \tag{a} & \bigwedge_{i < k} z < x_i \; \wedge \bigwedge_{i <j} \neg(z < y_i) \label{Extension-a} \end{align} For every $j, k, l \geq 0$ there is an axiom saying that for all distinct $x_0,\ldots,x_{k-1}$ and $y_0,\ldots,y_{j-1}$ in $L_2$ and all distinct $z_0,\ldots,z_{l-1}$ in $L_3$, there is an element $z$ in $L_3$ not equal to $z_0,\ldots,z_{l-1}$ such that: \begin{align} \tag{b} & \bigwedge_{i < k} x_i < z \; \wedge \bigwedge_{i <j} \neg(y_i < z) \label{Extension-b} \end{align} For every $j, j', k, k', l \geq 0$ there is an axiom saying that for all distinct $x_0,\ldots,x_{k-1}$ and $y_0,\ldots,y_{j-1}$ in $L_1$ and all distinct $x'_0,\ldots,x'_{k'-1}$ and $y'_0,\ldots,y'_{j'-1}$ in $L_3$, and all distinct $z_0,\ldots,z_{l-1}$ in $L_2$, there is an element $z$ in $L_2$ not equal to $z_0,\ldots,z_{l-1}$ such that: \begin{align} \tag{c} & \bigwedge_{i < k} x_i < z \; \wedge \bigwedge_{i <j} \neg(y_i < z) \; \wedge \bigwedge_{i < k'} z < x'_i \; \wedge \bigwedge_{i <j'} \neg(z < y'_i) \label{Extension-c} \end{align} \end{definition} \noindent \begin{proposition} \label{Categorical} $T_\mathrm{as\mbox{-}irr}$ is $\aleph_0$-categorical and, since it has no finite models, therefore also complete. \end{proposition} \noindent {\bf Proof sketch} A straightforward adaptation to our irreflexive orders of Compton's proof that $T_\mathrm{as}$ is $\aleph_0$-categorical and therefore also complete~\cite[Theorem 3.1]{compton1988b}. We denote the unique countably infinite model of $T_\mathrm{as\mbox{-}irr}$ by $\mathcal{F}_{KR}$: the countable random irreflexive Kleitman-Rothschild frame. \begin{proposition} \label{Axalmost} Each of the sentences in $T_\mathrm{as\mbox{-}irr}$ has labeled asymptotic probability 1 in the class of finite strict (irreflexive) partial orders. \end{proposition} \noindent {\bf Proof sketch} A straightforward adaptation to our irreflexive orders of Compton's proof that $T_\mathrm{as}$ has labeled asymptotic probability 1 in reflexive partial orders~\cite[Theorem 3.2]{compton1988b}. \\ \noindent Notice that by extension axiom (c) and transitivity, in almost all Kleitman-Rothschild models, all points in the bottom layer $L_1$ are connected to all points in the top layer $L_3$.\\ \noindent Now that we have shown that the extension axioms hold in $\mathcal{F}_{KR}$ as well as in almost all finite strict partial orders, we have enough background to prove the modal zero-one law with respect to the class of finite irreflexive transitive frames corresponding to provability logic. \section{Validity in almost all finite irreflexive transitive frames} \label{Frames} Take $\Phi=\{p_1,\ldots, p_k\}$ or $\Phi=\{p_i \mid i\in \mathbb{N}\}$.
The axiom system $\mathbf{AX^{\Phi,F}_{GL}}$ corresponding to validity in almost all finite frames of provability logic has the same axioms and rules as {\bf GL}, plus the following axiom schemas, for all $k\in \mathbb{N}$, where all $\varphi_i \in L(\Phi)$: \begin{align} \tag{T3} &\Box\Box\Box\bot \label{Threelayers}\\ \tag{DIAMOND-k} & \Diamond \Diamond \top \wedge \bigwedge_{i\leq k} \Diamond(\Diamond \top \wedge \Box \varphi_i )\rightarrow \Box (\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\leq k} \varphi_i)) \label{Diamond}\\ \tag{UMBRELLA-k} & \Diamond\Diamond \top \wedge \bigwedge_{i\leq k}\Diamond (\Box\bot \wedge \varphi_i ) \rightarrow \Diamond (\bigwedge_{i\leq k} \Diamond \varphi_i)\label{Umbrella} \end{align} \noindent Here, UMBRELLA-0 is the formula $\Diamond\Diamond \top \wedge \Diamond (\Box\bot\wedge \varphi_0) \rightarrow \Diamond \Diamond \varphi_0$, which represents the property that direct successors of bottom-layer worlds are never endpoints but have at least one successor in the top layer. The formula DIAMOND-0 has been inspired by the well-known axiom $\Diamond\Box\varphi \rightarrow \Box\Diamond\varphi$ that characterizes confluence, also known as the diamond property: for all $x,y,z$, if $xRy$ and $xRz$, then there is a $w$ such that $yRw$ and $zRw$. Note that in contrast to the theory $\mathbf{AX^{\Phi,M}_{GL}}$ introduced in Section~\ref{GL-models}, the axiom system $\mathbf{AX^{\Phi,F}_{GL}}$ gives a normal modal logic, closed under uniform substitution. Also notice that $\mathbf{AX^{\Phi,F}_{GL}}$ is given by an infinite set of axioms. It turns out that if we base our logic on an infinite set of atoms $\Phi=\{p_i \mid i \in \mathbb{N}\}$, then for each $k\in \mathbb{N}$, DIAMOND-$(k+1)$ and UMBRELLA-$(k+1)$ are strictly stronger than DIAMOND-$k$ and UMBRELLA-$k$, respectively. So we have two infinite sets of axioms that both strictly increase in strength; thus, by a classical result of Tarski, the modal theory generated by $\mathbf{AX^{\Phi,F}_{GL}}$ is not finitely axiomatizable. For the proof of the zero-one law for frame validity, we will again need a completeness proof, this time of $\mathbf{AX^{\Phi,F}_{GL}}$ with respect to almost sure frame validity, including use of Lindenbaum's lemma and finitely many maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent sets of formulas, each intersected with a finite set of relevant formulas $\Lambda$. Below, we will define the {\em closure} of a sentence $\varphi\in L(\Phi)$. We may view this closure as the set of formulas that are relevant for making a (finite) countermodel against $\varphi$. \begin{definition}[Closure of a formula] \label{Closure} The {\em closure} of $\varphi$ with respect to $\mathbf{AX^{\Phi,F}_{GL}}$ is the minimal set $\Lambda$ of $\mathbf{AX^{\Phi,F}_{GL}}$-formulas such that: \begin{enumerate} \item $\varphi\in\Lambda$. \item $\Box\Box\Box\bot\in\Lambda$. \item If $\psi\in\Lambda$ and $\chi$ is a subformula of $\psi$, then $\chi\in\Lambda$. \item If $\psi\in\Lambda$ and $\psi$ itself is not a negation, then $\neg\psi\in\Lambda$. \item If $\Diamond \psi\in\Lambda$ and $\psi$ itself is not of the form $\Diamond\xi$ or $\neg\Box\xi$, then $\Diamond\Diamond\psi\in\Lambda$, and also $\Box\neg\psi$, $\Box\Box\neg\psi \in \Lambda$. \item If $\Box \psi\in\Lambda$ and $\psi$ itself is not of the form $\Box\xi$ or $\neg\Diamond\xi$, then $\Box\Box\psi\in\Lambda$, and also $\Diamond\neg\psi$, $\Diamond\Diamond\neg\psi \in \Lambda$.
\end{enumerate} \end{definition} \noindent Note that $\Lambda$ is a finite set of formulas, of size polynomial in the length of the formula $\varphi$ from which it is built. \begin{definition}\label{precedence} Let $\Lambda$ be a closure as defined above and let $\Delta, \Delta_1, \Delta_2$ be maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent sets. We define: \begin{itemize} \item $\Delta^{\Lambda}:= \Delta \cap\Lambda$; \item $\Delta_1 \prec \Delta_2 $ iff for all $\Box\chi \in \Delta_1$, we have $\chi \in \Delta_2$; \item $\Delta_1^{\Lambda} \prec \Delta_2^{\Lambda}$ iff $\Delta_1 \prec \Delta_2$. \end{itemize} \end{definition} \begin{theorem} \label{GL-trees} For every formula $\varphi \in L(\Phi)$, the following are equivalent: \begin{enumerate} \item $\mathbf{AX^{\Phi,F}_{GL}} \vdash \varphi$; \item $ \mathcal{F}_{KR}\models \varphi$; \item $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 1$; \item $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) \not = 0$. \end{enumerate} \end{theorem} \begin{proof} We show a cycle of implications. Let $\varphi\in L(\Phi)$.\\ \noindent {\bf 1 $\Rightarrow$ 2}\\ Suppose $\mathbf{AX^{\Phi,F}_{GL}} \vdash \varphi$. Because irreflexive Kleitman-Rothschild frames are strict partial orders that have no chains of length greater than $3$, and are therefore transitive and conversely well-founded, the axioms and theorems of {\bf GL} + $\Box\Box\Box\bot$ hold in all Kleitman-Rothschild frames, including $\mathcal{F}_{KR}$. So we only need to check the validity of the DIAMOND-k and UMBRELLA-k axioms in $\mathcal{F}_{KR}$ for all $k\geq 0$.\\ \noindent DIAMOND-$(k-1)$: Fix $k\geq 1$, take sentences $\varphi_i\in L(\Phi)$ for $i=0,\ldots, k-1$ and let $\varphi= \Diamond \Diamond \top \wedge \bigwedge_{i\leq k-1} \Diamond(\Diamond \top \wedge \Box \varphi_i)\rightarrow \Box (\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\leq k-1} \varphi_i))$. By Proposition~\ref{Categorical}, we know that each of the extension axioms of the form (b) holds in $ \mathcal{F}_{KR}$. We want to show that $\varphi$ is valid in $\mathcal{F}_{KR}$. To this end, let $V$ be any valuation on the set of labelled states $W$ of $\mathcal{F}_{KR}$ and let $M= (\mathcal{F}_{KR}, V)$. Now take an arbitrary $b\in W$ and suppose that $M, b \models \Diamond \Diamond \top \wedge \bigwedge_{i\leq k-1} \Diamond(\Diamond \top \wedge \Box \varphi_i)$. Then $b$ is in the bottom layer $L_1$ and there are worlds $x_0,\ldots,x_{k-1}$ (not necessarily distinct) in the middle layer $L_2$ such that for all $i \leq k-1$, we have $b < x_i$ and $M, x_i \models \Box \varphi_i$. Now take any $x_{k}$ in $L_2$ with $b < x_{k}$. Then, by the extension axiom (b) (applied to the distinct elements among $x_0,\ldots,x_{k}$), there is an element $z$ in the upper layer $L_3$ such that $\bigwedge_{i \leq k} x_i < z $. Now for that $z$, we have that $M, z \models \bigwedge_{i \leq k-1} \varphi_i$. But then $M, x_{k} \models \Diamond( \bigwedge_{i\leq k-1} \varphi_i)$, so because $x_{k}$ is an arbitrary direct successor of $b$, we have $M, b\models \Box (\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\leq k-1} \varphi_i))$.
To conclude, \[M, b \models \Diamond \Diamond \top \wedge \bigwedge_{i\leq k-1} \Diamond(\Diamond \top \wedge \Box \varphi_i) \rightarrow \Box (\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\leq k-1} \varphi_i)),\] so because $b$ and $V$ were arbitrary, we have \[\mathcal{F}_{KR}\models \Diamond \Diamond \top \wedge \bigwedge_{i\leq k-1} \Diamond(\Diamond \top \wedge \Box \varphi_i) \rightarrow \Box (\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\leq k-1} \varphi_i)), \] as desired.\\ \noindent UMBRELLA-$(k-1)$: Fix $k\geq 1$, take sentences $\varphi_i\in L(\Phi)$ for $i=0,\ldots, k-1$ and let $\varphi= \Diamond\Diamond \top \wedge \bigwedge_{i\leq k-1}\Diamond (\Box\bot \wedge \varphi_i ) \rightarrow \Diamond (\bigwedge_{i\leq k-1} \Diamond \varphi_i)$. By Proposition~\ref{Categorical}, we know that each of the extension axioms of the form (c) holds in $\mathcal{F}_{KR}$. We want to show that $\varphi$ is valid in $\mathcal{F}_{KR}$. To this end, let $V$ be any valuation on the set of labelled states $W$ of $\mathcal{F}_{KR}$ and let $M= (\mathcal{F}_{KR}, V)$. Now take an arbitrary $b\in W$ and suppose that $M, b \models \Diamond\Diamond \top \wedge \bigwedge_{i\leq k-1}\Diamond (\Box\bot \wedge \varphi_i )$. Then $b$ is in the bottom layer $L_1$ and there are accessible worlds $x_0,\ldots, x_{k-1}$ (not necessarily distinct) in the upper layer $L_3$ such that for all $i \leq k-1$, we have $b < x_i$ and $M, x_i\models \varphi_i$. By the extension axiom (c) from Definition~\ref{Extension} (applied to the distinct elements among $x_0,\ldots,x_{k-1}$), there is an element $z$ in the middle layer $L_2$ such that $b < z$ and for all $i\leq k-1$, $z < x_i$. But that means that $M,z \models \bigwedge_{i\leq k-1} \Diamond \varphi_i$, therefore $M,b \models \Diamond (\bigwedge_{i\leq k-1} \Diamond \varphi_i)$. In conclusion, \[M,b \models \Diamond\Diamond \top \wedge \bigwedge_{i\leq k-1}\Diamond (\Box\bot \wedge \varphi_i ) \rightarrow \Diamond (\bigwedge_{i\leq k-1} \Diamond \varphi_i),\] so because $b$ and $V$ were arbitrary, we have \[\mathcal{F}_{KR}\models \Diamond\Diamond \top \wedge \bigwedge_{i\leq k-1}\Diamond (\Box\bot \wedge \varphi_i) \rightarrow \Diamond (\bigwedge_{i\leq k-1} \Diamond \varphi_i), \] as desired.\\ \noindent {\bf 2 $\Rightarrow$ 3}\\ Suppose $\mathcal{F}_{KR}\models\varphi$. Using Van Benthem's translation (see Subsection~\ref{Benthem}), we can translate this as a $\Pi^1_1$ sentence being true in $\mathcal{F}_{KR}$ (viewed as a model of the relevant second-order language): universally quantify over predicates corresponding to all propositional atoms occurring in $\varphi$, to get a sentence of the form $\chi:=\forall P_1,\ldots,P_n \; \forall x \varphi^\ast$, where $\forall x \varphi^\ast$ is a first-order sentence. The claim is that $\forall x\varphi^\ast$ follows from a finite set of the extension axioms. Following the procedure of Kolaitis and Vardi~\cite[Lemma 4]{kolaitis1987}, suppose that it does not; then every finite subset of $T_\mathrm{as-irr}\cup\{\exists x\neg\varphi^\ast(P_1,\ldots,P_n)\}$ is satisfiable over the extended vocabulary with $P_1, \ldots, P_n$, so by compactness, $T_\mathrm{as-irr}\cup\{\exists x\neg\varphi^\ast(P_1, \ldots,P_n)\}$ has a countable model over the extended vocabulary. The reduct of this model to the old language is still a countable model of $T_\mathrm{as-irr}$, and is therefore isomorphic to $\mathcal{F}_{KR}$ (by Proposition~\ref{Categorical}). But then $\mathcal{F}_{KR}\models\exists P_1,\ldots,P_n\exists x\neg\varphi^\ast$, contradicting our assumption that $\mathcal{F}_{KR}\models\varphi$. Hence $\forall x\varphi^\ast$ does follow from finitely many sentences of $T_\mathrm{as-irr}$. Since each of these sentences has labeled asymptotic probability 1 (Proposition~\ref{Axalmost}) and the predicates $P_1,\ldots,P_n$ are quantified universally, $\varphi$ is valid in almost all finite strict partial orders, that is, $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 1$.
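\\ \noindent To make the translation step above concrete, consider a small worked example (the details of the standard translation $ST_x$ are as in Subsection~\ref{Benthem}): $ST_x$ maps $p_i$ to $P_i(x)$, commutes with the Boolean connectives, and maps $\Box\psi$ to $\forall y\,(x<y \rightarrow ST_y(\psi))$. Frame validity of the {\bf GL}-theorem $\Box p_1 \rightarrow \Box\Box p_1$ thus corresponds to the $\Pi^1_1$ sentence \[\forall P_1\, \forall x\, \Big( \forall y\,\big(x<y \rightarrow P_1(y)\big) \rightarrow \forall y\,\big(x<y \rightarrow \forall z\,(y<z \rightarrow P_1(z))\big)\Big),\] whose first-order part already follows from the transitivity axiom of $T_\mathrm{as\mbox{-}irr}$ alone, so in this simple case no extension axioms are needed.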
\\ \noindent {\bf 3 $\Rightarrow$ 4}\\ Obvious, because $0 \not = 1$.\\ \noindent {\bf 4 $\Rightarrow$ 1}\\ By contraposition. Let $\varphi\in L(\Phi)$ and suppose that $\mathbf{AX^{\Phi,F}_{GL}} \not \vdash \varphi$. Then $\neg \varphi$ is $\mathbf{AX^{\Phi,F}_{GL}}$-consistent. We will do a completeness proof by the finite step-by-step method (see, for example,~\cite{burgess1984,JonghVV}), but based on infinite maximal consistent sets, each of which is intersected with the same finite set of relevant formulas $\Lambda$, so that the constructed counter-model remains finite (see~\cite{Verbrugge1988}, \cite[footnote 3]{Joosten2020}). In the following, we are first going to construct a model $M_{\varphi}=(W, R, V)$ that will contain a world where $\neg \varphi$ holds (Step 4 $\Rightarrow$ 1 (a)). Then we will embed this model into Kleitman-Rothschild frames of every large enough size to show that $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 0$ (Step 4 $\Rightarrow$ 1 (b)).\\ \noindent {\bf Step 4 $\Rightarrow$ 1 (a)}\\ \noindent By the Lindenbaum Lemma, we can extend $\{\neg \varphi\}$ to a maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent set $\Psi$. Now define $\Psi^{\Lambda}:= \Psi \cap \Lambda$, where $\Lambda$ is as in Definition~\ref{Closure}, and introduce a world $s_\Psi$ corresponding to $\Psi^{\Lambda}$. We distinguish three cases for the step-by-step construction: {\bf U} (upper layer), {\bf M} (middle layer), and {\bf B} (bottom layer).\\ \noindent {\bf Case U, with } {\boldmath$\Box\bot\in\Psi^{\Lambda}$}:\\ In this case we are done: a one-point counter-model $M_\varphi=(W,R,V)$ with $W=\{s_\Psi\}$, empty $R$, and for all $p\in\Phi$, $V_{s_\Psi}(p)=1$ iff $p\in\Psi^\Lambda$ suffices. \\ \noindent {\bf Case M, with } {\boldmath$\Box\bot\not\in \Psi^{\Lambda}$}, {\boldmath$\Box\Box\bot\in \Psi^{\Lambda}$}:\\ Let $\Diamond \psi_1,\ldots,\Diamond \psi_n$ be an enumeration of all the formulas of the form $\Diamond \psi$ in $\Psi^{\Lambda}$. Note that for all these formulas, $\Diamond \Diamond \psi_i \not\in \Psi^{\Lambda}$, because $\Box\Box\bot\in \Psi^{\Lambda}$. Take an arbitrary one of the $\psi_i$ for which $\Diamond\psi_i \in \Psi^{\Lambda}$. Claim: the set \[\Delta_i:=\{\Box \chi, \chi \mid \Box \chi \in \Psi \} \cup \{\psi_i, \Box \neg \psi_i\}\] is $\mathbf{AX^{\Phi,F}_{GL}}$-consistent. For if not, then \[\{\Box \chi, \chi \mid \Box \chi \in \Psi \} \vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Box\neg\psi_i \rightarrow \neg\psi_i.\] Because proofs are finite, there is a finite set $\chi_1,\ldots,\chi_k$ with $\Box\chi_1,\ldots, \Box\chi_k\in\Psi$ and \[\{\Box \chi_j, \chi_j \mid j \in \{1,\ldots, k\} \} \vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Box\neg\psi_i \rightarrow \neg\psi_i.\] Using necessitation, we get \[\{\Box\Box \chi_j, \Box \chi_j \mid j \in \{1,\ldots, k\} \} \vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Box(\Box\neg\psi_i \rightarrow \neg\psi_i).\] Because we have $\vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Box\chi_j \rightarrow \Box\Box \chi_j$ for all $j =1,\ldots, k$ and $\vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Box(\Box\neg\psi_i \rightarrow \neg\psi_i)\rightarrow \Box\neg \psi_i$, we can conclude: \[\{\Box \chi \mid \Box \chi \in \Psi \} \vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Box \neg\psi_i.\] Using Proposition~\ref{maxconsistent}(4) and the fact that $\Box\neg\psi_i\in\Lambda$, this leads to $\Box\neg\psi_i\in \Psi^{\Lambda}$, contradicting our assumption that $\Diamond \psi_i\in \Psi^{\Lambda}$. Also note that because $\Box\Box\bot\in \Psi$, by definition, $\Box\bot\in \Delta_i$.
We can now extend $\Delta_i$ to a maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent set $\Psi_i$ by the Lindenbaum Lemma, and we define for each $i \in \{1, \ldots, n\}$ the set $\Psi_i^{\Lambda}:= \Psi_i \cap\Lambda$ and a world $s_{\Psi_i}$ corresponding to it. We have for all $i\in \{1, \ldots, n\}$ that $\Psi^{\Lambda} \prec \Psi_i^{\Lambda}$ as well as $\psi_i, \Box \neg\psi_i \in \Psi_i^{\Lambda}$.\\ \noindent We have now finished creating a two-layer counter-model $M_{\varphi}= (W, R, V)$, which has: \begin{itemize} \item $W = \{s_{\Psi},s_{\Psi_1},\ldots,s_{\Psi_n}\}$; \item $R= \{\langle s_{\Psi}, s_{\Psi_i}\rangle \mid i\in\{1,\ldots,n\}\}$; \item For each $p\in\Phi$ and $s_{\Gamma}\in W$: $V_{s_{\Gamma}}(p)=1$ iff $p\in \Gamma^{\Lambda}$. \end{itemize} \noindent A truth lemma can be proved as in Case B below (but more easily).\\ \noindent {\bf Case B, with } {\boldmath$\Box\Box\bot\not\in \Psi^{\Lambda}$}:\\ In this case, we also look at all formulas of the form $\Diamond\psi \in \Psi^{\Lambda}$. We first divide these into two sets, as follows: \begin{enumerate} \item The $\Diamond$-formulas $\Diamond \xi_{k+1}, \dots,\Diamond\xi_l \in \Psi^{\Lambda}$, for some $l\in \mathbb{N}$, for which $\Diamond\Diamond \xi_{k+1}, \dots,\Diamond\Diamond\xi_l\not \in \Psi^{\Lambda}$, so that $\Box\Box\neg \xi_{k+1}, \dots,\Box\Box\neg\xi_l \in \Psi^{\Lambda}$.\footnote{The formulas of the form $\Box\Box\neg \xi_{j}$ are in $\Lambda$ because of Def.~\ref{Closure}, clause 5.} \item The $\Diamond\Diamond$-formulas $\Diamond\Diamond \xi_1, \dots,\Diamond\Diamond\xi_k\in \Psi^{\Lambda}$.\\ Note that for these formulas, we also have $\Diamond \xi_1, \dots,\Diamond\xi_k\in \Psi^{\Lambda}$, because $GL \vdash \Diamond\Diamond \xi_i \rightarrow \Diamond \xi_i$. We will treat the pairs $\Diamond\Diamond\xi_i, \Diamond\xi_i$ for $i=1, \ldots,k$ in one go. \end{enumerate} \noindent Note that (1) and (2) lead to disjoint sets which together exhaust the $\Diamond$-formulas in $\Psi^{\Lambda}$. Altogether, that set now contains $\{\Diamond \xi_1, \ldots, \Diamond\xi_k, \Diamond\Diamond \xi_1, \ldots, \Diamond \Diamond\xi_k, \Diamond \xi_{k+1}, \dots,\Diamond\xi_l \}$.\\ \noindent Let us first check the formulas of type (1): $\Diamond \xi_{k+1},\ldots, \Diamond \xi_l\in\Psi^{\Lambda}$, but $\Box \Box \neg\xi_{k+1}, \ldots, \Box \Box \neg\xi_{l} \in \Psi^{\Lambda}$. We can now show by similar reasoning as in Case M that for each $i\in\{k+1,\ldots, l\}$, $\Delta_i= \{\Box \chi, \chi \mid \Box \chi \in \Psi \} \cup \{ \xi_i, \Box\neg \xi_i\}$ is $\mathbf{AX^{\Phi,F}_{GL}}$-consistent, so we can extend these sets to maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent sets $\Psi_i$ and define $\Psi_i^{\Lambda}:=\Psi_i \cap \Lambda$ with $\Psi^{\Lambda} \prec \Psi_i^{\Lambda}$, and corresponding worlds $s_{\Psi_i}$ for all $i\in\{k+1, \ldots, l\}$. We now claim that for all $i\in \{k+1,\ldots, l\}$, the world $s_{\Psi_i}$ is not in the top layer of the model with root $s_\Psi$. To derive a contradiction, suppose that it is in the top layer, so $\Box \bot \in \Psi_i^{\Lambda}$. Then also $\Box\bot \wedge \xi_i \in \Psi_i$, so because $\Psi\prec \Psi_i$, we have $\Diamond (\Box\bot \wedge \xi_i) \in \Psi$. By UMBRELLA-0, we know that \[\vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Diamond \Diamond \top \wedge \Diamond (\Box \bot \wedge \xi_i) \rightarrow \Diamond\Diamond\xi_i.
\] Also having $\Diamond\Diamond\top\in\Psi$, we can now use Proposition~\ref{maxconsistent}(4) to conclude that $\Diamond\Diamond \xi_i\in \Psi$. Therefore, because $\Diamond\Diamond \xi_i\in \Lambda$, we also have $\Diamond\Diamond \xi_i\in \Psi^{\Lambda}$, contradicting our starting assumption that $\Diamond \xi_i$ is a type (1) formula. We conclude that $\Box \bot \not\in \Psi_i^{\Lambda}$; therefore, $s_{\Psi_i}$ is in the middle layer. Let us now determine, for each of these $s_{\Psi_i}$ with $i\in\{k+1,\ldots, l\}$, which direct successors in the top layer it requires. Any formulas of the form $\Diamond\chi\in\Psi_i^{\Lambda}$ have to be among the formulas $\Diamond\xi_1,\ldots,\Diamond\xi_k$ of type (2), for which $\Diamond\Diamond\xi_1,\ldots,\Diamond\Diamond\xi_k\in\Psi^{\Lambda}$. Suppose $\Diamond\xi_j\in \Psi_i$ for some $j$ in $1,\ldots, k$ and $i$ in $k+1,\ldots, l$. Then we can show (just like in Case M) that there is a maximal consistent set $X_{i, j}$ with $\Psi_i \prec X_{i, j}$ and $\xi_j,\Box\bot \in X_{i, j}^{\Lambda}$. The world in the top layer corresponding to $X_{i,j}^{\Lambda}$ will be called $s_{X_{i, j}}$. Because $X_{i, j}^{\Lambda}$ is finite, we can describe it by $\Box\bot$ and a finite conjunction of literals, which we represent as $\chi_{i, j}$. For ease of reference in the next step, let us define:\\ \noindent $A:=\{\langle i,j\rangle \mid \Diamond\xi_j\in\Psi_i \mbox{ with } i\in\{k+1,\ldots,l\} \mbox{ and } j\in\{1,\ldots,k\}\}$.\\ \noindent For the formulas of type (2), we have $\Diamond\Diamond\xi_i\in \Psi^{\Lambda}$. Moreover, we have for each $i\in \{1,\ldots, k\}$: \[GL+\Box\Box\Box\bot\vdash \Diamond\Diamond\xi_i \rightarrow \Diamond(\Box\bot \wedge \xi_i).\] Therefore, by maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistency of $\Psi$, we have by Proposition~\ref{maxconsistent} that $\Diamond(\Box\bot \wedge \xi_i) \in \Psi$ for each $i\in \{1,\ldots, k\}$. We also have $\Diamond\Diamond \top\in\Psi$. UMBRELLA-k now gives us \[ \Psi\vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Diamond \Diamond \top \wedge \bigwedge_{i= 1,\ldots, k} \Diamond(\Box\bot \wedge \xi_i ) \rightarrow \Diamond ( \bigwedge_{i= 1,\ldots, k}\Diamond \xi_i).\] We conclude from maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistency of $\Psi$ and Proposition~\ref{maxconsistent}(4) that $\Diamond ( \bigwedge_{i= 1,\ldots, k} \Diamond \xi_i)\in \Psi$. This means that we can construct {\em one} direct successor of $\Psi^{\Lambda}$ containing all the $\Diamond \xi_i$ for $i\in \{1,\ldots, k\}$. To this end, let \[\Delta_1:= \{\Box \chi, \chi \mid \Box \chi \in \Psi \} \cup \{\Diamond \xi_1, \ldots, \Diamond \xi_k\}. \] \noindent Claim: $\Delta_1$ is $\mathbf{AX^{\Phi,F}_{GL}}$-consistent. For if not, we would have: \[\{\Box \chi, \chi \mid \Box \chi \in \Psi \} \vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \neg ( \bigwedge_{i= 1,\ldots, k} \Diamond \xi_i).\] But then by the same reasoning as we used before (``boxing both sides'' and using $GL\vdash \Box \chi \rightarrow \Box\Box\chi$) we conclude that \[\{\Box \chi \mid \Box \chi \in \Psi \} \vdash_{\mathbf{AX^{\Phi,F}_{GL}}} \Box\neg (\bigwedge_{i= 1,\ldots, k} \Diamond \xi_i).\] This contradicts $\Diamond (\bigwedge_{i= 1,\ldots, k}\Diamond\xi_i)\in\Psi$, which we showed above. Now that we know $\Delta_1$ to be $\mathbf{AX^{\Phi,F}_{GL}}$-consistent, we can extend it by the Lindenbaum Lemma to a maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent set, which we call $\Psi_1 \supseteq \Delta_1$.
Let $s_{\Psi_1}$ be the world corresponding to $\Psi_1^{\Lambda}$, with $\Psi^{\Lambda}\prec \Psi_1^{\Lambda}$. Now we can use the same method as in Case M to find the required direct successors of $\Psi_1^{\Lambda}$. Namely, for all $i\in\{1,\ldots,k\}$ we find maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent sets $\Xi_i$ and let $s_{\Xi_i}$ be the worlds corresponding to the $\Xi_i^{\Lambda}$, with $\Psi_1^{\Lambda}\prec \Xi_i^{\Lambda}$ and $\xi_i\in \Xi_i$.\\ \noindent We have now handled making direct successors of $\Psi^{\Lambda}$ for all the formulas of type (1) and type (2). We can then finish off the step-by-step construction for Case B by populating the upper layer U using one appropriate restriction to $\Lambda$ of a maximal consistent set $\Xi_0$, as follows. We note that $\Box\neg \xi_{i}\in \Psi_{i}^{\Lambda}$ for $i$ in $k+1, \ldots, l$, and that $\Box\Box\bot\in \Psi_1^{\Lambda}$. Let us take the following instance of the DIAMOND-(l-k) axiom scheme: \[\Diamond \Diamond \top \wedge \bigwedge_{i\in \{k+1,\ldots, l\}} \Diamond(\Diamond \top \wedge \Box \neg \xi_i )\rightarrow \Box (\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\in \{k+1,\ldots, l\}} \neg \xi_i)).\] Now we have $\Diamond \Diamond \top \in \Psi^{\Lambda}$. Because $\Psi \prec \Psi_i$ and $\Diamond\top \wedge \Box\neg\xi_i\in \Psi_i$ for all $i$ in $k+1, \ldots, l$, we derive that \[\bigwedge_{i\in \{k+1,\ldots, l\}} \Diamond(\Diamond \top \wedge \Box \neg \xi_i )\in \Psi.\] Now by one more application of Proposition~\ref{maxconsistent}(4), we have \[\Box (\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\in \{k+1,\ldots, l\}} \neg \xi_i))\in \Psi.\] Because $\Psi \prec \Psi_j$ and $\Diamond\top\in\Psi_j$ for all $j$ in $1, k+1, \ldots, l$, we conclude that \[\Diamond \top \rightarrow \Diamond ( \bigwedge_{i\in \{k+1,\ldots, l\}} \neg \xi_i)\in \Psi_j \mbox{ for all } j\in\{1,k+1, \ldots, l\}.\] Now we can find one world $s_{\Xi_0}$ corresponding to $\Xi_0^{\Lambda}$ such that for all $j$ in $1, k+1,\ldots, l$, we have $\Psi_j^{\Lambda} \prec \Xi_0^{\Lambda}$, and moreover $\neg\xi_i \in \Xi_0^{\Lambda}$ for all $i$ in $k+1,\ldots, l$.\\ \noindent We have now finished creating our finite counter-model $M_\varphi= (W, R, V)$, which has: \begin{itemize} \item $W = \{s_{\Psi}\}\cup \{s_{\Psi_1}, s_{\Psi_{k+1}},\ldots, s_{\Psi_{l}}\} \cup \{s_{X_{i,j}} \mid \langle i, j \rangle \in A\}\cup \{s_{\Xi_i} \mid i \in \{1,\ldots,k\}\} \cup\{s_{\Xi_0}\}$; \item $R$ is the transitive closure of $\{\langle s_{\Psi},s_{\Psi_i}\rangle\mid i\in\{1,k+1,\ldots,l\}\}\cup \{\langle s_{\Psi_i},s_{X_{i,j}}\rangle \mid \langle i, j \rangle \in A\}\cup \{\langle s_{\Psi_1},s_{\Xi_i}\rangle \mid i \in \{1,\ldots,k\}\} \cup \{\langle s_{\Psi_i}, s_{\Xi_0}\rangle\mid i\in\{1,k+1,\ldots,l\}\}$; \item For each $p\in\Phi$ and $s_\Gamma \in W$: $V_{s_{\Gamma}}(p)= 1 \mbox{ iff } p\in \Gamma^{\Lambda}$.
\end{itemize} \noindent Now we can relatively easily prove a truth lemma, restricted to formulas from $\Lambda$, as follows.\\ \noindent {\bf Truth Lemma} \noindent For all $\psi$ in $\Lambda$ and all worlds $s_\Gamma$ in $W$:\\ $M_{\varphi},s_\Gamma \models \psi$ iff $\psi \in \Gamma^{\Lambda}$.\\ \noindent {\bf Proof} By induction on the construction of the formula. For atoms $p\in\Lambda$, the fact that $M_{\varphi}, s_\Gamma\models p$ iff $p \in \Gamma^{\Lambda}$ follows by the definition of $V$.\\ \noindent {\bf Induction Hypothesis}: Suppose for some arbitrary $\chi,\xi\in \Lambda$, we have that for {\em all} worlds $s_\Delta$ in $W$:\\ $M_\varphi, s_\Delta \models \chi$ iff $\chi\in \Delta^{\Lambda}$, and $M_\varphi, s_\Delta \models \xi$ iff $\xi\in \Delta^{\Lambda}$.\\ \noindent {\bf Inductive step}: \begin{itemize} \item {\em Negation}: Suppose $\neg \chi\in \Lambda$. Now by the truth definition, $M_\varphi, s_\Gamma \models \neg \chi$ iff $M_\varphi, s_\Gamma \not\models \chi$. By the induction hypothesis, the latter is equivalent to $\chi\not\in \Gamma^{\Lambda}$. But this in turn is equivalent by Proposition~\ref{maxconsistent}(1) to $\neg \chi\in \Gamma^{\Lambda}$. \item {\em Conjunction}: Suppose $\chi\wedge\xi\in \Lambda$. Now by the truth definition, $M_\varphi, s_\Gamma\models\chi\wedge\xi$ iff $M_\varphi, s_\Gamma \models \chi$ and $M_\varphi, s_\Gamma \models \xi$. By the induction hypothesis, the latter is equivalent to $\chi\in\Gamma^{\Lambda}$ and $\xi\in\Gamma^{\Lambda}$, which by Proposition~\ref{maxconsistent}(2) is equivalent to $\chi\wedge \xi\in \Gamma^{\Lambda}$. \item {\em Box}: Suppose $\Box\chi\in \Lambda$. We know by the induction hypothesis that for all worlds $s_\Delta$ in $W$, $M_\varphi, s_\Delta\models \chi$ iff $\chi \in \Delta^{\Lambda}$. We want to show that $M_\varphi, s_\Gamma \models \Box\chi$ iff $\Box\chi \in \Gamma^{\Lambda}$. For one direction, suppose that $\Box\chi\in \Gamma^{\Lambda}$; then by definition of $R$, for all $s_\Delta$ with $s_\Gamma R s_\Delta$, we have $\Gamma\prec\Delta$, so $\chi\in\Delta^{\Lambda}$, so by the induction hypothesis, for all these $s_\Delta$, $M_\varphi, s_\Delta\models \chi$. Therefore, by the truth definition, $M_\varphi, s_\Gamma\models \Box\chi$. For the other direction, suppose that $\Box\chi\in\Lambda$ but $\Box\chi\not\in \Gamma^{\Lambda}$. Then (by Definition~\ref{Closure} and Proposition~\ref{maxconsistent}(4)), we have $\Diamond \neg \chi \in \Gamma^{\Lambda}$.\footnote{Or, if $\chi$ is of the form $\neg\chi_1$, then $\Diamond\chi_1\in\Gamma$, with $\Diamond\chi_1$ logically equivalent to $\Diamond \neg\chi$; in that case we reason further with $\Diamond\chi_1$.} Then in the step-by-step construction, in Case {\bf M} or Case {\bf B}, we have constructed a maximal $\mathbf{AX^{\Phi,F}_{GL}}$-consistent set $\Xi$ with $\Gamma \prec \Xi$ and $s_\Gamma R s_\Xi$, with $\neg \chi\in \Xi$, thus $\neg \chi\in \Xi^{\Lambda}$.
Now by the induction hypothesis, we have $M_\varphi, s_\Xi \not\models \chi$, so by the truth definition, $M_\varphi, s_\Gamma \not\models \Box\chi$.\\ \end{itemize} \noindent Finally, from the truth lemma and the fact above that $\neg \varphi \in \Psi^{\Lambda}$, we have $M_\varphi, s_\Psi \not \models \varphi$, so we have found our counter-model.\\ \noindent {\bf Step 4 $\Rightarrow$ 1 (b)}\\ \noindent Now we need to show that $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 0$.\\ \noindent We claim that almost surely, for a sufficiently large finite Kleitman-Rothschild type frame $F' = (W', R')$ of three layers, there is a bisimulation relation $Z$ from $M_\varphi=(W,R,V)$ defined in the (a) part of this step to $F'$, such that the image is a generated subframe $F''=(W'', R'')$ of $F'$, with $R''=R' \cap (W''\times W'')$. We will define a valuation $V'$ on $F'$ and let $V''$ be the restriction of $V'$ to $W''$, such that for all $w\in W$ and $w''\in W''$: if $wZw''$, then $w$ and $w''$ have the same valuation. Then, once the bisimulation $Z$ is given, suppose that $s_\Psi Zs''$ for some $s''\in W''$. By the bisimulation theorem~\cite{Benthem1983}, we have that for all $\psi\in L(\Phi)$, $M_\varphi, s_\Psi\models\psi \Leftrightarrow M'',s''\models\psi$; in particular, $M'',s''\not\models\varphi$. Because $M''$ is a generated submodel of $M'=(W',R',V')$, we also have $M',s''\not\models\varphi$. Conclusion: $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi)=0$.\\ \noindent We now sketch how to define the above-claimed bisimulation $Z$ from $M_\varphi$ to a generated subframe of such a sufficiently large Kleitman-Rothschild frame $F' = (W', R')$. There are three cases, corresponding to {\rm Case U}, {\rm Case M}, and {\rm Case B} of the step-by-step construction of the counter-model $M_{\varphi}$ in Step 4 $\Rightarrow$ 1 (a). One by one, we will show that the constructed counter-model can almost surely be mapped by a bisimulation to a model on a generated subframe of a Kleitman-Rothschild frame, as the number of nodes grows large enough.\\ \noindent {\bf Case U}\\ The one-point counter-model $M_\varphi=(W,R,V)$ against $\varphi$, with $W=\{s_\Psi\}$, can be turned into a counter-model on every three-layer Kleitman-Rothschild frame $F'$ as follows. Take a world $u$ in the top layer of $F'$, define $Z$ by $s_\Psi Z u$, and take a valuation for $\Phi$ that agrees on the world $u$ with the valuation of world $s_\Psi$ in $M_\varphi$. Then $Z$ is a bisimulation from $M_\varphi$ to a model on a one-point generated subframe of $F'$. This world provides a counterexample showing $F'\not\models\varphi$.\\ \noindent {\bf Case M}\\ The two-layer model $M_\varphi$ defined in Case M of part (a) of this step almost surely has as a bisimilar image a generated subframe of a large enough Kleitman-Rothschild frame $F' =(W', R')$, as follows. Take a world $m$ in the middle layer of the Kleitman-Rothschild frame $F'$ with sufficiently many (at least $n$) successors in the top layer, say $u_1, \ldots, u_r$ with $r\geq n$. Take $F''=(W'', R'')$ to be the upward-closed subframe of $F'$ generated by $m$. Define a mapping $Z$ by $s_\Psi Z m$, $s_{\Psi_i} Z u_i$ for all $i < n$, and $s_{\Psi_n} Z u_i$ for all $i$ in $n, \ldots, r$. This mapping satisfies the `forth' condition as well as the `back' condition.
Choose the valuation $V''$ for $\Phi$ such that $m$ has the same valuation as $s_\Psi$, while $u_i$ has the same valuation as $s_{\Psi_i}$ for $i<n$, and $u_i$ has the same valuation as $s_{\Psi_n}$ for all $i$ in $n, \ldots, r$. So $Z$ is a bisimulation.\\ \noindent {\bf Case B}\\ The three-layer model $M_\varphi= (W, R, V)$ defined in Case B of part (a) of this step can be embedded into almost every sufficiently large Kleitman-Rothschild frame $F'=(W',R')$ in the sense that there is a bisimulation to a model on a generated subframe $F''$ of $F'$. Pick pairwise distinct elements $u_{i,j}$ for all $\langle i,j\rangle\in A$ and $v_i$ for all $i\in\{1,\ldots,k\}$ in the upper layer $L_3$ of $F'$.\footnote{This is possible because there are at least $k \cdot (l-k) + k$ elements of $L_3$.} Now take any $b$ in the bottom layer $L_1$. Then by a number of applications of extension axiom (c), we find members $m_1, m_{k+1}, \ldots, m_l$ in the middle layer $L_2$ such that: \begin{itemize} \item $bR' m_i$ for all $i$ in $1, k+1,\ldots, l$; \item $m_i R' u_{i,j}$ for all $\langle i,j\rangle\in A$; \item $m_1 R' v_i$ for all $i\in\{1,\ldots,k\}$; \item {\em not} $m_1 R' u_{i,j}$ for any $\langle i,j\rangle\in A$; \item {\em not} $m_j R' v_i$ for any $j\in \{ k+1,\ldots, l\}$ and $i\in\{1,\ldots,k\}$; \item {\em not} $m_i R' u_{i',j}$ for any $i\in \{ k+1,\ldots, l\}$ and $\langle i',j\rangle\in A$ with $i \not = i'$. \end{itemize} Finally, by extension axiom (b), there is a $w_0$ in $L_3$ different from all $u_{i,j}$, $v_{i}$ such that $m_i R' w_0$ for all $i$ in $1, k+1, \ldots, l$. Let $F''=(W'',R'')$ be the upward-closed subframe generated by $b$, with $R''=R'\cap (W''\times W'')$. Now define the mapping $Z$ from $M_\varphi$ to $F''$ such that: \begin{itemize} \item $s_{\Psi} Z b$; \item $s_{\Psi_i} Z m_i$ for $i \in \{k+1,\ldots, l\}$; \item $s_{X_{i,j}} Z u_{i,j}$ for all $\langle i,j\rangle\in A$; \item $s_{\Xi_{i}} Z v_{i}$ for all $i\in\{1,\ldots,k\}$; \item $s_{\Xi_0} Z v$ for all other $v$ (not of the form $u_{i,j}$) with $m_i R' v$ for $i \in \{k+1,\ldots, l\}$; \item $s_{\Xi_0} Z w_0$; \item $s_{\Psi_1} Z m$ for all $m$ in $L_2$ other than $m_{k+1},\ldots, m_l$; \item Divide the (many) $w\in L_3$ for which $m_i R' w$ holds for no $i\in\{k+1,\ldots,l\}$ randomly into roughly equal-sized subsets, one serving as the set of $Z$-images of $s_{\Xi_0}$ and one for each $s_{\Xi_{i}}$ with $i\in\{1,\ldots,k\}$. \end{itemize} Now define the valuation $V''$ on $F''$ such that for all $p\in \Phi$ and all $s_\Gamma \in W$ and $s''\in W''$, if $s_\Gamma Z s''$, then $V''_{s''}(p)=1$ iff $V_{s_\Gamma}(p)=1$. Finally, one can check that $Z$ also satisfies the two other conditions for bisimulations: \begin{itemize} \item {\bf Forth:} Suppose $s_\Gamma Z s''$ and $s_\Gamma R s_\Delta$. Then case by case, one can show that there is a $v''\in W''$ such that $s''R'' v''$ and $s_\Delta Zv''$; \item {\bf Back:} Suppose $s_\Gamma Zs''$ and $s''R'' v''$. Then case by case, one can show that there is an $s_\Delta\in W$ such that $s_\Gamma R s_\Delta$ and $s_\Delta Zv''$. \end{itemize} \noindent Hereby we have sketched a proof that on almost all large enough Kleitman-Rothschild frames, $\varphi$ is not valid. \\ \noindent To conclude, all of 1, 2, 3, and 4 are equivalent.
\end{proof} \section{Complexity of almost sure model and frame satisfiability} \label{Complexity} It is well known that the satisfiability problem and the validity problem for {\bf GL} are PSPACE-complete (for a proof sketch, see~\cite{Verbrugge2017}), just like for other well-known modal logics such as {\bf K} and {\bf S4}. In contrast, for enumerably infinite vocabulary $\Phi$, the problem whether $\lim_{n\to\infty} \nu_{n,\Phi}(\varphi) = 0$ is in $\Delta^p_2$ (for the dag-representation of formulas), by adapting~\cite[Theorem 4.17]{halpern1994}. If $\Phi$ is finite, the decision problem whether $\lim_{n\to\infty} \nu_{n,\Phi}(\varphi) = 0$ is even in $P$, because one only needs to check validity of $\varphi$ in the fixed finite canonical model $\mathrm{M}^{\Phi}_{GL}$. For example, for $\Phi=\{p_1,p_2\}$, this model contains only 16 worlds, see Figure 2. The problem whether $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 0$ is in NP, more precisely, NP-complete for enumerably infinite vocabulary $\Phi$. To show that it is in NP, suppose one needs to decide whether $\lim_{n\to\infty} \mu_{n,\Phi}(\varphi) = 0$. By the proof of part 4 $\Rightarrow$ 1 of Theorem~\ref{GL-trees}, one can simply guess an at most 3-level irreflexive transitive frame of the appropriate form and of size $< |\varphi|^3$, a model on it, and a world in that model, and check (in polynomial time) whether $\varphi$ is not true in that world. NP-hardness is immediate for $\Phi$ infinite: for propositional $\psi$, we have $\psi\in\mathbf{SAT}$ iff $\lim_{n\to\infty} \mu_{n,\Phi}(\neg\psi) = 0$, since a propositional formula $\neg\psi$ is almost surely frame valid iff it is a tautology, that is, iff $\psi$ is unsatisfiable. In conclusion, if the polynomial hierarchy does not collapse and in particular (as most complexity theorists believe) $\Delta^p_2 \not=$ PSPACE and NP $\not=$ PSPACE, then the problems of deciding whether a formula is {\em almost always} valid in finite models or frames of provability logic are easier than deciding whether it is {\em always} valid. For comparison, remember that for first-order logic the difference between validity and almost sure validity is a lot starker still: Grandjean~\cite{grandjean1983} proved that the decision problem of almost sure validity in the finite is only PSPACE-complete, while the validity problem on {\em all} structures is undecidable~\cite{church1936,turing1937} and the validity problem on all finite structures is not even recursively enumerable~\cite{Trakhtenbrot1950}. \section{Conclusion and future work} \label{Discussion} We have proved zero-one laws for provability logic with respect to both model and frame validity. On the way, we have axiomatized validity in almost all relevant finite models and in almost all relevant finite frames, leading to two different axiom systems. If the polynomial hierarchy does not collapse, the two problems of `almost sure model/frame validity' are less complex than `validity in all models/frames'. Among finite frames in general, partial orders are rare -- using Fagin's extension axioms, it is easy to show that almost all finite frames are {\em not} partial orders. Therefore, results about almost sure frame validities in the finite do not transfer between frames in general and strict partial orders. Indeed, the logic of frame validities on finite irreflexive partial orders studied here is quite different from the modal logic of the validities in almost all finite frames~\cite{goranko2003,Goranko2020}.
One of the most interesting results in~\cite{goranko2003} is that frame validity does not transfer from almost all finite $\mathcal{K}$-frames to the countable random frame, although it does transfer in the other direction. In contrast, we have shown that for irreflexive transitive frames, validity does transfer in both directions between almost all finite frames and the countable random irreflexive Kleitman-Rothschild frame. \subsection{Future work} Currently, we are proving similar 0-1 laws for logics of reflexive transitive frames, such as {\bf S4} and Grzegorczyk logic, axiomatizing both almost sure model validity and almost sure frame validity. It turns out that Halpern and Kapron's claim that there is a 0-1 law for {\bf S4} frame validity can still be salvaged, albeit with a different, stronger axiom system, containing two infinite series of umbrella and diamond axioms similar to the ones in the current paper. Furthermore, it appears that one can do the same for logics of transitive frames that may be neither reflexive nor irreflexive, such as {\bf K4} and weak Grzegorczyk logic. \end{document}
\begin{document} \title{Bilinear Assignment Problem: Large Neighborhoods and Experimental Analysis of Algorithms} \author{\sc{Vladyslav Sokol}\thanks{{\tt [email protected]}. School of Computing Science, Simon Fraser University, 8888 University Drive, Burnaby, British Columbia, V5A 1S6, Canada} \and \sc{Ante \'Custi\'c}\thanks{{\tt [email protected]}. Department of Mathematics, Simon Fraser University Surrey, 250-13450 102nd AV, Surrey, British Columbia, V3T 0A3, Canada} \and \sc{Abraham P. Punnen}\thanks{{\tt [email protected]}. Department of Mathematics, Simon Fraser University Surrey, 250-13450 102nd AV, Surrey, British Columbia, V3T 0A3, Canada} \and \sc{Binay Bhattacharya}\thanks{{\tt [email protected]}. School of Computing Science, Simon Fraser University, 8888 University Drive, Burnaby, British Columbia, V5A 1S6, Canada}} \maketitle \begin{abstract} The \emph{bilinear assignment problem (BAP)} is a generalization of the well-known \emph{quadratic assignment problem (QAP)}. In this paper, we study the problem from the computational analysis point of view. Several classes of neighborhood structures are introduced for the problem along with some theoretical analysis. These neighborhoods are then explored within local search and variable neighborhood search frameworks with multistart to generate robust heuristic algorithms. Results of a systematic experimental analysis are presented, which demonstrate the effectiveness of our algorithms. In addition, we present several very fast construction heuristics. Our experimental results disclosed some interesting properties of the BAP model, different from those of comparable models. This is the first thorough experimental analysis of algorithms on BAP. We have also introduced benchmark test instances that can be used for future experiments on exact and heuristic algorithms for the problem. \noindent\emph{Keywords:} bilinear assignment problem, quadratic assignment problem, average solution value, exponential neighborhoods, heuristics, local search, variable neighborhood search, VLSN search. \end{abstract} \section{Introduction} \label{sec:intro} Given a four dimensional array $Q=(q_{ijkl})$ of size $m\times m\times n\times n$, an $m\times m$ matrix $C=(c_{ij})$ and an $n\times n$ matrix $D=(d_{kl})$, the {\it bilinear assignment problem} (BAP) can be stated as: \begin{align} \text{Minimize} \qquad &\sum_{i=1}^m\sum_{j=1}^m\sum_{k=1}^n\sum_{l=1}^n q_{ijkl}x_{ij}y_{kl} + \sum_{i=1}^m\sum_{j=1}^m c_{ij}x_{ij} + \sum_{k=1}^n\sum_{l=1}^n d_{kl}y_{kl} \label{of}\\ \text{subject to}\quad \ \ & \sum_{j=1}^m x_{ij}=1 \qquad \qquad i=1,2,\ldots,m, \label{x1}\\ &\sum_{i=1}^m x_{ij}=1 \qquad \qquad j=1,2,\ldots,m, \label{x2}\\ &\sum_{l=1}^n y_{kl}=1 \qquad \qquad k=1,2,\ldots,n, \label{y1}\\ &\sum_{k=1}^n y_{kl}=1 \qquad \qquad l=1,2,\ldots,n, \label{y2}\\ &x_{ij},\ y_{kl}\in \{0,1\} \qquad i,j=1,\ldots,m,\ \ k,l=1,\ldots,n. \label{int} \end{align} If we impose additional restrictions that $m=n$ and $x_{ij}=y_{ij}$ for all $i,j$, BAP becomes equivalent to the well-known \emph{quadratic assignment problem} (QAP) \cite{burkard2012assignment,cela2013quadratic}. As noted in \cite{custicbilinear}, the constraints $x_{ij}=y_{ij}$ can be enforced without explicitly stating them by modifying the entries of $Q,$ $C$ and $D$.
For example, replacing $c_{ij}$ by $c_{ij}+L$, $d_{ij}$ by $d_{ij}+L$ and $q_{ijij}$ by $q_{ijij}-2L$, for some large $L$, results in an increase in the objective function value by $\sum_{i,j=1}^n L(x_{ij}-2x_{ij}y_{ij}+y_{ij})=\sum_{i,j=1}^nL(x_{ij}-y_{ij})^2$. Since $L$ is large, in an optimal solution, $x_{ij}=y_{ij}$ is forced and hence the modified BAP becomes QAP. Therefore, BAP is also strongly NP-hard. Moreover, since the reduction described above preserves the objective values of the solutions that satisfy $x_{ij}=y_{ij}$, BAP inherits the approximability hardness of QAP \cite{sahni1976p}. That is, for any $\alpha > 1$, BAP does not have a polynomial time $\alpha$-approximation algorithm, unless P=NP. Further, BAP is NP-hard even if $m=n$ and $Q$ is a diagonal matrix \cite{custicbilinear}. A special case of BAP, called the independent quadratic assignment problem, was studied by Burkard et al.~\cite{burkard1998quadratic}, who identified polynomially solvable special cases. Since BAP is a generalization of the QAP, all of the applications of QAP can be solved as BAP. In addition, BAP can be used to model other discrete optimization problems with practical applications. Tsui and Chang \cite{tsui1990microcomputer,tsui1992optimal} used BAP to model a door dock assignment problem. Consider a sorting facility of a large shipping company where $m$ loaded inbound trucks are arriving from different locations, and they need to be assigned to $m$ inbound doors of the facility. The shipments from the inbound trucks need to be transferred to $n$ outbound trucks, which carry the shipments to different customer locations. The sorting facility also has $n$ outbound doors for the outbound trucks. Let $w_{ij}$ denote the amount of items from the $i$-th inbound truck that need to be transferred to the $j$-th outbound truck/customer location, and let $d_{ij}$ denote the distance between the $i$-th inbound door and the $j$-th outbound door. Then the problem of assigning inbound trucks to inbound doors and outbound trucks to outbound doors, so that the total work needed to transfer all items from inbound to outbound trucks is minimized, is exactly BAP with costs $q_{ijkl}=w_{ik}d_{jl}$. Torki et al.~\cite{torki1996low} used BAP to develop heuristic algorithms for QAP with a low rank cost matrix. BAP also encompasses the well-known disjoint matching problem \cite{custicbilinear,fon1997arrays,frieze1983complexity} and the axial 3-dimensional assignment problem \cite{custicbilinear,pierskalla1968letter}. Despite the applicability and unifying capabilities of the model, BAP has not been studied systematically from an experimental analysis point of view. In \cite{tsui1990microcomputer,tsui1992optimal}, the authors proposed local search and branch and bound algorithms to solve BAP, but detailed computational analysis was not provided. The model was specially structured to focus on a single application, which limited the applicability of these algorithms for the general case. Torki et al.~\cite{torki1996low} presented experimental results on algorithms for low rank BAP in connection with developing heuristics for QAP. To the best of our knowledge, no other experimental studies on the model are available. In this paper, we present various neighborhoods associated with a feasible solution of BAP and analyze their theoretical properties in the context of local search algorithms, particularly their worst case behavior. Some of these neighborhoods are of exponential size but can be searched for an improving solution in polynomial time.
Local search algorithms with such \textit{very large scale neighborhoods (VLSN)} have proved to be an effective solution approach for many hard combinatorial optimization problems \cite{ahuja2002survey,ahuja2007very}. We also present extensive experimental results obtained by embedding these neighborhoods within a \textit{variable neighborhood search (VNS)} framework, in addition to the standard and multi-start VLSN local search. Some very fast construction heuristics are also provided, along with experimental analysis. Although local search and variable neighborhood search are well-known algorithmic paradigms that have been thoroughly investigated in the context of various combinatorial optimization problems, to be effective and obtain superior outcomes, variable neighborhood search algorithms need to exploit special problem structures that efficiently link the various neighborhoods under consideration. In this sense, developing variable neighborhood search algorithms is always intriguing, especially for new optimization problems having several well designed neighborhood structures with interesting properties. Our experimental analysis shows that the average behavior of the algorithms is much better, and that the established negative worst-case performance hardly ever occurs. Such a conclusion can only be reached by systematic experimentation, as we have done. On a balance of computational time and solution quality, a multi-start based VLSN local search became our proposed approach, although, when allowed significantly more time, a strategic variable neighborhood search outperformed this algorithm in terms of solution quality.
The rest of the paper is organized as follows. In Section \ref{sec:notations} we specify notations and several relevant results that are used in the paper. In Section \ref{sec:constr} we describe several construction heuristics for BAP that generate reasonable solutions, often quickly. In Section \ref{sec:ls}, we present various neighborhood structures and analyze their theoretical properties. We then (Section \ref{sec:expsetup}) describe in detail the specifics of our experimental setup, as well as the sets of instances that we have generated for the problem. The benchmark instances that we have developed are available upon request from Abraham Punnen ([email protected]) for other researchers to further study the problem. The development of these test instances and best-known solutions is yet another contribution of this work. Sections \ref{sec:expconstr} and \ref{sec:expls} deal with the experimental analysis of construction heuristics and local search algorithms. Our computational results reveal some interesting and unexpected outcomes, particularly when comparing standard local search with its multi-start counterpart. In Section \ref{sec:expvnsms} we combine the better performing construction heuristics and different local search algorithms to develop several variable neighborhood search algorithms, and present a comparison with our best performing multistart local search algorithm. Concluding remarks are presented in Section \ref{sec:conclusion}.
\section{Notations and basic results} \label{sec:notations} Let $\mathcal{X}$ be the set of all 0-1 $m\times m$ matrices satisfying \eqref{x1} and \eqref{x2}, and let $\mathcal{Y}$ be the set of all 0-1 $n \times n$ matrices satisfying \eqref{y1} and \eqref{y2}. Also, let $\mathcal{F}$ be the set of all feasible solutions of BAP. Note that $|\mathcal{F}|=m!n!$. An instance of the BAP is completely represented by the triplet $(Q,C,D)$.
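For concreteness, the following minimal Python sketch (our own illustration, not the code used in the experiments later in the paper; all helper names are hypothetical) shows one way to store an instance as the triplet $(Q,C,D)$ of NumPy arrays and to evaluate the objective function \eqref{of}, with solutions encoded as permutations: $\pi(i)=j$ if and only if $x_{ij}=1$, and $\phi(k)=l$ if and only if $y_{kl}=1$.
\begin{verbatim}
import numpy as np

def bap_value(Q, C, D, pi, phi):
    # Objective f(x, y) with solutions encoded as permutations:
    # pi[i] = j iff x_ij = 1, and phi[k] = l iff y_kl = 1.
    m, n = len(pi), len(phi)
    quad = sum(Q[i, pi[i], k, phi[k]]
               for i in range(m) for k in range(n))
    lin = sum(C[i, pi[i]] for i in range(m)) + \
          sum(D[k, phi[k]] for k in range(n))
    return quad + lin
\end{verbatim}
A single evaluation takes $O(mn)$ time; the local search algorithms of Section \ref{sec:ls} avoid paying this cost for every candidate neighbor by maintaining auxiliary data structures.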
Let $M=M'=\{1,2,\ldots,m\}$ and $N=N'=\{1,2,\ldots,n\}$. An $\mb{x}\in \mathcal{X}$ assigns each $i\in M$ a unique $j\in M'$. Likewise, a $\mb{y}\in\mathcal{Y}$ assigns each $k\in N$ a unique $l \in N'$. Without loss of generality we assume that $m\leq n$. For $\mb{x}\in\mathcal{X}$ and $\mb{y}\in\mathcal{Y}$, $f(\mb{x},\mb{y})$ denotes the objective function value of ($\mb{x},\mb{y}$). Given an instance $(Q,C,D)$ of a BAP, let $\mathcal{A}(Q,C,D)$ be the average of the objective function values of all feasible solutions.
\begin{theorem}[\'Custi\'c et al.~\cite{custicbilinear}] \label{thm:Aa} ${\displaystyle \mathcal{A}(Q,C,D)=\frac{1}{mn}\sum_{i=1}^m\sum_{j=1}^m\sum_{k=1}^n\sum_{l=1}^n q_{ijkl}+ \frac{1}{m}\sum_{i=1}^m\sum_{j=1}^mc_{ij}+ \frac{1}{n}\sum_{k=1}^n\sum_{l=1}^nd_{kl}}$. \end{theorem}
Consider an equivalence relation $\sim$ on $\mathcal{F}$, where $(\mb{x},\mb{y})\sim(\mb{x}',\mb{y}')$ if and only if there exist $a\in\{0,1,\ldots,m-1\}$ and $b\in\{0,1,\ldots,n-1\}$ such that $x_{ij}=x'_{i(j+a \mod m)}$ for all $i,j$, and $y_{kl}=y'_{k(l+b \mod n)}$ for all $k,l$. Here and later in the paper we use the notation $x_{i(j+a \mod m)}$ in the sense that, if $(j+a) \mod m=0$, we assume it to refer to the variable $x_{im}$. Similar assumptions will be made for the other index of $x_{ij}$ and for the variables $y_{kl}$, to improve the clarity of presentation. Let us consider an example of an equivalence class for $\sim$. Given $a\in M$, $b\in N$, let $(\mb{x}^a,\mb{y}^b)\in\mathcal{F}$ be defined as
\[ x_{ij}^a= \begin{cases} 1 & \text{if } j=i+a \mod m, \\ 0 & \text{otherwise} \end{cases}\quad \text{and}\quad y_{kl}^b= \begin{cases} 1 & \text{if } l=k+b \mod n, \\ 0 & \text{otherwise}. \end{cases} \]
\begin{theorem}[\'Custi\'c et al.~\cite{custicbilinear}] \label{thm:minmax} For any instance $(Q,C,D)$ of BAP \[\min_{a\in M,b\in N}\{f(\mb{x}^a,\mb{y}^b)\}\leq\mathcal{A}(Q,C,D)\leq\max_{a\in M,b\in N}\{f(\mb{x}^a,\mb{y}^b)\}.\] \end{theorem}
It can be shown that any equivalence class defined by $\sim$ can be used to obtain the type of inequalities stated above. Theorem \ref{thm:minmax} provides a way to find a feasible solution to BAP with objective function value no worse than $\mathcal{A}(Q,C,D)$ in $O(m^2n^2)$ time. To achieve this, we search through the set of solutions defined by the equivalence class, with any feasible solution to BAP as a starting point. A feasible solution $(\mb{x},\mb{y})$ to BAP is said to be no better than the average if $f(\mb{x},\mb{y}) \geq \mathcal{A}(Q,C,D)$. In \cite{custicbilinear} we have provided the following lower bound on the number of feasible solutions that are no better than the average.
\begin{theorem}[\'Custi\'c et al.~\cite{custicbilinear}] \label{thm:dom} $|\{(\mb{x},\mb{y})\in\mathcal{F}\ \colon f(\mb{x},\mb{y})\geq \mathcal{A}(Q,C,D)\}|\geq (m-1)!(n-1)!$. \end{theorem}
An algorithm that is guaranteed to return a solution with objective function value at most $\mathcal{A}(Q,C,D)$ therefore produces a solution that is no worse than $(m-1)!(n-1)!$ feasible solutions. Thus, the domination ratio \cite{glover1997travelling,custic2017average} of such an algorithm is $\frac{1}{mn}$.
\section{Construction heuristics} \label{sec:constr} In this section, we consider heuristic algorithms that generate solutions to BAP using various construction approaches. Such algorithms are useful in situations where solutions of reasonable quality are needed quickly. These algorithms can also be used to generate starting solutions for more complex improvement based algorithms.
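Several of the heuristics below are naturally measured against the average solution value, and by Theorem \ref{thm:Aa} this baseline is cheap to compute. A minimal sketch, under the same hypothetical NumPy-based representation used in Section \ref{sec:notations}:
\begin{verbatim}
def average_value(Q, C, D):
    # Closed-form expression for A(Q, C, D), the average
    # objective value over all m!n! feasible solutions.
    m, n = C.shape[0], D.shape[0]
    return (Q.sum() / float(m * n) + C.sum() / float(m)
            + D.sum() / float(n))
\end{verbatim}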
Our first algorithm, called \textit{\textbf{Random}}, is the trivial approach of generating a feasible solution ($\mb{x},\mb{y}$). Both $\mb{x}$ and $\mb{y}$ are selected as random assignments in uniform fashion. It should be noted that the expected value of the solution produced by \textit{Random} is precisely $\mathcal{A}(Q,C,D)$. Let us now discuss a different randomized technique, called \textit{\textbf{RandomXYGreedy}}. This algorithm builds a solution by randomly picking a `not yet assigned' $i \in M$ or $k \in N$, and then setting $x_{ij}$ or $y_{kl}$ to $1$ for a `not yet assigned' $j \in M'$ or $l \in N'$ so that the total cost of the resulting \textit{partial solution} is minimized. A pseudo-code of \textit{RandomXYGreedy} is presented in Algorithm \ref{rxyg}. Here and later in the paper we describe the algorithms by assuming that the input BAP instance $(Q,C,D)$ has $C$ and $D$ as zero arrays. This restriction is for simplicity of presentation and affects neither the theoretical complexity of BAP nor the asymptotic computational complexity of the presented algorithms; the algorithms extend to the general case in a straightforward way. The running time of \textit{RandomXYGreedy} is $O(mn^2)$, as each addition to the solution is selected using a quadratic number of operations. However, just reading the data for the $Q$ array takes $O(m^2 n^2)$ time. For the rest of the paper we state the running times of our algorithms without including this input overhead.
\begin{algorithm} \caption{\textit{RandomXYGreedy}} \label{rxyg} \begin{algorithmic}[0]\scriptsize \Input integers $m, n$; $m \times m \times n \times n$ array $Q$ \Output feasible solution to BAP \State $x_{ij} \gets 0 \, \forall i, j$; $y_{kl} \gets 0 \, \forall k, l$ \While{not all $i \in M$ and $k \in N$ are assigned} \State randomly pick some $i \in M$ or $k \in N$ that is unassigned \If{$i$ is picked} \State $j' \gets$ random $j \in M$ that is unassigned; $\Delta' \gets \sum_{k, l \in N} q_{ij'kl} y_{kl}$ \ForAll{$j \in M$ that is unassigned} \State $\Delta \gets \sum_{k, l \in N} q_{ijkl} y_{kl}$ \Comment{value change if $i$ assigned to $j$} \If{$\Delta < \Delta'$} \State $j' \gets j$; $\Delta' \gets \Delta$ \EndIf \EndFor \State $x_{ij'} \gets 1$ \Comment{assign $i$ to $j'$} \Else \State $l' \gets$ random $l \in N$ that is unassigned; $\Delta' \gets \sum_{i, j \in M} q_{ijkl'} x_{ij}$ \ForAll{$l \in N$ that is unassigned} \State $\Delta \gets \sum_{i, j \in M} q_{ijkl} x_{ij}$ \Comment{value change if $k$ assigned to $l$} \If{$\Delta < \Delta'$} \State $l' \gets l$; $\Delta' \gets \Delta$ \EndIf \EndFor \State $y_{kl'} \gets 1$ \Comment{assign $k$ to $l'$} \EndIf \EndWhile \State \textbf{return} ($\mb{x}$, $\mb{y}$) \end{algorithmic} \end{algorithm}
Our next algorithm is fully deterministic and is called \textit{\textbf{Greedy}} (see Algorithm \ref{greedy}). It is similar to \textit{RandomXYGreedy}, except that, at each iteration, we select the best available $x_{ij}$ or $y_{kl}$ to be added to the current partial solution. We start the algorithm by choosing the partial solution $x_{i'j'} = 1$ and $y_{k'l'} = 1$, where $i', j', k', l'$ correspond to the smallest element in the array $Q$. The total running time of this heuristic is $O(n^3)$, assuming that the position of the smallest element $q_{i'j'k'l'}$ is provided.
\begin{algorithm} \caption{\textit{Greedy}} \label{greedy} \begin{algorithmic}[0]\scriptsize \Input integers $m, n$; $m \times m \times n \times n$ array $Q$ \Output feasible solution to BAP \State $x_{ij} \gets 0 \, \forall i, j$; $y_{kl} \gets 0 \, \forall k, l$ \State $i', j', k', l' \gets arg\,min_{i, j \in M, k, l \in N} q_{ijkl}$; $x_{i'j'} \gets 1$; $y_{k'l'} \gets 1$ \While{not all $i \in M$ and $k \in N$ are assigned} \State $\Delta'_x \gets \infty$; $\Delta'_y \gets \infty$ \ForAll{$i \in M$ that is unassigned} \ForAll{$j \in M$ that is unassigned} \State $\Delta \gets \sum_{k, l \in N} q_{ijkl} y_{kl}$ \Comment{value change if $i$ assigned to $j$} \If{$\Delta < \Delta'_x$} \State $i' \gets i$; $j' \gets j$; $\Delta'_x \gets \Delta$ \EndIf \EndFor \EndFor \ForAll{$k \in N$ that is unassigned} \ForAll{$l \in N$ that is unassigned} \State $\Delta \gets \sum_{i, j \in M} q_{ijkl} x_{ij}$ \Comment{value change if $k$ assigned to $l$} \If{$\Delta < \Delta'_y$} \State $k' \gets k$; $l' \gets l$; $\Delta'_y \gets \Delta$ \EndIf \EndFor \EndFor \If{$\Delta'_x \leq \Delta'_y$} \State $x_{i'j'} \gets 1$ \Comment{assign $i'$ to $j'$} \Else \State $y_{k'l'} \gets 1$ \Comment{assign $k'$ to $l'$} \EndIf \EndWhile \State \textbf{return} ($\mb{x}$, $\mb{y}$) \end{algorithmic} \end{algorithm}
\begin{theorem} The objective function value of a solution produced by the Greedy algorithm could be arbitrarily bad and could be worse than $\mathcal{A}(Q,C,D)$. \end{theorem}
\begin{proof} Consider the following BAP instance with $m \geq 2$ and $n \geq 3$: $C$ and $D$ are zero matrices, and all elements of the $m \times m \times n \times n$ array $Q$ are zero except $q_{1111}=-\epsilon, q_{1122}=q_{1133}=\epsilon, q_{2211}=q_{1123}=q_{1132}=2\epsilon, q_{2222}=q_{2233}=L$, where $\epsilon$ and $L$ are arbitrarily small and large positive numbers, respectively. At first the algorithm will assign $x_{11} = y_{11} = 1$, as $q_{1111}$ is the smallest element in the array. Next, all indices $i, j \in M$ such that $i, j > 2$ and $k, l \in N$ such that $k, l > 3$ will be assigned within their respective groups. This is due to the fact that any assignment in those sets adds no additional cost to the current partial solution. Following that, $y_{22} = y_{33} = 1$ will be added. Finally, $x_{22}$ will be set to $1$ to complete a solution with the cost $3\epsilon + 2L$. However, an optimal solution in this case contains $x_{11} = x_{22} = y_{11} = y_{23} = y_{32} = 1$, with an objective value of $5\epsilon$. Note that $\mathcal{A}(Q,C,D) = \frac{7\epsilon + 2L}{mn}$ and the result follows. \end{proof}
We also consider a randomized version of \textit{Greedy}, called \textit{\textbf{GreedyRandomized}}. In this variation a partial assignment is extended by a randomly picked $x_{ij}$ or $y_{kl}$ out of the $h$ best candidates (by solution value change), where $h$ is some fixed number. Such approaches are generally called semi-greedy algorithms and form an integral part of many GRASP algorithms \cite{hart1987semi,feo1989probabilistic}. To emphasize the randomized decisions in the algorithm and its linkages to GRASP, we call it \textit{GreedyRandomized}.
Finally, we discuss a construction heuristic based on rounding a fractional solution. In \cite{custicbilinear}, a discretization procedure was introduced that computes a feasible solution to BAP with objective function value no more than that of the fractional solution. Given a fractional solution to BAP ($\mb{x},\mb{y}$) (i.e.
a solution to BAP (\ref{of})-(\ref{y2}) without the integrality constraints (\ref{int})), we fix one side of the solution (say $\mb{x}$) and optimize $\mb{y}$ by solving a linear assignment problem to obtain a solution $\mb{\bar{y}}$. Then, we fix $\mb{\bar{y}}$ and solve a linear assignment problem to find a solution $\mb{\bar{x}}$. The solution ($\mb{\bar{x}},\mb{\bar{y}}$) is output as the result. We denote this approach as \textit{\textbf{Rounding}}.
\begin{theorem} A feasible solution $(\mb{x}^*,\mb{y}^*)$ to BAP with the cost $f(\mb{x}^*,\mb{y}^*)\leq \mathcal{A}(Q,C,D)$ can be obtained in $O(m^2n^2+n^3)$ time using the \textit{Rounding} algorithm. \end{theorem}
\begin{proof} Consider the fractional solution $(\mb{x},\mb{y})$ where $x_{ij}=1/m$ for all $i,j\in M$, and $y_{ij}=1/n$ for all $i,j\in N$. Then $(\mb{x},\mb{y})$ is a feasible solution to the relaxation of BAP obtained by removing the integrality restrictions \eqref{int}. It is easy to see that $f(\mb{x},\mb{y})=\mathcal{A}(Q,C,D)$. One of the properties of \textit{Rounding} discussed in \cite{custicbilinear} is that the resulting solution is no worse than the input fractional solution in terms of objective value. Applying \textit{Rounding} to $(\mb{x},\mb{y})$ thus yields the desired solution. \end{proof}
\textit{Rounding} provides an alternative to Theorem \ref{thm:minmax} for generating a BAP solution with objective value no worse than the average. Recall that, by Theorem \ref{thm:dom}, this solution is guaranteed to be no worse than $(m-1)!(n-1)!$ feasible solutions. It should be noted that this discretization procedure could also be applied to BAP fractional solutions obtained from other sources, such as the solution to the relaxed version of an integer linear programming reformulation of BAP. Some of the linearization reformulations \cite{kaufman1978algorithm,frieze1989algorithms,lawler1963quadratic,adams2007level} of the QAP can be modified to obtain corresponding linearizations of BAP. Selecting only the $\mb{x}$ and $\mb{y}$ parts of the continuous solutions and ignoring the other variables in the linearization formulations can be used to initiate the rounding algorithm discussed above. However, in this case, the resulting solution is not guaranteed to be no worse than the average.
\section{Neighborhood structures and properties} \label{sec:ls} Let us now discuss various neighborhoods associated with a feasible solution of BAP and analyze their properties. We also consider worst-case properties of a local optimum for these neighborhoods. All these neighborhoods are based on reassigning parts of $\mb{x}\in \mathcal{X}$, parts of $\mb{y}\in\mathcal{Y}$, or both. The neighborhoods that we consider can be classified into three categories: \textit{$h$-exchange neighborhoods}, \textit{$[h,p]$-exchange neighborhoods}, and \textit{shift based neighborhoods}.
\subsection{The $h$-exchange neighborhood} \label{sec:hex} In this class of neighborhoods, we apply an $h$-exchange operation to $\mb{x}$ while keeping $\mb{y}$ unchanged, or vice versa. Let us discuss this in detail with $h = 2$. The $2$-exchange neighborhood is well studied in the QAP literature. Our version of $2$-exchange for BAP is related to the QAP variation, but also has some significant differences due to the specific structure of our problem. Let $(\mb{x}, \mb{y})$ be a feasible solution to BAP. Consider two elements $i_1, i_2 \in M$, $j_1, j_2 \in M'$, such that $x_{i_1j_1} = x_{i_2j_2} = 1$.
Then the \textit{$2$-exchange} operation on the $\mb{x}$-variables produces $(\mb{x}', \mb{y})$, where $\mb{x}'$ is obtained from $\mb{x}$ by swapping the assignments of $i_1, i_2$ and $j_1, j_2$ (i.e. setting $x_{i_1j_2} = x_{i_2j_1} = 1$ and $x_{i_1j_1} = x_{i_2j_2} = 0$). Let $\Delta^x_{i_1i_2}$ be the change in the objective value from $(\mb{x}, \mb{y})$ to $(\mb{x}', \mb{y})$. That is,
\begin{equation} \begin{split} \Delta^x_{i_1i_2} = & f(\mb{x}', \mb{y}) - f(\mb{x}, \mb{y})\\ = & \sum_{i=1}^m\sum_{j=1}^m\sum_{k=1}^n\sum_{l=1}^n q_{ijkl}x'_{ij}y_{kl} + \sum_{i=1}^m\sum_{j=1}^m c_{ij}x'_{ij} + \sum_{k=1}^n\sum_{l=1}^n d_{kl}y_{kl}\\ & - \sum_{i=1}^m\sum_{j=1}^m\sum_{k=1}^n\sum_{l=1}^n q_{ijkl}x_{ij}y_{kl} - \sum_{i=1}^m\sum_{j=1}^m c_{ij}x_{ij} - \sum_{k=1}^n\sum_{l=1}^n d_{kl}y_{kl}\\ = & \sum_{k=1}^n\sum_{l=1}^n (q_{i_1j_2kl} + q_{i_2j_1kl} - q_{i_1j_1kl} - q_{i_2j_2kl}) y_{kl} + c_{i_1j_2} + c_{i_2j_1} - c_{i_1j_1} - c_{i_2j_2}. \end{split} \end{equation}
Let $2exchangeX(\mb{x}, \mb{y})$ be the set of all feasible solutions $(\mb{x}', \mb{y})$ obtained from $(\mb{x}, \mb{y})$ by applying the $2$-exchange operation for all $i_1, i_2 \in M$ (with corresponding $j_1, j_2 \in M'$). Efficient computation of $\Delta^x_{i_1i_2}$ is crucial in developing fast algorithms that use this neighborhood. For a fixed $\mb{y}$, consider the $m \times m$ matrix $E$ such that $e_{ij} = \sum_{k=1}^n\sum_{l=1}^n q_{ijkl} y_{kl} + c_{ij}$. Then we can write $\Delta^x_{i_1i_2} = e_{i_1j_2} + e_{i_2j_1} - e_{i_1j_1} - e_{i_2j_2}$. If the matrix $E$ is available, this calculation can be done in constant time, and hence the neighborhood $2exchangeX(\mb{x}, \mb{y})$ can be explored in $O(m^2)$ time for an improving solution. Note that the values of $E$ depend only on $\mb{y}$ and not on $\mb{x}$. Thus, we do not need to update $E$ within a local search algorithm as long as $\mb{y}$ remains unchanged.
Likewise, we can define a 2-exchange operation on $\mb{y}$ by keeping $\mb{x}$ constant. Consider two elements $k_1, k_2 \in N$ and let $l_1, l_2$ be the corresponding assignments in $N'$, such that $y_{k_1l_1} = y_{k_2l_2} = 1$. Then the $2$-exchange operation produces $(\mb{x}, \mb{y}')$, where $\mb{y}'$ is obtained from $\mb{y}$ by swapping the assignments of $k_1, k_2$ and $l_1, l_2$ (i.e. setting $y_{k_1l_2} = y_{k_2l_1} = 1$ and $y_{k_1l_1} = y_{k_2l_2} = 0$). Let $\Delta^y_{k_1k_2}$ be the change in the objective value from $(\mb{x}, \mb{y})$ to $(\mb{x}, \mb{y}')$. That is,
\begin{equation} \begin{split} \Delta^y_{k_1k_2} = & f(\mb{x}, \mb{y}') - f(\mb{x}, \mb{y})\\ = & \sum_{i=1}^m\sum_{j=1}^m (q_{ijk_1l_2} + q_{ijk_2l_1} - q_{ijk_1l_1} - q_{ijk_2l_2}) x_{ij} + d_{k_1l_2} + d_{k_2l_1} - d_{k_1l_1} - d_{k_2l_2}. \end{split} \end{equation}
Let $2exchangeY(\mb{x}, \mb{y})$ be the set of all feasible solutions $(\mb{x}, \mb{y}')$ obtained from $(\mb{x}, \mb{y})$ by applying the 2-exchange operation on $\mb{y}$ while keeping $\mb{x}$ unchanged. As in the previous case, efficient computation of $\Delta^y_{k_1k_2}$ is crucial in developing fast algorithms that use this neighborhood. For a fixed $\mb{x}$, consider an $n \times n$ matrix $G$ such that $g_{kl} = \sum_{i=1}^m\sum_{j=1}^m q_{ijkl} x_{ij} + d_{kl}$. Then we can write $\Delta^y_{k_1k_2} = g_{k_1l_2} + g_{k_2l_1} - g_{k_1l_1} - g_{k_2l_2}$. If the matrix $G$ is available, this calculation can be done in constant time, and hence the neighborhood $2exchangeY(\mb{x}, \mb{y})$ can be explored in $O(n^2)$ time for an improving solution.
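To make this bookkeeping concrete, the following minimal Python sketch (our own illustration, with hypothetical helper names, in the NumPy-based representation used earlier) builds the matrix $E$ and evaluates $\Delta^x_{i_1i_2}$ in constant time; the matrix $G$ and $\Delta^y_{k_1k_2}$ are handled symmetrically.
\begin{verbatim}
def build_E(Q, C, phi):
    # e_ij = sum_{k,l} q_ijkl * y_kl + c_ij, for the fixed y
    # encoded by the permutation phi (phi[k] = l iff y_kl = 1).
    E = C.astype(float).copy()
    for k in range(len(phi)):
        E += Q[:, :, k, phi[k]]
    return E

def delta_x(E, pi, i1, i2):
    # O(1) objective change of swapping the assignments
    # of i1 and i2 in x (pi[i] = j iff x_ij = 1).
    j1, j2 = pi[i1], pi[i2]
    return E[i1, j2] + E[i2, j1] - E[i1, j1] - E[i2, j2]
\end{verbatim}
Scanning all pairs $i_1 < i_2$ with such a constant-time evaluation explores $2exchangeX(\mb{x}, \mb{y})$ in $O(m^2)$ time, as claimed above.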
Note that the values of $G$ depend only on $\mb{x}$ and not on $\mb{y}$. Thus, we do not need to update $G$ within a local search algorithm as long as $\mb{x}$ remains unchanged. The \textit{2-exchange neighborhood} of ($\mb{x}, \mb{y}$), denoted by $2exchange(\mb{x}, \mb{y})$, is given by $$2exchange(\mb{x}, \mb{y}) = 2exchangeX(\mb{x}, \mb{y})\cup 2exchangeY(\mb{x}, \mb{y}).$$ In a local search algorithm based on the $2exchange(\mb{x}, \mb{y})$ neighborhood, after each move either $\mb{x}$ or $\mb{y}$ will be changed, but not both. To maintain our data structures, if $\mb{y}$ is changed, we update $E$ in $O(m^2)$ time. More specifically, suppose a $2$-exchange operation takes $(\mb{x}, \mb{y})$ to $(\mb{x}, \mb{y}')$; then $E$ is updated as $e_{ij} \gets e_{ij} + q_{ijk_1l_2} + q_{ijk_2l_1} - q_{ijk_1l_1} - q_{ijk_2l_2}$, where $k_1, k_2 \in N, l_1, l_2 \in N'$ are the corresponding positions where the swap has occurred. Analogous changes will be performed on $G$ in $O(n^2)$ time if $(\mb{x}, \mb{y})$ is changed to $(\mb{x}', \mb{y})$.
The general \textit{$h$-exchange neighborhood} for BAP is obtained by replacing $2$ in the above definition by $2, 3, \ldots, h$. Notice that the $h$-exchange neighborhood can be searched for an improving solution in $O(n^h)$ time, and already for $h=3$, the running time of the algorithm that completely explores this neighborhood is $O(n^3)$. Within the same asymptotic running time we could instead optimally reassign the whole $\mb{x}$ (or $\mb{y}$) by solving the linear assignment problem with $E$ (or $G$, respectively) as the cost matrix. This fact suggests that any $h$ larger than $3$ potentially leads to a weaker algorithm in terms of running time. Such a full reassignment can be viewed as a local search based on the special case of the $h$-exchange neighborhood with $h = n$. This special local search will be referred to as the \textit{\textbf{Alternating Algorithm}}, and it alternates between re-optimizing $\mb{x}$ and $\mb{y}$. For clarity, the pseudo code for this approach is presented in Algorithm \ref{AA}. The \textit{Alternating Algorithm} follows a strategy well known in the non-linear programming literature as \textit{coordinate-wise descent}. Similar underlying ideas are used in the context of other bilinear programming problems by various authors \cite{konno1980maximizing,karapetyan2012heuristic,punnen2015average}.
\begin{algorithm} \caption{\textit{Alternating Algorithm}} \label{AA} \begin{algorithmic}[0]\scriptsize \Input integers $m, n$; $m \times m \times n \times n$ array $Q$; feasible solution ($\mb{x}$, $\mb{y}$) to BAP \Output feasible solution to BAP \While{True} \State $e_{ij} \gets \sum_{k, l \in N} q_{ijkl} y_{kl} \, \forall i, j \in M$ \State $\mb{x}^* \gets arg\,min_{\mb{x}' \in \mathcal{X}} \sum_{i, j \in M} e_{ij} x'_{ij}$ \Comment{solving assignment problem for $\mb{x}$} \State $g_{kl} \gets \sum_{i, j \in M} q_{ijkl} x^*_{ij} \, \forall k, l \in N$ \State $\mb{y}^* \gets arg\,min_{\mb{y}' \in \mathcal{Y}} \sum_{k, l \in N} g_{kl} y'_{kl}$ \Comment{solving assignment problem for $\mb{y}$} \If{$f(\mb{x}^*, \mb{y}^*) = f(\mb{x}, \mb{y})$ } \State \textbf{break} \EndIf \State $\mb{x} \gets \mb{x}^*; \, \mb{y} \gets \mb{y}^*$ \EndWhile \State \textbf{return} ($\mb{x}$, $\mb{y}$) \end{algorithmic} \end{algorithm}
\begin{theorem} \label{thm:hex} The objective function value of a locally optimal solution for BAP based on the $h$-exchange neighborhood could be arbitrarily bad and could be worse than $\mathcal{A}(Q,C,D)$, for any $h$.
\end{theorem}
\begin{proof} For a small $\epsilon > 0$ and a large $L$, consider a BAP instance $(Q,C,D)$ such that all of its cost elements are equal to $0$, except $c_{11}=c_{22}=d_{11}=d_{22}=-\epsilon$ and $q_{1212}=-L$. Let a feasible solution $(\mb{x},\mb{y})$ be such that $x_{11}=x_{22}=y_{11}=y_{22}=1$. Then $(\mb{x},\mb{y})$ is a local optimum for the $h$-exchange neighborhood. Note that this local optimum can only be improved by simultaneously making changes to both $\mb{x}$ and $\mb{y}$, which is not possible in this neighborhood. The objective function value of $(\mb{x},\mb{y})$ is $-4\epsilon$, while the optimal objective value is $-L$. \end{proof}
Despite the negative result of Theorem \ref{thm:hex}, we will see in Section \ref{sec:expls} that, on average, $2$-exchange and $n$-exchange (with the \textit{Alternating Algorithm}) are two of the most efficient neighborhoods to explore from a practical point of view. Moreover, when restricted to non-negative input arrays, we can establish some performance guarantees for $2$-exchange (and consequently for any $h$-exchange) local search. In particular, we derive upper bounds on the local optimum solution value and on the number of iterations needed to reach a solution not worse than this value bound. The proof technique follows \cite{angel1998quality}, where the authors obtained similar bounds for the Koopmans-Beckmann QAP. In fact, these results can be obtained for the general QAP as well, by modifying the following proof accordingly.
\begin{theorem} \label{thm:2exA} For any BAP instance $(Q,C,D)$ with non-negative $Q$ and zero matrices $C,D$, the cost of the local optimum for the $2$-exchange neighborhood is $f^* \leq \frac{2mn}{m + n} \mathcal{A}(Q,C,D)$. \end{theorem}
\begin{proof} In this proof, for simplicity, we represent BAP as a permutation problem. As such, the permutation formulation of BAP is
\begin{equation} \min_{\pi \in \Pi, \phi \in \Phi} \sum_{i=1}^m \sum_{k=1}^n q_{i \, \pi(i) \, k \, \phi(k)}, \end{equation}
where $\Pi$ and $\Phi$ are the sets of all permutations on $\{1, 2, \ldots, m\}$ and $\{1, 2, \ldots, n\}$, respectively. The cost of a particular permutation pair $\pi, \phi$ is $f(\pi, \phi) = \sum_{i=1}^m \sum_{k=1}^n q_{i \, \pi(i) \, k \, \phi(k)}$. Let $\pi_{ij}$ be the permutation obtained by applying a single $2$-exchange operation to $\pi$ on indices $i$ and $j$. Define $\delta^{\pi}_{ij}$ as the objective value difference after applying such a $2$-exchange:
\begin{equation} \delta^{\pi}_{ij}(\pi, \phi) = f(\pi_{ij}, \phi) - f(\pi, \phi) = \sum_{k = 1}^n \left(q_{i \, \pi(j) \, k \, \phi(k)} + q_{j \, \pi(i) \, k \, \phi(k)} - q_{i \, \pi(i) \, k \, \phi(k)} - q_{j \, \pi(j) \, k \, \phi(k)}\right). \nonumber \end{equation}
Similarly we can define $\phi_{kl}$ and $\delta^{\phi}_{kl}$:
\begin{equation} \delta^{\phi}_{kl}(\pi, \phi) = f(\pi, \phi_{kl}) - f(\pi, \phi) = \sum_{i = 1}^m \left(q_{i \, \pi(i) \, k \, \phi(l)} + q_{i \, \pi(i) \, l \, \phi(k)} - q_{i \, \pi(i) \, k \, \phi(k)} - q_{i \, \pi(i) \, l \, \phi(l)}\right).
\nonumber \end{equation}
Summing up over all possible $\delta^{\pi}_{ij}$ and $\delta^{\phi}_{kl}$ we get
\begin{align} \label{deltapi} \sum_{i,j=1}^m \delta^{\pi}_{ij}(\pi, \phi) \, &= \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(j) \, k \, \phi(k)} + \sum_{i,j=1}^m \sum_{k=1}^n q_{j \, \pi(i) \, k \, \phi(k)} - \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(i) \, k \, \phi(k)} - \sum_{i,j=1}^m \sum_{k=1}^n q_{j \, \pi(j) \, k \, \phi(k)} \nonumber\\ &= 2 \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(j) \, k \, \phi(k)} - 2m f(\pi, \phi), \end{align}
\begin{align} \label{deltaphi} \sum_{k,l=1}^n \delta^{\phi}_{kl}(\pi, \phi) = 2 \sum_{i=1}^m \sum_{k,l=1}^n q_{i \, \pi(i) \, k \, \phi(l)} - 2n f(\pi, \phi). \end{align}
Using (\ref{deltapi}) and (\ref{deltaphi}) we can now compute the average cost change after a $2$-exchange operation on the solution $(\pi, \phi)$.
\begin{align} \label{DeltaB} \Delta(\pi, \phi) \, &= \frac{\sum_{i,j=1}^m \delta^{\pi}_{ij}(\pi, \phi) + \sum_{k,l=1}^n \delta^{\phi}_{kl}(\pi, \phi)}{m^2 + n^2} \nonumber\\ &= \frac{2 \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(j) \, k \, \phi(k)} + 2 \sum_{i=1}^m \sum_{k,l=1}^n q_{i \, \pi(i) \, k \, \phi(l)} - 2(m + n)f(\pi, \phi)}{m^2 + n^2} \nonumber\\ & = \frac{2 \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(j) \, k \, \phi(k)} + 2 \sum_{i=1}^m \sum_{k,l=1}^n q_{i \, \pi(i) \, k \, \phi(l)}}{m^2 + n^2} \! - \! \lambda f(\pi, \phi) \! + \! \lambda \frac{2mn}{m \! + \! n} \mathcal{A} \! - \! \lambda \frac{2mn}{m \! + \! n} \mathcal{A} \nonumber\\ & \leq - \lambda(f(\pi, \phi) - \frac{2mn}{m + n} \mathcal{A}) + \mu - \lambda \frac{2mn}{m + n} \mathcal{A}, \end{align}
where $\lambda = 2 \frac{m + n}{m^2 + n^2}$ and $\mu = \max_{\pi \in \Pi, \phi \in \Phi} \left[\dfrac{2 \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(j) \, k \, \phi(k)} + 2 \sum_{i=1}^m \sum_{k,l=1}^n q_{i \, \pi(i) \, k \, \phi(l)}}{m^2 + n^2}\right]$. Note that both $\lambda$ and $\mu$ do not depend on any particular solution and are fixed for a given BAP instance.
We are ready to prove the theorem by contradiction. Let $(\pi^*, \phi^*)$ be a local optimum for the $2$-exchange local search, with objective function cost $f^* = f(\pi^*, \phi^*)$. Assume now that $f(\pi^*, \phi^*) > \frac{2mn}{m + n} \mathcal{A}$. Then $-\lambda(f(\pi^*, \phi^*) - \frac{2mn}{m + n} \mathcal{A}) < 0$ and
\begin{align} \mu - \lambda \frac{2mn}{m + n} \mathcal{A} = \max_{\pi \in \Pi, \phi \in \Phi} \, &\left[\frac{2 \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(j) \, k \, \phi(k)} + 2 \sum_{i=1}^m \sum_{k,l=1}^n q_{i \, \pi(i) \, k \, \phi(l)}}{m^2 + n^2}\right] \nonumber\\ - \, &2 \frac{m + n}{m^2 + n^2} \frac{2mn}{m+n} \frac{1}{mn} \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl} \nonumber\\ = \max_{\pi \in \Pi, \phi \in \Phi} \, &\left[\frac{2 \sum_{i,j=1}^m \sum_{k=1}^n q_{i \, \pi(j) \, k \, \phi(k)}}{m^2 + n^2} + \frac{2 \sum_{i=1}^m \sum_{k,l=1}^n q_{i \, \pi(i) \, k \, \phi(l)}}{m^2 + n^2}\right] \nonumber\\ - \, &\frac{2 \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl}}{m^2 + n^2} - \frac{2 \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl}}{m^2 + n^2} \leq 0, \end{align}
which implies $\Delta(\pi^*, \phi^*) < 0$. As $\Delta$ is the average cost difference after applying a $2$-exchange, there exists some swap that decreases the solution cost by at least $-\Delta(\pi^*, \phi^*)$, which contradicts $(\pi^*, \phi^*)$ being a local optimum. \end{proof}
It is easy to see that the bound $\mu \leq \lambda \frac{2mn}{m + n} \mathcal{A}$ from Theorem \ref{thm:2exA} is tight.
Consider some arbitrary bilinear assignment $(\pi, \phi)$, and set all $q_{ijkl}$ to zero except $q_{i \, \pi(i) \, k \, \phi(k)} = 1, \, \forall i \, \forall k$. Then $\mu = 4 \dfrac{\sum_{i=1}^m \sum_{k=1}^n q_{i \, \pi(i) \, k \, \phi(k)}}{m^2 + n^2} = \lambda \frac{2mn}{m + n} \mathcal{A} = \frac{4mn}{m^2 + n^2}$.
\begin{theorem} \label{thm:2ext} For any BAP instance $(Q,C,D)$ with elements of $Q$ restricted to non-negative integers and zero matrices $C,D$, the local search algorithm that explores the $2$-exchange neighborhood will reach a solution with cost at most $\frac{2mn}{m + n} \mathcal{A}(Q,C,D)$ in $O\left(\frac{m^2 + n^2}{m + n} \log{\sum q_{ijkl}}\right)$ iterations. \end{theorem}
\begin{proof} Inequality (\ref{DeltaB}) can also be written as $\Delta(\pi, \phi) \leq -\lambda f(\pi, \phi) + \mu$, and so any solution with $f(\pi, \phi) > \frac{\mu}{\lambda}$ would yield $\Delta(\pi, \phi) < 0$ and would have some improving $2$-exchange available. Note that $\frac{2mn}{m + n} \mathcal{A} \geq \frac{\mu}{\lambda}$. Consider the cost $f'(\pi, \phi) = f(\pi, \phi) - \frac{\mu}{\lambda}$. At every step of the $2$-exchange local search $f'(\pi, \phi)$ is decreased by at least $\Delta(\pi, \phi)$ and becomes at most $$f'(\pi, \phi) + \Delta(\pi, \phi) \leq f'(\pi, \phi) + (-\lambda f(\pi, \phi) + \mu) = f'(\pi, \phi) - \lambda f'(\pi, \phi) = (1 - \lambda)f'(\pi, \phi).$$ Since the elements of $Q$ are integers, the cost at each step must decrease by at least $1$. Then the number of iterations $t$ for $f'(\pi, \phi)$ to become less than or equal to zero has to satisfy
\begin{align} (1 - \lambda)^{t-1} (f_{\text{max}} - \frac{\mu}{\lambda}) - (1 - \lambda)^{t} (f_{\text{max}} - \frac{\mu}{\lambda}) &\geq 1, \nonumber\\ (1 - \lambda)^{t-1} (f_{\text{max}} - \frac{\mu}{\lambda}) (1 - (1 - \lambda)) &\geq 1, \nonumber\\ (1 - \lambda)^{t-1} &\geq \frac{1}{(f_{\text{max}} - \frac{\mu}{\lambda}) \lambda}, \nonumber\\ (t - 1) \log{(1 - \lambda)} &\geq -\log{\lambda (f_{\text{max}} - \frac{\mu}{\lambda})}, \nonumber\\ t &\leq 1 + \frac{-\log{\lambda (f_{\text{max}} - \frac{\mu}{\lambda})}}{\log{(1 - \lambda)}}, \end{align}
where $f_{\text{max}}$ is the highest possible solution value. It follows that
\begin{equation} t \in O\left(\frac{1}{\lambda} \log{\lambda (f_{\text{max}} - \frac{\mu}{\lambda})}\right) = O\left(\frac{m^2 + n^2}{m + n} \log{\frac{m + n}{m^2 + n^2} (f_{\text{max}} - \frac{\mu}{\lambda})}\right). \end{equation}
This, together with the fact that $f_{\text{max}} - \frac{\mu}{\lambda} \leq f_{\text{max}} \leq \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl}$, completes the proof. \end{proof}
It should be noted that the solution considered in the statement of Theorem \ref{thm:2ext} may not be a local optimum. The theorem simply states that a solution of the desired quality will be reached by the $2$-exchange local search in polynomial time. It is known that for QAP, $2$-exchange local search may sometimes need an exponential number of steps to reach a local optimum \cite{pardalos1994quadratic}.
\subsection{[$h,p$]-exchange neighborhoods} \label{sec:hpex} Recall that in the $h$-exchange neighborhood we change either the $\mb{x}$ variables or the $\mb{y}$ variables, but not both. Simultaneous changes in $\mb{x}$ and $\mb{y}$ could lead to more powerful neighborhoods, but with additional computational effort in exploring them. With this motivation, we introduce the \textit{[$h,p$]-exchange neighborhood} for BAP.
In the $[h,p]$-exchange neighborhood, for each $h$-exchange operation on the $\mb{x}$ variables, we consider all possible $p$-exchange operations on the $\mb{y}$ variables. Thus, the [$h,p$]-exchange neighborhood is the set of all solutions $(\mb{x}', \mb{y}')$ obtained from the given solution $(\mb{x}, \mb{y})$, such that $\mb{x}'$ differs from $\mb{x}$ in at most $h$ assignments, and $\mb{y}'$ differs from $\mb{y}$ in at most $p$ assignments. The size of this neighborhood is $\Theta(m^hn^p)$.
\begin{theorem} The objective function value of a locally optimal solution for the $[h,p]$-exchange neighborhood could be arbitrarily bad. If $h < \frac{m}{2}$ or $p < \frac{n}{2}$, this value could be arbitrarily worse than $\mathcal{A}(Q,C,D)$. \end{theorem}
\begin{proof} Let $\epsilon > 0$ be an arbitrarily small number and $L$ an arbitrarily large number. Consider the BAP instance $(Q,C,D)$ such that all of the associated cost elements are equal to $0$, except $q_{iikk}=-\epsilon, q_{i(i+1 \mod m)k(k+1 \mod n)}=-L, q_{iik(k+1 \mod n)}=\frac{hL}{m - h} \quad \forall i \in M \, \forall k \in N$. Let $(\mb{x},\mb{y})$ be a feasible solution such that $x_{ii}=1 \quad \forall i \in M$ and $y_{kk}=1 \quad \forall k \in N$. Note that $f(\mb{x},\mb{y}) = -mn\epsilon$.
We first show that $(\mb{x},\mb{y})$ is a local optimum for the $[h,p]$-exchange neighborhood. Assume the opposite, i.e.\ that $(\mb{x},\mb{y})$ is not a local optimum; then there exists a solution $(\mb{x}',\mb{y}')$ with $\mb{x}'$ different from $\mb{x}$ in at most $h$ assignments, $\mb{y}'$ different from $\mb{y}$ in at most $p$ assignments, and $f(\mb{x}',\mb{y}') - f(\mb{x},\mb{y}) < 0$. Since the summation for $f(\mb{x},\mb{y})$ is comprised of exactly $mn$ elements of $Q$ with value $-\epsilon$, the only way to get an improving solution is to gain some number of elements with value $-L$, and therefore to flip some number of $x_{ii}$ to $x_{i(i + 1 \mod m)}$ and $y_{kk}$ to $y_{k(k + 1 \mod n)}$. Let $1 \leq u \leq h$ and $1 \leq v \leq p$ be the numbers of such elements, $u = |\{i \in M \colon x'_{i(i + 1 \mod m)} = 1\}|$ and $v = |\{k \in N \colon y'_{k(k + 1 \mod n)} = 1\}|$, in $(\mb{x}',\mb{y}')$. Then we know that the cost function $f(\mb{x}',\mb{y}')$ contains exactly $uv$ terms equal to $-L$. However, each of the $v$ elements of type $y'_{k(k + 1 \mod n)} = 1$ also contributes at least $(m - h) \frac{hL}{m - h} = hL$ to the objective value (due to the remaining $m - h$ elements of type $x_{ii} = 1$ being unchanged). From this we get that $f(\mb{x}',\mb{y}') > mn(-\epsilon) + uv(-L) + hv(L) = f(\mb{x},\mb{y}) + vL(h - u)$, and since $u \leq h$ we get $f(\mb{x}',\mb{y}') - f(\mb{x},\mb{y}) > 0$, which contradicts the assumption that $(\mb{x}',\mb{y}')$ is an improving solution over $(\mb{x},\mb{y})$. Hence, $(\mb{x},\mb{y})$ must be a local optimum.
We also get that an optimal solution for this instance is $x_{i(i+1 \mod m)}=1 \quad \forall i \in M$ and $y_{k(k+1 \mod n)}=1 \quad \forall k \in N$, with a total cost of $-mnL$. The average value of all feasible solutions is $\mathcal{A}(Q,C,D) = \dfrac{mn(-L) + mn(-\epsilon) + mn\frac{hL}{m - h}}{mn} = L\frac{2h-m}{m - h} - \epsilon$. The condition $h < \frac{m}{2}$ and an appropriate choice of $\epsilon, L$ guarantee that the considered local optimum is arbitrarily worse than $\mathcal{A}(Q,C,D)$. The construction of the example for the case $p < \frac{n}{2}$ is similar, so we omit the details. \end{proof}
One particular case of the $[h,p]$-exchange neighborhood deserves a special mention.
If $p=n$, then for each candidate $h$-exchange solution $\mb{x}'$ we consider all possible assignments for $\mb{y}$. To find the optimal $\mb{y}$ for a given $\mb{x}'$, we can solve a linear assignment problem with cost matrix $g_{kl} = \sum_{i=1}^m\sum_{j=1}^m q_{ijkl} x'_{ij} + d_{kl}$, as in the \textit{Alternating Algorithm}. An analogous situation appears when we consider the $[h,p]$-exchange neighborhood with $h=m$. The set of solutions defined by the union of the $[h,n]$-exchange and $[m,p]$-exchange neighborhoods, for the case $h=p$, will be called simply the \textit{optimized $h$-exchange neighborhood}. Note that the optimized $h$-exchange neighborhood is exponential in size, but it can be searched in $O(m^hn^3 + n^hm^3)$ time, due to the fact that for a fixed $\mb{x}$ ($\mb{y}$), the optimal $f(\mb{x}, \mb{y}')$ ($f(\mb{x}', \mb{y})$) can be found in $O(n^3)$ ($O(m^3)$) time. Neighborhoods similar to optimized $2$-exchange were used for the unconstrained bipartite binary quadratic program by Glover et al.\ \cite{glover2015integrating}, and for the bipartite quadratic assignment problem by Punnen and Wang \cite{punnen2016bipartite}. As in the case of $h$-exchange, some performance bounds for the optimized $h$-exchange neighborhood can be established if the input array $Q$ is not allowed to have negative elements.
\begin{theorem} \label{thm:2exOptA} There exists a solution with cost $f \leq (m + n) \mathcal{A}(Q,C,D)$ in the optimized $2$-exchange neighborhood of every solution to BAP, for any instance $(Q,C,D)$ with non-negative $Q$ and zero matrices $C,D$. \end{theorem}
\begin{proof} The proof follows the structure of the proof of Theorem \ref{thm:2exA}, and focuses on the average solution change for a given permutation pair solution ($\pi$, $\phi$) to BAP. Let $\pi_{ij}$ be the permutation obtained by applying a single $2$-exchange operation to $\pi$ on indices $i$ and $j$, and let $\phi^*$ be the optimal permutation that minimizes the solution cost for such a fixed $\pi_{ij}$. Define $\delta^{\pi}_{ij}$ as the objective value difference after applying such an operation:
\begin{equation} \delta^{\pi}_{ij}(\pi, \phi) = f(\pi_{ij}, \phi^*) - f(\pi, \phi) = \sum_{u=1}^m \sum_{k=1}^n q_{u \, \pi_{ij}(u) \, k \, \phi^*(k)} - f(\pi, \phi) \leq \frac{1}{n} \sum_{u = 1}^m \sum_{k,l = 1}^n q_{u \, \pi_{ij}(u) \, k \, l} - f(\pi, \phi). \nonumber \end{equation}
The last inequality is due to the fact that, for a fixed $\pi_{ij}$, the value of the solution with the optimal $\phi^*$ is not worse than the average value of all such solutions. We also know that, for any $k,l \in N$,
\begin{equation} \sum_{u = 1}^m q_{u \, \pi_{ij}(u) \, k \, l} = \sum_{u = 1}^m q_{u \, \pi(u) \, k \, l} + q_{i \, \pi(j) \, k \, l} + q_{j \, \pi(i) \, k \, l} - q_{i \, \pi(i) \, k \, l} - q_{j \, \pi(j) \, k \, l}, \nonumber \end{equation}
and, therefore,
\begin{equation} \delta^{\pi}_{ij}(\pi, \phi) \leq \frac{1}{n} \sum_{k,l = 1}^n \sum_{u = 1}^m q_{u \, \pi(u) \, k \, l} + \frac{1}{n} \sum_{k,l = 1}^n \left(q_{i \, \pi(j) \, k \, l} + q_{j \, \pi(i) \, k \, l} - q_{i \, \pi(i) \, k \, l} - q_{j \, \pi(j) \, k \, l}\right) - f(\pi, \phi). \nonumber \end{equation}
An analogous result can be derived for the similarly defined $\delta^{\phi}_{kl}$:
\begin{equation} \delta^{\phi}_{kl}(\pi, \phi) \leq \frac{1}{m} \sum_{i,j = 1}^m \sum_{v = 1}^n q_{i \, j \, v \, \phi(v)} + \frac{1}{m} \sum_{i,j = 1}^m \left(q_{i \, j \, k \, \phi(l)} + q_{i \, j \, l \, \phi(k)} - q_{i \, j \, k \, \phi(k)} - q_{i \, j \, l \, \phi(l)}\right) - f(\pi, \phi).
\nonumber \end{equation}
We can now get an upper bound on the average cost change after an optimized $2$-exchange operation on the solution $(\pi, \phi)$.
\begin{align} \Delta(\pi, \phi) \, &= \frac{\sum_{i,j=1}^m \delta^{\pi}_{ij}(\pi, \phi) + \sum_{k,l=1}^n \delta^{\phi}_{kl}(\pi, \phi)}{m^2 + n^2} \nonumber\\ & \leq \frac{\frac{m^2}{n} \sum_{u = 1}^m \sum_{k,l = 1}^n q_{u \, \pi(u) \, k \, l} + \frac{2}{n} \sum_{i,j = 1}^m \sum_{k,l = 1}^n q_{i \, j \, k \, l} - \frac{2m}{n} \sum_{i = 1}^m \sum_{k,l = 1}^n q_{i \, \pi(i) \, k \, l} - m^2 f(\pi, \phi)}{m^2 + n^2} \nonumber\\ & \quad + \frac{\frac{n^2}{m} \sum_{i,j = 1}^m \sum_{v = 1}^n q_{i \, j \, v \, \phi(v)} + \frac{2}{m} \sum_{i,j = 1}^m \sum_{k,l = 1}^n q_{i \, j \, k \, l} - \frac{2n}{m} \sum_{i,j = 1}^m \sum_{k = 1}^n q_{i \, j \, k \, \phi(k)} - n^2 f(\pi, \phi)}{m^2 + n^2} \nonumber\\ & = \frac{(m^3 - 2m^2) \sum_{i = 1}^m \sum_{k,l = 1}^n q_{i \, \pi(i) \, k \, l} + (n^3 - 2n^2) \sum_{i,j = 1}^m \sum_{v = 1}^n q_{i \, j \, v \, \phi(v)}}{mn(m^2 + n^2)} \nonumber\\ & \quad + \frac{2(m + n) \sum_{i,j = 1}^m \sum_{k,l = 1}^n q_{i \, j \, k \, l}}{mn(m^2 + n^2)} - f(\pi, \phi) \nonumber\\ & \leq \mu - f(\pi, \phi), \nonumber \end{align}
where
\begin{equation} \mu = \max_{\pi \in \Pi, \phi \in \Phi} \left[\dfrac{m^3 \sum_{i = 1}^m \sum_{k,l = 1}^n q_{i \, \pi(i) \, k \, l} + n^3 \sum_{i,j = 1}^m \sum_{v = 1}^n q_{i \, j \, v \, \phi(v)} + 2(m + n) \sum_{i,j = 1}^m \sum_{k,l = 1}^n q_{ijkl}}{mn(m^2 + n^2)}\right]. \nonumber \end{equation}
Note that $\mu$ does not depend on any particular solution and is fixed for a given BAP instance. For any given solution $(\pi, \phi)$ to BAP, either $f(\pi, \phi) \leq \mu$, or $f(\pi, \phi) > \mu$, in which case $\Delta(\pi, \phi) < 0$ and there exists an optimized $2$-exchange operation that improves the solution cost by at least $f(\pi, \phi) - \mu$, thus making it no worse than $\mu$. We also notice that
\begin{align} \mu - (m + n) \mathcal{A} &= \mu - \frac{m + n}{mn} \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl} = \mu - \frac{(m + n)(m^2 + n^2)}{mn(m^2 + n^2)} \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl} \nonumber\\ & = \max_{\pi \in \Pi} \left[\dfrac{m^3 \sum_{i = 1}^m \sum_{k,l = 1}^n q_{i \, \pi(i) \, k \, l}}{mn(m^2 + n^2)}\right] - \frac{m^3 \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl}}{mn(m^2 + n^2)} \nonumber\\ & + \max_{\phi \in \Phi} \left[\dfrac{n^3 \sum_{i,j = 1}^m \sum_{v = 1}^n q_{i \, j \, v \, \phi(v)}}{mn(m^2 + n^2)}\right] - \frac{n^3 \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl}}{mn(m^2 + n^2)} \nonumber\\ & + \dfrac{2(m + n) \sum_{i,j = 1}^m \sum_{k,l = 1}^n q_{ijkl}}{mn(m^2 + n^2)} - \frac{(m^2n + n^2m) \sum_{i,j=1}^m \sum_{k,l=1}^n q_{ijkl}}{mn(m^2 + n^2)} \leq 0, \nonumber \end{align}
and so $(m + n) \mathcal{A} \geq \mu$, which completes the proof. \end{proof}
We now show that, by exploiting the properties of the optimized $h$-exchange neighborhood, one can obtain a solution with an improved domination number compared to the result in Theorem \ref{thm:dom}.
\begin{theorem} For an integer $h$, a feasible solution to BAP that is no worse than $\Omega((m - 1)!(n - 1)! + m^hn! + n^hm!)$ feasible solutions can be found in $O(m^hn^3 + n^hm^3)$ time. \end{theorem}
\begin{proof} We show that the solution described in the statement of the theorem can be obtained in the desired running time by choosing the best solution in the optimized $h$-exchange neighborhood of a solution with objective function value no worse than $\mathcal{A}(Q,C,D)$.
Let $(\mb{x}^*,\mb{y}^*)\in \mathcal{F}$ be a BAP solution such that $f(\mb{x}^*,\mb{y}^*)\leq \mathcal{A}(Q,C,D)$. Such a solution can be found in $O(m^2n^2)$ time using Theorem \ref{thm:minmax}. From the proof of Theorem \ref{thm:dom} we know that there exists a set $R_\sim$ of $(m - 1)!(n - 1)!$ solutions, with one solution from every class defined by the equivalence relation $\sim$, such that $f(\mb{x},\mb{y})\geq\mathcal{A}(Q,C,D)\geq f(\mb{x}^*,\mb{y}^*)$ for every $(\mb{x},\mb{y})\in R_\sim$. Let $R_x$ denote the $[h,n]$-exchange neighborhood of $(\mb{x}^*,\mb{y}^*)$, and let $R_y$ denote the $[m,h]$-exchange neighborhood of $(\mb{x}^*,\mb{y}^*)$. Note that $R_x\cup R_y$ is the optimized $h$-exchange neighborhood of $(\mb{x}^*,\mb{y}^*)$. $R_x\cup R_y$ can be searched in $O(m^hn^3 + n^hm^3)$ time, and the result of the search has objective function value less than or equal to that of every $(\mb{x},\mb{y})\in R_\sim \cup R_x \cup R_y$. Consider $R'_x \subset R_x$ ($R'_y \subset R_y$) to be the set of solutions constructed in the same way as $R_x$ ($R_y$), but now only considering those reassignments of $h$-element sets $S \subseteq M$ ($S \subseteq N$) that differ from $\mb{x}^*$ ($\mb{y}^*$) on the entire $S$. By simple enumeration it can be shown that $|R'_x|=\binom{m}{h}(!h)n!$, $|R'_y|=\binom{n}{h}(!h)m!$ and $|R'_x \cap R'_y|=\binom{m}{h}(!h)\binom{n}{h}(!h)$, where $!h$ denotes the number of derangements (i.e.\@ permutations without fixed points) of $h$ elements. Furthermore, $|R_\sim \cap R'_x|\leq \binom{m}{h}(!h)(n - 1)!$ and $|R_\sim \cap R'_y|\leq \binom{n}{h}(!h)(m - 1)!$. The latter two inequalities are due to the fact that, for some fixed $\mb{x}'$ ($\mb{y}'$), the relation $\sim$ partitions the set of solutions $\{\mb{x}'\}\times \mathcal{Y}$ ($\mathcal{X}\times \{\mb{y}'\}$) into equivalence classes of size exactly $n$ ($m$), and each such class contains at most one element of $R_\sim$. Now we get that
\begin{align*} |R_\sim \cup R_x\cup R_y| &\geq |R_\sim \cup R'_x\cup R'_y|\\ &\geq |R_\sim|+|R'_x|+|R'_y|-|R_\sim \cap R'_x|-|R_\sim \cap R'_y|-|R'_x \cap R'_y|\\ &\geq (m - 1)!(n - 1)! + \binom{m}{h}(!h)n! + \binom{n}{h}(!h)m!\\ &\ \ \ \ -\binom{m}{h}(!h)(n - 1)! - \binom{n}{h}(!h)(m - 1)! - \binom{m}{h}(!h)\binom{n}{h}(!h)\\ &\in \Omega((m - 1)!(n - 1)! + m^hn! + n^hm!), \end{align*}
which concludes the proof. \end{proof}
\subsection{Shift based neighborhoods} Following the equivalence class example in Section \ref{sec:notations}, the \textit{shift} neighborhood of a given solution $(\mb{x}, \mb{y})$ is comprised of the $m$ solutions $(\mb{x}', \mb{y})$, one for each $a \in M$, such that $x_{ij}'=x_{i(j + a \mod m)}$, and the $n$ solutions $(\mb{x}, \mb{y}')$, one for each $b \in N$, such that $y_{kl}'=y_{k(l + b \mod n)}$. Alternatively, the shift neighborhood can be described in terms of the permutation formulation of BAP. Given a permutation pair ($\pi$, $\phi$), we consider all $m$ solutions ($\pi'$, $\phi$), one for each $a \in M$, such that $\pi'(i) = \pi(i) + a \mod m$, and all $n$ solutions ($\pi$, $\phi'$), one for each $b \in N$, such that $\phi'(k) = \phi(k) + b \mod n$. Intuitively, this means that either $\pi$ is cyclically shifted by $a$ or $\phi$ is cyclically shifted by $b$, hence the name of this neighborhood. An iteration of the local search algorithm based on the shift neighborhood takes $O(mn^2)$ time, as we are required to fully recompute the objective value of each of the $m$ (resp.\ $n$) neighboring solutions.
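If, instead, the matrices $e_{ij}$ and $g_{kl}$ of Section \ref{sec:hex} are maintained, the $m$ shifts of $\mb{x}$ can be scored in $O(m^2)$ total time (and the $n$ shifts of $\mb{y}$ in $O(n^2)$), since the $D$-part of the objective is constant while $\mb{y}$ is fixed. A minimal Python sketch of this $\mb{x}$-side scoring (our own illustration; it reuses the hypothetical \texttt{build\_E} helper from the sketch in Section \ref{sec:hex} and 0-based indices, so a shift by $a$ maps $\pi(i)$ to $(\pi(i) + a) \bmod m$):
\begin{verbatim}
def best_shift_x(E, pi):
    # Score each of the m cyclic shifts of pi in O(m) time
    # using e_ij; returns the best shifted permutation.
    m = len(pi)
    best_a, best_cost = 0, sum(E[i, pi[i]] for i in range(m))
    for a in range(1, m):
        cost = sum(E[i, (pi[i] + a) % m] for i in range(m))
        if cost < best_cost:
            best_a, best_cost = a, cost
    return [(pi[i] + best_a) % m for i in range(m)]
\end{verbatim}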
Using the same asymptotic running time per iteration, it is possible to explore a neighborhood of larger size, with the help of the additional data structures $e_{ij}, g_{kl}$ (see Section \ref{sec:hex}) that maintain the partial sums of assigning $i \in M$ to $j \in M'$ and $k \in N$ to $l \in N'$, given $\mb{y}$ and $\mb{x}$ respectively. Consider the $\Theta(n^2)$-size neighborhood \textit{shift+shuffle}, defined as follows. For a given permutation solution ($\pi$, $\phi$), this neighborhood contains all ($\pi'$, $\phi$) such that
\begin{equation} \pi'(i) = \pi\left((i \mod \lfloor \frac{m}{u} \rfloor) u + \lfloor \frac{i}{\lfloor \frac{m}{u} \rfloor} \rfloor + a \mod m\right), \quad \forall a \in M, \, \forall u \in \{1, 2, \ldots, \lfloor \frac{m}{2} \rfloor \}, \end{equation}
and all ($\pi$, $\phi'$) such that
\begin{equation} \phi'(k) = \phi\left((k \mod \lfloor \frac{n}{v} \rfloor) v + \lfloor \frac{k}{\lfloor \frac{n}{v} \rfloor} \rfloor + b \mod n\right), \quad \forall b \in N, \, \forall v \in \{1, 2, \ldots, \lfloor \frac{n}{2} \rfloor \}. \end{equation}
The two equations above are sufficient for the cases $m \mod u = 0$ and $n \mod v = 0$, respectively. Otherwise, for all $i > m - (m \mod u)$ and all $k > n - (n \mod v)$ an arbitrary reassignment could be applied (for example $\pi'(i)=\pi(i)$ and $\phi'(k)=\phi(k)$). One can visualize the shuffle operation as splitting the elements of a permutation into buckets of the same size ($u$ or $v$ in the formulas above), and then forming a new permutation by placing the first elements of each bucket in the beginning, followed by the second elements of each bucket, and so on. Figure \ref{shuffle} depicts such a shuffling for a permutation $\pi$.
\begin{figure} \caption{Example of shuffle operation on permutation $\pi$, with $u=3$} \label{shuffle} \end{figure}
By combining shift and shuffle we increase the size of the explored neighborhood at no extra asymptotic running time cost for the local search implementations. Local search algorithms that explore the shift or shift+shuffle neighborhoods could potentially get stuck in an arbitrarily bad local optimum, following the same argument as in Theorem \ref{thm:hex}. If we allow applying shift simultaneously to both $\mb{x}$ and $\mb{y}$, we consider all $mn$ neighbors of the current solution, precisely as in the equivalence class example from Section \ref{sec:notations}. We call this the \textit{dual shift} neighborhood of a solution $(\mb{x}, \mb{y})$. Notice that a local search algorithm that explores this neighborhood reaches a local optimum after a single iteration, with running time $O(m^2n^2)$. A much larger \textit{optimized shift} neighborhood is defined as follows. For every shift operation on $\mb{x}$ we consider all possible assignments of $\mb{y}$, and vice versa: for each shift on $\mb{y}$ we consider all possible assignments of $\mb{x}$. As in the case of the optimized $h$-exchange neighborhood, this neighborhood is exponential in size, but it can be efficiently explored in $O(mn^3)$ running time by solving the corresponding linear assignment problems.
\begin{theorem} For local search based on the dual shift and optimized shift neighborhoods, the final solution value is guaranteed to be no worse than $\mathcal{A}(Q,C,D)$. \end{theorem}
\begin{proof} The proof for the dual shift neighborhood follows from the fact that we are completely exploring the equivalence class, defined by $\sim$, of a given solution, as in Theorem \ref{thm:minmax}.
For optimized shift, notice that for each shift on one side of $(\mb{x}, \mb{y})$ we consider all possible solutions on the other side. This includes all possible shifts on that respective side. Therefore the set of solutions of the optimized shift neighborhood includes the set of solutions of the dual shift neighborhood, and hence contains a solution with value at most $\mathcal{A}(Q,C,D)$. \end{proof}
In \cite{custicbilinear} we have explored the complexity of a special case of BAP where $Q$, viewed as an $m^2 \times n^2$ matrix, is restricted to be of a fixed rank. The rank of such a $Q$ is said to be at most $r$ if and only if there exist some $m\times m$ matrices $A^{^{p}}=(a_{ij}^p)$ and $n\times n$ matrices $B^{^p}=(b_{ij}^p)$, $p=1,\ldots,r,$ such that
\begin{equation}\label{fact} q_{ijkl}=\sum_{p=1}^r a_{ij}^pb_{kl}^p \end{equation}
for all $i,j\in M$, $k,l\in N$.
\begin{theorem} The Alternating Algorithm and the local search algorithms that explore the optimized $h$-exchange and optimized shift neighborhoods will find an optimal solution to a BAP $(Q,C,D)$ if $Q$ is a non-negative matrix of rank $1$ and both $C$ and $D$ are zero matrices. \end{theorem}
\begin{proof} Note that in the case described in the statement of the theorem, we are looking for a pair $(\mb{x}^*, \mb{y}^*)$ that minimizes $(\sum_{i,j=1}^m a_{ij}x^*_{ij}) \cdot (\sum_{k,l=1}^n b_{kl} y^*_{kl})$, where $q_{ijkl}=a_{ij} b_{kl}, \, \forall i,j\in M, \, k,l\in N$. Since we are restricted to non-negative numbers, solving the two corresponding linear assignment problems independently yields an optimal solution to this BAP. It is easy to see that, for any fixed $\mb{x}$, a solution of the smallest value will be produced by $\mb{y}^*$, and vice versa, for any fixed $\mb{y}$, a solution of the smallest value will be produced by $\mb{x}^*$. The optimized $h$-exchange neighborhood, the optimized shift neighborhood, and the neighborhood that the \textit{Alternating Algorithm} is based on all contain the solution that has one side of $(\mb{x}, \mb{y})$ unchanged and the optimal assignment on the other side. Therefore, the local search algorithms that explore these neighborhoods will find the optimal $(\mb{x}^*, \mb{y}^*)$ in at most $2$ iterations. \end{proof}
\section{Experimental design and test problems} \label{sec:expsetup} In this section we present general information on the design of our experiments and the generation of test problems. All experiments are conducted on a PC with an Intel Core i7-4790 processor and 32 GB of memory, under control of the Linux Mint 17.3 (Linux Kernel 3.19.0-32-generic) 64-bit operating system. Algorithms are coded in the Python 2.7 programming language and run via the PyPy 5.3 implementation of Python. The linear assignment problem, which appears as a subproblem in several algorithms, is solved using a Python implementation of the Hungarian algorithm \cite{kuhn1955hungarian}.
\subsection{Test problems} As there are no existing benchmark instances available for BAP, we have created several sets of test problems, which can be used by other researchers in future experimental analyses. Three categories of problem instances are considered: \textit{\textbf{uniform}}, \textit{\textbf{normal}} and \textit{\textbf{euclidean}}.
\begin{itemize} \item For \textit{uniform} instances we set $c_{ij}, d_{kl} = 0$, and the values $q_{ijkl}$ are generated randomly with uniform distribution from the interval $[0, m n]$ and rounded to the nearest integer.
\item For \textit{normal} instances we set $c_{ij}, d_{kl} = 0$, and the values $q_{ijkl}$ are generated randomly following a normal distribution with mean $\mu = \frac{m n}{2}$ and standard deviation $\sigma = \frac{m n}{6}$, rounded to the nearest integer.
\item For \textit{euclidean} instances we generate randomly, with uniform distribution, four sets of points $A,B,U,V$ in the Euclidean plane of size $[0, 1.5 \sqrt{m n}] \times [0, 1.5 \sqrt{m n}]$, such that $|A| = |B| = m$, $|U| = |V| = n$. Then $C$ and $D$ are chosen as zero matrices, and $q_{ijkl} = ||a_i - u_k|| \cdot ||b_j - v_l||$ (rounded to the nearest integer), where $a_i \in A, b_j \in B, u_k \in U, v_l \in V$.
\end{itemize}
Test problems are named using the convention ``type size number'', where type $\in$ \{\textit{uniform}, \textit{normal}, \textit{euclidean}\}, size is of the form $m \times n$, and number $\in \{0, 1, \ldots\}$. For every instance type and size we have generated 10 problems, and all results of the experiments are averaged over those 10 problems. For example, in a table or a figure, a data point for ``uniform $50 \times 50$'' is the average among the 10 generated instances. This applies to objective function values, running times and numbers of iterations, and will not be explicitly mentioned throughout the rest of the paper. Problem instances, results for our final set of experiments, as well as the best found solutions for every instance, are available upon request from Abraham Punnen ([email protected]).
\section{Experimental analysis of construction heuristics} \label{sec:expconstr} In Section \ref{sec:constr} we presented several construction approaches to generate a solution to BAP. In this section we discuss the results of computational experiments using these heuristics. The experimental results are summarized in Table \ref{constrt}. For the heuristic \textit{GreedyRandomized}, we have considered candidate list sizes $2$, $4$ and $6$. In the table, columns GreedyRandomized2 and GreedyRandomized4 refer to implementations with candidate list sizes of $2$ and $4$, respectively. Results for candidate list size $6$ are excluded from the table due to poor performance. Here and later when presenting computational results, ``value'' and ``time'' refer to the objective function value and the running time of an algorithm. The best solution value among all tested heuristics is shown in bold font. We also report (averaged over the 10 instances of a given type and size) the average solution value $\mathcal{A}(Q,C,D)$ (denoted simply as $\mathcal{A}$), computed using the closed-form expression from Section \ref{sec:notations}.
\newgeometry{margin=1cm} \thispagestyle{empty} \begin{sidewaystable*}[t!]
\centering \small \caption{Solution value and running time in seconds for construction heuristics} \label{constrt} \scalebox{0.9}{\begin{tabular}{@{}lrrcrcrcrcrc@{}} \toprule & & \multicolumn{2}{c}{RandomXYGreedy} & \multicolumn{2}{c}{Greedy} & \multicolumn{2}{c}{GreedyRandomized2} & \multicolumn{2}{c}{GreedyRandomized4} & \multicolumn{2}{c}{Rounding}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} instances & $\mathcal{A}$ & value & time & value & time & value & time & value & time & value & time\\ \midrule uniform 20x20 & 79975 & 62981 & 0.0011 & 61930 & 0.0016 & 61824 & 0.0015 & 62997 & 0.0023 & \textbf{58587} & 0.0282 \\ uniform 40x40 & 1280013 & 1039365 & 0.0024 & 1038410 & 0.0085 & 1046862 & 0.0117 & 1047444 & 0.0107 & \textbf{1005375} & 0.4083 \\ uniform 60x60 & 6480224 & 5335157 & 0.0057 & 5399004 & 0.0362 & 5430190 & 0.0403 & 5429077 & 0.0381 & \textbf{5311287} & 2.076 \\ uniform 80x80 & 20480398 & 17179410 & 0.0119 & 17393975 & 0.0901 & 17427649 & 0.1092 & 17455112 & 0.1231 & \textbf{17127745} & 8.6041 \\ uniform 100x100 & 50001181 & \textbf{42492213} & 0.0205 & 43134618 & 0.1797 & 43115743 & 0.1755 & 43209207 & 0.2431 & 42521606 & 29.3038 \\ uniform 120x120 & 103680291 & \textbf{88710617} & 0.0334 & 90317432 & 0.2459 & 90450040 & 0.3127 & 90388890 & 0.3208 & 89342939 & 90.1245 \\ uniform 140x140 & 192079012 & \textbf{165656443} & 0.0518 & 168664018 & 0.404 & 168695610 & 0.5922 & 168683177 & 0.5869 & 166927409 & 196.3766 \\ uniform 160x160 & 327679690 & \textbf{284623314} & 0.0768 & 289819325 & 0.939 & 289847112 & 0.9922 & 290034508 & 0.9862 & 287148038 & 339.6329 \\ uniform 180x180 & 524879096 & \textbf{458395075} & 0.1088 & 466419210 & 1.0135 & 466652862 & 1.107 & 466938203 & 1.5316 & 462852252 & 539.6931 \\ normal 20x20 & 79977 & 69989 & 0.0011 & 69032 & 0.0013 & 69322 & 0.0015 & 69899 & 0.0022 & \textbf{67367} & 0.0275 \\ normal 40x40 & 1280007 & 1137550 & 0.0022 & 1137478 & 0.008 & 1139150 & 0.0098 & 1139608 & 0.0116 & \textbf{1123670} & 0.3902 \\ normal 60x60 & 6480142 & 5825775 & 0.0055 & 5847641 & 0.0229 & 5841178 & 0.0277 & 5860741 & 0.0427 & \textbf{5795676} & 2.0257 \\ normal 80x80 & 20480028 & 18555962 & 0.0108 & 18696934 & 0.0613 & 18658585 & 0.0772 & 18697475 & 0.102 & \textbf{18544051} & 6.9208 \\ normal 100x100 & 50000062 & 45647505 & 0.02 & 45909621 & 0.1293 & 45925799 & 0.1584 & 45943220 & 0.1958 & \textbf{45643447} & 30.2969 \\ normal 120x120 & 103680643 & \textbf{94952757} & 0.0325 & 95765991 & 0.2465 & 95711199 & 0.2967 & 95757531 & 0.3385 & 95332171 & 80.9744 \\ normal 140x140 & 192079732 & \textbf{176656351} & 0.0507 & 178279212 & 0.4034 & 178238835 & 0.4936 & 178233293 & 0.556 & 177501940 & 179.0639 \\ normal 160x160 & 327681533 & \textbf{302496650} & 0.0738 & 305379404 & 0.746 & 305333912 & 0.696 & 305345983 & 0.823 & 304080792 & 310.9162 \\ normal 180x180 & 524880349 & \textbf{486132477} & 0.1056 & 490345723 & 0.8888 & 490464093 & 1.0742 & 490656416 & 1.3211 & 489077716 & 540.4644 \\ euclidean 20x20 & 95297 & 93756 & 0.0011 & 98864 & 0.0013 & 99027 & 0.0014 & 98104 & 0.0015 & \textbf{85564} & 0.0276 \\ euclidean 40x40 & 1554313 & 1540492 & 0.0024 & 1559829 & 0.0111 & 1546894 & 0.0116 & 1551881 & 0.0123 & \textbf{1430068} & 0.4218 \\ euclidean 60x60 & 8003105 & 7821082 & 0.0063 & 8021089 & 0.0445 & 8014594 & 0.0461 & 7945751 & 0.0489 & \textbf{7331236} & 1.9805 \\ euclidean 80x80 & 24906273 & 24190227 & 0.0129 & 24873255 & 0.0611 & 24799662 & 0.0954 & 24853670 & 0.0805 & \textbf{23145446} & 6.141 \\ 
euclidean 100x100 & 61053265 & 59345477 & 0.0235 & 60305521 & 0.103 & 59882626 & 0.1285 & 60052837 & 0.1223 & \textbf{56848260} & 31.8484 \\ euclidean 120x120 & 126198999 & 121816738 & 0.0389 & 123601338 & 0.2986 & 123829252 & 0.305 & 124053452 & 0.3252 & \textbf{117754675} & 93.6024 \\ euclidean 140x140 & 230673448 & 221785417 & 0.0617 & 227949036 & 0.4082 & 227508295 & 0.4637 & 227854403 & 0.4979 & \textbf{214876628} & 183.0906 \\ euclidean 160x160 & 404912898 & 390412111 & 0.0897 & 395260253 & 0.8908 & 398388924 & 0.8284 & 396277525 & 1.0551 & \textbf{378608021} & 309.2262 \\ euclidean 180x180 & 635700756 & 607470603 & 0.1289 & 623035384 & 1.1913 & 625456121 & 1.356 & 623393649 & 1.4349 & \textbf{593800828} & 548.8153 \\ \bottomrule \end{tabular}} \end{sidewaystable*} \restoregeometry As the table shows, for smaller \textit{uniform} and \textit{normal} instances, as well as for all \textit{euclidean} instances, \textit{Rounding} produced better quality results, albeit using substantially more time. For all other problems \textit{RandomXYGreedy} obtained better results. To our surprise, the quality of the solutions produced by \textit{Greedy} was inferior to that of \textit{RandomXYGreedy}. This can perhaps be explained as a consequence of being ``too greedy'' in the beginning, which leads to a worse overall solution, particularly in view of the quadratic nature of the objective function: in the initial steps the choice is made based on very incomplete information about the solution and about the interaction costs between the $\mb{x}$ and $\mb{y}$ assignments. In addition, the running time of \textit{RandomXYGreedy} was significantly lower than that of \textit{Rounding} and the other algorithms. Thus, we conclude that \textit{RandomXYGreedy} is our method of choice if a solution to BAP is needed quickly. As for the \textit{GreedyRandomized} strategy, the larger the candidate list, the worse the quality of the resulting solution. On the other hand, larger candidate lists provide more diversified ways of generating solutions for BAP. That may be advantageous if the construction is followed by an improvement approach, as is generally done in GRASP algorithms. In Figures \ref{construv} and \ref{construt} we present the solution value and running time results of this section for \textit{uniform} instances. \begin{figure} \caption{Difference between solution values (to the best) for construction heuristics; \textit{uniform} instances} \label{construv} \end{figure} \begin{figure} \caption{Running time for construction heuristics; \textit{uniform} instances} \label{construt} \end{figure} \section{Experimental analysis of local search algorithms} \label{sec:expls} Let us now discuss the results of computational experiments carried out using local search algorithms that explore the neighborhoods discussed in Section \ref{sec:ls}. All algorithms are started from the same random solution and run until a local optimum is reached. In addition to the objective function value and running time, we report the number of iterations for each approach. For $h$-exchange neighborhoods, we selected the $2$- and $3$-exchange local search algorithms (denoted by \textit{\textbf{2ex}} and \textit{\textbf{3ex}}) as well as the Alternating Algorithm (\textit{\textbf{AA}}). From the [$h,p$]-exchange based algorithms, we have implemented the $[2,2]$-exchange local search (named \textit{\textbf{Dual2ex}}).
The $[2,2]$-exchange neighborhood can be explored in $O(m^2n^2)$ time, using efficient recomputation of the change in the objective value. We refer to the algorithm that explores the optimized $2$-exchange neighborhood as \textit{\textbf{2exOpt}}. The running time of each iteration of this local search is $O(m^2 n^3)$. To speed up this potentially slow approach, we have also considered a version, namely \textit{\textbf{2exOptHeuristic}}, where we use an $O(n^2)$ heuristic to solve the underlying linear assignment problem, instead of the Hungarian algorithm with its cubic running time. The running time of each iteration of 2exOptHeuristic is then $O(m^2 n^2)$. \textit{\textbf{3exOpt}} is defined similarly. \textit{\textbf{Shift}}, \textit{\textbf{ShiftShuffle}}, \textit{\textbf{DualShift}} and \textit{\textbf{ShiftOpt}} are implementations of local search based on the shift, shift+shuffle, dual shift and optimized shift neighborhoods, respectively. In addition, we consider variations of the above-mentioned algorithms, namely \textit{\textbf{2exFirst}}, \textit{\textbf{3exFirst}}, \textit{\textbf{Dual2exFirst}}, \textit{\textbf{2exOptFirst}}, \textit{\textbf{2exOptHeuristicFirst}} and \textit{\textbf{ShiftOptFirst}}, where the corresponding neighborhoods are explored only until the first improving solution is encountered. We provide a summary of complexity results for these local search algorithms in Table \ref{sum}. Here by $I$ we denote the number of iterations (or ``moves'') that it takes for the corresponding search to converge to a local optimum. As $I$ could potentially be exponential in $n$ and will vary between algorithms, we use this notation simply to emphasize the running time of a single iteration of each approach. \begin{table}[!htb] \centering \caption{Asymptotic running time and neighborhood size per iteration for local searches} \label{sum} \scalebox{1.0}{\begin{tabular}{@{}ccc@{}} \toprule name & running time & neighborhood size per iteration\\ \midrule 2ex & $O(n^3 + I n^2)$ & $\Theta(n^2)$\\ Shift & $O(I n^3)$ & $n$\\ ShiftShuffle & $O(I n^3)$ & $\Theta(n^2)$\\ 3ex & $O(I n^3)$ & $\Theta(n^3)$\\ AA & $O(I n^3)$ & $n!$\\ DualShift & $O(n^4)$ & $n^2$\\ Dual2ex & $O(I n^4)$ & $\Theta(n^4)$\\ ShiftOpt & $O(I n^4)$ & $n \cdot n!$\\ 2exOptHeuristic & $O(I n^4)$ & $\Theta(n^2 \cdot n!)^*$\\ 2exOpt & $O(I n^5)$ & $\Theta(n^2 \cdot n!)$\\ 3exOpt & $O(I n^6)$ & $\Theta(n^3 \cdot n!)$\\ \bottomrule \multicolumn{3}{@{}l@{}}{\scalebox{.8}{* 2exOptHeuristic does not fully explore the neighborhood.}} \end{tabular}} \end{table} Table \ref{convt} summarizes experimental results for \textit{2ex}, \textit{3ex}, \textit{AA}, \textit{2exOpt} and \textit{2exOptFirst}. Results for other algorithms are not included in the table due to inferior performance. However, Figures \ref{convuv} and \ref{convut} provide additional insight into the performance of all the algorithms we have tested, for the case of \textit{uniform} instances. \newgeometry{margin=1cm} \thispagestyle{empty} \begin{sidewaystable*}[t!]
\centering \small \caption{Solution value, running time in seconds and number of iterations for local searches} \label{convt}\scalebox{0.85}{\begin{tabular}{@{}lrrlcrlcrlcrlcrlc@{}} \toprule & & \multicolumn{3}{c}{2ex} & \multicolumn{3}{c}{3ex} & \multicolumn{3}{c}{AA} & \multicolumn{3}{c}{2exOpt} & \multicolumn{3}{c}{2exOptFirst}\\ \cmidrule(lr){3-5} \cmidrule(lr){6-8} \cmidrule(lr){9-11} \cmidrule(lr){12-14} \cmidrule(lr){15-17} instances & $\mathcal{A}$ & value & time & iter & value & time & iter & value & time & iter & value & time & iter & value & time & iter\\ \midrule uniform 10x10 & 4995 & 3378 & 0.0 & 9 & 3241 & 0.0 & 9 & 3385 & 0.0 & 3 & \textbf{3103} & 0.04 & 4 & 3128 & 0.02 & 11 \\ uniform 20x20 & 80043 & 59371 & 0.0 & 20 & 56593 & 0.01 & 18 & 56097 & 0.01 & 4 & \textbf{54912} & 0.68 & 6 & 55059 & 0.34 & 25 \\ uniform 30x30 & 404944 & 310455 & 0.02 & 32 & 297569 & 0.05 & 28 & 298787 & 0.02 & 4 & 291520 & 3.96 & 6 & \textbf{291268} & 3.09 & 46 \\ uniform 40x40 & 1279785 & 1003731 & 0.04 & 45 & 977498 & 0.14 & 39 & 971400 & 0.06 & 5 & \textbf{954676} & 21.71 & 10 & 957381 & 8.46 & 56 \\ uniform 50x50 & 3124809 & 2493822 & 0.08 & 57 & 2433665 & 0.32 & 49 & 2416832 & 0.13 & 5 & \textbf{2385232} & 63.49 & 11 & 2389496 & 24.94 & 73 \\ uniform 60x60 & 6479878 & 5256357 & 0.15 & 74 & 5149634 & 0.59 & 55 & 5098653 & 0.26 & 6 & 5056566 & 143.48 & 11 & \textbf{5031368} & 80.32 & 97 \\ uniform 70x70 & 12005619 & 9844646 & 0.24 & 85 & 9682798 & 1.1 & 67 & 9587489 & 0.38 & 6 & \textbf{9469736} & 326.04 & 14 & 9472549 & 156.78 & 114 \\ uniform 80x80 & 20480209 & 17022523 & 0.37 & 96 & 16694088 & 1.81 & 75 & 16519908 & 0.66 & 7 & 16388545 & 504.34 & 12 & \textbf{16355658} & 285.23 & 136 \\ uniform 90x90 & 32803918 & 27479017 & 0.52 & 111 & 26978715 & 2.97 & 88 & 26650508 & 1.08 & 8 & 26563051 & 882.81 & 13 & \textbf{26514860} & 497.74 & 158 \\ uniform 100x100 & 49999078 & 42138227 & 0.74 & 124 & 41363121 & 4.96 & 109 & 41031842 & 1.45 & 8 & 40912367 & 1480.03 & 14 & \textbf{40767754} & 864.39 & 172 \\ uniform 110x110 & 73206906 & 61988038 & 1.06 & 148 & 61179121 & 6.57 & 109 & 60529975 & 1.92 & 7 & 60162728 & 2406.29 & 15 & \textbf{60068824} & 1504.27 & 196 \\ uniform 120x120 & 103679901 & 88602187 & 1.23 & 137 & 87330165 & 8.52 & 109 & 86174642 & 2.61 & 8 & 85872203 & 3865.67 & 18 & \textbf{85670906} & 1917.76 & 201 \\ normal 10x10 & 4999 & 4044 & 0.0 & 10 & 4019 & 0.0 & 9 & 4040 & 0.0 & 2 & 3910 & 0.03 & 4 & \textbf{3862} & 0.02 & 14 \\ normal 20x20 & 79955 & 67321 & 0.0 & 20 & 66520 & 0.01 & 16 & 66179 & 0.01 & 3 & \textbf{64913} & 0.79 & 7 & 65363 & 0.33 & 25 \\ normal 30x30 & 404959 & 348058 & 0.02 & 34 & 342238 & 0.06 & 29 & 343639 & 0.03 & 4 & \textbf{338796} & 4.98 & 8 & 339162 & 2.61 & 45 \\ normal 40x40 & 1279974 & 1119684 & 0.04 & 46 & 1111127 & 0.14 & 33 & 1099106 & 0.07 & 6 & 1089996 & 23.21 & 10 & \textbf{1089752} & 10.61 & 60 \\ normal 50x50 & 3124879 & 2752326 & 0.08 & 63 & 2737137 & 0.34 & 43 & 2711191 & 0.14 & 6 & 2696287 & 65.48 & 11 & \textbf{2696062} & 32.57 & 77 \\ normal 60x60 & 6479794 & 5769522 & 0.16 & 73 & 5707107 & 0.7 & 53 & 5665027 & 0.3 & 7 & 5640412 & 151.84 & 12 & \textbf{5633463} & 81.97 & 99 \\ normal 70x70 & 12004939 & 10738678 & 0.24 & 88 & 10641129 & 1.3 & 65 & 10596245 & 0.42 & 6 & 10544640 & 316.24 & 13 & \textbf{10538513} & 144.42 & 116 \\ normal 80x80 & 20480106 & 18434378 & 0.38 & 103 & 18282395 & 2.35 & 80 & 18173927 & 0.71 & 7 & 18126933 & 537.29 & 12 & \textbf{18095224} & 338.76 & 132 \\ normal 90x90 & 32805972 & 29736595 & 0.51 & 108 & 29408513 & 
3.79 & 91 & 29245481 & 0.92 & 6 & 29176212 & 1017.08 & 14 & \textbf{29165974} & 500.62 & 151 \\ normal 100x100 & 49999105 & 45514117 & 0.71 & 122 & 45009249 & 5.69 & 100 & 44798388 & 1.45 & 7 & 44635991 & 1602.09 & 15 & \textbf{44603238} & 940.19 & 176 \\ normal 110x110 & 73205050 & 66768499 & 1.01 & 142 & 66224593 & 8.26 & 110 & 65812495 & 2.69 & 10 & 65716978 & 2218.71 & 13 & \textbf{65539744} & 1632.32 & 193 \\ normal 120x120 & 103681336 & 95001950 & 1.32 & 147 & 94151507 & 11.24 & 116 & 93702171 & 2.16 & 6 & 93322807 & 4645.28 & 20 & \textbf{93248160} & 2130.64 & 215 \\ euclidean 10x10 & 6186 & 5397 & 0.0 & 13 & 5379 & 0.0 & 12 & 5404 & 0.0 & 3 & \textbf{5368} & 0.05 & 4 & 5375 & 0.03 & 16 \\ euclidean 20x20 & 95834 & 82325 & 0.01 & 41 & 82293 & 0.01 & 25 & 82242 & 0.01 & 3 & 82160 & 1.27 & 5 & \textbf{81813} & 1.52 & 49 \\ euclidean 30x30 & 490614 & 419174 & 0.02 & 61 & 418942 & 0.07 & 40 & 419000 & 0.03 & 3 & \textbf{416436} & 9.13 & 5 & 417339 & 18.19 & 98 \\ euclidean 40x40 & 1553544 & 1314659 & 0.07 & 87 & 1312649 & 0.21 & 59 & 1311131 & 0.07 & 3 & \textbf{1309701} & 37.08 & 5 & 1311093 & 90.91 & 156 \\ euclidean 50x50 & 3761359 & 3178424 & 0.14 & 112 & 3173915 & 0.5 & 78 & 3178006 & 0.16 & 4 & \textbf{3167772} & 134.77 & 7 & 3168388 & 314.91 & 211 \\ euclidean 60x60 & 7999029 & 6740779 & 0.26 & 141 & 6720560 & 1.04 & 98 & \textbf{6714400} & 0.23 & 4 & 6714689 & 314.14 & 7 & 6716877 & 1012.93 & 296 \\ euclidean 70x70 & 14909550 & 12533959 & 0.45 & 180 & 12500249 & 1.92 & 117 & 12490034 & 0.42 & 4 & \textbf{12487021} & 674.66 & 7 & 12499281 & 2354.68 & 366 \\ euclidean 80x80 & 25210773 & 21188706 & 0.68 & 200 & 21182227 & 3.2 & 133 & 21160309 & 0.55 & 4 & \textbf{21150070} & 1222.19 & 6 & 21156445 & 5250.01 & 456 \\ euclidean 90x90 & 39495474 & 33083033 & 1.04 & 240 & 33072079 & 4.87 & 145 & 33082326 & 0.98 & 5 & \textbf{33049474} & 2017.96 & 6 & 33089283 & 10482.75 & 556 \\ \bottomrule \end{tabular}} \end{sidewaystable*} \restoregeometry \begin{figure} \caption{Difference between solution values (to the best) for local search; \textit{uniform} instances} \label{convuv} \end{figure} \begin{figure} \caption{Running time to converge for local search; \textit{uniform} instances} \label{convut} \end{figure} Even though the convergence speed is very fast for the implementations of \textit{Shift}, \textit{ShiftShuffle} and \textit{DualShift}, the resulting solution values are not significantly better than the average value $\mathcal{A}(Q,C,D)$ for the instance. The \textit{optimized shift} versions, namely \textit{ShiftOpt} and \textit{ShiftOptFirst}, produced better solutions but are still outperformed by all remaining heuristics. This fact, together with the slower convergence speed (as compared to, say, \textit{2ex}), shows the weakness of the approach. \textit{Dual2ex} and \textit{Dual2exFirst} are heavily outperformed by \textit{AA}, both in terms of convergence speed and in the quality of the resulting solution. It is also worth mentioning that speeding up \textit{2exOpt} and \textit{2exOptFirst} by substituting the Hungarian algorithm with an $O(n^2)$ heuristic for the assignment problem did not provide good results. The solution quality decreased substantially and, considering that the running time to converge is still slower than that of \textit{AA}, we discard these options. Table \ref{convt} presents the results for the better performing set of algorithms.
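To make the per-iteration costs in Table \ref{sum} concrete, the sketch below (our own illustration; the names are ours) shows why one pass over the $2$-exchange neighborhood costs only $\Theta(m^2)$ once the interaction costs $e_{ij} = \sum_{k,l \in N} q_{ijkl} y_{kl} + c_{ij}$ for the current $\mb{y}$ are available: the gain of swapping the partners of rows $i_1$ and $i_2$ in $\mb{x}$ is a constant-time expression in $e$.

\begin{verbatim}
def delta_2ex_x(E, px, i1, i2):
    # E[i][j] = sum_{k,l} Q[i][j][k][l]*y[k][l] + C[i][j] for the current y;
    # px encodes x as a permutation (px[i] = j iff x_ij = 1)
    j1, j2 = px[i1], px[i2]
    return E[i1][j2] + E[i2][j1] - E[i1][j1] - E[i2][j2]

def best_2ex_move_x(E, px):
    # one best-improvement pass over all row pairs: Theta(m^2) given E
    best, pair = 0, None
    m = len(px)
    for i1 in range(m):
        for i2 in range(i1 + 1, m):
            d = delta_2ex_x(E, px, i1, i2)
            if d < best:
                best, pair = d, (i1, i2)
    if pair is not None:
        i1, i2 = pair
        px[i1], px[i2] = px[i2], px[i1]  # apply the improving swap
    return best  # 0 means the x side is 2-exchange locally optimal
\end{verbatim}

A symmetric routine handles swaps on the $\mb{y}$ side, and $e$ is rebuilt in $O(m^2 n)$ time whenever $\mb{y}$ changes.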
The performance of the \textit{first improvement} and \textit{best improvement} approaches (\textit{2exFirst}, \textit{3exFirst} and \textit{2ex}, \textit{3ex}, respectively) is similar, so we will consider only the latter two from now on. Interestingly, this is not the case for the \textit{optimized} neighborhoods. We noticed that for \textit{uniform} and \textit{normal} instances \textit{2exOptFirst} runs faster than \textit{2exOpt} in most cases. However, for \textit{euclidean} instances \textit{2exOptFirst} takes more time to converge. As expected, \textit{AA} is better than \textit{3ex} with respect to both solution quality and running time. We do not include any of the $h$-exchange neighborhood search implementations for $h > 3$ in this study due to their relatively poor performance and huge running time. We focus the remaining experiments in the paper on \textit{2ex}, \textit{AA} and \textit{2exOpt}. Among these, \textit{2ex} converges the fastest, \textit{2exOpt} provides the best solutions, and \textit{AA} assumes a ``balanced'' position. It is also clear that even better solution quality could be achieved by using implementations of the optimized $h$-exchange neighborhood search with higher $h$. However, we show in the next sub-section that this is not viable as a building block of efficient metaheuristics. \subsection{Local search with multi-start} \label{sec:explsms} Now we would like to see how well our heuristics perform in terms of solution quality when the amount of time is fixed. For this purpose we implemented a simple multi-start strategy for each of the algorithms. The framework keeps restarting the local search from a new \textit{Random} solution until the time limit is reached. The best solution found in the process is then reported as the result. The time limit for each instance is set as follows. Considering the results of the previous sub-section, we expect \textit{3exOptFirst} to be the slowest method to converge for all of the instances. We run it exactly once, and use its running time as the time limit for the other multi-start algorithms. Together with the resulting values, we also report the number of restarts of each approach in Table \ref{mst}. Clearly, this choice of time limit yields $1$ as the number of starts for \textit{3exOptFirst}. \newgeometry{margin=1cm} \thispagestyle{empty} \begin{sidewaystable*}[t!]
\centering \small \caption{Solution value and number of starts for time-limited multi-start local searches} \label{mst}\scalebox{0.9}{\begin{tabular}{@{}lcrcrcrcrcrcrc@{}} \toprule & & \multicolumn{2}{c}{3exOptFirst} & \multicolumn{2}{c}{2exOpt} & \multicolumn{2}{c}{2exOptFirst} & \multicolumn{2}{c}{AA} & \multicolumn{2}{c}{2ex} & \multicolumn{2}{c}{2exFirst}\\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} \cmidrule(lr){9-10} \cmidrule(lr){11-12} \cmidrule(lr){13-14} instances & time limit & value & starts & value & starts & value & starts & value & starts & value & starts & value & starts\\ \midrule uniform 10x10 & 0.1 & 3059 & 1 & 2943 & 3 & 2974 & 5 & 2946 & 97 & \textbf{2934} & 221 & 2980 & 176 \\ uniform 20x20 & 2.7 & 54250 & 1 & 53496 & 4 & 53286 & 8 & \textbf{53096} & 428 & 53983 & 997 & 54244 & 879 \\ uniform 30x30 & 23.4 & 290200 & 1 & 288401 & 5 & 285630 & 10 & \textbf{285271} & 919 & 292991 & 1859 & 292363 & 1695 \\ uniform 40x40 & 103.2 & 948029 & 1 & 943982 & 5 & 940718 & 10 & \textbf{936113} & 1528 & 963120 & 2858 & 960093 & 2679 \\ uniform 50x50 & 531.7 & 2370639 & 1 & 2365473 & 8 & 2358811 & 18 & \textbf{2346865} & 3664 & 2410678 & 6592 & 2401247 & 6337 \\ uniform 60x60 & 1148.5 & 5017422 & 1 & 5003247 & 7 & 4989212 & 16 & \textbf{4980930} & 4221 & 5105064 & 7747 & 5092544 & 7522 \\ uniform 70x70 & 3291.3 & 9429464 & 1 & 9421085 & 10 & 9404126 & 21 & \textbf{9369944} & 7017 & 9601891 & 13499 & 9583585 & 13009 \\ uniform 80x80 & 3763.3 & 16406602 & 1 & 16319588 & 7 & 16241213 & 13 & \textbf{16229861} & 5031 & 16612105 & 10017 & 16583987 & 9578 \\ normal 10x10 & 0.1 & 3857 & 1 & 3838 & 2 & 3851 & 5 & 3828 & 91 & \textbf{3818} & 208 & 3847 & 162 \\ normal 20x20 & 2.5 & 65014 & 1 & 64635 & 4 & 64433 & 7 & \textbf{64020} & 396 & 64867 & 902 & 64738 & 769 \\ normal 30x30 & 23.4 & 337626 & 1 & 336552 & 5 & 335378 & 10 & \textbf{335042} & 899 & 339448 & 1818 & 338849 & 1623 \\ normal 40x40 & 113.3 & 1086083 & 1 & 1082094 & 5 & 1081530 & 12 & \textbf{1078755} & 1675 & 1092923 & 3063 & 1091803 & 2840 \\ normal 50x50 & 469.3 & 2688595 & 1 & 2679334 & 8 & 2677720 & 16 & \textbf{2672481} & 3217 & 2711913 & 5807 & 2704948 & 5475 \\ normal 60x60 & 933.4 & 5640721 & 1 & 5627391 & 6 & 5612362 & 13 & \textbf{5604229} & 3413 & 5679037 & 6216 & 5672749 & 5979 \\ normal 70x70 & 3593.3 & 10512493 & 1 & 10492591 & 12 & 10483432 & 25 & \textbf{10474343} & 7685 & 10604646 & 14559 & 10591133 & 13903 \\ normal 80x80 & 11339.0 & \textbf{17989971} & 1 & 17993643 & 20 & 18010732 & 42 & 17995894 & 15435 & 18226724 & 29827 & 18209532 & 28425 \\ euclidean 10x10 & 0.1 & 5447 & 1 & 5430 & 3 & 5445 & 3 & \textbf{5427} & 98 & 5427 & 266 & 5427 & 162 \\ euclidean 20x20 & 5.1 & 82409 & 1 & 81717 & 4 & 81710 & 4 & \textbf{81573} & 589 & 81573 & 1283 & 81575 & 747 \\ euclidean 30x30 & 70.1 & 418658 & 1 & 415529 & 7 & 415419 & 4 & \textbf{414767} & 2399 & 414774 & 3382 & 414808 & 1732 \\ euclidean 40x40 & 390.3 & 1321385 & 1 & 1317439 & 9 & 1317948 & 4 & \textbf{1316409} & 5459 & 1316509 & 6197 & 1316771 & 3010 \\ euclidean 50x50 & 1675.4 & 3151591 & 1 & 3136628 & 13 & 3139866 & 4 & \textbf{3135362} & 11411 & 3135723 & 11993 & 3136122 & 5359 \\ euclidean 60x60 & 4604.9 & 6563921 & 1 & 6532789 & 15 & 6537657 & 4 & \textbf{6529495} & 17621 & 6530835 & 17448 & 6532247 & 6641 \\ \bottomrule \end{tabular}} \end{sidewaystable*} \restoregeometry \begin{figure} \caption{Difference between solution values (to the best) for multi-start algorithms; \textit{uniform} instances} \label{msv} \end{figure} The best 
algorithm in these settings is \textit{AA}, which consistently exhibited better performance for all instance types. The reason is that a local optimum of this approach can be reached almost as fast as that of \textit{2ex}, while the solution quality is much better. On the other hand, the convergence of \textit{2exOpt} to a local optimum is very time consuming, and perhaps a better strategy is to do more restarts at the price of a slightly lower quality of the resulting solutions. A similar argument explains why \textit{2exOptFirst} outperforms \textit{3exOptFirst} in this type of experiment. This observation is in contrast with the results reported by researchers for the bipartite unconstrained binary quadratic program \cite{glover2015integrating} and the bipartite quadratic assignment problem \cite{punnen2016bipartite}. The difference can be attributed to the more complex structure of BAP in comparison to the problems mentioned above. \section{Variable neighborhood search} \label{sec:expvnsms} Variable neighborhood search (VNS) is an algorithmic paradigm that enhances standard local search by making use of the (often complementary) properties of multiple neighborhoods \cite{ahuja2007very,hansen2016variable}. The $2$-exchange neighborhood is very fast to explore, while the optimized $2$-exchange neighborhood is more powerful, but searching it for an improving solution takes significantly more time. The neighborhood considered in the \textit{Alternating Algorithm} works better when significant asymmetry is present between the $\mb{x}$ and $\mb{y}$ variables. Motivated by these complementary properties, we have explored VNS based algorithms to solve BAP. We start by attempting to improve the convergence speed of \textit{AA} by means of the faster \textit{2ex}. The first variation, named \textbf{\textit{2ex+AA}}, first applies \textit{2ex} to a \textit{Random} starting solution and then applies \textit{AA} to the resulting solution. A more complex approach, \textbf{\textit{2exAAStep}} (Algorithm \ref{2exAAStep}), starts by applying \textit{2ex}, and as soon as this search converges it applies a single improvement (step) with respect to the \textit{Alternating Algorithm} neighborhood. After a successful update the procedure defaults to running \textit{2ex} again. The process stops when no more improvements by \textit{AA} (and consequently by \textit{2ex}) are possible.
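Since the \textit{Alternating Algorithm} is the main building block of the procedures below, we also give a compact sketch of it. This is our own illustration: the experiments in this paper used a pure-Python Hungarian implementation under PyPy, so NumPy/SciPy and \texttt{linear\_sum\_assignment} are our substitutions; assignments are encoded as permutation arrays.

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-type LAP solver

def value(Q, C, D, px, py):
    # f(x, y) with px[i] = j iff x_ij = 1 and py[k] = l iff y_kl = 1
    m, n = len(px), len(py)
    inter = Q[np.arange(m)[:, None], px[:, None],
              np.arange(n)[None, :], py[None, :]].sum()
    return inter + C[np.arange(m), px].sum() + D[np.arange(n), py].sum()

def alternating_algorithm(Q, C, D, px, py):
    m, n = len(px), len(py)
    best = value(Q, C, D, px, py)
    while True:
        # fix y and solve a linear assignment problem in x ...
        E = Q[:, :, np.arange(n), py].sum(axis=2) + C
        qx = linear_sum_assignment(E)[1]
        # ... then fix the new x and solve one in y
        G = Q[np.arange(m), qx, :, :].sum(axis=0) + D
        qy = linear_sum_assignment(G)[1]
        cur = value(Q, C, D, qx, qy)
        if cur >= best:            # no strict improvement: converged
            return best, (px, py)
        best, px, py = cur, qx, qy
\end{verbatim}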
\begin{algorithm} \caption{$2exAAStep$} \label{2exAAStep} \begin{algorithmic}[0]\scriptsize \Input integers $m, n$; $m \times m \times n \times n$ array $Q$; feasible solution $(\mb{x}, \mb{y})$ to given BAP \Output feasible solution to given BAP \While{True} \State $(\mb{x}, \mb{y}) \gets 2ex(m, n, Q, (\mb{x}, \mb{y}))$ \Comment{running $2$-exchange local search (Section \ref{sec:hex})} \State $e_{ij} \gets \sum_{k, l \in N} q_{ijkl} y_{kl} \, \forall i, j \in M$ \State $\mb{x}^* \gets arg\,min_{\mb{x}' \in \mathcal{X}} \sum_{i, j \in M} e_{ij} x'_{ij}$ \Comment{solving assignment problem for $\mb{x}$} \If{$f(\mb{x}^*, \mb{y}) < f(\mb{x}, \mb{y})$ } \State $\mb{x} \gets \mb{x}^*$ \Comment{accepting the improving solution} \State \textbf{continue} \Comment{restarting the procedure \textbf{while} loop} \EndIf \State $g_{kl} \gets \sum_{i, j \in M} q_{ijkl} x^*_{ij} \, \forall k, l \in N$ \State $\mb{y}^* \gets arg\,min_{\mb{y}' \in \mathcal{Y}} \sum_{k, l \in N} g_{kl} y'_{kl}$ \Comment{solving assignment problem for $\mb{y}$} \If{$f(\mb{x}^*, \mb{y}^*) = f(\mb{x}, \mb{y})$ } \State \textbf{break} \Comment{algorithm converged, terminate} \EndIf \State $\mb{x} \gets \mb{x}^*; \, \mb{y} \gets \mb{y}^*$ \EndWhile \State \textbf{return} ($\mb{x}$, $\mb{y}$) \end{algorithmic} \end{algorithm} Results in Table \ref{vnsconvt1} follow the structure of the experimental results reported earlier in the paper. The number of iterations that we report for \textit{2exAAStep} is the number of times the heuristic switches from the $2$-exchange neighborhood to the neighborhood of the \textit{Alternating Algorithm}. Clearly, this number is $1$ for \textit{2ex+AA} by design. \begin{table}[!htb] \centering \caption{Solution value, running time in seconds and number of iterations for \textit{Alternating Algorithm} and variations (convergence to local optima)} \label{vnsconvt1}\scalebox{0.8}{\begin{tabular}{@{}lrlrlrlc@{}} \toprule & \multicolumn{2}{c}{AA} & \multicolumn{2}{c}{2ex+AA} & \multicolumn{3}{c}{2exAAStep}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-8} instances & value & time & value & time & value & time & iter\\ \midrule uniform 10x10 & 3255 & \textbf{0.0} & 3305 & 0.0 & 3322 & 0.01 & 1 \\ uniform 20x20 & 56287 & \textbf{0.01} & 56136 & 0.01 & 56076 & 0.01 & 3 \\ uniform 30x30 & 297819 & \textbf{0.02} & 298485 & 0.03 & 297874 & 0.05 & 4 \\ uniform 40x40 & 965875 & \textbf{0.06} & 967373 & 0.08 & 971010 & 0.13 & 5 \\ uniform 50x50 & 2415720 & \textbf{0.11} & 2414279 & 0.18 & 2419385 & 0.34 & 6 \\ uniform 60x60 & 5077348 & \textbf{0.23} & 5089275 & 0.33 & 5095460 & 0.77 & 9 \\ uniform 70x70 & 9578626 & \textbf{0.32} & 9561747 & 0.51 & 9549687 & 1.25 & 10 \\ uniform 80x80 & 16505833 & \textbf{0.59} & 16422705 & 0.93 & 16474525 & 1.87 & 10 \\ uniform 90x90 & 26650437 & \textbf{0.93} & 26726070 & 1.16 & 26706156 & 3.04 & 11 \\ uniform 100x100 & 41027445 & \textbf{1.12} & 41001387 & 1.89 & 41038180 & 4.78 & 14 \\ uniform 110x110 & 60512662 & \textbf{1.72} & 60549540 & 2.37 & 60508210 & 6.87 & 15 \\ uniform 120x120 & 86397256 & \textbf{2.08} & 86108044 & 3.23 & 86019130 & 10.47 & 18 \\ uniform 130x130 & 119380881 & \textbf{3.02} & 119421396 & 4.06 & 119417016 & 12.52 & 16 \\ uniform 140x140 & 161524589 & \textbf{3.58} & 161725915 & 5.6 & 161535754 & 16.97 & 18 \\ uniform 150x150 & 213377462 & \textbf{5.02} & 214064556 & 6.9 & 213453225 & 22.48 & 19 \\ normal 10x10 & 4037 & \textbf{0.0} & 3997 & 0.0 & 3997 & 0.0 & 2 \\ normal 20x20 & 66006 & \textbf{0.01} & 66372 & 0.01 & 66104 & 0.01 & 3 \\ normal 30x30 & 343319 & \textbf{0.02} & 342316 & 0.03 & 342776 & 0.05 & 3 \\
normal 40x40 & 1096961 & \textbf{0.06} & 1098741 & 0.09 & 1101256 & 0.17 & 7 \\ normal 50x50 & 2712329 & \textbf{0.12} & 2709929 & 0.2 & 2708557 & 0.38 & 8 \\ normal 60x60 & 5668986 & \textbf{0.21} & 5671907 & 0.33 & 5678451 & 0.72 & 8 \\ normal 70x70 & 10561145 & \textbf{0.42} & 10588835 & 0.57 & 10581535 & 1.29 & 10 \\ normal 80x80 & 18172093 & \textbf{0.51} & 18160338 & 0.87 & 18141092 & 2.22 & 12 \\ normal 90x90 & 29222387 & \textbf{0.91} & 29231041 & 1.3 & 29283340 & 2.84 & 10 \\ normal 100x100 & 44751122 & \textbf{1.31} & 44735031 & 1.72 & 44753417 & 5.22 & 15 \\ normal 110x110 & 65809366 & \textbf{1.64} & 65817524 & 2.39 & 65812802 & 6.97 & 15 \\ normal 120x120 & 93529513 & \textbf{2.26} & 93491028 & 3.58 & 93581308 & 8.65 & 14 \\ normal 130x130 & 129150096 & \textbf{3.26} & 129310194 & 4.14 & 129238943 & 12.84 & 17 \\ normal 140x140 & 174245361 & \textbf{3.75} & 174296950 & 5.91 & 174169032 & 20.14 & 21 \\ normal 150x150 & 230484514 & \textbf{4.28} & 230242366 & 7.32 & 230292305 & 24.21 & 21 \\ euclidean 10x10 & 5032 & \textbf{0.0} & 5015 & 0.0 & 5015 & 0.01 & 1 \\ euclidean 20x20 & 81714 & \textbf{0.01} & 81701 & 0.01 & 81701 & 0.01 & 2 \\ euclidean 30x30 & 424425 & \textbf{0.03} & 424261 & 0.04 & 424261 & 0.06 & 3 \\ euclidean 40x40 & 1331726 & \textbf{0.06} & 1330070 & 0.11 & 1330070 & 0.15 & 4 \\ euclidean 50x50 & 3342515 & \textbf{0.13} & 3337157 & 0.24 & 3337157 & 0.35 & 4 \\ euclidean 60x60 & 6637101 & \textbf{0.24} & 6622844 & 0.42 & 6622844 & 0.63 & 5 \\ euclidean 70x70 & 12373648 & \textbf{0.33} & 12345122 & 0.7 & 12345122 & 1.01 & 4 \\ euclidean 80x80 & 21088451 & \textbf{0.55} & 21060424 & 1.01 & 21060424 & 1.34 & 3 \\ euclidean 90x90 & 33842019 & \textbf{0.85} & 33831315 & 1.48 & 33831315 & 2.01 & 4 \\ euclidean 100x100 & 50386904 & \textbf{1.08} & 50351081 & 2.19 & 50350547 & 3.33 & 5 \\ \bottomrule \end{tabular}} \end{table} As all these approaches are guaranteed to produce solutions that are locally optimal with respect to the \textit{Alternating Algorithm} neighborhood, we expect the solution values to be similar. This can be seen in the table. The main observation here is that the \textit{2ex} heuristic does not combine well with \textit{AA}. The increased running time of both \textit{2ex+AA} and \textit{2exAAStep} confirms that \textit{AA} is more efficient in searching its much larger neighborhood. We then explored the effect of combining \textit{2exOptFirst} and \textit{AA}. An algorithm that first runs \textit{AA} once and then applies \textit{2exOptFirst} until convergence will be referred to as \textbf{\textit{AA+2exOptFirst}}. A more desirable variable neighborhood search based on the discussed heuristics exploits the fact that, most of the time, running \textit{AA} until convergence is faster than even a single solution update during a \textit{2exOptFirst} run. The algorithm \textit{\textbf{AA2exOptFirstStep}} (Algorithm \ref{AA2exOptFirstStep}) uses \textit{AA} to reach a local optimum and then tries to escape it by applying the first improving move found by the slower search \textit{2exOptFirst}. If successful, the process starts from the beginning with \textit{AA}. We also add to the comparison a variation with the \textit{best improvement} rule, namely \textit{\textbf{AA2exOptStep}}.
\begin{algorithm} \caption{$AA2exOptFirstStep$} \label{AA2exOptFirstStep} \begin{algorithmic}[0]\scriptsize \Input integers $m, n$; $m \times m \times n \times n$ array $Q$; feasible solution $(\mb{x}, \mb{y})$ to given BAP \Output feasible solution to given BAP \While{True} \State $(\mb{x}, \mb{y}) \gets AA(m, n, Q, (\mb{x}, \mb{y}))$ \Comment{running \textit{Alternating Algorithm} (Section \ref{sec:hex})} \ForAll{$i_1 \in M$ \textbf{and all} $i_2 \in M \setminus \{i_1\}$} \State $j_1 \gets$ assigned index to $i_1$ in $\mb{x}$ \State $j_2 \gets$ assigned index to $i_2$ in $\mb{x}$ \State $\mb{x}^* \gets \mb{x}$ \State $x^*_{i_1j_1} \gets 0; \, x^*_{i_2j_2} \gets 0; \, x^*_{i_1j_2} \gets 1; \, x^*_{i_2j_1} \gets 1$ \Comment{applying 2-exchange} \State $g_{kl} \gets \sum_{i, j \in M} q_{ijkl} x^*_{ij} \, \forall k, l \in N$ \State $\mb{y}^* \gets arg\,min_{\mb{y}' \in \mathcal{Y}} \sum_{k, l \in N} g_{kl} y'_{kl}$ \Comment{solving assignment problem for $\mb{y}$} \If{$f(\mb{x}^*, \mb{y}^*) < f(\mb{x}, \mb{y})$ } \State $\mb{x} \gets \mb{x}^*; \, \mb{y} \gets \mb{y}^*$ \State \textbf{continue while} \Comment{restarting the procedure \textbf{while} loop} \EndIf \EndFor \ForAll{$k_1 \in N$ \textbf{and all} $k_2 \in N \setminus \{k_1\}$} \State $l_1 \gets$ assigned index to $k_1$ in $\mb{y}$ \State $l_2 \gets$ assigned index to $k_2$ in $\mb{y}$ \State $\mb{y}^* \gets \mb{y}$ \State $y^*_{k_1l_1} \gets 0; \, y^*_{k_2l_2} \gets 0; \, y^*_{k_1l_2} \gets 1; \, y^*_{k_2l_1} \gets 1$ \Comment{applying 2-exchange} \State $e_{ij} \gets \sum_{k, l \in N} q_{ijkl} y^*_{kl} \, \forall i, j \in M$ \State $\mb{x}^* \gets arg\,min_{\mb{x}' \in \mathcal{X}} \sum_{i, j \in M} e_{ij} x'_{ij}$ \Comment{solving assignment problem for $\mb{x}$} \If{$f(\mb{x}^*, \mb{y}^*) < f(\mb{x}, \mb{y})$ } \State $\mb{x} \gets \mb{x}^*; \, \mb{y} \gets \mb{y}^*$ \State \textbf{continue while} \Comment{restarting the procedure \textbf{while} loop} \EndIf \EndFor \State \textbf{break} \Comment{algorithm converged, terminate} \EndWhile \State \textbf{return} ($\mb{x}$, $\mb{y}$) \end{algorithmic} \end{algorithm} The results of these experiments are reported in Table \ref{vnsconvt2}. Here, we also report the number of iterations for \textit{AA2exOptStep} and \textit{AA2exOptFirstStep}, which represents the number of switches from the \textit{Alternating Algorithm} neighborhood to the optimized $2$-exchange neighborhood before the algorithms converge. \newgeometry{margin=1cm} \thispagestyle{empty} \begin{sidewaystable*}[t!]
\centering \small \caption{Solution value, running time in seconds and number of iterations for \textit{2exOpt} and variations (convergence to local optima)} \label{vnsconvt2}\scalebox{0.75}{\begin{tabular}{@{}lrlrlrlcrlc@{}} \toprule & \multicolumn{2}{c}{2ExOptFirst} & \multicolumn{2}{c}{AA+2exOptFirst} & \multicolumn{3}{c}{AA2exOptStep} & \multicolumn{3}{c}{AA2exOptFirstStep}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-8} \cmidrule(lr){9-11} instances & value & time & value & time & value & time & iter & value & time & iter\\ \midrule uniform 10x10 & 3156 & \textbf{0.02} & 3059 & 0.01 & 3054 & 0.02 & 2 & 3059 & 0.02 & 3 \\ uniform 20x20 & 54670 & \textbf{0.35} & 54877 & 0.24 & 54718 & 0.32 & 3 & 54431 & 0.2 & 4 \\ uniform 30x30 & 291902 & 2.27 & 294044 & \textbf{1.12} & 291184 & 2.64 & 4 & 290011 & 1.17 & 4 \\ uniform 40x40 & 948344 & 9.78 & 958550 & \textbf{3.29} & 953938 & 5.61 & 3 & 958215 & 3.4 & 3 \\ uniform 50x50 & 2379856 & 33.02 & 2399151 & 11.15 & 2392151 & 16.57 & 3 & 2395319 & \textbf{8.08} & 3 \\ uniform 60x60 & 5044883 & 64.73 & 5026000 & 36.08 & 5030618 & 35.22 & 3 & 5026865 & \textbf{20.45} & 4 \\ uniform 70x70 & 9479099 & 168.6 & 9511756 & 67.85 & 9521222 & 78.27 & 3 & 9501548 & \textbf{28.81} & 3 \\ uniform 80x80 & 16418360 & 252.23 & 16400987 & 120.41 & 16390406 & 132.04 & 3 & 16381373 & \textbf{56.49} & 3 \\ uniform 90x90 & 26507000 & 569.45 & 26499481 & 229.48 & 26536687 & 238.11 & 3 & 26546135 & \textbf{113.07} & 4 \\ uniform 100x100 & 40753550 & 878.32 & 40894844 & 293.74 & 40949795 & 184.26 & 1 & 40875653 & \textbf{155.41} & 3 \\ uniform 110x110 & 60079399 & 1539.8 & 60231687 & 458.67 & 60277301 & 487.4 & 3 & 60196421 & \textbf{317.3} & 4 \\ uniform 120x120 & 85818278 & 2120.85 & 85774789 & 1090.37 & 85996522 & 526.83 & 2 & 86070239 & \textbf{306.32} & 2 \\ uniform 130x130 & 118773110 & 3515.46 & 118967905 & 1105.4 & 119034719 & 827.06 & 2 & 119133276 & \textbf{452.11} & 3 \\ uniform 140x140 & 160780185 & 4860.32 & 160956538 & 1304.17 & 161002007 & 1479.93 & 3 & 161113803 & \textbf{764.84} & 3 \\ uniform 150x150 & 213525103 & 5514.74 & 213372569 & \textbf{538.34} & 213372569 & 748.16 & 1 & 213372569 & 553.21 & 1 \\ normal 10x10 & 3866 & \textbf{0.02} & 3895 & 0.01 & 3886 & 0.02 & 2 & 3917 & 0.01 & 2 \\ normal 20x20 & 65262 & \textbf{0.3} & 65137 & 0.28 & 65166 & 0.36 & 3 & 65258 & 0.21 & 4 \\ normal 30x30 & 338569 & 2.9 & 340096 & 1.19 & 340240 & 1.52 & 2 & 340534 & \textbf{0.86} & 3 \\ normal 40x40 & 1087006 & 10.28 & 1087569 & 6.04 & 1090323 & 6.93 & 3 & 1089412 & \textbf{3.46} & 4 \\ normal 50x50 & 2695007 & 26.39 & 2697747 & 14.44 & 2697124 & 19.13 & 3 & 2696860 & \textbf{7.45} & 3 \\ normal 60x60 & 5637608 & 71.64 & 5639469 & 34.18 & 5634802 & 45.69 & 4 & 5638741 & \textbf{18.75} & 3 \\ normal 70x70 & 10538891 & 159.53 & 10527751 & 61.85 & 10524931 & 80.22 & 3 & 10532494 & \textbf{33.81} & 3 \\ normal 80x80 & 18102861 & 292.68 & 18102161 & 145.45 & 18123379 & 148.5 & 4 & 18125319 & \textbf{62.56} & 3 \\ normal 90x90 & 29162243 & 447.82 & 29167487 & 166.29 & 29176575 & 193.61 & 3 & 29167084 & \textbf{102.4} & 3 \\ normal 100x100 & 44610176 & 953.0 & 44644532 & 272.9 & 44626268 & 376.46 & 4 & 44645246 & \textbf{153.74} & 3 \\ normal 110x110 & 65589378 & 1404.23 & 65635027 & 561.99 & 65669769 & 423.52 & 3 & 65646106 & \textbf{233.42} & 3 \\ normal 120x120 & 93315766 & 2071.35 & 93321138 & 697.7 & 93338052 & 692.75 & 3 & 93300933 & \textbf{346.7} & 3 \\ normal 130x130 & 128872342 & 3329.54 & 129005518 & 630.07 & 128978046 & 784.53 & 2 & 129030228 & 
\textbf{361.45} & 2 \\ normal 140x140 & 173877153 & 4669.47 & 174004558 & 1379.7 & 174104009 & 857.84 & 2 & 174117705 & \textbf{565.83} & 2 \\ normal 150x150 & 229879808 & 6572.92 & 229985798 & 2161.19 & 230286566 & 1481.59 & 3 & 230254077 & \textbf{757.09} & 3 \\ euclidean 10x10 & 4988 & \textbf{0.04} & 4995 & 0.02 & 4992 & 0.02 & 1 & 4996 & 0.02 & 1 \\ euclidean 20x20 & 81833 & 1.46 & 81644 & \textbf{0.33} & 81644 & 0.31 & 1 & 81644 & 0.28 & 1 \\ euclidean 30x30 & 424227 & 17.82 & 424425 & \textbf{1.63} & 424425 & 1.65 & 1 & 424425 & 1.64 & 1 \\ euclidean 40x40 & 1330114 & 84.25 & 1331592 & \textbf{7.63} & 1331592 & 7.28 & 1 & 1331592 & 7.24 & 1 \\ euclidean 50x50 & 3344106 & 347.38 & 3342208 & 22.61 & 3342208 & 20.49 & 1 & 3342208 & \textbf{18.78} & 1 \\ euclidean 60x60 & 6628784 & 968.15 & 6637101 & \textbf{43.81} & 6637101 & 43.91 & 1 & 6637101 & 43.81 & 1 \\ euclidean 70x70 & 12343342 & 2404.75 & 12373648 & \textbf{90.12} & 12373648 & 90.34 & 1 & 12373648 & 90.1 & 1 \\ euclidean 80x80 & 21098260 & 5579.32 & 21088451 & \textbf{174.46} & 21088451 & 174.94 & 1 & 21088451 & 174.98 & 1 \\ euclidean 90x90 & 33892498 & 11440.65 & 33841998 & 333.66 & 33841998 & 338.05 & 1 & 33841998 & \textbf{326.42} & 1 \\ euclidean 100x100 & 50313528 & 19808.73 & 50386904 & \textbf{514.2} & 50386904 & 515.13 & 1 & 50386904 & 514.74 & 1 \\ \bottomrule \end{tabular}} \end{sidewaystable*} \restoregeometry We notice that incorporating the \textit{Alternating Algorithm} into the \textit{optimized 2-exchange} search yields much better performance, bringing the convergence time down by at least an order of magnitude. Among the variations, \textit{AA2exOptFirstStep} is consistently faster for \textit{uniform} and \textit{normal} instances. However, for \textit{euclidean} instances the performance of all variable neighborhood search algorithms is similar. In fact, for euclidean instances of all sizes the average number of switches between neighborhoods is $1$, which implies that there is no possible improvement from the optimized $2$-exchange neighborhood after the Alternating Algorithm has converged. Thus, the special structure of instances must always be considered when developing metaheuristics for BAP. Results on convergence time for all described algorithms from this sub-section, for \textit{uniform} instances, are given in Figure \ref{vnsconvut}. \begin{figure} \caption{Running time to reach the local optima by algorithms; \textit{uniform} instances} \label{vnsconvut} \end{figure} Our concluding set of experiments is dedicated to finding the most efficient combination of variable neighborhood search strategies and construction heuristics. We consider the variation of the VNS approach with the best convergence speed, namely \textit{AA2exOptFirstStep}. Let \textit{\textbf{h-AA2exOptFirstStep}} be the algorithm that first generates $h$ starting solutions using the \textit{RandomXYGreedy} strategy. It then applies \textit{AA} to each of these solutions, selecting the best one and discarding the rest. After that, \textit{h-AA2exOptFirstStep} follows the description of \textit{AA2exOptFirstStep} (Algorithm \ref{AA2exOptFirstStep}) and alternates between finding an improving solution in the optimized $2$-exchange neighborhood and applying \textit{AA}, until convergence to a local optimum. In this sense, \textit{AA2exOptFirstStep} and \textit{1-AA2exOptFirstStep} are equivalent implementations.
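In outline, these drivers can be summarized by the following sketch. It is our own abstraction: the callables \texttt{construct}, \texttt{aa} and \texttt{vns} stand for \textit{RandomXYGreedy}, the \textit{Alternating Algorithm} and the optimized $2$-exchange phase of Algorithm \ref{AA2exOptFirstStep}, respectively, and are assumed to return \texttt{(value, solution)} pairs.

\begin{verbatim}
import time

def multi_start(construct, aa, time_limit, h=1, vns=None):
    best_val, best_sol = float('inf'), None
    deadline = time.time() + time_limit
    while time.time() < deadline:
        # build h greedy starts, improve each by AA, keep only the best
        val, sol = min((aa(construct()) for _ in range(h)),
                       key=lambda pair: pair[0])
        if vns is not None:
            val, sol = vns(val, sol)  # optimized 2-exchange escape phase
        if val < best_val:
            best_val, best_sol = val, sol
    return best_val, best_sol
\end{verbatim}

With $h = 1$ and no \texttt{vns} phase, a time-limited loop of this kind corresponds to the \textit{msAA} strategy used below; a single pass of the loop body with $h > 1$ and a \texttt{vns} phase corresponds to one run of \textit{h-AA2exOptFirstStep}.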
A single iteration of \textit{AA} requires $O(n^3)$ running time, whereas a full exploration of the optimized $2$-exchange neighborhood takes $O(m^2n^3)$. From the experiments in Section \ref{sec:expls} we also know that it usually takes \textit{AA} fewer than 10 iterations to converge. Based on these observations, for the following experimental analysis we have chosen $h \in \{4, 10, 100\}$ for \textit{h-AA2exOptFirstStep}. In addition to the versions of \textit{h-AA2exOptFirstStep}, we consider the simple multi-start \textit{AA} strategy that performed well in previous experiments (see Section \ref{sec:explsms}), denoted \textit{\textbf{msAA}}. Now, however, the starting solution is generated each time using the \textit{RandomXYGreedy} construction heuristic. As the time limit for this multi-start approach we select the highest convergence time among all \textit{h-AA2exOptFirstStep} variations. As often happens with time-limited multi-start procedures, the best solution is usually found before the final iteration. Hence, in addition to the total number of starts, we also report the average iteration (\textit{best iter}) at which the finally reported solution was found, and the standard deviation of this value. See the results of these experiments in Table \ref{vnsmst} and Figure \ref{vnsmsuv}. \newgeometry{margin=1cm} \thispagestyle{empty} \begin{sidewaystable*}[t!] \centering \small \caption{Solution value, running time in seconds and number of iterations for Variable Neighborhood Search and multi-start \textit{AA}} \label{vnsmst}\scalebox{0.8}{\begin{tabular}{@{}lrlcrlcrlcrlcrlccc@{}} \toprule & \multicolumn{3}{c}{AA2exOptFirstStep} & \multicolumn{3}{c}{4-AA2exOptFirstStep} & \multicolumn{3}{c}{10-AA2exOptFirstStep} & \multicolumn{3}{c}{100-AA2exOptFirstStep} & \multicolumn{5}{c}{msAA}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} \cmidrule(lr){14-18} instances & value & time & iter & value & time & iter & value & time & iter & value & time & iter & value & time & iter & best iter & $\sigma(\text{best iter})$\\ \midrule uniform 10x10 & 3162 & 0.02 & 3 & 3126 & 0.01 & 1 & 3025 & 0.01 & 1 & \textbf{2983} & 0.06 & 1 & 2995 & 0.07 & 116 & 47 & 41\\ uniform 20x20 & 55131 & 0.15 & 2 & 54601 & 0.17 & 2 & 54294 & 0.19 & 2 & \textbf{53281} & 0.58 & 1 & 53620 & 0.59 & 131 & 54 & 37\\ uniform 30x30 & 293385 & 0.89 & 3 & 292039 & 0.83 & 2 & 289483 & 0.92 & 2 & \textbf{286542} & 2.42 & 1 & 287169 & 2.44 & 130 & 57 & 49\\ uniform 40x40 & 955295 & 3.03 & 3 & 950608 & 2.77 & 3 & 951947 & 2.87 & 2 & 942849 & 6.82 & 1 & \textbf{939052} & 6.85 & 138 & 89 & 32\\ uniform 50x50 & 2380817 & 11.35 & 5 & 2379835 & 11.88 & 4 & 2375551 & 7.42 & 2 & 2370805 & 15.35 & 1 & \textbf{2360529} & 16.56 & 165 & 83 & 52\\ uniform 60x60 & 5038934 & 19.96 & 3 & 5030082 & 15.28 & 2 & 5015756 & 18.16 & 2 & \textbf{4990868} & 35.36 & 2 & 4993774 & 38.42 & 208 & 112 & 35\\ uniform 70x70 & 9479825 & 34.21 & 4 & 9436974 & 43.32 & 3 & 9445502 & 39.85 & 3 & 9413893 & 54.29 & 1 & \textbf{9399736} & 61.76 & 203 & 115 & 67\\ uniform 80x80 & 16389632 & 61.47 & 3 & 16357168 & 55.61 & 2 & 16303348 & 59.12 & 2 & \textbf{16261295} & 95.21 & 1 & 16264848 & 104.0 & 217 & 95 & 54\\ uniform 90x90 & 26505894 & 110.55 & 3 & 26456700 & 94.5 & 3 & 26407075 & 80.08 & 1 & 26356116 & 151.23 & 2 & \textbf{26342919} & 160.45 & 226 & 83 & 64\\ uniform 100x100 & 40782492 & 141.59 & 3 & 40712949 & 180.44 & 3 & 40633567 & 165.63 & 3 & 40540438 & 208.3 & 1 & \textbf{40506423} & 241.3 & 241 & 116 & 96\\ uniform 120x120 & 85825930
& 342.18 & 3 & 85579139 & 274.87 & 2 & 85471530 & 333.39 & 3 & 85335239 & 441.49 & 1 & \textbf{85283242} & 509.31 & 273 & 122 & 71\\ uniform 140x140 & 160605657 & 693.67 & 3 & 160415349 & 555.54 & 3 & 160292924 & 474.41 & 1 & 160035009 & 719.05 & 1 & \textbf{159912990} & 927.88 & 286 & 131 & 124\\ uniform 160x160 & 277129402 & 909.79 & 2 & 276565751 & 918.66 & 2 & 276159588 & 908.23 & 2 & \textbf{275721038} & 1386.9 & 1 & 275725334 & 1657.71 & 302 & 154 & 100\\ normal 10x10 & 3894 & 0.02 & 3 & 3855 & 0.01 & 2 & 3855 & 0.01 & 1 & \textbf{3808} & 0.07 & 1 & 3809 & 0.07 & 117 & 40 & 37\\ normal 20x20 & 65712 & 0.15 & 2 & 65077 & 0.17 & 2 & 64803 & 0.2 & 1 & \textbf{64293} & 0.58 & 1 & 64477 & 0.58 & 130 & 73 & 48\\ normal 30x30 & 338547 & 1.17 & 5 & 337693 & 0.95 & 3 & 338138 & 0.79 & 1 & \textbf{335113} & 2.75 & 2 & 335756 & 2.76 & 145 & 74 & 42\\ normal 40x40 & 1090670 & 2.81 & 3 & 1088357 & 3.1 & 3 & 1085519 & 2.69 & 2 & \textbf{1081375} & 7.56 & 1 & 1082915 & 7.58 & 154 & 81 & 46\\ normal 50x50 & 2696368 & 8.24 & 3 & 2692035 & 8.33 & 2 & 2682121 & 8.66 & 3 & \textbf{2678345} & 17.52 & 2 & 2680271 & 17.58 & 175 & 71 & 55\\ normal 60x60 & 5647247 & 17.06 & 3 & 5633194 & 14.77 & 1 & 5627675 & 17.07 & 2 & \textbf{5616899} & 31.56 & 2 & 5617125 & 32.18 & 173 & 83 & 55\\ normal 70x70 & 10549768 & 26.89 & 1 & 10519922 & 34.7 & 3 & 10509205 & 30.19 & 2 & \textbf{10493809} & 57.37 & 2 & 10494503 & 61.86 & 201 & 104 & 64\\ normal 80x80 & 18095404 & 72.05 & 3 & 18069406 & 59.64 & 2 & 18067347 & 55.46 & 2 & 18032081 & 86.61 & 1 & \textbf{18023497} & 100.11 & 209 & 112 & 62\\ normal 90x90 & 29115217 & 107.77 & 3 & 29103538 & 103.37 & 2 & 29097191 & 95.29 & 2 & 29045978 & 165.73 & 2 & \textbf{29027250} & 187.3 & 264 & 120 & 71\\ normal 100x100 & 44618697 & 130.7 & 2 & 44578918 & 138.0 & 2 & 44556729 & 162.61 & 3 & 44484747 & 245.72 & 3 & \textbf{44482231} & 279.76 & 274 & 172 & 62\\ normal 120x120 & 93293438 & 343.2 & 3 & 93162243 & 313.92 & 2 & 93112300 & 309.4 & 2 & 93023046 & 506.08 & 2 & \textbf{92984865} & 540.0 & 282 & 149 & 93\\ normal 140x140 & 173820624 & 535.5 & 2 & 173653510 & 510.49 & 2 & 173594266 & 481.53 & 1 & 173434718 & 815.2 & 2 & \textbf{173430869} & 900.03 & 279 & 144 & 76\\ normal 160x160 & 298434202 & 967.33 & 2 & 297840806 & 899.65 & 2 & 297816150 & 1030.84 & 2 & 297540220 & 1211.89 & 1 & \textbf{297480023} & 1567.93 & 294 & 126 & 62\\ euclidean 10x10 & 5037 & 0.02 & 1 & \textbf{5026} & 0.02 & 1 & 5027 & 0.02 & 1 & 5026 & 0.11 & 1 & 5026 & 0.11 & 116 & 6 & 7\\ euclidean 20x20 & 82675 & 0.25 & 1 & 82008 & 0.26 & 1 & 81842 & 0.31 & 1 & \textbf{81718} & 1.0 & 1 & 81718 & 1.0 & 129 & 12 & 11\\ euclidean 30x30 & 411014 & 1.78 & 1 & 408739 & 1.72 & 1 & 407379 & 1.91 & 1 & \textbf{406970} & 4.23 & 1 & 406970 & 4.24 & 162 & 32 & 43\\ euclidean 40x40 & 1348302 & 6.68 & 1 & 1342159 & 6.99 & 1 & 1339683 & 7.09 & 1 & 1337792 & 12.69 & 1 & \textbf{1337738} & 12.72 & 204 & 48 & 58\\ euclidean 50x50 & 3231060 & 21.05 & 1 & 3219207 & 20.39 & 1 & 3214867 & 19.94 & 1 & 3210442 & 30.74 & 1 & \textbf{3210280} & 31.97 & 254 & 37 & 36\\ euclidean 60x60 & 6548901 & 44.42 & 1 & 6519075 & 44.82 & 1 & 6515800 & 46.24 & 1 & 6507833 & 65.26 & 1 & \textbf{6507813} & 65.41 & 304 & 32 & 23\\ euclidean 70x70 & 12315235 & 93.93 & 1 & 12283239 & 100.51 & 1 & 12264197 & 96.28 & 1 & 12257619 & 126.03 & 1 & \textbf{12256435} & 128.94 & 388 & 74 & 76\\ euclidean 80x80 & 21240164 & 187.89 & 1 & 21143316 & 183.3 & 1 & 21104571 & 185.35 & 1 & 21096255 & 229.53 & 1 & \textbf{21095365} & 232.0 & 459 & 144 & 132\\ 
euclidean 90x90 & 33385322 & 335.48 & 1 & 33323860 & 319.99 & 1 & 33296502 & 326.28 & 1 & 33279588 & 388.9 & 1 & \textbf{33277417} & 398.29 & 558 & 81 & 126\\ euclidean 100x100 & 51524424 & 530.7 & 1 & 51382552 & 535.98 & 1 & 51303227 & 538.1 & 1 & 51289100 & 632.49 & 1 & \textbf{51286565} & 633.16 & 597 & 158 & 133\\ euclidean 120x120 & 105192868 & 1291.27 & 1 & 105092433 & 1284.2 & 1 & 105037756 & 1404.01 & 1 & 104969850 & 1456.4 & 1 & \textbf{104965462} & 1556.45 & 908 & 93 & 112\\ \bottomrule \end{tabular}} \end{sidewaystable*} \restoregeometry \begin{figure} \caption{Difference between solution values (to the best) for algorithms; \textit{uniform} instances} \label{vnsmsuv} \end{figure} Under these conditions, multi-start \textit{AA} once again performed the best. The \textit{h-AA2exOptFirstStep} variations became more efficient as $h$ grew. Interestingly, for several instance sizes, the average iteration at which \textit{msAA} finds its best solution is substantially below $100$. However, the observed standard deviation is very high, which hints at the variability of the solutions produced by \textit{AA}. To confirm this, we present in Figures \ref{100AAu}, \ref{100AAn} and \ref{100AAe} the spread of solution values produced by applying \textit{AA} to the solution of \textit{RandomXYGreedy} (denoted as \textit{RandomXYGreedy+AA}). All three instances in these charts are of size $m=n=100$, and we perform $100$ runs of this metaheuristic. \begin{figure} \caption{Objective solution values for \textit{RandomXYGreedy+AA} metaheuristic; \textit{uniform} $100 \times 100$ instance} \label{100AAu} \end{figure} \begin{figure} \caption{Objective solution values for \textit{RandomXYGreedy+AA} metaheuristic; \textit{normal} $100 \times 100$ instance} \label{100AAn} \end{figure} \begin{figure} \caption{Objective solution values for \textit{RandomXYGreedy+AA} metaheuristic; \textit{euclidean} $100 \times 100$ instance} \label{100AAe} \end{figure} At this point, we conclude that the optimized $2$-exchange neighborhood is too costly to explore in comparison to the neighborhood that \textit{AA} is based on. In the general case it is more effective to perform several additional restarts of \textit{AA} from \textit{RandomXYGreedy} solutions than to spend that time escaping local optima with even a single step of \textit{2exOpt}. We therefore suggest using efficient VNS implementations that explore the optimized $2$-exchange neighborhood only as the final step of a metaheuristic. In this way the solution quality can be improved without excessive time expenditure, while the \textit{Alternating Algorithm} performs all the heavy work. Our previous experiments involving multi-start strategies (in this section and Section \ref{sec:explsms}) use reasonable time limits. Such considerations are important when developing algorithms to run on real-life instances. However, we are also interested in the behavior of multi-start \textit{AA} and multi-start VNS under unlimited (or unreasonably large) running time budgets. Figure \ref{ut100u} presents the results of running multi-start \textit{AA}, multi-start \textit{1-AA2exOptFirstStep} and multi-start \textit{100-AA2exOptFirstStep} on a single $100 \times 100$ \textit{uniform} instance for an exceedingly long period of time. All starts are made from solutions generated by the \textit{RandomXYGreedy} heuristic. We report how the best found solution value improves over time.
\begin{figure} \caption{Improvement over time of best found objective solution value for multi-start heuristics; \textit{uniform} $100 \times 100$ instance} \label{ut100u} \end{figure} We can see that after 50000 seconds (0.58 days of running) the multi-start VNS strategies begin to dominate multi-start \textit{AA}, even though the latter approach is much more efficient at exploring the solution space for short running times. This observation is consistent with optimized $h$-exchange being a more powerful neighborhood in terms of solution quality. \section{Conclusion} \label{sec:conclusion} We have presented the first systematic experimental analysis of heuristics for BAP, along with some theoretical results on the worst-case performance of local search algorithms. Three classes of neighborhoods - $h$-exchange, $[h,p]$-exchange and shift based - are introduced. Some of these neighborhoods are of exponential size but can be searched for an improving solution in polynomial time. Analyses of local optima in terms of domination properties and of their relation to the average value $\mathcal{A}(Q,C,D)$ are presented. Several greedy, semi-greedy and rounding construction heuristics are proposed for generating reasonable-quality solutions quickly. Experimental results show that \textit{RandomXYGreedy} is a good alternative among these approaches. Its built-in randomized decision steps make this heuristic valuable for generating starting solutions for improvement algorithms within a multi-start framework. Extensive computational analysis has been carried out on the searches based on the described neighborhoods. The experimental results suggest that the very large-scale neighborhood (VLSN) search algorithm - the \textit{Alternating Algorithm (AA)} - when used within a multi-start framework, yields a well-balanced heuristic in terms of running time and solution quality. A variable neighborhood search (VNS) algorithm that strategically uses the optimized $2$-exchange neighborhood together with the \textit{AA} neighborhood produced superior outcomes. However, this came with the downside of a significantly larger computational time. We hope that this study inspires additional research on the bilinear assignment model, particularly in the area of design and analysis of exact and heuristic algorithms. \section*{Acknowledgment} This work was supported by an NSERC Discovery Grant and an NSERC Accelerator Supplement awarded to Abraham P. Punnen, as well as an NSERC Discovery Grant awarded to Binay Bhattacharya. \end{document}
Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

Hussain Ali, Sajid Ahmed, Tareq Y. Al-Naffouri, Mohammad S. Sharawi & Mohamed-S. Alouini

EURASIP Journal on Advances in Signal Processing, volume 2017, Article 6 (2017)

Abstract

Conventional algorithms used for parameter estimation in colocated multiple-input-multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS algorithm framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate the target parameters for both the spatial and the temporal formulations with an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and achieving better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

Introduction

Colocated multiple-input-multiple-output (MIMO) radars have been extensively studied in the literature for surveillance applications. In phased array radars, each antenna transmits a phase-shifted version of the same waveform to steer the transmit beam. Therefore, in phased array radars, the waveforms transmitted at the antenna elements are sufficiently correlated to result in a single beamformed waveform. In contrast, MIMO radar can be seen as an extension of phased array radar, where the transmitted waveforms can be independent or partially correlated. Such waveforms yield extra degrees of freedom that can be exploited for better detection performance and resolution, and to achieve desired beam patterns with uniform transmit energy in the desired directions. For MIMO radar, many parameter estimation algorithms have been studied, e.g., Capon, amplitude and phase estimation (APES), Capon and APES (CAPES), and Capon and approximate maximum likelihood (CAML) [1, 2]. These algorithms require the inverse of the covariance matrix of the received samples. The covariance matrix of the received samples is full rank if the number of snapshots is greater than or equal to the number of receive antenna elements. Therefore, conventional algorithms like Capon and APES require a large number of snapshots for parameter estimation. Moreover, in the case of large arrays, the inversion of the covariance matrix of a large number of received snapshots becomes computationally expensive. Compressive sensing (CS) [3, 4] is a useful tool for data recovery in sparse environments.
Several efficient algorithms have been proposed that fall into the category of greedy algorithms; these include orthogonal matching pursuit (OMP) [5], regularized orthogonal matching pursuit (ROMP) [6], stagewise orthogonal matching pursuit (StOMP) [7], and compressive sampling matching pursuit (CoSaMP) [8]. There is another category of CS algorithms, called Bayesian algorithms, that assume the a priori statistics are known. These algorithms include sparse Bayesian learning [9], Bayesian compressive sensing (BCS) [10], and the fast Bayesian matching pursuit (FBMP) [11]. Another reduced-complexity algorithm based on the structure of the sensing matrix is proposed in [12]. In addition to these algorithms, support agnostic Bayesian matching pursuit (SABMP) is proposed in [13]; it assumes that the support distribution is unknown and finds the Bayesian estimate of the sparse signal by utilizing the noise statistics and sparsity rate.

The target parameters to be estimated are the reflection coefficients (path gains) and the locations of the targets. To estimate the reflection coefficient and location angle of a target, existing CS algorithms can be utilized by formulating the MIMO radar parameter estimation problem as a sparse estimation problem. It is shown in [14–16] that the MIMO radar problem can be seen as an ℓ1-norm minimization problem. In direction of arrival (DOA) estimation, a discretized grid is selected to search all possible DOA estimates. The grid corresponds to the search points in the angle domain of the MIMO radar. The complexity of the CS method developed in [15] grows with the size of the discretized grid. In [16], the minimization problem is solved based on a covariance matrix estimation approach, which requires a large number of snapshots. The work in [17] does not provide a fast parameter estimation algorithm and assumes that the number of targets, sparsity rate, and noise variance are known. The authors in [18] have used CVX (a package to solve convex problems) to solve the minimization problem obtained by the CS formulation of MIMO radar. The solution of CS problems by CVX is computationally expensive for large angle grids. In [19], off-grid direction of arrival is estimated using sparse Bayesian inference, where the number of sources or targets is assumed to be known. An off-grid CS algorithm called adaptive matching pursuit with constrained total least squares is proposed in [20] with application to DOA estimation. Another algorithm based on iterative recovery of off-grid targets is proposed in [21, 22]. For recent developments that are useful in off-grid recovery, please see [23] and references therein.

In this work, our contribution is twofold. First, we solve the spatial formulation for parameter estimation by SABMP for on-grid targets, assuming that the number of targets and the noise variance are unknown. Second, we solve an alternate temporal formulation to find estimates of the unknown parameters. We also compare the MSE and complexity of our approach with existing conventional algorithms. Specifically, the advantages of using a CS-based algorithm are as follows:

1. The spatial formulation can recover the unknown parameters when the number of snapshots is less than the number of receiving antennas.
2. The proposed approach for parameter estimation is capable of estimating unknown parameters even away from the broadside of the beam pattern.
3. The recovery of the reflection coefficient in the CS temporal formulation using SABMP is better than with the Capon, APES, and CoSaMP algorithms.
4. The complexity of the SABMP algorithm is not much affected by the number of receive antenna elements in the spatial formulation.

Organization of the paper

The rest of the paper is organized as follows: In Section 2, the signal model for the MIMO radar DOA problem is formulated. In Section 3, the system model is reformulated in a CS environment for on-grid parameter estimation along with the spatial and temporal formulations for large and small arrays (Sections 3.1 and 3.2, respectively). In Section 4, we show the derivation of the Cramér–Rao lower bound (CRLB). The simulation results are discussed in Section 5, and the paper is concluded in Section 6.

Notation

We assume complex-valued data, which is more general. Bold lower case letters, e.g., x, and bold upper case letters, e.g., X, respectively denote vectors and matrices. The notations x^T and X^T denote the transpose of a vector x and of a matrix X, respectively. The notation x^H denotes the complex conjugate transpose of a vector x. The notation diag{a,b} denotes a diagonal matrix with diagonal entries a and b.

Support agnostic Bayesian matching pursuit

The CS technique is used to recover information from signals that are sparse in some domain, using fewer measurements than required by Nyquist theory. Let \(\mathbf{x} \in \mathcal{C}^{N}\) be a sparse signal which consists of K non-zero coefficients in an N-dimensional space, where K≪N. If \(\mathbf{y} \in \mathcal{C}^{M}\) is the observation vector with M≪N, then the CS problem can be formulated as

$$ \mathbf{y} = \boldsymbol{\Phi} \mathbf{x} + \mathbf{z} $$

where \(\boldsymbol{\Phi} \in \mathcal{C}^{M \times N}\) is referred to as the sensing matrix and \(\mathbf{z} \in \mathcal{C}^{M}\) is complex additive white Gaussian noise, \(\mathcal{CN}(\mathbf{0},\sigma_{\mathbf{z}}^{2} \mathbf{I}_{M})\). The theoretical way to reconstruct x is to solve an ℓ0-norm minimization problem when it is known a priori that the signal x is sparse and the measurements are noise free, i.e.,

$$ \min \|\mathbf{x}\|_{0}, ~~~~~ \text{subject to} ~~~~~ \mathbf{y}=\boldsymbol{\Phi} \mathbf{x}. $$

Solving the ℓ0-norm minimization problem is an NP-hard problem and requires an exhaustive search to find the solution. Therefore, a more tractable approach [24] is to minimize the ℓ1-norm with a relaxed constraint, i.e.,

$$ \min \|\mathbf{x}\|_{1}, ~~~~~ \text{subject to} ~~~~~ \|\mathbf{y} - \boldsymbol{\Phi} \mathbf{x}\|_{2} \leq \delta, $$

where \(\delta = \sqrt{\sigma_{\mathbf{z}}^{2} (M+\sqrt{2M})}\). The ℓ1-norm minimization problem reduces to a linear program known as basis pursuit.

The SABMP algorithm [13] is a Bayesian algorithm which provides robust sparse reconstruction. As discussed in [13], Bayesian estimation finds the estimate of x by solving the conditional expectation

$$ \hat{\mathbf{x}} = \mathsf{E}~ \left[\mathbf{x}|\mathbf{y} \right] = \sum\limits_{\mathcal{S}} p (\mathcal{S}|\mathbf{y}) \mathsf{E} \left[\mathbf{x}|\mathbf{y},\mathcal{S} \right] $$

where \(\mathcal{S}\) denotes the support set, which contains the locations of the non-zero entries, and \(p(\mathcal{S}|\mathbf{y})\) is the probability of \(\mathcal{S}\) given y, found by evaluating Bayes' rule. In the SABMP algorithm, the support set \(\mathcal{S}\) is found by a greedy approach. Once the support set \(\mathcal{S}\) is known, the best linear unbiased estimator is found using y to estimate x.
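To make the two-step idea above concrete (find a support greedily, then estimate the non-zero values linearly), here is a minimal Python sketch, assuming numpy; the function name and structure are ours. It is an OMP-style illustration only, not the actual SABMP algorithm, which scores candidate supports by their posterior probability and uses order-recursive matrix updates.

import numpy as np

def greedy_sparse_estimate(y, Phi, k_max, tol=1e-6):
    # Toy greedy recovery: repeatedly pick the column of Phi most
    # correlated with the residual, then re-fit all selected
    # coefficients by least squares (the "estimate given support" step).
    M, N = Phi.shape
    support = []
    residual = y.copy()
    x_hat = np.zeros(N, dtype=complex)
    coef = np.zeros(0, dtype=complex)
    for _ in range(k_max):
        scores = np.abs(Phi.conj().T @ residual)
        scores[support] = 0.0                  # never re-pick a column
        support.append(int(np.argmax(scores)))
        Phi_S = Phi[:, support]
        coef, *_ = np.linalg.lstsq(Phi_S, y, rcond=None)
        residual = y - Phi_S @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coef
    return x_hat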
The SABMP algorithm, like other Bayesian algorithms, utilizes the statistics of the noise and the sparsity rate. It assumes prior Gaussian statistics for the additive noise and a prior sparsity rate. The noise variance and sparsity rate need not be known; rather, the SABMP algorithm estimates them in a robust manner. The statistics of the locations of the non-zero coefficients, or signal support, are assumed either non-Gaussian or unknown. Hence, it is agnostic to the support distribution. SABMP is a low-complexity algorithm as it searches for the solution in a greedy manner. The matrix inversion involved in the calculations is done in an order-recursive manner, which leads to a further reduction in complexity.

Signal model

We focus on a colocated MIMO radar setup as illustrated in Fig. 1. In colocated MIMO radar, the antenna elements in the transmitter and the antenna elements in the receiver are closely spaced. In a monostatic configuration, the transmitter and receiver themselves are also closely spaced, so they see the same aspects of a target. In other words, the distance between the target and the transmitter/receiver is large enough that the distance between transmitter and receiver becomes insignificant. Consider a MIMO radar system of n_T transmit and n_R receive antenna elements. The antenna arrays at the transmitter and receiver are uniform and linear, the inter-element spacing between any two adjacent antennas is half of the transmitted signal wavelength, and there are K possible targets located at angles θ_k ∈ [θ_1, θ_2, …, θ_K]. Let s(n) denote the vector of transmitted symbols, which are uncorrelated quadrature phase shift keying (QPSK) sequences. If z(n) denotes the vector of circularly symmetric white Gaussian noise samples at the n_R receive antennas at time index n, the vector of baseband samples at all n_R receive antennas can be written as [25]

$$ \mathbf{y}(n) = \sum\limits_{k=1}^{K} \beta_{k}(\theta_{k}) \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}} (\theta_{k}) \mathbf{s}(n) + \mathbf{z}(n), $$

Fig. 1: Colocated MIMO radar setup

where (·)^T denotes the transpose, β_k denotes the reflection coefficient of the k-th target at location angle θ_k, while \(\mathbf{a}_{T} (\theta_{k}) = [1, e^{i \pi \sin (\theta_{k})}, \ldots, e^{i \pi (n_{T} - 1) \sin (\theta_{k})}]^{\mathsf{T}}\) and \(\mathbf{a}_{R} (\theta_{k}) = [1, e^{i \pi \sin (\theta_{k})}, \ldots, e^{i \pi (n_{R} - 1) \sin (\theta_{k})}]^{\mathsf{T}}\), respectively, denote the transmit and receive steering vectors. We have assumed z(n) to be uncorrelated noise; a correlated noise model can be found in [26]. We are interested in estimating two parameters: the DOA, represented by θ_k, and the reflection coefficient β_k, which is proportional to the radar cross section (RCS) of the target. It is assumed that the targets are in the same range bins.

CS for target parameter estimation

The CS formulation for target parameter estimation can be done in two different ways. The first is the spatial formulation, in which the samples at all antennas constitute a measurement vector. In the second approach, termed the temporal formulation, all snapshots in time at one antenna represent a measurement vector. These two methods are discussed next.
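First, as a quick numerical illustration of the signal model of this section, the following sketch draws snapshots from it (assuming numpy; steering(), simulate_snapshots(), and all parameter names are our illustrative choices, not from the paper):

import numpy as np

def steering(theta, n):
    # Half-wavelength ULA steering vector, as in a_T and a_R above.
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def simulate_snapshots(thetas, betas, n_t, n_r, L, sigma2, rng):
    # Uncorrelated QPSK waveforms, one row per transmit antenna.
    qpsk = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, (n_t, L)) + 1))
    # Circularly symmetric white Gaussian noise with variance sigma2.
    Z = np.sqrt(sigma2 / 2) * (rng.standard_normal((n_r, L))
                               + 1j * rng.standard_normal((n_r, L)))
    Y = sum(b * np.outer(steering(th, n_r), steering(th, n_t)) @ qpsk
            for th, b in zip(thetas, betas)) + Z
    return Y, qpsk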
Spatial formulation

Suppose each antenna transmits L uncorrelated symbols; then the matrix of all received samples can be written as [18, 27]

$$ \mathbf{Y} = \sum\limits_{k=1}^{K} \beta_{k}(\theta_{k}) \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}} (\theta_{k}) \mathbf{S} + \mathbf{Z}, $$

where

$$ \mathbf{Y} = [\mathbf{y}(0), \mathbf{y}(1), \ldots, \mathbf{y}(L-1)] \in \mathcal{C}^{n_{R} \times L} $$

and

$$ \mathbf{S} = [\mathbf{s}(0), \mathbf{s}(1), \ldots, \mathbf{s}(L-1)] \in \mathcal{C}^{n_{T} \times L} $$

is a matrix of all transmitted symbols from all antennas. For independent transmitted waveforms, the rows of S will be uncorrelated. It should be noted that (6) holds if and only if the targets fall in the same range bins, which is a special case. The model in (6) can be generalized for delay by adding a delay parameter to the transmitted waveform S. If the targets are in different range bins, there will be another parameter of delay, or time of arrival, associated with each target, making the problem more complex.

Since the targets are located at only finitely many discretized locations in the angle range [−π/2, π/2], by dividing the region-of-interest into N grid points \(\{\hat\theta_{1},\hat\theta_{2},\ldots,\hat\theta_{N}\}\) and assuming \(\mathbf{A}_{R} = [\mathbf{a}_{R}(\hat\theta_{1}), \mathbf{a}_{R}(\hat\theta_{2}), \ldots, \mathbf{a}_{R}(\hat\theta_{N})]\), \(\mathbf{A}_{T} = [\mathbf{a}_{T}(\hat\theta_{1}), \mathbf{a}_{T}(\hat\theta_{2}), \ldots, \mathbf{a}_{T}(\hat\theta_{N})]\), and B = diag{β_1, β_2, …, β_N}, we have

$$ \mathbf{Y} = \mathbf{A}_{R} \mathbf{B} \mathbf{A}_{T}^{\mathsf{T}} \mathbf{S} + \mathbf{Z} $$

It should be noted here that a diagonal element of B will be non-zero if and only if a target is present at the corresponding grid location. If N≫K, the columns of the matrix \(\mathbf{B} \mathbf{A}_{T}^{\mathsf{T}} \mathbf{S}\) will be sparse. Therefore, (9) can be written as

$$ [\mathbf{y}(0), \mathbf{y}(1), \ldots, \mathbf{y}(L-1)] = \mathbf{A}_{R} [\tilde{\mathbf{x}}(0), \tilde{\mathbf{x}}(1), \ldots, \tilde{\mathbf{x}}(L-1)] + \mathbf{Z}, $$

where \(\tilde{\mathbf{x}}(l) = \mathbf{B} \mathbf{A}_{T}^{\mathsf{T}} \mathbf{s}(l)\) for l = 0, 1, …, L−1 is a sparse vector. For a single snapshot, we can solve

$$ \mathbf{y}(l) = \mathbf{A}_{R} \tilde{\mathbf{x}}(l) + \mathbf{z}(l) $$

by optimizing the cost function

$$ \min_{\tilde{\mathbf{x}}(l)} \|\tilde{\mathbf{x}} (l) \|_{1} ~~~~~ \text{subject to} ~~~~~ \|\mathbf{y}(l) - \mathbf{A}_{R} \tilde{\mathbf{x}}(l) \|_{2} \leq \eta $$

with A_R as the sensing matrix, using convex optimization tools. The sensing matrix A_R is a structured matrix similar to the Fourier matrix. For guaranteed sparse recovery, there are conditions on the sensing matrix. One such condition is the restricted isometry property (RIP) [28], which says that a matrix Φ satisfies the RIP with constant δ_k if

$$ (1-\delta_{k}) \|\mathbf{x}\|_{2}^{2} \leq \|\boldsymbol{\Phi} \mathbf{x} \|_{2}^{2} \leq (1+\delta_{k}) \|\mathbf{x} \|^{2}_{2} $$

for every vector x with sparsity k. For guaranteed sparse recovery in unbounded noise, δ_{2k} should be less than \(\sqrt{2}-1\). Finding the exact value of δ_k is a combinatorial problem which requires an exhaustive search. For noiseless recovery of sparse vectors, the coherence criterion is more tractable.
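Before discussing coherence, it may help to see the spatial pipeline end to end. A minimal single-snapshot sketch, reusing steering(), simulate_snapshots(), and greedy_sparse_estimate() from the snippets above (all names, and the use of a greedy solver in place of SABMP, are our illustrative choices):

import numpy as np
# Reuses steering(), simulate_snapshots(), greedy_sparse_estimate()
# from the sketches above.

rng = np.random.default_rng(0)
n_t = n_r = 16
grid = np.linspace(-np.pi / 2, np.pi / 2, 512)             # N = 512 angles
A_R = np.column_stack([steering(th, n_r) for th in grid])  # n_R x N dictionary

Y, S = simulate_snapshots([0.2], [1.0 + 0.0j], n_t, n_r, L=1,
                          sigma2=0.01, rng=rng)
x_tilde = greedy_sparse_estimate(Y[:, 0], A_R, k_max=1)    # single target
peak = int(np.argmax(np.abs(x_tilde)))
theta_hat = grid[peak]                                     # DOA estimate
# The recovered entry equals beta * a_T(theta)^T s(0), so the
# reflection coefficient follows by dividing that factor out:
beta_hat = x_tilde[peak] / (steering(theta_hat, n_t) @ S[:, 0])

Since the true angle need not fall exactly on the grid, theta_hat is accurate only up to the grid spacing; this is the off-grid effect revisited in the simulation results below.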
The coherence of a sensing matrix with unit-norm columns is given by

$$ \mu(\boldsymbol{\Phi}) = \max_{i\neq j} |\langle \phi_{i}, \phi_{j} \rangle | $$

where i, j = 1, 2, …, N and ϕ_i is the i-th column of Φ. In general, 0 < μ(Φ) ≤ 1 for any matrix Φ, but for guaranteed sparse recovery μ should be as small as possible, and it must be less than one. The sensing matrix A_R can be used for sparse reconstruction because it satisfies the coherence criterion with μ(A_R) < 1. Convex optimization methods require randomness in the sensing matrix; structure in the sensing matrix deteriorates the performance of convex optimization methods due to high μ(Φ). However, the properties of a structured sensing matrix can be exploited for reduced-complexity sparse reconstruction. It is shown in [12] that for a Toeplitz matrix exhibiting structure and μ(Φ) ≈ 0.9, Bayesian reconstruction is more efficient than convex optimization methods. Furthermore, the matrix A_R has a Vandermonde structure, and sparse recovery with a matrix similar to A_R is also discussed in [17]. Reference [29] analyzed Fourier-based structured matrices for compressed sensing. Group-sparsity algorithms were used in [30] to solve (10) for multiple snapshots; there, the complexity grows with the number of measurement vectors, and handling the sensing matrix becomes difficult due to a Kronecker product involved in the construction of the group sensing matrix.

Since the column vectors \(\tilde{\mathbf{x}}(l)\), for l = 0, 1, …, L−1 in (12) are sparse, CS algorithms with A_R as the sensing matrix can be used to estimate the locations and corresponding values of the non-zero elements in \(\tilde{\mathbf{x}}(l)\). Once they are known, the reflection coefficients and location angles of the targets can easily be found. The formulation developed in (9) can be considered block-sparse and can be solved by SABMP for block-sparse signals [31]. SABMP is a low-complexity algorithm and provides an approximate MMSE estimate of the sparse vector with unknown support distribution. The authors would like to emphasize that SABMP does not require estimates of the sparsity rate and noise variance; rather, it refines initial estimates of these parameters in an iterative fashion. Therefore, we will assume that the noise variance and the number of targets are unknown. Moreover, SABMP is a low-complexity algorithm because it calculates the inverses by order-recursive updates.

The undersampling ratio in the CS setting is defined as the length of the sparse vector divided by the number of measurements, i.e., N/M. As the undersampling ratio increases, the performance of CS algorithms deteriorates (please see [13] and the references therein). The results in [13] show that the best performance of the SABMP algorithm is achieved when the undersampling ratio satisfies 1 < N/M < 7. Since the number of measurements is n_R, it can be deduced for the number of receiving antennas that N/7 < n_R < N. For a given grid size, and to maintain a low undersampling ratio, the spatial formulation is therefore best suited to large arrays.
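As an aside, the coherence μ(Φ) defined above is cheap to evaluate numerically; a minimal sketch (assuming numpy; the function name is ours) that can be applied to A_R, or later to the temporal sensing matrix Ψ:

import numpy as np

def mutual_coherence(Phi):
    # mu(Phi): largest absolute inner product between distinct
    # columns after normalizing every column to unit norm.
    Phi_n = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(Phi_n.conj().T @ Phi_n)     # Gram-matrix magnitudes
    np.fill_diagonal(G, 0.0)               # ignore the i = j terms
    return float(G.max())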
Temporal formulation

For smaller antenna arrays, where n_R ≪ N, the formulation mentioned above can have a very high undersampling ratio, which leads to poor sparse recovery. To overcome this problem, an alternate formulation can be obtained by taking the transpose of (9):

$$ \mathbf{Y}^{\mathsf{T}} = \mathbf{S}^{\mathsf{T}} \mathbf{A}_{T} \mathbf{B} \mathbf{A}_{R}^{\mathsf{T}} + \mathbf{Z}^{\mathsf{T}} $$

Since B is sparse, \(\bar{\mathbf{X}} = \mathbf{B} \mathbf{A}_{R}^{\mathsf{T}}\) consists of sparse column vectors, and the new sensing matrix is

$$ \boldsymbol{\Psi} = \mathbf{S}^{\mathsf{T}} \mathbf{A}_{T} ~~ \in \mathcal{C}^{L \times N}. $$

Similar to the range-bin argument for (6), the model in (15) holds if and only if the targets fall in the same range bins. Moreover, any delay in the waveform S will affect the RIP of Ψ. Although the sensing matrix Ψ exhibits structure, its coherence is less than 1. Here, we assume that the transmitted waveform matrix S is known at the receiver and that A_T can be reconstructed at the receiver in the absence of any calibration error. Therefore, the second formulation for CS becomes

$$ \bar{\mathbf{Y}} = \boldsymbol{\Psi} \bar{\mathbf{X}} + \bar{\mathbf{Z}}, $$

where \(\bar{\mathbf{Y}} = \mathbf{Y}^{\mathsf{T}}\) and \(\bar{\mathbf{Z}} = \mathbf{Z}^{\mathsf{T}}\). As long as μ(Ψ) < 1, the solution obtained for \(\bar{\mathbf{X}}\) is the sparsest solution. More specifically, if any column vector \(\bar{\mathbf{x}}\) of \(\bar{\mathbf{X}}\) satisfies the inequality

$$ \|\bar{\mathbf{x}}\|_{0} < \frac{1}{2} \left(1+{\mu(\boldsymbol{\Psi})}^{-1} \right) $$

then ℓ1-minimization recovers \(\bar{\mathbf{x}}\) [32, 33]. With this new formulation, the undersampling ratio becomes N/L. Using a similar argument for the undersampling ratio as in the spatial formulation, it can be shown that N/7 < L < N, because the number of measurements is now L. Since the undersampling ratio is determined by the number of snapshots for a given grid size, this formulation is more suitable for small arrays. It also has the additional advantage that the number of grid points N can be increased for finer resolution while keeping the undersampling ratio N/L low, by increasing the number of snapshots L at the same time.
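In code, the temporal sensing matrix and a column-wise recovery can be sketched in the same style (again reusing our illustrative helpers; the parameter values are arbitrary choices):

import numpy as np
# Reuses steering(), simulate_snapshots(), greedy_sparse_estimate()
# and mutual_coherence() from the sketches above.

rng = np.random.default_rng(1)
n_t, n_r, L = 10, 10, 256
grid = np.linspace(-np.pi / 2, np.pi / 2, 512)
A_T = np.column_stack([steering(th, n_t) for th in grid])

Y, S = simulate_snapshots([-0.4], [1.0j], n_t, n_r, L, 0.01, rng)
Psi = S.T @ A_T                   # L x N temporal sensing matrix
mu_psi = mutual_coherence(Psi)    # should be < 1, per the text above
Y_bar = Y.T                       # columns of X_bar are sparse
x_bar0 = greedy_sparse_estimate(Y_bar[:, 0], Psi, k_max=1)
# The first entry of a_R is 1, so the non-zero entry of column 0 of
# X_bar is (up to noise) the reflection coefficient itself.

Here the undersampling ratio is N/L = 512/256 = 2, comfortably inside the range quoted above even though the array is small.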
Cramér–Rao lower bound

In the following subsections, we discuss the CRLB for two cases, i.e., for known θ_k and for unknown θ_k. Although both θ_k and β_k are unknown, we need to differentiate between the two CRLB cases based on whether the target lies on-grid or off-grid. For the CRLB, the error model has to be consistent; to keep it consistent, we use the CRLB for known θ_k when the target is on-grid and the CRLB for unknown θ_k when the target is off-grid.

CRLB for known θ_k

Let us define:

$$ \boldsymbol{\eta} = \left[\begin{array}{cc} \Re(\beta_{k}) & \Im (\beta_{k}) \end{array} \right] $$

The Fisher information matrix (FIM) for the unknown parameters is given by the Slepian–Bangs formula, assuming that the noise samples are uncorrelated:

$$ \mathbf{F} (\boldsymbol{\eta}) = \frac{2}{\sigma_{\mathbf{z}}^{2}} \Re \left[\sum_{n=0}^{N-1} \left(\frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \boldsymbol{\eta}} \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\eta}^{\mathsf{T}}} \right) \right] $$

where

$$ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \boldsymbol{\eta}} = \left[ \begin{array}{c} \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Re(\beta_{k})} \\ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Im(\beta_{k})} \end{array} \right]_{2 \times n_{R}}, $$

$$ \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\eta}^{\mathsf{T}}} = \left[ \begin{array}{cc} \frac{\partial \mathbf{u}(n)}{\partial \Re(\beta_{k})} & \frac{\partial \mathbf{u}(n)}{\partial \Im(\beta_{k})} \end{array} \right]_{n_{R} \times 2} $$

and

$$ \mathbf{u}(n) = \beta_{k}(\theta_{k}) \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}} (\theta_{k}) \mathbf{s}(n) $$

The two partial derivatives in (22) are found to be:

$$ \frac{\partial \mathbf{u}(n)}{\partial \Re(\beta_{k})} = \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{s}(n) $$

$$ \frac{\partial \mathbf{u}(n)}{\partial \Im(\beta_{k})} = j \mathbf{a}_{R}(\theta_{k}) \mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{s}(n) $$

The other two partial derivatives in (21) can be found by using the identity \(\partial \mathbf{x}^{\mathsf{H}} = (\partial \mathbf{x})^{\mathsf{H}}\). Thus, (20) can be evaluated using (24) and (25). The CRLB is found by inverting F(η).

CRLB for unknown θ_k

Next, we derive the CRLB for unknown θ_k. Let us define:

$$ \boldsymbol{\alpha} = \left[\begin{array}{ccc} \Re(\beta_{k}) & \Im (\beta_{k}) & \theta_{k} \end{array} \right] $$

The Fisher information matrix for the unknown parameters is again given by the Slepian–Bangs formula, assuming that the noise samples are uncorrelated:

$$ \mathbf{F} (\boldsymbol{\alpha}) = \frac{2}{\sigma_{\mathbf{z}}^{2}} \Re \left[\sum_{n=0}^{N-1} \left(\frac{\partial \mathbf{u}^{\mathsf H}(n)}{\partial \boldsymbol{\alpha}} \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\alpha}^{\mathsf{T}}} \right) \right] $$

where

$$ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \boldsymbol{\alpha}} = \left[ \begin{array}{c} \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Re(\beta_{k})} \\ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \Im(\beta_{k})} \\ \frac{\partial \mathbf{u}^{\mathsf{H}}(n)}{\partial \theta_{k}} \end{array} \right]_{3 \times n_{R}} $$

and

$$ \frac{\partial \mathbf{u}(n)}{\partial \boldsymbol{\alpha}^{\mathsf{T}}} = \left[ \begin{array}{ccc} \frac{\partial \mathbf{u}(n)}{\partial \Re(\beta_{k})} & \frac{\partial \mathbf{u}(n)}{\partial \Im(\beta_{k})} & \frac{\partial \mathbf{u}(n)}{\partial \theta_{k}} \end{array} \right]_{n_{R} \times 3} $$

The partial derivatives with respect to ℜ(β_k) and ℑ(β_k) are given in (24) and (25), respectively. The third partial derivative, with respect to θ_k, is

$$ \frac{\partial \mathbf{u}(n)}{\partial \theta_{k}} = \beta_{k} \left(j\pi\cos(\theta_{k})\right) \left(\mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{A}_{T} \mathbf{s}(n) \mathbf{a}_{R}(\theta_{k}) + \mathbf{a}_{T}^{\mathsf{T}}(\theta_{k}) \mathbf{s}(n) \mathbf{A}_{T} \mathbf{a}_{R}(\theta_{k}) \right) $$

where

$$ \mathbf{A}_{T} = {\mathsf{diag}}\{0, 1, \ldots, n_{T} - 1 \} $$

The FIM can thus be found from Eq. (30) along with (24) and (25), and the inversion of F(α) leads to the CRLB.

Simulation results

We present here some simulation results to validate the methods discussed in this work. We assume a single target located at θ_k. The parameters to be estimated are the reflection coefficient β_k and the DOA of the target θ_k. To assess the performance of the algorithms, the unknown parameters are generated randomly according to \(\theta_{k} \sim \mathcal{U} (-60^{\circ},60^{\circ})\) and \(\beta_{k} = e^{j \varphi_{k}}\) of unit amplitude, where \(\varphi_{k} \sim \mathcal{U} (0,1)\). The grid is uniformly discretized between −90° and +90° with N grid points. The number of grid points N is 512 in all the simulations. All algorithms are run for 10^4 iterations. The noise is assumed to be uncorrelated Gaussian with zero mean and variance σ². The algorithms included for comparison are the Capon, APES, and CoSaMP algorithms. In the simulation results, SABMP refers to the SABMP variant for block-sparse signals; similarly, for the CoSaMP algorithm, its block-CoSaMP version [34] is used.

CS spatial formulation

We first discuss the simulation results for the spatial formulation. Figures 2 and 3 show the mean square error (MSE) performance for β_k and θ_k, respectively. The numbers of antenna elements n_T and n_R are both 16, and the number of snapshots L is 20. This is the case where L > n_R. Both the APES and Capon algorithms require L > n_R to evaluate the correlation of the received signal. The estimation performance of β_k for Capon reaches an error floor because Capon estimates are always biased [1]. The APES algorithm shows the best estimation of β_k for SNRs greater than −8 dB. Neither SABMP nor CoSaMP performs well, due to the high undersampling ratio, but SABMP has better performance than CoSaMP for β_k estimation. For θ_k estimation, the results in Fig. 3 show that the Capon algorithm has the best performance at SNRs greater than 3 dB. In the Capon algorithm, at high SNR the covariance matrix of the received signals becomes close to singular, causing poor estimation of θ_k; that is why the results are not plotted beyond 22 dB. Nevertheless, the results available in Fig. 3 serve the purpose of comparison. SABMP performs worse in this scenario because it requires more measurements for better sparse recovery. All four algorithms reach an error floor because the grid is finite; in [35], this phenomenon is referred to as the off-grid effect.

Fig. 2: MSE performance for β_k estimation. Simulation parameters: L = 20, n_T = 16, n_R = 16, N = 512, θ_k ∼ U(−60°, 60°) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1)

Fig. 3: MSE performance for θ_k estimation. Simulation parameters: L = 20, n_T = 16, n_R = 16, N = 512, θ_k ∼ U(−60°, 60°) but falling off-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1)

In Figs. 4 and 5, we discuss the case when L < n_R. To simulate this case, we choose n_T and n_R equal to 128 and keep L at only 10.
In this case, both Capon and APES fail to recover the estimates due to the rank deficiency of the received-signal covariance matrix. However, the CoSaMP and SABMP algorithms still work for both β_k and θ_k estimation. For β_k estimation, SABMP gives better estimates than CoSaMP up to an SNR of 22 dB; at high SNR, both algorithms have almost the same performance for β_k estimation. Neither CoSaMP nor SABMP is able to achieve the CRLB, due to the high undersampling ratio. The results in Fig. 5 show that the SABMP algorithm has slightly better performance than the CoSaMP algorithm for θ_k estimation.

Fig. 4: MSE performance for β_k estimation. Simulation parameters: L = 10, n_T = 128, n_R = 128, N = 512, θ_k ∼ U(−60°, 60°) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1). No recovery for the Capon and APES methods

Fig. 5: MSE performance for θ_k estimation. Simulation parameters: L = 10, n_T = 128, n_R = 128, N = 512, θ_k ∼ U(−60°, 60°) but falling off-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1). No recovery for the Capon and APES methods

We show the complexity comparison in Fig. 6, which plots processing time against n_R. For all values of n_R, the number of snapshots L is 10 for CS. For the Capon and APES algorithms, keeping L = 10 would not allow recovery of the unknown parameters; however, the comparison remains fair if we take L at least equal to n_R, because the computational burden lies in the inversion of the covariance matrix. It can be seen that as n_R increases, the processing time for the Capon and APES algorithms increases significantly. The covariance matrix is of size n_R × n_R, so its size grows with n_R, and both Capon and APES need to invert the covariance matrix obtained from the received samples, which increases the processing time as n_R grows. For SABMP, the computation depends mainly on L in the spatial formulation and less on n_R; that is why the SABMP complexity does not change drastically with n_R. From Fig. 6, we note that for n_R greater than or equal to 32, the complexity of the SABMP algorithm is lower than that of APES but higher than that of Capon. The CoSaMP algorithm has lower complexity than SABMP, but its complexity increases significantly with n_R because it depends on both the number of measurements n_R and the number of blocks L. Since CoSaMP has lower complexity, a trade-off between performance and complexity exists between SABMP and CoSaMP in the spatial formulation.

Fig. 6: Complexity comparison. Simulation parameters: n_T = n_R, SNR = 20 dB, θ_k ∼ U(−60°, 60°) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1)

CS temporal formulation

In this subsection, we present simulation results for the temporal formulation as an alternative to the spatial one. First, we compare resolution. Figure 7 shows a comparison of the resolution of the three algorithms. APES has coarser resolution than both the Capon and SABMP algorithms. Capon has finer resolution, but its amplitude is biased downwards. The SABMP algorithm gives the best resolution because on-grid CS algorithms are based on the recovery of non-zero entries.
That is why the SABMP algorithm produces a single sample at the target location. A similar behavior can be anticipated for the CoSaMP algorithm because it is also an on-grid CS algorithm.

Fig. 7: Resolution comparison. Simulation parameters: L = 256, n_T = 10, n_R = 10, SNR = 0 dB (left), SNR = 25 dB (right)

The MSE of the β_k and θ_k estimates is shown in Figs. 8 and 9, respectively. The number of snapshots L = 256 and the array size is kept small, i.e., n_T = 10 and n_R = 10. We plot the MSE obtained by the existing Capon, APES, and CoSaMP algorithms along with SABMP for comparison; the CRLB is also plotted. In Fig. 8, we assume that the target lies on the grid to plot the MSE of β_k and compare it with the CRLB for known θ_k; otherwise, we would need infinitely many grid points to compare the performance of the algorithms with the CRLB. The simulation results show that SABMP performs better than all three of the Capon, APES, and CoSaMP algorithms in estimating β_k at high SNR. This better performance of SABMP is due to its Bayesian approach and its robustness to noise. Moreover, the coherence of the sensing matrix is less than 1, which guarantees sparse recovery at low noise. In Fig. 9, we simulate the algorithms by generating θ_k randomly, not necessarily on the grid. For this reason, the MSE of θ_k reaches an error floor, which is due to the discretized grid and depends on the spacing between two consecutive grid points. For θ_k estimation, SABMP performs better than the APES algorithm above 10 dB but worse than the Capon algorithm. The CoSaMP algorithm has the worst performance because it does not work well with structured sensing matrices.

Fig. 8: MSE performance for β_k estimation. Simulation parameters: L = 256, n_T = 10, n_R = 10, N = 512, θ_k ∼ U(−60°, 60°) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1)

Fig. 9: MSE performance for θ_k estimation. Simulation parameters: L = 256, n_T = 10, n_R = 10, N = 512, θ_k ∼ U(−60°, 60°) but falling off-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1)

The above simulation results are obtained for L > n_R. Now, we discuss the case when L < n_R and the number of snapshots is low. In the simulation results shown in Figs. 10 and 11, the number of snapshots L is only 8. In this case, there is no recovery by either the Capon or the APES method, due to the rank deficiency of the covariance matrix, but both CS algorithms still work in this scenario. SABMP performs better than the CoSaMP algorithm for both β_k and θ_k estimation. SABMP cannot achieve the CRLB because of the very low number of measurements in this case.

Fig. 10: MSE performance for β_k estimation. Simulation parameters: L = 8, n_T = 10, n_R = 10, N = 512, θ_k ∼ U(−60°, 60°) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1). No recovery for the Capon and APES methods

Fig. 11: MSE performance for θ_k estimation. Simulation parameters: L = 8, n_T = 10, n_R = 10, N = 512, θ_k ∼ U(−60°, 60°) but falling off-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1). No recovery for the Capon and APES methods

Next, we compare the performance of the algorithms at two different target locations.
We choose one location at 5° and a second location at 70°. The simulation results in Figs. 12 and 13 show the estimation performance for β_k and θ_k, respectively. The performance of all algorithms is degraded for the θ_k = 70° case because it falls in the low-power region. For β_k estimation, the results show that for θ_k = 5°, the APES and SABMP algorithms achieve the bound at a lower SNR than for θ_k = 70°.

Fig. 12: MSE performance for β_k estimation. Simulation parameters: L = 256, n_T = 10, n_R = 10, N = 512, θ_k = 5° (solid lines) and θ_k = 70° (dashed lines) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1) and is the same for all iterations

Fig. 13: MSE performance for θ_k estimation. Simulation parameters: L = 256, n_T = 10, n_R = 10, N = 512, θ_k = 5° (solid lines) and θ_k = 70° (dashed lines) but falling off-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1) and is the same for all iterations

We also compare the complexity of the discussed algorithms. Figure 14 plots the processing time against the number of grid points N. The results show that the SABMP algorithm has higher complexity than the Capon and APES algorithms but lower complexity than the CoSaMP algorithm. The CoSaMP algorithm has the highest complexity due to a Kronecker product involved in the construction of its sensing matrix. The complexity of SABMP depends on the number of multiple measurement vectors, which in this case equals the number of receive antennas. Therefore, there exists a trade-off between performance and complexity among the Capon, APES, CoSaMP, and SABMP algorithms.

Fig. 14: Complexity comparison. Simulation parameters: L = 256, n_T = 10, n_R = 10, SNR = 20 dB, θ_k ∼ U(−60°, 60°) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1)

Lastly, we show a comparison of receiver operating characteristic (ROC) curves. At high SNR, the probability of detection for all algorithms is almost 1 for all probabilities of false alarm; therefore, the MSE criterion is better for comparing the performance of different algorithms at high SNRs. We thus choose a small SNR value of −12 dB to plot the ROCs for all four algorithms. Figure 15 shows the ROC comparison of the four algorithms discussed. The probability of detection is close to one for both the Capon and APES algorithms over a wide range of probabilities of false alarm. The SABMP algorithm performs slightly worse than both the Capon and APES algorithms because we have chosen a low SNR value of −12 dB, whereas the performance gains of SABMP usually appear at high SNRs. The CoSaMP algorithm performs slightly better than the SABMP algorithm at low probabilities of false alarm, but its performance deteriorates afterwards.

Fig. 15: ROC comparison. Simulation parameters: n_T = 10, n_R = 10, SNR = −12 dB, θ_k ∼ U(−60°, 60°) but on-grid, β_k = e^{jφ_k} where φ_k ∼ U(0, 1). (Markers are added in this plot only for the purpose of identifying the different curves)

Conclusions

In this work, the authors solved the MIMO radar parameter estimation problem by two methods, the spatial method for large arrays and the temporal method for small arrays, using a fast and robust CS algorithm.
It is shown that SABMP provides the best parameter estimates at high SNR even when the number of targets and the noise variance are unknown.

References

[1] J Li, P Stoica, MIMO Radar Signal Processing (John Wiley & Sons, New Jersey, 2009).
[2] JA Scheer, WA Holm, Principles of Modern Radar: Advanced Techniques (SciTech Publishing, Edison, NJ, USA, 2013).
[3] DL Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
[4] EJ Candes, PA Randall, Highly robust error correction by convex programming. IEEE Trans. Inf. Theory 54(7), 2829–2840 (2008).
[5] YC Pati, R Rezaiifar, PS Krishnaprasad, in Proc. 27th Asilomar Conf. Signals, Syst. Comput. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition (IEEE, 1993), pp. 40–44.
[6] D Needell, R Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found. Comput. Math. 9(3), 317–334 (2008).
[7] DL Donoho, Y Tsaig, I Drori, J-L Starck, Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 58(2), 1094–1121 (2012).
[8] D Needell, JA Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009).
[9] ME Tipping, Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 1, 211–244 (2001).
[10] S Ji, Y Xue, L Carin, Bayesian compressive sensing. IEEE Trans. Signal Process. 56(6), 2346–2356 (2008).
[11] P Schniter, LC Potter, J Ziniel, in 2008 Inf. Theory Appl. Work. Fast Bayesian matching pursuit (IEEE, 2008), pp. 326–333.
[12] AA Quadeer, TY Al-Naffouri, Structure-based Bayesian sparse reconstruction. IEEE Trans. Signal Process. 60(12), 6354–6367 (2012).
[13] M Masood, TY Al-Naffouri, Sparse reconstruction using distribution agnostic Bayesian matching pursuit. IEEE Trans. Signal Process. 61(21), 5298–5309 (2013).
[14] JHG Ender, On compressive sensing applied to radar. Signal Process. 90(5), 1402–1414 (2010).
[15] Y Yu, AP Petropulu, HV Poor, MIMO radar using compressive sampling. IEEE J. Sel. Top. Signal Process. 4(1), 146–163 (2010).
[16] P Stoica, P Babu, J Li, SPICE: a sparse covariance-based estimation method for array processing. IEEE Trans. Signal Process. 59(2), 629–638 (2011).
[17] M Rossi, AM Haimovich, YC Eldar, Spatial compressive sensing for MIMO radar. IEEE Trans. Signal Process. 62(2), 419–430 (2014).
[18] Y Yu, S Sun, RN Madan, A Petropulu, Power allocation and waveform design for the compressive sensing based MIMO radar. IEEE Trans. Aerosp. Electron. Syst. 50(2), 898–909 (2014).
[19] Z Yang, L Xie, C Zhang, Off-grid direction of arrival estimation using sparse Bayesian inference. IEEE Trans. Signal Process. 61(1), 38–43 (2013).
[20] T Huang, Y Liu, H Meng, X Wang, Adaptive matching pursuit with constrained total least squares. EURASIP J. Adv. Signal Process. 2012(1), 252 (2012).
[21] S Jardak, S Ahmed, M-S Alouini, in 2015 Sens. Signal Process. Def. Low complexity parameter estimation for off-the-grid targets (IEEE, 2015).
[22] S Jardak, S Ahmed, M-S Alouini, in 2014 Int. Radar Conf. Low complexity joint estimation of reflection coefficient, spatial location, and Doppler shift for MIMO-radar by exploiting 2D-FFT (IEEE, 2014).
[23] KV Mishra, M Cho, A Kruger, W Xu, Spectral super-resolution with prior knowledge. IEEE Trans. Signal Process. 63(20), 5342–5357 (2015).
[24] EJ Candès, JK Romberg, T Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006).
[25] J Li, P Stoica, MIMO radar with colocated antennas. IEEE Signal Process. Mag. 24(5), 106–114 (2007).
[26] H Jiang, J-K Zhang, KM Wong, Joint DOD and DOA estimation for bistatic MIMO radar in unknown correlated noise. IEEE Trans. Veh. Technol. 64(11), 5113–5125 (2015).
[27] P Stoica, Target detection and parameter estimation for MIMO radar systems. IEEE Trans. Aerosp. Electron. Syst. 44(3), 927–939 (2008).
[28] EJ Candes, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005).
[29] N Yu, Y Li, Deterministic construction of Fourier-based compressed sensing matrices using an almost difference set. EURASIP J. Adv. Signal Process. 2013(1), 155 (2013).
[30] H Ali, S Ahmed, TY Al-Naffouri, M-S Alouini, in Int. Radar Conf. Reduction of snapshots for MIMO radar detection by block/group orthogonal matching pursuit (IEEE, 2014).
[31] M Masood, TY Al-Naffouri, in IEEE Int. Conf. Acoust. Speech Signal Process. Support agnostic Bayesian matching pursuit for block sparse signals (IEEE, 2013), pp. 4643–4647.
[32] DL Donoho, M Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. 100(5), 2197–2202 (2003).
[33] R Gribonval, M Nielsen, Sparse representations in unions of bases. IEEE Trans. Inf. Theory 49(12), 3320–3325 (2003).
[34] RG Baraniuk, V Cevher, MF Duarte, C Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982–2001 (2010).
[35] S Fortunati, R Grasso, F Gini, MS Greco, K LePage, Single-snapshot DOA estimation by using compressed sensing. EURASIP J. Adv. Signal Process. 2014(1), 120 (2014).

Acknowledgment

This research was funded by a grant from the Office of Competitive Research Funding (OCRF) at the King Abdullah University of Science and Technology (KAUST). The work was also supported by the Deanship of Scientific Research (DSR) at King Fahd University of Petroleum and Minerals (KFUPM), Dhahran, Saudi Arabia, through project number KAUST-002. The authors acknowledge the Information Technology Center at King Fahd University of Petroleum and Minerals (KFUPM) for providing high performance computing resources that have contributed to the research results reported within this paper.

Authors' contributions

HA, SA, and TYA contributed to the formulation of the problem. HA and SA carried out the simulations. MSS and MSA commented on and criticized the work to improve the manuscript. All authors read and approved the final manuscript.

Author information

Electrical Engineering Department, KFUPM, Dhahran, Saudi Arabia: Hussain Ali & Mohammad S. Sharawi. Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division, KAUST, Thuwal, Saudi Arabia: Sajid Ahmed, Tareq Y. Al-Naffouri & Mohamed-S Alouini. Correspondence to Hussain Ali.

Ali, H., Ahmed, S., Al-Naffouri, T.Y. et al. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing. EURASIP J. Adv. Signal Process. 2017, 6 (2017). DOI: https://doi.org/10.1186/s13634-016-0436-x

Keywords: compressive sensing; MIMO radar; colocated
Boy's surface

In geometry, Boy's surface is an immersion of the real projective plane in 3-dimensional space found by Werner Boy in 1901. He discovered it on assignment from David Hilbert to prove that the projective plane could not be immersed in 3-space. Boy's surface was first parametrized explicitly by Bernard Morin in 1978.[1] Another parametrization was discovered by Rob Kusner and Robert Bryant.[2] Boy's surface is one of the two possible immersions of the real projective plane which have only a single triple point.[3]

Unlike the Roman surface and the cross-cap, it has no singularities other than self-intersections (that is, it has no pinch-points).

Parametrization

Boy's surface can be parametrized in several ways. One parametrization, discovered by Rob Kusner and Robert Bryant,[4] is the following: given a complex number w whose magnitude is less than or equal to one ($\|w\|\leq 1$), let

${\begin{aligned}g_{1}&=-{3 \over 2}\operatorname {Im} \left[{w\left(1-w^{4}\right) \over w^{6}+{\sqrt {5}}w^{3}-1}\right]\\[4pt]g_{2}&=-{3 \over 2}\operatorname {Re} \left[{w\left(1+w^{4}\right) \over w^{6}+{\sqrt {5}}w^{3}-1}\right]\\[4pt]g_{3}&=\operatorname {Im} \left[{1+w^{6} \over w^{6}+{\sqrt {5}}w^{3}-1}\right]-{1 \over 2}\\\end{aligned}}$

and then set

${\begin{pmatrix}x\\y\\z\end{pmatrix}}={\frac {1}{g_{1}^{2}+g_{2}^{2}+g_{3}^{2}}}{\begin{pmatrix}g_{1}\\g_{2}\\g_{3}\end{pmatrix}}$

We then obtain the Cartesian coordinates x, y, and z of a point on the Boy's surface.

If one performs an inversion of this parametrization centered on the triple point, one obtains a complete minimal surface with three ends (that's how this parametrization was discovered naturally). This implies that the Bryant–Kusner parametrization of Boy's surfaces is "optimal" in the sense that it is the "least bent" immersion of a projective plane into three-space.

Property of Bryant–Kusner parametrization

If w is replaced by the negative reciprocal of its complex conjugate, $ -{1 \over w^{\star }},$ then the functions g1, g2, and g3 of w are left unchanged.

By replacing w in terms of its real and imaginary parts w = s + it, and expanding the resulting parameterization, one may obtain a parameterization of Boy's surface in terms of rational functions of s and t. This shows that Boy's surface is not only an algebraic surface, but even a rational surface. The remark of the preceding paragraph shows that the generic fiber of this parameterization consists of two points (that is, almost every point of Boy's surface may be obtained from two parameter values).

Relation to the real projective plane

Let $P(w)=(x(w),y(w),z(w))$ be the Bryant–Kusner parametrization of Boy's surface. Then

$P(w)=P\left(-{1 \over w^{\star }}\right).$

This explains the condition $\left\|w\right\|\leq 1$ on the parameter: if $\left\|w\right\|<1,$ then $ \left\|-{1 \over w^{\star }}\right\|>1.$ However, things are slightly more complicated for $\left\|w\right\|=1.$ In this case, one has $ -{1 \over w^{\star }}=-w.$ This means that, if $\left\|w\right\|=1,$ the point of the Boy's surface is obtained from two parameter values: $P(w)=P(-w).$ In other words, the Boy's surface has been parametrized by a disk such that pairs of diametrically opposite points on the perimeter of the disk are equivalent. This shows that the Boy's surface is the image of the real projective plane RP2 under a smooth map.
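The Kusner–Bryant formulas above translate directly into code. The following short Python sketch (assuming numpy; the function name is ours) evaluates a surface point from a parameter w in the closed unit disk:

import numpy as np

def boy_surface(w):
    # Bryant-Kusner parametrization: maps complex w, |w| <= 1,
    # to Cartesian coordinates (x, y, z) via g1, g2, g3 above.
    d = w**6 + np.sqrt(5) * w**3 - 1          # common denominator
    g1 = -1.5 * np.imag(w * (1 - w**4) / d)
    g2 = -1.5 * np.real(w * (1 + w**4) / d)
    g3 = np.imag((1 + w**6) / d) - 0.5
    g = g1**2 + g2**2 + g3**2
    return g1 / g, g2 / g, g3 / g

Sampling w = r e^{iφ} over 0 ≤ r ≤ 1 and 0 ≤ φ < 2π yields a mesh of the whole surface; by the identification P(w) = P(−w) on the boundary circle discussed above, antipodal boundary parameters land on the same point.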
That is, the parametrization of the Boy's surface is an immersion of the real projective plane into Euclidean space.

Symmetries

Boy's surface has 3-fold symmetry. This means that it has an axis of discrete rotational symmetry: any 120° turn about this axis will leave the surface looking exactly the same. The Boy's surface can be cut into three mutually congruent pieces.

Applications

Boy's surface can be used in sphere eversion, as a half-way model. A half-way model is an immersion of the sphere with the property that a rotation interchanges inside and outside, and so can be employed to evert (turn inside-out) a sphere. Boy's (the case p = 3) and Morin's (the case p = 2) surfaces begin a sequence of half-way models with higher symmetry first proposed by George Francis, indexed by the even integers 2p (for p odd, these immersions can be factored through a projective plane). Kusner's parametrization yields all these.

Model at Oberwolfach

The Mathematical Research Institute of Oberwolfach has a large model of a Boy's surface outside the entrance, constructed and donated by Mercedes-Benz in January 1991. This model has 3-fold rotational symmetry and minimizes the Willmore energy of the surface. It consists of steel strips which represent the image of a polar coordinate grid under a parameterization given by Robert Bryant and Rob Kusner. The meridians (rays) become ordinary Möbius strips, i.e. twisted by 180 degrees. All but one of the strips corresponding to circles of latitude (radial circles around the origin) are untwisted, while the one corresponding to the boundary of the unit circle is a Möbius strip twisted by three times 180 degrees, as is the emblem of the institute (Mathematisches Forschungsinstitut Oberwolfach 2011).

Model made for Clifford Stoll

A model was made in glass by glassblower Lucas Clarke, with the cooperation of Adam Savage, for presentation to Clifford Stoll. It was featured on Adam Savage's YouTube channel, Tested. All three appeared in the video discussing it.[5]

References

Citations

1. Morin, Bernard (13 November 1978). "Équations du retournement de la sphère" [Equations of the eversion of the sphere] (PDF). Comptes Rendus de l'Académie des Sciences. Série A (in French). 287: 879–882.
2. Kusner, Rob (1987). "Conformal geometry and complete minimal surfaces" (PDF). Bulletin of the American Mathematical Society. New Series. 17 (2): 291–295. doi:10.1090/S0273-0979-1987-15564-9.
3. Goodman, Sue; Marek Kossowski (2009). "Immersions of the projective plane with one triple point". Differential Geometry and Its Applications. 27 (4): 527–542. doi:10.1016/j.difgeo.2009.01.011. ISSN 0926-2245.
4. Raymond O'Neil Wells (1988). "Surfaces in conformal geometry (Robert Bryant)". The Mathematical Heritage of Hermann Weyl (May 12–16, 1987, Duke University, Durham, North Carolina). Proc. Sympos. Pure Math. Vol. 48. American Mathematical Soc. pp. 227–240. doi:10.1090/pspum/048/974338. ISBN 978-0-8218-1482-6.
5. Savage, Adam. "This Object Should've Been Impossible to Make". YouTube. Retrieved 22 June 2023.

Sources

• Kirby, Rob (November 2007), "What is Boy's surface?" (PDF), Notices of the AMS, 54 (10): 1306–1307. This describes a piecewise linear model of Boy's surface.
• Casselman, Bill (November 2007), "Collapsing Boy's Umbrellas" (PDF), Notices of the AMS, 54 (10): 1356. Article on the cover illustration that accompanies the Rob Kirby article.
• Mathematisches Forschungsinstitut Oberwolfach (2011), The Boy surface at Oberwolfach (PDF).
• Sanderson, B.
Boy's will be Boy's (undated, 2006 or earlier).
• Weisstein, Eric W. "Boy's Surface". MathWorld.

External links

• Boy's surface at MathCurve; contains various visualizations, various equations, useful links and references
• A planar unfolding of the Boy's surface – applet from Plus Magazine.
• Boy's surface resources, including the original article, and an embedding of a topologist in the Oberwolfach Boy's surface.
• A LEGO Boy's surface
• A paper model of Boy's surface – pattern and instructions
• Java-based model that can be freely rotated
• A model of Boy's surface in Constructive Solid Geometry together with assembling instructions
• Boy's surface visualization video from the Mathematical Institute of the Serbian Academy of the Arts and Sciences
• This Object Should've Been Impossible to Make – Adam Savage making a museum stand for a glass model of the surface
\begin{document}

\section{Introduction}

Watkins' and Dayan's Q-learning is a model-free reinforcement learning algorithm that iteratively refines an estimate for the optimal action-value function of an MDP by stochastically ``visiting'' many state-action pairs \citep{watkins1992q}. Variants of the algorithm lie at the heart of numerous recent state-of-the-art achievements in reinforcement learning, including the superhuman Atari-playing deep Q-network \citep{mnih2015human}. The goal of this paper is to reproduce a precise and (nearly) self-contained proof that Q-learning converges. Much of the available literature leverages powerful theory to obtain highly generalizable results in this vein. However, this approach requires the reader to be familiar with and make many deep connections to different research areas. A student seeking to deepen their understanding of Q-learning risks becoming caught in a vicious cycle of ``RL-learning Hell''. For this reason, we give a complete proof from start to finish using only one external result from the field of stochastic approximation, despite the fact that this minimal dependence on other results comes at the expense of some ``shininess''.

\section{Related Works}

The first proof that Q-learning converges with probability $1$ is outlined in \citep{watkins1989learning} and given more fully in \citep{watkins1992q}. The proof of \citep{tsitsiklis1994asynchronous} applies the theory of stochastic approximation to allow a far more general asynchronous structure. \citep{even2003learning} builds upon this work to derive more precise rates of convergence. Another approach by \citep{borkar2000ode} leverages the Lyapunov theory of ordinary differential equations to analyze a swath of stochastic approximation algorithms. Lastly, \citep{szepesvari1996generalized} analyzes Q-learning in the setting of generalized MDPs and focuses on the contractivity properties of dynamic programming operators.

\section{Background}

We make frequent use of standard measure theoretic and linear analytic notation and thus invite the reader to read Section \ref{sec:notation} upon encountering any unfamiliar symbols or terms.

\subsection{Markov Decision Processes} \label{subsec:mdps}

A typical formalization of the environment in reinforcement learning---and the one we study here---is the Markov decision process (MDP). A reader familiar with the fundamentals of reinforcement learning may skip this subsection without issue.

\begin{definition} A countable (finite) discounted MDP is a tuple $\langle\mathcal{S}, \mathcal{A}, P, r, \gamma\rangle$ where $\mathcal{S}$ and $\mathcal{A}$ are countable (finite) sets of ``states'' and ``actions'' respectively, $P : \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is a ``transition kernel'', $r \in \ell^\infty(\mathcal{S} \times \mathcal{A})$ represents ``rewards'', and $\gamma \in [0, 1)$ is a ``discount rate''. \end{definition}

In order to design agents that make ``good'' decisions when interacting with an MDP, we would like to somehow measure the value of making certain decisions in certain states. A convenient approach to measuring value relies on the fixed point theory of so-called ``dynamic programming'' operators. The following class of operators will serve our purposes nicely.
\begin{definition} The ``Bellman optimality operator'' of an MDP $M = \langle\mathcal{S}, \mathcal{A}, P, r, \gamma\rangle$ is \begin{align*} T^*_M : \ell^\infty(\mathcal{S} \times \mathcal{A}) \to \ell^\infty(\mathcal{S} \times \mathcal{A}), q \mapsto (s, a) \mapsto r(s, a) + \gamma \sum_{s' \in \mathcal{S}} P(s' | s, a)\sup_{a' \in \mathcal{A}} q(s', a'). \end{align*} \end{definition}

Incidentally, exact or even approximate knowledge of the fixed point\footnote{A fixed point of a map $f : \mathcal{X} \to \mathcal{X}$ is a point $x^* \in \mathcal{X}$ such that $f(x^*) = x^*$.} of the Bellman optimality operator is sufficient to act optimally or near-optimally\footnote{See Lemma I at \url{https://rltheory.github.io/lecture-notes/planning-in-mdps/lec6/}.}. For now, however, it is enough that a unique fixed point exists. The proof is a routine application of the well-known Banach fixed point theorem and can be found in Section \ref{subsec:mdps_proofs}.

\begin{theorem} \label{thm:existence_of_q*} For any MDP $M$, $T^*_M$ admits a unique fixed point $q^*_M$, which we refer to as the ``optimal action-value function'' for $M$. \end{theorem}

The following bound will serve a useful purpose in proving our main theorem. As before, a proof can be found in Section \ref{subsec:mdps_proofs}.

\begin{lemma} \label{lemma:q*_bound} For any MDP $M$ with rewards $r$ and discount rate $\gamma$, \begin{align*} \norm{q^*_M}_\infty \leq \frac{\norm{r}_\infty}{1 - \gamma}. \end{align*} \end{lemma}

\subsection{Sampling Trajectories from an MDP}

In order to compute Q-learning iterates, we would like to sample trajectories from a distribution that respects the dynamics of a given countable discounted MDP $M = \langle \mathcal{S}, \mathcal{A}, P, r, \gamma \rangle$. To that end, we require some statistical apparatus. Once again, the reader is referred to Section \ref{sec:notation} if any notation is unfamiliar.

\begin{definition} The ``trajectory space'' of $M$ is the measurable space \begin{align*} (\Omega_M, \mathcal{F}_M) := \p{ (\mathcal{S} \times \mathcal{A} \times \mathcal{S})^{\mathbb{N}_0}, \bigotimes_{t \in \mathbb{N}_0} \mathcal{P}(\mathcal{S} \times \mathcal{A} \times \mathcal{S}) }. \end{align*} \end{definition}

\begin{definition} The ``trajectory process'' of $M$ is the sequence $(S_0, A_0, S'_0, S_1, A_1, S'_1, \dots)$ of $\mathcal{F}_M/\mathcal{P}(\mathcal{S})$ and $\mathcal{F}_M/\mathcal{P}(\mathcal{A})$-measurable projections defined by\footnote{For convenience, we suppress $M$ from the notation of the trajectory process as the correct meaning should always be deducible via ``type inference''.} \begin{align*} ((S_0, A_0, S'_0), (S_1, A_1, S'_1), \dots) := \id_{\Omega_M}. \end{align*} \end{definition}

\begin{definition} \label{def:trajectory_measure} The set of ``trajectory measures'' on $M$, denoted $\Delta_T(M)$, is the set of probability measures $\mathbb{P} \in \Delta(\Omega_M, \mathcal{F}_M)$ satisfying \begin{align*} \mathbb{P}(S'_t = s'_t | S_0, A_0, S'_0, \dots, S_t, A_t) = P(s'_t | S_t, A_t) \end{align*} almost surely (a.s.) for any $s'_t \in \mathcal{S}$ and $t \in \mathbb{N}_0$. \end{definition}

\begin{definition} The ``occurrences'' of $(s, a) \in \mathcal{S} \times \mathcal{A}$ along a ``trajectory'' $\omega \in \Omega_M$ constitute \begin{align*} \mathcal{T}_{(s, a)}(\omega) := \{t \in \mathbb{N}_0 : (S_t, A_t)(\omega) = (s, a)\}.
\end{align*} \end{definition} \section{The Q-learning Algorithm} Our overall goal is to design a reinforcement learning agent that makes good decisions in a given environment. To that end, we seek to develop an algorithm that closely approximates the optimal action-value function for a given MDP. Furthermore, we would like to do this without explicitly accessing an environment's transition kernel as these are frequently unavailable in real-world applications. On the other hand, many real-world environments permit the sampling of transitions and in fact we will use sampling to develop the Q-learning algorithm. In particular, by stochastically ``visiting'' many state-action pairs, we iteratively refine an estimate for $q^*_M$. The details of how we visit states and choose actions should not matter as long as our samples cover the state-action space sufficiently well. Altogether, these ideas form the basis of Watkins' and Dayan's Q-learning \citep{watkins1992q}. \begin{definition}[Q-learning] The ``Q-learning iterates'' on a finite MDP $M$ with discount rate $\gamma$ induced by a ``stepsize'' sequence $\alpha = (\alpha_t)_{t \in \mathbb{N}_0}$ in $\mathbb{R}$ and a trajectory $\omega \in \Omega_M$ form the sequence\footnote{Similarly, we omit $M$ from the notation of the Q-learning iterates and rely instead upon context and prepositional phrases to make the underlying MDP unambiguous.} $(Q_t^\alpha(\omega))_{t \in \mathbb{N}_0}$ in $\ell^\infty(\mathcal{S} \times \mathcal{A})$ defined recursively by $Q_0^\alpha(\omega) \equiv \mathbf{0}$ and\footnote{We adopt the function ``currying'' convention $f(y; x) := f(x)(y)$ for $f : \mathcal{X} \to \mathcal{Y} \to \mathcal{Z}$, $x \in \mathcal{X}$, and $y \in \mathcal{Y}$.} \begin{align*} Q_{t + 1}^\alpha := (s, a; \omega) \mapsto \begin{cases} (1 - \alpha_t)Q_t^\alpha(s, a; \omega) + \alpha_t(r(s, a) + \gamma \max\limits_{a' \in \mathcal{A}}Q_t^\alpha(S'_t(\omega), a'; \omega)) & \text{if } t \in \mathcal{T}_{(s, a)}(\omega) \\ Q_t^\alpha(s, a; \omega) & \text{otherwise} \end{cases} \end{align*} for $t \in \mathbb{N}_0$. \end{definition} \begin{remark} While the construction of the Q-learning iterates depends explicitly on the states, actions, rewards, and discount rate of an MDP, it does not depend directly on the transition kernel of an MDP. This increases the flexibility of Q-learning and, as we will see later, does not preclude convergence as long as the trajectories are sampled from an appropriate distribution. \end{remark} \section{Convergence of Q-learning} Q-learning iterates in hand, we are ready to state the assumptions that lead to convergence. \begin{definition} \label{def:hypotheses} Let $M$ be an MDP. A trajectory measure $\mathbb{P} \in \Delta_T(M)$ (see Definition \ref{def:trajectory_measure}) and a sequence $(\alpha_t)_{t \in \mathbb{N}_0}$ in $[0, 1]$ are said to satisfy the Robbins--Monro condition when \begin{align*} \sum_{t \in \mathcal{T}_{(s, a)}(\omega)} \alpha_t = \infty \quad\text{and}\quad \sum_{t \in \mathcal{T}_{(s, a)}(\omega)} \alpha_t^2 < \infty \end{align*} for all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and $\mathbb{P}$-almost all $\omega \in \Omega_M$. The set of all such trajectory measure-stepsize sequence pairs is denoted $\nu(M)$.
\end{definition} \begin{remark} \label{rmk:infinite_occurences} The condition that $\sum_{t \in \mathcal{T}_{(s, a)}(\omega)} \alpha_t = \infty$ requires that $\mathcal{T}_{(s, a)}(\omega)$ be infinite for all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and $\mathbb{P}$-almost all $\omega \in \Omega_M$, i.e. the sampling strategy that produces the measure $\mathbb{P}$ must visit all state-action pairs infinitely often. \end{remark} At last, we have arrived at our main result. The proof is delayed until Subsection \ref{subsec:big_boi_proof} as only then will we be adequately equipped for the task. \begin{theorem} \label{thm:big_boi} Let $M$ be a finite MDP and let $(\mathbb{P}, \alpha) \in \nu(M)$ be a Robbins--Monro trajectory measure-stepsize sequence pair for $M$. Then the Q-learning iterates $(Q_t^\alpha(\omega))_{t \in \mathbb{N}_0}$ on $M$ converge uniformly to $q^*_M$ for $\mathbb{P}$-almost all $\omega \in \Omega_M$. \end{theorem} \subsection{The Action-Replay Processes} \label{subsec:arp} We begin our journey toward convergence by showing that an MDP $M$ can be recovered by a certain limiting process from a trajectory-dependent MDP whose optimal action-value functions track the Q-learning iterates on $M$. We will see that this construction serves as the primary device for proving the convergence of Q-learning. \begin{definition} \label{def:arp} The ``action-replay process'' of an MDP $M = \langle \mathcal{S}, \mathcal{A}, P, r, \gamma \rangle$ induced by a stepsize sequence $\alpha = (\alpha_t)_{t \in \mathbb{N}_0}$ and a trajectory $\omega \in \Omega_M$ is the MDP $\hat{M}^\alpha(\omega) := \langle \hat{\mathcal{S}}, \mathcal{A}, \hat{P}, \hat{r}, \gamma \rangle$ where $\hat{\mathcal{S}} := (\mathcal{S} \times \mathbb{N}_0) \cup \{s_{\textrm{absorb}}\}$; \begin{align*} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) & := \alpha_{t'}\prod_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap (t', t)}(1 - \alpha_\tau), \\ \hat{P}(s_{\textrm{absorb}} | (s, t), a) & := \prod_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} (1 - \alpha_{\tau}), \text{ and} \\ \hat{P}(s_{\textrm{absorb}} | s_{\textrm{absorb}}, a) & := 1 \end{align*} for $(s, a) \in \mathcal{S} \times \mathcal{A}$, $t \in \mathbb{N}_0$, and $t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)$ as well as $\hat{P}(\cdot | \cdot, \cdot) \equiv \mathbf{0}$ everywhere else; and, finally, \begin{align*} \hat{r}((s, t), a) := r(s, a)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \end{align*} for $(s, t) \in \mathcal{S} \times \mathbb{N}_0$ and $a \in \mathcal{A}$ as well as $\hat{r}(\cdot, \cdot) \equiv \mathbf{0}$ everywhere else. \end{definition} Our next theorem reduces the analysis of Q-learning iterates to analysis of the optimal action-value function of an action-replay process. \begin{theorem} \label{thm:q*_arp} Let $M$ be a finite MDP, let $\alpha$ be a stepsize sequence, and let $(Q_t^\alpha(\omega))_{t \in \mathbb{N}_0}$ be the induced Q-learning iterates on $M$. For every $\omega \in \Omega_M$, $t \in \mathbb{N}_0$, and $(s, a) \in \mathcal{S} \times \mathcal{A}$, \begin{align*} q^*_{\hat{M}^\alpha(\omega)}((s, t), a) = Q_t^\alpha(s, a; \omega). \end{align*} \end{theorem} Before we prove the theorem, we strongly encourage the reader to prove the following lemma that shows that, while the dynamics of the action-replay processes may look intimidating at first glance, their recursive form is much more pleasant to work with.
\begin{lemma} \label{lemma:arp_dynamics} With all terms as in Definition \ref{def:arp}, $(s, a) \in \mathcal{S} \times \mathcal{A}$, and $\omega \in \Omega_M$, we have \begin{align*} \hat{P}((S'_{t'}(\omega), t') | (s, t + 1), a) = \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \end{align*} for any $t \notin \mathcal{T}_{(s, a)}(\omega)$ and $t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t + 1)$ as well as \begin{align*} \hat{P}((S'_t(\omega), t) | (s, t + 1), a) = \alpha_t \end{align*} and \begin{align*} \hat{P}((S'_{t'}(\omega), t') | (s, t + 1), a) = (1 - \alpha_t)\hat{P}((S'_{t'}(\omega), t') | (s, t), a) \end{align*} for any $t \in \mathcal{T}_{(s, a)}(\omega)$ and $t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)$. \end{lemma} \begin{proof}[Proof of Theorem \ref{thm:q*_arp}] Fix $\omega \in \Omega_M$ and let $\hat{M}^\alpha(\omega) = \langle \hat{\mathcal{S}}, \mathcal{A}, \hat{P}, \hat{r}, \gamma \rangle$. We begin by establishing an extremely useful form for the optimal action-values of $\hat{M}^\alpha(\omega)$. To that end, notice that, for any $a \in \mathcal{A}$, \begin{align*} q^*_{\hat{M}^\alpha(\omega)}(s_{\textrm{absorb}}, a) & = T^*_{\hat{M}^\alpha(\omega)}q^*_{\hat{M}^\alpha(\omega)}(s_{\textrm{absorb}}, a) \\ & = \hat{r}(s_{\textrm{absorb}}, a) + \gamma \sum_{\sigma' \in \mathcal{S}_{\hat{M}^\alpha(\omega)}}\hat{P}(\sigma' | s_{\textrm{absorb}}, a) \max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}(\sigma', a') \\ & = \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}(s_{\textrm{absorb}}, a'), \end{align*} so, taking a maximum over $a \in \mathcal{A}$, we must have $\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}(s_{\textrm{absorb}}, a') = 0$ and hence \begin{align} \label{eq:arp_q*} q^*_{\hat{M}^\alpha(\omega)}((s, k), a) =&\ T^*_{\hat{M}^\alpha(\omega)}q^*_{\hat{M}^\alpha(\omega)}((s, k), a) \nonumber\\ =&\ \hat{r}((s, k), a) + \gamma \sum_{\sigma' \in \mathcal{S}_{\hat{M}^\alpha(\omega)}} \hat{P}(\sigma' | (s, k), a) \max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}(\sigma', a') \nonumber\\ =&\ r(s, a)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, k)} \hat{P}((S'_{t'}(\omega), t') | (s, k), a) + \nonumber\\ &\ \gamma \hat{P}(s_{\textrm{absorb}} | (s, k), a) \cancelto{0}{ \max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}(s_{\textrm{absorb}}, a') } + \\ &\ \gamma \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, k)} \hat{P}((S'_{t'}(\omega), t') | (s, k), a) \max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_{t'}(\omega), t'), a') \nonumber\\ =&\ \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, k)} \hat{P}((S'_{t'}(\omega), t') | (s, k), a) \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_{t'}(\omega), t'), a') } \nonumber \end{align} for any $(s, a) \in \mathcal{S} \times \mathcal{A}$ and $k \in \mathbb{N}_0$. With this in mind, we now prove the theorem by induction on $t$. Since $[0, 0) = \varnothing$, Equation (\ref{eq:arp_q*}) yields $q^*_{\hat{M}^\alpha(\omega)}((s, 0), a) = 0 = Q_0^\alpha(s, a; \omega)$ for any $(s, a) \in \mathcal{S} \times \mathcal{A}$ and hence the base case holds. As for the inductive step, let $t \in \mathbb{N}_0$, assume the claim holds for $t$, and let $(s, a) \in \mathcal{S} \times \mathcal{A}$. We consider two cases. 
If $t \notin \mathcal{T}_{(s, a)}(\omega)$, then, by Equation (\ref{eq:arp_q*}) and Lemma \ref{lemma:arp_dynamics}, we have \begin{align*} q^*_{\hat{M}^\alpha(\omega)}((s, t + 1), a) & = \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t + 1)} \hat{P}((S'_{t'}(\omega), t') | (s, t + 1), a) \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_{t'}(\omega), t'), a') } \\ & = \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_{t'}(\omega), t'), a') } \\ & = q^*_{\hat{M}^\alpha(\omega)}((s, t), a) \\ & = Q_t^\alpha(s, a; \omega) \\ & = Q_{t + 1}^\alpha(s, a; \omega). \end{align*} Likewise, if $t \in \mathcal{T}_{(s, a)}(\omega)$, then, by Equation (\ref{eq:arp_q*}) and Lemma \ref{lemma:arp_dynamics}, \begin{align*} q^*_{\hat{M}^\alpha(\omega)}((s, t + 1), a) =&\ \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t + 1)} \hat{P}((S'_{t'}(\omega), t') | (s, t + 1), a) \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_{t'}(\omega), t'), a') } \\ =&\ (1 - \alpha_t) \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_{t'}(\omega), t'), a') } \\ &\ + \alpha_t \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_t(\omega), t), a') } \\ =&\ (1 - \alpha_t)q^*_{\hat{M}^\alpha(\omega)}((s, t), a) + \alpha_t \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} q^*_{\hat{M}^\alpha(\omega)}((S'_t(\omega), t), a') } \\ =&\ (1 - \alpha_t)Q_t^\alpha(s, a; \omega) + \alpha_t \p{ r(s, a) + \gamma\max_{a' \in \mathcal{A}} Q_t^\alpha(S'_t(\omega), a'; \omega) } \\ =&\ Q_{t + 1}^\alpha(s, a; \omega) \end{align*} and hence the inductive step holds as well. \end{proof} At the beginning of Subsection \ref{subsec:arp}, we promised that an MDP can be recovered from its action-replay process via a limiting procedure; we now make good on that promise. \begin{theorem} \label{thm:arp_limit} Let $M = \langle \mathcal{S}, \mathcal{A}, P, r, \gamma \rangle$ be an MDP and let $(\mathbb{P}, \alpha) \in \nu(M)$ (recall Definition \ref{def:hypotheses}). Then, for any $(s, a, s') \in \mathcal{S} \times \mathcal{A} \times \mathcal{S}$ and $\mathbb{P}$-almost all $\omega \in \Omega_M$, \begin{align*} \hat{r}((s, t), a; \omega) \xrightarrow{t \to \infty} r(s, a) \end{align*} and \begin{align*} \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((s', t') | (s, t), a; \omega) \xrightarrow{t \to \infty} P(s' | s, a) \end{align*} where $\hat{M}^\alpha(\omega) = \langle \hat{\mathcal{S}}, \mathcal{A}, \hat{P}(\omega), \hat{r}(\omega), \gamma \rangle$. \end{theorem} The proof rests on a classic result from the theory of stochastic approximation. \begin{theorem}[The Robbins--Monro Theorem] \label{thm:robbins_monro} For any families of random variables $(\beta_t)_{t \in \mathbb{N}_0}$, $(\xi_t)_{t \in \mathbb{N}_0}$, and $(X_t)_{t \in \mathbb{N}_0}$ such that $(\beta_t)_{t \in \mathbb{N}_0}$ is non-negative and satisfies $\sum_{t \in \mathbb{N}_0} \beta_t = \infty$ as well as $\sum_{t \in \mathbb{N}_0} \beta_t^2 < \infty$ a.s., $\E[\xi_t] = \Xi$ for all $t \in \mathbb{N}_0$, $(\xi_t)_{t \in \mathbb{N}_0}$ is bounded a.s., and \begin{align*} X_{t + 1} = (1 - \beta_t) X_t + \beta_t \xi_t \end{align*} for all $t \in \mathbb{N}_0$, we have that $X_t \to \Xi$ a.s.
\end{theorem} A statement and proof of the theorem can be found under Theorem 2.3.1 in \citep{nla.cat-vn954258} and its original, weaker variant (quadratic mean convergence rather than almost sure convergence) is stated and proved in \citep{robbins1951stochastic}. \begin{proof}[Proof of Theorem \ref{thm:arp_limit}] Fix $(s, a, s') \in \mathcal{S} \times \mathcal{A} \times \mathcal{S}$ and discard a $\mathbb{P}$-null set from $\Omega_M$ so that $\sum_{t \in \mathcal{T}_{(s, a)}(\omega)} \alpha_t = \infty$ and $\sum_{t \in \mathcal{T}_{(s, a)}(\omega)} \alpha_t^2 < \infty$ for $\omega \in \Omega_M$. Furthermore, for any $k \in \mathbb{N}_0$ and $\omega \in \Omega_M$, let $T_k(\omega)$ be the $(k + 1)$\textsuperscript{st} smallest element of $\mathcal{T}_{(s, a)}(\omega)$ (counting from zero, so that $T_0(\omega) := \min \mathcal{T}_{(s, a)}(\omega)$), which is well-defined by Remark \ref{rmk:infinite_occurences}. We now show that the reward limit holds. To that end, for $t \in \mathbb{N}_0$ and $\omega \in \Omega_M$, define \begin{align*} X_t(\omega) := \hat{r}((s, t), a; \omega) = r(s, a)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a; \omega). \end{align*} Then, for any $t \in \mathbb{N}_0$ and $\omega \in \Omega_M$, $t \notin \mathcal{T}_{(s, a)}(\omega)$ implies \begin{align*} X_{t + 1}(\omega) & = r(s, a)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t + 1)} \hat{P}((S'_{t'}(\omega), t') | (s, t + 1), a; \omega) \\ & = r(s, a)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a; \omega) \\ & = X_t(\omega) \end{align*} by Lemma \ref{lemma:arp_dynamics}, whereas $t \in \mathcal{T}_{(s, a)}(\omega)$ implies that \begin{align*} X_{t + 1}(\omega) & = r(s, a)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t + 1)} \hat{P}((S'_{t'}(\omega), t') | (s, t + 1), a; \omega) \\ & = r(s, a)\p{(1 - \alpha_t)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a; \omega) + \alpha_t} \\ & = (1 - \alpha_t)X_t(\omega) + \alpha_t r(s, a) \end{align*} by Lemma \ref{lemma:arp_dynamics}. In particular, we have \begin{align*} X_{T_{k + 1}} = (1 - \alpha_{T_k})X_{T_k} + \alpha_{T_k}r(s, a) \end{align*} for all $k \in \mathbb{N}_0$. By Theorem \ref{thm:robbins_monro}, $X_{T_k}(\omega) \xrightarrow{k \to \infty} r(s, a)$ for $\mathbb{P}$-almost all $\omega \in \Omega_M$. Finally, since $(X_t)_{t \in \mathbb{N}_0}$ is constant between the terms of the subsequence $(X_{T_k})_{k \in \mathbb{N}_0}$, we have \begin{align*} \hat{r}((s, t), a; \omega) = X_t(\omega) \xrightarrow{t \to \infty} r(s, a) \end{align*} for $\mathbb{P}$-almost all $\omega \in \Omega_M$ as well. Next, we show that the dynamics limit holds in an analogous fashion. To that end, for $t \in \mathbb{N}_0$ and $\omega \in \Omega_M$, define \begin{align*} Y_t(\omega) := \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((s', t') | (s, t), a; \omega).
\end{align*} Then, for any $t \in \mathbb{N}_0$ and $\omega \in \Omega_M$, $t \notin \mathcal{T}_{(s, a)}(\omega)$ implies $Y_{t + 1}(\omega) = Y_t(\omega)$ by Lemma \ref{lemma:arp_dynamics}, whereas $t \in \mathcal{T}_{(s, a)}(\omega)$ implies that \begin{align*} Y_{t + 1}(\omega) & = \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t + 1)} \hat{P}((s', t') | (s, t + 1), a) \\ & = \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t + 1)} \mathbbm{1}(S'_{t'}(\omega) = s') \hat{P}((S'_{t'}(\omega), t') | (s, t + 1), a) \\ & = (1 - \alpha_t)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \mathbbm{1}(S'_{t'}(\omega) = s') \hat{P}((S'_{t'}(\omega), t') | (s, t), a) + \alpha_t \mathbbm{1}(S'_t(\omega) = s') \\ & = (1 - \alpha_t)\sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((s', t') | (s, t), a) + \alpha_t \mathbbm{1}(S'_t(\omega) = s') \\ & = (1 - \alpha_t)Y_t(\omega) + \alpha_t \mathbbm{1}(S'_t(\omega) = s') \end{align*} by Lemma \ref{lemma:arp_dynamics}. In particular, we have \begin{align*} Y_{T_{k + 1}} = (1 - \alpha_{T_k})Y_{T_k} + \alpha_{T_k}\mathbbm{1}(S'_{T_k} = s') \end{align*} for all $k \in \mathbb{N}_0$. But, for any $k \in \mathbb{N}_0$, \begin{align*} \E[\mathbbm{1}(S'_{T_k} = s')] & = \mathbb{P}(S'_{T_k} = s') \\ & = \sum_{t = 0}^\infty \mathbb{P}(T_k = t, S'_t = s') \\ & = \sum_{t = 0}^\infty \mathbb{P}(\abs{\mathcal{T}_{(s, a)} \cap [0, t)} = k, S_t = s, A_t = a, S'_t = s') \\ & = \sum_{t = 0}^\infty \mathbb{P}(\abs{\mathcal{T}_{(s, a)} \cap [0, t)} = k, S_t = s, A_t = a)P(s' | s, a) \\ & = P(s' | s, a)\sum_{t = 0}^\infty \mathbb{P}(T_k = t) \\ & = P(s' | s, a) \end{align*} since $\abs{\mathcal{T}_{(s, a)} \cap [0, t)}$ is a $\sigma(S_0, A_0, S'_0, \dots, S_{t - 1}, A_{t - 1})$-measurable random variable and since $\mathbb{P}$ is a trajectory measure on $M$. By Theorem \ref{thm:robbins_monro}, $Y_{T_k}(\omega) \xrightarrow{k \to \infty} P(s' | s, a)$ for $\mathbb{P}$-almost all $\omega \in \Omega_M$. As $(Y_t)_{t \in \mathbb{N}_0}$ is constant between the terms of the subsequence $(Y_{T_k})_{k \in \mathbb{N}_0}$, \begin{align*} \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((s', t') | (s, t), a; \omega) = Y_t(\omega) \xrightarrow{t \to \infty} P(s' | s, a) \end{align*} for $\mathbb{P}$-almost all $\omega \in \Omega_M$ as well. \end{proof} \subsection{Proof of Theorem \ref{thm:big_boi}} \label{subsec:big_boi_proof} Having tamed the action-replay processes, all of the conceptual pieces are now in place to prove the convergence of Q-learning. For the sake of digestibility, we have factored out some of the technical heavy lifting into the following two lemmas. \begin{lemma} \label{lemma:low_level_bound} Let $M = \langle\mathcal{S}, \mathcal{A}, P, r, \gamma\rangle$ be an MDP, $\alpha = (\alpha_t)_{t \in \mathbb{N}_0}$ a stepsize sequence in $[0, 1]$, $\omega \in \Omega_M$, and $(s, a) \in \mathcal{S} \times \mathcal{A}$. For any $\tilde{t}, t \in \mathbb{N}_0$ with $\tilde{t} \leq t$, \begin{align*} \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, \tilde{t})} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \leq e^{-\sum_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)} \alpha_\tau} \end{align*} where $\hat{M}^\alpha(\omega) = \langle \hat{\mathcal{S}}, \mathcal{A}, \hat{P}, \hat{r}, \gamma \rangle$.
\end{lemma} \begin{proof} Since $1 - \alpha \leq e^{-\alpha}$ for all $\alpha \in \mathbb{R}$, \begin{align*} \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, \tilde{t})} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) & = \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, \tilde{t})} \alpha_{t'}\prod_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap (t', t)}(1 - \alpha_\tau) \\ & \leq \prod_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)}(1 - \alpha_\tau) + \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, \tilde{t})} \alpha_{t'}\prod_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap (t', t)}(1 - \alpha_\tau) \\ & = \prod_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)}(1 - \alpha_\tau) \\ & \leq \prod_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)}e^{-\alpha_\tau} \\ & = e^{-\sum_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)}\alpha_\tau} \end{align*} where the second equality follows by induction on $\tilde{t}$ (we encourage the reader to check). \end{proof} \begin{lemma} \label{lemma:one_step_error} Let $M = \langle\mathcal{S}, \mathcal{A}, P, r, \gamma\rangle$ be a finite MDP, let $\alpha = (\alpha_t)_{t \in \mathbb{N}_0}$ be a stepsize sequence, let $\omega \in \Omega_M$, let $(Q_t := Q_t^\alpha(\omega))_{t \in \mathbb{N}_0}$ be the induced Q-learning iterates on $M$, and let $\tilde{t}, t \in \mathbb{N}_0$ with $\tilde{t} \leq t$. Then, for any $(s, a) \in \mathcal{S} \times \mathcal{A}$, $\abs{Q_t(s, a) - q^*_M(s, a)}$ is at most \begin{align*} \gamma \max_{t' \in [\tilde{t}, t)} \norm{Q_{t'} - q^*_M}_\infty + \norm{\hat{r}_t - r}_\infty + \p{\frac{\gamma \norm{r}_\infty}{1 - \gamma}}\p{ \abs{\mathcal{S}}\norm{\hat{P}_t - P}_\infty + 2e^{-\sum_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)}\alpha_\tau} } \end{align*} where $\hat{M} := \hat{M}^\alpha(\omega) = \langle \hat{\mathcal{S}}, \mathcal{A}, \hat{P}, \hat{r}, \gamma \rangle$, $\hat{P}_t(s' | s, a) := \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((s', t') | (s, t), a)$, and $\hat{r}_t(s, a) := \hat{r}((s, t), a)$. \end{lemma} \begin{proof} Fix $(s, a) \in \mathcal{S} \times \mathcal{A}$. By Theorem \ref{thm:q*_arp} and the triangle inequality, \begin{align*} |Q_t(s, a) - q^*_M(s, a)| = & \abs{T^*_{\hat{M}}q^*_{\hat{M}}((s, t), a) - T^*_Mq^*_M(s, a)} \\ \leq & \abs{\hat{r}((s, t), a) - r(s, a)} + \\ & \gamma\Bigg| \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \max_{a' \in \mathcal{A}} q^*_{\hat{M}}((S'_{t'}(\omega), t'), a') \\ & \qquad\qquad - \sum_{s' \in \mathcal{S}} P(s' | s, a) \max_{a' \in \mathcal{A}} q^*_M(s', a')\Bigg|. 
\end{align*} But $\abs{\hat{r}((s, t), a) - r(s, a)} = \abs{\hat{r}_t(s, a) - r(s, a)} \leq \norm{\hat{r}_t - r}_\infty$ and, applying the triangle inequality once more, \begin{align*} \abs{ \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \max_{a' \in \mathcal{A}} q^*_{\hat{M}}((S'_{t'}(\omega), t'), a') - \sum_{s' \in \mathcal{S}} P(s' | s, a) \max_{a' \in \mathcal{A}} q^*_M(s', a') } \end{align*} is bounded by the sum of (\ref{exp:term_a}) and (\ref{exp:term_b}) where \begin{align} \label{exp:term_a} \Bigg| \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)}& \hat{P}((S'_{t'}(\omega), t') | (s, t), a)\p{ \max_{a' \in \mathcal{A}} q^*_{\hat{M}}((S'_{t'}(\omega), t'), a') - \max_{a' \in \mathcal{A}} q^*_M(S'_{t'}(\omega), a') } \Bigg| \\ \leq & \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \max_{a' \in \mathcal{A}} \abs{ q^*_{\hat{M}}((S'_{t'}(\omega), t'), a') - q^*_M(S'_{t'}(\omega), a') } + \nonumber\\ & \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, \tilde{t})} \hat{P}((S'_{t'}(\omega), t') | (s, t), a)\p{ \norm{q^*_{\hat{M}}}_\infty + \norm{q^*_M}_\infty } \nonumber\\ \leq & \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \max_{a' \in \mathcal{A}} \abs{ Q_{t'}(S'_{t'}(\omega), a') - q^*_M(S'_{t'}(\omega), a') } + \tag{Theorem \ref{thm:q*_arp}}\\ & \p{\frac{\norm{\hat{r}}_\infty + \norm{r}_\infty}{1 - \gamma}} \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, \tilde{t})} \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \tag{Lemma \ref{lemma:q*_bound}}\\ \leq & \max_{t' \in [\tilde{t}, t)} \norm{Q_{t'} - q^*_M}_\infty + \p{\frac{2\norm{r}_\infty}{1 - \gamma}} e^{-\sum_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [\tilde{t}, t)}\alpha_\tau} \tag{Lemma \ref{lemma:low_level_bound}} \end{align} and \begin{align} \label{exp:term_b} \Bigg| \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)}& \hat{P}((S'_{t'}(\omega), t') | (s, t), a) \max_{a' \in \mathcal{A}} q^*_M(S'_{t'}(\omega), a') - \sum_{s' \in \mathcal{S}} P(s' | s, a) \max_{a' \in \mathcal{A}} q^*_M(s', a') \Bigg| \\ = & \abs{ \sum_{s' \in \mathcal{S}} \p{ \sum_{t' \in \mathcal{T}_{(s, a)}(\omega) \cap [0, t)} \hat{P}((s', t') | (s, t), a) - P(s' | s, a) } \max_{a' \in \mathcal{A}} q^*_M(s', a') } \nonumber\\ \leq & \norm{q^*_M}_\infty \sum_{s' \in \mathcal{S}} \abs{\hat{P}_t(s' | s, a) - P(s' | s, a)} \nonumber\\ \leq & \p{\frac{\abs{\mathcal{S}}\norm{r}_\infty}{1 - \gamma}} \norm{\hat{P}_t - P}_\infty \tag{Lemma \ref{lemma:q*_bound}} \end{align} (where the equality follows from the fact that $s' \neq S'_{t'}(\omega)$ implies $\hat{P}((s', t') | (s, t), a) = 0$), which yields the desired bound. \end{proof} It is time to finish the job. While most of the error terms provided by Lemma \ref{lemma:one_step_error} can be controlled in a straightforward manner via Theorem \ref{thm:arp_limit}, it is not immediately clear how to control $\max_{t' \in [\tilde{t}, t)} \norm{Q_{t'} - q^*_M}_\infty$. However, we will see that it may be subdued by repeatedly applying Lemma \ref{lemma:one_step_error} until a sufficiently small exponential coefficient is obtained. 
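To get a concrete feel for the quantities involved, consider an illustrative instance (our own numbers, not part of the formal argument): with $\gamma = 0.9$, $\norm{r}_\infty = 1$, and target accuracy $\epsilon = 0.1$, the requirement $\gamma^{k + 1} \leq \epsilon(1 - \gamma)/(8\norm{r}_\infty) = 1.25 \times 10^{-3}$ in the proof below forces $k + 1 \geq \ln(1.25 \times 10^{-3})/\ln(0.9) \approx 63.4$, i.e., $k = 63$ applications of Lemma \ref{lemma:one_step_error}.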
\begin{proof}[Proof of Theorem \ref{thm:big_boi}] Taking finite unions of null sets as needed, discard a $\mathbb{P}$-null set from $\Omega_M$ so that, for all $(s, a) \in \mathcal{S} \times \mathcal{A}$ and $\omega \in \Omega_M$, $\sum_{t \in \mathcal{T}_{(s, a)}(\omega)} \alpha_t = \infty$ holds in addition to the conclusion of Theorem \ref{thm:arp_limit}. With this in mind, fix $\omega \in \Omega_M$, put $(Q_t)_{t \in \mathbb{N}_0} := (Q_t^\alpha(\omega))_{t \in \mathbb{N}_0}$, and let $(\hat{P}_t)_{t \in \mathbb{N}_0}$ as well as $(\hat{r}_t)_{t \in \mathbb{N}_0}$ be as in Lemma \ref{lemma:one_step_error}. Now, let $\epsilon > 0$ and choose $k \in \mathbb{N}$ sufficiently large so that \begin{align*} \gamma^{k + 1} \leq \frac{\epsilon(1 - \gamma)}{8 \norm{r}_\infty} \end{align*} (where $\cdot/0 := \infty$). Furthermore, by Theorem \ref{thm:arp_limit}, we may find $t_0 \in \mathbb{N}_0$ such that \begin{align*} \norm{\hat{r}_t - r}_\infty \leq \frac{\epsilon(1 - \gamma)}{4} \end{align*} and \begin{align*} \norm{\hat{P}_t - P}_\infty \leq \frac{\epsilon(1 - \gamma)^2}{4\gamma\abs{\mathcal{S}}\norm{r}_\infty} \end{align*} for $t \geq t_0$. Finally, as $\mathcal{S} \times \mathcal{A}$ is finite and as $\sum_{t \in \mathcal{T}_{(s, a)}(\omega)} \alpha_t = \infty$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$, we may choose $t_k \geq \dots \geq t_1 \geq t_0$ sufficiently far apart such that \begin{align*} e^{-\sum_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [t_{i - 1}, t_i)} \alpha_\tau} \leq \frac{\epsilon(1 - \gamma)^2}{8\gamma \norm{r}_\infty} \end{align*} for all $i \in \{1, \dots, k\}$ and $(s, a) \in \mathcal{S} \times \mathcal{A}$. In particular, for any $i \in \{1, \dots, k\}$, $t \geq t_i$, and $(s, a) \in \mathcal{S} \times \mathcal{A}$, \begin{align*} \norm{\hat{r}_t - r}_\infty + & \p{\frac{\gamma\norm{r}_\infty}{1 - \gamma}} \p{ \abs{\mathcal{S}}\norm{\hat{P}_{t} - P}_\infty + 2e^{-\sum_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [t_{i - 1}, t)} \alpha_\tau} } \\ & \leq \norm{\hat{r}_t - r}_\infty + \p{\frac{\gamma\norm{r}_\infty}{1 - \gamma}} \p{ \abs{\mathcal{S}}\norm{\hat{P}_{t} - P}_\infty + 2e^{-\sum_{\tau \in \mathcal{T}_{(s, a)}(\omega) \cap [t_{i - 1}, t_i)} \alpha_\tau} } \\ & \leq \frac{3}{4}\epsilon(1 - \gamma) \end{align*} since $(\alpha_t)_{t \in \mathbb{N}_0}$ is non-negative, so it follows by inductive application of Lemma \ref{lemma:one_step_error} that \begin{align*} \norm{Q_t - q^*_M}_\infty & \leq \gamma\max_{t' \in [t_k, t)}\norm{Q_{t'} - q^*_M}_\infty + \frac{3}{4}\epsilon(1 - \gamma) \\ & \leq \gamma \max_{t' \in [t_k, t)}\p{ \gamma\max_{t'' \in [t_{k - 1}, t')}\norm{Q_{t''} - q^*_M}_\infty + \frac{3}{4}\epsilon(1 - \gamma) } + \frac{3}{4}\epsilon(1 - \gamma) \\ & = \gamma^2\max_{t' \in [t_{k - 1}, t)}\norm{Q_{t'} - q^*_M}_\infty + \frac{3}{4}\epsilon(1 - \gamma)(1 + \gamma) \\ & \leq \dots \\ & = \gamma^{k + 1}\max_{t' \in [t_0, t)}\norm{Q_{t'} - q^*_M}_\infty + \frac{3}{4}\epsilon(1 - \gamma)(1 + \gamma + \dots + \gamma^k) \\ & \leq \p{\frac{\epsilon(1 - \gamma)}{8 \norm{r}_\infty}} \p{\frac{2\norm{r}_\infty}{1 - \gamma}} + \frac{\frac{3}{4}\epsilon(1 - \gamma)}{1 - \gamma} \tag{Lemma \ref{lemma:q*_bound}} \\ & = \epsilon \end{align*} for all $t \geq t_k$ and, with that, the beast has been slain.
\end{proof} \appendix \section{Notation} \label{sec:notation} \subsection{Measure Theory} \begin{definition} A $\sigma$-algebra on a non-empty set $\mathcal{X}$ is a collection of subsets $\mathcal{F}$ of $\mathcal{X}$ satisfying \begin{enumerate}[(i)] \item $\varnothing \in \mathcal{F}$; \item $\forall A \in \mathcal{F},\ \mathcal{X} \setminus A \in \mathcal{F}$; and \item $\forall A_1, A_2, \dots \in \mathcal{F},\ \bigcup_{n \in \mathbb{N}} A_n \in \mathcal{F}$. \end{enumerate} In this case, we call $(\mathcal{X}, \mathcal{F})$ a measurable space. \end{definition} \begin{definition} Given measurable spaces $(\mathcal{X}, \mathcal{F})$ and $(\mathcal{Y}, \mathcal{G})$ as well as a function $A : \mathcal{X} \to \mathcal{Y}$, the $\sigma$-algebra induced by $A$ is \begin{align*} \sigma_{\mathcal{G}}(A) := \{A^{-1}(G) : G \in \mathcal{G}\} \end{align*} and is usually denoted $\sigma(A)$ (when $\mathcal{G}$ is clear from context). Moreover, if $\sigma_{\mathcal{G}}(A) \subseteq \mathcal{F}$, we say that $A$ is $\mathcal{F}/\mathcal{G}$-measurable, $\mathcal{F}$-measurable, or just measurable for short. \end{definition} \begin{remark} Every non-empty set $\mathcal{X}$ admits at least one $\sigma$-algebra---namely $\mathcal{P}(\mathcal{X})$---and if $\{\mathcal{F}_i : i \in \mathcal{I}\}$ is a non-empty family of $\sigma$-algebras on $\mathcal{X}$, then $\bigcap_{i \in \mathcal{I}} \mathcal{F}_i$ is a $\sigma$-algebra on $\mathcal{X}$. \end{remark} \begin{definition} Given measurable spaces $(\mathcal{X}_i, \mathcal{F}_i)_{i \in \mathcal{I}}$, the product $\sigma$-algebra on $\bigtimes_{i \in \mathcal{I}} \mathcal{X}_i$ \begin{align*} \bigotimes_{i \in \mathcal{I}} \mathcal{F}_i := \bigcap \{\mathcal{F} \text{ a $\sigma$-algebra on $\bigtimes_{i \in \mathcal{I}} \mathcal{X}_i$} : \forall i \in \mathcal{I},\ \text{$\pi_i$ is $\mathcal{F}/\mathcal{F}_i$-measurable} \} \end{align*} is the smallest $\sigma$-algebra with respect to which each projection $\pi_i : \bigtimes\limits_{j \in \mathcal{I}}\mathcal{X}_j \to \mathcal{X}_i$ is measurable. \end{definition} \begin{definition} A probability measure on a measurable space $(\mathcal{X}, \mathcal{F})$ is $\mathbb{P} : \mathcal{F} \to [0, \infty]$ s.t. \begin{enumerate}[(i)] \item $\mathbb{P}(\mathcal{X}) = 1$; and \item $\forall A_1, A_2, \dots \in \mathcal{F},\ (A_n)_{n \in \mathbb{N}} \text{ pairwise disjoint} \implies \mathbb{P}\p{\bigcup_{n \in \mathbb{N}} A_n} = \sum_{n \in \mathbb{N}} \mathbb{P}(A_n)$. \end{enumerate} Altogether, we call $(\mathcal{X}, \mathcal{F}, \mathbb{P})$ a probability space and real-valued measurable functions on $\mathcal{X}$ are called random variables. \end{definition} \begin{definition} The set of probability measures on a measurable space $(\mathcal{X}, \mathcal{F})$ is denoted $\Delta(\mathcal{X}, \mathcal{F})$. If $\mathcal{X}$ is countable, we write $\Delta(\mathcal{X}) := \Delta(\mathcal{X}, \mathcal{P}(\mathcal{X}))$ for short. \end{definition} \subsection{Function Spaces} \begin{definition} Let $\mathcal{I}$ be a non-empty set. The supremum norm on $\mathcal{I} \to \mathbb{R}$ ($\mathbb{R}^{\mathcal{I}}$ for short) is \begin{align*} \norm{\cdot}_{\mathcal{I},\infty} : \mathbb{R}^\mathcal{I} \to [0, \infty], x \mapsto \sup_{i \in \mathcal{I}} \abs{x(i)} \end{align*} and the set of bounded real-valued functions on $\mathcal{I}$ is \begin{align*} \ell^\infty(\mathcal{I}) := \{x \in \mathbb{R}^\mathcal{I} : \norm{x}_{\mathcal{I},\infty} < \infty\}.
\end{align*} Frequently, $\mathcal{I}$ is clear from context, in which case we write $\norm{\cdot}_\infty$ instead of $\norm{\cdot}_{\mathcal{I}, \infty}$. \end{definition} \section{Banach's Fixed Point Theorem} In order to prove Theorem \ref{thm:existence_of_q*}, we first need to know a little bit about metric spaces. \begin{definition} Let $E$ be non-empty. We call $d : E \times E \to [0, \infty)$ a metric on $E$ when \begin{enumerate}[(i)] \item $\forall x, y \in E,\ d(x, y) = 0 \iff x = y$; \item $\forall x, y \in E,\ d(x, y) = d(y, x)$; and \item $\forall x, y, z \in E,\ d(x, z) \leq d(x, y) + d(y, z)$. \end{enumerate} In this case, the pair $(E, d)$ is called a metric space. \end{definition} \begin{definition} A metric space $(E, d)$ is said to be complete when, for all sequences $(x_n)_{n \in \mathbb{N}_0}$ in $E$ satisfying \begin{align*} \forall \epsilon > 0,\ \exists n_0 \in \mathbb{N}_0,\ \forall n_1, n_2 \geq n_0,\ d(x_{n_1}, x_{n_2}) \leq \epsilon \end{align*} (i.e. $(x_n)_{n \in \mathbb{N}_0}$ is a Cauchy sequence), we have that $d(x_n, x_\infty) \to 0$ for some $x_\infty \in E$. \end{definition} \begin{definition} Let $(E, d)$ be a metric space and let $\gamma \in [0, 1)$. We say that a map $T : E \to E$ is a $\gamma$-contraction on $(E, d)$ when \begin{align*} d(T(x), T(y)) \leq \gamma d(x, y) \end{align*} holds for all $x, y \in E$. \end{definition} \begin{theorem}[Banach's Fixed Point Theorem] \label{thm:banach} Let $(E, d)$ be a complete metric space and let $T : E \to E$ be a $\gamma$-contraction for some $\gamma \in [0, 1)$. Then $T$ admits a unique fixed point. \end{theorem} The proof of Banach's fixed point theorem is a classic exercise in analysis. We omit it here but encourage the reader to try it on their own (hint: fix an arbitrary $x_0 \in E$ and show that $(T^n(x_0))_{n \in \mathbb{N}_0}$ is Cauchy by leveraging the fact that $\sum_{n = 0}^\infty \gamma^n$ is a convergent series). \section{Proofs of Results in Subsection \ref{subsec:mdps}} \label{subsec:mdps_proofs} The latter fixed point theorem is all we need to show that optimal action-value functions exist and are unique in an MDP. \begin{proof}[Proof of Theorem \ref{thm:existence_of_q*}] Let $M = \langle\mathcal{S}, \mathcal{A}, P, r, \gamma\rangle$. It is a straightforward exercise to verify that \begin{align*} d_\infty : \mathbb{R}^{\mathcal{S} \times \mathcal{A}} \times \mathbb{R}^{\mathcal{S} \times \mathcal{A}} \to [0, \infty], (q_1, q_2) \mapsto \norm{q_1 - q_2}_\infty \end{align*} is a metric on $\ell^\infty(\mathcal{S} \times \mathcal{A})$ and we omit the details. As for completeness, let $(q_n)_{n \in \mathbb{N}_0}$ be a Cauchy sequence in $(\ell^\infty(\mathcal{S} \times \mathcal{A}), d_\infty)$ and let $\epsilon > 0$. For each $(s, a) \in \mathcal{S} \times \mathcal{A}$ and $n_1, n_2 \in \mathbb{N}_0$, $\abs{q_{n_1}(s, a) - q_{n_2}(s, a)} \leq d_\infty(q_{n_1}, q_{n_2})$, which implies that $(q_n(s, a))_{n \in \mathbb{N}_0}$ is a Cauchy sequence in $\mathbb{R}$ and hence, by completeness of $\mathbb{R}$, converges to some $q_\infty(s, a) \in \mathbb{R}$; in particular, there is $n_{(s, a)} \in \mathbb{N}_0$ such that $\abs{q_n(s, a) - q_\infty(s, a)} \leq \frac{\epsilon}{2}$ for $n \geq n_{(s, a)}$. Furthermore, there is $n_0 \in \mathbb{N}_0$ for which $d_\infty(q_{n_1}, q_{n_2}) \leq \frac{\epsilon}{2}$ for $n_1, n_2 \geq n_0$.
Hence \begin{align*} d_\infty(q_n, q_\infty) & = \sup_{(s, a) \in \mathcal{S} \times \mathcal{A}} \abs{q_n(s, a) - q_\infty(s, a)} \\ & \leq \sup_{(s, a) \in \mathcal{S} \times \mathcal{A}} \p{ d_\infty(q_n, q_{\max\{n_0, n_{(s, a)}\}}) + \abs{q_{\max\{n_0, n_{(s, a)}\}}(s, a) - q_\infty(s, a)} } \\ & \leq \sup_{(s, a) \in \mathcal{S} \times \mathcal{A}} \p{ \frac{\epsilon}{2} + \frac{\epsilon}{2} } \\ & \leq \epsilon \end{align*} for $n \geq n_0$ and so $d_\infty(q_n, q_\infty) \to 0$. In particular, $d_\infty(q_{n_0}, q_\infty) < 1$ for some $n_0 \in \mathbb{N}_0$ and thus \begin{align*} \norm{q_\infty}_\infty \leq \norm{q_{n_0}}_\infty + d_\infty(q_{n_0}, q_\infty) < \infty, \end{align*} i.e. $q_\infty \in \ell^\infty(\mathcal{S} \times \mathcal{A})$ so that the latter is complete with respect to $d_\infty$ as claimed. Finally, we claim that $T^*_M$ is a $\gamma$-contraction on $(\ell^\infty(\mathcal{S} \times \mathcal{A}), d_\infty)$ as the conclusion will then follow immediately from Theorem \ref{thm:banach}. Indeed, for any $q_1, q_2 \in \ell^\infty(\mathcal{S} \times \mathcal{A})$ and $(s, a) \in \mathcal{S} \times \mathcal{A}$, \begin{align*} \abs{T^*_Mq_1(s, a) - T^*_Mq_2(s, a)} & = \abs{ \gamma \sum_{s' \in \mathcal{S}} P(s' | s, a) \p{\max_{a' \in \mathcal{A}} q_1(s', a') - \max_{a' \in \mathcal{A}} q_2(s', a')} } \\ & \leq \gamma \sum_{s' \in \mathcal{S}} P(s' | s, a) \abs{\max_{a' \in \mathcal{A}} q_1(s', a') - \max_{a' \in \mathcal{A}} q_2(s', a')} \\ & \leq \gamma \sum_{s' \in \mathcal{S}} P(s' | s, a) \max_{a' \in \mathcal{A}} \abs{q_1(s', a') - q_2(s', a')} \\ & \leq \gamma \sum_{s' \in \mathcal{S}} P(s' | s, a) d_\infty(q_1, q_2) \\ & = \gamma d_\infty(q_1, q_2), \end{align*} which implies that $d_\infty(T^*_M q_1, T^*_M q_2) \leq \gamma d_\infty(q_1, q_2)$ as desired. \end{proof} Lastly, the proof of Lemma \ref{lemma:q*_bound} follows from a straightforward calculation. \begin{proof}[Proof of Lemma \ref{lemma:q*_bound}] Let $M = \langle\mathcal{S}, \mathcal{A}, P, r, \gamma\rangle$. Then, for any $(s, a) \in \mathcal{S} \times \mathcal{A}$, \begin{align*} \abs{q^*_M(s, a)} & = \abs{T^*_Mq^*_M(s, a)} \\ & = \abs{r(s, a) + \gamma\sum_{s' \in \mathcal{S}} P(s' | s, a)\sup_{a' \in \mathcal{A}} q^*_M(s', a')} \\ & \leq \abs{r(s, a)} + \gamma\sum_{s' \in \mathcal{S}} P(s' | s, a)\abs{\sup_{a' \in \mathcal{A}} q^*_M(s', a')} \\ & \leq \norm{r}_\infty + \gamma\norm{q^*_M}_\infty\sum_{s' \in \mathcal{S}} P(s' | s, a) \\ & = \norm{r}_\infty + \gamma\norm{q^*_M}_\infty. \end{align*} In particular, $\norm{q^*_M}_\infty \leq \norm{r}_\infty + \gamma\norm{q^*_M}_\infty$ and hence $\norm{q^*_M}_\infty \leq \frac{\norm{r}_\infty}{1 - \gamma}$. \end{proof} \end{document}
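As a companion to the analysis above, the following is a minimal numerical sketch of the tabular Q-learning iterates (our own illustration, not code from the paper: the four-state ring MDP, the epsilon-greedy sampling, and the per-pair 1/(visit count) stepsizes are assumptions chosen so that every state-action pair is visited infinitely often and the Robbins-Monro condition holds).

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny illustrative MDP: 4 states on a ring, 2 actions (step left/right),
    # reward 1 for entering state 0, discount rate 0.9.
    n_states, n_actions, gamma = 4, 2, 0.9

    def step(s, a):
        s_next = (s + (1 if a == 1 else -1)) % n_states
        return s_next, float(s_next == 0)

    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))  # per-pair visit counts
    s = 0
    for t in range(200_000):
        # Epsilon-greedy behaviour policy: visits every pair infinitely often
        a = int(rng.integers(n_actions)) if rng.random() < 0.2 else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        visits[s, a] += 1
        alpha = 1.0 / visits[s, a]  # sum over visits diverges, sum of squares converges
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r + gamma * Q[s_next].max())
        s = s_next

    print(np.round(Q, 3))  # an estimate of the optimal action-value function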
Biology 2015: The Cell Cycle and Cellular Reproduction (Sylvia S. Mader, Michael Windelspecht). Chapter Questions.

For questions 1–4, match each stage of the cell cycle to its correct description. $\begin{array}{ll}{\text { a. } G_{1} \text { stage }} & {\text { b. S stage }} \\ {\text { c. } G_{2} \text { stage }} & {\text { d. } M(\text { mitotic }) \text { stage }}\end{array}$

1. At the end of this stage, each chromosome consists of two attached chromatids.
2. During this stage, daughter chromosomes are distributed to two daughter nuclei.
3. The cell doubles its organelles and accumulates the materials needed for DNA synthesis.
4. The cell synthesizes the proteins needed for cell division.

Which is not true of the cell cycle? a. The cell cycle is controlled by internal/external signals. b. Cyclin is a signaling molecule that increases and decreases as the cycle continues. c. DNA damage can stop the cell cycle at the G1 checkpoint. d. Apoptosis occurs frequently during the cell cycle.

The diploid number of chromosomes a. is the 2n number. b. is in a parent cell and therefore in the two daughter cells following mitosis. c. varies according to the particular organism. d. is present in most somatic cells. e. All of these are correct.

The form of DNA that contains genes that are actively being transcribed is called a. histones. b. telomeres. c. heterochromatin. d. euchromatin.

Histones are involved in a. regulating the checkpoints of the cell cycle. b. lengthening the ends of the telomeres. c. compacting the DNA molecule. d. cytokinesis.

At the metaphase plate during metaphase of mitosis, there are a. single chromosomes. b. duplicated chromosomes. c. G1 stage chromosomes. d. always 23 chromosomes.

During which mitotic phases are duplicated chromosomes present? a. all but telophase b. prophase and anaphase c. all but anaphase and telophase d. only during metaphase at the metaphase plate e. Both a and b are correct.

Which of these is paired incorrectly? a. prometaphase - the kinetochores become attached to spindle fibers b. anaphase - daughter chromosomes migrate toward spindle poles c. prophase - the nucleolus disappears and the nuclear envelope disintegrates d. metaphase - the chromosomes are aligned in the metaphase plate e. telophase - a resting phase between cell division cycles

Which of the following is not characteristic of cancer cells? a. Cancer cells often undergo angiogenesis. b. Cancer cells tend to be nonspecialized. c. Cancer cells undergo apoptosis. d. Cancer cells often have abnormal nuclei. e. Cancer cells can metastasize.

Which of the following statements is true? a. Proto-oncogenes cause a loss of control of the cell cycle. b. The products of oncogenes may inhibit the cell cycle. c. Tumor suppressor gene products inhibit the cell cycle. d. A mutation in a tumor suppressor gene may inhibit the cell cycle.

In contrast to a eukaryotic chromosome, a prokaryotic chromosome a. is shorter and fatter. b. has a single loop of DNA. c. never replicates. d. contains many histones.

Which of the following is the term used to describe asexual reproduction in a single-celled organism? a. cytokinesis b. mitosis c. binary fission d. All of these are correct.
Is there an easy way to remember what the difference is between a Type I error and a Type II error, such as a mnemonic? How do professional statisticians do it - is it just something that they know from using or discussing it often? Thanks.

Tags: terminology, type-i-errors, type-ii-errors

Table of error types, i.e., the relations between the truth/falseness of the null hypothesis and the outcome of the test:

If the null hypothesis (H0) is true: rejecting H0 is a Type I error (a false positive), while failing to reject H0 is a correct inference.
If the null hypothesis (H0) is false: rejecting H0 is a correct inference, while failing to reject H0 is a Type II error (a false negative).

Highlights from the answers:

One mnemonic offered: "Twelve Tan Elvis's Ate Nine Hams With Intelligent Irish Farmers." No funnier, but commonplace enough to remember.

It helps that when I was at school, every time we wrote up a hypothesis test we were nagged to write "$\alpha = ...$" at the start, so I knew what $\alpha$ was: I set the criterion for the probability that I will make a false rejection.

An alarm analogy: reducing the chances of a Type II error would mean making the alarm hypersensitive, which in turn would increase the chances of a Type I error.

A Type II error, or false negative, is where a test result indicates that a condition failed while it actually was successful; that is, a Type II error is committed when we fail to reject a null hypothesis that is false.

Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram.

Type One and Type Two Errors are discussed at length in most introductory college texts. No matter how many data a researcher collects, he can never absolutely prove (or disprove) his hypothesis. The prediction that patients with attempted suicides will have a different rate of tranquilizer use, either higher or lower than control patients, is a two-tailed hypothesis. Data dredging after the data have been collected, and post hoc deciding to change over to one-tailed hypothesis testing to reduce the sample size and P value, are indicative of a lack of scientific integrity. Accepting only a 10% chance of missing a true association of a given size represents a power of 0.90, i.e., a 90% chance of finding an association of that size.
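To tie the definitions together, here is a minimal simulation sketch (an illustration added here, not taken from any answer above; the one-sample z-test, the sample size of 25, and the true mean of 0.5 under the alternative are arbitrary choices). It estimates the Type I error rate when the null hypothesis is true and the Type II error rate under a specific alternative:

    import numpy as np

    rng = np.random.default_rng(0)
    n, alpha, trials = 25, 0.05, 100_000
    z_crit = 1.96  # two-sided 5% critical value for a z-test

    # Type I error rate: data generated under H0 (mean 0, known sd 1),
    # so every rejection is a false positive
    samples = rng.normal(0.0, 1.0, size=(trials, n))
    z = np.sqrt(n) * samples.mean(axis=1)
    type_1_rate = np.mean(np.abs(z) > z_crit)  # should be close to alpha

    # Type II error rate: data generated under a true mean of 0.5,
    # so every non-rejection is a false negative
    samples = rng.normal(0.5, 1.0, size=(trials, n))
    z = np.sqrt(n) * samples.mean(axis=1)
    type_2_rate = np.mean(np.abs(z) <= z_crit)

    print(f"Type I rate ~ {type_1_rate:.3f} (nominal {alpha})")
    print(f"Type II rate ~ {type_2_rate:.3f}; power ~ {1 - type_2_rate:.3f}")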
Kosambi–Karhunen–Loève theorem In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem[1][2] states that a stochastic process can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as Hotelling transform and eigenvector transform, and is closely related to principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields.[3] Stochastic processes given by infinite series of this form were first considered by Damodar Dharmananda Kosambi.[4][5] There exist many such expansions of a stochastic process: if the process is indexed over [a, b], any orthonormal basis of L2([a, b]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error. In contrast to a Fourier series where the coefficients are fixed numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen–Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. One can think that the Karhunen–Loève transform adapts to the process in order to produce the best possible basis for its expansion. In the case of a centered stochastic process {Xt}t ∈ [a, b] (centered means E[Xt] = 0 for all t ∈ [a, b]) satisfying a technical continuity condition, X admits a decomposition $X_{t}=\sum _{k=1}^{\infty }Z_{k}e_{k}(t)$ where Zk are pairwise uncorrelated random variables and the functions ek are continuous real-valued functions on [a, b] that are pairwise orthogonal in L2([a, b]). It is therefore sometimes said that the expansion is bi-orthogonal since the random coefficients Zk are orthogonal in the probability space while the deterministic functions ek are orthogonal in the time domain. The general case of a process Xt that is not centered can be brought back to the case of a centered process by considering Xt − E[Xt] which is a centered process. Moreover, if the process is Gaussian, then the random variables Zk are Gaussian and stochastically independent. This result generalizes the Karhunen–Loève transform. An important example of a centered real stochastic process on [0, 1] is the Wiener process; the Karhunen–Loève theorem can be used to provide a canonical orthogonal representation for it. In this case the expansion consists of sinusoidal functions. The above expansion into uncorrelated random variables is also known as the Karhunen–Loève expansion or Karhunen–Loève decomposition. The empirical version (i.e., with the coefficients computed from a sample) is known as the Karhunen–Loève transform (KLT), principal component analysis, proper orthogonal decomposition (POD), empirical orthogonal functions (a term used in meteorology and geophysics), or the Hotelling transform. Formulation • Throughout this article, we will consider a random process Xt defined over a probability space (Ω, F, P) and indexed over a closed interval [a, b], which is square-integrable, is zero-mean, and with covariance function KX(s, t). In other words, we have: $\forall t\in [a,b]\qquad X_{t}\in L^{2}(\Omega ,F,\mathbf {P} ),\quad {\text{i.e. 
}}\mathbf {E} [X_{t}^{2}]<\infty ,$ $\forall t\in [a,b]\qquad \mathbf {E} [X_{t}]=0,$ $\forall t,s\in [a,b]\qquad K_{X}(s,t)=\mathbf {E} [X_{s}X_{t}].$ The square-integrable condition $\mathbf {E} [X_{t}^{2}]<\infty $ is logically equivalent to $K_{X}(s,t)$ being finite for all $s,t\in [a,b]$.[6] • We associate to KX a linear operator (more specifically a Hilbert–Schmidt integral operator) TKX defined in the following way: ${\begin{aligned}&T_{K_{X}}&:L^{2}([a,b])&\to L^{2}([a,b])\\&&:f\mapsto T_{K_{X}}f&=\int _{a}^{b}K_{X}(s,\cdot )f(s)\,ds\end{aligned}}$ Since TKX is a linear operator, it makes sense to talk about its eigenvalues λk and eigenfunctions ek, which are found solving the homogeneous Fredholm integral equation of the second kind $\int _{a}^{b}K_{X}(s,t)e_{k}(s)\,ds=\lambda _{k}e_{k}(t)$ Statement of the theorem Theorem. Let Xt be a zero-mean square-integrable stochastic process defined over a probability space (Ω, F, P) and indexed over a closed and bounded interval [a, b], with continuous covariance function KX(s, t). Then KX(s,t) is a Mercer kernel and letting ek be an orthonormal basis on L2([a, b]) formed by the eigenfunctions of TKX with respective eigenvalues λk, Xt admits the following representation $X_{t}=\sum _{k=1}^{\infty }Z_{k}e_{k}(t)$ where the convergence is in L2, uniform in t and $Z_{k}=\int _{a}^{b}X_{t}e_{k}(t)\,dt$ Furthermore, the random variables Zk have zero-mean, are uncorrelated and have variance λk $\mathbf {E} [Z_{k}]=0,~\forall k\in \mathbb {N} \qquad {\mbox{and}}\qquad \mathbf {E} [Z_{i}Z_{j}]=\delta _{ij}\lambda _{j},~\forall i,j\in \mathbb {N} $ Note that by generalizations of Mercer's theorem we can replace the interval [a, b] with other compact spaces C and the Lebesgue measure on [a, b] with a Borel measure whose support is C. Proof • The covariance function KX satisfies the definition of a Mercer kernel. By Mercer's theorem, there consequently exists a set λk, ek(t) of eigenvalues and eigenfunctions of TKX forming an orthonormal basis of L2([a,b]), and KX can be expressed as $K_{X}(s,t)=\sum _{k=1}^{\infty }\lambda _{k}e_{k}(s)e_{k}(t)$ • The process Xt can be expanded in terms of the eigenfunctions ek as: $X_{t}=\sum _{k=1}^{\infty }Z_{k}e_{k}(t)$ where the coefficients (random variables) Zk are given by the projection of Xt on the respective eigenfunctions $Z_{k}=\int _{a}^{b}X_{t}e_{k}(t)\,dt$ • We may then derive ${\begin{aligned}\mathbf {E} [Z_{k}]&=\mathbf {E} \left[\int _{a}^{b}X_{t}e_{k}(t)\,dt\right]=\int _{a}^{b}\mathbf {E} [X_{t}]e_{k}(t)dt=0\\[8pt]\mathbf {E} [Z_{i}Z_{j}]&=\mathbf {E} \left[\int _{a}^{b}\int _{a}^{b}X_{t}X_{s}e_{j}(t)e_{i}(s)\,dt\,ds\right]\\&=\int _{a}^{b}\int _{a}^{b}\mathbf {E} \left[X_{t}X_{s}\right]e_{j}(t)e_{i}(s)\,dt\,ds\\&=\int _{a}^{b}\int _{a}^{b}K_{X}(s,t)e_{j}(t)e_{i}(s)\,dt\,ds\\&=\int _{a}^{b}e_{i}(s)\left(\int _{a}^{b}K_{X}(s,t)e_{j}(t)\,dt\right)\,ds\\&=\lambda _{j}\int _{a}^{b}e_{i}(s)e_{j}(s)\,ds\\&=\delta _{ij}\lambda _{j}\end{aligned}}$ where we have used the fact that the ek are eigenfunctions of TKX and are orthonormal. • Let us now show that the convergence is in L2. 
Let $S_{N}=\sum _{k=1}^{N}Z_{k}e_{k}(t).$ Then: ${\begin{aligned}\mathbf {E} \left[\left|X_{t}-S_{N}\right|^{2}\right]&=\mathbf {E} \left[X_{t}^{2}\right]+\mathbf {E} \left[S_{N}^{2}\right]-2\mathbf {E} \left[X_{t}S_{N}\right]\\&=K_{X}(t,t)+\mathbf {E} \left[\sum _{k=1}^{N}\sum _{\ell =1}^{N}Z_{k}Z_{\ell }e_{k}(t)e_{\ell }(t)\right]-2\mathbf {E} \left[X_{t}\sum _{k=1}^{N}Z_{k}e_{k}(t)\right]\\&=K_{X}(t,t)+\sum _{k=1}^{N}\lambda _{k}e_{k}(t)^{2}-2\mathbf {E} \left[\sum _{k=1}^{N}\int _{a}^{b}X_{t}X_{s}e_{k}(s)e_{k}(t)\,ds\right]\\&=K_{X}(t,t)-\sum _{k=1}^{N}\lambda _{k}e_{k}(t)^{2}\end{aligned}}$ which goes to 0 by Mercer's theorem.

Properties of the Karhunen–Loève transform

Special case: Gaussian distribution

Since the limit in the mean of jointly Gaussian random variables is jointly Gaussian, and jointly Gaussian random (centered) variables are independent if and only if they are orthogonal, we can also conclude:

Theorem. The variables Zi have a joint Gaussian distribution and are stochastically independent if the original process {Xt}t is Gaussian.

In the Gaussian case, since the variables Zi are independent, we can say more: $\lim _{N\to \infty }\sum _{i=1}^{N}e_{i}(t)Z_{i}(\omega )=X_{t}(\omega )$ almost surely.

The Karhunen–Loève transform decorrelates the process

This is a consequence of the uncorrelatedness of the Zk established in the proof above, where we computed $\mathbf {E} [Z_{i}Z_{j}]=\delta _{ij}\lambda _{j}$; full independence of the Zk holds only in the Gaussian case.

The Karhunen–Loève expansion minimizes the total mean square error

In the introduction, we mentioned that the truncated Karhunen–Loève expansion was the best approximation of the original process in the sense that it reduces the total mean-square error resulting from its truncation. Because of this property, it is often said that the KL transform optimally compacts the energy. More specifically, given any orthonormal basis {fk} of L2([a, b]), we may decompose the process Xt as: $X_{t}(\omega )=\sum _{k=1}^{\infty }A_{k}(\omega )f_{k}(t)$ where $A_{k}(\omega )=\int _{a}^{b}X_{t}(\omega )f_{k}(t)\,dt$ and we may approximate Xt by the finite sum ${\hat {X}}_{t}(\omega )=\sum _{k=1}^{N}A_{k}(\omega )f_{k}(t)$ for some integer N.

Claim. Of all such approximations, the KL approximation is the one that minimizes the total mean square error (provided we have arranged the eigenvalues in decreasing order).

Proof Consider the error resulting from the truncation at the N-th term in the following orthonormal expansion: $\varepsilon _{N}(t)=\sum _{k=N+1}^{\infty }A_{k}(\omega )f_{k}(t)$ The mean-square error $\varepsilon _{N}^{2}(t)$ can be written as: ${\begin{aligned}\varepsilon _{N}^{2}(t)&=\mathbf {E} \left[\sum _{i=N+1}^{\infty }\sum _{j=N+1}^{\infty }A_{i}(\omega )A_{j}(\omega )f_{i}(t)f_{j}(t)\right]\\&=\sum _{i=N+1}^{\infty }\sum _{j=N+1}^{\infty }\mathbf {E} \left[\int _{a}^{b}\int _{a}^{b}X_{t}X_{s}f_{i}(t)f_{j}(s)\,ds\,dt\right]f_{i}(t)f_{j}(t)\\&=\sum _{i=N+1}^{\infty }\sum _{j=N+1}^{\infty }f_{i}(t)f_{j}(t)\int _{a}^{b}\int _{a}^{b}K_{X}(s,t)f_{i}(t)f_{j}(s)\,ds\,dt\end{aligned}}$ We then integrate this last equality over [a, b]. The orthonormality of the fk yields: $\int _{a}^{b}\varepsilon _{N}^{2}(t)\,dt=\sum _{k=N+1}^{\infty }\int _{a}^{b}\int _{a}^{b}K_{X}(s,t)f_{k}(t)f_{k}(s)\,ds\,dt$ The problem of minimizing the total mean-square error thus comes down to minimizing the right hand side of this equality subject to the constraint that the fk be normalized.
We hence introduce βk, the Lagrangian multipliers associated with these constraints, and aim at minimizing the following function: $Er[f_{k}(t),k\in \{N+1,\ldots \}]=\sum _{k=N+1}^{\infty }\left[\int _{a}^{b}\int _{a}^{b}K_{X}(s,t)f_{k}(t)f_{k}(s)\,ds\,dt-\beta _{k}\left(\int _{a}^{b}f_{k}(t)f_{k}(t)\,dt-1\right)\right]$ Differentiating with respect to fi(t) (this is a functional derivative) and setting the derivative to 0 yields: ${\frac {\partial Er}{\partial f_{i}(t)}}=\int _{a}^{b}\left(\int _{a}^{b}K_{X}(s,t)f_{i}(s)\,ds-\beta _{i}f_{i}(t)\right)\,dt=0$ which is satisfied in particular when $\int _{a}^{b}K_{X}(s,t)f_{i}(s)\,ds=\beta _{i}f_{i}(t).$ In other words, when the fk are chosen to be the eigenfunctions of TKX, hence resulting in the KL expansion.

Explained variance

An important observation is that since the random coefficients Zk of the KL expansion are uncorrelated, the Bienaymé formula asserts that the variance of Xt is simply the sum of the variances of the individual components of the sum: $\operatorname {var} [X_{t}]=\sum _{k=1}^{\infty }e_{k}(t)^{2}\operatorname {var} [Z_{k}]=\sum _{k=1}^{\infty }\lambda _{k}e_{k}(t)^{2}$ Integrating over [a, b] and using the orthonormality of the ek, we obtain that the total variance of the process is: $\int _{a}^{b}\operatorname {var} [X_{t}]\,dt=\sum _{k=1}^{\infty }\lambda _{k}$ In particular, the total variance of the N-truncated approximation is $\sum _{k=1}^{N}\lambda _{k}.$ As a result, the N-truncated expansion explains ${\frac {\sum _{k=1}^{N}\lambda _{k}}{\sum _{k=1}^{\infty }\lambda _{k}}}$ of the variance; and if we are content with an approximation that explains, say, 95% of the variance, then we just have to determine an $N\in \mathbb {N} $ such that ${\frac {\sum _{k=1}^{N}\lambda _{k}}{\sum _{k=1}^{\infty }\lambda _{k}}}\geq 0.95.$

The Karhunen–Loève expansion has the minimum representation entropy property

Given a representation of $X_{t}=\sum _{k=1}^{\infty }W_{k}\varphi _{k}(t)$, for some orthonormal basis $\varphi _{k}(t)$ and random $W_{k}$, we let $p_{k}=\mathbb {E} [|W_{k}|^{2}]/\mathbb {E} [|X_{t}|_{L^{2}}^{2}]$, so that $\sum _{k=1}^{\infty }p_{k}=1$. We may then define the representation entropy to be $H(\{\varphi _{k}\})=-\sum _{k}p_{k}\log(p_{k})$. Then we have $H(\{\varphi _{k}\})\geq H(\{e_{k}\})$, for all choices of $\varphi _{k}$. That is, the KL-expansion has minimal representation entropy. Proof: Denote the coefficients obtained for the basis $e_{k}(t)$ as $p_{k}$, and for $\varphi _{k}(t)$ as $q_{k}$. Choose $N\geq 1$.
Note that since $e_{k}$ minimizes the mean squared error, we have that $\mathbb {E} \left|\sum _{k=1}^{N}Z_{k}e_{k}(t)-X_{t}\right|_{L^{2}}^{2}\leq \mathbb {E} \left|\sum _{k=1}^{N}W_{k}\varphi _{k}(t)-X_{t}\right|_{L^{2}}^{2}$ Expanding the right hand size, we get: $\mathbb {E} \left|\sum _{k=1}^{N}W_{k}\varphi _{k}(t)-X_{t}\right|_{L^{2}}^{2}=\mathbb {E} |X_{t}^{2}|_{L^{2}}+\sum _{k=1}^{N}\sum _{\ell =1}^{N}\mathbb {E} [W_{\ell }\varphi _{\ell }(t)W_{k}^{*}\varphi _{k}^{*}(t)]_{L^{2}}-\sum _{k=1}^{N}\mathbb {E} [W_{k}\varphi _{k}X_{t}^{*}]_{L^{2}}-\sum _{k=1}^{N}\mathbb {E} [X_{t}W_{k}^{*}\varphi _{k}^{*}(t)]_{L^{2}}$ Using the orthonormality of $\varphi _{k}(t)$, and expanding $X_{t}$ in the $\varphi _{k}(t)$ basis, we get that the right hand size is equal to: $\mathbb {E} [X_{t}]_{L^{2}}^{2}-\sum _{k=1}^{N}\mathbb {E} [|W_{k}|^{2}]$ We may perform identical analysis for the $e_{k}(t)$, and so rewrite the above inequality as: $\mathbb {E} [X_{t}]_{L^{2}}^{2}-\sum _{k=1}^{N}\mathbb {E} [|Z_{k}|^{2}]}\leq \mathbb {E} [X_{t}]_{L^{2}}^{2}-\sum _{k=1}^{N}\mathbb {E} [|W_{k}|^{2}]}$ Subtracting the common first term, and dividing by $\mathbb {E} [|X_{t}|_{L^{2}}^{2}]$, we obtain that: $\sum _{k=1}^{N}p_{k}\geq \sum _{k=1}^{N}q_{k}$ This implies that: $-\sum _{k=1}^{\infty }p_{k}\log(p_{k})\leq -\sum _{k=1}^{\infty }q_{k}\log(q_{k})$ Linear Karhunen–Loève approximations Consider a whole class of signals we want to approximate over the first M vectors of a basis. These signals are modeled as realizations of a random vector Y[n] of size N. To optimize the approximation we design a basis that minimizes the average approximation error. This section proves that optimal bases are Karhunen–Loeve bases that diagonalize the covariance matrix of Y. The random vector Y can be decomposed in an orthogonal basis $\left\{g_{m}\right\}_{0\leq m\leq N}$ as follows: $Y=\sum _{m=0}^{N-1}\left\langle Y,g_{m}\right\rangle g_{m},$ where each $\left\langle Y,g_{m}\right\rangle =\sum _{n=0}^{N-1}{Y[n]}g_{m}^{*}[n]$ is a random variable. The approximation from the first M ≤ N vectors of the basis is $Y_{M}=\sum _{m=0}^{M-1}\left\langle Y,g_{m}\right\rangle g_{m}$ The energy conservation in an orthogonal basis implies $\varepsilon [M]=\mathbf {E} \left\{\left\|Y-Y_{M}\right\|^{2}\right\}=\sum _{m=M}^{N-1}\mathbf {E} \left\{\left|\left\langle Y,g_{m}\right\rangle \right|^{2}\right\}$ This error is related to the covariance of Y defined by $R[n,m]=\mathbf {E} \left\{Y[n]Y^{*}[m]\right\}$ For any vector x[n] we denote by K the covariance operator represented by this matrix, $\mathbf {E} \left\{\left|\langle Y,x\rangle \right|^{2}\right\}=\langle Kx,x\rangle =\sum _{n=0}^{N-1}\sum _{m=0}^{N-1}R[n,m]x[n]x^{*}[m]$ The error ε[M] is therefore a sum of the last N − M coefficients of the covariance operator $\varepsilon [M]=\sum _{m=M}^{N-1}{\left\langle Kg_{m},g_{m}\right\rangle }$ The covariance operator K is Hermitian and Positive and is thus diagonalized in an orthogonal basis called a Karhunen–Loève basis. The following theorem states that a Karhunen–Loève basis is optimal for linear approximations. Theorem (Optimality of Karhunen–Loève basis). Let K be a covariance operator. For all M ≥ 1, the approximation error $\varepsilon [M]=\sum _{m=M}^{N-1}\left\langle Kg_{m},g_{m}\right\rangle $ is minimum if and only if $\left\{g_{m}\right\}_{0\leq m<N}$ is a Karhunen–Loeve basis ordered by decreasing eigenvalues. 
$\left\langle Kg_{m},g_{m}\right\rangle \geq \left\langle Kg_{m+1},g_{m+1}\right\rangle ,\qquad 0\leq m<N-1.$ Non-Linear approximation in bases Linear approximations project the signal on M vectors a priori. The approximation can be made more precise by choosing the M orthogonal vectors depending on the signal properties. This section analyzes the general performance of these non-linear approximations. A signal $f\in \mathrm {H} $ is approximated with M vectors selected adaptively in an orthonormal basis for $\mathrm {H} $ $\mathrm {B} =\left\{g_{m}\right\}_{m\in \mathbb {N} }$ Let $f_{M}$ be the projection of f over M vectors whose indices are in IM: $f_{M}=\sum _{m\in I_{M}}\left\langle f,g_{m}\right\rangle g_{m}$ The approximation error is the sum of the remaining coefficients $\varepsilon [M]=\left\{\left\|f-f_{M}\right\|^{2}\right\}=\sum _{m\notin I_{M}}^{N-1}\left\{\left|\left\langle f,g_{m}\right\rangle \right|^{2}\right\}$ To minimize this error, the indices in IM must correspond to the M vectors having the largest inner product amplitude $\left|\left\langle f,g_{m}\right\rangle \right|.$ These are the vectors that best correlate f. They can thus be interpreted as the main features of f. The resulting error is necessarily smaller than the error of a linear approximation which selects the M approximation vectors independently of f. Let us sort $\left\{\left|\left\langle f,g_{m}\right\rangle \right|\right\}_{m\in \mathbb {N} }$ in decreasing order $\left|\left\langle f,g_{m_{k}}\right\rangle \right|\geq \left|\left\langle f,g_{m_{k+1}}\right\rangle \right|.$ The best non-linear approximation is $f_{M}=\sum _{k=1}^{M}\left\langle f,g_{m_{k}}\right\rangle g_{m_{k}}$ It can also be written as inner product thresholding: $f_{M}=\sum _{m=0}^{\infty }\theta _{T}\left(\left\langle f,g_{m}\right\rangle \right)g_{m}$ with $T=\left|\left\langle f,g_{m_{M}}\right\rangle \right|,\qquad \theta _{T}(x)={\begin{cases}x&|x|\geq T\\0&|x|<T\end{cases}}$ The non-linear error is $\varepsilon [M]=\left\{\left\|f-f_{M}\right\|^{2}\right\}=\sum _{k=M+1}^{\infty }\left\{\left|\left\langle f,g_{m_{k}}\right\rangle \right|^{2}\right\}$ this error goes quickly to zero as M increases, if the sorted values of $\left|\left\langle f,g_{m_{k}}\right\rangle \right|$ have a fast decay as k increases. This decay is quantified by computing the $\mathrm {I} ^{\mathrm {P} }$ norm of the signal inner products in B: $\|f\|_{\mathrm {B} ,p}=\left(\sum _{m=0}^{\infty }\left|\left\langle f,g_{m}\right\rangle \right|^{p}\right)^{\frac {1}{p}}$ The following theorem relates the decay of ε[M] to $\|f\|_{\mathrm {B} ,p}$ Theorem (decay of error). If $\|f\|_{\mathrm {B} ,p}<\infty $ with p < 2 then $\varepsilon [M]\leq {\frac {\|f\|_{\mathrm {B} ,p}^{2}}{{\frac {2}{p}}-1}}M^{1-{\frac {2}{p}}}$ and $\varepsilon [M]=o\left(M^{1-{\frac {2}{p}}}\right).$ Conversely, if $\varepsilon [M]=o\left(M^{1-{\frac {2}{p}}}\right)$ then $\|f\|_{\mathrm {B} ,q}<\infty $ for any q > p. Non-optimality of Karhunen–Loève bases To further illustrate the differences between linear and non-linear approximations, we study the decomposition of a simple non-Gaussian random vector in a Karhunen–Loève basis. Processes whose realizations have a random translation are stationary. The Karhunen–Loève basis is then a Fourier basis and we study its performance. 
To simplify the analysis, consider a random vector Y[n] of size N that is random shift modulo N of a deterministic signal f[n] of zero mean $\sum _{n=0}^{N-1}f[n]=0$ $Y[n]=f[(n-p){\bmod {N}}]$ The random shift P is uniformly distributed on [0, N − 1]: $\Pr(P=p)={\frac {1}{N}},\qquad 0\leq p<N$ Clearly $\mathbf {E} \{Y[n]\}={\frac {1}{N}}\sum _{p=0}^{N-1}f[(n-p){\bmod {N}}]=0$ and $R[n,k]=\mathbf {E} \{Y[n]Y[k]\}={\frac {1}{N}}\sum _{p=0}^{N-1}f[(n-p){\bmod {N}}]f[(k-p){\bmod {N}}]={\frac {1}{N}}f\Theta {\bar {f}}[n-k],\quad {\bar {f}}[n]=f[-n]$ Hence $R[n,k]=R_{Y}[n-k],\qquad R_{Y}[k]={\frac {1}{N}}f\Theta {\bar {f}}[k]$ Since RY is N periodic, Y is a circular stationary random vector. The covariance operator is a circular convolution with RY and is therefore diagonalized in the discrete Fourier Karhunen–Loève basis $\left\{{\frac {1}{\sqrt {N}}}e^{i2\pi mn/N}\right\}_{0\leq m<N}.$ The power spectrum is Fourier transform of RY: $P_{Y}[m]={\hat {R}}_{Y}[m]={\frac {1}{N}}\left|{\hat {f}}[m]\right|^{2}$ Example: Consider an extreme case where $f[n]=\delta [n]-\delta [n-1]$. A theorem stated above guarantees that the Fourier Karhunen–Loève basis produces a smaller expected approximation error than a canonical basis of Diracs $\left\{g_{m}[n]=\delta [n-m]\right\}_{0\leq m<N}$. Indeed, we do not know a priori the abscissa of the non-zero coefficients of Y, so there is no particular Dirac that is better adapted to perform the approximation. But the Fourier vectors cover the whole support of Y and thus absorb a part of the signal energy. $\mathbf {E} \left\{\left|\left\langle Y[n],{\frac {1}{\sqrt {N}}}e^{i2\pi mn/N}\right\rangle \right|^{2}\right\}=P_{Y}[m]={\frac {4}{N}}\sin ^{2}\left({\frac {\pi k}{N}}\right)$ Selecting higher frequency Fourier coefficients yields a better mean-square approximation than choosing a priori a few Dirac vectors to perform the approximation. The situation is totally different for non-linear approximations. If $f[n]=\delta [n]-\delta [n-1]$ then the discrete Fourier basis is extremely inefficient because f and hence Y have an energy that is almost uniformly spread among all Fourier vectors. In contrast, since f has only two non-zero coefficients in the Dirac basis, a non-linear approximation of Y with M ≥ 2 gives zero error.[7] Principal component analysis Main article: Principal component analysis We have established the Karhunen–Loève theorem and derived a few properties thereof. We also noted that one hurdle in its application was the numerical cost of determining the eigenvalues and eigenfunctions of its covariance operator through the Fredholm integral equation of the second kind $\int _{a}^{b}K_{X}(s,t)e_{k}(s)\,ds=\lambda _{k}e_{k}(t).$ However, when applied to a discrete and finite process $\left(X_{n}\right)_{n\in \{1,\ldots ,N\}}$, the problem takes a much simpler form and standard algebra can be used to carry out the calculations. Note that a continuous process can also be sampled at N points in time in order to reduce the problem to a finite version. We henceforth consider a random N-dimensional vector $X=\left(X_{1}~X_{2}~\ldots ~X_{N}\right)^{T}$. As mentioned above, X could contain N samples of a signal but it can hold many more representations depending on the field of application. For instance it could be the answers to a survey or economic data in an econometrics analysis. As in the continuous version, we assume that X is centered, otherwise we can let $X:=X-\mu _{X}$ (where $\mu _{X}$ is the mean vector of X) which is centered. 
Let us adapt the procedure to the discrete case. Covariance matrix Recall that the main implication and difficulty of the KL transformation is computing the eigenvectors of the linear operator associated to the covariance function, which are given by the solutions to the integral equation written above. Define Σ, the covariance matrix of X, as an N × N matrix whose elements are given by: $\Sigma _{ij}=\mathbf {E} [X_{i}X_{j}],\qquad \forall i,j\in \{1,\ldots ,N\}$ Rewriting the above integral equation to suit the discrete case, we observe that it turns into: $\sum _{j=1}^{N}\Sigma _{ij}e_{j}=\lambda e_{i}\quad \Leftrightarrow \quad \Sigma e=\lambda e$ where $e=(e_{1}~e_{2}~\ldots ~e_{N})^{T}$ is an N-dimensional vector. The integral equation thus reduces to a simple matrix eigenvalue problem, which explains why the PCA has such a broad domain of applications. Since Σ is a positive definite symmetric matrix, it possesses a set of orthonormal eigenvectors forming a basis of $\mathbb {R} ^{N}$, and we write $\{\lambda _{i},\varphi _{i}\}_{i\in \{1,\ldots ,N\}}$ this set of eigenvalues and corresponding eigenvectors, listed in decreasing values of λi. Let also Φ be the orthonormal matrix consisting of these eigenvectors: ${\begin{aligned}\Phi &:=\left(\varphi _{1}~\varphi _{2}~\ldots ~\varphi _{N}\right)^{T}\\\Phi ^{T}\Phi &=I\end{aligned}}$ Principal component transform It remains to perform the actual KL transformation, called the principal component transform in this case. Recall that the transform was found by expanding the process with respect to the basis spanned by the eigenvectors of the covariance function. In this case, we hence have: $X=\sum _{i=1}^{N}\langle \varphi _{i},X\rangle \varphi _{i}=\sum _{i=1}^{N}\varphi _{i}^{T}X\varphi _{i}$ In a more compact form, the principal component transform of X is defined by: ${\begin{cases}Y=\Phi ^{T}X\\X=\Phi Y\end{cases}}$ The i-th component of Y is $Y_{i}=\varphi _{i}^{T}X$, the projection of X on $\varphi _{i}$ and the inverse transform X = ΦY yields the expansion of X on the space spanned by the $\varphi _{i}$: $X=\sum _{i=1}^{N}Y_{i}\varphi _{i}=\sum _{i=1}^{N}\langle \varphi _{i},X\rangle \varphi _{i}$ As in the continuous case, we may reduce the dimensionality of the problem by truncating the sum at some $K\in \{1,\ldots ,N\}$ such that ${\frac {\sum _{i=1}^{K}\lambda _{i}}{\sum _{i=1}^{N}\lambda _{i}}}\geq \alpha $ where α is the explained variance threshold we wish to set. We can also reduce the dimensionality through the use of multilevel dominant eigenvector estimation (MDEE).[8] Examples The Wiener process There are numerous equivalent characterizations of the Wiener process which is a mathematical formalization of Brownian motion. Here we regard it as the centered standard Gaussian process Wt with covariance function $K_{W}(t,s)=\operatorname {cov} (W_{t},W_{s})=\min(s,t).$ We restrict the time domain to [a, b]=[0,1] without loss of generality. The eigenvectors of the covariance kernel are easily determined. 
These are $e_{k}(t)={\sqrt {2}}\sin \left(\left(k-{\tfrac {1}{2}}\right)\pi t\right)$ and the corresponding eigenvalues are $\lambda _{k}={\frac {1}{(k-{\frac {1}{2}})^{2}\pi ^{2}}}.$ Proof In order to find the eigenvalues and eigenvectors, we need to solve the integral equation: ${\begin{aligned}\int _{a}^{b}K_{W}(s,t)e(s)\,ds&=\lambda e(t)\qquad \forall t,0\leq t\leq 1\\\int _{0}^{1}\min(s,t)e(s)\,ds&=\lambda e(t)\qquad \forall t,0\leq t\leq 1\\\int _{0}^{t}se(s)\,ds+t\int _{t}^{1}e(s)\,ds&=\lambda e(t)\qquad \forall t,0\leq t\leq 1\end{aligned}}$ differentiating once with respect to t yields: $\int _{t}^{1}e(s)\,ds=\lambda e'(t)$ a second differentiation produces the following differential equation: $-e(t)=\lambda e''(t)$ The general solution of which has the form: $e(t)=A\sin \left({\frac {t}{\sqrt {\lambda }}}\right)+B\cos \left({\frac {t}{\sqrt {\lambda }}}\right)$ where A and B are two constants to be determined with the boundary conditions. Setting t = 0 in the initial integral equation gives e(0) = 0 which implies that B = 0 and similarly, setting t = 1 in the first differentiation yields e' (1) = 0, whence: $\cos \left({\frac {1}{\sqrt {\lambda }}}\right)=0$ which in turn implies that eigenvalues of TKX are: $\lambda _{k}=\left({\frac {1}{(k-{\frac {1}{2}})\pi }}\right)^{2},\qquad k\geq 1$ The corresponding eigenfunctions are thus of the form: $e_{k}(t)=A\sin \left((k-{\frac {1}{2}})\pi t\right),\qquad k\geq 1$ A is then chosen so as to normalize ek: $\int _{0}^{1}e_{k}^{2}(t)\,dt=1\quad \implies \quad A={\sqrt {2}}$ This gives the following representation of the Wiener process: Theorem. There is a sequence {Zi}i of independent Gaussian random variables with mean zero and variance 1 such that $W_{t}={\sqrt {2}}\sum _{k=1}^{\infty }Z_{k}{\frac {\sin \left(\left(k-{\frac {1}{2}}\right)\pi t\right)}{\left(k-{\frac {1}{2}}\right)\pi }}.$ Note that this representation is only valid for $t\in [0,1].$ On larger intervals, the increments are not independent. As stated in the theorem, convergence is in the L2 norm and uniform in t. The Brownian bridge Similarly the Brownian bridge $B_{t}=W_{t}-tW_{1}$ which is a stochastic process with covariance function $K_{B}(t,s)=\min(t,s)-ts$ can be represented as the series $B_{t}=\sum _{k=1}^{\infty }Z_{k}{\frac {{\sqrt {2}}\sin(k\pi t)}{k\pi }}$ Applications Adaptive optics systems sometimes use K–L functions to reconstruct wave-front phase information (Dai 1996, JOSA A). Karhunen–Loève expansion is closely related to the Singular Value Decomposition. The latter has myriad applications in image processing, radar, seismology, and the like. If one has independent vector observations from a vector valued stochastic process then the left singular vectors are maximum likelihood estimates of the ensemble KL expansion. Detection of a known continuous signal S(t) In communication, we usually have to decide whether a signal from a noisy channel contains valuable information. The following hypothesis testing is used for detecting continuous signal s(t) from channel output X(t), N(t) is the channel noise, which is usually assumed zero mean Gaussian process with correlation function $R_{N}(t,s)=E[N(t)N(s)]$ $H:X(t)=N(t),$ $K:X(t)=N(t)+s(t),\quad t\in (0,T)$ Signal detection in white noise When the channel noise is white, its correlation function is $R_{N}(t)={\tfrac {1}{2}}N_{0}\delta (t),$ and it has constant power spectrum density. 
In physically practical channel, the noise power is finite, so: $S_{N}(f)={\begin{cases}{\frac {N_{0}}{2}}&|f|<w\\0&|f|>w\end{cases}}$ Then the noise correlation function is sinc function with zeros at ${\frac {n}{2\omega }},n\in \mathbf {Z} .$ Since are uncorrelated and gaussian, they are independent. Thus we can take samples from X(t) with time spacing $\Delta t={\frac {n}{2\omega }}{\text{ within }}(0,''T'').$ Let $X_{i}=X(i\,\Delta t)$. We have a total of $n={\frac {T}{\Delta t}}=T(2\omega )=2\omega T$ i.i.d observations $\{X_{1},X_{2},\ldots ,X_{n}\}$ to develop the likelihood-ratio test. Define signal $S_{i}=S(i\,\Delta t)$, the problem becomes, $H:X_{i}=N_{i},$ $K:X_{i}=N_{i}+S_{i},i=1,2,\ldots ,n.$ The log-likelihood ratio ${\mathcal {L}}({\underline {x}})=\log {\frac {\sum _{i=1}^{n}(2S_{i}x_{i}-S_{i}^{2})}{2\sigma ^{2}}}\Leftrightarrow \Delta t\sum _{i=1}^{n}S_{i}x_{i}=\sum _{i=1}^{n}S(i\,\Delta t)x(i\,\Delta t)\,\Delta t\gtrless \lambda _{\cdot }2$ As t → 0, let: $G=\int _{0}^{T}S(t)x(t)\,dt.$ Then G is the test statistics and the Neyman–Pearson optimum detector is $G({\underline {x}})>G_{0}\Rightarrow K<G_{0}\Rightarrow H.$ As G is Gaussian, we can characterize it by finding its mean and variances. Then we get $H:G\sim N\left(0,{\tfrac {1}{2}}N_{0}E\right)$ $K:G\sim N\left(E,{\tfrac {1}{2}}N_{0}E\right)$ where $\mathbf {E} =\int _{0}^{T}S^{2}(t)\,dt$ is the signal energy. The false alarm error $\alpha =\int _{G_{0}}^{\infty }N\left(0,{\tfrac {1}{2}}N_{0}E\right)\,dG\Rightarrow G_{0}={\sqrt {{\tfrac {1}{2}}N_{0}E}}\Phi ^{-1}(1-\alpha )$ And the probability of detection: $\beta =\int _{G_{0}}^{\infty }N\left(E,{\tfrac {1}{2}}N_{0}E\right)\,dG=1-\Phi \left({\frac {G_{0}-E}{\sqrt {{\tfrac {1}{2}}N_{0}E}}}\right)=\Phi \left({\sqrt {\frac {2E}{N_{0}}}}-\Phi ^{-1}(1-\alpha )\right),$ where Φ is the cdf of standard normal, or Gaussian, variable. Signal detection in colored noise When N(t) is colored (correlated in time) Gaussian noise with zero mean and covariance function $R_{N}(t,s)=E[N(t)N(s)],$ we cannot sample independent discrete observations by evenly spacing the time. Instead, we can use K–L expansion to decorrelate the noise process and get independent Gaussian observation 'samples'. The K–L expansion of N(t): $N(t)=\sum _{i=1}^{\infty }N_{i}\Phi _{i}(t),\quad 0<t<T,$ where $N_{i}=\int N(t)\Phi _{i}(t)\,dt$ and the orthonormal bases $\{\Phi _{i}{t}\}$ are generated by kernel $R_{N}(t,s)$, i.e., solution to $\int _{0}^{T}R_{N}(t,s)\Phi _{i}(s)\,ds=\lambda _{i}\Phi _{i}(t),\quad \operatorname {var} [N_{i}]=\lambda _{i}.$ Do the expansion: $S(t)=\sum _{i=1}^{\infty }S_{i}\Phi _{i}(t),$ where $S_{i}=\int _{0}^{T}S(t)\Phi _{i}(t)\,dt$, then $X_{i}=\int _{0}^{T}X(t)\Phi _{i}(t)\,dt=N_{i}$ under H and $N_{i}+S_{i}$ under K. Let ${\overline {X}}=\{X_{1},X_{2},\dots \}$, we have $N_{i}$ are independent Gaussian r.v's with variance $\lambda _{i}$ under H: $\{X_{i}\}$ are independent Gaussian r.v's. $f_{H}[x(t)|0<t<T]=f_{H}({\underline {x}})=\prod _{i=1}^{\infty }{\frac {1}{\sqrt {2\pi \lambda _{i}}}}\exp \left(-{\frac {x_{i}^{2}}{2\lambda _{i}}}\right)$ under K: $\{X_{i}-S_{i}\}$ are independent Gaussian r.v's. 
$f_{K}[x(t)\mid 0<t<T]=f_{K}({\underline {x}})=\prod _{i=1}^{\infty }{\frac {1}{\sqrt {2\pi \lambda _{i}}}}\exp \left(-{\frac {(x_{i}-S_{i})^{2}}{2\lambda _{i}}}\right)$ Hence, the log-LR is given by ${\mathcal {L}}({\underline {x}})=\sum _{i=1}^{\infty }{\frac {2S_{i}x_{i}-S_{i}^{2}}{2\lambda _{i}}}$ and the optimum detector is $G=\sum _{i=1}^{\infty }S_{i}x_{i}\lambda _{i}>G_{0}\Rightarrow K,<G_{0}\Rightarrow H.$ Define $k(t)=\sum _{i=1}^{\infty }\lambda _{i}S_{i}\Phi _{i}(t),0<t<T,$ then $G=\int _{0}^{T}k(t)x(t)\,dt.$ How to find k(t) Since $\int _{0}^{T}R_{N}(t,s)k(s)\,ds=\sum _{i=1}^{\infty }\lambda _{i}S_{i}\int _{0}^{T}R_{N}(t,s)\Phi _{i}(s)\,ds=\sum _{i=1}^{\infty }S_{i}\Phi _{i}(t)=S(t),$ k(t) is the solution to $\int _{0}^{T}R_{N}(t,s)k(s)\,ds=S(t).$ If N(t)is wide-sense stationary, $\int _{0}^{T}R_{N}(t-s)k(s)\,ds=S(t),$ which is known as the Wiener–Hopf equation. The equation can be solved by taking fourier transform, but not practically realizable since infinite spectrum needs spatial factorization. A special case which is easy to calculate k(t) is white Gaussian noise. $\int _{0}^{T}{\frac {N_{0}}{2}}\delta (t-s)k(s)\,ds=S(t)\Rightarrow k(t)=CS(t),\quad 0<t<T.$ The corresponding impulse response is h(t) = k(T − t) = CS(T − t). Let C = 1, this is just the result we arrived at in previous section for detecting of signal in white noise. Test threshold for Neyman–Pearson detector Since X(t) is a Gaussian process, $G=\int _{0}^{T}k(t)x(t)\,dt,$ is a Gaussian random variable that can be characterized by its mean and variance. ${\begin{aligned}\mathbf {E} [G\mid H]&=\int _{0}^{T}k(t)\mathbf {E} [x(t)\mid H]\,dt=0\\\mathbf {E} [G\mid K]&=\int _{0}^{T}k(t)\mathbf {E} [x(t)\mid K]\,dt=\int _{0}^{T}k(t)S(t)\,dt\equiv \rho \\\mathbf {E} [G^{2}\mid H]&=\int _{0}^{T}\int _{0}^{T}k(t)k(s)R_{N}(t,s)\,dt\,ds=\int _{0}^{T}k(t)\left(\int _{0}^{T}k(s)R_{N}(t,s)\,ds\right)=\int _{0}^{T}k(t)S(t)\,dt=\rho \\\operatorname {var} [G\mid H]&=\mathbf {E} [G^{2}\mid H]-(\mathbf {E} [G\mid H])^{2}=\rho \\\mathbf {E} [G^{2}\mid K]&=\int _{0}^{T}\int _{0}^{T}k(t)k(s)\mathbf {E} [x(t)x(s)]\,dt\,ds=\int _{0}^{T}\int _{0}^{T}k(t)k(s)(R_{N}(t,s)+S(t)S(s))\,dt\,ds=\rho +\rho ^{2}\\\operatorname {var} [G\mid K]&=\mathbf {E} [G^{2}|K]-(\mathbf {E} [G|K])^{2}=\rho +\rho ^{2}-\rho ^{2}=\rho \end{aligned}}$ Hence, we obtain the distributions of H and K: $H:G\sim N(0,\rho )$ $K:G\sim N(\rho ,\rho )$ The false alarm error is $\alpha =\int _{G_{0}}^{\infty }N(0,\rho )\,dG=1-\Phi \left({\frac {G_{0}}{\sqrt {\rho }}}\right).$ So the test threshold for the Neyman–Pearson optimum detector is $G_{0}={\sqrt {\rho }}\Phi ^{-1}(1-\alpha ).$ Its power of detection is $\beta =\int _{G_{0}}^{\infty }N(\rho ,\rho )\,dG=\Phi \left({\sqrt {\rho }}-\Phi ^{-1}(1-\alpha )\right)$ When the noise is white Gaussian process, the signal power is $\rho =\int _{0}^{T}k(t)S(t)\,dt=\int _{0}^{T}S(t)^{2}\,dt=E.$ Prewhitening For some type of colored noise, a typical practise is to add a prewhitening filter before the matched filter to transform the colored noise into white noise. 
For example, N(t) is a wide-sense stationary colored noise with correlation function $R_{N}(\tau )={\frac {BN_{0}}{4}}e^{-B|\tau |}$ $S_{N}(f)={\frac {N_{0}}{2(1+({\frac {w}{B}})^{2})}}$ The transfer function of prewhitening filter is $H(f)=1+j{\frac {w}{B}}.$ Detection of a Gaussian random signal in Additive white Gaussian noise (AWGN) When the signal we want to detect from the noisy channel is also random, for example, a white Gaussian process X(t), we can still implement K–L expansion to get independent sequence of observation. In this case, the detection problem is described as follows: $H_{0}:Y(t)=N(t)$ $H_{1}:Y(t)=N(t)+X(t),\quad 0<t<T.$ X(t) is a random process with correlation function $R_{X}(t,s)=E\{X(t)X(s)\}$ The K–L expansion of X(t) is $X(t)=\sum _{i=1}^{\infty }X_{i}\Phi _{i}(t),$ where $X_{i}=\int _{0}^{T}X(t)\Phi _{i}(t)\,dt$ and $\Phi _{i}(t)$ are solutions to $\int _{0}^{T}R_{X}(t,s)\Phi _{i}(s)ds=\lambda _{i}\Phi _{i}(t).$ So $X_{i}$'s are independent sequence of r.v's with zero mean and variance $\lambda _{i}$. Expanding Y(t) and N(t) by $\Phi _{i}(t)$, we get $Y_{i}=\int _{0}^{T}Y(t)\Phi _{i}(t)\,dt=\int _{0}^{T}[N(t)+X(t)]\Phi _{i}(t)=N_{i}+X_{i},$ where $N_{i}=\int _{0}^{T}N(t)\Phi _{i}(t)\,dt.$ As N(t) is Gaussian white noise, $N_{i}$'s are i.i.d sequence of r.v with zero mean and variance ${\tfrac {1}{2}}N_{0}$, then the problem is simplified as follows, $H_{0}:Y_{i}=N_{i}$ $H_{1}:Y_{i}=N_{i}+X_{i}$ The Neyman–Pearson optimal test: $\Lambda ={\frac {f_{Y}\mid H_{1}}{f_{Y}\mid H_{0}}}=Ce^{-\sum _{i=1}^{\infty }{\frac {y_{i}^{2}}{2}}{\frac {\lambda _{i}}{{\tfrac {1}{2}}N_{0}({\tfrac {1}{2}}N_{0}+\lambda _{i})}}},$ so the log-likelihood ratio is ${\mathcal {L}}=\ln(\Lambda )=K-\sum _{i=1}^{\infty }{\tfrac {1}{2}}y_{i}^{2}{\frac {\lambda _{i}}{{\frac {N_{0}}{2}}\left({\frac {N_{0}}{2}}+\lambda _{i}\right)}}.$ Since ${\widehat {X}}_{i}={\frac {\lambda _{i}}{{\frac {N_{0}}{2}}\left({\frac {N_{0}}{2}}+\lambda _{i}\right)}}$ is just the minimum-mean-square estimate of $X_{i}$ given $Y_{i}$'s, ${\mathcal {L}}=K+{\frac {1}{N_{0}}}\sum _{i=1}^{\infty }Y_{i}{\widehat {X}}_{i}.$ K–L expansion has the following property: If $f(t)=\sum f_{i}\Phi _{i}(t),g(t)=\sum g_{i}\Phi _{i}(t),$ where $f_{i}=\int _{0}^{T}f(t)\Phi _{i}(t)\,dt,\quad g_{i}=\int _{0}^{T}g(t)\Phi _{i}(t)\,dt.$ then $\sum _{i=1}^{\infty }f_{i}g_{i}=\int _{0}^{T}g(t)f(t)\,dt.$ So let ${\widehat {X}}(t\mid T)=\sum _{i=1}^{\infty }{\widehat {X}}_{i}\Phi _{i}(t),\quad {\mathcal {L}}=K+{\frac {1}{N_{0}}}\int _{0}^{T}Y(t){\widehat {X}}(t\mid T)\,dt.$ Noncausal filter Q(t,s) can be used to get the estimate through ${\widehat {X}}(t\mid T)=\int _{0}^{T}Q(t,s)Y(s)\,ds.$ By orthogonality principle, Q(t,s) satisfies $\int _{0}^{T}Q(t,s)R_{X}(s,t)\,ds+{\tfrac {N_{0}}{2}}Q(t,\lambda )=R_{X}(t,\lambda ),0<\lambda <T,0<t<T.$ However, for practical reasons, it's necessary to further derive the causal filter h(t,s), where h(t,s) = 0 for s > t, to get estimate ${\widehat {X}}(t\mid t)$. Specifically, $Q(t,s)=h(t,s)+h(s,t)-\int _{0}^{T}h(\lambda ,t)h(s,\lambda )\,d\lambda $ See also • Principal component analysis • Polynomial chaos • Reproducing kernel Hilbert space • Mercer's theorem Notes 1. Sapatnekar, Sachin (2011), "Overcoming variations in nanometer-scale technologies", IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 1 (1): 5–18, Bibcode:2011IJEST...1....5S, CiteSeerX 10.1.1.300.5659, doi:10.1109/jetcas.2011.2138250, S2CID 15566585 2. Ghoman, Satyajit; Wang, Zhicun; Chen, PC; Kapania, Rakesh (2012). 
"A POD-based Reduced Order Design Scheme for Shape Optimization of Air Vehicles". Proc of 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA-2012-1808, Honolulu, Hawaii. 3. Karhunen–Loeve transform (KLT) Archived 2016-11-28 at the Wayback Machine, Computer Image Processing and Analysis (E161) lectures, Harvey Mudd College 4. Raju, C.K. (2009), "Kosambi the Mathematician", Economic and Political Weekly, 44 (20): 33–45 5. Kosambi, D. D. (1943), "Statistics in Function Space", Journal of the Indian Mathematical Society, 7: 76–88, MR 0009816. 6. Giambartolomei, Giordano (2016). "4 The Karhunen-Loève Theorem". The Karhunen-Loève theorem (Bachelors). University of Bologna. 7. A wavelet tour of signal processing-Stéphane Mallat 8. X. Tang, “Texture information in run-length matrices,” IEEE Transactions on Image Processing, vol. 7, No. 11, pp. 1602–1609, Nov. 1998 References • Stark, Henry; Woods, John W. (1986). Probability, Random Processes, and Estimation Theory for Engineers. Prentice-Hall, Inc. ISBN 978-0-13-711706-2. OL 21138080M. • Ghanem, Roger; Spanos, Pol (1991). Stochastic finite elements: a spectral approach. Springer-Verlag. ISBN 978-0-387-97456-9. OL 1865197M. • Guikhman, I.; Skorokhod, A. (1977). Introduction a la Théorie des Processus Aléatoires. Éditions MIR. • Simon, B. (1979). Functional Integration and Quantum Physics. Academic Press. • Karhunen, Kari (1947). "Über lineare Methoden in der Wahrscheinlichkeitsrechnung". Ann. Acad. Sci. Fennicae. Ser. A I. Math.-Phys. 37: 1–79. • Loève, M. (1978). Probability theory. Vol. II, 4th ed. Graduate Texts in Mathematics. Vol. 46. Springer-Verlag. ISBN 978-0-387-90262-3. • Dai, G. (1996). "Modal wave-front reconstruction with Zernike polynomials and Karhunen–Loeve functions". JOSA A. 13 (6): 1218. Bibcode:1996JOSAA..13.1218D. doi:10.1364/JOSAA.13.001218. • Wu B., Zhu J., Najm F.(2005) "A Non-parametric Approach for Dynamic Range Estimation of Nonlinear Systems". In Proceedings of Design Automation Conference(841-844) 2005 • Wu B., Zhu J., Najm F.(2006) "Dynamic Range Estimation". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 25 Issue:9 (1618–1636) 2006 • Jorgensen, Palle E. T.; Song, Myung-Sin (2007). "Entropy Encoding, Hilbert Space and Karhunen–Loeve Transforms". Journal of Mathematical Physics. 48 (10): 103503. arXiv:math-ph/0701056. Bibcode:2007JMP....48j3503J. doi:10.1063/1.2793569. S2CID 17039075. External links • Mathematica KarhunenLoeveDecomposition function. • E161: Computer Image Processing and Analysis notes by Pr. Ruye Wang at Harvey Mudd College
Wikipedia
# Defining and calling simple functions To begin, let's define a simple function in Python. A function is a block of code that performs a specific task. It can take input (called parameters) and return a result (called return values). Here's an example of a simple function that adds two numbers: ```python def add_numbers(a, b): return a + b ``` In this example, `add_numbers` is the name of the function, and `a` and `b` are the parameters. The function takes two numbers as input and returns their sum. To call this function, you can use the following code: ```python result = add_numbers(3, 5) print(result) # Output: 8 ``` In this example, we pass the numbers 3 and 5 as arguments to the `add_numbers` function. The function adds these numbers and returns the result, which we then print. ## Exercise Define a function called `multiply_numbers` that takes two parameters, `a` and `b`, and returns their product. Then, call this function with the arguments 4 and 6, and print the result. ### Solution ```python def multiply_numbers(a, b): return a * b result = multiply_numbers(4, 6) print(result) # Output: 24 ``` # Using parameters and return values Parameters are the inputs that a function takes. They allow you to customize the behavior of a function. For example, let's say we have a function that calculates the area of a rectangle: ```python def rectangle_area(length, width): return length * width ``` In this function, `length` and `width` are the parameters. By using different values for these parameters, we can calculate the area of rectangles with different dimensions. Return values are the outputs that a function produces. They allow you to use the results of a function in other parts of your code. For example, let's say we have a function that calculates the square root of a number: ```python import math def square_root(number): return math.sqrt(number) ``` In this function, the `math.sqrt` function is used to calculate the square root of the input `number`. The result is returned as a return value. ## Exercise Define a function called `calculate_distance` that takes two parameters, `x1` and `x2`, and returns the absolute difference between them. Then, call this function with the arguments 7 and 3, and print the result. ### Solution ```python def calculate_distance(x1, x2): return abs(x1 - x2) result = calculate_distance(7, 3) print(result) # Output: 4 ``` # Creating more complex functions For example, let's say we want to create a function that calculates the volume of a cylinder. The volume of a cylinder is calculated using the formula: $$V = \pi r^2 h$$ Here, $V$ is the volume, $r$ is the radius, and $h$ is the height. We can define a function called `cylinder_volume` that takes these three parameters and returns the volume: ```python import math def cylinder_volume(radius, height): return math.pi * radius**2 * height ``` In this function, we use the `math.pi` constant to represent the value of pi, and the `**` operator to raise `radius` to the power of 2. ## Exercise Define a function called `calculate_sphere_volume` that takes one parameter, `radius`, and returns the volume of a sphere with that radius. The formula for the volume of a sphere is: $$V = \frac{4}{3} \pi r^3$$ Then, call this function with the argument 5, and print the result. 
### Solution ```python import math def calculate_sphere_volume(radius): return (4/3) * math.pi * radius**3 result = calculate_sphere_volume(5) print(result) # Output: 523.5987755982989 ``` # Working with mathematical operations For example, let's say we want to create a function that calculates the average of two numbers: ```python def calculate_average(a, b): return (a + b) / 2 ``` In this function, we use the `+` operator for addition and the `/` operator for division. We can also use the `**` operator for exponentiation. For example, let's say we want to create a function that calculates the power of a number: ```python def calculate_power(base, exponent): return base ** exponent ``` In this function, we use the `**` operator to raise `base` to the power of `exponent`. Finally, we can use the `%` operator for modulo. For example, let's say we want to create a function that checks if a number is even: ```python def is_even(number): return number % 2 == 0 ``` In this function, we use the `%` operator to check if the remainder of `number` divided by 2 is 0. If it is, the function returns `True`, indicating that the number is even. Otherwise, it returns `False`. ## Exercise Define a function called `calculate_factorial` that takes one parameter, `number`, and returns the factorial of that number. The factorial of a number is the product of all positive integers less than or equal to that number. For example, the factorial of 5 is 5 * 4 * 3 * 2 * 1 = 120. You can use a loop to calculate the factorial. Here's a possible implementation: ```python def calculate_factorial(number): factorial = 1 for i in range(1, number + 1): factorial *= i return factorial ``` Then, call this function with the argument 6, and print the result. ### Solution ```python result = calculate_factorial(6) print(result) # Output: 720 ```
Textbooks
Is there an example of a "almost-metric" that is not symmetric but satisfies the other axioms of a metric (positive-definiteness, triangle inequality)? to hold. It is certainly interesting to find an example in $\mathbb R^n$ (even if just for a specific $n$) but I'm also interested in other "almost-metric-spaces". Not the answer you're looking for? Browse other questions tagged geometry metric-spaces symmetry or ask your own question. What values of $p$ make $d$ a metric? is symmetric chi-squared distance "A" metric? Is sum of two metrics a metric? What do we call a metric that doesn't satisfy triangle inequality? Proof that the triangle inequality holds in the following metric?
CommonCrawl
The Malgrange-Ehrenpreis theorem for nonlocal Schrödinger operators with certain potentials Local Aronson-Bénilan gradient estimates and Harnack inequality for the porous medium equation along Ricci flow September 2018, 17(5): 1975-1992. doi: 10.3934/cpaa.2018094 A blowup alternative result for fractional nonautonomous evolution equation of Volterra type Pengyu Chen , , Xuping Zhang and Yongxiang Li Department of Mathematics, Northwest Normal University, Lanzhou 730070, China Received August 2017 Revised November 2017 Published April 2018 In this article, we consider a class of fractional non-autonomous integro-differential evolution equation of Volterra type in a Banach space $E$, where the operators in linear part (possibly unbounded) depend on time $t$. Combining the theory of fractional calculus, operator semigroups and measure of noncompactness with Sadovskii's fixed point theorem, we firstly proved the local existence of mild solutions for corresponding fractional non-autonomous integro-differential evolution equation. Based on the local existence result and a piecewise extended method, we obtained a blowup alternative result for fractional non-autonomous integro-differential evolution equation of Volterra type. Finally, as a sample of application, these results are applied to a time fractional non-autonomous partial integro-differential equation of Volterra type with homogeneous Dirichlet boundary condition. This paper is a continuation of Heard and Rakin [13, J. Differential Equations, 1988] and the results obtained essentially improve and extend some related conclusions in this area. Keywords: Fractional non-autonomous evolution equation, analytic semigroup, measure of noncompactness, volterra integro-differential, mild solution. Mathematics Subject Classification: Primary: 35R11; Secondary: 47H08, 47J35. Citation: Pengyu Chen, Xuping Zhang, Yongxiang Li. A blowup alternative result for fractional nonautonomous evolution equation of Volterra type. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1975-1992. doi: 10.3934/cpaa.2018094 R. P. Agarwal, M. Benchohra and S. Hamani, A survey on existence results for boundary value problems of nonlinear fractional differential equations and inclusions, Acta Appl. Math., 109 (2010), 973-1033. Google Scholar E. G. Bajlekova, Fractional Evolution Equations in Banach Spaces, Ph. D thesis, Department of Mathematics, Eindhoven University of Technology, 2001. Google Scholar J. and K. Goebel, Measures of Noncompactness in Banach Spaces, In Lecture Notes in Pure and Applied Mathematics, Volume 60, Marcel Dekker, New York, 1980. Google Scholar P. M. Carvalho-Neto and G. Planas, Mild solutions to the time fractional Navier-Stokes equations in $\mathbb{R}^N$, J. Differential Equations, 259 (2015), 2948-2980. Google Scholar P. Chen and Y. Li, Monotone iterative technique for a class of semilinear evolution equations with nonlocal conditions, Results Math., 63 (2013), 731-744. Google Scholar P. Chen and Y. Li, Existence of mild solutions for fractional evolution equations with mixed monotone nonlocal conditions, Z. Angew. Math. Phys., 65 (2014), 711-728. Google Scholar K. Deimling, Nonlinear Functional Analysis, Springer-Verlag, New York, 1985. Google Scholar M. M. El-Borai, The fundamental solutions for fractional evolution equations of parabolic type, J. Appl. Math. Stoch. Anal., 3 (2004), 197-211. Google Scholar M. M. El-Borai, K. E. El-Nadi and E. G. El-Akabawy, On some fractional evolution equations, Comput. Math. 
Appl., 59 (2010), 1352-1355. Google Scholar A. Friedman, Partial Differential Equations, Holt, Rinehart and Winston, New York, NY, USA, 1969. Google Scholar R. Gorenflo and F. Mainardi, Fractional calculus and stable probability distributions, Arch. Mech., 50 (1998), 377-388. Google Scholar H. Gou and B. Li, Local and global existence of mild solution to impulsive fractional semilinear integro-differential equation with noncompact semigroup, Commun. Nonlinear Sci. Numer. Simul., 42 (2017), 204-214. Google Scholar M. L. Heard and S. M. Rankin, A semi-linear parabolic integro-differential equation, J. Differential Equations, 71 (1988), 201-233. Google Scholar H. P. Heinz, On the behaviour of measure of noncompactness with respect to differentiation and integration of vector-valued functions, Nonlinear Anal., 7 (1983), 1351-1371. Google Scholar D. Henry, Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Math., vol. 840, Springer-verlag, New York, 1981. Google Scholar V. Lakshmikantham and S. Leela, Nonlinear Differential Equations in Abstract Spaces, Pergamon Press, New York, 1981. Google Scholar Y. Li, Existence of solutions of initial value problems for abstract semilinear evolution equations, Acta Math. Sin., 48 (2005), 1089-1094 (in Chinese). Google Scholar M. Li, C. Chen and F. B. Li, On fractional powers of generators of fractional resolvent families, J. Funct. Anal., 259 (2010), 2702-2726. Google Scholar K. Li, J. Peng and J. Jia, Cauchy problems for fractional differential equations with Riemann-Liouville fractional derivatives, J. Funct. Anal., 263 (2012), 476-510. Google Scholar A. A. Kilbas, H. M. Srivastava and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, in: North-Holland Mathematics Studies, vol. 204, Elsevier Science B. V., Amsterdam, 2006. Google Scholar Z. Mei, J. Peng and Y. Zhang, An operator theoretical approach to Riemann-Liouville fractional Cauchy problem, Math. Nachr., 288 (2015), 784-797. Google Scholar Z. Ouyang, Existence and uniqueness of the solutions for a class of nonlinear fractional order partial differential equations with delay, Comput. Math. Appl., 61 (2011), 860-870. Google Scholar A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-verlag, Berlin, 1983. Google Scholar M. H. M. Rashid and A. Al-Omari, Local and global existence of mild solutions for impulsive fractional semi-linear integro-differential equation, Commun. Nonlinear Sci. Numer. Simul., 16 (2011), 3493-3503. Google Scholar H. Tanabe, Functional Analytic Methods for Partial Differential Equations, Marcel Dekker, New York, USA, 1997. Google Scholar R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, second ed., Springer-verlag, New York, 1997. Google Scholar R. N. Wang, D. H. Chen and T. J. Xiao, Abstract fractional Cauchy problems with almost sectorial operators, J. Differential Equations, 252 (2012), 202-235. Google Scholar R. N. Wang, T. J. Xiao and J. Liang, A note on the fractional Cauchy problems with nonlocal conditions, Appl. Math. Lette., 24 (2011), 1435-1442. Google Scholar J. Wang and Y. Zhou, A class of fractional evolution equations and optimal controls, Nonlinear Anal. Real World Appl., 12 (2011), 262-272. Google Scholar J. Wang, Y. Zhou and M. Fečkan, Abstract Cauchy problem for fractional differential equations, Nonlinear Dyn., 74 (2013), 685-700. Google Scholar Y. Zhou and F. 
Jiao, Existence of mild solutions for fractional neutral evolution equations, Comput. Math. Appl., 59 (2010), 1063-1077. Google Scholar B. Zhu, L. Liu and Y. Wu, Local and global existence of mild solutions for a class of nonlinear fractional reaction-diffusion equations with delay, Appl. Math. Lett., 61 (2016), 73-79. Google Scholar Tomás Caraballo, P.E. Kloeden. Non-autonomous attractors for integro-differential evolution equations. Discrete & Continuous Dynamical Systems - S, 2009, 2 (1) : 17-36. doi: 10.3934/dcdss.2009.2.17 Miloud Moussai. Application of the bernstein polynomials for solving the nonlinear fractional type Volterra integro-differential equation with caputo fractional derivatives. Numerical Algebra, Control & Optimization, 2021 doi: 10.3934/naco.2021021 Sertan Alkan. A new solution method for nonlinear fractional integro-differential equations. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1065-1077. doi: 10.3934/dcdss.2015.8.1065 Priscila Santos Ramos, J. Vanterler da C. Sousa, E. Capelas de Oliveira. Existence and uniqueness of mild solutions for quasi-linear fractional integro-differential equations. Evolution Equations & Control Theory, 2022, 11 (1) : 1-24. doi: 10.3934/eect.2020100 Hermann Brunner. The numerical solution of weakly singular Volterra functional integro-differential equations with variable delays. Communications on Pure & Applied Analysis, 2006, 5 (2) : 261-276. doi: 10.3934/cpaa.2006.5.261 Seda İğret Araz. New class of volterra integro-differential equations with fractal-fractional operators: Existence, uniqueness and numerical scheme. Discrete & Continuous Dynamical Systems - S, 2021, 14 (7) : 2297-2309. doi: 10.3934/dcdss.2021053 Huy Tuan Nguyen, Huu Can Nguyen, Renhai Wang, Yong Zhou. Initial value problem for fractional Volterra integro-differential equations with Caputo derivative. Discrete & Continuous Dynamical Systems - B, 2021, 26 (12) : 6483-6510. doi: 10.3934/dcdsb.2021030 K. Ravikumar, Manil T. Mohan, A. Anguraj. Approximate controllability of a non-autonomous evolution equation in Banach spaces. Numerical Algebra, Control & Optimization, 2021, 11 (3) : 461-485. doi: 10.3934/naco.2020038 Yin Yang, Sujuan Kang, Vasiliy I. Vasil'ev. The Jacobi spectral collocation method for fractional integro-differential equations with non-smooth solutions. Electronic Research Archive, 2020, 28 (3) : 1161-1189. doi: 10.3934/era.2020064 Ramasamy Subashini, Chokkalingam Ravichandran, Kasthurisamy Jothimani, Haci Mehmet Baskonus. Existence results of Hilfer integro-differential equations with fractional order. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 911-923. doi: 10.3934/dcdss.2020053 Ji Shu, Linyan Li, Xin Huang, Jian Zhang. Limiting behavior of fractional stochastic integro-Differential equations on unbounded domains. Mathematical Control & Related Fields, 2021, 11 (4) : 715-737. doi: 10.3934/mcrf.2020044 Walter Allegretto, John R. Cannon, Yanping Lin. A parabolic integro-differential equation arising from thermoelastic contact. Discrete & Continuous Dynamical Systems, 1997, 3 (2) : 217-234. doi: 10.3934/dcds.1997.3.217 Narcisa Apreutesei, Nikolai Bessonov, Vitaly Volpert, Vitali Vougalter. Spatial structures and generalized travelling waves for an integro-differential equation. Discrete & Continuous Dynamical Systems - B, 2010, 13 (3) : 537-557. doi: 10.3934/dcdsb.2010.13.537 Shihchung Chiang. Numerical optimal unbounded control with a singular integro-differential equation as a constraint. 
Conference Publications, 2013, 2013 (special) : 129-137. doi: 10.3934/proc.2013.2013.129 Frederic Abergel, Remi Tachet. A nonlinear partial integro-differential equation from mathematical finance. Discrete & Continuous Dynamical Systems, 2010, 27 (3) : 907-917. doi: 10.3934/dcds.2010.27.907 Samir K. Bhowmik, Dugald B. Duncan, Michael Grinfeld, Gabriel J. Lord. Finite to infinite steady state solutions, bifurcations of an integro-differential equation. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 57-71. doi: 10.3934/dcdsb.2011.16.57 Faranak Rabiei, Fatin Abd Hamid, Zanariah Abd Majid, Fudziah Ismail. Numerical solutions of Volterra integro-differential equations using General Linear Method. Numerical Algebra, Control & Optimization, 2019, 9 (4) : 433-444. doi: 10.3934/naco.2019042 Vladimir E. Fedorov, Natalia D. Ivanova. Identification problem for a degenerate evolution equation with overdetermination on the solution semigroup kernel. Discrete & Continuous Dynamical Systems - S, 2016, 9 (3) : 687-696. doi: 10.3934/dcdss.2016022 Michel Chipot, Senoussi Guesmia. On a class of integro-differential problems. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1249-1262. doi: 10.3934/cpaa.2010.9.1249 Mahesh G. Nerurkar. Spectral and stability questions concerning evolution of non-autonomous linear systems. Conference Publications, 2001, 2001 (Special) : 270-275. doi: 10.3934/proc.2001.2001.270 Pengyu Chen Xuping Zhang Yongxiang Li
CommonCrawl
Spatial changes in the command and control function of cities based on the corporate centre of gravity model Piotr Raźniak, Sławomir Dorocki oraz Anna Winiarczyk-Raźniak Otrzymano: 01 Jul 2019 Przyjęty: 14 Nov 2019 DOI: https://doi.org/10.2478/mgrsd-2020-0002 © 2020 Piotr Raźniak, Sławomir Dorocki, Anna Winiarczyk-Raźniak, published by SciendoThis work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License. Studies on intercity linkages between companies appear to be particularly interesting. This is true of both the ownership structure and linkages between companies. In the modern global economy, research on the command and control function of cities is important in relation to their geographic distribution (Śleszyński 2018). In today's globalised world, the significance of geographic distance diminishes when considered in terms of the transfer of workforce and financial assets. Therefore, a trend is emerging that leads to the creation of an "international" economic system, while connectivity increases between corporations as well as between cities in general (Sassen 2000). The functions of large corporations have been studied multi-dimensionally over the last several decades. It was stated, in one of the seven hypotheses regarding the concept of the world city, that the global control functions of international corporations in world cities are directly related to the activity of the production sector (Friedmann 1986). This view is now outdated. Currently it is believed that firms in the advanced producer services sector (APS) are the main drivers of globalisation processes. These firms yield groups of cities (world cities) that serve as strategic places for economic globalisation processes (Taylor et al. 2014), although they are not the only strategic places in global connectivity networks (Goerzen et al. 2013). The world city concept was developed by J.V. Beaverstock, R.G. Smith and P.J. Taylor (1999). The financial results of the largest corporations that generate the C&C function of cities have also been the subject of analysis in the last 10 to 20 years. Studies of this type were conducted either on a global scale (Godfrey & Zhou 1999, Alderson & Beckfield 2004, Taylor & Csomós 2012), on a continental scale (Csomós & Derudder 2014, Dorocki, Raźniak & Winiarczyk-Raźniak 2018), or on a regional scale (Raźniak, Dorocki & Winiarczyk-Raźniak 2018). Diversification of the economy is an important element of the stabilisation and strengthening of regions (Masik 2016, 2019) and cities (Raźniak, Dorocki & Winiarczyk-Raźniak 2019), which is an important element in the event of an economic crisis. The concept of the command and control function takes into account the presence of the largest corporations and their financial results. What is not considered, however, is the potential collapse of the dominant sector that helps create the command and control function of a city. Such a scenario was introduced by P. Raźniak, S. Dorocki & A. Winiarczyk-Raźniak (2017), where the economic resistance of the command and control function of cities to a potential collapse of the dominant sector was also taken into account. The subject of the economic centre of gravity of the world is currently discussed in the era of progressing globalisation. 
The problem of how to determine the location of the world's economic centre of gravity, or of a specific region and its change over time, as well as problems associated with relationships between geographic distance, economic potential and various measures of concentration in analyses of socio-economic phenomena, are all issues that have been discussed by many researchers including R.R. Boyce, W. A. V Clark (1964), B. Kostrubiec (1972), A. Scharlig (1973), L. Wojciechowski (2004), Ch. I-Hui & H. J. Wall (2005), I. Jażdżewska (2006), L. R. Klein, (2009), D.Huanfeng, and L. Peiyi, (2009), J.-M. Grether, N.A Mathys (2010), D. Quah, (2011), Y Zhang I in. (2012), Á. Kincses, Z. Nagy, G. Tóth (2013), Ramos, Suriñach (2017), G. Csomós, G. Tóth (2016), S , Dorocki P. Raźniak (2017), S. Dorocki, P. Raźniak & Winiarczyk-Raźniak (2019). The centre of gravity in the abovementioned works is defined as the spatial equivalent of the arithmetic mean. In order to identify the socio-economic centre of gravity, variables such as the size of GDP, population of cities, number of employees, and other measures describing socio-economic potential were used. Data on the number and distribution of the headquarters of the largest corporations as well as on their financial potential were used in this study. These indicators describe the rank and significance of cities in the modern, globalised world. According to G. Csomós (2013), the command and control function is performed by 2,000 companies named on the Forbes Global 2000 list (Forbes Global 2000, 2018). Research studies exist, where the method of the centre of gravity of cities is applied, whether in the aspect of advanced R&D activity (Tóth & Csomós 2016) or a more general focus on their command and control functions (Csomós & Tóth 2016). There exists, however, no specific research on the financial results of corporations on the global scale, and especially on C&C functions by sector. Given the above, the aim of this paper is to describe the centre of gravity and changes to it for the command and control function of cities in the years 2006 and 2016, both for individual economic sectors, as well as globally. From world cities to world economic centres The idea of the command and control function of a city, created by the financial power of large corporations, emerges in many studies and concepts describing the power of cities and their mutual relationships. One of the most important works in this area is the work by P. Hall (1966), who described the theoretical fundamentals of the concept of the world city. According to P. Hall, world cities are political centres, concentrating government institutions, trade unions and federations. Studies on the global city theory increased in number throughout the 1980s (Friedmann & Wolff 1982, Friedmann 1986, Sassen 1988). At present, the considerable mobility of capital seems to be the most important aspect of globalisation. Moreover, the significance of distance diminishes in terms of flow of labour and financial means. Therefore, there exists a trend to create a global economic system and to increase both the connectivity between cities as well as their influence (Sassen 1991, 2000). International corporations have noted a decisive increase in their significance since the 1970s, although the locations of their headquarters have often changed over the same period of time (Csomós & Derudder 2014). 
Analysis of the locations of corporate headquarters of the largest companies shows the specific strength of a city focusing on its command and control functions. In their research, P. J. Taylor and G. Csomós (2012) found that command and control functions are created by the largest world corporations on the Forbes Global 2000 list. Based on the above-mentioned concepts, new indicators were also created showing the position of particular cities in a world-wide hierarchy of cities. At the end of the 1990s, J.V. Beaverstock, P.J. Taylor and R.G. Smith (1999) created an indicator, which showed the level of international connectivity between cities. In their research, they considered the location of corporate headquarters, regional, and local divisions of the largest 100 corporations from the advanced producer services sector, which comprised the following: accountancy, advertising, management consultancy, financial services and law. Beginning in the year 2000, the analysis was broadened and, in addition to the 100 corporations studied earlier, an additional 75 largest corporations on the Forbes Global 2000 list were also studied. Research efforts on intercity connectivity are currently being undertaken by many researchers looking for innovative concepts related to the subject (Liu et al. 2014, Liu, Derudder &Taylor 2014, Hennemann & Derudder 2014, Neal 2016, Yang et al. 2017, Neal, Derudder & Taylor 2019). The concepts of the command and control centre, global city, and world city (GaWC) illustrate the power of the city using various indicators. They do not deal with the possibility of a potential recession ("crisis") of a dominant sector, which possesses the ability to establish the command and control function of a city. Such a novel modification of the older concepts was presented by P. Raźniak, S. Dorocki & A. Winiarczyk-Raźniak (2017). An "economic crisis" is understood by these three researchers as a decline in the financial results of firms in a given sector, because of which firms that belong to the sector are dropped from the Forbes Global 2000 list, which, in turn, causes the sector (both the sector and the firms that belong to it) to lose its ability to generate the command and control function for a given city. The timeframe of the study is the years 2006 and 2016. The former was the last year before the global financial crisis that began in the United States in 2007. The study illustrates the command and control function before the crisis in comparison to the latest data available to the authors for the year 2016. The analysis in the study was performed using the list of the largest 2,000 corporations in the world, published by Forbes magazine (Forbes Global 2000, 2018). The centre of gravity was calculated based on the Corporation Potential Index (CPI) [2]. This indicator was produced based on data illustrating the potential of a city with respect to the standardised value of individual firms by the sector of their economic activity. The paper provides an analysis of revenue, income, market value, and asset values of the largest corporations in the years 2006 and 2016. The Sectoral Corporation Potential Index (SCPI) [1] was developed based on this data. The index was calculated based on average values standardised by the average value of the variable (x¯)$\left( {\bar{x}} \right)$ and its standard deviation (SD). The following four values – revenue, income, market value, and asset value for individual sectors – were assumed to be variables. 
Any analysis of the command and control potential of cities also needs to consider the concentration of capital, as measured via the number of corporate headquarters. This is why the standardised financial results of companies were multiplied by the number of corporate headquarters located in a given city and divided by four. The value "four" is the value of the third quartile (75% of observations) of the number of corporate headquarters, increased by 1. Hence, if the number of headquarters in a given city was larger than 4, its financial results were multiplied by a value larger than 1; if it was smaller than 4, the multiplier assumed values of less than one, thus reducing the weight of the financial results for the given city. The number of headquarters (HQ) in a given city was assigned such high importance because the authors believe that the international importance of a city is largely determined not only by the economic potential of individual corporate headquarters, but above all by their actual number, which signifies the global influence of a given city. Therefore, it has been judged – based on analysis of the data and earlier works – that at least 4 headquarters located in one city determine its global potential and significance for a given sector of the economy. However, in order to show the whole picture of where the headquarters of international corporations are located, all the cities included in the Forbes ranking were considered. In order to exclude negative values of the index, the obtained average value was increased by the absolute value of the smallest value in the obtained sequence; in this manner, the index for the minimum value of the SCPI equals zero. In order to calculate the global potential of a given city, expressed by the Corporation Potential Index (CPI), the values of the SCPI calculated earlier for individual sectors were summed [2]. A schematic numerical sketch of this procedure is given below, before the formulas are stated.
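To make the procedure concrete, the following minimal Python sketch implements one possible reading of the SCPI/CPI computation and of the weighted centroid formalised in Formulas [1]–[3] below. The city labels, coordinates and financial figures are invented for illustration only and are not data from the Forbes list.

import numpy as np

# Invented per-city values of the four variables for one sector
# (revenue, income, market value, assets); rows are cities.
X = np.array([[120.0, 15.0, 300.0, 500.0],   # city A
              [ 40.0,  5.0,  90.0, 160.0],   # city B
              [ 80.0, 10.0, 150.0, 300.0]])  # city C
HQ  = np.array([6, 2, 4])                    # headquarters per city
lon = np.array([-74.0, 19.9, 139.7])         # invented coordinates
lat = np.array([ 40.7, 50.1,  35.7])

# Standardise each variable across cities and average the four
# z-scores per city (the inner sum of Formula [1]).
z = (X - X.mean(axis=0)) / X.std(axis=0)
score = z.mean(axis=1) * HQ / 4.0            # weight by HQ relative to the threshold of 4

# Shift by the absolute value of the minimum so the smallest SCPI is zero (Formula [1]).
SCPI = score + abs(score.min())

# Formula [2]: CPI sums the SCPI over all sectors; with a single
# sector in this toy example, CPI coincides with SCPI.
CPI = SCPI

# Formula [3]: centroid weighted by the chosen weight l_i (here CPI).
centroid_lon = (lon * CPI).sum() / CPI.sum()
centroid_lat = (lat * CPI).sum() / CPI.sum()
print(SCPI, centroid_lon, centroid_lat)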
(1)
$$\text{SCPI}=\left( \frac{\sum_{i=1}^{N}\frac{x_{i}-\bar{x}}{SD_{x}}}{N}\times \frac{\text{HQ}}{4} \right)+\left| \left( \frac{\sum_{i=1}^{N}\frac{x_{i}-\bar{x}}{SD_{x}}}{N}\times \frac{\text{HQ}}{4} \right)_{\min} \right|$$

(2)
$$\text{CPI}=\sum_{i=1}^{N}\text{SCPI}_{i}$$

x – value of the variables for particular cities (i) – that is, in this case, of four variables: revenue, profit, asset value, and market value for individual sectors of the economy
$\bar{x}$ – average value of the variables for all cities by economic sector
SD_x – standard deviation of the variables for all cities by economic sector
N – number of cities
HQ – number of headquarters of corporations in a given city by economic sector
(...)_min – minimum value of the index for a given sector of the economy

Furthermore, based on the number of headquarters of corporations in the studied cities and the values of the CPI and the SCPI, the centre of gravity was calculated as the centroid given by Formula [3]:

(3)
$$\text{longitude}=\frac{\sum_{i=1}^{n} x_{i}l_{i}}{\sum_{i=1}^{n} l_{i}}, \qquad \text{latitude}=\frac{\sum_{i=1}^{n} y_{i}l_{i}}{\sum_{i=1}^{n} l_{i}}$$

x, y – coordinates of the studied ith points (cities)
l_i – weights expressed using the ith unit (number of HQs, or SCPI and CPI)
longitude, latitude – coordinates of the centroid.

Gravity centre of the Command and Control potential of cities

The centroid calculated based on the number of headquarters shifted east by about 14.5 degrees within the study period. At the beginning of this period, the centre of the headquarters of corporations was located near Gibraltar; by 2016, the centre had shifted to Tunisia. In the study period, a latitudinal shift of about 0.8 degrees to the south also occurred. The centres of gravity for individual sectors of the economy also clearly shifted eastward in the study period. Six out of ten sectors of the economy had their centres of gravity located in the western hemisphere in 2006, while in 2016, only three remained there. Healthcare was the westernmost sector in both 2006 and 2016. This sector also shifted the least to the east (by about 13.8 degrees), and it was also the most northerly one (at about 40°N). The centre of gravity of the IT sector was also located to the west of the centroids of the remaining sectors within the study period. However, a clear shift to the east, by about 11.5 degrees, occurred in this case, which confirms that a relocation of the headquarters of firms in this sector did occur during the study period. The centre of gravity of the headquarters of corporations in the energy sector also shifted substantially east, by about 13.8 degrees. Nevertheless, the centroid of the energy firms still remained in the western hemisphere (12.8°W) in 2016 (Fig. 1).

Fig. 1. Centres of gravity based on the number of corporate headquarters in 2006 and 2016.
Source: Authors' own work based on Forbes Global 2000 (Forbes Global 2000, 2018)

The largest shift to the east was recorded for the centre of gravity of the number of headquarters of consumer staples companies (by 28.99 degrees), while the second largest shift was recorded by the utilities sector (27.86 degrees). The shifts of the centroids for consumer staples and for utilities were associated with the economic development of countries in Asia (such as China) and of rapidly growing Eastern Europe (Egri & Tánczos 2018). Additionally, in times of economic crisis, conservative investors preferred to invest in utilities, considering it a more stable sector. The industrial sector was the one whose centroid remained the furthest east relative to the overall centre of gravity of the number of corporate headquarters in the study period. Its shift to the east in the study period was relatively small (about 3.9 degrees). Therefore, it may be inferred that it is companies from the industrial sector that are primarily located in the eastern hemisphere. In the study period, the largest shifts of the HQ centroids within the eastern hemisphere were recorded for the financials, telecommunications and materials sectors. It is noteworthy that the HQ centroid of the telecommunications sector moved to the north, and the magnitude of this northward shift was the greatest among all the sector-related centroids. Meanwhile, the financials and consumer staples sectors experienced the largest southward shifts between 2006 and 2016, along with a small eastward shift. This results from the fact that new headquarters of companies were established in South America and the Middle East, and that relatively long-term locational stability is typical for these sectors, which are considered strategic for the national economies of many countries (Fig. 1).

In the case of the centre of gravity calculated for the value of the CPI, large differences can be observed in relation to the HQ distribution pattern. In order to identify the main regions of concentration of the command and control function, the financial potential of companies was calculated by accounting for the number of HQs. The centre of gravity of the CPI in the years 2006–2016 shifted from the region between the Balearic Islands and Sardinia by 9.38 degrees to the east and 1.16 degrees to the south, in the direction of eastern Sicily (Fig. 2). The location of (and shift in) this centroid is markedly different from the world's economic centre of gravity as calculated by G. Tóth and Z. Nagy (2017) based on GDP values for selected countries, which is located in the western part of the Czech Republic and has shifted east in recent years, but only by an insignificant amount. This suggests that, despite the general economic development of Asia in recent decades, the centre of gravity of the global economy has not shifted substantially eastward.

Fig. 2. Centres of gravity according to the value of CPI and SCPI in 2006 and 2016.

What can also be observed is a convergence between the locations of the centres of gravity of the SCPI for individual sectors and the corresponding HQ centroids (Fig. 2). The centre of gravity of the SCPI for IT was the westernmost one. Information technology is the foundation of the digital economy. The IT sector includes hardware equipment, software and internet services.
IT commonly refers to the application of computer methods to solving practical problems in all aspects of life, such as industry, commerce, medicine, and agriculture. IT therefore represents the use of computer and information science – via hardware, software, services and infrastructure – to create, store, and exchange information. The IT sector also comprises the employees who develop, implement, and use information technology (directly or indirectly). Technological innovations and their applications take on different forms and often involve specialised knowledge from various disciplines and branches of industry. The IT sector is closely tied to the life sciences as well as to technologically advanced industries, the energy industry, and environmental protection. Consequently, the shift of the centre of gravity representing the index of the C&C potential of cities in the IT sector occurred in a westward direction, despite the eastward relocation of a large number of production facilities, for example to developing countries. In the study period, the world C&C centre for the IT sector, the city of San Jose in California, increased its share of the potential represented by the SCPI for the cities under analysis from 38.6% in 2006 to 46.6% in 2016, while Tokyo's share fell from 17% to 3.4%. Meanwhile, new and very important C&C centres have emerged in the eastern hemisphere, in places such as Seoul (South Korea), Taipei City, Hsinchu City and Taoyuan City (Taiwan), Beijing and Shenzhen (China), and Bangalore (India). Additional concentration of IT potential occurred in the study period in the western hemisphere, which shifted the IT centre of gravity 8.5 degrees to the west, from the East Coast of the United States to the interior of the country. The main region of concentration of IT headquarters remains Silicon Valley in California.

The healthcare sector also remains rooted in the western hemisphere, with a shift to the east of 3.8 degrees in the study period. Nevertheless, the shift of the centre of gravity of the C&C potential for this sector confirms that a slow yet progressive process of relocating companies from this sector to Asia, Eastern Europe and South America is also taking place. The United States played a key role among countries with leading healthcare companies in the study period, with 20 ranked companies in 2006, or 45% of all companies in the ranked part of the sector. In 2016, the number of ranked healthcare companies in the United States increased to 24, although its overall share in the ranked part of the sector declined to 37%. The second spot in the healthcare ranking was held by Japan, with 8 companies in 2006 and 10 companies in 2016. Subsequent spots were held by Switzerland, Ireland, Great Britain, and Germany. Two cities may be described as global centres of the healthcare industry – New York and Basel (Dorocki et al. 2017).

Based on the potential represented by the SCPI, the energy sector experienced the largest eastward shift of its centre of gravity – more than 31.4 degrees east in the study period – although it still remained in the western hemisphere. Although the energy sector is tied to its mineral resource base, its economic leadership is not closely tied to that base spatially. Corporations in the energy sector are concentrated primarily in North America (USA, Canada) and Western Europe (the Netherlands, UK, France).
The global centres of corporations doing business in the energy sector include Houston and Dallas in Texas, Calgary in Canada, and The Hague in the Netherlands. However, the changes that occurred in the energy sector – its privatisation and opening up to private investors in Asian and Eastern European countries (Dunkerley 1995) – triggered the rise of energy-sector powers such as China (Beijing) and Russia (Moscow), as well as Brazil and India.

The largest shift to the east in the study period, however, was that of the public utilities sector centroid. Its longitude in 2006 was 2.39 degrees east, and it had shifted to 45.4 degrees east by 2016. This change was mainly due to the growth of China's economy, which helped China grow its share of world CPI in this sector from 1.8% to 32.4% during the study period. South Korea and Malaysia also made significant gains in this area. Together, these changes altered city ranks. In 2006 Paris held 26% of the SCPI in the public utilities sector, while London, Madrid, and Tokyo held approximately 5% each. In 2016 Beijing held 18% of the sector, followed by Hong Kong with 17%; for comparison, the two cities stood at 0.4% and 1.4%, respectively, in 2006. Another city that made gains was Seoul, with about 3%. Paris's share declined by more than 14 percentage points over the same period.

One may imagine a situation where a decline in the financial performance of major corporations, and its effect on the command and control functions of cities, results in an improvement in the financial condition of the remaining (smaller) firms in a given country. This may be beneficial for the stabilisation of the economy in times of recession, considering that executives of smaller firms make decisions independently and locally, and not in a city geographically remote from production facilities. This may favour local and regional economies as well as national economies. Therefore, declines in various financial indicators for corporations do not necessarily mean that there are major problems in the overall economy; similarly, increases in corporate performance indicators do not necessarily imply success. In this context, the centres of gravity, including those for individual sectors calculated for entire economies, may be located in other geographic places, and the shifts observed for such centres of gravity may differ from those for the group of sectors studied here. Even though no single study can cover an entire economy, in subsequent studies it would be worthwhile to explore the significance of corporations with respect to entire economies, and to determine whether an increase in the significance of corporations and of the command and control function of cities results from the growth of national economies – and may even exceed the national economic growth rate – or contrasts with national economic growth patterns.

In summary, three main rules can be recognised which govern the distribution of, and changes in, the potential of cities based on the command and control functions of the headquarters of corporations hosted by these cities. First, based on the spatial distribution of the C&C potential of cities in the study period, three main regions may be identified which concentrate cities hosting the headquarters of corporations: Western Europe (the European Banana), the United States, and Eastern Asia (primarily China and Japan).
However, new areas were also identified where C&C potential has begun to concentrate: the Middle East, South America (Brazil), and Eastern Europe. Second, regardless of whether the analysis is based on the number of corporate headquarters or on the potential calculated with the Corporation Potential Index, a shift of the centre of gravity of the C&C functions of cities in an easterly direction can be observed. This pertains to almost all sectors of the economy; however, the centroids of the knowledge-based sectors, such as IT and healthcare, are the westernmost ones. The last major pattern is that the magnitude and direction of the shift of the centre of gravity of the number of headquarters in cities do not converge with the shift of the centroid based on the value of the CPI. The best example of this is the IT sector, whose centre of gravity in the period 2006–2016 shifted eastward when calculated based on the number of HQs, yet shifted westward when based on the total potential represented by the CPI. Therefore, the location and shift of the centre of gravity for a given sector largely depend upon the level of scientific advancement of the selected sector. This was confirmed by the research of W. Kilar (2015), who emphasised that the distribution of the locations of corporate HQs belonging to the group of advanced technology industries depends on the attributes of each geographic location, such as: the availability of large numbers of highly qualified employees; a substantial number of employees in the area of basic research (R&D); the socio-economic development of the area where the HQs are located; developed telecommunications infrastructure; transportation accessibility; a "friendly" environment for the generation of new knowledge and technologies; major metropolitan areas; or geographic areas that concentrate companies from certain sectors engaged in specialised, advanced technologies. Areas satisfying such conditions are found primarily in the United States.

The largest shifts to the east in the study period were recorded by sectors such as energy and financials. This is associated with the processes of globalisation and with the growing role of developing countries, primarily China. In 2016, sectors such as industrials, materials, financials, and telecommunications found themselves in the economic realm of the eastern hemisphere, while the centroids of the IT, healthcare, consumer products and energy sectors remained in the western hemisphere. Moreover, the shifts occurred primarily longitudinally, while the latitudinal shifts were small and usually southward, caused mostly by the increasing importance of major cities in Brazil, India, and the general region of the Persian Gulf. Shifts in the centre of gravity are generally consistent with globalisation processes and with the ranks of world cities. In the 21st century, the intercity connectivities of key cities in south-eastern Asia are growing much more rapidly than those of European and North American cities. On the other hand, the marked increases in the connectivity of cities in Latin America do not strongly affect shifts in the centres of gravity of individual sectors or the overall centre of gravity.
References

Alderson, AS & Beckfield, J 2004, 'Power and position in the world city system', American Journal of Sociology, vol. 109, no. 4, pp. 811–851.
Beaverstock, JV, Smith, RG & Taylor, PJ 1999, 'A roster of world cities', Cities, vol. 16, no. 6, pp. 445–458.
Boyce, RR & Clark, WAV 1964, 'The concept of shape in geography', Geographical Review, vol. 54, no. 4, pp. 561–572.
Cohen, RB 1981, 'The new international division of labor, multinational corporations and urban hierarchy' in Urbanization and Urban Planning in Capitalist Societies, eds M Dear & A Scott, Methuen, London–New York, pp. 287–316.
Csomós, G 2013, 'The command and control centers of the United States (2006/2012): An analysis of industry sectors influencing the position of cities', Geoforum, vol. 50, pp. 241–251.
Csomós, G & Derudder, B 2014, 'European cities as command and control centres 2006–11', European Urban and Regional Studies, vol. 21, pp. 345–352.
Csomós, G & Tóth, G 2016, 'Featured graphic. Modelling the shifting command and control function of cities through a gravity model based bidimensional regression analysis', Environment and Planning A, vol. 48, no. 4, pp. 613–615.
Derudder, B & Taylor, P 2016, 'Change in the World City Network, 2000–2012', The Professional Geographer, vol. 68, no. 4, pp. 624–637.
Dorocki, S & Raźniak, P 2017, 'Globalne zmiany ekonomicznego centrum grawitacji w oparciu o funkcje kontrolno-zarządcze miast' [Global changes of the economic centre of gravity based on the control and management functions of cities], Studia Ekonomiczne, no. 320, pp. 140–156.
Dorocki, S, Raźniak, P & Winiarczyk-Raźniak, A 2018, 'Zmiany funkcji kontrolno-zarządczych w miastach europejskich w dobie globalizacji' [Changes in the command and control functions in European cities in the age of globalisation], Prace Komisji Geografii Przemysłu Polskiego Towarzystwa Geograficznego, vol. 32, no. 3, pp. 128–143.
Dorocki, S, Raźniak, P & Winiarczyk-Raźniak, A 2019, 'Changes in the command and control potential of European cities in 2006–2016', Geographia Polonica, vol. 92, no. 3, pp. 275–288.
Dorocki, S, Raźniak, P, Winiarczyk-Raźniak, A & Boguś, M 2017, 'The role of global cities in creation of innovative industry sectors. Case study – life sciences sector' in Proceedings of the 5th International Conference IMES, eds O Dvouletý, M Lukeš & J Mísar, University of Economics, Prague, pp. 136–146.
Dunkerley, J 1995, 'Financing the energy sector in developing countries', Energy Policy, vol. 23, no. 11, pp. 929–939.
Egri, Z & Tánczos, T 2018, 'The spatial peculiarities of economic and social convergence in Central and Eastern Europe', Regional Statistics, vol. 8, no. 1, pp. 49–77.
Forbes Global 2000 2018. Available from: <www.forbes.com>. [17 October 2018].
Friedmann, J 1986, 'The world city hypothesis', Development and Change, vol. 17, pp. 69–83.
Friedmann, J & Wolff, G 1982, 'World city formation: an agenda for research and action (urbanization process)', International Journal of Urban & Regional Research, vol. 6, no. 3, pp. 309–344.
Garbacz, Ch & Thompson, HG Jr 2007, 'Demand for telecommunication services in developing countries', Telecommunications Policy, vol. 31, no. 5, pp. 276–289.
Godfrey, BJ & Zhou, Y 1999, 'Ranking world cities: Multinational corporations and the global urban hierarchy', Urban Geography, vol. 20, no. 3, pp. 268–281.
Goerzen, A, Asmussen, CG & Nielsen, BB 2013, 'Global cities and multinational enterprise location strategy', Journal of International Business Studies, vol. 44, no. 5, pp. 427–450.
Grether, JM & Mathys, NA 2010, 'Is the world's economic centre of gravity already in Asia?', Area, vol. 42, pp. 47–50.
Hall, P 1966, The World Cities, Heinemann, London.
Hennemann, S & Derudder, B 2014, 'An alternative approach to the calculation and analysis of connectivity in the world city network', Environment and Planning B, vol. 41, no. 3, pp. 392–412.
Huanfeng, D & Peiyi, L 2009, 'The variation contrastive analysis of economy gravity center and regional pollution gravity center of China in 1986–2006', Economic Geography, vol. 29, no. 10, pp. 1629–1633.
I-Hui, Ch & Wall, HJ 2005, 'Controlling for heterogeneity in gravity models of trade and integration', Federal Reserve Bank of St. Louis Review, vol. 87, no. 1, pp. 49–63.
Jażdżewska, I 2006, 'Zmiany położenia środka ciężkości miast i ludności miejskiej w Polsce w XX wieku' [Changes in the centre of gravity of cities and urban populations in Poland in the 20th century], Przegląd Geograficzny, vol. 78, no. 4, pp. 561–574.
Kilar, W 2015, 'Settlement concentration of economic potential represented by IT corporations', Geographia Polonica, vol. 88, no. 1, pp. 123–141.
Kincses, Á, Nagy, Z & Tóth, G 2013, 'Prostorske strukture v Evropi' [Spatial structures in Europe], Acta Geographica Slovenica, vol. 53, no. 1, pp. 1–36.
Klein, LR 2009, 'Measurement of a shift in the world's center of economic gravity', Journal of Policy Modeling, vol. 31, no. 4, pp. 489–492.
Kot, J 2007, 'Kriging – A method of statistical interpolation of spatial data', Acta Universitatis Lodziensis. Folia Oeconomica, vol. 206, pp. 89–99.
Kostrubiec, B 1972, 'Analiza zjawisk koncentracji w sieci osadniczej' [Analysis of concentration phenomena in the settlement network], Prace Geograficzne, vol. 93.
Liu, X, Derudder, B, Witlox, F & Hoyler, M 2014, 'Cities as networks within networks of cities: The evolution of the city/firm-duality in the world city network, 2000–2010', Tijdschrift voor Economische en Sociale Geografie, vol. 105, no. 4, pp. 465–482.
Liu, X, Derudder, B & Taylor, P 2014, 'Mapping the evolution of hierarchical and regional tendencies in the world city network, 2000–2010', Computers, Environment and Urban Systems, vol. 43, pp. 51–66.
Masik, G 2016, 'Economic resilience: The case of Poland and certain European regions', Geographia Polonica, vol. 89, no. 4, pp. 457–471.
Masik, G 2019, 'Economic sectors in the research of economic resilience of regions', Studies of the Industrial Geography Commission of the Polish Geographical Society, vol. 33, no. 1, pp. 117–129.
Neal, Z 2016, 'Well connected compared to what? Rethinking frames of reference in world city network research', Environment & Planning A, vol. 49, no. 12, pp. 2859–2877.
Neal, Z, Derudder, B & Taylor, PJ 2019, 'Should I stay or should I go: Predicting advanced producer services firm expansion and contraction', International Regional Science Review, vol. 42, no. 2, pp. 207–229.
Quah, D 2011, 'The global economy's shifting centre of gravity', Global Policy, vol. 2, no. 1, pp. 3–9.
Raźniak, P, Dorocki, S & Winiarczyk-Raźniak, A 2017, 'Permanence of economic potential of cities based on sector development', Chinese Geographical Sciences, vol. 27, no. 1, pp. 123–136.
Raźniak, P, Dorocki, S & Winiarczyk-Raźniak, A 2018, 'Eastern European cities as command and control centres in time of economic crisis', Acta Geographica Slovenica, vol. 58, no. 2, pp. 101–110.
Raźniak, P, Dorocki, S & Winiarczyk-Raźniak, A 2019, 'Resistance of cities performing command and control functions in Central and Eastern Europe to the economic crisis', Prace Komisji Geografii Przemysłu Polskiego Towarzystwa Geograficznego, vol. 33, no. 2, pp. 45–58.
Ramos, R & Suriñach, J 2017, 'A gravity model of migration between the ENC and the EU', Tijdschrift voor Economische en Sociale Geografie, vol. 1, pp. 21–35.
Sassen, S 1988, The Mobility of Labor and Capital. A Study in International Investment and Capital Flow, Cambridge University Press, Cambridge.
Sassen, S 1991, The Global City: New York, London, Tokyo, Princeton University Press, Princeton.
Sassen, S 2000, 'The global city: Strategic site/new frontier', American Studies, vol. 41, no. 2/3, pp. 79–95.
Scharlig, A 1973, 'About the confusion between the center of gravity and Weber's optimum', Regional and Urban Economics, vol. 3, no. 4, pp. 371–382.
Stein, ML 1999, Interpolation of Spatial Data. Some Theory for Kriging, Springer Series in Statistics, Springer, New York.
Śleszyński, P 2018, 'Research topics of geography of enterprise and decision-control functions in Poland against global trends', Prace Komisji Geografii Przemysłu Polskiego Towarzystwa Geograficznego, vol. 32, no. 4, pp. 23–47.
Taylor, PJ, Derudder, B, Faulconbridge, J, Hoyler, M & Ni, P 2014, 'Advanced producer service firms as strategic networks, global cities as strategic places', Economic Geography, vol. 90, no. 3, pp. 267–291.
Taylor, PJ & Csomós, G 2012, 'Cities as control and command centres: Analysis and interpretation', Cities, vol. 29, no. 6, pp. 408–411.
Tóth, G & Csomós, G 2016, 'Mapping the position of cities in corporate research and development through a gravity model-based bidimensional regression analysis', Regional Statistics, vol. 6, no. 1, pp. 217–220.
Tóth, G & Nagy, Z 2017, 'The world's economic centre of gravity', Regional Statistics, vol. 6, no. 2, pp. 177–180.
Yang, X, Derudder, B, Taylor, P, Ni, P & Shen, W 2017, 'Asymmetric global network connectivities in the world city network 2013', Cities, vol. 60, pp. 84–90.
Wojciechowski, L 2004, 'Ekonomiczne modele grawitacyjne – przykłady ich zastosowania w literaturze światowej i polskiej' [Economic gravity models – examples of their application in the world and Polish literature], Zeszyty Naukowe Akademii Ekonomicznej w Poznaniu, vol. 47, pp. 9–37.
If $a + b + c = 0$, where $a \neq b \neq c$, then what is the value of $\frac{a^{2}}{2a^{2}+bc}+\frac{b^{2}}{2b^{2}+ac}+\frac{c^{2}}{2c^{2}+ab}$?

asked in Quantitative Aptitude by makhdoom ghaya

Comment (Lakshman Patel RJIT): Please check the question.

Answer (Lakshman Patel RJIT): Assume values for a, b and c and solve.
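A worked confirmation of that approach (this derivation is mine, not part of the original answer): taking $a=1$, $b=2$, $c=-3$ gives $-\tfrac{1}{4}+\tfrac{4}{5}+\tfrac{9}{20}=1$, and the result holds in general because $b+c=-a$ implies

$$2a^{2}+bc = a^{2}-a(b+c)+bc = (a-b)(a-c),$$

so the expression becomes

$$\frac{a^{2}}{(a-b)(a-c)}+\frac{b^{2}}{(b-a)(b-c)}+\frac{c^{2}}{(c-a)(c-b)},$$

which equals 1 for any distinct $a, b, c$ (it is the leading coefficient of the quadratic interpolating $x^{2}$ at the three points). Hence the value is 1.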
Is there oscillating charge in a hydrogen atom?

In another post, I claimed that there was obviously an oscillating charge in a hydrogen atom when you took the superposition of a 1s and a 2p state. One of the respected members of this community (John Rennie) challenged me on this, saying:

"Why do you say there is an oscillating charge distribution for a hydrogen atom in a superposition of 1s and 2p states? I don't see what is doing the oscillating."

Am I the only one that sees an oscillating charge? Or is John Rennie missing something here? I'd like to know what people think.

asked by Marty Green (tags: quantum-mechanics, atomic-physics, charge)

Comment (Fabian): For large values of the principal quantum number, I would argue that the problem of the motion of the electron should become (semi-)classical. As the Coulomb potential is equal to the Newton potential, the electron follows an elliptic orbit around the proton. And yes, I see an oscillating charge then.

Comment (Fabian): A related problem: imagine the particle in the (quantum) harmonic oscillator is charged. Do you see an oscillating charge in this case? Schrödinger solved the apparent contradiction between the stationary eigenstates and the harmonic motion a long time ago, see here.

Comment (John Rennie): Can you extend your question to explain why you think there is an oscillating charge? For example, are you suggesting it is because the charge density switches between the 1s shape and the 2p shape at some frequency?

Comment (Emilio Pisanty): @JohnRennie Marty is in the right in this instance. The oscillation of the wavefunction is pretty easy to see from the superposition – the overall shape of the superposition will depend on the relative phase of the two components, and this changes over time whenever the eigenenergies are different.

Comment (Emilio Pisanty): (Also – may I suggest an explicit mention of the 1s+2p superposition pure state in the question title?)

Answer (freecharly): The superposition of eigenstates in a hydrogen atom results in a wave function that oscillates in time with a frequency corresponding to the difference of the energies of the eigenstates. Schrödinger for a time considered the square of the wave function to be a charge density, which resulted in an oscillating charge distribution. As this corresponded to an electric dipole oscillation and also explained the intensities and polarization of the observed light emission, he assumed heuristically that this interpretation explained the origin of light emission. See E. Schrödinger, "Collected Papers on Wave Mechanics", Blackie & Son Ltd., London and Glasgow, 1928.

Comment (Marty Green): Odd that no one has commented on your answer so far. I am going to post a follow-up as a separate question: is it true that you get the correct intensity for the emitted light by applying Maxwell's equations to the oscillating charge density?

Comment (Marty Green): And here is the follow-up question: physics.stackexchange.com/questions/293577/…

Comment (Peter Mortensen): Re "...considered for a time...": Are you suggesting it is not correct?
Comment (freecharly): @Marty Green – After finding his famous equation, Schrödinger for some time heuristically assumed that the (square of the single) electron wave function corresponded to the particle and charge being smeared out in real space. He himself realized that this meaning of the wave function could not be maintained for multi-electron wave functions, because they describe the electrons in a higher-dimensional configuration space. Eventually he accepted the electron location probability interpretation suggested by Born.

Comment (freecharly): @Marty Green – I found this recent article on the charge density interpretation of Schrödinger where it is argued that it is actually correct: philsci-archive.pitt.edu/9696/1/electroncloud_v9.pdf

Answer (Emilio Pisanty): In this specific instance you are correct. If you have a hydrogen atom that is completely isolated from the environment, and which has been prepared in a pure quantum state given by a superposition of the $1s$ and $2p$ states, then yes, the charge density of the electron (defined as the electron charge times the probability density, $e|\psi(\mathbf r)|^2$) will oscillate in time. In essence, this is because the $2p$ wavefunction has two lobes with opposite sign, so adding it to the $1s$ blob will tend to shift it towards the positive-sign lobe of the $p$ peanut. However, the relative phase of the two evolves over time, so at some point the $p$ signs will switch over, and the $1s$ blob will be pushed in the other direction.

It's worth doing this in a bit more detail. The two wavefunctions in play are
$$ \psi_{100}(\mathbf r,t) = \frac{1}{\sqrt{\pi a_0^3}} e^{-r/a_0} e^{-iE_{100}t/\hbar} $$
and
$$ \psi_{210}(\mathbf r, t) = \frac{1}{\sqrt{32\pi a_0^5}} \, z \, e^{-r/2a_0} e^{-iE_{210}t/\hbar}, $$
both normalized to unit norm. Here the two energies are different, with the energy difference
$$\Delta E = E_{210}-E_{100} = 10.2\mathrm{\: eV}=\hbar\omega = \frac{2\pi\,\hbar }{405.3\:\mathrm{as}}$$
giving a sub-femtosecond period. This means that the superposition wavefunction has a time dependence,
$$ \psi(\mathbf r,t) = \frac{\psi_{100}(\mathbf r,t) + \psi_{210}(\mathbf r,t)}{\sqrt{2}} = \frac{1}{\sqrt{2\pi a_0^3}} e^{-iE_{100}t/\hbar} \left( e^{-r/a_0} + e^{-i\omega t} \frac{z}{a_0} \frac{ e^{-r/2a_0} }{ 4\sqrt{2} } \right) , $$
and this goes directly into the oscillating density:
$$ |\psi(\mathbf r,t)|^2 = \frac{1}{2\pi a_0^3} \left[ e^{-2r/a_0} + \frac{z^2}{a_0^2} \frac{ e^{-r/a_0} }{ 32 } + z \cos(\omega t) \, \frac{e^{-3r/2a_0}}{2\sqrt{2}\,a_0} \right] . $$
Taking a slice through the $x,z$ plane, this density looks as follows: [animated figure omitted; Mathematica source via Import["http://halirutan.github.io/Mathematica-SE-Tools/decode.m"]["http://i.stack.imgur.com/KAbFl.png"]]

This is indeed what a superposition state looks like, as a function of time, for an isolated hydrogen atom in a pure state. On the other hand, a word of warning: the above statement simply states "this is what the (square modulus of the) wavefunction looks like in this situation". Quantum mechanics strictly restricts itself to providing this quantity with physical meaning if you actually perform high-resolution position measurements at different times, and compare the resulting probability distributions.
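For readers without Mathematica, here is a rough Python equivalent of that density slice (my own sketch, not part of the original answer). It works in atomic units, $a_0=\hbar=1$, where $\omega = E_{210}-E_{100} = 3/8$ a.u.; one atomic time unit is about 24.2 as, so the oscillation period comes out near the 405 as quoted above.

import numpy as np
import matplotlib.pyplot as plt

# Atomic units: a0 = hbar = 1, and E_n = -1/(2 n^2).
omega = -1/8 - (-1/2)            # E_210 - E_100 = 0.375 a.u. (~10.2 eV)
period = 2*np.pi/omega           # ~16.8 a.u. of time, ~405 attoseconds

x = np.linspace(-12, 12, 400)
z = np.linspace(-12, 12, 400)
X, Z = np.meshgrid(x, z)
r = np.sqrt(X**2 + Z**2)         # slice through the y = 0 plane

def density(t):
    # |psi(x, 0, z, t)|^2 from the expression above, with a0 = 1.
    return (np.exp(-2*r)
            + Z**2 * np.exp(-r) / 32
            + Z * np.cos(omega*t) * np.exp(-1.5*r) / (2*np.sqrt(2))) / (2*np.pi)

# Four snapshots across one oscillation period show the charge blob
# sloshing up and down along the z axis.
for k, t in enumerate(np.linspace(0, period, 4, endpoint=False)):
    plt.subplot(1, 4, k + 1)
    plt.imshow(density(t), extent=[-12, 12, -12, 12], origin='lower')
    plt.title(f"t = {t:.1f} a.u.")
plt.show()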
(Alternatively, as done below, you might find some other interesting observable to probe this wavefunction, but the message is the same: you don't really get to talk about physical stuff until and unless you perform a projective measurement.) This means that, even with the wavefunction above, quantum mechanics does not go as far as saying that "there is oscillating charge" in this situation. In fact, that is a counterfactual statement, since it implies knowledge of the position of the electron in the same atom at different times without a (state-destroying) measurement. Any such claims, tempting as they are, are strictly outside of the formal machinery and interpretations of quantum mechanics.

Also, and for clarity, this superposition state, like any hydrogen state with support in $n>1$ states, will eventually decay down to the ground state by emitting a photon. However, the lifetime of the $2p$ state is on the order of $1.5\:\mathrm{ns}$, so there's room for some four million oscillations of the superposition state before it really starts decaying. A lot of atomic physics was forged in a time when a nanosecond was essentially instantaneous, and this informed a lot of our attitudes towards atomic superposition states. However, current technology makes subpicosecond resolution available with a modest effort, and femtosecond resolution (and better) is by now routine for many groups. The coherent dynamics of electrons in superposition states has been the name of the game for some time now.

It's also important to make an additional caveat: this is not the state that you will get if you initialize the atom in the excited $2p$ state and wait for it to decay until half of the population is in the ground state. In a full quantum mechanical treatment, you also need to consider the quantum mechanics of the radiation field, which you usually initialize in the vacuum, $|0\rangle$, but that means that after half the population has decayed, the state of the system is
$$ |\Psi\rangle = \frac{|1s\rangle|\psi\rangle + |2p\rangle|0\rangle}{\sqrt{2}}, $$
where $|\psi\rangle$ is a state of the radiation field with a single photon in it, and which is therefore orthogonal to the EM vacuum $|0\rangle$. What that means is that the atom and the radiation field are entangled, and that neither can be considered to even have a pure quantum state on its own. Instead, the state of the atom is fully described (for all experiments that do not involve looking at the radiation that's already been emitted) by the reduced density matrix obtained by tracing out the radiation field,
$$ \rho_\mathrm{atom} = \operatorname{Tr}_\mathrm{EM}\left(|\Psi\rangle\langle\Psi|\right) = \frac{|1s\rangle\langle 1s| + |2p\rangle\langle 2p|}{2}, $$
and this does not show any oscillations in the charge density.

Foundational issues about interpretations aside, it's important to note that this is indeed a real, physical oscillation (of the wavefunction, at least), and that equivalent oscillations have indeed been observed experimentally. Doing it for this hydrogen superposition is very challenging, because the period is blazingly fast, and it's currently just out of reach for the methods we have at the moment. (That's likely to change over the next five to ten years, though: we broke the attosecond precision barrier just last week.) The landmark experiment in this regard therefore used a slightly slower superposition, with a tighter energy spacing. In particular, they used two different fine-structure states within the valence shell of the Kr+ ion, i.e.
the states $4p_{3/2}^{-1}$ and $4p_{1/2}^{-1}$, which have the same $n$ and $L$, but with different spin-orbit alignments, giving different total angular momenta, and which are separated by
$$\Delta E=0.67\:\mathrm{eV}=2\pi\hbar/6.17\:\mathrm{fs}.$$
That experiment is reported in "Real-time observation of valence electron motion", E. Goulielmakis et al., Nature 466, 739 (2010).

They prepared the superposition by removing one of the $4p$ electrons of Kr using tunnel ionization, with a strong ~2-cycle pulse in the IR, which is plenty hard to get right. The crucial step, of course, is the measurement, which is a second ionization step, using a single, very short ($<150\:\mathrm{as}$) UV burst of light. Here the superposition you're probing is slightly more complicated than the hydrogen wavefunction the OP asks about, but the essentials remain the same. Basically, the electron is in a superposition of an $l=1,m=0$ state and an $l=1,m=1$ state, with an oscillation between them induced by the difference in energy given by the spin-orbit coupling. This means that the shape of the ion's charge density is changing with time, and this will directly impact how easy it is for the UV pulse to ionize it again to form Kr2+.

What you end up measuring is absorbance: if the UV ionizes the system, then it's absorbed more strongly. The absorption data therefore shows a clear oscillation as a function of the delay between the two pulses. [Figure omitted.] The pictures below (omitted here) give a good indication of how the electron cloud moves over time. (That's actually the hole density w.r.t. the charge density of the neutral Kr atom, but it's all the same, really.) However, it's important to note that the pictures are obviously only theoretical reconstructions.

Anyways, there you have it: charge densities (defined as $e|\psi(\mathbf r)|^2$) do oscillate over time, for isolated atoms in pure superposition states.

Finally, the standard caveats apply: the oscillations caused in quantum mechanics by superpositions are only valid for pure, isolated states. If your system is entangled with the environment (or, as noted above, with the radiation it's already emitted), then this will degrade (and typically kill) any oscillations of local observables. If the overall state of the world is in some meaningful superposition of energy eigenstates, then that state will indeed evolve in time. However, for heavily entangled states, like thermal states or anything strongly coupled to the environment, any local observables will typically be stationary, because each half of an entangled state doesn't even have a proper state to call its own.

Answer (tparker): Yes, the electric charge density (or more precisely, the electron's spatial probability density $p(x)$) would indeed oscillate with time, with frequency $10.2\text{ eV}/\hbar$. Energy eigenstates are stationary in time; since the state you propose isn't an energy eigenstate, it isn't stationary in time.

I don't understand anna v's answer on several levels. First of all, I don't see what the fine structure has to do with anything, because the $1s$ and $2p$ states have different principal quantum numbers, so they have a fairly large ($10.2\text{ eV} \approx 118000\text{ K}$) energy difference even in the completely non-relativistic limit. Second of all, I don't understand her claim that superpositions of different energy levels aren't allowed – if this were true, then nothing would ever change with time!
I think that what anna v's getting at is that if you include relativistic corrections from QED – i.e. you treat the electromagnetic field as quantum-mechanical – then the usual non-relativistic electron eigenstates are no longer exact eigenstates of the full relativistic Hamiltonian, and so electrons can undergo spontaneous emission or absorption of photons and change energy levels. I'm not sure what the time scales for this process are. But if you ignore relativistic QED effects (which I think is what Marty Green had in mind), then the electric charge distribution will indeed oscillate indefinitely.

Comment (anna v): "her claim that superpositions of different energy levels aren't allowed – if this were true, then nothing would ever change with time!" Probabilities are normalized to 1 for each energy level, otherwise there would be no stability. Things change in time by incoming and outgoing packets of energy, momentum and angular momentum; photons at the hydrogen level (but not only, in general).

Comment (anna v): You cannot have a hydrogen atom at a mix of an 1s and 2p state. It would emit a photon and go to the 1s state.

Comment (Emilio Pisanty): @annav Indeed, it would emit a photon and go to the $1s$ state – on a nanosecond timescale, which leaves room for about four million oscillations at the 0.4 fs period of the superposition.

Answer (akhmeteli): First, I respectfully disagree with @anna v's statement that there cannot be a superposition of two states with different energy (although she seems to withdraw this statement in her comment). The superposition principle rules supreme, so if each of two states is possible, any superposition of the states is possible. Stability of energy levels does not seem relevant, as only the ground level is stable anyway.

Now, let us consider some (non-normalized) superposition, say $$\psi_1(\vec{r}) \exp(iE_1 t)+\psi_2(\vec{r}) \exp(iE_2 t),$$ of two eigenstates of energy, $\psi_1(\vec{r}) \exp(iE_1 t)$ and $\psi_2(\vec{r}) \exp(iE_2 t)$. The probability density and the charge density (averaged over an ensemble) for this superposition will equal (up to a constant factor)
\begin{align} (\psi_1(\vec{r}) \exp(iE_1 t)+\psi_2(\vec{r}) & \exp(iE_2 t))^* (\psi_1(\vec{r}) \exp(iE_1 t)+\psi_2(\vec{r}) \exp(iE_2 t)) \\&=|\psi_1(\vec{r})|^2+|\psi_2(\vec{r})|^2+2\Re(\psi_1(\vec{r})^*\psi_2(\vec{r})\exp(i(E_2-E_1)t)). \end{align}
Therefore, the charge density (averaged over an ensemble) for the superposition does have an oscillating part at (pretty much) any point.

EDIT (11/19/2016): In my initial answer above, I tried to avoid interpretational issues. However, as the OP accepted @freecharly's answer (and expressed interest in comments on that answer), and as @anna v added in her answer that "It is very clear that the space charge distribution is not oscillating at the individual electron level, the charge sticks to the electron as the spot shows", I conclude that there may be clear interest in the interpretation, so let me add a few words.

freecharly mentioned the well-known interpretation by Schrödinger, where the squared magnitude of the wave function is the charge density. This interpretation has some weaknesses.
For example, a wave packet in free space spreads steadily, which is in tension with the charge of the electron being an integer. In my article http://link.springer.com/content/pdf/10.1140%2Fepjc%2Fs10052-013-2371-4.pdf (published in the European Physical Journal C) (at the end of section 3) I proposed another tentative interpretation: "the one-particle wave function may describe a large (infinite?) number of particles moving along the trajectories defined in the de Broglie–Bohm interpretation. The total charge, calculated as an integral of charge density over the infinite 3-volume, may still equal the charge of electron. So the individual particles can be either electrons or positrons, but together they can be regarded as one electron, as the total charge is conserved." This seems to be compatible with the notion of vacuum polarization and can provide the same charge density as in Schrödinger's interpretation (while the total charge in any volume is integer, there can be a fractional average charge density in the limit of decreasing volume); however, wave packet spreading is not as problematic. – akhmeteli

Here are the hydrogen atom energy levels. For hydrogen and other nuclei stripped to one electron, the energy depends only upon the principal quantum number n. This fits the hydrogen spectrum unless you take a high-resolution look at the fine structure, where the electron spin and orbital quantum numbers are involved. At even higher resolutions, there is a tiny dependence upon the orbital quantum number in the Lamb shift. So your "when you took the superposition of a 1s and a 2p state" does not make sense in a quantum mechanical frame. There cannot be a superposition because these are two distinct energy levels. To go from the 2p to the 1s, energy and angular momentum must be radiated away, and to go from the 1s to the 2p, energy is needed. Energy levels are posited as stable in the quantum mechanical frame, i.e. the probability that an electron which was in a 1s state remains in the 1s state is 1, unless energy is supplied. Quantum mechanics is continually validated. Thus there cannot be an oscillation between 1s and 2p states and energy conservation at the same time.

Edit after comment by Emilio Pisanty:

Me: You cannot have a hydrogen atom in a mix of a 1s and a 2p state. It would emit a photon and go to the 1s state. – anna v

Reply: @annav Indeed, it would emit a photon and go to the 1s state - on a nanosecond timescale, which leaves room for about four million oscillations at the 0.4 fs period of the superposition. - Emilio Pisanty

In conclusion, I will accept that my answer holds for times larger than nanosecond scales. It seems that technology is reaching much faster times than I was aware, and superposed states can exist within these time limits. Now whether the oscillation of a wavefunction squared (i.e. a probability) can be considered as oscillating charges is not clear. The double-slit experiment, one particle at a time, shows that the interference pattern is a probability wave, not a mass wave. There is one spot on the screen for each single electron, the spot coming from the interaction of the electron with the atoms of the screen. The accumulation shows the interference, i.e. the oscillation over space of the accumulated charge distribution. It is very clear that the space charge distribution is not oscillating at the individual electron level; the charge sticks to the electron, as the spot shows.
Thus I expect that the time oscillation of the probability distribution shown in Emilio's answer cannot be interpreted as anything other than a probability distribution: "how probable it is that the single electron of the hydrogen atom will be found at the (x,y,z,t) point". It is another way of saying that the electron is not in an orbit, but in an orbital, in the hydrogen atom. Thus I will be very skeptical that it is the charge that is waving on the femtosecond scale, not least because classically oscillating charges radiate (see the answer by freecharly). – anna v

$\begingroup$ Regarding whether oscillation of probability distribution can be considered as charge oscillation, consider that in the classical limit the electron's very localized wave packet will orbit the nucleus, still being a probability cloud. Its mean value will indeed obey classical equations of motion. (See my related self-answer for an animation of such a wave packet evolution). Thus for all intents and purposes, this probability cloud does represent an orbiting electron. $\endgroup$ – Ruslan Dec 20 '17 at 11:37
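To make the oscillation discussed in this thread concrete for the actual 1s–2p case, here is a minimal numerical sketch (Python, atomic units). The radial functions and the $1/\sqrt{3}$ angular factor are the standard hydrogen results; the integral should return the well-known dipole matrix element of about 0.745 Bohr radii:

import numpy as np

# Radial grid in atomic units (lengths in Bohr radii, energies in Hartree)
r = np.linspace(1e-6, 40.0, 20000)

R10 = 2.0 * np.exp(-r)                          # hydrogen 1s radial function
R21 = r * np.exp(-r / 2.0) / (2.0 * np.sqrt(6)) # hydrogen 2p radial function

# <1s| z |2p0> = (1/sqrt(3)) * integral of R10 * R21 * r^3 dr
z12 = np.trapz(R10 * R21 * r**3, r) / np.sqrt(3.0)
print(z12)  # ~0.745 (in units of the Bohr radius)

# Equal-weight superposition psi = (|1s> + e^{-i w t} |2p0>)/sqrt(2):
# <z>(t) = z12 * cos(w t), with w = E_2p - E_1s = 3/8 Hartree = 10.2 eV
w = 0.375
t = np.linspace(0.0, 3 * 2 * np.pi / w, 300)
z_of_t = z12 * np.cos(w * t)  # the dipole moment oscillates at the Bohr frequency

The nonzero, oscillating dipole moment ⟨z⟩(t) is exactly the "sloshing" charge density described in the answers above; in the Schrödinger picture it is also what couples the superposition to the radiation field and eventually de-excites it on the nanosecond timescale mentioned in the comments.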
Earth, Moon, and Planets, December 2005, Volume 97, Issue 3–4, pp 459–470

A Widebinary Solar Companion as a Possible Origin of Sedna-like Objects

John J. Matese, Daniel P. Whitmire, Jack J. Lissauer

Sedna is the first inner Oort cloud object to be discovered. Its dynamical origin remains unclear, and a possible mechanism is considered here. We investigate the parameter space of a hypothetical solar companion which could adiabatically detach the perihelion of a Neptune-dominated TNO with a Sedna-like semimajor axis. Demanding that the TNO's maximum value of osculating perihelion exceed Sedna's observed value of 76 AU, we find that the companion's mass and orbital parameters \((m_c, a_c, q_c, Q_c, i_c)\) are restricted to $$m_c\gtrapprox 5\,\hbox{M}_{\rm J}\left(\frac{Q_c}{7850\hbox{ AU}}\,\frac{q_c}{7850\hbox{ AU}}\right)^{3/2}$$ during the epoch of strongest perturbations. The ecliptic inclination of the companion should be in the range \(45^{\circ}\lessapprox i_c\lessapprox 135^{\circ}\) if the TNO is to retain a small inclination while its perihelion is increased. We also consider the circumstances where the minimum value of osculating perihelion would pass the object to the dynamical dominance of Saturn and Jupiter, if allowed. It has previously been argued that an overpopulated band of outer Oort cloud comets with an anomalous distribution of orbital elements could be produced by a solar companion with present parameter values $$m_c\approx 5\,\hbox{M}_{\rm J}\left(\frac{9000\hbox{ AU}}{a_c}\right)^{1/2}.$$ If the same hypothetical object is responsible for both observations, then it is likely recorded in the IRAS and possibly the 2MASS databases.

Keywords: Kuiper Belt; Oort Cloud; comets: 2003 VB12; comets: general; binaries: general

The authors gratefully acknowledge informative exchanges with Rodney Gomes. J.J.L. received support from NASA Planetary Geology and Geophysics Grant 344-30-50-01.

Appendix A: Dynamics

We approximate the companion orbit as an invariant ellipse of mass and orbital parameters \(m_c, a_c, q_c, Q_c, i_c\) having orbit normal \(\hat{{\bf n}}_{\bf c}\). The heliocentric companion position is denoted by \({\bf r_c}\) while the heliocentric Sedna position is \({\bf r}\). The barycentric solar location is $${\bf r}_{\odot}=-\frac{m_c}{\hbox{M}_\odot+\sum_p M_p+m_c} {\bf r}_{\bf c}\equiv-\frac{m_c}{\hbox{M}_{\circ}+m_c}{\bf r}_{\bf c}.$$ Newton's equations of motion for the STNO are then $$\ddot{{\bf r}}=-\ddot{{\bf r}}_{\odot}+{\bf g}_{\odot}+\mathop{\varvec{\sum}}\limits_p {\bf g}_{\bf p}+{\bf g}_{\bf c},$$ where \({\bf g}_{\odot,{\bf p,c}}\) are the gravitational fields at the STNO's location due to the Sun, the planets and the companion, respectively. Further, we approximate the planetary perturbations by treating the planets as circular rings. In the limits \(r_p \ll r \ll r_c\), we expand both the planetary and companion interactions.
Thus $${\bf g}_{\odot}=\nabla_{\bf r}\left(\frac{\mu_{\odot}}{r}\right)$$ $${\bf g}_{\bf p}\approx \nabla_{\bf r}\left(\frac{\mu_p}{r}+\frac{{\mu_p {r_p}^2}\left(r^2-3 \left({\bf r}\cdot\hat{{\bf n}}_{\bf p}\right)^2\right)}{4 r^5}\right)$$ $${\bf g}_{\rm c}=\nabla_{\bf r}\left(\frac{\mu_c}{|{\bf r}-{\bf r_c}|}\right)\approx\nabla_{\bf r}\left(\frac{\mu_c\left(2{\bf r}_{\bf c}\cdot{\bf r}\,r_c^2+3({\bf r}_{\bf c}\cdot{\bf r})^2-r^2 r_c^2\right)}{2r_c^5}\right).$$ Combining these results, we obtain $$\begin{aligned} \ddot{{\bf r}} & \approx\nabla_{\bf r_{\bf c}}\left(\frac{\mu_c}{r_c}\right)+\nabla_{\bf r}\left(\frac{\mu_{\circ}}{r}+\frac{{\cal I}_p\left(r^{2}-3\left({\bf r}\cdot\hat{{\bf n}}_{\bf p}\right)^2\right)}{4 r^5}+\frac{\mu_c \left(2{\bf r}_{\bf c}\cdot{\bf r}\,r_c^2+3\left({\bf r}_{\bf c}\cdot{\bf r}\right)^2-r^2 r_c^2\right)}{2r_c^5}\right)\\ & \quad=\nabla_{\bf r}\left(\frac{\mu_{\circ}}{r}+\frac{{\cal I}_p\left(r^{2}-3 ({\bf r}\cdot\hat{{\bf n}}_{\bf p})^2\right)}{4r^5}+\frac{\mu_c\left(3({\bf r}_{\bf c}\cdot{\bf r})^2-r^2 r_c^2\right)}{2r_c^5}\right)\equiv {\bf a}_{\bf 0}+{\bf a}_{\bf p}+{\bf a}_{\bf c}, \end{aligned}$$ where \({\cal I}_p\equiv\sum_p\mu_p r_p^2\) and \(\mu_{\circ}\equiv\mu_\odot +\sum_p \mu_p\). We then construct the equations of motion for the scaled angular momentum vector and the eccentricity vector, $${\bf h}\equiv\frac{{\bf r}\times\dot{{\bf r}}}{\sqrt{\mu_{\circ} a}},\quad {\bf e}\equiv\frac{\dot{{\bf r}}\times {\bf h}}{\sqrt{\mu_{\circ} /a}} -\hat{{\bf r}},$$ which yields: $$\dot{{\bf h}} = \frac{{\bf r}\times({\bf a_p}+{\bf a_c})}{\sqrt{\mu_{\circ} a}},\quad \dot{{\bf e}}=\frac{\left(({\bf a_p}+{\bf a_c})\times {\bf h}+{\bf \dot{r}}\times\dot{{\bf h}}\right)}{\sqrt{\mu_{\circ}/ a}}.$$ Expressing the positions of the STNO and the companion in vector form $${\bf r}=\frac{a(1-e^2)}{1+e \cos f} \left({\bf \hat{e}}\cos{f}+({\bf \hat{h}}\times {\bf \hat{e}})\sin f\right),$$ $${\bf r_c}=\frac{a_c(1-e_c^2)}{1+e_c \cos f_c} \left({\bf \hat{e}_c}\cos{f_c}+({\bf \hat{n}_c}\times{\bf \hat{e}_c})\sin{f_c}\right),$$ we sequentially perform secular averages over the short (STNO) orbital period and the long (companion) orbital period to obtain: $$\langle\dot{\bf h}\rangle=\frac{2 ({\bf \hat{n}_p}\cdot {\bf h}) {\bf \hat{n}_p}\times {\bf h}}{h^5 \tau_p} +\frac{5({\bf \hat{n}_c} \cdot {\bf e}) {\bf \hat{n}_c} \times {\bf e}- ({\bf \hat{n}_c}\cdot {\bf h}) {\bf \hat{n}_c}\times {\bf h}}{\tau_c}$$ (A9) $$\begin{aligned} \langle\dot{\bf e}\rangle&=\frac{( h^2-3({\bf \hat{n}_p}\cdot {\bf h})^2) {\bf h}\times{\bf e} -2 ({\bf \hat{n}_p}\cdot{\bf h}) ({\bf \hat{n}_p}\cdot({\bf h}\times{\bf e})) {\bf h}}{h^7 \tau_p}+\\ &\quad+\frac{{\bf h} \times {\bf e} +4 ({\bf \hat{n}_c}\cdot {\bf e}) {\bf\hat{n}_c}\times {\bf h}+({\bf \hat{n}_c}\cdot({\bf h}\times{\bf e})){\bf \hat{n}_c}}{\tau_c}, \end{aligned}$$ (A10) where $$\frac{1}{\tau_p}\equiv\frac{3 {\cal I}_p}{8\sqrt{\mu_{\circ} a^7}} \xrightarrow{\ \text{Sedna}\ }\frac{1}{10 \hbox{ Gy}}\quad\hbox{and}\quad\frac{1}{\tau_c}\equiv\frac{3 m_c}{4\hbox{M}_{\circ}}\sqrt{\frac{\mu_{\circ} a^3}{q_c^3 Q_c^3}}\equiv\frac{\gamma_c}{\tau_p}.$$ (A11) These analytic forms are obtained using Mathematica (Wolfram Research, 2003). We see in Eq. (A9) that the secular planetary interaction produces orbit normal precession around \({\bf \hat{n}_p}\), while a similar term in the secular companion interaction produces orbit normal precession around \({\bf \hat{n}_c}\).
It is the term \(\propto ({\bf \hat{n}_c}\cdot{\bf e}) {\bf \hat{n}_c} \times {\bf e}\) that dominates the nutation of h and the changes in perihelion distances for large-eccentricity STNO. The analysis reproduces a well-known result: in the secular approximation, planetary perturbations alone do not change e (Goldreich, 1965). The secularly averaged equations depend on the companion elements through the quantities \({\bf \hat{n}_c}\) and \(\tau_c\). There are several symmetries evident in the equations, such as their invariance when \({\bf \hat{n}_c}\rightarrow - {\bf \hat{n}_c}\), i.e., \(i_c \rightarrow \pi - i_c\), and their independence of the companion perihelion direction, \({\bf \hat{e}_c}\). Orienting our axes as shown in Figure 1, we see that the companion can be characterized by two parameters, \(\gamma_c\) and \(i_c\), assumed to be constant here. Of course a wide-binary companion orbit is subject to perturbations from passing stars and the galactic tide. Therefore, these parameters essentially describe the epoch when companion interactions with the STNO are strongest, i.e., when \(\gamma_c\) is largest. The galactic tide will change \(e_c\) and \(i_c\), but changes are small for \(a_c\lessapprox\) 10,000 AU. Osculations proceed through \(\gtrapprox\) one half-cycle in 4.6 Gy when \(a_c\gtrapprox\) 20,000 AU. The STNO orbit is characterized by a secularly constant semimajor axis, a, and four variable elements \(i\), \(\omega\), \(\Omega\) and \(e\). The six coupled equations for the components of e and h are restricted by the two conserved quantities, \({\bf h}\cdot{\bf e}=0\) and \(h^2+e^2=1\), which serve as checks on our numerical solutions.

References

Brown M. E., Trujillo C., Rabinowitz D. (2004). ApJ 617: 645
Burrows A., Sudarsky D., Lunine J. I. (2003). ApJ 596: 587
Chauvin G. et al. (2004). A&A 425(2): L29
Emel'yanenko V. V., Asher D., Bailey M. (2002). MNRAS 338(2): 443
Goldreich P. (1965). Rev. Geophys. 4: 411
Gomes R. S., Gallardo T., Fernández J. A., Brunini A. (2005). CeMDA 91: 109
Gomes, R. S., Matese, J. J., and Lissauer, J. J.: 2006, Icarus (in press)
Matese J. J., Whitman P. G., Whitmire D. P. (1999). Icarus 141: 354
Matese, J. J. and Lissauer, J. J.: 2002, in B. Warmbein (ed.), Proceedings of Asteroids Comets Meteors 2002, ESA SP-500, p. 309
Morbidelli A., Levison H. (2004). AJ 128: 2564
Wolfram Research: 2003, Mathematica 5

1. Department of Physics, University of Louisiana, Lafayette, USA
2. Space Science and Astrobiology Division, MS 245-3, NASA Ames Research Center, Moffett Field, USA

Matese, J.J., Whitmire, D.P. & Lissauer, J.J. Earth Moon Planet (2005) 97: 459. https://doi.org/10.1007/s11038-006-9078-6. Received 13 September 2005. Publisher: Kluwer Academic Publishers.
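As a rough numerical orientation on the timescales defined in Eq. (A11) of the appendix above, the sketch below plugs in illustrative values (Python). The Sedna-like semimajor axis a ≈ 500 AU and the use of Kepler units with GM☉ = 4π² AU³/yr² are assumptions made here for illustration, not values taken from the paper:

import numpy as np

# Kepler units: AU, yr, solar masses, so that G*M_sun = 4*pi^2 AU^3/yr^2
mu = 4.0 * np.pi**2

a = 500.0            # Sedna-like STNO semimajor axis in AU (assumed)
m_c = 5.0 * 9.54e-4  # companion mass: 5 Jupiter masses, in solar masses
q_c = Q_c = 7850.0   # companion perihelion/aphelion distances in AU

# Eq. (A11): 1/tau_c = (3 m_c / 4 M_circ) * sqrt(mu a^3 / (q_c^3 Q_c^3)),
# taking M_circ ~ 1 solar mass (the planets add only ~0.1%)
inv_tau_c = (3.0 * m_c / 4.0) * np.sqrt(mu * a**3 / (q_c**3 * Q_c**3))
tau_c = 1.0 / inv_tau_c

tau_p = 10e9                  # ~10 Gy for Sedna, as quoted in Eq. (A11)
print(tau_c / 1e9)            # ~2 Gyr: companion-driven secular timescale
print(tau_p / tau_c)          # gamma_c = tau_p / tau_c ~ 5

With these inputs the companion term acts on a ~2 Gyr timescale, i.e., γ_c of order a few, consistent with the companion dominating the secular evolution during the epoch of strongest perturbations.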
March 2008, Volume 9, Issue 2

Homogenization and long time asymptotic of a fluid-structure interaction problem
Grégoire Allaire and Alessandro Ferriero
2008, 9(2): 199-220. doi: 10.3934/dcdsb.2008.9.199
We study the homogenization of an unsteady fluid-structure interaction problem with a scaling corresponding to a long time asymptotic regime. We consider oscillating initial data which are Bloch wave packets corresponding to tubes vibrating in opposition of phase. We prove that the initial displacements follow the rays of geometric optics and that the envelope function evolves according to a Schrödinger equation which can be interpreted as an effect of dispersion.
Grégoire Allaire, Alessandro Ferriero. Homogenization and long time asymptotic of a fluid-structure interaction problem. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 199-220. doi: 10.3934/dcdsb.2008.9.199.

Lie symmetries, qualitative analysis and exact solutions of nonlinear Schrödinger equations with inhomogeneous nonlinearities
Juan Belmonte-Beitia, Víctor M. Pérez-García, Vadym Vekslerchik and Pedro J. Torres
Using Lie group theory and canonical transformations, we construct explicit solutions of nonlinear Schrödinger equations with spatially inhomogeneous nonlinearities. We present the general theory, use it to study different examples and use the qualitative theory of dynamical systems to obtain some properties of these solutions.
Juan Belmonte-Beitia, Víctor M. Pérez-García, Vadym Vekslerchik, Pedro J. Torres. Lie symmetries, qualitative analysis and exact solutions of nonlinear Schrödinger equations with inhomogeneous nonlinearities. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 221-233. doi: 10.3934/dcdsb.2008.9.221.

Adaptive synchronization of a class of uncertain chaotic systems
Samuel Bowong and Jean Luc Dimi
The aim of this paper is to study the adaptive synchronization of a class of uncertain chaotic systems in the drive-response framework. A robust adaptive observer-based response system is designed to synchronize a given chaotic system with uncertainties. An improved adaptation law on the upper bound of uncertainties is proposed to guarantee the boundedness of both the synchronization error and the estimated feedback coupling gains when a boundary layer technique is employed. A numerical example of the modified Chua's circuit is considered to show the efficiency and effectiveness of this scheme.
Samuel Bowong, Jean Luc Dimi. Adaptive synchronization of a class of uncertain chaotic systems. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 235-248. doi: 10.3934/dcdsb.2008.9.235.

Asymptotic behavior of size-structured populations via juvenile-adult interaction
József Z. Farkas and Thomas Hagen
In this work a size-structured juvenile-adult population model is considered. The linearized dynamical behavior of stationary solutions is analyzed using semigroup and spectral methods. The regularity of the governing linear semigroup allows us to derive biologically meaningful conditions for the linear stability of stationary solutions. The main emphasis in this work is on juvenile-adult interaction and resulting consequences for the dynamics of the system. In addition, we investigate numerically the effect of a non-zero population inflow, due to an external source of newborns, on the linear dynamical behavior of the system in a special case of model ingredients.
József Z. Farkas, Thomas Hagen.
Asymptotic behavior of size-structured populations via juvenile-adult interaction. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 249-266. doi: 10.3934/dcdsb.2008.9.249.

Periodic solutions for a semi-ratio-dependent predator-prey dynamical system with a class of functional responses on time scales
Mostafa Fazly and Mahmoud Hesaaraki
In this paper we explore the existence of periodic solutions of a nonautonomous semi-ratio-dependent predator-prey dynamical system with functional responses on time scales. To illustrate the utility of this work, we should mention that, in our results, this system with a large class of monotone functional responses always has at least one periodic solution. For instance, this system with some celebrated functional responses such as Holling type-II (or Michaelis-Menten), Holling type-III, Ivlev, $mx$ (Holling type I), sigmoidal [e.g., Real and ${mx^2}/{((A+x)(B+x))}$] and some other monotone functions, always has at least one $\omega$-periodic solution. Besides, for some well-known functional responses which are not monotone, such as Monod-Haldane or Holling type-IV, the existence of periodic solutions is proved. Our results extend and improve previous results presented in [4], [10], [22], and [38].
Mostafa Fazly, Mahmoud Hesaaraki. Periodic solutions for a semi-ratio-dependent predator-prey dynamical system with a class of functional responses on time scales. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 267-279. doi: 10.3934/dcdsb.2008.9.267.

A nonlinear degenerate system modelling water-gas flows in porous media
Cédric Galusinski and Mazen Saad
The aim of this paper is to study a system modelling the flow of an incompressible phase (water) and a compressible phase (gas) in porous media. Two kinds of degeneracy appear for this problem: a dissipative term and an evolution term degenerate with respect to the saturation. Global weak solutions are established for the system by introducing several approximate models. The first one consists in obtaining a non-degenerate dissipative system. The second one is a time discretization method in order to overcome the degeneracy in the evolution term. At this step, the subproblem is a non-degenerate elliptic system which is strongly coupled and highly nonlinear. Then the Leray-Schauder fixed point theorem instead of a classical Schauder fixed point theorem is the key point to solve such a problem.
Cédric Galusinski, Mazen Saad. A nonlinear degenerate system modelling water-gas flows in porous media. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 281-308. doi: 10.3934/dcdsb.2008.9.281.

Quasi-static evolution of polyhedral crystals
Przemysław Górka
We examine quasi-static evolution of crystals in three dimensions. We assume that the Wulff shape is a prism with a hexagonal base. We include the Gibbs-Thomson law on the crystal surface and the so-called Stefan condition. We show local in time existence of solutions assuming that the initial crystal has admissible shape.
Przemysław Górka. Quasi-static evolution of polyhedral crystals. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 309-320. doi: 10.3934/dcdsb.2008.9.309.

Feedback-mediated coexistence and oscillations in the chemostat
Willard S. Keeran, Patrick D. Leenheer and Sergei S. Pilyugin
We consider a mathematical model that describes the competition of three species for a single nutrient in a chemostat in which the dilution rate is assumed to be controllable by means of state dependent feedback.
We consider feedback schedules that are affine functions of the species concentrations. In case of two species, we show that the system may undergo a Hopf bifurcation and oscillatory behavior may be induced by appropriately choosing the coefficients of the feedback function. When the growth of the species obeys Michaelis-Menten kinetics, we show that the Hopf bifurcation is supercritical in the relevant parameter region, and the bifurcating periodic solutions for two species are always stable. Finally, we show that by adding a third species to the system, the two-species stable periodic solutions may bifurcate into the coexistence region via a transcritical bifurcation. We give conditions under which the bifurcating orbit is locally asymptotically stable.
Willard S. Keeran, Patrick D. Leenheer, Sergei S. Pilyugin. Feedback-mediated coexistence and oscillations in the chemostat. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 321-351. doi: 10.3934/dcdsb.2008.9.321.

The effect of the remains of the carcass in a two-prey, one-predator model
Sungrim Seirin Lee and Tsuyoshi Kajiwara
2008, 9(2): 353-374. doi: 10.3934/dcdsb.2008.9.353
We propose a two-prey, one-predator model involving the effect of the carcass. We consider a commensal interaction in which one prey species eats the remains of the other prey species' carcass left by their predator. Under some biological assumptions, we construct two ODE models. We analyze the linear stability and prove the permanence of the two models. We also show that the effect of the remains of the carcass leads to chaotic dynamics for biologically reasonable choices of parameters by numerical simulations. Finally, we discuss the dynamical results and the coexistent regions of the three species.
Sungrim Seirin Lee, Tsuyoshi Kajiwara. The effect of the remains of the carcass in a two-prey, one-predator model. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 353-374. doi: 10.3934/dcdsb.2008.9.353.

Uniqueness in determining multiple polygonal scatterers of mixed type
Hongyu Liu and Jun Zou
We prove that a polygonal scatterer in $\mathbb{R}^2$, possibly consisting of finitely many sound-soft and sound-hard polygons, is uniquely determined by a single far-field measurement.
Hongyu Liu, Jun Zou. Uniqueness in determining multiple polygonal scatterers of mixed type. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 375-396. doi: 10.3934/dcdsb.2008.9.375.

A two-parameter geometrical criteria for delay differential equations
Suqi Ma, Zhaosheng Feng and Qishao Lu
In some cases of delay differential equations (DDEs), a delay-dependent coefficient is incorporated into models, taking the form of a function of the delay quantity. This brings forth frequent stability-switch phenomena. A geometrical stability criterion is developed on the two-parameter plane for analyzing Hopf bifurcations of equilibria. It is shown that the increasing direction of parameter $\sigma$ would confirm bifurcation directions (from stable one to unstable one, or vice versa) at the critical delay values. These lead to the definite partition of stable and unstable regions on the $(\sigma-\tau)$ plane. Several examples are given to illustrate how to use this method to detect both Hopf and double Hopf bifurcations.
Suqi Ma, Zhaosheng Feng, Qishao Lu. A two-parameter geometrical criteria for delay differential equations. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 397-413. doi: 10.3934/dcdsb.2008.9.397.
Dynamically consistent discrete Lotka-Volterra competition models derived from nonstandard finite-difference schemes
Lih-Ing W. Roeger
Discrete-time Lotka-Volterra competition models are obtained by applying nonstandard finite difference (NSFD) schemes to the continuous-time counterparts of the model. The NSFD methods are noncanonical symplectic numerical schemes when applied to the predator-prey model $x'=x-xy$ and $y'=-y+xy$. The local dynamics of the discrete-time model are analyzed and compared with the continuous model. We find the NSFD schemes that preserve the local dynamics of the continuous model. The local stability criteria are exactly the same between the continuous model and the discrete model, independent of the step size. Two specific discrete-time Lotka-Volterra competition models obtained by NSFD schemes that preserve positivity of solutions and monotonicity of the system are also given. The two discrete-time models are dynamically consistent with their continuous counterpart.
Lih-Ing W. Roeger. Dynamically consistent discrete Lotka-Volterra competition models derived from nonstandard finite-difference schemes. Discrete & Continuous Dynamical Systems - B, 2008, 9(2): 415-429. doi: 10.3934/dcdsb.2008.9.415.
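For readers unfamiliar with nonstandard finite-difference schemes, the sketch below (Python) shows one common Mickens-type nonlocal discretization of the predator-prey system $x'=x-xy$, $y'=-y+xy$ mentioned in the abstract above. It is an illustration of the general idea only, not necessarily one of the schemes analyzed in the paper; evaluating the loss terms at the new time level keeps the iterates positive for any step size:

def nsfd_step(x, y, h):
    """One step of a Mickens-type nonstandard scheme for
    x' = x - x*y,  y' = -y + x*y.
    Nonlocal discretization: (x_{n+1}-x_n)/h = x_n - x_{n+1}*y_n,
    solved for x_{n+1}; likewise for y. Updates are positive for
    every h > 0, and (0, 0) and (1, 1) remain fixed points."""
    x_new = x * (1.0 + h) / (1.0 + h * y)
    y_new = y * (1.0 + h * x_new) / (1.0 + h)
    return x_new, y_new

x, y = 1.5, 0.5
for _ in range(2000):
    x, y = nsfd_step(x, y, h=0.05)
print(x, y)  # iterates stay positive and circulate around (1, 1)

A short calculation shows the linearization of this map at the coexistence point (1, 1) has determinant exactly 1, i.e., the scheme is area-preserving there — a simple instance of the "noncanonical symplectic" property the abstract refers to.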
Yao-Li Chuang 1,2, Tom Chou 1,3 and Maria R. D'Orsogna 1,2

1. Dept. of Biomathematics, UCLA, Los Angeles, CA 90095-1766, USA
2. Dept. of Mathematics, CSUN, Los Angeles, CA 91330-8313, USA
3. Dept. of Mathematics, UCLA, Los Angeles, CA 90095-1555, USA

* Corresponding author: Maria R. D'Orsogna

Received May 2018, Published January 2019

Fund Project: This work was made possible by support from grants ARO W1911NF-14-1-0472, ARO W1911NF-16-1-0165 (MRD), and NSF DMS-1516675 (TC)

Successfully integrating newcomers into native communities has become a key issue for policy makers, as the growing number of migrants has brought cultural diversity, new skills, but also societal tensions to receiving countries. We develop an agent-based network model to study interacting "hosts" and "guests" and to identify the conditions under which cooperative/integrated or uncooperative/segregated societies arise. Players are assumed to seek socioeconomic prosperity through game theoretic rules that shift network links, and cultural acceptance through opinion dynamics. We find that the main predictor of integration under given initial conditions is the timescale associated with cultural adjustment relative to social link remodeling, for both guests and hosts. Fast cultural adjustment results in cooperation and the establishment of host-guest connections that are sustained over long times. Conversely, fast social link remodeling leads to the irreversible formation of isolated enclaves, as migrants and natives optimize their socioeconomic gains through in-group connections. We discuss how migrant population sizes and increasing socioeconomic rewards for host-guest interactions, through governmental incentives or by admitting migrants with highly desirable skills, may affect the overall immigrant experience.

Keywords: Sociological model, network dynamics, game theory, opinion dynamics, agent-based model.

Mathematics Subject Classification: Primary: 90B15, 91D30; Secondary: 05C40, 05C57.

Citation: Yao-Li Chuang, Tom Chou, Maria R. D'Orsogna. A network model of immigration: Enclave formation vs. cultural integration. Networks & Heterogeneous Media, 2019, 14 (1): 53-77. doi: 10.3934/nhm.2019004
Figure 2. Simulated network dynamics leading to (a) complete segregation, and (b) integration between guest (red) and host (blue) populations. Shading of node colors represents the degree of hostility $|x_i^t|$ of node $i$ towards those of its opposite group, according to the color scheme shown in Fig. 1. Initial conditions are randomly connected guest and host nodes with attitudes $x_{i, {\rm guest}}^{0} = -1$ and $x_{i, {\rm host}}^{0} = 1$. Other parameters are $N_{\rm h} = 900, N_{\rm g} = 100$, $\alpha = 3$, $A_{\rm in} = A_{\rm out} = 10$, $\sigma = 1$. The two panels differ only for $\kappa$, the attitude adjustment timescale, with $\kappa = 1000$ in panel (a) and $\kappa = 100$ in panel (b). (a) For slowly changing attitudes ($\kappa = 1000$), hostile attitudes persist over time, eventually leading to segregated clusters. (b) For fast changing attitudes ($\kappa = 100$), guests initially become more cooperative, as shown by the lighter red colors. Over time, a more connected host-guest cluster arises with hosts eventually adopting more cooperative attitudes as well.

Figure 1. Model diagram. Each node $i$ is characterized by a variable attitude $-1 \le x_i^t \le 1$ at time $t$. Negative values, depicted in red, indicate guest nodes; positive values represent hosts, colored in blue. The magnitude $\vert x_i^t \vert$ represents the degree of hostility of node $i$ towards members of the other group. Each node is shaded accordingly. All nodes $j, k$ linked to the central node $i$ represent the green-shaded social circle $\Omega_i^t$ of node $i$ at time $t$. The utility $U_i^t$ of node $i$ depends on its attitude relative to that of its $m^t_i$ connections in $\Omega_i^t$ and on $m^t_i$. Nodes maximize their utility by adjusting their attitudes $x_i^t$ and by establishing or severing connections, reshaping the network over time.

Figure 3. Dynamics of the average utility per node $ \langle U_i^t \rangle_{\rm guest} $ in panels (a) and (c), and of the average attitudes $ \langle x^t_{i} \rangle_{\rm guest}, \langle x^t_{i} \rangle_{\rm host} $ in panels (b) and (d) for $ N_{\rm g} = 200 $ (a, b) and $ N_{\rm g} = 20 $ (c, d) guests in a total population of $ N = 2000 $ nodes.
Parameters are $ \alpha = 3 $, $ A_{\rm in} = A_{\rm out} = 10 $, and $ \sigma = 1 $, and $ \kappa = 100 $ (faster) and $ \kappa = 1000 $ (slower) attitude adjustment. Initial attitudes are $ x_{i, {\rm host}}^0 = 1 $ and $ x_{i, {\rm guest}}^0 = -1 $, with random connections between nodes so that on average each node is connected to $ m_i^0 = 10 $ others at $ t = 0 $, representing full insertion of guests into the community. Network remodeling (solid-red curve) and attitude adjustment (blue-dashed and green-dotted curves) are considered separately; their interplay is illustrated in full model simulations (purple-dot-dashed and magenta-double-dotted-dashed). Utility is increased in all cases, but attitude adjustment is more efficient at the onset due to the initially set cross-group connections. Network remodeling allows for higher utilities at longer times. For the full model, fast adjustment ($ \kappa = 100 $) leads to well integrated societies for $ N_{\rm g} = 200 $ as $ t \to \infty $, given that $ \langle x^t_{i} \rangle_{\rm host} \to 0^{+} $ and $ \langle x^t_{i} \rangle_{\rm guest} \to 0^{-} $; for $ N_{\rm g} = 20 $ hosts and guests segregate, with guests adopting collaborative attitudes, $ \langle x^t_{i} \rangle_{\rm host} \to 0.93 $ and $ \langle x^t_{i} \rangle_{\rm guest} \to 0^{-} $. Under slow adjustment ($ \kappa = 1000 $) hosts and guests will remain hostile and segregated with $ \langle x^t_{i} \rangle_{\rm host} \to 0.95 $, $ \langle x^t_{i} \rangle_{\rm guest} \to -0.34 $ for $ N_{\rm g} = 200 $ and $ \langle x^t_{i} \rangle_{\rm host} \to 0.99, \langle x^t_{i} \rangle_{\rm guest} \to 0^- $ for $ N_{\rm g} = 20 $.

Figure 4. Dynamics of the integration index $I^t_{\rm int}$ in panels (a) and (c) and of the out-group reward fraction $v^t_{\rm out}$ in panels (b) and (d). Parameters and initial conditions are the same as in Fig. 3. (a, b) Large migrant population $N_{\rm g} = 200$. Here, $I_{\rm int}^t \to 0$ and $v_{\rm out}^t \to 0$ at long times when only network remodeling is allowed, and nodes seek links with conspecifics. If only attitude adjustment is allowed, $I_{\rm int}^t$ remains fixed due to the quenched network connectivity, while $v_{\rm out}^t$ increases as guests and hosts adopt more cooperative attitudes. For the full model, slow attitude changes ($\kappa = 1000$) lead to segregation and $I_{\rm int}^t \to 0$, $v_{\rm out}^t \to 0$ as $t \to \infty$. Fast attitude changes ($\kappa = 100$) lead to non-zero values of $I_{\rm int}^t$ and $v_{\rm out}^t$, indicating a more cooperative society. (c, d) Small migrant population $N_{\rm g} = 20$. Results are similar to the previous case except for the full model where $I_{\rm int}^t \to 0$, $v_{\rm out}^t \to 0$ as $t \to \infty$ for both $\kappa = 1000$ and $\kappa = 100$. For low values of $N_{\rm g}$ segregation arises under both fast and slow attitude changes.

Figure 5. Dynamics of the integration index $I^t_{\rm int}$ in panel (a) and of the out-group reward fraction $v^t_{\rm out}$ in panel (b) for initially cooperative hosts. Parameters are the same as for the full model in Fig. 3, with initially cooperative hosts and uncooperative guests at $x_{i, {\rm host}}^0 = 0^+$ and $x_{i, {\rm guest}}^0 = -1$. (a) $I_{\rm int}^t$ decreases at the onset, eventually rising towards integration, where $I_{\rm int}^t \to 1$ as $t \to \infty$. The initial decrease is more pronounced for slow attitude adjustment ($\kappa = 1000$) and for larger guest populations ($N_{\rm g} = 200$) as described in the text.
(b) $v_{\rm out}^t$ increases over long times as attitude adjustment allows for more cooperation between guests and hosts. Under slow attitude adjustment ($\kappa = 1000$) and large guest populations ($N_{\rm g} = 200$), $v_{\rm out}^t$ decreases at the onset, with players seeking in-group connections. As guests and hosts become more cooperative, $v_{\rm out}^t$ increases.

Figure 6. Dynamics of the integration index $I^t_{\rm int}$ in panel (a) and of the out-group reward fraction $v^t_{\rm out}$ in panel (b) under different initial random connectivities. Parameters are the same as in Fig. 3 with initial hostile attitudes $x_{i, {\rm host}}^0 = 1$ and $x_{i, {\rm guest}}^0 = -1$. In the blue-solid curve $I_{\rm int}^0 = 0.91$; in the green-dashed curve $I_{\rm int}^0 = 0.37$; in the red-dotted curve $I_{\rm int}^0 = 0.06$. (a) For all three cases, $I_{\rm int}^t$ decreases from the initial values, but only the initially poorly connected case of $I_{\rm int}^0 = 0.06$ leads to full segregation, indicated by $I_{\rm int}^t \to 0$ as $t \to \infty$. For the other two cases, $I_{\rm int}^t \to 1$. (b) For all three cases $v_{\rm out}^t$ increases at the onset due to attitude adjustment, and later decreases due to network remodeling. Only $I_{\rm int}^0 = 0.06$ leads to long-time $v_{\rm out}^t \to 0$: as guest-host connections are severed, no socioeconomic utility can be shared. For the other two cases, $v_{\rm out}^t$ increases at long times, suggesting increasing rewards through cross-group connections.

Figure 7. Integration index at steady state. In panel (a) $\langle I^*_{\rm int} \rangle$ is averaged over 20 realizations and plotted as a function of $A_{\rm out} / A_{\rm in}$ with $\kappa = \infty$. The bar indicates the variance. In panel (b) single realizations $I^*_{\rm int}$ are shown as a function of $\kappa$ with $A_{\rm out} / A_{\rm in} = 2$. Other parameters are set at $\alpha = 3$ and $\sigma = 1$, with $N_{\rm h} = 1800$ and $N_{\rm g} = 200$. In both panels red solid circles represent initially unconnected, hostile hosts and guests, $x_{i, {\rm host}}^0 = 1$, $x_{i, {\rm guest}}^0 = -1$; blue triangles correspond to fully cooperative initial conditions $x_{i, {\rm host}}^0 = x_{i, {\rm guest}}^0 = 0$. When the ratio $A_{\rm out} / A_{\rm in}$ increases, the long-time state of the network changes from segregation to uniform mixture, and finally to reversed segregation. The transition for the default initial conditions occurs at larger $A_{\rm out} / A_{\rm in}$ ratios, compared to the cooperative initial conditions, as the former require higher compensation from out-group connections to overlook the hostile attitudes between guests and hosts. In panel (b) each data point corresponds to one realization. Increasing the attitude adjustment timescale $\kappa$ leads to an increased likelihood of segregation. A bimodal regime emerges for intermediate $\kappa$.

Figure 8. Time $\tau_{\rm seg}$ to reach $\langle I_{\rm int}^*\rangle = 0.1$, where 90$\%$ of guest nodes are segregated, as a function of (a) the sensitivity to the reward function $\sigma$, (b) the relative guest population $N_{\rm g}/N$ and (c) the total population $N$ assuming $N_{\rm g} = 0.1 N$. Other parameters are set to $\alpha = 3$, $A_{\rm in} = A_{\rm out} = 10$, $\kappa = 600$ in all panels. In panel (a) $N_{\rm g} = 200$, $N = 2000$; in panel (b) $\sigma = 1$ and $N = 2000$; in panel (c) $\sigma = 1$.
In all three cases, guests and hosts are initially unconnected and hostile to each other, $x_{i, {\rm host}}^0 = 1$ and $x_{i, {\rm guest}}^0 = -1$. Each data point and its error bar represent the mean and the variance over $20$ simulations. In panel (a) increasing $\sigma$ allows for more tolerance to attitude differences, increasing the time to segregation. In panel (b) the higher guest population ratio leads to faster segregation, as guests are more likely to establish in-group connections, forming guest-only enclaves. In panel (c) the time to segregation increases with the overall population, for a constant $10\%$ guest population.

Figure 9. Integration index at steady state. $\langle I^*_{\rm int} \rangle$ is averaged over 10 realizations and plotted as a function of $\kappa$ and $N_{\rm g} / N$ with $\alpha = 3$ in panel (a), and as a function of $\kappa$ and $\alpha$ with $N_{\rm g} / N = 0.1$ in panel (b). Other parameters are set at $A_{\rm in} = 10$, $A_{\rm out} = 20$, $\sigma = 1$, and $N = 2000$. In both panels guests and hosts are initially unconnected, with hostile attitudes, $x_{i, {\rm host}}^0 = 1$, $x_{i, {\rm guest}}^0 = -1$. In panel (a), for smaller $N_{\rm g} / N$, the transition from segregation to integration (or reverse segregation) occurs at larger $\kappa$. In panel (b) increasing $\alpha$ causes the transition point to shift towards larger $\kappa$.

Table 1. List of variables and parameters of the model

Symbol         Description                                      Default values
$x_i$          attitude                                         $-1$ to $1$
$A_{\rm in}$   maximal utility through in-group connection      $10$
$A_{\rm out}$  maximal utility through out-group connection     $1$ to $100$
$\sigma$       sensitivity to attitude difference               $1$
$\kappa$       attitude adjustment timescale                    $100$ to $1000$
$\alpha$       cost of adding connections                       $3$
$N$            total population                                 $2000$
$N_{\rm g}$    guest population                                 $20$ to $200$
$N_{\rm h}$    host population                                  $N - N_{\rm g}$
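To illustrate how the quantities in Table 1 fit together, here is a toy sketch (Python) of the model's two moves: utility evaluation over a node's social circle and attitude adjustment on the timescale $\kappa$. The Gaussian pairwise reward $A\,e^{-(x_i-x_j)^2/\sigma^2}$, the linear link cost $\alpha m$, and the mean-field attitude drift are illustrative assumptions made here; the paper's exact functional forms are not reproduced in this excerpt.

import numpy as np

rng = np.random.default_rng(0)

def reward(xi, xj, A_in=10.0, A_out=20.0, sigma=1.0):
    """Assumed pairwise reward: in-group (same-sign attitudes) or
    out-group amplitude, damped by a Gaussian penalty on the
    attitude difference (illustrative functional form)."""
    A = A_in if xi * xj > 0 else A_out
    return A * np.exp(-((xi - xj) ** 2) / sigma**2)

def utility(i, x, links, alpha=3.0):
    """Total utility of node i: summed pairwise rewards over its
    social circle minus the link cost alpha * m_i (assumed form)."""
    return sum(reward(x[i], x[j]) for j in links[i]) - alpha * len(links[i])

def adjust_attitude(i, x, links, kappa=100.0):
    """Relax x_i toward the mean attitude of its circle on timescale
    kappa (a simple mean-field drift, for illustration only)."""
    if links[i]:
        x[i] += (np.mean([x[j] for j in links[i]]) - x[i]) / kappa

# Tiny demo: 12 nodes with random attitudes and ~30% random links
x = rng.uniform(-1.0, 1.0, size=12)
links = {i: [j for j in range(12) if j != i and rng.random() < 0.3]
         for i in range(12)}
print(utility(0, x, links))

Even at this toy level the competition described in the paper is visible: with $A_{\rm out} > A_{\rm in}$, cross-group links only pay off if attitudes converge quickly (small $\kappa$) relative to how fast costly links are dropped.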
Lifshitz transition from valence fluctuations in YbAl3

Shouvik Chatterjee, Jacob P. Ruf, Haofei I. Wei, Kenneth D. Finkelstein, Darrell G. Schlom & Kyle M. Shen

Nature Communications volume 8, Article number: 852 (2017)

Subject terms: Characterization and analytical techniques; Electronic properties and materials; Surfaces, interfaces and thin films

In mixed-valent Kondo lattice systems, such as YbAl3, interactions between localized and delocalized electrons can lead to fluctuations between two different valence configurations with changing temperature or pressure. The impact of this change on the momentum-space electronic structure is essential for understanding their emergent properties, but has remained enigmatic. Here, by employing a combination of molecular beam epitaxy and in situ angle-resolved photoemission spectroscopy we show that valence fluctuations can lead to dramatic changes in the Fermi surface topology, even resulting in a Lifshitz transition. As the temperature is lowered, a small electron pocket in YbAl3 becomes completely unoccupied while the low-energy ytterbium (Yb) 4f states become increasingly itinerant, acquiring additional spectral weight, longer lifetimes, and well-defined dispersions. Our work presents a unified picture of how local valence fluctuations connect to momentum-space concepts such as band filling and Fermi surface topology in mixed valence systems.

Kondo lattice systems host a wide variety of quantum states such as antiferromagnetism [1], heavy Fermi liquids [2], hidden order [3], and unconventional superconductivity [4], which can often be controlled by modest perturbations using magnetic field or pressure, thereby providing access to quantum phase transitions [5-7]. These states generally emerge from a complex many-body state that is formed by enhanced Kondo coupling between the local rare-earth moments and the band-like conduction electrons at low temperatures. In mixed valence systems [8-10], this coupling also results in a change of the rare-earth valence, which can be determined by core-level spectroscopies that probe the local chemical environment (r-space) [11-13], but the implications for the momentum-space (k-space) electronic structure remain poorly understood. To gain insight into the emergent properties of these systems, it is crucial to understand how delocalized carriers and the low-energy momentum-space electronic structure emerge from these local interactions. Here, we choose YbAl3 as a simple prototypical mixed valence system with two nearly degenerate ytterbium (Yb) valence configurations, Yb^2+ (4f^14) and Yb^3+ (4f^13). The average Yb valence, ν_f, decreases with temperature, changing by Δν_f ≈ −0.05 from 300 K to below T* ≈ 34–40 K [11, 12, 14-16], when it becomes a heavy Fermi liquid, attributed to the enhanced Kondo screening at low temperatures [17]. We selected YbAl3 due to its relatively large change in valence as well as its large energy scales, with a reported single-ion Kondo temperature T_K ≈ 670 K [17, 18], which should make these changes observable in momentum space. The lack of a well-defined, pristine surface in cleaved YbAl3 single crystals [19], however, has previously prevented momentum-resolved measurements of its electronic structure.
We have circumvented this problem by synthesizing epitaxial thin films of YbAl3 and its conventional metal analog LuAl3 by molecular beam epitaxy (MBE) [20] and have combined this with in situ angle-resolved photoemission spectroscopy (ARPES) to directly measure their electronic structure as a function of temperature. Our measurements reveal a strong temperature-dependent change in both the real- and momentum-space electronic structure of YbAl3. The local Yb valence decreases as the temperature is lowered, accompanied by a large shift in the chemical potential which leads to a Lifshitz transition of a small electron pocket at Γ, along with the emergence of renormalized heavy quasiparticles near the Fermi energy (E_F). We establish a direct one-to-one correspondence between these observed changes, which we believe to be generic to all mixed valence systems.
Synthesis and electronic structure
Both YbAl3 and LuAl3 crystallize in a cubic Pm-3m structure where Yb or Lu atoms occupy the vertices of the unit cell while Al atoms occupy the face centers, as illustrated in the inset of Fig. 1b. LuAl3 has fully occupied 4f orbitals with zero net moment and a lattice constant (4.19 Å) closely matched to that of YbAl3 (4.20 Å). Thus, LuAl3 serves as an ideal reference compound for understanding the light, Al-derived, band-like conduction electron states, which are also common to YbAl3. Epitaxial thin films of both LuAl3 and YbAl3 with (001) out-of-plane orientation were synthesized by co-evaporation on MgO (001) substrates (4.21 Å) at temperatures of 200-350 °C and a chamber base pressure below 2 × 10^-9 Torr. For all films, a 1.2 nm thick aluminum (4.05 Å) buffer layer was deposited at 500 °C, which allowed the growth of continuous, smooth films of LuAl3/YbAl3 on top. In these studies, we investigated a 30 nm thick LuAl3 film and a 20 nm thick YbAl3 film (the YbAl3 was synthesized on top of a 20 nm thick LuAl3 buffer layer on top of the Al buffer, which improved the quality of the YbAl3 layers). All films were sufficiently thick that any photoemission intensity from the buffer layers or substrate, as well as thickness-dependent finite-size effects, can be ignored. Additional details about the synthesis can be found in the "Methods" section as well as in ref. 20.
Electronic structure and Fermi surfaces of YbAl3 and LuAl3. Fermi surface maps, energy distribution curves (EDCs), and E vs. k dispersions for a-c, LuAl3 and d-f, YbAl3, all measured with hν = 21.2 eV at 21 K. Experimental Fermi surfaces of a, LuAl3 and d, YbAl3. DFT calculations (green lines) of the Fermi surface topology at k_z = 0 for LuAl3 with U = 0 are overlaid in a. Momentum-integrated EDCs of b, LuAl3 and e, YbAl3, with surface core levels marked as asterisks. E vs. k dispersions for c, LuAl3 and f, YbAl3, together with DFT calculations of the band structure in LuAl3 (green) shown in c. The similarity between the dispersion of the light band between 2 and 6 eV in YbAl3 and LuAl3 suggests that both compounds have similar inner potentials, and that both measurements are at k_z = 0 ± 0.1π/a.
In Fig. 1, we show Fermi surface maps and the electronic structure of LuAl3 and YbAl3 thin films from the Fermi energy (E_F) to a binding energy of 10.5 eV. For LuAl3, only the Lu 4f^13 final states are observed, with the J = 7/2 and 5/2 core levels at binding energies of 6.7 and 8.2 eV, respectively. Highly dispersive Al-derived bands can be observed in both LuAl3 (Fig. 1c) and YbAl3 (Fig. 1f), which extend from about 6 eV binding energy to near E_F.
By matching the experimentally determined dispersion of these bands, as well as the Fermi surface contours measured with both He Iα (21.2 eV) and He IIα (40.8 eV) photons, to density functional theory (DFT) calculations, we are able to determine the out-of-plane momenta (k_z) probed for both YbAl3 and LuAl3. Due to the lack of strong correlations in LuAl3 (its 4f shell is entirely filled), DFT calculations should accurately describe its electronic structure, and indeed we find good agreement between both the DFT-calculated band dispersions and Fermi surface contours and the experimentally determined dispersions and Fermi surface from ARPES, assuming an inner potential of V_0 - ϕ = 13.66 eV (Fig. 1a, c and Supplementary Fig. 1). We observe broadly dispersive, primarily Al-derived bands in YbAl3 (Fig. 1f) analogous to those observed in LuAl3, and also found excellent correspondence between the measured electronic structure in LuAl3 and YbAl3 over the entire Brillouin zone (Supplementary Fig. 3), indicating that we are probing a similar k_z as in LuAl3, as one might expect given their highly similar electronic and crystal structures. Using the value of V_0 - ϕ, we determine that for hν = 21.2 eV, we are probing near the zone center, Γ, with k_z = 0 ± 0.1π/a. More details about the k_z determination can be found in Supplementary Note 1. A two-dimensional slice through Γ (k_z = 0) of the three-dimensional Fermi surface of LuAl3 accesses a multiply connected Fermi surface sheet consisting of electron-like pockets centered at (0, 0) and (π, π), consistent with our ARPES data, shown in Fig. 1a-c. On the other hand, in YbAl3 we clearly observe both the Yb 4f^13 and 4f^12 final states, at around 0-2 eV and 6-10.5 eV binding energy, respectively, consistent with its mixed valence character. The near-E_F electronic structure in YbAl3 is, however, significantly modified by a shift in its chemical potential due to the differing average Lu and Yb valence and by the interaction between the broad, dispersive bands and the renormalized Yb 4f states. In the Fermi surface map of YbAl3 (Fig. 1d), large Fermi surface sheets are prominent and centered at the zone edges (π, π).
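For readers who want to see how the quoted inner potential pins down k_z, the standard free-electron final-state estimate can be evaluated in a few lines. The sketch below is not the authors' code: it assumes the usual convention in which hν - E_B + (V_0 - ϕ) enters the final-state kinetic energy at normal emission, and it uses the textbook constant ħ²/2m_e ≈ 3.81 eV Å².

```python
import numpy as np

HBAR2_OVER_2M = 3.81  # eV * Angstrom^2, value of hbar^2 / (2 m_e)

def kz_normal_emission(hv_eV, inner_potential_eV, binding_energy_eV=0.0):
    # Free-electron final-state model at normal emission:
    # kz = sqrt( (hv - E_B + (V0 - phi)) / (hbar^2 / 2m) )
    return np.sqrt((hv_eV - binding_energy_eV + inner_potential_eV) / HBAR2_OVER_2M)

a = 4.20                 # YbAl3 lattice constant (Angstrom)
G = 2.0 * np.pi / a      # out-of-plane reciprocal lattice vector

for hv in (21.2, 40.8):  # He I-alpha and He II-alpha
    kz = kz_normal_emission(hv, inner_potential_eV=13.66)
    print(f"hv = {hv:4.1f} eV -> kz = {kz:.3f} 1/A = {kz / G:.2f} x (2*pi/a)")
```

With V_0 - ϕ = 13.66 eV, the 21.2 eV photons land at k_z ≈ 2.0 × (2π/a), i.e., essentially on a bulk Γ point, consistent with the k_z = 0 ± 0.1π/a quoted above; in this simple model the 40.8 eV photons land near a half-integer multiple, i.e., a different k_z closer to the zone boundary.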
Evolution of the electron pocket at Γ
Having discussed the basic electronic structure, we now turn to its temperature dependence in YbAl3. In Fig. 2a, we show a series of ARPES spectra obtained along (0, 0) to (0, π) at k_z ≈ Γ between 255 and 21 K, which establish a clear temperature-dependent shift of the chemical potential Δμ, with the 4f-derived states moving closer to E_F as the temperature is lowered, consistent with earlier angle-integrated measurements [14]. The most dramatic effect of Δμ is on a small parabolic electron pocket centered at Γ. At 255 K, the electron pocket can be clearly observed, with its band bottom at 40 ± 5 meV binding energy and a k_F of (0.20 ± 0.01)π/a. As the temperature is lowered, the electron pocket is lifted in energy and becomes entirely unoccupied around 21 K. Since the pocket is centered at Γ, its lifting above E_F coincides with a Lifshitz transition. To within experimental resolution, the dispersion and effective mass of the electron pocket do not change, apart from a rigid shift due to Δμ. Furthermore, while the Yb 4f^13 final states also shifted in energy, the Yb 4f^12 states did not shift appreciably with temperature, indicating that the Δμ shift arises from an alteration in band filling due to the emergence of a Kondo-screened many-body state.
Correspondence between r-space and k-space electronic structure in YbAl3. a Evolution of the low-energy electronic structure with temperature. E vs. k dispersions are divided by the corresponding resolution-broadened Fermi-Dirac distribution to emphasize thermally occupied states above E_F. White lines are guides to the eye showing the evolution of the electron-like pocket centered at (0, 0, 0). b XPS spectra showing the temperature-dependent intensity variation of the 4f^13 and 4f^12 final states in YbAl3, after Shirley background subtraction [37] and normalized by the 4f^12 final state intensity. c Temperature dependence of the change in Luttinger volume, estimated from the size of the electron pocket at (0, 0, 0), and of the change in Yb valence, measured by core-level spectroscopy, revealing a precise one-to-one correspondence. Error bars reflect uncertainty in the Luttinger volume estimation due to a statistical error of one standard deviation in the extracted k_F values from fits to the momentum distribution curves (MDCs) taken at E_F. d Schematic illustrating the temperature-dependent relationship between the r-space and k-space electronic structure in YbAl3.
We note that the small electron pocket at Γ is not reported in previous de Haas-van Alphen (dHvA) studies of YbAl3, which can be explained by the fact that the electron pocket is only occupied at higher temperatures (T > 20 K), whereas dHvA measurements are conducted at low temperatures (T ≈ 20 mK-1.5 K) [18, 21]. Nevertheless, DFT calculations suggest the presence of a quasi-spherical electron pocket at Γ, whose size strongly depends on the binding energy of the Yb 4f states (Supplementary Fig. 4), and thus on the value of U used in the calculations. Furthermore, our observation of the temperature-dependent chemical potential shift may explain the need to artificially shift the chemical potential in previous low-temperature quantum oscillation experiments on YbAl3 that compared their results to band structure calculations [21]. The large Fermi surface sheets centered at (π, π) (see Fig. 1d and Supplementary Fig. 2) measured by ARPES would give an oscillation frequency of (1.0 ± 0.2) × 10^8 Oe, which is comparable to but somewhat larger than the largest reported quantum oscillation frequency along the (100) direction, 6.51 × 10^7 Oe. This discrepancy might be due to the fact that the quantum oscillations measure a closed Fermi surface contour at a different k_z than our ARPES measurements at k_z = 0, which might correspond to an open contour. We also observed another Fermi surface sheet centered at the M point (π, π) with 40.8 eV photon energy, which corresponds to an oscillation frequency of (3.5 ± 1.0) × 10^7 Oe, roughly consistent with the β pocket reported in the dHvA measurements (4.55 × 10^7 Oe).
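The oscillation frequencies quoted in this paragraph follow from the Onsager relation, F = (ħ/2πe)A_k, applied to an extremal orbit of the measured Fermi surface. The snippet below is a minimal sketch of that conversion; the circular-orbit radius is a hypothetical round number chosen only to be roughly the size of the (π, π) sheets, not a fitted value.

```python
import numpy as np

HBAR = 1.054571e-34      # J s
E_CHARGE = 1.602177e-19  # C

def onsager_frequency_Oe(kF_inv_angstrom):
    # Onsager relation F = (hbar / 2 pi e) * A_k for a circular orbit A_k = pi kF^2
    area_m2 = np.pi * (kF_inv_angstrom * 1e10) ** 2
    F_tesla = HBAR * area_m2 / (2.0 * np.pi * E_CHARGE)
    return F_tesla * 1e4  # 1 T = 10^4 G (numerically ~Oe, the unit used for dHvA data)

a = 4.20               # lattice constant in Angstrom
kF = 0.75 * np.pi / a  # hypothetical orbit radius, comparable to the (pi, pi) sheets
print(f"kF = {kF:.2f} 1/A -> F = {onsager_frequency_Oe(kF):.2e} Oe")
```

An orbit of this size indeed yields F ≈ 1 × 10^8 Oe, the order of magnitude quoted above for the large sheets.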
In Fig. 2b, we show a series of angle-integrated, wide-energy valence band in situ x-ray photoemission spectra (XPS) showing a dramatic temperature-dependent change in the relative intensity of the 4f^13 and 4f^12 final states. As the temperature is lowered, the relative intensity of the 4f^13 final states increases, while that of the 4f^12 final states decreases, indicating a reduction of the effective Yb valence in YbAl3 at lower temperatures, which is found to be Δν_f ≈ 0.05 from room temperature to below ≈45 K, in agreement with previously reported results from bulk samples [11, 12, 14-16].
Relation between real-space and momentum-space electronic structure
In Fig. 2c, we make a quantitative comparison between the observed change in the temperature-dependent band filling and the estimated change in the Yb valence from core-level spectroscopy, both in our thin films and in previous measurements on YbAl3 single crystals. The change in average Yb valence in our thin films has been estimated by resonant x-ray emission spectroscopy (RXES) and XPS, details of which can be found in the "Methods" section and in Supplementary Note 2. Assuming a spherical geometry ((4/3)πk_F^3) due to its location at k = (0, 0, 0) and the cubic symmetry, we plot the change in Luttinger volume of the electron pocket, Δν_Lutt, vs. the estimated change in Yb valence, Δν_f, from core-level spectroscopy. Without any adjustable parameters or scaling factors, we discover a precise, one-to-one correspondence between Δν_Lutt from the electron pocket and Δν_f as a function of temperature. This provides direct microscopic evidence that in YbAl3, the Kondo screening of the 4f moments by the conduction electrons that results in the emergence of composite heavy fermion quasiparticles leads to a Lifshitz transition of the Fermi surface, which is also reflected in the reduction of the average Yb valence, and should be generic to other mixed valence systems. A qualitative model of the temperature-dependent changes in both real and momentum space is presented in Fig. 2d. As the temperature is lowered, the filling of the small electron pocket is gradually reduced as those electrons are transferred into the Kondo screening cloud at the Yb site, leading to the formation of renormalized Kondo-screened many-body states [22] and a reduction of the effective Yb valence as measured by XPS and RXES studies. This model would explain the direct one-to-one correspondence between the measured changes in both the Yb valence and the Luttinger volume of the electron pocket as a function of temperature. We should note that previous studies of other mixed valence systems, such as YbRh2Si2, have not reported temperature-dependent changes in the band structure or Fermi surface topology [23], although this may be because of the much larger Δν_f in YbAl3 (0.05 vs. 0.01) in the accessed temperature range of the experiments, as well as its larger energy scales (T_K = 670 K vs. 25 K) [23].
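The spherical-pocket bookkeeping behind Fig. 2c is easy to make explicit. The sketch below assumes a spin degeneracy of 2 and the cubic Brillouin-zone volume (2π/a)³; only the 255 K radius is taken from the text, and the intermediate value is hypothetical, inserted to illustrate the temperature trend.

```python
import numpy as np

def pocket_luttinger_count(kF_inv_angstrom, a_angstrom, spin_degeneracy=2):
    # electrons per unit cell in a spherical pocket:
    # g_s * (4/3 pi kF^3) / (2 pi / a)^3
    v_pocket = (4.0 / 3.0) * np.pi * kF_inv_angstrom ** 3
    v_bz = (2.0 * np.pi / a_angstrom) ** 3
    return spin_degeneracy * v_pocket / v_bz

a = 4.20  # Angstrom
for T_K, kF_frac in [(255, 0.20), (150, 0.10), (21, 0.0)]:  # kF in units of pi/a
    nu = pocket_luttinger_count(kF_frac * np.pi / a, a)
    print(f"T = {T_K:3d} K: kF = {kF_frac:.2f} pi/a -> nu_Lutt = {nu:.4f} e/cell")
```

Emptying the pocket removes the corresponding fraction of an electron per unit cell; in Fig. 2c the temperature dependence of this quantity is what is compared directly with Δν_f from core-level spectroscopy.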
Evolution of the Yb 4f states
We now discuss the evolution with temperature of the 4f-derived heavy bands near E_F. In Fig. 3a, we show representative energy distribution curves (EDCs) at different temperatures, integrated over the momentum region indicated in Fig. 3d, together with the extracted changes of the 4f binding energy, quasiparticle weight, and scattering rate as a function of temperature (Fig. 3b, c). Details of the fitting process, along with extended data sets, can be found in Supplementary Note 3 and Supplementary Fig. 7. We find a dramatic enhancement of the quasiparticle spectral weight of the 4f bands, consistent with previous measurements by Tjeng et al. [11], coinciding with a precipitous drop in the scattering rate, which saturates around T* ≈ 37 K, the estimated coherence temperature of YbAl3 [20], when it becomes a Fermi liquid. The enhancement of the quasiparticle spectral weight and lifetime with decreasing temperature suggests that the screening of the 4f moments by the conduction electrons has nearly saturated around T*, and that the Lifshitz transition is coincident with this dramatic change in the 4f spectral function. This is further highlighted by the observation of ln(T_0/T) scaling behavior in the integrated spectral weight, as expected from a two-fluid model [22, 24-26], until the onset of Fermi liquid behavior at T*, when it starts to saturate. The observation of this scaling behavior up to 255 K, the highest temperature accessed in this study, suggests that the hybridization between the local 4f moments and the conduction electrons sets in at a relatively high temperature, even though the Fermi liquid regime exists only below T*, consistent with the slow crossover scenario predicted by slave boson mean field calculations [27, 28]. The saturation of the 4f quasiparticle lifetime at T* in our ARPES measurements is also consistent with earlier transport and thermodynamic measurements, which suggested that T* could be related to the formation of coherence in the 4f states [15, 17, 18], which we establish spectroscopically. The shift in binding energy of the 4f states is smaller than the Δμ measured from the electron-like band, with the discrepancy increasing at lower temperatures, as shown in Fig. 3b, indicative of enhanced hybridization between the 4f states and the conduction electrons at lower temperatures that pushes the electron pocket further towards lower binding energy.
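The ln(T_0/T) scaling itself is a simple two-parameter form that can be fit by least squares. The sketch below uses synthetic integrated weights, for illustration only (they are not the measured data); the functional form W(T) = A ln(T_0/T) follows the two-fluid phenomenology cited above.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_fluid_weight(T, A, T0):
    # heavy-fluid (hybridized) spectral weight grows as ln(T0 / T) above T*
    return A * np.log(T0 / T)

# synthetic integrated 4f weights above T* ~ 37 K (arbitrary units, illustrative only)
T = np.array([255.0, 200.0, 150.0, 100.0, 60.0])
W = np.array([0.21, 0.35, 0.52, 0.75, 1.05])

(A, T0), _ = curve_fit(two_fluid_weight, T, W, p0=(0.5, 500.0))
print(f"A = {A:.2f}, T0 = {T0:.0f} K")
```

In this picture the fitted T_0 plays the role of the temperature at which hybridization first sets in, well above the Fermi-liquid coherence scale T*.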
Evolution of the Yb 4f states and crystal field effects in YbAl3. a Evolution of the Kondo resonance peak with temperature, from integrating EDCs over the k region highlighted in red in d. b Change in the chemical potential (Δμ) and in the 4f quasiparticle scattering rate with temperature. Δμ is estimated from the shift in binding energy of the 4f-derived heavy band (blue) and of the band bottom of the light electron-like pocket at (0, 0) (red), relative to 255 K. Error bars represent uncertainty due to a statistical error of one standard deviation from the fitting process. c Temperature dependence of the integrated spectral weight (0-0.2 eV) of the 4f states, which shows ln(T_0/T) behavior above T* = 37 K. Error bars represent a 3% margin of error. d High-resolution E vs. k plot along (0, 0)-(0, π) at 21 K showing dispersive crystal electric field (CEF) split states. Extracted dispersions of the three different CEF split states (shown in blue, pink, and green) are superimposed on the image plot. In addition to the statistical error of one standard deviation from the fitting process, error bars in d also include variability in the fit results from holding one and/or two peak positions constant in the multi-peak fitting process.
Dispersive crystal electric field split states
In addition to the strong temperature dependence of the electronic structure, our measurements clearly show three distinct flat bands close to E_F which acquire significant dispersion at certain k points. Their proximity to E_F, the values of their splittings (≈0-30 meV), and their narrow bandwidths are all consistent with these being crystal electric field (CEF) split states. The dispersion of the CEF states occurs where the light Al-derived bands cross the flat 4f states near E_F, lifting the degeneracy of the CEF split branches, as shown in Fig. 3d (see also Supplementary Fig. 8). The observation of three distinct bands is consistent with the bulk cubic symmetry of the Yb ions, where the Yb J = 7/2 manifold should split into three crystal field levels, Γ6, Γ7, and Γ8 [29]. While we cannot determine conclusively whether these states are representative of bulk vs. surface Yb atoms, the values of their splittings (0-30 meV) and bandwidths (≤25 meV) from ARPES, and the fact that they extend from E_F to a binding energy of ≈50 meV, are also consistent with reports from bulk-sensitive inelastic neutron scattering, which do not observe sharp CEF excitations but rather a broad continuum (≈50 meV), since those measurements would average the dispersion of the CEF states over the entire Brillouin zone [30-32].
Our work experimentally provides a unified picture of how local changes of the rare-earth valence impact the momentum-space electronic structure in the prototypical mixed valence system YbAl3. We have achieved this by combining state-of-the-art materials synthesis and advanced in situ spectroscopy, an approach that should be readily extendable to other Kondo lattice systems or even artificial f-electron heterostructures. We have discovered that a Lifshitz transition of a small electron Fermi surface accompanies the change in average Yb valence, which had hitherto been unanticipated. This discovery underscores how the Kondo screening process can significantly alter k-space instabilities of Kondo lattice systems.
Methods
Film growth and characterization
Single crystalline, epitaxial, atomically smooth thin films of (001) YbAl3 and LuAl3 were synthesized on MgO substrates in a Veeco GEN10 MBE system with a liquid-nitrogen-cooled cryoshroud at a base pressure better than 2 × 10^-9 Torr. Prior to growth, MgO substrates were annealed in vacuum for 20 min at 800 °C, and a 1-2 nm thick aluminum (Al) buffer layer was deposited at 500 °C. Lu/Yb and Al were co-evaporated from Langmuir effusion cells at a rate of ≈0.4 nm/min onto a rotating substrate held between 200 and 350 °C, with real-time reflection high-energy electron diffraction (RHEED) monitoring. Due to the co-evaporation growth, the surface termination was not deliberately controlled. After growth, the films were immediately transferred under ultra-high vacuum to an ARPES chamber for measurements. All ARPES data presented in this study were obtained from 30 nm thick LuAl3 thin films with a 1.2 nm thick Al buffer layer, or from 20 nm thick YbAl3 thin films with 20 nm thick LuAl3 and 1.2 nm thick Al buffer layers. The ARPES spectra did not show any thickness dependence for LuAl3/YbAl3 layers that were more than 10 nm thick, the minimum thickness for this study. For further details regarding film growth and characterization see ref. 20.
In situ ARPES and XPS
After growth, thin film samples were immediately transferred within 5 min through ultra-high vacuum into an analysis chamber consisting of a VG Scienta R4000 electron analyzer, a VUV5000 helium plasma discharge lamp, and a dual-anode x-ray source for ARPES and XPS measurements. The base pressure of the analysis chamber was better than 5 × 10^-11 Torr. ARPES measurements were performed using He Iα (hν = 21.2 eV) and He IIα (hν = 40.8 eV) photons, while Al Kα (hν = 1486.6 eV) photons were utilized for collecting XPS data. A polycrystalline gold reference in electrical contact with the sample was used to determine the position of the Fermi level and the energy resolution.
DFT calculations
DFT calculations of the band structure and Fermi surface of LuAl3/YbAl3 were performed using the full-potential linearized augmented plane wave method as implemented in the Wien2k software package [33]. Exchange and correlation effects were taken into account within the generalized gradient approximation [34]. Relativistic effects and spin-orbit coupling were included. For LuAl3, we found that an on-site Coulomb repulsion of U = 2.08 eV [35] gives good agreement between the calculated Lu 4f orbitals and the binding energies of the core levels measured in experiment. However, the value of U had no impact on the near-E_F electronic structure of LuAl3. For YbAl3, calculations were performed both with and without application of U to the Yb 4f orbitals, which was found to have a significant impact on the near-E_F electronic structure (Supplementary Fig. 4).
RXES
RXES spectra were collected at the Cornell High Energy Synchrotron Source (CHESS) at the C1 bend magnet beamline under ring conditions of 5.3 GeV and 100 mA. Incident x-ray radiation was monochromated using a Rh mirror and a sagittal-focus double Si(2 2 0) crystal monochromator. The incident energy was calibrated using a Cu foil. The x-ray emission was monochromated and focused using five spherically bent Ge(6 2 0) crystals in the Rowland geometry, using the CHESS dual-array valence emission spectrometer [36]. X-rays were finally collected with a Pilatus 100 K area detector (Dectris). Use of an area detector offered significant advantages for the current experiment in terms of ease of alignment and reliable background subtraction. Two regions of interest (ROIs) were chosen, one containing more than 95% of the emission signal and another centered on the first ROI but four times its size. The larger ROI was used to correct for the average background counts as
$$I_{\mathrm{corrected}} = I_{\mathrm{ROI_1}} - \mathrm{Area}_{\mathrm{ROI_1}} \times \left( \frac{I_{\mathrm{ROI_2}} - I_{\mathrm{ROI_1}}}{\mathrm{Area}_{\mathrm{ROI_2}} - \mathrm{Area}_{\mathrm{ROI_1}}} \right),$$
where ROI1 and ROI2 are the larger and smaller ROIs, respectively; I_ROI and Area_ROI denote the intensity and area of the corresponding region of interest, and I_corrected is the corrected intensity after background subtraction. Measured counts were further corrected for variations in incident photon flux by normalizing to the incident flux measured with an N2-filled ionization chamber placed upstream of the sample stage. The x-ray emission energy was calibrated by measuring the Kα1 and Kα2 lines of a Cu foil. The overall energy resolution of the setup was determined to be better than ≈3 eV by measuring quasi-elastic scattering from a polyimide sample. To minimize photodamage, a fast shutter was placed upstream of the ionization chambers that would only open during active data taking, thus minimizing the x-ray dosage of the samples. No photodamage was observed even after taking more than four scans (the maximum number of scans used for measurements at a particular spot) at a single spot. The sample was mounted on a closed-cycle cryostat with a base temperature of 45 K. A helium-filled bag covering most of the x-ray path between the sample, analyzer, and detector was used to reduce air attenuation along the x-ray path.
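The ROI background subtraction above is a one-line formula and can be checked directly. The sketch below simply implements the equation as written; the detector counts and pixel areas are hypothetical numbers chosen for illustration.

```python
def rxes_background_subtract(I_roi1, area_roi1, I_roi2, area_roi2):
    """Background-corrected counts following the formula in the text.

    ROI2 (containing >95% of the emission) is enclosed by ROI1 (four times
    its size), so (I2 - I1)/(A2 - A1) estimates the mean background count
    rate per pixel from the annulus between the two ROIs.
    """
    bg_per_area = (I_roi2 - I_roi1) / (area_roi2 - area_roi1)
    return I_roi1 - area_roi1 * bg_per_area

# hypothetical detector counts, with areas given in pixels
print(rxes_background_subtract(I_roi1=11600.0, area_roi1=1600.0,
                               I_roi2=10500.0, area_roi2=400.0))  # ~10133 counts
```

Note that any signal leaking into the annulus slightly biases the background estimate, which is why ROI2 is chosen to contain the vast majority of the emission.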
Data availability
Data that support the findings of this study are available from the corresponding author upon request.
References
1. Schröder, A. et al. Onset of antiferromagnetism in heavy-fermion metals. Nature 407, 351-355 (2000).
2. Andres, K., Graebner, J. E. & Ott, H. R. 4f-virtual-bound-state formation in CeAl3 at low temperatures. Phys. Rev. Lett. 35, 1779-1782 (1975).
3. Mydosh, J. M. & Oppeneer, P. M. Hidden order, superconductivity, and magnetism: the unsolved case of URu2Si2. Rev. Mod. Phys. 83, 1301-1322 (2011).
4. Curro, N. J. et al. Unconventional superconductivity in PuCoGa5. Nature 434, 622-625 (2005).
5. Si, Q., Rabello, S., Ingersent, K. & Smith, J. L. Locally critical quantum phase transitions in strongly correlated metals. Nature 413, 804-808 (2001).
6. Gegenwart, P., Si, Q. & Steglich, F. Quantum criticality in heavy-fermion metals. Nat. Phys. 4, 186-197 (2008).
7. Si, Q. & Steglich, F. Heavy fermions and quantum phase transitions. Science 329, 1161-1166 (2010).
8. Varma, C. M. Mixed-valence compounds. Rev. Mod. Phys. 48, 219-238 (1976).
9. Lawrence, J. M., Riseborough, P. S. & Parks, R. D. Valence fluctuation phenomena. Rep. Prog. Phys. 44, 1-84 (1981).
10. Parks, R. D. (ed.) Valence Instabilities and Narrow Band Phenomena (Plenum Press, 1977).
11. Tjeng, L. H. et al. Temperature dependence of the Kondo resonance in YbAl3. Phys. Rev. Lett. 71, 1419-1422 (1993).
12. Moreschini, L. et al. Comparison of bulk-sensitive spectroscopic probes of Yb valence in Kondo systems. Phys. Rev. B 75, 035113 (2007).
13. Kummer, K. et al. Intermediate valence in Yb compounds probed by 4f photoemission and resonant inelastic x-ray scattering. Phys. Rev. B 84, 245114 (2011).
14. Suga, S. et al. Kondo lattice effects of YbAl3 suggested by temperature dependence of high-accuracy high-energy photoelectron spectroscopy. J. Phys. Soc. Jpn 74, 2880-2884 (2005).
15. Bauer, E. et al. Anderson lattice behavior in Yb1-xLuxAl3. Phys. Rev. B 69, 125102 (2004).
16. Lawrence, J. M., Kwei, G. H., Canfield, P. C., DeWitt, J. G. & Lawson, A. C. LIII x-ray absorption in Yb compounds: temperature dependence of the valence. Phys. Rev. B 49, 1627-1631 (1994).
17. Cornelius, A. L. et al. Two energy scales and slow crossover in YbAl3. Phys. Rev. Lett. 88, 117201 (2002).
18. Ebihara, T. et al. Dependence of the effective masses in YbAl3 on magnetic field and disorder. Phys. Rev. Lett. 90, 166404 (2003).
19. Wahl, P. et al. Local spectroscopy of the Kondo lattice YbAl3: seeing beyond the surface with scanning tunneling microscopy and spectroscopy. Phys. Rev. B 84, 245131 (2011).
20. Chatterjee, S. et al. Epitaxial growth and electronic properties of mixed valence YbAl3 thin films. J. Appl. Phys. 120, 035105 (2016).
21. Ebihara, T. et al. Heavy fermions in YbAl3 studied by the de Haas-van Alphen effect. J. Phys. Soc. Jpn 69, 895-899 (2000).
22. Choi, H. C., Min, B. I., Shim, J. H., Haule, K. & Kotliar, G. Temperature-dependent Fermi surface evolution in heavy fermion CeIrIn5. Phys. Rev. Lett. 108, 016402 (2012).
23. Kummer, K. et al. Temperature-independent Fermi surface in the Kondo lattice YbRh2Si2. Phys. Rev. X 5, 011028 (2015).
24. Yang, Y.-F. & Pines, D. Universal behavior in heavy-electron materials. Phys. Rev. Lett. 100, 096404 (2008).
25. Yang, Y.-F., Fisk, Z., Lee, H.-O., Thompson, J. D. & Pines, D. Scaling the Kondo lattice. Nature 454, 611-613 (2008).
26. Yang, Y.-F. & Pines, D. Emergent states in heavy electron materials. Proc. Natl Acad. Sci. USA 109, E3060-E3066 (2012).
27. Burdin, S. & Zlatic, V. Multiple temperature scales of the periodic Anderson model: slave boson approach. Phys. Rev. B 79, 115139 (2009).
28. Burdin, S., Georges, A. & Grempel, D. R. Coherence scale of the Kondo lattice. Phys. Rev. Lett. 85, 1048-1051 (2000).
29. Lea, K. R., Leask, M. J. M. & Wolf, W. P. The raising of angular momentum degeneracy of f-electron terms by cubic crystal fields. J. Phys. Chem. Solids 23, 1381-1405 (1962).
30. Murani, A. P. Paramagnetic scattering from the valence-fluctuation compound YbAl3. Phys. Rev. B 50, 9882-9893 (1994).
31. Osborn, R., Goremychkin, E. A., Sashin, I. L. & Murani, A. P. Inelastic neutron scattering study of the spin dynamics of Yb1-xLuxAl3. J. Appl. Phys. 85, 5344-5346 (1999).
32. Christianson, A. D. et al. Localized excitation in the hybridization gap in YbAl3. Phys. Rev. Lett. 96, 117206 (2006).
33. Blaha, P., Schwarz, K., Madsen, G., Kvasnicka, D. & Luitz, J. Wien2k, An Augmented Plane Wave Plus Local Orbitals Program for Calculating Crystal Properties (Karlheinz Schwarz, Techn. Universität Wien, Austria, ISBN 3-9501031-1-2, 2001).
34. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865-3868 (1996).
35. Anisimov, V. I., Solovyev, I. V., Korotin, M. A., Czyzyk, M. T. & Sawatzky, G. A. Density-functional theory and NiO photoemission spectra. Phys. Rev. B 48, 16929-16934 (1993).
36. Finkelstein, K. D., Pollock, C. J., Lyndacker, A., Krawcyk, T. & Conrad, J. Dual-array valence emission spectrometer (DAVES): a new approach for hard x-ray photon-in photon-out spectroscopies. AIP Conf. Proc. 1741, 030009 (2016).
37. Shirley, D. A. High-resolution x-ray photoemission spectrum of the valence bands of gold. Phys. Rev. B 5, 4709-4714 (1972).
Acknowledgements
We thank Yang Liu, J. W. Allen, J. D. Denlinger, G. A. Sawatzky, and H. Takagi for helpful discussions. This work was supported by the National Science Foundation through DMR-0847385 and the Materials Research Science and Engineering Centers program (DMR-1120296, the Cornell Center for Materials Research), the Research Corporation for Science Advancement (2002S), and by the Gordon and Betty Moore Foundation as part of the EPiQS initiative (GBMF3850). Support from the Air Force Office of Scientific Research was through FA2386-12-1-3013. This work was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Infrastructure Network, which was supported by the National Science Foundation (Grant No. ECCS-0335765). This work was also performed in part at the Cornell High Energy Synchrotron Source (CHESS), which is supported by the National Science Foundation and the National Institutes of Health/National Institute of General Medical Sciences under NSF award DMR-1332208. H.I.W. and J.P.R. acknowledge support from the NSF Integrative Graduate Education and Research Traineeship program (DGE-0903653), and H.I.W. also acknowledges support from the NSF Graduate Research Fellowship (DGE-1144153).
Author information
Shouvik Chatterjee - Present address: Department of Electrical & Computer Engineering, University of California, Santa Barbara, CA 93106, USA.
Laboratory of Atomic and Solid State Physics, Department of Physics, Cornell University, Ithaca, NY 14853, USA: Shouvik Chatterjee, Jacob P. Ruf, Haofei I. Wei & Kyle M. Shen.
Cornell High Energy Synchrotron Source, Wilson Laboratory, Cornell University, Ithaca, NY 14853, USA: Kenneth D. Finkelstein.
Department of Materials Science and Engineering, Cornell University, Ithaca, NY 14853, USA: Darrell G. Schlom.
Kavli Institute at Cornell for Nanoscale Science, Ithaca, NY 14853, USA: Darrell G. Schlom & Kyle M. Shen.
Contributions
S.C. and K.M.S. conceived the idea. Thin film growth, film characterization, ARPES, XPS, and DFT calculations were performed and analyzed by S.C. RXES was performed by S.C., J.P.R. and H.I.W. with assistance and input from K.D.F., and analyzed by S.C.
The manuscript was prepared by S.C. and K.M.S. D.G.S. and K.M.S. supervised the study. All authors discussed the results and commented on the manuscript.
Corresponding author
Correspondence to Kyle M. Shen.
Competing interests
The authors declare no competing financial interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
How to cite: Chatterjee, S., Ruf, J.P., Wei, H.I. et al. Lifshitz transition from valence fluctuations in YbAl3. Nat. Commun. 8, 852 (2017). https://doi.org/10.1038/s41467-017-00946-1
Which of the following has the largest aperture opening?
A common photography quiz offers a set of f-stops, for example f/4 and f/5.6, and asks which has the largest aperture opening. The answer is always the smallest f-number: of f/4 and f/5.6, f/4 is the larger opening. Confusingly, the f-stop number increases as the aperture gets smaller, letting in less light.
What is aperture? Aperture is a hole inside your lens through which light enters your image sensor, and in this respect the camera operates essentially the same way as your eye. The lens contains an iris, or diaphragm, made of a number of leaves (aperture blades) arranged to create an opening through which light enters the camera and is registered on the sensor. Much like the pupil of your eye expands and contracts to control the amount of light allowed in, so too does the aperture: when there is a lot of light, you reduce its size to limit the amount of light entering the camera. The aperture stop of a photographic lens can thus be adjusted to control the amount of light reaching the film or image sensor.
The size of this opening is expressed with an f-number. The f-number is the focal length of the lens divided by the effective diameter of the aperture, so the smaller the f-number, the larger the opening. As the f-stop value decreases, the aperture diameter increases, allowing more light into the camera while decreasing the depth of field. A wide aperture (small f-number) means only a small portion of the image is in focus, in what is called a shallow depth of field; roughly 1/3 of the in-focus zone falls in front of your subject and 2/3 behind it. When isolating a subject this is a great setting to use. f/1.8 is considered wide, while f/22 is considered small. Many cameras also offer a Depth of Field Preview control, usually found on the front of the body.
Maximum aperture is how wide a lens can be opened, i.e. the smallest f-stop it can reach. You can find the greatest aperture of your lens printed on your lens body, e.g. "1:2.8" - here 2.8 is the largest aperture of that lens. Zoom lenses can have more than one number, giving the widest opening at each end of the zoom range; some higher-end lenses maintain the largest aperture throughout the entire zoom range, so only one number is detailed. A lens is said to be "wide open" when it is set to its smallest f-stop, with the aperture opened as wide as possible. Prime lenses generally have the largest apertures; the largest aperture commonly available on zoom lenses is f/2.8. Unfortunately, the wider the maximum aperture, the more complex and expensive the lens design is: if you were to build a 100mm f/1.4 lens, it would need a really large diameter, which is why such lenses are found mostly in expensive professional lines. With a large aperture (and a tripod) you can practically see in the dark; in the example shot, the only lighting is the moon.
Aperture priority is a semi-automatic mode on the camera dial: it is a setting that allows you to select a specific aperture value while the camera chooses the shutter speed. A camera's shutter speed and lens aperture both impact how much light reaches the sensor: typically, a fast shutter will require a larger aperture to ensure sufficient light exposure, and a slow shutter will require a smaller aperture to avoid excessive exposure.
Aperture matters beyond cameras too. The most important feature of a telescope is its aperture size, the diameter of the opening that gathers light. The largest refractor still in use today is the Yerkes refractor, which has a 40-inch (one meter) aperture, and the world's largest filled-aperture radio telescope (FAST) is built in a natural karst depression in Guizhou, China. Stops and apertures likewise limit the brightness of an image and the field of view of an optical system. In microscopy, many beginning users prefer to close the aperture diaphragm all the way, but it is up to the microscopist to find the optimum setting: for best resolution the diaphragm setting should be greater than or equal to the numerical aperture of the objective (printed on the objective). The highest theoretical numerical aperture obtainable with immersion oil is 1.51 (when sin(µ) = 1), and angular apertures exceeding 70 to 80 degrees are found only in the highest-performance objectives that typically cost thousands of dollars.
Finally, a bit of history: Group f/64 was a group founded by seven 20th-century San Francisco Bay Area photographers who shared a common photographic style characterized by sharply focused and carefully framed images, seen through a particularly Western (U.S.) viewpoint; in part, they formed in opposition to the pictorialist photographic style that had dominated much of the early 20th century. The name refers to the very small aperture setting, and hence large depth of field, that they favored.
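Because the f-number is just the focal length divided by the diameter of the opening, you can sanity-check any such quiz answer in a couple of lines. A small sketch follows; the 50 mm focal length is only an example.

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    # f-number = focal length / aperture diameter, so diameter = focal length / f-number
    return focal_length_mm / f_number

choices = [4.0, 5.6]  # the f-stops offered in the quiz
for f in sorted(choices):
    d = aperture_diameter_mm(50.0, f)
    print(f"f/{f}: {d:.1f} mm opening on a 50 mm lens")
print(f"Largest aperture opening: f/{min(choices)}")  # the smallest f-number wins
```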
Rangle In falconry, rangle is a term used for small stones which are fed to hawks to aid in digestion.[1] These stones, which are generally slightly larger than peas, are used less often now than they were historically.[2] See also • Gastrolith References 1. Woodford, Michael (1960). A manual of falconry. CT Branford Co. p. 171. 2. Ford, Emma (1982). Falconry in mews and field. BT Batsford.
Postseismic deformation following the 2016 Kumamoto earthquake detected by ALOS-2/PALSAR-2

Manabu Hashimoto (ORCID: orcid.org/0000-0001-9909-3531)

I have been studying postseismic deformation following the 2016 Kumamoto earthquake using ALOS-2/PALSAR-2 images acquired up to 2018, applying ionospheric corrections to the interferograms. L-band SAR retains coherence high enough to reveal surface deformation even in vegetated or mountainous areas, for pairs of images separated by more than 2 years. Postseismic deformation following the Kumamoto earthquake exceeds 10 cm over 2 years at some spots in and around Kumamoto City and Aso caldera. Westward motion of ~ 6 cm/year was dominant on the southeast side of the Hinagu fault, while a westward shift was detected on both sides of the Futagawa fault. The area of the latter deformation appears to correlate with the distribution of pyroclastic flow deposits. Significant uplift was found around the eastern Futagawa fault and on the southwestern flank of Aso caldera, at rates reaching 4 cm/year. There are sharp changes across several coseismic surface ruptures, such as the Futagawa, Hinagu, and Idenokuchi faults. Rapid subsidence between the Futagawa and Idenokuchi faults was also found. Local subsidence is confirmed to have continued along the Suizenji fault, which newly appeared in Kumamoto City during the mainshock. Subsidence with a westward shift of up to 4 cm/year was also found in Aso caldera. The time constant of postseismic decay ranges from 1 month to 600 days at selected points, but deformation during the first epoch or two dominates at points in the Kumamoto Plain. This result suggests multiple sources of deformation. The westward motion around the Hinagu fault may be explained by right-lateral afterslip on the shallow part of this fault. The subsidence along the Suizenji fault can be attributed to normal faulting on a westward-dipping fault. The deformation around the Hinagu and Idenokuchi faults cannot be explained by right-lateral afterslip on the Futagawa fault and requires other sources. The deformation in the northern part of Aso caldera might be the result of right-lateral afterslip on a possible buried fault.

Introduction

A sequence of large earthquakes struck the city of Kumamoto and its surroundings in the central part of Kyushu in April 2016, claiming more than 200 lives including disaster-related deaths. This earthquake sequence includes an Mw 7.0 event (USGS 2020) and several events of Mw 6.0 or larger. These earthquakes occurred on and around the Futagawa and Hinagu faults, which are right-lateral strike-slip faults with slightly different strikes that meet between Kumamoto City and Aso caldera (Figs. 1 and 2) (e.g., Asano and Iwata 2016). The Futagawa fault runs eastward with a strike of N60°E and reaches Aso caldera. The Hinagu fault trends N30–40°E and extends farther south of Yatsushiro City (Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology (hereafter AIST) 2016). The first shock, of Mw 6.5, is considered to have occurred on part of the Hinagu fault (Shirahama et al. 2016). Aftershocks are distributed along these faults, but the pattern differs between the western and eastern parts: east of the epicenter of the April 16 shock, aftershocks align tightly along the Futagawa fault, whereas to its west they are distributed in a fan-shaped area.
It is remarkable that there are few aftershocks south of the Futagawa and Hinagu faults. It should also be emphasized that the northeastern edge of the aftershock distribution extends beyond the northeastern rim of Aso caldera.

Fig. 1 Index maps of the studied area. Red star and black solid lines are the epicenter of the earthquake of April 16, 2016 determined by the Japan Meteorological Agency (JMA) (2016) and surface traces of active faults by AIST (2005), respectively. a Index map of the Japanese Archipelago and Kyushu. Black rectangle indicates the area of (b). b Footprints of ALOS-2/PALSAR-2 images used in this study superimposed on horizontal displacements of GEONET stations (orange vectors) relative to 960700 (white diamond), and epicenters determined by JMA (circles) during the period of April 18, 2016–April 10, 2018. Color of epicenter circles changes according to depth; the scale is shown below the map. White, blue, purple and red rectangles are footprints of images of P130-F650, P131-F640, P23-F2950 and P23-F2960, respectively. Red triangles are the Quaternary volcanoes Aso, Kuju and Unzen. White squares are the centers of large cities; Ku: Kumamoto City, Oz: Ohzu Town, Y: Yatsushiro, S: Saga, O: Oita

Fig. 2 Geological map of the studied region by Geological Survey of Japan, AIST (2015) with footprints of ALOS-2/PALSAR-2 images (see Fig. 1b). Black and thick green lines are surface traces of active faults and coseismic surface ruptures (Kumahara et al. 2016; Goto et al. 2017; Geological Survey of Japan, AIST 2017), respectively. Locations of part of the surface ruptures were obtained from maps with WebPlotDigitizer version 4.3 (Rohatgi 2020). Acronyms of active faults and other geological features are as follows: FF: Futagawa fault, HF: Hinagu fault, IF: Idenokuchi fault, MF: Midorikawa fault (AIST 2005), SzF: Suizenji fault, AkF: Akitsugawa flexure (Goto et al. 2017), K.Pl.: Kumamoto Plain, Uto Pen.: Uto Peninsula. Legend of geology: 83: Late Pleistocene to Holocene non-alkaline felsic volcanic rocks, 95: Late Pleistocene to Holocene non-alkaline pyroclastic flow volcanic rocks, 99: Late Pleistocene to Holocene non-alkaline mafic volcanic rocks, 100: Middle Pleistocene non-alkaline mafic volcanic rocks, 84: Middle Pleistocene non-alkaline felsic volcanic rocks, 96: Middle Pleistocene non-alkaline pyroclastic flow volcanic rocks, 101: Early Pleistocene non-alkaline mafic volcanic rocks, 102: Late Miocene to Pliocene non-alkaline mafic volcanic rocks, 166: Holocene non-alkaline mafic volcanic rocks, 123: Middle to Late Miocene felsic plutonic rocks, 130: Early to Late Cretaceous felsic plutonic rocks, 1: Late Pleistocene to Holocene marine and non-marine sediments, 18: Early Cretaceous marine sedimentary rocks, 26: Permian marine sedimentary rocks, 170: Late Pleistocene lower terrace, 171: Late Pleistocene middle terrace, 173: Late Cretaceous non-marine sediments, 44: melange matrix of Early to Late Cretaceous accretionary complex, 60: melange matrix of Early to Middle Jurassic accretionary complex, 77: ultramafic rocks, 190: Holocene reclaimed land (AIST 2015)

There are also many reports of surface ruptures off these coseismic faults, in the city of Kumamoto and on the western flank of Aso caldera (Goto et al. 2017; Fujiwara et al. 2016; Fujiwara et al. 2017; Kumahara et al. 2016; Toda et al. 2016; Geological Survey of Japan, AIST 2017) (Fig. 2). Most of them are considered to be of non-tectonic origin.
Tsuji et al. (2017) and Fujiwara et al. (2017) reported that the surface ruptures in the northern part of Aso caldera were generated by horizontal sliding of blocks or lateral spreading due to strong shaking. Goto et al. (2017) showed the detailed distribution of surface ruptures in the Kumamoto Plain. One is the westward extension of the Futagawa fault, which they named the Akitsugawa flexure zone, and the other is a set of NW-trending multiple traces of surface rupture in Kumamoto City, the Suizenji fault zone. They discussed the relationship of these ruptures to topography and to the distribution of pyroclastic flow deposits of Aso volcano. Deformation due to these surface ruptures was also detected with InSAR measurements (Fujiwara et al. 2016; Fujiwara et al. 2017), and it is important to examine their temporal evolution following the earthquake sequence. Kumamoto City is famous for its abundant groundwater. A lake located close to the western extension of the Futagawa fault suddenly dried up, which may be associated with movement of the Suizenji fault zone that appeared during the Kumamoto earthquake (e.g., Hosono et al. 2018). Hosono and Masaki (2020) and Hosono et al. (2020) reported hydrochemical changes of groundwater during the postseismic period. Groundwater flow may affect movement at the surface; therefore, observation of surface movement contributes to the understanding of the evolution of the groundwater flow system in this area.

The Geospatial Information Authority (hereafter GSI) has been monitoring crustal movements with a continuous GNSS network in Japan, called GSI's Earth Observation Network (hereafter GEONET), while the Japan Aerospace Exploration Agency (hereafter JAXA) has been operating a satellite (the Advanced Land Observing Satellite 2, hereafter ALOS-2) equipped with an L-band radar (Phased Array L-band SAR 2, hereafter PALSAR-2). The European Space Agency also operates C-band SAR satellites called Sentinel-1. These sensors detected remarkable coseismic deformation of this earthquake sequence. Many authors processed the data provided by these sensors and presented coseismic fault models. According to these studies, the first shock was a right-lateral strike-slip event on the Hinagu fault (Fukahata and Hashimoto 2016; Ozawa et al. 2016; Himematsu et al. 2016; Kobayashi 2017). Both the Futagawa and Hinagu faults slipped during the Mw 7.0 event, but moment release on the Futagawa fault was dominant.

Postseismic deformation usually follows large earthquakes. There are several studies of postseismic deformation following inland earthquakes in Japan, and of its origins, based mainly on continuous and campaign GNSS data (e.g., Nakano and Hirahara 1997; Sagiya et al. 2005; Hashimoto et al. 2008; Ohzono 2011; Ohzono et al. 2012; Meneses-Gutierrez et al. 2019). These preceding studies invoked afterslip, viscoelastic relaxation and poroelastic rebound as possible mechanisms of postseismic deformation, but they did not incorporate the complicated geometry of faults or the heterogeneous structure of the crust, owing to limited spatial resolution. To discuss the generation mechanism of postseismic deformation, especially in relation to crustal heterogeneities, spatial resolution is important, but the density of GNSS stations is not high enough to detect the detailed spatial distribution of postseismic deformation. Therefore, I must exploit synthetic aperture radar (hereafter SAR) images.
Peltzer et al. (1996) discussed postseismic deformation following the 1992 Landers, California, earthquake using ERS interferograms and clarified the relationship between the complicated geometry of the coseismic faults and the poroelastic response. Geology affects groundwater distribution and flow direction, and I wonder whether there is a correlation between the distribution of pyroclastic flow deposits and postseismic deformation. Moore et al. (2017) already studied postseismic deformation following the Kumamoto earthquake based on GNSS and InSAR data up to the end of 2016. They mainly discussed large-scale deformation with reference to the viscoelastic structure beneath Kyushu. In this paper, I discuss the finer-scale deformation that appeared in the vicinity of the coseismic surface ruptures, which may convey invaluable information on the properties of the shallow crust and active faults.

Tectonic setting

Central Kyushu is unique in Japan because a large graben structure crosses the island, and Aso and Unzen volcanoes sit right in its middle (Figs. 1 and 2). Century-long geodetic surveys revealed N–S extension, which was considered to be tearing the island apart. This idea seemed partly consistent with the existence of E–W-trending normal faults (Tada 1984). Recent continuous GNSS observation, however, does not confirm the dominance of N–S extension (e.g., Nishimura and Hashimoto 2006). Dextral motion is now considered more appropriate across the Futagawa and Hinagu fault system. Aso volcano is one of the most active volcanoes in Japan and has repeated large eruptions many times, including at least four caldera-forming eruptions. The last caldera-forming eruption was the largest so far; its pyroclastic flow deposits, ASO-4 (~ 90 ka BP), cover northern and central Kyushu (Ono and Watanabe 1985) (Fig. 2). There are thick pyroclastic flow deposits of Pleistocene to Holocene age in the area surrounding the source faults of the 2016 Kumamoto earthquake sequence (#83, 95, 96, 99, 166 in Fig. 2). On the other hand, Holocene sedimentary rocks are found in the Kumamoto Plain (#1 in Fig. 2). Goto et al. (2017) pointed out that the Suizenji fault zone, which appeared during the 2016 earthquake sequence in Kumamoto City, is located near the foot of a Late Pleistocene terrace deposit.

SAR images and processing procedure

I utilized ALOS-2/PALSAR-2 images acquired after the largest earthquake of the Kumamoto sequence on April 16. JAXA made observations with PALSAR-2 in several different directions and modes, but there are not many images that were acquired from the same orbits at high frequency. Among them, I collected strip-map mode images of high spatial resolution from path 23 (P23) of the descending orbit and paths 130 (P130) and 131 (P131) of the ascending orbits. Table 1 lists the images used with their observation parameters. Figures 1, 2 and 3 illustrate the footprints of the images used in this study and the temporal changes in perpendicular baselines. P23 covers the surroundings of the Futagawa and Hinagu faults and Aso caldera and is frequently observed, because this path covers active volcanoes such as Aso, Kirishima, Sakurajima and Kuchinoerabujima. On the other hand, P131 and P130 cover the Kumamoto Plain and Aso caldera, respectively, and there is no overlap between P130 and P131. There are 28, 13 and 7 images for P23, P131 and P130, respectively, during the period from April 18, 2016 to December 10, 2018. Perpendicular baselines are shorter than 400 m, which is good enough for interferometry.
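As a minimal illustration (not part of the published processing chain), candidate pairs could be screened against the perpendicular-baseline criterion mentioned above; the acquisition list and threshold below are hypothetical.

```python
from datetime import date

# Hypothetical acquisitions: (date, perpendicular baseline in metres
# relative to the first post-mainshock image).
acquisitions = [
    (date(2016, 4, 18), 0.0),
    (date(2016, 6, 13), 150.0),
    (date(2016, 11, 14), -320.0),
    (date(2018, 12, 10), 95.0),
]

MAX_BPERP = 400.0  # m; pairs beyond this are rejected for interferometry

# Keep only reference-to-secondary pairs with an acceptable relative baseline.
ref_date, ref_bperp = acquisitions[0]
pairs = [
    (ref_date, d, b - ref_bperp)
    for d, b in acquisitions[1:]
    if abs(b - ref_bperp) < MAX_BPERP
]
for r, s, bperp in pairs:
    print(f"{r} -> {s}: B_perp = {bperp:+.0f} m")
```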
I did not use ScanSAR images because of their less frequent observations and lower spatial resolution. I also did not use SAR images from platforms other than ALOS-2, except Sentinel-1 for comparison, because their shorter microwave wavelengths cause decorrelation in vegetated and mountainous areas and over long temporal separations. I compare the results with those of time series analyses of Sentinel-1 images later.

Table 1 List of parameters of images used in this study

Fig. 3 Temporal changes in perpendicular baselines of ALOS-2/PALSAR-2 images used in this study. Blue, orange, and black lines with symbols are of P23, P131, and P130, respectively

I performed 2-pass interferometry on pairs of the collected SAR images with the Gamma® software (Wegmüller and Werner 1997). For the descending images (P23), the boundary between the northern and southern frames runs across the seismogenic zone of the Kumamoto earthquakes, so I concatenated them to retain phase continuity, following the Gamma® procedure. ASTER-GDEM ver. 2 was used for the correction of topographic phase and for geocoding (Tachikawa et al. 2011). I fixed the first image acquired after the April 16 earthquake as the reference and made interferograms for the pairs of this reference with each following image. Owing to the L-band wavelength, coherence is high enough even for pairs with a two-year separation. L-band SAR has always suffered from ionospheric disturbances, and the present case is no exception. I exploited the technique developed by Gomba et al. (2016), Furuya et al. (2017) and Wegmüller et al. (2018) to reduce ionospheric disturbances. I found ionospheric disturbances in both ascending and descending interferograms, and sometimes a large ramp in the corrected interferograms. Therefore, I flattened the ionosphere-corrected interferograms and then filtered them before unwrapping. I used the branch-cut technique for unwrapping the filtered interferograms. I stacked the unwrapped interferograms for both ascending and descending orbits and converted them to E–W and U–D components.

Correction of ionospheric disturbances

Before discussing the detected surface deformation, it is worth describing the correction of ionospheric disturbances. Ionospheric disturbances that appear in interferograms of L-band SAR are considered to be related to medium-scale travelling ionospheric disturbances (MSTID) (e.g., Saito et al. 1998). There may be a seasonality of MSTID and a dependence on local time (Chen et al. 2019). Observations from ascending and descending orbits are made around midnight and noon, respectively. Empirically, disturbances appear in ascending interferograms in summer, while those in descending interferograms are recognizable in winter. Figure 4 shows an example of the correction of the ascending interferogram of April 26 and June 21, 2016. I observed a large disturbance in the middle of the original interferogram (Fig. 4a). Similar disturbances appear in the interferograms of the higher and lower sub-bands, but there is a slight difference between them (Fig. 4b, c). The double-differenced interferogram shows this spatial variation (Fig. 4d), which leads to an estimate of the ionospheric effect (Fig. 4e). Taking the difference between the original and the ionospheric component, I finally obtained the ionosphere-corrected interferogram (Fig. 4f). However, I still recognized a significant trend in the azimuth direction. Therefore, I detrended it by fitting a two-dimensional polynomial function and filtered it (Fig. 4g). I geocoded the filtered ionosphere-corrected interferogram to detect surface deformation (Fig. 4h).
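The sub-band combination that yields the ionospheric estimate (Fig. 4e) can be sketched as follows. This is a minimal NumPy illustration of the split-spectrum relation of Gomba et al. (2016), not the Gamma® implementation; the sub-band frequencies and the synthetic phases are assumptions (PALSAR-2 operates near a 1.2575 GHz carrier).

```python
import numpy as np

def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0):
    """Dispersive (ionospheric) and non-dispersive phase at the carrier f0,
    from unwrapped sub-band interferometric phases (split-spectrum method)."""
    phi_iono = (f_low * f_high / (f0 * (f_high**2 - f_low**2))) * (
        phi_low * f_high - phi_high * f_low)
    phi_nondisp = (f0 / (f_high**2 - f_low**2)) * (
        phi_high * f_high - phi_low * f_low)
    return phi_iono, phi_nondisp

# Hypothetical sub-band centre frequencies around the PALSAR-2 carrier;
# the exact split depends on processor settings.
f0, bw = 1.2575e9, 84e6
f_low, f_high = f0 - bw / 3, f0 + bw / 3

# Synthetic test: a purely dispersive signal (phase ~ 1/f) is recovered
# as ionosphere, and the non-dispersive part is zero.
true_iono = np.linspace(0.0, 12.0, 100)   # rad at f0
phi_low = true_iono * f0 / f_low
phi_high = true_iono * f0 / f_high
iono, nondisp = split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0)
assert np.allclose(iono, true_iono) and np.allclose(nondisp, 0.0, atol=1e-9)
```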
In Fig. 4h, there is still a large disturbance, which may be attributed to a tropospheric disturbance in the image of June 21, 2016, because I did not find similar signals in other ionosphere-corrected interferograms (Fig. 6).

Fig. 4 Interferograms produced during the correction of ionospheric disturbances for the pair of images of April 26 and June 21, 2016 of P131-F640. a Original differential interferogram in radar coordinates. b Interferogram for the higher band. c Interferogram for the lower band. d Double-differenced interferogram, i.e., the difference between the higher- and lower-band interferograms. e Estimated ionospheric components. f Non-dispersive components of the original interferogram, i.e., the phase after subtraction of the ionospheric components. g Flattened and filtered non-dispersive components. h Geocoded, flattened and filtered non-dispersive components

An example of a descending interferogram is shown in Fig. 5. Empirically, ionospheric disturbances in descending interferograms are considered less serious than those in ascending ones, but this is not always the case. In the original, higher- and lower-band interferograms, I recognized more than three cycles of fringes. The double-differenced interferogram is almost flat, yet there is an ionospheric disturbance with a fairly long wavelength. Furthermore, the ionosphere-corrected interferogram still has a large trend of three cycles in the azimuth direction, to which I had to apply the flattening and filtering techniques.

Fig. 5 Interferograms produced during the correction of ionospheric disturbances for the pair of images of April 18 and June 13, 2016 of P23-F2950–2960. a–h As in Fig. 4

Observed line-of-sight displacements

Owing to the repeated acquisitions of ALOS-2/PALSAR-2, I obtained the spatio-temporal variation in line-of-sight (hereafter LOS) displacements after the occurrence of the Kumamoto earthquake sequence. In this chapter, I discuss the characteristics of the observed LOS displacements from three different viewpoints: (a) the spatial distribution of averaged LOS displacements (Figs. 6, 7, 8, 9 and 10); (b) profiles of displacements along selected sections (Fig. 11); (c) time series of LOS displacement at selected points (Fig. 12).

Fig. 6 Close-up of unwrapped interferograms of P131-F640 around the Futagawa–Hinagu faults and Aso caldera. Positive (negative) values indicate motion of the surface toward (away from) the satellite. Black and yellow arrows indicate the direction of flight and of microwave emission, respectively. Diamonds are GNSS stations painted according to displacements relative to 960700 (diamond painted black in the coseismic interferogram and white in the postseismic ones). Black solid lines are surface traces of active faults (AIST 2005). Thick red lines are coseismic surface ruptures (Kumahara et al. 2016; Goto et al. 2017; Geological Survey of Japan, AIST 2017). See also the legends of Figs. 1 and 2

Fig. 7 Close-up of unwrapped interferograms of P130-F650 around the Futagawa–Hinagu faults and Aso caldera. See also the legends of Figs. 1, 2 and 6. Diamonds are GNSS stations painted according to displacements relative to 970833 (diamond painted black in the coseismic interferogram and white in the postseismic ones)
Fig. 8 Close-up of unwrapped interferograms of P23-F2950–2960 around the Futagawa–Hinagu faults and Aso caldera. See also the legends of Figs. 1, 2 and 6. Diamonds are GNSS stations painted according to displacements relative to 960700 (diamond painted black in the coseismic interferogram and white in the postseismic ones)

Fig. 9 Average change rate of line-of-sight displacement. a Ascending interferograms (P131-F640 and P130-F650). b Descending interferogram (P23-F2950–2960). White diamonds are the reference points for the stacking (GEONET 960700 for P131 and P23, 970833 for P130). Numbers at GEONET sites are their station codes, though only the lower 4 digits are shown. Other diamonds are GEONET stations painted according to the average rate of displacement in the LOS direction. Red star is the epicenter of the earthquake of April 16, 2016 determined by JMA (2016). Black and red solid lines are surface traces of active faults and coseismic surface ruptures. The LOS displacement profiles in Fig. 11 are taken along the 7 numbered black lines. The time series of LOS changes in Fig. 12 are taken at the points marked by smaller white diamonds with letters in b. See also the legends of Figs. 1, 2 and 6

Additional file 1: Figures S1–S3 show all flattened, filtered non-dispersive interferograms for P131-F640, P130-F650, and P23-F2950–2960, respectively. Close-ups of the unwrapped interferograms around the source region and Aso caldera are shown in Figs. 6, 7 and 8, where the displacements of GEONET stations during the corresponding periods, projected onto the LOS directions, are also shown. All LOS displacements are referred to GEONET 960700 for paths 131 and 23, and to 970833 for path 130, considering the distance from the source faults and the coherence around these stations. Coseismic interferograms are also shown in the top left panels of each figure. A comparison of the LOS changes with those at GEONET sites with the same reference is given in Additional file 1: Figure S4, and the average LOS change rates together with the GEONET average velocities are shown in Fig. 9. The time series of InSAR displacement roughly follow the GNSS data at most sites, with fluctuations. InSAR displacements in summer tend to depart from those of GNSS, which may be attributed to tropospheric disturbances related to heavy precipitation (precipitation at Kumamoto is shown in Fig. 12f). Because neither correction of tropospheric disturbances nor temporal smoothing is applied, jumps appear in the InSAR time series when there are storms or torrential rains. Furthermore, all interferograms refer to one specific GEONET site in a scene, so a local disturbance around it affects the entire image. GNSS data are daily averaged coordinates, while an InSAR image is an instantaneous snapshot; therefore, a local tropospheric disturbance affects an interferogram more significantly than it affects GNSS daily coordinates. Discrepancies are large at GEONET sites 950456 and 081169; I suspect that soil conditions or local topography around these sites affect their movement.

I also compare the present results with those of a time series analysis of Sentinel-1 images. I processed Sentinel-1 images acquired during the period from April 20, 2016 to April 2018 using LiCSBAS, developed by Morishita et al. (2020). Additional file 2: Figure S5 shows the average LOS displacements for both ascending and descending images. Discrepancies are recognized, but they are attributable to the difference in analysis strategies.
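To make that difference concrete, the stacking estimate used here reduces to a one-parameter least-squares rate when every unwrapped interferogram spans from the common reference image to epoch t_i. The following is a minimal sketch under that assumption; all arrays are hypothetical.

```python
import numpy as np

def stack_rate(unw_stack, dt_years):
    """Least-squares LOS rate from interferograms sharing one reference image:
    minimizing sum_i (phi_i - r*t_i)^2 gives r = sum(t_i*phi_i) / sum(t_i^2).
    unw_stack: (n_ifg, ny, nx) unwrapped LOS displacements (e.g., cm);
    dt_years:  (n_ifg,) time spans from the reference acquisition."""
    t = np.asarray(dt_years)[:, None, None]
    return (t * unw_stack).sum(axis=0) / (t**2).sum()

# Synthetic check: a uniform 3 cm/year signal plus noise is recovered.
rng = np.random.default_rng(0)
t = np.array([0.15, 0.5, 1.0, 1.5, 2.0])
stack = 3.0 * t[:, None, None] + rng.normal(0, 0.3, (5, 4, 4))
print(stack_rate(stack, t).round(2))  # ~3 cm/year everywhere
```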
The present result by stacking is the weighted average of the rates of change between the first image and the others. On the other hand, LiCSBAS averages the rates of change of LOS between pairs of consecutive images. Therefore, rapid movement in the early stage, if any, is emphasized in the present result, while the LiCSBAS result gives lower rates reflecting the later stage. Despite this discrepancy, the same features of the spatial distribution are recognizable. The most important issue is the low coherence of the Sentinel-1 images in the mountainous area on the southeastern side of the Futagawa and Hinagu faults and on the northern flank of Aso caldera. As already noted, the L-band SAR of ALOS-2/PALSAR-2 gives higher coherence and can be utilized for the detection of movements there.

Figure 10 shows the quasi-EW and vertical components of average velocity during the period from the first acquisitions to April 2018. The E–W and vertical components of the average velocities of GEONET stations are also indicated. For the conversion to E–W and U–D components, the same GEONET stations (960700 and 970833) were fixed in the overlapping area of the ascending and descending images. In the following section on the spatial variation of deformation, I mainly discuss the E–W and U–D components in Fig. 10.

Fig. 10 Quasi east–west and vertical components of average velocity derived from the ascending and descending interferograms in Fig. 9. Diamonds painted white are the reference points (GEONET 960700) for the conversion of LOS displacement. Black and red solid lines are surface traces of active faults and coseismic surface ruptures. Dark green lines delineate the boundaries of igneous rocks. Arrows with letters are points discussed in the text. See also the legends of Figs. 1, 2 and 6

Spatial distribution of average rate of postseismic deformation

Coseismic deformations are shown at the top left in Figs. 6, 7 and 8. Comparing them with the following postseismic interferograms, I confirmed that the postseismic deformations are concentrated around the source area of the mainshock. However, the spatial patterns differ significantly from each other, especially in the ascending interferograms (Fig. 6). Fujiwara et al. (2016) already showed early-stage postseismic deformations, of April–May 2016, with ALOS-2/PALSAR-2 from both ascending and descending orbits. Their interferogram from the descending orbit is the same as that used in this study (P23; second left panel of the top row in Fig. 8). They used pairs of ascending images from a different path with a higher elevation angle. There is a slight difference in the spatial pattern of deformation obtained in the ascending interferograms, but the features of the obtained postseismic deformations are basically the same. In this study, I put emphasis on their temporal evolution and on the deformation that arose afterward. Fujiwara et al. (2016) pointed out several spots of significant LOS change: (1) deformation along the Futagawa fault, especially near the junction with the Hinagu fault; (2) deformation around the Suizenji fault (which they referred to as the Suizenji Park); (3) deformation in Ozu Town. In Fig. 12 of Fujiwara et al. (2016) there are many signals in Aso caldera, but they did not discuss them in detail. I also recognized the same features and found that they were amplified during the following 2.5 years (Figs. 6, 7 and 8). They noted that there is no clear deformation around the outer rim of Aso caldera, where many surface ruptures were observed in the coseismic interferograms. I did not observe clear deformation there in the later interferograms, either.
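Before describing the deformation pattern in detail, here is a minimal sketch of the two-look decomposition used to produce the quasi-EW and vertical components of Fig. 10. It neglects the N–S component, to which InSAR is nearly blind; the incidence and look-azimuth angles are rough assumptions for right-looking PALSAR-2 strip-map geometry, not values from the paper.

```python
import numpy as np

def los_unit_vector(incidence_deg, look_azimuth_deg):
    """Ground-to-satellite unit vector (e, n, u). look_azimuth is the azimuth
    of the satellite-to-ground look direction, clockwise from north."""
    inc = np.radians(incidence_deg)
    az = np.radians(look_azimuth_deg)
    return np.array([-np.sin(inc) * np.sin(az),
                     -np.sin(inc) * np.cos(az),
                      np.cos(inc)])

def decompose_ew_ud(los_asc, los_desc, u_asc, u_desc):
    """Solve the 2x2 system for quasi-EW and vertical motion,
    assuming the N-S component is negligible."""
    A = np.array([[u_asc[0], u_asc[2]],
                  [u_desc[0], u_desc[2]]])
    return np.linalg.solve(A, np.array([los_asc, los_desc]))

# Hypothetical geometry: ascending looks ENE, descending looks WNW.
u_asc = los_unit_vector(36.0, 78.0)
u_desc = los_unit_vector(36.0, 282.0)
# Synthetic truth (cm, ENU): 5 cm westward, 2 cm subsidence.
truth = np.array([-5.0, 0.0, -2.0])
d_e, d_u = decompose_ew_ud(u_asc @ truth, u_desc @ truth, u_asc, u_desc)
print(round(d_e, 2), round(d_u, 2))  # -5.0, -2.0
```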
The most prominent feature is subsidence along the Futagawa fault and its western extension. Fujiwara et al. (2016) measured slightly less than 10 cm of displacement near the junction of the Futagawa and Hinagu faults during the first 2 weeks after the mainshock. A subsidence rate exceeding 6 cm/year is recognized in this zone over the 2.5 years, despite loss of coherence over most of it (arrow a in Fig. 10). Another spot of subsidence is found between the junction and Aso caldera (arrow b), where a westward shift also prevails. There is a surface rupture there along another fault, the Idenokuchi fault (Toda et al. 2016), and it is noteworthy that this area of subsidence is bounded by the Futagawa and Idenokuchi faults. Rapid uplift is found on the south side of the Idenokuchi fault (arrow c); in Fig. 12 of Fujiwara et al. (2016) there is no notable signal in this area. Uplift is also recognized on the north side of the Futagawa fault (arrow d). A zone of slight subsidence and westward shift (arrow e) is surrounded by this uplift zone on the north side of the Futagawa fault. It is interesting that the boundary between these uplifted and subsided zones nearly coincides with the northern edge of Pleistocene pyroclastic flow deposits (dark green line). A westward shift is remarkable on the southeastern side of the Hinagu fault, reaching 6 cm/year (arrow f), and I also see eastward motion of < 2 cm/year around the epicenter. Farther west, I observed subsidence in a fan-shaped zone near the coast (arrow g); interestingly, its southern boundary roughly coincides with the western extension of the Futagawa fault.

I also found significant deformation off the Futagawa and Hinagu faults, consistent with Fujiwara et al. (2016). The most remarkable is a NW–SE-trending zone of subsidence of ~ 4 cm/year in the city of Kumamoto (arrow h). Large subsidence there was also detected in the coseismic interferograms (upper left panels in Figs. 6 and 8) (e.g., Fujiwara et al. 2016). This zone of subsidence coincides with the Suizenji fault zone found by Goto et al. (2017). The present results suggest that postseismic deformation also continued around this fault zone during the 2.5 years.

Several spots of subsidence can be observed in Aso caldera as well. In the northernmost part of the caldera, coseismic surface ruptures were found (Tsuji et al. 2017; Fujiwara et al. 2017). I detected significant subsidence along these surface ruptures during the postseismic period (arrow i), implying continuing movement associated with these ruptures. Another remarkable motion was found on the northern flank of the central cones of Aso volcano (arrow j), where a westward shift is also dominant. Its northern boundary seems to be aligned along a line trending NE–SW. Significant eastward motion was found at the central cones of Aso volcano (Fig. 10a). There were small explosions during February to May 2016, and a significant explosion occurred on October 7–8, 2016 (JMA 2016); this eastward motion may be attributed to this activity. I also found another small spot of westward shift of ~ 4 cm/year with slight subsidence north of Ozu Town, about 10 km north of the Futagawa fault (arrow k). This deformation was already pointed out by Fujiwara et al. (2016). The zone trends in the WNW–ESE direction, which corresponds to the local trend of a valley where Pleistocene sedimentary rocks are sandwiched between igneous rocks. I did not see any sign of such deformation in the preseismic interferogram (Additional file 2: Figure S6).
Therefore, this deformation may have been caused by strong shaking during the Kumamoto earthquake sequence.

LOS displacement profiles along selected sections

It is important to examine the temporal variation of deformation when discussing the mechanism of postseismic deformation. Because the timing and frequency of observations differ between the descending and ascending orbits, it is impossible to derive E–W and vertical components at specific epochs; therefore, I discuss LOS displacements in this section. For this purpose, I prepared two different views of the time series of observed deformation. One is the temporal change along selected profiles. I sampled the LOS change within 0.005° on both sides of a profile and plotted the samples, shifted according to the acquisition times of the subsequent images. I chose 7 profiles, shown in Fig. 9b, that run through the interesting spots of deformation discussed in the previous section; they also allow us to grasp the characteristics of the spatial distribution of deformation, especially its discontinuities. Five sections are along meridians, while two sections run in the E–W direction. I emphasize that no correlation between LOS displacement and topography is recognized, though some sections run through areas of rough topography.

Section 1 is the westernmost profile of LOS change; it runs off the main strands of the Futagawa and Hinagu faults but crosses the area of local LOS increase around the Suizenji fault zone in Kumamoto City (Fig. 11a, b). I can see a local LOS increase around 32.8°N in both interferograms (vertical line) and another local deformation slightly north of 32.7°N in the descending interferogram (red arrow in Fig. 11b). The former corresponds to the local subsidence in Kumamoto City, while the latter is the signal on the western extension of the Futagawa fault, i.e., the Akitsugawa flexure zone of Goto et al. (2017). These observations suggest that postseismic deformation occurred not only in the vicinity of the coseismic faults but also off the source. Looking closely at the LOS change around 32.8°N in the descending interferogram, I notice two steps, implying at least two possible faults there (below SZ).

Fig. 11 Temporal evolution of LOS displacements along 7 lines. Black lines at the bottom of each diagram are profiles of coseismic LOS displacements along the same lines. Data were sampled within 0.005 degrees (~ 0.5 km) on both sides of each line. The color of each line indicates the acquisition day of the consecutive images. Thick solid lines give the scales for displacement; the scales for coseismic and postseismic displacements differ in each profile. Vertical solid lines in some diagrams indicate the locations of coseismic surface ruptures or surface traces of active faults. SZ: Suizenji fault, FF: Futagawa fault, HF: Hinagu fault, IF: Idenokuchi fault, MF: Midorikawa fault, RP: surface ruptures in Aso caldera. WF in (k) and (l) indicates the region of the coseismic fissure swarm detected by Fujiwara et al. (2016); coseismic displacements in this area are shown in gray because the LOS displacements are less accurate owing to many discontinuities. Orange horizontal lines are the baselines for the final observations. Bottom diagrams are profiles of topography. Red arrows are points discussed in the text. a, b Profiles of LOS displacements at each epoch along line 1 (meridian of 130.720°E) of ascending and descending interferograms, respectively. c, d Profiles along line 2 (meridian of 130.810°E) of ascending and descending interferograms, respectively.
e, f Profiles along line 3 (meridian of 130.850°E) of ascending and descending interferograms, respectively. g, h Profiles along line 4 (meridian of 130.895°E) of ascending and descending interferograms, respectively. i, j Profiles along line 5 (meridian of 131.040°E) of ascending and descending interferograms, respectively. k, l Profiles along line 6 (parallel of 32.915°N) of ascending and descending interferograms, respectively. m, n Profiles along line 7 (parallel of 32.790°N) of ascending and descending interferograms, respectively

Section 2 runs just west of the junction of the Futagawa and Hinagu faults (Fig. 11c, d). The LOS increase exceeds 30 cm in the descending interferogram, the largest in the entire region under study. I observe sharp changes at the northern boundary of this zone of LOS increase (= subsidence), which corresponds to the Akitsugawa flexure zone (vertical line with AF). The southern half of the subsidence zone shows gradual change in both interferograms but is limited by the Hinagu fault (vertical line with HF). Compared with the baseline of the last observation (orange lines), a discrete shift of the far-field displacement is noticeable on both sides.

Section 3 is a profile running across a smaller local subsidence between the Futagawa and Idenokuchi faults (Fig. 11e, f). There is a spike-like pattern in the spatial distribution of LOS changes around 32.8°N (between the vertical lines with HF and FF). Its width is much narrower than those found in Sections 2 and 4. There is also a shift in the far-field displacement, which is evident in Fig. 11f.

Section 4 shows the temporal evolution of LOS changes along the meridian passing through the spot of large subsidence between the Futagawa and Idenokuchi faults (Fig. 11g, h). I recognize sharp changes across these two faults and a large LOS increase (= subsidence) between them (vertical lines with IF and FF). This LOS change exceeded 10 cm after about 1 year. It is worth noting that the changes across the Idenokuchi fault are larger and sharper than those across the Futagawa fault, especially in the descending interferograms (Fig. 11h), which implies that afterslip on the Idenokuchi fault, if any, is more active than on the Futagawa fault. I also note another gradual step north of the Futagawa fault (red arrow just right of FF), suggesting minor buried slip. There is another discontinuous change around 32.9°N (red arrow farther right), corresponding to the area of westward shift north of Ozu Town in Fig. 10a. I should also note the convex pattern of the LOS change in the ascending interferograms (double-headed arrow in Fig. 11g), while the LOS change along this profile is almost flat in the descending ones. This convex pattern became noticeable about 200 days after the mainshock.

Section 5 runs across Aso caldera. A sharp discontinuity is obvious around 33.0°N, just south of the northern caldera rim (RP). This point is located slightly north of the surface rupture that formed during the April 16 shock of Mw 7.0 (Fujiwara et al. 2016; Fujiwara et al. 2017; Tsuji et al. 2017). I can see that the differential motion evolved with elapsed time. There were several step-like patterns of deformation during the first 100 days, but most of them died out, and the largest one continued for 2 years. LOS changes with a relatively short wavelength of ~ 2 km can be seen in the ascending interferogram on the caldera floor and central cones, while longer-wavelength deformation, with a local LOS increase centered around 32.9°N, is detected in the descending interferogram (red arrow in Fig. 11j).
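As a minimal illustration of how the profiles in Fig. 11 were sampled (LOS values collected within ±0.005° of each line), consider the sketch below; the grids and the placeholder LOS field are hypothetical.

```python
import numpy as np

def meridian_profile(los, lons, lats, lon0, half_width=0.005):
    """Collect (latitude, LOS) samples within +/- half_width degrees of the
    meridian lon0, sorted south to north; los/lons/lats are 2-D grids."""
    mask = np.abs(lons - lon0) <= half_width
    lat_s, los_s = lats[mask], los[mask]
    order = np.argsort(lat_s)
    return lat_s[order], los_s[order]

# Hypothetical geocoded grid around the Futagawa fault area.
lons, lats = np.meshgrid(np.linspace(130.6, 131.2, 601),
                         np.linspace(32.6, 33.1, 501))
los = np.random.default_rng(1).normal(0, 0.5, lons.shape)  # placeholder field
lat_prof, los_prof = meridian_profile(los, lons, lats, lon0=130.895)
print(lat_prof.size, "samples along 130.895E")
```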
Sections 6 and 7 are LOS displacement profiles along two parallels. Section 6 runs north of the Futagawa fault and through the northern part of Aso caldera (Fig. 11k, l). A spike-like change of LOS just east of the caldera rim (left red arrow) is related to the coseismic surface rupture, the same signal as in Section 5. Another notable deformation is the rapid LOS increase around 131.2°E in the vicinity of the central cones, which is as large as 10 cm (right arrow). This change obviously does not correlate with topography. I also recognize a difference in the level of LOS change between the two sides of this zone in both ascending and descending interferograms. Section 7 crosses the local LOS increase in Kumamoto City, the junction of the Futagawa and Hinagu faults, and the western flank of Aso caldera. I find a remarkable deformation on the southeast side of the Futagawa fault in the ascending interferogram; this deformation may have accelerated after the summer of 2016 (double-headed arrow).

Time series of LOS displacement at selected points

The other view is the time series of LOS changes at selected points, which makes it easier to understand the decay history of deformation. I chose 5 points, shown in Fig. 9. Because acquisitions were made frequently from the descending orbit (P23) and less frequently from the ascending orbits, I examine only the time series of descending interferograms. I sampled the LOS change rates in an area of 0.005° × 0.005° centered at each selected point and took the average. To estimate the characteristic time, I fit an exponential decay function to the observed time series:

$$u = a\left(1 - \exp\left(-t/\tau\right)\right) + b, \qquad (1)$$

where u is the LOS displacement, a and b are constants, t is the elapsed time in days from April 16, 2016, and τ is the characteristic time. Red curves in each panel of Fig. 12 are the estimated decay time series. It is important to note that the LOS changes up to the end of May 2016 dominate the 2-year series at most points, implying much faster motion during this period than this approximation allows. This fast motion may contribute to the difference between the average velocities from stacking of ALOS-2/PALSAR-2 and from the time series analysis of Sentinel-1.

Point A is located in the middle of the local LOS increase in Kumamoto City. The LOS changes decayed rapidly until the fall of 2016, though there is a fluctuation in 2017–2018 (Fig. 12a). Fitting the exponential decay function, I obtain a characteristic time of only 29 days. The total LOS change amounts to ~ 5 cm.

Fig. 12 a–e Time series of LOS changes at 5 spots in the descending interferogram in Fig. 9b. Each is the average of LOS change in an area of 0.005° × 0.005° centered at the point shown in each diagram. Error bars indicate 1 sigma. Red line is the best-fit exponential decay curve, whose characteristic time is shown in the diagram. f Daily precipitation in mm at the JMA Kumamoto station in Kumamoto City (JMA 2020)

Point B is located south of the junction of the Futagawa and Hinagu faults, where westward horizontal motion is dominant (Fig. 10a). This point also shows rapid decay with a time constant of ~ 50 days and may have reached ~ 6 cm by the winter of 2016, though the scatter is somewhat large (Fig. 12b).
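Fitting Eq. (1) can be sketched with scipy.optimize.curve_fit as below. The time series is hypothetical (loosely mimicking a fast-decaying point such as A), and the initial guesses are assumptions; this is an illustration of the fitting procedure, not the exact estimation used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, b, tau):
    """Eq. (1): u = a*(1 - exp(-t/tau)) + b, with t in days since 2016-04-16."""
    return a * (1.0 - np.exp(-t / tau)) + b

# Hypothetical LOS time series at one point (days, cm):
t_obs = np.array([2, 16, 58, 100, 170, 240, 380, 560, 740], dtype=float)
u_obs = np.array([0.4, 1.9, 3.8, 4.4, 4.8, 5.0, 5.1, 4.9, 5.2])

# Initial guesses: amplitude near the final level, tau ~ 100 days.
popt, pcov = curve_fit(decay, t_obs, u_obs, p0=[5.0, 0.0, 100.0])
a, b, tau = popt
print(f"tau = {tau:.0f} days, amplitude = {a:.1f} cm")
```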
On the other hand, points C–E have longer time constants than the previous points. Point C, located in the large subsidence zone between the Futagawa and Idenokuchi faults, gradually decayed until the beginning of 2017 with a time constant of ~ 230 days (Fig. 12c). In 2017 it was stable at the level of an 8 cm LOS increase, and it fluctuated in 2018. Point D is in the middle of the uplift area on the western flank of Aso caldera. During the first 2 weeks, this point moved rapidly but then suddenly decelerated (Fig. 12d). It then continued to move in the same direction (= uplift) with a slow decay, with a characteristic time of ~ 980 days. Point E, in Aso caldera, shows a pattern of temporal change similar to that of Point C, with an almost identical characteristic time (~ 210 days) (Fig. 12e). Because these two points are located ~ 20 km apart, a mechanical link between them seems unlikely. I add the daily precipitation at the Japan Meteorological Agency's (JMA) Kumamoto station in Fig. 12f. The Kumamoto area suffered heavy rain, mainly in summer, during these 3 years, but the correlation with the temporal LOS changes is unclear at all points.

Trial of afterslip model

There is wide variety in the spatial and temporal characteristics of the observed postseismic deformation, and it may be difficult to explain them with one mechanism. Because I detected several sharp changes across some coseismic surface ruptures, it is reasonable to first examine to what extent an afterslip model can explain the observed deformation. For this purpose, I down-sampled the average LOS rates (Fig. 9) using the quadtree algorithm (Additional file 2: Figure S7) and estimated slip on possible faults by inverting them. There are clearly at least four or five distinct deformation zones, in the vicinity of the Futagawa, Hinagu, Idenokuchi, and Suizenji faults and in Aso caldera. Because there are too many parameters to estimate simultaneously, it is reasonable, as a first step, to separate the data into zones surrounding each source. In this study, I divided the dataset into four regions, considering distance from the possible sources (Additional file 2: Figure S7). Region (1) is the area surrounding the Futagawa and Hinagu faults. L-band SAR gives highly coherent phase data even in mountainous regions, but I excluded data south of the Midorikawa fault and north of 33°N, considering the distance from the Futagawa and Hinagu faults. I excluded data from the coasts of the Ariake and Yatsushiro Seas, because these areas might have suffered subsidence due to compaction of artificial land (Fig. 2), and I also excluded data in region (2). Region (2) is the vicinity of the Suizenji fault; judging from the spatial distribution of LOS displacements, data in an area about 10 × 10 km² wide were extracted. These two regions are covered by the P131 and P23 images. Region (3) is Aso caldera, which the P130 and P23 images cover. There I excluded data in the vicinity of the surface ruptures: Tsuji et al. (2017) pointed out that the deformation near the surface ruptures in Aso caldera may be generated by a source as shallow as 50 m, so it is reasonable to exclude these data as noise in the following inversion for afterslip. I applied the method of Fukahata and Wright (2008) and its extension to dual faults (Fukahata and Hashimoto 2016) to the down-sampled LOS data.
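The quadtree down-sampling mentioned above can be sketched as a recursive, variance-driven subdivision; this is a generic illustration, not the exact implementation used for Additional file 2: Figure S7, and the threshold and block sizes are assumptions.

```python
import numpy as np

def quadtree(data, i0, j0, n, thresh, min_size, out):
    """Recursively split an n-by-n block while its (nan-tolerant) variance
    exceeds thresh; append (row_center, col_center, mean) for kept blocks."""
    block = data[i0:i0 + n, j0:j0 + n]
    if np.all(np.isnan(block)):
        return
    if n <= min_size or np.nanvar(block) <= thresh:
        out.append((i0 + n / 2, j0 + n / 2, np.nanmean(block)))
        return
    h = n // 2
    for di, dj in ((0, 0), (0, h), (h, 0), (h, h)):
        quadtree(data, i0 + di, j0 + dj, h, thresh, min_size, out)

# Hypothetical rate map (cm/year) with a local anomaly to be densely sampled.
rng = np.random.default_rng(2)
rate = rng.normal(0, 0.1, (256, 256))
rate[96:160, 96:160] += 5.0
samples = []
quadtree(rate, 0, 0, 256, thresh=0.05, min_size=8, out=samples)
print(len(samples), "down-sampled observations")
```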
According to Fukahata and Wright (2008), the observed displacement **d** (an N × 1 vector) can be expressed as a function of the parameters **m** (an M × 1 vector) and the observation error **e** as

$$\mathbf{d} = f(\mathbf{m}) + \mathbf{e}, \qquad (2)$$

where f is a vector function including the Green's functions. **m** consists of the fault model parameters **p** (location, length, width, strike, dip) and the slip **a** on the fault. Thus, (2) can be written as

$$\mathbf{d} = f(\mathbf{p}, \mathbf{a}) + \mathbf{e} = \mathrm{H}(\mathbf{p})\,\mathbf{a} + \mathbf{e}, \qquad (3)$$

where H is an N × M matrix constructed from the fault parameters and the direction cosines of the LOS. The contribution of the misfit to the system is

$$r_{d} = \left[\mathbf{d} - \mathrm{H}(\mathbf{p})\,\mathbf{a}\right]^{T} \mathrm{E}^{-1} \left[\mathbf{d} - \mathrm{H}(\mathbf{p})\,\mathbf{a}\right], \qquad (4)$$

where E is the covariance matrix of the observation data. A smoothness condition is then added to this system:

$$r_{p} = \mathbf{a}^{T} \mathrm{G}(\mathbf{p})\,\mathbf{a}. \qquad (5)$$

Finally, the solution is obtained by minimizing ABIC, Eq. (20) of Fukahata and Wright (2008). An important parameter is α², a hyperparameter that controls the trade-off between the data and the a priori information (the assumption of smoothness). A larger α² gives a smoother distribution of slip, but the residuals between the observed data and the theoretical displacements become larger. The minimum of ABIC gives an optimal solution with an appropriate α².

For regions (2) and (3), I applied Fukahata and Wright's (2008) method, because a single fault is considered sufficient to explain the observed displacements. To reduce the contribution of the Futagawa and Hinagu faults, I carefully excluded data close to these faults as far as possible. For region (1), I used the dual-fault inversion procedure of Fukahata and Hashimoto (2016), who modeled the Futagawa and Hinagu faults to explain the coseismic deformation. Even with two faults, there are many degrees of freedom; therefore, I fixed the dip angles of the two faults at their estimates, 61° and 74° for the Futagawa and Hinagu faults, respectively, but changed the length and width considering the spatial distribution of deformation. For the Suizenji fault, I assumed the same strike as the surface ruptures and estimated the dip angle and location. In Aso caldera there is no clear surface expression of a fault, so I relied on the spatial pattern of the observed deformation and placed the modeled fault between the zones of eastward and westward motion in Fig. 10a. In these models, slip on the fault edges, except at the surface, is fixed at zero. By changing the location and dip angle, I searched for the optimal model. The model parameters are listed in Table 2. Then, slightly changing the strike and location of these faults, I searched for the optimal models that minimize ABIC.

Table 2 Parameters of fault models in this study

During the inversion, a covariance matrix is required. Its components are represented as follows, assuming Gaussian errors with zero mean and covariance σ²E:

$$E_{ij} = \exp\left[-\frac{\sqrt{\left(x_{i} - x_{j}\right)^{2} + \left(y_{i} - y_{j}\right)^{2}}}{D}\right], \qquad (6)$$

where x_i and y_i are the easting and northing of site i, and D is the characteristic correlation distance of the errors. D = 10 km is often used in many studies, but with a longer correlation length, deformation with short wavelengths might be smoothed out.
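A minimal sketch of the covariance construction in Eq. (6), with D as a tunable parameter, is given below; the observation points are hypothetical, and the final line only illustrates where E enters the misfit of Eq. (4).

```python
import numpy as np

def covariance_matrix(x_km, y_km, sigma=1.0, corr_dist_km=5.0):
    """Eq. (6): E_ij = sigma^2 * exp(-d_ij / D), with d_ij the horizontal
    distance between down-sampled observation points and D = corr_dist_km."""
    dx = x_km[:, None] - x_km[None, :]
    dy = y_km[:, None] - y_km[None, :]
    dist = np.hypot(dx, dy)
    return sigma**2 * np.exp(-dist / corr_dist_km)

# Hypothetical down-sampled points (km, local easting/northing):
rng = np.random.default_rng(3)
x, y = rng.uniform(0, 40, 50), rng.uniform(0, 40, 50)
E = covariance_matrix(x, y, sigma=0.5, corr_dist_km=5.0)

# E enters the misfit r_d = (d - H a)^T E^{-1} (d - H a); in practice one
# solves with E rather than inverting it explicitly:
w = np.linalg.solve(E, np.ones(len(x)))
print(E.shape, w[:3].round(3))
```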
In this study, deformation with wavelengths shorter than 10 km is dominant, especially around the Suizenji, Futagawa and Hinagu faults; therefore, I adopted D = 5 km. The distribution of ABIC is shown in Additional file 3: Figures S8–S10. The red circle in Additional file 3: Figure S8 and the black dots in Additional file 3: Figures S9–S10 indicate the optimal models. Overall, the optimal models are located close to the global minimum. A slight correlation between Xoff and strike is recognized for the Hinagu fault. I selected the model with minimum ABIC for the Futagawa–Hinagu fault model, but chose models with smoother slip distributions than those of minimum ABIC for the Suizenji and Aso provisional faults. For a model with a smaller hyperparameter and minimum ABIC, the constraint on the slip distribution is weak, which sometimes yields physically unacceptable distributions; therefore, I selected the second-best models with much smoother slip distributions.

Figure 13 is a compilation of the 4 modeled faults with their estimated slip distributions projected onto the surface. Figure 14 shows the distributions of the estimated slip and its error projected onto a vertical plane along the strike of each fault for the optimal models; the motion of the hanging-wall side is shown relative to the footwall side. The residuals and theoretical LOS velocities are shown in Fig. 15 and Additional file 3: Figure S11, respectively. Residuals larger than 2 cm/year are found around the eastern tip of the Futagawa fault, and negative residuals are also seen along the central and western parts of the Futagawa fault. These large residuals suggest a complexity of deformation and possible sources other than afterslip.

Fig. 13 Compilation of modeled faults. Orange, blue, green and purple indicate the Futagawa, Hinagu and Suizenji faults and the provisional fault in Aso caldera, respectively. Unit of slip contours is cm/year

Fig. 14 Distribution of estimated slip (top) and errors (bottom) projected onto a vertical plane along strike. Blue arrows are slips of the hanging-wall side relative to the footwall side. Units of contours are cm/year. a Futagawa fault. Dip angle is 74°. b Hinagu fault. Dip angle is 61°. Vertical dashed lines in these figures indicate the crossing point of the two faults. c Suizenji fault. Dip angle is 64°. d Provisional fault in Aso caldera. Dip angle is 55°

Fig. 15 Fitting of the afterslip model. a Down-sampled interferograms of ascending orbits P131-F640 and P130-F650. b Down-sampled interferogram of descending orbit P23-F2950–2960. c Simulated rates of LOS change for ascending orbits P131-F640 and P130-F650. d Simulated rates of LOS change for descending orbit P23-F2950–2960. e Residuals for the ascending orbits. f Residuals for the descending orbit

Slip is concentrated at depths shallower than 10 km for all models, and the estimated errors are not larger than 8 cm/year. The optimal model for the Futagawa and Hinagu faults is located very close to the surface ruptures. On the Futagawa fault, there are three main areas of large slip with a couple of minor patches (Fig. 14a). The easternmost patch has left-lateral slip of ~ 20 cm/year, which is opposite in sense to the coseismic slip (e.g., Figure 4 in Fukahata and Hashimoto 2016). Normal faulting of up to 12 cm/year is dominant in the central patch, located about 5 km east of the junction; the coseismic model of Fukahata and Hashimoto (2016) also shows a normal-fault component in its eastern part. These left-lateral and normal slips arise from the westward motion and local subsidence on the north side of the Futagawa fault. Obviously, these motions cannot be created by right-lateral slip on this fault.
Therefore, the westward motion around the eastern tip of the Futagawa fault is not considered to have been caused by its afterslip. The westernmost patch, with the largest slip, is located west of the junction of the Futagawa and Hinagu faults; its right-lateral slip exceeds 30 cm/year. As there is no significant slip there in the coseismic model of Fukahata and Hashimoto (2016), this slip may be generated by stress concentration at the edge of the coseismic slip. The Hinagu fault has two patches of large slip (Fig. 14b). The northern patch is located close to the westernmost patch of the Futagawa fault. Its normal faulting may be related to the subsidence near the junction of these two faults, which also suggests an interaction between the two faults. Furthermore, considering the geological conditions there, this subsidence might be caused by compaction of soil. The southern patch on the Hinagu fault has right-lateral slip of ~ 20 cm/year. Its peak is estimated at a depth of ~ 3 km, and the slip almost reaches the surface. The observed displacements show a clear discontinuity (e.g., Fig. 9a), and creep on the surface ruptures was confirmed in this region (e.g., Shirahama et al. 2016). Therefore, right-lateral afterslip is highly plausible on this patch of the Hinagu fault. This model fails to explain the subsidence between the Futagawa and Idenokuchi faults (Additional file 3: Figure S11a, b). Incorporation of the Idenokuchi fault adds further complexity to the inversion, which is beyond the present capability of the inversion scheme, and there might also be a contribution from soil compaction in this area. Future work that incorporates these complexities is desirable.

Figure 14c shows the slip distribution of the Suizenji fault, where normal faulting of less than 10 cm/year was detected. The dip angle was estimated at 64°, which is consistent with that used in the stress calculation by Goto et al. (2017). The upper margin of this fault corresponds to one of the strands of the surface rupture. Slip is concentrated in the depth range of 2–8 km; however, slip in the very shallow part is negligible, which causes an underestimate of the observed displacements (Additional file 3: Figure S11c, d).

Figure 14d shows the slip distribution of the provisional fault in Aso caldera. The dip angle was estimated at 55°, dipping southward; I made a similar calculation with a northward-dipping fault model, but the resulting ABIC was larger. Right-lateral slip dominates, with its peak at a depth of ~ 4 km, and the maximum slip reaches 20 cm/year. This motion may cause the subsidence on the northern flank of the central cones of Aso and the uplift on the southwestern rim of the caldera. The subsidence around the central cones cannot be explained by this model and may be related to the volcanic activity of Aso (Additional file 3: Figure S11e, f).

Discussion

I have presented the results of the analysis of ALOS-2/PALSAR-2 images acquired after the 2016 Kumamoto earthquake sequence. In this section, I point out several pros and cons of the present study and problems to be resolved in the future.

Efficiency of L-band SAR

Thanks to the long wavelength of PALSAR-2, coherence is high even for pairs with temporal separations longer than 2 years (Fig. 5). The longest separation is 2.7 years (April 18, 2016 and December 10, 2018), yet coherence remains high enough to detect deformation even in mountainous regions. Recently, Sentinel-1 images have been widely used to study crustal deformation, because the revisit time is 6 or 12 days and a large number of images of the same area have already been accumulated. However, temporal decorrelation is strong, especially in vegetated areas (e.g., Morishita et al. 2020), and it is difficult to obtain deformation from a single pair of images with a long temporal separation.
This is one of the biggest advantages of L-band SAR, and I hope PALSAR-2 images continue to be accumulated for as long as possible.

Ionospheric disturbances were observed in both ascending and descending interferograms, and their correction with split-beam interferometry was effective, especially for the ascending interferograms (e.g., Fig. 4). It is interesting that the distribution of ionospheric disturbances differs between ascending and descending interferograms (Figs. 4 and 5). The local time of acquisition is around midnight for the ascending orbit, while observations from the descending orbit are made around noon. This difference may be the cause of the different patterns of ionospheric disturbance that appear in L-band interferograms. Chen et al. (2019) discuss the variation of characteristic MSTID parameters such as period, wavelength and phase velocity observed over Hong Kong, and mention that the wavelength of MSTID is slightly longer in the daytime of spring, autumn, and winter than at night in spring and summer, though the difference seems marginal. To verify the ionospheric correction, one may consider using GNSS TEC. A comparison of ionospheric disturbances from GNSS and InSAR, however, is not straightforward. First, the timing of observation is different, even though recent continuous GNSS observations are made at 1-s intervals. Second, the incidence angle and azimuth are not the same; coincidence of the LOS of SAR and GNSS satellites is rare. Finally, the distribution of GNSS sites is too sparse for this purpose. As shown in Fig. 4, the wavelength of the ionospheric disturbance is much shorter than the length of one scene (~ 70 km) in the azimuth direction, whereas the average spacing of GEONET in Japan is 20–25 km. It is therefore hard to reproduce the detailed distribution of ionospheric disturbances in an interferogram with GNSS data, and I followed the method of Wegmüller et al. (2018) to verify the results.

Comparison of postseismic deformation with preceding inland earthquakes in Japan

I detected postseismic deformation following the 2016 Kumamoto earthquake sequence. The maximum displacement exceeded 20 cm near the junction of the Futagawa and Hinagu faults (Fig. 11d), and I observe several spots with LOS changes larger than 10 cm (Figs. 6, 7 and 8). Are these large postseismic displacements peculiar to the Kumamoto earthquake? Observations of postseismic displacements were made for previous inland earthquakes in Japan, as listed in Additional file 1: Table S1. Postseismic displacements definitely depend on the size of, and the distance from, the mainshock; therefore, I should compare events whose mainshocks are of similar size to the Kumamoto earthquake. Of course, a strict comparison is not possible because of the sparse distribution of GNSS sites around the epicenters, but it may give some insight into the characteristics of postseismic deformation. First, I compare with strike-slip events. The first example is the Kobe earthquake in 1995 (MJMA 7.3, Mw 6.9; all following Mw values are from USGS (2020)). Nakano and Hirahara (1997) reported postseismic displacements detected by campaign Global Positioning System (hereafter GPS) surveys and early GEONET. They detected about 2.5 cm of displacement until the end of 1995 at Iwaya station at the northern tip of Awaji Island, which is close to the epicenter (~ 2 km).
Hashimoto (2017) detected subsidence between two active faults along the NE extension of the source fault of the 1995 Kobe earthquake with ERS-1/2, Envisat, and ALOS/PALSAR. Its maximum was less than 1 cm/year, which is one order of magnitude smaller than that of the Kumamoto case. Sagiya et al. (2002) detected only ~ 3 cm of postseismic displacement during half a year at the station directly above the aftershock area of the 2000 Western Tottori earthquake (MJMA 7.3, Mw 6.7). In the case of the Kumamoto earthquake, the GEONET site 021071 west of the Hinagu fault recorded 8 cm of displacement during 2 years. Considering the moment magnitudes, it is reasonable that the postseismic deformation of the Kumamoto earthquake is larger than that of the Kobe and Tottori events. What about thrust events? Takahashi et al. (2005) observed postseismic displacements of 3 cm or larger during about 2 months after the 2004 Niigata Chuetsu earthquake (MJMA 6.8, Mw 6.6). For the 2007 Noto Peninsula earthquake of MJMA 6.9 (Mw 6.7), only 2 cm of displacement was observed in the campaign GPS surveys by Hashimoto et al. (2008). After the 2007 Chuetsu-oki earthquake (MJMA 6.8, Mw 6.6), Ohta et al. (2008) detected postseismic displacements of less than 2 cm at a GEONET station during ~ 50 days. Although the distance from the epicenter is larger than 15 km, the distance from the edge of the aftershock area is much shorter. Ohzono (2011) showed postseismic deformation of up to 13 cm during 800 days at a GEONET station located ~ 11 km from the epicenter of the 2008 Iwate–Miyagi Nairiku earthquake of MJMA 7.2 (Mw 6.9). Ohzono (2011) also detected ~ 11 cm of postseismic deformation at their original site 2.5 km from the epicenter. The moment magnitudes of the earthquakes other than the Iwate–Miyagi event are much smaller than that of the Kumamoto earthquake, though the observation periods are too short for a strict comparison. The Iwate–Miyagi earthquake shows as large a displacement as the Kumamoto earthquake, implying a correlation with the magnitude of the mainshock. Postseismic deformation, however, may be controlled not only by the magnitude of the mainshock, but also by the geometrical relationship between the source and the observation points, local geological conditions, the flow of groundwater, etc. These factors should be pursued in the future.

Possible correlation with geological structure

Considering these differences in postseismic deformation between the Kumamoto earthquake and other inland earthquakes in Japan, it is speculated that the Kumamoto area may have distinct characteristics. The spatial pattern of deformation and the distribution of pyroclastic flow deposits seem to be correlated with each other (Fig. 10). For example, the large LOS increase in the Aso caldera is located in the region covered with Holocene (Cenozoic Quaternary) igneous rocks. The uplift zone north of the Futagawa fault corresponds to the area of early Late and Late Pleistocene volcanic rocks. Local subsidence is distributed in a narrow zone about 10 km north of the Futagawa fault; this zone corresponds to the area of middle–late Late Pleistocene deposits (Fig. 10). These observations imply that the age of igneous and sedimentary rocks might affect the response to coseismic loading. It is important to re-examine the postseismic deformation following previous inland earthquakes from this viewpoint.

Temporal characteristics of postseismic deformation

The postseismic deformation following the 2016 Kumamoto earthquake sequence may have decayed within 2 years, though it may still continue in some areas (Fig. 11c).
Although the observation periods are short for the other inland earthquakes discussed above, they may have decayed with short time constants as well. It is noteworthy that the LOS changes during the first epoch or two are dominant in the whole time series and cannot be fully explained by a simple exponential decay (e.g., d(t) = a(1 − exp(−t/τ)) with a single time constant τ). A possible cause of deformation with short time constants is poroelastic rebound or the movement of groundwater. As Hosono et al. (2018) reported, the water level dropped rapidly in the lake near the Suizenji fault, suggesting a fast flow of groundwater. I also found deformations that arose with a delay, such as the concave pattern in Fig. 11g and the acceleration of motion on the southeastern side of the Futagawa fault in Fig. 11m. The former is the westward motion on the north side of the Futagawa fault in Fig. 9a. These delayed onsets of deformation might not be related to afterslip. Recently, Hosono et al. (2020) proposed a model of groundwater flow in this area. They performed a hydrogeochemical study of groundwater and suggested that precipitated water came down through surface ruptures on the western flank of the Aso caldera and flowed toward the Kumamoto Plain. They also implied a rise of the water level on the north side of the Futagawa fault and in the Kumamoto Plain. The uplift detected in the present study might be related to such a phenomenon.

Deformation in and around Aso Caldera

There are other issues to be solved in future work. For example, the uplift and westward motion on the western flank of the Aso caldera cannot be explained by afterslip on the Futagawa or Idenokuchi faults (Fig. 10). At present, I would like to rule out the possibilities of magma intrusion and large-scale landslides. This area is about 10 km away from the central cones, so magmatic activity there is difficult to accept. As shown in the preceding section, the flow of groundwater may be one candidate. A large-scale landslide is unlikely to be a candidate either, because uplift is dominant. The InSAR technique, however, has little sensitivity to displacement in the N–S direction, so there is a possibility that movement occurred dominantly in the N–S direction. It may be a good idea to incorporate images acquired with different incidence angles and directions, which would help resolve three-dimensional displacements.

Conclusions

I processed ALOS-2/PALSAR-2 images acquired after the 2016 Kumamoto earthquake sequence with correction of ionospheric disturbances and revealed the spatio-temporal variation in LOS changes during 2 years. I draw the following conclusions:

L-band SAR provides coherence high enough to reveal surface deformation even in vegetated or mountainous areas for pairs of images acquired more than 2 years apart. Ionospheric disturbances are seen in both the ascending and descending images, but their spatial characteristics may differ from each other.

Notable features of the postseismic deformation are as follows: deformation exceeds 10 cm during the 2 years at some spots in and around Kumamoto City and the Aso caldera. Westward motion of ~ 6 cm/year was dominant on the southeast side of the Hinagu fault, while a westward shift was detected on both sides of the Futagawa fault. The area of this westward motion is spatially correlated with the distribution of pyroclastic flow deposits. Significant uplift of 4 cm/year was found around the eastern Futagawa fault and on the southwestern flank of the Aso caldera. Sharp changes were found across several coseismic surface ruptures. Rapid subsidence between the Futagawa and Idenokuchi faults was also detected.
Local subsidence continued along the Suizenji fault, which newly appeared during the mainshock in Kumamoto City. Subsidence with a westward shift of up to 4 cm/year was also found in the Aso caldera.

The time constant of postseismic decay ranges from 1 month to 600 days at selected points, but the postseismic deformation during the first epoch or two is dominant at points in the Kumamoto Plain.

A trial inversion of afterslip on possible faults showed that the westward motion around the Hinagu fault may be explained by right-lateral afterslip on the shallow part of this fault. The subsidence along the Suizenji fault can be attributed to normal faulting on a westward-dipping fault. The deformation around the Hinagu and Idenokuchi faults, however, cannot be explained by right-lateral afterslip on the Futagawa fault. The deformation in the northern part of the Aso caldera might be the result of right-lateral afterslip on a possible buried fault. Other factors, such as the effects of groundwater, geological structure, etc., must be incorporated to fully understand the observed deformation in the future.

Availability of data and materials

Results of the analyses, except the original SAR images, will be provided upon request. They will be posted on an appropriate repository such as KURENAI.

Abbreviations

ALOS-2: Advanced Land Observing Satellite 2
PALSAR-2: Phased Array type L-band Synthetic Aperture Radar 2
SAR: Synthetic Aperture Radar
InSAR: SAR Interferometry
AIST: Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology
GSI: Geospatial Information Authority of Japan
GNSS: Global Navigation Satellite System
GEONET: GSI's Earth Observation Network
JAXA: Japan Aerospace Exploration Agency
ERS: European Remote Sensing satellite
ASTER-GDEM: Advanced Spaceborne Thermal Emission and Reflection radiometer–Global Digital Elevation Model
MSTID: Medium-Scale Travelling Ionospheric Disturbances
LOS: Line of Sight
LiCSBAS: Looking Inside the Continents from Space + Small BAseline Subset
JMA: Japan Meteorological Agency
ABIC: Akaike Bayesian Information Criterion
TEC: Total Electron Content
Envisat: Environmental Satellite
PIXEL: PALSAR Interferometry Consortium to Study our Evolving Land surface
EQ-SAR WG: Earthquake SAR analysis Working Group
GMT: Generic Mapping Tools

References

Asano K, Iwata T (2016) Source rupture processes of the foreshock and mainshock in the 2016 Kumamoto earthquake sequence estimated from the kinematic waveform inversion of strong motion data. Earth Planets Space 68:147. https://doi.org/10.1186/s40623-016-0519-9
Chen G, Zhou C, Liu Y, Zhao J, Tang Q, Wang X, Zhao Z (2019) A statistical analysis of medium-scale traveling ionospheric disturbances during 2014–2017 using the Hong Kong CORS network. Earth Planets Space 71:52. https://doi.org/10.1186/s40623-019-1031-9
Fujiwara S, Yarai H, Kobayashi T, Morishita Y, Nakano T, Miyahara B, Nakai H, Miura Y, Ueshiba H, Kakiage Y, Une H (2016) Small-displacement linear surface ruptures of the 2016 Kumamoto earthquake sequence detected by ALOS-2 SAR interferometry. Earth Planets Space 68:160. https://doi.org/10.1186/s40623-016-0534-x
Fujiwara S, Morishita Y, Nakano T, Kobayashi T, Yarai H (2017) Non-tectonic liquefaction-induced large surface displacements in the Aso Valley, Japan, caused by the 2016 Kumamoto earthquake, revealed by ALOS-2 SAR. Earth Planet Sci Lett 474:457–465. https://doi.org/10.1016/j.epsl.2017.07.001
Fukahata Y, Wright T (2008) A non-linear geodetic data inversion using ABIC for slip distribution on a fault with an unknown dip angle.
Geophys J Int 173(2):353–364
Fukahata Y, Hashimoto M (2016) Simultaneous estimation of the dip angles and slip distribution on the faults of the 2016 Kumamoto earthquake through a weak nonlinear inversion of InSAR data. Earth Planets Space 68:204. https://doi.org/10.1186/s40623-016-0580-4
Furuya M, Suzuki T, Maeda J, Heki K (2017) Midlatitude sporadic-E episodes viewed by L-band split-spectrum InSAR. Earth Planets Space 69:175. https://doi.org/10.1186/s40623-017-0764-6
Geological Survey of Japan, AIST (ed) (2005) Active fault database, Oct 4, 2016 version. Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology. https://gbank.gsj.jp/activefault/index_e.html
Geological Survey of Japan, AIST (ed) (2015) Seamless digital geological map of Japan 1:200,000, May 29, 2015 version. Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology. https://gbank.gsj.jp/seamless/index.html?lang=ja&
Geological Survey of Japan, AIST (2017) Surveys of detailed location and configuration to reveal behavioral segments of active faults and study of the history of their activity and slip rate. In: Report of the Integrated Studies of Active Fault based on the 2016 Kumamoto Earthquake, edited by the Research and Development Bureau, MEXT, and Kyushu University, pp 5–185. https://www.jishin.go.jp/main/chousakenkyuu/kumamoto_sogochousa/h28/h28kumamoto_sogochousa_3_1.pdf (in Japanese)
Gomba G, Parizzi A, De Zan F, Eineder M, Bamler R (2016) Toward operational compensation of ionospheric effects in SAR interferograms: the split-spectrum method. IEEE Trans Geosci Remote Sens 54:1446–1461
Goto H, Tsutsumi H, Toda S, Kumahara Y (2017) Geomorphic features of surface ruptures associated with the 2016 Kumamoto earthquake in and around the downtown of Kumamoto City, and implications on triggered slip along active faults. Earth Planets Space 69:26. https://doi.org/10.1186/s40623-017-0603-9
Hashimoto M (2017) Ground deformation in the Kyoto and Osaka area during recent 19 years detected with InSAR. In: Hashimoto M (ed) International Symposium on Geodesy for Earthquake and Natural Hazards (GENAH), International Association of Geodesy Symposia 145. Springer, pp 155–164. https://doi.org/10.1007/1345_2016_222
Hashimoto M, Takahashi H, Doke R, Kasahara M, Takeuchi A, Onoue K, Hoso Y, Fukushima Y, Nakamura K, Ohya F, Honda R, Ichiyanagi M, Yamaguchi T, Maeda T, Hiramatsu Y (2008) Postseismic displacements following the 2007 Noto Peninsula earthquake detected by dense GPS observation. Earth Planets Space 60:139–144. https://doi.org/10.1186/bf03352775
Himematsu Y, Furuya M (2016) Fault source model for the 2016 Kumamoto earthquake sequence based on ALOS-2/PALSAR-2 pixel-offset data: evidence for dynamic slip partitioning. Earth Planets Space 68:169. https://doi.org/10.1186/s40623-016-0545-7. Erratum in: Earth Planets Space 68:196
Hosono T, Masaki Y (2020) Post-seismic hydrochemical changes in regional groundwater flow systems in response to the 2016 Mw 7.0 Kumamoto earthquake. J Hydrol 580:124340. https://doi.org/10.1016/j.jhydrol.2019.124340
Hosono T, Yamada C, Shibata T, Tawara Y, Shimada J (2018)
Coseismic change in groundwater level after the 2016 Kumamoto earthquake. Presented at the 2018 General Assembly of Japan Geoscience Union, AHW24-04. https://confit.atlas.jp/guide/event-img/jpgu2018/AHW24-04/public/pdf?type=in&lang=ja (in Japanese)
Hosono T, Yamada C, Manga M, Wang CY, Tanimizu M (2020) Stable isotopes show that earthquakes enhance permeability and release water from mountains. Nat Commun 11(2776):1–9. https://doi.org/10.1038/s41467-020-16604-y
Japan Meteorological Agency (2016) Volcanic activity of Aso Volcano in 2016. http://www.data.jma.go.jp/svd/vois/data/tokyo/STOCK/monthly_v-act_doc/fukuoka/2016y/503_16y.pdf (Accessed on July 24, 2020)
Japan Meteorological Agency (2020) Past meteorological data, download. https://www.data.jma.go.jp/gmd/risk/obsdl/index.php (Accessed on March 29, 2020)
Kobayashi T (2017) Earthquake rupture properties of the 2016 Kumamoto earthquake foreshocks revealed by conventional and multiple-aperture InSAR. Earth Planets Space 69:7. https://doi.org/10.1186/s40623-016-0594-y
Kumahara Y, Goto H, Nakata T, Ishiguro S, Ishimura D, Ishiyama T, Okada S, Kagohara K, Kashihara S, Kaneda H, Sugito N, Suzuki Y, Takenami D, Tanaka K, Tanaka T, Tsutsumi H, Toda S, Hirouchi D, Matsuta N, Moriki H, Yoshida H, Watanabe M (2016) Distribution of surface rupture associated with the 2016 Kumamoto earthquake and its significance. Japan Geoscience Union Meeting 2016, MIS34-05
Meneses-Gutierrez A, Nishimura T, Hashimoto M (2019) Coseismic and postseismic deformation of the 2016 Central Tottori earthquake and its slip model. J Geophys Res Solid Earth 124:2202–2217. https://doi.org/10.1029/2018jb016105
Moore JDP, Yu H, Tang C-H, Wang T, Barbot S, Peng D, Masuti S, Dauwels J, Hsu Y-J, Lambert V, Nanjundiah P, Wei S, Lindsey E, Feng L, Shibazaki B (2017) Imaging the distribution of transient viscosity after the 2016 Mw 7.1 Kumamoto earthquake. Science 356(6334):163–167. https://doi.org/10.1126/science.aal3422
Morishita Y, Lazecky M, Wright TJ, Weiss JR, Elliott JR, Hooper A (2020) LiCSBAS: an open source InSAR time series analysis package integrated with the LiCSAR automated Sentinel-1 InSAR processor. Remote Sens 12:424. https://doi.org/10.3390/rs12030424
Nakano T, Hirahara K (1997) GPS observations of postseismic deformation for the 1995 Kobe earthquake, Japan. Geophys Res Lett 24(5):503–506. https://doi.org/10.1029/97gl00375
Nishimura S, Hashimoto M (2006) A model with rigid rotations and slip deficits for the GPS-derived velocity field in Southwest Japan. Tectonophysics 421:187–207. https://doi.org/10.1016/j.tecto.2006.04.017
Ohta Y, Miura S, Iinuma T, Tachibana K, Matsushima T, Takahashi H, Sagiya T, Ito T, Miyazaki S, Doke R, Takeuchi A, Miyao K, Hirao A, Maeda T, Yamaguchi T, Takada M, Iwakuni M, Ochi T, Meilano I, Hasegawa A (2008) Coseismic and postseismic deformation related to the 2007 Chuetsu-oki, Niigata earthquake. Earth Planets Space 60:1081–1086
Ohzono M (2011) Deformation process around the high-strain rate zone along the Ou-backbone range, northeastern Japan, based on geodetic data (in Japanese). Ph.D. Dissertation, Tohoku University, March 2011
Ohzono M, Ohta Y, Iinuma T, Miura S, Muto J (2012) Geodetic evidence of viscoelastic relaxation after the 2008 Iwate-Miyagi Nairiku earthquake. Earth Planets Space 64:759–764. https://doi.org/10.5047/eps.2012.04.001
Ono K, Watanabe K (1985) Geological map of Aso Volcano. Geological Map of Volcanoes. Geol Surv
Jpn, Tsukuba
Ozawa T, Fujita E, Ueda H (2016) Crustal deformation associated with the 2016 Kumamoto earthquake and its effect on the magma system of Aso volcano. Earth Planets Space 68:186. https://doi.org/10.1186/s40623-016-0563-5
Peltzer G, Rosen P, Rogez F, Hudnut K (1996) Postseismic rebound in fault step-overs caused by pore fluid flow. Science 273:1202–1204. https://doi.org/10.1126/science.273.5279.1202
Rohatgi A (2020) WebPlotDigitizer version 4.3. https://automeris.io/WebPlotDigitizer/ (accessed on July 23, 2020)
Sagiya T, Nishimura T, Hatanaka Y, Fukuyama E, Ellsworth WL (2002) Crustal movements associated with the 2000 western Tottori earthquake and its fault model. J Seismol Soc Jpn 54:523–534. https://doi.org/10.4294/zisin1948.54.4_523 (in Japanese with English abstract)
Sagiya T, Ohzono M, Nishiwaki S, Ohta Y, Yamamurao T, Kimata F, Sasaki M (2005) Postseismic deformation following the 2004 mid-Niigata prefecture earthquake around the southern part of the source region. J Seismol Soc Jpn 58:359–369. https://doi.org/10.4294/zisin1948.58.3_359 (in Japanese with English abstract)
Saito A, Fukao S, Miyazaki S (1998) High resolution mapping of TEC perturbations with the GSI GPS network over Japan. Geophys Res Lett 25(16):3079–3082
Shirahama Y, Yoshimi M, Awata Y, Maruyama T, Azuma T, Miyashita Y, Mori H, Imanishi K, Takeda N, Ochi T, Otsubo M, Asahina D, Miyakawa A (2016) Characteristics of the surface ruptures associated with the 2016 Kumamoto earthquake sequence, central Kyushu, Japan. Earth Planets Space 68:191. https://doi.org/10.1186/s40623-016-0559-1
Tachikawa T, Hato M, Kaku M, Iwasaki A (2011) The characteristics of ASTER GDEM version 2. IGARSS, July 2011
Tada T (1984) Spreading of the Okinawa Trough and its relation to the crustal deformation in Kyushu. J Seismol Soc Jpn 37:407–415 (in Japanese with English abstract)
Takahashi H, Matsushima T, Kato T, Takeuchi A, Yamaguchi T, Kohno Y, Katagi T, Fukuda J, Hatamoto K, Doke R, Matsu'ura Y, Kasahara M (2005) A dense GPS observation immediately after the 2004 mid-Niigata Prefecture earthquake. Earth Planets Space 57:661–665
Toda S, Kaneda H, Okada S, Ishimura D, Mildon ZK (2016) Slip-partitioned surface ruptures for the Mw 7.0 2016 Kumamoto, Japan, earthquake. Earth Planets Space 68:188. https://doi.org/10.1186/s40623-016-0560-8
Tsuji T, Ishibashi J, Ishitsuka K, Kamata R (2017) Horizontal sliding of kilometre-scale hot spring area during the 2016 Kumamoto earthquake. Sci Rep 7:42947. https://doi.org/10.1038/srep42947
USGS (2020) Significant Earthquake Archive, Earthquake Hazard Program. https://earthquake.usgs.gov/earthquakes/browse/significant.php. Accessed 23 July 2020
Wegmüller U, Werner C (1997) Gamma SAR processor and interferometry software. In: Proceedings of the 3rd ERS Symposium, Space at the Service of our Environment (ESA Spec Publ 414), vol 3, pp 1687–1692, Florence, Italy, 1997
Wegmüller U, Werner C, Frey O, Magnard C, Strozzi T (2018) Reformulating the split-spectrum method to facilitate the estimation and compensation of the ionospheric phase in SAR interferograms. Procedia Comput Sci 138:318–325
Wessel P, Smith WHF, Scharroo R, Luis J, Wobbe F (2013) Generic Mapping Tools: improved version released. EOS Trans AGU 94(45):409–410.
https://doi.org/10.1002/2013eo450001

Acknowledgements

ALOS-2/PALSAR-2 images used in this study were provided by the Japan Aerospace Exploration Agency through the activities of PIXEL under the Joint Use/Joint Research Program of the Earthquake Research Institute, University of Tokyo, and the Earthquake SAR Analysis Working Group of the Geospatial Information Authority of Japan. Ownership and copyright of the ALOS-2/PALSAR-2 images belong to JAXA. GEONET coordinates were provided by GSI. I thank the two anonymous reviewers for their helpful comments, which improved the manuscript. I am grateful to the members of PIXEL and the EQ-SAR WG, GSI, and JAXA. I used scripts developed by Dr. Haruo Horikawa to prepare geological maps from the Seamless Geological Map of AIST with GMT. Dr. Yukitoshi Fukahata permitted me to use his inversion code with two faults. Dr. Mako Ohzono provided data on the postseismic deformation of the Iwate–Miyagi earthquake. Dr. Shinji Toda provided digital data of the coseismic surface ruptures. Dr. Yu Morishita introduced me to LiCSBAS. I thank all of them. Illustrations were prepared with Generic Mapping Tools versions 4 and 5.4.3 (Wessel et al. 2013). This research was supported by an operating expense grant to Kyoto University.

Author information

Disaster Prevention Research Institute, Kyoto University, Gokasho, Uji, Kyoto, 611-0011, Japan
Manabu Hashimoto
The author performed all the analyses of the SAR images, the inversions of slip distributions, the interpretation, and the preparation of the manuscript. The author read and approved the final manuscript. Correspondence to Manabu Hashimoto. There are no competing interests.

Supplementary information

Additional file 1: Supplementary Table S1 and Figures S1–S3. Additional file 2: Supplementary Figures S4–S7. Additional file 3: Supplementary Figures S8–S11.

Cite this article: Hashimoto, M. Postseismic deformation following the 2016 Kumamoto earthquake detected by ALOS-2/PALSAR-2. Earth Planets Space 72, 154 (2020). https://doi.org/10.1186/s40623-020-01285-0

Keywords: Kumamoto earthquake; Postseismic deformation; ALOS-2/PALSAR-2; Ionospheric correction
6. Geodesy
L-band Synthetic Aperture Radar: Current and future applications to Earth sciences
CommonCrawl
\begin{document} \title{Comparative study of fair division algorithms} \begin{abstract} \centering A comparison of four fair division algorithms on real data from the spliddit website. The comparison is based on the sum of the agents' utilities and on the minimum utility received by an agent in an allocation.\end{abstract} \section{Introduction} A fair division algorithm is an algorithm that divides a set of resources among several people who have an entitlement to them, so that each person receives their due share. The algorithm takes into account the utility of the items to each person. This paper compares fair division algorithms in terms of the minimum utility and the sum of all agents' utilities. \section{Method} We implemented the three-quarters MMS allocation algorithm from the article ``An Improved Approximation Algorithm for Maximin Shares'' by Jugal Garg and Setareh Taki \cite{approxMMS} in the fairpy open-source Python library \footnote{\href{https://github.com/erelsgl/fairpy/blob/master/fairpy/items/approximation_maximin_share.py}{https://github.com/erelsgl/fairpy/blob/master/fairpy/items/approximation\_maximin\_share.py}}. We compared the performance of this algorithm to three other fair division algorithms implemented in fairpy: max-sum allocation, leximin \cite{wilson1998fair}, and PROPm \cite{baklanov2021propm}. \newline We conducted a comparison over 730 instances of item division. The data was collected from the spliddit website \cite{goldman2015spliddit} and was kindly shared with us by Nisarg Shah; it contains 730 instances of items and their valuations by various agents. \newline \newline We compared the algorithms in two ways: \begin{itemize} \item the minimum utility an agent receives; \item the sum of the utilities of all agents. \end{itemize} The comparison was made by averaging the results separately for each number of agents: the average sum of utilities was calculated over instances with one agent, two agents, and so on, and similarly, the minimum utility of an agent in an allocation was calculated separately for each number of agents. \newline \newline Since the three-quarters MMS allocation algorithm stops once each agent receives at least three-quarters of their MMS value, it does not allocate all items. This can cause a significant gap in the performance measurement, so to address it we added a division of the remaining items as follows: start from the last agent, give them an item they value at more than 0 (if any), and continue distributing the items to the other agents until the items run out. We start from the last agent because the three-quarters MMS allocation algorithm takes care of the first agents first \footnote{\href{https://github.com/erelsgl/fairpy/blob/master/experiments/compare_algorithms.py}{https://github.com/erelsgl/fairpy/blob/master/experiments/compare\_algorithms.py}}. \section{Comparative study} \begin{figure} \caption{\centering The average sum of agents' utilities, for each number of agents} \end{figure} \begin{figure} \caption{\centering The average of the minimum utility for an agent per allocation, for each number of agents} \end{figure} \begin{figure} \caption{\centering The average sum of agents' utilities} \end{figure} \begin{figure} \caption{\centering The average of the minimum utility for an agent per allocation} \end{figure} \section{Results} As might be expected, for the sum of utilities, the algorithm that maximizes the sum of the utilities returns a significantly better result, and the rest of the algorithms are close in terms of performance.
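For reference, the two comparison metrics can be computed from an allocation as in the following minimal Python sketch (the \texttt{utilities} and \texttt{allocation} data layout is an illustrative assumption, not the exact interface of fairpy):
\begin{verbatim}
# Assumed layout: utilities[i][j] = value of item j to agent i;
# allocation[i] = list of item indices assigned to agent i.

def agent_utilities(utilities, allocation):
    # Utility each agent derives from its own bundle.
    return [sum(utilities[i][j] for j in bundle)
            for i, bundle in enumerate(allocation)]

def sum_of_utilities(utilities, allocation):
    return sum(agent_utilities(utilities, allocation))

def minimum_utility(utilities, allocation):
    return min(agent_utilities(utilities, allocation))

# Example: two agents, three items.
utilities = [[8, 1, 3], [2, 6, 4]]
allocation = [[0], [1, 2]]
print(sum_of_utilities(utilities, allocation))  # 18
print(minimum_utility(utilities, allocation))   # 8
\end{verbatim}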
\newline Similarly, for the minimum utility, leximin performs significantly better, which makes sense: leximin allows items to be divided into fractions, so the final utility each agent receives can be higher (e.g., if there are two people interested in only one item, both will receive some utility), whereas if items are distributed only in their entirety, the minimum utility is significantly harmed. \section{Conclusions} PROPm and the three-quarters MMS allocation algorithms are close in performance. Although the two are based on different calculation methods and try to ensure different fairness conditions, in the end both return very similar results. \section{Acknowledgments} We would like to thank Jugal Garg and Setareh Taki, the authors of the article we implemented, who helped us understand the article. In addition, we would like to thank Nisarg Shah, who shared real data from the spliddit website with us and allowed us to use it to perform the experiments. \end{document}
arXiv
Four horizontal lines and four vertical lines are drawn in a plane. In how many ways can four lines be chosen such that a rectangular region is enclosed? In order for the four lines to enclose a rectangular region, we must choose two horizontal and two vertical lines. If we were to choose more than two lines of one type, we would have fewer than two of the other type and could not enclose any region. We can count the number of ways to choose the horizontal lines and the vertical lines independently. There are $\dbinom{4}{2}=6$ ways to choose two horizontal lines, and the same number of ways to choose two vertical lines. Since these choices are independent, there are a total of $6\cdot 6=\boxed{36}$ ways to choose four lines that enclose a rectangle.
Math Dataset
\begin{document} \pagestyle{fancy} \fancyhead{} \maketitle \section{Introduction} \label{sec:intro} Deep Reinforcement Learning (RL)~\citep{sutton1999policy,silver2014deterministic,schulman2015trust} methods have demonstrated impressive performance in continuous control~\citep{lillicrap2015continuous} and robotics~\citep{levine2016end}. However, a broader application of these methods in real-world domains is impeded by the challenges in designing a proper reward function~\citep{schaal1999imitation,amodei2016concrete,everitt2016avoiding}. Imitation Learning (IL) algorithms~\citep{ng2000algorithms,ziebart2008maximum,ho2016generative} address this issue by replacing reward functions with expert demonstrations, which are easier to collect in most scenarios. However, despite the success of IL algorithms, they typically impose the somewhat unrealistic requirement that the state-action demonstrations must be collected from the same environment as the one in which the imitator is trained. In this work, we focus on a more realistic setting for imitation learning, where: \begin{enumerate}[leftmargin=*] \item the expert demonstrations collected from the real (deployment) environment by executing an expert policy only contain states, \item the learner is trained in a simulation (training) environment, and does not have access to the real environment during the training phase beyond the batch of demonstrations given, and \item the simulation environment does not model the real environment exactly, i.e., there exists a transition dynamics mismatch between these environments. \end{enumerate} The learned policy under the above setting is transferred to the real environment, on which its final performance is evaluated. Existing IL methods either do not apply under the above setting or result in poor transfer performance. \looseness-1A large body of work in IL, such as Generative Adversarial Imitation Learning (GAIL~\citep{ho2016generative}) and its variants, has focused on the setting with demonstrations that contain both states and actions, which are difficult to obtain in real-world settings such as learning from videos~\citep{handa2020dexpilot}. Further, closely following the state-action demonstrations limits the ability to generalize across environments~\citep{radosavovic2020state}. Training agents in simulation environments not only provides data at low cost, but also alleviates safety concerns related to the trial-and-error process with real robots. However, building a high-fidelity simulator that perfectly models the real environment would require a large computational budget. Low-fidelity simulations are feasible, due to their speed, but the gap between the simulated and real environments degrades the performance of the policies when transferred to real robots~\citep{zhao2020sim}. To this end, we consider the following research question: \emph{how to train an imitator policy in an offline manner with state-only expert demonstrations and a misspecified simulator such that the policy performs well in the real environment?} \begin{table*}[h] \caption{Comparison of our method with the existing imitation learning methods that also consider dynamics mismatch. However, the existing methods do not fit under the specific setting that we study. The expert, training, and deployment environments are denoted by $M^\mathrm{exp}$, $M^\mathrm{tr}$, and $M^\mathrm{dep}$, respectively. The corresponding transition dynamics are denoted by $T^\mathrm{exp}$, $T^\mathrm{tr}$, and $T^\mathrm{dep}$, respectively.
Note that the expert demonstrations are collected from $M^\mathrm{exp}$, the imitation learning agent is trained on $M^\mathrm{tr}$, and the trained policy is finally evaluated on $M^\mathrm{dep}$. Our Robust-GAILfO method has minimal access to $M^{\mathrm{dep}}$ to select an appropriate $\alpha$. Note that our robust GAILfO method is applicable in both: (i) the $T^{\mathrm{dep}} = T^{\mathrm{exp}} \neq T^{\mathrm{tr}}$ setting, and (ii) the $T^{\mathrm{dep}} \neq T^{\mathrm{exp}} \neq T^{\mathrm{tr}}$ setting. In setting (i), our primary motivation is that accessing the deployment environment is costly, e.g., interacting with a remote deployment environment is costly due to communication constraints. In setting (ii), after deployment, the agent has to be robust against potential environmental changes during test time.} \label{table:related-work} \begin{tabular}{llll}\toprule \textit{IL Methods} & \textit{Type of Demonstrations} & \textit{Access to $M^{\mathrm{dep}}$ during training} & \textit{Dynamics mismatch} \\ \midrule GAIL~\citep{ho2016generative} & state-action & yes & $T^{\mathrm{exp}} = T^{\mathrm{tr}} = T^{\mathrm{dep}}$ \\ GAILfO~\citep{torabi2018generative} & state-only & yes & $T^{\mathrm{exp}} = T^{\mathrm{tr}} = T^{\mathrm{dep}}$ \\ AIRL~\citep{fu2017learning} & state-action & yes & $T^{\mathrm{exp}} = T^{\mathrm{tr}} \neq T^{\mathrm{dep}}$ \\ I2L~\citep{gangwani2020stateonly} & state-only & yes & $T^{\mathrm{exp}} \neq T^{\mathrm{tr}} = T^{\mathrm{dep}}$ \\ SAIL~\citep{liu2019state} & state-only & yes & $T^{\mathrm{exp}} \neq T^{\mathrm{tr}} = T^{\mathrm{dep}}$ \\ GARAT~\citep{desai2020imitation} & state-only & yes & $T^{\mathrm{dep}} = T^{\mathrm{exp}} \neq T^{\mathrm{tr}}$ \\ HIDIL~\citep{jiang2020offline} & state-action & no & $T^{\mathrm{dep}} = T^{\mathrm{exp}} \neq T^{\mathrm{tr}}$ \\ IDDM~\citep{yang2019imitation} & state-only & yes & $T^{\mathrm{exp}} = T^{\mathrm{tr}} = T^{\mathrm{dep}}$ \\ ILPO~\citep{edwards2019imitating} & state-only & yes & $T^{\mathrm{exp}} = T^{\mathrm{tr}} = T^{\mathrm{dep}}$ \\ Robust-GAILfO (ours) & state-only & no & $T^{\mathrm{dep}} = T^{\mathrm{exp}} \neq T^{\mathrm{tr}}$ and $T^{\mathrm{dep}} \neq T^{\mathrm{exp}} \neq T^{\mathrm{tr}}$ \\ \bottomrule \end{tabular} \end{table*} \looseness-1The Adversarial Inverse Reinforcement Learning (AIRL) method from~\citep{fu2017learning} recovers reward functions that can be used to transfer behaviors across changes in dynamics. However, one needs to retrain a policy in the deployment environment with the recovered reward function, whereas we consider a zero-shot transfer setting. In addition, AIRL depends on state-action demonstrations. Recently,~\citep{gangwani2020stateonly,liu2019state} have studied the imitation learning problem under a transition dynamics mismatch between the expert and the learner environments. However, they do not aim to learn policies that are transferable to the expert (real) environment; instead, they optimize the performance in the learner (simulation) environment. In~\citep{desai2020imitation}, the authors attempt to bring the simulation environment closer to the real environment by interacting with the real environment during the training phase. A setting very close to ours is considered in~\citep{jiang2020offline}; their method involves learning an inverse dynamics model of the real environment based on the state-action expert demonstrations. None of these methods are directly applicable under our setting (see Table~\ref{table:related-work}).
\looseness-1We propose a robust IL method for learning policies, under the above-discussed setting, that can be effectively transferred to the real environment without further fine-tuning during deployment. Our method is built upon the robust RL literature~\citep{iyengar2005robust,nilim2005robust,pinto2017robust,tessler2019action} and the IL literature inspired by GAN-based adversarial learning~\citep{ho2016generative,torabi2018generative}. In particular, our algorithm is a robust variant of the Generative Adversarial Imitation Learning from Observation (GAILfO~\citep{torabi2018generative}) algorithm, a state-only IL method based on GAIL. We discuss how our method addresses the dynamics mismatch issue by exploiting the equivalence between the robust MDP formulation and the two-player Markov game~\citep{pinto2017robust,tessler2019action}. In the finite MDP setting,~\citep{viano2020robust} have proposed a robust inverse reinforcement learning method to address the transition dynamics mismatch between the expert and the learner. Our Markov game formulation in Section~\ref{sec:robust-lfo-mg} closely follows that of~\citep{viano2020robust}, and in Section~\ref{sec:robust-gailfo-full}, we scale it to the high-dimensional continuous control setting using techniques from the GAIL literature. On the empirical side, we are interested in the sim-to-real transfer performance, whereas~\citep{viano2020robust} have considered the performance in the learner environment itself. \looseness-1We evaluate the efficacy of our method on the continuous control MuJoCo environments. In our experiments, we consider different sources of dynamics mismatch, such as joint friction and agent mass. An expert policy is trained under the default dynamics (acting as the real environment). The imitator policy is learned under modified dynamics (acting as the simulation environment), where one of the mass or friction configurations is changed. The experimental results show that, with an appropriate choice of the level of adversarial perturbation, the IL policies trained robustly in the simulator transfer to the real environment more successfully than those of standard GAILfO. We also empirically show that the policies learned by our method are robust to environmental shift during testing. \section{Related Work} \label{sec:relatedwork} \paragraph{Imitation Learning} \looseness-1Ho and Ermon~\citep{ho2016generative} propose a framework, called Generative Adversarial Imitation Learning (GAIL), for directly extracting a policy from trajectories without recovering a reward function as an intermediate step. GAIL utilizes a discriminator to distinguish between the state-action pairs induced by the expert and the learner policy. GAIL was further extended by Fu et al.~\citep{fu2017learning} to produce a scalable inverse reinforcement learning algorithm based on adversarial reward learning. This approach gives a policy as well as a reward function. Our work is closely related to the state-only IL methods that do not require actions in the expert demonstrations~\citep{torabi2018generative,yang2019imitation}. Inspired by GAIL,~\citep{torabi2018generative} have proposed the Generative Adversarial Imitation Learning from Observation (GAILfO) algorithm for state-only IL. GAILfO tries to minimize the divergence between the state transition occupancy measures of the learner and the expert.
\paragraph{Robust Reinforcement Learning} \looseness-1In the robust MDP formulation~\citep{iyengar2005robust,nilim2005robust}, the policy is evaluated by the worst-case performance in a class of MDPs centered around a reference environment. In the context of forward RL, some works build on the robust MDP framework, such as~\citep{rajeswaran2016epopt,peng2018sim,mankowitz2019robust}. However, our work is closer to the line of work that leverages the equivalence between action-robust and robust MDPs. In~\citep{morimoto2005robust}, the authors introduced the notion of worst-case disturbance from the $H_\infty$-control literature into the reinforcement learning paradigm. They consider an adversarial game where an adversary tries to make the worst possible disturbance while an agent tries to make the best control input. Recent literature in RL has proposed a range of robust algorithms based on this game-theoretic perspective~\citep{doyle2013feedback,pinto2017robust,tessler2019action,kamalaruban2020robust}. \section{Problem Setup and Background} \label{sec:Setup} This section formalizes the learning from observation (LfO) problem with model misspecification. \paragraph{Environment and Policy} The environment is formally represented by a Markov decision process (MDP) $M_c := \br{\mathcal{S},\mathcal{A},T,\gamma,P_0,c}$. The state and action spaces are denoted by $\mathcal{S}$ and $\mathcal{A}$, respectively. $T: \mathcal{S} \times \mathcal{S} \times \mathcal{A} \rightarrow \bs{0,1}$ captures the state transition dynamics, i.e., $T\br{s' \mid s,a}$ denotes the probability of landing in state $s'$ by taking action $a$ from state $s$. Here, $c: \mathcal{S} \times \mathcal{S} \to \mathbb{R}$ is the cost function, $\gamma \in \br{0,1}$ is the discounting factor, and $P_0: \mathcal{S} \rightarrow \bs{0,1}$ is an initial distribution over the state space $\mathcal{S}$. We denote an MDP without a cost function by $M = M_c \backslash c = \bc{\mathcal{S}, \mathcal{A}, T, \gamma, P_0}$. We denote a policy $\pi: \mathcal{S} \times \mathcal{A} \rightarrow \bs{0,1}$ as a mapping from a state to a probability distribution over the action space. The set of all stationary stochastic policies is denoted by $\Pi$. For any policy $\pi$ in the MDP $M$, we define the state transition occupancy measure as follows: $\rho^\pi_M \br{s,s'} := \sum_a T\br{s' \mid s,a} \cdot \pi\br{a \mid s} \cdot \sum_{t=0}^\infty \gamma^t \P{S_t = s \mid \pi, M}$. Here, $\P{S_t = s \mid \pi, M}$ denotes the probability of visiting the state $s$ after $t$ steps by following the policy $\pi$ in $M$. The total expected cost of any policy $\pi$ in the MDP $M_c$ is defined as follows: $\Eee{\rho^\pi_{M}}{c\br{s,s'}} := \E{\sum_{t = 0}^\infty \gamma^t c\br{s_t,s_{t+1}}}$, where $s_0 \sim P_0$, $a_t \sim \pi\br{\cdot | s_t}$, $s_{t+1} \sim T \br{\cdot | s_t, a_t}$. A policy $\pi$ is \emph{optimal} for the MDP $M_c$ if $\pi \in \argmin_{\pi'} \Eee{\rho^{\pi'}_{M}}{c\br{s,s'}}$, and we denote an optimal policy by $\pi^*_{M_c}$. \paragraph{Learner and Expert} \looseness-1We have two entities: an imitation learner and an expert. We consider two MDPs, $M^{\mathrm{sim}} = \bc{\mathcal{S}, \mathcal{A}, T^{\mathrm{sim}}, \gamma, P_0}$ and $M^{\mathrm{real}} = \bc{\mathcal{S}, \mathcal{A}, T^{\mathrm{real}}, \gamma, P_0}$, that differ only in the transition dynamics. The true cost function $c^*: \mathcal{S} \times \mathcal{S} \to \mathbb{R}$ is known only to the expert.
The learner is trained in the MDP $M^{\mathrm{sim}}$ and is not aware of the true cost function, i.e., it only has access to $M_{c^*}^{\mathrm{sim}} \backslash c^*$. The expert provides demonstrations to the learner by following the optimal policy $\pi^{*}_{M_{c^*}^{\mathrm{real}}}$ in the expert MDP $M^{\mathrm{real}}$. Typically, in the imitation learning literature, it is assumed that $T^{\mathrm{sim}} = T^{\mathrm{real}}$. In this work, we consider the setting where there is a transition dynamics mismatch between the learner and the expert, i.e., $T^{\mathrm{sim}} \neq T^{\mathrm{real}}$. The learner tries to recover a policy that closely matches the intention of the expert, based on the occupancy measure $\rho_E \br{s,s'} := \rho^{\pi^*_{M^{\mathrm{real}}_{c^*}}}_{M^{\mathrm{real}}}\br{s,s'}$ (or the demonstrations drawn according to it) received from the expert. The learned policy is evaluated in the expert environment w.r.t. the true cost function, i.e., $M_{c^*}^{\mathrm{real}}$. \paragraph{Imitation Learning} \looseness-1We consider the imitation learner model that matches the expert's state transition occupancy measure $\rho_E$~\citep{ziebart2008maximum,ho2016generative,torabi2018generative}. In particular, the learner policy is obtained via solving the following primal problem: \begin{align} \min_{\pi \in \Pi} \quad& - H_{\rho^\pi_{M^{\mathrm{sim}}}}\br{\pi} \label{opt_start}\\ \text{subject to} \quad& \rho^{\pi}_{M^{\mathrm{sim}}}\br{s,s'} ~=~ \rho_E \br{s,s'} , \quad \forall s,s' \in \mathcal{S} , \label{opt_end} \end{align} where $H_{\rho^\pi_{M^{\mathrm{sim}}}}\br{\pi} := \E{\sum_{t = 0}^\infty - \gamma^t \log \pi(a_t | s_t)}$ is the $\gamma$-discounted causal entropy of $\pi$. The corresponding dual problem is given by: \[ \max_{c \in \mathbb{R}^{\mathcal{S} \times \mathcal{S}}} \br{\min_{\pi \in \Pi} ~ - H_{\rho^\pi_{M^{\mathrm{sim}}}}\br{\pi} + \Eee{\rho^{\pi}_{M^{\mathrm{sim}}}}{c\br{s,s'}}} - \Eee{\rho_E}{c\br{s,s'}} , \] where the costs $c\br{s,s'}$ serve as dual variables for the equality constraints. \paragraph{Maximum Causal Entropy (MCE) Inverse Reinforcement Learning (IRL)} The MCE-IRL algorithm~\citep{ziebart2008maximum,ziebart2010modeling} involves a two-step procedure. First, it looks for a cost function $c$ that assigns low cost to the expert policy and high cost to other policies. Then, it learns a policy by solving a certain reinforcement learning problem with the found cost function. Formally, given a convex cost function regularizer\footnote{$\overline{\mathbb{R}}$ denotes the extended real numbers $\mathbb{R} \cup \bc{+ \infty}$} $\psi: \mathbb{R}^{\mathcal{S} \times \mathcal{S}} \to \overline{\mathbb{R}}$, first, we recover a cost function $\tilde{c}$ by solving the following $\psi$-regularized problem: \begin{align*} \textsc{IRL}_\psi \br{\rho_E} ~=~& \argmax_{c \in \mathbb{R}^{\mathcal{S} \times \mathcal{S}}} ~ - \psi \br{c} - \Eee{\rho_E}{c\br{s,s'}} \\ &\quad \quad \quad \quad + \br{\min_{\pi \in \Pi} ~ - \lambda H_{\rho^\pi_{M^{\mathrm{sim}}}}\br{\pi} + \Eee{\rho^{\pi}_{M^{\mathrm{sim}}}}{c\br{s,s'}}} \end{align*} Then, we input the learned cost function $\tilde{c} \in \textsc{IRL}_\psi \br{\rho_E}$ into an entropy-regularized reinforcement learning problem: \[ \textsc{RL} \br{c} ~=~ \argmin_{\pi \in \Pi} ~ - \lambda H_{\rho^\pi_{M^{\mathrm{sim}}}}\br{\pi} + \Eee{\rho^{\pi}_{M^{\mathrm{sim}}}}{c\br{s,s'}} , \] which aims to find a policy that minimizes the cost function and maximizes the entropy.
\paragraph{Generative Adversarial Imitation Learning from Observation (GAILfO)} Recently,~\citep{ho2016generative,torabi2018generative} have shown that, for a specific choice of the regularizer $\psi$, the two-step procedure $\textsc{RL} \circ \textsc{IRL}_\psi \br{\rho_E}$ of the MCE-IRL algorithm can be reduced to the following optimization problem using a GAN discriminator: \begin{align*} \min_{\pi \in \Pi} ~ \max_{D \in \br{0,1}^{\mathcal{S} \times \mathcal{S}}} & - \lambda H_{\rho^\pi_{M^{\mathrm{sim}}}}\br{\pi} + \Eee{\rho^{\pi}_{M^{\mathrm{sim}}}}{\log{D\br{s,s'}}} \\ & \quad \quad \quad \quad \quad \quad \quad + \Eee{\rho_E}{\log\br{1-D\br{s,s'}}} , \end{align*} where $D:\mathcal{S} \times \mathcal{S} \to (0,1)$ is a classifier trained to discriminate between the state--next-state pairs that arise from the expert and the imitator. Excluding the entropy term, the above loss function is similar to the loss of generative adversarial networks~\citep{goodfellow2014generative}. Even though the occupancy measure matching methods were shown to scale well to high-dimensional problems, they are not robust against dynamics mismatch~\citep{gangwani2020stateonly}. \section{Robust Learning from Observation via Markov Game} \label{sec:robust_il_algo} \subsection{Markov Game} \label{sec:robust-lfo-mg} In this section, we focus on recovering a learner policy via the imitation learning framework in a robust manner, under the setting described in Section~\ref{sec:intro}. To this end, we consider a class of transition matrices that contains both $T^\mathrm{sim}$ and $T^\mathrm{real}$. In particular, for a given $\alpha > 0$, we define the class $\mathcal{T}^{\alpha}$ as follows: \begin{equation} \mathcal{T}^{\alpha} := \bc{\alpha T^{\mathrm{sim}} (s' | s,a) + \bar{\alpha} \sum_b \pi (b | s) \cdot T^{\mathrm{sim}} (s' | s,b), \forall \pi \in \Pi} , \label{learner_unc_set} \end{equation} where $\bar{\alpha} = (1 - \alpha)$. We define the corresponding class of MDPs as follows: $\mathcal{M}^{\alpha} := \bc{\bc{\mathcal{S}, \mathcal{A}, T^{\alpha}, \gamma, P_0}, \, \forall T^{\alpha} \in \mathcal{T}^{\alpha}}$. We need to choose $\alpha$ such that $M^\mathrm{real} \in \mathcal{M}^\alpha$. Our aim is to find a learner policy that performs well in the MDP $M_{c^*}^{\mathrm{real}}$ by using the state-only demonstrations from $\rho_E$, without knowing or interacting with $M^\mathrm{real}$ during training. Thus, we try to learn a robust policy over the class $\mathcal{M}^\alpha$, while aligning with the expert's state transition occupancy measure $\rho_E$, and acting only in $M^\mathrm{sim}$. By doing this, we ensure that the resulting policy performs reasonably well on any MDP $M \in \mathcal{M}^\alpha$, including $M^\mathrm{real}$, w.r.t. the true cost function $c^*$. Based on this idea, we propose the following robust learning from observation (LfO) problem: \begin{align} \min_{\pi^{\mathrm{pl}} \in \Pi} \max_{M \in \mathcal{M}^{\alpha}} \quad& - H_{\rho^{\pi^{\mathrm{pl}}}_{M}}\br{\pi^{\mathrm{pl}}} \label{mdp_opt_start}\\ \text{subject to} \quad& \rho^{\pi^{\mathrm{pl}}}_{M}\br{s,s'} ~=~ \rho_E \br{s,s'} , \, \forall s,s' \in \mathcal{S} , \label{mdp_opt_end} \end{align} where our learner policy matches the expert's state transition occupancy measure $\rho_E$ under the most adversarial MDP belonging to the set $\mathcal{M}^\alpha$.
The corresponding dual problem is given by: \begin{align} &\max_{c \in \mathbb{R}^{\mathcal{S} \times \mathcal{S}}} \br{\min_{\pi^{\mathrm{pl}} \in \Pi} \max_{M \in \mathcal{M}^{\alpha}} ~ - H_{\rho^{\pi^{\mathrm{pl}}}_{M}}\br{\pi^{\mathrm{pl}}} + \Eee{\rho^{\pi^{\mathrm{pl}}}_{M}}{c\br{s,s'}}} \nonumber \\ & \quad \quad \quad \quad \quad \quad \quad \quad - \Eee{\rho_E}{c\br{s,s'}} . \label{eq:robust-mdp-il-dual-form} \end{align} \looseness-1In the dual problem, for any $c$, we attempt to learn a robust policy over the class $\mathcal{M}^{\alpha}$ with respect to the entropy-regularized reward function. The parameter $c$ plays the role of aligning the learner's policy with the expert's occupancy measure via constraint satisfaction. \looseness-1For any given $c$, we need to solve the inner min-max problem of~\eqref{eq:robust-mdp-il-dual-form}. However, during training, we only have access to the MDP $M^\mathrm{sim}$. To this end, we utilize the equivalence between the robust MDP~\citep{iyengar2005robust,nilim2005robust} formulation and the action-robust MDP~\citep{pinto2017robust,tessler2019action} formulation shown in~\citep{tessler2019action}. We can interpret the maximization over the environment class as a maximization over a set of opponent policies that, with probability $1 - \alpha$, take control of the agent and perform the worst possible move from the current agent state. We can write: \begin{align} & \min_{\pi^{\mathrm{pl}} \in \Pi} \max_{M \in \mathcal{M}^{\alpha}} - H_{\rho^{\pi^{\mathrm{pl}}}_{M}}\br{\pi^{\mathrm{pl}}} + \Eee{\rho^{\pi^{\mathrm{pl}}}_{M}}{c\br{s,s'}} \nonumber \\ ~=~& \min_{\pi^{\mathrm{pl}} \in \Pi} \max_{T^\alpha \in \mathcal{T}^{\alpha}} \E{G_c \bigm| \pi^{\mathrm{pl}}, P_0, T^\alpha} \nonumber \\ ~=~& \min_{\pi^{\mathrm{pl}} \in \Pi} \max_{\pi^{\mathrm{op}} \in \Pi} \E{G_c \bigm| \alpha \pi^{\mathrm{pl}} + (1 - \alpha) \pi^{\mathrm{op}}, M^{\mathrm{sim}}} , \label{equivalence_new} \end{align} where $G_c := \sum_{t=0}^{\infty} \gamma^t \bc{c\br{s_t, s_{t+1}} - H^{\pi^{pl}}(A|S=s_t)}$. The above equality holds due to the derivation in Section 3.1 of~\citep{tessler2019action}. We can formulate the problem~\eqref{equivalence_new} as a two-player zero-sum Markov game~\citep{littman1994markov} with transition dynamics given by \begin{align*} T^{\mathrm{two},\alpha}(s' | s, a^{\mathrm{pl}}, a^{\mathrm{op}}) ~=~& \alpha T^{\mathrm{sim}}(s' | s, a^{\mathrm{pl}}) + (1 - \alpha) T^{\mathrm{sim}}(s' | s, a^{\mathrm{op}}) , \end{align*} where $a^{\mathrm{pl}}$ is an action chosen according to the player policy and $a^{\mathrm{op}}$ according to the opponent policy. As a result, we reach a two-player Markov game with a regularization term for the player as follows: \begin{equation} \argmin_{\pi^{\mathrm{pl}} \in \Pi} \max_{\pi^{\mathrm{op}} \in \Pi} \E{G_{c} \bigm| \pi^{\mathrm{pl}}, \pi^{\mathrm{op}}, M^{\mathrm{two},\alpha}} , \label{objective} \end{equation} where $M^{\mathrm{two},\alpha} = \bc{\mathcal{S}, \mathcal{A}, \mathcal{A}, T^{\mathrm{two},\alpha}, \gamma, P_0}$ is the two-player MDP associated with the above game. \begin{algorithm}[t] \caption{Robust GAILfO} \label{alg:robust-gailfo} \begin{algorithmic} \STATE \textbf{Input:} state-only expert demonstrations $\mathcal{D}^E$, opponent strength parameter $\alpha$. \STATE \textbf{Initialize:} discriminator $D_w$, actor policy $\pi_\theta$, and adversary policy $\pi_\phi$.
\FOR{$n \in \bc{1,2,\dots, N}$} \STATE \looseness-1collect trajectories $\tau_i$ by executing the policies $\pi^{\mathrm{pl}}_\theta$ and $\pi^{\mathrm{op}}_\phi$ (see Algorithm~\ref{alg:collect_trajs}), and store them in the demonstration buffer $\mathcal{D}$. \STATE update the discriminator $D_w$ to classify the expert demonstrations $\tau_E \in \mathcal{D}^E$ from the samples $\tau_i \in \mathcal{D}$, i.e., update $w$ via gradient ascent with the following gradient: \begin{align*} \widehat{\mathbb{E}}_{\tau_i\in \mathcal{D}}[\nabla_w \log D_w (s, s^\prime)] + \widehat{\mathbb{E}}_{\tau_E \in \mathcal{D}^E}[\nabla_w \log (1 - D_w (s, s^\prime))] . \end{align*} \STATE update the reward function $R_w(s, s^\prime) \gets - \log D_w(s, s^\prime)$. \STATE compute the following gradient estimates: \begin{align*} \widehat{\nabla}_{\theta} J(\theta, \phi) =& \frac{1}{\abs{\mathcal{D}}} \sum_{\tau_i \in \mathcal{D}} \sum_{t}\gamma^t \nabla_{\theta} \log \pi_{\theta, \phi}^\mathrm{mix}(a^i_t|s^i_t) \bs{G^i_t + \lambda G^{\mathrm{log}, i}_t} \\ \widehat{\nabla}_{\phi} J(\theta, \phi) =& \frac{1}{\abs{\mathcal{D}}} \sum_{\tau_i \in \mathcal{D}} \sum_{t}\gamma^t \nabla_{\phi} \log \pi_{\theta, \phi}^\mathrm{mix}(a^i_t|s^i_t) \bs{G^i_t + \lambda G^{\mathrm{log}, i}_t} , \end{align*} where $G^i_t = \sum^{T}_{k=t+1} \gamma^{k - t - 1}R_w(s^i_k, s^i_{k+1})$ and $G^{\mathrm{log},i}_t = \sum^{T}_{k=t+1} - \gamma^{k - t - 1} H^{\pi^{\mathrm{pl}}_{\theta}}(A|S=s^i_k)$. \STATE update the policies $\pi^{\mathrm{pl}}_\theta$ and $\pi^{\mathrm{op}}_\phi$ using PPO with the gradient estimates above. \ENDFOR \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{Collecting Trajectories} \label{alg:collect_trajs} \begin{algorithmic} \STATE \textbf{Input:} total number of trajectories $N_{\mathrm{traj}}$, reward function $R_w$. \FOR{$n \in \bc{1,2, \dots, N_{\mathrm{traj}}}$} \STATE $t \gets 0$ \STATE initialize an empty trajectory $\tau$. \WHILE{not $\mathrm{done}$} \STATE observe state $s_t$. \STATE sample actions $a^{\mathrm{pl}}_t \sim \pi^{\mathrm{pl}}_\theta(\cdot | s_t)$ and $a^{\mathrm{op}}_t \sim \pi^{\mathrm{op}}_\phi(\cdot | s_t)$. \STATE execute $a^{\mathrm{op}}_t$ with probability $\bar{\alpha}$, or $a^{\mathrm{pl}}_t$ with probability $\alpha$. \STATE observe the reward $r_{t+1} = R_w(s_t, s_{t+1})$, the next state $s_{t+1}$, and $\mathrm{done}$. \STATE store the tuple $(s_t, a^{\mathrm{pl}}_t, a^{\mathrm{op}}_t, s_{t+1}, r_{t+1})$ in the trajectory $\tau$. \ENDWHILE \STATE $\mathcal{D} \leftarrow \mathcal{D} \cup \bc{\tau}$. \ENDFOR \STATE \textbf{Output:} $\mathcal{D}$ \end{algorithmic} \end{algorithm} \subsection{Robust GAILfO} \label{sec:robust-gailfo-full} In this section, we present our robust Generative Adversarial Imitation Learning from Observation (robust GAILfO) algorithm based on the discussions in Section~\ref{sec:robust-lfo-mg}.
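As a concrete illustration of the action-robust rollout in Algorithm~\ref{alg:collect_trajs} and of the discriminator update in Algorithm~\ref{alg:robust-gailfo}, the following minimal Python/PyTorch-style sketch may be helpful; \texttt{env}, \texttt{pi\_pl}, \texttt{pi\_op}, and \texttt{D} are illustrative placeholders for exposition, not the interfaces of any released codebase:
\begin{verbatim}
import numpy as np
import torch
import torch.nn.functional as F

def collect_trajectory(env, pi_pl, pi_op, reward_fn, alpha):
    # Action-robust rollout: with probability alpha the player acts,
    # with probability 1 - alpha the opponent (adversary) acts.
    traj, s, done = [], env.reset(), False
    while not done:
        a_pl, a_op = pi_pl.sample(s), pi_op.sample(s)
        a = a_pl if np.random.rand() < alpha else a_op
        s_next, _, done, _ = env.step(a)
        r = reward_fn(s, s_next)  # R_w(s, s') = -log D_w(s, s')
        traj.append((s, a_pl, a_op, s_next, r))
        s = s_next
    return traj

def discriminator_loss(D, agent_pairs, expert_pairs):
    # Logistic loss whose minimization matches the gradient ascent
    # step on E_agent[log D] + E_expert[log(1 - D)]: agent (s, s')
    # pairs are labeled 1 and expert pairs 0; D maps (s, s')
    # features to a logit.
    logit_a, logit_e = D(agent_pairs), D(expert_pairs)
    return (F.binary_cross_entropy_with_logits(
                logit_a, torch.ones_like(logit_a))
            + F.binary_cross_entropy_with_logits(
                logit_e, torch.zeros_like(logit_e)))
\end{verbatim}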
We begin with the robust variant of the two-step procedure $\textsc{RL} \circ \textsc{IRL}_\psi \br{\rho_E}$ of the MCE-IRL algorithm: \begin{align*} \textsc{IRL}_\psi \br{\rho_E} =& \argmax_{c \in \mathbb{R}^{\mathcal{S} \times \mathcal{S}}} - \psi \br{c} - \Eee{\rho_E}{c\br{s,s'}} \\ & \quad \quad + \min_{\pi^{\mathrm{pl}} \in \Pi} \max_{\pi^{\mathrm{op}} \in \Pi} - \lambda H_{\rho^{\pi^\mathrm{mix}}_{M^{\mathrm{sim}}}}\br{\pi^\mathrm{pl}} + \Eee{\rho^{\pi^\mathrm{mix}}_{M^{\mathrm{sim}}}}{c\br{s,s'}} \\ \textsc{RL} \br{c} =& \argmin_{\pi^{\mathrm{pl}} \in \Pi} \max_{\pi^{\mathrm{op}} \in \Pi} - \lambda H_{\rho^{\pi^\mathrm{mix}}_{M^{\mathrm{sim}}}}\br{\pi^\mathrm{pl}} + \Eee{\rho^{\pi^\mathrm{mix}}_{M^{\mathrm{sim}}}}{c\br{s,s'}} , \end{align*} where $\pi^\mathrm{mix} = \alpha \pi^{\mathrm{pl}} + (1-\alpha) \pi^{\mathrm{op}}$. Then, similar to~\citep{ho2016generative,torabi2018generative}, the above two-step procedure can be reduced to the following optimization problem using the discriminator $D:\mathcal{S} \times \mathcal{S} \to (0,1)$: \begin{align*} \min_{\pi^{\mathrm{pl}} \in \Pi} \max_{\pi^{\mathrm{op}} \in \Pi} \max_{D \in \br{0,1}^{\mathcal{S} \times \mathcal{S}}} & - \lambda H_{\rho^{\pi^\mathrm{mix}}_{M^{\mathrm{sim}}}}\br{\pi^\mathrm{pl}} + \Eee{\rho^{\pi^\mathrm{mix}}_{M^{\mathrm{sim}}}}{\log{D\br{s,s'}}} \\ & \quad \quad + \Eee{\rho_E}{\log\br{1-D\br{s,s'}}} . \end{align*} We parameterize the policies and the discriminator as $\pi^\mathrm{pl}_\theta$, $\pi^\mathrm{op}_\phi$, and $D_w$ (with parameters $\theta$, $\phi$, and $w$), and rewrite the above problem as follows: \begin{align*} \min_{\theta} \max_{\phi} \max_{w} & - \lambda H_{\rho^{\pi^\mathrm{mix}_{\theta,\phi}}_{M^{\mathrm{sim}}}}\br{\pi^\mathrm{pl}_\theta} + \Eee{\rho^{\pi^\mathrm{mix}_{\theta,\phi}}_{M^{\mathrm{sim}}}}{\log{D_w\br{s,s'}}} \\ & \quad \quad + \Eee{\rho_E}{\log\br{1-D_w\br{s,s'}}} , \end{align*} where $\pi^\mathrm{mix}_{\theta,\phi} = \alpha \pi^{\mathrm{pl}}_\theta + (1-\alpha) \pi^{\mathrm{op}}_\phi$. We solve the above problem by taking gradient steps alternately w.r.t. $\theta$, $\phi$, and $w$. The calculations of the gradient estimates are given in Appendix~\ref{app:robust-lfo-details}. Following~\citep{ho2016generative,torabi2018generative}, we use proximal policy optimization (PPO~\citep{schulman2017proximal}) to update the policy parameters. Our complete algorithm is given in Algorithm~\ref{alg:robust-gailfo}. We also note that one could use any robust RL approach (including domain randomization) to solve the inner min-max problem of~\eqref{eq:robust-mdp-il-dual-form}. In our work, we used the action-robustness approach since: (i) in the robust RL literature, the equivalence between the domain randomization approach and the action-robustness approach is already established~\citep{tessler2019action}, and (ii) compared to the domain randomization approach, the action-robustness approach only requires access to a single simulation environment and creates a range of environments via action perturbations. \section{Experiments} \label{sec:experiments} \begin{figure} \caption{The average (over $3$ seeds) transfer performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ for each MuJoCo task, as reported in the legend of each plot. The x-axis denotes the relative friction of the learner environment $M^\mathrm{sim}$.
The policies are evaluated in $M^\mathrm{real}_{c^*}$ over $1e5$ steps, truncating the last episode if it does not terminate.} \label{fig:TransferFrictionFixedAlpha} \end{figure}
\begin{figure} \caption{The average (over $3$ seeds) transfer performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ for each MuJoCo task, as reported in the legend of each plot. The x-axis denotes the relative mass of the learner environment $M^\mathrm{sim}$. The policies are evaluated in $M^\mathrm{real}_{c^*}$ over $1e5$ steps, truncating the last episode if it does not terminate.} \label{fig:TransferMassFixedAlpha} \end{figure}
\begin{figure*} \caption{The average (over $3$ seeds) robust performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ for each MuJoCo task, as reported in the legend of each plot. The expert environment $M^\mathrm{real}$, in which the demonstrations are collected, has relative friction $1.0$. In each plot, the black vertical line corresponds to the relative friction of the learner environment $M^\mathrm{sim}$ in which we trained the policy with Algorithm~\ref{alg:robust-gailfo}. The x-axis denotes the relative friction of the test environment $M^\mathrm{test}$ in which the policies are evaluated. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate.} \label{fig:RobustnessFrictionFixedAlpha} \end{figure*}
\begin{figure*} \caption{The average (over $3$ seeds) robust performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ for each MuJoCo task, as reported in the legend of each plot. The expert environment $M^\mathrm{real}$, in which the demonstrations are collected, has relative mass $1.0$. In each plot, the black vertical line corresponds to the relative mass of the learner environment $M^\mathrm{sim}$ in which we trained the policy with Algorithm~\ref{alg:robust-gailfo}. The x-axis denotes the relative mass of the test environment $M^\mathrm{test}$ in which the policies are evaluated. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate.} \label{fig:RobustnessMassFixedAlpha} \end{figure*}
We compare the performance of our robust GAILfO algorithm with different values of $\alpha \in \bc{1.0, 0.999, 0.99, 0.98, 0.97, 0.96, 0.95, 0.90}$ against the standard GAILfO algorithm proposed in~\citep{torabi2018generative}. To the best of our knowledge, GAILfO is the only large-scale imitation learning method that is applicable under the setting described in Section~\ref{sec:intro} (see Table~\ref{table:related-work}).
\subsection{Continuous Control Tasks on MuJoCo}
In this section, we evaluate the performance of our method on standard continuous control benchmarks available in OpenAI Gym~\citep{brockman2016openai}, utilizing the MuJoCo physics engine~\citep{todorov2012mujoco}. Specifically, we benchmark on five tasks: Half-Cheetah, Walker, Hopper, Swimmer, and Inverted-Double-Pendulum. Details of these environments can be found in~\citep{brockman2016openai} and in the OpenAI Gym GitHub repository. The default configuration of each MuJoCo environment (as provided in OpenAI Gym) is regarded as the real or deployment environment ($M^\mathrm{real}$), and the expert demonstrations are collected there. We do not assume any access to the expert MDP beyond this during the training phase.
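As an illustration of this setup, the sketch below collects such state-only demonstrations from a pretrained expert in the default (real) environment. It targets the classic Gym step API, and the \texttt{expert.act} interface is a hypothetical placeholder for the expert policy $\pi^E$.
\begin{verbatim}
import gym

def collect_state_only_demos(expert, env_id, n_demos=10):
    """Collect state-only expert demonstrations in M_real,
    i.e., the default Gym/MuJoCo configuration of env_id."""
    env = gym.make(env_id)
    demos = []
    for _ in range(n_demos):
        states, done = [env.reset()], False
        while not done:
            s_next, _, done, _ = env.step(expert.act(states[-1]))
            states.append(s_next)  # only states are stored, no actions
        demos.append(states)
    return demos
\end{verbatim}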
We construct the simulation or training environments ($M^\mathrm{sim}$) for the imitator by independently modifying some parameters: (i) the mass of the robot in $M^\mathrm{sim}$ is $\bc{0.5, 0.75, 1.0, 1.5, 2.0} \times$ the mass in $M^\mathrm{real}$, and (ii) the friction coefficient on all the joints of the robot in $M^\mathrm{sim}$ is $\bc{0.5, 0.75, 1.0, 1.5, 2.0} \times$ the coefficient in $M^\mathrm{real}$. We train an agent on each task using the proximal policy optimization (PPO) algorithm~\citep{schulman2017proximal} with the rewards defined in OpenAI Gym~\citep{brockman2016openai}, and use the resulting stochastic policy as the expert policy $\pi^E$. In all our experiments, 10 state-only expert demonstrations collected by the expert policy $\pi^E$ in the real environment $M^\mathrm{real}$ are given to the learner.
Our implementation of Algorithm~\ref{alg:robust-gailfo} is based on the codebase from \url{https://github.com/Khrylx/PyTorch-RL}. We use a two-layer feedforward neural network with architecture (128, 128, tanh) for both actor networks (player and opponent) and for the discriminator. The actor (policy) networks are trained with the proximal policy optimization (PPO) method. For training the discriminator $D$, we use Adam~\citep{kingma2014adam} with a learning rate of $1e-4$. For each environment-mismatch pair, we identified the best-performing $\alpha$ parameter based on the ablation study reported in Appendix~\ref{app:transfer-performance}. The learner is trained in the simulator $M^\mathrm{sim}$ for $\approx$3M time steps. For each environment, we run our experiments with 3 different seeds, and we report the mean and standard error of the performance (cumulative true reward) over the 3 trials. The cumulative reward is normalized with those earned by $\pi^E$ and a random policy, so that 1.0 and 0.0 indicate the performance of $\pi^E$ and the random policy, respectively.
Figures~\ref{fig:TransferFrictionFixedAlpha}~and~\ref{fig:TransferMassFixedAlpha} plot the performance of the policies evaluated in the deployment environment ($M^\mathrm{real}$). The x-axis corresponds to the simulation environment ($M^\mathrm{sim}$) in which the policy is trained. We observe that, compared to the standard GAILfO, our robust GAILfO produces policies that transfer more successfully from $M^\mathrm{sim}$ to the $M^\mathrm{real}$ environment. Finally, we evaluate the robustness of the policies trained by our algorithm (with different dynamics mismatches) under different testing conditions. At test time, we evaluate the learned policies by changing the mass and friction values and estimating the cumulative rewards. As shown in Figures~\ref{fig:RobustnessFrictionFixedAlpha}~and~\ref{fig:RobustnessMassFixedAlpha}, our Algorithm~\ref{alg:robust-gailfo} outperforms the baseline in terms of robustness as well.
\subsection{Continuous Gridworld Tasks under Additive Transition Dynamics Mismatch}
\begin{figure} \caption{The contour curves of the reward function of the 2D gridworld environment.} \label{fig:env_continuous} \end{figure}
\looseness-1In this section, we evaluate the effectiveness of our method on a continuous gridworld environment under a transition dynamics mismatch induced by additive noise. Specifically, we consider a 2D environment, where we denote the horizontal coordinate by $x \in [0,1]$ and the vertical one by $y \in [-1,1]$.
The agent starts in the upper left corner, i.e., at the coordinate $(0,1)$, and the episode ends when the agent reaches the lower right region defined by the indicator function $\mathbf{1}\{x\in [0.95, 1], y \in [-1, -0.95]\}$. The reward function is given by: $R(x,y) = -(x-1)^2 -(y+1)^2 -80 e^{-8(x^2 + y^2)} + 10 \cdot \mathbf{1}\{x\in [0.95, 1], y \in [-1, -0.95]\}$. Figure~\ref{fig:env_continuous} provides a graphical representation of the reward function. Note that the region around the origin represents a low-reward area that should be avoided. The action space for the agent is given by $\mathcal{A} = [-0.5, 0.5]^2$, and the transition dynamics are given by: $s_{t+1} = s_t + \frac{a_t}{10}$ with probability (w.p.) $1 - \epsilon$, and $s_{t+1} = s_t - \frac{s_t}{10 \norm{s_t}_2}$ w.p. $\epsilon$. Thus, with probability $\epsilon$, the environment does not respond to the action taken by the agent, but instead takes a step of length $0.1$ towards the low-reward area centered at the origin, i.e., $- \frac{s_t}{10 \norm{s_t}_2}$. The agent should therefore pass far enough from the origin. The parameter $\epsilon$ can be varied to create a dynamics mismatch: a higher $\epsilon$ corresponds to a more difficult environment. We use three experts, trained with $\epsilon = 0.0$, $\epsilon=0.05$, and $\epsilon = 0.1$. The learners act in a different environment with the following values for $\epsilon$: $0.0, 0.05, 0.1, 0.15, 0.2$.
Figure~\ref{fig:best_alpha_continuous} plots the performance of the trained learner policy evaluated in the expert environment. The x-axis corresponds to the learner environment in which the learner policy is trained. In general, we observe behavior comparable to that in the MuJoCo experiments: we can often find an appropriate value for $\alpha$ such that robust GAILfO imitates substantially better under mismatch than standard GAILfO.
\begin{figure} \caption{Average performance (over $3$ seeds) of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ for each mismatch (i.e., each point on the x-axis) in the environment shown in Figure~\ref{fig:env_continuous}. The $\alpha$ values are chosen based on the ablation study in Figure~\ref{fig:ablation_continuous} (see Appendix~\ref{app:transfer-performance-grid}). The x-axis denotes the $\epsilon$ value of the learner environment. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate. In Appendix~\ref{app:new_seeds}, we verify that our strategy of choosing an appropriate $\alpha$ value does not introduce maximization bias.} \label{fig:best_alpha_continuous} \end{figure}
\subsection{Choice of $\alpha$}
We note that one has to carefully choose the value of $\alpha$ to avoid overly conservative behavior (see Figure~\ref{fig:TransferFrictionAblation} in Appendix~\ref{app:transfer-performance}). In principle, given a rough estimate $\widehat T^E$ of the expert dynamics $T^E$, one could choose this value based on Eq.~\eqref{learner_unc_set}. However, the choice of a suitable $\alpha$ value is also affected by other design choices of the algorithm, e.g., the number of inner-loop update iterations for the player and the opponent, and the function approximators used. In order to estimate the accuracy of the simulator, we can execute a safe baseline policy in both the simulator and the real environment, collect trajectories or datasets, and compute an estimate of the transition-dynamics distance between them.
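The following minimal sketch illustrates this estimation procedure. The \texttt{set\_state} interface (available for many MuJoCo tasks) and the Euclidean next-state discrepancy are illustrative assumptions rather than part of our method, and for stochastic dynamics one would average over several sampled next states per visited pair.
\begin{verbatim}
import numpy as np

def dynamics_gap(real_env, sim_env, baseline_policy, n_steps=1000):
    """Crude proxy for the transition-dynamics distance between
    M_sim and M_real along a safe baseline policy."""
    gaps, s = [], real_env.reset()
    for _ in range(n_steps):
        a = baseline_policy.act(s)
        s_real, _, done, _ = real_env.step(a)
        sim_env.set_state(s)              # replay the same (s, a) pair
        s_sim, _, _, _ = sim_env.step(a)
        gaps.append(np.linalg.norm(s_real - s_sim))
        s = real_env.reset() if done else s_real
    return float(np.mean(gaps))
\end{verbatim}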
We can also utilize the performance difference lemma from~\citep{even2003approximate} to obtain a lower bound on the transition dynamics mismatch based on the value function difference in the two environments. In our experiments, apart from the final evaluation, we also minimally access the deployment environment to choose an appropriate value for $\alpha$. Compared to training a policy in the deployment environment from scratch, accessing the deployment environment to choose $\alpha$ is sample-efficient: we only need to evaluate the final policies (trained in the simulation environment) once for each value of $\alpha$, and when we already have a reasonable estimate of $\alpha$, we can further reduce these evaluations.
\section{Conclusions}
\label{sec:conclusions}
In this work, we propose a robust LfO method to solve an offline imitation-learning problem, in which a few state-only expert demonstrations and a simulator with misspecified dynamics are given to the learner. Even though Algorithm~\ref{alg:robust-gailfo} is not fundamentally different from standard robust RL methods, the robust optimization formulation used to derive it is novel and important in the IL context. Experimental results on continuous control tasks in MuJoCo show that our method clearly outperforms the standard GAILfO in terms of the transfer performance (with model misspecification) in the real environment, as well as the robust performance under varying testing conditions. Our algorithm falls under the category of zero-shot sim-to-real transfer~\citep{zhao2020sim} with expert demonstrations, making our method well suited for robotics applications. In principle, one can easily incorporate the two-player Markov game idea into any imitation learning algorithm and derive its robust version. This work can be considered a step towards improving the sample efficiency of IL algorithms, in terms of the number of environment interactions, through robust training on a misspecified simulator.
\begin{acks}
Luca Viano has received financial support from the Enterprise for Society Center (E4S). Parameswaran Kamalaruban acknowledges support from The Alan Turing Institute. Craig Innes and Subramanian Ramamoorthy are supported by a grant from the UKRI Strategic Priorities Fund to the UKRI Research Node on Trustworthy Autonomous Systems Governance and Regulation (EP/V026607/1, 2020-2024). Adrian Weller acknowledges support from a Turing AI Fellowship under grant EP/V025379/1, EPSRC grant EP/V056522/1, The Alan Turing Institute, and the Leverhulme Trust via CFI.
\end{acks}
\balance
\appendix
\onecolumn
\section*{Code Repository}
\url{https://github.com/lviano/robust_gaifo}
\section{Details on the Equivalence between Action-Robust MDPs and Robust MDPs}
In the following, we prove the last equality of Eq.~\eqref{equivalence_new}.
\begin{theorem}
Given the set
\begin{equation*}
\mathcal{T}^{\alpha} ~:=~ \bc{ T : T(s^\prime|s,a) = \alpha T^{\mathrm{sim}}(s^\prime|s,a) + (1 - \alpha)\bar{T}(s^\prime|s) , \, \bar{T}(s^\prime|s) = \sum_a \pi(a|s) T^{\mathrm{sim}}(s^\prime|s,a) \text{ for some } \pi \in \Pi}
\end{equation*}
and a cost function that depends only on state transitions (and not on actions), i.e., $r: \mathcal{S}\times\mathcal{S}\rightarrow \mathbb{R}$, define the return $G_c = \sum^{\infty}_{t=0} \gamma^t r(s_t, s_{t+1})$.
Then, the following holds:
\begin{equation*}
\min_{\pi^{\mathrm{pl}} \in \Pi} \max_{T^\alpha \in \mathcal{T}^{\alpha}} \E{G_c \bigm| \pi^{\mathrm{pl}}, P_0, T^\alpha} = \min_{\pi^{\mathrm{pl}} \in \Pi} \max_{\pi^{\mathrm{op}} \in \Pi} \E{G_c \bigm| \alpha \pi^{\mathrm{pl}} + (1 - \alpha) \pi^{\mathrm{op}}, M^{\mathrm{sim}}} .
\end{equation*}
In particular, the result in Eq.~\eqref{equivalence_new} follows from the choice $r(s_t, s_{t+1}) = c(s_t,s_{t+1}) + H^{\pi^{\mathrm{pl}}}(A|S=s_t)$.
\end{theorem}
\begin{proof}
Let us define $P^{\pi, T}(s_0, \dots, s_N) := P_0(s_0) \prod^{N-1}_{t=0}\sum_{a}\pi(a|s_t) T(s_{t+1}|s_t, a)$. We need to show equality between the distributions $P^{\pi^{\mathrm{pl}}, \alpha T^{\mathrm{sim}} + (1 - \alpha) \bar{T}}$ and $P^{\alpha \pi^{\mathrm{pl}} + (1 - \alpha)\pi^{\mathrm{op}}, T^{\mathrm{sim}}}$. Due to the Markov property, this is equivalent to showing:
\begin{equation}
\sum_{a}\pi^{\mathrm{pl}}(a|s_t) \bs{\alpha T^{\mathrm{sim}}(s_{t+1}|s_t, a) + (1 - \alpha) \bar{T}(s_{t+1}| s_t)} = \sum_{a} \bs{\alpha \pi^{\mathrm{pl}}(a|s_t) + (1 - \alpha)\pi^{\mathrm{op}}(a|s_t)}T^{\mathrm{sim}}(s_{t+1}|s_t, a) ,
\end{equation}
which simplifies to:
\begin{equation*}
\underbrace{\sum_{a}\pi^{\mathrm{pl}}(a|s_t)}_{=1} \bar{T}(s_{t+1}| s_t) = \sum_{a}\pi^{\mathrm{op}}(a|s_t)T^{\mathrm{sim}}(s_{t+1}|s_t, a) .
\end{equation*}
Hence, the equality between $P^{\pi^{\mathrm{pl}}, \alpha T^{\mathrm{sim}} + (1 - \alpha) \bar{T}}$ and $P^{\alpha \pi^{\mathrm{pl}} + (1 - \alpha)\pi^{\mathrm{op}}, T^{\mathrm{sim}}}$ holds for:
\begin{equation*}
\bar{T}(s_{t+1}| s_t) = \sum_{a}\pi^{\mathrm{op}}(a|s_t)T^{\mathrm{sim}}(s_{t+1}|s_t, a) ,
\end{equation*}
as used in the definition of the set $\mathcal{T}^{\alpha}$.
\end{proof}
\section{Additional Details on Algorithm~\ref{alg:robust-gailfo}}
\label{app:robust-lfo-details}
By interpreting $R_w\br{s,s'} = - \log{D_w\br{s,s'}}$ as the reward function, we have (for a fixed $w$):
\begin{align*}
J \br{\theta,\phi} ~:=~& \Eee{\rho^{\pi^\mathrm{mix}_{\theta,\phi}}_{M^{\mathrm{sim}}}}{{R_w\br{s,s'}}} + \lambda H_{\rho^{\pi^\mathrm{mix}_{\theta,\phi}}_{M^{\mathrm{sim}}}}\br{\pi^\mathrm{pl}_\theta} ~=~ J_1 \br{\theta,\phi} + J_2 \br{\theta,\phi} ,
\end{align*}
where
\begin{align*}
J_1 \br{\theta,\phi} ~:=~& \E{\sum_{t} \gamma^t R_w(s_t, s_{t+1})\big\vert \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}} \\
J_2 \br{\theta,\phi} ~:=~& \lambda \E{\sum_{t} \gamma^t H^{\pi^{\mathrm{pl}}_{\theta}}(A|S=s_t) \big\vert \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}} .
\end{align*}
By the policy gradient theorem, the derivatives of the first term w.r.t.\ the player and the opponent policy parameters are given by:
\begin{align*}
\nabla_{\theta} J_1(\theta, \phi) ~=~& \sum_{s \in \mathcal{S}}\sum_{t}\gamma^t \P{S_t = s \mid \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}} \sum_{a} \nabla_{\theta} \pi_{\theta, \phi}^\mathrm{mix}(a|s) Q_{\pi_{\theta, \phi}^\mathrm{mix}}(s,a) \\
\nabla_{\phi} J_1(\theta, \phi) ~=~& \sum_{s \in \mathcal{S}}\sum_{t}\gamma^t \P{S_t = s \mid \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}} \sum_{a} \nabla_{\phi} \pi_{\theta, \phi}^\mathrm{mix}(a|s) Q_{\pi_{\theta, \phi}^\mathrm{mix}}(s,a) ,
\end{align*}
where
\begin{align*}
Q_{\pi_{\theta, \phi}^\mathrm{mix}}(s,a) ~=~& \sum_{s^\prime} T^{\mathrm{sim}}(s^\prime \mid s, a) \br{R_w(s,s^\prime) + \gamma V_{\pi_{\theta, \phi}^\mathrm{mix}}(s^\prime)} \\
V_{\pi_{\theta, \phi}^\mathrm{mix}}(s) ~=~& \E{\sum_{t} \gamma^t R_w(s_t, s_{t+1}) \bigg\vert \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}, s_0 = s} .
\end{align*}
For the second term, we introduce the following quantities:
\begin{align*}
Q_{\pi_{\theta, \phi}^\mathrm{mix}}^{\mathrm{log}}(s,a) ~=~& \sum_{s^\prime} T^{\mathrm{sim}}(s^\prime \mid s,a) \br{\lambda H^{\pi^{\mathrm{pl}}_{\theta}}(A|S=s) + \gamma V_{\pi_{\theta, \phi}^\mathrm{mix}}^{\mathrm{log}}(s')} \\
V_{\pi_{\theta, \phi}^\mathrm{mix}}^{\mathrm{log}}(s) ~=~& \E{\sum_{t} \lambda \gamma^t H^{\pi^{\mathrm{pl}}_{\theta}}(A|S=s_t)\bigg \vert \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}, s_0 = s} .
\end{align*}
Then, we obtain the following derivatives of the second term:
\begin{align*}
\nabla_{\theta} J_2(\theta, \phi) ~=~& \sum_{s \in \mathcal{S}}\sum_{t}\gamma^t \P{S_t = s \mid \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}} \sum_{a} \nabla_{\theta} \pi_{\theta, \phi}^\mathrm{mix}(a|s) Q^{\mathrm{log}}_{\pi_{\theta, \phi}^\mathrm{mix}}(s,a) \\
\nabla_{\phi} J_2(\theta, \phi) ~=~& \sum_{s \in \mathcal{S}}\sum_{t}\gamma^t \P{S_t = s \mid \pi_{\theta, \phi}^\mathrm{mix}, M^{\mathrm{sim}}} \sum_{a} \nabla_{\phi} \pi_{\theta, \phi}^\mathrm{mix}(a|s) Q^{\mathrm{log}}_{\pi_{\theta, \phi}^\mathrm{mix}}(s,a) .
\end{align*}
For a practical algorithm, we need to compute gradient estimates from a dataset of sampled trajectories $\mathcal{D} = \bc{\tau_i}_i$ with $\tau_i = (s^i_0, a^i_0, \dots, s^i_T, a^i_T)$.
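The future-return estimators $\widehat{Q}$ and $\widehat{Q}^{\mathrm{log}}$ appearing in the estimates below can be computed with a single backward recursion per trajectory. A minimal Python sketch is given here, where the hypothetical arrays \texttt{rewards} and \texttt{entropies} hold the per-step quantities $R_w(s^i_k, s^i_{k+1})$ and $H^{\pi^{\mathrm{pl}}_{\theta}}(A|S=s^i_k)$, respectively:
\begin{verbatim}
def discounted_returns(rewards, entropies, gamma):
    """Backward recursion for G_t = sum_{k>t} gamma^(k-t-1) * rewards[k]
    and G_log_t = -sum_{k>t} gamma^(k-t-1) * entropies[k]."""
    T = len(rewards)
    G, G_log = [0.0] * T, [0.0] * T
    g = g_log = 0.0
    for t in reversed(range(T)):
        # the sums start at k = t + 1, so record before adding step t's terms
        G[t], G_log[t] = g, g_log
        g = rewards[t] + gamma * g
        g_log = -entropies[t] + gamma * g_log
    return G, G_log
\end{verbatim}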
The gradient estimates are given by:
\begin{align*}
\widehat{\nabla}_{\theta} J_1(\theta, \phi) ~=~& \sum_{\tau_i \in \mathcal{D}} \sum_{t}\gamma^t \nabla_{\theta} \log \pi_{\theta, \phi}^\mathrm{mix}(a^i_t|s^i_t) \widehat{Q}_{\pi_{\theta, \phi}^\mathrm{mix}}(s^i_t,a^i_t) \\
\widehat{\nabla}_{\phi} J_1(\theta, \phi) ~=~& \sum_{\tau_i \in \mathcal{D}} \sum_{t}\gamma^t \nabla_{\phi} \log \pi_{\theta, \phi}^\mathrm{mix}(a^i_t|s^i_t) \widehat{Q}_{\pi_{\theta, \phi}^\mathrm{mix}}(s^i_t,a^i_t) \\
\widehat{\nabla}_{\theta} J_2(\theta, \phi) ~=~& \sum_{\tau_i \in \mathcal{D}} \sum_{t}\gamma^t \nabla_{\theta} \log \pi_{\theta, \phi}^\mathrm{mix}(a^i_t|s^i_t) \widehat{Q}^{\mathrm{log}}_{\pi_{\theta, \phi}^\mathrm{mix}}(s^i_t,a^i_t) \\
\widehat{\nabla}_{\phi} J_2(\theta, \phi) ~=~& \sum_{\tau_i \in \mathcal{D}} \sum_{t}\gamma^t \nabla_{\phi} \log \pi_{\theta, \phi}^\mathrm{mix}(a^i_t|s^i_t) \widehat{Q}^{\mathrm{log}}_{\pi_{\theta, \phi}^\mathrm{mix}}(s^i_t,a^i_t) ,
\end{align*}
where the estimator $\widehat{Q}_{\pi_{\theta, \phi}^\mathrm{mix}}(s^i_t, a^i_t)$ is the future return observed for trajectory $i$ after time $t$, i.e., $\widehat{Q}_{\pi_{\theta, \phi}^\mathrm{mix}}(s^i_t, a^i_t) = \sum^{T}_{k=t+1} \gamma^{k - t - 1}R_w(s^i_k, s^i_{k+1}) = G^i_t$. Similarly, for the entropy term, we have $\widehat{Q}^{\mathrm{log}}_{\pi_{\theta, \phi}^\mathrm{mix}}(s^i_t, a^i_t) = \sum^{T}_{k=t+1} - \gamma^{k - t - 1} H^{\pi^{\mathrm{pl}}_{\theta}}(A|S=s^i_k) = G^{\mathrm{log},i}_t$. The trajectory sampling process is given in Algorithm~\ref{alg:collect_trajs}.
\section{Transfer Performance: MuJoCo}
\label{app:transfer-performance}
We present the following results:
\begin{itemize}
\item The ablation study on the transfer performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ under the relative friction mismatches (see Figure~\ref{fig:TransferFrictionAblation}).
\item The ablation study on the transfer performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ under the relative mass mismatches (see Figure~\ref{fig:TransferMassAblation}).
\item The transfer performance of Algorithm~\ref{alg:robust-gailfo} with different (best) values of $\alpha$ for each relative friction mismatch of a task (see Figure~\ref{fig:TransferFrictionVarAlpha}).
\item The transfer performance of Algorithm~\ref{alg:robust-gailfo} with different (best) values of $\alpha$ for each relative mass mismatch of a task (see Figure~\ref{fig:TransferMassVarAlpha}).
\end{itemize}
\begin{figure} \caption{The average (over $3$ seeds) transfer performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$. The ablation shown here is used to choose $\alpha$ in Figure~\ref{fig:TransferFrictionFixedAlpha}. The x-axis denotes the relative friction of the learner environment $M^\mathrm{sim}$. The policies are evaluated in $M^\mathrm{real}_{c^*}$ over $1e5$ steps, truncating the last episode if it does not terminate. Note that robust GAILfO with $\alpha = 1$ corresponds to GAILfO.} \label{fig:TransferFrictionAblation} \end{figure}
\begin{figure} \caption{The average (over $3$ seeds) transfer performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$. The ablation shown here is used to choose $\alpha$ in Figure~\ref{fig:TransferMassFixedAlpha}. The x-axis denotes the relative mass of the learner environment $M^\mathrm{sim}$. The policies are evaluated in $M^\mathrm{real}_{c^*}$ over $1e5$ steps, truncating the last episode if it does not terminate.
Note that robust GAILfO with $\alpha = 1$ corresponds to GAILfO.} \label{fig:TransferMassAblation} \end{figure}
\begin{table}[h]
\centering
\caption{Best value for $\alpha < 1$, chosen independently for each mismatch based on the ablation in Figure~\ref{fig:TransferFrictionAblation}. The performance of this configuration is reported by the red line in Figure~\ref{fig:TransferFrictionVarAlpha}. We add a $1$ in brackets when standard GAILfO outperforms the robust version; the value outside the brackets denotes the best value found for the robust version.}
\begin{tabular}{l|lllll}
& \multicolumn{5}{c}{Relative Friction} \\
& 0.5 & 0.75 & 1.0 & 1.5 & 2.0 \\ \hline
HalfCheetah & 0.999 & 0.999 & 0.999 & 0.999 & 0.999 \\
Walker & 0.98 & 0.999 & 0.97 & 0.97 & 0.97 \\
Hopper & 0.9 (1) & 0.99 (1) & 0.97 & 0.95 & 0.95
\end{tabular}
\label{tab:best_alpha_friction_transfer}
\end{table}
\begin{table}[h]
\centering
\caption{Best value for $\alpha < 1$, chosen independently for each mismatch based on the ablation in Figure~\ref{fig:TransferMassAblation}. The performance of this configuration is reported by the red line in Figure~\ref{fig:TransferMassVarAlpha}. We add a $1$ in brackets when standard GAILfO outperforms the robust version; the value outside the brackets denotes the best value found for the robust version.}
\begin{tabular}{l|lllll}
& \multicolumn{5}{c}{Relative Mass} \\
& 0.5 & 0.75 & 1.0 & 1.5 & 2.0 \\ \hline
HalfCheetah & 0.96 & 0.97 & 0.98 & 0.96 & 0.97 \\
Walker & 0.98 & 0.95 & 0.97 & 0.999 & 0.98 \\
Hopper & 0.9 & 0.97 & 0.97 & 0.98 & 0.999 \\
InvDoublePendulum & 0.98 & 0.99 & 0.97 & 0.96 & 0.97 \\
Swimmer & 0.96 (1) & 0.999 (1) & 0.95 & 0.95 & 0.98 \\
\end{tabular}
\label{tab:best_alpha_mass_transfer}
\end{table}
\begin{figure} \caption{Average performance (over $3$ seeds) of Algorithm~\ref{alg:robust-gailfo} with the value of $\alpha$ chosen independently for each mismatch (i.e., each point on the x-axis). The choice is made by picking the best-performing $\alpha$ for each mismatch in Figure~\ref{fig:TransferFrictionAblation}. The x-axis reports the relative friction of the learner environment. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate. The values chosen for $\alpha$ are given in Table~\ref{tab:best_alpha_friction_transfer}.} \label{fig:TransferFrictionVarAlpha} \end{figure}
\begin{figure} \caption{Average performance (over $3$ seeds) of Algorithm~\ref{alg:robust-gailfo} with the value of $\alpha$ chosen independently for each mismatch (i.e., each point on the x-axis). The choice is made by picking the best-performing $\alpha$ for each mismatch in Figure~\ref{fig:TransferMassAblation}. The x-axis reports the relative mass of the learner environment. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate. The values chosen for $\alpha$ are given in Table~\ref{tab:best_alpha_mass_transfer}.} \label{fig:TransferMassVarAlpha} \end{figure}
\section{Robust Performance: MuJoCo}
\label{app:robust-results}
We present the following results:
\begin{itemize}
\item The ablation study on the robust performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ under the relative friction variations (see Figure~\ref{fig:RobustnessFrictionAblation}).
\item The ablation study on the robust performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ under the relative mass variations (see Figure~\ref{fig:RobustnessMassAblation}).
\end{itemize}
\begin{figure*} \caption{The average (over $3$ seeds) robust performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$. The ablation shown here is used to choose $\alpha$ in Figure~\ref{fig:RobustnessFrictionFixedAlpha}. The expert environment $M^\mathrm{real}$, in which the demonstrations are collected, has relative friction $1.0$. In each plot, the black vertical line corresponds to the relative friction of the learner environment $M^\mathrm{sim}$ in which we trained the policy with Algorithm~\ref{alg:robust-gailfo}. The x-axis denotes the relative friction of the test environment $M^\mathrm{test}$ in which the policies are evaluated. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate. Note that robust GAILfO with $\alpha = 1$ corresponds to GAILfO.} \label{fig:RobustnessFrictionAblation} \end{figure*}
\begin{figure*} \caption{The average (over $3$ seeds) robust performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$. The ablation shown here is used to choose $\alpha$ in Figure~\ref{fig:RobustnessMassFixedAlpha}. The expert environment $M^\mathrm{real}$, in which the demonstrations are collected, has relative mass $1.0$. In each plot, the black vertical line corresponds to the relative mass of the learner environment $M^\mathrm{sim}$ in which we trained the policy with Algorithm~\ref{alg:robust-gailfo}. The x-axis denotes the relative mass of the test environment $M^\mathrm{test}$ in which the policies are evaluated. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate. Note that robust GAILfO with $\alpha = 1$ corresponds to GAILfO.} \label{fig:RobustnessMassAblation} \end{figure*}
\section{Transfer Performance: Continuous Gridworld}
\label{app:transfer-performance-grid}
\begin{figure} \caption{The average (over $3$ seeds) transfer performance of Algorithm~\ref{alg:robust-gailfo} with a fixed value of $\alpha$ for each mismatch (i.e., each point on the x-axis) in the environment shown in Figure~\ref{fig:env_continuous}. The x-axis denotes the $\epsilon$ value of the learner environment. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate.} \label{fig:single_best_alpha_continuous} \end{figure}
\begin{figure} \caption{The average (over $3$ seeds) transfer performance of Algorithm~\ref{alg:robust-gailfo} with different values of $\alpha$ for each mismatch (i.e., each point on the x-axis) in the environment shown in Figure~\ref{fig:env_continuous}. The x-axis denotes the $\epsilon$ value of the learner environment. The policies are evaluated over $1e5$ steps, truncating the last episode if it does not terminate. The ablation shown here is used to choose $\alpha$ in Figure~\ref{fig:single_best_alpha_continuous}. Note that robust GAILfO with $\alpha = 1$ corresponds to GAILfO.} \label{fig:ablation_continuous} \end{figure}
\section{Additional Experiments on the Choice of $\alpha$}
\label{app:new_seeds}
In this section, we aim to understand whether our strategy of choosing a suitable $\alpha$ value introduces maximization bias. For example, in Figure~\ref{fig:ablation_continuous}, the best-performing $\alpha$ is chosen, and its performance curve (w.r.t.\ the original seeds used for training) is presented in Figure~\ref{fig:single_best_alpha_continuous}. To avoid this bias, for the chosen best-performing $\alpha$ in Figure~\ref{fig:ablation_continuous}, we conduct a new set of runs with a new set of seeds.
The new results, presented in Figure~\ref{fig:seed_consistency}, suggest that our $\alpha$ selection process does not introduce maximization bias.
\begin{figure} \caption{Experiments for understanding whether our strategy of choosing a suitable $\alpha$ value introduces maximization bias.} \label{fig:seed_consistency} \end{figure}
\end{document}