PDEs whose solutions can be presented using path integrals
It is well known that solutions of the Schroedinger equation and of the heat equation can be presented using path integrals: $$\psi(x,t)=\int K(x,t;y,0)\psi(y,0)dy,$$ where the kernel $K(x,t;y,0)$ is given by an integral of some expressions over all paths connecting $y$ and $x$.
Remark. In the former case this presentation exists only at the physical level of rigor as far as I know, while in the latter case it is mathematically rigorous.
I am wondering if there exist any other classes of PDEs with a similar property (even at the physical level of rigor). In particular, does there exist such a presentation for solutions of the Dirac equation?
ap.analysis-of-pdes mp.mathematical-physics quantum-mechanics physics
MKO
$\begingroup$ Certainly various "variants" of the heat equations are OK: you can substitute any generator of a (sufficiently nice) Markov process for the Laplacian, and add a killing term (sometimes called a "Schrödinger potential"). This is, however, rather far from the Dirac equation. $\endgroup$ – Mateusz Kwaśnicki Sep 12 '18 at 19:54
$\begingroup$ are you familiar with the Feynman-Kac formulas for elliptic and parabolic (see Oksendal)? Feynman-Kac formulas are an attempt to make sense of the path integrals. $\endgroup$ – OOESCoupling Sep 12 '18 at 20:42
$\begingroup$ @ThomasKojar: No, I am not familiar with that. Actually I am not aware of any other examples of PDE with the relevant property. Can you elaborate on those examples you mentioned? $\endgroup$ – MKO Sep 12 '18 at 20:53
$\begingroup$ en.wikipedia.org/wiki/… $\endgroup$ – user69208 Sep 13 '18 at 0:32
I believe the physicist's heuristic derivation of the path integral formula for the Schroedinger equation can be generalized to arbitrary linear partial differential equations, although I don't have a clear idea of how to obtain the expression over which the path integral is taken. Seeing how vague this response is, I hope I'm not just repeating things you already know.
Consider a time-dependent linear partial differential equation
$$ \partial_t u (t) = L (t) u (t) $$
Since the equation is linear, $u (t_1)$ can be determined from $u (t_0)$ with a kernel $K_{t_0 \to t_1}$,
$$ u (t_1, x_1) = \int K_{t_0 \to t_1} (x_0 \to x_1) u (t_0, x_0) dx_0 $$
Iterating this, we can determine $u$ at a later time $t_2$:
$$ u (t_2, x_2) = \iint K_{t_1 \to t_2} (x_1 \to x_2) K_{t_0 \to t_1} (x_0 \to x_1) u (t_0, x_0) dx_1 dx_0 $$
$$ u (t_n, x_n) = \idotsint K_{t_{n-1} \to t_n} (x_{n-1} \to x_n) K_{t_{n-2} \to t_{n-1}} (x_{n-2} \to x_{n-1}) \dots K_{t_0 \to t_1} (x_0 \to x_1) u (t_0, x_0) dx_{n-1} \dots dx_0 $$
Keeping $t_0, x_0, t_n, x_n$ fixed while taking the limit as the partition $(t_0, t_1, \dots, t_{n-1}, t_n)$ gets more and more fine should produce a path integral expression for $u (t_n, x_n)$, say
$$ u (t, y) = \int_{\gamma (t) = y} F_{s,t} (\gamma) u (s, \gamma (0)) \mathcal {D} \gamma $$
$$ F_{s,t} (\gamma) = \lim _{\substack {t_0 = s < t_1 < \dots < t_{n-1} < t_n = t \\ \max (t_{i+1} - t_i) \to 0}} \prod_{i=0}^{n-1} K_{t_i \to t_{i+1}} (\gamma (t_i) \to \gamma (t_{i+1})) $$
It may be necessary to add a term to this expression corresponding to the volume of the path spaces. Presumably, the next step is to use the fact that $t_{i+1}-t_i$ goes to zero to find an expression for the limit above only in terms of what appears directly in $L (t)$, without making reference to the kernel $K$. Unfortunately, I don't know what the general way of doing that is.
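To make the time-slicing concrete in a case where the kernel is known in closed form, here is a small numerical sketch for the heat equation $u_t = u_{xx}$ (purely illustrative; the grid, domain and number of steps are arbitrary choices of mine). Composing many short-time Gaussian kernels by quadrature reproduces, up to discretisation error, a single application of the long-time kernel, which is exactly the iterated-integral structure written above.

```python
import numpy as np

# Heat equation u_t = u_xx: the exact kernel K_{s->t}(x0 -> x1) is a Gaussian
# of variance 2(t - s). Iterating the short-time kernel over a partition of [0, T]
# should reproduce the long-time kernel applied once.
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]

def heat_kernel(x1, x0, dt):
    return np.exp(-(x1[:, None] - x0[None, :])**2 / (4.0 * dt)) / np.sqrt(4.0 * np.pi * dt)

u0 = np.exp(-x**2)                                # initial data u(0, x)
T, n_steps = 1.0, 50
K_step = heat_kernel(x, x, T / n_steps) * dx      # one short-time transfer matrix

u = u0.copy()
for _ in range(n_steps):                          # iterate the kernel, as in the text
    u = K_step @ u

u_direct = (heat_kernel(x, x, T) * dx) @ u0       # single long-time kernel
print("max difference:", np.abs(u - u_direct).max())   # small, up to quadrature error
```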
Itai Bar-Natan
'Rectilinear' and 'Diagonal' Basis in BB84 Protocol
A lot of the tutorials on the BB84 protocol talk about these two measurement bases, 'Rectilinear' or 'Vertical-Horizontal' and 'Diagonal'. I understand that it is possible to create a physical device that would be able to measure a qubit in both the vertical and horizontal directions, or in other words, in the 'Rectilinear' basis, but what would be the matrix representation of it?
For example, we can use $\lvert 0 \rangle \langle 0 \rvert$ to measure a qubit in the $\lvert 0 \rangle$ basis and $\lvert 1 \rangle \langle 1 \rvert$ to measure in the $\lvert 1 \rangle$ basis. But what would be the combined measurement basis which we could call 'rectilinear' or 'vertical-horizontal'?
measurement cryptography terminology
Hasan Iqbal
$\begingroup$ I think the terminology comes from photon polarisations which according to wikipedia at least is how the protocol was originally described. en.wikipedia.org/wiki/… $\endgroup$ – snulty May 16 '18 at 19:48
$\begingroup$ As written right now, the question seems to (wrongly) state that |0> is a basis and |1> is a different basis. Instead, they are the two possible results in the same basis. $\endgroup$ – agaitaarino May 17 '18 at 4:25
Talking about bases such as $\left|0\rangle\langle0\right|$ and $\left|1\rangle\langle1\right|$ (or the equivalent vector notation $\left|0\right>$ and $\left|1\right>$, which I'll use in this answer) at the same time as 'horizontal' and 'vertical' are, to a fair extent (pardon the pun), orthogonal concepts.
On a Bloch sphere, we generally consider 3 different orthonormal bases: $\left|0\right\rangle$ and $\left|1\right\rangle$; $\frac{1}{\sqrt 2}\left(\left|0\right\rangle+\left|1\right\rangle\right)$ and $\frac{1}{\sqrt 2}\left(\left|0\right\rangle-\left|1\right\rangle\right)$; $\frac{1}{\sqrt 2}\left(\left|0\right\rangle+i\left|1\right\rangle\right)$ and $\frac{1}{\sqrt 2}\left(\left|0\right\rangle-i\left|1\right\rangle\right)$. I'll refer to these as the 'quantum information bases' as this is the notation generally used in quantum information.
That looks a bit of a mess, so we can also write this as $\left|\uparrow_z\right\rangle$, $\left|\downarrow_z\right\rangle$; $\left|\uparrow_x\right\rangle$, $\left|\downarrow_x\right\rangle$; $\left|\uparrow_y\right\rangle$, $\left|\downarrow_y\right\rangle$, where the different bases are now labelled as $x, y$ and $z$. In terms of spin-half particles, this has a natural definition of up/down spin in each of those directions. However, there is freedom in choosing which direction (in the lab) these axes are in (unless otherwise constrained).
Photons (used in the BB84 protocol) aren't spin-half particles (they have a spin of one), but nevertheless have similarities to this - the 'axes' are the possible directions of the polarisation of a photon, only instead of labelling these as $x, y$ and $z$, they're labelled as horizontal/vertical, diagonal/antidiagonal and left-/right-circular, or in vector notation, this is shortened to $\left|H\right>$, $\left|V\right>$; $\left|D\right>$, $\left|A\right>$; $\left|L\right>$ and $\left|R\right>$. These can then be mapped on to the 'quantum information' bases above, although which basis gets labelled as $\left|0\right>$ and $\left|1\right>$ is somewhat arbitrary.
For the BB84 protocol (and indeed, frequently used in other applications), the rectilinear (vertical/horizontal) basis is the one labelled using $\left|0\right>$ and $\left|1\right>$.
That is: $$\left|H\right>=\left|0\right>$$ $$\left|V\right>=\left|1\right>$$ $$\left|D\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle+\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle+\left|1\right\rangle\right)$$ $$\left|A\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle-\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle-\left|1\right\rangle\right)$$ $$\left|R\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle+i\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle+i\left|1\right\rangle\right)$$ $$\left|L\right>=\frac{1}{\sqrt 2}\left(\left|H\right\rangle-i\left|V\right\rangle\right)=\frac{1}{\sqrt 2}\left(\left|0\right\rangle-i\left|1\right\rangle\right)$$
If you want to measure in any of these bases, use the 'projectors' of that basis. That is, if you want to measure in the rectilinear basis, the projectors are $\left|H\rangle\langle H\right|$ and $\left|V\rangle\langle V\right|$. Similarly, in the diagonal basis, $\left|D\rangle\langle D\right|$ and $\left|A\rangle\langle A\right|$; and in the circularly polarised basis, $\left|L\rangle\langle L\right|$ and $\left|R\rangle\langle R\right|$.
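As a quick numerical sanity check of these projectors (an illustrative snippet of mine, using the $\left|H\right>=\left|0\right>$, $\left|V\right>=\left|1\right>$ convention from above), each pair indeed sums to the identity, and measuring $\left|D\right>$ in the rectilinear basis gives H or V with probability 1/2 each:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)      # |H> = |0>
ket1 = np.array([0, 1], dtype=complex)      # |V> = |1>
ketD = (ket0 + ket1) / np.sqrt(2)           # |D>
ketA = (ket0 - ket1) / np.sqrt(2)           # |A>
ketR = (ket0 + 1j * ket1) / np.sqrt(2)      # |R>
ketL = (ket0 - 1j * ket1) / np.sqrt(2)      # |L>

proj = lambda v: np.outer(v, v.conj())      # projector |v><v|

for a, b in [(ket0, ket1), (ketD, ketA), (ketR, ketL)]:
    assert np.allclose(proj(a) + proj(b), np.eye(2))   # each basis is complete

pH = np.real(ketD.conj() @ proj(ket0) @ ketD)          # Born rule
print(pH)   # 0.5
```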
Mithrandir24601♦
$\begingroup$ I think something like Figure 11 in What we can learn about quantum physics from a single qubit would be a perfect illustration for this answer! $\endgroup$ – agaitaarino May 17 '18 at 4:31
For the rectilinear basis, the measurement operators are $|0\rangle\langle 0|$ and $|1\rangle\langle 1|$, as stated in the question. For the other basis, any mutually unbiased basis will do, but people usually go for the two operators $(|0\rangle+|1\rangle)(\langle 0|+\langle 1|)/2$ and $(|0\rangle-|1\rangle)(\langle 0|-\langle 1|)/2$.
The labels of which basis you call what are fairly arbitrary, but I think that the rectilinear basis is usually the one that corresponds with horizontal/vertical polarisation and is labelled 0/1. The diagonal basis is then the other one.
\begin{definition}[Definition:Underlying Set/Abstract Algebra]
Let $\struct {S, \circ}$ be an algebraic structure.
Then the '''underlying set''' of $\struct {S, \circ}$ is the set $S$.
\end{definition}
\begin{document}
\title{Finite generation of cohomology of finite groups} \author{Rapha\"el Rouquier} \address{Department of Mathematics, UCLA, Box 951555, Los Angeles, CA 90095-1555, USA} \email{[email protected]} \thanks{The author was partially supported by the NSF grant DMS-1161999.}
\date{October 2014} \maketitle
\begin{abstract} We give a proof of the finite generation of the cohomology ring of a finite $p$-group over ${\mathbf{F}}_p$ by reduction to the case of elementary abelian groups, based on Serre's Theorem on products of Bocksteins. \end{abstract}
\section{Definitions and basic properties} Let $k={\mathbf{F}}_p$. Given $G$ a finite group, we put $H^*(G)=H^*(G,k)$. We refer to \cite{Ev} for results on group cohomology.
Given $A$ a ring and $M$ an $A$-module, we say that $M$ is finite over $A$ if it is a finitely generated $A$-module.
Let $G$ be a finite group and $L$ a subgroup of $G$. We have a restriction map $\operatorname{res}\nolimits_L^G:H^*(G)\to H^*(L)$. It gives $H^*(L)$ the structure of an $H^*(G)$-module.
We denote by $\operatorname{norm}\nolimits_L^G:H^*(L)\to H^*(G)$ the norm map. If $L$ is central in $G$, then we have $\operatorname{res}\nolimits_L^G\operatorname{norm}\nolimits_L^G(\xi)=\xi^{[G:L]}$ for all $\xi\in H^*(L)$.
When $L$ is normal in $G$, we denote by $\inf^G_{G/L}:H^*(G/L)\to H^*(G)$ the inflation map.
Let $E$ be an elementary abelian $p$-group. The Bockstein $H^1(E)\to H^2(E)$ induces an injective morphism of algebras $S(H^1(E))\hookrightarrow H^*(E)$. We denote by $H^*_{\operatorname{pol}\nolimits}(E)$ its image. Note that $H^*(E)$ is a finitely generated $H^*_{\operatorname{pol}\nolimits}(E)$-module and given $\xi\in H^*(E)$, we have $\xi^p\in H^*_{\operatorname{pol}\nolimits}(E)$.
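To fix ideas, consider the standard example. If $E\simeq ({\mathbf{Z}}/p)^n$ with $p$ odd, then $H^*(E)\simeq \Lambda(x_1,\ldots,x_n)\otimes k[y_1,\ldots,y_n]$ with $\deg x_i=1$ and $y_i=\beta(x_i)$, so that $H^*_{\operatorname{pol}\nolimits}(E)=k[y_1,\ldots,y_n]$ and $H^*(E)$ is free of rank $2^n$ over $H^*_{\operatorname{pol}\nolimits}(E)$. For $p=2$, we have $H^*(E)=k[x_1,\ldots,x_n]$ with $\deg x_i=1$ and $\beta(x_i)=x_i^2$, so that $H^*_{\operatorname{pol}\nolimits}(E)=k[x_1^2,\ldots,x_n^2]$, over which $H^*(E)$ is again free of rank $2^n$.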
\section{Finite generation for finite groups}
The following result is classical. We provide here a proof independent of the finite generation of cohomology rings.
\begin{lemma} \label{le:onto} Let $G$ be a $p$-group and $E$ an elementary abelian subgroup. Then, $H^*(E)$ is finite over $H^{\mathrm{even}}(G)$. \end{lemma}
\begin{proof}
The result is straightforward when $G$ is elementary abelian. As a consequence, given $G$, it is enough to prove the lemma when $E$ is a maximal elementary abelian subgroup. We prove the lemma by induction on $|G|$. Let $Z\le Z(G)$ with $|Z|=p$. Let $P$ be a complement to $Z$ in $E$. Let $A= \mathrm{inf}_{E/Z}^E(H^*_{\operatorname{pol}\nolimits}(E/Z))$. Let $x$ be a generator of $H^2(Z)\buildrel \sim\over\to H^2(E/P)$ and $y=\inf_{E/P}^E(x)$. We have $$H^*_{\operatorname{pol}\nolimits}(E)=A\otimes k[y].$$
Let $\xi=\operatorname{res}\nolimits_E^G(\operatorname{norm}\nolimits_Z^G(x)^p)$. We have $\operatorname{res}\nolimits_Z^E(\xi)=x^{p[G:Z]}$, so $\xi-y^{p[G:Z]}\in H^*_{\operatorname{pol}\nolimits}(E)\cap \ker\operatorname{res}\nolimits_Z^E= A^{>0}H^*_{\operatorname{pol}\nolimits}(E)$. We deduce that $H^*(E)$ is finite over its subalgebra generated by $A$ and $\xi$.
By induction, $H^*(E/Z)$ is finite over $H^*(G/Z)$. We deduce that $H^*(E)$ is finite over its subalgebra generated by $\xi$ and $\mathrm{\inf}_{E/Z}^E\operatorname{res}\nolimits_{E/Z}^{G/Z}H^*(G/Z)= \operatorname{res}\nolimits_E^G\mathrm{inf}_{G/Z}^GH^*(G/Z)$. \end{proof}
Let us recall a form of Serre's Theorem on product of Bocksteins \cite{Se}. We state the result over the integers for a useful consequence stated in Corollary \ref{co:generation}.
\begin{thm}[Serre] \label{th:Serre} Let $G$ be a finite $p$-group. Assume $G$ is not elementary abelian. Then, there is $n\ge 2$, there are subgroups $H_1,\ldots,H_n$ of index $p$ of $G$ and an exact sequence of ${\mathbf{Z}} G$-modules $$0\to {\mathbf{Z}}\to \operatorname{Ind}\nolimits_{H_n}^G{\mathbf{Z}}\to\cdots\to\operatorname{Ind}\nolimits_{H_1}^G {\mathbf{Z}}\to {\mathbf{Z}}\to 0$$ defining a zero class in $\operatorname{Ext}\nolimits^n_{{\mathbf{Z}} G}({\mathbf{Z}},{\mathbf{Z}})$. \end{thm}
\begin{proof} Serre shows there are elements $z_1,\ldots,z_m\in H^1(G,{\mathbf{Z}}/p)$ such that $\beta(z_1)\cdots\beta(z_m)=0$. The element $z_i$ corresponds to a surjective morphism $G\to{\mathbf{Z}}/p$ with kernel $H_i$, and we identify $\operatorname{Ind}\nolimits_{H_i}^G{\mathbf{Z}}$ with ${\mathbf{Z}}[G/H_i]={\mathbf{Z}}[\sigma]/(\sigma^p-1)$, where $\sigma$ is a generator of $G/H_i$. The element $\beta(z_i)\in H^2(G,{\mathbf{Z}}/p)$ is the image of the class $c_i\in H^2(G,{\mathbf{Z}})$ given by the exact sequence $$0\to {\mathbf{Z}}\xrightarrow{1+\sigma+\cdots+\sigma^{p-1}} \operatorname{Ind}\nolimits_{H_i}^G{\mathbf{Z}}\xrightarrow{1-\sigma} \operatorname{Ind}\nolimits_{H_i}^G{\mathbf{Z}}\xrightarrow{\text{augmentation}}{\mathbf{Z}}\to 0.$$
Let $c=c_1\cdots c_m\in H^{2m}(G,{\mathbf{Z}})$. The image of $c$ in $H^{2m}(G,{\mathbf{Z}}/p)$ vanishes, hence $c\in pH^{2m}(G,{\mathbf{Z}})$. Fix $r$ such that $|G|=p^r$.
Since $|G|H^{>0}(G,{\mathbf{Z}})=0$, we deduce that $c^r=0$. \end{proof}
We will only need the case $R={\mathbf{F}}_p$ of the corollary below. We denote by $D^b(RG)$ the derived category of bounded complexes of finitely generated $RG$-modules.
\begin{cor} \label{co:generation} Let $G$ be a finite group and $R$ a discrete valuation ring with residue field of characteristic $p$ or a field of characteristic $p$. Assume $x^{p-1}=1$ has $p-1$ solutions in $R$.
Let ${\mathcal{I}}$ be the thick subcategory of $D^b(RG)$ generated by modules of the form $\operatorname{Ind}\nolimits_E^G M$, where $E$ runs over elementary abelian subgroups of $G$ and $M$ runs over one-dimensional representations of $E$ over $R$.
We have ${\mathcal{I}}=D^b(RG)$. \end{cor}
\begin{proof} Assume first $G$ is an elementary abelian $p$-group. Let $L$ be a finitely generated $RG$-module. Consider a projective cover $f:P\to L$ and let $L'=\ker f$. The $R$-module $L'$ is free, so $L'$ is an extension of $RG$-modules that are free of rank $1$ as $R$-modules. So $L'\in{\mathcal{I}}$ and similarly $P\in{\mathcal{I}}$, hence $L\in{\mathcal{I}}$. As a consequence, the corollary holds for $G$ elementary abelian.
Assume now $G$ is a $p$-group that is
not elementary abelian. We proceed by induction on $|G|$. Let $L$ be a finitely generated $RG$-module. By induction, $\operatorname{Ind}\nolimits_H^G\operatorname{Res}\nolimits_H^G(L)\in{\mathcal{I}}$ whenever $H$ is a proper subgroup of $G$. Applying $L\otimes_{{\mathbf{Z}} G}-$ to the exact sequence of Theorem \ref{th:Serre}, we obtain an exact sequence $$0\to L\to \operatorname{Ind}\nolimits_{H_n}^G\operatorname{Res}\nolimits_{H_n}^G(L)\to\cdots\to\operatorname{Ind}\nolimits_{H_1}^G \operatorname{Res}\nolimits_{H_1}^G (L)\to L\to 0.$$ Since that sequence defines the zero class in $\operatorname{Ext}\nolimits^n(L,L)$, it follows that $L$ is a direct summand of $0\to \operatorname{Ind}\nolimits_{H_n}^G\operatorname{Res}\nolimits_{H_n}^G(L)\to\cdots\to\operatorname{Ind}\nolimits_{H_1}^G \operatorname{Res}\nolimits_{H_1}^G (L)\to 0$ in $D^b(RG)$. We deduce that $L\in{\mathcal{I}}$.
Finally, assume $G$ is a finite group. Let $P$ be a Sylow $p$-subgroup of $G$ and let $L$ be a finitely generated $RG$-module. We know that $\operatorname{Ind}\nolimits_P^G\operatorname{Res}\nolimits_P^G(L)\in{\mathcal{I}}$. Since $L$ is a direct summand of $\operatorname{Ind}\nolimits_P^G\operatorname{Res}\nolimits_P^G(L)$, we deduce that $L\in{\mathcal{I}}$. \end{proof}
\begin{rem} Corollary \ref{co:generation} implies a corresponding generation result for the stable category of $RG$. That was observed by the author in the mid 90s and communicated to J.~Carlson who wrote an account of this in \cite{Ca}. \end{rem}
\begin{thm}[Golod, Venkov, Evens] Let $G$ be a finite $p$-group. The ring $H^*(G)$ is finitely generated. Given $M$ a finitely generated $kG$-module, then $H^*(G,M)$ is a finitely generated $H^*(G)$-module. \end{thm}
Note that the case where $G$ is an arbitrary finite group follows easily, cf \cite{Ev}.
\begin{proof} Let $S$ be a finitely generated subalgebra of $H^{\mathrm{even}}(G)$ such that $H^*(E)$ is a finitely generated $S$-module for every elementary abelian subgroup $E$ of $G$. Such an algebra exists by Lemma \ref{le:onto}.
Let ${\mathcal{J}}$ be the full subcategory of $D^b(kG)$ of complexes $C$ such that the $S$-module $H^*(G,C)=\bigoplus_i\operatorname{Hom}\nolimits_{D^b(kG)}(k,C[i])$ is finitely generated.
Let $C_1\to C_2\to C_3\rightsquigarrow$ be a distinguished triangle in $D^b(kG)$. We have a long exact sequence $$\cdots\to H^i(C_1)\to H^i(C_2)\to H^i(C_3)\to H^{i+1}(C_1)\to\cdots$$ Assume $C_1,C_3\in{\mathcal{J}}$. Let $I$ be a finite generating set of $H^*(C_1)$ as an $S$-module and $J$ a finite generating set of $\ker(H^*(C_3)\to H^{*+1}(C_1))$ as an $S$-module. Let $I'$ be the image of $I$ in $H^*(C_2)$ and let $J'$ be a finite subset of $H^*(C_2)$ with image $J$. Then, $I'\cup J'$ generates $H^*(C_2)$ as an $S$-module, hence $C_2\in{\mathcal{J}}$.
Note that if $C\oplus C'\in{\mathcal{J}}$, then $C\in{\mathcal{J}}$. We deduce that ${\mathcal{J}}$ is a thick subcategory of $D^b(kG)$.
Let $E$ be an elementary abelian subgroup of $G$. Since $H^*(G,\operatorname{Ind}\nolimits_E^G(k))\simeq H^*(E,k)$ is a finitely generated $S$-module, we deduce that $\operatorname{Ind}\nolimits_E^G(k)\in{\mathcal{J}}$.
We deduce from Corollary \ref{co:generation} that ${\mathcal{J}}=D^b(kG)$. \end{proof}
\end{document}
December 2013, 6(4): 989-1009. doi: 10.3934/krm.2013.6.989
Remarks on the full dispersion Kadomtsev-Petviashvili equation
David Lannes 1, and Jean-Claude Saut 2,
DMA, Ecole Normale Supérieure et CNRS UMR 8553, 45 rue d'Ulm, 75005 Paris
Laboratoire de Mathématiques, UMR 8628, Université Paris-Sud et CNRS, 91405 Orsay, France
Received September 2013 Revised September 2013 Published November 2013
We consider in this paper the Full Dispersion Kadomtsev-Petviashvili Equation (FDKP) introduced in [19] in order to overcome some shortcomings of the classical KP equation. We investigate its mathematical properties, emphasizing the differences with the Kadomtsev-Petviashvili equation and their relevance to the approximation of water waves. We also present some numerical simulations.
Keywords: full dispersion, KP equation, weakly transverse waves, nonlocal dispersion, water waves, Whitham equation.
Mathematics Subject Classification: 35Q53, 35R11, 76B15, 76B4.
Citation: David Lannes, Jean-Claude Saut. Remarks on the full dispersion Kadomtsev-Petviashvili equation. Kinetic & Related Models, 2013, 6 (4) : 989-1009. doi: 10.3934/krm.2013.6.989
J. Albert, J. L. Bona and J.-C. Saut, Model equations for waves in stratified fluids, Proc. Royal Soc. London A, 453 (1997), 1233. doi: 10.1098/rspa.1997.0068.
D. Alterman and J. Rauch, The linear diffractive pulse equation, Cathleen Morawetz: A great mathematician, 7 (2000), 263.
B. Alvarez-Samaniego and D. Lannes, Large time existence for 3d water-waves and asymptotics, Invent. math., 171 (2008), 485. doi: 10.1007/s00222-007-0088-4.
W. Ben Youssef and D. Lannes, The long wave limit for a general class of 2D quasilinear hyperbolic problems, Comm. Partial Differential Equations, 27 (2002), 979. doi: 10.1081/PDE-120004892.
J. L. Bona, T. Colin and D. Lannes, Long-wave approximation for water waves, Arch. Ration. Mech. Anal., 178 (2005), 373. doi: 10.1007/s00205-005-0378-1.
A. de Bouard and J.-C. Saut, Solitary waves of generalized KP equations, Annales IHP Analyse non Linéaire, 14 (1997), 211. doi: 10.1016/S0294-1449(97)80145-X.
J. Bourgain, On the Cauchy problem for the Kadomtsev-Petviashvili equation, Geom. Funct. Anal., 3 (1993), 315. doi: 10.1007/BF01896259.
A. Castro, D. Córdoba and F. Gancedo, Singularity formation in a surface wave model, Nonlinearity, 23 (2010), 2835. doi: 10.1088/0951-7715/23/11/006.
A. Constantin and J. Escher, Wave breaking for nonlinear nonlocal shallow water equations, Acta Math., 181 (1998), 229. doi: 10.1007/BF02392586.
T. Colin and D. Lannes, Long-wave short-wave resonance for nonlinear geometric optics, Duke Math. J., 107 (2001), 351. doi: 10.1215/S0012-7094-01-10725-4.
M. Ehrnström and H. Kalish, Traveling waves for the Whitham equation, Diff. Int. Equations, 22 (2009), 1193.
M. Ehrnström, M. D. Groves and E. Wahlén, On the existence and stability of solitary-wave solutions to a class of evolution equations of Whitham type, Nonlinearity, 25 (2012), 2903. doi: 10.1088/0951-7715/25/10/2903.
R. L. Frank and E. Lenzmann, On the uniqueness and nondegeneracy of ground states of $(-\Delta)^s Q+Q-Q^{\alpha +1}=0$ in $\mathbb{R}$, (2010).
Z. Guo, L. Peng and B. Wang, Decay estimates for a class of wave equations, J. Funct. Analysis, 254 (2008), 1642. doi: 10.1016/j.jfa.2007.12.010.
B. B. Kadomtsev and V. I. Petviashvili, On the stability of solitary waves in weakly dispersing media, Sov. Phys. Dokl., 15 (1970), 539.
C. Klein and J.-C. Saut, Numerical study of blow-up and stability of solutions to generalized Kadomtsev-Petviashvili equations, J. Nonlinear Science, 22 (2012), 763. doi: 10.1007/s00332-012-9127-4.
C. Klein and J.-C. Saut, A numerical approach to blow-up issues for dispersive perturbations of the Burgers equation, in preparation.
C. Klein, C. Sparber and P. Markowich, Numerical study of oscillatory regimes in the Kadomtsev-Petviashvili equation, J. Nonl. Sci., 17 (2007), 429. doi: 10.1007/s00332-007-9001-y.
D. Lannes, The Water Waves Problem: Mathematical Theory and Asymptotics, Mathematical Surveys and Monographs, (2013).
D. Lannes, Consistency of the KP approximation, Dynamical systems and differential equations (Wilmington, NC, 2002), Discrete Cont. Dyn. Syst., (2003), 517.
D. Lannes and J.-C. Saut, Weakly transverse Boussinesq systems and the KP approximation, Nonlinearity, 19 (2006), 2853. doi: 10.1088/0951-7715/19/12/007.
F. Linares, D. Pilod and J.-C. Saut, Dispersive perturbations of Burgers and hyperbolic equations I: Local theory, (2013).
S. V. Manakov, V. E. Zakharov, L. A. Bordag and V. B. Matveev, Two-dimensional solitons of the Kadomtsev-Petviashvili equation and their interaction, Phys. Lett. A, 63 (1977), 205. doi: 10.1016/0375-9601(77)90875-1.
M. Ming, P. Zhang and Z. Zhang, Long-wave approximation to the 3-D capillary-gravity waves, SIAM J. Math. Anal., 44 (2012), 2920. doi: 10.1137/11084220X.
L. Molinet, On the asymptotic behavior of solutions to the (generalized) Kadomtsev-Petviashvili-Burgers equations, J. Diff. Eq., 152 (1999), 30. doi: 10.1006/jdeq.1998.3522.
L. Molinet, J.-C. Saut and N. Tzvetkov, Remarks on the mass constraint for KP type equations, SIAM J. Math. Anal., 39 (2007), 627. doi: 10.1137/060654256.
P. I. Naumkin and I. A. Shishmarev, Nonlinear Nonlocal Equations in the Theory of Waves, Translated from the Russian manuscript by Boris Gommerstadt, Translations of Mathematical Monographs, (1994).
J.-C. Saut, Remarks on the generalized Kadomtsev-Petviashvili equations, Indiana Univ. Math. J., 42 (1993), 1011. doi: 10.1512/iumj.1993.42.42047.
H. Takaoka and N. Tzvetkov, On the local regularity of Kadomtsev-Petviashvili-II equation, IMRN, 8 (2001), 77. doi: 10.1155/S1073792801000058.
S. Ukaï, Local solutions of the Kadomtsev-Petviashvili equation, J. Fac. Sci. Univ. Tokyo Sect. IA Math., 36 (1989), 193.
M. Weinstein, Existence and dynamic stability of solitary wave solutions of equations arising in long wave propagation, Commun. Partial Diff. Equ., 12 (1987), 1133. doi: 10.1080/03605308708820522.
M. Weinstein, Nonlinear Schrödinger equations and sharp interpolation estimates, Commun. Math. Phys., 87 (1983), 567.
G. B. Whitham, Variational methods and applications to water waves, Proc. R. Soc. Lond. A, 299 (1967), 6.
G. B. Whitham, Linear and Nonlinear Waves, Pure and Applied Mathematics, Wiley-Interscience [John Wiley & Sons], (1974).
Three-dimensional finite element modeling of ductile crack initiation and propagation
H. R. Javani, R. H. J. Peerlings & M. G. D. Geers
A crack initiation and propagation algorithm driven by non-local ductile damage is proposed in a three-dimensional finite strain framework. The evolution of plastic strain and stress triaxiality govern a non-local ductile damage field via constitutive equations. When the damage reaches a critical threshold, a discontinuity in the form of a crack surface is inserted into the three-dimensional continuum. The location and direction of the introduced discontinuity directly result from the damage field. Crack growth is also determined by the evolution of damage at the crack tip and the crack surface is adaptively extended in the computed direction. Frequent remeshing is used to computationally track the initiation and propagation of cracks, as well as to simultaneously maintain a good quality of the finite elements undergoing large deformations. This damage driven remeshing strategy towards fracture allows one to simulate arbitrary crack paths in three-dimensional evolving geometries. It has a significant potential for a wide range of industrial applications. Numerical examples are solved to demonstrate the ability of the proposed framework.
Controlling crack initiation and propagation is one of the important aspects in maintaining the integrity of an engineering structure. In some other cases, however, cracks are introduced on purpose. Examples can be found in forming processes such as cutting or blanking. Computational models are indispensable for the predictive analysis of the mechanics of ductile fracture. Algorithms for dealing with two-dimensional (2D) crack propagation problems are by now well established. However, at present, three-dimensional (3D) problems cannot be analyzed routinely, particularly if they are accompanied by large (plastic) strains. This is due to the complex topology and geometry changes, accompanied by localized deformation and material degradation. At the same time, full 3D modelling of cracks provides a more realistic prediction tool for studying true 3D structures, as well as local features like crack tunneling, e.g. [1].
There is an extensive literature on modelling cracks in general. They can either be modelled in a continuous way, by degrading and/or deleting elements, or by introducing a true discontinuity. A discontinuity can be implicitly modelled by element or nodal enrichment [2–14]. However, most of these methods are applicable only to small displacements and cannot be directly applied to large deformations. In a second category of discontinuous approaches, remeshing is used to explicitly model the discontinuity, i.e. by alignment of the mesh with the crack and nodal decoupling perpendicular to the crack [15–23].
Here we concentrate on the second category and extend a continuum damage mechanics approach to 3D crack initiation and propagation. Along these lines, Mediavilla et al. [22] suggested a continuous-discontinuous methodology for modeling cracks in 2D problems, in which the crack geometry is incorporated in the mesh by frequent remeshing. This algorithm is attractive especially when dealing with ductile failure, where large local deformations occur and remeshing is necessary even for the continuous part of the problem. Incorporating the additional geometrical changes due to crack growth then requires only a limited intervention in the algorithms used.
In this study, we develop an extension of Mediavilla et al.'s remeshing strategy to 3D problems in which damage growth and 3D crack propagation occur in a large deformation setting. Remeshing is used to deal with geometrical changes due to large deformations as well as crack growth [22]. Crack initiation and crack growth are governed by a continuum damage model which is intrinsically coupled to the underlying elasto-plastic constitutive model. The damage formulation is nonlocal (of the implicit gradient type) to ensure proper localization properties [24]. Once the damage reaches a critical level somewhere in the geometry, a discrete crack is introduced in the geometrical description of the body. This crack is extended when the damage field at its front becomes critical, whereby the orientation is governed by the direction of maximum nonlocal damage driving variable. As a result, no additional fracture criterion is required to control the crack growth. The crack surface is constructed by computing the propagation direction and distance for each node on the crack-front. By splitting the nodes along the crack surface, discontinuities are allowed along the element faces. Robustness of the simulations is ensured by temporarily applying the element internal forces as external forces on the crack nodes and gradually reducing them to zero.
In 3D, compared with the 2D case considered by Mediavilla et al. [22], the remeshing strategy which we follow presents a number of important additional computational challenges. (1) A reliable tetrahedral finite element is required to enable robust automatic remeshing of complex geometries. We adopt a bubble-enhanced mixed finite element formulation of the continuum model [25, 26]. (2) An accurate transfer operator is required to map history data from one mesh to the next. Here special precautions need to be taken to ensure consistency between the transferred fields [27, 28]. (3) Algorithms are needed to manipulate the 3D geometrical description of the problem upon initiation of a crack, as well as for every increment of crack growth. This is the main topic of the present paper.
The algorithm developed here is based on a geometrical description by a surface mesh, which is adapted according to the computed nonlocal damage field. To initiate a crack, elements with damage values higher than a critical limit are first identified. They form a cloud which is either completely inside the body or in contact with a surface. For internal clouds we use an averaging technique to compute the center of the cloud. This point is taken as the center of the emerging crack surface. Using the damage distribution, a plane is defined for inserting a discontinuity. For clouds which are in contact with an external boundary, a crack-front is constructed and this front is connected to the external surface by a discontinuity surface. When crack propagation is predicted by the damage evolution ahead of a crack, that part of the surface mesh which represents the crack faces is extended. For this the strategy followed in 2D by Mediavilla et al. [22] is applied in planes perpendicular to the crack front. Care needs to be taken to ensure the consistency of the crack front and to respect the outer surface of the body. At all stages of the simulation, the damage field is also used in order to refine the discretisation in critical regions of the geometry. We illustrate the methodology by showing two numerical examples, one illustrating crack initiation inside a body (i.e. a rectangular bar under tension) and one at the surface (of a double notched specimen).
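As a rough illustration of the plane-identification step for an internal cloud, the following sketch fits a damage-weighted plane through the over-critical integration points; the weighted centroid and the covariance-based normal are plausible choices rather than the exact implementation used here, and the function name and threshold argument are placeholders.

```python
import numpy as np

def crack_plane_from_cloud(points, damage, omega_crit=0.99):
    # Illustrative sketch, not the exact algorithm of the paper: select the points
    # whose damage exceeds the critical value and fit a plane through the cloud.
    mask = damage >= omega_crit
    p, w = points[mask], damage[mask]
    centre = (w[:, None] * p).sum(axis=0) / w.sum()        # damage-weighted centroid
    d = p - centre
    cov = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / w.sum()
    eigval, eigvec = np.linalg.eigh(cov)
    normal = eigvec[:, 0]          # direction of smallest weighted spread = plane normal
    return centre, normal
```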
This paper is structured as follows. In the next section, the continuum damage model, element technology, remeshing and transfer are briefly reviewed. We then first present the 3D crack propagation algorithm, since elements of it are used in the crack initiation algorithm, which is subsequently discussed for internal as well as surface cracks. After presenting two numerical examples, we conclude by highlighting the newly added features of the algorithm.
Continuum model and finite element discretisation
In this section we briefly review the coupled plasticity-damage continuum modelling, as well as its FEM implementation, which forms the basis of our developments. For a more detailed discussion we refer to Ref. [26] and references cited therein.
Fig. 1 3D cracked geometry with boundary conditions
Continuum nonlocal damage model
The balance of momentum can be expressed in terms of Kirchhoff stress \({\varvec{\tau }}\) as
$$\begin{aligned} \vec {\nabla } \cdot \left( \varvec{\tau }\frac{1}{J}\right)= & {} \vec {0} \end{aligned}$$
where \(\vec {\nabla }\cdot \) represents the divergence operator (with respect to the current configuration) and \(J=det(\mathbf {F})\) is the volume change factor. The following boundary condition is applied on the free surfaces of the body considered and, in particular, also on the crack surfaces, see Fig. 1:
$$\begin{aligned} \vec {t} = \vec {n} \cdot \frac{{\varvec{\tau }}}{J}= & {} \vec {0} \end{aligned}$$
The Kirchhoff stress is related to the elastic deformation via the effective Kirchhoff stress tensor \(\varvec{\hat{\tau }}\) as follows:
$$\begin{aligned}&\varvec{\hat{\tau }}= \frac{ {\varvec{\tau }} }{(1-\omega _{p})}\end{aligned}$$
$$\begin{aligned}&\varvec{\hat{\tau }}=\frac{1}{2}{}^4\mathbf {H}:\ln \mathbf {b}_{e}\end{aligned}$$
$$\begin{aligned}&\mathbf {b}_{e}=\mathbf {F}_{e} \cdot \mathbf {F}_{e}^{T} \end{aligned}$$
where \({\mathbf {F}}_{e}\) is elastic part of the multiplicatively split deformation gradient \(\mathbf {F}\) and \({}^4\mathbf {H}\) is the standard fourth order elasticity tensor; \(\varvec{\hat{\tau }}\) is the effective stress tensor due to the presence of the (isotropic) damage characterised by \(\omega _{p}\).
Expressed in terms of the effective stress space, the elastic domain is bounded by the following equation:
$$\begin{aligned} \phi (\varvec{\hat{\tau }},\hat{\tau }_{y})=\hat{\tau }_{eq}-\hat{\tau }_{y} \le 0 \end{aligned}$$
where \(\hat{\tau }_{eq}=\sqrt{ \frac{3}{2}\varvec{\hat{\tau }}^{d}~:~\varvec{\hat{\tau }}^{d} }\) and \(\hat{\tau }_y\) is the current yield stress. \(J_{2}\) associative flow theory is used to model the plastic response of the material [29].
The evolution of the damage variable \(\omega _{p}\) is governed by the following equations:
$$\begin{aligned}&\dot{z} =~h_{z}\dot{\varepsilon }_{p}\end{aligned}$$
$$\begin{aligned}&h_{z}~=~\left\langle 1~+~A\frac{\hat{\tau }_{h}}{\hat{\tau }_{eq}}\right\rangle ~\varepsilon ^{B}_{p}\quad \text {with}\, \langle x \rangle ~= \left\{ \begin{array}{l@{\quad }l} x, &{} x>0\\ 0, &{} x\le 0 \end{array} \right. \end{aligned}$$
$$\begin{aligned}&\bar{z}-\ell ^{2}\nabla ^{2} \bar{z}~=~z,\quad \vec {\nabla }\bar{z}.\vec {n}~=~0\end{aligned}$$
$$\begin{aligned}&\dot{\kappa }~\ge ~0,\quad \bar{z}-\kappa \le 0,\quad \dot{\kappa }(\bar{z}-\kappa )=0\end{aligned}$$
$$\begin{aligned}&\dot{\omega }_{p}~=~h_{\omega }\dot{\kappa } \end{aligned}$$
In these relations, z is a local damage driving variable, the evolution of which depends on the effective plastic strain $\varepsilon _{p}$ and the (effective) stress triaxiality $\hat{\tau }_{h}/\hat{\tau }_{eq}$; A and B are material constants as introduced by Goijaerts et al. [30]. Equations (9) and (10) use the local damage driving variable z together with a Neumann boundary condition (with normal $\vec {n}$) to compute a nonlocal damage driving variable $\bar{z}$, which controls the damage evolution. The use of this nonlocal quantity is necessary to regularise the localisation of deformation and damage, which would otherwise become pathological [31]. The final link to the damage evolution is made via a history variable, $\kappa $, and the evolution law (11).
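As an illustration of how these evolution equations could be integrated at a single material point, the sketch below performs a schematic explicit update; the function, its argument list and the explicit time integration are assumptions made here, and since the form of $h_{\omega }$ is not specified in this excerpt it is simply passed in as a value. The nonlocal value $\bar{z}$ is assumed to come from the Helmholtz-type averaging equation solved at the structural level.

```python
def update_damage_point(z, kappa, omega_p, zbar, deps_p, eps_p,
                        tau_h, tau_eq, A, B, h_omega):
    """Schematic explicit update of the local damage driving variable z, the
    history variable kappa and the damage omega_p at one material point."""
    triax = tau_h / tau_eq if tau_eq > 0.0 else 0.0
    h_z = max(1.0 + A * triax, 0.0) * eps_p ** B    # Macaulay bracket times eps_p^B
    z = z + h_z * deps_p                            # rate equation for z (explicit)
    if zbar > kappa:                                # Kuhn-Tucker loading condition
        omega_p = min(omega_p + h_omega * (zbar - kappa), 1.0)   # damage evolution
        kappa = zbar
    return z, kappa, omega_p
```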
Finite element form of the equations
In order to avoid locking effects due to isochoric plastic straining, the above constitutive model is implemented using a mixed formulation in a tetrahedral element [25, 26]. Therefore, the constitutive law of Eq. (4) is split into a mixed pressure/deviatoric form by decomposing the effective stress tensor as $\varvec{\hat{\tau }}=\hat{\tau }^{h}\mathbf {I}+ \varvec{\hat{\tau }}^{d}$. The weak forms of Eq. (1), the volumetric elastic law and Eqs. (9) and (10) are then obtained by the usual weighted residuals arguments as:
$$\begin{aligned} \int _\Omega (\vec {\nabla } \vec {\varphi } )^{T}:\left( \tau ^{h}\mathbf {I}+ {\varvec{\tau }}^{d}\right) \frac{1}{J}~d\Omega= & {} \int _{S} \vec {\varphi }\cdot \vec {t}d S \nonumber \\ \int _\Omega \psi \left( \hat{\tau }^{h} - \frac{1}{2}K \mathbf {I} : \ln \mathbf {b}_{e}\right) ~d\Omega= & {} 0 \\ \int _\Omega \left( \chi \bar{z}+ \ell ^{2}\vec {\nabla }\chi \cdot \vec {\nabla }\bar{z}-\chi z \right) ~d\Omega= & {} 0\nonumber \end{aligned}$$
where \(\vec {\varphi }\), \(\psi \) and \(\chi \) are weight functions corresponding to \(\vec {u}\), \(\hat{\tau }^{h}\) and \(\bar{z}\).
It is well known that the weak form of the governing Eqs. (12), when used with equal order interpolation for the hydrostatic stress $\hat{\tau }^{h}$ and the displacement $\vec {u}$, is not stabilised. Stabilisation is performed by enriching the standard displacement with a displacement bubble, similar to the well-known Mini element. The simplified version of this enrichment results in an enhanced strain so that the resulting algorithmic stress tensor reads [26]
$$\begin{aligned}&{} ^{m}\varvec{\hat{\tau }}^{d} = \varvec{\hat{\tau }}^{d} + ^{4}\mathbf {H}^{d} : \alpha {\varvec{\varepsilon }}_{b} \nonumber \\&{} ^{m}\hat{\tau }^{h} = \hat{\tau }^{h} + K\mathbf {I} : {\varvec{\varepsilon }}_{b} \end{aligned}$$
where \(\alpha \) is an element dependent stabilisation factor and \(\varvec{\varepsilon }_{b}\) denotes the symmetric part of the gradient of the bubble displacement:
in which the column matrix contains one bubble shape function per element. Note that \(\varvec{\varepsilon }_{b}\) uses a small strain definition with respect to the deformed configuration given by the conventional part of the displacement interpolation. This conventional part, however, is fully objective and rigorously deals with large strains.
Details of the discretisation of the weak forms (12) and their linearization are omitted here; see Ref. [26] for a detailed derivation. Here, we merely summarize the resulting set of nonlinear algebraic equations for future reference. The mixed formulation of equilibrium, including the bubble stabilisation, results in a combined set of equations which can be written as:
Likewise, the nonlocal Eqs. (9) and (10) result in an additional set of equations as follows:
In the above, we have
Equation (15.2) is the result of the introduced enrichment in Eq. (13). This equation is solved at the element level for the bubble displacements and therefore does not lead to additional global degrees of freedom.
Remeshing
Our strategy to deal with 3D crack growth, as well as the large deformations which we wish to model, necessitates frequent remeshing on a global level. After a predefined number of increments, the surface mesh of the 3D body is extracted from the model. If crack growth is detected, the surface mesh must be modified to incorporate a new crack segment, see the next section. Otherwise, the existing surface mesh is used as input to the 3D tetrahedral mesh generator TetGen [32], together with an indicator field for the desired element size. The remeshing aims to produce smaller elements in areas where the damage evolves significantly and larger elements in undamaged regions or regions where the damage growth has stopped. The damage rate is used as a pointwise indicator to set the element size. Elements with the largest damage rate have the smallest volume and vice versa. For the intermediate damage rates, the volume of the elements is interpolated between the maximum and minimum values, proportional to their damage rate.
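A minimal sketch of how such a damage-rate-based size indicator could be realised is given below; the function name, the linear interpolation of the edge length and the handling of a uniform field are illustrative assumptions (the text above prescribes element volumes proportional to the damage rate, which would correspond to a different scaling).

```python
import numpy as np

def target_element_size(damage_rate, h_min, h_max):
    # Points with the highest damage rate get the smallest target size h_min,
    # undamaged points get h_max, with linear interpolation in between.
    rate = np.asarray(damage_rate, dtype=float)
    span = rate.max() - rate.min()
    s = (rate - rate.min()) / span if span > 0.0 else np.zeros_like(rate)
    return h_max - s * (h_max - h_min)
```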
To avoid an overly refined discretisation, the triangular surface mesh is coarsened wherever element edges become smaller than a predefined value. Mesh decimation is done using an edge collapse technique [33, 34]. In each step, the shortest edge of the surface triangles (if shorter than a predefined value) is collapsed by unifying the two adjacent vertices (a and b in Fig. 2). Vertex a and the two adjacent faces vanish from the topology. Vertex b is moved to a new position c which is the midpoint between a and b. After collapsing an edge, we measure the dihedral angle between the neighboring newly produced faces and if overlapping occurs, the edge collapse is canceled and the original surface is recovered. This process is repeated until the desired coarsened surface is produced.
Edge \(a-b\) is collapsed to a middle point, node c
After remeshing, data which are available on the Gauss points of the old mesh are transferred to the Gauss points of the new mesh. For this we first use global smoothing, i.e. continuously interpolated, piecewise linear fields are determined which fit the integration point data best in a least squares sense. These data are subsequently interpolated at the new nodal coordinates and finally, using the element shape functions, the new Gauss point data are retrieved. In order to ensure a robust transfer, only a minimum set of data is transferred and the remaining data are reconstructed using the constitutive equations. This operation, which is an indispensable ingredient of the remeshing algorithm, is explained in detail in Ref. [28]. After transfer and reconstruction, balancing iterations are done to restore global equilibrium in the finite element sense. Since these iterations are not representative for any physical deformation, the material behavior is assumed to be elastic in order to guarantee convergence. Finally, because of the elastic nature of the balancing iterations, it is checked if the stress state obtained by them is on or inside the yield surface; otherwise the yield surface is corrected to restore yield consistency.
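The least-squares ('global smoothing') step can be sketched as follows; this is schematic, assuming a single element type with precomputed shape-function values and integration weights, and the subsequent interpolation onto the new mesh is only indicated in the closing comment.

```python
import numpy as np

def smooth_to_nodes(conn, N_gp, w_gp, q_gp, n_nodes):
    # conn[e]: node numbers of element e; N_gp[g]: shape functions at Gauss point g;
    # w_gp[g]: quadrature weight (times Jacobian, taken constant here for simplicity);
    # q_gp[e, g]: integration-point datum to be transferred.
    M = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for e, nodes in enumerate(conn):
        for g, N in enumerate(N_gp):
            M[np.ix_(nodes, nodes)] += w_gp[g] * np.outer(N, N)
            b[nodes] += w_gp[g] * N * q_gp[e, g]
    return np.linalg.solve(M, b)    # best-fitting continuous, piecewise linear nodal field

# On the new mesh, Gauss-point values follow by shape-function interpolation:
# q_new[e, g] = N_new[g] @ q_nodes[conn_new[e]]
```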
Crack propagation
In computational fracture mechanics, a critical stress intensity factor or J-integral value is typically used as a criterion for crack growth. In addition, a maximum hoop stress (MHS), minimum strain energy density or maximum energy release rate criterion is used to determine the crack growth direction. A different approach is employed in this study, where the evolution of the continuum damage variable, \(\omega _{p}\), governs the propagation of a crack. This has the advantage that crack initiation and propagation can be dealt with using the same (continuum) equations and no separate fracture criteria are necessary. Once the crack has been initiated, it follows the damage evolution ahead of it wherever the damage has become critical, i.e. \(\omega _{p}=1\). This concept has been successfully applied to 2D crack growth simulations in shear dominated problems like blanking [17, 22].
This section summarizes the required steps for extending these algorithms to 3D problems. In 2D, the crack-front is a point, whereas in 3D it is a curve. For each node lying on this curve, a growth direction is determined in a plane perpendicular to the front. By using the nonlocal damage driving variable field in this plane, a direction vector is computed for all nodes lying on the crack-front. Using all these vectors, the extended crack surface is constructed. We discuss the numerical treatment of crack initiation in the next section, since it employs concepts developed here for crack propagation.
Crack propagation direction and distance
Contrary to the 2D case, where a crack ends in a point called the crack tip, here it is delimited by a curve, the crack-front. The crack-front is either a closed loop or it has two ends called the crack-front corners, see Fig. 3.
Fig. 3 3D curved crack-front
At each converged loading increment, the damage at a point lying on the crack-front is compared to the critical damage, $\omega _{p}^{c}=0.99$, on the basis of which the crack is extended (or not). This value has been found to be sufficiently close to the theoretical value of $\omega _p^c = 1$ to ensure that most of the energy dissipation due to damage growth has taken place and the stress level has dropped sufficiently for it not to be affected significantly by the insertion of a new crack segment. For a detailed study of the influence of these numerical parameters, in 2D, we refer to Ref. [22].
Using a tetrahedral discretisation of the 3D geometry, crack-front points coincide with finite element nodes. The damage variable is extrapolated from Gauss points to these nodes using a global smoothing procedure, i.e. a continuous, piecewise linear field is determined which fits the integration point data best in a least squares sense. The crack is predicted to grow over a distance which depends on the damage field ahead of the considered crack node, and its direction is evaluated differently at the crack-front compared to the crack-front corners. Both cases are therefore explained separately below, followed by the distance by which the crack is extended.
Propagation of a crack-front node
For each crack-front node, a corresponding growth direction and distance must be determined. For each node, a reference plane is defined in which the direction and distance of the crack growth will be computed. The tangent to the crack-front at the desired point, o in Fig. 4, is used as the normal to this reference plane. This normal is determined from the discretised crack-front as follows.
Fig. 4 Crack propagation for crack-front vertex o: a crack-front tangent vector $\vec {n}$, b reference plane $\Pi $ normal to vector $\vec {n}$, c maximum nonlocal damage driving variable direction in plane $\Pi $
As shown in Fig. 4a, for the crack-front point o, the vectors \(\vec {v}_{1}\) and \(\vec {v}_{2}\) are the vectors connecting the considered crack-front vertex to its neighboring vertices in the discretised geometry. The tangent vector is then computed as
$$\begin{aligned} \vec {n} = \frac{ \vec {v}_{1} / \Vert \vec {v}_{1} \Vert -\vec {v}_{2} / \Vert \vec {v}_{2} \Vert }{ \left\| \vec {v}_{1} / \Vert \vec {v}_{1} \Vert -\vec {v}_{2} / \Vert \vec {v}_{2} \Vert \right\| } \end{aligned}$$
where \(\Vert \vec {v}\Vert \) is the \(L^{2}\) norm of a vector \(\vec {v}\). Having obtained the (normal to the) reference plane for each node reduces the problem to a 2D crack propagation (direction and distance) problem, similar to the one dealt with by Mediavilla et al. [22]. The reference plane intersects the crack faces along two curves as shown in Fig. 4c. Motivated by the 2D procedure of Mediavilla et al. [22], the nonlocal damage driving variable \(\bar{z}\) is sampled in N points in a semi-circle located in the reference plane. A comparison has shown that using the nonlocal damage driving variable instead of the damage variable as used by Mediavilla et al. [22] avoids abrupt changes in the crack growth direction due to small local (numerical) variations between adjacent nodes. Vectors \(\vec {d}_{1}\) and \(\vec {d}_{2}\) in Fig. 4c are obtained from the intersection of the reference plane with the tetrahedral crack face edges of the discretised geometry. These two vectors are used to compute the vector \(\vec {d}\) that sets the central direction of the considered semi circle via
$$\begin{aligned} \vec {d} = -\frac{ \vec {d}_{1} / \Vert \vec {d}_{1} \Vert +\vec {d}_{2} / \Vert \vec {d}_{2} \Vert }{\left\| \vec {d}_{1} / \Vert \vec {d}_{1} \Vert +\vec {d}_{2} / \Vert \vec {d}_{2} \Vert \right\| } \end{aligned}$$
The position of a sampling point with respect to the crack-front vertex is given by the vector
$$\begin{aligned} \vec {r}_{ij} = r_{i} \cos (\theta _{j})~\vec {d} + r_{i}\sin (\theta _{j})\left( \vec {n}\times \vec {d}\right) , \quad -\frac{\pi }{2}< \theta _{j} < \frac{\pi }{2} \end{aligned}$$
where four radii \(r_{1}=\frac{3}{4}\Delta a,~r_{2}=\Delta a,~r_{3}=\frac{3}{2}\Delta a,~r_{4}=2\Delta a\) are used. \(\Delta a\) is the maximum crack growth distance which is typically chosen to be a few times the smallest element edge. The nonlocal variable is sampled in N different angles ranging from \(-\frac{\pi }{2}\) to \(\frac{\pi }{2}\). For each \(r_{i}\) an angle \(\theta _{i}\) is defined as (Fig. 4c)
$$\begin{aligned} \theta _{i}=\arg ~\max _{\theta _{j}}~~ \bar{z}(r_{i},\theta _{j}) \end{aligned}$$
\(\theta _{i}\) thus represents the angle at which the nonlocal damage driving variable has its maximum at a given distance \(r_{i}\). Using \(\theta _{i}\) and Eq. (20), the crack growth direction vector \(\vec {r}_{i}\) is obtained for each sampling distance. In order to ensure that the crack direction does not fluctuate due to local variations, the obtained crack growth direction vectors are averaged, yielding the following crack propagation direction \(\vec {R}\) for that node.
$$\begin{aligned} \begin{aligned}&\vec {r}_{avg} = \frac{ \vec {r}_{1} }{ \left\| \vec {r}_{1} \right\| } + \frac{ \vec {r}_{2} }{ \left\| \vec {r}_{2} \right\| } + \frac{ \vec {r}_{3} }{ \left\| \vec {r}_{3} \right\| } + \frac{ \vec {r}_{4} }{ \left\| \vec {r}_{4} \right\| } \\&\vec {R} = \frac{\vec {r}_{avg}}{\left\| \vec {r}_{avg} \right\| } \end{aligned} \end{aligned}$$
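Collecting Eqs. (18) to (22), the direction computation for a single crack-front vertex can be sketched as follows; this is illustrative, and zbar_at stands for an assumed helper that evaluates the nonlocal damage driving variable at a point through the finite element interpolation.

```python
import numpy as np

def growth_direction(o, v1, v2, d1, d2, zbar_at, delta_a, n_angles=21):
    # o: crack-front vertex; v1, v2: vectors to the neighbouring front vertices;
    # d1, d2: vectors along the crack faces in the reference plane; zbar_at(x): nonlocal field.
    unit = lambda v: v / np.linalg.norm(v)
    n = unit(unit(v1) - unit(v2))                     # front tangent, Eq. (18)
    d = -unit(unit(d1) + unit(d2))                    # central direction, Eq. (19)
    t = np.cross(n, d)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_angles)[1:-1]   # open interval
    r_avg = np.zeros(3)
    for r in (0.75 * delta_a, delta_a, 1.5 * delta_a, 2.0 * delta_a):
        dirs = np.array([np.cos(th) * d + np.sin(th) * t for th in thetas])  # Eq. (20)
        vals = [zbar_at(o + r * e) for e in dirs]
        r_avg += dirs[int(np.argmax(vals))]           # direction of max zbar, Eq. (21)
    return unit(r_avg)                                # averaged direction, Eq. (22)
```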
Propagation of a crack-front corner
Crack-front corners are the crack-front nodes located on the outer surface of the body. In order to ensure that the crack-front corner remains on the outer surface, the direction of crack growth has to be identified from the distribution of the nonlocal damage driving variable on this outer surface. Note that this surface is not necessarily planar and its geometry is available only in terms of the triangular surface mesh. The crack direction is computed in a similar fashion as for crack-front vertices, albeit on the discretised outer surface rather than the plane \(\Pi \). Instead of a semi-circular set of sampling points in the plane \(\Pi \), we therefore consider a set of planes intersecting the outer surface of the body to establish the potential growth directions. Each of these planes contains the crack-front corner node and has a normal \(\vec {n}_{j}\), see Figs. 5 and 6. To determine \(\vec {n}_{j}\), we first define the corner vector \(\vec {d}_{c}\) according to Eq. (19), where \(\vec {d}_{1}\) and \(\vec {d}_{2}\) are now the vectors along the element edge at the intersection of the outer surface and the two faces of the crack (Fig. 5a). We also define a corner vector \(\vec {m}_{c}\) perpendicular to vectors \(\vec {d}_{1}\) and \(\vec {d}_{2}\):
$$\begin{aligned} \vec {m}_{c} = \frac{ \vec {d}_{1} \times \vec {d}_{2} }{ \left\| \vec {d}_{1} \times \vec {d}_{2} \right\| } \end{aligned}$$
Crack propagation for the crack-front corner c: a element edges of the outer surface and their normal, \(\vec {m}_{c}\); b reference plane \(\Omega _{j}\) normal to vector \(\vec {n}_{j}\) at \(\theta =0\); c four radii of sampling locations
Intersection of the final crack extension direction plane with outer surface and addition of the new nodes
Finally vectors \(\vec {d}_{c}\) and \(\vec {m}_{c}\) are used to compute a set of plane normals as follows
$$\begin{aligned} \vec {n}_{j} = \cos (\theta _{j})\vec {d}_{c} + \sin (\theta _{j})( \vec {m}_{c}\times \vec {d}_{c}), \quad -\frac{\pi }{2}< \theta _{j} < \frac{\pi }{2} \end{aligned}$$
\(\Omega _{j}\) is the plane defined by the crack-front corner and the normal vector \(\vec {n}_{j}\). Figure 5b shows this plane for \(\theta =0\). N different angles \(\theta _{j}\), ranging from \(-\pi /2\) to \(\pi /2\), are selected. A piecewise linear curve is obtained for each of these planes by intersecting it with the outer surface. Along these curves the nonlocal damage driving variable \(\bar{z}\) is sampled at four different distances \(r_{i}\), measured along the piecewise linear curve. For each \(r_{i}\) (the same sampling distances are used as in the previous section, but now relative to the crack-front corner) there exists a plane, with normal vector \(\vec {n}_i\), on whose intersection line with the external surface \(\bar{z}\) attains its maximum; cf. Eq. (21). Finally, using Eq. (22), the average of these normals constitutes the growth direction for the crack-front corner.
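The corner case can be sketched in the same spirit. The routine below only constructs the candidate plane normals defined above; the intersection of each plane with the triangulated outer surface and the sampling of \(\bar{z}\) along the resulting polyline are abstracted into the callable `z_bar_along`, which is an assumption of this sketch rather than part of the paper's code.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def corner_growth_direction(d1, d2, z_bar_along, delta_a, N=50):
    """Candidate growth direction at a crack-front corner (sketch).

    z_bar_along(n_j, r_i) must return the nonlocal damage sampled at a
    distance r_i along the intersection of plane Omega_j (through the
    corner, with normal n_j) and the outer surface.
    """
    d_c = -unit(unit(d1) + unit(d2))   # corner vector, Eq. (19) applied to d1, d2
    m_c = unit(np.cross(d1, d2))       # normal to both crack-face edge vectors

    radii = np.array([0.75, 1.0, 1.5, 2.0]) * delta_a
    thetas = np.linspace(-np.pi / 2, np.pi / 2, N)
    normals = [np.cos(t) * d_c + np.sin(t) * np.cross(m_c, d_c) for t in thetas]

    avg = np.zeros(3)
    for r_i in radii:
        values = [z_bar_along(n_j, r_i) for n_j in normals]
        avg += unit(normals[int(np.argmax(values))])
    return unit(avg)   # growth direction assigned to the corner node
```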
Directional smoothing
Having obtained the averaged growth direction for all crack-front nodes and corners independently, these directions are again smoothed (relative to the neighboring ones) in order to damp possible numerically induced crack roughness. The direction vector of a node k on the crack-front is combined with that of the adjacent nodes using the following smoothing operation:
$$\begin{aligned} \vec {r}_{k}:=&\left( \vec {r}_{k-1}+2\vec {r}_{k}+\vec {r}_{k+1}\right) /4 \nonumber \\ \vec {R}_{k} =&\frac{\vec {r}_{k}}{\Vert \vec {r}_{k}\Vert } \end{aligned}$$
This filtering is only applied to the crack-front nodes and not the corners.
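As an illustration, the 1-2-1 filter of Eq. (25) can be applied to the ordered list of direction vectors as follows; the assumption that the crack-front nodes are stored in order along the front, with the two corners at the ends, is ours.

```python
import numpy as np

def smooth_directions(R):
    """1-2-1 smoothing of crack-front direction vectors, cf. Eq. (25).

    R is an (n, 3) array of unit direction vectors ordered along the
    crack-front; R[0] and R[-1] are the corners and are left untouched.
    """
    R = np.asarray(R, dtype=float)
    out = R.copy()
    for k in range(1, len(R) - 1):
        r = (R[k - 1] + 2.0 * R[k] + R[k + 1]) / 4.0
        out[k] = r / np.linalg.norm(r)
    return out
```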
Growth distance
Smoothing the crack growth direction paves the way for determining the growth distance. At each node k at which the critical damage value \(\omega _{p}^{c}\) is exceeded, the crack is assumed to grow in the computed direction over a distance \(L_{k}\), until the damage drops below \(\omega _{p}=0.97~\omega _{p}^{c}\). To obtain a smoother crack surface, and thus more stable (re)meshing and computation, we furthermore set a minimum and maximum growth distance as follows: \(L_{min}~=~0.1~\Delta a \); \(L_{max}~=~\Delta a\). This implies that for a point \(p_{k}^{o}\) on the old crack-front, the corresponding position on the new crack-front \(p_{k}^{n}\) is obtained as follows:
$$\begin{aligned} \overrightarrow{p_{k}^{o}p_{k}^{n}} = L_{k}\vec {R}_{k} \end{aligned}$$
Before constructing the crack surface, and although the crack direction has already been smoothed in Eq. (25), the new crack-front is smoothed further by filtering all of its positions as follows
$$\begin{aligned} p_{k}^{n}:= \left( p_{k-1}^{n}+2p_{k}^{n}+p_{k+1}^{n}\right) /4 \end{aligned}$$
Note that this filtering influences mainly the obtained growth distances and has little influence on the direction, which was already smoothed in (25).
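The combination of the clamped growth distance and the subsequent filtering of the new front positions can be sketched as follows. The routine `growth_distance`, which walks along the growth direction until the damage drops below \(0.97~\omega _{p}^{c}\), is a placeholder and not part of the original implementation.

```python
import numpy as np

def new_crack_front(p_old, R, growth_distance, delta_a):
    """New crack-front positions with clamped growth distances (sketch).

    p_old           : (n, 3) old crack-front positions
    R               : (n, 3) smoothed unit growth directions
    growth_distance : callable(p, r) -> distance along r until the damage
                      drops below 0.97 * omega_p_crit (placeholder)
    """
    L_min, L_max = 0.1 * delta_a, delta_a
    L = np.array([np.clip(growth_distance(p, r), L_min, L_max)
                  for p, r in zip(p_old, R)])
    p_new = p_old + L[:, None] * R

    # 1-2-1 filtering of the interior new-front positions (corners untouched)
    p_filt = p_new.copy()
    for k in range(1, len(p_new) - 1):
        p_filt[k] = (p_new[k - 1] + 2.0 * p_new[k] + p_new[k + 1]) / 4.0
    return p_filt
```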
Construction of the new crack surface
The propagation direction and distance have now been computed for all nodes on the crack-front. The next step is to construct a new segment of the crack surface, along which the crack will be opened. First, the intersection of the new crack segment with the outer surface is determined. This procedure, which is shown schematically in Fig. 6, ensures that the crack surface remains properly connected to the outer surface.
In order to modify the surface, the computed crack extension direction plane for the crack-front corner is intersected with the triangular outer surface elements. Starting from the old crack-front corner, surface elements are split along the direction plane until the predicted growth distance has been reached. Triangle edges which are cut by the direction plane are split by adding a node and the triangle is divided into two triangles, see Fig. 6. If the intersection point is within a certain distance (namely a tolerance which here is 0.1 times the element edge) of an existing edge or node, the node or edge is mapped onto the crack direction plane. This avoids the creation of excessively small surface elements. If the crack extension direction exactly passes through a node or an already available edge, then no modification is made. This process is repeated until the predicted growth distance is reached. If the new crack-front corner is inside a triangle, then this triangle is divided into three triangles and the node is stored as the new crack-front corner.
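A minimal sketch of the edge-splitting test used when the crack-extension plane cuts an outer-surface triangle is given below; the full procedure (walking from the old crack-front corner, re-triangulating and storing the new corner) is not reproduced, and the 0.1 tolerance is the one quoted above.

```python
import numpy as np

def split_edge_by_plane(p0, p1, plane_point, plane_normal, tol=0.1):
    """Intersect one triangle edge with the crack-extension plane (sketch).

    Returns ("none", None) if the edge is not cut, ("snap_p0"/"snap_p1",
    projected node) if the cut lies within tol times the edge length of an
    existing node, or ("new_node", x) if the edge is genuinely split and
    the triangle has to be divided in two.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d0 = float(np.dot(p0 - plane_point, n))
    d1 = float(np.dot(p1 - plane_point, n))
    if d0 * d1 > 0.0 or d0 == d1:      # same side, or edge parallel / in plane
        return "none", None
    t = d0 / (d0 - d1)                 # parametric location of the cut
    x = p0 + t * (p1 - p0)
    edge_len = np.linalg.norm(p1 - p0)
    if np.linalg.norm(x - p0) < tol * edge_len:
        return "snap_p0", p0 - d0 * n  # map the existing node onto the plane
    if np.linalg.norm(x - p1) < tol * edge_len:
        return "snap_p1", p1 - d1 * n
    return "new_node", x
```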
The new crack-front is now obtained using the two new crack-front corner nodes on the boundary and the propagation direction and distance of the old crack-front nodes in the body's interior. Modifying the outer surface and computing the new crack-front paves the way for the reconstruction of the crack surface. This is done by triangulation of the 3D surface which has the old crack-front, the new crack-front and the two crack-front corner traces as its boundaries, see Fig. 7.
Crack surface construction
There are some special cases where the above mentioned algorithm cannot be directly applied. One case is when two crack direction vectors are crossing each other. In this case, the points at which these directional vectors are pointing are swapped. Figure 8a shows how the directional vectors of \(p_{1}^{o}\) and \(p_{2}^{o}\) are crossing and their corresponding points \(p_{1}^{n}\) and \(p_{2}^{n}\) in Fig. 8b are swapped. Another case is when a vector ends outside of the (discretised) geometry. In this case this vector is discarded and the average crack vector of its neighboring nodes is used instead.
Crack direction vectors which cross each other are corrected
Meshing of the new geometry
The constructed crack surface based on the crack propagation distances and directions is now used to discretise the geometry. The geometrical description consists of the outer surface of the volume, possibly including parts of the already existing crack surface, and an inner surface which defines the new crack growth segment. The new crack surface is treated as an internal boundary by the mesher, so that tetrahedral elements are generated on both sides of the surface without intersecting it.
In order to properly model the opening of the crack surface, a topological data structure is needed. This data structure is built using the connectivity of the elements and geometry of the discretised domain. Using this data structure, the elements connected to each node and their position with respect to the crack surface are identified. Details of this data structure are given in the Appendix.
Crack opening
The mechanical insertion of the new crack surface is done by splitting the nodes generated on the new crack surface by the volumetric mesher. This implies that for each node, a corresponding node with the same coordinates is generated. The nodal connectivity of elements located on both sides of the crack is preserved, whereby the new node is used for the connectivity of the elements for one of the sides.
The two newly created surfaces are temporarily tied together by creating a dependency between their displacement degrees of freedom. While the crack is still closed, data from the last converged state is consistently transferred to the new mesh. Elastic equilibrium iterations are performed in order to recover global consistency. During this iterative process the closed crack is treated as a new surface for Eqs. (15.3) and (16). However, the degrees of freedom for the pressure and the nonlocal damage driving variable are not tied. This improves the stability of the simulation in the sense that the residual forces related to these two equations become zero in the elastic equilibrium iterations and artificial damage growth is prevented. This artificial damage growth, which is observed if all degrees of freedom are tied, may be caused by the sudden change in the boundary conditions for the nonlocal averaging Eqs. (9) and (10).
Since the new crack surface is kept closed during the elastic equilibrium iterations using displacement tyings acting on the crack faces, a reaction force appears on these nodes. To mechanically open the crack, these reaction forces are first applied as external forces when the tyings are removed, and they are subsequently gradually released in a number of sub-steps, see Fig. 9 for an illustration (in which \(\vec {f}_{A}\) and \(\vec {f}_{B}\) represent these forces for one particular couple of nodes) and Ref. [22] for a more detailed description.
Crack initiation
Having established the algorithms to deal with crack propagation, we now turn our attention to the initiation of cracks based on the computed damage field. In a continuum damage mechanics approach, a crack is initiated when the scalar damage \(\omega _\mathrm {p}\) reaches a critical threshold. At this point in time, an already degenerated (softened) continuum material is split locally by introducing a discontinuity. For this purpose, the simulation is stopped, an initial crack surface is generated and, together with the outer skin of the geometry, is given as input to the 3D mesh generator. Cracks can start either inside the body (not connected to the outer surface) or from the geometry's surface. Each of these situations is addressed separately in the following sections.
Crack opening by gradually reducing the tying forces between corresponding nodes on the two crack faces
Internal crack initiation
The initiation points for cracks are the locations where the damage exceeds a predefined critical magnitude. To identify these points, all elements with damage values higher than the critical value are extracted. They constitute a 3D cloud of elements which are not necessarily interconnected. Groups of interconnected elements that are connected to an already existing crack are discarded since they contribute to crack propagation and not initiation. The element clouds connected to the outer surface form surface cracks, which are treated in the next section.
Internal crack initiation; a cloud of highly damaged elements (integration points shown as black dots); b center point and longest vector, \(\vec {r}_1\); c longest vector \(\vec {r}_2\) in plane \(\pi \)
Figure 10a shows a cloud of elements with damage values higher than a critical level at the center of a body. The center point of the cloud is calculated using
$$\begin{aligned} \vec {p} = \sum _{i=1}^{n} \dfrac{M_i}{\sum M_i} \, \vec {x}_i \end{aligned}$$
where \(\vec {x}_i\) are the centers of elements within the cloud,
$$\begin{aligned} M_i = \dfrac{-V_i}{\log \left( \omega _\mathrm {p}^i \right) } \end{aligned}$$
is a damage-dependent weight factor and \(V_i\) is the volume of each element in the cloud; \(\omega _\mathrm {p}^i\) is its damage value (constant damage elements are used). The weight factor \(M_i\) ensures that larger elements with higher damage values contribute more to the calculation of the center point than small elements or elements with low levels of damage.
Starting from the center point \(\vec {p}\), a vector \(\vec {r}_1\) is computed, which is the longest vector connecting point \(\vec {p}\) to any other node in the cloud. A plane (\(\pi \) in Fig. 10b) is defined through point \(\vec {p}\), normal to \(\vec {r}_1\). This plane intersects a set of elements in the cloud. All vectors from point \(\vec {p}\) to any node in this set are projected on the plane and vector \(\vec {r}_2\) is then defined as the longest projected vector. Once \(\vec {r}_1\) and \(\vec {r}_2\) have been determined, they are mirrored to obtain \(\vec {r}^{\,\prime }_{1}\) and \(\vec {r}^{\,\prime }_{2}\). Together, these four vectors form a polygon with four sides lying in the same plane, see Fig. 11. The geometrical description of this plane, together with a point-wise element size indicator (obtained from the damage rate), is given as input to the 3D tetrahedral mesher and is treated as an internal boundary for meshing.
Construction of crack plane; a opposite vectors; b constructed initial crack plane inside the body
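A compact sketch of how the centre point and the two spanning vectors could be computed from the element cloud is given below. For simplicity it projects the vectors to all cloud nodes rather than only to the nodes of the elements actually cut by the plane \(\pi \), which is a simplification relative to the procedure described above; damage values are assumed to satisfy \(0< \omega _\mathrm {p}^i < 1\).

```python
import numpy as np

def initial_crack_plane(centers, volumes, omegas, nodes):
    """Center point and spanning vectors of an internal initial crack (sketch).

    centers : (n, 3) element centers of the highly damaged cloud
    volumes : (n,)   element volumes V_i
    omegas  : (n,)   element damage values omega_p^i, with 0 < omega < 1
    nodes   : (m, 3) coordinates of all nodes in the cloud
    """
    M = -volumes / np.log(omegas)                       # damage-dependent weights
    p = (M[:, None] * centers).sum(axis=0) / M.sum()    # weighted center point

    rel = nodes - p
    r1 = rel[np.argmax(np.linalg.norm(rel, axis=1))]    # longest vector from p

    n1 = r1 / np.linalg.norm(r1)
    proj = rel - np.outer(rel @ n1, n1)                 # project onto plane normal to r1
    r2 = proj[np.argmax(np.linalg.norm(proj, axis=1))]  # longest projected vector

    # the planar quadrilateral is spanned by r1, r2 and their mirrored counterparts
    polygon = [p + r1, p + r2, p - r1, p - r2]
    return p, r1, r2, polygon
```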
Surface crack initiation
In some cases, a cloud of interconnected damaged elements contains nodes lying on the exterior surface of the geometry. If this is the case, a crack should nucleate from the exterior surface and propagate into the geometry with a proper propagation direction. For this purpose, the triangulated surface is modified to embed the new crack surface.
First, all damaged elements are identified that are in contact with the external surface and these are separated from the cloud. The center point of these surface elements (only a fraction of the original cloud) is obtained using Eq. (28). The closest node on the surface to this point is singled out as the surface center, \(\vec {P}_\mathrm {s}\) in Fig. 12b. The original cloud of elements is used to determine the direction of the crack. The center point of this cloud is also computed using Eq. (28). The connection line between the surface center and the cloud center provides the vector \(\vec {r}_1\). To define a crack initiation plane, a second vector is needed. This vector is obtained by calculating the longest vector from the center \(\vec {P}_\mathrm {s}\) to all surface nodes in the cloud, \(\vec {r}_2\) in Fig. 12b. A plane normal is finally defined using the following equation:
$$\begin{aligned} \vec {N_\mathrm {s}} = \dfrac{\vec {r}_1 \times \vec {r}_2}{\left\| \vec {r}_1 \times \vec {r}_2\right\| } \end{aligned}$$
The intersection of this plane with the surface elements located in the cloud forms a curve on the triangularized exterior of the geometry and defines the crack-front. The crack propagation algorithm is then used to propagate the front into the body.
Initiation of a crack from the surface; a cloud of highly damaged elements touching the exterior surface of geometry (inside the dotted cube); b direction vectors \(\vec {r}_{1}\) and \(\vec {r}_{2}\); c intersection of the plane with the exterior surface of the geometry
At this stage, a crack surface has been defined for the cracked topology. This surface must first be opened in order to recover equilibrium. The methodology applied is explained here for cracks located inside the body.
Crack opening is done in two steps. In the first step, the geometrical description of the internal crack surface is provided as input to the mesher, Fig. 13a. Next, the geometry is discretised accommodating this new interior boundary, Fig. 13b. Finally, all nodes located on the crack surface (discarding nodes on the contour of the surface) are decoupled and all internal forces acting in the nodes of the connected elements are applied as external forces, Fig. 13c. An automatic sub-incrementation procedure is then used to gradually release these forces to zero, resulting in an opened crack [22].
Internal crack opening; a an internal crack plane; b applied discretisation by the mesher; c release of the residual forces to open the crack while its contour line (the new crack-front) remains closed (cut view)
A similar technique is applied to open cracks in contact with the boundary of the geometry. The difference here is that the new crack front residing on the boundary is also opened.
The developed algorithm has been employed in two examples, which have been selected in order to assess the performance of the methods developed above in dealing with crack initiation and propagation.
Nonlinear material hardening is used throughout, with the current hardening modulus defined as
$$\begin{aligned} h_{\varepsilon } = h + (\tau _{\mathrm {y}\infty }-\tau _{\mathrm {y}0}) \exp \left[ -\alpha \varepsilon _\mathrm {p}\right] \qquad \text {with} \,\, \alpha > 0 \end{aligned}$$
with elasto-plastic-damage material parameters as shown in Table 1.
Table 1 Material properties used [24, 29, 35]
The described constitutive law is implemented using a locking free mixed formulation of the tetrahedral element [25, 26], while a constant damage variable \(\omega _\mathrm {p}\) is used per element. In both examples, a vertical displacement is applied on the top surface of the model while the bottom surface is fixed. Frequent remeshing is used to maintain the quality of the elements and the damage rate \(\dot{\omega }_\mathrm {p}\) is employed as a point wise indicator for element size. Hence, the mesh is more refined in regions with a rapid evolution of damage.
Crack initiation in a rectangular bar
In this section, we present the results of a rectangular sample which is subjected to tension until necking and failure. The geometry and boundary conditions are shown in Fig. 14. A vertical displacement is prescribed to the top surface and the bottom surface is fully constrained. Therefore, necking is expected in the middle of the specimen.
Geometry (in mm) and boundary conditions of the rectangular specimen
Figure 15 shows the damage distribution as it evolves through different stages of remeshing and mesh refinement. Damage is maximal in the center of the specimen, where the hydrostatic stress is high.
Damage distribution at different stages of deformation
Internal crack initiation: a discretised face at the back plus half of the top and bottom face and cloud of critical elements; b embedding the new crack surface internally
As the necking progresses in the middle section of the specimen, a cloud of connected elements reveals a damage value higher than the critical level \(\omega _{p}^{c}\), as shown in Fig. 16a. The internal crack initiation algorithm is used to introduce a crack plane internally, see Fig. 16b. The geometry is therefore remeshed and, by releasing the crack surface forces, a first crack appears inside the geometry.
Due to the concentration of the damage growth in the neck, a rapid crack growth is observed. Figure 17 shows an instant of crack propagation towards the outer surface. Since the mesh refinement is a function of the damage rate, a refined mesh appears around the crack, see Fig. 17b.
Crack propagation in a rectangular tensile bar: a damage distribution around the crack surface, b cross section demonstrating internal mesh refinement
The force versus displacement response obtained for this problem is shown in Fig. 18. The diagram shows that the simulation may be continued until full failure, i.e. until the reaction force vanishes. The minor jumps, particularly on the descending part of the curve, are due to the transfer of state variables following remeshing steps. This transfer results in a slight inconsistency of the deformation and stress state on the new discretisation compared with that on the old. The effect is more pronounced where (and when) these fields fluctuate strongly, which explains why the jumps are more prominent at later stages of the failure process, when the deformation is highly localised.
Force versus displacement response for a rectangular sample under tensile loading
Surface crack initiation and propagation in a double notch specimen
The example of a double-notched specimen is used to investigate crack initiation at the surface and to obtain a more substantial amount of crack growth. The geometry and boundary conditions are shown in Fig. 19. The deformation is imposed using displacement control and the front and back faces of the geometry are free. The amount of damage growth during an increment is used as a point-wise indicator for mesh refinement. The geometry undergoes large deformations before crack initiation and propagation start. As a result of these large deformations, elements may become distorted and compromise the required accuracy. To avoid this problem, frequent remeshing is applied, triggered after a predefined amount of applied displacement. The number of sampling locations N and the crack increment length \(\Delta a\) are 50 and 0.3 mm, respectively.
Geometry (in mm) and boundary conditions of the double notched specimen
Figure 20 shows the damage distribution as it evolves through the different remeshing steps before reaching a critical value for inserting the first crack segment. As can be seen from this figure, the damage grows considerably faster at the notches, especially at the top right hand corner.
Snapshots of the remeshing and mesh refinement near the highly damaged zones
Figure 21 shows how the specimen is necking across its thickness along a curve connecting the two notches. Damage has the highest value where the mid-plane of the specimen intersects the notch root, since the hydrostatic stress, and consequently the stress triaxiality, is higher at this point than at the front and back faces of the specimen. Also shown in Fig. 21 is the first crack segment when it is opened and all residual forces have been released.
3D non-planar crack initiated at the notch root
As the applied displacement increases, the crack which was initiated at the top right corner grows towards the bottom. After a while the damage at the bottom left corner also reaches the critical value and a second crack starts growing from there. While the second crack propagates, the crack propagation at the top is arrested. Since the crack tends to grow faster in the mid-plane of the specimen thickness, the crack-front is curved instead of a straight line, as shown in Fig. 22 for the crack growing from the bottom-left notch.
Crack surface of the crack emanating from the bottom notch
Figure 23 shows the force versus displacement response obtained from this simulation. As before, the jumps observed in this curve are due to the remeshing and transfer between two different discretisations. Note that the first crack insertion occurs only when the mechanical strength of the specimen has already been significantly degraded by the damage evolution.
Force versus displacement response obtained for the double notched specimen
A large deformation 3D methodology has been developed to simulate the initiation and propagation of a crack in a ductile material, based on an underlying ductile damage mechanics formulation and a remeshing strategy.
An approach is presented to initiate a crack in 3D bodies undergoing large plastic deformations. Cracks start either internally or at the surface of the geometry, whereby a procedure is proposed for each case. In contrast to a traditional fracture mechanics approach, the size and direction of crack initiation and propagation are solely governed by the underlying damage model, and no extra criterion is therefore required.
Once a crack has been nucleated, it may propagate according to the damage field ahead of the crack tip. For each of the nodes on the current crack-front, a propagation direction and distance is computed. Depending upon the location of the node on the crack-front (corner or mid nodes), a slightly different method is used to identify the propagation vector. These propagation vectors, together with the old crack-front, are assembled to construct a new crack surface segment. The geometry is then discretised and refined in critical locations based on the damage rate.
The performance of the proposed method is shown by two examples in which both cases of crack initiation/propagation (at the surface or internally) are demonstrated. Our results show that the method is well suited to studying phenomena such as internal fracture. The characteristics of the proposed algorithm render it promising for modelling 3D cracks in applications where remeshing is unavoidable. It presents two essential advantages over a conventional fracture mechanics approach: first, it uses only a single criterion (the damage model) for both crack initiation and propagation (distance and direction) and, second, the mechanical strength of the structure has already been degraded by the damage, making it more convenient to introduce a crack.
Gullerud AS, Dodds RH Jr, Hampton RW, Dawicke DS. Three-dimensional modeling of ductile crack growth in thin sheet metals: computational aspects and validation. Eng Fract Mech. 1999;63(4):347–74.
Simo JC, Oliver J, Armero F. An analysis of strong discontinuities induced by strain-softening in rate-independent inelastic solids. Comput Mech. 1993;12:277–96.
Armero F, Garikipati K. An analysis of strong discontinuities in multiplicative finite strain plasticity and their relation with the numerical simulation of strain localization in solids. Int J Solid Struct. 1996;33(20–22):2863–85.
Oliver J. Modelling strong discontinuities in solid mechanics via strain softening constitutive equations. Part 2: numerical simulation. Int J Numer Methods Eng. 1996;39:3601–23.
Melenk JM, Babuska I. The partition of unity finite element method: basic theory and applications. Comput Methods Appl Mech Eng. 1996;139(1–4):289–314.
Garikipati K, Hughes TJR. A study of strain localization in a multiple scale framework-the one-dimensional problem. Comput Methods Appl Mech Eng. 1998;159(3–4):193–222.
Belytschko T, Black T. Elastic crack growth in finite elements with minimal remeshing. Int J Numer Methods Eng. 1999;45(5):601–20.
Moës N, Dolbow J, Belytschko T. A finite element method for crack growth without remeshing. Int J Numer Methods Eng. 1999;46:131–50.
Wells GN, Sluys LJ. A new method for modelling cohesive cracks using finite elements. Int J Numer Methods Eng. 2001;50:2667–82.
Moës N, Belytschko T. Extended finite element method for cohesive crack growth. Eng Fract Mech. 2002;69(7):813–33.
Gravouil A, Moës N, Belytschko T. Non-planar 3D crack growth by extended finite elements and level sets-part I: mechanical model. Int J Numer Methods Eng. 2002;53:2549–68.
Gravouil A, Moës N, Belytschko T. Non-planar 3D crack growth by extended finite elements and level sets-part II: level set update. Int J Numer Methods Eng. 2002;53:2569–86.
Colombo D, Massin P. Fast and robust level set update for 3D non-planar X-FEM crack propagation modelling. Comput Methods Appl Mech Eng. 2011;200:2160–80.
Seabra MRR, Šuštarič P, Cesar de Sa JMA, Rodič T. Damage driven crack initiation and propagation in ductile metals using XFEM. Comput Mech. 2013;52(1):161–79.
Dodds RH Jr, Tang M, Anderson TL. Numerical procedures to model ductile crack extension. Eng Fract Mech. 1993;46(2):253–64.
Bittencourt TN, Wawrzynek PA, Ingraffea AR, Sousa JL. Quasi-automatic simulation of crack propagation for 2D LEFM problems. Eng Fract Mech. 1996;55(2):321–34.
Brokken D, Brekelmans WAM, Baaijens FPT. Numerical modelling of the metal blanking process. J Mater Process Technol. 1998;83(1–3):192–9.
Brokken D, Brekelmans WAM, Baaijens FPT. Predicting the shape of blanked products: a finite element approach. J Mater Process Technol. 2000;103(1):51–6.
Bouchard PO, Bay F, Chastel Y, Tovena I. Crack propagation modelling using an advanced remeshing technique. Comput Methods Appl Mech Eng. 2000;189(3):723–42.
Carter B, Wawrzynek P, Ingraffea A. Automated 3D crack growth simulation. Int J Numer Methods Eng. 2000;47:229–53.
Cavalcante Neto JB, Wawrzynek PA, Carvalho MTM, Martha LF, Ingraffea AR. An algorithm for three-dimensional mesh generation for arbitrary regions with cracks. Eng Comput. 2001;17:75–91.
Mediavilla J, Peerlings RHJ, Geers MGD. An integrated continuous-discontinuous approach towards damage engineering in sheet metal forming processes. Eng Fract Mech. 2006;73(7):895–916.
Feld-Payet S. Amorçage et propagation de fissures dans les milieux ductiles non locaux. PhD thesis, École Nationale Supérieure des Mines de Paris; 2010.
Mediavilla J, Peerlings RHJ, Geers MGD. A nonlocal triaxiality-dependent ductile damage model for finite strain plasticity. Comput Methods Appl Mech Eng. 2006;195(33–36):4617–34.
Javani HR, Peerlings RHJ, Geers MGD. Three dimensional modelling of non-local ductile damage: element technology. Int J Mater Form. 2009;2:923–6.
Javani HR. A computational damage approach towards three-dimensional ductile fracture. PhD thesis, Eindhoven University of Technology; 2011.
Javani HR, Peerlings RHJ, Geers MGD. A remeshing strategy for three dimensional elasto-plasticity coupled with damage applicable to forming processes. Int J Mater Form. 2010;3:915–8.
Javani HR, Peerlings RH, Geers MG. Consistent remeshing and transfer for a three dimensional enriched mixed formulation of plasticity and non-local damage. Comput Mech. 2014;53(4):625–39.
Simo JC. Algorithms for static and dynamic multiplicative plasticity that preserve the classical return mapping schemes of the infinitesimal theory. Comput Methods Appl Mech Eng. 1992;99(1):61–112.
Goijaerts AM, Govaert LE, Baaijens FPT. Evaluation of ductile fracture models for different metals in blanking. J Mater Process Technol. 2001;110(3):312–23.
Peerlings RHJ, de Borst R, Brekelmans WAM, Geers MGD. Localisation issues in local and nonlocal continuum approaches to fracture. Eur J Mech A Solids. 2002;21(2):175–89.
Si H. TetGen: a quality tetrahedral mesh generator and three-dimensional Delaunay triangulator. 2007. http://www.tetgen.berlios.de/.
Heckbert PS, Garland M. Optimal triangulation and quadric-based surface simplification. Comput Geom. 1999;14(1–3):49–65.
Hoppe H. Progressive meshes. In: SIGGRAPH. 1996;96:99–108.
Geers MGD. Finite strain logarithmic hyperelasto-plasticity with softening: a strongly non-local implicit gradient framework. Comput Methods Appl Mech Eng. 2004;193(30–32):3377–401.
Author's contributions
HRJ carried out most of the study, including development of the methodology and its implementation, and drafted the manuscript. RHJP and MGDG conceived the study, participated in its design and coordination and critically reviewed the manuscript. All authors read and approved the final manuscript.
This research was carried out under project number MC2.05205c in the framework of the Research Program of the Materials innovation institute M2i (http://www.m2i.nl).
Materials innovation institute M2i, PO Box 5008, 2600 GA, Delft, The Netherlands
Correspondence to R. H. J. Peerlings.
Appendix: Data structure of the discretised geometry
A data structure is needed in order to identify the elements spanning the crack surface. The 3D geometry is discretised using tetrahedral elements and it contains an internal triangulated surface to which the tetrahedral volume mesh conforms. In order to open the crack surface, a search algorithm is used to locate the elements on both sides of the crack surface.
Crack face elements
The first task is to find the elements which are on both sides of a triangle of the crack surface. Figure 24 shows a triangle with node numbering \(\{1, 2, 3\}\) and two tetrahedra connected to it. An element located on side A is detected by analyzing the angle between the direction of the triangle normal (obtained from the counterclockwise triangle connectivity) and the vector to the remaining node of the tetrahedron. As shown in the figure, if that angle is less than 90 degrees, then this tetrahedron is located on side A of the triangle.
Detection of elements on both sides of a single triangle (A and B)
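The side test amounts to a sign check of a dot product, as sketched below; the counter-clockwise node ordering of the crack-face triangle is taken from the connectivity, as described above.

```python
import numpy as np

def tet_side(tri_coords, apex):
    """Classify a tetrahedron with respect to a crack-face triangle (sketch).

    tri_coords : (3, 3) coordinates of the triangle nodes 1, 2, 3, ordered
                 counter-clockwise so that the normal points towards side A
    apex       : coordinates of the tetrahedron node not on the triangle
    """
    p1, p2, p3 = tri_coords
    normal = np.cross(p2 - p1, p3 - p1)      # triangle normal from the connectivity
    to_apex = apex - (p1 + p2 + p3) / 3.0    # vector towards the remaining node
    return "A" if np.dot(normal, to_apex) > 0.0 else "B"
```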
Crack nodes
In order to open a crack at a node, it is necessary to identify all elements (not only crack face elements) at each side of the crack surface. This operation is schematically shown in Fig. 25. \(E_{i}\) is the list of tetrahedral elements connected to node i. It must be subdivided into sets \(E_{A,i}\) and \(E_{B,i}\) associated with the respective sides of the crack surface, for which we have \(E_{i}=E_{A,i} \cup E_{B,i}\) and \(E_{A,i} \cap E_{B,i}=\emptyset \). Furthermore, the nodes connected to the elements in the subsets \(E_{A,i}\) and \(E_{B,i}\) are denoted as \(N_{A,i}^{E}\) and \(N_{B,i}^{E}\) respectively.
Elements groups: a all connected elements to node i, b connected elements to node i on side A, \(E_{A,i}\), and on side B, \(E_{B,i}\)
The first step in identifying \(E_{A,i}\) and \(E_{B,i}\) is to find the triangles connected to node i that are part of the crack surface (three bold triangles in Fig. 25a), called star triangles. The list of node numbers related to the star triangles, including node i (five nodes here), is called \(N_{S,i}\). Each triangle is connected to two tetrahedra, one on each side, which are identified by the algorithm explained in the "Crack face elements" section. The related element numbers are removed from set \(E_{i}\) and constitute the first entries of \(E_{A,i}\) and \(E_{B,i}\). In order to assign all tetrahedra in \(E_{i}\) to \(E_{A,i}\) and \(E_{B,i}\), we define a node list \(N_{A,i}\), which contains all nodes of elements connected to the crack face on side A that are not on the crack face itself. Mathematically, this implies \(N_{A,i} = \{ N_{A,i} \subseteq N_{A,i}^{E}~~ \text{ and }~~ N_{A,i} \cap N_{S,i} = \emptyset \}\) and similarly \(N_{B,i} = \{ N_{B,i} \subseteq N_{B,i}^{E}~~ \text{ and }~~ N_{B,i} \cap N_{S,i} = \emptyset \}\). The remaining element numbers in \(E_{A,i}\) and \(E_{B,i}\) are recovered by iteratively checking whether any node of an element belongs to list \(N_{A,i}\) or \(N_{B,i}\); the element is then assigned to the corresponding set. This is repeated until all elements of \(E_{i}\) have been visited. This technique relies only on the element connectivity, and no geometrical features are involved. Therefore, the complexity of the crack surface does not compromise the identification of \(E_{A,i}\) and \(E_{B,i}\).
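The subdivision of \(E_{i}\) can be written as a purely topological flood fill. The sketch below assumes that the two seed sets adjacent to the star triangles have already been found with the side test of the previous section; the variable names are ours and the routine is an illustration rather than the paper's implementation.

```python
def split_node_elements(E_i, elem_nodes, E_A_seed, E_B_seed, N_S):
    """Assign all elements around crack node i to side A or B (sketch).

    E_i        : set of element ids connected to node i
    elem_nodes : dict mapping element id -> set of its node ids
    E_A_seed   : elements directly adjacent to the star triangles on side A
    E_B_seed   : idem for side B
    N_S        : node ids of the star triangles (including node i)
    """
    E_A, E_B = set(E_A_seed), set(E_B_seed)
    # nodes reachable on each side, excluding the crack-face (star) nodes
    N_A = set().union(*(elem_nodes[e] for e in E_A)) - N_S
    N_B = set().union(*(elem_nodes[e] for e in E_B)) - N_S

    remaining = set(E_i) - E_A - E_B
    changed = True
    while remaining and changed:
        changed = False
        for e in sorted(remaining):
            nodes = elem_nodes[e]
            if nodes & N_A:
                E_A.add(e)
                N_A |= nodes - N_S
            elif nodes & N_B:
                E_B.add(e)
                N_B |= nodes - N_S
            else:
                continue
            remaining.discard(e)
            changed = True
    return E_A, E_B
```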
Javani, H.R., Peerlings, R.H.J. & Geers, M.G.D. Three-dimensional finite element modeling of ductile crack initiation and propagation. Adv. Model. and Simul. in Eng. Sci. 3, 19 (2016). https://doi.org/10.1186/s40323-016-0071-y
Finite element method
Ductile fracture
Nonlocal damage
Advances in Computational Mechanics: Issue in honor of Prof. Ladevèze on the occasion of his 70th anniversary
March 2014 , Volume 34 , Issue 3
Special issue dedicated to Arieh Iserles on the occasion of his 65th birthday
Elena Celledoni, Jesus M. Sanz-Serna and Antonella Zanna Munthe-Kaas
Arieh Iserles was born in Poland on September 2, 1947. He was educated in Israel, where he received BSc and MSc degrees from the Hebrew University and obtained his PhD degree under the supervision of Giacomo Della Riccia at Ben Gurion University with the dissertation Numerical Solution of Stiff Differential Equations (1978). He first arrived in Cambridge in 1978 and has remained there ever since. He has successively been Junior and Senior Research Fellow at King's College, and Lecturer (1987) and Professor (1999) at Cambridge University, where he holds a chair in Numerical Analysis and Differential Equations. Arieh has received many honours, in particular the Lars Onsager Medal (1999) from the Norwegian University of Science and Technology and the David G. Crighton Medal (2012) from the London Mathematical Society and the Institute of Mathematics and its Applications. He holds Honorary Professorships at Huazhong University of Science and Technology, Wuhan, since 2002 and Jilin University, Changchun, since 2004.
Elena Celledoni, Jesus M. Sanz-Serna, Antonella Zanna Munthe-Kaas. Preface. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): i-ii. doi: 10.3934/dcds.2014.34.3i.
A Gaussian quadrature rule for oscillatory integrals on a bounded interval
Andreas Asheim, Alfredo Deaño, Daan Huybrechs and Haiyong Wang
We investigate a Gaussian quadrature rule and the corresponding orthogonal polynomials for the oscillatory weight function $e^{i\omega x}$ on the interval $[-1,1]$. We show that such a rule attains high asymptotic order, in the sense that the quadrature error quickly decreases as a function of the frequency $\omega$. However, accuracy is maintained for all values of $\omega$ and in particular the rule elegantly reduces to the classical Gauss-Legendre rule as $\omega \to 0$. The construction of such rules is briefly discussed, and though not all orthogonal polynomials exist, it is demonstrated numerically that rules with an even number of points are well defined. We show that these rules are optimal both in terms of asymptotic order as well as in terms of polynomial order.
Andreas Asheim, Alfredo Deaño, Daan Huybrechs, Haiyong Wang. A Gaussian quadrature rule for oscillatory integrals on a bounded interval. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 883-901. doi: 10.3934/dcds.2014.34.883.
Computing of B-series by automatic differentiation
Ferenc A. Bartha and Hans Z. Munthe-Kaas
We present an algorithm based on Automatic Differentiation for computing general B-series of vector fields $f\colon \mathbb{R}^n\rightarrow \mathbb{R}^n$. The algorithm has a computational complexity depending linearly on $n$, and provides a practical way of computing B-series up to a moderately high order $d$. Compared to Automatic Differentiation for computing Taylor series solutions of differential equations, the proposed algorithm is more general, since it can compute any B-series. However the computational cost of the proposed algorithm grows much faster in $d$ than a Taylor series method, thus very high order B-series are not tractable by this approach.
Ferenc A. Bartha, Hans Z. Munthe-Kaas. Computing of B-series by automatic differentiation. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 903-914. doi: 10.3934/dcds.2014.34.903.
On Volterra integral operators with highly oscillatory kernels
Hermann Brunner
We study the high-oscillation properties of solutions to integral equations associated with two classes of Volterra integral operators: compact operators with highly oscillatory kernels that are either smooth or weakly singular, and noncompact cordial Volterra integral operators with highly oscillatory kernels. In the latter case the focus is on the dependence of the (uncountable) spectrum on the oscillation parameter. It is shown that the results derived in this paper merely open a window to a general theory of solutions of highly oscillatory Volterra integral equations, and many questions remain to be answered.
Hermann Brunner. On Volterra integral operators with highly oscillatory kernels. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 915-929. doi: 10.3934/dcds.2014.34.915.
ADI splitting schemes for a fourth-order nonlinear partial differential equation from image processing
Luca Calatroni, Bertram Düring and Carola-Bibiane Schönlieb
We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the $H^{-1}$-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to the stiffness of most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.
Luca Calatroni, Bertram Düring, Carola-Bibiane Schönlieb. ADI splitting schemes for a fourth-order nonlinear partial differential equation from image processing. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 931-957. doi: 10.3934/dcds.2014.34.931.
A Lie--Deprit perturbation algorithm for linear differential equations with periodic coefficients
Fernando Casas and Cristina Chiralt
A perturbative procedure based on the Lie--Deprit algorithm of classical mechanics is proposed to compute analytic approximations to the fundamental matrix of linear differential equations with periodic coefficients. These approximations reproduce the structure assured by the Floquet theorem. Alternatively, the algorithm provides explicit approximations to the Lyapunov transformation reducing the original periodic problem to an autonomous system and also to its characteristic exponents. The procedure is computationally well adapted and converges for sufficiently small values of the perturbation parameter. Moreover, when the system evolves in a Lie group, the approximations also belong to the same Lie group, thus preserving qualitative properties of the exact solution.
Fernando Casas, Cristina Chiralt. A Lie--Deprit perturbation algorithm for linear differential equations with periodic coefficients. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 959-975. doi: 10.3934/dcds.2014.34.959.
Preserving first integrals with symmetric Lie group methods
Elena Celledoni and Brynjulf Owren
The discrete gradient approach is generalized to yield first integral preserving methods for differential equations in Lie groups.
Elena Celledoni, Brynjulf Owren. Preserving first integrals with symmetric Lie group methods. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 977-990. doi: 10.3934/dcds.2014.34.977.
Numerical simulation of nonlinear dispersive quantization
Gong Chen and Peter J. Olver
When posed on a periodic domain in one space variable, linear dispersive evolution equations with integral polynomial dispersion relations exhibit strikingly different behaviors depending upon whether the time is rational or irrational relative to the length of the interval, thus producing the Talbot effect of dispersive quantization and fractalization. The goal here is to show that these remarkable phenomena extend to nonlinear dispersive evolution equations. We will present numerical simulations, based on operator splitting methods, of the nonlinear Schrödinger and Korteweg--deVries equations with step function initial data and periodic boundary conditions. For the integrable nonlinear Schrödinger equation, our observations have been rigorously confirmed in a recent paper of Erdoǧan and Tzirakis, [10].
Gong Chen, Peter J. Olver. Numerical simulation of nonlinear dispersive quantization. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 991-1008. doi: 10.3934/dcds.2014.34.991.
A conditional, collision-avoiding, model for swarming
Felipe Cucker and Jiu-Gang Dong
We propose a model for swarming (i.e., cohesion preserving) that shares all the good properties of the CS-model for flocking. In particular, we show for this model that under strong interactions of the agents swarming unconditionally occurs and that, furthermore, it does so in a collision avoiding manner. We also show that under weak interactions the same holds true provided the initial state of the population (their positions and velocities) satisfies some explicit inequalities.
Felipe Cucker, Jiu-Gang Dong. A conditional, collision-avoiding, model for swarming. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1009-1020. doi: 10.3934/dcds.2014.34.1009.
The tridendriform structure of a discrete Magnus expansion
Kurusch Ebrahimi-Fard and Dominique Manchon
The notion of trees plays an important role in Butcher's B-series. More recently, a refined understanding of algebraic and combinatorial structures underlying the Magnus expansion has emerged thanks to the use of rooted trees. We follow these ideas by further developing the observation that the logarithm of the solution of a linear first-order finite-difference equation can be written in terms of the Magnus expansion taking place in a pre-Lie algebra. By using basic combinatorics on planar reduced trees we derive a closed formula for the Magnus expansion in the context of free tridendriform algebra. The tridendriform algebra structure on word quasi-symmetric functions permits us to derive a discrete analogue of the Mielnik--Plebański--Strichartz formula for this logarithm.
Kurusch Ebrahimi-Fard, Dominique Manchon. The tridendriform structure of a discrete Magnus expansion. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1021-1040. doi: 10.3934/dcds.2014.34.1021.
Bernstein-type approximation of set-valued functions in the symmetric difference metric
Shay Kels and Nira Dyn
We study the approximation of univariate and multivariate set-valued functions (SVFs) by the adaptation to SVFs of positive sample-based approximation operators for real-valued functions. To this end, we introduce a new weighted average of several sets and study its properties. The approximation results are obtained in the space of Lebesgue measurable sets with the symmetric difference metric.
In particular, we apply the new average of sets to adapt to SVFs the classical Bernstein approximation operators, and show that these operators approximate continuous SVFs. The rate of approximation of Hölder continuous SVFs by the adapted Bernstein operators is studied and shown to be asymptotically equal to the one for real-valued functions. Finally, the results obtained in the metric space of sets are generalized to metric spaces endowed with an average satisfying certain properties.
Shay Kels, Nira Dyn. Bernstein-type approximation of set-valued functions in the symmetric difference metric. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1041-1060. doi: 10.3934/dcds.2014.34.1041.
Analysis of the 3DVAR filter for the partially observed Lorenz'63 model
Kody Law, Abhishek Shukla and Andrew Stuart
The problem of effectively combining data with a mathematical model constitutes a major challenge in applied mathematics. It is particularly challenging for high-dimensional dynamical systems where data is received sequentially in time and the objective is to estimate the system state in an on-line fashion; this situation arises, for example, in weather forecasting. The sequential particle filter is then impractical and ad hoc filters, which employ some form of Gaussian approximation, are widely used. Prototypical of these ad hoc filters is the 3DVAR method. The goal of this paper is to analyze the 3DVAR method, using the Lorenz '63 model to exemplify the key ideas. The situation where the data is partial and noisy is studied, and both discrete time and continuous time data streams are considered. The theory demonstrates how the widely used technique of variance inflation acts to stabilize the filter, and hence leads to asymptotic accuracy.
Kody Law, Abhishek Shukla, Andrew Stuart. Analysis of the 3DVAR filter for the partially observed Lorenz '63 model. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1061-1078. doi: 10.3934/dcds.2014.34.1061.
Variable step size multiscale methods for stiff and highly oscillatory dynamical systems
Yoonsang Lee and Bjorn Engquist
We present a new numerical multiscale integrator for stiff and highly oscillatory dynamical systems. The new algorithm can be seen as an improved version of the seamless Heterogeneous Multiscale Method by E, Ren, and Vanden-Eijnden and the method FLAVORS by Tao, Owhadi, and Marsden. It approximates slowly changing quantities in the solution with higher accuracy than these other methods while maintaining the same computational complexity. To achieve higher accuracy, it uses variable mesoscopic time steps which are determined by a special function satisfying moment and regularity conditions. Detailed analytical and numerical comparison between the different methods are given.
Yoonsang Lee, Bjorn Engquist. Variable step size multiscale methods for stiff and highly oscillatory dynamical systems. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1079-1097. doi: 10.3934/dcds.2014.34.1079.
Discrete gradient methods have an energy conservation law
Robert I. McLachlan and G. R. W. Quispel
We show for a variety of classes of conservative PDEs that discrete gradient methods designed to have a conserved quantity (here called energy) also have a time-discrete conservation law. The discrete conservation law has the same conserved density as the continuous conservation law, while its flux is found by replacing all derivatives of the conserved density appearing in the continuous flux by discrete gradients.
Robert I. McLachlan, G. R. W. Quispel. Discrete gradient methods have an energy conservation law. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1099-1104. doi: 10.3934/dcds.2014.34.1099.
On an asymptotic method for computing the modified energy for symplectic methods
Per Christian Moan and Jitse Niesen
We revisit an algorithm by Skeel et al. [5,16] for computing the modified, or shadow, energy associated with symplectic discretizations of Hamiltonian systems. We amend the algorithm to use Richardson extrapolation in order to obtain arbitrarily high order of accuracy. Error estimates show that the new method captures the exponentially small drift associated with such discretizations. Several numerical examples illustrate the theory.
Per Christian Moan, Jitse Niesen. On an asymptotic method for computing the modified energy for symplectic methods. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1105-1120. doi: 10.3934/dcds.2014.34.1105.
Integrability of nonholonomically coupled oscillators
Klas Modin and Olivier Verdier
We study a family of nonholonomic mechanical systems. These systems consist of harmonic oscillators coupled through nonholonomic constraints. The family includes the contact oscillator, which has been used as a test problem for numerical methods for nonholonomic mechanics. The systems under study constitute simple models for continuously variable transmission gearboxes.
The main result is that each system in the family is integrable reversible with respect to the canonical reversibility map on the cotangent bundle. By using reversible Kolmogorov--Arnold--Moser theory, we then establish preservation of invariant tori for reversible perturbations. This result explains previous numerical observations, that some discretisations of the contact oscillator have favourable structure preserving properties.
Klas Modin, Olivier Verdier. Integrability of nonholonomically coupled oscillators. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1121-1130. doi: 10.3934/dcds.2014.34.1121.
Regarding the absolute stability of Størmer-Cowell methods
Syvert P. Nørsett and Andreas Asheim
High order variants of the classical Størmer-Cowell methods are still a popular class of methods for computations in celestial mechanics. In this work we shall investigate the absolute stability of Størmer-Cowell methods close to zero, and present a characterization of the stability of methods of all orders. In particular, we show that many methods are not absolutely stable at any point in a neighborhood of the origin.
Syvert P. Nørsett, Andreas Asheim. Regarding the absolute stability of Størmer-Cowell methods. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1131-1146. doi: 10.3934/dcds.2014.34.1131.
Discrete gradient methods for preserving a first integral of an ordinary differential equation
Richard A. Norton and G. R. W. Quispel
In this paper we consider discrete gradient methods for approximating the solution and preserving a first integral (also called a constant of motion) of autonomous ordinary differential equations. We prove under mild conditions for a large class of discrete gradient methods that the numerical solution exists and is locally unique, and that for arbitrary $p\in \mathbb{N}$ we may construct a method that is of order $p$. In the proofs of these results we also show that the constants in the time step constraint and the error bounds may be chosen independently from the distance to critical points of the first integral.
In the case when the first integral is quadratic, for arbitrary $p \in \mathbb{N}$, we have devised a new method that is linearly implicit at each time step and of order $p$. A numerical example suggests that this new method has advantages in terms of efficiency.
Richard A. Norton, G. R. W. Quispel. Discrete gradient methods for preserving a first integral of an ordinary differential equation. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1147-1170. doi: 10.3934/dcds.2014.34.1147.
Periodic points on the $2$-sphere
Charles Pugh and Michael Shub
For a $C^{1}$ degree two latitude preserving endomorphism $f$ of the $2$-sphere, we show that for each $n$, $f$ has at least $2^{n}$ periodic points of period $n$.
Charles Pugh, Michael Shub. Periodic points on the $2$-sphere. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1171-1182. doi: 10.3934/dcds.2014.34.1171.
The Landau--Kolmogorov inequality revisited
Alexei Shadrin
We consider the Landau--Kolmogorov problem on a finite interval which is to find an exact bound for $\|f^{(k)}\|$, for $0 < k < n$, given bounds $\|f\| \le 1$ and $\|f^{(n)}\| \le \sigma$, with $\|\cdot\|$ being the max-norm on $[-1,1]$. In 1972, Karlin conjectured that this bound is attained at the end-point of the interval by a certain Zolotarev polynomial or spline, and that was proved for a number of particular values of $n$ or $\sigma$. Here, we provide a complete proof of this conjecture in the polynomial case, i.e. for $0 \le \sigma \le \sigma_n := \|T_n^{(n)}\|$, where $T_n$ is the Chebyshev polynomial of degree $n$. In addition, we prove a certain Schur-type estimate which is of independent interest.
Alexei Shadrin. The Landau--Kolmogorov inequality revisited. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1183-1210. doi: 10.3934/dcds.2014.34.1183.
Generating functions for stochastic symplectic methods
Lijin Wang and Jialin Hong
Symplectic integration of stochastic Hamiltonian systems is a developing branch of stochastic numerical analysis. In the present paper, a stochastic generating function approach is proposed, based on the derivation of stochastic Hamilton-Jacobi PDEs satisfied by the generating functions, and a method of approximating solutions to them. Thus, a systematic approach of constructing stochastic symplectic methods is provided. As validation, numerical tests on several stochastic Hamiltonian systems are performed, where some symplectic schemes are constructed via stochastic generating functions. Moreover, generating functions for some known stochastic symplectic mappings are given.
Lijin Wang, Jialin Hong. Generating functions for stochastic symplectic methods. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1211-1228. doi: 10.3934/dcds.2014.34.1211.
Generating functions and volume preserving mappings
Huiyan Xue and Antonella Zanna
In this paper, we study generating forms and generating functions for volume preserving mappings in $\mathbf{R}^n$. We derive some parametric classes of volume preserving numerical schemes for divergence free vector fields. In passing, by extension of the Poincaré generating function and a change of variables, we obtain the symplectic equivalent of the theta-method for differential equations, which includes the implicit midpoint rule and the symplectic Euler A and B methods as special cases.
Huiyan Xue, Antonella Zanna. Generating functions and volume preserving mappings. Discrete & Continuous Dynamical Systems - A, 2014, 34(3): 1229-1249. doi: 10.3934/dcds.2014.34.1229. | CommonCrawl |
Quadratic Equation and Inequalities
Permutations and Combinations
Mathematical Induction and Binomial Theorem
Sequences and Series
Matrices and Determinants
Vector Algebra and 3D Geometry
Mathematical Reasoning
Sets and Relations
Trigonometric Functions & Equations
Properties of Triangle
Inverse Trigonometric Functions
Straight Lines and Pair of Straight Lines
Limits, Continuity and Differentiability
Application of Derivatives
Indefinite Integrals
Definite Integrals and Applications of Integrals
Let c, k $$\in$$ R. If $$f(x) = (c + 1){x^2} + (1 - {c^2})x + 2k$$ and $$f(x + y) = f(x) + f(y) - xy$$, for all x, y $$\in$$ R, then the value of $$|2(f(1) + f(2) + f(3) + \,\,......\,\, + \,\,f(20))|$$ is equal to ____________.
JEE Main 2022 (Online) 27th June Evening Shift
Let S = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. Define f : S $$\to$$ S as
$$f(n) = \left\{ {\matrix{ {2n} & , & {if\,n = 1,2,3,4,5} \cr {2n - 11} & , & {if\,n = 6,7,8,9,10} \cr } } \right.$$.
Let g : S $$\to$$ S be a function such that $$fog(n) = \left\{ {\matrix{ {n + 1} & , & {if\,n\,\,is\,odd} \cr {n - 1} & , & {if\,n\,\,is\,even} \cr } } \right.$$.
Then $$g(10)\left( g(1) + g(2) + g(3) + g(4) + g(5) \right)$$ is equal to _____________.
Let [t] denote the greatest integer $$\le$$ t and {t} denote the fractional part of t. The integral value of $$\alpha$$ for which the left hand limit of the function
$$f(x) = [1 + x] + {{{\alpha ^{2[x] + {\{x\}}}} + [x] - 1} \over {2[x] + \{ x\} }}$$ at x = 0 is equal to $$\alpha - {4 \over 3}$$, is _____________.
Let f : R $$\to$$ R be a function defined by $$f(x) = {{2{e^{2x}}} \over {{e^{2x}} + e}}$$. Then $$f\left( {{1 \over {100}}} \right) + f\left( {{2 \over {100}}} \right) + f\left( {{3 \over {100}}} \right) + \,\,\,.....\,\,\, + \,\,\,f\left( {{{99} \over {100}}} \right)$$ is equal to ______________.
Questions Asked from Functions (Numerical)
JEE Main 2022 (Online) 28th July Morning Shift (1)
JEE Main 2022 (Online) 27th July Evening Shift (1)
JEE Main 2022 (Online) 27th July Morning Shift (1)
JEE Main 2022 (Online) 25th July Evening Shift (2)
JEE Main 2022 (Online) 29th June Evening Shift (1)
JEE Main 2022 (Online) 29th June Morning Shift (1)
JEE Main 2022 (Online) 27th June Evening Shift (2)
JEE Main 2022 (Online) 27th June Morning Shift (1)
JEE Main 2022 (Online) 26th June Evening Shift (1)
JEE Main 2022 (Online) 25th June Morning Shift (1)
JEE Main 2022 (Online) 24th June Morning Shift (2)
JEE Main 2021 (Online) 31st August Evening Shift (1)
JEE Main 2021 (Online) 27th August Morning Shift (1)
JEE Main 2021 (Online) 27th July Evening Shift (1)
JEE Main 2021 (Online) 27th July Morning Shift (1)
JEE Main 2021 (Online) 22th July Evening Shift (1)
JEE Main 2021 (Online) 18th March Evening Shift (1)
JEE Main 2021 (Online) 24th February Evening Shift (1)
JEE Main 2020 (Online) 6th September Evening Slot (1)
JEE Main 2020 (Online) 6th September Morning Slot (1)
JEE Main 2020 (Online) 5th September Evening Slot (1)
JEE Main 2020 (Online) 9th January Morning Slot (1)
JEE Main 2020 (Online) 7th January Evening Slot (1)
# Basic concepts and definitions
Probability is a measure of the likelihood that an event will occur. It is usually expressed as a number between 0 and 1, where 0 represents impossibility and 1 represents certainty. For example, if we toss a fair coin, the probability of getting heads is 0.5, and the probability of getting tails is also 0.5.
An event is a specific outcome or set of outcomes of an experiment. For example, in the coin toss experiment, the event of getting heads is one possible outcome. Events can be classified as simple or compound. A simple event is an event that cannot be broken down into smaller events. A compound event is an event that consists of two or more simple events.
The sample space is the set of all possible outcomes of an experiment. It is denoted by the symbol Ω. For example, in the coin toss experiment, the sample space consists of two possible outcomes: heads and tails.
A probability distribution is a function that assigns probabilities to each possible outcome in the sample space. It describes the likelihood of each outcome occurring. There are two types of probability distributions: discrete and continuous.
A discrete probability distribution is a probability distribution that assigns probabilities to a finite or countable number of outcomes. For example, the probability distribution of rolling a fair six-sided die is a discrete probability distribution, as there are only six possible outcomes.
A continuous probability distribution is a probability distribution that assigns probabilities to an uncountable number of outcomes. For example, the probability distribution of the height of adults in a population is a continuous probability distribution, as height can take on any value within a certain range.
## Exercise
Consider the following experiment: rolling a fair six-sided die. Determine the sample space and the probability distribution for this experiment.
### Solution
The sample space for this experiment is {1, 2, 3, 4, 5, 6}, as these are the possible outcomes of rolling the die. The probability distribution is as follows:
P(1) = 1/6
P(2) = 1/6
P(3) = 1/6
P(4) = 1/6
P(5) = 1/6
P(6) = 1/6
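As a quick illustration (a minimal sketch in plain Python, with variable names chosen here for convenience), the sample space and this probability distribution can be represented as a dictionary, and we can confirm that the probabilities are non-negative and sum to 1:

```python
# Sample space and probability distribution for a fair six-sided die
sample_space = [1, 2, 3, 4, 5, 6]
distribution = {outcome: 1/6 for outcome in sample_space}

print(distribution[3])             # P(3) = 0.1666...
print(sum(distribution.values()))  # 1.0, up to floating-point rounding
```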
# Discrete and continuous distributions
A discrete probability distribution is a probability distribution that assigns probabilities to a finite or countable number of outcomes. The probabilities assigned to each outcome must satisfy two conditions: they must be non-negative, and the sum of all probabilities must equal 1.
The probability mass function (PMF) is used to describe the probabilities of each outcome in a discrete probability distribution. It gives the probability that a random variable takes on a specific value. The PMF is denoted by P(X = x), where X is the random variable and x is a specific value.
For example, consider the experiment of rolling a fair six-sided die. The random variable X represents the outcome of the roll. The PMF for this experiment is:
P(X = 1) = 1/6
P(X = 2) = 1/6
P(X = 3) = 1/6
P(X = 4) = 1/6
P(X = 5) = 1/6
P(X = 6) = 1/6
A continuous probability distribution is a probability distribution that assigns probabilities to an uncountable number of outcomes. Unlike discrete probability distributions, which assign probabilities to specific values, continuous probability distributions assign probabilities to intervals of values.
The probability density function (PDF) is used to describe the probabilities of intervals in a continuous probability distribution. It gives the probability that a random variable falls within a certain interval. The PDF is denoted by f(x), where x is a specific value or interval.
For example, consider the experiment of measuring the height of adults in a population. The random variable X represents the height. The PDF for this experiment is a function that describes the likelihood of a person having a certain height within a range of values.
## Exercise
Consider the following experiment: flipping a fair coin. Determine whether the probability distribution for this experiment is discrete or continuous. If it is discrete, provide the PMF. If it is continuous, provide the PDF.
### Solution
The probability distribution for flipping a fair coin is discrete, as there are only two possible outcomes: heads or tails. The PMF for this experiment is:
P(Heads) = 0.5
P(Tails) = 0.5
# Properties of random variables
Random variables are a fundamental concept in probability theory. They are used to model the outcomes of random experiments and can take on different values based on the outcome of the experiment. Several properties describe the behavior of a random variable:
1. Range: The range of a random variable is the set of all possible values it can take on. For example, if we have a random variable X representing the outcome of rolling a fair six-sided die, the range of X is {1, 2, 3, 4, 5, 6}.
2. Probability distribution: The probability distribution of a random variable describes the likelihood of each possible outcome. It assigns probabilities to each value in the range of the random variable. For example, the probability distribution of X for rolling a fair six-sided die is {1/6, 1/6, 1/6, 1/6, 1/6, 1/6}, as each outcome has an equal probability of occurring.
3. Expected value: The expected value of a random variable is a measure of its central tendency. It represents the average value that the random variable is expected to take on over the long run. The expected value is denoted by E(X). For example, the expected value of X for rolling a fair six-sided die is (1/6) * 1 + (1/6) * 2 + (1/6) * 3 + (1/6) * 4 + (1/6) * 5 + (1/6) * 6 = 3.5.
4. Variance: The variance of a random variable measures the spread or dispersion of its values around the expected value. It is denoted by Var(X). For example, the variance of X for rolling a fair six-sided die is ((1-3.5)^2 + (2-3.5)^2 + (3-3.5)^2 + (4-3.5)^2 + (5-3.5)^2 + (6-3.5)^2) / 6 = 2.92.
5. Standard deviation: The standard deviation of a random variable is the square root of its variance. It is denoted by SD(X). For example, the standard deviation of X for rolling a fair six-sided die is sqrt(2.92) = 1.71.
These properties of random variables are important for understanding and analyzing probability distributions. They provide insights into the behavior and characteristics of random experiments.
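The following short sketch (plain Python, for illustration only) computes the expected value, variance, and standard deviation of the die roll directly from its probability distribution, reproducing the numbers above:

```python
import math

# Probability distribution of a fair six-sided die
pmf = {x: 1/6 for x in range(1, 7)}

expected_value = sum(x * p for x, p in pmf.items())                    # 3.5
variance = sum((x - expected_value) ** 2 * p for x, p in pmf.items())  # about 2.92
std_dev = math.sqrt(variance)                                          # about 1.71

print(expected_value, variance, std_dev)
```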
## Exercise
Consider the following experiment: flipping a fair coin. Let X be a random variable that represents the outcome of the experiment, where X = 1 if the outcome is heads and X = 0 if the outcome is tails. Determine the range, probability distribution, expected value, variance, and standard deviation of X.
### Solution
The range of X is {0, 1}.
The probability distribution of X is {0.5, 0.5}.
The expected value of X is (0.5 * 0) + (0.5 * 1) = 0.5.
The variance of X is ((0-0.5)^2 * 0.5) + ((1-0.5)^2 * 0.5) = 0.25.
The standard deviation of X is sqrt(0.25) = 0.5.
# Probability density and mass functions
Probability density functions (PDFs) and probability mass functions (PMFs) are mathematical functions that describe the probability distribution of a random variable.
A probability density function (PDF) is used to describe the probability distribution of a continuous random variable. It assigns probabilities to intervals of values rather than individual values. The PDF is denoted by f(x) and satisfies the following properties:
1. f(x) ≥ 0 for all x.
2. The total area under the PDF curve is equal to 1.
A probability mass function (PMF) is used to describe the probability distribution of a discrete random variable. It assigns probabilities to individual values of the random variable. The PMF is denoted by P(X = x) and satisfies the following properties:
1. P(X = x) ≥ 0 for all x.
2. The sum of all probabilities in the PMF is equal to 1.
The PDF and PMF can be used to calculate probabilities and expected values of random variables. For continuous random variables, the probability of an event occurring within a certain interval can be calculated by integrating the PDF over that interval. For discrete random variables, the probability of a specific value occurring can be obtained directly from the PMF.
Consider a continuous random variable X that follows a normal distribution with mean µ and standard deviation σ. The PDF of X is given by:
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
This PDF describes the probability distribution of X and can be used to calculate probabilities and expected values associated with X.
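As a sanity check, the formula above can be evaluated directly in code. The sketch below (illustrative only; the parameter values are arbitrary) defines the normal PDF and evaluates it at a point:

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density: f(x) = 1 / sqrt(2*pi*sigma^2) * exp(-(x - mu)^2 / (2*sigma^2))."""
    coefficient = 1.0 / math.sqrt(2 * math.pi * sigma ** 2)
    return coefficient * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# The standard normal density at x = 0 is about 0.3989
print(normal_pdf(0.0, mu=0.0, sigma=1.0))
```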
## Exercise
Consider a discrete random variable Y that follows a binomial distribution with parameters n = 10 and p = 0.3. Determine the PMF of Y.
### Solution
The PMF of Y is given by:
$$P(Y = k) = \binom{n}{k}p^k(1-p)^{n-k}$$
where k is the number of successes in n trials. In this case, n = 10 and p = 0.3. Plugging in these values, we get:
$$P(Y = k) = \binom{10}{k}(0.3)^k(0.7)^{10-k}$$
for k = 0, 1, 2, ..., 10.
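A brief sketch of this PMF (standard-library Python; `math.comb` requires Python 3.8 or later), which also checks that the probabilities sum to 1:

```python
from math import comb

def binomial_pmf(k, n=10, p=0.3):
    # P(Y = k) = C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

pmf = [binomial_pmf(k) for k in range(11)]
print(pmf[3])    # P(Y = 3) is about 0.2668, the most likely value
print(sum(pmf))  # approximately 1.0
```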
# Expected value and variance
The expected value and variance are important measures of the central tendency and spread of a random variable, respectively. They provide valuable information about the distribution of a random variable.
The expected value of a random variable X, denoted by E(X) or µ, is a measure of the average value of X. It is calculated by taking the weighted average of all possible values of X, where the weights are the probabilities associated with each value. Mathematically, the expected value is given by:
$$E(X) = \sum xP(X = x)$$
for discrete random variables, and
$$E(X) = \int xf(x)dx$$
for continuous random variables, where f(x) is the probability density function (PDF) of X.
The variance of a random variable X, denoted by Var(X) or σ^2, measures the spread or dispersion of X around its expected value. It is calculated by taking the weighted average of the squared deviations of X from its expected value. Mathematically, the variance is given by:
$$Var(X) = E[(X - E(X))^2]$$
or
$$Var(X) = E(X^2) - (E(X))^2$$
Consider a discrete random variable Y that follows a binomial distribution with parameters n = 10 and p = 0.3. Calculate the expected value and variance of Y.
The expected value of Y is given by:
$$E(Y) = \sum yP(Y = y)$$
where y is the number of successes in n trials. In this case, n = 10 and p = 0.3. Plugging in these values, we get:
$$E(Y) = \sum_{y=0}^{10} y \binom{10}{y}(0.3)^y(0.7)^{10-y}$$
Calculating this sum, we find that E(Y) = 3.
The variance of Y is given by:
$$Var(Y) = E(Y^2) - (E(Y))^2$$
To calculate E(Y^2), we use the formula:
$$E(Y^2) = \sum y^2P(Y = y)$$
Plugging in the values, we get:
$$E(Y^2) = \sum_{y=0}^{10} y^2 \binom{10}{y}(0.3)^y(0.7)^{10-y}$$
Calculating this sum, we find that E(Y^2) = 11.1.
Therefore, the variance of Y is:
$$Var(Y) = E(Y^2) - (E(Y))^2 = 11.1 - 3^2 = 11.1 - 9 = 2.1$$
This agrees with the shortcut formula for a binomial distribution, Var(Y) = np(1 − p) = 10 × 0.3 × 0.7 = 2.1.
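These values can be checked numerically. Here is a minimal sketch that sums over the PMF of Y:

```python
from math import comb

n, p = 10, 0.3
pmf = {k: comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)}

e_y = sum(k * prob for k, prob in pmf.items())        # 3.0
e_y2 = sum(k ** 2 * prob for k, prob in pmf.items())  # 11.1
var_y = e_y2 - e_y ** 2                               # 2.1, which equals n * p * (1 - p)

print(e_y, e_y2, var_y)
```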
## Exercise
Consider a continuous random variable X that follows a uniform distribution on the interval [0, 1]. Calculate the expected value and variance of X.
### Solution
The expected value of X is given by:
$$E(X) = \int_0^1 xf(x)dx$$
where f(x) is the probability density function (PDF) of X. In this case, the PDF of X is:
$$f(x) = \begin{cases}
1 & \text{if } 0 \leq x \leq 1 \\
0 & \text{otherwise}
\end{cases}$$
Plugging in the values, we get:
$$E(X) = \int_0^1 x \cdot 1 dx = \frac{1}{2}$$
The variance of X is given by:
$$Var(X) = E(X^2) - (E(X))^2$$
To calculate E(X^2), we use the formula:
$$E(X^2) = \int_0^1 x^2f(x)dx$$
Plugging in the values, we get:
$$E(X^2) = \int_0^1 x^2 \cdot 1 dx = \frac{1}{3}$$
Therefore, the variance of X is:
$$Var(X) = E(X^2) - (E(X))^2 = \frac{1}{3} - \left(\frac{1}{2}\right)^2 = \frac{1}{12}$$
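A quick Monte Carlo check (an illustrative sketch; results vary slightly from run to run) confirms these values:

```python
import random

samples = [random.random() for _ in range(100_000)]  # uniform draws on [0, 1]

mean = sum(samples) / len(samples)
variance = sum((x - mean) ** 2 for x in samples) / len(samples)

print(mean)      # close to 1/2
print(variance)  # close to 1/12, about 0.0833
```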
# Law of large numbers
The law of large numbers is a fundamental concept in probability theory. It states that as the number of trials or observations increases, the average of those trials or observations will converge to the expected value of the random variable.
In other words, if we repeat an experiment or observation many times, the average of the outcomes will become closer and closer to the expected value. This is true regardless of the initial values or outcomes of the experiment.
Formally, the law of large numbers can be stated as follows:
$$\lim_{n \to \infty} \frac{X_1 + X_2 + \ldots + X_n}{n} = E(X)$$
where $X_1, X_2, \ldots, X_n$ are independent and identically distributed random variables, and $E(X)$ is the expected value of $X$.
Consider a fair six-sided die. The expected value of a single roll of the die is $\frac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = \frac{7}{2}$.
Now, let's simulate rolling the die 100 times and calculate the average of the outcomes.
```python
import random

outcomes = []
for _ in range(100):
    outcome = random.randint(1, 6)
    outcomes.append(outcome)

average = sum(outcomes) / len(outcomes)
print(average)
```
If you run this code multiple times, you will see that the average of the outcomes is often close to $\frac{7}{2}$, even though individual outcomes may vary.
## Exercise
Simulate rolling a fair six-sided die 1000 times. Calculate the average of the outcomes and compare it to the expected value of $\frac{7}{2}$.
### Solution
```python
import random

outcomes = []
for _ in range(1000):
    outcome = random.randint(1, 6)
    outcomes.append(outcome)

average = sum(outcomes) / len(outcomes)
print(average)
```
If you run this code multiple times, you will see that the average of the outcomes is often even closer to $\frac{7}{2}$ than in the previous exercise, demonstrating the law of large numbers.
# Central Limit Theorem and its applications
The central limit theorem is another fundamental concept in probability theory. It states that the sum or average of a large number of independent and identically distributed random variables will have an approximately normal distribution, regardless of the shape of the original distribution.
In other words, if we have a large sample size and we calculate the sum or average of the observations, the distribution of those sums or averages will be approximately normal. This is true even if the original distribution is not normal.
Formally, the central limit theorem can be stated as follows:
Let $X_1, X_2, \ldots, X_n$ be independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2$. Then, as $n$ approaches infinity, the distribution of the standardized sum $\frac{X_1 + X_2 + \ldots + X_n - n\mu}{\sigma\sqrt{n}}$ (equivalently, of the standardized average $\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$) approaches a standard normal distribution.
Suppose we have a population of students and we want to know the average height of the students. We take a random sample of 100 students and measure their heights. The heights of the students may not follow a normal distribution, but by the central limit theorem, the distribution of the sample mean will be approximately normal.
```python
import random

heights = []
for _ in range(100):
    height = random.normalvariate(170, 10)  # Assume mean height is 170 cm and standard deviation is 10 cm
    heights.append(height)

average = sum(heights) / len(heights)
print(average)
```
If you run this code many times and collect the resulting averages, you will see that the distribution of those sample means is approximately normal, even if the distribution of individual heights in the population is not.
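The effect is easier to see when the underlying distribution is clearly non-normal. The following sketch (illustrative; it assumes matplotlib is installed, as in the later examples) draws many sample means from a heavily skewed exponential distribution and plots their histogram, which still looks approximately normal:

```python
import random
import matplotlib.pyplot as plt

sample_means = []
for _ in range(2000):
    # Each sample comes from an exponential distribution with mean 1, which is strongly skewed
    sample = [random.expovariate(1.0) for _ in range(50)]
    sample_means.append(sum(sample) / len(sample))

plt.hist(sample_means, bins=30)
plt.xlabel('Sample Mean')
plt.ylabel('Frequency')
plt.title('Sample Means of an Exponential Distribution')
plt.show()
```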
## Exercise
Simulate measuring the heights of 1000 students from the same population as in the previous exercise. Calculate the average height of the students and compare it to the expected value of 170 cm.
### Solution
```python
import random

heights = []
for _ in range(1000):
    height = random.normalvariate(170, 10)  # Assume mean height is 170 cm and standard deviation is 10 cm
    heights.append(height)

average = sum(heights) / len(heights)
print(average)
```
If you run this code multiple times, you will see that the average height of the students is consistently close to 170 cm, and collecting those averages across many runs would again produce an approximately normal distribution, as the central limit theorem predicts.
# Joint and conditional distributions
In probability theory, we often deal with multiple random variables and their relationships. The joint distribution of two or more random variables describes the probability of their combined outcomes. The conditional distribution of a random variable given another random variable describes the probability of its outcome given a specific value of the other random variable.
The joint distribution of two discrete random variables $X$ and $Y$ can be represented by a joint probability mass function $P(X=x, Y=y)$. This function assigns a probability to each pair of possible outcomes $(x, y)$.
The conditional distribution of $X$ given $Y=y$ can be represented by a conditional probability mass function $P(X=x|Y=y)$. This function gives the probability of $X=x$ given that $Y=y$. It can be calculated using the formula:
$$P(X=x|Y=y) = \frac{P(X=x, Y=y)}{P(Y=y)}$$
where $P(Y=y) \neq 0$.
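This formula translates directly into code. The sketch below (a minimal illustration; the joint probabilities are made up) computes a conditional probability from a joint distribution stored as a dictionary:

```python
# Joint PMF P(X = x, Y = y) stored as {(x, y): probability}; values are made up for illustration
joint_pmf = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def conditional_prob(x, y):
    """Return P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)."""
    p_y = sum(p for (xi, yi), p in joint_pmf.items() if yi == y)  # marginal P(Y = y)
    return joint_pmf.get((x, y), 0.0) / p_y

print(conditional_prob(0, 1))  # 0.2 / (0.2 + 0.4) = 1/3
```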
Suppose we have two dice, one red and one blue. We roll both dice and record the sum of the numbers on the two dice. The joint distribution of the sum $S$ and the outcomes of the individual dice $X$ and $Y$ can be represented by the following table:
| S | X | Y | P(X=x, Y=y) |
|----|----|----|-------------|
| 2 | 1 | 1 | 1/36 |
| 3 | 1 | 2 | 1/36 |
| 4 | 1 | 3 | 1/36 |
| 5 | 1 | 4 | 1/36 |
| 6 | 1 | 5 | 1/36 |
| 7 | 1 | 6 | 1/36 |
| 3 | 2 | 1 | 1/36 |
| 4 | 2 | 2 | 1/36 |
| 5 | 2 | 3 | 1/36 |
| 6 | 2 | 4 | 1/36 |
| 7 | 2 | 5 | 1/36 |
| 8 | 2 | 6 | 1/36 |
| ...| ...| ...| ... |
The conditional distribution of $X$ given $Y=3$ can be calculated as follows:
$$P(X=1|Y=3) = \frac{P(X=1, Y=3)}{P(Y=3)} = \frac{1/36}{1/6} = \frac{1}{6}$$
## Exercise
Consider the joint distribution of two discrete random variables $X$ and $Y$ given by the following table:
| X | Y | P(X=x, Y=y) |
|----|----|-------------|
| 1 | 1 | 1/4 |
| 1 | 2 | 1/8 |
| 2 | 1 | 1/8 |
| 2 | 2 | 1/2 |
Calculate the following conditional probabilities:
- $P(X=1|Y=2)$
- $P(Y=1|X=2)$
### Solution
$$P(X=1|Y=2) = \frac{P(X=1, Y=2)}{P(Y=2)} = \frac{1/8}{1/8 + 1/2} = \frac{1/8}{5/8} = \frac{1}{5}$$
$$P(Y=1|X=2) = \frac{P(X=2, Y=1)}{P(X=2)} = \frac{1/8}{1/8 + 1/2} = \frac{1/8}{5/8} = \frac{1}{5}$$
# Transformations of random variables
In probability theory, we often need to calculate the distribution of a function of one or more random variables. This is known as a transformation of random variables. The distribution of the transformed random variable can be derived from the distribution of the original random variable(s).
For a single random variable $X$, if we have the cumulative distribution function (CDF) $F_X(x)$, we can find the CDF of the transformed random variable $Y = g(X)$ using the formula:
$$F_Y(y) = P(Y \leq y) = P(g(X) \leq y) = P(X \leq g^{-1}(y)) = F_X(g^{-1}(y))$$
where $g^{-1}(y)$ is the inverse function of $g(x)$. (This form assumes $g$ is strictly increasing; if $g$ is decreasing or not one-to-one, the event $\{g(X) \leq y\}$ must be rewritten accordingly.)
For multiple random variables $X_1, X_2, \ldots, X_n$, if we have the joint cumulative distribution function (CDF) $F_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)$, we can find the joint CDF of the transformed random variables $Y_1 = g_1(X_1), Y_2 = g_2(X_2), \ldots, Y_n = g_n(X_n)$ using the formula:
$$F_{Y_1, Y_2, \ldots, Y_n}(y_1, y_2, \ldots, y_n) = P(Y_1 \leq y_1, Y_2 \leq y_2, \ldots, Y_n \leq y_n) = P(g_1(X_1) \leq y_1, g_2(X_2) \leq y_2, \ldots, g_n(X_n) \leq y_n)$$
$$= P(X_1 \leq g_1^{-1}(y_1), X_2 \leq g_2^{-1}(y_2), \ldots, X_n \leq g_n^{-1}(y_n)) = F_{X_1, X_2, \ldots, X_n}(g_1^{-1}(y_1), g_2^{-1}(y_2), \ldots, g_n^{-1}(y_n))$$
where $g_1^{-1}(y_1), g_2^{-1}(y_2), \ldots, g_n^{-1}(y_n)$ are the inverse functions of $g_1(x_1), g_2(x_2), \ldots, g_n(x_n)$.
Suppose we have a random variable $X$ with a uniform distribution on the interval $[0, 1]$. We want to find the distribution of the transformed random variable $Y = X^2$.
The cumulative distribution function (CDF) of $X$ is given by:
$$F_X(x) = \begin{cases}
0 & \text{if } x < 0 \\
x & \text{if } 0 \leq x \leq 1 \\
1 & \text{if } x > 1 \\
\end{cases}$$
The inverse function of $g(x) = x^2$ is $g^{-1}(y) = \sqrt{y}$.
Using the formula for the CDF of the transformed random variable, we can find the CDF of $Y$:
$$F_Y(y) = F_X(g^{-1}(y)) = F_X(\sqrt{y}) = \begin{cases}
0 & \text{if } y < 0 \\
\sqrt{y} & \text{if } 0 \leq y \leq 1 \\
1 & \text{if } y > 1 \\
\end{cases}$$
The probability density function (PDF) of $Y$ can be obtained by differentiating the CDF:
$$f_Y(y) = \frac{d}{dy} F_Y(y) = \begin{cases}
0 & \text{if } y < 0 \\
\frac{1}{2\sqrt{y}} & \text{if } 0 \leq y \leq 1 \\
0 & \text{if } y > 1 \\
\end{cases}$$
The distribution of $Y$ is not uniform: the density $\frac{1}{2\sqrt{y}}$ concentrates mass near 0, and it is the Beta(1/2, 1) distribution on the interval $[0, 1]$.
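A short simulation (illustrative sketch) makes this concrete: squaring uniform samples produces values that pile up near 0, and the fraction of simulated values below any threshold y is close to the derived CDF $\sqrt{y}$:

```python
import random

ys = [random.random() ** 2 for _ in range(100_000)]  # Y = X^2 with X uniform on [0, 1]

# The empirical CDF at y = 0.25 should be close to sqrt(0.25) = 0.5
empirical_cdf = sum(1 for y in ys if y <= 0.25) / len(ys)
print(empirical_cdf)  # approximately 0.5
```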
## Exercise
Consider a random variable $X$ with a standard normal distribution. Find the distribution of the transformed random variable $Y = e^X$.
### Solution
The cumulative distribution function (CDF) of $X$ is given by $F_X(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-\frac{t^2}{2}} dt$.
The inverse function of $g(x) = e^x$ is $g^{-1}(y) = \ln(y)$.
Using the formula for the CDF of the transformed random variable, we can find the CDF of $Y$:
$$F_Y(y) = F_X(g^{-1}(y)) = F_X(\ln(y)) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\ln(y)} e^{-\frac{t^2}{2}} dt$$
The probability density function (PDF) of $Y$ can be obtained by differentiating the CDF:
$$f_Y(y) = \frac{d}{dy} F_Y(y) = \frac{1}{\sqrt{2\pi}y} e^{-\frac{\ln^2(y)}{2}}$$
The distribution of $Y$ is a log-normal distribution.
# Sampling distributions and hypothesis testing
In statistics, a sampling distribution is the probability distribution of a statistic based on a random sample. It describes the variability of the statistic when different random samples are taken from the same population.
Sampling distributions are important because they allow us to make inferences about the population based on the sample. They also help us understand the behavior of different statistics and test hypotheses.
The sampling distribution of a statistic depends on the population distribution, the sample size, and the sampling method. In general, as the sample size increases, the sampling distribution becomes more concentrated around the population parameter.
One of the most important sampling distributions is the sampling distribution of the sample mean. According to the Central Limit Theorem, if the sample size is large enough, the sampling distribution of the sample mean will be approximately normal, regardless of the shape of the population distribution.
The mean of the sampling distribution of the sample mean is equal to the population mean, and the standard deviation is equal to the population standard deviation divided by the square root of the sample size.
Suppose we have a population of 1000 students and we want to estimate the average height of the students. We take a random sample of 100 students and calculate the sample mean height.
We repeat this process many times and create a sampling distribution of the sample mean. The sampling distribution will be approximately normal, with a mean close to the population mean and a standard deviation equal to the population standard deviation divided by the square root of the sample size.
## Exercise
Consider a population with a mean of 50 and a standard deviation of 10. Take a random sample of size 25 from this population and calculate the sample mean. Repeat this process 100 times and create a sampling distribution of the sample mean.
### Solution
To create the sampling distribution, we need to calculate the sample mean for each random sample. We can then plot a histogram of the sample means to visualize the distribution.
```python
import numpy as np
import matplotlib.pyplot as plt

population_mean = 50
population_std = 10
sample_size = 25
num_samples = 100

sample_means = []
for _ in range(num_samples):
    sample = np.random.normal(population_mean, population_std, sample_size)
    sample_mean = np.mean(sample)
    sample_means.append(sample_mean)

plt.hist(sample_means, bins=10)
plt.xlabel('Sample Mean')
plt.ylabel('Frequency')
plt.title('Sampling Distribution of the Sample Mean')
plt.show()
```
The histogram should show a normal distribution centered around the population mean of 50. The standard deviation of the sampling distribution should be equal to the population standard deviation divided by the square root of the sample size, which is 10/sqrt(25) = 2.
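A vectorized variant of the same experiment (an illustrative sketch using NumPy) also checks the standard deviation of the sample means against the theoretical value of 2:

```python
import numpy as np

population_mean, population_std = 50, 10
sample_size, num_samples = 25, 1000

# Each row is one random sample of 25 observations
samples = np.random.normal(population_mean, population_std, size=(num_samples, sample_size))
sample_means = samples.mean(axis=1)

print(sample_means.mean())  # close to the population mean of 50
print(sample_means.std())   # close to 10 / sqrt(25) = 2
```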
# Applications of probability in real-world scenarios
1. Risk Assessment: Probability is often used in risk assessment to quantify the likelihood of certain events occurring and their potential impact. For example, in the insurance industry, probabilities are used to determine insurance premiums based on the risk of certain events, such as accidents or natural disasters.
2. Finance and Investment: Probability is also used in finance and investment to assess the potential risks and returns of different investment options. Probability models, such as the Black-Scholes model, are used to calculate the value of financial derivatives and options.
3. Quality Control: Probability is used in quality control to determine the likelihood of defects or errors occurring in a manufacturing process. By analyzing the probability distribution of defects, companies can identify areas for improvement and implement measures to reduce defects.
4. Sports Analytics: Probability is increasingly being used in sports analytics to predict game outcomes, player performance, and other statistical measures. By analyzing historical data and using probability models, teams and analysts can make informed decisions and gain a competitive edge.
5. Epidemiology: Probability is used in epidemiology to model the spread of infectious diseases and assess the effectiveness of interventions. By analyzing the probability of transmission and the impact of different control measures, public health officials can make informed decisions to prevent and control disease outbreaks.
6. Weather Forecasting: Probability is used in weather forecasting to estimate the likelihood of different weather conditions occurring. By analyzing historical weather data and using probability models, meteorologists can make predictions about future weather patterns and issue forecasts and warnings.
7. Decision Making: Probability is used in decision making under uncertainty to assess the likelihood of different outcomes and make optimal choices. By quantifying the probabilities of different scenarios and their potential outcomes, decision makers can evaluate the risks and benefits of different options.
Let's consider an example of how probability can be applied in risk assessment. Suppose you are an insurance company and you want to determine the probability of a car accident occurring for a specific driver. You have historical data on the driver's driving record, age, gender, and other relevant factors.
Using this data, you can build a probability model that takes into account these factors and calculates the likelihood of a car accident occurring for this driver. This probability can then be used to determine the insurance premium for the driver, with higher probabilities resulting in higher premiums.
## Exercise
Consider a scenario where you are a quality control manager at a manufacturing company. You want to assess the probability of a defect occurring in a specific production line. You have collected data on the number of defects in the past month and the total number of products produced.
Using this data, calculate the probability of a defect occurring in the production line.
### Solution
To calculate the probability of a defect occurring, divide the number of defects by the total number of products produced.
For example, if there were 10 defects out of 1000 products produced, the probability of a defect occurring would be 10/1000 = 0.01, or 1%.
```python
defects = 10
total_products = 1000
probability = defects / total_products
probability
```
The probability of a defect occurring in the production line is 0.01, or 1%.
F-crystal
In algebraic geometry, F-crystals are objects introduced by Mazur (1972) that capture some of the structure of crystalline cohomology groups. The letter F stands for Frobenius, indicating that F-crystals have an action of Frobenius on them. F-isocrystals are crystals "up to isogeny".
F-crystals and F-isocrystals over perfect fields
Suppose that k is a perfect field, with ring of Witt vectors W and let K be the quotient field of W, with Frobenius automorphism σ.
Over the field k, an F-crystal is a free module M of finite rank over the ring W of Witt vectors of k, together with a σ-linear injective endomorphism of M. An F-isocrystal is defined in the same way, except that M is a module for the quotient field K of W rather than W.
Dieudonné–Manin classification theorem
The Dieudonné–Manin classification theorem was proved by Dieudonné (1955) and Manin (1963). It describes the structure of F-isocrystals over an algebraically closed field k. The category of such F-isocrystals is abelian and semisimple, so every F-isocrystal is a direct sum of simple F-isocrystals. The simple F-isocrystals are the modules E_{s/r} where r and s are coprime integers with r > 0. The F-isocrystal E_{s/r} has a basis over K of the form v, Fv, F^2 v, ..., F^{r−1} v for some element v, and F^r v = p^s v. The rational number s/r is called the slope of the F-isocrystal.
Over a non-algebraically closed field k the simple F-isocrystals are harder to describe explicitly, but an F-isocrystal can still be written as a direct sum of subcrystals that are isoclinic, where an F-crystal is called isoclinic if over the algebraic closure of k it is a sum of F-isocrystals of the same slope.
The Newton polygon of an F-isocrystal
The Newton polygon of an F-isocrystal encodes the dimensions of the pieces of given slope. If the F-isocrystal is a sum of isoclinic pieces with slopes s_1 < s_2 < ... and dimensions (as Witt ring modules) d_1, d_2, ..., then the Newton polygon has vertices (0,0), (x_1, y_1), (x_2, y_2), ..., where the nth line segment joining the vertices has slope s_n = (y_n − y_{n−1})/(x_n − x_{n−1}) and projection onto the x-axis of length d_n = x_n − x_{n−1}.
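For example (an illustrative case), an F-isocrystal that is the direct sum of an isoclinic piece of slope 0 and dimension 1 and an isoclinic piece of slope 1/2 and dimension 2 has Newton polygon with vertices (0,0), (1,0) and (3,1): the first edge has slope 0 and horizontal length 1, and the second edge has slope 1/2 and horizontal length 2.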
The Hodge polygon of an F-crystal
The Hodge polygon of an F-crystal M encodes the structure of M/FM considered as a module over the Witt ring. More precisely, since the Witt ring is a principal ideal domain, the module M/FM can be written as a direct sum of indecomposable modules of lengths n_1 ≤ n_2 ≤ ..., and the Hodge polygon then has vertices (0,0), (1, n_1), (2, n_1 + n_2), ...
While the Newton polygon of an F-crystal depends only on the corresponding isocrystal, it is possible for two F-crystals corresponding to the same F-isocrystal to have different Hodge polygons. The Hodge polygon has edges with integer slopes, while the Newton polygon has edges with rational slopes.
Isocrystals over more general schemes
Suppose that A is a complete discrete valuation ring of characteristic 0 whose residue field k is perfect of characteristic p > 0. An affine enlargement of a scheme X_0 over k consists of a torsion-free A-algebra B and an ideal I of B such that B is complete in the I-adic topology and the image of I is nilpotent in B/pB, together with a morphism from Spec(B/I) to X_0. A convergent isocrystal over a k-scheme X_0 consists of a module over B⊗Q for every affine enlargement B that is compatible with maps between affine enlargements (Faltings 1990).
An F-isocrystal (short for Frobenius isocrystal) is an isocrystal together with an isomorphism to its pullback under a Frobenius morphism.
References
• Berthelot, Pierre; Ogus, Arthur (1983), "F-isocrystals and de Rham cohomology. I", Inventiones Mathematicae, 72 (2): 159–199, doi:10.1007/BF01389319, ISSN 0020-9910, MR 0700767
• Crew, Richard (1987), "F-isocrystals and p-adic representations", Algebraic geometry, Bowdoin, 1985 (Brunswick, Maine, 1985), Proc. Sympos. Pure Math., vol. 46, Providence, R.I.: American Mathematical Society, pp. 111–138, doi:10.1090/pspum/046.2/927977, ISBN 9780821814802, MR 0927977
• de Shalit, Ehud (2012), F-isocrystals (PDF)
• Dieudonné, Jean (1955), "Lie groups and Lie hyperalgebras over a field of characteristic p>0. IV", American Journal of Mathematics, 77 (3): 429–452, doi:10.2307/2372633, ISSN 0002-9327, JSTOR 2372633, MR 0071718
• Faltings, Gerd (1990), "F-isocrystals on open varieties: results and conjectures", The Grothendieck Festschrift, Vol. II, Progr. Math., vol. 87, Boston, MA: Birkhäuser Boston, pp. 219–248, MR 1106900
• Grothendieck, A. (1966), Letter to J. Tate (PDF).
• Manin, Ju. I. (1963), "Theory of commutative formal groups over fields of finite characteristic", Akademiya Nauk SSSR I Moskovskoe Matematicheskoe Obshchestvo. Uspekhi Matematicheskikh Nauk, 18 (6): 3–90, doi:10.1070/RM1963v018n06ABEH001142, ISSN 0042-1316, MR 0157972
• Mazur, B. (1972), "Frobenius and the Hodge filtration", Bull. Amer. Math. Soc., 78 (5): 653–667, doi:10.1090/S0002-9904-1972-12976-8, MR 0330169
• Ogus, Arthur (1984), "F-isocrystals and de Rham cohomology. II. Convergent isocrystals", Duke Mathematical Journal, 51 (4): 765–850, doi:10.1215/S0012-7094-84-05136-6, ISSN 0012-7094, MR 0771383
Direct Compensation
rewards that are directly linked to performance on the job are often referred to as
Indirect Compensation
compensation that is given simply for being a member of the organization is often referred to as
attract, retain, and motivate employees
HR functions and activities are important largely because they serve to
refers to the principles, rules, and values that individuals use in deciding what is right or wrong.
the values and principles that are used to evaluate whether the collective behavior of members of an organization are appropriate.
Consequentialist Theories
focus on the consequences of managers' actions.
Deontological Theories
focus on the rules or duties of managers
factors that the organization controls. For example, the organization's culture, product development, mission, and strategy
External Environment
includes those factors that are outside of the organization's control. While management does not govern these influences, the company must prepare for and respond to them all the same.
A "flat" structure
is one that has fewer managers or a smaller management hierarchy.
is a controlling rule, example, or guide which provides a framework for other judges to follow in deciding later cases.
includes financial and non-financial recognition of diversity ROI initiatives as well as relevant feedback.
Key deliverables that emphasize the role of diversity in the organization's overall strategy;
Utilization of diversity in the development of a high-performance work environment;
Ways in which the corporate culture is aligned with the organization's strategy; and
The efficiency of the diversity deliverables.
What are the 4 measures for the diversity scorecard?
Rehabilitation Act of 1973
Sections 501 and 505, which prohibit discrimination against a qualified individual with a disability in the federal government.
which allows for compensatory and punitive damages for violations of Title VII.
ancestry
racial or ethnic background
such as a person's color, hair, facial features, height, and weight
race-associated-illness
for example, diabetes, obesity, and sickle-cell anemia affect some races more than others
dress, grooming practices, accent or manner of speech
a belief that a person is a member of a particular racial group
a person's association with someone of a particular race (e.g., spouse, relatives, friends/associates of a certain race).
judgmental forecasting
is done by experts who assist in preparing the forecasts.
a projection of future demand is based on a past relationship between the organization's employment level and a variable related to employment, such as sales.
Managerial estimates
are typically made by top management, which means they are a top-down approach. They can also begin at lower levels, and be passed up for refinement (a bottom-up approach).
a large number of experts take turns presenting a forecast statement and assumptions. An intermediary passes each expert's forecast and assumptions to the others, who then make revisions to their own forecasts. This process continues until a viable composite forecast emerges.
Nominal Grouping Technique
several people sit around a conference table and independently list their ideas on a sheet of paper. After ten to twenty minutes, they take turns expressing their ideas to the group. As these ideas are presented, they are recorded on larger sheets of paper so that everyone can see all the ideas and refer to them in later parts of the session.
analysis is an extension of simple linear regression analysis. However, in ------, instead of relating employment to just one variable, multiple variables are used.
productivity ratios
historical data are used to examine the past level of a productivity index.
Human resource ratios
past HR data are examined to determine historical relationships among employees in various jobs or job categories. Regression analysis can then be used to project key group requirements for various job categories.
past staffing levels are used to project future HR needs. Past staffing levels are examined in order to isolate seasonal and cyclical variations, long-term trends, and random movements.
Stochastic ratios
the likelihood of landing a series of contracts is combined with the HR requirements for each contract, in order to estimate expected staffing requirements.
replacement planning
uses charts that show the names of the current occupants of positions in the organization and the names of likely replacements.
is similar to replacement planning, except that succession planning tends to be longer term, more developmental, and more flexible.
judgemental and statistical
There are basically two techniques to help forecast internal labor supply
redundancy planning
is essentially HR planning associated with the process of laying off employees who are no longer needed.
is important not only as a means of control, but also as a method for evaluating plans and programs and making adjustments.
strategic business planning
determines the organization's goals, future products and services, growth rate, location, legal environment, and structure.
Job/role planning
which follows strategic business planning, specifies what needs to be done at all levels of the organization in order to meet the firm's strategic initiatives.
HR planning
determines what types of jobs the organization needs to fill, and thus, the knowledge, skills, and abilities (KSAs) needed in job applicants.
Strategic business planning, job/role planning, and HR planning
What are the 3 planning components that recruiting programs are developed around?
individuals become applicants by walking into an organization's employment office. This method, like employee referrals, is relatively informal and inexpensive and is almost as effective as employee referrals for retaining applicants once hired.
Devoting attention to the job interview
Having a job-matching program
Carefully timing recruiting procedures
Developing policies regarding job offer acceptances
How can organizations ensure long-term retention?
Contributes to the organization's bottom-line goals
Ensures that an organization's financial investment in employees pays off
Helps fulfill hiring goals specified in affirmative action programs
Minimizes litigation with people who claim to have been rejected for discriminatory reasons
Helps hire and place job applicants according to the best interests of the organization and of the individual
Effective selection does the following:
can facilitate the organization's selection decisions by projecting when and how many such decisions will need to be made
ultimate criterion
is a theoretical construct or abstract idea that can never actually be measured. It represents a complete set of ideal factors that constitute a successful person.
Actual criterion
on the other hand, includes the measurable factors that constitute a successful person. For example, some organizations use the periodic results of performance appraisals or the number of days the individual was absent as actual criteria.
serve as a criterion for evaluating the predictive and economic utility of selection procedures.
reflects the stability of the test over time. The higher the coefficient of stability, the more reliable the measure.
Empirical Validity
refers to how much a predictor relates to a criterion (some measure of job success). It's important because it describes the linkages between two measures.
Perfect Validity
future job performance is perfectly predictable from a job applicant's score on a selection test.
content validity
estimates the relevance of a predictor as an indicator of performance, without collecting actual performance information.
no statistical correlation is involved in assessing.....
requires demonstrating that a relationship exists between a selection procedure and a psychological trait or measure
application banks
seeks information about the applicant's background and current situation. These forms are often referred to as resumes, curriculum vitae (CVs,) or biographical information.
training refers to the process of having a new worker, the apprentice, work alongside and under the guidance of an experienced technician.
Salary increases or decreases
Demotions
Promotions/transfers
Administrative Uses of Performance Appraisals
Identifying training needs
Motivating employees to improve
Providing feedback
Counseling employees
Spotting performance deficiencies
Identifying and acknowledging strengths
Developmental Uses of Performance Appraisals
HR planning will help the organization better understand how many and what type of employees the organization will need in the future. HR planning also addresses how the firm obtains and trains future human capital.
What role does the planning function of human resource management play within organizations?
because if organizations don't attract a wide range of candidates, they will be less likely to successfully fill organizational needs.
What role does the staffing function of human resource management play within organizations?
Direct (performance related)
What are the types of employee compensation?
they are important for both measuring and monitoring an employee's contribution. Performance appraisals are frequently the basis for promotions, trainings, and raises, as well as for terminating employees.
What is the role of performance appraisals?
Employees are often more concerned with the ability to grow and develop personally than they are with their direct compensation. In today's global and chaotic environment, many firms use training and development activities to remain competitive.
How does HRM enhance human potential?
physical and psychological
HRM's work on improving the workplace environment focuses on what two environments?
the rights of employees
What must management be aware of in order to maintain effective work relationships?
Essential that organizations develop and implement HRM policies with international applicability, also making them relevant to employees from diverse cultures and backgrounds.
What impact has globalization had on HR practices?
Improve quality of work life
Ensure legal compliance
Foster ethical behavior
What are the four implicit HR objectives?
work organization and design
How can HR managers make a significant contribution toward productivity improvements?
most employees today desire more autonomy and a chance to make a greater contribution to the organization.Many employers are convinced that by providing opportunities for employees to realize these aspirations, employees will be more content, and as a result,the quality of work life (QWL) within the organization will improve.
How do organizations improve the quality of work life? How have employees' attitudes regarding involvement with their work shifted?
these laws impact nearly all functions and activities with which HRM is involved, HR managers and personnel must be familiar with regulations affecting hiring and pay decisions, promotion activities, health and safety considerations, and labor relations.
Why is it important that HR managers and personnel be familiar with the laws and regulations?
Define the term "moral philosophy."
Creating a positive work environment
Hiring ethical individuals
Providing ethical training
Labeling and modeling ethical behavior
Creating a well-defined code of ethics
Establishing an open-door policy
Providing an employee assistance program (EAP)
How can HR professionals create a culture where ethical behavior is encouraged?
Define the term "business ethics."
The costs and benefits associated with HR utilization
Productivity changes (resulting from changes in technology, capital investment, the 2008-2014 recession, capital utilization, outsourcing, and government policies)
The increasing pace and complexity of social, cultural, legal, demographic, and educational changes
The symptoms of dysfunction in the workplace
Societal trends of the 21st century
What are the six major trends have been identified as influencing the work environment?
identifies people's values and assumptions about their willingness to work, their ethics, and the way they should be treated. Culture is often reflected in the company's HR policies and practices.
Internal Environment: Define "culture" and identify who shapes an organization's culture.
generally refers to the equipment and knowledge used to produce goods and services, and the definition of technology may vary greatly by industry.
Internal Environment: Define technology.
with technology and computers, close supervision of employees is unnecessary. Furthermore, technology allows work to be performed during non-traditional hours, away from physical manufacturing plants and offices.
Internal Environment: How does technology influence an organization's structure?
HR Activities
Internal Environment: Organizational size is an important factor in determining ________________.
A strong economy tends to decrease unemployment, increase wages, make recruitment more competitive, and increase the desirability of training. On the other hand, a weak economy tends to increase unemployment, diminish wage demands, make recruitment less competitive, and reduce the need for training and development of current employees. HRM has a major role in both strong and weak economies, although its priorities and functions change depending on the state of the economy.
Internal Environment: How does the economy influence HR activities?
HR activities are influenced not only by the domestic environment but also by the international external environment, including changes in economic developments throughout the world. For example, when the North American Free Trade Agreement (NAFTA) was created in 1994, the nature of work relationships among the United States, Canada, and Mexico changed dramatically. Because corporations were forced to become more competitive, HR policies and practices within many firms changed significantly.
Internal Environment: How does international competition influence HR activities?
is an acronym that stands for strengths, weaknesses, opportunities, and threats.
Define the term "SWOT".
commonly used to establish a level of understanding needed for a successful plan and can help confirm where a company stands against its competitors.
What is the purpose of a SWOT analysis?
Provide transformational leadership
Collaborate and resolve strategic challenges within the firm
Encourage real employee involvement
Empower and facilitate learning as well as change and decision-making
Design process and performance systems
Maintain a global business perspective
What are the six HR competencies?
they must acquire an understanding of the firm's business objectives and the means that must be employed to attain them. They must also have solid training in strategic planning; in-depth understanding of financial statements; familiarity with sales, marketing, and production techniques; and knowledge of how to use modern tools such as data processing and management information systems.
HR Characteristics: What knowledge of the Business and Industry must an HR professional possess?
The importance of understanding the economy becomes even more greater as HR professionals are increasingly asked to advise their firms on productivity and other issues.
HR Characteristics: Why is it important for an HR professional to have an understanding of the economy?
so they can diagnosis and solve problems. This means that they must be able to look at a problem and understand it, articulate the problem so that others can understand it (especially upper management), and then they must be able to find a solution.
HR Characteristics: Why is it important for an HR professional to have analytical abilities?
It is the the determining factor in human resource management success.
HR Characteristics: Why is the ability to influence others a critical characteristic of an HR professional?
To be a part of the decision making team HR managers must be proactive in approaching people to discover the problems that already exist.
HR Characteristics: Why is a propensity for action needed by an HR professional?
intimately involved in the structural changes of the organization. HR professionals must help develop plans and strategies, aiming to equip the company with the necessary workforce, both in quantity and quality; they must assist in the motivation and retention of employees whose organization is shrinking or rapidly expanding; and they must help manage succession by creating contingency plans for key employees within the firm.
HR Characteristics: Why is meant by "engagement?"
integrating all resources and rallying employees and colleagues in support of the organization's strategy. Changing organizational cultures requires great political skill.
HR Characteristics: Why is it important for HR professionals to have political awareness?
it should be emphasized that the HR manager must be able to balance acquiring services from the external environment as well as from management and employees. As organizations engage in corporate social responsibility and attempt to become good corporate citizens, HR professionals must be attentive to the needs of customers and the community
HR Characteristics: Why is it important for HR professionals to be customer focused?
usually provide guidance, support management, and serve as a source of help and information on human resource matters
Define the term: HR generalist.
responsible for specific human resource management functions within the organization.
Define the term: HR specialist.
For HR to be effective, HR managers should be at the top of the organizational hierarchy.
Why is it important that the HR managers be at the top of the organizational hierarchy?
are expected to become agents for change and have the necessary skills to facilitate organizational change and maintain organizational adaptability.
What is the expectation of the HR department regarding change?
empower line managers to make things happen. The HR department is basically providing a service to line managers.
How does the HR department help line managers?
organizations can generally hire, fire, or promote a person for any reason whatsoever.
Define "employment at-will."
potentially harsh consequences to employees, which may leave employees vulnerable and financially insecure.
What are the consequences to employees of "employment at-will?"
Public-policy exception
Implied-contract exception
Implied covenant-of-good-faith exception
What are the three exceptions to "employment at-will?"
is recognized by most states and is invoked when an employee is terminated for reasons that violate a public-policy interest. A public-policy interest could include an employee who refuses to break the law, exercises a legal right, fulfills a statutory duty, or engages in whistle-blowing activities.
List one example of the Public-Policy exception.
which is created not through formal contract negotiation and documentation, but by the actions of the employer and the employee. an implied contract may be created between an employer and employee if the employer gives oral assurances that the employee will have continued work for satisfactory job performance.
What is an implied contract?
an implied contract may be created between an employer and employee if the employer gives oral assurances that the employee will have continued work for satisfactory job performance.
List one example of the implied contract exception.
holds that each party to the employment relationship makes an implied promise to treat the other in good faith and fairness. When that covenant is broken, the employee has a cause of action for wrongful termination. The exception for an implied covenant of good faith and fair dealing is only valid in a handful of states.
Define the Covenant-of-Good-Faith exception.
diverse workforce can bring a variety of viewpoints and perspectives to the organization. It also fosters innovation and creativity.
What are the benefits of a diverse workforce?
is a strategy to help promote recognition and respect for individual differences found within the organization.
What is diversity management?
An effective diversity management program will help each individual within the organization feel included.
What is the result of diversity management within organizations?
similarities and differences between individuals, accounting for numerous aspects of personality and individual identity.
Define the term diversity.
the extent to which each person in the organization feels welcomed, respected, supported, and valued as a team member.
Define the term inclusion.
This law was intended to prevent the practice of treating people differently based on their race.
What is the Civil Rights Act?
makes it unlawful for an employer to refuse to hire any individual because of the individual's race, color, religion, sex, or national origin.
What is Title VII the Civil Rights Act?
race, color, religion, sex, or national origin.
What groups are protected under Title VII?
The federal agency created by the Civil Rights Act of 1964
How was the EEOC created?
enforces federal anti-discrimination statutes and provides oversight for all federal equal opportunity employment regulations.
What is the purpose of the EEOC?
is the unfavorable treatment of someone based on sex or gender.
"Women work much better inside than outside."
Define the term and provide an example of "gender discrimination."
involves treating someone, such as a job applicant or employee, unfavorably because of that person's sex. Such discrimination may include treating someone less favorably in any aspect of employment such as hiring, firing, pay, job assignments, promotions, and so on
Define the term and provide an example of "sex discrimination."
is discrimination on the basis of someone's being transgender, lesbian, gay, or bisexual.
Define the term and provide an example of "gender identity discrimination."
harassment directed at an employee because of his or her gender. Therefore, for an act to be classified as sexual harassment, it does not have to involve sexual motives, sexual behaviors, or requests for sexual favors.
was to protect workers from 40 to 65. Eventually, however, the upper age limit was eliminated altogether. The Age Discrimination in Employment Act applies to public and private employers, and to unions with more than 20 employees.
What is the purpose of the Age Discrimination in employment Act (ADEA)?
involves treating a person unfavorably because of his or her religious beliefs.
Define the term "religious discrimination."
Hassan recently applied for a computer programming position with a large software company in San Jose, California. While Hassan had more experience and programming skills than many of the other applicants, he was not selected for the job. A close friend who worked for the company later informed Hassan that he was not selected because of his Muslim beliefs.
List one example of the religious discrimination.
is a local geographic or global human population—a distinct group—that is evident by genetically transmitted characteristics. For example, American Indian and Pacific Islander are racial categories.
How does the EEOC define "race"?
is also commonly understood to mean pigmentation, complexion, or skin shade or tone. Color and race are related, but are not the same.
How does the EEOC define "color"?
occurs when an employer makes an adverse employment decision against an individual because the individual or his or her ancestor: 1) is from a certain country or place, 2) belongs to, or identifies with, a national, cultural, or ethnic group, or 3) associates with a person from that group.
Describe national origin employment discrimination.
is any action taken by an employer to overcome discriminatory effects of past or current practices or policies that create barriers to equal employment opportunity.
What was the purpose of affirmative action?
Such groups include women, African Americans, Asians, Pacific Islanders, disabled persons, American Indian/Alaska Native, and veterans.
What protected categories are covered under affirmative action?
Contractors
Who must have an affirmative action plan?
Reasonable self-analysis;
Reasonable rationale for taking affirmative action; and,
Reasonable affirmative action.
What are the three basic elements of an affirmative action plan?
The human capital an organization needs in order to meet its objectives.
Define the term "demand" in terms of HRM.
Who is available to fill capital needs
Define the term "supply" in terms of HRM.
HRP helps ensure that organizations fulfill their business plans for the future in terms of financial objectives, output goals, product mix, technologies, and resource requirements.
List the ways that human resource planning (HRP) helps organizations.
As new technologies, such as state-of-the-art information systems, robots, and office automation tools, are introduced into the workplace, HR planning professionals will be called upon to answer questions like the following: How can organizations assimilate technology into the workplace? What impact do these new technologies have on job design? Which skills are needed by the firm?
How has technology influenced HRP?
Gathering, analyzing, and forecasting data to develop an HR supply and demand forecast
Establishing HR objectives and policies and gaining approval and support for them from top management
Designing and implementing plans and action programs in such areas as recruitment, training, and promotion, that will enable the organization to achieve its HR objectives
Controlling and evaluating HRM plans and programs to facilitate progress toward HR objectives
What are the 4 steps (phases) in the HR planning process? What happens during each step/phase?
Judgmental and statistical
What are the two common forecasting techniques used to project the organization's demand for human resources?
Linear regression, multiple linear regression, productivity ratios, time series analysis, stochastic analysis.
What are the six statistical forecasting methods?
Managerial estimates, Delphi Technique, and Nominal grouping technique
What are the three judgmental forecasting methods?
A variable related to employment in sales
List an example of a variable.
multiple linear regression analysis may produce more accurate demand forecasts than simple linear regression analysis.
When should multiple linear regression be used to forecast demand?
When should time series analysis be used to forecast demand?
Replacement and Succession planning
What are the two judgmental techniques used by organizations to make supply forecasts?
uses charts that show the names of the current occupants of positions in the organization and the names of likely replacements. Replacement charts make potential vacancies readily apparent, based on the present performance levels of employees currently in jobs.
What information is included in replacement planning?
What is the difference between succession planning and replacement planning?
because it fosters HR strategies that support the firm's business plans.
Why is establishing HR objectives and policies (phase 2) vital?
may be designed to decrease the number of current employees (if a forecast suggests that supply exceeds demand).
What are the two types of action programs?
What action program is designed to increase the supply of the right employees in the organization?
Involved in this planning should be outplacement counseling, buy-outs, job skill retraining, and job transfers.
What should be included in redundancy planning?
HR plans and programs are essential to the effective management of human resources.
Why is the evaluation of HR plans and programs an important process?
Civil Rights Act - These acts prohibit employers from discriminating against minorities.
Fair Labor Standards Act of 1938 (FLSA) - Among other things, this Act restricts child labor and provides for a minimum wage and overtime pay for employees.
Equal Pay Act of 1963 (EPA) - This Act requires that employers provide equal pay for men and women who do similar work.
Age Discrimination in Employment Act of 1967 (ADEA) - This Act protects those 40 and older from age discrimination throughout the recruitment process.
Pregnancy Discrimination Act of 1978 - This Act recognizes pregnancy as a temporary disability and prohibits applicants from being discriminated against in the recruitment process because of pregnancy, childbirth, or related medical conditions.
Immigration Reform and Control Act of 1986 (IRCA) - This Act makes it illegal to hire or recruit illegal immigrants knowingly. Under IRCA, employers may hire only persons who may legally work in the U.S., i.e., citizens and nationals of the U.S. and aliens authorized to work in the U.S.
Americans with Disabilities Act - This Act prohibits discrimination against qualified individuals with disabilities throughout the recruitment process.
Genetic Information Nondiscrimination Act - This Act prohibits the use of genetic information in employment decisions and restricts employers from requesting, requiring, or purchasing genetic information.
Explain the impact of the following laws for recruiting practices:
Determine the present and future recruitment needs of the organization in conjunction with HR planning and job analysis activities.
Increase the pool of qualified job applicants at a minimum cost to the organization.
Increase the success rate of the selection process by reducing the number of underqualified or overqualified job applicants.
Reduce the probability that job applicants, once recruited and selected, will leave the organization after only a short time.
Meet the organization's responsibility for employment equity and other legal and social obligations regarding the composition of its workforce.
Increase organizational and individual effectiveness in both the short and long terms.
Evaluate the effectiveness of various techniques and locations of recruiting for all types of job applicants.
What are the purposes of recruitment?
A person's knowledge, skills, and abilities
What does acronym "KSA" mean?
Theoretical or practical understanding of a subject
How does the book define knowledge?
Proficiencies that are developed through experience
How does the book define skills?
Qualities that a certain person has to perform a specific task (more enduring than skills)
How does the book define abilities?
Familiar with people, policies and procedures
What are the factors that influence a "promotion-from-within" policy?
performance and merit
What criterion do organizations think should be used for transfers?
What criterion do unions think should be used for transfers?
used effectively to expose management trainees to various aspects of organizational life. It has also been used to relieve job burnout for employees in high-stress occupations.
What is job rotation?
Opportunity for employee growth and development
Equal opportunity for advancement of all employees
A greater openness in the organizational climate by making opportunities known to all employees
Staff awareness regarding salary grades, job descriptions, and general promotion and transfer procedures
Fulfills company goals and objectives while allowing each individual the opportunity to self-select the best possible "fit" in the organization
What is the purpose of a job posting?
are essentially word-of-mouth advertisements that generally involve rewarding employees for referring skilled job applicants to an organization.
Define the term "employee referral."
Company websites
What is the most common source of external recruiting in most industries?
Professional and managerial workers
Skilled workers
In what type of situations are employment agencies used?
Most communities and universities have picked up on this idea and now bring together large numbers of employers and job seekers for "job fairs." While job fairs provide limited interview time and thus
serve only as an initial step in the recruitment process, they are an efficient recruiting source for both employers and individuals.
What industries are most likely to use trade associations as a recruitment source?
is a systematic effort to identify people's KSAs (knowledge, skills, and abilities) and match them to job openings. There are two major components to job matching - Job profiles and candidate profiles.
What is " job matching?"
Job profiles - detailed job descriptions as well as job specifications. Candidate profiles contain information regarding the candidate's experience or skills related to specific jobs.
What are the two major components of job-matching systems
The candidate profile also lists the candidate's job preferences and interests. With these profiles, the organization can identify many more potentially qualified job applicants for specific jobs.
What information is included in a "candidate profile?"
is the process of gathering information about job applicants in order to determine who should be hired for an available position. For most employers, the application form or resume is the first step in the selection process, followed by interviews and reference checks.
Define the term "employee selection."
Successful orientation also helps the new employees gain a feeling of belonging and familiarity with the organization's culture and climate, with much less chance of their ever feeling isolated or uncomfortable.
Explain the benefits to organizations of new employee orientation and socialization.
*Competency: 3037.1.6 Employee Training and Development
Selection devices developed on the basis of a job analysis are more likely to be job-related, and therefore, more effective and more likely to satisfy legal considerations.
What are the benefits of developing selection devices (tests) on the basis of a job analysis?
is a behavior associated with a successful job-holder.
Define the term "criterion."
Ultimate - is a theoretical construct or abstract idea that can never actually be measured. It represents a complete set of ideal factors that constitute a successful person.
Actual - on the other hand, includes the measurable factors that constitute a successful person. For example, some organizations use the periodic results of performance appraisals or the number of days the individual was absent as actual criteria.
What is the difference between the "ultimate criterion" and the "actual criterion?"
the consistency or stability of a selection instrument (i.e., a predictor or criterion). This means that the instrument used, be it the results of a written test or impressions obtained during an interview, should yield the same estimate on repeated uses under identical conditions.
Define the term "reliability."
When an organization makes selection and placement decisions based on reliable predictors and criteria, employees are more likely to succeed in their jobs. In turn, if employees are successful in their jobs, the organization experiences less employee turnover, increased loyalty, and a positive work environment.
What are the benefits of a reliable selection process?
how accurately and precisely a measure assesses an attribute. It assumes the appropriateness of using a given measuring device for drawing inferences about the criteria.
Define "validity."
is identifying applicants who are the best qualified.
What are the key benefits of ensuring the validity of selection tests?
Legal action if the test is shown to be discriminatory
What can result from the poor validity of selection tests?
refers to how much a predictor relates to a criterion (some measure of job success). Empirical validity is important because it describes the linkages (or covariation) between two measures. For example, a job applicant who passes a welding test should be able to perform successfully as a welder if the test is valid.
Describe empirical validity and provide an example.
*Competency: 3037.1.7 Performance Management
estimates the relevance of a predictor as an indicator of performance, without collecting actual performance information. The administration of a typing test--which is actually a job sample test if used for typists--as a selection device for hiring typists is a classic example of a predictor judged to have content validity.
Describe content validity and provide an example.
requires demonstrating that a relationship exists between a selection procedure and a psychological trait or measure, which is called the "construct."For example, does a university entrance exam, such as the ACT or SAT, really measure a student's future academic success? To demonstrate construct validity for tests like these, one would need data showing that high scorers on the test are actually more successful in school. If some test were being used in management training, it would be necessary to show that the test is related to job responsibilities of the employee.
Describe construct validity and provide an example.
Application blanks: seeks information about the applicant's background and current situation. These forms are often referred to as resumes, curriculum vitae (CVs,) or biographical information.
Describe the role each of these selection instruments play in the selection process:
The information people communicate without words. Things like body movement, gestures, firmness of handshake, eye contact, and physical appearance provide nonverbal cues.
effective orientation and socialization reduce turnover and help create a positive work environment.
What does research teach us about the effectiveness of orientation and socialization?
usually refers to improving skills needed to perform better in the current job
Define employee training.
refers to improving knowledge for the future.
Define employee development.
Analysis, Design, Development, Implementation, and Evaluation.
What are the 5 steps of the ADDIE model?
suggests a continuous cycle wherein the results of evaluation from one training program become part of the assessment of needs for the next program. While thinking of the model as a cycle makes sense, it is also true that in many organizations, the "cycle" is more of a linear series of steps starting with assessment and ending with implementation, and only sometimes, evaluation.
Is the ADDIE model a continuous cycle or a linear series of steps? Provide your rationale.
to determine whether a training need exists, where in the organization this need exists, and the precise nature of the required training.
What are the goals of a needs assessment?
can appropriately provide input on the vision of the organization and how this relates to future needs for talent. They can also assess current organizational effectiveness in broad terms.
What information does upper-level management provide during a needs analysis?
can advise on the resources available for training, the nature of specific performance problems they see in their assigned areas of responsibility, and the categories of employees they see as targets for potential training programs.
What information does middle-level management provide during a needs analysis?
need to assess whether potential training programs are aligned with business strategy, but they will be particularly focused on how the needs assessment and analysis will assist them in the design, development, and conduct of the actual program.
What role do training managers/instructional designers play during a needs analysis?
These would be employees at any level in the organization who are experts on the nature of the tasks that need to be performed more effectively (or performed in the future), the knowledge, skills, and abilities (KSAs) necessary to perform these tasks, the equipment necessary (e.g., milling machines, computers) to conduct the task, and/or the working conditions under which a given task is performed.
What information can an SME provide during a needs analysis?
Organizational, task level, and personal level
What are the three levels of needs assessment associated with the ADDIE model?
is to look at a company's long-term vision and future direction, in order to determine the workforce needs of the future. This level of analysis also provides a broad look across the organization to identify where training might be needed based on high turnover and absenteeism or low performance and quality.
What is the primary purpose of an organizational level needs assessment?
Here the skills and knowledge necessary to do particular tasks are examined, looking for current or potential gaps when compared to workforce capabilities. At this level, analysts are concerned with what needs to be taught in a training program and how certain skills and knowledge translate into task performance.
What is the primary purpose of a task level needs assessment?
focus is on individual employees and how well they perform their jobs. (who needs the training?)
What is the primary purpose of a person level needs assessment?
consists of the goals, objectives, and evaluation tasks that must be developed and sequenced.
What information is included in the design blueprint?
facilitators and training managers can evaluate the sequencing of content, assess the effectiveness of chosen learning activities, assess the time allotted, determine if the physical space and layout are appropriate, and test the various assumptions about program design, development, and implementation.
What information is learned during "pilot program?"
formative and summative
What two types of data are typically gathered during the evaluation phase?
This means the extent to which trainees are able and willing to take what they learned in the classroom or training setting and use it back at their desk, their spot on the line, and with their teams.
Why is transfer of training important?
it is important to establish a mechanism to monitor whether the new behaviors are being used. All too often, participants return to work and slip into old patterns and behaviors, significantly decreasing the effectiveness of the training program. One serious mistake in designing training and development programs is the failure to provide systems, policies, and/or follow-up programs to ensure the learners' effective on-the-job use of their newly acquired KSAs. As a result, what an employee learns in a training program may never be used in the actual job situation. Or, if the newly learned behavior is tried, it may quickly be terminated due to lack of support by the manager or peers. Further, the transfer of training requires that appropriate resources, such as technology and time, be available. Next, the learner's confidence and desire to use the training impact his use. As such, it is important that provisions be made in training programs for the positive transfer to the job of the KSAs learned in training. There are three ways to do this. One is to have conditions in the training program identical to those in the job situation. The second is to teach principles for applying the behaviors learned in the training program to the job situation, and the third is the contract plan.
What 3 factors are critical for transfer of training to occur?
is often developed and implemented by the organization, but some training is informal. On-the-job training is used by organizations because it provides a "hands-on" learning experience that facilitates learning transfer and because it can fit into the organization's flow of activities.
Why would an organization choose to use on-the-job training?
disadvantage of numerous distractions and ongoing job pressures. Another disadvantage is that employees can usually only be trained one person at a time.
What are the advantages and disadvantages of on-site training?
is that it forces trainees to leave their work stations and focus on the training content. Training that involves complex learning and reasoning is best presented away from the work site.
What are the advantages of off-site training?
administrative and developmental uses
What are the two ways organizations use performance appraisal data?
If a formal job analysis has not been conducted to establish the validity of the PA form, and thus the job-relatedness of an evaluation criterion, the company may be accused of discrimination.
What role does a job analysis play in performance appraisals?
valuable when superiors lack access to some aspects of the subordinate's performance. Additionally, peer appraisal can be very useful for self-managed teams, when teamwork and participation are part of the organizational culture.
In what circumstances are peer appraisals useful?
subordinates' appraisals can make superiors more aware of their impact on subordinates. Sometimes, however, subordinates may evaluate their superiors solely on the basis of personality or with respect to their own needs rather than those of the organization. Finally, subordinates may inflate the evaluations of their superiors, particularly if they feel threatened by them and have no anonymity.
What information is learned from appraisal by subordinates?
This method is fast and is also the most objective of those mentioned.
What are two advantages of using computer monitoring as a data source for performance appraisal?
a superior lists the subordinates in order from best to worst, usually on the basis of overall performance. Incumbents can also be ranked with regard to their performance on specific duties, such as attendance, record of meeting deadlines, quality of reports, etc.
What does the straight ranking method involve?
The first step is to put the best subordinate at the head of the list and the worst subordinate at the bottom, usually on the basis of overall performance. The superior then selects the best and worst from the remaining subordinates. The middle position on the list is the last to be filled. Alternative ranking approaches can be used quite efficiently not only by a single supervisor, but by the subordinates themselves.
What does the alternative ranking method involve?
involves comparing each employee to another incumbent, two at a time on a single standard, to determine which is better.
What does the paired comparison method involve?
must assign only a certain proportion of subordinates to each of several categories with respect to each other.
What is required from the supervisor under a forced distribution performance appraisal method?
the rater can describe the employee's strengths and weaknesses and suggest methods for improving performance.
What type of information is included in a narrative essay performance review?
because they are relatively easy to develop and permit quantitative results that allow comparisons across employees and departments.
Why are conventional rating forms extensively used by organizations?
such as rater errors, incomplete or inaccurate information, and poor communication.
What are three factors that affect the validity and reliability of an organization's performance appraisal process?
Which of the three types of validity (empirical, content, and construct) is legally defensible if challenged in court?
In a sense, a particular type of organizational culture, where culture reflects the values, attitudes, and basic operating assumptions that are widely shared among organizational participants.
What is a learning organization?
This type of culture is related to fewer levels in an organization's design and increased transfer of knowledge, regardless of level.
Why is being a learning organization beneficial?
Direct compensation consists of the basic wage or salary, and performance-based pay.
Indirect compensation (also known as "non-monetary rewards"), on the other hand, includes employee services, benefits, training, and/or any other indirect form of compensation or benefit.
List and describe the two types of compensation.
as the process of comparing jobs by the use of formal and systematic procedures to determine their relative worth within the organization.
Define the term "job evaluation."
links employee pay to employee and organizational performance.
Variable compensation links an employee's pay to what?
For example, variable compensation may come in the form of profit sharing, bonuses, and/or stock options. Variable pay is typically a one-time payment that must be re-established and re-earned during each performance period. Organizations will often use variable pay to provide incentives and align the interest of management with shareholders.
What are three examples of variable pay?
is compensation that does not vary according to performance.
Define fixed pay.
Two of the most common types of fixed pay include an hourly wage and/or a yearly salary. In both of these situations, employees are paid regardless of performance.
What are the two most common forms of fixed pay?
are a specific type of performance-based pay system which is used to encourage specific actions and motivate employees. Incentive pay plans are typically structured in a way that reflects the organization's strategy, culture, objectives, and financial capabilities
What are incentive pay plans?
employees are guaranteed a standard pay rate for each unit of output, also called a piece rate. The rate per hour is frequently determined by the time-and-motion studies of standard output and the current base pay of the job.
Describe the term "piecework."
apply to salespeople and managers, who generally receive their pay in the form of commissions. A commission is usually a percentage of the sales revenue generated by the employee.
Describe the term "sales incentive (commission)."
is an opportunity for a manager to buy an organization's stock at a later date, but at a price established when the option is granted.
Describe the term "stock options."
provide for payment of profit shares at regular intervals, typically monthly or yearly.
Describe the term "cash plans."
Gainsharing leads to enhanced productivity and subsequently creates additional profits in which all parties share.
Describe the term "gainsharing (or bonus based on achievement)."
a special type of cash plan, set the percentage of profits paid to employees according to the amount of dividends paid to stockholders.
Describe the term "wage-dividend plan."
rewards provided by the organization to employees for their membership and/or participation (attendance) in the organization.
Describe indirect compensation
Riesel number
In mathematics, a Riesel number is an odd natural number k for which $k\times 2^{n}-1$ is composite for all natural numbers n (sequence A101036 in the OEIS). In other words, when k is a Riesel number, all members of the following set are composite:
$\left\{\,k\times 2^{n}-1:n\in \mathbb {N} \,\right\}.$
If the form is instead $k\times 2^{n}+1$, then k is a Sierpinski number.
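For a concrete feel for the definition, the following minimal sketch (assuming the sympy library is available; the search limit is an arbitrary choice) looks for an exponent n with k×2^n − 1 prime. For most odd k a prime shows up quickly; for a Riesel number such as 509203 the search comes back empty, although a finite search like this can never by itself prove that k is a Riesel number — that requires a covering set or a similar argument.

```python
# Minimal sketch (assumes sympy): look for an exponent n with k*2**n - 1 prime.
from sympy import isprime

def first_prime_exponent(k, limit=500):
    """Return the smallest n in [1, limit] with k*2**n - 1 prime, else None."""
    for n in range(1, limit + 1):
        if isprime(k * 2**n - 1):
            return n
    return None

print(first_prime_exponent(9))       # a prime appears quickly for most small k
print(first_prime_exponent(509203))  # None up to the limit, as expected for a Riesel number
```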
Riesel problem
Unsolved problem in mathematics: Is 509,203 the smallest Riesel number?
In 1956, Hans Riesel showed that there are an infinite number of integers k such that $k\times 2^{n}-1$ is not prime for any integer n. He showed that the number 509203 has this property, as does 509203 plus any positive integer multiple of 11184810.[1] The Riesel problem consists in determining the smallest Riesel number. Because no covering set has been found for any k less than 509203, it is conjectured to be the smallest Riesel number.
To check if there are k < 509203, the Riesel Sieve project (analogous to Seventeen or Bust for Sierpinski numbers) started with 101 candidates k. As of December 2022, 57 of these k had been eliminated by Riesel Sieve, PrimeGrid, or outside persons.[2] The remaining 42 values of k that have yielded only composite numbers for all values of n so far tested are
23669, 31859, 38473, 46663, 67117, 74699, 81041, 107347, 121889, 129007, 143047, 161669, 206231, 215443, 226153, 234343, 245561, 250027, 315929, 319511, 324011, 325123, 327671, 336839, 342847, 344759, 362609, 363343, 364903, 365159, 368411, 371893, 384539, 386801, 397027, 409753, 444637, 470173, 474491, 477583, 485557, 494743.
The most recent elimination was in April 2023, when 97139 × 2^18397548 − 1 was found to be prime by Ryan Propper. This number is 5,538,219 digits long.
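The quoted digit count can be checked without constructing the number, since the number of decimal digits of 97139 × 2^18397548 is ⌊log10(97139) + 18397548·log10 2⌋ + 1, and subtracting 1 does not change the digit count. A minimal sketch (ordinary double precision is more than accurate enough here):

```python
# Minimal sketch: decimal length of 97139 * 2**18397548 - 1 via logarithms.
from math import log10, floor

digits = floor(log10(97139) + 18397548 * log10(2)) + 1
print(digits)   # 5538219, matching the figure quoted above
```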
As of April 2023, PrimeGrid has searched the remaining candidates up to n = 14,500,000.[3]
Known Riesel numbers
The sequence of currently known Riesel numbers begins with:
509203, 762701, 777149, 790841, 992077, 1106681, 1247173, 1254341, 1330207, 1330319, 1715053, 1730653, 1730681, 1744117, 1830187, 1976473, 2136283, 2251349, 2313487, 2344211, 2554843, 2924861, ... (sequence A101036 in the OEIS)
Covering set
A number can be shown to be a Riesel number by exhibiting a covering set: a set of prime numbers that will divide any member of the sequence, so called because it is said to "cover" that sequence. The only proven Riesel numbers below one million have covering sets as follows:
• $509203\times 2^{n}-1$ has covering set {3, 5, 7, 13, 17, 241}
• $762701\times 2^{n}-1$ has covering set {3, 5, 7, 13, 17, 241}
• $777149\times 2^{n}-1$ has covering set {3, 5, 7, 13, 19, 37, 73}
• $790841\times 2^{n}-1$ has covering set {3, 5, 7, 13, 19, 37, 73}
• $992077\times 2^{n}-1$ has covering set {3, 5, 7, 13, 17, 241}.
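These covering sets are easy to verify mechanically: the residue of 2^n modulo each prime is periodic, so it suffices to check one full period of n. A minimal sketch for 509203 (plain Python; math.lcm needs Python 3.9 or later):

```python
# Minimal sketch: verify that {3, 5, 7, 13, 17, 241} covers 509203*2**n - 1.
# The divisibility pattern of each prime p repeats with period ord_p(2), so one
# full period of n (the lcm of the orders, here 24) is enough to check.
from math import lcm

k = 509203
cover = [3, 5, 7, 13, 17, 241]

def mult_order(a, p):
    """Multiplicative order of a modulo p (p prime, p does not divide a)."""
    o, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        o += 1
    return o

period = lcm(*(mult_order(2, p) for p in cover))      # = 24
assert all(any((k * pow(2, n, p) - 1) % p == 0 for p in cover)
           for n in range(period))
print(f"every n in one period of length {period} is covered")
```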
The smallest n for which k · 2^n − 1 is prime
Here is a sequence $a(k)$ for k = 1, 2, .... It is defined as follows: $a(k)$ is the smallest n ≥ 0 such that $k\cdot 2^{n}-1$ is prime, or -1 if no such prime exists.
2, 1, 0, 0, 2, 0, 1, 0, 1, 1, 2, 0, 3, 0, 1, 1, 2, 0, 1, 0, 1, 1, 4, 0, 3, 2, 1, 3, 4, 0, 1, 0, 2, 1, 2, 1, 1, 0, 3, 1, 2, 0, 7, 0, 1, 3, 4, 0, 1, 2, 1, 1, 2, 0, 1, 2, 1, 3, 12, 0, 3, 0, 2, 1, 4, 1, 5, 0, 1, 1, 2, 0, 7, 0, 1, ... (sequence A040081 in the OEIS). The first unknown n is for k = 23669.
Related sequences are OEIS: A050412 (not allowing n = 0); for odd k, see OEIS: A046069 or OEIS: A108129 (not allowing n = 0).
Simultaneously Riesel and Sierpiński
A number may be simultaneously Riesel and Sierpiński. These are called Brier numbers. The five smallest known examples are 3316923598096294713661, 10439679896374780276373, 11615103277955704975673, 12607110588854501953787, 17855036657007596110949, ... (A076335).[4]
The dual Riesel problem
The dual Riesel numbers are defined as the odd natural numbers k such that |2^n − k| is composite for all natural numbers n. There is a conjecture that the set of these numbers is the same as the set of Riesel numbers. For example, |2^n − 509203| is composite for all natural numbers n, and 509203 is conjectured to be the smallest dual Riesel number.
The smallest n for which 2^n − k is prime are (for odd k; this sequence requires that 2^n > k)
2, 3, 3, 39, 4, 4, 4, 5, 6, 5, 5, 6, 5, 5, 5, 7, 6, 6, 11, 7, 6, 29, 6, 6, 7, 6, 6, 7, 6, 6, 6, 8, 8, 7, 7, 10, 9, 7, 8, 9, 7, 8, 7, 7, 8, 7, 8, 10, 7, 7, 26, 9, 7, 8, 7, 7, 10, 7, 7, 8, 7, 7, 7, 47, 8, 14, 9, 11, 10, 9, 10, 8, 9, 8, 8, ... (sequence A096502 in the OEIS)
The odd k for which k − 2^n is composite for all 2^n < k (the de Polignac numbers) are
1, 127, 149, 251, 331, 337, 373, 509, 599, 701, 757, 809, 877, 905, 907, 959, 977, 997, 1019, 1087, 1199, 1207, 1211, 1243, 1259, 1271, 1477, ... (sequence A006285 in the OEIS)
The unknown values of k are (for which 2^n > k)
1871, 2293, 25229, 31511, 36971, 47107, 48959, 50171, 56351, 63431, 69427, 75989, 81253, 83381, 84491, ...
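The quoted values of the smallest n with 2^n − k prime can be reproduced with a short search. The sketch below assumes the sympy library and an arbitrary search limit; returning None only means no prime was found below the limit, which is evidence rather than proof.

```python
# Minimal sketch (assumes sympy): smallest n with 2**n > k and 2**n - k prime.
from sympy import isprime

def smallest_dual_exponent(k, limit=1000):
    n = k.bit_length()            # smallest n with 2**n > k for odd k
    while n <= limit:
        if isprime(2**n - k):
            return n
        n += 1
    return None

print([smallest_dual_exponent(k) for k in range(1, 16, 2)])  # 2, 3, 3, 39, 4, 4, 4, 5
print(smallest_dual_exponent(509203))  # None within the limit, consistent with the conjecture above
```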
Riesel number base b
One can generalize the Riesel problem to an integer base b ≥ 2. A Riesel number base b is a positive integer k with gcd(k − 1, b − 1) = 1 such that k×b^n − 1 is composite for all natural numbers n. (If gcd(k − 1, b − 1) > 1, then gcd(k − 1, b − 1) is a trivial factor of k×b^n − 1; definition of trivial factors for the conjectures: each and every n-value has the same factor.)[5][6] For every integer b ≥ 2, there are infinitely many Riesel numbers base b.
Example 1: All numbers congruent to 84687 mod 10124569 and not congruent to 1 mod 5 are Riesel numbers base 6, because of the covering set {7, 13, 31, 37, 97}. Besides, these k are not trivial since gcd(k − 1, 6 − 1) = 1 for these k. (The Riesel base 6 conjecture is not proven; it has 3 remaining k, namely 1597, 9582 and 57492.)
Example 2: 6 is a Riesel number to all bases b congruent to 34 mod 35, because if b is congruent to 34 mod 35, then 6×b^n − 1 is divisible by 5 for all even n and divisible by 7 for all odd n. Besides, 6 is not a trivial k in these bases b since gcd(6 − 1, b − 1) = 1 for these bases b.
Example 3: All squares k congruent to 12 mod 13 and not congruent to 1 mod 11 are Riesel numbers base 12, since for all such k, k×12^n − 1 has algebraic factors for all even n and is divisible by 13 for all odd n. Besides, these k are not trivial since gcd(k − 1, 12 − 1) = 1 for these k. (The Riesel base 12 conjecture is proven.)
Example 4: If k is between a multiple of 5 and a multiple of 11, then k×109^n − 1 is divisible by either 5 or 11 for all positive integers n. The first few such k are 21, 34, 76, 89, 131, 144, ... However, all these k < 144 are also trivial k (i.e. gcd(k − 1, 109 − 1) is not 1). Thus, the smallest Riesel number base 109 is 144. (The Riesel base 109 conjecture is not proven; it has one remaining k, namely 84.)
Example 5: If k is square, then k×49^n − 1 has algebraic factors for all positive integers n. The first few positive squares are 1, 4, 9, 16, 25, 36, ... However, all these k < 36 are also trivial k (i.e. gcd(k − 1, 49 − 1) is not 1). Thus, the smallest Riesel number base 49 is 36. (The Riesel base 49 conjecture is proven.)
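The algebraic factors in Example 5 are just a difference-of-squares identity, which a quick numerical spot check confirms:

```python
# Minimal sketch: spot-check 36*49**n - 1 = (6*7**n - 1)*(6*7**n + 1) for small n,
# since 36*49**n = (6*7**n)**2.
for n in range(1, 8):
    a = 6 * 7**n
    assert 36 * 49**n - 1 == (a - 1) * (a + 1)
print("36*49**n - 1 = (6*7**n - 1)*(6*7**n + 1) for n = 1..7")
```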
We want to find and prove the smallest Riesel number base b for every integer b ≥ 2. It is a conjecture that if k is a Riesel number base b, then at least one of the following three conditions holds (the sketch after this list spot-checks the example in condition 3):
1. All numbers of the form k×b^n − 1 have a factor in some covering set. (For example, b = 22, k = 4461, then all numbers of the form k×b^n − 1 have a factor in the covering set {5, 23, 97}.)
2. k×b^n − 1 has algebraic factors. (For example, b = 9, k = 4, then k×b^n − 1 can be factored to (2×3^n − 1) × (2×3^n + 1).)
3. For some n, numbers of the form k×b^n − 1 have a factor in some covering set; and for all other n, k×b^n − 1 has algebraic factors. (For example, b = 19, k = 144, then if n is odd, k×b^n − 1 is divisible by 5, and if n is even, k×b^n − 1 can be factored to (12×19^(n/2) − 1) × (12×19^(n/2) + 1).)
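A quick spot check of the mixed case in condition 3 (b = 19, k = 144), where odd n are handled by the prime 5 and even n by the algebraic factorisation:

```python
# Minimal sketch: for 144*19**n - 1, odd n are divisible by 5 and even n split
# as a difference of squares with a = 12*19**(n/2).
for n in range(1, 13):
    N = 144 * 19**n - 1
    if n % 2 == 1:
        assert N % 5 == 0
    else:
        a = 12 * 19**(n // 2)
        assert N == (a - 1) * (a + 1)
print("odd n covered by 5, even n factored as (12*19**(n/2) - 1)*(12*19**(n/2) + 1)")
```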
In the following list, we only consider those positive integers k such that gcd(k − 1, b − 1) = 1, and all integer n must be ≥ 1.
Note: k-values that are a multiple of b and where k−1 is not prime are included in the conjectures (and included in the remaining k with red color if no primes are known for these k-values) but excluded from testing (thus they never appear among the "largest 5 primes found"), since such k-values will have the same prime as k / b.
The table below lists, for each base b: the conjectured smallest Riesel k; the covering set or algebraic factors; the remaining k with no known primes (red indicates the k-values that are a multiple of b where k−1 is not prime); the number of remaining k with no known primes (excluding the red k); the testing limit of n (excluding the red k); and the largest 5 primes found (excluding the red k).
2 509203 {3, 5, 7, 13, 17, 241} 23669, 31859, 38473, 46663, 47338, 63718, 67117, 74699, 76946, 81041, 93326, 94676, 107347, 121889, 127436, 129007, 134234, 143047, 149398, 153892, 161669, 162082, 186652, 189352, 206231, 214694, 215443, 226153, 234343, 243778, 245561, 250027, 254872, 258014, 268468, 286094, 298796, 307784, 315929, 319511, 323338, 324011, 324164, 325123, 327671, 336839, 342847, 344759, 362609, 363343, 364903, 365159, 368411, 371893, 373304, 384539, 386801, 388556, 397027, 409753, 412462, 429388, 430886, 444637, 452306, 468686, 470173, 474491, 477583, 485557, 487556, 491122, 494743, 500054 42 PrimeGrid is currently searching every remaining k at n > 14.5M 97139×218397548−1
93839×215337656−1
192971×214773498−1
206039×213104952−1
2293×212918431−1
3 63064644938 {5, 7, 13, 17, 19, 37, 41, 193, 757} 3677878, 6878756, 10463066, 10789522, 11033634, 16874152, 18137648, 20636268, 21368582, 29140796, 31064666, 31389198, 32368566, 33100902, 38394682, 40175404, 40396658, 50622456, 51672206, 52072432, 54412944, 56244334, 59254534, 61908864, 62126002, 62402206, 64105746, 65337866, 71248336, 87422388, 93193998, 94167594, 94210372, 97105698, 97621124, 99302706, 103101766, 103528408, 107735486, 111036578, 115125596, 115184046, ... 100714 k = 3677878 at n = 5M, 4M < k ≤ 2.147G at n = 1.07M, 2.147G < k ≤ 6G at n = 500K, 6G < k ≤ 10G at n = 250K, 10G < k ≤ 63G at n = 100K, , k > 63G at n = 655K
676373272×31072675−1
1068687512×31067484−1
1483575692×31067339−1
780548926×31064065−1
1776322388×31053069−1
4 9 9×4n − 1 = (3×2n − 1) × (3×2n + 1) none (proven) 0 − 8×41−1
6×41−1
5×41−1
3×41−1
2×41−1
5 346802 {3, 7, 13, 31, 601} 4906, 23906, 24530, 26222, 35248, 68132, 71146, 76354, 81134, 92936, 102952, 109238, 109862, 119530, 122650, 127174, 131110, 131848, 134266, 136804, 143632, 145462, 145484, 146756, 147844, 151042, 152428, 154844, 159388, 164852, 170386, 170908, 176240, 179080, 182398, 187916, 189766, 190334, 195872, 201778, 204394, 206894, 231674, 239062, 239342, 246238, 248546, 259072, 264610, 265702, 267298, 271162, 285598, 285728, 298442, 304004, 313126, 318278, 325922, 335414, 338866, 340660 54 PrimeGrid is currently searching every remaining k at n > 4.4M 3622×57558139-1
52922×54399812-1
177742×54386703-1
213988×54138363-1
63838×53887851-1[7]
6 84687 {7, 13, 31, 37, 97} 1597, 9582, 57492 1 5M 36772×61723287−1
43994×6569498−1
77743×6560745−1
51017×6528803−1
57023×6483561−1
7 408034255082 {5, 13, 19, 43, 73, 181, 193, 1201} 315768, 1356018, 2210376, 2494112, 2631672, 3423408, 4322834, 4326672, 4363418, 4382984, 4870566, 4990788, 5529368, 6279074, 6463028, 6544614, 7446728, 7553594, 8057622, 8354966, 8389476, 8640204, 8733908, 9492126, 9829784, 10096364, 10098716, 10243424, 10289166, 10394778, 10494794, 10965842, 11250728, 11335962, 11372214, 11522846, 11684954, 11943810, 11952888, 11983634, 12017634, 12065672, 12186164, 12269808, 12291728, 12801926, 13190732, 13264728, 13321148, 13635266, 13976426, ... 16399 ks ≤ 1G k ≤ 2M at n = 1M, 2M < k ≤ 10M at n = 500K, 10M < k ≤ 110M at n = 150K, 110M < k ≤ 300M at n = 100K, 300M < k ≤ 1G at n = 25K 1620198×7684923−1
7030248×7483691−1
7320606×7464761−1
5646066×7460533−1
9012942×7425310−1
8 14 {3, 5, 13} none (proven) 0 − 11×818−1
5×84−1
12×83−1
7×83−1
2×82−1
9 4 4×9n − 1 = (2×3n − 1) × (2×3n + 1) none (proven) 0 − 2×91−1
10 10176 {7, 11, 13, 37} 4421 1 1.72M 7019×10881309−1
8579×10373260−1
6665×1060248−1
1935×1051836−1
1803×1045882−1
11 862 {3, 7, 19, 37} none (proven) 0 − 62×1126202−1
308×11444−1
172×11187−1
284×11186−1
518×1178−1
12 25 {13} for odd n, 25×12n − 1 = (5×12n/2 − 1) × (5×12n/2 + 1) for even n none (proven) 0 − 24×124−1
18×122−1
17×122−1
13×122−1
10×122−1
13 302 {5, 7, 17} none (proven) 0 − 288×13109217−1
146×1330−1
92×1323−1
102×1320−1
300×1310−1
14 4 {3, 5} none (proven) 0 − 2×144−1
3×141−1
15 36370321851498 {13, 17, 113, 211, 241, 1489, 3877} 381714, 4502952, 5237186, 5725710, 7256276, 8524154, 11118550, 11176190, 12232180, 15691976, 16338798, 16695396, 18267324, 18709072, 19615792, ... 14 ks ≤ 20M k ≤ 10M at n = 1M, 10M < k ≤ 20M at n = 250K 4242104×15728840−1
9756404×15527590−1
9105446×15496499−1
5854146×15428616−1
9535278×15375675−1
16 9 9×16n − 1 = (3×4n − 1) × (3×4n + 1) none (proven) 0 − 8×161−1
5×161−1
3×161−1
2×161−1
17 86 {3, 5, 29} none (proven) 0 − 44×176488−1
36×17243−1
10×17117−1
26×17110−1
58×1735−1
18 246 {5, 13, 19} none (proven) 0 − 151×18418−1
78×18172−1
50×18110−1
79×1863−1
237×1844−1
19 144 {5} for odd n, 144×19n − 1 = (12×19n/2 − 1) × (12×19n/2 + 1) for even n none (proven) 0 − 134×19202−1
104×1918−1
38×1911−1
128×1910−1
108×196−1
20 8 {3, 7} none (proven) 0 − 2×2010−1
6×202−1
5×202−1
7×201−1
3×201−1
21 560 {11, 13, 17} none (proven) 0 − 64×212867−1
494×21978−1
154×21103−1
84×2188−1
142×2148−1
22 4461 {5, 23, 97} 3656 1 2M 3104×22161188−1
4001×2236614−1
2853×2227975−1
1013×2226067−1
4118×2212347−1
23 476 {3, 5, 53} 404 1 1.35M 194×23211140−1
134×2327932−1
394×2320169−1
314×2317268−1
464×237548−1
24 4 {5} for odd n, 4×24n − 1 = (2×24n/2 − 1) × (2×24n/2 + 1) for even n none (proven) 0 − 3×241−1
2×241−1
25 36 36×25n − 1 = (6×5n − 1) × (6×5n + 1) none (proven) 0 − 32×254−1
30×252−1
26×252−1
12×252−1
2×252−1
26 149 {3, 7, 31, 37} none (proven) 0 − 115×26520277−1
32×269812−1
73×26537−1
80×26382−1
128×26300−1
27 8 8×27n − 1 = (2×3n − 1) × (4×9n + 2×3n + 1) none (proven) 0 − 6×272−1
4×271−1
2×271−1
28 144 {29} for odd n, 144×28n − 1 = (12×28n/2 − 1) × (12×28n/2 + 1) for even n none (proven) 0 − 107×2874−1
122×2871−1
101×2853−1
14×2847−1
90×2836−1
29 4 {3, 5} none (proven) 0 − 2×29136−1
30 1369 {7, 13, 19} for odd n, 1369×30n − 1 = (37×30n/2 − 1) × (37×30n/2 + 1) for even n 659, 1024 2 500K 239×30337990−1
249×30199355−1
225×30158755−1
774×30148344−1
25×3034205−1
31 134718 {7, 13, 19, 37, 331} 6962, 55758 2 1M 126072×31374323−1
43902×31251859−1
55940×31197599−1
101022×31133208−1
37328×31129973−1
32 10 {3, 11} none (proven) 0 − 3×3211−1
2×326−1
9×323−1
8×322−1
5×322−1
The conjectured smallest Riesel numbers base n are (starting with n = 2):
509203, 63064644938, 9, 346802, 84687, 408034255082, 14, 4, 10176, 862, 25, 302, 4, 36370321851498, 9, 86, 246, 144, 8, 560, 4461, 476, 4, 36, 149, 8, 144, 4, 1369, 134718, 10, 16, 6, 287860, 4, 7772, 13, 4, 81, 8, 15137, 672, 4, 22564, 8177, 14, 3226, 36, 16, 64, 900, 5392, 4, 6852, 20, 144, 105788, 4, 121, 13484, 8, 187258666, 9, ... (sequence A273987 in the OEIS)
See also
• Sierpiński number
• Woodall number
• Experimental mathematics
• BOINC
• PrimeGrid
References
1. Riesel, Hans (1956). "Några stora primtal". Elementa. 39: 258–260.
2. "The Riesel Problem statistics". PrimeGrid.
3. "The Riesel Problem statistics". PrimeGrid. Archived from the original on 14 March 2023. Retrieved 1 May 2023.
4. "Problem 29.- Brier Numbers".
5. "Riesel conjectures and proofs".
6. "Riesel conjectures & proofs powers of 2".
7. Brown, Scott (15 July 2022). "SR5 Mega Prime!". PrimeGrid. Retrieved 26 July 2022.
Sources
• Guy, Richard K. (2004). Unsolved Problems in Number Theory. Berlin: Springer-Verlag. p. 120. ISBN 0-387-20860-7.
• Ribenboim, Paulo (1996). The New Book of Prime Number Records. New York: Springer-Verlag. pp. 357–358. ISBN 0-387-94457-5.
External links
• PrimeGrid
• The Riesel Problem: Definition and Status
• The Prime Glossary: Riesel number
• List of primes of the form: k*2^n-1, k<300
• List of primes of the form: k*2^n-1, k<300, Project Riesel Prime Search
• Riesel and Proth Prime Database
Let real numbers $a, b, c, d$ satisfy $$ \left\{ \begin{array}{ccl} ax+by&=3\\ ax^2+by^2&=7\\ ax^3+by^3&=16\\ ax^4 + by^4 &=42 \end{array} \right. $$ Find $ax^5+by^5$.
If sequence $\{a_n\}$ has no zero term and satisfies that, for any $n\in\mathbb{N}$, $$(a_1+a_2+\cdots+a_n)^2=a_1^3+a_2^3+\cdots+a_n^3$$ - Find all qualifying sequences $\{a_1, a_2, a_3\}$ when $n=3$. - Is there an infinite sequence $\{a_n\}$ such that $a_{2013}=-2012$? If yes, give its general formula of $a_n$. If not, explain.
We define the Fibonacci numbers by $F_0=0$, $F_1=1$, and $F_n=F_{n-1}+F_{n-2}$. Find the greatest common divisor $(F_{100}, F_{99})$, and $(F_{100}, F_{96})$.
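A numerical check of this kind of problem is easy to set up; the sketch below also tests the well-known identity gcd(F_m, F_n) = F_{gcd(m,n)}, which is the standard route to the answer:

```python
# Minimal sketch: compute gcd(F_100, F_99) and gcd(F_100, F_96) directly.
from math import gcd

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(gcd(fib(100), fib(99)))                         # 1 (consecutive Fibonacci numbers are coprime)
print(gcd(fib(100), fib(96)))                         # fib(gcd(100, 96)) = fib(4) = 3
print(gcd(fib(100), fib(96)) == fib(gcd(100, 96)))    # True
```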
Let $\{a_n\}$ be a sequence defined as $a_1=1$ and $a_n=\frac{a_{n-1}}{1+a_{n-1}}$ when $n\ge 2$. Find the general formula of $a_n$.
For each integer $a_0 >$ 1, define the sequence $a_0, a_1, a_2, \cdots$ by: $$ a_{n+1} = \left\{ \begin{array}{ll} \sqrt{a_n} & \text{if } \sqrt{a_n} \text{ is an integer}\\ a_n + 3 & \text{otherwise} \end{array} \right. $$ For all $n \ge 0$. Determine all values of $a_0$ for which there is a number $A$ such that $a_n = A$ for infinitely many values of $n$.
Show that $$x+n=\sqrt{n^2 + x\sqrt{n^2+(x+n)\sqrt{n^2+(x+2n)\sqrt{\cdots}}}}$$
Find the value of $$\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}$$
Let $\{x_n\}$ and $\{y_n\}$ be two real number sequences which are defined as follow: $$x_1=y_1=\sqrt{3},\quad x_{n+1}=x_n +\sqrt{1+x_n^2},\quad y_{n+1}=\frac{y_n}{1+\sqrt{1+y_n^2}}$$ for all $n\ge 1$. Prove that $2 < x_ny_n < 3$ for all $n>1$.
John uses the equation method to evaluate the following expression:$$S=1-1+1-1+1-\cdots$$ and get $$S=1-S \implies \boxed{S=\frac{1}{2}}$$ However, $S$ clearly cannot be a fraction. Can you point out what is wrong here?
The Fibonacci sequence $(F_n)_{n\ge 0}$ is defined by the recurrence relation $F_{n+2}=F_{n+1}+F_{n}$ with $F_{0}=0$ and $F_{1}=1$. Prove that for any $m$, $n \in \mathbb{N}$, we have $$F_{m+n+1}=F_{m+1}F_{n+1}+F_{m}F_{n}.$$ Deduce from here that $F_{2n+1}=F^2_{n+1}+F^2_{n}$ for any $n \in \mathbb{N}$
Find all numbers $n \ge 3$ for which there exists real numbers $a_1, a_2, ..., a_{n+2}$ satisfying $a_{n+1} = a_1, a_{n+2} = a_2$ and\[a_{i}a_{i+1} + 1 = a_{i+2}\]for $i = 1, 2, ..., n.$
Let $a_{0} = 2$, $a_{1} = 5$, and $a_{2} = 8$, and for $n > 2$ define $a_{n}$ recursively to be the remainder when $4$($a_{n-1}$ $+$ $a_{n-2}$ $+$ $a_{n-3}$) is divided by $11$. Find $a_{2018}$ • $a_{2020}$ • $a_{2022}$.
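Since each term depends only on the previous three residues mod 11, the sequence is eventually periodic, and a brute-force computation is enough to check any proposed answer. A minimal sketch:

```python
# Minimal sketch: generate the sequence a_n = 4*(a_{n-1} + a_{n-2} + a_{n-3}) mod 11
# far enough to read off a_2018, a_2020, a_2022 and their product.
def a_sequence(length):
    a = [2, 5, 8]
    while len(a) < length:
        a.append(4 * (a[-1] + a[-2] + a[-3]) % 11)
    return a

a = a_sequence(2023)
print(a[2018], a[2020], a[2022], a[2018] * a[2020] * a[2022])
```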
Compute: $1\times 2\times 3 + 2\times 3\times 4 + \cdots + 18\times 19\times 20$.
Compute: $\frac{1}{1\times 2\times 3} + \frac{1}{2\times 3\times 4} + \cdots + \frac{1}{2016\times 2017\times 2018}$
Compute $\frac{1}{1\times 2} + \frac{1}{2\times 3} + \cdots + \frac{1}{2017\times 2018}$
Compute $1\times 2 + 2\times 3 + \cdots + 19\times 20$
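A brute-force check of the closed form behind this telescoping-style sum, 1·2 + 2·3 + ⋯ + n(n+1) = n(n+1)(n+2)/3:

```python
# Minimal sketch: verify the closed form for n = 19 (the sum 1*2 + ... + 19*20).
n = 19
brute = sum(k * (k + 1) for k in range(1, n + 1))
closed = n * (n + 1) * (n + 2) // 3
print(brute, closed, brute == closed)   # 2660 2660 True
```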
Let sequence $\{x_n\}$ satisfy the relation $x_{n+2}=x_{n+1}+2x_n$ for $n\ge 1$ where $x_1=1$ and $x_2=3$.
Let sequence $\{y_n\}$ satisfy the relation $y_{n+2}=2y_{n+1}+3y_n$ for $n\ge 1$ where $y_1=7$ and $y_2=17$.
Show that these two sequences do not share any common term.
Let $a$, $b$, and $x_0$ all be positive integers. Sequence $\{x_n\}$ is defined as $x_{n+1}=ax_n + b$ where $n \ge 1$. Show that $x_1$, $x_2$, $\cdots$ cannot be all prime.
Let sequence $\{a_n\}$ be $a_n=2^n + 3^n + 6^n - 1$ where $n\ge 1$. Find the sum of all positive integers which are co-prime to all the $a_n$.
How many different strings of length $10$ which contains only letter $A$ or $B$ contains no two consecutive $A$s are there?
Let $n$ and $k$ be two positive integers. Show that $$\frac{1}{\binom{n}{k}}=\frac{k}{k-1}\left(\frac{1}{\binom{n-1}{k-1}}-\frac{1}{\binom{n}{k-1}}\right)$$
Let $\{a_n\}$ be a geometric sequence whose initial term is $a_1$ and common ratio is $q$. Show that $$a_1\binom{n}{0}-a_2\binom{n}{1}+a_3\binom{n}{2}-a_4\binom{n}{3}+\cdots+(-1)^na_{n+1}\binom{n}{n}=a_1(1-q)^n$$
where $n$ is a positive integer.
Let $n$ be a positive integer and function $\lfloor{x}\rfloor$ return the largest integer not exceeding $x$. Compute the value of $$\sum_{k=0}^{\lfloor{\frac{n}{2}}\rfloor}\binom{n-k}{k}$$
Show that $$\sum_{k=0}^{n}(-1)^k\frac{m}{m+k}\binom{n}{k}=\frac{1}{\binom{m+n}{n}}$$
Show that $$\sum_{k=0}^{n}(-1)^k2^{2n-2k}\binom{2n-k+1}{k}=n+1$$
April 2019, 12(2): 375-400. doi: 10.3934/dcdss.2019025
Critical Schrödinger-Hardy systems in the Heisenberg group
Patrizia Pucci
Department of Mathematics and Informatics, University of Perugia, Via Vanvitelli, 1, 06123 Perugia, Italy
Dedicated to Professor Vicentiu D. Radulescu on the occasion of his 60th birthday, with high feelings of admiration for his notable contributions in Mathematics and great affection
Received May 2017 Revised December 2017 Published August 2018
The paper is focused on existence of nontrivial solutions of a Schrödinger-Hardy system in the Heisenberg group, involving critical nonlinearities. Existence is obtained by an application of the mountain pass theorem and the Ekeland variational principle, but there are several difficulties arising in the framework of Heisenberg groups, also due to the presence of the Hardy terms as well as critical nonlinearities.
Keywords: Heisenberg group, entire solutions, Schrödinger-Hardy systems, subelliptic critical systems.
Mathematics Subject Classification: Primary: 35R03, 35H20, 35J70; Secondary: 35B33, 35A15.
Citation: Patrizia Pucci. Critical Schrödinger-Hardy systems in the Heisenberg group. Discrete & Continuous Dynamical Systems - S, 2019, 12 (2) : 375-400. doi: 10.3934/dcdss.2019025
Outbreak minimization v.s. influence maximization: an optimization framework
Chun-Hung Cheng1,
Yong-Hong Kuo ORCID: orcid.org/0000-0002-6170-324X2 &
Ziye Zhou3
An effective approach to containing epidemic outbreaks (e.g., COVID-19) is targeted immunization, which involves identifying "super spreaders" who play a key role in spreading disease over human contact networks. The ultimate goal of targeted immunization and other disease control strategies is to minimize the impact of outbreaks. It shares similarity with the famous influence maximization problem studied in the field of social network analysis, whose objective is to identify a group of influential individuals to maximize the influence spread over social networks. This study aims to establish the equivalence of the two problems and develop an effective methodology for targeted immunization through the use of influence maximization.
We present a concise formulation of the targeted immunization problem and show its equivalence to the influence maximization problem under the framework of the Linear Threshold diffusion model. Thus the influence maximization problem, as well as the targeted immunization problem, can be solved by an optimization approach. A Benders' decomposition algorithm is developed to solve the optimization problem for effective solutions.
A comprehensive computational study is conducted to evaluate the performance and scalability of the optimization approach on real-world large-scale networks. Computational results show that our proposed approaches achieve more effective solutions compared to existing methods.
We show the equivalence of the outbreak minimization and influence maximization problems and present a concise formulation for the influence maximization problem under the Linear Threshold diffusion model. A tradeoff between computational effectiveness and computational efficiency is illustrated. Our results suggest that the capability of determining the optimal group of individuals for immunization is particularly crucial for the containment of infectious disease outbreaks within a small network. Finally, our proposed methodology not only determines the optimal solutions for target immunization, but can also aid policymakers in determining the right level of immunization coverage.
The containment of infectious disease outbreaks has been an important issue for decades. In the 21st century, there have still been major epidemics which posed serious global health threats, such as coronavirus disease 2019 (COVID-19), severe acute respiratory syndrome (SARS), dengue fever, Middle East respiratory syndrome (MERS), and Ebola virus disease. This study was motivated by a project initiated at the Prince of Wales Hospital (PWH) of Hong Kong [1, 2], a major hospital in the city. The project aimed to investigate solutions for effective and timely responses to possible severe infectious disease outbreaks. PWH suffered from SARS in 2003; there were at least 138 suspected SARS cases potentially acquiring the disease at the facility, where 69 of them were healthcare workers (HCWs) [3]. After SARS, there were reviews of the causes of the hospital outbreak and the effectiveness of the intervention strategies. It was believed that contact tracing was a critical step to identify potentially infected cases, as the disease could be spread through person-to-person contact. The recent advancements of information and communication technologies offered a possible and more effective way to establish contact traceability, instead of conducting a survey after the outbreak. In the project, a radio-frequency identification (RFID) system was developed to locate individuals (including patients and HCWs) within the facility. While the individuals' contact activities could be captured through this system, our next question is: what is an effective way to contain the disease? This motivated our current research.
Targeted immunization (TI) is a popular and effective approach to containing epidemic outbreaks. The essence of TI is to identify and immunize at-risk individuals or groups who have higher chances of spreading the disease to a larger population. There are several stages for the containment of infectious disease outbreaks. The very first stage is disease outbreak detection [4–6]. When an outbreak is identified, effective modeling of disease outbreaks and responsive actions for TI would be essential to the containment of the disease spread [7, 8]. The identification of the individuals or groups for immunization aims to mitigate the impacts of the disease spread as far as possible. TI provides protection for not only the targeted individuals but also other members within the same communities, e.g., those who cannot be vaccinated themselves such as infants and pregnant women. When vaccines are scarce with limited budgets, it is especially important to develop effective immunization strategies and to allocate resource optimally for containing infectious disease outbreaks. In the case of healthcare-facility outbreaks of infectious diseases such as SARS and MERS (e.g., [2]), it is essential to protect the healthcare workers (HCWs) who are the frontline medical staff against the outbreak. In this research, we focus on TI for infectious disease outbreaks spread by person-to-person contact [9], where the optimal resource allocation decisions are determined based on the contact network topology.
We consider an equivalent problem which can determine the TI solutions. TI shares similarity with the influence maximization (IM) problem which has been extensively studied in the field of social network analysis. In the IM problem, a user can influence others through social connections, making influence spread over social networks. The IM problem thus is to target a certain number of influential individuals, called "seed" nodes in the social network. These seed nodes are activated at the initial stage, such that the expected influence spread, usually associated with the expected number of nodes that eventually get activated, is maximized. While TI is to identify a set of individuals to minimize the effects of an epidemic spread, the IM problem is to identify a set of individuals to maximize the influence spread. It is natural to see that by considering the population protected from the epidemic outbreak as a reward, targeted immunization can be transformed into maximizing the reward, which is equivalent to the IM problem [10].
In this work, we first formulate TI as an optimization problem and show that it is equivalent to the standard formulation of the influence maximization problem under the framework of the Linear Threshold (LT) diffusion model. We aim to answer the following research question: can we achieve more effective TI solutions by an optimization approach for the IM problem, as compared with existing methods? To be specific, our research achieves the following contributions:
We show that the TI problem is equivalent to the famous IM problem.
We provide an explicit and concise formulation of the IM problem under the framework of the LT diffusion model.
We develop optimization approaches based on Linear Programming (LP) Relaxation and Benders' Decomposition.
We examine the solutions for the IM problem on real-world large-scale networks and show that the proposed optimization approach achieves more effective solutions, as compared with existing methods.
Insights into infectious disease outbreak containment are derived from the computation experiments.
Related work on the technical tools
We first provide an introduction to the technical tools we adopted in this research – influence maximization, linear threshold model, and Benders' Decomposition – and review the related work.
Influence maximization
The IM problem, originating from the area of viral marketing, was first studied in [11]. Later, an optimization problem was formulated and presented in [12]. After that, their work became the standard approach to solving the IM problem. They proposed an approximate solution based on the greedy algorithm. They also proved that it guarantees a (1−1/e−ε) bound to the optimal solution for diffusion models with submodular objective functions, such as the Independent Cascade model and the LT model. There are three assumptions in the standard IM model: random activation thresholds, monotonic diffusion functions, and submodular diffusion functions [13]. Mossel and Roch [13] showed that the submodularity holds for the network-level propagation at the global structure if the above three assumptions are satisfied. Soma et al. [14] defined a submodular function on the integer lattice, which extends submodular set functions, and introduced a maximization problem for monotonic submodular function under Knapsack constraints that no longer requires uniform costs. They proposed a polynomial time algorithm with (1−1/e) bound to solve the budget allocation problem and compared several strategies for selecting seed sets. Khanna and Lucier [15] proved through bond-percolation-based probabilistic analysis that, on undirected networks, the greedy algorithm could achieve a (1−1/e+c) bound.
There are two main directions which are extended from Kempe's work. One is to improve the effectiveness of the solution, as the greedy algorithm gives only (1−1/e) approximation to the optimal solution. The other direction is to increase the efficiency of the solution algorithm because the standard solution using Monte Carlo simulations to calculate the expected spread of a seed set requires a significant computation time. However, as far as we know, there is no work in the first direction that aims to improve the effectiveness of the solution. Almost all research work remains in the second direction focusing on speeding up the calculation of expected spread within the framework of a greedy algorithm, e.g., [10, 16–18].
Linear threshold model
The IM problem on the LT model is NP-hard [12], and the standard greedy algorithm based on Monte Carlo simulations is computationally expensive. Thus, extensive research has been carried out to advance the performance of approaches to computing the IM process on the LT model. Leskovec et al. [10] proposed a lazy-forward optimization to accelerate the simple greedy algorithm by reducing the number of spread estimation calls, based on the idea that the marginal gain of a node in previous iterations is always larger than (or at least equal to) its marginal gain at the current iteration. Chen et al. [17] proved that calculation of the expected spread on Directed Acyclic Graphs (DAGs) can be completed in linear time. Their algorithm constructs a local DAG for each node. It then iteratively selects a seed using the classic greedy algorithm, which achieves maximum incremental influence spread at each iteration. Goyal et al. [18] proposed an approximation algorithm that utilizes simple paths to calculate the influence for a node and treats the influence for a set as the sum of the influences for all nodes in the set. In this way, the calculation of expected spread is decoupled and becomes additive. Since enumerating all simple paths between a pair of nodes is computationally intractable, they speed up the algorithm by introducing a threshold to prune paths which have little influences.
Benders' decomposition
Benders' decomposition is a technique in mathematical programming that allows solving some huge mixed integer linear programming (MILP) problems of certain structures. Classical Benders' decomposition approaches separate a MILP problem into a master problem, usually a MILP problem, and LP subproblems whose dual solutions are used to derive new cuts for the master problem [19]. Hooker and Ottosson [20] proposed Logic-Based Benders' decomposition where cuts are obtained through the inference dual rather than from the dual formulation of the subproblem. Later, Codato and Fischetti [21] developed and applied Combinatorial Benders' decomposition, which is a particular case of Logic-Based Benders' decomposition, to MILP problems involving large numbers of conditional constraints, the so-called big-M constraints. A combinatorial Benders' cut is derived whenever the solution for the master MILP problem leads to an infeasible subproblem. Combinatorial Benders' decomposition has been successfully applied to various real-world applications such as those related to scheduling and assignment problems. Bai [22], for instance, used Combinatorial Benders' decomposition to solve an optimal allocation problem in which tollbooths are allocated to roads to cover the entire road network such that the number of tollbooths required is minimized. By combinatorial decomposition, a large number of logic implications (big-M constraints) can be avoided.
To address the issue that existing IM methods are based on the greedy algorithm, which guarantees only (1−1/e) approximation on submodular diffusion functions, we present a novel and concise formulation of the IM problem on the LT model so that it can be solved by more effective optimization techniques. Our approach no longer suffers the limitation of (1−1/e) approximation, thus providing solutions with higher quality.
We first show the equivalence of the TI problem and the famous IM problem. Then we introduce the LT model and present the proposed Time Aware Influence Maximization (TAIM) model, which takes the temporal nature of influence propagation into the LT model. Notations used in the paper are summarized as follows:
\(G = (\mathcal {V},\mathcal {E})\) = the graph representing the social network;
N = number of nodes on the graph, i.e., \(|\mathcal {V}|\);
S = seed set;
K = number of seed nodes |S|;
\(\mathcal {N}^{in}(u)\) = in-neighbor set of node u;
\(\mathcal {N}^{out}(u)\) = out-neighbor set of node u;
wu,v = influence weight of node u on v;
π(S) = expected penalty incurred by S;
πi(S) = penalty incurred by S under scenario i;
σ(S) = expected number of nodes influenced by S;
σT(S) = expected number of nodes influenced by S within T time units;
\(\Delta p^{t}_{v}\) = delta influence of node v at time t; and
M = A sufficiently large number.
Targeted immunization and influence maximization
In the TI problem, a subset of nodes (i.e., individuals) is selected for immunization such that the effects of the infectious disease outbreak can be minimized. Let set I represent all possible scenarios of the outbreak. An event i∈I represents a scenario that starts from a node \(s' \in \mathcal {V}\) and spreads through a network \(G = (\mathcal {V},\mathcal {E})\). When it reaches a protected node \(s \in S \subseteq \mathcal {V}\), the transmission subtree rooted at node s is cut off. Thus, a penalty function πi(s), dependent on the scenario i, is incurred for the population affected before a contaminant reaches the protected node s. The affected population is defined as the expected number of people who get infected. The goal of TI is to minimize the expected penalty over all possible scenarios, that is, to minimize the expected number of individuals that would be affected by the outbreak. The TI problem is formulated as:
$$\begin{array}{@{}rcl@{}} \min \qquad \pi(S) &=& \sum_{i\in I} P(i) \pi_{i}(S)\\ s.t.\qquad c(S) &\leq& B \end{array} $$
where P is a probability distribution over the events, c(S) is a cost function for set S, and B is a limited budget which the total cost cannot exceed.
The IM problem is to determine a seed set such that the expected influence spread is maximized. Different choices of seed nodes lead to different influence spreads that are measured by spread scores. Generally, the spread score is a set function σ that maps every seed set S to a real number σ(S). This set function σ is the objective to maximize in the problem. With this notion of expected influence spread, the IM problem can be formulated as the following optimization problem:
$$\begin{array}{@{}rcl@{}} \max \qquad \sigma(S) = \sum_{i\in I}P(i)\sigma_{i}(S)\\ s.t.\qquad c(S) &\leq& B \end{array} $$
where B is a budget which cannot be exceeded for selecting the seeds.
Following the argument in [10], we show the equivalence between the TI problem and the IM problem. In the TI problem, a maximum penalty πi(∞) is incurred when no node is protected in scenario i. We consider a scenario-specific penalty reduction σi(S)=πi(∞)−πi(S) instead of the penalty πi(S), which can be viewed as a reward for protecting nodes in S. Thus the expected penalty reduction
$$\sigma(S) = \sum_{i \in I} P(i)\sigma_{i}(S) = \pi(\emptyset) - \pi(S), $$
describes the expected reward obtained from providing protection for set S. Thus the TI problem and the IM problem become equivalent.
The LT model is defined as follows. In an LT influence graph \(G = (\mathcal {V},\mathcal {E})\), an arc (u,v) is assigned weight wu,v if \((u,v)\in \mathcal {E}\), where \(\sum _{u\in \mathcal {V}} w_{u,v} \leq 1, \forall v\). In other words, a node v is affected by its neighbor u with an influence weight wu,v. A condition of having the sum of influence weights for all in-neighbors to v no more than one is imposed to ensure that such influence is normalized. When a seed set \(S\in \mathcal {V}\) is selected, influence originates from S and spreads through the network in discrete steps. Each node v independently chooses a threshold λv uniformly at random from [0,1]. At each time step t, an inactive node v becomes active at time step t+1 if the total weights from its active in-neighbors reaches its threshold λv, i.e.,
$$\sum_{u\in \mathcal{N}^{in}(v)} w_{u,v} I(u,t)\geq \lambda_{v}, $$
where I(u,t)=1 if u is active at time step t, otherwise I(u,t)=0. Let σ(S) denote the expected number of nodes activated by seed set S over all λv values from uniform distributions. σ(S) is referred to as the influence spread of seed set S on network G under the LT model, which is the objective to maximize in the model.
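To make the diffusion process concrete, the following Python sketch estimates σ(S) by Monte Carlo simulation, which is the estimation strategy behind the Monte-Carlo based greedy baseline discussed later. The toy network, the weights, and the number of simulation runs are illustrative assumptions rather than data from this study.

```python
import random

# Toy directed network: in_neighbors[v] = {u: w_uv}; incoming weights of each node sum to <= 1.
# The graph and weights are made-up illustrative values.
in_neighbors = {
    'a': {},
    'b': {'a': 0.6, 'c': 0.3},
    'c': {'a': 0.5},
    'd': {'b': 0.7, 'c': 0.2},
}

def simulate_lt(seed_set, in_neighbors):
    """One realization of the Linear Threshold process; returns the final active set."""
    thresholds = {v: random.random() for v in in_neighbors}   # lambda_v ~ Uniform[0, 1]
    active = set(seed_set)
    changed = True
    while changed:
        changed = False
        for v in in_neighbors:
            if v in active:
                continue
            weight = sum(w for u, w in in_neighbors[v].items() if u in active)
            if weight >= thresholds[v]:   # total weight of active in-neighbors reaches the threshold
                active.add(v)
                changed = True
    return active

def estimate_spread(seed_set, in_neighbors, runs=10000):
    """Monte Carlo estimate of sigma(S): the expected number of nodes activated by S."""
    return sum(len(simulate_lt(seed_set, in_neighbors)) for _ in range(runs)) / runs

print(estimate_spread({'a'}, in_neighbors))
```

Each run draws fresh thresholds, so averaging the sizes of the final active sets approximates the expected spread.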
The standard formulation of the IM problem is general but requires the enumeration of all possible spreading scenarios. Such a problem has been shown to be NP-hard. In this work, we aim to provide a concise formulation to characterize the IM process under the framework of the LT model. To this end, we exploit the discrete propagation nature of the LT model. Consider a local network, e.g., Fig. 1, in which \(v_1\) and \(v_2\) are in-neighbors of \(v_0\) and they are all non-seed nodes. Let \(p^{t}_{i}\) denote the probability that node \(v_i\) is active at time t. It is obvious that \(p_{0}^{t+1}=p_{1}^{t} w_{1,0} + p^{t}_{2} w_{2,0}\) and \(p^{t+2}_{0} = p^{t+1}_{1} w_{1,0} + p^{t+1}_{2} w_{2,0}\). Let \(\Delta p^{t+1}_{i} = p^{t+1}_{i} - p_{i}^{t}\), then
$$\begin{array}{@{}rcl@{}} \Delta p_{0}^{t+1} = \Delta p_{1}^{t} w_{1,0} + \Delta p_{2}^{t} w_{2,0} \end{array} $$
(Fig. 1: an example of a local network)
and
$$\begin{array}{@{}rcl@{}} p_{0}^{T} = \sum_{t=1}^{T} \Delta p_{0}^{t} \end{array} $$
where \(w_{u,v}\) is the influence weight from \(u\) to \(v\). The above equations mean that the influence on a node can be obtained through its delta influence at each time step, which is determined by the delta influences of its in-neighbors only. We define delta influence as follows.
Definition 1
(Delta Influence) The delta influence \(\Delta p^{t}_{v}\) is the influence increment on node v at time t, where the influence on v means the probability of v being activated. The sum of delta influences for a node over time periods [0,T] gives the influence on the node at time T.
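As an illustration of Definition 1, the sketch below propagates delta influences forward in time exactly as in the recursion above: a seed contributes its weight to its out-neighbors in the first period, and afterwards each node's delta influence is the weighted sum of its in-neighbors' delta influences from the previous period. The toy network is an assumption made for the example.

```python
# Toy weighted directed network (illustrative values): arcs[(u, v)] = w_uv.
arcs = {('a', 'b'): 0.6, ('a', 'c'): 0.5, ('c', 'b'): 0.3, ('b', 'd'): 0.7, ('c', 'd'): 0.2}
nodes = {'a', 'b', 'c', 'd'}

def delta_influence(seed_set, arcs, nodes, T):
    """Propagate delta influences for T periods; returns each node's accumulated influence."""
    in_neighbors = {v: {} for v in nodes}
    for (u, v), w in arcs.items():
        in_neighbors[v][u] = w

    # Period 1: a non-seed node receives influence directly from its seed in-neighbors.
    delta = {v: 0.0 if v in seed_set else
                sum(w for u, w in in_neighbors[v].items() if u in seed_set)
             for v in nodes}
    p = dict(delta)

    # Periods 2..T: delta influence is the weighted sum of in-neighbors' deltas at t - 1.
    for _ in range(2, T + 1):
        delta = {v: 0.0 if v in seed_set else
                    sum(w * delta[u] for u, w in in_neighbors[v].items())
                 for v in nodes}
        for v in nodes:
            p[v] += delta[v]
    return p

p = delta_influence({'a'}, arcs, nodes, T=3)
print(p)                                   # per-node accumulated influence after T periods
print(len({'a'}) + sum(p.values()))        # the quantity maximized in Objective (3) below
```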
Definition 2
(Time Aware Influence Maximization Problem) Given a directed network \(G = (\mathcal {V},\mathcal {E})\) with influence weight \(w_{u,v}\in(0,1]\) for each arc \((u,v)\in \mathcal {E}\), and a budget K restricting the size of the seed set, the objective is to determine a seed set \(S\subseteq \mathcal {V}\) such that the expected influence within T time steps induced by S, σT(S), is maximized under the LT model.
Formulation of time aware influence maximization problem
By Definition 1, we formulate the TAIM problem explicitly as a MILP problem in a concise form:
$$\begin{array}{@{}rcl@{}} \max \qquad \sigma(S)=\sum_{i=1}^{N} y_{i} &+& \sum_{t=1}^{T}\sum_{i=1}^{N} x_{i}^{t} \end{array} $$
$$\begin{array}{@{}rcl@{}} s.t. \hspace{1in} \sum_{i=1}^{N} y_{i} &\leq& K \end{array} $$
$$\begin{array}{@{}rcl@{}} x^{t}_{i} - M (1-y_{i}) &\leq& 0 \quad \forall i, t \geq 1 \end{array} $$
$$\begin{array}{@{}rcl@{}} x^{1}_{i} - \sum_{j \in \mathcal{N}^{in}(i)} w_{ji}y_{j} &\leq& 0 \quad \forall i \end{array} $$
$$\begin{array}{@{}rcl@{}} x^{t}_{i} - \sum_{j \in \mathcal{N}^{in}(i)} w_{ji}x^{t-1}_{j} &\leq& 0 \quad \forall i,t\geq 2 \end{array} $$
$$\begin{array}{@{}rcl@{}} y_{i} &\in & \{0,1\} \quad \forall i \end{array} $$
where \(x_{i}^{t} = \Delta p_{i}^{t}, i \in \mathcal {V}, t \in \mathcal {T}\). The MILP problem has two sets of decision variables: \(\left \{y_{i}: \forall i \in \mathcal {V}\right \}\) and \(\left \{x^{t}_{i}: \forall i \in \mathcal {V}, t \in \mathcal {T}\right \}\). \(y_{i}, i \in \mathcal {V}\), is binary: 1 if node i is selected or 0 otherwise. Continuous variable \(x^{t}_{i} := \Delta p_{i}^{t}, i \in \mathcal {V}, t \in \mathcal {T}\), denotes the delta influence of node i at time t. Objective Function (3) maximizes the expected influence spread initiated by a seed set S, that is, the size of the seed set S plus the sum of delta influences over all nodes and all time periods. Constraint (4) imposes the restriction on the budget. Constraints (5) ensure that the delta influence of any seed node is zero for all time periods. Constraints (6) and (7) establish the relationships among the delta influences of the nodes between consecutive periods according to the LT model.
To solve this MILP problem, a simple approach is proposed to solve its LP Relaxation (and use a heuristic to round LP solutions to generate seed sets that satisfy the budget constraint). Another approach to solving the large-scale MILP problem is the Benders' Decomposition algorithm, which we will elaborate on later in this paper. In the experiments, we evaluate solutions obtained from both the LP Relaxation and the Benders' Decomposition algorithm and compare their performance with popular IM algorithms. The computational experiments suggest that both approaches are effective in solving the TAIM problem, i.e., obtaining high-quality solutions.
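The experiments in this paper use CPLEX; purely as an illustration of formulation (3)-(8), the model can be written with the open-source PuLP library roughly as in the sketch below, where the small network, the horizon T, the budget K and the value of M are assumptions made for the example (M = 1 suffices here because delta influences never exceed 1).

```python
import pulp

# Illustrative instance (assumed, not from the paper): nodes, arc weights, horizon, budget.
nodes = ['a', 'b', 'c', 'd']
w = {('a', 'b'): 0.6, ('a', 'c'): 0.5, ('c', 'b'): 0.3, ('b', 'd'): 0.7, ('c', 'd'): 0.2}
T, K, M = 3, 1, 1.0      # M = 1 is "sufficiently large" here since delta influences never exceed 1

def in_arcs(i):
    return [(j, wt) for (j, k), wt in w.items() if k == i]

prob = pulp.LpProblem("MILP_TAIM", pulp.LpMaximize)
y = {i: pulp.LpVariable(f"y_{i}", cat="Binary") for i in nodes}                     # seed indicators
x = {(i, t): pulp.LpVariable(f"x_{i}_{t}", lowBound=0)
     for i in nodes for t in range(1, T + 1)}                                        # delta influences

# Objective (3): number of seeds plus accumulated delta influences.
prob += pulp.lpSum(y.values()) + pulp.lpSum(x.values())

prob += pulp.lpSum(y.values()) <= K                                                  # (4) budget
for i in nodes:
    for t in range(1, T + 1):
        prob += x[i, t] <= M * (1 - y[i])                                            # (5) seeds get no delta
    prob += x[i, 1] <= pulp.lpSum(wt * y[j] for j, wt in in_arcs(i))                 # (6) first period
    for t in range(2, T + 1):
        prob += x[i, t] <= pulp.lpSum(wt * x[j, t - 1] for j, wt in in_arcs(i))      # (7) later periods

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in nodes if y[i].value() > 0.5], pulp.value(prob.objective))
```

Because the objective maximizes the delta influences, the inequality constraints (5)-(7) are tight wherever that is beneficial, which reproduces the propagation recursion of the previous section.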
Introduction to Benders' Decomposition
The original MILP formulation of the TAIM problem is difficult to solve, especially for large-scale instances. Fortunately, for optimization problems in certain forms, Benders' decomposition techniques, introduced by Benders [19], can be used to obtain an optimal or a near-optimal solution by an iterative procedure. It is an algorithm that decomposes a difficult problem into two manageable parts, the master problem and subproblems. The master problem obtains values for a subset of the variables by solving a relaxed version of the original problem. A subproblem accepts the variables of the master problem and solves for the remaining variables. The subproblem solution is then used to form new constraints or cuts which are added to the master problem and cut off the master problem solution. Master problems and subproblems are solved iteratively in such procedure until no more cuts can be generated. Finally, an optimal solution for the original problem is obtained by combining the solutions of the master problem and subproblem from the last iteration.
The classic Benders' Decomposition algorithm solves the master problem to optimality at each iteration, which often results in a significant amount of rework and a significant amount of time. In the modern approach, the algorithm solves only a single master MILP problem. Whenever a feasible solution for the master problem is found, it fixes the variables of the master problem to the feasible solution and solves the subproblem. This procedure can be realized using callbacks provided by off-the-shelf MILP solvers such as CPLEX and Gurobi.
Applying Benders' Decomposition to TAIM problem
We apply the Benders' Decomposition algorithm to the TAIM problem, resulting in a master problem and subproblems that are solved iteratively. In this way, part of the complexity of solving the original problem is shifted to two separated simpler problems.
The master problem determines which nodes are selected as seed nodes at the initial stage, the delta influence for each non-seed node at the first period, and the estimate of the delta influence for each node at the remaining periods. Subsequently, the subproblem verifies the estimated delta influence after accepting the solutions to the master problem. Whenever the subproblem identifies an overestimated expectation of the delta influence, an optimality cut is generated and added to the master problem. Since any seed set with size no more than |S| is a possible candidate seed set, the subproblem is always feasible whatever a seed set is passed to the master problem, meaning that the subproblem never generates a feasibility cut in this problem. The master problem and subproblems are discussed in detail in the following discussion.
The following MILP problem defines the master problem. Solving the master problem alone leads to a solution that selects seed nodes with the maximum weighted degree. This solution is adopted as the initial solution for the decomposition algorithm, that is the root node of the single solution tree of the master MILP problem. Thus the final solution generated by the decomposition algorithm is at least as good as the heuristic method of choosing the nodes with maximum weighted degrees.
$$\begin{array}{@{}rcl@{}} {}\max \sum_{i=1}^{N} \left(y_{i} + x^{1}_{i}\right) + z \end{array} $$
$$\begin{array}{@{}rcl@{}} s.t. \hspace{1in} \text{Constraints (4), (6), and (8)} && \\ x^{1}_{i} + My_{i} &\leq& M \quad \forall i \end{array} $$
$$\begin{array}{@{}rcl@{}} {}z&\leq&N\qquad \end{array} $$
$$\begin{array}{@{}rcl@{}} {}\left(y,x^{1},z\right)&\in& \mathcal{O} \end{array} $$
The master problem has three sets of variables: binary variables \(y_{i},i \in \mathcal {V}\) denoting whether node i is selected as a seed node, continuous variables \(x^{1}_{i},i \in V\) denoting the delta influence of each node at the first period, and an auxiliary continuous variable z representing an estimate of the delta influence at remaining periods. Objective Function (9) maximizes the influence spread until the first period and the expected incremental influence afterwards. Constraints (4), (6), and (8) containing only master decision variables become part of the master problem. Constraints (10) are a subset of Constraints (5) for t=1. Constraint (11) is used to bound the auxiliary variable z to initialize a feasible solution of the master problem. Constraint (12) represents the optimality cuts generated from the subproblems.
The subproblems are defined as follows. Once the master problem has determined the seed nodes that are activated at the initial stage, a subproblem is solved to test whether the expected influence spread in the master problem violates the actual influence spread. That is, the optimality of the solution for this seed set is verified. Whenever an expectation overestimates the actual influence spread, an optimality cut is generated and added to the master problem to correct the estimation. On the other hand, if an expectation is consistent with the actual influence spread, the subproblem determines whether the current feasible integer solution is accepted or not.
$$\begin{array}{@{}rcl@{}} {}\max \sum_{t=2}^{T} \sum_{i=1}^{N} x_{i}^{t} \end{array} $$
$$\begin{array}{@{}rcl@{}} s.t. \hspace{1in} x^{2}_{i} &\leq& \sum_{j=1}^{N} w_{ji} x_{j}^{1} \qquad \forall i \end{array} $$
$$\begin{array}{@{}rcl@{}} x^{t}_{i}&\leq& \sum_{j=1}^{N} w_{ji} x_{j}^{t-1} \quad \forall i,t \geq 3 \end{array} $$
$$\begin{array}{@{}rcl@{}} x^{t}_{i} &\leq& M(1-y_{i}) \quad \forall i,t\geq 2 \end{array} $$
The continuous variables \(x^{t}_{i},i \in \mathcal {V},t \geq 2\) denote the delta influence of node i after the second period. Objective Function (13) identifies the actual influence spread at remaining periods. Constraints (14) are formed separately because master variables \(x^{1}_{i}, i \in \mathcal {V}\) are now fixed, and Constraints (15) are subsets of Constraints (7), which correspond to t=2 and t≥3 respectively. Constraint (16) is a subset of Constraints (5) for t≥2. Since a subproblem is always feasible, the optimality cut is dependent on only the objective function of the dual subproblem, which is defined in the following form.
$$ z\leq \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ji} x_{j}^{1} u_{i} +\sum_{t=2}^{T} \sum_{i=1}^{N} M\left(1-y_{i}\right) u_{(T+t-3)N+i} $$
where u is the optimal solution for the dual problem.
The primal subproblem contains conditional constraints ("big M" coefficients) in Constraints (16), which in general may lead to loose bounds for the master problem due to the weak optimality cuts (12) generated with "big M" coefficients. Here the introduction of "big M" is to impose a constraint such that the subproblem variable \(x^{t}_{i} = 0, i \in \mathcal {V},t \geq 2\) only when the corresponding master variable \(y_{i},i \in \mathcal {V}\) is 1. In this case, we approximate the exact Benders' Decomposition by modifying Constraints (16) into the following form:
$$x^{t}_{i} \leq 0 \qquad \forall i \in S, t\geq 2 $$
where S is the seed set, i.e., the set of nodes with yi=1. This means that, for the given master solution, the conditional constraints are enforced directly rather than through the big-M terms. Another approach to speeding up the algorithm is the Combinatorial Benders' Cuts, which can be generated in addition to the optimality cuts to provide stronger cuts that tighten the master problem.
Benders' Decomposition algorithm
The approximate Benders' Decomposition algorithm for the TAIM problem is presented as follows.
Whenever a candidate solution is found for the master problem during the optimization process, a subproblem is solved after fixing the master variables (y∗,x1∗,z∗) according to this candidate solution. Since the subproblem is always feasible with such master variables, no feasibility cuts will be generated in this decomposition procedure. Instead, optimality cuts are generated and added to the master problem through the following verification. Let zs denote the optimal objective value of the subproblem and u denote the optimal solution for the dual subproblem. If zs<z∗, meaning that the influence spread is overestimated by the master problem, the optimality cut \(z\leq \sum _{i=1}^{N} \sum _{j=1}^{N} w_{ji} x_{j}^{1} u_{i}\) is added to the master problem, and the algorithm continues by solving the master problem again. If zs=z∗, the solution is accepted. The Benders' Decomposition algorithm continues by searching for an incumbent solution for the master problem. The solution process ends after the master MILP problem is solved or when a feasible integer solution has been proved to be within a certain optimality gap.
The above algorithm can be modified to seek cuts more aggressively at every node in the master solution tree. Instead of waiting for a new candidate incumbent to add cuts, the algorithm can simply pass a fractional master solution to the subproblem or use a rounded master solution in the subproblem, which may tighten the master problem quickly to prune nodes high in the master search tree.
Algorithm 1 outlines the approximate Benders' Decomposition algorithm applied to the TAIM problem.
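A minimal, self-contained sketch of one possible implementation of the iteration just described is given below. It is not the authors' implementation: it uses the open-source PuLP/CBC stack instead of CPLEX callbacks, rebuilds the subproblem at every iteration instead of working within a single master search tree, and uses the approximate form of Constraints (16), i.e., the delta influences of the chosen seeds are fixed to zero, so the cut added is the one stated above, \(z\leq \sum_{i} u_{i} \sum_{j} w_{ji} x_{j}^{1}\), with u taken from the duals of the linking constraints (14). The toy instance and the iteration cap are assumptions made for illustration.

```python
import pulp

# Illustrative instance (assumed, not from the paper).
nodes = ['a', 'b', 'c', 'd']
w = {('a', 'b'): 0.6, ('a', 'c'): 0.5, ('c', 'b'): 0.3, ('b', 'd'): 0.7, ('c', 'd'): 0.2}
T, K, N = 3, 1, len(nodes)

def in_arcs(i):
    return [(j, wt) for (j, k), wt in w.items() if k == i]

# Master problem (9)-(12): seeds y, first-period deltas x1, estimate z of the later spread.
master = pulp.LpProblem("master", pulp.LpMaximize)
y = {i: pulp.LpVariable(f"y_{i}", cat="Binary") for i in nodes}
x1 = {i: pulp.LpVariable(f"x1_{i}", lowBound=0) for i in nodes}
z = pulp.LpVariable("z", lowBound=0)
master += pulp.lpSum(y[i] + x1[i] for i in nodes) + z                      # (9)
master += pulp.lpSum(y.values()) <= K                                      # (4)
for i in nodes:
    master += x1[i] <= pulp.lpSum(wt * y[j] for j, wt in in_arcs(i))       # (6)
    master += x1[i] <= 1 - y[i]                                            # (10), with M = 1
master += z <= N                                                           # (11)

for _ in range(20):                                   # safeguard on the number of iterations
    master.solve(pulp.PULP_CBC_CMD(msg=False))
    seeds = {i for i in nodes if y[i].value() > 0.5}
    x1_val = {i: x1[i].value() for i in nodes}
    z_star = z.value()

    # Subproblem (13)-(15) for the fixed seed set and fixed x1 (approximate form of (16)).
    sub = pulp.LpProblem("sub", pulp.LpMaximize)
    x = {(i, t): pulp.LpVariable(f"x_{i}_{t}", lowBound=0)
         for i in nodes for t in range(2, T + 1)}
    sub += pulp.lpSum(x.values())                                           # (13)
    for i in nodes:
        # (14): the duals of these linking constraints give the cut coefficients u_i.
        sub += x[i, 2] <= sum(wt * x1_val[j] for j, wt in in_arcs(i)), f"link_{i}"
        for t in range(3, T + 1):
            sub += x[i, t] <= pulp.lpSum(wt * x[j, t - 1] for j, wt in in_arcs(i))   # (15)
        if i in seeds:
            for t in range(2, T + 1):
                sub += x[i, t] <= 0                                         # seeds receive no delta
    sub.solve(pulp.PULP_CBC_CMD(msg=False))
    z_sub = pulp.value(sub.objective)

    if z_sub >= z_star - 1e-6:              # the estimate was not over-optimistic: accept
        break
    # Dual values of the linking constraints; note that dual sign conventions differ
    # between solvers, so .pi may need its sign flipped with some back ends.
    u = {i: sub.constraints[f"link_{i}"].pi for i in nodes}
    # Optimality cut: z <= sum_i u_i * sum_j w_ji * x1_j, expressed in the master variables.
    master += z <= pulp.lpSum(u[i] * wt * x1[j] for i in nodes for j, wt in in_arcs(i))

print(sorted(seeds), pulp.value(master.objective))
```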
Computational environment
We conduct a comprehensive computation study to examine the performance of our proposed solution methodology. We present our findings, such as the effectiveness of the methodology, obtained from the computational experiments on three real-world datasets of human networks. The proposed method is implemented using the optimization software CPLEX 12.6. All the experiments are performed on a Linux server running Ubuntu 12.04 with four Intel Xeon CPU E5-2420 processors (1.9GHz) and 193GB memory. The performance of our solution methodology is compared with those resulting from popular IM algorithms and generic heuristic methods.
The characteristics of the datasets are listed in Table 1. Influence weights are obtained by normalizing the original arc weights for incoming arcs of a node, which is similar to the method used in [18]. Specifically, an arc (u,v) is assigned with a weight b(u,v)=w(u,v)/W(v), where w(u,v) is the original weight of arc (u,v) and W(v) is a normalization factor with \(W(v) = \sum _{u\in \mathcal {N}^{in}(v)} w(u,v)\). This assignment of values ensures that the sum of incoming weights for node v equals 1.
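In code, this normalization step might look as follows; the raw arc weights are placeholder values, not taken from the datasets.

```python
from collections import defaultdict

# Raw directed arc weights w(u, v) from the data (placeholder values).
raw = {('u1', 'v'): 3.0, ('u2', 'v'): 1.0, ('v', 'u1'): 2.0}

# W(v): total raw weight entering node v.
incoming_total = defaultdict(float)
for (u, v), weight in raw.items():
    incoming_total[v] += weight

# b(u, v) = w(u, v) / W(v): the incoming influence weights of every node now sum to 1.
b = {(u, v): weight / incoming_total[v] for (u, v), weight in raw.items()}
print(b)   # {('u1', 'v'): 0.75, ('u2', 'v'): 0.25, ('v', 'u1'): 1.0}
```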
Table 1 Descriptive statistics of the datasets for the computational experiments
Below provides a brief description of the datasets used in this computational study.
NetPWH A person-to-person contact network of patients and HCWs in two main wards of PWH [1], constructed with the data collected during the project introduced in the "Background" section. The dataset covers activities from December 2011 to March 2012. The network contains 166 nodes, including 56 patients and 110 healthcare workers. Arcs are weighted proportionally to contact frequencies.
HepCollab A collaboration network of scientists on High Energy Physics - Theory section at arXiv.org, from the year 1991 to 2003. The graph contains 15K nodes and 62K arcs. Arcs are weighted based on the number of common papers and the number of authors of the papers.
SocEpinions A who-trust-whom online social network collected from a general consumer review site Epinions.com [23]. An arc indicates whether a member of the site decides to "trust" the other. All the trust relationships interact and form the Web of Trust. The network contains around 132K nodes and 841K arcs.
The computational experiment on the dataset NetPWH aims to examine the effectiveness of our proposed methodology in a realistic healthcare facility setting. In particular, we aim to investigate how the person-to-person contact network topology can be integrated into the optimization framework for mitigating the risk of nosocomial diseases outbreaks. To test the scalability of our approach, the two large datasets HepCollab and SocEpinions are used. The three networks NetPWH, HepCollab, and SocEpinions, respectively, can be considered as small, moderate, and large instances in our computational experiments.
Algorithms for comparison
We compare our proposed solution methodology with several popular IM algorithms and some generic heuristic methods.
Maximum weighted degree (MAXWEI-DEGREE). Similar to selecting nodes with highest degrees, this heuristic method selects nodes with the K highest total out-weights, i.e., \(\sum _{v\in \mathcal {N}^{out}(u)} w(u,v)\) for node u (a short sketch of this heuristic appears after this list).
Monte-Carlo based cost-effective lazy forward algorithm (GREEDY). This is a greedy algorithm with CELF optimization proposed in [10, 12]. Monte-Carlo simulations are run to estimate the influence spread of a seed set, and the CELF optimization is to accelerate the spread computation.
Local directed acyclic graph (LDAG). This is the algorithm proposed in [17] that constructs a local DAG for each node to estimate the influence spread. The influence parameter θ is set to 1/320, as recommended in [17], to control the size of a local DAG.
Simple path algorithm (SIMPATH). This is the algorithm proposed in [18] that uses simple paths to estimate the influence spread for each node. We set the pruning threshold η=10−3 and the look-ahead value l=4 as recommended by the authors.
Solution of LP relaxation with the highest probabilities (HIGHPROB-LPR). To satisfy the integer constraints, we select K nodes that have the highest values in the solution for the LP relaxation. This can be considered as the selection of the nodes with the K highest probabilities to be activated at the initial stage (a sketch of this rounding step appears after this list).
Approximate Benders' decomposition (APPROX-BENDERS). This is the approximate Benders' decomposition algorithm, in which optimality cuts are generated based on the approximate form of the Benders' subproblem and the conditional constraints are passed to the subproblem literally.
Exact solution of LP relaxation (EXACT-LPR). The seed set is obtained by solving the LP relaxation of the MILP-TAIM problem. Since the binary decision variables are relaxed to continuous values, the solutions obtained by this approach are not feasible but provide an upper bound for the optimization problem. More specifically, the number of seed nodes with non-zero values of the associated decision variables may not be equal to K; but the sum of these variables is K. The solutions to EXACT-LPR can be used as a benchmark to measure the quality of solution (i.e., proximity of solutions to optimality).
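For reference, minimal sketches of the two simplest steps above, the MAXWEI-DEGREE selection and the rounding used by HIGHPROB-LPR, might look as follows; the arc weights and LP values are made-up illustrative inputs.

```python
from collections import defaultdict

def maxwei_degree(arcs, K):
    """MAXWEI-DEGREE: pick the K nodes with the largest total outgoing weight."""
    out_weight = defaultdict(float)
    for (u, v), wt in arcs.items():
        out_weight[u] += wt
    return sorted(out_weight, key=out_weight.get, reverse=True)[:K]

def highprob_rounding(lp_values, K):
    """HIGHPROB-LPR rounding: keep the K nodes with the largest relaxed y values."""
    return sorted(lp_values, key=lp_values.get, reverse=True)[:K]

# Illustrative usage with assumed data.
arcs = {('a', 'b'): 0.6, ('a', 'c'): 0.5, ('c', 'b'): 0.3, ('b', 'd'): 0.7, ('c', 'd'): 0.2}
print(maxwei_degree(arcs, K=2))                                            # ['a', 'b'] here
print(highprob_rounding({'a': 0.9, 'b': 0.4, 'c': 0.7, 'd': 0.0}, K=2))    # ['a', 'c']
```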
In this section, we report the computational results and examine the performance of the proposed solution methodology, in terms of computational effectiveness and efficiency. The more detailed insights derived from the computational results will be given in the "Discussion" section.
Computational effectiveness
We first evaluate the performance of the algorithms on the dataset NetPWH. This experiment can be considered as a test on the effectiveness of the control of an infectious disease outbreak in a healthcare facility setting. This experiment also illustrates the feasibility of utilizing person-to-person contact network topology in the optimization framework for influence maximization, or equivalently, outbreak minimization. Figure 2 shows the percentage of active nodes achieved by different methods against the size of the seed set under the time-aware influence diffusion constraints. The higher the percentage, the more effective an algorithm is. As shown in the figure, APPROX-BENDERS achieves the highest percentage of active nodes for all set sizes. Note that there are gaps between the percentages of active nodes achieved by EXACT-LPR and APPROX-BENDERS. However, as the objective values achieved by EXACT-LPR are upper bounds for the optimal percentages of active nodes, the actual gaps are expected to be smaller than those presented in Fig. 2. While APPROX-BENDERS gives the most effective solutions, the algorithms MAXWEI-DEGREE, GREEDY, and SIMPATH are quite comparable to APPROX-BENDERS. LDAG and HIGHPROB-LPR gave the worst solution effectiveness in this experiment.
Percentage of active nodes v.s. number of seed nodes on the NetPWH instance
NetPWH is a relatively small dataset used to assess the effectiveness of the proposed methodology in a healthcare facility setting. To examine its performance on large-scale datasets, experiments on HepCollab and SocEpinions are conducted. In this set of experiments, we report the expected influence spread on these large-scale networks, as shown in Fig. 3. We have similar observations as in the experiments on NetPWH; EXACT-LPR and APPROX-BENDERS are the two most effective methodologies. However, the differences in the effectiveness of the seed sets become smaller, as compared with those from the experiments on NetPWH.
Expected influence spread. a HepCollab; b SocEpinions
Computational efficiency and scalability
This experiment is to evaluate the efficiency and scalability of our optimization-based approaches – APPROX-BENDERS, EXACT-LPR, and HIGHPROB-LPR – which are expected to be the most computationally expensive. The running time is reported against the size of the seed set on the three datasets, as shown in Fig. 4. For the experiments on NetPWH and HepCollab, as the size of the seed set increases, the running time of APPROX-BENDERS increases. We believe that it is due to the fact that when the size of the seed set increases, the solution space for the master problem of the Benders' Decomposition problem increases. Thus, more subproblems have to be solved and more optimality cuts need to be generated. On the contrary, EXACT-LPR and HIGHPROB-LPR are rather stable as the size of the seed set is only a parameter in the MILP, which does not increase the problem size. As for the scalability, EXACT-LPR and HIGHPROB-LPR are efficient when dealing with the larger-scale datasets. They finish on the moderate dataset HepCollab within 70 min and on the large dataset SocEpinions within 10 min to determine the optimal set of 50 seeds. By comparison, APPROX-BENDERS is able to manage the large dataset SocEpinions. The computation finishes in 110 min for the selection of 50 seeds, while it is not efficient on the moderate dataset HepCollab. It completes the experiments on HepCollab in around 1000 min for the selection of 30 to 50 seeds. This finding is non-trivial since all methods appear to be more efficient on the large network SocEpinions than on the moderate dataset HepCollab. The reason is that the experiments are run on a global DAG extracted from SocEpinions; however, for HepCollab, the algorithms are run on the original network, which contains more loops. We also measure the running time of MAXWEI-DEGREE, which is expected to be highly efficient. In all instances, MAXWEI-DEGREE returns the solution within seconds.
Running times of the algorithms under different environments. a NetPWH; b HepCollab; c SocEpinions
From the computational experiments, regarding the effectiveness of the seed set identified, we observe that APPROX-BENDERS outperforms other approaches. The quality of the solutions obtained by APPROX-BENDERS is illustrated by the small optimality gaps derived from EXACT-LPR. We observe that the optimality gaps and differences in effectiveness between algorithms are larger on a smaller network, by comparing Figs. 2 with 3. This suggests that an exact method for solving the IM problem is particularly important when dealing with small networks. The rationale is that in a smaller network, an influential node plays a more crucial role in maximizing the influence. In other words, the identification of an optimal group of individuals to immunize is more important for containing outbreaks of infectious diseases in a closed environment, for example, a nosocomial infectious disease outbreak. Thus, the contact frequencies of individuals and the contact network topology would be particularly helpful information in a healthcare facility setting. Hospital administrators may wish to investigate possible solutions for effective contact tracing, e.g., by the adoption of indoor tracking technologies. Existing studies also demonstrate that network topology constructed from surveillance data is useful for the control of disease transmission [24–26]. This study illustrates the feasibility and the significance of utilizing such person-to-person connectivity in the optimization framework for the control of infectious disease outbreaks.
Not surprisingly, there is a tradeoff between computational effectiveness and computational efficiency. A more effective set of seeds requires a more computationally expensive algorithm. In the experiment on the dataset NetPWH collected from a hospital, the solution time of the most effective algorithm APPROX-BENDERS is less than 100 minutes. In a practical setting, such solution time is still acceptable. However, for larger-scale instances HepCollab and SocEpinions, solution times could take almost a day. In cases when quick decisions are needed, heuristics, such as MAXWEI-DEGREE (which requires only to identify individuals of the K highest contact frequencies), can be adopted to provide responsive, yet high quality, recommendations for TI.
We also observed that the curves in Figs. 2 and 3 both exhibit concave shapes. This observation is in line with findings from other resource allocation problems; the marginal benefits of adding resources are more significant at a lower resource level. Beyond a certain size of the seeds, the effect of increasing the immunization level becomes mild. Thus, our proposed optimization framework not only identifies optimal solutions for TI, but also helps assess the benefits of expanding the immunization coverage and determine the right immunization level in a cost-effective manner.
In this work, we study the outbreak minimization problem, which is essential for developing epidemic control strategies. In general, the goal of outbreak control is to minimize the effects of the spread of infectious disease by targeting and preventing "super spreaders" who have significant influences on disease spread over human contact networks. This problem is similar to the famous influence maximization problem studied in social network analysis, which aims to identify a set of influential people to maximize the influence spread through social networks.
Specifically, we show the equivalence of the outbreak minimization and influence maximization problems and present a concise formulation for the influence maximization problem under the LT diffusion model. We then develop optimization approaches based on LP Relaxation and the Benders' Decomposition algorithm, which take into account the contact network topology, to solve the problem. A comprehensive computational study is conducted to evaluate the performance of our proposed solution methodology. Computational results show that the Benders' Decomposition approach provides more effective solutions for maximizing the influence spread (i.e., minimizing the adverse consequences of an infectious disease outbreak).
Our findings suggest that the capability of determining the optimal solutions is particularly important when containing infectious disease outbreaks in smaller networks, e.g., outbreaks of nosocomial infectious diseases. Thus, there is a potential to establish effective contact tracing methods, for example, by indoor tracking technologies, in healthcare facilities and utilize such information for optimal vaccination strategies.
We also illustrate a tradeoff between effectiveness and efficiency of the algorithms. Timely response is key to the success of infectious disease containment. For larger networks which require a long solution time with an exact method, heuristics for good-quality solutions could be a more appropriate alternative to facilitate responsive actions in practice.
Finally, our proposed methodology not only determines the optimal set of individuals for immunization, but also assists the policymakers in assessing the benefits of expanding the immunization coverage and in determining the right immunization level.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
DAG: Directed acyclic graph
HCW: Health care worker
IM: Influence maximization
LT: Linear threshold
MERS: Middle East respiratory syndrome
MILP: Mixed integer linear programming
RFID: Radio-frequency identification
SARS: Severe acute respiratory syndrome
TAIM: Time aware influence maximization
TI: Targeted immunization
The authors would like to thank the Editor and the Reviewers for the constructive suggestions and comments, which have greatly improved the work.
This study was funded by Research Grants Council (RGC) of Hong Kong (Project No. 14201314 and 14209416). The research of the second author on the modeling and simulation techniques was partially supported by Health and Medical Research Fund, Food and Health Bureau, the Hong Kong SAR Government (Project No. 14151771), and HKU Engineering COVID-19 Action Seed Funding.
Logistics and Supply Chain MultiTech R&D Centre Limited, Unit 202, Level 2, Block B, Cyberport 4, 100 Cyberport Road, Hong Kong, Hong Kong, China
Chun-Hung Cheng
Department of Industrial and Manufacturing Systems Engineering, the University of Hong Kong, Pokfulam Road, Hong Kong, China
Yong-Hong Kuo
Department of Systems Engineering and Engineering Management, the Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
Ziye Zhou
CH, YH and ZZ were involved with the conception and overall design of the methodologies. CH and ZZ formulated the research questions and designed the computing algorithms and computational experiments. ZZ developed the computational programmes, conducted the data analysis, and drafted the initial manuscript. YH provided knowledge of hospital operations and contributed to the research findings from a healthcare perspective. All authors read and approved the manuscript.
Correspondence to Yong-Hong Kuo.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cheng, CH., Kuo, YH. & Zhou, Z. Outbreak minimization v.s. influence maximization: an optimization framework. BMC Med Inform Decis Mak 20, 266 (2020). https://doi.org/10.1186/s12911-020-01281-0
Accepted: 01 October 2020
Toward a classification of the supercharacter theories of Cp × Cp
Published online by Cambridge University Press: 12 October 2022
Shawn T. Burkett and Mark L. Lewis
Department of Mathematical Sciences, Kent State University, Kent, OH 44242, USA ([email protected]; [email protected])
In this paper, we study the supercharacter theories of elementary abelian $p$ -groups of order $p^{2}$ . We show that the supercharacter theories that arise from the direct product construction and the $\ast$ -product construction can be obtained from automorphisms. We also prove that any supercharacter theory of an elementary abelian $p$ -group of order $p^{2}$ that has a non-identity superclass of size $1$ or a non-principal linear supercharacter must come from either a $\ast$ -product or a direct product. Although we are unable to prove results for general primes, we do compute all of the supercharacter theories when $p = 2,\, 3,\, 5$ , and based on these computations along with particular computations for larger primes, we make several conjectures for a general prime $p$ .
Keywords: supercharacter theories; elementary abelian $p$-groups; automorphisms
MSC classification
Primary: 20C15: Ordinary representations and characters
Secondary: 20D15: Nilpotent groups, $p$-groups
Proceedings of the Edinburgh Mathematical Society , First View , pp. 1 - 21
DOI: https://doi.org/10.1017/S0013091522000438
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright © The Author(s), 2022. Published by Cambridge University Press
A supercharacter theory of a finite group is a somewhat condensed form of its character theory where the conjugacy classes are replaced by certain unions of conjugacy classes and the irreducible characters are replaced by certain pairwise orthogonal characters that are constant on the superclasses. In essence, a supercharacter theory is an approximation of the representation theory that preserves much of the duality exhibited by conjugacy classes and irreducible characters. Supercharacter theory has proven useful in a variety of situations where the full character theory is unable to be described in a useful combinatorial way.
The problem of classifying all supercharacter theories of a given finite group appears to be a difficult problem. For example, in [Reference Burkett, Lamar, Lewis and Wynn6], the authors saw no way other than to use a computer program to show that $\mathrm {Sp}_6(\mathbb {F}_2)$ has exactly two supercharacter theories. The problem of classifying all supercharacter theories of a family of finite groups seems likely to be much more difficult. At this time, we know of only a few families of groups for which this has been done; only one of which consists of non-abelian groups.
In his Ph.D. thesis [Reference Hendrickson9], Hendrickson classified the supercharacter theories of cyclic $p$ -groups. It is then explained in the subsequent paper [Reference Hendrickson10] that the supercharacter theories of cyclic groups had already been classified by Leung and Man under the guise of Schur rings (see [Reference Leung and Man16, Reference Leung and Man17]). In [Reference Hendrickson10], it is shown that the set of supercharacter theories of a group are in bijection with the set of central S-rings of the group. In fact, the Schur rings of cyclic $p$ -groups were classified even earlier. (See [Reference Pöschel20] for the odd prime case and [Reference Ch. Klin, Najmark and Pöschel12, Reference Kovács13] for the cyclic $2$ -groups.) The supercharacter theories of the groups $C_2\times C_2\times C_p$ for $p$ a prime were classified in [Reference Evdokimov, Kovács and Ponomarenko8] (again via Schur rings).
The supercharacter theories of the dihedral groups have been classified several times. Wynn classified the supercharacter theories of dihedral groups in his Ph.D. thesis [Reference Wynn24]. They were also classified by Lamar in his Ph.D. thesis [Reference Lamar15] (see also the preprint [Reference Lamar14]), where the properties of the lattice of supercharacter theories are also studied. It turns out that the supercharacter theories of the dihedral groups of order twice a prime were classified previously by Wai-Chee [Reference Shiu21], where their Schur rings were studied (supercharacter theories correspond to the central Schur rings). Finally, we mention that Wynn and the second author classified the supercharacter theories of Frobenius groups of order $pq$ in [Reference Lewis and Wynn18] and also reduced the problem of classifying the supercharacter theories of Frobenius groups and semi-extraspecial groups to classifying the supercharacter theories of various quotients and subgroups.
Each of the above classifications involves certain supercharacter theory products, including $\ast$ and direct products, which will be described explicitly in § 2. Each of the above classifications also involves supercharacter theories coming from automorphisms — those constructed from the action of a group by automorphisms. The purpose of this paper is to study the supercharacter theories of the elementary abelian group $C_p\times C_p$ , where $p$ is a prime. The first thing to notice is that a classification of the supercharacter theories of $C_p\times C_p$ would not need to include the above supercharacter theory products.
Theorem A. Every supercharacter theory of $C_p\times C_p$ that can be realized as a $\ast$ -product or direct product comes from automorphisms.
Although we do not give a full classification, we will classify certain types of supercharacter theories. Specifically, we prove the following:
Theorem B. Any supercharacter theory $\mathsf {S}$ of the elementary abelian group $C_p\times C_p$ that has a non-identity superclass of size one or a non-principal linear supercharacter comes from a supercharacter theory ($\ast$ or direct) product. In particular, $\mathsf {S}$ comes from automorphisms.
In § 5, we show that any partition of the non-trivial, proper subgroups of $C_p\times C_p$ gives rise to a supercharacter theory and that these partition supercharacter theories play a special role in the full lattice of supercharacter theories. In § 6, we make a strong conjecture regarding the structure of certain types of supercharacter theories that has been supported through computational evidence. Although we do not provide a full classification, we believe this paper to be a good starting point for anyone who desires to do so. We mention that we are able to provide a full classification for the primes $p=2,\,3$ and $5$. Using the work in [Reference Ziv-Av25, 26], we obtain a similar classification for $p = 7$.
We would like to thank Professor Ilia Ponomarenko for mentioning references [Reference Ch. Klin, Najmark and Pöschel12, Reference Kovács13, Reference Pöschel20, Reference van Dam and Muzychuk22, Reference Ziv-Av25] to us.
2. Preliminaries
Diaconis and Isaacs [Reference Diaconis and Isaacs7] define a supercharacter theory to be a pair $(\mathcal {X},\,\mathcal {K})$ , where $\mathcal {X}$ is a partition of $\mathrm {Irr}(G)$ and $\mathcal {K}$ is a partition of $G$ satisfying the following three conditions:
• $\lvert{\mathcal {X}}\rvert=\lvert {\mathcal {K}}\rvert$ ;
• $\{1\}\in \mathcal {K}$ ;
• For every $X\in \mathcal {X}$ , there is a character $\xi _X$ whose constituents lie in $X$ that is constant on the parts of $\mathcal {K}$ .
For each $X\in \mathcal {X}$ , the character $\xi _X$ is a constant multiple of the character $\sigma _X=\sum \nolimits _{\psi \in X}\psi (1)\psi$ . We let $\mathrm {BCh}(\mathsf {S})=\{\sigma _X:X\in \mathcal {X}\}$ and call its elements the basic $\mathsf {S}$ -characters. If $\mathsf {S}=(\mathcal {X},\,\mathcal {K})$ is a supercharacter theory, we write $\mathcal {K}=\mathrm {Cl}(\mathsf {S})$ and call its elements $\mathsf {S}$ -classes. The principal character of $G$ is always a basic $\mathsf {S}$ -character. When there is no ambiguity, we may refer to $\mathsf {S}$ -classes and basic $\mathsf {S}$ -characters as superclasses and supercharacters. We will frequently make use of the fact that $\mathsf {S}$ -classes and basic $\mathsf {S}$ -characters uniquely determine each other [Reference Diaconis and Isaacs7, Theorem 2.2 (c)].
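For $G=C_p\times C_p$ these conditions can be tested by brute force, since $\mathrm{Irr}(G)$ consists of the linear characters $\chi_{a,b}(x^{i}y^{j})=\omega^{ai+bj}$ with $\omega$ a primitive $p$-th root of unity. The following is a minimal illustrative sketch in Python (the names and the choice of example are ours, not taken from the text); it checks, for $p=5$, that the orbit partitions of the power map $g\mapsto g^{2}$ on $G$ and on $\mathrm{Irr}(G)$ have the same number of parts and that each $\sigma_X$ is constant on each part of the class partition.

import cmath

p = 5
w = cmath.exp(2j * cmath.pi / p)

def sigma(X, g):
    # sigma_X(g) = sum of chi_{a,b}(g) over (a, b) in X; all degrees are 1 here
    i, j = g
    return sum(w ** ((a * i + b * j) % p) for a, b in X)

def orbit(t):
    # orbit of t = (a, b) under multiplication by 2 mod p (2 has order 4 mod 5)
    return frozenset(((2 ** k * t[0]) % p, (2 ** k * t[1]) % p) for k in range(4))

char_parts = {orbit((a, b)) for a in range(p) for b in range(p)}
class_parts = {orbit((i, j)) for i in range(p) for j in range(p)}
constant = all(
    len({complex(round(sigma(X, g).real, 6), round(sigma(X, g).imag, 6)) for g in C}) == 1
    for X in char_parts for C in class_parts
)
print(len(char_parts), len(class_parts), constant)   # 7 7 True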
The set $\mathrm {SCT}(G)$ of all supercharacter theories comes equipped with a partial order. Hendrickson [Reference Hendrickson10] shows that for any two supercharacter theories $\mathsf {S}$ and $\mathsf {T}$ , every $\mathsf {T}$ -class is a union of $\mathsf {S}$ -classes if and only if every basic $\mathsf {T}$ -character is a sum of basic $\mathsf {S}$ -characters. In this event, we write $\mathsf {S}\preccurlyeq \mathsf {T}$ . We say that $\mathsf {S}$ is finer than $\mathsf {T}$ or that $\mathsf {T}$ is coarser than $\mathsf {S}$ . Since $\mathrm {SCT}(G)$ has a partial order and a maximal (and minimal) element, it is actually a lattice. The join operation $\vee$ on $\mathrm {SCT}(G)$ is very well behaved and is inherited from the join operation on the set of partitions of $G$ under the refinement order, which we will also denote by $\vee$ . If $\mathsf {S}$ and $\mathsf {T}$ are supercharacter theories of $G$ , then the superclasses of $\mathsf {S}\vee \mathsf {T}$ is just the mutual coarsening of the partitions $\mathrm {Cl}(\mathsf {S})$ and $\mathrm {Cl}(\mathsf {T})$ ; i.e., $\mathrm {Cl}(\mathsf {S})\vee \mathrm {Cl}(\mathsf {T})$ . However, the meet operation $\wedge$ on $\mathrm {SCT}(G)$ is poorly behaved and difficult to compute. In particular, the equation $\mathrm {Cl}(\mathsf {S}\wedge \mathsf {T})=\mathrm {Cl}(\mathsf {S})\wedge \mathrm {Cl}(\mathsf {T})$ holds only sporadically. One example where this equality does hold will be discussed later in this section (see Lemma 2.2).
Every finite group has two trivial supercharacter theories. The first, which we denote by $\mathsf {m}(G)$ , is the supercharacter theory with superclasses the usual conjugacy classes of $G$ . The supercharacters of $\mathsf {m}(G)$ are exactly the irreducible characters of $G$ (multiplied by their degrees). This is the finest supercharacter theory of $G$ under the partial order discussed in the previous paragraph (i.e., $\mathsf {m}(G)\preccurlyeq \mathsf {S}$ for every supercharacter theory $\mathsf {S}$ of $G$ ). There is also a coarsest supercharacter theory of $G$ for the partial ordering of the previous paragraph, denoted by $\mathsf {M}(G)$ (i.e., $\mathsf {S}\preccurlyeq \mathsf {M}(G)$ for every supercharacter theory $\mathsf {S}$ of $G$ ). The $\mathsf {M}(G)$ -classes are just $\{1\}$ and $G\setminus \{1\}$ and the basic $\mathsf {M}(G)$ -characters are $\mathbb {1}$ and $\rho _G-\mathbb {1}$ , where $\mathbb {1}$ is the principal character and $\rho _G$ is the regular character of $G$ .
Supercharacter theories can arise in many different (often mysterious) ways. One of the more well-known ways comes from actions by automorphisms. If $A\le \mathrm {Aut}(G)$ , then $A$ acts on $\mathrm {Irr}(G)$ via $\chi ^{a}(g)=\chi (g^{a^{-1}})$ for $a\in A$ , $\chi \in \mathrm {Irr}(G)$ and $g\in G$ . Then Brauer's Permutation Lemma (see [Reference Isaacs11, Theorem 6.32], for example) can be used to show that the orbits of $G$ and $\mathrm {Irr}(G)$ under the action of $A$ yield a supercharacter theory. In this case, we say that $\mathsf {S}$ comes from $A$ or comes from automorphisms. An important aspect of the Leung–Man classification [Reference Leung and Man16, Reference Leung and Man17] (or Hendrickson's [Reference Hendrickson9]) is that every supercharacter theory of a cyclic group of prime order comes from automorphisms. This fact will be used extensively later without reference.
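The following minimal sketch (purely illustrative; the function name is ours) makes this construction concrete for $G=C_p\times C_p$: elements $x^{i}y^{j}$ are encoded as pairs $(i,\,j)$, an automorphism is an invertible $2\times 2$ matrix over $\mathbb{F}_p$, and the superclasses are the orbits. Closing each orbit under the generating matrices suffices, since every generator has finite order.

from itertools import product

def superclasses_from_automorphisms(p, generators):
    """generators: list of 2x2 integer matrices mod p; returns the orbit partition of C_p x C_p."""
    def act(M, g):
        i, j = g
        return ((M[0][0] * i + M[0][1] * j) % p, (M[1][0] * i + M[1][1] * j) % p)
    seen, orbits = set(), []
    for g in product(range(p), repeat=2):
        if g in seen:
            continue
        orb, frontier = {g}, [g]
        while frontier:                       # close the orbit under all generators
            h = frontier.pop()
            for M in generators:
                img = act(M, h)
                if img not in orb:
                    orb.add(img)
                    frontier.append(img)
        orbits.append(sorted(orb))
        seen |= orb
    return orbits

# Example: p = 5 and A generated by the power map g -> g^2 (the scalar matrix 2I).
print(superclasses_from_automorphisms(5, [[[2, 0], [0, 2]]]))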
Just as every normal subgroup is determined by the conjugacy classes of $G$ and by the irreducible characters, there is a distinguished set of normal subgroups determined by a supercharacter theory $\mathsf {S}$ . Any subgroup $N$ that is a union of $\mathsf {S}$ -classes is called $\mathsf {S}$ -normal. In this situation, we write $N\lhd _{\mathsf {S}}G$ . It is not difficult to show that $N$ is the intersection of the kernels of those $\chi \in \mathrm {BCh}(\mathsf {S})$ that satisfy $N\le \ker (\chi )$ . In fact, this is another way to classify $\mathsf {S}$ -normal subgroups [Reference Marberg19].
Whenever $N$ is $\mathsf {S}$ -normal, Hendrickson [Reference Hendrickson10] showed that $\mathsf {S}$ gives rise to a supercharacter theory $\mathsf {S}_N$ of $N$ and $\mathsf {S}^{G/N}$ of $G/N$ . The $\mathsf {S}_N$ -classes are just the $\mathsf {S}$ -classes contained in $N$ and the basic $\mathsf {S}_N$ -characters are the restrictions of the basic $\mathsf {S}$ -characters, up to a constant. Moreover, it is a result of the first author (see [Reference Burkett1, Theorem 1.1.2] or [Reference Burkett3, Theorem A]) that if $\chi \in \mathrm {BCh}(\mathsf {S})$ and $\psi$ is the basic-$\mathsf{S}_{\mathsf{N}}$ -character lying under $\chi$ , then $\chi (1)/\psi (1)$ is an integer. The $\mathsf {S}^{G/N}$ -classes are the images of the $\mathsf {S}$ -classes under the canonical projection $G\to G/N$ and the basic $\mathsf {S}^{G/N}$ -characters can be identified with the basic $\mathsf {S}$ -characters with $N$ contained in their kernel. These constructions are compatible in the sense that $(\mathsf {S}_N)^{N/M}=(\mathsf {S}^{G/M})_{N/M}$ . As such, we simply write $\mathsf {S}_{N/M}$ in this situation. In [Reference Burkett2], the first author shows that these constructions respect the lattice structure of the set of $\mathsf {S}$ -normal subgroups. In particular, if $H$ and $N$ are any $\mathsf {S}$ -normal subgroups, then the images of the superclasses of $\mathsf {S}_{H/(H\cap N)}$ under the canonical isomorphism $H/(H\cap N)\to HN/N$ are exactly the $\mathsf {S}_{HN/N}$ -classes [Reference Burkett2, Theorem A].
Hendrickson also used these constructions to define supercharacter theories of the full group. Given any supercharacter theory $\mathsf {U}$ of a normal subgroup $N$ of $G$ whose superclasses are fixed (set-wise) under the conjugation action of $G$ and a supercharacter theory $\mathsf {V}$ of $G/N$ , Hendrickson defines the $\ast$ -product $\mathsf {U}\ast \mathsf {V}$ as follows. The supercharacters of $\mathsf {U}\ast \mathsf {V}$ that have $N$ in their kernel can be naturally identified with the supercharacters in $\mathrm {BCh}(\mathsf {V})$ and those that do not have $N$ in their kernel are just induced from non-principal members of $\mathrm {BCh}(\mathsf {U})$ . The superclasses of $\mathsf {U}\ast \mathsf {V}$ contained in $N$ are the superclasses of $\mathsf {U}$ , and the superclasses of $\mathsf {U}\ast \mathsf {V}$ lying outside of $N$ are the full preimages of the non-identity superclasses of $\mathsf {V}$ under the canonical projection $G\to G/N$ . If $\mathsf {S}$ is a supercharacter theory of $G$ and $N\lhd _{\mathsf {S}}G$ , then $\mathsf {S}\preccurlyeq \mathsf {S}_N\ast \mathsf {S}_{G/N}$ , with equality if and only if every $\mathsf {S}$ -class lying outside of $N$ is a union of $N$ -cosets.
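As an illustration (ours, with hypothetical function names), the superclasses of a $\ast$-product are easy to assemble by hand for $G=C_p\times C_p$ with $N=\langle{x}\rangle$: encode $x^{i}y^{j}$ as $(i,\,j)$, keep the $\mathsf{U}$-classes inside $N$, and take full preimages of the non-identity $\mathsf{V}$-classes under the projection $(i,\,j)\mapsto j$. A minimal sketch:

def star_product_classes(p, classes_U, classes_V):
    """classes_U: partition of the exponents {0,...,p-1} of x (classes of U on N), with {0} a part;
       classes_V: partition of the exponents {0,...,p-1} of y (classes of V on G/N), with {0} a part."""
    classes = [sorted((i, 0) for i in part) for part in classes_U]
    for part in classes_V:
        if set(part) == {0}:
            continue        # the identity coset is already covered by the classes inside N
        classes.append(sorted((i, j) for j in part for i in range(p)))
    return classes

# Example for p = 5: U has classes {1}, {x, x^4}, {x^2, x^3}, and V = M(G/N).
print(star_product_classes(5, [{0}, {1, 4}, {2, 3}], [{0}, {1, 2, 3, 4}]))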
Another characterization that appears in [Reference Burkett and Lewis5] is the following. Let $N$ be $\mathsf {S}$ -normal. Then $\mathsf {S}$ is a $\ast$ -product over $N$ if and only if every $\chi \in \mathrm {BCh}(\mathsf {S})$ satisfying $N\nleq \ker (\chi )$ vanishes on $G\setminus N$ . One direction of this result follows easily from the next lemma about basic $\mathsf {S}$ -characters vanishing off $\mathsf {S}$ -normal subgroups.
Lemma 2.1 Let $\mathsf {S}$ be a supercharacter theory of $G$ and let $\chi \in \mathrm {BCh}(\mathsf {S})$ . Assume that $\chi$ vanishes on $G\setminus N$ , where $N$ is $\mathsf {S}$ -normal. Then $\chi =\psi ^{G}$ for some basic $\mathsf {S}_N$ - character $\psi$ .
Proof. Since $\mathsf {S}\preccurlyeq \mathsf {S}_N\ast \mathsf {S}_{G/N}$ , $\psi ^{G}$ is a sum of distinct basic $\mathsf {S}$ -characters. Let $\xi$ be one such basic $\mathsf {S}$ -character, and note that $\chi _N=\frac {\chi (1)}{\psi (1)}\psi$ . Then
\begin{align*} \langle{\psi^{G},\xi}\rangle & =\displaystyle\frac{1}{\lvert{G}\rvert}\displaystyle\sum_{g\in G}\psi^{G}(g)\overline{\xi(g)}=\frac{1}{\lvert{G}\rvert}\sum_{g\in N}\psi^{G}(g)\overline{\xi(g)}\\ & =\displaystyle\frac{1}{\lvert{N}\rvert}\displaystyle\sum_{g\in N}\psi(g)\overline{\xi(g)}=\frac{\psi(1)}{\lvert{N}\rvert\chi(1)}\sum_{g\in N}\chi_N(g)\overline{\xi(g)}\\ & =\frac{\psi(1)}{\lvert{N}\rvert\chi(1)}\sum_{g\in G}\chi(g)\overline{\xi(g)}=\frac{\lvert{G:N}\rvert}{\chi(1)/\psi(1)}\langle{\chi,\xi}\rangle=\lvert{G:N}\rvert\psi(1)\delta_{\chi,\xi}. \end{align*}
The result easily follows.
Also defined in [Reference Hendrickson10] is the direct product of supercharacter theories. Given a supercharacter theory $\mathsf {E}$ of a group $H$ and a supercharacter theory $\mathsf {F}$ of a group $K$ , the supercharacter theory $\mathsf {E}\times \mathsf {F}$ of $H\times K$ is defined by $\mathrm {Cl}(\mathsf {E}\times \mathsf {F})=\{K\times L:K\in \mathrm {Cl}(\mathsf {E}),\,\ L\in \mathrm {Cl}(\mathsf {F})\}$ and $\mathrm {BCh}(\mathsf {E}\times \mathsf {F})=\{\chi \times \xi :\chi \in \mathrm {BCh}(\mathsf {E}),\,\ \xi \in \mathrm {BCh}(\mathsf {F})\}$ . The direct product supercharacter theory is intimately related to the $\ast$ -product, as this next result illustrates.
Lemma 2.2 Let $G=H\times N$, let $\mathsf{S}\in \mathrm{SCT}(H),$ and let $\mathsf{T}\in \mathrm{SCT}(N)$. Let $\varphi_1:H\to G/N$ and $\varphi_2:N\to G/H$ be the projections. Let $\tilde{\mathsf{S}}=\varphi_1(\mathsf{S})\in \mathrm{SCT}(G/N),$ and let $\tilde{\mathsf{T}}=\varphi_2(\mathsf{T})\in \mathrm{SCT}(G/H)$. Write $\mathsf{U}=\mathsf{S}\ast \tilde{\mathsf{T}}$ and $\mathsf{V}=\mathsf{T}\ast \tilde{\mathsf{S}}$. Then $\mathrm{Cl}(\mathsf{S}\times \mathsf{T})=\mathrm{Cl}(\mathsf{U})\wedge \mathrm{Cl}(\mathsf{V})$. In particular, $\mathsf{S}\times \mathsf{T}$ is equal to $\mathsf{U}\wedge \mathsf{V}$.
Proof. We have
\[ \mathrm{Cl}(\mathsf{U})=\bigcup_{K\in\mathrm{Cl}(\mathsf{S})}\{K\times\{1\},K\times(N\setminus\{1\})\} \]
\[ \mathrm{Cl}(\mathsf{V})=\bigcup_{L\in\mathrm{Cl}(\mathsf{T})}\{\{1\}\times L,(H\setminus\{1\})\times L\}. \]
The mutual refinement of these partitions is exactly
\[ \mathcal{K}=\{K\times L:K\in\mathrm{Cl}(\mathsf{S}),\ L\in\mathrm{Cl}(\mathsf{T})\}. \]
Since $\mathcal{K}$ is the set of superclasses of a supercharacter theory of $G$, and $\mathcal{K}$ is the coarsest partition of $G$ finer than both $\mathrm{Cl}(\mathsf{U})$ and $\mathrm{Cl}(\mathsf{V})$, it follows that $\mathcal{K}=\mathrm{Cl}(\mathsf{U}\wedge \mathsf{V})$, which means $\mathsf{U}\wedge \mathsf{V}=\mathsf{S}\times \mathsf{T}$.
Recall that $\mathsf {S}\preccurlyeq \mathsf {S}_N\ast \mathsf {S}_{G/N}$ whenever $N$ is an $\mathsf {S}$ -normal subgroup of $G$ . Thus, as an immediate corollary of Lemma 2.2, we deduce the following.
Corollary 2.3 Let $G=H\times N$ and suppose $\mathsf {S}$ is a supercharacter theory of $G$ in which both $H$ and $N$ are $\mathsf {S}$ -normal. Then $\mathsf {S}\preccurlyeq \mathsf {S}_H\times \mathsf {S}_N,$ with equality if and only if $\lvert {\mathsf {S}}\rvert=\lvert {\mathsf {S}_H}\rvert \cdot \lvert {\mathsf {S}_N}\rvert$ .
Proof. Using the notation in the statement of the previous result, we have $\widetilde {\mathsf {S}_H}=\mathsf {S}_{G/N}$ and $\widetilde {\mathsf {S}_N}=\mathsf {S}_{G/H}$ [Reference Burkett2, Theorem A]. Since
\[ \mathsf{S}\preccurlyeq\mathsf{S}_H\ast\mathsf{S}_{G/H}=\mathsf{S}_H\ast\widetilde{\mathsf{S}_N} \]
\[ \mathsf{S}\preccurlyeq\mathsf{S}_N\ast\mathsf{S}_{G/N}=\mathsf{S}_N\ast\widetilde{\mathsf{S}_H}, \]
\[ \mathsf{S}\preccurlyeq\bigl(\mathsf{S}_H\ast\widetilde{\mathsf{S}_N}\bigr)\wedge\bigl(\mathsf{S}_N\ast\widetilde{\mathsf{S}_H}\bigr)=\mathsf{S}_H\times\mathsf{S}_N. \]
Since $\mathsf {S}\preccurlyeq \mathsf {S}_H\times \mathsf {S}_N$ and $\lvert {\mathsf {S}_H\times \mathsf {S}_N}\rvert=\lvert {\mathsf {S}_H}\rvert\cdot \lvert {\mathsf {S}_N}\rvert$ , the result follows.
We mention one more construction we will need, also due to Hendrickson. If $G$ is an abelian group, then $\mathrm {Irr}(G)$ forms a group under the pointwise product. There is a natural isomorphism $G\to \mathrm {Irr}(\mathrm {Irr}(G))$ sending $g\in G$ to $\tilde {g}\in \mathrm {Irr}(\mathrm {Irr}(G))$ defined by $\tilde {g}(\chi )=\chi (g)$ . If $\mathsf {S}$ is a supercharacter theory of $G$ , then ${\check {\mathsf {S}}}$ is a supercharacter theory of $\mathrm {Irr}(G)$ , where $\mathrm {Cl}({\check {\mathsf {S}}})=\mathrm {BCh}(\mathsf {S})$ and $\mathrm {BCh}({\check {\mathsf {S}}})=\{\{\tilde {g}:g\in K\}:K\in \mathrm {Cl}(\mathsf {S})\}$ [Reference Hendrickson9, Theorem 5.3]. This duality construction will be used to simplify some arguments in the proof of Theorem B.
3. Central elements and commutators
Let $\mathsf {S}$ be a supercharacter theory of $G$ . In [Reference Burkett3], the first author discusses two important subgroups of $G$ associated with $\mathsf {S}$ . The first of these subgroups is an analog of the centre of a group and consists of the superclasses of size one. We denote this subgroup by $Z(\mathsf {S})$ . The fact that $Z(\mathsf {S})$ is a ($\mathsf {S}$ -normal) subgroup follows easily from [Reference Diaconis and Isaacs7, Corollary 2.3] and a proof appears in [Reference Hendrickson9]. Another consequence of [Reference Diaconis and Isaacs7, Corollary 2.3] appearing in [Reference Hendrickson9] is that $\mathrm {cl}_{\mathsf {S}}(g)z=\mathrm {cl}_{\mathsf {S}}(gz)$ for any $z\in Z(\mathsf {S})$ . Using this fact, as well as a consequence of [Reference Burkett3, Theorem A], we prove the following lemma that will be used in the proof of Theorem B.
Lemma 3.1 Let $\mathsf{S}$ be a supercharacter theory of $G$ and write $Z=Z(\mathsf{S})$. If $gz\notin \mathrm{cl}_{\mathsf{S}}(g)$ for every non-identity $z\in Z,$ then $\lvert{\mathrm{cl}_{\mathsf{S}_{G/Z}}(gZ)}\rvert=\lvert{\mathrm{cl}_{\mathsf{S}}(g)}\rvert$.
Proof. Let $h_1,\,h_2\in \mathrm{cl}_{\mathsf{S}}(g)$ with $h_1Z=h_2Z$, so that $h_2=h_1z$ for some $z\in Z$. Since $\mathrm{cl}_{\mathsf{S}}(g)z=\mathrm{cl}_{\mathsf{S}}(gz)$, we have $h_2\in \mathrm{cl}_{\mathsf{S}}(g)\cap \mathrm{cl}_{\mathsf{S}}(gz)$, which forces $\mathrm{cl}_{\mathsf{S}}(gz)=\mathrm{cl}_{\mathsf{S}}(g)$, so $gz\in \mathrm{cl}_{\mathsf{S}}(g)$ and hence $z=1$ by hypothesis. Thus the map $\mathrm{cl}_{\mathsf{S}}(g)\to \mathrm{cl}_{\mathsf{S}_{G/Z}}(gZ)$, $h\mapsto hZ$, is injective. Since $\lvert{\mathrm{cl}_{\mathsf{S}_{G/Z}}(gZ)}\rvert$ divides $\lvert{\mathrm{cl}_{\mathsf{S}}(g)}\rvert$, the result follows.
It turns out that many analogs of classical results about the centre of the group exist for $Z(\mathsf {S})$ (see [Reference Burkett3] for more details). Among these is the next result, which is a generalization of a well-known fact about ordinary complex characters (e.g., see [Reference Isaacs11, Corollary 2.30]).
Lemma 3.2 Let $\chi$ be a basic $\mathsf {S}$ -character of $G$ and write $Z=Z(\mathsf {S})$ . Then $\chi (1)\le \lvert {G:Z(\mathsf {S})}\rvert,$ with equality if and only if $\chi$ vanishes on $G\setminus Z(\mathsf {S})$ .
Proof. Since $\mathsf {S}_{Z(\mathsf {S})}$ is the finest supercharacter theory, the restriction $\chi _{Z(\mathsf {S})}$ is a multiple of some linear character $\lambda$ . So $\langle{\chi _{Z(\mathsf {S})},\,\chi _{Z(\mathsf {S})}}\rangle=\chi (1)^{2}$ . On the other hand, $\langle{\chi _{Z(\mathsf {S})},\,\chi _{Z(\mathsf {S})}}\rangle \le \lvert {G:Z(\mathsf {S})}\rvert\langle {\chi,\,\chi }\rangle=\lvert {G:Z(\mathsf {S})}\rvert\chi (1)$ , with equality if and only if $\chi$ vanishes on $G\setminus Z(\mathsf {S})$ . The result follows.
We now discuss an analog of the commutator subgroup of $G$ . Note that one may write $[G,\,G]=\langle{g^{-1}k:k\in \mathrm {cl}_G(g)}\rangle$ . Using this description, it is natural to consider the subgroup $\langle{g^{-1}k:k\in \mathrm {cl}_{\mathsf {S}}(g)}\rangle$ , which we denote by $[G,\,\mathsf {S}]$ . It turns out this subgroup is always $\mathsf {S}$ -normal [Reference Burkett3, Proposition 3.7]. Moreover, $[G,\,\mathsf {S}]$ provides information of the structure of the basic $\mathsf {S}$ -characters. Most notably, a basic $\mathsf {S}$ -character $\chi$ is linear if and only if $[G,\,\mathsf {S}]\le \ker (\chi )$ [Reference Burkett3, Proposition 3.11].
As stated above, if $\mathsf {S}$ is a supercharacter theory of $G$ , $N$ is $\mathsf {S}$ -normal, $\chi$ is a basic $\mathsf {S}$ -character and $\psi$ is a basic $\mathsf {S}_N$ -character satisfying $\langle{\chi _N,\,\psi }\rangle >0$ , then $\psi (1)$ divides $\chi (1)$ . The next result, which is [Reference Burkett3, Proposition 3.13], shows this can be strengthened in certain situations, a fact that will be useful later.
Lemma 3.3 Let $N$ be an $\mathsf {S}$ -normal subgroup satisfying $[G,\,\mathsf {S}]\le N$ . Let $\chi$ be a basic $\mathsf {S}$ -character that does not contain $N$ in its kernel, and suppose that $\psi \in \mathrm {BCh}(\mathsf {S}_N)$ satisfies $\langle{\chi _N,\,\psi }\rangle>0$ . Then $\chi (1)/\psi (1)$ divides $\lvert {G:N}\rvert$ .
Proof. Since $[G,\,\mathsf {S}]\le N$ , $\Lambda =\mathrm {Ch}(\mathsf {S}/N)$ acts on $\mathrm {BCh}(\mathsf {S})$ in the obvious way. Consider the set $C=\{\psi \lambda :\ \psi \in X,\,\ \lambda \in \Lambda \}$ , where $X=\mathrm {Irr}(\chi )$ . On the one hand, $C$ is exactly the set of constituents of $\psi ^{G}$ . By [Reference Hendrickson9, Lemma 3.4], we conclude that
\[ \sigma_C(1)=\psi^{G}(1)=\lvert{G:N}\rvert\psi(1). \]
On the other hand, we have $C=\bigcup _{\lambda \in \Lambda }X^{\lambda }$ . Since $\mathrm {Irr}(\chi ^{\lambda })\cap \mathrm {Irr}(\chi )=\varnothing$ whenever $\chi ^{\lambda }\neq \chi$ and $\chi ^{\lambda }(1)=\chi (1)$ for each $\lambda \in \Lambda$ , we have
\[ \sigma_C(1)=\lvert{\mathrm{orb}_{\Lambda}(\chi)}\rvert\chi(1). \]
\[ \chi(1)=\frac{\lvert{G:N}\rvert\psi(1)}{\lvert{\mathrm{orb}_{\Lambda}(\chi)}\rvert}=\lvert{\mathrm{Stab}_{\Lambda}(\chi)}\rvert\psi(1). \]
The result follows as $\lvert {\mathrm {Stab}_{\Lambda }(\chi )}\rvert$ divides $\lvert{G:N}\rvert$ .
Remark 3.4 If $\chi \in \mathrm {Irr}(G)$ and $\psi \in \mathrm {Irr}(N)$ lie under $\chi$ , then $\chi (1)/\psi (1)$ is known to divide $\lvert {G:N}\rvert$ (for a proof, see [Reference Isaacs11, Corollary 11.29]). In the case $\mathsf {S}=\mathsf {m}(G)$ , Lemma 3.3 is saying something a little stronger. In this case, a basic $\mathsf {S}$ -character has the form $\chi (1)\chi$ for some $\chi \in \mathrm {Irr}(G)$ . If $\psi \in \mathrm {Irr}(N)$ lies under $\chi$ , then a basic $\mathsf {S}_N$ character has degree $\lvert {G:I_G(\psi )}\rvert$ , where $I_G(\psi )$ is the inertia group of $\psi$ in $G$ . Therefore, in this case, Lemma 3.3 is saying that if $G/N$ is abelian, $(\chi (1)/\psi (1))^{2}$ divides $\lvert {G:N}\rvert \lvert{G:I_G(\psi )}\rvert$ .
4. Proofs
In this section, we prove the main results of the paper. For the remainder of the paper, $p$ is an odd prime and $G$ is the abelian group of order $p^{2}$ and exponent $p$ .
Our first result shows that every $\ast$ -product and direct product supercharacter theory of $G$ comes from automorphisms. Note that this includes Theorem A.
Lemma 4.1 Let $N\ne M$ be a non-trivial, proper subgroups of $G$ . Let $\varphi :M\to G/N$ be the canonical isomorphism $x\mapsto xN$ . Let $\mathsf {U}$ be a supercharacter theory of $N,$ and let $\mathsf {V}$ be a supercharacter theory of $M$ . The following hold:
(1) There exists $A\le \mathrm {Aut}(G)$ such that $\mathsf {U}\ast \varphi (\mathsf {V})$ comes from $A;$
(2) There exists $B\le \mathrm {Aut}(G)$ such that $\mathsf {U}\times \mathsf {V}$ comes from $B$ .
Proof. Write $N=\langle{x}\rangle$ and $M=\langle{y}\rangle$. Since $N$ and $M$ are cyclic of prime order, there exist integers $m_1$, $m_2$ such that $\mathsf{U}$ comes from the automorphism $\sigma :N\to N$ defined by $x^{\sigma }=x^{m_1}$ and $\mathsf{V}$ comes from the automorphism $\tau :M\to M$ defined by $y^{\tau }=y^{m_2}$.
First, we prove (1). Let $\mathsf{S}=\mathsf{U}\ast \varphi(\mathsf{V})$. Then the $\mathsf{S}$-classes contained in $N$ are the orbits of $\langle{\sigma}\rangle$ on $N$. The $\mathsf{S}$-classes lying outside of $N$ are the full preimages of the orbits of $\langle{\tau}\rangle$ on $M$ under the projection $G\to G/N$. Extend $\sigma$ to an automorphism $\tilde{\sigma}$ of $G$ by setting $y^{\tilde{\sigma}}=y$. For each $1\le k\le p-1$, define the automorphism $\tau_k$ of $G$ by $(x^{i}y^{j})^{\tau_k}=x^{i+jk}y^{jm_2}$. Let $A=\langle{\tilde{\sigma},\,\tau_1,\,\tau_2,\,\dotsc,\,\tau_{p-1}}\rangle$. Then $N$ is $A$-invariant and the orbits of $A$ on $N$ are exactly the orbits of $\langle{\sigma}\rangle$ on $N$. Thus, $\mathrm{orb}_A(g)=\mathrm{cl}_{\mathsf{S}}(g)$ if $g\in N$. Observe that $\{1,\,y,\,\dotsc,\,y^{p-1}\}$ is a transversal for $N$ in $G$. For each $1\le j,\,k\le p-1$, observe that $(y^{j})^{\tau_k}=y^{jm_2}x^{jk}$. As $k$ ranges over the set $\{1,\,2,\,\dotsc,\,p-1\}$, so does $jk$ modulo $p$. It follows that $\mathrm{orb}_A(y^{j})=\{hn:h\in \mathrm{orb}_{\langle{\tau}\rangle}(y^{j}),\, n\in N\}$. Each $g\in G$ may be written uniquely as $g=g_Ng_M$, where $g_N\in N$ and $g_M\in M$. From the arguments above, we see that $\mathrm{orb}_A(g)=\{hn:h\in \mathrm{orb}_{\langle{\tau}\rangle}(g_M),\, n\in N\}$, which is exactly $\mathrm{cl}_{\mathsf{S}}(g)$. This completes the proof of (1).
Now we show (2). Let $\mathsf {D}=\mathsf {U}\times \mathsf {V}$ . Extend $\sigma$ to an automorphism $\tilde {\sigma }$ of $G$ by setting $y^{\tilde {\sigma }}=y$ , and extend $\tau$ to an automorphism $\tilde {\tau }$ of $G$ by setting $x^{\tilde {\tau }}=x$ . Let $B=\langle {\tilde {\sigma },\,\tilde {\tau }}\rangle$ . Then $N$ and $M$ are both $B$ -invariant. If $g\in N\cup M$ , then it is easy to see that $\mathrm {orb}_B(g)=\mathrm {cl}_{\mathsf {D}}(g)$ . If $g\not \in N\cup M$ , then
\[ \mathrm{orb}_B(g)=\bigcup_{i=1}^{d_2}\bigl\{g_Ng_M^{m_2^{i}},g_N^{m_1}g_M^{m_2^{i}},\dotsc,g_N^{m_1^{d_1-1}}g_M^{m_2^{i}}\bigr\} \]
where $d_i$ is the order of $m_i$ modulo $p$. Thus, $\mathrm{orb}_B(g)=\mathrm{orb}_{\langle{\sigma}\rangle}(g_N)\times \mathrm{orb}_{\langle{\tau}\rangle}(g_M)=\mathrm{cl}_{\mathsf{D}}(g)$. This completes the proof of (2).
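The construction in the proof is easy to confirm numerically. The following small check (ours, not part of the proof) verifies part (2) for $p=5$, $m_1=2$ and $m_2=4$: the orbits of $B=\langle{\tilde{\sigma},\,\tilde{\tau}}\rangle$ on $G$ coincide with the products of the $\langle{\sigma}\rangle$-orbits on $N$ and the $\langle{\tau}\rangle$-orbits on $M$.

p, m1, m2 = 5, 2, 4

def closure(g, maps):
    # orbit of g under the group generated by the given maps (each of finite order)
    orb, frontier = {g}, [g]
    while frontier:
        h = frontier.pop()
        for f in maps:
            img = f(h)
            if img not in orb:
                orb.add(img)
                frontier.append(img)
    return frozenset(orb)

sigma_t = lambda g: ((m1 * g[0]) % p, g[1])      # extends sigma, fixes y
tau_t = lambda g: (g[0], (m2 * g[1]) % p)        # extends tau, fixes x

def pow_orbits(m):
    # orbits of multiplication by m on Z_p
    return {frozenset((m ** k * a) % p for k in range(p)) for a in range(p)}

B_orbits = {closure((i, j), [sigma_t, tau_t]) for i in range(p) for j in range(p)}
D_classes = {frozenset((a, b) for a in A for b in Bset)
             for A in pow_orbits(m1) for Bset in pow_orbits(m2)}
print(B_orbits == D_classes)   # True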
We may now give a more precise statement and proof of Theorem B. We first remark that having a non-identity superclass of size $1$ is equivalent to the condition $Z(\mathsf {S}) > 1$ and having a non-principal linear supercharacter is equivalent to the condition $[G,\,\mathsf {S}] < G$ .
Theorem 4.2 Let $\mathsf {S}$ be a supercharacter theory of $G$ . Assume that $[G,\,\mathsf {S}]< G$ or $Z(\mathsf {S})>1$ . One of the following holds:
(1) $\mathsf {S}$ is a $\ast$ -product over $[G,\,\mathsf {S}]$ .
(2) $\mathsf {S}$ is a $\ast$ -product over $Z(\mathsf {S})$ .
(3) $\mathsf {S}$ is a direct product over $[G,\,\mathsf {S}]$ and $Z(\mathsf {S})$ .
In particular, $\mathsf {S}$ comes from automorphisms.
Proof. First observe that $[G,\,\mathsf {S}]=1$ if and only if $Z(\mathsf {S})=G$ , and the result holds trivially in this case. Thus, it suffices to assume that $[G,\,\mathsf {S}]$ or $Z(\mathsf {S})$ is non-trivial and proper. We can make another reduction by considering $G^{\ast }=\mathrm {Irr}(G)$ – the dual of $G$ – and the dual supercharacter theory ${\check {S}}$ of $\mathsf {S}$ . If $\mathsf {S}=(\mathcal {X},\,\mathcal {K})$ , then $Z(\mathsf {S})$ and $\mathrm {BCh}(G/[G,\,\mathsf {S}])$ are comprised of the parts of $\mathcal {K}$ and $\mathcal {X}$ of size 1, respectively. From this description, it is clear that $Z(\mathsf {S})^{\ast }=G^{\ast }/[G^{\ast },\,{\check {S}}]$ and $Z({\check {\mathsf {S}}})=(G/[G,\,\mathsf {S}])^{\ast }$ . Thus, it suffices to prove that the result holds in the case that $1<[G,\,\mathsf {S}]< G$ , which we now assume.
We first assume additionally that $[G,\,\mathsf {S}]$ is the unique $\mathsf {S}$ -normal subgroup of index $p$ . Then either ${Z}(\mathsf {S})=1$ or ${Z}(\mathsf {S})=[G,\,\mathsf {S}]$ . Assume that ${Z}(\mathsf {S})=[G,\,\mathsf {S}]$ . Let $\chi$ be an $\mathsf {S}$ -character without $[G,\,\mathsf {S}]$ in its kernel. Then $\chi (1)=\lvert {G:{Z}(\mathsf {S})}\rvert=p$ . By Lemma 3.2, we deduce that $\chi$ vanishes on $G\setminus {Z}(\mathsf {S})$ . Hence we see from [Reference Burkett and Lewis5, Theorem 2] that $\mathsf {S}$ is a $\ast$ -product over $[G,\,\mathsf {S}]$ . So we now assume that ${Z}(\mathsf {S})=1$ . Let $\chi \in \mathrm {BCh}(\mathsf {S}\mid [G,\,\mathsf {S}])$ , and let $\psi \in \mathrm {BCh}([G,\,\mathsf {S}])$ lie under $\chi$ . If every element of $\mathrm {Irr}(G/[G,\,\mathsf {S}])$ fixes $\chi$ , then $\chi$ vanishes on $G\setminus [G,\,\mathsf {S}]$ and so $\chi =\psi ^{G}$ by Lemma 2.1. Let $1\le j\le p-1$ . Then $\chi ^{j}\in \mathrm {BCh}(\mathsf {S})$ and $\chi ^{j}=(\psi ^{j})^{G}$ , where here $\alpha ^{j}(g)=\alpha (g^{j})$ . As $j$ ranges over all $j$ , $\psi ^{j}$ ranges over all non-principal basic $\mathsf {S}_{[G,\mathsf {S}]}$ -characters. Thus, we see that every $\chi \in \mathrm {BCh}(\mathsf {S}\mid [G,\,\mathsf {S}])$ vanishes on $G\setminus [G,\,\mathsf {S}]$ . Hence $\mathsf {S}$ is a $\ast$ -product over $[G,\,\mathsf {S}]$ in this case as well. So assume that this is not the case. We instead consider the dual situation in $G^{\ast }=\mathrm {Irr}(G)$ . Then $Z={Z}({\check {\mathsf {S}}})$ has order $p$ , $[G^{\ast },\,{\check {\mathsf {S}}}]=G^{\ast }$ and that $\mathrm {cl}_{\check {\mathsf {S}}}(g)$ is not fixed by the action of $Z$ for any $g\in G^{\ast }\setminus Z$ . Let $h\in G^{\ast }\setminus Z$ . Then $\langle{h}\rangle$ gives a transversal for $Z$ in $G^{\ast }$ . Since $G^{\ast }/Z\simeq C_p$ , there exists $1\le m\le p-1$ such that $\mathrm {cl}_{ {\check {\mathsf {S}}}_{G/Z}}(hZ)=\{hZ,\,h^{m}Z,\,\dotsc,\,h^{m^{d-1}}Z\}$ , where $d=\mathrm {ord}_p(m)$ . Also, since $\lvert {\mathrm {cl}_{\check {\mathsf {S}}}(h)}\rvert=\lvert {\mathrm {cl}_{\check {\mathsf {S}}_{G/Z}}(hZ)}\rvert$ by Lemma 3.1, there exist $z_1,\,z_2,\,\dotsc,\,z_{d-1}\in {Z}({\check {\mathsf {S}}})$ such that $\mathrm {cl}_{\check {\mathsf {S}}}(h)=\{h,\,h^{m}z_1,\,h^{m^{2}}z_2,\,\dotsc,\,h^{m^{d-1}}z_{d-1}\}$ . So
\[ \mathrm{cl}_{\check{\mathsf{S}}}(h^{m})=\mathrm{cl}_{\check{\mathsf{S}}}(h)^{m}=\{h^{m},h^{m^{2}}z_1^{m},h^{m^{3}}z_2^{m},\dotsc,hz_{d-1}^{m}\}, \]
from which it follows that $\mathrm {cl}_{\check {\mathsf {S}}}(h^{m})z_1=\mathrm {cl}_{\check {\mathsf {S}}}(h)$ . Since $\mathrm {cl}_{\check {\mathsf {S}}}(h)$ is not fixed by multiplication by any element of $Z$ , this implies that $z_2=z_1^{m+1}$ . Similarly $z_3=z_2^{m}z_1=z_1^{m^{2}+m+1}$ . Continuing this way, we deduce that $z_i =z^{1+m+m^{2}+\dotsb +m^{i-1}}=z_1^{(m^{i}-1)/(m-1)}$ for each $1\le i\le d-1$ , where $z=z_1$ . Thus, we see that $h^{-1}k\in \langle{h^{m-1}z}\rangle$ for every $k\in \mathrm {cl}_{\check {\mathsf {S}}}(h)$ . Similarly, for every $w\in Z$ and $k\in \mathrm {cl}_{\check {\mathsf {S}}}(hw)$ , $(hw)^{-1}k\in \langle{h^{m-1}z}\rangle$ . Since every element of $G^{\ast }$ has the form $h^{j}w$ for some integer $j$ and $w\in Z$ , it follows that $g^{-1}k\in \langle {h^{m-1}z}\rangle$ for every $g\in G^{\ast }$ and $k\in \mathrm {cl}_{\check {\mathsf {S}}}(g)$ . This implies that $[G^{\ast },\,{\check {\mathsf {S}}}]=\langle{h^{m-1}z}\rangle< G^{\ast }$ , a contradiction.
We may now assume that there is another $\mathsf {S}$ -normal subgroup, say $N$ . Since $[G,\,\mathsf {S}]< G$ , $[G,\,\mathsf {S}]\ne Z(\mathsf {S})$ . So $N\cap [G,\,\mathsf {S}]=1$ , which implies $N={Z}(\mathsf {S})$ . Let $\chi \in \mathrm {BCh}(\mathsf {S})$ satisfy $[G,\,\mathsf {S}]\nleq \ker (\chi )$ . Let $\psi \in \mathrm {BCh}(\mathsf {S}_{[G,\mathsf {S}]})$ lie under $\chi$ . Then $\chi (1)/\psi (1)\in \{1,\,p\}$ by Lemma 3.3. Assume that $\chi (1)=p\psi (1)$ . Since $\chi (1)\le \lvert {G:Z(\mathsf {S})}\rvert = p$ by Lemma 3.2, we deduce that $\psi (1)=1$ . Since $[G,\,\mathsf {S}]\nleq \ker (\chi )$ , $\psi$ is non-principal and thus $[[G,\,\mathsf {S}],\,\mathsf {S}]=1$ . We conclude that $\mathsf {S}_{[G,\mathsf {S}]}=\mathsf {m}([G,\,\mathsf {S}])$ , which contradicts the fact that $[G,\,\mathsf {S}]\ne Z(\mathsf {S})$ . So $\chi (1)=\psi (1)$ and so $\psi ^{G}$ is the sum of $p$ distinct $\mathsf {S}$ -characters. If $\mathrm {BCh}(\mathsf {S}/[G,\,\mathsf {S}])$ is the set of basic $\mathsf {S}$ -characters with $[G,\,\mathsf {S}]$ in their kernel and $\mathrm {BCh}(\mathsf {S}\mid [G,\,\mathsf {S}])$ is the set of those without $[G,\,\mathsf {S}]$ in their kernel, then by the above argument we see
\begin{align*} \lvert{\mathrm{BCh}(\mathsf{S})}\rvert & =\lvert{\mathrm{BCh}(\mathsf{S}/[G,\mathsf{S}])}\rvert+\lvert{\mathrm{BCh}(\mathsf{S}\mid[G,\mathsf{S}])}\rvert\\ & =p+p\bigl(\lvert{\mathrm{BCh}(\mathsf{S}_{[G,\mathsf{S}]})}\rvert-1\bigr)\\ & =p \lvert{\mathrm{BCh}(\mathsf{S}_{[G,\mathsf{S}]})}\rvert= \lvert{\mathsf{S}_{{Z}(\mathsf{S})}}\rvert\cdot \lvert{\mathsf{S}_{[G,\mathsf{S}]}}\rvert. \end{align*}
It follows from Corollary 2.3 that $\mathsf {S}=\mathsf {S}_{{Z}(\mathsf {S})}\times \mathsf {S}_{[G,\mathsf {S}]}$ .
The final statement is a consequence of Lemma 4.1.
5. Partition supercharacter theories
We now describe a type of supercharacter theory that is (essentially) unique to elementary abelian groups of rank two. As in the previous section, $p$ is a prime and $G$ is the elementary abelian group $C_p\times C_p$ .
Lemma 5.1 Let $H_1,\,H_2,\,\dotsc,\,H_{p+1}$ be the non-trivial, proper subgroups of $G$. For every subset $I\subseteq \{1,\,2,\,\dotsc,\,p+1\}$, define $N_I=\bigcup_{i\in I}(H_i\setminus 1)$. For every partition $\mathcal{P}$ of $\{1,\,2,\,\dotsc,\,p+1\},$ the partition $\{N_I:I\in \mathcal{P}\}$ of $G\setminus 1$ gives the non-identity superclasses for a supercharacter theory $\mathsf{S}_{\mathcal{P}}$ of $G$.
Proof. To prove this, it suffices to show that there exist non-negative integers $a_{I,J,1}$ and $a_{I,J,L}$ such that
\[ \widehat{N_I}\widehat{N_J}=a_{I,J,1}\cdot 1+\sum_{L\in\mathcal{P}}a_{I,J,L}\widehat{N_L} \]
for every $I,\,J\in \mathcal{P}$. To that end, first let $I,\,J\in \mathcal{P}$ with $I\ne J$, so that $\widehat{H_i}\widehat{H_j}=\widehat{G}$ for every $(i,\,j)\in I\times J$. Then
\begin{align*} \widehat{N_I}\widehat{N_J} & =\left[\sum_{i\in I}(\widehat{H_i}-1)\right]\left[\sum_{j\in J}(\widehat{H_j}-1)\right]\\ & =\sum_{(i,j)\in I\times J}(\widehat{H_i}\widehat{H_j}-\widehat{H_i}-\widehat{H_j}+1)\\ & =\sum_{(i,j)\in I\times J}[\widehat{G}-(\widehat{H_i}-1)-(\widehat{H_j}-1)-1]\\ & =\lvert{I}\rvert \lvert{J}\rvert \widehat{G}-\lvert{J}\rvert \widehat{N_I}-\lvert{I}\rvert \widehat{N_J}-\lvert{I}\rvert \lvert{J}\rvert \cdot 1\\ & =\lvert{J}\rvert(\lvert{I}\rvert-1)\widehat{N_I}+\lvert{I}\rvert(\lvert{J}\rvert-1)\widehat{N_J}+\lvert{I}\rvert \lvert{J}\rvert\sum_{\substack{L\in\mathcal{P}\\L\ne I,J}}\widehat{N_L}, \end{align*}
where the last equality uses $\widehat{G}=1+\sum\nolimits_{L\in \mathcal{P}}\widehat{N_L}$. When $I=J$, the same computation applies to the pairs $(i,\,j)$ with $i\ne j$, while each pair with $i=j$ contributes $(\widehat{H_i}-1)^{2}=(p-2)(\widehat{H_i}-1)+(p-1)\cdot 1$; collecting terms again expresses $\widehat{N_I}\widehat{N_I}$ as a non-negative integer combination of $1$ and the class sums $\widehat{N_L}$. The result follows.
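Lemma 5.1 is also easy to confirm numerically. The following minimal sketch (ours, not part of the proof) lists the $p+1$ subgroups of order $p$, forms the sets $N_I$ for a chosen partition $\mathcal{P}$, and checks that every product of class sums is constant on the parts, i.e., decomposes with non-negative integer coefficients over $1$ and the $\widehat{N_I}$.

from collections import Counter

def check_partition_sct(p, P):
    """P: a partition of {0, ..., p} indexing the p+1 subgroups of order p."""
    gens = [(1, 0)] + [(m, 1) for m in range(p)]
    subgroups = [frozenset(((k * a) % p, (k * b) % p) for k in range(p)) for a, b in gens]
    classes = [frozenset({(0, 0)})]
    for I in P:
        classes.append(frozenset().union(*[subgroups[i] - {(0, 0)} for i in I]))
    def constant_on_parts(K, L):
        prod = Counter(((a + c) % p, (b + d) % p) for a, b in K for c, d in L)
        return all(len({prod[g] for g in C}) == 1 for C in classes)
    return all(constant_on_parts(K, L) for K in classes for L in classes)

# Example: p = 5 and P = {{0}, {1, 2}, {3, 4, 5}} (indices of the six subgroups).
print(check_partition_sct(5, [{0}, {1, 2}, {3, 4, 5}]))   # True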
We will call the supercharacter theory of Lemma 5.1 the partition supercharacter theory of $G$ corresponding to $\mathcal{P}$. As can be seen from the above proof, the reason this construction works is that any two distinct non-trivial, proper normal subgroups of $G$ generate $G$. As such, the same construction will work if $G$ is a direct product of two simple groups. That is, if $G=H_1\times H_2$, where $H_1\cong H_2$ is a non-abelian simple group, then the set of non-trivial, proper normal subgroups of $G$ is $\{H_1,\,H_2\}$. The only partitions of $\{1,\,2\}$ are $\{\{1,\,2\}\}$, which corresponds to $\mathsf{M}(G)$, and $\{\{1\},\,\{2\}\}$, which corresponds to $\mathsf{M}(H_1)\times \mathsf{M}(H_2)$.
However, if $G$ has at least three non-trivial normal subgroups, it is not difficult to see that $G$ must be an elementary abelian group of rank two if any two distinct normal subgroups generate $G$ . Indeed, this condition on $G$ implies that any non-trivial, proper normal subgroup is minimal normal. Let $L,\,N,\,M$ be distinct non-trivial, proper normal subgroups of $G$ . Then $G=L\times N=L\times M=N\times M$ , so $G=NM\le C_G(L)$ and $G=LM\le C_G(N)$ . Thus, we see that $G$ is abelian. So $L,\,N,\,M$ are cyclic of prime order. Since $L\cong G/N\cong M\cong G/L\cong N$ , $G=C_p\times C_p$ for some prime $p$ .
Lemma 5.2 Let $\mathcal {P}$ be the partition of $\{1,\,2,\,\dotsc,\,p+1\}$ consisting of all singletons. A supercharacter theory $\mathsf {S}$ is a partition supercharacter theory if and only if $\mathsf {S}_{\mathcal {P}}\preccurlyeq \mathsf {S}$ . In particular, the set of partition supercharacter theories is an interval in the lattice of supercharacter theories.
Proof. The supercharacter theory $\mathsf {S}$ is a partition supercharacter theory if and only if $\langle{g}\rangle \setminus \{1\}\subseteq \mathrm {cl}_{\mathsf {S}}(g)$ holds for every $g\in G$ . In other words, if and only if $g^{i}\in \mathrm {cl}_{\mathsf {S}}(g)$ holds for every $g\in G$ and $1\le i\le p-1$ . This is exactly the condition $\mathsf {S}_{\mathcal {P}}\preccurlyeq \mathsf {S}$ . Thus, the interval $[\mathsf {S}_{\mathcal {P}},\,\mathsf {M}(G)]$ is the set of partition supercharacter theories of $G$ .
Next, we give an example showing that the set of automorphic supercharacter theories is not closed under the meet operation in $\mathrm{SCT}(G)$.
Example 5.3 Let $p\ge 5$ . Let $H_1,\,H_2,\,H_3$ be distinct subgroups of $G$ of order $p$ . Define the supercharacter theories $\mathsf {S}$ and $\mathsf {T}$ as follows: $\mathsf {S}=\mathsf {M}(H_1)\times \mathsf {M}(H_2)$ , $\mathsf {T}=\mathsf {M}(H_2)\times \mathsf {M}(H_3)$ , which are both partition supercharacter theories. So $\mathsf {S}\wedge \mathsf {T}$ is also a partition supercharacter theory by Lemma 5.2. It is not difficult to see that $H_1,\,H_2,\,H_3$ are the only non-trivial, proper $(\mathsf {S}\wedge \mathsf {T})$ -normal subgroups of $G$ . By Theorem A, both $\mathsf {S}$ and $\mathsf {T}$ come from automorphisms. However, $\mathsf {S}\wedge \mathsf {T}$ does not come from automorphisms, since $\mathsf {S}\wedge \mathsf {T}$ has exactly three non-trivial, proper supernormal subgroups (see Lemma 6.6).
We close this section by noting that the construction found in this section is closely related to the amorphic association schemes studied in [Reference van Dam and Muzychuk22].
6. Non-trivial $\mathsf {S}$ -normal subgroups
In this section, we discuss the structure of the supercharacter theories of $G=C_p\times C_p$ , $p$ a prime, that have non-trivial, proper supernormal subgroups. We begin by studying the structure of supercharacter theories with exactly one such subgroup.
To prove Theorem 4.2, we showed that if $[G,\,\mathsf {S}]$ or $Z(\mathsf {S})$ were the unique $\mathsf {S}$ -normal subgroup of $G$ , then $\mathsf {S}$ is a $\ast$ -product. One may wonder if a similar result holds for any supercharacter theory of $G$ with a unique $\mathsf {S}$ -normal subgroup. This is not the case, as the next example illustrates.
Example 6.1 We show that not every supercharacter theory of $C_p\times C_p$ with a unique non-trivial, proper supernormal subgroup is a $\ast$ -product. This example comes from $G=C_5\times C_5$ . Write $G=\langle {x,\,y}\rangle$ . Let
\begin{align*} K & =\bigl\{\{1\},\{y,y^{2},y^{3},y^{4}\},\{x,x^{2},x^{3},x^{4},xy^{4},x^{2}y^{3},x^{3}y^{2},x^{4}y\}\bigr\}\\ & \cup\bigl\{\{xy,x^{2}y^{2},x^{3}y^{3},x^{4}y^{4},xy^{3},x^{2}y,x^{3}y^{4},x^{4}y^{2},xy^{2},x^{2}y^{4},x^{3}y,x^{4}y^{3}\}\bigr\}. \end{align*}
One may readily verify that $K$ gives the set of $\mathsf {S}$ -classes for a supercharacter theory $\mathsf {S}$ of $G$ . Observe that $N=\langle{y}\rangle$ is the unique non-trivial, proper $\mathsf {S}$ -normal subgroup. However, $\mathsf {S}$ is not a $\ast$ -product over $N$ since, for example, $x$ and $xy$ lie in different $\mathsf {S}$ -classes.
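This verification can be done by machine. The sketch below (ours, not from the text) checks the Schur-ring conditions for the displayed partition of $C_5\times C_5$: the identity is a part, the set of inverses of each part is again a part, and every product of class sums is constant on the parts. By the correspondence with central S-rings recalled in the introduction, this amounts to $K$ being the set of $\mathsf{S}$-classes of a supercharacter theory.

from collections import Counter

p = 5
parts = [
    frozenset({(0, 0)}),
    frozenset((0, j) for j in range(1, 5)),
    frozenset([(i, 0) for i in range(1, 5)] + [(1, 4), (2, 3), (3, 2), (4, 1)]),
    frozenset((i, j) for i in range(1, 5) for j in range(1, 5)) - frozenset([(1, 4), (2, 3), (3, 2), (4, 1)]),
]

assert sum(len(C) for C in parts) == p * p
inverse_closed = all(frozenset(((-i) % p, (-j) % p) for i, j in C) in parts for C in parts)

def constant_on_parts(K, L):
    prod = Counter(((a + c) % p, (b + d) % p) for a, b in K for c, d in L)
    return all(len({prod[g] for g in C}) == 1 for C in parts)

print(inverse_closed and all(constant_on_parts(K, L) for K in parts for L in parts))   # True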
Observe that the above supercharacter theory is an example of a partition supercharacter theory. Indeed, every supercharacter theory $\mathsf{S}$ with a unique non-trivial, proper $\mathsf{S}$-normal subgroup that we have observed is either a $\ast$-product or a partition supercharacter theory. In particular, each supercharacter theory of this form has come from automorphisms or has been a partition supercharacter theory.
We have observed a similar phenomenon for those with exactly two non-trivial, proper $\mathsf {S}$ -normal subgroups. Specifically, it appears as though every supercharacter theory of $G$ with exactly two non-trivial, proper supernormal subgroups either comes from automorphisms or is a partition supercharacter theory. It is, however, not the case that each such supercharacter theory that is not a partition supercharacter theory is a direct product, as can be seen from the next example.
Example 6.2 Let $G=C_5\times C_5$. Let $x$ and $y$ be non-identity elements of $G$ with $G=\langle{x,\,y}\rangle$. Let $H=\langle{x}\rangle$ and $N=\langle{y}\rangle$. Let $\sigma$ be the automorphism of $G$ defined by $\sigma(x^{i}y^{j})=x^{-i}y^{2j}$. Then the orbit of every element lying outside of $H$ has size four, and the orbit of every non-identity element of $H$ has size two. Let $\mathsf{S}$ be the supercharacter theory of $G$ coming from $\langle{\sigma}\rangle$. Then $H$ and $N$ are the only non-trivial, proper $\mathsf{S}$-normal subgroups of $G$, $\lvert{\mathsf{S}_H}\rvert=3$, $\lvert{\mathsf{S}_N}\rvert=2$. Since $\lvert{\mathsf{S}_H\times \mathsf{S}_N}\rvert =\lvert{\mathsf{S}_H}\rvert \cdot \lvert{\mathsf{S}_N}\rvert =6<8=\lvert{\mathsf{S}}\rvert$, $\mathsf{S}$ is not a direct product by Corollary 2.3.
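The counts in this example are quickly verified. The sketch below (ours, not from the text) computes the $\langle{\sigma}\rangle$-orbits on $C_5\times C_5$, with $x^{i}y^{j}$ encoded as $(i,\,j)$, and reports $\lvert{\mathsf{S}}\rvert=8$, $\lvert{\mathsf{S}_H}\rvert=3$ and $\lvert{\mathsf{S}_N}\rvert=2$.

p = 5

def sigma(g):
    i, j = g
    return ((-i) % p, (2 * j) % p)

seen, orbits = set(), []
for g in [(i, j) for i in range(p) for j in range(p)]:
    if g in seen:
        continue
    orb, h = {g}, sigma(g)
    while h not in orb:
        orb.add(h)
        h = sigma(h)
    orbits.append(orb)
    seen |= orb

in_H = [o for o in orbits if all(j == 0 for _, j in o)]    # orbits contained in H = <x>
in_N = [o for o in orbits if all(i == 0 for i, _ in o)]    # orbits contained in N = <y>
print(len(orbits), len(in_H), len(in_N))   # 8 3 2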
As mentioned just prior to Example 6.2, it appears as though every supercharacter theory of $G$ with exactly two non-trivial, proper supernormal subgroups either comes from automorphisms or is a partition supercharacter theory (in fact, we have yet to find any supercharacter theory of $C_p\times C_p$ that does not either come from automorphisms or partitions). We do have some evidence of this, but only have one weak result in this direction. Before giving the result, we set up some convenient notation that will be used for the remainder of the paper.
Let $\mathsf {S}$ be a supercharacter theory of $G$ and let $H$ be a non-trivial, proper $\mathsf {S}$ -normal subgroup. Then $H$ is cyclic of prime order, so $\mathsf {S}_H$ comes from automorphisms, say from the subgroup $A\le \mathrm {Aut}(H)$ . Since $\mathrm {Aut}(H)\cong C_{p-1}$ is just the collection of power maps, there exists an integer $m$ such that $A$ is generated by the automorphism sending an element to its $m^{\rm th}$ -power. Thus, if $g\in H$ , then $\mathrm {cl}_{\mathsf {S}_H}(g)=\{g,\,g^{m},\,g^{m^{2}},\,\dotsc \}$ . We denote this supercharacter theory by $[H]_m$ . Given an integer $m$ and $g\in G$ , we let $[g]_m$ denote the set $\{g,\,g^{m},\,g^{m^{2}},\,\dotsc \}$ . We let $\lvert {m}\rvert_p$ denote the order of $m$ modulo $p$ , which is also the size of $[g]_m$ for $1\ne g\in G$ .
Lemma 6.3 Let $\mathsf{S}$ have exactly two non-trivial, proper $\mathsf{S}$-normal subgroups $H$ and $N$. Write $\mathsf{S}_H=[H]_{m_1}$ and $\mathsf{S}_N=[N]_{m_2}$. Let $d_i=\lvert{m_i}\rvert_p$, $i=1,\,2$, and assume that $(d_1,\,d_2)=1$. Then $\mathsf{S}=\mathsf{S}_H\times \mathsf{S}_N$. In particular, $\mathsf{S}$ comes from automorphisms.
Proof. Let $g\in G\setminus (H\cup N)$. Then $d_2=\lvert{\mathrm{cl}_{\mathsf{S}_{G/H}}(g)}\rvert$ and $d_1=\lvert{\mathrm{cl}_{\mathsf{S}_{G/N}}(g)}\rvert$. So $d_1d_2=\mathrm{lcm}(d_1,\,d_2)$ divides $\lvert{\mathrm{cl}_{\mathsf{S}}(g)}\rvert$. Since $\mathsf{S}\preccurlyeq \mathsf{S}_H\times \mathsf{S}_N$, we know $\mathrm{cl}_{\mathsf{S}}(g)\subseteq \mathrm{cl}_{\mathsf{S}_H\times \mathsf{S}_N}(g)$. Since $d_1d_2\le \lvert{\mathrm{cl}_{\mathsf{S}}(g)}\rvert \le \lvert{\mathrm{cl}_{\mathsf{S}_H\times \mathsf{S}_N}(g)}\rvert =d_1d_2$, we conclude that $\mathrm{cl}_{\mathsf{S}}(g)=\mathrm{cl}_{\mathsf{S}_H\times \mathsf{S}_N}(g)$. Hence $\mathsf{S}=\mathsf{S}_H\times \mathsf{S}_N$, and the result follows from Lemma 4.1.
We suspect that the condition $(d_1,\,d_2)=1$ in Lemma 6.3 can be relaxed to $(d_1,\,d_2)< p-1$. This has been verified in full only for the primes $p=2,\,3$ and $5$.
Now suppose that there are at least three $\mathsf {S}$ -normal subgroups. We conjecture that $\mathsf {S}$ comes from automorphisms whenever the restriction to a $\mathsf {S}$ -normal subgroup is not the coarsest theory.
Conjecture 6.4 Let $\mathsf {S}$ be a supercharacter theory of $G$ . Suppose that $G$ has at least three non-trivial, proper $\mathsf {S}$ -normal subgroups, and let $H$ be one of them. If $\mathsf {S}_H\neq \mathsf {M}(H)$ , then every subgroup of $G$ is $\mathsf {S}$ -normal. In particular, $\mathsf {S}$ comes from automorphisms.
Although a general proof appears to be difficult, this can be proved rather easily in a couple of specific cases.
Lemma 6.5 Let $\mathsf {S}$ be a supercharacter theory of $G$ . Suppose that $G$ has at least three non-trivial, proper $\mathsf {S}$ -normal subgroups. Let $H$ be $\mathsf {S}$ -normal and write $\mathsf {S}_H=[H]_m$ . If $\lvert {m}\rvert _p=1$ or $2,$ then Conjecture 6.4 holds.
Proof. First assume that $\lvert {m}\rvert _p=1$ , and let $K$ be another $\mathsf {S}$ -normal subgroup. Then $\mathsf {S}_K=\mathsf {m}(K)$ , so $G=\langle{H,\,K}\rangle\le Z(\mathsf {S})$ .
Now assume that $\lvert {m} \rvert_p=2$. We may find distinct non-identity elements $x,\,y\in G$ such that $\langle{x}\rangle$, $\langle{y}\rangle$ and $\langle{xy}\rangle$ are all $\mathsf {S}$-normal. We claim that $\langle {xy^{n}}\rangle$ is $\mathsf {S}$-normal for every $n\ge 1$. Let $[g]$ denote the set $\{g,\,g^{-1}\}$ for $g\in G$. Suppose the claim fails, and let $n\ge 2$ be the smallest integer for which $\langle {xy^{n}}\rangle$ is not $\mathsf {S}$-normal. Then $\langle {xy^{n-1}}\rangle$ is $\mathsf {S}$-normal, which means that $\widehat {[xy^{n-1}]}\widehat {[y]}$ can be expressed as a non-negative integer linear combination of $\mathsf {S}$-class sums. Since $\widehat {[xy^{n-1}]}\widehat {[y]}=\widehat {[xy^{n}]}+\widehat {[xy^{n-2}]}$ and $\langle {xy^{n-2}}\rangle$ is $\mathsf {S}$-normal, $[xy^{n}]$ must be a union of $\mathsf {S}$-classes. So $\langle {[xy^{n}]}\rangle=\langle {xy^{n}}\rangle$ is also $\mathsf {S}$-normal, which contradicts the choice of $n$. Thus $\langle {xy^{n}}\rangle$ is $\mathsf {S}$-normal for every integer $n$, as claimed. Since every subgroup of $G$ of order $p$ is either $\langle{y}\rangle$ or of the form $\langle {xy^{n}}\rangle$ for some $n\ge 0$, every subgroup of $G$ is $\mathsf {S}$-normal.
Conjecture 6.4 also holds in the case that $\mathsf {S}$ comes from automorphisms.
Lemma 6.6 Let $\mathsf {S}$ be a supercharacter theory of $G$ coming from automorphisms. If $G$ has at least three non-trivial, proper $\mathsf {S}$ -normal subgroups, then every subgroup of $G$ is $\mathsf {S}$ -normal.
Proof. Suppose that $\mathsf {S}$ comes from $A\le \mathrm {Aut}(G)$ . Let $H_i$ , $i=1,\,2,\,3$ , be $\mathsf {S}$ -normal of order $p$ . We may assume that $H_1$ is generated by $x$ , $H_2$ is generated by $y$ and $H_3$ is generated by $xy$ . Let $a\in A$ . Since $H_1$ and $H_2$ are $\mathsf {S}$ -normal, there exist integers $i$ and $j$ so that $x^{a}=x^{i}$ and $y^{a}=y^{j}$ . Then $(xy)^{a}=x^{i}y^{j}$ . Since $H_3$ is $\mathsf {S}$ -normal, $(xy)^{a}\in \langle {xy}\rangle$ , which forces $i=j$ . We conclude that $a\in Z(\mathrm {Aut}(G))$ and hence fixes every subgroup of $G$ .
We remark that Lemma 6.6 also follows from [Reference Wielandt23, Lemma 26.3], a result about Schur rings.
We now outline a strategy we believe will work to prove Conjecture 6.4, although the actual proof has evaded us. This strategy involves an algorithm of the first author appearing in [Reference Burkett4], which we now describe. We begin by defining an equivalence relation on $G$ coming from a partial partition of $G$ . Let $\mathcal {C}$ be a $G$ -invariant partial partition of $G$ . For $K,\,L\in \mathcal {C}$ and $g\in G$ , define
\[ (K,L)_g=\lvert{\{(k,l)\in K\times L:kl=g\}}\rvert. \]
Also, recall that for $K\subseteq G$ , $\hat {K}$ denotes the element $\sum \nolimits _{g\in K}g$ of the group algebra $\mathbb {Z}(G)$ .
Define an equivalence relation $\sim$ on $G$ by defining $g\sim h$ if and only if
\[ (K,L)_g=(K,L)_h \]
for all $K,\,L\in \mathcal {C}$. Let $\mathscr {K}(\mathcal {C})$ denote the set of equivalence classes of $G$ under $\sim$. It is shown in [Reference Burkett4] that $\mathscr {K}(\mathcal {C})$ is a refinement of $\mathcal {C}$ and that $\mathscr {K}(\mathcal {C})=\mathcal {C}$ if and only if $\mathcal {C}$ is the set of superclasses for a supercharacter theory $\mathsf {S}$ of $G$.
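A minimal computational sketch of this refinement step (plain Python; the toy group $C_5$, written additively, and the particular partition below are illustrative assumptions, not data from the paper):

```python
# Sketch of the refinement K(C): given a (partial) partition C of a finite
# group G, declare g ~ h iff (K,L)_g = (K,L)_h for all parts K, L in C,
# where (K,L)_g counts pairs (k, l) in K x L with k*l = g.
from itertools import product

def refine(G, parts, mul):
    counts = {}
    for g in G:
        counts[g] = tuple(
            sum(1 for k, l in product(K, L) if mul(k, l) == g)
            for K in parts for L in parts
        )
    classes = {}
    for g, c in counts.items():
        classes.setdefault(c, set()).add(g)
    return list(classes.values())

# Toy example: G = C_5 as residues mod 5; C is the partition into orbits
# of the inversion automorphism, which is already a supercharacter theory.
G = range(5)
mul = lambda a, b: (a + b) % 5
C = [{0}, {1, 4}, {2, 3}]
print(refine(G, C, mul))   # returns the same partition, i.e. K(C) = C
```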
Gaussian Mixture Model
From K-means we know that:
K-means forces clusters to be spherical
In K-means clustering every point can only belong to one cluster
But sometimes it might be desirable to have elliptical clusters rather than spherical ones. And what if there is a data point right in the center of two clusters?
Gaussian Mixture Model
For a random variable $X$, the Gaussian mixture model density can be written as:
$$p(x)=\sum_{k=1}^{K} \pi_{k} \mathcal{N}\left(x | \mu_{k}, \Sigma_{k}\right)$$ where $\mathcal{N}\left(x | \mu_{k}, \Sigma_{k}\right)$ is the $k^{th}$ component of the mixture model.
Generalized Form
For a data set $X$ of $s$ observations, this generalizes to the likelihood:
$$p(X | M, \Sigma, \pi)=\prod_{i=1}^{s} \sum_{k=1}^{K} \pi_{k} \mathcal{N}\left(x_{i} | \mu_{k}, \Sigma_{k}\right)$$ $$\text{for} \quad \mathcal{N}\left(x_{i} | \mu_{k}, \Sigma_{k}\right)=\frac{1}{\sqrt{(2 \pi)^{n} \operatorname{det}\left(\Sigma_{k}\right)}} e^{-\frac{1}{2}\left\langle\Sigma_{k}^{-1}\left(x_{i}-\mu_{k}\right), x_{i}-\mu_{k}\right\rangle}$$ where
$X=\left( \begin{array}{llll}{x_{1}} & {x_{2}} & {\dots} & {x_{s}}\end{array}\right) \in \mathbb{R}^{n \times s} \quad \text { input data }$
$M=\left( \begin{array}{llll}{\mu_{1}} & {\mu_{2}} & {\dots} & {\mu_{K}}\end{array}\right) \in \mathbb{R}^{n \times K} \quad \text { prototype vectors }$
$\Sigma=\left(\Sigma_{1} \quad \Sigma_{2} \quad \ldots \quad \Sigma_{K}\right) \in \mathbb{R}^{n \times n \times K} \quad \text { covariance matrices }$
$\pi=\left( \begin{array}{llll}{\pi_{1}} & {\pi_{2}} & {\dots} & {\pi_{K}}\end{array}\right) \in \mathbb{R}^{K \times 1} \quad \text { mixing weights}$

$$\pi_{k} \geq 0 \quad \text { for all } \quad k \in\{1, \ldots, K\}, \quad \text { as well as } \quad \sum_{k=1}^{K} \pi_{k}=1$$
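Before turning to estimation, the likelihood above can be evaluated directly. A minimal NumPy/SciPy sketch (all data and parameter values below are made-up illustrations):

```python
# Evaluate p(X | M, Sigma, pi) exactly as in the formula above.
import numpy as np
from scipy.stats import multivariate_normal

X = np.array([[0.1, 0.2], [2.9, 3.1], [3.2, 2.8]])   # s = 3 points in R^2
M = np.array([[0.0, 0.0], [3.0, 3.0]])               # K = 2 component means
Sigma = np.array([np.eye(2), 0.5 * np.eye(2)])       # K covariance matrices
pi = np.array([0.4, 0.6])                            # mixing weights, sum to 1

def likelihood(X, M, Sigma, pi):
    # product over points of the weighted sum of component densities
    per_point = [
        sum(pi[k] * multivariate_normal.pdf(x, mean=M[k], cov=Sigma[k])
            for k in range(len(pi)))
        for x in X
    ]
    return np.prod(per_point)

print(likelihood(X, M, Sigma, pi))           # the likelihood value
print(np.log(likelihood(X, M, Sigma, pi)))   # the log-likelihood, used in practice
```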
Now the goal of the algorithm is: given $X$, determine the parameters $M$, $\Sigma$ and $\pi$, for example by maximizing the likelihood above.
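In practice that maximization is usually carried out with the EM algorithm; scikit-learn's `GaussianMixture` wraps it. A minimal sketch on synthetic data (all settings are illustrative assumptions):

```python
# Fit M, Sigma and pi by (EM-based) maximum likelihood with scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=1.0, size=(200, 2)),
    rng.normal(loc=[3, 3], scale=0.7, size=(200, 2)),
])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X)

print(gmm.weights_)              # estimated pi
print(gmm.means_)                # estimated M (one mean per row)
print(gmm.covariances_)          # estimated Sigma
print(gmm.predict_proba(X[:3]))  # soft assignments: a point may belong to both clusters
```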
Model Iteration Illustration
\begin{document}
\title{On the Asymptotic Behaviour of some Positive Semigroups }
\author{B. M. Makarow and M. R. Weber }
\vspace*{1cm}
\institut{Sankt Petersburg State University and Technische Universit\"at Dresden }
\preprintnumber{MATH-AN-09-2000 }
\date{\small{}}
\sloppy \makepreprinttitlepage
\maketitle
\setcounter{page}{1}
\begin{abstract} \noindent Similar to the theory of finite Markov chains it is shown that in a Banach space X ordered by a closed cone K with nonempty interior $\mathrm{int}(K)$ a power bounded positive operator $A$ with compact power such that its trajectories for positive vectors eventually flow into $\mathrm{int}(K)$ defines a "limit distribution", i.e. its adjoint operator has a unique fixed point in the dual cone. Moreover, the sequence $\{A^n\}_{n\in \mathbb{N}}$ converges with respect to the strong operator topology and for each functional $f\in X'$ the sequence $\{(A^*)^n(f)\}_{n\in \mathbb{N}}$ converges with respect to the weak*-topology (Theorem 5). If a positive bounded $C_0$-semigroup of linear continuous operators $\{S_t\}_{t\geq 0}$ on a Banach space contains a compact operator and the trajectories of the non-zero vectors $x\in K$ have the property from above then, in particular, $\{S_t\}_{t\geq 0}$ and $\{S^*_t\}_{t\geq 0}$ converge to their limit operator with respect to the operator norm, respectively (Theorem 4). For weakly compact Markov operators in the space of real continuous functions on a compact topological space a corresponding result can be derived that characterizes the long-term behaviour of regular Markov chains. \end{abstract}
\section{Introduction}
The main purpose of our paper is to show that the method which is used to prove the existence of a limit distribution in the theory of stationary Markov chains (see for example \cite{Fel2}, chapt.VII,\S7, \cite{KemSn}, chapt.IV) can be transferred to a much more general situation. The operator corresponding to the Markov chain is replaced by a positive semigroup of operators acting in a Banach space ordered by a cone with nonempty interior, and the condition of regularity of the Markov chain (in the sense of \cite{KemSn}) is transformed into the condition of strong positivity of the operator or into an even more general condition (see condition 1) in the Theorems \ref{t1} --- \ref{t3}).
In particular, the results generalize Theorem 1 of \cite{Fel1} and show that the limit distribution exists provided the operator of the random walk on a compact space is weakly compact and satisfies the condition of regularity.
The main result (Theorem \ref{t2}) refers to the case, where a positive $C_0$-semigroup $\{S_t(x)\}_{t\geq 0}$ of linear continuous operators acts in an ordered Banach space $X$. The semigroup of operators is supposed to be uniformly bounded and to contain some compact operator.
Moreover, for each nonzero vector $x\in K$ its trajectory $\{S_t(x)\}_{t\geq 0}$ eventually flows into $\mathrm{int}(K)$. Then the following alternative takes place: either the operators $S_t$ for $t\to+\infty$ converge to $0$ with respect to the strong operator topology or the semigroup possesses a common fixed point $u$ in $\mathrm{int}(K)$, the adjoint operators $\{S^*_t\}_{t\geq 0}$ have a unique fixed point $f_0$ ("limit distribution") in the dual cone and finally, $\{S_t\}_{t\geq 0}$ and $\{S^*_t\}_{t\geq 0}$ both for $t\to+\infty$ converge to the operator $A_0=f_0\otimes u$ with respect to the norm topology.
Theorem \ref{t1} shows that the requirement of compactness can be considerably relaxed. However, in this case the convergence of the operators $\{S_t\}_{t\geq 0}$ and $\{S^*_t\}_{t\geq 0}$ to $A_0$ takes place only with respect to the strong operator
topology.
The Theorems \ref{t3} - \ref{t5} deal with sequences $\{A^n\}_{n\in \mathbb{N}}$ of iterates of a positive operator $A$ and are obtained as particular cases of the general results.
\section{Preliminaries}
We recall that a nonempty subset $K$ of a vector space $X$ is a {\it wedge}, if $x, y\in K, \lambda,\mu\geq 0$ implies $\lambda x+\mu y\in K$. If in addition $x, -x\in K$ implies $x=0$, then $K$ is a {\it cone}.
In what follows we consider real normed spaces $(X,K,\|\cdot\|)$ in which an order is introduced by means of a closed cone $K$.
\begin{defi}\label{D2} {\rm Any complete normed space which is ordered by a closed cone we will call an {\it ordered Banach space}
and denote it by $(X, K, \|\cdot\|)$.} \end{defi}
Briefly we will write $X$ instead of $(X,K,\|\cdot\|)$ and denote its dual by $X'$. The closed ball in $X$ with radius $r>0$ and centered at the point $x$ is denoted by $B(x;r)$. We use the notations $x\in K$ and $x\geq 0$ synonymously.
A cone $K$ is said to be {\it generating} (or {\it reproducing}), if each vector $x\in X$ has a representation as $x=x_1-x_2$, where $x_1,x_2\in K$.
A cone $K$ is said to be {\it normal}, if there exists a positive number $\delta$, such that
$\|x+y\|\ge\delta\max\{\|x\|,\|y\|\}$ for any $x,\ y \in K$.
A cone $K$ is said to be {\it nonflat}, if there exists a positive constant $\gamma>0$, such that each element $x\in X$ is representable as $x=x_1-x_2$,
where $x_i\in K$ and $\|x_i\|\leq \gamma \|x\|\; (i=1,2)$.
A linear functional defined on $X$ is said to be {\it positive}, if it takes on nonnegative values on all vectors of the cone $K$. The set of all positive functionals of $X'$ is called the {\it dual wedge} and will be denoted by $K'$, i.e. $K'=\{f\in X\colon f(x)\geq 0 \ \mbox{for all} \ x\in K\}$.
The following result goes back to M.G. Krein and V.L. \v{S}mulian (s. \cite{KLS}) \begin{theo}\label{t0} If the cone $K$ is closed and normal then the wedge $K'$ is a closed genera\-ting cone, i.e. each functional of $X'$ has a representation as a difference of two positive functionals. Moreover, $K'$ is nonflat . \end{theo} \textit{Proof.\ } For the first part of the theorem see \cite{KLS}. We restrict ourselves to the proof of the nonflatness of $K'$.
Let be $B_+^*=\{f\in K'\colon \|f\|\leq 1\}$ and $E=B_+^*-B_+^*$. According to the Banach - Alaoglu Theorem (s. \cite{KA3},\ chapt.III, \S3) the set $B_+^*$ is weak$^*$-compact, and therefore $E$ is closed. From the first part it follows that $X'=\bigcup_{n\in\mathbb{N}}nE$, and so $0$ is an interior point of $E$, i.e. for some $r>0$ the ball $B^*(0;r)$ (in $X'$) belongs to $E$. This means
$r\frac{f}{\|f\|}\in E$ for each $f\in X',\ f\neq 0$, and hence each functional
$f\in X',\ f\neq 0$, can be represented as $f=f_1-f_2$, where $f_1, f_2\in K'$ and $\|f_i\|\leq \frac{1}{r}\|f\|$. Now $\gamma^*=\frac{1}{r}$ can be taken as the constant of nonflatness of the cone $K'$.
\quad\rule{1.5ex}{1.5ex}
For a convenient refering we list some more properties (s. \cite{Kra}, \cite{KLS}) of the space $X$, its dual $X'$, of the cone $K$ and its dual cone $K'$ which are frequently used further on.
In the sequel we assume that the cone $K$ is closed and normal and satisfies $\mathrm{int}(K)\neq\varnothing$. \begin{itemize} \item[a)] {\it The cone $K$ is nonflat.}
Indeed. Fix $u\in \mathrm{int}(K)$. Then $u$ belongs to $K$ together with some closed ball centered at $u$, i.e. $\overline{B}(u;r)\subset K$ for some $r>0$. Then for any $x\in X$ \begin{equation}\label{f1}
\frac{\|x\|}{r}u \pm x \in K. \end{equation}
Put now $x_1=\frac{1}{2}\left(\frac{\|x\|}{r}u+x\right)$ and
$x_2=\frac{1}{2}\left(\frac{\|x\|}{r}u-x\right)$. One has $x_1, x_2\in
K, \; x=x_1-x_2$ and $\|x_i\|\leq
\frac{1}{2}\left(\frac{\|u\|}{r}+1\right)\|x\|$. The number $\frac{1}{r}\|u\|$ can be taken as the constant $\gamma$ of nonflatness of $K$.
\item[b)] {\it Each linear positive functional $f$ on $X$ is continuous and satisfies the condition $f(x)>0$, if $f\in K',\ f\neq 0,\ x\in \mathrm{int}(K)$.}
The relation (\ref{f1}) implies $\mp f(x)\leq
\frac{\|x\|}{r}f(u)$, which shows that $f$ is bounded on the unit ball of $X$ and $\|f\|\leq \frac{1}{r}f(u)$. If $f\neq 0$ then $f(u)>0$.
\item[c)] {\it Each additive and positive homogeneous functional $f$ on $K$ with values in the nonnegative reals extends uniquely to a linear positive functional on the whole $X$.}
Indeed, if $x\in X$ is an arbitrary vector then $x=x_1-x_2$, where $x_i\in K$. Put \[ f(x)=f(x_1)-f(x_2). \] It is easy to see that the functional $f$ is the required extension. We omit the standard proof (based on the nonflatness of $K$) of both the correctness of the definition and the uniqueness of the extension.
\item[d)] {\it For any $x\in K, x\neq 0$ there exists a functional $f\in K'$ such that $f(x)>0$.}
Indeed, according to the theorem on a sufficient number of functionals there is a functional
$f\in X'$ such that $f(x)\neq 0$. Since $f=f_1-f_2$ with $f_1, f_2 \in K'$, at least one
of the nonnegative numbers $f_1(x), f_2(x)$ is strongly positive.
Remember that a set $D\subset K$ is called a {\it base of the cone} $K$, if $D$ is convex and each vector $x\in K, \ x\neq 0$ has a unique representation as $x=\lambda y$, where $\lambda>0$ and $y\in D$.
In the sequel we are interested in bases of the dual cone $K'$. The existence of interior points in the cone $K$ guarantees that the cone $K'$ possesses a base.
\item[e)] Let now $\mathcal{F}$ be an arbitrary base of the cone$K'$. Then the closedness of $K$ implies the following important property: If $x,y\in X $ then \[
x\leq y \quad \mbox{is equivalent to}\quad f(x)\leq f(y)\quad
\mbox{for all}\quad f\in \mathcal{F} \ \; . \] and consequently $x=y$ is equivalent to $f(x)=f(y)$ for all $f\in \mathcal{F}$. Moreover, together with b) one has $x\in \mathrm{int}(K)$ if and only if $f(x)>0$ for each $f\in \mathcal{F}$.
\item[f)] For an arbitrary fixed element $u\in \mathrm{int}(K)$ denote \[ \mathcal{F}=\mathcal{F}_u=\{f\in K'\colon f(u)=1\}. \] {\it Then the set $\mathcal{F}$ is bounded, weak*-compact and is a base of the dual cone $K'$}.
The relation (\ref{f1}) implies the estimate \begin{equation}\label{f02}
|f(x)|\leq \frac{1}{r}\|x\| \quad \mbox{for any} \quad f\in \mathcal{F},\ x\in X. \end{equation} Because of its weak$^*$-closedness the set $\mathcal{F}$ is weak$^*$-compact by the Banach - Alaoglu Theorem. The set $\mathcal{F}$ is convex, and property b) implies that $f(u)>0$ for $f\in K',\ f\neq 0$. Therefore, $\mathcal{F}$ is a base of the dual cone.
By means of the interior point $u$ of the cone $K$ one can define
the following nonnegative functional on $X$ \[
\|x\|_u = \inf\{\lambda\geq 0\colon -\lambda u\leq x\leq \lambda u\} \] which is called the {\it u-norm}. Notice that the $u$-norm of an element $x$ can be calculated also by the formula
\[ \|x\|_u =\sup\{|f(x)|\colon f\in \mathcal{F}\}. \] It is clear that the u-norm is actually a norm and that it is monotone on $K$, i.e. $x\leq y$ implies
$\|x\|_u\leq \|y\|_u$. \item[g)] {\it The u-norm is equivalent to the original norm on $X$.} Indeed, (\ref{f02}) implies \[
\|x\|_u=\sup\{|f(x)|\colon f\in \mathcal{F}\}\leq \frac{1}{r}\|x\|. \]
On the other hand, let $x\in X, \ f\in X',\ \|f\|=1$ and $f(x)=\|x\|$. Then
$f=f_1-f_2$, where $f_i\in K'$ and $\|f_i\|\leq \gamma^*\|f\|$ and $\gamma^*$ denotes the constant of nonflatness of the cone $K'$. Then
\[ \|x\|=f(x)\leq |f_1(x)|+|f_2(x)|
\leq f_1(u)\|x\|_u+f_2(u)\|x\|_u\leq
C_0\|x\|_u, \]
where $C_0=2\gamma^*\|u\| $.
\end{itemize}
Summing up we have that for each vector $u\in \mathrm{int}(K)$ there is a constant $C_u>0$ such that for each $x\in X$ \begin{equation}\label{f05}
C_u^{-1} \|x\|\leq \|x\|_u\leq C_u\|x\|. \end{equation}
We consider now positive operators on $(X, K,\|\cdot\|)$. By $L(X)$ we denote the vector space of all linear continuous operators on $X$, equipped with the usual norm and the order. For $A\in L(X)$ we write $A \geq 0$ if $A(K)\subset K$. Such operators we will call {\it positive}. The simple properties of such operators are gathered in the
\begin{lem}\label{l1}
Let $(X,K,\|\cdot\|)$ be an ordered normed real vector space, and let $A$ be a positive linear continuous operator on $X$. Assume there exists a vector $u\in \mathrm{int}(K)$ such that $A(u)=u$. Let $\mathcal{F}_u=\{f\in K'\colon f(u)=1\}$ be the base of $K'$ corresponding to the vector $u$.
Then the following statements hold. \begin{itemize} \item[(i)] The adjoint operator $A^*$ is positive. \item[(ii)] $A^*(\mathcal{F}_u)\subset\mathcal{F}_u$.
\item[(iii)] $\|A(x)\|_u\leq \|x\|_u$ for each $x\in X$. \item[(iv)] $\{A^n\}_{n\in \mathbb{N}}$ is a norm bounded sequence in $L(X)$. \item[(v)] $A(x)\in \mathrm{int}(K)$ for each $x\in\mathrm{int}(K)$. \end{itemize} \end{lem} \textit{Proof.\ }
(i) For an arbitrary functional $f\in K'$ we have $A^*(f)(x)=f(A(x))\geq 0$ for any $x\in K$. Thus $A^*(f)$ belongs to $K'$.
(ii) If $f\in \mathcal{F}_u$ then by (i) $A^*(f)\in K'$. Since $A(u)=u$ and $A^*(f)(u)=f(A(u))=f(u)=1$ we obtain $A^*(f)\in \mathcal{F}_u$.
(iii) From (ii) follows that for each $x\in X$ one has
\[ \|A(x)\|_u=\sup_{f\in \mathcal{F}_u}|f(A(x))|= \sup_{f\in \mathcal{F}_u}|(A^*(f))(x)|
\leq \sup_{f\in \mathcal{F}_u}|f(x)|= \|x\|_u . \]
(iv) It is convenient to use the $u$-norm in $X$ which, as was shown in property g), is equivalent to the norm $\|\cdot\|$. Use now the inequality (\ref{f05}) and (iii) then \[
\|A^n(x)\| \leq C_u\|A^n(x)\|_u \leq C_u\|A^{n-1}(x)\|_u
\leq \ldots \leq C_u\|x\|_u \leq C_u^2 \|x\| \quad \mbox{for all}\quad x\in X. \]
Therefore, $\|A^n\|\leq C_u^2$ for all $n\in \mathbb{N}$.
(v) If $x\in \mathrm{int}(K)$ then
in view of (ii) one has $f(A(x))=(A^*(f))(x)>0$ for each $f\in \mathcal{F}_u$. By property e) it follows $A(x)\in \mathrm{int}(K)$.
\quad\rule{1.5ex}{1.5ex}
We need also the following auxiliary result concerning positive operators \begin{lem}\label{l2} Let $A$ be a positive operator which satisfies the following conditions
\begin{enumerate} \item[1)] $A(\mathrm{int}(K))\subset \mathrm{int}(K)$; \item[2)] for each vector $x\in K, \ x\neq 0$ there exists a natural $n_x$ such that $A^{n_x}(x)\in \mathrm{int}(K)$. \end{enumerate} Then for any compact set $R\subset K$ such that $0\notin R$ there is a natural number $p$ with $A^p(R)\subset \mathrm{int}(K)$. \end{lem}
\textit{Proof.\ } The condition 1) implies \begin{equation}\label{f21} (A^n)^{-1}(\mathrm{int}(K))\subset(A^{n+j})^{-1}(\mathrm{int}(K)) \end{equation} for all $n,j\in\mathbb N$.
In view of condition 2) for each vector $z\in R$ there exists a power $n_z$ such that $A^{n_z}(z)\in \mathrm{int}(K)$. Therefore the sets $(A^{n_z})^{-1}(\mathrm{int}(K))$ form an open covering of $R$. Consider any finite subcovering \[ (A^{n_{z_1}})^{-1}(\mathrm{int}(K)), \ (A^{n_{z_2}})^{-1}(\mathrm{int}(K)),
\ldots,(A^{n_{z_s}})^{-1}(\mathrm{int}(K)) \] and let be $p=\max\{n_{z_1}, n_{z_2},\ldots,n_{z_s}\}$. Then taking into consideration inclusion (\ref{f21}) the family consisting of $s$ exemplars of $(A^p)^{-1}(\mathrm{int}(K))$ also covers the set $R$. Actually we have $R\subset (A^p)^{-1}(\mathrm{int}(K))$. This shows that $A^p(z)\in \mathrm{int}(K)$ for each $z\in R$.
\quad\rule{1.5ex}{1.5ex}
\textbf{Remark 1} The condition 1) of Lemma \ref{l2} is fulfilled, if the operator $A$ is positive and possesses a fixed point $u$ such that $u\in \mathrm{int}(K)$ (s. Lemma \ref{l1}(v)).
\section{Main results}
We need the following notations (\cite{CleHei}). \begin{defi}\label{D1} {\rm Let $X$ be a (real) Banach space. A family $\{S_t\}_{t\geq 0}$ of operators in $L(X)$ is called a} one-parameter semigroup of bounded linear operators {\rm if $S_0=I, \; S_{s+t}=S_s S_t \; (s,t\geq 0)$, where $I$ denotes the identity operator on $X$.
If, in addition, the function $t\mapsto S_t$ is continuous with respect to the strong operator topology, i.e. the function $t\mapsto S_t(x)$ is norm-continuous on $[0, +\infty)$ for each $x\in X$, then $\{S_t\}_{t\geq 0}$ is called a} {\it strongly continuous semigroup}, {\rm or also a} $C_0$-semigroup.
{\rm A $C_0$-semigroup $\{S_t\}_{t\geq 0}$ in an ordered Banach space is called} positive, {\rm if each operator $S_t$ is positive ($t\geq 0$).} \end{defi}
The results we are going to prove are valid in real ordered Banach spaces $(X,K,\|\cdot\|)$, briefly denoted by $X$, where $K\subset X$ is a closed normal cone which satisfies the condition $\mathrm{int}(K)\neq \varnothing$.
\begin{theo}\label{t1}
Let $(X,K,\|\cdot\|)$ be an ordered Banach space and $\{S_t\}_{t\geq 0}$ a positive $C_0$-semigroup of operators in $L(X)$ which satisfies the following conditions \begin{itemize} \item[1)] for each vector $x\in K,\,x\neq 0$ there exists a
number $t_x\in [0,+\infty)$ such that $S_{t_x}(x)\in \mathrm{int}(K)$; \item[2)] for each vector $x\in K$ its trajectory $\{S_t(x)\}_{t\geq 0}$ is relatively compact; \end{itemize} Then the family $\{S_t\}_{t\geq 0}$ converges pointwise for $t \to +\infty$ to some operator $A_0$.
If that operator $A_0$ is not the zero one, then there exist a vector $u\in \mathrm{int}(K)$ and a functional $f_0\in K', f_0\neq 0$ such that \begin{itemize} \item[(i)] $S_t(u)=u$, $S^*_t(f_0)=f_0$ for any $t\geq 0$ and moreover $f_0(x)>0$ if $x\in K,\,x\neq 0$; \item[(ii)] $A_0=f_0\otimes u$ \item[(iii)] for each $f\in X'$ one has $S^*_t(f)\mathop{\longrightarrow}\limits_{t\to\infty} A_0^*(f)$
with respect to the weak*-topology $\sigma(X',X)$; \item[(iv)] $\lambda=1$ is a simple eigenvalue of the operators $S_t$ and $S^*_t$ for all $t>0$. \end{itemize} \end{theo}
\textit{Proof.\ } I. First of all we show that all operators $S_t$ for $t\geq 0$ have a common fixed point in $\mathrm{int}(K)$, provided the family $\{S_t\}_{t\geq 0}$ does not pointwise converge to the zero operator ${\mathbf 0}$ for $t\to +\infty$. According to the principle of uniform boundedness the condition 2) implies that the norms of all operators of the semigroup
$\{S_t\}_{t\geq 0}$ are bounded, i.e. there exists a constant $C$ such that $\|S_t\|\leq C$ for all $t\in [0,\infty)$. If $\{S_t\}_{t\geq 0}$ does not converge to the zero operator, then \begin{equation}\label{fh2}
\limsup\limits_{t\to+\infty} \, \|S_t(x_0)\|>0. \end{equation} holds for some vector $x_0\in X$. Since $K$ is generating we may assume
$x_0\in K$. Now we show that $\inf\limits_{t\geq 0}\|S_t(x_0)\|>0$. Indeed, if the contrary is assumed we find an increasing sequence $\{s_k\}_{k=1}^\infty \subset [0,+\infty), \; s_k\to +\infty$ such that
$\|S_{s_k}(x_0)\|\mathop{\longrightarrow}\limits_{k\to\infty} 0$. Then an arbitrary sequence $\{t_n\}_{n=1}^\infty$ with $t_n\to +\infty$ satisfies $s_{k_n}\leq t_n\leq s_{k_{n+1}}$, where $s_{k_n}\mathop{\longrightarrow}\limits_{n\to\infty}+\infty$ and therefore, \[
\|S_{t_n}(x_0)\|=\|S_{t_n-s_{k_n}}S_{s_{k_n}}(x_0)\| \leq
C\|S_{s_{k_n}}(x_0)\| \mathop{\longrightarrow}\limits_{n\to\infty} 0. \] This contradicts the inequality (\ref{fh2}).
Consequently, the points of the trajectory $\{S_t(x_0)\}_{t\geq 0}$ belong to $K$ and their norms are separated from zero. Denote by $Q_0$ the closure of that trajectory. Then $Q_0$ is compact and $S_t(Q_0)\subset Q_0$ for any $t\geq 0$. Denote the closure of the convex hull of the set $Q_0$ by $Q_1$. We show now that the zero-vector also does not belong to $Q_1$. From $Q_0\subset K$ and $0\notin Q_0$ we find (property d)) \[
Q_0\subset\bigcup_{f\in K'}\{x\in X \colon f(x)>0\}. \] By selecting a finite covering we have \[
Q_0\subset\bigcup_{k=1}^N\{x\in X : f_k(x)>0\}, \] where $f_k\in K'$. Put $g=\sum_{k=1}^Nf_k$. Then, obviously, $Q_0\subset \{x\in X : g(x)>0\}$ and therefore, there is some $\sigma>0$ such that the set $Q_0$ is contained in the closed convex set $K\cap\{x\in X \colon g(x)\ge\sigma\}$, which in turn does not contain zero. Now it is clear that $0$ does not belong to the closed convex hull $Q_1$ either. Since $S_t(Q_1)\subset Q_1$
for any $t\ge0$ and since the operators of the family $\{S_t\}_{t\geq 0}$ commute in pairs we are able to apply the Markov-Kakutani Theorem to that family of operators on $Q_1$ (s.\cite{Edw}, chapt.III.3.2) and to conclude that they possess a common fixed point, say $u$, in $Q_1$. It is obvious that $u\neq 0$ and in view of condition 1) $u\in \mathrm{int}(K)$. This completes the first step of the proof and allows us to apply the Lemmata \ref{l1} and \ref{l2} to each of the operators $S_t$. In particular, from Lemma \ref{l1}(v) we get
$S_t(\mathrm{int}(K))\subset \mathrm{int}(K)$ for any $t>0$, and from condition 1) there follows
that
\[ S_s(x)=S_{t_x+t}(x)=S_t(S_{t_x}(x))\in \mathrm{int} (K)\quad \mbox{for any } s\geq
t_x ,\]
i.e. each trajectory starting at $x\in K$ will stay eventually in $\mathrm{int}(K)$.
II. We prove now that for each vector $x\in K$ there exists the limit $\lim\limits_{t\to\infty}S_t(x)$ with respect to the norm.
We introduce the sets \[
\mathcal{F}=\{ f\in K'\colon f(u)=1 \} \] and $\mathcal{S}_+=\{ x\in K \colon \max_{f\in\mathcal{F}}f(x)=1 \}$. As was mentioned above (property f)) $\mathcal{F}$ is a convex weak$^*$-compact set which is a base of the cone $K'$. Both sets $\mathcal F$ and $\mathcal{S}_+$ are closed with respect to the norm, and $u\in \mathcal{S}_+, \ 0\notin \mathcal{S}_+$. Observe that $S_t(u)=u$ for any $t>0$ implies $S^*_t(\mathcal F)\subset\mathcal F$ (Lemma \ref{l1}(ii)).
For any vector $x \in X$ and $t\in[0,+\infty)$ we define the numbers \[
M_x(t)=\sup_{f\in \mathcal{F}}f(S_t(x)) \quad \mbox{and} \quad
m_x(t)=\inf_{f\in \mathcal{F}}f(S_t(x)) . \]
The equation $f(S_{s+t}(x))=\left(S_s^*(f)\right)(S_t(x))$ and the inclusion $S_s^*(\mathcal{F})\subset \mathcal{F}$ imply \begin{equation}\label{f22} m_x(t)\leq m_x(t+s)\leq M_x(t+s)\leq M_x(t)\quad \mbox{for all}\quad s,t\geq 0. \end{equation}
Therefore the functions $M_x(t)$ and $m_x(t)$ are monotone and possess finite limits at infinity.
Moreover, since for any $f\in \mathcal{F}$ the inequalities $m_x(t)\leq f(S_t(x))\leq M_x(t)$ can be written as \[ f(m_x(t)\ u)\leq f(S_t(x))\leq f(M_x(t)\ u), \] the inequality \begin{equation}\label{fh3}
m_x(t)\ u \leq S_t(x)\leq M_x(t)\ u \end{equation} holds for each $x\in K$ (see property e)).
The main aspect of the proof is to establish the relation \begin{equation}\label{f22b} \delta_x(t)\equiv M_x(t)-m_x(t)\mathop{\longrightarrow}\limits_{t\to+\infty} 0 \end{equation} for each $x\in X$.
In view of (\ref{f22}) it suffices to prove that some sequence $\{\delta_x(kt_0)\}_{k\in \mathbb{N}}$ for $t_0>0$ converges to $0$. Assume by way of contradiction that there is some $x_0\in X$ such that $\delta_{x_0}(t)\not\rightarrow 0$ for $t\to+\infty$. Due to the monotonicity of $\delta_{x_0}(t)$ this means \begin{equation}\label{fh4} \delta_{x_0}(t) = M_{x_0}(t) - m_{x_0}(t)\ge\varepsilon \end{equation} for some $\varepsilon>0$ and all $t\in [0,\infty)$.
Consider now the set \[ R_0=\left\{\frac{S_t(x_0)-m_{x_0}(t)u}{\delta_{x_0}(t)}, \
\frac{M_{x_0}(t)u-S_t(x_0)}{\delta_{x_0}(t)} \colon \quad t\in [0,\infty) \right\}. \] In view of condition 2), the inequality (\ref{fh4}) and the boundedness of the functions $M_x(t)$ and $m_x(t)$, the set $R_0$ turns out to be relatively compact. Moreover, it is easy to see that $R_0\subset \mathcal{S}_+$. Therefore the closure $R$ of $R_0$ is also contained in $\mathcal{S}_+$ and, in particular, $0$ does not belong to $R$.
According to the Lemma \ref{l2} and the Remark 1 there is a natural number $p$ such that the compact set $Q=A^p(R)$ belongs to $\mathrm{int}(K)$, where $A=S_1$.
The bilinear form $\langle z,f\rangle=f(z)$ is strongly positive on the compact set $Q\times \mathcal{F}$, where $Q$ is considered with the norm topology (induced from $X$)
and $\mathcal{F}$ with the weak*-topology (s. property f)).
The inequalities
\begin{eqnarray*}
|\langle z,f\rangle -\langle y,g\rangle| &\leq &|\langle z-y,f\rangle| +
|\langle y,f-g\rangle| \\
& \leq & \|z-y\|\ \|f\| + |\langle y,f-g\rangle| \end{eqnarray*} show that the bilinear form is continuous on the set $Q\times \mathcal{F}$. Therefore there is some positive number $\beta$ such that $f(z)>\beta$ for all $z\in Q$ and $f\in\mathcal{F}$.
We shall assume $\beta<\frac{1}{2}$.
The vectors (remember that $A=S_1$) \[
A^p\left(\frac{A^n(x_0)-m_{x_0}(n)u}{M_{x_0}(n)-m_{x_0}(n)}\right)\quad \mbox{and} \quad A^p\left(\frac{M_{x_0}(n)u-A^n(x_0)}{M_{x_0}(n)-m_{x_0}(n)}\right), \] belong to $Q$ and, consequently, for each $f\in \mathcal F$ we have \[
f\left(A^p\left(\frac{A^n(x_0)-m_{x_0}(n)u} {M_{x_0}(n)-m_{x_0}(n)}\right)\right)\ge\beta \quad \mbox{and} \quad f\left(A^p\left(\frac{M_{x_0}(n)u-A^n(x_0)}{M_{x_0}(n)-m_{x_0}(n)}\right)\right) \ge \beta. \] This together with $\beta=f(\beta u)$ implies by e) \begin{eqnarray*} A^{n+p}(x_0) & \geq & m_{x_0}(n)\,u+\beta(M_{x_0}(n)-m_{x_0}(n))\,u \quad \text{ and } \\ A^{n+p}(x_0) & \leq & M_{x_0}(n)\,u-\beta(M_{x_0}(n)-m_{x_0}(n))\,u \quad \mbox{for any}\quad n\in\mathbb N. \end{eqnarray*}
Put now $n=kp$. Then \[
M_{x_0}((k+1)p)-m_{x_0}((k+1)p)\le (1-2\beta)(M_{x_0}(kp)-m_{x_0}(kp)), \] and therefore \[
M_{x_0}(kp)-m_{x_0}(kp)\le (1-2\beta)^k(M_{x_0}(0)-m_{x_0}(0))\mathop{\longrightarrow}\limits_{k\to\infty} 0. \] However this contradicts (\ref{fh4}). So the relation (\ref{f22b}), i.e. $M_x(t)-m_x(t)\mathop{\longrightarrow}\limits_{t\to\infty} 0$, is proved.
III. In order to complete the final part of the proof we denote for each $x\in X$ \[ f_0(x)= \lim_{t\to\infty}m_x(t).\] From the inequalities \begin{equation}\label{f22d} m_x(t)\, u \leq f_0(x)\, u \leq M_x(t)\, u \end{equation} and (\ref{fh3}) by means of passing to the limit we obtain for each $f\in \mathcal F$ \[
f_0(x)=\lim_{t\to\infty}f(S_t(x)) \] and so $f_0$ is an additive, homogeneous and nonnegative functional on $X$ such that $f_0(u)=1$.
The inequalities (\ref{fh3}) and (\ref{f22d}) further imply for $x\in K$ \begin{equation}\label{fh8} -\left(M_x(t)-m_x(t))\right)\,u\le S_t(x)-f_0(x)\,u\le \left(M_x(t)-m_x(t))\right)\,u.
\end{equation}
Define now the rank-one operator $A_0$ by $A_0=f_0\otimes u$, i.e. $A_0(x)=f_0(x)\,u$ for $x\in X$. From (\ref{fh8}) and (\ref{f05}) it follows that \[
\|S_t(x)-A_0(x) \| \le C_u \|S_t(x)-A_0(x) \|_u \le C_u\left(M_x(t)-m_x(t)\right)\mathop{\longrightarrow}\limits_{t\to\infty} 0 . \]
This proves the statement (ii) of the theorem.
We finalize the proof of the statements (i) and (iii). Since $S_{(n+1)t}(x)=S_{nt}(S_t(x))$ for each $x\in X, \ t>0$ and $n\in \mathbb N$, after passing to the limits as $n\to\infty$ we obtain $f_0(x)\,u=f_0(S_t(x))\,u$ which shows that $f_0(x)=\big(S^*_t(f_0)\big)(x)$ for each $x\in X$, i.e. $f_0=S^*_t(f_0)$ for $t>0$. We show that $f_0(x)>0$ if $x\in K, x\neq 0$. For such $x$ there is some $t_x$
with $S_{t_x}(x)\in \mathrm{int}(K)$. In view of property b) and the weak$^*$-compactness of $\mathcal{F}$ we get $m_{x}(t_x)>0$. Then $f_0(x)=\lim\limits_{t\to\infty}m_x(t)\geq m_x(t_x)>0$.
From the already proved statement (ii) it follows that for each functional $f\in\mathcal{F}$ the family $\{S^*_t(f)\}$ converges to $f_0$ with respect to the weak$^*$-topology, i.e. \[ S_t^*(f)(x)=f(S_t(x))\mathop{\longrightarrow}\limits_{t\to\infty} f_0(x) \quad \mbox{for each}\quad x\in X. \] It remains to notice that due to the facts that any functional $f\in X'$ is representable as a difference of two nonnegative functionals (s. Theorem \ref{t0}) and that $\mathcal{F}$ is a base of the dual cone $K'$, the last relation implies \[ S_t^*(f)\mathop{\longrightarrow}\limits_{t\to\infty} f(u)f_0\quad\mbox{for each}\quad f\in X' \] with respect to the weak$^*$-topology. Now (iii) is proved.
It remains to prove (iv), i.e. that $\lambda=1$ is a simple eigenvalue of the operators $S_t$ and $S^*_t$ for $t>0$. Indeed, if $u'$ is another fixed point of $S_t$, then $S_{nt}(u')=u'$ for any $n\in\mathbb{N}$. Since $S_{nt}(u')\mathop{\longrightarrow}\limits_{n\to\infty}f_0(u') u$ one immediately has $u'=f_0(u')u$. That means the eigenspace of the operator $S_t$, corresponding to the eigenvalue $\lambda=1$, is one-dimensional. A similar argument shows the statement for the adjoint operator.
\quad\rule{1.5ex}{1.5ex}
\begin{corr}\label{C1} For the operators $S_t$ and $A_0$ for each $t\in [0,\infty)$ and $n\in\mathbb N$ there hold the following relations \begin{itemize} \item[a)] $A_0^n=A_0$; \item[b)] $S_tA_0=A_0S_t=A_0$; \item[c)] $(S_t-A_0)^n=S_{nt}-A_0$. \end{itemize} \end{corr} \textit{Proof.\ } a) - c) are obtained by a simple calculation which we will omit.
\quad\rule{1.5ex}{1.5ex}
If the condition 2) of the theorem is replaced by a slightly stronger one, then the operators $S_t^*$, for $t\to \infty$, converge to the operator $A_0^*$ not only in the weak operator topology but also pointwise. The new condition is well known in the theory of Markov chains (see, for example, \cite{nev}, Lemma V.3.1). We come now to one of our main results.
\begin{theo}\label{t1a}
Let $(X,K,\|\cdot\|)$ be an ordered Banach space and $\{S_t\}_{t\geq 0}$ a positive $C_0$-semigroup of operators in $L(X)$ which satisfies the following conditions \begin{itemize} \item[1)] for each vector $x\in K,\,x\neq 0$ there exists a
number $t_x\in [0,\infty)$ such that $S_{t_x}(x)\in \mathrm{int}(K)$; \item[2)] there exist a number $\tau>0$ and a compact operator $V$ such that
$\|S_{\tau}-V\|<1$;
\item[3)] $\sup\limits_{t\ge 0}\|S_t\|<\infty$. \end{itemize} Then all statements of Theorem \ref{t1} are valid. Moreover, \begin{itemize} \item[(v)] the operators $S_t^*$, for $t\to\infty$, converge pointwise to the operator $A_0^*$. \end{itemize} \end{theo}
\textit{Proof.\ } First of all we prove that the condition 2) of Theorem \ref{t1} is satisfied. If $x\in X$ then it suffices to show that for an arbitrary fixed $\varepsilon >0$ the trajectory $\{S_t(x)\}_{t\ge 0}$
possesses a relatively compact $\varepsilon$-net. Put $C=\sup\limits_{t\geq 0}\|S_t\|$,
$W=S_{\tau}-V$ and $q=\|W\|$. Obviously $C<\infty$ and \[ S_{n\tau}=S_{\tau}^n=W^n+V_n,\ \]
where $V_n$ is some compact operator and $\|W^n\|\leq q^n\mathop{\longrightarrow}\limits_{n\to \infty} 0$.
Fix a sufficiently large $N$ such that $q^N<\varepsilon$. Notice that the trajectory
$\{S_t(x)\}_{t\geq 0}$ is contained in the closed ball $B_x=B(0;C\|x\|)$. If $t=N\tau+t', t'>0$ then $S_{t'}(x)\in B_x$, and therefore \[
\|S_t(x)-V_N(S_{t'}(x))\| =\|(S^N_{\tau}-V_N)(S_{t'}(x))\|\leq q^NC\, \|x\|<C\varepsilon \|x\|. \] Now it is immediate that the relatively compact set \[ \{S_t(x)\colon 0\leq t\leq N\tau\}\cup V_N(B_x) \]
is a $C\varepsilon\|x\|$-net for the trajectory $\{S_t(x)\}_{t\geq 0}$, and so the condition 2) of the Theorem \ref{t1} holds.
We prove now the statement ($v$). Assume first $A_0\neq 0$. In this case it suffices to show
$\|S_t^*(f)-f_0\|\mathop{\longrightarrow}\limits_{t\to\infty} 0$ for each $f\in\mathcal{F}$. Let $\varepsilon$ be an arbitrary positive number and $N, t', V_N$ be the same as above. Let $H=V_N(B(0;1))$. In view of the equalities \[ f(S_{N\tau}(x))=f(V_N(x))+f(W^N(x)), \quad f_0(x)=(S_{N\tau}^*(f_0))(x)=f_0(V_N(x))+f_0(W^N(x)) \] we obtain for $t=N\tau+t'$ the estimate \begin{eqnarray*}
\|S_t^*(f)-f_0\|& \leq & \sup_{\|x\|\leq 1}|S_{t'}^*(f)(V_N(x))-f_0(V_N(x))| \\
& + & \sup_{\|x\|\leq 1}|S_{t'}^*(f)(W^N(x))|+ \sup_{\|x\|\leq 1}|f_0(W^N(x))| \\
& \leq & \sup_{y\in H}|S_{t'}^*(f)(y)-f_0(y)|+(C\|f\|+\|f_0\|)q^N \\
& < & \sup_{y\in H}|S_{t'}^*(f)(y)-f_0(y)|+(C\|f\|+\|f_0\|)\varepsilon. \end{eqnarray*} This estimate holds for any $t'>0$. Because of $S_{t'}^*(f)\mathop{\longrightarrow}\limits_{t'\to\infty} f_0$ with respect to the weak$^*$ topology and the relative compactness of the set $H$ the supremum at the right side of the inequality converges to $0$ if $t'\to\infty$ by the theorem on uniform convergence on compact sets. Consequently, for sufficiently large $t'$ we obtain
\[ \|S_t^*(f)-f_0\|<\varepsilon +(C\|f\|+\|f_0\|)\varepsilon, \] what has to be shown.
If $A_0=0$ then for each $f\in X'$ the given proof is applicable if $f_0=0$ is assumed.
\quad\rule{1.5ex}{1.5ex}
\begin{corr}\label{C2} Under the conditions of Theorem \ref{t1a} the operator $T:=T(t)=I-S_t+A_0$ is invertible for any $t>0$ and \begin{equation}\label{f30} T^{-1}=I + \sum_{n=1}^\infty(S_{nt}-A_0), \end{equation} where the series converges pointwise. \end{corr} \textit{Proof.\ } We show first that $\ker(T)=\{0\}$ for all $t>0$. (s. \cite{Edw}, propositions 9.10.2, 9.10.5).
Assume $A_0\neq {\bf 0}$. If $T(x_0)=0$ then apply the operator $A_0$ to the equation $-A_0(x_0)=x_0-S_t(x_0)$ and by taking into consideration the statements a) and b) of Corollary \ref{C1} we see that $A_0(x_0)=f_0(x_0)u=0$. Therefore $f_0(x_0)=0$, and due to statement (i) of the theorem we get $x_0=0$.
If $A_0={\bf 0}$, then the operator $S_t$ can not have any nonzero fixed point $x_0$, since in the opposite case there would be $S_{nt}(x_0)=x_0\not\to A_0(x_0)=0$. So, \[ 0=T(x_0)=x_0-S_t(x_0)+A_0(x_0)=x_0-S_t(x_0) \] means $S_t(x_0)=x_0$ and implies that the kernel of the operator $T$ is trivial for any $t>0$.
The proof of invertibility of the operator $T$ for all $t>0$ now follows.
By keeping the notation of the theorem we put $W=S_{\tau}-V, \ q=\|W\|$. Put also $U\equiv S_t-A_0$. Notice that according to the Corollary \ref{C1} and Theorem \ref{t1} the sequence $U^n=S_{nt}-A_0$ converges to $0$ pointwise.
We fix now some $m\in\mathbb{N}$ such that $2C\,q^m<1$, where
$C=\sup_{t\ge 0}\|S_t\|$, and prove that $T$ is invertible for $t>m\tau$. Remember that $S_{m\tau}=W^m+V_m$, where $V_m$ is some compact operator. If $t=m\tau+\sigma, \ \sigma\ge 0,$ then $T$ (at the moment $t$) is equal to \[ T=I-S_{m\tau}(S_\sigma-A_0)=I-W^m(S_\sigma-A_0)-V_m(S_\sigma-A_0). \] Notice that the operator $R=I-W^m(S_\sigma-A_0)$ is invertible because of
$\|W^m(S_\sigma-A_0)\|\le q^m(C+\|A_0\|)\le 2C\,q^m<1$. Hence \[ T=R\big(I-R^{-1}V_m(S_\sigma-A_0)\big). \] At the same time the operator $I-R^{-1}V_m(S_\sigma-A_0)$ is invertible since, in view of $\ker{T}=\{0\}$, its kernel is trivial, and the operator $R^{-1}V_m(S_\sigma-A_0)$ is compact together with $V_m$. This establishes the invertibility of the operator $T$ in this case.
In the case of $0<t<m\tau$, we use the identity \begin{equation}\label{f31} I-U^n=T(I+U+\dots +U^{n-1}). \end{equation} If $nt>m\tau$ then by what has been shown above the operator $I-U^n=I-S_{nt}+A_0=T(nt)$ is invertible. It follows from (\ref{f31}) that also the operator $T=T(t)$ is invertible (since due to the invertibility of the operator $I-U^n$ it shall be injective and surjective).
From the identity (\ref{f31}) and the invertibility of the operator $T$ it follows that \begin{equation}\label{f32} T^{-1}-T^{-1}U^n=I+U+\dots +U^{n-1}. \end{equation} Since $U^n\to 0$ pointwise this proves that the decomposition (\ref{f30}) takes place.
\quad\rule{1.5ex}{1.5ex}
Our next result is \begin{theo}\label{t2}
Let $(X,K,\|\cdot\|)$ be an ordered Banach space and $\{S_t\}_{t\geq 0}$ a positive $C_0$-semigroup of operators in $L(X)$ which satisfies the following conditions \begin{itemize} \item[1)] for each vector $x\in K,\,x\neq 0$ there exists a
number $t_x\in [0,\infty)$ such that $S_{t_x}(x)\in \mathrm{int}(K)$; \item[2)] for some $\tau>0$ the operator $S_\tau$ is compact;
\item[3)] $\sup\limits_{t\ge 0}\|S_t\|<\infty$. \end{itemize} Then the statements
of Theorem \ref{t1} are valid. Moreover, \begin{itemize}
\item[(v)] $\|S_t-A_0\|\mathop{\longrightarrow}\limits_{t\to\infty}0$, i.e. the operators $S_t$ (and, of course, the adjoint operators) converge to $A_0$ (to $A_0^*$) with respect to the norm. \end{itemize} \end{theo}
\textit{Proof.\ } Since the condition 2) of this theorem is stronger than the corresponding condition 2) of the previous theorem and the other ones coincide, it is left to prove only statement (v).
Due to b) of Corollary \ref{C1} for $t>\tau$ one has \[
S_t-A_0=(S_{t-\tau}-A_0)\,S_\tau. \] Hence, if $Q$ denotes the closure of the image of the unit ball under $S_\tau$, then \[
\|S_t-A_0\|=\sup_{\|x\|\le 1}\|(S_{t-\tau}-A_0) S_\tau(x))\|=
\sup_{y\in Q}\|(S_{t-\tau}-A_0)(y)\|, \] Since $S_{t-\tau}\to A_0$ pointwise as $t\to \infty$ and $Q$ is compact one has (s. \cite{Bour1},chapt.III \S3, prop.5) \[
\sup_{y\in Q}\|(S_{t-\tau}-A_0)(y)\| \ \mathop{\longrightarrow}\limits_{t\to+\infty} \ 0. \] This completes our proof.
\quad\rule{1.5ex}{1.5ex}
\begin{corr}\label{C3} Under the conditions of Theorem \ref{t2} the operator $T:=T(t)=I-S_t+A_0$ is invertible for $t>0$ and \begin{equation}\label{f23}
T^{-1}(t) = I+\sum_{n=1}^\infty (S_{nt}-A_0). \end{equation} where the series converges with respect to the norm. \end{corr}
\textit{Proof.\ } The invertibility of the operator $T$ has been proved in Corollary \ref{C2}.
Therefore it remains to pass to the limit in the identity (\ref{f32}) by taking into account that $\|U^n\|=\|S_{nt}-A_0\|\mathop{\longrightarrow}\limits_{n\to\infty} 0$, as was shown in the proof of Theorem \ref{t2}.
\quad\rule{1.5ex}{1.5ex}
\textbf{Remark 2} By means of the functions $m_x(t), M_x(t)$, which have been introduced during the proof of Theorem \ref{t1} an estimate of the value
$\|S_t-A_0\|$ might be obtained.
In order to show this we remember some constants (s. properties a), g)):
$\gamma$ - the constant of nonflatness of the cone $K$ and $C_u$ a constant which satisfies $\|x\|\le C_u\|x\|_u$ for each $x\in X$ (s. inequality (\ref{f05}). Finally denote by $Q_+$ the closure of the set $S_\tau(B_+)$, where $B_+$ is the intersection of the unit ball $B(0;1)$ with the cone $K$. Notice that for each $y\in K$ and $f\in\mathcal F$ one has \begin{equation}\label{f24} m_y(t)\le f(S_t(y))\le M_y(t) \quad \mbox{and} \quad m_y(t)\le f(A_0(y))\le M_y(t). \end{equation} Then any vector $x\in B(0;1)$ can be represented as $x=\gamma\,(x'-x'')$, where $x',x''\in B_+$. Therefore $t>\tau$ implies \begin{eqnarray*}
\|S_t-A_0\| & \leq & \sup\limits_{x\in B(0;1)}\|(S_{t-\tau}-A_0)(S_\tau(x))\| \\
& \leq & 2\gamma\,\sup\limits_{x\in B_+}\|(S_{t-\tau}-A_0)(S_\tau)(x)\| \\
& = & 2\gamma\,\sup\limits_{y\in Q_+}\|S_{t-\tau}(y)-A_0(y)\| \\
& \leq & 2\gamma\,C_u\sup\limits_{y\in Q_+}\sup\limits_{f\in\mathcal F}|f(S_{t-\tau}(y))-f(A_0(y))|. \end{eqnarray*} In view of the inequality (\ref{f24}) we get for each $f\in\mathcal F$ \[
|f(S_{t-\tau}(y))-f(A_0(y))|\le M_y(t-\tau)-m_y(t-\tau). \] Together with the previous inequality this yields the required estimate \[
\|S_t-A_0\|\le 2\gamma\,C_u\sup_{y\in Q_+}(M_y(t-\tau)-m_y(t-\tau)). \] One easy proves that the functions $y\mapsto M_y(t), \ y\mapsto m_y(t)$ are continuous, and so Dini's theorem implies that the difference
$M_y(t-\tau)-m_y(t-\tau)$, which for $t\to+\infty$ monotonically converges to
zero, uniformly decreases to $0$ on the compact set $Q_+$. In this way we
get another proof of statement (v) of Theorem \ref{t2}.
It is easy to see that the statements of our Theorems \ref{t1} and \ref{t2} remain to be valid also for a "discrete" semigroup of operators, i.e. for the sequence of iterates $\{A^n\}_{n\in \mathbb{N}}$ of some positive operator $A$. The proofs of the Theorems \ref{t1} and \ref{t2} might be adapted to that case, even with some obvious simplifications. Therefore we restrict ourselves with only the formulations.
\begin{theo}\label{t3}
Let $(X,K,\|\cdot\|)$ be an ordered Banach space and $A\in L(X)$ a positive operator which satisfies the following conditions \begin{itemize} \item[1)] for each vector $x\in K,\,x\neq 0$ there exists a natural number $n_x$ such that $A^{n_x}(x)\in \mathrm{int}(K)$; \item[2)] for each vector $x\in K$ its trajectory $\{A^n(x)\}_{n\in \mathbb N}$ is relatively compact. \end{itemize} Then the sequence $\{A^n\}_{n\ge 0}$ pointwise converges to some operator $A_0$.
If this operator is not zero, then there exist a vector $u\in \mathrm{int}(K)$ and a functional $f_0\in K'$ such that \begin{itemize} \item[(i)] $A(u)=u$, $A^*(f_0)=f_0, \ f_0(u)=1$ and moreover $f_0(x)>0$ if $x\in K,\,x\neq 0$; \item[(ii)] $A_0=f_0\otimes u$; \item[(iii)] for each $f\in X'$ one has $\left(A^*\right)^n(f)\mathop{\longrightarrow}\limits_{n\to+\infty} A_0^*(f)$ for each $f\in X'$ with respect to the weak*-topology $\sigma(X',X)$; \item[(iv)] $\lambda=1$ is a simple eigenvalue of the operators $A$ and $A^*$. \end{itemize} \end{theo}
\begin{theo}\label{t4}
Let $(X,K,\|\cdot\|)$ an ordered normed space and $A\in L(X)$ a positive operator which satisfies the following conditions \begin{itemize} \item[1)] for each vector $x\in K,\,x\neq 0$ there exists a natural number $n_x$ such that $A^{n_x}(x)\in \mathrm{int}(K)$; \item[2)] some power of $A$ is a compact operator;
\item[3)] $\sup\limits_{n\in \mathbb N}\|A^n\|<+\infty$. \end{itemize} Then the statements of the Theorem \ref{t3} remain true. Moreover, \begin{itemize}
\item[(v)] $\|A^n-A_0\|\mathop{\longrightarrow}\limits_{n\to\infty}0$, i.e. the operators $A^n$ (and, of course, the adjoint operators $(A^n)^*$) converge to $A_0$ (to $A_0^*$) with respect to the norm. \end{itemize} \end{theo}
We remark another assertion, which turns out to be a special case of Theorem \ref{t4}. \begin{theo}\label{t5}
Let $(X,K,\|\cdot\|)$ an ordered normed space and $A\in L(X)$ a positive operator which satisfies the following conditions \begin{itemize} \item[1)] some power of the operator $A$ is strongly positive, i.e. for some $p\in\mathbb{N}$ one has $A^p(K\setminus\{0\})\subset \mathrm{int}(K)$; \item[2)] some power of $A$ is a compact operator;
\item[3)] $\sup\limits_{n\in \mathbb N}\|A^n\|<+\infty$. \end{itemize} Then all statements of the Theorem \ref{t4} remain true. \end{theo}
This theorem is indeed a special case of Theorem \ref{t4}, because its condition 1) is stronger than condition 1) of Theorem \ref{t4} and the other assumptions are identical.
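In the finite-dimensional case ($X=\mathbb{R}^n$, $K$ the cone of componentwise nonnegative vectors) the conclusion of Theorem \ref{t5} can be observed numerically. The following small sketch is only an illustration; the strongly positive stochastic matrix used in it is an arbitrary choice of ours.
\begin{verbatim}
# Illustration (not part of the proofs): a strongly positive stochastic
# matrix A on R^3; A^n converges to the rank-one operator f_0 (x) u,
# where u = (1,1,1) is the fixed vector and f_0 the "limit distribution".
import numpy as np

A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
u = np.ones(3)
print(np.allclose(A @ u, u))              # True: A(u) = u, u in int(K)

P = np.linalg.matrix_power(A, 60)
f0 = P[0]                                 # every row of A^n approaches f_0
print(np.allclose(P, np.outer(u, f0)))    # True: A^n -> u f_0^T = f_0 (x) u
print(np.allclose(f0 @ A, f0), f0.sum())  # A^*(f_0) = f_0 and f_0(u) = 1
\end{verbatim}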
At the end we shall shortly deal with two examples. Let $X$ be either the vector space $C(Q)$ of all real continuous functions defined on the compact topological space $Q$ with the cone $K$ of all nonnegative functions or the vector space $\textbf{c}$ of all converging real sequences with the cone\footnote{\,In both cases the cone $K$ satisfies the condition ${\rm int}(K)\neq \varnothing$.} $K$ consisting of all nonnegative sequences. The symbol ${\bf 1}$ denotes correspondingly the function identically equal to $1$ on $Q$ or the sequence whose components are all $1$.
An operator $A\in L(X)$ is called a {\it Markov operator}, if it is positive and $A({\bf 1})={\bf 1}$. We indicate two examples of a compact Markov operator in the spaces $C([0,1])$ and $\textbf{c}$, respectively, which satisfies the condition 1) of Theorem \ref{t4} but does not satisfy the condition 1) of Theorem \ref{t5}.
\textbf{Example 1} Let be $X=C([0,1])$. For some arbitrary fixed number $\theta \in(0,1)$ denote the function $\varphi(s)=(\theta+\sqrt s)^{-1}$ and put \[ \big( A(x) \big)(s)=\varphi(s)\left(\int_0^\theta x(t)\,dt+\int_0^{\sqrt s}x(t)\,dt\right) \qquad (x\in C([0,1])\,). \]
Obviously, $A$ is a compact operator in $C([0,1])$ and has the properties \[
A\ge 0, \quad A({\bf 1})={\bf 1}, \quad \|A\|=1 . \] Therefore the operator $A$ satisfies the identic conditions 2) and 3) of the Theorems \ref{t4} and \ref{t5}.
We show that $A$ satisfies the condition 1) of Theorem \ref{t4}, but not the condition 1) of Theorem \ref{t5}. For $x\in K, \ x\not\equiv 0$ put \[ p=\sup\{\tau\in [0,1] \colon x(t)=0 \ \mbox{for} \ t\in [0,\tau] \} \] and define $p=0$ if $x(0)\neq 0$.
It is easy to see that $p<1$ and that $p<\theta$ implies $\big(A(x)\big)(s)>0$ on $[0,1]$ because of
$\int_0^\theta x(t)\,dt =\int_p^\theta x(t)\,dt>0$.
If further for some $m\in \mathbb N$ the number $p$ satisfies the inequality \[ \theta^{\frac{1}{2^{m-2}}}\le p<\theta^{\frac{1}{2^{m-1}}}, \] then an induction argument shows that \[ \big(A^{m-1}(x)\big)(0)=0, \ \mbox{ but } \ \big(A^m(x)\big)(s)>0 \ \mbox{for} \ s\in [0,1], \] i.e. $A^m(x)\in \mathrm{int}(K)$ ($A^0=I$). Consequently, for each $x\in K$ there exists its individual power $m_x$ with $A^{m_x}(x)\in \mathrm{int}(K)$, however a common power, simultaneously for all $x\in K$, does not exist.
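For readers who wish to experiment, the following small numerical sketch (not part of the argument; the grid, the value $\theta=0.3$ and the test function are illustrative assumptions of ours) discretizes the operator $A$ of Example 1 and reports the first power $m$ for which $A^m(x)$ is strictly positive on the grid.
\begin{verbatim}
# Discretized version of Example 1 (illustrative only).
# A(x)(s) = phi(s)*( int_0^theta x + int_0^sqrt(s) x ), phi(s) = 1/(theta+sqrt(s))
import numpy as np

theta = 0.3
s = np.linspace(0.0, 1.0, 2001)

def apply_A(x):
    # cumulative trapezoidal integral of x over the grid
    cum = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2 * np.diff(s))))
    int_0_theta = np.interp(theta, s, cum)
    int_0_sqrt_s = np.interp(np.sqrt(s), s, cum)
    return (int_0_theta + int_0_sqrt_s) / (theta + np.sqrt(s))

x = np.where(s >= 0.9, s - 0.9, 0.0)   # vanishes exactly on [0, 0.9], so p = 0.9
m, y = 0, x
while y.min() <= 0 and m < 10:
    y = apply_A(y)
    m += 1
# One expects m = 5 here, since 0.9^(2^4) < 0.3 <= 0.9^(2^3).
print(m)
\end{verbatim}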
\textbf{Example 2} Let an operator $A$ be defined by a stochastic matrix as follows
\[ A=\left(\begin{array}{llllllll} \frac{1}{2} & \frac{1}{2} & 0 & 0 & \ldots & 0 & 0 &\ldots \\[2mm] \frac{1}{2} & \frac{1}{2^2}&\frac{1}{2^2} & 0 & \ldots & 0 & 0 & \ldots \\[2mm]
\vdots & \vdots & \vdots & & & & & \\
\frac{1}{2}& \frac{1}{2^2}&\frac{1}{2^3}& \frac{1}{2^4} & \ldots & \frac{1}{2^n} &
\frac{1}{2^n} & \ldots \\
\vdots & \vdots & \vdots & & & & & \end{array}\right). \]
$A$ defines a positive operator from $\textbf{c}$ to $\textbf{c}$, and obviously is a Markov operator. For each $n\in \mathbb{N}$ the matrix $A^n$ is also stochastic. Moreover, the first $n+k$ entries in the $k$-th row of $A^n$ are positive ($k,n=1,2,\ldots$). Denote by $e_n$ the sequence $(0,\ldots,0,1,0,\ldots)$ with $1$ at the $n$-th position. Then the first coordinate of the vector $A^n(e_{n+2})$ (still) turns out to be zero, all coordinates of $A^{n+1}(e_{n+2})$ are positive and from the second one on, they are all equal, i.e. all coordinates of $A^{n+1}(e_{n+2})$ are separated from $0$. For the vector $A^{n+2}(e_{n+2})$ even all coordinates are equal and positive. So, we have $A^n(e_{n+2})\notin \mathrm{int}(K)$ but $A^{n+1}(e_{n+2}), A^{n+2}(e_{n+2})\in \mathrm{int}(K)$. On the other hand is it clear that no iterate of $A$ can satisfy the condition $A^n(K\setminus \{0\})\subset \mathrm{int} (K)$.
The compactness of $A$ follows from the fact that the operator $A- g\otimes \bf 1$ (and hence the operator $A$) can be approximated by finite-rank operators, where $g$ is the functional generated by the sequence $\{\frac{1}{2^n}\}_{n\in \mathbb{N}}$.
The limit distribution of the operator $A$, i.e. a vector $f_0$ such that $A^*(f_0)=f_0$, can be calculated as the sequence with the members $c_n-c_{n+1}$, where $c_n=2^{-\frac{(n-1)n}{2}}$ for $n=1,2,\ldots$.
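Purely as an illustration (the truncation size and the checks below are our own choices, not part of the paper), one can approximate the operator of Example 2 by a finite stochastic matrix and watch its powers converge to the claimed limit distribution.
\begin{verbatim}
# Finite truncation of the stochastic matrix of Example 2 (illustrative only).
import numpy as np

N = 12
A = np.zeros((N, N))
for k in range(1, N + 1):            # row k: 1/2, 1/4, ..., 1/2^k, 1/2^k, 0, ...
    for j in range(1, k + 1):
        A[k - 1, j - 1] = 2.0 ** (-j)
    if k < N:
        A[k - 1, k] = 2.0 ** (-k)
    else:
        A[k - 1, k - 1] += 2.0 ** (-k)   # fold the tail back: rows still sum to 1
print(A.sum(axis=1))                     # all ones

P = np.linalg.matrix_power(A, 100)
print(P[0, :5])                          # every row approaches the same distribution
c = np.array([2.0 ** (-(n - 1) * n / 2) for n in range(1, N + 2)])
print((c[:-1] - c[1:])[:5])              # compare with the claimed limit c_n - c_{n+1}
\end{verbatim}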
\textbf{Remark 3} We point out one particular situation, namely the situation of Markov operators, in which Theorem \ref{t5} allows us to obtain as a special case some result which has been proved in \cite{Fel2} (chapt.VIII, \S 7, Th.1).
Theorem \ref{t5} holds for any Markov operator $A$ in the space $C(Q)$, if some power of $A$ is a compact operator (in particular, if $A$ is weakly compact) and if $A$ satisfies the condition of regularity: for some $m$ the inequality $A^m(x)>0$ holds everywhere on $Q$ for each nonnegative function $x\in C(Q)$ that is not identically zero (in other words, $A^m$ is strongly positive).
In this case, the functional $f_0$ whose existence and properties are ensured by the statements of Theorem \ref{t5}, is called {\it "stationary distribution"} (s.\cite{Fel1}) or {\it "limit distribution"} (s.\cite{KemSn}, chapt.IV). Notice that under the made assumptions for any probability measure $\mu\in C^*(Q)$ one has not only the convergence $(A^*)^n(\mu)\mathop{\longrightarrow}\limits_{n\to\infty} f_0$ in
variation but, according to the statement ($v$) of Theorem \ref{t4}, even
$\sup_{\mu}\|(A^*)^n(\mu)-f_0\|\mathop{\longrightarrow}\limits_{n\to\infty} 0$, i.e. the sequence $\{(A^*)^n\}_{n\in \mathbb{N}}$ converges to the operator $A_0^*$ with respect to the norm.
At the end we notice another remark to Theorem \ref{t4} concerning the random walk on a compact space (compare with \cite{Fel2}, chapt.VIII, \S7 Th.1 and \cite{Bor}, \S\S6,7). We keep the terminology, which is familiar in the theory of stationary Markov chains and also the notations there (s. \cite{Fel1}, \cite{Fel2}).
\textbf{Remark 4} Let be $Q$ a metrizable compact space, $\{P_s\}_{s\in Q}\subset C^*(Q)$ a stochastic kernel which satisfies the conditions \begin{itemize} \item[1)] for each open set $G\subset Q$ the function $s \mapsto P_s(G)$ is continuous on $Q$; \item[2)] for each nonempty open set $G\subset Q$ and each point $s\in Q$ there exists a number $n=n(G,s)$ such that $P^{(n)}_s(G)>0$, where $P^{(n)}_s$ denotes the $n$-th iterate of the kernel $P_s$. \end{itemize} Then there exists a probability measure $P_0\in C^*(Q)$ such that \begin{itemize} \item[(i)] $P_0(G)>0$ for any nonempty open subset $G$;
\item[(ii)] $\sup_{s\in Q}\|P^{(n)}_s - P_0\|\mathop{\longrightarrow}\limits_{n\to\infty} 0$. \end{itemize}
For a short proof we consider the Markov operator $A$ corresponding to the kernel $\{P_s\}_{s\in Q}$, where $(Ax)(s)=\int_Q x(t)\,{\rm d}P_s(t)$. Condition 1) implies that the operator $A$ is weakly compact (s. \cite{Edw}, Th.9.4.10) and consequently the operator $A^2$ is compact (s. \cite{Edw}, sect.9.4.5). From condition 2) one gets that the operator $A$ meets the condition 1) of Theorem \ref{t4}. Therefore, the operator $A$ satisfies all conditions of Theorem \ref{t4}. It remains to notice that $P_0$ is that measure which is generated by the functional $f_0$. The statement (ii) holds because of
\[ \sup_{s\in Q} \|P^{(n)}_s-P_0\|\leq \|(A^*)^n-A^*_0\|\mathop{\longrightarrow}\limits_{n\to\infty} 0, \] where $A_0$ is the limit operator from Theorem \ref{t4}.
\footnotesize {\author{Boris M.~Makarow, \\
Sankt Petersburg State University, Faculty of Mathematics and Mechanics, 98 904 Sankt Petersburg, Russia. }\\ {\it e-mail: [email protected]} \\
\author{Martin R.~Weber, \\ Technische Universit\"at Dresden, Fakult\"at f\"ur Mathematik, Institut f\"ur Analysis, 01062 Dresden, Germany.}\\ {\it e-mail: [email protected]} }
\end{document} | arXiv |
Finally, it's not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
Took full pill at 10:21 PM when I started feeling a bit tired. Around 11:30, I noticed my head feeling fuzzy but my reading seemed to still be up to snuff. I would eventually finish the science book around 9 AM the next day, taking some very long breaks to walk the dog, write some poems, write a program, do Mnemosyne review (memory performance: subjectively below average, but not as bad as I would have expected from staying up all night), and some other things. Around 4 AM, I reflected that I felt much as I had during my nightwatch job at the same hour of the day - except I had switched sleep schedules for the job. The tiredness continued to build and my willpower weakened so the morning wasn't as productive as it could have been - but my actual performance when I could be bothered was still pretty normal. That struck me as kind of interesting that I can feel very tired and not act tired, in line with the anecdotes.
Even party drugs are going to work: Biohackers are taking recreational drugs like LSD, psilocybin mushrooms, and mescaline in microdoses—about a tenth of what constitutes a typical dose—with the goal of becoming more focused and creative. Many who've tried it report positive results, but real research on the practice—and its safety—is a long way off. "Whether microdosing with LSD improves creativity and cognition remains to be determined in an objective experiment using double-blind, placebo-controlled methodology," Sahakian says.
The evidence? In small studies, healthy people taking modafinil showed improved planning and working memory, and better reaction time, spatial planning, and visual pattern recognition. A 2015 meta-analysis claimed that "when more complex assessments are used, modafinil appears to consistently engender enhancement of attention, executive functions, and learning" without affecting a user's mood. In a study from earlier this year involving 39 male chess players, subjects taking modafinil were found to perform better in chess games played against a computer.
After my rudimentary stacking efforts flamed out in unspectacular fashion, I tried a few ready-made stacks—brand-name nootropic cocktails that offer to eliminate the guesswork for newbies. They were just as useful. And a lot more expensive. Goop's Braindust turned water into tea-flavored chalk. But it did make my face feel hot for 45 minutes. Then there were the two pills of Brain Force Plus, a supplement hawked relentlessly by Alex Jones of InfoWars infamy. The only result of those was the lingering guilt of knowing that I had willingly put $19.95 in the jorts pocket of a dipshit conspiracy theorist.
A fundamental aspect of human evolution has been the drive to augment our capabilities. The neocortex is the neural seat of abstract and higher order cognitive processes. As it grew, so did our ability to create. The invention of tools and weapons, writing, the steam engine, and the computer have exponentially increased our capacity to influence and understand the world around us. These advances are being driven by improved higher-order cognitive processing. Fascinatingly, the practice of modulating our biology through naturally occurring flora predated all of the above discoveries. Indeed, Sumerian clay slabs as old as 5000 BC detail medicinal recipes which include over 250 plants. The enhancement of human cognition through natural compounds followed, as people discovered plants containing caffeine, theanine, and other cognition-enhancing, or nootropic, agents.
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.
Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185).
For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular.
Evidence in support of the neuroprotective effects of flavonoids has increased significantly in recent years, although to date much of this evidence has emerged from animal rather than human studies. Nonetheless, with a view to making recommendations for future good practice, we review 15 existing human dietary intervention studies that have examined the effects of particular types of flavonoid on cognitive performance. The studies employed a total of 55 different cognitive tests covering a broad range of cognitive domains. Most studies incorporated at least one measure of executive function/working memory, with nine reporting significant improvements in performance as a function of flavonoid supplementation compared to a control group. However, some domains were overlooked completely (e.g. implicit memory, prospective memory), and for the most part there was little consistency in terms of the particular cognitive tests used making across study comparisons difficult. Furthermore, there was some confusion concerning what aspects of cognitive function particular tests were actually measuring. Overall, while initial results are encouraging, future studies need to pay careful attention when selecting cognitive measures, especially in terms of ensuring that tasks are actually sensitive enough to detect treatment effects.
"My husband and I (Ryan Cedermark) are so impressed with the research Cavin did when writing this book. If you, a family member or friend has suffered a TBI, concussion or are just looking to be nicer to your brain, then we highly recommend this book! Your brain is only as good as the body's internal environment and Cavin has done an amazing job on providing the information needed to obtain such!"
According to clinical psychiatrist and Harvard Medical School Professor, Emily Deans, "there's probably nothing dangerous about the occasional course of nootropics...beyond that, it's possible to build up a tolerance if you use them often enough." Her recommendation is to seek pharmaceutical-grade products which she says are more accurate regarding dosage and less likely to be contaminated.
While the mechanism is largely unknown, one commonly proposed possibility is that light of the relevant wavelengths is preferentially absorbed by the protein cytochrome c oxidase, a key protein in mitochondrial metabolism and the production of ATP, substantially increasing output; this extra output presumably can be useful for cellular activities like healing or higher performance.
If you could take a pill that would help you study and get better grades, would you? Off-label use of "smart drugs" – pharmaceuticals meant to treat disorders like ADHD, narcolepsy, and Alzheimer's – are becoming increasingly popular among college students hoping to get ahead, by helping them to stay focused and alert for longer periods of time. But is this cheating? Should their use as cognitive enhancers be approved by the FDA, the medical community, and society at large? Do the benefits outweigh the risks?
Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect.
Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo.
If you want to try a nootropic in supplement form, check the label to weed out products you may be allergic to and vet the company as best you can by scouring its website and research basis, and talking to other customers, Kerl recommends. "Find one that isn't just giving you some temporary mental boost or some quick fix – that's not what a nootropic is intended to do," Cyr says.
By the end of 2009, at least 25 studies reported surveys of college students' rates of nonmedical stimulant use. Of the studies using relatively smaller samples, prevalence was, in chronological order, 16.6% (lifetime; Babcock & Byrne, 2000), 35.3% (past year; Low & Gendaszek, 2002), 13.7% (lifetime; Hall, Irwin, Bowman, Frankenberger, & Jewett, 2005), 9.2% (lifetime; Carroll, McLaughlin, & Blake, 2006), and 55% (lifetime, fraternity students only; DeSantis, Noar, & Web, 2009). Of the studies using samples of more than a thousand students, somewhat lower rates of nonmedical stimulant use were found, although the range extends into the same high rates as the small studies: 2.5% (past year, Ritalin only; Teter, McCabe, Boyd, & Guthrie, 2003), 5.4% (past year; McCabe & Boyd, 2005), 4.1% (past year; McCabe, Knight, Teter, & Wechsler, 2005), 11.2% (past year; Shillington, Reed, Lange, Clapp, & Henry, 2006), 5.9% (past year; Teter, McCabe, LaGrange, Cranford, & Boyd, 2006), 16.2% (lifetime; White, Becker-Blease, & Grace-Bishop, 2006), 1.7% (past month; Kaloyanides, McCabe, Cranford, & Teter, 2007), 10.8% (past year; Arria, O'Grady, Caldeira, Vincent, & Wish, 2008); 5.3% (MPH only, lifetime; Du-Pont, Coleman, Bucher, & Wilford, 2008); 34% (lifetime; DeSantis, Webb, & Noar, 2008), 8.9% (lifetime; Rabiner et al., 2009), and 7.5% (past month; Weyandt et al., 2009).
Oxiracetam is one of the 3 most popular -racetams; less popular than piracetam but seems to be more popular than aniracetam. Prices have come down substantially since the early 2000s, and stand at around 1.2g/$ or roughly 50 cents a dose, which was low enough to experiment with; key question, does it stack with piracetam or is it redundant for me? (Oxiracetam can't compete on price with my piracetam pile stockpile: the latter is now a sunk cost and hence free.)
70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of:
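(Power numbers of this kind can be reproduced with any standard power routine; as a rough illustration only — the effect size, alpha and test direction below are placeholder assumptions, not necessarily the values used in this experiment — such a check in Python might look like this:)

```python
# Illustrative paired-design power check; all parameter values are placeholders.
from statsmodels.stats.power import TTestPower

d = 0.5          # assumed standardized effect size
pairs = 9        # number of paired blocks available
alpha = 0.05     # significance level

power = TTestPower().power(effect_size=d, nobs=pairs, alpha=alpha,
                           alternative='two-sided')
print(round(power, 2))
```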
Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to see how many of these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier. Therefore, it was the rat's motivation to run the maze, not its increased cognitive ability that affected the performance. Thus, it is important to be very careful when interpreting changes observed in these types of animal learning and memory experiments.
The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: \frac{1.2 - 0.93}{0.076}=3.55.)
Some nootropics are more commonly used than others. These include nutrients like Alpha GPC, huperzine A, L-Theanine, bacopa monnieri, and vinpocetine. Other types of nootropics are still gaining traction. With all that in mind, to claim there is a "best" nootropic for everyone would be the wrong approach since every person is unique and looking for different benefits.
You'll find several supplements that can enhance focus, energy, creativity, and mood. These brain enhancers can work very well, and their benefits often increase over time. Again, nootropics won't dress you in a suit and carry you to Wall Street. That is a decision you'll have to make on your own. But, smart drugs can provide the motivation boost you need to make positive life changes.
Cognition is a suite of mental phenomena that includes memory, attention and executive functions, and any drug would have to enhance executive functions to be considered truly 'smart'. Executive functions occupy the higher levels of thought: reasoning, planning, directing attention to information that is relevant (and away from stimuli that aren't), and thinking about what to do rather than acting on impulse or instinct. You activate executive functions when you tell yourself to count to 10 instead of saying something you may regret. They are what we use to make our actions moral and what we think of when we think about what makes us human.
Two increasingly popular options are amphetamines and methylphenidate, which are prescription drugs sold under the brand names Adderall and Ritalin. In the United States, both are approved as treatments for people with ADHD, a behavioural disorder which makes it hard to sit still or concentrate. Now they're also widely abused by people in highly competitive environments, looking for a way to remain focused on specific tasks.
Harrisburg, NC -- (SBWIRE) -- 02/18/2019 -- Global Smart Pills Technology Market - Segmented by Technology, Disease Indication, and Geography - Growth, Trends, and Forecast (2019 - 2023) The smart pill is a wireless capsule that can be swallowed, and with the help of a receiver (worn by patients) and software that analyzes the pictures captured by the smart pill, the physician is effectively able to examine the gastrointestinal tract. Gastrointestinal disorders have become very common, but recently, there has been increasing incidence of colorectal cancer, inflammatory bowel disease, and Crohns disease as well.
A provisional conclusion about the effects of stimulants on learning is that they do help with the consolidation of declarative learning, with effect sizes varying widely from small to large depending on the task and individual study. Indeed, as a practical matter, stimulants may be more helpful than many of the laboratory tasks indicate, given the apparent dependence of enhancement on length of delay before testing. Although, as a matter of convenience, experimenters tend to test memory for learned material soon after the learning, this method has not generally demonstrated stimulant-enhanced learning. However, when longer periods intervene between learning and test, a more robust enhancement effect can be seen. Note that the persistence of the enhancement effect well past the time of drug action implies that state-dependent learning is not responsible. In general, long-term effects on learning are of greater practical value to people. Even students cramming for exams need to retain information for more than an hour or two. We therefore conclude that stimulant medication does enhance learning in ways that may be useful in the real world.
As shown in Table 6, two of these are fluency tasks, which require the generation of as large a set of unique responses as possible that meet the criteria given in the instructions. Fluency tasks are often considered tests of executive function because they require flexibility and the avoidance of perseveration and because they are often impaired along with other executive functions after prefrontal damage. In verbal fluency, subjects are asked to generate as many words that begin with a specific letter as possible. Neither Fleming et al. (1995), who administered d-AMP, nor Elliott et al. (1997), who administered MPH, found enhancement of verbal fluency. However, Elliott et al. found enhancement on a more complex nonverbal fluency task, the sequence generation task. Subjects were able to touch four squares in more unique orders with MPH than with placebo.
This continued up to 1 AM, at which point I decided not to take a second armodafinil (why spend a second pill to gain what would likely be an unproductive set of 8 hours?) and finish up the experiment with some n-backing. My 5 rounds: 60/38/62/44/50. This was surprising. Compare those scores with scores from several previous days: 39/42/44/40/20/28/36. I had estimated before the n-backing that my scores would be in the low-end of my usual performance (20-30%) since I had not slept for the past 41 hours, and instead, the lowest score was 38%. If one did not know the context, one might think I had discovered a good nootropic! Interesting evidence that armodafinil preserves at least one kind of mental performance.
Although piracetam has a history of "relatively few side effects," it has fallen far short of its initial promise for treating any of the illnesses associated with cognitive decline, according to Lon Schneider, a professor of psychiatry and behavioral sciences at the Keck School of Medicine at the University of Southern California. "We don't use it at all and never have."
The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140nmol/l and reading anecdotes online suggest that 5k IU daily doses tend to put people well below that (around 70-100nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia.
The evidence? A 2012 study in Greece found it can boost cognitive function in adults with mild cognitive impairment (MCI), a type of disorder marked by forgetfulness and problems with language, judgement, or planning that are more severe than average "senior moments," but are not serious enough to be diagnosed as dementia. In some people, MCI will progress into dementia.
Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'.
OptiMind - It is one of the best Nootropic supplements available and brought to you by AlternaScript. It contains six natural Nootropic ingredients derived from plants that help in overall brain development. All the ingredients have been clinically tested for their effects and benefits, which has made OptiMind one of the best brain pills that you can find in the US today. It is worth adding to your Nootropic Stack.
Maj. Jamie Schwandt, USAR, is a logistics officer and has served as an operations officer, planner and commander. He is certified as a Department of the Army Lean Six Sigma Master Black Belt, certified Red Team Member, and holds a doctorate from Kansas State University. This article represents his own personal views, which are not necessarily those of the Department of the Army.
As with any thesis, there are exceptions to this general practice. For example, theanine for dogs, sold under the brand Anxitane, goes for almost a dollar a pill, and apparently a month's supply costs $50+ vs $13 for human-branded theanine; on the other hand, this thesis predicts downgrading if the market priced pet versions higher than human versions, and that Reddit poster appears to be doing just that with her dog.
^ Sattler, Sebastian; Forlini, Cynthia; Racine, Éric; Sauer, Carsten (August 5, 2013). "Impact of Contextual Factors and Substance Characteristics on Perspectives toward Cognitive Enhancement". PLOS ONE. 8 (8): e71452. Bibcode:2013PLoSO...871452S. doi:10.1371/journal.pone.0071452. ISSN 1932-6203. LCCN 2006214532. OCLC 228234657. PMC 3733969. PMID 23940757. | CommonCrawl |
Triakis tetrahedron
In geometry, a triakis tetrahedron (or kistetrahedron[1]) is a Catalan solid with 12 faces. Each Catalan solid is the dual of an Archimedean solid. The dual of the triakis tetrahedron is the truncated tetrahedron.
Triakis tetrahedron
Type: Catalan solid
Conway notation: kT
Face type: V3.6.6 (isosceles triangle)
Faces: 12
Edges: 18
Vertices: 8
Vertices by type: 4{3}+4{6}
Symmetry group: Td, A3, [3,3], (*332)
Rotation group: T, [3,3]+, (332)
Dihedral angle: 129°31′16″ = arccos(−7/11)
Properties: convex, face-transitive
Dual polyhedron: truncated tetrahedron
The triakis tetrahedron can be seen as a tetrahedron with a triangular pyramid added to each face; that is, it is the Kleetope of the tetrahedron. It is very similar to the net for the 5-cell, as the net for a tetrahedron is a triangle with other triangles added to each edge, the net for the 5-cell a tetrahedron with pyramids attached to each face. This interpretation is expressed in the name.
The length of the shorter edges is 3/5 that of the longer edges.[2] If the triakis tetrahedron has shorter edge length 1, it has area 5/3√11 and volume 25/36√2.
Cartesian coordinates
Cartesian coordinates for the 8 vertices of a triakis tetrahedron centered at the origin, are the points (±5/3, ±5/3, ±5/3) with an even number of minus signs, along with the points (±1, ±1, ±1) with an odd number of minus signs:
• (5/3, 5/3, 5/3), (5/3, −5/3, −5/3), (−5/3, 5/3, −5/3), (−5/3, −5/3, 5/3)
• (−1, 1, 1), (1, −1, 1), (1, 1, −1), (−1, −1, −1)
The length of the shorter edges of this triakis tetrahedron equals 2√2. The faces are isosceles triangles with one obtuse and two acute angles. The obtuse angle equals arccos(–7/18) ≈ 112.88538047616° and the acute ones equal arccos(5/6) ≈ 33.55730976192°.
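These angles can be checked directly from the coordinates above: the long edges (joining two vertices of the first kind) have length 10√2/3, so the law of cosines applied to a face with sides 2√2, 2√2 and 10√2/3 gives cos(obtuse angle) = (8 + 8 − 200/9)/(2·8) = −7/18 and cos(acute angle) = (10√2/3)/(2·2√2) = 5/6.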
Tetartoid symmetry
The triakis tetrahedron can be made as a degenerate limit of a tetartoid. (Gallery of example tetartoid variations omitted.)
Orthogonal projections
The triakis tetrahedron and its dual, the truncated tetrahedron, have orthogonal projections centered on a short edge, a face, a vertex, and a long edge, each with its projective symmetry ([1], [2], [3] or [4]). (Projection diagrams omitted.)
Variations
A triakis tetrahedron with equilateral triangle faces represents a net of the four-dimensional regular polytope known as the 5-cell.
If the triangles are right-angled isosceles, the faces will be coplanar and form a cubic volume. This can be seen by adding the six edges of a tetrahedron inscribed inside a cube (the face diagonals of the cube).
Stellations
This chiral figure is one of thirteen stellations allowed by Miller's rules.
Related polyhedra
The triakis tetrahedron is a part of a sequence of polyhedra and tilings, extending into the hyperbolic plane. These face-transitive figures have (*n32) reflectional symmetry.
*n32 symmetry mutation of truncated tilings t{n,3}: the symmetry *n32, [n,3] is spherical for n = 2, 3, 4, 5, Euclidean for n = 6, compact hyperbolic for n = 7, 8, …, paracompact for n = ∞, and noncompact hyperbolic for [12i,3], [9i,3], [6i,3].
Truncated figures: t{2,3}, t{3,3}, t{4,3}, t{5,3}, t{6,3}, t{7,3}, t{8,3}, t{∞,3}, t{12i,3}, t{9i,3}, t{6i,3}
Triakis figures: V3.4.4, V3.6.6, V3.8.8, V3.10.10, V3.12.12, V3.14.14, V3.16.16, V3.∞.∞
Family of uniform tetrahedral polyhedra (symmetry [3,3], (*332); the snub has symmetry [3,3]+, (332)): {3,3}, t{3,3}, r{3,3}, t{3,3}, {3,3}, rr{3,3}, tr{3,3}, sr{3,3}
Duals to these uniform polyhedra: V3.3.3, V3.6.6, V3.3.3.3, V3.6.6, V3.3.3, V3.4.3.4, V4.6.6, V3.3.3.3.3
See also
• Truncated triakis tetrahedron
References
1. Conway, Symmetries of things, p.284
2. "Triakis Tetrahedron - Geometry Calculator".
• Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. ISBN 0-486-23729-X. (Section 3-9)
• Wenninger, Magnus (1983), Dual Models, Cambridge University Press, doi:10.1017/CBO9780511569371, ISBN 978-0-521-54325-5, MR 0730208 (The thirteen semiregular convex polyhedra and their duals, Page 14, Triakistetrahedron)
• The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, ISBN 978-1-56881-220-5 (Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, page 284, Triakis tetrahedron )
External links
• Eric W. Weisstein, Triakis tetrahedron (Catalan solid) at MathWorld.
| Wikipedia |
How to get a Presentation of a Group
$\newcommand{\R}{\mathbf R}$ Let $G$ be the group of homeomorphisms of $\R^2$ generated by $g$ and $h$, where $g(x, y)=(x+1, y)$ and $h(x, y)=(-x, y+1)$.
To show that $G\cong \langle a, b|\ b^{-1}aba\rangle$.
I tried the following:
Define a map $f:\langle a, b\rangle \to G$ which sends $a$ to $g$ and $b$ to $h$. Then it can be checked that $b^{-1}aba$ lies in the kernel of $f$. So $f$ factors through $\langle a, b|\ b^{-1}aba\rangle$ to give a map $\bar f: \langle a, b|\ b^{-1}aba\rangle\to G$.
What I am unable to show is that $\bar f$ is injective.
Also, here we were already given a presentation which we had to show is isomorphic to $G$. If it were not given, then is there a general way to get one?
group-theory free-groups
caffeinemachine
$\begingroup$ The relation $b^{-1}ab = a^{-1}$ allows you to write every element of $G$ as $a^ib^j$ for some $i,j \in {\mathbb Z}$. So to show that $\bar{f}$ is injective, it is sufficient to show that these elements all map onto distinct elements of $G$, which you should be able to do. (Finding a normal form for the elements of a group defined by a finite presentation is a standard technique for proving that the group is isomorphic to some other group.) $\endgroup$ – Derek Holt May 25 '16 at 18:00
$\begingroup$ A different approach would be to use some algebraic topology, look at the quotient space of the group acting on the plane and the fundamental group of that space will be your group (this is only because the group acts nicely), and you can use van Kampens to calculate a presentation. Check out 1.2 and 1.3 in Hatcher's algebraic topology book. $\endgroup$ – Paul Plummer May 26 '16 at 15:53
$\begingroup$ it would be nice that I could receive any feedback for my answer, thanks! $\endgroup$ – janmarqz Apr 9 '18 at 15:07
$\begingroup$ @janmarqz I apologize for not noticing. I got a notification but I thought it is an old post so I did not see the page carefully. I didn't realize that a new answer was added. Give me some time to read your answer. I am actually quite busy right now with my PhD work. Thanks. $\endgroup$ – caffeinemachine Apr 9 '18 at 15:38
$\begingroup$ that you said is already a feedback :) thanks again $\endgroup$ – janmarqz Apr 9 '18 at 16:17
Direct calculations with $$\left(\begin{array}{c}x\\y\end{array}\right) \stackrel{g}\longmapsto \left(\begin{array}{c}x+1\\y\end{array}\right)\ \mbox{and} \ \left(\begin{array}{c}x\\y\end{array}\right) \stackrel{h}\longmapsto\left(\begin{array}{c}-x\\y+1\end{array}\right),$$ give you $$ \left( \begin{array}{c} x\\ y \end{array} \right) \stackrel{g^{-1}}\longmapsto \left( \begin{array}{c} x-1\\ y \end{array} \right) \ \mbox{and} \ \left( \begin{array}{c} x\\ y \end{array} \right) \stackrel{h^{-1}}\longmapsto \left( \begin{array}{c} -x\\ y-1 \end{array} \right), $$ respectively. But also $gh=hg^{-1}$ because:
$$\left(\begin{array}{c}x\\y\end{array}\right) \stackrel{h}\longmapsto\left(\begin{array}{c}-x\\y+1\end{array}\right) \stackrel{g}\longmapsto\left(\begin{array}{c}-x+1\\y+1\end{array}\right),$$ and $$\left(\begin{array}{c}x\\y\end{array}\right) \stackrel{g^{-1}}\longmapsto\left(\begin{array}{c}x-1\\y\end{array}\right) \stackrel{h}\longmapsto\left(\begin{array}{c}-x+1\\y+1\end{array}\right),$$
and this implies that $h^{-1}ghg=e$.
Take all the reduced words in the letters $g,h,g^{-1},h^{-1}$; each can be brought into the canonical form $g^mh^n$ by taking into account that $gh=hg^{-1}$.
So you have that the subgroup $\langle\{g,h\}\rangle$ (the subgroup generated by $g,h$) has the presentation $$\langle g,h\ |\ h^{-1}ghg=e\rangle.$$
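To see that distinct canonical forms give distinct homeomorphisms — which is the injectivity asked about in the question, and the step indicated in Derek Holt's comment — one can compute directly
$$(g^mh^n)(x,y)=g^m\big((-1)^nx,\ y+n\big)=\big((-1)^nx+m,\ y+n\big),$$
so evaluating at $(0,0)$ gives $(m,n)$; hence $g^mh^n=g^{m'}h^{n'}$ forces $m=m'$ and $n=n'$.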
janmarqz
$\begingroup$ I do not see how this gives us that $G$ is isomorphic to $\langle a, b| b^{-1}aba\rangle$. It seems that you have also shown that $f$ is surjective. Am I missing something? $\endgroup$ – caffeinemachine Apr 10 '18 at 1:49
$\begingroup$ @caffeinemachine. Other example of how to read a presentation for a subgroup inside another group: The matrix $j= \left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right) $ is an element of the group $SL_2(\Bbb Z)$ and satisfies $ \left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right)^n = \left( \begin{array}{cc} 1&n\\ 0&1 \end{array} \right) $ then the subgroup $\langle\{ j\}\rangle$ has the presentation $$\langle j\ |\quad\rangle\cong \Bbb Z$$ of the cyclic free rank one group. $\endgroup$ – janmarqz Apr 16 '18 at 17:44
Ono's inequality
In mathematics, Ono's inequality is a theorem about triangles in the Euclidean plane. In its original form, as conjectured by T. Ono in 1914, the inequality is actually false; however, the statement is true for acute triangles and right triangles, as shown by F. Balitrand in 1916.
Statement of the inequality
Consider an acute or right triangle in the Euclidean plane with side lengths a, b and c and area A. Then
$27(b^{2}+c^{2}-a^{2})^{2}(c^{2}+a^{2}-b^{2})^{2}(a^{2}+b^{2}-c^{2})^{2}\leq (4A)^{6}.$
This inequality fails for general triangles (to which Ono's original conjecture applied), as shown by the counterexample $a=2,\,\,b=3,\,\,c=4,\,\,A=3{\sqrt {15}}/4.$
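Indeed, for this triangle $b^{2}+c^{2}-a^{2}=21$, $c^{2}+a^{2}-b^{2}=11$, $a^{2}+b^{2}-c^{2}=-3$ and $16A^{2}=135$, so the left-hand side equals $27\cdot (21\cdot 11\cdot 3)^{2}=12\,966\,723$, while $(4A)^{6}=135^{3}=2\,460\,375$.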
The inequality holds with equality in the case of an equilateral triangle, in which up to similarity we have sides $1,1,1$ and area ${\sqrt {3}}/4.$
See also
• List of triangle inequalities
References
• Balitrand, F. (1916). "Problem 4417". Intermed. Math. 23: 86–87. JFM 46.0859.06.
• Ono, T. (1914). "Problem 4417". Intermed. Math. 21: 146.
• Quijano, G. (1915). "Problem 4417". Intermed. Math. 22: 66.
• Lukarevski, M. (2017). "An alternate proof of Gerretsen's inequalities". Elem. Math. 72: 2–8.
External links
• Weisstein, Eric W. "Ono inequality". MathWorld.
| Wikipedia |
How to construct an $n\times n$ unitary matrix taking an arbitrary $|\psi\rangle$ to a target state $|\phi\rangle$?
I came across Lecture 12 here https://viterbi-web.usc.edu/~tbrun/Course/ that does this, but I was not able to understand it. An example would be very helpful.
quantum-gate unitarity
Shashi Kumar
$\begingroup$ Does this answer help? quantumcomputing.stackexchange.com/a/8863/15820 $\endgroup$
– Quantum Mechanic
$\begingroup$ Use householder transformation $\endgroup$
– KAJ226
$\begingroup$ you can simply write the operator $|\psi\rangle\!\langle\phi|$ and complete it to be a unitary. This can be done by completing in an arbitrary way $|\phi\rangle$ and $|\psi\rangle$ into orthonormal bases, call them $\{|\phi_i\rangle\}$ and $\{|\psi_i\rangle\}_i$ with $\psi_1=\psi$ and $\phi_1=\phi$. Any corresponding unitary $U=\sum_i |\psi_i\rangle\!\langle\phi_i|$ sends the input to the output. I'm pretty sure this was already asked and answered on the site, but I can't find the post right now $\endgroup$
$\begingroup$ related: quantumcomputing.stackexchange.com/q/5167/55 $\endgroup$
I didn't go through the attached pdf. But if you want to find a unitary matrix $U$ that maps a quantum state $|\psi \rangle$ to $|\phi\rangle$ then you can use the Householder transformation as I commented. Here the two vectors have the same length (they are unit vectors) because we are thinking of them as a quantum state, so there will always exist a Householder transformation that can do this.
For example: If you want to find a unitary, $U$, that maps $|00\rangle = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}$ to $|11\rangle = \begin{pmatrix} 0\\0\\0\\1\end{pmatrix}$ then you can construct it as: $$ U = I - 2vv^T $$ where $v$ is the normalized vector of $|00\rangle - |11\rangle = \begin{pmatrix} 1 \\ 0 \\ 0 \\ -1 \end{pmatrix}$. That is, $ v = \begin{pmatrix} 1/\sqrt{2} \\ 0 \\ 0 \\ -1/\sqrt{2} \end{pmatrix} $.
From here, we can write $U$ out explicitly as:
\begin{align} U &= \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix} - 2 \begin{pmatrix} 1/\sqrt{2} \\ 0 \\ 0 \\ -1/\sqrt{2} \end{pmatrix} \begin{pmatrix} 1/\sqrt{2} & 0 & 0 &-1/\sqrt{2} \end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix} - 2 \begin{pmatrix} 1/2 & 0 & 0 & -1/2\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1/2 & 0 & 0 & 1/2 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0\end{pmatrix} \end{align}
You can check that this is infact unitary since $U\cdot U^\dagger = I$ and that
$$ U|00\rangle = \begin{pmatrix} 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0\end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = |11\rangle $$
The question now is about how to decompose this unitary matrix into a quantum circuit with certain set of gates... This can be done in different ways... look up KAK decomposition if you are interested.
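A quick NumPy sketch of the same construction (added here only as an illustration, not part of the original answer; it assumes real unit vectors, and complex amplitudes would need an extra relative phase):

```python
import numpy as np

def householder_map(psi, phi):
    # Build U = I - 2 v v^T with v along (psi - phi), so that U @ psi == phi.
    # Sketch only: assumes psi and phi are real unit vectors of equal dimension.
    psi, phi = np.asarray(psi, float), np.asarray(phi, float)
    diff = psi - phi
    norm = np.linalg.norm(diff)
    if norm < 1e-12:                       # states already coincide
        return np.eye(len(psi))
    v = diff / norm                        # unit vector along psi - phi
    return np.eye(len(psi)) - 2.0 * np.outer(v, v)

ket00 = np.array([1.0, 0.0, 0.0, 0.0])
ket11 = np.array([0.0, 0.0, 0.0, 1.0])
U = householder_map(ket00, ket11)
print(np.allclose(U @ ket00, ket11))       # True
print(np.allclose(U @ U.T, np.eye(4)))     # unitarity (orthogonality) check: True
```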
KAJ226
Here $a$ and $b$ are the initial and the final values of the collective variable. TI is a general method, which can be applied to a variety of processes, e.g. phase transitions, electron transfer etc.
par_all27_prot_lipid.inp contains the force field parameters. You will be using CHARMM v.22, a popular force field for biologically relevant systems.
Open the deca_ala.pdb protein data bank format file with vmd. Create a new representation for the protein, e.g. of type Ribbon to observe the alpha-helix.
Although the image below shows the deca-alanine in water, it is expensive to run thermodynamic integration for a solvated protein with many values of the constraints on small laptops. So we will run TI for the protein in the gas-phase.
Here you are asked to run several MD simulations for different values of the distance between atoms 11 and 91; in each run it will be constrained. In the original file md_std.inp the distance is set to $14.37$ Å as it is in the deca_ala.pdb file. This is the first step to carry out the thermodynamic integration, as described in the equation above.
We have made the script run_ti_jobs.sh to run these simulations, which you can find inside the compressed file deca_ala.tar.gz. Take a look at the script and familiarize yourself with it. At which values are we constraining the distances between the carbon atoms? In this case we are performing 5 different simulations, each with a different value of the constraint. You can edit this script to use a larger or smaller number of constraints and to increase or reduce the upper and/or lower bound of integration. Can you guess where in the script we are specifying the values of the constraints?
Be careful with the values chosen for the upper and lower bound of the constraints as the simulations might crash or the SHAKE algorithm for the computation of the constraints might not converge if the values of the constrained distances are unphysical.
We have set the number of steps of each constrained MD to 5000. Try to increase this number if you want to achieve better statistics or to decrease it to get the results faster, at the expense of a more converged free energy.
Look into the main input file of cp2k, md_std.inp, and try to understand the keywords used as much as possible; by now you should be able to understand most of it, and you can experiment with changing some of the keywords to see what happens. Look in particular at the definition of the section CONSTRAINT, where the target value of the distance between the two carbon atoms at the edges of the protein is constrained, for instance, to 14.37, and at the COLVAR section, where the collective variable for the distance between the two C atoms is defined.
The average Lagrange multiplier is the average force $F(x)$ required to constrain the atoms at the distance $x$. First of all, plot the force $F(x)$ with its standard error as a function of the collective variable to see if the simulation carried out so far is statistically relevant or the relative error is too large.
Discuss the form of the free energy profile and comment on what is the most stable state of the protein. Is it more stable when it is stretched or when it is in the $\alpha$-helix conformation? Is this result physical? Explain why or why not. How can the presence of water affect the conformation of the protein?
Tip 1: the most stable state will be that where the free energy is at the global minimum.
Tip 2: In order to understand whether the result obtained from thermodynamic integration is physical or not, have a look at the .xyz files for some of the constrained MD trajectories and think about what are the fundamental interactions between the constituents of the protein that we are taking into account with the CHARMM force field (e.g. electrostatic, van-der Waals, covalent bonds) and how these may contribute to the stabilization of the protein in a given state.
The two articles at the links below show what the free energy profile should look like, using thermodynamic integration or a different enhanced sampling method. Compare the free energy profile obtained from your simulations to either of those papers. Most likely, the free energy profile you obtained will not be as converged as theirs. What are some possible reasons for this, and how can one obtain better converged free energy profiles?
Paper 1: https://arxiv.org/pdf/0711.2726.pdf see figure 2, solid line obtained with thermodynamic integration, using the same force field (CHARMM v.22) used here. This paper however, uses a different collective variable, i.e. the distance between the N-atoms at the opposite edges.
Paper 2: https://pubs.acs.org/doi/pdf/10.1021/ct5002076 see figure 1, obtained with umbrella sampling and adaptive bias force sampling, for two versions of the CHARMM force field, v.22 and v.36. The collective variable in this case is the same as the one specified in our input.
Finally, in principle we could have performed a direct MD simulation (as we did in the past exercises) to compute the free energy profile as a function of the distance between two of the atoms at the opposite edges of the protein (the collective variable we chose for this particular problem). Instead, we chose to perform an enhanced simulation technique. Can you think of a problem we would face if we had decided to perform a direct MD simulation? What could be a possible way to overcome this problem?
We have provided you with a useful script called generate_plots.sh that extracts the average force and the standard error for each constrained MD simulation (see the grep command line above), and it prints out the file av_force_vs_x.dat containing the force as a function of the collective variable, and the error on the force (third column). Take a look at the script and modify it if necessary, e.g. if you have changed the lower and upper bound for the constraint or if you have changed the number of constraints.
In order to check the convergence of the free energy profile one should look at the error on the average force for each constrained MD simulation. The error on the free energy profile can be obtained by propagating the error on the average force upon integration.
From the file containing the average force as a function of collective variable you need to integrate $F(x) dx$ numerically to obtain $\Delta A$. You may use the trapezoidal rule (or equivalent) with EXCEL, ORIGIN or any scripting language.
Make sure that you get the units right when performing the integration. The Lagrange multipliers are written in atomic units (Hartree/bohr, the dimension of a force), while the distances are in Angstrom.
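For instance, a minimal Python sketch of this integration (the column layout of av_force_vs_x.dat, the unit conversion and the overall sign convention are assumptions that should be checked against your own output) could be:

```python
import numpy as np

# Assumed column layout: distance [Angstrom], mean Lagrange multiplier
# [Hartree/bohr], standard error [Hartree/bohr]
x, force_au, err_au = np.loadtxt("av_force_vs_x.dat", unpack=True)

HARTREE_TO_KCAL = 627.509   # 1 Hartree in kcal/mol
BOHR_TO_ANG = 0.529177      # 1 bohr in Angstrom

# Convert the force to kcal/mol/Angstrom so it is consistent with x in Angstrom
force = force_au * HARTREE_TO_KCAL / BOHR_TO_ANG
err = err_au * HARTREE_TO_KCAL / BOHR_TO_ANG

# Cumulative trapezoidal integration of F(x) dx; the sign of the resulting
# free-energy profile depends on the convention used for the Lagrange multiplier.
dx = np.diff(x)
dA = np.concatenate(([0.0], np.cumsum(0.5 * (force[1:] + force[:-1]) * dx)))

# Rough error bars: propagate the standard errors through the trapezoids,
# ignoring the correlation between adjacent intervals.
var = np.concatenate(([0.0], np.cumsum((0.5 * dx) ** 2 * (err[1:] ** 2 + err[:-1] ** 2))))
np.savetxt("free_energy_profile.dat", np.column_stack([x, dA, np.sqrt(var)]))
```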
\begin{document}
\title{{\bf Heinz type estimates for graphs in Euclidean space }}
\author{Francisco Fontenele\thanks{Partially supported by CNPq (Brazil)}\\ \\\small{\it To my wife Andrea}}
\date{} \maketitle
\begin{quote} \small {\bf Abstract}. Let $M^n$ be an entire graph in the Euclidean $(n+1)$-space $\mathbb R^{n+1}$. Denote by $H$, $R$ and
$|A|$, respectively, the mean curvature, the scalar curvature and the length of the second fundamental form of $M^n$. We prove that if the mean curvature $H$ of $M^n$ is bounded then $\inf_M|R|=0$, improving results of Elbert and Hasanis-Vlachos. We also prove that if the Ricci curvature of $M^n$ is negative then
$\inf_M|A|=0$. The latter improves a result of Chern as well as gives a partial answer to a question raised by Smith-Xavier. Our technique is to estimate $\inf|H|,\;\inf|R|$ and $\inf|A|$ for graphs in $\mathbb R^{n+1}$ of $C^2$-real valued functions defined on closed balls in $\mathbb R^n$.
\end{quote}
\section{Introduction}
Let $B_r\subset\mathbb R^n$ be an open ball of radius $r$, and $f:B_r\to\mathbb R$ a $C^2$-function. For $n=2$, Heinz \cite{H} obtained the following estimates for the mean curvature $H$ and the Gaussian curvature $K$ of the graph of $f$:
\begin{eqnarray}\label{HH}
\inf |H|\leq\frac{1}{r}, \end{eqnarray}
\begin{eqnarray}\label{HK}
\inf |K|\leq\frac{3\,e^2}{r^2}, \end{eqnarray} where $e$ is the basis for the natural logarithm. Chern \cite{C} and Flanders \cite{Fl}, independently, obtained inequality (\ref{HH}) for any $n\geq 2$. As an immediate consequence one has that an entire graph in $\mathbb R^{n+1}$, i.e., the graph of a $C^2$-function from $\mathbb R^n$ to $\mathbb R$, cannot have mean curvature bounded away from zero. This result of Chern and Flanders implies that an entire graph in $\mathbb R^{n+1}$ with constant scalar curvature $R\geq 0$ satisfies $R=0$ (see Section 2, equality (\ref{HRA})).
\vskip5pt
Salavessa \cite{Sa} extended inequality (\ref{HH}) to graphs of smooth real valued functions defined on oriented compact domains of Riemannian manifolds, and Barbosa-Bessa-Montenegro \cite{BBM} extended it to transversally oriented codimension one $C^2$-foliations of Riemannian manifolds.
\vskip5pt
In the theorems below, by a graph over a closed ball $\overline B_r\subset\mathbb R^n$ we mean the graph in $\mathbb R^{n+1}$ of a
$C^2$-real valued function defined in $\overline B_r$. In our first result, we improve the estimates given by Heinz, Chern and Flanders for $\inf|H|$, by showing that the estimates can be made strict if we consider graphs over closed balls instead of graphs over open balls.
\begin{thm}\label{FH} If $M^n\subset\mathbb R^{n+1}$ is a graph over a closed ball $\overline B_r\subset\mathbb R^n$, then \begin{eqnarray}\label{FH1}
\inf_M|H|<\frac{1}{r}. \end{eqnarray} \end{thm}
\vskip10pt
The estimate (\ref{HK}) implies that an entire graph in $\mathbb R^3$ cannot have negative Gaussian curvature bounded away from zero, a result extended later to complete surfaces in $\mathbb R^3$ by Efimov \cite{E} in a remarkable work (see the discussion after Corollary \ref{E-HV}). In the next result, we obtain a version in higher dimensions of the inequality (\ref{HK}).
\vskip10pt
\begin{thm}\label{FR} Let $M^n\subset\mathbb R^{n+1}$ be a graph over a closed ball $\overline B_r\subset\mathbb R^n$, and denote by $R$ the scalar curvature of $M$. Then \begin{eqnarray}\label{FR1}
\inf_M|R|\leq\Big(\sup_M|H|+\frac{1}{r}\Big)\frac{2}{r}. \end{eqnarray} Moreover, if $M$ has a point where the second fundamental form is semi-definite, then \begin{eqnarray}\label{FR2}
\inf_M|R|<\frac{1}{r^2}. \end{eqnarray} \end{thm}
\vskip5pt
\noindent{\bf Remark.} For each $a>r$, let $M_a$ be the graph of $f:\overline B_r\subset\mathbb R^n\to\mathbb R$ given by $$ f(x_1,...,x_n)=\Big({a^2-\sum_{i=1}^nx_i^2}\Big)^{\frac{1}{2}}. $$ The mean curvature and the scalar curvature of $M_a$ are, respectively, $1/a$ and $1/a^2$. Since $a$ can be made arbitrarily close to $r$, we see that the estimates (\ref{FH1}) and (\ref{FR2}) are sharp.
\vskip5pt
The key to obtain strict inequalities in (\ref{FH1}) and (\ref{FR2}) is a general tangency principle by Silva and the author (\cite{FS1}, Theorem 1.1), which establishes relatively weak sufficient conditions for two hypersurfaces of a Riemannian manifold to coincide in a neighborhood of a tangency point (see Theorems \ref{TPH} and \ref{TPR} in Section 2 for particular cases of this tangency principle).
\vskip5pt
It is natural trying to establish versions for the higher order mean curvatures $H_k,\;k\geq 2$ (see Section 2 for the definitions) of the theorem of Chern and Flanders referred to in the beginning of the introduction. In this regard, Elbert \cite{El} proved that there is no entire graph in $\mathbb R^{n+1}$ with second fundamental form of bounded length and negative 2-mean curvature $H_2$ bounded away from zero (for hypersurfaces of a Euclidean space, the 2-mean curvature $H_2$ is nothing but the scalar curvature $R$ of the hypersurface). Hasanis-Vlachos \cite{HV} improved Elbert's result by proving that
$\inf_M|R|=0$ for all entire graphs in $\mathbb R^{n+1}$ with second fundamental form of bounded length (see \cite{El} and \cite{HV} for results regarding the other higher order mean curvatures). As an immediate consequence of the first part of Theorem \ref{FR}, we obtain the following improvement of these results of Elbert and Hasanis-Vlachos.
\begin{cor}\label{E-HV} If an entire graph $M^n\subset\mathbb R^{n+1}$ has bounded mean curvature, then $$
\inf_M|R|=0. $$ In particular, if the scalar curvature $R$ is constant, then $R=0$. \end{cor}
\vskip5pt
A classical theorem by Hilbert states that the hyperbolic plane cannot be isometrically immersed in the 3-dimensional Euclidean space. In a remarkable work, Efimov \cite{E} extended Hilbert's theorem by proving that there is no complete immersed surface in $\mathbb R^3$ with Gaussian curvature less than a negative constant.
\vskip5pt
Reilly \cite{R} and Yau \cite{Y1} (see also \cite{Y2}, problem 56, p. 682) proposed the following extension of Efimov's theorem:
\vskip5pt
``There are no complete hypersurfaces in $\mathbb R^{n+1}$ with negative Ricci curvature bounded away from zero.''
\vskip5pt
In a well known work, Smyth-Xavier \cite{SX} showed that the above question has a positive answer for $n=3$ and provided a partial answer for $n>3$. This question also has a positive answer in the class of all entire graphs with negative Ricci curvature in Euclidean space, as Chern has shown \cite{C} that
$\inf_M|\text{Ric}|=0$ for all entire graphs $M^n\subset\mathbb R^{n+1},\;n\geq 3$, with negative Ricci curvature. The corollary of the following theorem improves this result of Chern.
\vskip10pt
\begin{thm}\label{FRic} Let $M^n\subset\mathbb R^{n+1},\;n\geq 3,$ be a graph over a closed ball $\overline B_r\subset\mathbb R^n$. If the Ricci curvature of $M$ is negative, then \begin{eqnarray}\label{FA1}
\inf_M|A|<\frac{3(n-2)}{r}, \end{eqnarray}
where $|A|$ is the length of the second fundamental form of $M^n$ in $\mathbb R^{n+1}$. \end{thm}
\vskip5pt
\begin{cor}\label{SX} If the Ricci curvature of an entire graph $M^n\subset\mathbb R^{n+1},\,n\geq 3$, is negative, then {\em inf}$_M|A|=0$. \end{cor}
Okayasu \cite{O} constructed an example of an $O(2)\times O(2)$-invariant complete hypersurface of constant negative scalar curvature in $\mathbb R^4$. Since the length $|A|$ of the second fundamental form in Okayasu's example is unbounded, one can then formulate the following Efimov type question: is there a complete hypersurface in $\mathbb R^{n+1}$ with bounded mean curvature and negative scalar curvature bounded away from zero? Corollary \ref{E-HV} shows that if such a hypersurface does exist then certainly it is not an entire graph. On the other hand, we do not know whether Corollary \ref{E-HV} holds without the assumption that the mean curvature is bounded.
\vskip5pt
In dimension 2, Milnor \cite{KO} conjectured (see also \cite{Y2}, problem 62, p. 684) the following improvement of Efimov's result: If $M^2\subset\mathbb R^3$ is a complete non-flat umbilic free surface whose Gaussian curvature does not change sign, then $\inf
|A|=0$. Smyth-Xavier \cite{SX} proposed the following analogue in higher dimensions: If $M^n\subset\mathbb R^{n+1}$ is a complete immersed hypersurface with negative Ricci curvature, then $\inf_M
|A|=0$. Corollary \ref{SX} shows that this question has a positive answer for entire graphs in Euclidean spaces.
\vskip10pt
In the following theorem we obtain an estimate for $\inf_M|A|$ under another geometric condition.
\begin{thm}\label{FA} Let $M^n\subset\mathbb R^{n+1}$ be a graph over a closed ball $\overline B_r\subset\mathbb R^n$. If the mean curvature of $M$ does not change sign, then \begin{eqnarray}\label{FA2}
\inf_M|A|<\frac{n}{r}. \end{eqnarray} \end{thm}
As immediate consequences of Theorem \ref{FA}, we obtain the following results by Silva and the author \cite{FS2}:
\begin{cor}\label{PH1} If the mean curvature of an entire graph $M^n\subset\mathbb R^{n+1}$ does not change sign, then {\em inf}$_M|A|=0$. \end{cor}
Corollary \ref{PH1} was obtained by Hasanis-Vlachos \cite{HV}
under the additional assumption that the length $|A|$ of the second fundamental form $A$ of $M$ is bounded.
\begin{cor}\label{PH2} Let $M^n\subset\mathbb R^{n+1}$ be an entire graph. If $|A|$ is constant and $H$ does not change sign, then $M$ is a hyperplane. \end{cor}
\noindent{\bf Remark.} Corollary \ref{PH1} does not hold for hypersurfaces which are not graphs. In fact, any circular cylinder satisfies $\inf |A|>0$.
\vskip10pt
We stress that our methods in this paper are substantially different from the ones employed by Heinz \cite{H}, which were based on an ingenious use of the divergence theorem, applied to the classical formulas for the mean and Gaussian curvature of a graph in two variables. By contrast, our proofs constitute another application of our work on the tangency principle \cite{FS1}. They also use a classical result of G\aa rding \cite{Ga} on hyperbolic polynomials.
\vskip10pt
\noindent{\bf Acknowledgements.} This paper was partly written during an extended visit of the author to the University of Notre Dame. He would like to record his gratitude to the mathematics department for the invitation and hospitality, and to professor Frederico Xavier for the suggestions and helpful discussions.
\section{Preliminaries}
Given an oriented immersed hypersurface $M^n$ of the $(n+1)$-dimensional Euclidean space $\mathbb R^{n+1}$, denote by $A$ the shape operator associated to the second fundamental form of the immersion and by $k_1(p),...,k_n(p)$ the principal curvatures of $M$ at a point $p$, labelled by the condition
$k_1(p)\leq\dots\leq k_n(p)$. The squared length $|A|^2(p)$ of the second fundamental form at a point $p$ is defined as the trace of $A^2(p)$. It is easy to see that \begin{eqnarray}\label{squarelength}
|A|^2(p)=\sum_{i=1}^nk_i^2(p). \end{eqnarray} Denote by $R$ the scalar curvature of $M$ and by $H$ the mean curvature of the immersion. If $e_1,...,e_n$ diagonalizes $A(p)$ with corresponding eigenvalues $k_1,...,k_n$, it follows from the Gauss equation \cite{Dj} that the Ricci curvature of $M$ at $p$ in the direction $e_i$ is given by \begin{eqnarray}\label{RC2} (n-1)\text{Ric}_p (e_i)=\sum_{j=1 , j\neq i}^n k_i k_j=k_i (nH-k_i). \end{eqnarray} Taking the sum on $i$, we obtain \begin{eqnarray}\label{HRA}
n^2H^2=|A|^2+n(n-1)R. \end{eqnarray}
\noindent{For} $1\leq k\leq n$, the $k$-mean curvature $H_k(x)$ of $M$ at a point $x$ is defined by \begin{eqnarray}\label{Hr} H_k(x)=\frac{1}{\binom nk}\sigma_k(k_1(x),...,k_n(x)), \end{eqnarray} where $\sigma_k:\mathbb R^n\to\mathbb R$ is given by \begin{eqnarray}\label{SF} \sigma_k(x_1,...,x_n)=\sum_{i_1<\dots <i_k}x_{i_1}\dots x_{i_k} \end{eqnarray} and is called the $k$-elementary symmetric function. Notice that $H_1$ is the mean curvature $H$ of the hypersurface and $H_2$ is, by the Gauss equation \cite{Dj}, simply the scalar curvature $R$ of $M$ (more generally, for hypersurfaces of an ambient space with constant sectional curvature $c$, we have $R=H_2+c$).
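As a purely illustrative aside (not part of the original argument), the following short Python sketch evaluates the elementary symmetric functions $\sigma_k$ and the $k$-mean curvatures $H_k$ for an arbitrary sample of principal curvatures, and checks numerically the identity (\ref{HRA}), $n^2H^2=|A|^2+n(n-1)R$, with $R=H_2$; the sample values below are ours.

```python
# Illustrative check of the k-mean curvatures and of n^2 H^2 = |A|^2 + n(n-1) R.
# The principal curvatures below are arbitrary sample values, not data from the paper.
from itertools import combinations
from math import comb, prod

def sigma(k, kappa):
    """k-th elementary symmetric function of the principal curvatures."""
    return sum(prod(c) for c in combinations(kappa, k))

def H(k, kappa):
    """k-mean curvature H_k = sigma_k / binom(n, k)."""
    return sigma(k, kappa) / comb(len(kappa), k)

kappa = [0.7, -0.2, 1.3, 0.4]            # sample principal curvatures k_1, ..., k_n
n = len(kappa)
H1 = H(1, kappa)                          # mean curvature H
R = H(2, kappa)                           # scalar curvature R = H_2 (Euclidean ambient)
A2 = sum(k_i**2 for k_i in kappa)         # |A|^2

# Gauss-equation identity: n^2 H^2 = |A|^2 + n(n-1) R
assert abs(n**2 * H1**2 - (A2 + n*(n - 1)*R)) < 1e-12
print(f"H = {H1}, R = {R}, |A|^2 = {A2}")
```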
\vskip10pt
For $1\leq k\leq n$, denote by $\Gamma_k$ the connected component of the set $\{\sigma_k>0\}$ that contains the vector $(1,...,1)$. It follows immediately from the definitions that $\Gamma_k$ contains the positive cone $\mathcal O^n=\{(x_1,...,x_n)\in\mathbb R^n:x_i>0,\,\forall i,\}$, for all $1\leq k\leq n$. It was proved by G\aa rding \cite{Ga} that $\Gamma_k$ is an open convex cone, $1\leq k\leq n$, and that \begin{eqnarray}\label{Ga} \Gamma_1\supset\Gamma_2\supset\dots\supset\Gamma_n. \end{eqnarray}
Given a hypersurface $M^n\subset\mathbb R^{n+1}$, a point $p\in M$
and a vector $\eta_o\perp T_pM,\;|\eta_o|=1$, we can parametrize a neighborhood of $p$ in $M$ by \begin{eqnarray}\label{PG} \varphi(x)=x+\mu(x)\eta_o, \end{eqnarray} for some smooth real-valued function $\mu:V\to\mathbb R$ defined in a neighborhood $V$ of 0 in $T_pM$.
\vskip10pt
Let $M_1^n$ and $M_2^n$ be hypersurfaces of $\mathbb R^{n+1}$ tangent at a point $p$ and $\eta_o$ a unit vector normal to $T_pM_1=T_pM_2$. Parametrize $M_1$ and $M_2$ as in (\ref{PG}), obtaining corresponding functions $\mu_1$ and $\mu_2$. As in \cite{FS1}, we say that $M_1$ remains above $M_2$ in a neighborhood of $p$ with respect to $\eta_o$ if $\mu_1(x)\geq\mu_2(x)$ for all $x$ in a neighborhood of zero.
\vskip10pt
In our proofs we will make use of the following theorems, which are particular cases of a general tangency principle by Silva and the author (\cite{FS1}, Theorem 1.1).
\begin{thm}\label{TPH} {\bf (Tangency Principle for Mean Curvature)} Let $M_1^n$ and $M_2^n$ be hypersurfaces of $\mathbb R^{n+1}$ tangent at a point $p$ and suppose that $M_1$ remains above $M_2$ in a neighborhood of $p$ with respect to a unit vector $\eta_o\perp T_pM_1$. If the mean curvature of $M_2$ at $(x,\varphi_2(x))$ is greater than or equal to the mean curvature of $M_1$ at $(x,\varphi_1(x))$, for all $x$ sufficiently small, then $M_1$ and $M_2$ coincide in a neighborhood of $p$. \end{thm}
\begin{thm}\label{TPR} {\bf (Tangency Principle for Scalar Curvature)} Let $M_1^n$ and $M_2^n$ be hypersurfaces of $\mathbb R^{n+1}$ tangent at a point $p$ and suppose that $M_1$ remains above $M_2$ in a neighborhood of $p$ with respect to a unit vector $\eta_o\perp T_pM_1$. If the scalar curvature of $M_2$ at $(x,\varphi_2(x))$ is greater than or equal to the scalar curvature of $M_1$ at $(x,\varphi_1(x))$, for all $x$ sufficiently small, and all principal curvatures $k_1(p),...,k_n(p)$ of $M_2$ at $p$ are positive (or more generally, if $\big(k_1(p),...,k_n(p)\big)\in\Gamma_2$), then $M_1$ and $M_2$ coincide in a neighborhood of $p$. \end{thm}
\section{Proofs of the Theorems}
\noindent{\bf Proof of Theorem \ref{FH}.} We can suppose
$c:=\text{inf}_M|H|>0$. Otherwise, inequality (\ref{FH1}) is trivial. Choose the orientation for $M$ so that $H\geq c>0$, and take a sphere $S$ of radius $r$, disjoint from $M^n$ and contained in the component of $(\overline B_r\times \mathbb R)\backslash M$ that contains the normals. Move $S$ until it touches $M$ for the first time, say at $p$, and denote by $N$ the unit normal vector field on $M$. By our assumption that $M$ is a graph over $\overline B_r$ we have that $p$ belongs to the interior of $M$. If $p_o$ is the center of $S$, we have that $p$ is a point where the function $f:M\to\mathbb R$, $f(x)=\frac{1}{2}\parallel x-p_o\parallel^2$, attains its minimum. If $e_1,...,e_n$ is an orthonormal basis of $T_pM$ such that $A(e_i)=k_i(p)e_i,\,i=1,...,n,$ we thus have \begin{eqnarray}\label{grad} 0=\text{grad}f(p)=(p-p_o)^T \end{eqnarray} and \begin{eqnarray}\label{hess} 0\leq \text{Hess}f(p)(e_i,e_i)=1+\langle \sigma(e_i,e_i),p-p_o\rangle=1+\langle p-p_o,N(p)\rangle k_i(p). \end{eqnarray} Equality (\ref{grad}) implies $$ N(p)=\frac{p_o-p}{\parallel p_o-p\parallel}=\frac{p_o-p}{r} $$ and, by substituting this into (\ref{hess}), we conclude that $k_i(p)\leq\frac{1}{r},\,i=1,...,n$. Thus \begin{eqnarray} nc\leq nH(p)=k_1(p)+\dots +k_n(p)\leq \frac{n}{r}, \end{eqnarray} from which we obtain \begin{eqnarray}\label{strict1}
\text{inf}_M|H|=c\leq 1/r. \end{eqnarray} If equality occurs in (\ref{strict1}), we have $H\geq 1/r$ along $M$ and, by Theorem \ref{TPH}, $M$ and $S$ coincide in a neighborhood of $p$. By a connectedness argument, we conclude that $M$ is a closed hemisphere of $S$. In particular, the tangent planes to $M$ along $\partial M$ are vertical, contradicting the assumption that $M$ is a graph over $\overline B_r$. This contradiction implies that the inequality in (\ref{strict1}) is strict.\qed
\vskip10pt
\noindent{\bf Proof of Theorem \ref{FR}.} We will first prove (\ref{FR1}). If $R$ changes sign, there is, by continuity, a point where the scalar curvature vanishes and (\ref{FR1}) follows trivially. If $R>0$ along $M$, we have from (\ref{HRA}) \begin{eqnarray}
n(n-1)|R|=n(n-1)R=n^2H^2-|A|^2\leq n^2H^2, \end{eqnarray} which implies \begin{eqnarray}
|R|\leq \frac{nH^2}{n-1}\leq \frac{n}{n-1}|H|\sup|H|. \end{eqnarray} Using Theorem \ref{FH}, we obtain \begin{eqnarray}
\inf_M|R|\leq
\frac{n}{n-1}\sup|H|\inf|H|\leq\frac{n}{r(n-1)}\sup|H|, \end{eqnarray} from which we easily obtain (\ref{FR1}).
\vskip5pt
Suppose now $R<0$ everywhere and orient $M$ by a unit normal vector field $N$. As in the proof of Theorem \ref{FH}, take a sphere $S$ of radius $r$, disjoint from $M$ and contained in the component of $(\overline B_r\times \mathbb R)\backslash M$ that contains the normals, and move $S$ until it touches $M$ for the first time, say at $p$. Since $R<0$ along $M$, we have principal curvatures of both signs at each point of $M$. Let $l$ be the number of negative principal curvatures of $M$ at $p$, so that \begin{eqnarray} k_1(p)\leq\dots\leq k_l(p)<0\leq k_{l+1}(p)\leq\dots\leq k_n(p). \end{eqnarray} By the Gauss equation, we have \begin{eqnarray} 0>\frac{n(n-1)}{2}R(p)\nonumber&=&\sum_{1\leq i<j\leq n}k_ik_j\\&=&\sum_{1\leq i<j\leq l}k_ik_j+\sum_{l+1\leq i<j\leq n}k_ik_j+\sum_{i=1,...,l; j=l+1,...,n}k_ik_j\nonumber\\&\geq&\sum_{i=1,...,l; j=l+1,...,n}k_ik_j, \end{eqnarray} and so \begin{eqnarray}\label{estR} 0>\frac{n(n-1)}{2}R(p)\geq (k_1+\dots+k_l)(k_{l+1}+\dots+k_n)=\Big(nH-\sum_{i=l+1}^nk_i\Big)\sum_{i=l+1}^nk_i. \end{eqnarray} Since $k_i(p)\leq 1/r,\,i=1,\dots,n$ (see the proof of Theorem \ref{FH}), we arrive at \begin{eqnarray}
\frac{n(n-1)}{2}\inf|R|&\leq&\nonumber
\frac{n(n-1)}{2}|R(p)|\\\nonumber&\leq&\Big(n\sup|H|+\sum_{i=l+1}^nk_i\Big)\sum_{i=l+1}^nk_i
\\\nonumber&\leq&\Big(n\sup|H|+\frac{n-l}{r}\Big)\frac{n-l}{r}\\&\leq&\Big(n\sup|H|+\frac{n-1}{r}\Big)\frac{n-1}{r}, \end{eqnarray} from which we easily obtain (\ref{FR1}).
\vskip5pt
We will now proceed to prove the second part of the theorem. Let $q$ be a point where the second fundamental form is semi-definite and choose the orientation $N$ so that all principal curvatures at
$q$ are nonnegative. We can suppose $\text{inf}_M|R|>0$, otherwise there is nothing to prove. Since $k_i(q)\geq 0,\,i=1,...,n$, we have $R>0$ along $M$. From $\mathcal O^n\subset\Gamma_2$ (see Section 2) we infer that the principal curvature vector $\overrightarrow{k}(q)=\big(k_1(q),\dots,k_n(q)\big)$ of $M$ at $q$ belongs to $\overline{\Gamma_2}$. Since $R(q)>0$, we have in fact $\overrightarrow{k}(q)\in\Gamma_2$. It follows from the connectedness of both $M$ and $\Gamma_2$ that $\overrightarrow{k}(x)\in\Gamma_2$, for all $x\in M$. In particular, $\overrightarrow{k}(p)\in\Gamma_2$, where $p$ is as in the first part of the proof. If we had \begin{eqnarray}\label{infR}
\text{inf}_M|R|\geq 1/r^2, \end{eqnarray} we would conclude, by Theorem \ref{TPR}, that $M$ and $S$ coincide in a neighborhood of $p$. Reasoning as in the proof of Theorem \ref{FH}, we would conclude that $M$ is a closed hemisphere of $S$, contradicting the assumption that $M$ is a graph over $\overline B_r$. This contradiction implies that (\ref{infR}) does not hold and concludes the proof of the theorem.\qed
\vskip10pt
\noindent{\bf Proof of Theorem \ref{FRic}.} Since the Ricci curvature of $M$ is negative, we have, by (\ref{RC2}), that all principal curvatures are nonzero and that there are principal curvatures of both signs at each point of $M$. Let $l$ be the number of negative principal curvatures, so that $k_1\leq\dots\leq k_l<0< k_{l+1}\leq\dots\leq k_n$. Since $n\geq 3$, we can choose the orientation so that $n-1\geq l\geq 2$. Let $S$ and $p$ be as in the proof of Theorem \ref{FH}, and choose an orthonormal basis $\{e_1,...,e_n\}$ of $T_pM$ satisfying $A(e_i)=k_ie_i,\,i=1,...,n$. Since the Ricci curvature is negative, one has, by (\ref{RC2}), $$ k_i(k_1+\dots +\widehat{k_i}+\dots +k_l+k_{l+1}+\dots +k_n)<0,\;\;\;i=1,\dots,l, $$ and, since $k_i<0$, $$ k_{l+1}+\dots +k_n>-k_1-\dots -\widehat{k_i}-\dots
-k_l=|k_1|+\dots +\widehat{|k_i|}+\dots +|k_l|, $$ where the circumflex over $k_i$ means that this term is omitted from the sum. Taking the sum over $i=1,\dots ,l$, we obtain \begin{eqnarray}
l(k_{l+1}+\dots +k_n)>(l-1)\sum_{m=1}^l|k_m|. \end{eqnarray} Since $k_i(p)\leq 1/r$, $i=1,\dots,n$ (see the proof of Theorem \ref{FH}), we arrive at \begin{eqnarray}
\sum_{m=1}^l|k_m|<\frac{l(n-l)}{r(l-1)}. \end{eqnarray} Thus \begin{eqnarray}
\sum_{m=1}^n|k_m|=\sum_{m=1}^l|k_m|+\sum_{m=l+1}^n|k_m|<\frac{l(n-l)}{r(l-1)} +\frac{n-l}{r}=\frac{(n-l)(2l-1)}{r(l-1)}. \end{eqnarray} Noticing that the right-hand side of the above inequality is strictly decreasing in $l$ (recall that $2\leq l\leq n-1$), we have $$
\sum_{m=1}^n|k_m|<\frac{3(n-2)}{r}. $$ Hence $$
|A|^2(p)=\sum_{m=1}^n|k_m|^2<\Big(\sum_{m=1}^n|k_m|\Big)^2<\Big(\frac{3(n-2)}{r}\Big)^2, $$ from which we obtain $$
\text{inf}|A|\leq |A|(p)<\frac{3(n-2)}{r}.\qed $$
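For the reader who wishes to double-check the monotonicity step used above, the following illustrative Python sketch (added here; not part of the proof) verifies numerically that $g(l)=(n-l)(2l-1)/(l-1)$ is strictly decreasing for $2\leq l\leq n-1$ and that its value at $l=2$ is $3(n-2)$, which is where the bound $3(n-2)/r$ comes from.

```python
# Numerical check (illustrative) that g(l) = (n-l)(2l-1)/(l-1) decreases in l
# on 2 <= l <= n-1, so the bound 3(n-2)/r corresponds to l = 2.
def g(n, l):
    return (n - l) * (2*l - 1) / (l - 1)

for n in range(4, 12):
    vals = [g(n, l) for l in range(2, n)]               # l = 2, ..., n-1
    assert all(a > b for a, b in zip(vals, vals[1:]))    # strictly decreasing in l
    assert vals[0] == 3 * (n - 2)                        # maximum attained at l = 2
print("maximum of g over 2 <= l <= n-1 is 3(n-2), attained at l = 2")
```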
\vskip10pt
\noindent{\bf Proof of Theorem \ref{FA}.} Choose the orientation for $M$ so that $H\geq 0$ and let $S$ and $p$ be as in the proof of Theorem \ref{FH}. We have two cases to consider:
\noindent{\it First case:} All principal curvatures of $M$ at $p$ are nonnegative. Since $k_i(p)\leq 1/r,\,i=1,\dots,n$ (see the proof of Theorem \ref{FH}), we have \begin{eqnarray}
|A|^2(p)=\sum_{i=1}^nk_i^2(p)\leq\frac{n}{r^2}, \end{eqnarray} and so \begin{eqnarray}
\text{inf}_M|A|\leq |A|(p)\leq\frac{\sqrt n}{r}<\frac{n}{r}. \end{eqnarray}
\noindent{\it Second case:} There are negative principal curvatures of $M$ at $p$. Let $l$ be the number of negative principal curvatures, so that \begin{eqnarray} k_1(p)\leq\dots\leq k_l(p)<0\leq k_{l+1}(p)\leq\dots\leq k_n(p). \end{eqnarray} Notice that $l\leq n-1$ since $H\geq 0$. From $k_i(p)\leq 1/r,\,i=1,\dots,n$, and $H\geq 0$, we obtain \begin{eqnarray} \frac{n-l}{r}\geq k_{l+1}(p)+\dots+k_n(p)\geq -k_1(p)-\dots
-k_l(p)=|k_1|(p)+\dots +|k_l|(p). \end{eqnarray} Hence, \begin{eqnarray}
|A|^2(p)&=&\sum_{i=1}^lk_i^2+\sum_{i=l+1}^nk_i^2\leq\Big(\sum_{i=1}^l|k_i|\Big)^2+\sum_{i=l+1}^nk_i^2\nonumber \\&\leq& \frac{(n-l)^2}{r^2}+\frac{n-l}{r^2}=\frac{(n-l)(n-l+1)}{r^2}\nonumber\\&\leq&\frac{n(n-1)}{r^2}<\frac{n^2}{r^2}, \end{eqnarray} from which we obtain (\ref{FA2}). \qed
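The two elementary estimates used in the proof of Theorem \ref{FA} can likewise be checked numerically; the short Python sketch below (illustrative only, added here) verifies that $\sqrt n<n$ for $n\geq 2$ and that $(n-l)(n-l+1)\leq n(n-1)<n^2$ for $1\leq l\leq n-1$.

```python
# Illustrative check of the two elementary bounds used in the proof above:
#   first case:  |A|(p)^2 <= n/r^2, so inf|A| <= sqrt(n)/r < n/r for n >= 2
#   second case: (n-l)(n-l+1) <= n(n-1) < n^2 for 1 <= l <= n-1
from math import sqrt

for n in range(2, 12):
    assert sqrt(n) < n                                   # first case
    for l in range(1, n):                                # second case
        assert (n - l) * (n - l + 1) <= n * (n - 1) < n**2
print("both cases yield inf |A| < n/r")
```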
\noindent{Francisco Fontenele\\Departamento de Geometria\\ Universidade Federal Fluminense\\Niter\'oi, RJ, Brazil\\e-mail: [email protected]}
\end{document}
\begin{definition}[Definition:General Logarithm/Common/Characteristic]
Let $n \in \R$ be a (strictly) positive real number.
Let $n$ be presented (possibly approximated) in scientific notation as:
:$a \times 10^d$
where $d \in \Z$ is an integer.
Let $\log_{10} n$ be expressed in the form:
:$\log_{10} n = \begin {cases} c \cdotp m & : d \ge 0 \\ \overline c \cdotp m & : d < 0 \end {cases}$
where:
:$c = \size d$ is the absolute value of $d$
:$m := \log_{10} a$
$c$ is the '''characteristic''' of $\log_{10} n$.
\end{definition}
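As an illustration of the definition (added here for concreteness; the function and variable names are ours), the following Python sketch computes the characteristic and mantissa of $\log_{10} n$ for one number greater than $1$ and one between $0$ and $1$.

```python
# Illustrative computation of the characteristic (and mantissa) of log10 n.
from math import log10, floor

def characteristic_and_mantissa(n):
    d = floor(log10(n))        # exponent when n is written as a * 10^d with 1 <= a < 10
    a = n / 10**d
    c = abs(d)                 # characteristic c = |d|
    m = log10(a)               # mantissa, 0 <= m < 1
    return d, c, m

for n in (3700.0, 0.002416):
    d, c, m = characteristic_and_mantissa(n)
    bar = "bar " if d < 0 else ""
    print(f"log10({n}) = {log10(n):+.5f}  ->  characteristic {bar}{c}, mantissa {m:.5f}")
```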
HD 89345: a bright oscillating star hosting a transiting warm Saturn-sized planet observed by K2 (1805.01860)
V. Van Eylen, F. Dai, S. Mathur, D. Gandolfi, S. Albrecht, M. Fridlund, R. A. García, E. Guenther, M. Hjorth, A. B. Justesen, J. Livingston, M. N. Lund, F. Pérez Hernández, J. Prieto-Arranz, C. Regulo, L. Bugnet, M. E. Everett, T. Hirano, D. Nespral, G. Nowak, E. Palle, V. Silva Aguirre, T. Trifonov, J. N. Winn, O. Barragán, P. G. Beck, W. J. Chaplin, W. D. Cochran, S. Csizmadia, H. Deeg, M. Endl, P. Heeren, S. Grziwa, A. P. Hatzes, D. Hidalgo, J. Korth, S. Mathis, P. Montañes Rodriguez, N. Narita, M. Patzold, C. M. Persson, F. Rodler, A. M. S. Smith
May 4, 2018 astro-ph.SR, astro-ph.EP
We report the discovery and characterization of HD 89345b (K2-234b; EPIC 248777106b), a Saturn-sized planet orbiting a slightly evolved star. HD 89345 is a bright star ($V = 9.3$ mag) observed by the K2 mission with one-minute time sampling. It exhibits solar-like oscillations. We conducted asteroseismology to determine the parameters of the star, finding the mass and radius to be $1.12^{+0.04}_{-0.01}~M_\odot$ and $1.657^{+0.020}_{-0.004}~R_\odot$, respectively. The star appears to have recently left the main sequence, based on the inferred age, $9.4^{+0.4}_{-1.3}~\mathrm{Gyr}$, and the non-detection of mixed modes. The star hosts a "warm Saturn" ($P = 11.8$~days, $R_p = 6.86 \pm 0.14~R_\oplus$). Radial-velocity follow-up observations performed with the FIES, HARPS, and HARPS-N spectrographs show that the planet has a mass of $35.7 \pm 3.3~M_\oplus$. The data also show that the planet's orbit is eccentric ($e\approx 0.2$). An investigation of the rotational splitting of the oscillation frequencies of the star yields no conclusive evidence on the stellar inclination angle. We further obtained Rossiter-McLaughlin observations, which result in a broad posterior of the stellar obliquity. The planet seems to conform to the same patterns that have been observed for other sub-Saturns regarding planet mass and multiplicity, orbital eccentricity, and stellar metallicity.
Mass determination of the 1:3:5 near-resonant planets transiting GJ 9827 (K2-135) (1802.09557)
J. Prieto-Arranz, E. Palle, D. Gandolfi, O. Barragán, E.W. Guenther, F. Dai, M. Fridlund, T. Hirano, J. Livingston, P. Niraula, C.M. Persson, S. Redfield, S. Albrecht, R. Alonso, G. Antoniciello, J. Cabrera, W.D. Cochran, Sz. Csizmadia, H. Deeg, Ph. Eigmüller, M. Endl, A. Erikson, M. E. Everett, A. Fukui, S. Grziwa, A. P. Hatzes, D. Hidalgo, M. Hjorth, J. Korth, D. Lorenzo-Oliveira, F. Murgas, N. Narita, D. Nespral, G. Nowak, M. Pätzold, P. Montañés Rodríguez, H. Rauer, I. Ribas, A. M. S. Smith, V. Van Eylen, J.N. Winn
Feb. 26, 2018 astro-ph.EP
Aims. GJ 9827 (K2-135) has recently been found to host a tightly packed system consisting of three transiting small planets whose orbital periods of 1.2, 3.6, and 6.2 days are near the 1:3:5 ratio. GJ 9827 hosts the nearest planetary system (d = $30.32\pm1.62$ pc) detected by Kepler and K2 . Its brightness (V = 10.35 mag) makes the star an ideal target for detailed studies of the properties of its planets. Results. We find that GJ 9827 b has a mass of $M_\mathrm{b}=3.74^{+0.50}_{-0.48}$ $M_\oplus$ and a radius of $R_\mathrm{b}=1.62^{+0.17}_{-0.16}$ $R_\oplus$, yielding a mean density of $\rho_\mathrm{b} = 4.81^{+1.97}_{-1.33}$ g cm$^{-3}$. GJ 9827 c has a mass of $M_\mathrm{c}=1.47^{+0.59}_{-0.58}$ $M_\oplus$, radius of $R_\mathrm{c}=1.27^{+0.13}_{-0.13}$ $R_\oplus$, and a mean density of $\rho_\mathrm{c}= 3.87^{+2.38}_{-1.71}$ g cm$^{-3}$. For GJ 9827 d we derive $M_\mathrm{d}=2.38^{+0.71}_{-0.69}$ $M_\oplus$, $R_\mathrm{d}=2.09^{+0.22}_{-0.21}$ $R_\oplus$, and $\rho_\mathrm{d}= 1.42^{+0.75}_{-0.52}$ g cm$^{-3}$. Conclusions. GJ 9827 is one of the few known transiting planetary systems for which the masses of all planets have been determined with a precision better than 30%. This system is particularly interesting because all three planets are close to the limit between super-Earths and mini-Neptunes. We also find that the planetary bulk compositions are compatible with a scenario where all three planets formed with similar core/atmosphere compositions, and we speculate that while GJ 9827 b and GJ 9827 c lost their atmospheric envelopes, GJ 9827 d maintained its atmosphere, owing to the much lower stellar irradiation. This makes GJ 9827 one of the very few systems where the dynamical evolution and the atmospheric escape can be studied in detail for all planets, helping us to understand how compact systems form and evolve.
K2-141 b: A 5-M$_\oplus$ super-Earth transiting a K7 V star every 6.7 hours (1711.02097)
O. Barragán, D. Gandolfi, F. Dai, J. Livingston, C. M. Persson, T. Hirano, N. Narita, Sz. Csizmadia, J. N. Winn, D. Nespral, J. Prieto-Arranz, A. M. S. Smith, G. Nowak, S. Albrecht, G. Antoniciello, A. Bo Justesen, J. Cabrera, W. D. Cochran, H. Deeg., Ph. Eigmuller, M. Endl, A. Erikson, M. Fridlund, A. Fukui, S. Grziwa, E. Guenther, A. P. Hatzes, D. Hidalgo, M.C. Johnson, J. Korth, E. Palle, M. Patzold, H. Rauer, Y. Tanaka, V. Van Eylen
Jan. 11, 2018 astro-ph.SR, astro-ph.EP
We report on the discovery of K2-141 b (EPIC 246393474 b), an ultra-short-period super-Earth on a 6.7-hour orbit transiting an active K7 V star based on data from K2 campaign 12. We confirmed the planet's existence and measured its mass with a series of follow-up observations: seeing-limited MuSCAT imaging, NESSI high-resolution speckle observations, and FIES and HARPS high-precision radial-velocity monitoring. K2-141 b has a mass of $5.31 \pm 0.46 $ $M_{\oplus}$ and radius of $1.54^{+0.10}_{-0.09}$ $R_{\oplus}$, yielding a mean density of $8.00_{ - 1.45 } ^ { + 1.83 }$ $\mathrm{g\,cm^{-3}}$ and suggesting a rocky-iron composition. Models indicate that iron cannot exceed $\sim$70 % of the total mass. With an orbital period of only 6.7 hours, K2-141 b is the shortest-period planet known to date with a precisely determined mass.
K2-137 b: an Earth-sized planet in a 4.3-hour orbit around an M-dwarf (1707.04549)
A. M. S. Smith, J. Cabrera, Sz. Csizmadia, F. Dai, D. Gandolfi, T. Hirano, J. N. Winn, S. Albrecht, R. Alonso, G. Antoniciello, O. Barragán, H. Deeg, Ph. Eigmüller, M. Endl, A. Erikson, M. Fridlund, A. Fukui, S. Grziwa, E. W. Guenther, A. P. Hatzes, D. Hidalgo, A. W. Howard, H. Isaacson, J. Korth, M. Kuzuhara, J. Livingston, N. Narita, D. Nespral, G. Nowak, E. Palle, M. Pätzold, C.M. Persson, E. Petigura, J. Prieto-Arranz, H. Rauer, I. Ribas, V. Van Eylen
Nov. 6, 2017 astro-ph.EP
We report the discovery from K2 of a transiting terrestrial planet in an ultra-short-period orbit around an M3-dwarf. K2-137 b completes an orbit in only 4.3 hours, the second-shortest orbital period of any known planet, just 4 minutes longer than that of KOI 1843.03, which also orbits an M-dwarf. Using a combination of archival images, AO imaging, RV measurements, and light curve modelling, we show that no plausible eclipsing binary scenario can explain the K2 light curve, and thus confirm the planetary nature of the system. The planet, whose radius we determine to be 0.89 +/- 0.09 Earth radii, and which must have an iron mass fraction greater than 0.45, orbits a star of mass 0.463 +/- 0.052 Msol and radius 0.442 +/- 0.044 Rsol.
K2-106, a system containing a metal-rich planet and a planet of lower density (1705.04163)
E.W. Guenther, O. Barragan, F. Dai, D. Gandolfi, T. Hirano, M. Fridlund, L. Fossati, A. Chau, R. Helled, J. Korth, J. Prieto-Arranz, D. Nespral, G. Antoniciello, H. Deeg, M. Hjorth, S. Grziwa, S. Albrecht, A.P. Hatzes, H. Rauer, Sz. Csizmadia, A.M.S. Smith, J. Cabrera, N. Narita, P. Arriagada, J. Burt, R.P. Butler, W.D. Cochran, J.D. Crane, Ph. Eigmueller, A. Erikson, J.A. Johnson, A. Kiilerich, D. Kubyshkina, E. Palle, C.M. Persson, M. Paetzold, S. Sabotta, B. Sato, St.A. Shectman, J.K. Teske, I.B. Thompson, V. Van Eylen, G. Nowak, A. Vanderburg, R.A. Wittenmyer
Sept. 26, 2017 astro-ph.EP
Planets in the mass range from 2 to 15 MEarth are very diverse. Some of them have low densities, while others are very dense. By measuring the masses and radii, the mean densities, structure, and composition of the planets are constrained. These parameters also give us important information about their formation and evolution, and about possible processes for atmospheric loss. We determined the masses, radii, and mean densities for the two transiting planets orbiting K2-106. The inner planet has an ultra-short period of 0.57 days. The period of the outer planet is 13.3 days. Although the two planets have similar masses, their densities are very different. For K2-106b we derive Mb=8.36-0.94+0.96 MEarth, Rb=1.52+/-0.16 REarth, and a high density of 13.1-3.6+5.4 gcm-3. For K2-106c, we find Mc=5.8-3.0+3.3 MEarth, Rc=2.50-0.26+0.27 REarth and a relatively low density of 2.0-1.1+1.6 gcm-3. Since the system contains two planets of almost the same mass, but different distances from the host star, it is an excellent laboratory to study atmospheric escape. In agreement with the theory of atmospheric-loss processes, it is likely that the outer planet has a hydrogen-dominated atmosphere. The mass and radius of the inner planet are in agreement with theoretical models predicting an iron core containing 80+20-30% of its mass. Such a high metal content is surprising, particularly given that the star has an ordinary (solar) metal abundance. We discuss various possible formation scenarios for this unusual planet.
K2-99: a subgiant hosting a transiting warm Jupiter in an eccentric orbit and a long-period companion (1609.00239)
A. M. S. Smith, D. Gandolfi, O. Barragán, B. Bowler, Sz. Csizmadia, M. Endl, M. C. V. Fridlund, S. Grziwa, E. Guenther, A. P. Hatzes, G. Nowak, S. Albrecht, R. Alonso, J. Cabrera, W. D. Cochran, F. Cusano, H. J. Deeg, Ph. Eigmüller, A. Erikson, D. Hidalgo, T. Hirano, M. C. Johnson, J. Korth, A. Mann, N. Narita, D. Nespral, E. Palle, M. Pätzold, J. Prieto-Arranz, H. Rauer, I. Ribas, B. Tingley, V. Wolthoff
We report the discovery from K2 of a transiting planet in an 18.25-d, eccentric (0.19$\pm$ 0.04) orbit around K2-99, an 11th magnitude subgiant in Virgo. We confirm the planetary nature of the companion with radial velocities, and determine that the star is a metal-rich ([Fe/H] = 0.20$\pm$0.05) subgiant, with mass $1.60^{+0.14}_{-0.10}~M_\odot$ and radius $3.1\pm 0.1~R_\odot$. The planet has a mass of $0.97\pm0.09~M_{\rm Jup}$ and a radius $1.29\pm0.05~R_{\rm Jup}$. A measured systemic radial acceleration of $-2.12\pm0.04~{\rm m s^{-1} d^{-1}}$ offers compelling evidence for the existence of a third body in the system, perhaps a brown dwarf orbiting with a period of several hundred days.
Zodiacal Exoplanets in Time (ZEIT) II. A "Super-Earth" Orbiting a Young K Dwarf in the Pleiades Neighborhood (1606.05812)
E. Gaidos, A. W. Mann, A. RIzzuto, L. Nofi, G. Mace, A. Vanderburg, G. Feiden, N. Narita, Y. Takeda, T. M. Esposito, R. J. De Rosa, M. Ansdell, T. Hirano, J. R. Graham, A. Kraus, D. Jaffe
June 18, 2016 astro-ph.EP
We describe a "super-Earth"-size ($2.30\pm0.15R_{\oplus}$) planet transiting an early K-type dwarf star in the Campaign 4 field observed by the K2 mission. The host star, EPIC 210363145, was identified as a member of the approximately 120-Myr-old Pleiades cluster based on its kinematics and photometric distance. It is rotationally variable and exhibits near-ultraviolet emission consistent with a Pleiades age, but its rotational period is ~20 d and its spectrum contains no H$\alpha$ emission nor the Li I absorption expected of Pleiades K dwarfs. Instead, the star is probably an interloper that is unaffiliated with the cluster, but younger (< 1 Gyr) than the typical field dwarf. We ruled out a false positive transit signal produced by confusion with a background eclipsing binary by adaptive optics imaging and a statistical calculation. Doppler radial velocity measurements limit the companion mass to <2 times that of Jupiter. Screening of the lightcurves of 1014 potential Pleiades candidate stars uncovered no additional planets. An injection-and-recovery experiment using the K2 Pleiades lightcurves with simulated planets, assuming a planet population like that in the Kepler prime field, predicts only 0.8-1.8 detections (vs. ~20 in an equivalent Kepler sample). The absence of Pleiades planet detections can be attributed to the much shorter monitoring time of K2 (80 days vs. 4 years), increased measurement noise due to spacecraft motion, and the intrinsic noisiness of the stars.
Spin-orbit alignment of exoplanet systems: ensemble analysis using asteroseismology (1601.06052)
T. L. Campante, M. N. Lund, J. S. Kuszlewicz, G. R. Davies, W. J. Chaplin, S. Albrecht, J. N. Winn, T. R. Bedding, O. Benomar, D. Bossini, R. Handberg, A. R. G. Santos, V. Van Eylen, S. Basu, J. Christensen-Dalsgaard, Y. P. Elsworth, S. Hekker, T. Hirano, D. Huber, C. Karoff, H. Kjeldsen, M. S. Lundkvist, T. S. H. North, V. Silva Aguirre, D. Stello, T. R. White
The angle $\psi$ between a planet's orbital axis and the spin axis of its parent star is an important diagnostic of planet formation, migration, and tidal evolution. We seek empirical constraints on $\psi$ by measuring the stellar inclination $i_{\rm s}$ via asteroseismology for an ensemble of 25 solar-type hosts observed with NASA's Kepler satellite. Our results for $i_{\rm s}$ are consistent with alignment at the 2-$\sigma$ level for all stars in the sample, meaning that the system surrounding the red-giant star Kepler-56 remains as the only unambiguous misaligned multiple-planet system detected to date. The availability of a measurement of the projected spin-orbit angle $\lambda$ for two of the systems allows us to estimate $\psi$. We find that the orbit of the hot-Jupiter HAT-P-7b is likely to be retrograde ($\psi=116.4^{+30.2}_{-14.7}\:{\rm deg}$), whereas that of Kepler-25c seems to be well aligned with the stellar spin axis ($\psi=12.6^{+6.7}_{-11.0}\:{\rm deg}$). While the latter result is in apparent contradiction with a statement made previously in the literature that the multi-transiting system Kepler-25 is misaligned, we show that the results are consistent, given the large associated uncertainties. Finally, we perform a hierarchical Bayesian analysis based on the asteroseismic sample in order to recover the underlying distribution of $\psi$. The ensemble analysis suggests that the directions of the stellar spin and planetary orbital axes are correlated, as conveyed by a tendency of the host stars to display large inclination values.
The K2-ESPRINT Project. I. Discovery of the Disintegrating Rocky Planet K2-22b with a Cometary Head and Leading Tail (1504.04379)
R. Sanchis-Ojeda, S. Rappaport, E. Pallé, L. Delrez, J. DeVore, D. Gandolfi, A. Fukui, I. Ribas, K. G. Stassun, S. Albrecht, F. Dai, E. Gaidos, M. Gillon, T. Hirano, M. Holman, A. W. Howard, H. Isaacson, E. Jehin, M. Kuzuhara, A. W. Mann, G. W. Marcy, P. A. Miles-Páez, P. A. Montañés-Rodríguez, F. Murgas, N. Narita, G. Nowak, M. Onitsuka, M. Paegert, V. Van Eylen, J. N. Winn, L. Yu
We present the discovery of a transiting exoplanet candidate in the K2 Field-1 with an orbital period of 9.1457 hr: K2-22b. The highly variable transit depths, ranging from $\sim$0\% to 1.3\%, are suggestive of a planet that is disintegrating via the emission of dusty effluents. We characterize the host star as an M-dwarf with $T_{\rm eff} \simeq 3800$ K. We have obtained ground-based transit measurements with several 1-m class telescopes and with the GTC. These observations (1) improve the transit ephemeris; (2) confirm the variable nature of the transit depths; (3) indicate variations in the transit shapes; and (4) demonstrate clearly that at least on one occasion the transit depths were significantly wavelength dependent. The latter three effects tend to indicate extinction of starlight by dust rather than by any combination of solid bodies. The K2 observations yield a folded light curve with lower time resolution but with substantially better statistical precision compared with the ground-based observations. We detect a significant "bump" just after the transit egress, and a less significant bump just prior to transit ingress. We interpret these bumps in the context of a planet that is not only likely streaming a dust tail behind it, but also has a more prominent leading dust trail that precedes it. This effect is modeled in terms of dust grains that can escape to beyond the planet's Hill sphere and effectively undergo `Roche lobe overflow,' even though the planet's surface is likely underfilling its Roche lobe by a factor of 2.
Ramsey interferometry using the Zeeman sublevels in a spin-2 Bose gas (1303.0637)
M. Sadgrove, Y. Eto, S. Sekine, H. Suzuki, T. Hirano
March 4, 2013 quant-ph
We perform atom interferometry using the Zeeman sublevels of a spin-2 Bose-Einstein condensate of $^{87}$Rb. The observed fringes are strongly peaked, and fringe repetition rates higher than the fundamental Ramsey frequency are found in agreement with a simple theory based on spin rotations. With a suitable choice of initial states, the interferometer could function as a useful tool for magnetometry and studies of spinor dynamics in general.
Origin of spontaneous broken mirror symmetry of vortex lattices in Nb (1108.4240)
H. M. Adachi, M. Ishikawa, T. Hirano, M. Ichioka, K. Machida
Aug. 22, 2011 cond-mat.supr-con
Combining the microscopic Eilenberger theory with the first principles band calculation, we investigate the stable flux line lattice (FLL) for a field applied to the four-fold axis; $H\parallel [001]$ in cubic Nb. The observed FLL transformation along $H_{c2}$ is almost perfectly explained without adjustable parameter, including the tilted square, scalene triangle with broken mirror symmetry, and isosceles triangle lattices upon increasing $T$. We construct a minimum Fermi surface model to understand those morphologies, in particular the stability of the scalene triangle lattice attributed to the lack of the mirror symmetry about the Fermi velocity maximum direction in k-space.
Quantum cryptography using balanced homodyne detection (quant-ph/0008037)
T. Hirano, T. Konishi, R. Namiki
Sept. 10, 2010 quant-ph
We report an experimental quantum key distribution that utilizes balanced homodyne detection, instead of photon counting, to detect weak pulses of coherent light. Although our scheme inherently has a finite error rate, it allows high-efficiency detection and quantum state measurement of the transmitted light using only conventional devices at room temperature. When the average photon number was 0.1, an error rate of 0.08 and "effective" quantum efficiency of 0.76 were obtained.
On the Role of Initial Conditions and Final State Interactions in Ultrarelativistic Heavy Ion Collisions (0907.5529)
K. Werner, T. Hirano, Iu. Karpenko, T. Pierog, S. Porteboeuf, M. Bleicher, S. Haussler
Aug. 5, 2009 nucl-th
We investigate the rapidity dependence of the elliptical flow in heavy ion collisions at 200 GeV (cms), by employing a three-dimensional hydrodynamic evolution, based on different initial conditions, and different freeze-out scenarios. It will be shown that the form of pseudo-rapidity ($\eta$) dependence of the elliptical flow is almost identical to space-time-rapidity ($\eta_{s}$) dependence of the initial energy distribution, independent of the freeze-out prescriptions.
Quantized Berry Phases of a Spin-1/2 Frustrated Two-Leg Ladder with Four-Spin Exchange (0807.2896)
I. Maruyama, T. Hirano, Y. Hatsugai
July 18, 2008 cond-mat.str-el
A spin-1/2 frustrated two-leg ladder with four-spin exchange interaction is studied by quantized Berry phases. We found that the Berry phase successfully characterizes the Haldane phase in addition to the rung-singlet phase, and the dominant vector-chirality phase. The Hamiltonian of the Haldane phase is topologically identical to the S=1 antiferromagnetic Heisenberg chain. Decoupled models connected to the dominant vector-chirality phase revealed that the local object identified by the non-trivial ($\pi$) Berry phase is the direct product of two diagonal singlets.
Topological Identification of Spin-1/2 Two-Leg Ladder with Four-Spin Ring Exchange (0806.4416)
June 27, 2008 cond-mat.str-el
A spin-1/2 two-leg ladder with four-spin ring exchange is studied by quantized Berry phases, used as local order parameters. Reflecting local objects, non-trivial ($\pi$) Berry phase is founded on a rung for the rung-singlet phase and on a plaquette for the vector-chiral phase. Since the quantized Berry phase is topological invariant for gapped systems with the time reversal symmetry, topologically identical models can be obtained by the adiabatic modification. The rung-singlet phase is adiabatically connected to a decoupled rung-singlet model and the vector-chiral phase is connected to a decoupled vector-chiral model. Decoupled models reveals that the local objects are a local singlet and a plaquette singlet respectively.
Search for a Ridge Structure Origin with Shower Broadening and Jet Quenching (0805.2795)
R. Mizukawa, T. Hirano, M. Isse, Y. Nara, A. Ohnishi
May 21, 2008 nucl-th
We investigate the role of jet and shower parton broadening by the strong colour field in the $\Delta\eta$-$\Delta\phi$ correlation of high $p_T$ particles. When anisotropic momentum broadening ($\Delta p_z > \Delta p_T$) is given to jet and shower partons in the initial stage, a ridge-like structure is found to appear in the two hadron correlation. The ratio of the peak to the pedestal yield is overestimated.
Hadronic dissipative effects on transverse dynamics at RHIC (0805.0064)
T. Hirano, U.W. Heinz, D. Kharzeev, R. Lacey, Y. Nara
May 1, 2008 nucl-th
We simulate the dynamics of Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC) with a hybrid model that treats the quark-gluon plasma macroscopically as an ideal fluid, but models the hadron resonance gas microscopically using a hadronic cascade. We find that much of the mass-ordering pattern for v_2(p_T) observed at RHIC is generated during the hadronic stage due to build-up of additional radial flow. We also find that the mass-ordering pattern is violated for phi meson due to small interaction cross section in the hadron resonance gas.
Onset of $J/\psi$ Melting in Quark-Gluon Fluid at RHIC (hep-ph/0703061)
T. Gunji, H. Hamagaki, T. Hatsuda, T. Hirano
July 6, 2007 hep-ph, nucl-th
A strong $J/\psi$ suppression in central Au+Au collisions has been observed by the PHENIX experiment at the Relativistic Heavy Ion Collider (RHIC). We develop a hydro+$J/\psi$ model in which hot quark-gluon matter is described by the full (3+1)-dimensional relativistic hydrodynamics and $J/\psi$ is treated as an impurity traversing through the matter. The experimental $J/\psi$ suppression pattern in mid-rapidity is reproduced well by the sequential melting of $\chi_{\rm c}$, $\psi'$, and $J/\psi$ in dynamically expanding fluid. The melting temperature of directly produced $J/\psi$ is well constrained by the participant-number dependence of the $J/\psi$ suppression and is found to be about $2.T_{\rm c}$ with $T_{\rm c}$ being the pseudo-critical temperature.
Jet-fluid string formation and decay in high-energy heavy-ion collisions (nucl-th/0702068)
M. Isse, T. Hirano, R. Mizukawa, A. Ohnishi, K. Yoshino, Y. Nara
Feb. 22, 2007 nucl-th
We propose a new hadronization mechanism, jet-fluid string (JFS) formation and decay, to understand observables in intermediate to high-$p_{T}$ regions comprehensively. In the JFS model, hard partons produced in jet lose their energy in traversing the QGP fluid, which is described by fully three-dimensional hydrodynamic simulations. When a jet parton escapes from the QGP fluid, it picks up a partner parton from a fluid and forms a color singlet string, then it decays to hadrons. We find that high-$p_T$ $v_2$ values in JFS are about two times larger than in the independent fragmentation model.
Hadron-string cascade versus hydrodynamics in Cu+Cu collisions at $\sqrt{s_{NN}}=200$ GeV (nucl-th/0506058)
T. Hirano, M. Isse, Y. Nara, A. Ohnishi, K. Yoshino
Oct. 3, 2005 hep-ph, nucl-ex, nucl-th
Single particle spectra as well as elliptic flow in Cu+Cu collisions at $\sqrt{s_{NN}}=200$ GeV are investigated within a hadronic cascade model and an ideal hydrodynamic model. Pseudorapidity distribution and transverse momentum spectra for charged hadrons are surprisingly comparable between these two models. However, a large deviation is predicted for the elliptic flow. The forthcoming experimental data will clarify the transport and thermalization aspects of matter produced in Cu+Cu collisions.
3D Jet Tomography of the Twisted Color Glass Condensate (nucl-th/0509064)
A. Adil, M. Gyulassy, T. Hirano
Sept. 23, 2005 nucl-th
Jet Tomography is proposed as a new test of Color Glass Condensate (CGC) initial conditions in non-central $A+A$ collisions. The $k_{T}$ factorized CGC formalism is used to calculate the rapidity twist in the reaction plane of both the bulk low $p_T< 2$ GeV matter as well as the rare high $p_T> 6$ GeV partons. Unlike conventional perturbative QCD, the initial high $p_{T}$ CGC gluons are shown to be twisted even further away from the beam axis than the the low $p_T$ bulk at high rapidities $|\eta|>2$. Differential directed flow $v_{1}(p_{T}>6,|\eta|>2)$ is proposed to test this novel high $p_T$ rapidity twist predicted by the CGC model.
The effect of early chemical freeze out on radial and elliptic flow from a full 3D hydrodynamic model (nucl-th/0202033)
T. Hirano, K. Tsuda
Feb. 11, 2002 hep-ph, nucl-ex, nucl-th
We investigate the effect of early chemical freeze-out on radial and elliptic flow by using a fully three dimensional hydrodynamic model. We find that the time evolution of temperature and the thermal freeze-out temperature dependence of average radial flow are different from the results by using a conventional hydrodynamic model in which chemical equilibrium is always assumed. We also analyse the p_t spectrum and v_2(p_t) at the RHIC energy and consistently reproduce experimental data by choosing the thermal freeze-out temperature T_th = 140 MeV.
Electromagnetic Spectrum from QGP Fluid (nucl-th/9708058)
T. Hirano, S. Muroya, M. Namiki
Aug. 29, 1997 hep-ph, nucl-th
We calculate thermal photon and electron pair distribution from hot QCD matter produced in high energy heavy-ion collisions, based on a hydrodynamical model which is so tuned as to reproduce the recent experimental data at CERN SPS, and compare these electromagnetic spectra with experimental data given by CERN WA80 and CERES. We investigate mainly the effects of the off-shell properties of the source particles on the electromagnetic spectra.
Thermal Photon Emission from QGP fluid (hep-ph/9612234)
Dec. 3, 1996 hep-ph, nucl-th
We compare the numerical results of thermal photon distribution from the hot QCD matter produced by high energy nuclear collisions, based on hydrodynamical model, with the recent experimental data obtained by CERN WA80. Through the asymptotic value of the slope parameter of the transverse momentum distribution, we discuss the characteristic temperature of the QCD fluid. | CommonCrawl |
Journal of Economics
September 2015, Volume 116, Issue 1, pp 1–23
Managerial delegation and welfare effects of cost reductions
Thijs Jansen
Arie van Lier
Arjen van Witteloostuijn
We extend the literature on the welfare effects of cost reductions by developing strategic delegation Cournot oligopoly games with \(n\) firms, linear cost and demand functions, and sales bonuses. Our method generalizes Zhao (Int J Ind Organ 19:455–469, 2001), and expresses the results in terms of the effects of both small and large cost reductions. We find that the firm exit region with sales delegation is larger than in the classical Cournot duopoly benchmark case. We prove that the likelihood of a welfare loss after a cost reduction by an inefficient firm is higher with sales delegation. We show that repairing the welfare loss from such a cost reduction for any \(n > 2\) requires firm exit.
Managerial incentives Cost reduction Cournot oligopoly Welfare effects
C72 D21 D43 L13
Appendix A The proofs of Lemma 1 and Lemma 3
A.1 Proof of Lemma 1
Assume that the unit production costs for firm \(k\) decrease with \(\delta \). (a) If \(k < n\), then after the cost reduction firm \(n\) stays the least efficient firm. As total marginal costs become
$$\begin{aligned} \sum _i c_i - \delta , \end{aligned}$$
there is an interior equilibrium after the cost reduction if and only if
$$\begin{aligned} c_n < {1 \over n^2 + 1} \bigl [ a + n\sum _i c_i - n\delta \bigr ]&\Longleftrightarrow (n^2 + 1) c_n < a + nc_n + nc_{-n} - n\delta \\&\Longleftrightarrow \delta < {1 \over n} \bigl [ a - (n^2 - n + 1)c_n + nc_{-n}\bigr ]. \end{aligned}$$
(b) If \(k = n\), then after a cost reduction \(\delta + \bigl [ c_n - c_{n - 1} \bigr ]\), the unit production costs for firm \(n\) become \(c_n - \bigl [ \delta + c_n - c_{n - 1} \bigr ] = c_{n - 1} - \delta \). So firm \(n - 1\) is the least efficient firm. As after the cost reduction total marginal costs become
$$\begin{aligned} \sum _{i < n - 1} c_i + 2c_{n - 1} - \delta , \end{aligned}$$
there is an interior equilibrium after the cost reduction if and only if
$$\begin{aligned} c_{n - 1}&< {1 \over n^2 + 1} \bigl [ a + n\sum _{i < n - 1} c_i + 2nc_{n - 1} - n\delta \bigr ]\\&\Longleftrightarrow (n^2 + 1) c_{n - 1} < a + n\sum _{i < n - 1} c_i + 2nc_{n - 1} - n\delta \\&\Longleftrightarrow \delta < {1 \over n} \bigl [ a + n\sum _{i < n - 1} c_i + 2nc_{n - 1} - (n^2 + 1) c_{n - 1} \bigr ] \\&\Longleftrightarrow \delta < {1 \over n} \bigl [ a + n \sum _{i < n - 1} c_i - (n^2 - 2n + 1) c_{n - 1} \bigr ]. \end{aligned}$$
Hence,
$$\begin{aligned} \delta {+} \bigl [\! c_n - c_{n - 1} \!\bigr ] {<}\, a + \sum _{i < n - 1} c_i - (n - 1) c_{n - 1} + \bigl [\! c_n - c_{n - 1} \!\bigr ] {=}\, a - nc_{n - 1} + c_{-(n - 1)}. \square \end{aligned}$$
Appendix B The effects on social welfare of a cost reduction
B.1 The benchmark case
Using \((2)\), we obtain for the producer surplus after the cost reduction
$$\begin{aligned} PS(\delta )&= \sum _i \pi _i(\delta ) = \sum _i \bigl (q_i(\delta )\bigr )^2 = \sum _{i \not = k} \bigl (q_i(\delta ) \bigr )^2 + \bigl ( q_k(\delta ) \bigr )^2 \\&= \sum _{i \not = k} \biggl (q_i - {1 \over n + 1} \delta \biggr )^2 + \biggl ( q_k + {n \over n + 1}\delta \biggr )^2 \\&= \sum _i q_i^2 - {2 \over n + 1} \delta \sum _{i \not = k} q_i + {n - 1 \over (n + 1)^2} \delta ^2 + {2n \over n + 1}\delta q_k + {n^2 \over (n + 1)^2} \delta ^2 \\&= PS - {2 \over n + 1} \delta Q + 2 \delta q_k + {n^2 + n - 1 \over (n + 1)^2} \delta ^2. \end{aligned}$$
$$\begin{aligned} \Delta PS = \delta \biggl [ 2q_k - {2 \over n + 1} Q + { n^2 + n - 1 \over (n + 1)^2} \delta \biggr ]. \end{aligned}$$
$$\begin{aligned} CS(\delta ) = {1 \over 2} \bigl ( Q(\delta ) \bigr )^2 = {1 \over 2} \biggl ( Q + {1 \over n + 1} \delta \biggr )^2 = CS + {1 \over n + 1} \delta Q + {1 \over 2(n + 1)^2} \delta ^2, \end{aligned}$$
we obtain
$$\begin{aligned} \Delta CS = \delta \biggl [ {1 \over n + 1} Q + {1 \over 2(n + 1)^2} \delta \biggr ]. \end{aligned}$$
Addition of the expressions for \(\Delta PS\) and \(\Delta CS\) leads to the expression for the shift in social welfare.
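The algebra behind \(\Delta PS\) and \(\Delta CS\) in the benchmark case can be verified symbolically. The following sketch (illustrative only, using SymPy; the symbol \(S_2\) stands for \(\sum_{i\neq k}q_i^2\), which cancels in the difference, and the post-reduction expansion reproduces the intermediate steps displayed above) confirms both formulas.

```python
# Symbolic check of the benchmark-case Delta PS and Delta CS formulas (illustrative).
import sympy as sp

n, delta, Q, qk, S2 = sp.symbols('n delta Q q_k S2', positive=True)

# Pre-reduction producer surplus: PS = sum_i q_i^2 = S2 + q_k^2, with
# S2 = sum_{i != k} q_i^2 (it cancels in the difference).
PS_before = S2 + qk**2
# Post-reduction quantities quoted in the text:
#   q_i(delta) = q_i - delta/(n+1) for i != k,   q_k(delta) = q_k + n*delta/(n+1),
# using sum_{i != k} q_i = Q - q_k in the cross term.
PS_after = (S2 - 2*delta/(n + 1)*(Q - qk) + (n - 1)*delta**2/(n + 1)**2
            + (qk + n*delta/(n + 1))**2)
dPS = sp.simplify(PS_after - PS_before)
dPS_claim = delta*(2*qk - 2*Q/(n + 1) + (n**2 + n - 1)*delta/(n + 1)**2)
assert sp.simplify(dPS - dPS_claim) == 0

# Consumer surplus: CS = Q^2/2 with Q(delta) = Q + delta/(n+1)
dCS = sp.expand((Q + delta/(n + 1))**2/2 - Q**2/2)
dCS_claim = delta*(Q/(n + 1) + delta/(2*(n + 1)**2))
assert sp.simplify(dCS - dCS_claim) == 0
print("benchmark Delta PS and Delta CS verified")
```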
B.2 Sales-related bonuses
Using \((5)\), we obtain for the producer surplus after cost reduction
$$\begin{aligned} PS(\delta )&= \sum _i \pi _i(\delta ) = {1 \over n} \sum _i \bigl (q_i(\delta )\bigr )^2 = {1 \over n} \sum _{i \not = k} \bigl (q_i(\delta )\bigr )^2 + {1 \over n} \bigl ( q_k(\delta ) \bigr )^2\\&= {1 \over n} \sum _{i \not = k} \biggl (q_i - {n^2 \over n^2 + 1} \delta \biggr )^2 + {1 \over n} \biggl ( q_k + {n(n^2 - n + 1) \over n^2 + 1}\delta \biggr )^2 \\&= {1 \over n} \sum _i q_i^2 - {2n \over n^2 + 1} \delta \sum _{i \not = k} q_k + {(n - 1)n^3 \over (n^2 + 1)^2} \delta ^2 + {2(n^2 - n + 1) \over n^2 + 1} \delta q_k\\&\quad + {n(n^2 - n + 1)^2 \over (n^2 + 1)^2} \delta ^2 \\&= PS - {2n \over n^2 + 1} \delta Q + {2n^2 + 2\over n^2 + 1} \delta q_k\\&\quad + {n \bigl [n^2(n - 1) + n^4 - 2n^2(n - 1) + (n - 1)^2 \bigr ] \over (n^2 + 1)^2} \delta ^2 \\&= PS - {2n \over n^2 + 1} \delta Q + 2 \delta q_k + {n \bigl [-n^3 + n^2 + n^4 + n^2 - 2n + 1 \bigr ] \over (n^2 + 1)^2} \delta ^2. \end{aligned}$$
$$\begin{aligned} \Delta PS = \delta \biggl [ 2q_k - {2n \over n^2 + 1} Q + { n \bigl ( n^4 - n^3 + 2n^2 - 2n + 1 \bigr ) \over (n^2 + 1)^2} \delta \biggr ]. \end{aligned}$$
$$\begin{aligned} CS(\delta ) = {1 \over 2} \bigl ( Q(\delta ) \bigr )^2 = {1 \over 2} \biggl ( Q + {n \over n^2 + 1} \delta \biggr )^2 = CS + {n \over n^2 + 1} \delta Q + {n^2 \over 2(n^2 + 1)^2} \delta ^2, \end{aligned}$$
$$\begin{aligned} \Delta CS = \delta \biggl [ {n \over n^2 + 1} Q + {n^2 \over 2(n^2 + 1)^2} \delta \biggr ]. \end{aligned}$$
Addition of the expressions for \(\Delta PS\) and \(\Delta CS\) leads to the expression for the difference in social welfare.
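The same symbolic check can be run for the sales-delegation case, again with \(S_2=\sum_{i\neq k}q_i^2\) and the post-reduction quantities quoted above (illustrative sketch only).

```python
# Symbolic check of the sales-delegation Delta PS and Delta CS formulas (illustrative).
import sympy as sp

n, delta, Q, qk, S2 = sp.symbols('n delta Q q_k S2', positive=True)

# PS = (1/n) sum_i q_i^2, with
#   q_i(delta) = q_i - n^2*delta/(n^2+1) for i != k,
#   q_k(delta) = q_k + n*(n^2-n+1)*delta/(n^2+1).
PS_before = (S2 + qk**2)/n
PS_after = (S2 - 2*n**2*delta/(n**2 + 1)*(Q - qk)
            + (n - 1)*n**4*delta**2/(n**2 + 1)**2
            + (qk + n*(n**2 - n + 1)*delta/(n**2 + 1))**2)/n
dPS = sp.simplify(PS_after - PS_before)
dPS_claim = delta*(2*qk - 2*n*Q/(n**2 + 1)
                   + n*(n**4 - n**3 + 2*n**2 - 2*n + 1)*delta/(n**2 + 1)**2)
assert sp.simplify(dPS - dPS_claim) == 0

# CS = Q^2/2 with Q(delta) = Q + n*delta/(n^2+1)
dCS = sp.expand((Q + n*delta/(n**2 + 1))**2/2 - Q**2/2)
dCS_claim = delta*(n*Q/(n**2 + 1) + n**2*delta/(2*(n**2 + 1)**2))
assert sp.simplify(dCS - dCS_claim) == 0
print("sales-delegation Delta PS and Delta CS verified")
```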
Fershtman C, Judd KL (1987) Equilibrium incentives in oligopoly. Am Econ Rev 77:927–940
Gaudet G, Salant S (1991) Increasing the profits of a subset of firms in oligopoly models with strategic substitutes. Am Econ Rev 81:658–665
Jansen M, van Lier A, van Witteloostuijn A (2007) A note on strategic delegation: the market share case. Int J Ind Organ 25:531–539
Kimmel S (1992) Effects of cost changes on oligopolists' profits. J Ind Econ 40:441–449
Miller N, Pazgal A (2002) Relative performance as a strategic commitment mechanism. Manag Decis Econ 23:51–68
Ritz R (2008) Strategic incentives for market share. Int J Ind Organ 26:586–597
Sklivas SD (1987) The strategic choice of managerial incentives. RAND J Econ 18:452–458
Schmalensee R (1976) Is more competition necessarily good? Ind Organ Rev 4:120–121
Smythe DJ, Zhao J (2006) The complete welfare effects of cost reductions in a Cournot oligopoly. J Econ 87:181–193
Vickers J (1985) Delegation and the theory of the firm. Econ J 95:138–147
Zhao J (2001) A characterization for the negative welfare effects of cost reduction in Cournot oligopoly. Int J Ind Organ 19:455–469
Wang XH, Zhao J (2007) Welfare reductions from small cost reductions in differentiated oligopoly. Int J Ind Organ 25:173–185
1. Department of Quantitative Economics, School of Business and Economics, Maastricht University, Maastricht, The Netherlands
2. Department of Organization and Strategy, Tilburg University, Tilburg, The Netherlands
3. Antwerp Center of Evolutionary Demography, University of Antwerp, Antwerp, Belgium
Jansen, T., van Lier, A. & van Witteloostuijn, A. J Econ (2015) 116: 1. https://doi.org/10.1007/s00712-014-0428-y
Probabilistic analysis of land subsidence due to pumping by Biot poroelasticity and random field theory
Sirui Deng1,
Haoqing Yang2,
Xiaoying Chen3 &
Xin Wei1
Land subsidence is a global problem in urban areas. The main cause of land subsidence is the pumping of subsurface water. It is of great significance to study the subsurface settlement and water flow of the lands due to pumping. In this study, the probabilistic analysis of land subsidence due to pumping is performed by Biot's poroelasticity and random field theory based on a case study. The results show that the change of deformation of the aquifer is far less significant than the hydraulic head over the years. When considering the spatial variability of soil strength, the land subsidence suffers from great uncertainty when the correlation length is large. Nevertheless, the spatial variability of soil strength on the uncertainty of hydraulic head can be ignored. When considering the spatial variability of soil hydraulic conductivity, the uncertainty of the hydraulic head is mainly located near the bedrock and increases markedly along with the rise of the correlation length. Time is another important factor to increase the uncertainty of the hydraulic head. However, its contribution to the uncertainty of displacement is insignificant.
Land subsidence is the gradual or rapid sinking of the ground surface due to the deformation of subsurface earth materials, which is a global problem in urban areas [18, 43]. The main cause of land subsidence is the pumping of subsurface water [8, 14, 31]. It is of great significance to study the subsurface settlement and water flow of the lands due to pumping.
The land subsidence is often simulated or evaluated based on the soil consolidation theory, which is a process of volumetric changes of soil due to water pressure. Early methods to model soil consolidation are based on Terzaghi's theory. It assumes that the settlement and flow of water are vertical. Ignoring the horizontal deformation does not allow for a complete analysis of problems of consolidation. If the horizontal deformation needs to be considered, this one-dimensional theory of consolidation may not be valid. In recent decades, the more rigorous Biot's poroelasticity considering horizontal and vertical components of elastic deformation has been widely used for the problems of land subsidence. Bear and Corapcioglu [3] developed a mathematical model for regional subsidence due to pumping from an aquifer based on Biot's theory on coupled three-dimensional consolidation. Chiou and Chi [11] studied the settlement induced by surface loading and land subsidence due to pumping for saturated layered soils. Xu et al. [42] presented the prediction approaches on land subsidence employed in China and found that Biot's consolidation can simulate the field data better. Ferronato et al. [16] proposed a coupled Biot model based on a three-field formulation to predict the land subsidence in the Chaobai River alluvial fan, China.
However, most of the studies related to land subsidence did not consider the uncertainty of geo-properties. It is well recognized that subsurface geo-properties such as seepage and strength parameters are remarkably variable and heterogeneous, and therefore suffer from great uncertainty. To understand the uncertainty of soil consolidation, probabilistic analysis by Monte Carlo simulation is commonly adopted for different engineering geological backgrounds. The parameters in Biot's formulations are modeled as random variables to account for the uncertainty of subsurface geo-properties, or further modeled as random fields to consider spatial variability. For example, Houmadi et al. [23] used a collocation-based stochastic response surface method for the probabilistic analysis of a consolidation problem of a single clayey layer, where the deterministic model is based on a Biot consolidation analysis using the finite difference code FLAC 3D. Cheng et al. [10] integrated random field simulation of soil spatial variability with numerical modeling of coupled flow and deformation to investigate consolidation in spatially random unsaturated soil. Zhang et al. [49] proposed a probabilistic method to calibrate a coupled hydro-mechanical slope stability model with the integration of multiple types of field data. Houmadi et al. [24] analyzed the impact on surface settlement due to a uniform surcharge loading on the ground surface with a two-dimensional spatially varying Young's modulus by the subset simulation method. Savvides and Papadrakakis [30] presented a stochastic analysis to study the consolidation phenomenon of clayey interaction. In summary, based on models of Biot's consolidation, the uncertainty of many geotechnical issues, including land reclamation, embankments, tunnels, and excavations, has been evaluated by several researchers. However, probabilistic analysis of the land subsidence problem is seldom addressed.
Therefore, in this study, the probabilistic analysis of land subsidence due to pumping is performed by Biot's poroelasticity and random field theory. First, based on Leake and Hsieh [26], the numerical model of an aquifer underlain by a bedrock step and pumping is established. Second, to consider soil spatial variability, two key parameters (i.e., Young's modulus and hydraulic conductivity) in Biot's equations are viewed as heterogeneous properties and generated by random field theory. Finally, the influence of correlation length and time on the uncertainty of pumping responses (i.e., displacement and hydraulic head) are investigated.
Biot's poroelasticity
In this study, the built-in module in COMSOL Multiphysics [13] is adopted to simulate land subsidence. Based on Biot's poroelastic theory [4, 5], the constitutive relations for the poroelastic behavior are:
$$ \boldsymbol{\upsigma} =\mathbf{c}:\varepsilon -{\alpha}_b\mathbf{I}p $$
where σ is the total stress; ":" stands for the double-dot tensor product; c denotes the elasticity matrix of the solid; ε is the strain tensor; p is the fluid pore pressure; I is the identity matrix; αb is the Biot-Willis coefficient representing the coupling between the stress and the pore pressure. The value of αb is less than unity, indicating the extent to which the pore pressure contributes to the elastic deformation.
The form of force balance equation is:
$$ \nabla \cdot \boldsymbol{\upsigma} +\rho \mathbf{g}=\nabla \cdot \boldsymbol{\upsigma} +\left({\phi \rho}_f+{\rho}_s\right)\mathbf{g}=0 $$
where ρ represents the average density of solid and fluid; ρf and ρs are the density of the fluid and solid, respectively; ϕ is the porosity; g represents the acceleration of gravity. Note that Eq. (1) is the linear theory of elasticity, implying that the general theory proposed by Biot is the linear poroelasticity. Biot's equations can be extended to nonlinear poroelasticity, such as elastoplastic materials, by changing the form of Eq. (1) [2].
Based on the mass conservation equation, as the pore space expands, the volume available for the fluid also increases, which gives rise to a fluid sink term [22]:
$$ {S}_b\frac{\partial p}{\partial t}+\nabla \cdot \left(-k\nabla p\right)=-{\alpha}_b\frac{\partial {\varepsilon}_v}{\partial t} $$
where t is time; k is the hydraulic conductivity; εv is the volumetric strain, εv = εx + εy +εz, which is the trace of ε; Sb is the storage coefficient of Biot's poroelasticity, which is related to the compressibility of the fluid and solid phases. When both the solid and the fluid are assumed compressible, it can be calculated from basic material properties as [7]:
$$ {S}_b=\frac{\phi }{K_f}+\frac{\alpha_b-\phi }{K_s} $$
where Kf is the fluid bulk modulus, which is the inverse of the fluid compressibility χf, and Ks is the solid bulk modulus and \( {K}_s=\frac{E}{3\left(1-2\nu \right)} \) for elastic materials. E and ν are Young's modulus and Poisson's ratio, respectively.
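To make the dependence of Sb on the constituent moduli concrete, the short sketch below evaluates Eq. (4) for parameter values of the same order as those in Table 1 (aquifer porosity 0.25, E = 800 MPa, ν = 0.25, χf = 4 × 10−10 1/Pa). The Biot-Willis coefficient chosen here and the Python implementation itself are illustrative assumptions, not part of the original COMSOL model.

```python
# Illustrative sketch: Biot storage coefficient S_b from Eq. (4).
# Parameter values follow the magnitudes quoted in the text (aquifer);
# they are examples, not the authors' exact model input.

def solid_bulk_modulus(E, nu):
    """K_s = E / (3(1 - 2*nu)) for a linear elastic solid."""
    return E / (3.0 * (1.0 - 2.0 * nu))

def biot_storage(phi, alpha_b, K_f, K_s):
    """S_b = phi/K_f + (alpha_b - phi)/K_s."""
    return phi / K_f + (alpha_b - phi) / K_s

E = 800e6          # Young's modulus of the aquifer, Pa (800 MPa)
nu = 0.25          # Poisson's ratio
phi = 0.25         # porosity of the aquifer
alpha_b = 1.0      # Biot-Willis coefficient (assumed value for this example)
chi_f = 4e-10      # fluid compressibility, 1/Pa
K_f = 1.0 / chi_f  # fluid bulk modulus, Pa

K_s = solid_bulk_modulus(E, nu)
S_b = biot_storage(phi, alpha_b, K_f, K_s)
print(f"K_s = {K_s:.3e} Pa, S_b = {S_b:.3e} 1/Pa")
```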
For saturated soil, some studies assumed that the water and soil are incompressible. Therefore, the values of Sb and αb can be taken as 0 and 1, respectively, and ρ is equal to the density of soil [6, 23,24,25, 34]. Other studies, such as oil reservoir simulations, considered the contributions of Sb and αb [20, 47]. Since the poroelasticity of Biot's consolidation is a built-in module in COMSOL Multiphysics, the solution of the above equations is very convenient. As a result, the compressible nature of soil and water is taken into consideration in this study.
Numerical model of an aquifer
The numerical model of land subsidence is referenced from Leake and Hsieh [26]. There is an aquifer system overlying an impermeable bedrock in a basin. The height of the aquifer is 420 m, and the length exceeds 4000 m. The bedrock is a fault and acts as a step near a mountain front. The aquifer system includes a middle compressible confining unit, which is 20 m below the ground surface (Fig. 1).
Numerical model of a basin based on Biot's consolidation. a Materials and boundary conditions. b Geometry and mesh grids
In this study, the predefined mesh grid of COMSOL is adopted. The finer element size (Fig. 1b) is chosen for simulation. The maximum element size is 117 m. Note that the finite element mesh is usually finer than the random field grid to capture the information on the spatial variability. However, the overly fine mesh will lead to high computational costs. There is a trade-off between the accuracy of the solution and computational efficiency. In this study, the maximum element size is larger than some examined correlation lengths because the area to be simulated is very large. To overcome this problem, the midpoint discretization method [32, 37] is employed to determine grid points of the random field. Shen et al. [33] illustrated that this method is sufficient to obtain accurate statistics of model responses. Please refer to Shen et al. [33] for the discussion of finite element meshes and discretization error.
For the deterministic model, the parameters of an aquifer, semi-confined layer, and water are summarized in Table 1. The hydraulic and physical properties are set as the alluvial basin in the southwestern USA [21]. The values of porosity for aquifer ϕa and semi-confined layer ϕi are 0.25 and 0.025, respectively. The hydraulic conductivity ka of the aquifer is 25 m/day whereas ki = 0.01 m/day for the semi-confined layer. Young's modulus is assumed to be different. Ea = 800 MPa for aquifer and Ei = 80 MPa for semi-confined layer. Except for the above parameters, the Poisson's ratio and density of soil are the same for the aquifer and the semi-confined layer. The Poisson's ratio ν and ρ are assumed to be 0.25 and 2750 kg/m3, respectively. The constants for the water of compressibility χf and density ρf are 4 × 10−10 1/Pa and 1000 kg/m3, respectively.
Table 1 Parameters of the numerical model and random field
The boundary conditions of the aquifer model are shown in Fig. 1. The hydraulic head in JA-AB-BC is specified as zero-constant during the entire period of simulation to assume that no consolidation occurs in this part. IH is fixed with a head that linearly declines by 60 m over 10 years. Other boundaries are no-flow. For the mechanical boundary conditions, EF-FG-GH around the bedrock step is a fixed constraint, which means the horizontal and vertical displacements are zero. IH is the roller constraint allowing to move in the vertical direction. Free boundary conditions are used for other boundaries.
Random field
It is well known that the soil properties of an aquifer are variant but correlated in space due to the geological processes. Site investigation can only obtain limited samples of soil parameters. From the point of view of probability, the statistical characteristics of soil parameters can be obtained from limited samples with randomness. Therefore, random field theory is used to characterize the spatial variability of soil properties.
Soil parameters such as Young's modulus and hydraulic conductivity are positive and fit well with log-normal distributions [1, 29, 48]. Therefore, the natural logarithm of a certain soil parameter follows a normal distribution. Its mean value μln and the standard deviation σln are calculated as follows:
$$ {\sigma}_{\mathrm{ln}}^2=\ln \left(1+\frac{\sigma^2}{\mu^2}\right) $$
$$ {\mu}_{\mathrm{ln}}=\ln \mu -\frac{\sigma_{\mathrm{ln}}^2}{2} $$
where μ and σ are the mean value and the standard deviation of soil parameters, respectively.
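As a minimal illustration of these two relations (Python is used purely for demonstration; the example values correspond to the aquifer hydraulic conductivity quoted later, mean 25 m/day with CoV 0.8):

```python
import math

def lognormal_params(mu, sigma):
    """Return (mu_ln, sigma_ln) of ln(X) for a lognormal X with mean mu
    and standard deviation sigma, using the two relations above."""
    sigma_ln2 = math.log(1.0 + (sigma / mu) ** 2)
    mu_ln = math.log(mu) - 0.5 * sigma_ln2
    return mu_ln, math.sqrt(sigma_ln2)

# Example: hydraulic conductivity of the aquifer, mean 25 m/day, CoV 0.8
mu_ln, sigma_ln = lognormal_params(25.0, 0.8 * 25.0)
print(mu_ln, sigma_ln)   # roughly 2.97 and 0.70
```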
In random field theory, the covariance function is proposed to illustrate the spatial correlation of a certain soil parameter. It is a function related to coordinates x = [(x1, z1), (x2, z2)] in the domain. The horizontal and vertical correlation lengths (lx and lz) are thresholds to determine the relevance of a soil parameter of two positions in the domain. In this study, an empirical covariance function C(x) is used to simulate the spatial variability of soil parameters [40, 41, 44]:
$$ C\left(\mathbf{x}\right)={\sigma}_{\mathrm{ln}}^2\exp \left\{-{\left[\frac{{\left({x}_1-{x}_2\right)}^2}{l_x^2}+\frac{{\left({z}_1-{z}_2\right)}^2}{l_z^2}\right]}^{\frac{1}{2}}\right\} $$
To generate random fields, the covariance function C(x) is decomposed by the Karhunen-Loève expansion method as previous studies [45, 46]. More details of this method can be found in Ghanem and Spanos [19].
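As a rough illustration of how such a random field can be generated numerically, the sketch below assembles the exponential covariance matrix of Eq. (7) on a set of grid points and draws one realization through a truncated eigen-decomposition, i.e., a discrete analogue of the Karhunen-Loève expansion. It is a simplified stand-in for the authors' implementation, not a reproduction of it; the grid, statistics, and number of retained modes are illustrative assumptions.

```python
import numpy as np

def exp_covariance(points, sigma_ln, lx, lz):
    """Covariance matrix from Eq. (7) for points = array of (x, z)."""
    dx = (points[:, None, 0] - points[None, :, 0]) / lx
    dz = (points[:, None, 1] - points[None, :, 1]) / lz
    return sigma_ln**2 * np.exp(-np.sqrt(dx**2 + dz**2))

def kl_realization(points, mu_ln, sigma_ln, lx, lz, n_terms, rng):
    """One lognormal random-field realization via a truncated KL expansion."""
    C = exp_covariance(points, sigma_ln, lx, lz)
    eigvals, eigvecs = np.linalg.eigh(C)          # eigenpairs of the covariance
    idx = np.argsort(eigvals)[::-1][:n_terms]     # keep the largest modes
    lam, phi = eigvals[idx], eigvecs[:, idx]
    xi = rng.standard_normal(n_terms)             # independent N(0,1) variables
    log_field = mu_ln + phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)
    return np.exp(log_field)                      # back to lognormal space

# Example grid: midpoints of a coarse 40 x 10 discretization of a 4000 m x 400 m domain
xs, zs = np.meshgrid(np.linspace(50, 3950, 40), np.linspace(20, 380, 10))
pts = np.column_stack([xs.ravel(), zs.ravel()])
field = kl_realization(pts, mu_ln=3.0, sigma_ln=0.7, lx=200.0, lz=40.0,
                       n_terms=50, rng=np.random.default_rng(0))
```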
The land subsidence based on Biot's consolidation is a coupled hydro-mechanical problem. Therefore, two parameters, Ea and ka, representing the strength and hydraulic conductivity of the aquifer, are modeled as random fields to consider their spatial variability. Correspondingly, two cases are used to illustrate the effects of spatial variability of soil strength and hydraulic conductivity on the uncertainty of model responses. It is recognized that the hydraulic properties of soil suffer from great uncertainty. According to previous studies, the CoV of the saturated coefficient of hydraulic conductivity can range from 50 to 450% [9, 48]. Relatively, the CoVs of soil strength parameters are small, around 5~50% [12, 28]. Therefore, the CoVs of ka and Ea are assumed to be 80% and 30% in this study, respectively. The first case considers the spatial variability of soil strength: the mean and coefficient of variation (CoV) of Ea are 800 MPa and 0.3, respectively. The second case applies the same treatment to ka; the details are not repeated here. Please refer to Table 2 accordingly.
Table 2 Parameters of aquifer for probabilistic analysis
The selection of the correlation lengths for the parametric study is mainly based on the following facts: (1) For natural soil parameters, the vertical correlation length varies from less than 1 m to more than 20 m [15, 17, 36]. The horizontal correlation length is generally much larger than the vertical length due to the stratification of natural deposits. (2) In the practice of probabilistic study, many studies set the correlation lengths as a ratio of the model size for uncertainty or reliability analysis [35, 50]. It is suggested that the correlation length of the soil parameters can be taken as 0.02~2 times the model size. (3) In geotechnical engineering, site-scale models are generally adopted, and the model size is commonly less than 100 m, while the model size in this study is at a large basin scale. Large-scale models, such as watershed-scale models, are considered as references. It is reported that the correlation length can exceed 650 m [38, 39]. Therefore, in this study, lx and lz vary from 200 to approximately 800 m and 40 to approximately 160 m, respectively. A typical realization of lognormal random fields with different correlation lengths is shown in Fig. 2.
One realization of lognormal random fields (mean = 25, COV = 0.8). a lx = 200 m, lz = 40 m. b lx = 400 m, lz = 80 m. c lx = 600 m, lz = 120 m. d lx = 800 m, lz = 160 m
The uncertainty of the model responses can be determined by running the model repeatedly with different random soil parameters to arrive at an estimate of the standard deviation of the model responses, i.e., the so-called Monte Carlo simulation. A sensitivity analysis is conducted to determine the number of random fields for Monte Carlo simulation. Figure 3 presents the effect of the number of random fields on the mean values of subsidence at surface nodes. There is almost no fluctuation of the estimation of mean values when the number of random fields is less than 500. Therefore, a total number of 500 random fields is generated to assess the uncertainty of the model responses, which is also consistent with the previous study suggested by Peng et al. [27].
Effect of the number of random fields on the mean values of subsidence at surface nodes. a Random field of Ea (case 1, lx = 200 m, lz = 40 m). b Random field of ka (case 2, lx = 200 m, lz = 40 m)
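Conceptually, the Monte Carlo procedure described above reduces to the loop sketched below (Python, for illustration only); run_subsidence_model is a hypothetical placeholder for the coupled COMSOL solution, not an actual interface of the software, and the synthetic response it returns is there only to make the sketch self-contained.

```python
import numpy as np

def run_subsidence_model(random_field):
    """Hypothetical placeholder for one deterministic Biot solution
    (in the study this is a COMSOL run); here it just returns a
    synthetic settlement profile so the sketch runs end to end."""
    return random_field.mean() * np.linspace(0.0, 1.0, 50)

def monte_carlo(fields):
    """Mean and standard deviation of the response over all realizations."""
    results = np.array([run_subsidence_model(f) for f in fields])
    return results.mean(axis=0), results.std(axis=0, ddof=1)

rng = np.random.default_rng(0)
fields = [np.exp(rng.normal(3.0, 0.7, size=400)) for _ in range(500)]  # 500 realizations
mean_s, std_s = monte_carlo(fields)
```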
Results and discussions
Deterministic results
Figure 4 shows the deterministic results of displacement over the years. The displacement at the upper boundary indicates the surface subsidence. The surface subsidence exceeds 2 m and grows gradually over the years. With increasing depth, the displacement decreases and becomes less sensitive to time.
Deterministic results of the displacement. a Year 1. b Year 4. c Year 7. d Year 10
Figure 5 shows the deterministic results of hydraulic heads over the years. The hydraulic head in the whole domain is reduced rapidly as a result of pumping. The hydraulic head around the bedrock is reduced from − 4 to − 40 m over 10 years, a drop of nearly 4 m per year due to pumping. By comparison, the change in deformation of the aquifer is far less significant than that in hydraulic head over these 10 years.
Deterministic results of the hydraulic head. a Year 1. b Year 4. c Year 7. d Year 10
Effects of correlation lengths
Case 1: Spatial variability of soil strength
The effect of the correlation length of Ea on the standard deviation of displacement (σs) is illustrated in Fig. 6. With the increase in correlation length of Ea, σs increases dramatically. The maximum value of σs for surface settlement is around 0.6 m with the largest correlation length. The effect of the correlation length of Ea on the standard deviation of displacement is significant. The land subsidence due to pumping suffers from great uncertainty when the correlation length of soil strength properties is large.
Effects of the correlation length of Ea on the uncertainty of displacement. a lx = 200 m, lz = 40 m. b lx = 400 m, lz = 80 m. c lx = 600 m, lz = 120 m. d lx = 800 m, lz = 160 m
Figure 7 shows the standard deviation of the hydraulic head (σh) considering the spatial variability of Ea. Although σh rises with the increase of the correlation length of Ea, the values of σh are very small compared to those of displacement. Even when lx = 800 m and lz = 160 m, the maximum σh is only 0.004 m. This indicates that the spatial variability of Ea has only a slight influence on the uncertainty of the hydraulic head for land subsidence due to pumping.
Effects of the correlation length of Ea on the uncertainty of hydraulic head. a lx = 200 m, lz = 40 m. b lx = 400 m, lz = 80 m. c lx = 600 m, lz = 120 m. d lx = 800 m, lz = 160 m
Case 2: Spatial variability of soil hydraulic conductivity
The effects of the correlation length of ka on the uncertainty of displacement are displayed in Fig. 8. The spatial variability of soil hydraulic conductivity has a minor influence on the uncertainty of displacement. The maximum σs is only 0.03 m in year 10. It implies that the uncertainty of displacement is insignificant when dealing with the spatial variability of soil hydraulic conductivity.
Effects of the correlation length of ka on the uncertainty of displacement. a lx = 200 m, lz = 40 m. b lx = 400 m, lz = 80 m. c lx = 600 m, lz = 120 m. d lx = 800 m, lz = 160 m
Figure 9 presents the effect of the correlation length of ka on the uncertainty of the hydraulic head. When lx = 200 m and lz = 40 m are increased to lx = 800 m and lz = 160 m, respectively, the maximum value of σh increases from 2 to 6 m, i.e., it nearly triples. The correlation length of ka therefore strongly influences σh: σh increases markedly with the rise in the correlation length of ka. In addition, as in Fig. 7, σh around the bedrock is comparatively large, which illustrates that the uncertainty of the hydraulic head is mainly located near the bedrock.
Effects of the correlation length of ka on the uncertainty of hydraulic head. a lx = 200 m, lz = 40 m. b lx = 400 m, lz = 80 m. c lx = 600 m, lz = 120 m. d lx = 800 m, lz = 160 m
Effects of time
Figure 10 shows the uncertainty of displacement over the years considering the spatial variability of Ea. The σs increases steadily with the years. In the tenth year, the maximum σs is approximately 0.30 m near the surface. This illustrates that the uncertainty of land subsidence due to pumping rises gradually over the years.
Uncertainty of displacement over the years considering the spatial variability of Ea (lx = 200 m, lz = 40 m, COV = 0.3). a Year 1. b Year 4. c Year 7. d Year 10
The effects of spatial variability of Ea on σh over the years are shown in Fig. 11. There is no difference among them, indicating that σh is constant with time. Although the change in hydraulic head grows gradually, its uncertainty is invariable over the years if only the spatial variability of Ea is considered. Besides, the values of σh are all on the order of a millimeter, indicating a trivial effect on the uncertainty of the hydraulic head. To conclude, the effect of the spatial variability of Ea on the uncertainty of the hydraulic head for land subsidence due to pumping can be ignored.
Uncertainty of hydraulic head over the years considering the spatial variability of Ea (lx = 200 m, lz = 40 m, COV = 0.3). a Year 1. b Year 4. c Year 7. d Year 10
The uncertainty of displacement over the years considering the spatial variability of ka is shown in Fig. 12. Obvious changes in σs appear, but σs is only approximately 0.01 m even for the 10-year settlement of land subsidence. The contribution of the spatial variability of soil hydraulic conductivity to the uncertainty of displacement is therefore unimportant.
Uncertainty of displacement over the years considering the spatial variability of ka (lx = 200 m, lz = 40 m, COV = 0.8). a Year 1. b Year 4. c Year 7. d Year 10
In Fig. 13, the uncertainty of hydraulic head over the years considering the spatial variability of ka is shown. In year 1, the maximum of σh is approximately 0.2 m, and it is increased to 2 m in year 10. Therefore, besides the correlation length of hydraulic conductivity, time is another important factor to increase the uncertainty of hydraulic head for land subsidence due to pumping.
Uncertainty of hydraulic head over the years considering the spatial variability of ka (lx = 200 m, lz = 40 m, COV = 0.8). a Year 1. b Year 4. c Year 7. d Year 10
Effects of boundary conditions
The effect of boundary conditions on the uncertainty of land subsidence is further investigated. The results of two different boundary conditions are shown in Fig. 14. Figure 14a shows the hydraulic head and land subsidence at year 1 with hydraulic head boundary condition. Figure 14b shows the corresponding result with flux boundary condition in the final steady state. It can be seen that the two different boundary conditions produce the same results.
Effects of the boundary conditions on the deterministic results. a Hydraulic head boundary condition. b Flux boundary condition
The effects of boundary conditions on the uncertainty of hydraulic head considering spatial variability of ka are shown in Fig. 15. When the head boundary condition is chosen, a large uncertainty appears around the bedrock (Fig. 15(a)). In contrast, in Fig. 15(b), the uncertainty is large around the flux boundary, where the standard deviation of the hydraulic head exceeds 0.25 m. Therefore, the flow boundary condition has an obvious impact on the uncertainty of the hydraulic head.
Effects of the boundary conditions on the uncertainty of hydraulic head considering the spatial variability of ka (lx = 200 m, lz = 40 m, COV = 0.8). a Hydraulic head boundary condition. b Flux boundary condition
In this study, the probabilistic analysis of land subsidence due to pumping is performed by Biot's poroelasticity and random field theory based on a case study. First, the numerical model of an aquifer underlain by a bedrock step and subjected to pumping is established. Second, to consider soil spatial variability, two key parameters in Biot's equations controlling deformation and hydraulic head are viewed as heterogeneous properties and generated by random field theory. Finally, the influences of correlation length and time on the uncertainty of pumping responses are investigated. Major conclusions are summarized as follows:
The total surface settlement exceeds 2 m for 10 years of land subsidence due to pumping. The hydraulic head around the bedrock nearly drops 4 m per year due to pumping. In general, the change of deformation of the aquifer is far less significant than the hydraulic head over these 10 years.
When considering the spatial variability of soil strength, the land subsidence suffers from great uncertainty when the correlation length is large. The uncertainty of displacement gradually rises over the years. Nevertheless, the effect of the spatial variability of Young's modulus on the uncertainty of the hydraulic head can be ignored.
When considering the spatial variability of soil hydraulic conductivity, the uncertainty of the hydraulic head is mainly located near the bedrock and increases markedly along with the rise of the correlation length. Time is another important factor to increase the uncertainty of the hydraulic head. However, its contribution to the uncertainty of displacement is insignificant.
All data generated or analyzed during this study are included in this published article.
C(x): Covariance function
c: Elasticity matrix of solid
COV: Coefficient of variation
COVE: Coefficient of variation of Ea
COVk: Coefficient of variation of ka
E: Young's modulus
Ea: Young's modulus of the aquifer
Ei: Young's modulus of the semi-confined layer
g: Acceleration of gravity
I: Identity matrix
k: Hydraulic conductivity
ka: Hydraulic conductivity of the aquifer
Kf: Fluid bulk modulus
ki: Hydraulic conductivity of the semi-confined layer
Ks: Solid bulk modulus
lx: Horizontal correlation length
lz: Vertical correlation length
ME: Mean of Ea
Mk: Mean of ka
p: Fluid pore pressure
Sb: Storage coefficient of Biot's poroelasticity
x: Coordinates of two points in a domain
αb: Biot-Willis coefficient
ε: Strain tensor
εv: Volumetric strain
μ: Mean value of soil parameters
μln: Mean value of the natural logarithm of soil parameters
ν: Poisson's ratio
ρ: Average density
ρf: Fluid density
ρs: Solid density
σ: Standard deviation of soil parameters
σ (bold): Total stress matrix
σh: Standard deviation of hydraulic head
σln: Standard deviation of the natural logarithm of soil parameters
σs: Standard deviation of displacement
ϕ: Porosity
ϕa: Porosity of the aquifer
ϕi: Porosity of the semi-confined layer
χf: Fluid compressibility
Baecher GB, Christian JT (2005) Reliability and statistics in geotechnical engineering. John Wiley and Sons
Barucq H, Madaune-Tort M, Saint-Macary P (2005) On nonlinear Biot's consolidation models. Nonlinear Anal. Theory Methods Appl. 63(5-7):e985–e995. https://doi.org/10.1016/j.na.2004.12.010
Bear J, Corapcioglu MY (1981) A mathematical model for consolidation in a thermoelastic aquifer due to hot water injection or pumping. Water Resour. Res. 17(3):723–736. https://doi.org/10.1029/WR017i003p00723
Biot MA (1941) General theory of three-dimensional consolidation. J. App. Phys. 12(2):155–164. https://doi.org/10.1063/1.1712886
Biot MA (1955) Theory of elasticity and consolidation for a porous anisotropic solid. J. App. Phys. 26(2):182–185. https://doi.org/10.1063/1.1721956
Biot MA (1962) Mechanics of deformation and acoustic propagation in porous media. J. App. Phys. 33(4):1482–1498. https://doi.org/10.1063/1.1728759
Biot MA, Willis DG (1957) The Elastic Coefficients of the Theory of Consolidation. In: The elastic coefficients of the theory of consolidation
Budhu M, Adiyaman IB (2010) Mechanics of land subsidence due to groundwater pumping. Int. J. Numer. Anal. Methods Geomech. 34(14):1459–1478. https://doi.org/10.1002/nag.863
Carsel RF, Parrish RS (1988) Developing joint probability distributions of soil water retention characteristics. Water Resour. Res. 24(5):755–769. https://doi.org/10.1029/WR024i005p00755
Cheng Y, Zhang LL, Li JH, Zhang LM, Wang JH, Wang DY (2017) Consolidation in spatially random unsaturated soils based on coupled flow-deformation simulation. Int. J. Numer. Anal. Methods Geomech. 41(5):682–706. https://doi.org/10.1002/nag.2572
Chiou Y, Chi S (1994) Boundary element analysis of Biot consolidation in layered elastic soils. Int. J. Numer. Anal. Methods Geomech. 18(6):377–396. https://doi.org/10.1002/nag.1610180603
Ching J, Phoon KK, Pan YK (2017) On characterizing spatially variable soil Young's modulus using spatial average. Struct. Saf. 66:106–117. https://doi.org/10.1016/j.strusafe.2017.03.001
COMSOL, A. B. (2018). COMSOL multiphysics reference manual. COMSOL AB.
Corapcioglu MY, Bear J (1984) Land Subsidence — B. A Regional Mathematical Model for Land Subsidence due to Pumping. In: Land subsidence - B. A regional mathematical model for land subsidence due to pumping, Springer, Dordrecht
Ferronato M, Gambolati G, Teatini P, Baù D (2006) Stochastic poromechanical modeling of anthropogenic land subsidence. Int. J. Solids Struct. 43(11-12):3324–3336. https://doi.org/10.1016/j.ijsolstr.2005.06.090
Ferronato M, Gazzola L, Castelletto N, Teatini P, Zhu L. (2017). A coupled mixed finite element Biot model for land subsidence prediction in the Beijing area. In Poromechanics VI (pp. 182-189).
Firouzianbandpey S, Ibsen LB, Griffiths DV, Vahdatirad MJ, Andersen LV, Sørensen JD (2015) Effect of spatial correlation length on the interpretation of normalized CPT data using a kriging approach. J. Geotech. Geoenviron. Eng. 141(12):04015052. https://doi.org/10.1061/(ASCE)GT.1943-5606.0001358
Galloway, D. L., Jones, D. R., & Ingebritsen, S. E. (Eds.). (1999). Land subsidence in the United States (Vol. 1182). US Geological Survey.
Ghanem RG, Spanos PD (2003) Stochastic finite elements: a spectral approach. Courier Corporation
Gudala M, Govindarajan SK (2020) Numerical modeling of coupled fluid flow and geomechanical stresses in a petroleum reservoir. J. Energy Resour. Technol. 142(6):063006. https://doi.org/10.1115/1.4045832
Hanson RT (1989) Aquifer-system compaction. Tucson Basin and Avra Valley, Arizona
Holzbecher, E. (2013). Poroelasticity benchmarking for FEM on analytical solutions. In Excerpt from the Proceedings of the COMSOL Conference Rotterdam (pp. 1-7).
Houmadi Y, Ahmed A, Soubra AH (2012) Probabilistic analysis of a one-dimensional soil consolidation problem. Georisk 6(1):36–49. https://doi.org/10.1080/17499518.2011.590090
Houmadi Y, Benmoussa MYC, Cherifi WNEH, Rahal DD (2020) Probabilistic analysis of consolidation problems using subset simulation. Comput. Geotech. 124:103612. https://doi.org/10.1016/j.compgeo.2020.103612
Huang J, Griffiths DV, Fenton GA (2010) Probabilistic analysis of coupled soil consolidation. J. Geotech. Geoenviron. Eng. 136(3):417–430. https://doi.org/10.1061/(ASCE)GT.1943-5606.0000238
Leake S, Hsieh PA (1995) Simulation of deformation of sediments from decline of ground-water levels in an aquifer underlain by a bedrock step. In US Geological Survey Subsidence Interest Group Conference, Proceedings of the Technical Meeting, Las Vegas, Nevada, February 14-16:1995 (Vol. 97, p. 10)
Peng XY, Zhang LL, Jeng DS, Chen LH, Liao CC, Yang HQ (2017) Effects of cross-correlated multiple spatially random soil properties on wave-induced oscillatory seabed response. Appl. Ocean Res. 62:57–69. https://doi.org/10.1016/j.apor.2016.11.004
Phoon KK, Kulhawy FH (1999) Characterization of geotechnical variability. Can. Geotech. J. 36(4):612–624. https://doi.org/10.1139/t99-038
Rétháti L (2012) Probabilistic solutions in geotechnics. Elsevier
Savvides AA, Papadrakakis M (2020) A probabilistic assessment for porous consolidation of clays. SN App. Sci. 2(12):2115. https://doi.org/10.1007/s42452-020-03894-6
Shen SL, Xu YS (2011) Numerical evaluation of land subsidence induced by groundwater pumping in Shanghai. Can. Geotech. J. 48(9):1378–1392. https://doi.org/10.1139/t11-049
Shen Z, Jin D, Pan Q, Yang H, Chian SC (2021a) Effect of soil spatial variability on failure mechanisms and undrained capacities of strip foundations under uniaxial loading. Comput. Geotech. 139:104387. https://doi.org/10.1016/j.compgeo.2021.104387
Shen Z, Jin D, Pan Q, Yang H, Chian SC (2021b) Reply to the discussion on "Effect of soil spatial variability on failure mechanisms and undrained capacities of strip foundations under uniaxial loading" by Zhe Luo. Comput. Geotech. 142:104539. https://doi.org/10.1016/j.compgeo.2021.104539
Sloan SW, Abbo AJ (1999) Biot consolidation analysis with automatic time stepping and error control part 1: theory and implementation. Int. J. Numer. Anal. Methods Geomech. 23(6):467–492. https://doi.org/10.1002/(SICI)1096-9853(199905)23:6<467::AID-NAG949>3.0.CO;2-R
Srivastava A, Babu GS, Haldar S (2010) Influence of spatial variability of permeability property on steady state seepage flow and slope stability analysis. Eng. Geol. 110(3-4):93–101. https://doi.org/10.1016/j.enggeo.2009.11.006
Sun YX, Zhang LL, Yang HQ, Zhang J, Cao ZJ, Cui Q, Yan JY (2020) Characterization of spatial variability with observed responses: application of displacement back estimation. J. Zhejiang Univ. Sci. 21(6):478–495. https://doi.org/10.1631/jzus.A1900558
Tabarroki M, Ching J (2019) Discretization error in the random finite element method for spatially variable undrained shear strength. Comput. Geotech. 105:183–194. https://doi.org/10.1016/j.compgeo.2018.10.001
Western AW, Blöschl G, Grayson RB (1998) Geostatistical characterisation of soil moisture patterns in the Tarrawarra catchment. J. Hydro. 205(1-2):20–37. https://doi.org/10.1016/S0022-1694(97)00142-X
Western AW, Zhou SL, Grayson RB, McMahon TA, Blöschl G, Wilson DJ (2004) Spatial correlation of soil moisture in small catchments and its relationship to dominant spatial hydrological processes. J. Hydro. 286(1-4):113–134. https://doi.org/10.1016/j.jhydrol.2003.09.014
Xu J, Zhang L, Li J, Cao Z, Yang H, Chen X (2021) Probabilistic estimation of variogram parameters of geotechnical properties with a trend based on Bayesian inference using Markov chain Monte Carlo simulation. Georisk 15(2):83–97. https://doi.org/10.1080/17499518.2020.1757720
Xu J, Zhang L, Wang Y, Wang C, Zheng J, Yu Y (2020) Probabilistic estimation of cross-variogram based on Bayesian inference. Eng. Geol. 277:105813. https://doi.org/10.1016/j.enggeo.2020.105813
Xu YS, Shen SL, Cai ZY, Zhou GY (2008) The state of land subsidence and prediction approaches due to groundwater withdrawal in China. Nat. Hazards 45(1):123–135. https://doi.org/10.1007/s11069-007-9168-4
Xue YQ, Zhang Y, Ye SJ, Wu JC, Li QF (2005) Land subsidence in China. Environ. Geol. 48(6):713–720. https://doi.org/10.1007/s00254-005-0010-6
Yang HQ, Zhang LL, Xue J, Zhang J, Li X (2019) Unsaturated soil slope characterization with Karhunen–Loève and polynomial chaos via Bayesian approach. Eng. Comput. 35(1):337–350. https://doi.org/10.1007/s00366-018-0610-x
Yang HQ, Chen X, Zhang L, Zhang J, Wei X, Tang C (2020) Conditions of hydraulic heterogeneity under which Bayesian estimation is more reliable. Water 12(1):160. https://doi.org/10.3390/w12010160
Yang HQ, Zhang L, Li DQ (2018) Efficient method for probabilistic estimation of spatially varied hydraulic properties in a soil slope based on field responses: a Bayesian approach. Comput. Geotech. 102:262–272. https://doi.org/10.1016/j.compgeo.2017.11.012
Zhang J, Cui X, Huang D, Jin Q, Lou J, Tang W (2016a) Numerical simulation of consolidation settlement of pervious concrete pile composite foundation under road embankment. Int. J. Geomech. 16(1):B4015006. https://doi.org/10.1061/(ASCE)GM.1943-5622.0000542
Zhang LL, Li JH, Li X, Zhang J, Zhu H (2016b) Rainfall-induced soil slope failure: stability analysis and probabilistic assessment. CRC Press
Zhang LL, Wu F, Zheng Y, Chen L, Zhang J, Li X (2018) Probabilistic calibration of a coupled hydro-mechanical slope stability model with integration of multiple observations. Georisk 12(3):169–182. https://doi.org/10.1080/17499518.2018.1440317
Zhu H, Zhang LM, Zhang LL, Zhou CB (2013) Two-dimensional probabilistic infiltration analysis with a spatially varying permeability function. Comput. Geotech. 48:249–259. https://doi.org/10.1016/j.compgeo.2012.07.010
The research received no specific grant from any funding agency in the public, commercial, or non-profit sectors.
School of Naval Architecture, Ocean & Civil Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, 200240, China
Sirui Deng & Xin Wei
School of Civil and Environmental Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
Haoqing Yang
School of Earth Sciences and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, 210023, China
Xiaoying Chen
Sirui Deng
SD analyzed and interpreted the data and wrote the manuscript. HY developed the ideas and frameworks and revised the manuscript. The authors have read and approved the final manuscript.
Correspondence to Haoqing Yang.
Deng, S., Yang, H., Chen, X. et al. Probabilistic analysis of land subsidence due to pumping by Biot poroelasticity and random field theory. J. Eng. Appl. Sci. 69, 18 (2022). https://doi.org/10.1186/s44147-021-00066-0
Land subsidence
Biot's consolidation
Poroelasticity
Fenchel's duality theorem
In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel.
Let ƒ be a proper convex function on Rn and let g be a proper concave function on Rn. Then, if regularity conditions are satisfied,
$\inf _{x}(f(x)-g(x))=\sup _{p}(g_{*}(p)-f^{*}(p)).$
where ƒ * is the convex conjugate of ƒ (also referred to as the Fenchel–Legendre transform) and g * is the concave conjugate of g. That is,
$f^{*}\left(x^{*}\right):=\sup \left\{\left.\left\langle x^{*},x\right\rangle -f\left(x\right)\right|x\in \mathbb {R} ^{n}\right\}$
$g_{*}\left(x^{*}\right):=\inf \left\{\left.\left\langle x^{*},x\right\rangle -g\left(x\right)\right|x\in \mathbb {R} ^{n}\right\}$
Mathematical theorem
Let X and Y be Banach spaces, $f:X\to \mathbb {R} \cup \{+\infty \}$ and $g:Y\to \mathbb {R} \cup \{+\infty \}$ be convex functions and $A:X\to Y$ be a bounded linear map. Then the Fenchel problems:
$p^{*}=\inf _{x\in X}\{f(x)+g(Ax)\}$
$d^{*}=\sup _{y^{*}\in Y^{*}}\{-f^{*}(A^{*}y^{*})-g^{*}(-y^{*})\}$
satisfy weak duality, i.e. $p^{*}\geq d^{*}$. Note that $f^{*},g^{*}$ are the convex conjugates of f,g respectively, and $A^{*}$ is the adjoint operator. The perturbation function for this dual problem is given by $F(x,y)=f(x)+g(Ax-y)$.
Suppose that f,g, and A satisfy either
1. f and g are lower semi-continuous and $0\in \operatorname {core} (\operatorname {dom} g-A\operatorname {dom} f)$ where $\operatorname {core} $ is the algebraic interior and $\operatorname {dom} h$, where h is some function, is the set $\{z:h(z)<+\infty \}$, or
2. $A\operatorname {dom} f\cap \operatorname {cont} g\neq \emptyset $ where $\operatorname {cont} $ are the points where the function is continuous.
Then strong duality holds, i.e. $p^{*}=d^{*}$. If $d^{*}\in \mathbb {R} $ then the supremum is attained.[1]
One-dimensional illustration
In the following figure, the minimization problem on the left side of the equation is illustrated. One seeks to vary x such that the vertical distance between the convex and concave curves at x is as small as possible. The position of the vertical line in the figure is the (approximate) optimum.
The next figure illustrates the maximization problem on the right hand side of the above equation. Tangents are drawn to each of the two curves such that both tangents have the same slope p. The problem is to adjust p in such a way that the two tangents are as far away from each other as possible (more precisely, such that the points where they intersect the y-axis are as far from each other as possible). Imagine the two tangents as metal bars with vertical springs between them that push them apart and against the two parabolas that are fixed in place.
Fenchel's theorem states that the two problems have the same solution. The points having the minimum vertical separation are also the tangency points for the maximally separated parallel tangents.
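A simple computation, added here purely as an illustration (it does not appear in the figures above), mirrors the two parabolas. Take $f(x)=x^{2}$ and $g(x)=-x^{2}$ on $\mathbb {R} $. Then

$\inf _{x}(f(x)-g(x))=\inf _{x}2x^{2}=0.$

On the dual side, $f^{*}(p)=\sup _{x}(px-x^{2})={\tfrac {p^{2}}{4}}$ and $g_{*}(p)=\inf _{x}(px+x^{2})=-{\tfrac {p^{2}}{4}}$, so

$\sup _{p}(g_{*}(p)-f^{*}(p))=\sup _{p}\left(-{\tfrac {p^{2}}{2}}\right)=0.$

Both sides equal $0$, attained at $x=0$ and $p=0$: the point of minimum vertical separation has a horizontal tangent, and the two maximally separated parallel tangent lines coincide with it.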
See also
• Legendre transformation
• Convex conjugate
• Moreau's theorem
• Wolfe duality
• Werner Fenchel
References
1. Borwein, Jonathan; Zhu, Qiji (2005). Techniques of Variational Analysis. Springer. pp. 135–137. ISBN 978-1-4419-2026-3.
• Bauschke, Heinz H.; Combettes, Patrick L. (2017). "Fenchel–Rockafellar Duality". Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer. pp. 247–262. doi:10.1007/978-3-319-48311-5_15. ISBN 978-3-319-48310-8.
• Rockafellar, Ralph Tyrrell (1996). Convex Analysis. Princeton University Press. p. 327. ISBN 0-691-01586-4.
Does the mass of a pendulum string affect the amount of force required to hold it at a specific location?
On the left-hand side of the image I drew below, a pendulum bob hangs from a pendulum string of length $L$. A magnetic force of magnitude $F_{mb}$ pulls the bob to the left such that the bob equilibrates at an angle $\theta$; the bob is a horizontal distance $\Delta x$ from its equilibrium point. The magnitude of the force of the string on the bob is $F_{sb}$, and the magnitude of the weight force due to the Earth on the bob is $W_{eb}$.
Assuming I know the mass of the bob $m$ and the length of the pendulum, I can use this device to find $F_{mb}$. Using a simple 2D force-balancing approach, I get
$$ F_{mb} = W_{eb} \tan \theta = mg \tan \theta. $$ Assuming the angle $\theta$ is small, we can approximate $\sin \theta \approx \tan \theta$, and thus $$ F_{mb} = mg \sin\theta = mg\frac{\Delta x}{L}, $$ which is exactly what I need.
Now, what if the mass of the string is a significant fraction of the mass of the pendulum bob? Does this affect my expression for $F_{mb}$?
As an attempt to answer my problem, I drew the new extended free body diagram on the right-hand side of the figure below. The forces acting on the string are $F_{ps}$ (the force of the pivot on the string), $W_{es}$ (the weight force of the earth on the string, which acts at the COM), and $F_{bs}$ (the force of the pendulum bob on the string, which should be equal in magnitude to $F_{sb}$). It seems that what I want to do is to get an expression for $F_{bs}$, and then use that in the original free body diagram to solve for $F_{mb}$. The problem is that when I try to do this, everything is in terms of the unknown pivot force $F_{ps}$. How do I overcome this challenge to get a more accurate expression for the magnetic force?
newtonian-mechanics classical-mechanics
Bunji
Yes. If the string were instead a rigid iron rod suspended from one end, then even without another mass attached at the lower end you would need to apply a force to hold the rod at an angle to the vertical.
The horizontal force $F$ that you need to apply to hold the pendulum in static equilibrium can be found from balancing moments around the suspension point : $$FL\cos\theta=mg(\frac12L\sin\theta)+Mg(L\sin\theta)$$ where $m, M$ are the masses of rod and bob.
A flexible string which has non-negligible weight will hang in a curve called a catenary, not in a straight line. The same method can be used to find $F$. However the position of the centre of mass of the string is not so easy to calculate as for a straight rod.
sammy gerbil
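As a quick numerical check of the moment-balance result above (added for illustration; the masses and angle are made-up values, and Python is only used for the arithmetic):

```python
import math

def holding_force(M, m, theta, g=9.81):
    """Horizontal force from the moment balance
    F*L*cos(theta) = m*g*(L/2)*sin(theta) + M*g*L*sin(theta);
    L cancels, leaving F = (M + m/2)*g*tan(theta) for a uniform rod of mass m."""
    return (M + 0.5 * m) * g * math.tan(theta)

M, m, theta = 0.10, 0.02, math.radians(5)   # bob 100 g, rod 20 g, 5 degrees
print(holding_force(M, m, theta))           # rod mass included
print(holding_force(M, 0.0, theta))         # massless-string limit M*g*tan(theta)
```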
Find the center of mass of the bob and string combination along the length of the string. Then simply assume that the total mass was concentrated as a bob on that part of the string.
Let the original bob have mass $m$, string have mass $m'$, tension be $T$ and magnetic force be $F$. The constant in this question is the angle made by the string.
$$W=(m+m')g$$ $$F=W\tan\theta$$ $$T=\sqrt {F^2+W^2}$$
Biomarkers of environmental manganese exposure and associations with childhood neurodevelopment: a systematic review and meta-analysis
Weiwei Liu1 na1,
Yongjuan Xin1 na1,
Qianwen Li1,
Yanna Shang1,
Zhiguang Ping1,
Junxia Min2,
Catherine M. Cahill3,
Jack T. Rogers3 &
Fudi Wang ORCID: orcid.org/0000-0001-8730-00031,2
Environmental Health volume 19, Article number: 104 (2020)
Although prior studies showed a correlation between environmental manganese (Mn) exposure and neurodevelopmental disorders in children, the results have been inconclusive. There has yet been no consistent biomarker of environmental Mn exposure. Here, we summarized studies that investigated associations between manganese in biomarkers and childhood neurodevelopment and suggest a reliable biomarker.
We searched PubMed and Web of Science for potentially relevant articles published until December 31st, 2019 in English. We also conducted a meta-analysis to quantify the effects of manganese exposure on Intelligence Quotient (IQ) and the correlations of manganese in different indicators.
Of 1754 citations identified, 55 studies with 13,388 subjects were included. Evidence from cohort studies found that higher manganese exposure had a negative effect on neurodevelopment, mostly influencing cognitive and motor skills in children under 6 years of age, as indicated by various metrics. Results from cross-sectional studies revealed that elevated Mn in hair (H-Mn) and drinking water (W-Mn), but not blood (B-Mn) or teeth (T-Mn), were associated with poorer cognitive and behavioral performance in children aged 6–18 years old. Of these cross-sectional studies, most papers reported that the mean of H-Mn was more than 0.55 μg/g. The meta-analysis concerning H-Mn suggested that a 10-fold increase in hair manganese was associated with a decrease of 2.51 points (95% confidence interval (CI), − 4.58, − 0.45) in Full Scale IQ, while the meta-analysis of B-Mn and W-Mn generated no such significant effects. The pooled correlation analysis revealed that H-Mn showed a more consistent correlation with W-Mn than B-Mn. Results regarding sex differences of manganese associations were inconsistent, although the preliminary meta-analysis found that higher W-Mn was associated with better Performance IQ only in boys, at a relatively low water manganese concentrations (most below 50 μg/L).
Higher manganese exposure is adversely associated with childhood neurodevelopment. Hair is the most reliable indicator of manganese exposure for children at 6–18 years of age. Analysis of the publications demonstrated sex differences in neurodevelopment upon manganese exposure, although a clear pattern has not yet been elucidated for this facet of our study.
Environmental metal exposure normally occurs as co-exposure to multiple metals, such as lead, cadmium, arsenic, mercury, chromium and manganese. Among these metals, manganese (Mn) is an essential trace element [1], but it is toxic, especially for brain functions, when abnormal deposition occurs in the body [2].
Growing interest has been recently generated to understand environmental manganese exposure in children [3, 4]. Meta-analysis about autism spectrum disorder (ASD) indicated that the mean difference in blood and hair manganese concentrations between ASD and control individuals was not significant [5]. In terms of neurocognitive development, these epidemiological studies had inconsistent conclusions across different biomarkers [6,7,8,9], which also left open the question as to whether there exists a useful biomarker for Mn exposure.
Evidence-based studies have also evaluated this association between manganese in hair and childhood IQ [10]. However, no comprehensive meta-analysis has been performed to examine Mn associations between different indicators and neurodevelopment. Thus, to the best of our knowledge, no meta-analysis has been performed regarding the putative correlation between such Mn indicators. Compared with cognition, the impacts of Mn on behavioral and motor development in children have been less evaluated, although Mn-related motor changes, such as in manganism, have been evaluated more extensively in occupational exposures [11, 12]. In addition, the potential for sex difference in the consequences of manganese exposure has also drawn attention, as there may be some differences between males and females in patterns of exposure, gastrointestinal absorption of chemicals, metabolism and detoxification [13].
To address these research gaps, the goal of this systematic review and meta-analysis has been to summarize and quantify the scientific evidence through different biomarkers or sources in order to obtain a clearer understanding of the exposure-response relationship between Mn indicators (biomarkers or environmental samples) and neurodevelopmental outcomes. In addition, we performed meta-analyses to seek a pooled correlation between Mn indicators (hair, blood and drinking water) and, here, suggested a potential biomarker for further epidemiologic studies of the toxic impact of Mn in childhood neurodevelopment. We also performed a preliminary meta-analysis to quantify the sex difference between manganese indicators and intelligence. Our conclusions provide useful suggestions for future public health studies, especially on the consequences of heavy metal exposures, such as Mn, towards human health.
Search strategy and inclusion criteria
Our study was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) Statement. The completed PRISMA checklist is provided in Additional file 1. This systematic review protocol was registered with PROSPERO (CRD42020182284). Two investigators (authors W.L. and Y.X.) independently conducted a literature search in PubMed and Web of Science for studies published through December 31st, 2019 in English, using the following search terms: ("manganese" or "manganism" or "manganese exposure") and ("children" or "child" or "infant" or "childhood" or "adolescents" or "early life" or "young" or "younger populations") and ("neurotoxicity" or "neuropsychological effects" or "neurodevelopmental outcomes" or "cognition" or "cognitive" or "intellectual function" or "intellectual impairment" or "intelligence quotient" or "IQ" or "memory" or "attention" or "mental" or "academic performance" or "hyperactivity" or "behavior" or "hyperactive behaviors" or "neurobehavior" or "motor" or "neuromotor"). In addition, the references included in relevant articles were searched for additional eligible publications.
Studies included in this systematic review had to meet the following criteria of being: (1) An original peer reviewed article; (2) A study of populations up to 18 years of age; (3) Manganese exposure was assessed through medicinal biomarkers (i.e. hair and blood) or environmental samples (i.e. drinking water); (4) A study of neurodevelopment derived from manganese exposure, including: cognitive, behavioral and/or motor changes; (5) Potential confounders were adjusted in the mathematical model for the estimated association between Mn indicator and a specific neurological outcome in children.
For inclusion in the meta-analysis, studies had to satisfy the above criteria and had to have measured the effect of manganese exposure on neurodevelopment by regression models, while for correlation analysis, the correlation coefficient (r) was provided. We excluded studies about attention deficit hyperactivity disorder (ADHD), which was reviewed in a recent paper, and the results of which showed higher peripheral manganese concentrations in children diagnosed with ADHD than those in controls [14]. We did not exclude articles published using the same population with different neurodevelopmental assessments [15, 16].
Data extraction and quality assessment
The following information was extracted by two investigators (W.L. and Y.X.) independently using a standardized data collection form: first author, publication year, biomarker, country/study name, study design, sample size, age, sources of manganese exposure, neurological assessments and neurodevelopmental outcomes. For meta-analysis, the regression coefficient (β) with its 95% confidence interval (CI) and correlation coefficient (r) were also extracted. In the event of multiple articles published using the same population when assessing neurodevelopmental outcomes at different ages, and the same data were used in more than one publication, we consistently selected the most informative article, which was usually the most recent publication.
The guideline for Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) was applied to assess the methodological quality of each study by two investigators (W.L. and Y.X.) independently [17]. The STROBE Statement is a checklist of 22 items that was initially developed to evaluate the systematic clarity in communicating research results in observational studies. This checklist has been used in systematic reviews to evaluate the methodological quality of observational studies [10, 18]. Nine items in methods (items 4–12) were selected, which cover the different aspects of methodology in observational studies. The methodological quality was classified by the number of items that the research met. To be more specific, articles that met 0–3 items, 4–6 items and 7–9 items were regarded as low, moderate and high methodological quality, respectively. Any disagreements were resolved by group discussion with a third investigator (Q.L.).
A regression coefficient (β) with corresponding 95% CI was used as the common measure of association across studies [6,7,8, 16, 19,20,21]. A study that stratified by sex was treated as two separate reports [20]. We used a random-effects model to calculate the summarized β metrics and their corresponding 95% CIs. The meta-analysis was restricted to studies that used the Wechsler scales to evaluate IQ and linear regression models to examine the relationships between manganese exposure and children's IQ scores. One study exhibited the scores of estimated IQ, vocabulary, block design and digit span, which were subtests from the Wechsler Intelligence Scale [22]. We took the scores of estimated IQ, block design and vocabulary as Full Scale IQ, Performance IQ and Verbal IQ, respectively [23].
Three manganese exposure metrics were included: hair, blood and drinking water. Furthermore, the β metric was estimated through different expressions of manganese concentration: log10, log2, loge or non-transformation. We unified the expression as a log10-transformation to mean that the change in IQ (β) was associated with a 10-fold increase in the manganese exposure indicator, while we did not transform the β in blood, which was transformed using loge consistently.
More specifically, in a linear regression model in which the manganese concentration (x) had been transformed by logarithm base 2 to correct the skewness of the data distributions, we converted the coefficient to a log10 scale via the relation log2x * β = log10x * β1, obtaining a new coefficient β1, which was approximately equal to 3.32 * β. Two studies assessed the effect of manganese exposure using the raw manganese concentration. We used a similar relation to convert these to changes per log10 unit: β1 = E(x) * β, where E(x) is evaluated at the mean manganese concentration x, specifically E(x) = x/log10x.
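The base changes described above amount to simple rescalings of the reported coefficients; the sketch below (an illustration in Python with made-up numbers, not the authors' code) shows both conversions.

```python
import math

def beta_log2_to_log10(beta_log2):
    """Beta per doubling -> beta per 10-fold increase: multiply by log2(10) ~ 3.32."""
    return beta_log2 * math.log2(10)

def beta_raw_to_log10(beta_raw, mean_conc):
    """Beta per unit concentration -> approximate beta per 10-fold increase,
    using E(x) = x / log10(x) evaluated at the mean concentration."""
    return beta_raw * (mean_conc / math.log10(mean_conc))

print(beta_log2_to_log10(-0.76))      # e.g., a coefficient reported per log2 unit
print(beta_raw_to_log10(-0.05, 12.0)) # e.g., per unit concentration with mean 12
```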
In addition, a meta-analysis of correlation coefficients was also performed. Firstly, the Fisher's z transformation was used to transform data as below,
$$ \text{Fisher's } z = 0.5\,\ln\frac{1+r}{1-r} $$
$$ \mathrm{SE}=\sqrt{\frac{1}{n-3}} \quad (n \text{ is the size of the sample}) $$
$$ \text{summary } r=\frac{e^{2z}-1}{e^{2z}+1} \quad (z \text{ is the summary Fisher's } z) $$
Then, we put the Fisher's z and Standard Error (SE) into RevMan 5.3 using the generic inverse variance random effects model to obtain the summary Fisher's z. Finally, the formula 3 was used to estimate the summary r [24].
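For readers who want to reproduce the pooling step outside RevMan, the following sketch implements the same sequence, i.e., Fisher's z transform, inverse-variance random-effects pooling (DerSimonian-Laird), and back-transformation, in simplified form; it is meant as an illustration of the method (with made-up input values), not the exact software routine used in this study.

```python
import math

def pooled_correlation(studies):
    """studies: list of (r, n) pairs, at least two. Returns the summary
    correlation via Fisher's z and a DerSimonian-Laird random-effects model."""
    z = [0.5 * math.log((1 + r) / (1 - r)) for r, n in studies]
    v = [1.0 / (n - 3) for r, n in studies]           # variance of each z
    w = [1.0 / vi for vi in v]                        # fixed-effect weights
    z_fe = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
    q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, z))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)     # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
    z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
    return (math.exp(2 * z_re) - 1) / (math.exp(2 * z_re) + 1)

# Example with illustrative (r, n) pairs
print(pooled_correlation([(0.48, 200), (0.43, 180), (0.30, 120)]))
```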
The meta-analysis was performed using Stata version 14 for regression and RevMan 5.3 for correlation. Heterogeneity among studies was estimated using the I2 statistic [25]. A "leave-one-out" sensitivity analysis and subgroup analysis based on the source of exposure were also performed. Publication bias was assessed using Egger's test with a significant value set to p < 0.10 [26]. Except where noted otherwise, differences with a p-value < 0.05 were considered significant.
A total of 1754 potentially relevant studies were identified through database searches (see Fig. 1). After applying the stringent inclusion and exclusion criteria described in the methods section, 55 original studies encompassing 13,388 children were ultimately included. Fifteen studies reporting 18 outcomes were included in the meta-analysis, with 9 studies for regression and 9 studies for correlation (see Fig. 1).
PRISMA flow diagram
The sources of manganese exposure were mainly from industrial activities (i.e. metallurgy and mining) and drinking water (see Tables 1, 2, 3). More studies examined postpartum manganese exposure than prenatal exposure, meanwhile there were also some studies that measured manganese exposure from prenatal to postnatal periods. The concentrations of manganese were more frequently measured in biomarkers (n = 52, i.e. hair, blood and teeth) than environmental samples (n = 21, i.e. drinking water, particulate matter and soil). The associations between manganese in biomarkers and neurodevelopmental outcomes were investigated in 15 cohort studies and 37 cross-sectional studies (see Tables 1, 2). Table 3 presents the associations between manganese in drinking water (W-Mn) and neurodevelopment.
Table 1 Neurodevelopmental outcomes of manganese exposure mainly prenatal exposure measured in biomarkers from cohort studies
Table 2 Neurodevelopmental outcomes of manganese exposure measured in biomarkers from cross-sectional studies
Table 3 Neurodevelopmental outcomes of manganese exposure measured in drinking water
The neurological outcomes were assessed more frequently among children between 6 and 18 years of age than children under 6 years old. Amongst children under 6 years old, most studies were cohort studies with the different editions of Bayley Scales of Infant and Toddler Development applied to assess neurodevelopment, and the measurements of manganese mainly reflected prenatal exposures. In the older groups, the well-defined versions of Wechsler Intelligence Scale for Children were used to assess the children's general cognitive abilities. Specific cognitive functions were assessed through its subtests. For behavioral performance, the variant editions of Conners' Rating Scale were applied in most studies. For motor coordination, Finger Tapping Test and Luria Nebraska Motor Battery were administered in most studies. Among these studies, the most adjusted confounders in the mathematical model were maternal education, maternal intelligence, child age and sex, which were selected based on established or plausible associations with neurodevelopment. A large percentage (44/55) of included studies was of high quality (see Additional file 2). Except for three studies, all the others described the efforts to address potential sources of bias, such as blinding of exposure status and outcomes assessment, using validated assessment scales and previously trained psychologists.
Manganese in biomarkers and neurodevelopmental outcomes
In children under 6 years of age, evidence from cohort studies in Table 1 revealed that higher manganese exposure had a negative effect on neurodevelopment [29,30,31,32,33,34,35,36], mainly cognitive and motor development. These studies enrolled pregnant women and mainly collected biomarker tissues, such as cord blood, maternal blood and hair, as well as placenta at delivery [29, 30, 32, 34,35,36]. One study sampled maternal hair and blood at intervals 1–3 times during pregnancy [33]. These biomarkers mentioned above were used to indicate prenatal exposure. The other study collected shed teeth from children beginning at age 7 [31], which provides fine scale temporal profiles of Mn concentrations over the prenatal and early childhood periods. Neurodevelopmental outcomes were assessed by trained psychometricians at follow-up, mainly at 1–2 years of age.
The other two birth cohort studies found an inverted U-shaped association between manganese exposure and cognitive or motor development [27, 28] (see Table 1). Claus Henn et al. (2010) reported that the effect of manganese was apparent for 12-month but diminished for mental development scores at older ages [28], suggesting the possible existence of critical developmental windows. Chung et al. (2015) found a nonlinear dose-response relationship between maternal blood manganese at term and 6-month psychomotor development scores, with a peak point approximately 24–28 μg/L, suggesting adverse neurodevelopmental effects of both low (< 20.0 μg/L) and high (≥ 30.0 μg/L) maternal blood manganese concentrations [27].
The results from cohort studies concerning children over 6 years old were intriguing. Two follow-up studies in Bangladesh and Canada were conducted to evaluate whether changes in drinking water manganese exposure were associated with changes in child intellectual outcomes. In Bangladesh, Wasserman et al. (2016) found that during 2 years of follow-up, the reduction in exposure (indicated by manganese in blood, B-Mn) was not, for the most part, translated into improvements in child IQ. In this cohort, baseline B-Mn was negatively associated with working memory after covariate adjustment [19]. In Quebec (Canada), the result revealed that, for children whose Mn concentrations in their water supply increased between baseline and follow-up, their Performance IQ scores decreased significantly. On the other hand, at follow-up, higher manganese in drinking water was associated with lower Performance IQ in girls, whereas the opposite was observed in boys. Similar trends were observed in hair [20]. Although the results of cohort studies need to be verified, they also suggest the importance of preventing such exposures.
Inconsistent conclusions were drawn from three other cohort studies: one measured manganese in cord blood and spot urine [39], and the other two sampled dentine of incisors [37, 38] (see Table 1). The birth cohort study in China reported that urinary Mn concentrations, but not cord blood manganese, were positively associated with the Performance IQ of school-age children, especially girls [39]. Mora et al. (2015) found that higher prenatal and early postnatal manganese in teeth (T-Mn) was adversely associated with behavioral outcomes, namely internalizing, externalizing and hyperactivity problems, in children at 7 and 10.5 years. In sex-stratified models, Mora et al. (2015) found that higher prenatal and postnatal T-Mn was associated with better memory abilities at ages 9 and 10.5 years, and with better cognitive and motor outcomes at ages 7 and 10.5 years, among boys only [38]. On the other hand, Claus Henn et al. (2018) found that higher postnatal T-Mn was negatively associated with both Wide Range Assessment of Visual Motor Abilities (WRAVMA) total and visual spatial subtest scores, among boys only [37]. Mn interactions with lead (Pb) were also examined. Mora et al. (2015) reported that higher prenatal T-Mn was associated with poorer visuospatial memory outcomes at 9 years and worse cognitive scores at 7 and 10.5 years in children with higher prenatal blood lead concentrations (≥ 0.8 μg/dL) [38], and Claus Henn et al. (2018) found that second-trimester tooth Mn was positively associated with visual spatial and total WRAVMA scores among children with lower (< median) tooth Pb concentrations, while no significant Mn association was observed at high Pb concentrations [37]. These inconsistent findings may be due to differences in biomarkers (blood and urine vs. teeth) or in sources of Mn exposure (Mn-containing fungicides vs. dietary and airborne sources).
Although most cohort studies found an adverse association between manganese exposure and neurodevelopment, Mn interactions with sex and with other metals, such as Pb, have been gaining attention. Among these studies, only six described the sources of manganese exposure during pregnancy, such as mining, mancozeb and drinking water, and the concentration of manganese in drinking water (W-Mn) was measured in only two studies [19, 20]. Dion et al. (2018) reported that Mn in hair (H-Mn) correlated with W-Mn at follow-up (r, 0.48; p < 0.001) and with time-averaged W-Mn (r, 0.43; p < 0.001) [20].
Two of the 37 cross-sectional studies investigated associations between Mn exposure and developmental scores in infants. Postnatal manganese exposure was measured in breast milk, blood, urine and hair; no significant association was observed [40, 41], apart from a significantly negative association in the unadjusted model of one study [41] (see Table 2).
There were 35 studies concerning children over 6 years old, and these also measured manganese in related biomarkers, such as hair (n = 24) [6,7,8, 15, 16, 21, 22, 43, 45, 46, 49, 51,52,53,54, 59,60,61,62,63, 65,66,67,68], blood (n = 17) [6, 7, 9, 21, 44, 49, 52, 53, 56,57,58,59,60, 63,64,65, 67], teeth (n = 5) [42, 47, 48, 50, 55], saliva, toenails [8] and urine [60]. A single study could appear under more than one biomarker, as was the case for the study in New Brunswick (Canada), which measured manganese in hair, saliva and toenails [8].
A central result was that elevated H-Mn was associated with poorer cognitive and behavioral performance in most studies (n = 17), in terms of IQ [6, 7, 16, 21, 22, 43, 49, 52, 68], working memory [61, 63], verbal memory [46], visuoperception and short-term visual memory [54], long-term memory and learning abilities [67], memory and attention [15], hyperactivity behaviors [45, 46], oppositional behaviors [45] and externalizing behavioral problems [62] (see Table 2). Among them, Oulhote et al. (2014) found no significant association between manganese exposure and hyperactivity [15]. A large proportion (13/17) of these studies reported that the mean manganese in hair exceeded 0.55 μg/g, a value similar to the mean concentration in control groups [6, 54]. Haynes et al. (2015) also found that, compared with the middle two quartiles, the lowest quartile of H-Mn (< 0.21 μg/g) was associated with significantly lower mean perceptual reasoning scores [52]. Similarly, one cross-sectional study revealed a positive association between H-Mn and cognitive function in children aged 6–8 years, with a low median concentration of Mn in hair (0.82 ng/g) [51]. No significant associations were found in three studies in terms of cognitive functions [8, 59, 66] or behavioral performance [59], with the average manganese in hair ranging from 0.17 μg/g to 0.3 μg/g. For motor function, two studies showed that elevated H-Mn was associated with tremor intensity [60] and poor postural balance [65], while two articles found no association between H-Mn and motor function [46, 53]. In another publication, Oulhote et al. (2014) found a nonlinear association between H-Mn and motor function, with a slight increase at concentrations between 0.3 and 0.8 μg/g and an apparent decrease in scores at H-Mn > 10 μg/g, although there were very few observations with such high concentrations [15]. This inconsistency may be due to the different levels of manganese exposure and the sensitivity of the scales, as the average concentration of H-Mn ranged widely from 0.16 μg/g to 14.6 μg/g [15, 46, 53, 60, 65]. Carvalho et al. (2018) reported poorer cognition and behavior, but no effect on motor function, in the same exposed population [46] (see Table 2).
In contrast to hair, most (n = 9) reports in Table 2 indicated that B-Mn was not significantly associated with cognitive and behavioral development [6, 7, 9, 21, 49, 56, 57, 59, 67]. However, two studies did show that elevated B-Mn was associated with poorer cognitive development, using IQ [58] and visual attention, visual perception and phonological awareness [63] as outcome measures. Two publications suggested that both low and high B-Mn were negatively associated with cognitive and behavioral development [44, 52]. In relation to motor development, three studies showed that elevated B-Mn was associated with impairment of motor functions, namely tremor intensity [60], postural balance [65], and coordination and motor speed [53], whereas one study indicated no significant association [64] (see Table 2). Among these reports, the mean concentration of manganese in blood was mostly around 10 μg/L, reflecting the relatively tight homeostatic regulation of blood manganese.
Five publications used teeth as a biomarker [42, 47, 48, 50, 55] (see Table 2). One study found an inverted U-shaped association between prenatal Mn and visuospatial ability in girls; no significant associations were found for postnatal Mn [42]. Ericson et al. (2007) found that higher Mn in teeth was adversely associated with behavioral outcomes [50]. Horton et al. (2018) revealed that prenatal Mn exposure appeared to be protective against behavioral problems, whereas postnatal Mn appeared to be a risk factor for behavioral problems [55]. Two studies indicated that there were no significant associations between Mn in deciduous teeth and behavioral [47] or motor development [48]. These results suggest that Mn associations are partly driven by exposure timing and modified by sex. Three studies also found that tooth Mn concentrations were higher in the prenatal than in the postnatal period [42, 48, 55], indicating a greater demand for manganese in the prenatal period. No significant associations were observed between neurodevelopmental outcomes and Mn in saliva, toenails [8] or urine [60] (see Table 2).
Manganese in drinking water and neurodevelopmental outcomes
Evidence from cohort studies indicated that elevated W-Mn was associated with lower IQ scores in girls [20] and an increased risk of children's behavioral problems at 10 years of age [69] (see Table 3). Rodrigues et al. (2016) also found an inverted U-shaped association between W-Mn and motor development, with an inflection point around 400 μg/L [70].
Most (n = 7) cross-sectional studies found that higher W-Mn was associated with poorer cognitive and behavioral function, such as IQ [9, 16, 49], memory [15], written language [63], mathematics scores [71] and the risk of behavioral problems [56]. The mean concentrations of W-Mn ranged from 795 to 1387.9 μg/L in three studies conducted in Araihazar, a rural area of Bangladesh [9, 56, 71], much higher than W-Mn in Canada, where an arithmetic mean of 98 μg/L was reported in two studies [15, 16]. The W-Mn in two studies conducted in Brazil was much lower, with mean W-Mn around 20 μg/L in the rural group [49, 63]. By contrast, two reports found no clear association between W-Mn and childhood IQ [8] or behavioral function [15] (see Table 3). Both studies were conducted in Canada [8, 15]; one described a setting where W-Mn was low, with approximately half of the children's home tap water containing manganese concentrations below 5 μg/L [8]. For motor function, the association with W-Mn was significant, with a threshold indicating that scores decreased more steeply at concentrations above 180 μg/L, and this study also found that manganese intake from water was negatively associated with motor function [15]. Of note, most studies also measured manganese in hair or blood; the conclusions for W-Mn were mainly consistent with those for H-Mn [15, 16, 49, 63], whereas inconsistency was observed for blood [9, 56].
Pooled effect estimates for IQ scores
The details for our meta-analysis were extracted for three manganese exposure metrics: hair, blood and drinking water, as shown in Additional file 3. Among these studies, seven had a cross-sectional design, and two cohort studies were treated as cross-sectional studies by using the associations between Mn exposure and concurrent IQ scores at the baseline examination [19] or the follow-up examination [20].
Figure 2 shows that a 10-fold increase in hair manganese is associated with a decrease of 2.51 points (95% CI, − 4.58, − 0.45; I² = 59.8%) in Full Scale IQ in children aged 6–18 years. Of note, this inverse relationship remained significant when we conducted a sensitivity analysis in which one study was removed at a time (see Additional file 4). The pooled results with respect to Performance IQ were not particularly robust and should be investigated further. Heterogeneity was 59.8% for Full Scale IQ, and the "leave-one-out" analysis revealed that some heterogeneity remained. Next, we performed a subgroup analysis based on the source of exposure, which revealed a significant inverse association between H-Mn and IQ scores for airborne manganese exposure [6, 21, 22], but not for waterborne [8, 16, 20] or mining-waste manganese exposure [68]. The pooled estimate indicated that a 10-fold increase in hair manganese from airborne exposure was associated with a decrease of 7.62 points (95% CI, − 11.51, − 3.73; I² = 0%) in Full Scale IQ; for Performance IQ, the decrease was 2.60 points (95% CI, − 3.94, − 1.25; I² = 0%), and for Verbal IQ, 4.56 points (95% CI, − 8.33, − 0.79; I² = 45.5%). Unexpectedly, the concentrations of H-Mn from airborne manganese exposure were much higher than the others. Therefore, we concluded that both the source of manganese exposure and the concentrations of H-Mn likely account, at least in part, for this relatively high heterogeneity. The results from Begg's and Egger's tests did not suggest the existence of publication bias.
Forest plots of effect size on intellectual quotient (IQ) by a 10-fold increase in hair manganese. a: waterborne manganese exposure, b: airborne manganese exposure, c: manganese exposure from mining waste
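For readers who wish to reproduce this type of pooling, the sketch below shows a standard DerSimonian-Laird random-effects combination of study-level slopes (change in IQ per 10-fold increase in hair Mn), together with a 95% confidence interval and the I² heterogeneity statistic. It is a minimal illustration in Python with numpy; it is not the code used for this analysis, and the per-study estimates in the example are hypothetical placeholders rather than the values extracted from the included studies.

import numpy as np

def pool_random_effects(betas, ses):
    # DerSimonian-Laird random-effects pooling of per-study slopes.
    # betas: per-study change in IQ for a 10-fold increase in hair Mn
    # ses:   corresponding standard errors
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                          # fixed-effect (inverse-variance) weights
    beta_fe = np.sum(w * betas) / np.sum(w)
    q = np.sum(w * (betas - beta_fe) ** 2)    # Cochran's Q
    df = len(betas) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_re = 1.0 / (ses**2 + tau2)              # random-effects weights
    beta_re = np.sum(w_re * betas) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return beta_re, (beta_re - 1.96 * se_re, beta_re + 1.96 * se_re), i2

# Hypothetical study-level estimates (illustration only, not the extracted data)
beta, ci, i2 = pool_random_effects([-3.1, -1.2, -4.0, -0.8], [1.0, 1.4, 1.8, 0.9])
print(f"pooled beta = {beta:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {i2:.1f}%")

A leave-one-out sensitivity analysis of the kind described above corresponds to repeating this pooling with each study omitted in turn.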
The meta-analyses for drinking water and blood revealed no significant effects (Additional files 5, 6). Of significance, among the reports that used both hair and blood as biomarkers, a large proportion (7/11) indicated that H-Mn, but not B-Mn, was negatively associated with cognitive development [6, 7, 21, 49, 52, 59, 67] (Table 2).
Different biomarkers in Mn exposure and neurodevelopment
In the older group, most studies used hair and blood as biomarkers, while teeth were also used as a biomarker in some publications, with inconsistent Mn associations. Some studies also measured manganese in environmental samples, such as drinking water, soil and airborne particles. The correlations between manganese in drinking water and a biomarker (hair or blood), or between different biomarkers, were analyzed in nine studies, as shown in Additional file 7. Among these publications, six studies used Spearman's rank correlation [7, 9, 49, 56, 63, 67], as the distributions of manganese concentrations in biomarkers and drinking water were considerably skewed. Three studies analyzed the correlation using Pearson correlation tests; in these studies, the concentrations of manganese in the indicators were transformed to make the distributions more symmetrical [20, 21, 52].
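The choice between Spearman's rank correlation and Pearson correlation on transformed values can be illustrated with simulated right-skewed data. The short Python sketch below (numpy and scipy are assumed to be available; the data are synthetic, not taken from any included study) contrasts Pearson on raw concentrations, Pearson on log10-transformed concentrations, and the rank-based Spearman coefficient for a log-normal exposure pair.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated, right-skewed exposure data (illustration only): water Mn and hair Mn
log_w = rng.normal(0.0, 1.0, 200)
w_mn = np.exp(log_w)                                      # log-normal water Mn
h_mn = np.exp(0.6 * log_w + rng.normal(0.0, 0.8, 200))    # correlated hair Mn

print("Pearson on raw values:   ", stats.pearsonr(w_mn, h_mn)[0])
print("Pearson on log10 values: ", stats.pearsonr(np.log10(w_mn), np.log10(h_mn))[0])
print("Spearman (rank-based):   ", stats.spearmanr(w_mn, h_mn)[0])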
A preliminary meta-analysis was conducted to obtain pooled estimates of the correlations between different manganese indicators (see Additional file 7). The correlation between H-Mn and W-Mn was significant, with a pooled correlation coefficient r of 0.48 (95% CI, 0.40, 0.55). By contrast, the summary correlations between B-Mn and W-Mn, and between B-Mn and H-Mn, were not significant. Although different analytical methods were applied in the three studies that analyzed the correlation between H-Mn and W-Mn, the conclusion was consistent [20, 49, 63].
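One common way to pool study-level correlation coefficients, and a plausible reading of the procedure summarized above, is to combine them on Fisher's z scale with inverse-variance weights and back-transform the result. The sketch below is a minimal Python illustration of that approach; the study-level r values and sample sizes are hypothetical placeholders, not the extracted data, and the paper's actual software is not specified in this excerpt.

import numpy as np

def pool_correlations(rs, ns):
    # Pool correlation coefficients via Fisher's z transform with
    # inverse-variance weights (var(z) ~ 1/(n - 3)); returns pooled r and 95% CI.
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)
    w = ns - 3.0
    z_bar = np.sum(w * z) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    return np.tanh(z_bar), (np.tanh(lo), np.tanh(hi))

# Hypothetical study-level correlations between H-Mn and W-Mn (illustration only)
r, ci = pool_correlations(rs=[0.53, 0.40, 0.48], ns=[375, 180, 210])
print(f"pooled r = {r:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")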
Wasserman et al. (2011) found that blood manganese did not vary predictably across the low and high W-Mn groups, suggesting that blood may not be a good reflection of drinking water Mn exposure [72]. For airborne manganese exposure, Torres-Agustin et al. (2013) observed a statistically significant difference between groups in median blood Mn concentrations, 8.0 and 9.5 μg/L for non-exposed and exposed children, respectively, whereas hair Mn concentrations in exposed children were, on average, 20 times higher (median 12.6 and mean 14.2 μg/g) than in the non-exposed group (median 0.6 and mean 0.73 μg/g) [67]. These results indicate that hair is more sensitive than blood in reflecting environmental manganese exposure.
In infants, one report found that maternal and cord blood manganese concentrations were correlated, though in a nonlinear manner. Here, the median manganese in cord blood was nearly twice the median concentration in maternal blood; unexpectedly, inverse associations were found between manganese in maternal blood, but not cord blood, and early childhood mental and psychomotor development scores [29]. The other biomarkers used to reflect prenatal Mn exposure (i.e., maternal and infant hair and placenta) did not show such correlations.
Sex specific exposure-response relationships
Evidence from eight cohort studies yielded inconsistent conclusions regarding sex-specific effects (see Table 4). Of the four studies that reported a statistically significant sex interaction (p < 0.05), three concluded that girls were more susceptible to manganese exposure than boys in terms of cognition and motor function [20, 31, 33], whereas Takser et al. (2003) found that, in children at 3 years, the hand skill score was negatively associated with cord blood Mn in boys (p = 0.002) but not in girls [34]. Two cohort studies found positive, non-statistically significant associations between manganese exposure and cognitive development in girls [69] and cognitive and motor development in boys [38]. Sex interaction p-values in the remaining two reports were not available: a positive association between urinary Mn concentrations and Performance IQ was observed, especially in girls [39], and Claus Henn et al. (2018) found significantly negative associations between T-Mn and visual spatial scores among boys only [37].
Table 4 Characteristics of the 18 studies that conducted sex-stratified analyses
Seven cross-sectional studies consistently concluded that girls were more susceptible to manganese exposure than boys with respect to cognition and behavior [6, 8, 16, 42, 54, 62, 67], with non-statistically significant associations in three reports [16, 42, 54] and interaction p-values not available in four studies [6, 8, 62, 67]. Three studies reported a statistically significant interaction between manganese exposure and sex (p < 0.05), but without a clear pattern [41, 46, 48]. Rink et al. (2014) found that, in children aged 14–45 months, H-Mn was negatively associated with cognitive, receptive language and expressive language scores for girls only in the unadjusted model [41], whereas a negative association between H-Mn and the free recall after interference score was observed, especially in boys [46]. Chiu et al. (2017) found that higher prenatal Mn was associated with better body stability in boys, with opposite associations in girls; for tremor, on the other hand, higher early postnatal Mn was associated with increased right-hand center frequency in girls, whereas increased Mn concentrations in the later postnatal period were associated with increased center frequency in boys [48].
Three studies met the criteria for meta-analysis, since they had a similar number of participants and all adjusted for potential confounders such as maternal nonverbal intelligence, maternal education and family income [8, 16, 20]. Bouchard et al. (2011) only provided details of the sex-stratified analysis for Full Scale IQ [16]; therefore, only two studies were included in the meta-analysis for Performance IQ and Verbal IQ. Figure 3 shows that higher W-Mn is associated with better Performance IQ among boys only (change in scores for a 10-fold increase in concentration, β = 3.21; 95% CI, 1.55, 4.87). In these two studies, a large percentage of children were exposed to drinking water manganese below 50 μg/L (the esthetic Canadian guideline concentration for W-Mn) [8, 20]. The meta-analysis of the three studies concerning childhood IQ and H-Mn found no significant difference between boys and girls [8, 16, 20] (Additional file 8).
Meta-analysis of studies that stratified by sex reporting the effect of a 10-fold increase in drinking water manganese on intellectual quotient (IQ)
This systematic review and meta-analysis was based on 55 studies, including 17 cohort studies and 38 cross-sectional studies, with 13,388 participants. Evidence from cohort studies indicated that higher manganese exposure had a negative effect on neurodevelopment, mainly cognitive and motor skills, in children under 6 years of age. In children aged 6–18 years, results from cross-sectional studies revealed that higher H-Mn and W-Mn, but not B-Mn or T-Mn, were negatively associated with cognitive and behavioral performance. Of these cross-sectional studies, most reported that the mean concentration of manganese in hair was more than 0.55 μg/g. The pooled results for H-Mn revealed that a 10-fold increase in H-Mn was associated with a decrease of 2.51 points (95% CI, − 4.58, − 0.45) in Full Scale IQ in children aged 6–18 years. In this older group, hair was the most consistent and reliable indicator of manganese exposure. The published data did demonstrate sex differences in response to manganese exposure, although without a clear pattern; possibly, girls are more susceptible to manganese exposure than boys.
It is worth noting that the association between manganese exposure and motor performance was inconsistent across hair, blood and teeth. Only the results from infants [27, 34, 36, 70] and one study from the older group that measured manganese in drinking water [15] supported the conclusion that higher manganese exposure had a negative effect on motor skills. Occupational manganese exposure in adults is known to cause parkinsonian-like movement disorders [12]. Of note, animal studies reported that increased brain manganese concentrations, whether produced by Mn exposure or by genetic strategies, can cause severe motor deficits [73,74,75]. This consequence of excess Mn appears to be partly due to its interaction with other metals, such as iron, operating through post-transcriptional iron-responsive element-driven regulatory mechanisms [76, 77]. Future studies are needed to evaluate the association between manganese exposure and motor performance in children.
Given the emerging evidence associating elevated Mn exposure with neurological impairments in children, it is critical to explore children's exposure to Mn from different sources. Evidence from cross-sectional studies indicated that groundwater and industrial emissions from ferromanganese alloy plants and mining were the main sources of environmental manganese exposure. Children were therefore exposed to manganese mainly by inhaling pollutants from industrial emissions and by drinking water. Compared with the cross-sectional studies (Table 2), most birth cohort studies enrolled mother-infant pairs in hospitals or clinics without a specific source of manganese exposure. All of these studies measured manganese in biomarkers, which can reflect all sources (i.e. diet, air and water) and routes of exposure [78], although Mn homeostasis differs markedly between dietary uptake and inhalation. Additionally, almost all of these studies carried out the analysis based on percentile grouping, with some studies yielding additional indications of a dose-response relation of some shape (i.e. linear or an inverted U). Nevertheless, further research is needed to explore the mechanisms underlying the absorption and distribution of manganese from different sources.
Additionally, there is a particular need for a consistent biomarker to accurately assess children's exposure to Mn. The concentrations of manganese were frequently measured in hair and blood to reflect the internal Mn dose in children aged 6–18 years. Results from waterborne and airborne manganese exposure indicated that hair was more sensitive than blood in reflecting the body burden of manganese. Moreover, hair manganese from airborne exposure was much higher than that from waterborne exposure and was negatively associated with childhood IQ scores.
From these analyses, hair is the more promising measure of long-term Mn exposure when compared with blood (which has a half-life of 4 or 39 days, depending on the elimination pathway [79]). Many metals are deposited in keratin, a component of hair, and the relatively slow growth rate of hair means that hair represents integrated exposures [80]. In most publications, the 2 cm of newly grown hair closest to the scalp was used to measure the concentration of manganese, which reflects exposure during the 2–4 months before sampling [81]. In addition, teeth also reflect long-term exposure, as Mn is slowly metabolized and accumulates in teeth [82]. All eight studies that used teeth as a biomarker measured manganese in naturally shed deciduous teeth, although the tooth type (incisors, canines or molars) varied among studies. In most studies, cumulative Mn exposures were estimated in incisors that were free of obvious defects such as caries and extensive tooth wear, which reflect manganese exposure from 13 to 16 weeks after gestation to approximately 1 year of age [83]. In fact, animal studies showed that H-Mn was significantly correlated with T-Mn; furthermore, correlation coefficients clearly supported links between H-Mn and cognitive functions, reflected by escape latencies and the number of platform crossings, and these correlations were better than those for teeth [84]. Additionally, hair is easier to obtain than teeth. Toenails can also be employed as a tissue source to measure chronic exposure to this metal [8], but this technique has rarely been used to establish environmental manganese exposure in children. The characteristics of relevant biomarkers are summarized in Table 5 [79, 81,82,83, 85,86,87,88,89,90,91,92,93,94,95,96,97,98,99].
Table 5 Characteristics of relevant biomarkers used in children
Overall, this review suggests that hair is the most reliable indicator of environmental manganese exposure in children aged 6–18 years. Traditionally, the main problem with using hair as a biomarker is the potential for external contamination. In response to this, except for a study published in 2007 [45], all other studies that measured manganese in hair used defined cleaning methodologies to eliminate external contamination. Eastman et al. (2013), for example, developed a hair cleaning methodology that effectively eliminates exogenous metal contamination [86]. This method can substantiate the use of hair as a biomarker of environmental Mn exposure in children. It should be noted that hair dye or other topical treatments could influence the content of manganese in hair [100], although topical hair treatment is infrequent in children. Even so, two of the included studies excluded children who reported using hair dye in the preceding 5 months [15, 16]. Further work is needed to determine the utility of hair as a biomarker in preschoolers exposed to manganese. For infants, there appears to be insufficient hair for analysis. It is worth noting that teeth provide integrated measures of exposure over the prenatal and early childhood periods, and may therefore be a promising biomarker of manganese exposure in infants.
It is important to accurately determine the safe range of manganese exposure. For this reason, we extracted the reference range or cut-off point used in the reviewed articles (see Additional file 9), although limitations in our data precluded us from directly addressing some aspects of this important issue. With regard to H-Mn, we found that the cut-off points used were much higher than the upper limit of the reference range. Accordingly, negative associations between H-Mn and neurodevelopment were observed in two studies that used 2 or 3 μg/g as the cut-off point [43, 45]. Haynes et al. (2015) found that, compared with 0.21–0.75 μg/g, both lower and higher H-Mn were associated with lower IQ scores [52]; this interval may be closer to the possible reference range in children.
It will be critical to consider the timing of Mn exposure, because the developing brain may have certain periods of heightened sensitivity to environmental manganese exposure. Takser et al. (2003) found negative relationships between cord blood Mn concentrations and several psychomotor sub-scales at the age of 3 years, but not at 9 months or 6 years, after adjustment for potential confounders [34].
Some included studies measured prenatal exposure, as indicated by manganese in maternal and cord blood, maternal and infant hair, and placenta. Other studies measured manganese in teeth, which reflects prenatal and postnatal exposure (from 13 to 16 weeks after gestation to 1 year of age). Most of the included studies measured postnatal manganese exposure with a cross-sectional design. Hair was the most frequently used biomarker; the 2 cm of hair closest to the scalp reflects exposure during the 2–4 months before sampling [81]. Among these studies, we tend to believe that manganese exposure was continuous, as some cross-sectional studies recruited children who had lived in the same community for a minimum of 3 months or 5 years, to ensure continuous exposure to the same source for this period of time. Follow-up studies are warranted to explore the periods of critical vulnerability to environmental manganese exposure.
In this review, information regarding sex differences in the effects of manganese exposure from both cohort and cross-sectional studies was inconsistent. Perhaps there was a trend suggesting that girls were more susceptible to manganese exposure than boys. Almost all studies found no significant sex differences in Mn concentrations in biomarkers and drinking water, except for four studies for which the relevant details were not available [34, 39, 54, 67]. Given that most studies were not specifically designed to evaluate sex interactions, low statistical power may in part explain some of the inconsistency between studies.
Recently, a study of single nucleotide polymorphisms in the Mn transporter genes SLC30A10 and SLC39A8 also found a sex difference in the association between Mn concentrations and genotypes [101]. The mechanisms behind potential sex differences in Mn toxicity are complicated, possibly involving sex differences in the developing brain [102] and biological differences in neurochemistry and hormone activity [103]. In addition, data from animal studies have shown that Mn exposure caused sex-dependent neuronal morphological changes, and that these changes were not due to differential Mn accumulation between sexes but to differences in sensitivity to Mn exposure [104]. All of these differences may contribute to sexual dimorphism in the associations between Mn exposure and neurodevelopment.
Our study has the following limitations that warrant discussion. Firstly, most studies in this review are cross-sectional, so no causal relationship can be inferred. In addition, a consistent biomarker for infants was not identified; teeth are perhaps the most promising biosample in this case, although they are less easy to obtain than hair. Limitations in our data precluded us from identifying the safe range of manganese exposure and the periods of critical vulnerability to environmental manganese exposure. Finally, only a limited number of studies could be analyzed in order to maintain relative homogeneity; however, we do not believe that this affected our analysis, given the stability of our sensitivity analysis.
Overall, to the best of our knowledge, this is the only comprehensive systematic review and meta-analysis regarding the biomarkers and sources of manganese exposure and cognitive, behavioral and motor functions in children. Outcomes from cohort and cross-sectional studies indicated that higher manganese exposure was negatively associated with neurodevelopment in children. In addition, this is the first meta-analysis of the correlations between different manganese indicators; our results indicated that H-Mn was more strongly correlated with W-Mn than was B-Mn. Therefore, we propose that hair is the most suitable biomarker for future studies.
Higher manganese exposure is negatively associated with childhood neurodevelopment, especially cognitive and motor skills in children under 6 years old and cognitive and behavioral performance in children aged 6–18 years. In the older group (6–18 years), hair is the most reliable indicator of manganese exposure. However, the evidence demonstrated sex differences in response to manganese exposure, although a clear pattern has not been elucidated. Population-based biomonitoring studies with standard hair cleaning methodologies are warranted in order to establish reference ranges of manganese in hair at different ages. Large prospective cohort studies are certainly warranted in order to confirm these results and identify the underlying biological mechanisms.
AARES:
The academic achievement records of the elementary schools
The attention-deficit/hyperactivity disorder (ADHD) diagnostic system
AMP:
Aptitudes mentales primarias
APS:
Accusway plus system
BASC-2:
Behavior assessment system for children, 2nd edition
BOT-2:
The Bruininks-Oseretsky Test, 2nd edition
CANTAB:
Cambridge neuropsychological test automated battery
CAVLT:
The children's auditory verbal learning test
CBCL:
The standardized child behavior checklist
CDIIT:
The comprehensive developmental inventory for infants and toddlers
CPRS-R:
The revised Conners' Rating Scale for parents
CPT-II:
Conners' Continuous Performance Test II
CTRS-R:
The revised Conners' Rating Scale for teachers
DBD:
The disruptive behavior disorders
DDST-II:
Denver developmental screening test II
Danish products developments
DS:
Digit span
FT:
Finger tapping
FTT:
The forbidden toy task
GDI:
Gesell developmental inventory
GP:
Grooved pegboard
LNMB:
Luria-Nebraska Motor Battery
MSCA:
The McCarthy scales of children's abilities
NBNA:
Neonatal behavioral neurological assessments
NEPSY-II:
Developmental neuropsychological assessment, second edition
NEUPSILIN-Inf:
The Brazilian child brief neuropsychological assessment battery
Pursuit aiming
PCM:
The Raven's progressive color matrices scale
PEDS:
Parents' evaluation of developmental status
RCPM:
The Raven's Colored progressive matrices
ROCF:
The Rey-Osterrieth complex figure
Santa Ana test
SDQ:
The strengths and difficulties questionnaire
The virtual radial arm maze
WASI:
Wechsler abbreviated scale of intelligence
WISC:
The Wechsler Intelligence Scale for Children
W-M:
Woodcock-Muñoz tests of cognitive abilities
WRAVMA:
The wide range assessment of visual motor abilities
Aschner M, Erikson K. Manganese. Adv Nutr. 2017;8(3):520–1.
de Water E, Proal E, Wang V, Medina SM, Schnaas L, Tellez-Rojo MM, et al. Prenatal manganese exposure and intrinsic functional connectivity of emotional brain areas in children. Neurotoxicology. 2018;64:85–93.
Leonhard MJ, Chang ET, Loccisano AE, Garry MR. A systematic literature review of epidemiologic studies of developmental manganese exposure and neurodevelopmental outcomes. Toxicology. 2019;420:46–65.
Iyare PU. The effects of manganese exposure from drinking water on school-age children: a systematic review. Neurotoxicology. 2019;73:1–7.
Saghazadeh A, Rezaei N. Systematic review and meta-analysis links autism and toxic metals and highlights the impact of country development status: higher blood and erythrocyte levels for mercury and lead, and higher hair antimony, cadmium, lead, and mercury. Prog Neuropsychopharmacol Biol Psychiatry. 2017;79(Pt B):340–68.
Riojas-Rodríguez H, Solís-Vivanco R, Schilmann A, Montes S, Rodríguez S, Ríos C, et al. Intellectual function in Mexican children living in a mining area and environmentally exposed to manganese. Environ Health Perspect. 2010;118(10):1465–70.
Haynes EN, Sucharew H, Hilbert TJ, Kuhnell P, Spencer A, Newman NC, et al. Impact of air manganese on child neurodevelopment in East Liverpool, Ohio. Neurotoxicology. 2018;64:94–102.
Bouchard MF, Surette C, Cormier P, Foucher D. Low level exposure to manganese from drinking water and cognition in school-age children. Neurotoxicology. 2018;64:110–7.
Wasserman GA, Liu X, Parvez F, Ahsan H, Levy D, Factor-Litvak P, et al. Water manganese exposure and children's intellectual function in Araihazar, Bangladesh. Environ Health Perspect. 2006;114(1):124–9.
Rodriguez-Barranco M, Lacasana M, Aguilar-Garduno C, Alguacil J, Gil F, Gonzalez-Alzaga B, et al. Association of arsenic, cadmium and manganese exposure with neurodevelopment and behavioural disorders in children: a systematic review and meta-analysis. Sci Total Environ. 2013;454–455:562–77.
Meyer-Baron M, Knapp G, Schäper M, van Thriel C. Performance alterations associated with occupational exposure to manganese--a meta-analysis. Neurotoxicology. 2009;30(4):487–96.
Kwakye GF, Paoliello MM, Mukhopadhyay S, Bowman AB, Aschner M. Manganese-induced parkinsonism and Parkinson's disease: shared and distinguishable features. Int J Environ Res Public Health. 2015;12(7):7519–40.
Llop S, Lopez-Espinosa MJ, Rebagliato M, Ballester F. Gender differences in the neurotoxicity of metals in children. Toxicology. 2013;311(1–2):3–12.
Shih JH, Zeng BY, Lin PY, Chen TY, Chen YW, Wu CK, et al. Association between peripheral manganese levels and attention-deficit/hyperactivity disorder: a preliminary meta-analysis. Neuropsychiatr Dis Treat. 2018;14:1831–42.
Oulhote Y, Mergler D, Barbeau B, Bellinger DC, Bouffard T, Brodeur ME, et al. Neurobehavioral function in school-age children exposed to manganese in drinking water. Environ Health Perspect. 2014;122(12):1343–50.
Bouchard MF, Sauve S, Barbeau B, Legrand M, Brodeur ME, Bouffard T, et al. Intellectual impairment in school-age children exposed to manganese from drinking water. Environ Health Perspect. 2011;119(1):138–43.
von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Int J Surg. 2014;12(12):1495–9.
Mikó A, Pótó L, Mátrai P, Hegyi P, Füredi N, Garami A, et al. Gender difference in the effects of interleukin-6 on grip strength - a systematic review and meta-analysis. BMC Geriatr. 2018;18(1):107.
Wasserman GA, Liu X, Parvez F, Factor-Litvak P, Kline J, Siddique AB, et al. Child intelligence and reductions in water arsenic and manganese: a two-year follow-up study in Bangladesh. Environ Health Perspect. 2016;124(7):1114–20.
Dion LA, Saint-Amour D, Sauve S, Barbeau B, Mergler D, Bouchard MF. Changes in water manganese levels and longitudinal assessment of intellectual function in children exposed through drinking water. Neurotoxicology. 2018;64:118–25.
Menezes-Filho JA, Novaes Cde O, Moreira JC, Sarcinelli PN, Mergler D. Elevated manganese and cognitive performance in school-aged children and their mothers. Environ Res. 2011;111(1):156–63.
Carvalho CF, Menezes-Filho JA, de Matos VP, Bessa JR, Coelho-Santos J, Viana GF, et al. Elevated airborne manganese and low executive function in school-aged children in Brazil. Neurotoxicology. 2014;45:301–8.
Wechsler D. Wechsler intelligence scale for children. San Antonio, TX: Psychological Corporation; 1991.
Shadish WR, Haddock CK. Combining estimates of effect size. In: Cooper H, Hedges LV, editors. The Handbook of Research Synthesis. New York: Russell Sage Foundation; 1994. p. 265–6.
Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557–60.
Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315(7109):629–34.
Chung SE, Cheong HK, Ha EH, Kim BN, Ha M, Kim Y, et al. Maternal blood manganese and early neurodevelopment: the mothers and children's environmental health (MOCEH) study. Environ Health Perspect. 2015;123(7):717–22.
Claus Henn B, Ettinger AS, Schwartz J, Tellez-Rojo MM, Lamadrid-Figueroa H, Hernandez-Avila M, et al. Early postnatal blood manganese levels and children's neurodevelopment. Epidemiology. 2010;21(4):433–9.
Claus Henn B, Bellinger DC, Hopkins MR, Coull BA, Ettinger AS, Jim R, et al. Maternal and cord blood manganese concentrations and early childhood neurodevelopment among residents near a mining-impacted superfund site. Environ Health Perspect. 2017;125(6):067020.
Freire C, Amaya E, Gil F, Fernandez MF, Murcia M, Llop S, et al. Prenatal co-exposure to neurotoxic metals and neurodevelopment in preschool children: the environment and childhood (INMA) project. Sci Total Environ. 2018;621:340–51.
Gunier RB, Arora M, Jerrett M, Bradman A, Harley KG, Mora AM, et al. Manganese in teeth and neurodevelopment in young Mexican-American children. Environ Res. 2015;142:688–95.
Lin CC, Chen YC, Su FC, Lin CM, Liao HF, Hwang YH, et al. In utero exposure to environmental lead and manganese and neurodevelopment at 2 years of age. Environ Res. 2013;123:52–7.
Mora AM, Cordoba L, Cano JC, Hernandez-Bonilla D, Pardo L, Schnaas L, et al. Prenatal mancozeb exposure, excess manganese, and neurodevelopment at 1 year of age in the infants' environmental health (ISA) study. Environ Health Perspect. 2018;126(5):057007.
Takser L, Mergler D, Hellier G, Sahuquillo J, Huel G. Manganese, monoamine metabolite levels at birth, and child psychomotor development. Neurotoxicology. 2003;24(4–5):667–74.
Yu XD, Zhang J, Yan CH, Shen XM. Prenatal exposure to manganese at environment relevant level and neonatal neurobehavioral development. Environ Res. 2014;133:232–8.
Yu X, Chen L, Wang C, Yang X, Gao Y, Tian Y. The role of cord blood BDNF in infant cognitive impairment induced by low-level prenatal manganese exposure: LW birth cohort, China. Chemosphere. 2016;163:446–51.
Claus Henn B, Austin C, Coull BA, Schnaas L, Gennings C, Horton MK, et al. Uncovering neurodevelopmental windows of susceptibility to manganese exposure using dentine microspatial analyses. Environ Res. 2018;161:588–98.
Mora AM, Arora M, Harley KG, Kogut K, Parra K, Hernandez-Bonilla D, et al. Prenatal and postnatal manganese teeth levels and neurodevelopment at 7, 9, and 10.5 years in the CHAMACOS cohort. Environ Int. 2015;84:39–54.
Zhou T, Guo J, Zhang J, Xiao H, Qi X, Wu C, et al. Sex-specific differences in cognitive abilities associated with childhood cadmium and manganese exposures in school-age children: a prospective cohort study. Biol Trace Elem Res. 2019;193(1):89–99.
Al-Saleh I, Al-Mohawes S, Al-Rouqi R, Elkhatib R. Selenium status in lactating mothers-infants and its potential protective role against the neurotoxicity of methylmercury, lead, manganese, and DDT. Environ Res. 2019;176:108562.
Rink SM, Ardoino G, Queirolo EI, Cicariello D, Manay N, Kordas K. Associations between hair manganese levels and cognitive, language, and motor development in preschool children from Montevideo, Uruguay. Arch Environ Occup Health. 2014;69(1):46–54.
Bauer JA, Claus Henn B, Austin C, Zoni S, Fedrighi C, Cagna G, et al. Manganese in teeth and neurobehavior: sex-specific windows of susceptibility. Environ Int. 2017;108:299–308.
Betancourt O, Tapia M, Mendez I. Decline of general intelligence in children exposed to manganese from mining contamination in puyango river basin, southern Ecuador. Ecohealth. 2015;12(3):453–60.
Bhang SY, Cho SC, Kim JW, Hong YC, Shin MS, Yoo HJ, et al. Relationship between blood manganese levels and children's attention, cognition, behavior, and academic performance--a nationwide cross-sectional study. Environ Res. 2013;126:9–16.
Bouchard M, Laforest F, Vandelac L, Bellinger D, Mergler D. Hair manganese and hyperactive behaviors: pilot study of school-age children exposed through tap water. Environ Health Perspect. 2007;115(1):122–7.
Carvalho CF, Oulhote Y, Martorelli M, Carvalho CO, Menezes-Filho JA, Argollo N, et al. Environmental manganese exposure and associations with memory, executive functions, and hyperactivity in Brazilian children. Neurotoxicology. 2018;69:253–9.
Chan TJ, Gutierrez C, Ogunseitan OA. Metallic burden of deciduous teeth and childhood behavioral deficits. Int J Environ Res Public Health. 2015;12(6):6771–87.
Chiu YM, Claus Henn B, Hsu HL, Pendo MP, Coull BA, Austin C, et al. Sex differences in sensitivity to prenatal and early childhood manganese exposure on neuromotor function in adolescents. Environ Res. 2017;159:458–65.
do Nascimento SN, Barth A, Goethel G, Baierle M, Charao MF, Brucker N, et al. Cognitive deficits and ALA-D-inhibition in children exposed to multiple metals. Environ Res. 2015;136:387–95.
Ericson JE, Crinella FM, Clarke-Stewart KA, Allhusen VD, Chan T, Robertson RT. Prenatal manganese levels linked to childhood behavioral disinhibition. Neurotoxicol Teratol. 2007;29(2):181–7.
Frndak S, Barg G, Canfield RL, Quierolo EI, Manay N, Kordas K. Latent subgroups of cognitive performance in lead- and manganese-exposed Uruguayan children: examining behavioral signatures. Neurotoxicology. 2019;73:188–98.
Haynes EN, Sucharew H, Kuhnell P, Alden J, Barnas M, Wright RO, et al. Manganese exposure and neurocognitive outcomes in rural school-age children: the communities actively researching exposure study (Ohio, USA). Environ Health Perspect. 2015;123(10):1066–71.
Hernandez-Bonilla D, Schilmann A, Montes S, Rodriguez-Agudelo Y, Rodriguez-Dozal S, Solis-Vivanco R, et al. Environmental exposure to manganese and motor function of children in Mexico. Neurotoxicology. 2011;32(5):615–21.
Hernandez-Bonilla D, Escamilla-Nunez C, Mergler D, Rodriguez-Dozal S, Cortez-Lugo M, Montes S, et al. Effects of manganese exposure on visuoperception and visual memory in schoolchildren. Neurotoxicology. 2016;57:230–40.
Horton MK, Hsu L, Claus Henn B, Margolis A, Austin C, Svensson K, et al. Dentine biomarkers of prenatal and early childhood exposure to manganese, zinc and lead and childhood behavior. Environ Int. 2018;121(Pt 1):148–58.
Khan K, Factor-Litvak P, Wasserman GA, Liu X, Ahmed E, Parvez F, et al. Manganese exposure from drinking water and children's classroom behavior in Bangladesh. Environ Health Perspect. 2011;119(10):1501–6.
Kicinski M, Vrijens J, Vermier G, Hond ED, Schoeters G, Nelen V, et al. Neurobehavioral function and low-level metal exposure in adolescents. Int J Hyg Environ Health. 2015;218(1):139–46.
Kim Y, Kim BN, Hong YC, Shin MS, Yoo HJ, Kim JW, et al. Co-exposure to environmental lead and manganese affects the intelligence of school-aged children. Neurotoxicology. 2009;30(4):564–71.
Lucchini RG, Zoni S, Guazzetti S, Bontempi E, Micheletti S, Broberg K, et al. Inverse association of intellectual function with very low blood lead but not with manganese exposure in Italian adolescents. Environ Res. 2012a;118:65–71.
Lucchini RG, Guazzetti S, Zoni S, Donna F, Peter S, Zacco A, et al. Tremor, olfactory and motor changes in Italian adolescents exposed to historical ferro-manganese emission. Neurotoxicology. 2012b;33(4):687–96.
Lucchini RG, Guazzetti S, Renzetti S. Neurocognitive impact of metal exposure and social stressors among schoolchildren in Taranto, Italy. Environ Health. 2019;18(1):67.
Menezes-Filho JA, de Carvalho-Vivas CF, Viana GF, Ferreira JR, Nunes LS, Mergler D, et al. Elevated manganese exposure and school-aged children's behavior: a gender-stratified analysis. Neurotoxicology. 2014;45:293–300.
Nascimento S, Baierle M, Goethel G, Barth A, Brucker N, Charao M, et al. Associations among environmental exposure to manganese, neuropsychological performance, oxidative damage and kidney biomarkers in children. Environ Res. 2016;147:32–43.
Parvez F, Wasserman GA, Factor-Litvak P, Liu X, Slavkovich V, Siddique AB, et al. Arsenic exposure and motor function among children in Bangladesh. Environ Health Perspect. 2011;119(11):1665–70.
Rugless F, Bhattacharya A, Succop P, Dietrich KN, Cox C, Alden J, et al. Childhood exposure to manganese and postural instability in children living near a ferromanganese refinery in southeastern Ohio. Neurotoxicol Teratol. 2014;41:71–9.
Torrente M, Colomina MT, Domingo JL. Metal concentrations in hair and cognitive assessment in an adolescent population. Biol Trace Elem Res. 2005;104(3):215–21.
Torres-Agustin R, Rodriguez-Agudelo Y, Schilmann A, Solis-Vivanco R, Montes S, Riojas-Rodriguez H, et al. Effect of environmental manganese exposure on verbal learning and memory in Mexican children. Environ Res. 2013;121:39–44.
Wright RO, Amarasiriwardena C, Woolf AD, Jim R, Bellinger DC. Neuropsychological correlates of hair arsenic, manganese, and cadmium levels in school-age children residing near a hazardous waste site. Neurotoxicology. 2006;27(2):210–6.
Rahman SM, Kippler M, Tofail F, Bolte S, Hamadani JD, Vahter M. Manganese in drinking water and cognitive abilities and behavior at 10 years of age: a prospective cohort study. Environ Health Perspect. 2017;125(5):057003.
Rodrigues EG, Bellinger DC, Valeri L, Hasan MO, Quamruzzaman Q, Golam M, et al. Neurodevelopmental outcomes among 2- to 3-year-old children in Bangladesh with elevated blood lead and exposure to arsenic and manganese in drinking water. Environ Health. 2016;15:44.
Khan K, Wasserman GA, Liu X, Ahmed E, Parvez F, Slavkovich V, et al. Manganese exposure from drinking water and children's academic achievement. Neurotoxicology. 2012;33(1):91–7.
Wasserman GA, Liu X, Parvez F, Factor-Litvak P, Ahsan H, Levy D, et al. Arsenic and manganese exposure and children's intellectual function. Neurotoxicology. 2011;32(4):450–7.
Tuschl K, Meyer E, Valdivia LE, Zhao N, Dadswell C, Abdul-Sada A, et al. Mutations in SLC39A14 disrupt manganese homeostasis and cause childhood-onset parkinsonism-dystonia. Nat Commun. 2016;7:11601.
Xin Y, Gao H, Wang J, Qiang Y, Imam MU, Li Y, et al. Manganese transporter slc39a14 deficiency revealed its key role in maintaining manganese homeostasis in mice. Cell Discov. 2017;3:17025.
Xia Z, Wei J, Li Y, Wang J, Li W, Wang K, et al. Zebrafish slc30a10 deficiency revealed a novel compensatory mechanism of Atp2c1 in maintaining manganese homeostasis. PLoS Genet. 2017;13(7):e1006892.
Rogers JT, Xia N, Wong A, Bakshi R, Cahill CM. Targeting the iron-response elements of the mRNAs for the Alzheimer's amyloid precursor protein and ferritin to treat acute lead and manganese neurotoxicity. Int J Mol Sci. 2019;20(4):994.
Venkataramani V, Doeppner TR, Willkommen D, Cahill CM, Xin Y, Ye G, et al. Manganese causes neurotoxic iron accumulation via translational repression of amyloid precursor protein and H-ferritin. J Neurochemistry. 2018;147(6):831–48.
Aprea MC. Environmental and biological monitoring in the estimation of absorbed doses of pesticides. Toxicol Lett. 2012;210(2):110–8.
Mahoney JP, Small WJ. Studies on manganese. 3. The biological half-life of radiomanganese in man and factors which affect this half-life. J Clin Invest. 1968;47(3):643–53.
Kordas K, Queirolo EI, Ettinger AS, Wright RO, Stoltzfus RJ. Prevalence and predictors of exposure to multiple metals in preschool children from Montevideo, Uruguay. Sci Total Environ. 2010;408(20):4488–94.
Robbins CR. Chemical and physical behavior of human hair. 4th ed. New York: Springer-Verlag; 2002.
Arora M, Hare D, Austin C, Smith DR, Doble P. Spatial distribution of manganese in enamel and coronal dentine of human primary teeth. Sci Total Environ. 2011;409(7):1315–9.
Hillson S. Dental anthropology. New York: Cambridge University Press; 1996.
Liang G, Zhang L, Ma S, Lv Y, Qin H, Huang X, et al. Manganese accumulation in hair and teeth as a biomarker of manganese exposure and neurotoxicity in rats. Environ Sci Pollut Res Int. 2016;23(12):12265–71.
Coetzee DJ, McGovern PM, Rao R, Harnack LJ, Georgieff MK, Stepanov I. Measuring the impact of manganese exposure on children's neurodevelopment: advances and research gaps in biomarker-based approaches. Environ Health. 2016;15(1):91.
Eastman RR, Jursa TP, Benedetti C, Lucchini RG, Smith DR. Hair as a biomarker of environmental manganese exposure. Environ Sci Technol. 2013;47(3):1629–37.
Ward EJ, Edmondson DA, Nour MM, Snyder S, Rosenthal FS, Dydak U. Toenail manganese: a sensitive and specific biomarker of exposure to manganese in career welders. Ann Work Expo Health. 2017;62(1):101–11.
Smith D, Gwiazda R, Bowler R, Roels H, Park R, Taicher C, et al. Biomarkers of Mn exposure in humans. Am J Ind Med. 2007;50(11):801–11.
Niedzielska K, Struzak-Wysokińska M, Wujec Z. Analysis of correlations between the content of various elements in hard tissues of milk teeth with and without caries. Czas Stomatol. 1990;43(6):316–22.
Zheng W, Fu SX, Dydak U, Cowan DM. Biomarkers of manganese intoxication. Neurotoxicology. 2011;32(1):1–8.
Lucas EL, Bertrand P, Guazzetti S, Donna F, Peli M, Jursa TP, et al. Impact of ferromanganese alloy plants on household dust manganese levels: implications for childhood exposure. Environ Res. 2015;138:279–90.
Ntihabose R, Surette C, Foucher D, Clarisse O, Bouchard MF. Assessment of saliva, hair and toenails as biomarkers of low level exposure to manganese from drinking water in children. Neurotoxicology. 2018;64:126–33.
Wang D, Du X, Zheng W. Alteration of saliva and serum concentrations of manganese, copper, zinc, cadmium and lead among career welders. Toxicol Lett. 2008;176(1):40–7.
Laohaudomchok W, Lin X, Herrick RF, Fang SC, Cavallari JM, Christiani DC, et al. Toenail, blood, and urine as biomarkers of manganese exposure. J Occup Environ Med. 2011;53(5):506–10.
Yaemsiri S, Hou N, Slining MM, He K. Growth rate of human fingernails and toenails in healthy American young adults. J Eur Acad Dermatol Venereol. 2010;24(4):420–3.
He K. Trace elements in nails as biomarkers in clinical research. Eur J Clin Investig. 2011;41(1):98–102.
Rodrigues EG, Kile M, Dobson C, Amarasiriwardena C, Quamruzzaman Q, Rahman M, et al. Maternal-infant biomarkers of prenatal exposure to arsenic and manganese. J Expo Sci Environ Epidemiol. 2015;25(6):639–48.
Arora M, Bradman A, Austin C, Vedar M, Holland N, Eskenazi B, et al. Determining fetal manganese exposure from mantle dentine of deciduous teeth. Environ Sci Technol. 2012;46(9):5118–25.
Yoon M, Nong A, Clewell HJ 3rd, Taylor MD, Dorman DC, Andersen ME. Evaluating placental transfer and tissue concentrations of manganese in the pregnant rat and fetuses after inhalation exposures with a PBPK model. Toxicol Sci. 2009;112(1):44–58.
Sky-Peck HH. Distribution of trace elements in human hair. Clin Physiol Biochem. 1990;8(2):70–80.
Wahlberg K, Arora M, Curtin A, Curtin P, Wright RO, Smith DR, et al. Polymorphisms in manganese transporters show developmental stage and sex specific associations with manganese concentrations in primary teeth. Neurotoxicology. 2018;64:103–9.
Kaczkurkin AN, Raznahan A, Satterthwaite TD. Sex differences in the developing brain: insights from multimodal neuroimaging. Neuropsychopharmacology. 2019;44(1):71–85.
Ngun TC, Ghahramani N, Sánchez FJ, Bocklandt S, Vilain E. The genetics of sex differences in brain and behavior. Front Neuroendocrinol. 2011;32(2):227–46.
Madison JL, Wegrzynowicz M, Aschner M, Bowman AB. Gender and manganese exposure interactions on mouse striatal neuron morphology. Neurotoxicology. 2011;32(6):896–906.
We thank the members of the Wang and Min Laboratories for helpful discussions, especially Junhao Wang, Hao Wang, Xuexian Fang and Peng An.
This research was funded by the National Natural Science Foundation of China (31530034 and 31930057 to F.W.; 31570791 to J.M.), the National Key Research and Development Program of China (2018YFA0507802 to F.W.; 2018YFA0507801 to J.M.), and the Michael J. Fox Foundation (J.T.R.).
Weiwei Liu and Yongjuan Xin contributed equally to this work.
Department of Nutrition, Precision Nutrition Innovation Center, School of Public Health, Zhengzhou University, Zhengzhou, China
Weiwei Liu, Yongjuan Xin, Qianwen Li, Yanna Shang, Zhiguang Ping & Fudi Wang
The First Affiliated Hospital, School of Public Health, Institute of Translational Medicine, Zhejiang University School of Medicine, Hangzhou, China
Junxia Min & Fudi Wang
Neurochemistry Laboratory, Department of Psychiatry-Neuroscience, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
Catherine M. Cahill & Jack T. Rogers
W.L., Y.X., F.W. and J.T.R. designed the study; W.L., Y.X. and Q.L. identified the studies for inclusion, extracted the data and assessed the quality of the included studies; W.L. and Q.L. conducted the meta-analysis; W.L. and Y.X. wrote the first draft of the manuscript; Y.S. and Z.P. provided critical input for the manuscript; F.W., J.T.R., J.M. and C.M.C. critically revised the manuscript to improve and optimally present the key intellectual content, and they also supervised this study. All authors contributed significantly, and all authors are in agreement with the content of the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Jack T. Rogers or Fudi Wang.
No human subjects, human material, or human data were involved in this research, which is based on literature review.
PRISMA 2009 Checklist
Evaluation of methodological quality of articles by using checklist in the Strengthening the Reporting of Observational Studies in Epidemiology Statement
Characteristics of the articles included in the meta-analysis
Sensitivity analysis was performed to evaluate the stability of the result
Meta-analysis of studies reporting the effect of a 10-fold increase in drinking water manganese on intellectual quotient (IQ)
Meta-analysis of studies reporting the effect of an e-fold increase in blood manganese on intellectual quotient (IQ)
Correlations between manganese in biomarkers and environmental sample
Meta-analysis of studies that stratified by sex reporting the effect of a 10-fold increase in hair manganese on intellectual quotient (IQ)
The reference range or cut-off point used in the reviewed articles
Liu, W., Xin, Y., Li, Q. et al. Biomarkers of environmental manganese exposure and associations with childhood neurodevelopment: a systematic review and meta-analysis. Environ Health 19, 104 (2020). https://doi.org/10.1186/s12940-020-00659-x
Accepted: 22 September 2020
Manganese exposure | CommonCrawl |
\begin{document}
\title{Asymptotic expansions of Jacobi polynomials and of the nodes and weights of Gauss-Jacobi quadrature for large degree and parameters in terms of elementary functions }
\author{ A. Gil\\ Departamento de Matem\'atica Aplicada y CC. de la Computaci\'on.\\ ETSI Caminos. Universidad de Cantabria. 39005-Santander, Spain.\\
\and J. Segura\\
Departamento de Matem\'aticas, Estadistica y
Computaci\'on,\\
Universidad de Cantabria, 39005 Santander, Spain.\\ \and N. M. Temme\\ IAA, 1825 BD 25, Alkmaar, The Netherlands.\footnote{Former address: Centrum Wiskunde \& Informatica (CWI),
Science Park 123, 1098 XG Amsterdam, The Netherlands}\\ }
\maketitle \begin{abstract} Asymptotic approximations of Jacobi polynomials are given in terms of elementary functions for large degree $n$ and parameters $\alpha$ and $\beta$. From these new results, asymptotic expansions of the zeros are derived and methods are given to obtain the coefficients in the expansions. These approximations can be used as initial values in iterative methods for computing the nodes of Gauss--Jacobi quadrature for large degree and parameters. The performance of
the asymptotic approximations for computing the nodes and weights of these Gaussian quadratures is illustrated with numerical examples. \end{abstract}
\section{Introduction}\label{sec:Intro} This paper is a further exploration in our research on Gauss quadrature for the classical orthogonal polynomials; earlier publications are \cite{Gil:2018:FGH}, \cite{Gil:2018:GHL}, \cite{Gil:2018:AJP}, \cite{Gil:2019:NIG}. Other recent relevant papers on this topic are \cite{Bogaert:2014:IFC}, \cite{Hale:2013:FAC}, \cite{Town:2016:IMA}.
When we assume that the degree $n$ and the two parameters $\alpha$ and $\beta$ of the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$ are large, and we consider the variable $x$ as a parameter that causes nonuniform behavior of the polynomial, it can be expected that, for a detailed and optimal description of the asymptotic approximation, we need a function of three variables. Candidates for this are the Gegenbauer and the Laguerre polynomial. The Gegenbauer polynomial can be used when the ratio $\alpha/\beta$ does not tend to zero or to infinity. When it does, the Laguerre polynomial is the best option.
It is possible to transform an integral of $P_n^{(\alpha,\beta)}(x)$ into an integral resembling one of the Gegenbauer or the Laguerre polynomial (and similar when we are working with differential equations). From a theoretical point of view this may be of interest, however, for practical purposes, when using the results for Gauss quadrature, the transformations and the coefficients in the expansions become rather complicated. In addition, computing the approximants, that is, large degree polynomials with large additional parameter and a variable in domains where nonuniform behavior of these polynomials may happen, gives an extra nontrivial complication.
Even when we use the Bessel functions or Hermite polynomials as approximants, these complications are still quite relevant. For this reason we consider in this paper expansions in terms of elementary functions, and we will see that to evaluate a certain number of coefficients already gives quite complicated expressions.
For large values of $\beta$ with fixed degree $n$ we have quite simple results derived in \cite{Gil:2018:AJP}, which paper is inspired by \cite{Dimitrov:2016:ABJ}. Large-degree results valid near $x=1$ are given in \cite[\S28.4]{Temme:2015:AMI}, and for the case that $\beta$ is large as well we refer to \cite[\S28.4.1]{Temme:2015:AMI}.
\section{Several asymptotic phenomena}\label{sec:phen}
To describe the behavior of the Jacobi polynomial for large degree and parameters $\alpha$ and $\beta$, with $x\in[-1,1]$, it is instructive to consider the differential equation of the function \begin{equation}\label{eq:Intro01}
W(x)=(1-x)^{\frac12(\alpha+1)}(1+x)^{\frac12(\beta+1)}P_n^{(\alpha,\beta)}( x). \end{equation} By using the Liouville-Green transformations as described in \cite{Olver:1997:ASF} uniform expansions can be derived for all combinations of the parameters $n$, $\alpha$, $\beta$.
Let $\sigma$, $\tau$ and $\kappa$ be defined by \begin{equation}\label{eq:Intro02} \sigma=\frac{\alpha+\beta}{2\kappa},\quad \tau=\frac{\alpha-\beta}{2\kappa},\quad \kappa=n+\tfrac12(\alpha+\beta+1). \end{equation} Then $W(x)$ satisfies the differential equation \begin{equation}\label{eq:Intro03} \frac{d^2}{dx^2}W(x)=-\frac{\kappa^2(x_+-x)(x-x_-) +\frac14(x^2+3)}{(1-x^2)^2} W(x), \end{equation} where \begin{equation}\label{eq:Intro04} x_\pm=-\sigma\tau\pm\sqrt{(1-\sigma^2)(1-\tau^2)}; \end{equation} $x_-$ and $x_+$ are called turning points. We have $-1\le x_-\le x_+\le 1$ when $\alpha$ and $\beta$ are positive. When $\sigma^2+\tau^2=1$, one of the turning points $x_\pm$ is zero.
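As a simple numerical illustration, the quantities in \eqref{eq:Intro02} and \eqref{eq:Intro04} can be evaluated with a few lines of Python; the function name is only illustrative, and the example values are those used later in Section~\ref{sec:Jacnabelfun}.
\begin{verbatim}
# Illustrative sketch: scaled parameters and turning points of
# (eq:Intro02) and (eq:Intro04).
import math

def jacobi_parameters(n, alpha, beta):
    kappa = n + 0.5*(alpha + beta + 1.0)
    sigma = (alpha + beta)/(2.0*kappa)
    tau   = (alpha - beta)/(2.0*kappa)
    s = math.sqrt((1.0 - sigma**2)*(1.0 - tau**2))
    return kappa, sigma, tau, -sigma*tau - s, -sigma*tau + s

# alpha = 90, beta = 75, n = 125 should give kappa = 208, sigma = 165/416,
# tau = 15/416, x_- ~ -0.931, x_+ ~ 0.903 (the values quoted in Section 4).
print(jacobi_parameters(125, 90.0, 75.0))
\end{verbatim}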
When we skip the term $\frac14(x^2+3)$ of the denominator in \eqref{eq:Intro03}, the differential equation becomes one for the Whittaker or Kummer functions, with special case the Laguerre polynomial, and when we take $\alpha=\beta$ the equation becomes a differential equation for the Gegenbauer polynomial.
When $\kappa$ is large we can make a few observations.
\begin{enumerate} \item If $n\gg \alpha+\beta$, then $\sigma\to0$ and $\tau\to0$. Hence, $x_-\to-1$ and $x_+\to1$. This is the standard case for large degree: the zeros are spread over the complete interval $(-1,1)$. \item When $\alpha$ and/or $\beta$ become large as well, the zeros are inside the interval $(x_-,x_+)$. When, in addition, $\alpha/\beta\to0$, the zeros shift to the right; when $\beta/\alpha\to0$, they shift to the left. See also the limit in \eqref{eq:Intro09}. The zeros become all positive when $x_-\ge0$. In that case $\sigma^2+\tau^2\ge1$.
\item When $x$ is in a closed neighborhood around $x_-$ that does not contain $-1$ and $x_+$, an expansion in terms of Airy functions can be given. Similar for $x$ in a closed neighborhood around $x_+$ that does not contain $x_-$ and $1$. The points $x_\pm$ are called turning points of the equation in \eqref{eq:Intro03}. \item When $-1\le x\le x_-(1+a)<x_+$, with $a$ a fixed positive small number, an expansion in terms of Bessel functions can be given. Similar for $x_-<x_+(1-a)\le x\le 1$. The latter case corresponds to the limit \begin{equation}\label{eq:Intro05} \lim_{n\to\infty} n^{-\alpha}P_n^{(\alpha,\beta)}\left(1-\frac{x^2}{2n^2}\right)=\left(\frac{2}{x}\right)^\alpha J_\alpha(x). \end{equation} Also, $\sqrt{x}J_\alpha\left(\alpha\sqrt{x}\right)$ satisfies the differential equation \begin{equation}\label{eq:Intro06} \frac{d^2}{dx^2}w(x)=\left(\alpha^2 \frac{1-x}{4x^2}-\frac{1}{4x^2}\right)w(x), \end{equation} in which $x=1$ is a turning point when $\alpha $ is large.
\item If $ \alpha+\beta\gg n$, then $\sigma\to1$ and the turning points $x_-$ and $x_+$ coalesce at~$-\tau$. When $\alpha$ and $\beta$ are of the same order, the point $-\tau$ lies properly inside $(-1,1)$, and this case has been studied in \cite{Olver:1980:UAE} to obtain approximations of Whittaker functions in terms of parabolic cylinder functions. In the present case the parameters are such that the parabolic cylinder functions become Hermite polynomials. This corresponds to the limit (see \cite{Lopez:1999:AOP}) \begin{equation}\label{eq:Intro07} \lim_{\alpha,\beta\to\infty} \left(\frac{8}{\alpha+\beta}\right)^{n/2}\, P_n^{(\alpha,\beta)}\left(x\sqrt{{\frac{2}{\alpha+\beta}}}- \frac{\alpha-\beta}{\alpha+\beta}\right)=\frac1{n!}\,H_n(x), \end{equation} derived under the conditions \begin{equation}\label{eq:Intro08} x={\cal O}(1),\quad n={\cal O}(1),\quad \frac{\alpha-\beta}{\alpha+\beta}=o(1),\quad \alpha, \beta\to\infty. \end{equation}
\item If $\alpha\gg\beta$, then $\tau\to1$, and $x_-$ and $x_+$ coalesce at $-\sigma$; if $\beta/\kappa=o(1)$, then the collision will happen at $-1$. Approximations in terms of Laguerre polynomials can be given. This corresponds to the limit \begin{equation}\label{eq:Intro09} \lim_{\alpha\to\infty}P^{(\alpha,\beta)}_{n}\bigl((2x/\alpha)-1\bigr)=(-1)^{n }L^{(\beta)}_{n}(x). \end{equation} Similar for $\beta\gg\alpha$, in which case $L^{(\alpha)}_{n}(x)$ becomes the approximant.
\end{enumerate}
As explained earlier, we consider in this paper the second case: new expansions of $P^{(\alpha,\beta)}_{n}(x)$, and its zeros and weights in terms of elementary functions. Preliminary results regarding the role of Gegenbauer and Laguerre polynomials as approximants can be found in \cite{Temme:1990:PAE}.
\section{An integral representation and its saddle points}\label{sec:Jacnabelfunint} The Rodrigues formula for the Jacobi polynomials reads (see \cite[\S18.15(ii)]{Koornwinder:2010:OPS}) \begin{equation}\label{eq:int01} P_n^{(\alpha,\beta)}(x)=\frac{(-1)^n}{2^n n!\,w(x)}\frac{d^n}{dx^n}\left(w(x)(1-x^2)^n\right), \end{equation} where \begin{equation}\label{eq:int02} w(x)=(1-x)^\alpha(1+x)^\beta. \end{equation} This gives the Cauchy integral representation \begin{equation}\label{eq:int03} P_n^{(\alpha,\beta)}(x)=\frac{(-1)^n}{2^n\,w(x)}\frac{1}{2\pi i}\int_{{\cal C}} \frac{w(z)(1-z^2)^n}{(z-x)^{n+1}}\,dz, \quad x\in(-1,1), \end{equation} where the contour ${{\cal C}}$ is a circle around the point $z=x$ with radius small enough to have the points $\pm1$ outside the circle.
We write this in the form\footnote{The multi-valued functions of the integrand are discussed in Remark~\ref{rem:rem01}.} \begin{equation}\label{eq:int04} P_n^{(\alpha,\beta)}(x)=\frac{-1}{2^n\,w(x)}\frac{1}{2\pi i}\int_{{\cal C}} e^{-\kappa \phi(z)}\,\frac{dz}{\sqrt{(1-z^2)(x-z)}}, \end{equation} where \begin{equation}\label{eq:int05} \kappa=n+\tfrac12(\alpha+\beta+1). \end{equation} and \begin{equation}\label{eq:int06} \phi(z)=-\frac{n+\alpha+\frac12}{\kappa}\ln(1-z)-\frac{n+\beta+\frac12}{\kappa}\ln(1+z)+\frac{n+\frac12}{\kappa}\ln(x-z). \end{equation} We introduce the notation \begin{equation}\label{eq:int07} \sigma=\frac{\alpha+\beta}{2\kappa},\quad \tau=\frac{\alpha-\beta}{2\kappa}, \end{equation} and it follows that \begin{equation}\label{eq:int08} \phi(z)=-(1+\tau)\ln(1-z)-(1-\tau)\ln(1+z)+(1-\sigma)\ln(x-z). \end{equation} The saddle points $z_{\pm}$ follow from the zeros of \begin{equation}\label{eq:int09} \phi^\prime(z)= - \frac{(1+\sigma)z^2+2(\tau-x)z+1-\sigma-2\tau x}{(1-z^2)(x-z)}, \end{equation} and are given by \begin{equation}\label{eq:int10} \begin{array}{@{}r@{\;}c@{\;}l@{}} z_{\pm}&=&\displaystyle{\frac{x-\tau\pm iU(x)}{1+\sigma},}\\[8pt] U(x)&=&\sqrt{1-2\sigma\tau x-\tau^2-\sigma^2-x^2}=\sqrt{(x_+-x)(x-x_-)}, \end{array} \end{equation} where (see also \eqref{eq:Intro04}) \begin{equation}\label{eq:int11} x_{\pm}=-\sigma\tau\pm\sqrt{(1-\sigma^2)(1-\tau^2)}. \end{equation} In this representation we assume that $x_-\le x \le x_+$, in which $x$-domain the zeros of the Jacobi polynomial are located.
\begin{remark}\label{rem:rem01} The starting integrand in \eqref{eq:int03} has a pole at $z=x$, while the one of \eqref{eq:int04} shows an algebraic singularity at $z=x$ and $\phi(z)$ defined in \eqref{eq:int06} has a logarithmic singularity at this point. To handle this from the viewpoint of multi-valued functions, we can introduce a branch cut for the functions involved from $z=x$ to the left, assuming that the phase of $z-x$ is zero when $z>x$, equals $-\pi$ when $z$ approaches $-1$ on the lower part of the saddle point contour of the integral in \eqref{eq:int04}, and $+\pi$ on the upper side. Because the saddle points $z_\pm$ stay off the interval $(-1,1)$, we do not need to consider function values on the branch cuts for the asymptotic analysis. \eoremark \end{remark}
\section{Deriving the asymptotic expansion}\label{sec:Jacnabelfun}
We derive an expansion in terms of elementary functions which is valid for $x\in[x_-(1+\delta),x_+(1-\delta)]$, where $x_\pm$ are the turning points defined in \eqref{eq:int11} and $\delta$ is a fixed positive small number. Also, we assume that $\sigma\in[0,\sigma_0]$ and $\tau\in[-\tau_0,\tau_0]$, where $\sigma_0$ and $\tau_0$ are fixed positive numbers smaller than $1$. The case $\sigma\to1$ is explained in Case~5 of Section~\ref{sec:phen}. A similar phenomenon occurs when $\tau\to\pm1$.
First we consider contributions from the saddle point $z_+$ using the transformation \begin{equation}\label{eq:Jacasymp01} \phi(z)-\phi(z_+)=\tfrac12w^2 \end{equation} for the contour from $z=+1$ to $z=-1$ through $z_+$, with $\phi(z)$ and $z_+$ given in \eqref{eq:int08} and \eqref{eq:int10}. This transforms the part of the integral in \eqref{eq:int04} that runs with $\Im z\ge0$ into \begin{equation}\label{eq:Jacasymp02} P^+=\frac{e^{-\kappa\phi(z_+)}}{2^n\,w(x)}\frac{1}{2\pi i}\int_{-\infty}^\infty e^{-\frac12\kappa w^2}f_+(w)\,dw, \end{equation} where \begin{equation}\label{eq:Jacasymp03} f_+(w)= \frac{1}{\sqrt{(1-z^2)(x-z)}}\frac{dz}{dw},\quad \frac{dz}{dw}=\frac{w}{\phi^\prime(z)}. \end{equation} We expand $\displaystyle{f_+(w)=\sum_{j=0}^\infty f_j^+w^j}$, where \begin{equation}\label{eq:Jacasymp04} f_0^+= \frac{1}{\sqrt{(1-z_+^2)(x-z_+)\phi^{\prime\prime}(z_+)}}=\Frac{e^{\frac14\pi i}}{\sqrt{2U(x)}}, \end{equation} and $U(x)$ is defined in \eqref{eq:int10}. Because the contribution from the saddle point $z_-$ is the complex conjugate of that from $z_+$\footnote{We assume that $x\in(x_-,x_+)$ and that $\alpha$ and $\beta$ are positive.}, we take twice the real part of the contribution from $z_+$ and obtain the expansion \begin{equation}\label{eq:Jacasymp05} P_n^{(\alpha,\beta)}(x)\sim\Re\frac{e^{-\kappa\phi(z_+)-\frac14\pi i}}{2^{n}\,w(x)\sqrt{\pi \kappa U(x)}}\,\sum_{j=0}^\infty \frac{c_{j}^+}{\kappa^j}, \quad c_{j}=2^j\left(\tfrac12\right)_j \frac{f_{2j}^+}{f_0^+}. \end{equation}
Evaluating $\phi(z_+)$ we find \begin{equation}\label{eq:Jacasymp06} \begin{array}{@{}r@{\;}c@{\;}l@{}} \phi(z_+)&=&-\ln 2+\psi+\xi+i\chi(x),\\[8pt] \psi&=&-\frac12(1-\tau)\ln(1-\tau)-\frac12(1+\tau)\ln(1+\tau)\ +\\[8pt] &&\frac12(1+\sigma)\ln(1+\sigma)+\frac12(1-\sigma)\ln(1-\sigma),\\[8pt] \xi(x)&=&-\frac12(\sigma+\tau)\ln(1-x)-\frac12(\sigma-\tau)\ln(1+x),\\[8pt] \chi(x)&=&\displaystyle{(\tau+1)\arctan\frac{U(x)}{1-x+\sigma+\tau}+(\tau-1)\arctan\frac{U(x)}{1+x+\sigma-\tau}\ +}\\[8pt]
&&(1-\sigma)\,{\rm{atan}}2(-U(x),\tau+x\sigma). \end{array} \end{equation}
\begin{figure}
\caption{ The quantity $\chi(x)$ defined in \eqref{eq:Jacasymp06} for $x\in(x_-,x_+)$; $\alpha=90$, $\beta=75$, $n=125$. For these values, $\kappa=208$, $\sigma=\frac{165}{416}$, $\tau=\frac{15}{416}$, $x_-=-0.931$, $x_+=0.903$.}
\label{fig:fig01}
\end{figure}
In Figure~\ref{fig:fig01} we show a graph of $\chi(x)$ on $(x_-,x_+)$ for $\alpha=90$, $\beta=75$, $n=125$. For these values, $\kappa=208$, $\sigma=\frac{165}{416}$, $\tau=\frac{15}{416}$, $x_-=-0.931$, $x_+=0.903$. At the left endpoint we have $\chi(x_-)=-(1-\sigma)\pi=-1.896$.
\begin{remark}\label{rem:rem02} The denominators of the first and second arctan functions of $\chi(x)$ in \eqref{eq:Jacasymp06} are always positive on $(x_-,x_+)$; this follows easily from the relations in \eqref{eq:int07}. The function ${\rm{atan}}2(y,x)$ in the third term of $\chi(x)$ denotes the phase $\in(-\pi,\pi]$ of the complex number $x+iy$. Because $\tau+x\sigma$ may be negative on $(x_-,x_+)$ we cannot use the standard arctan function for that term. \eoremark \end{remark}
Observe that $e^{-\kappa\xi(x)}=\sqrt{w(x)}$, with $w(x)$ defined in \eqref{eq:int02}. To compute $x$ from $\chi(x)$, for example by using a Newton-procedure, it is convenient to know that \begin{equation}\label{eq:Jacasymp07} \frac{d\chi(x)}{dx}=\frac{U(x)} {\left(1-x^2\right)}. \end{equation}
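As a numerical illustration, the constant $\psi$ and the function $\chi(x)$ of \eqref{eq:Jacasymp06}, together with a Newton inversion of $\chi(x)=c$ based on \eqref{eq:Jacasymp07}, may be sketched in Python as follows; the function names and the tolerance are only illustrative.
\begin{verbatim}
# Illustrative sketch: psi and chi(x) of (eq:Jacasymp06), and Newton
# inversion of chi(x) = target using the derivative (eq:Jacasymp07).
import math

def psi_const(sigma, tau):
    return (-0.5*(1 - tau)*math.log(1 - tau) - 0.5*(1 + tau)*math.log(1 + tau)
            + 0.5*(1 + sigma)*math.log(1 + sigma)
            + 0.5*(1 - sigma)*math.log(1 - sigma))

def U(x, sigma, tau):
    return math.sqrt(1 - 2*sigma*tau*x - tau**2 - sigma**2 - x**2)

def chi(x, sigma, tau):
    u = U(x, sigma, tau)
    return ((tau + 1)*math.atan(u/(1 - x + sigma + tau))
            + (tau - 1)*math.atan(u/(1 + x + sigma - tau))
            + (1 - sigma)*math.atan2(-u, tau + x*sigma))

def invert_chi(target, x0, sigma, tau, tol=1e-14, maxit=50):
    # Newton iteration; a safeguarded step (e.g. a bisection fallback)
    # may be needed very close to the turning points x_- and x_+.
    x = x0
    for _ in range(maxit):
        step = (chi(x, sigma, tau) - target)*(1 - x*x)/U(x, sigma, tau)
        x -= step
        if abs(step) < tol:
            break
    return x
\end{verbatim}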
We return to the result in \eqref{eq:Jacasymp05} and split the coefficients of \eqref{eq:Jacasymp05} in real and imaginary parts. We write $c_j^+=p_j+iq_j$, and obtain \begin{equation}\label{eq:Jacasymp08} \begin{array}{@{}r@{\;}c@{\;}l@{}} P_n^{(\alpha,\beta)}(x)&=&\displaystyle{\frac{2^{\frac12(\alpha+\beta+1)}e^{-\kappa\psi}} {\sqrt{\pi \kappa w(x)U(x)}}W(x)},\\[8pt] W(x)&=&\displaystyle{\cos\left(\kappa\chi(x)+\tfrac14\pi\right)P(x)+\sin\left(\kappa\chi(x)+\tfrac14\pi\right)Q(x),} \end{array} \end{equation} with expansions \begin{equation}\label{eq:Jacasymp09} P(x)\sim \sum_{j=0}^\infty \frac{p_{j}}{\kappa^j},\quad Q(x)\sim \sum_{j=0}^\infty \frac{q_{j}}{\kappa^j}. \end{equation} Because $c_0^+=1$, we have $p_0=1$, $q_0=0$.
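A minimal sketch of the leading-order form of \eqref{eq:Jacasymp08}, obtained by taking $P(x)\approx1$ and $Q(x)\approx0$ and evaluated in logarithmic form to avoid overflow for large parameters, reads as follows; it assumes the helpers U, chi and psi_const of the previous sketch. For $\alpha=\beta=0$ it reduces to the classical Laplace-type approximation of the Legendre polynomials.
\begin{verbatim}
# Illustrative sketch: leading-order form of (eq:Jacasymp08) with
# P ~ 1 and Q ~ 0; uses U, chi, psi_const from the sketch above.
import math

def jacobi_leading_order(x, n, alpha, beta):
    kappa = n + 0.5*(alpha + beta + 1.0)
    sigma = (alpha + beta)/(2.0*kappa)
    tau   = (alpha - beta)/(2.0*kappa)
    u     = U(x, sigma, tau)
    logw  = alpha*math.log(1 - x) + beta*math.log(1 + x)
    logpre = (0.5*(alpha + beta + 1)*math.log(2.0)
              - kappa*psi_const(sigma, tau)
              - 0.5*(math.log(math.pi*kappa) + logw + math.log(u)))
    return math.exp(logpre)*math.cos(kappa*chi(x, sigma, tau) + 0.25*math.pi)
\end{verbatim}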
To evaluate the coefficients $f_{2j}^+$ of the expansion in \eqref{eq:Jacasymp05}, we need the coefficients $z_j^+$ of the expansion $z=z_++\sum_{j=1}^\infty z_j^+ w^j$ that follow from \eqref{eq:Jacasymp01}. The first values are \begin{equation}\label{eq:Jacasymp10} \begin{array}{@{}r@{\;}c@{\;}l@{}}
z_2^+&=&-\tfrac16 z_1^4\phi_3,\quad z_3^+=\displaystyle{\tfrac1{72}z_1^5\left(5z_1^2\phi_3^2-3\phi_4\right)},\\[8pt] z_4^+&=&\displaystyle{-\tfrac1{1080}z_1^6\left(9\phi_5-45z_1^2\phi_3\phi_4+40z_1^4\phi_3^3\right),} \end{array} \end{equation} where $z_1=z_1^+=1/\sqrt{\phi^{\prime\prime}(z_+)}$ and $\phi_j$ denotes the $j$th derivative of $\phi(z)$ at the saddle point $z=z_+$ defined in \eqref{eq:int10}.
With these coefficients we expand $f(w)$ defined in \eqref{eq:Jacasymp04}. This gives \begin{equation}\label{eq:Jacasymp11} \begin{array}{@{}r@{\;}c@{\;}l@{}} c_1^+&=&\displaystyle{-\frac{ z_+}{8z_1 (1- z_+^2)^2 (x- z_+)^2}}\Bigl(-6 z_1^3 z_+^2+3 z_1^3-72 z_1 z_2 z_+^2 x\ +\\[8pt] &&24 z_1 z_+ z_2 x^2-24 z_1 z_+^3 z_2 x^2-48 z_3 x z_+-48 z_3 z_+^2 x^2+96 z_3 z_+^3 x\ +\\[8pt] &&24 z_3 z_+^4 x^2-48 z_3 z_+^5 x-12 z_1 z_+ z_2+48 z_1 z_+^3 z_2-48 z_3 z_+^4\ +\\[8pt] &&24 z_3 z_+^6+12 z_1 z_2 x-36 z_1 z_2 z_+^5-4 z_1^3 x z_++8 z_1^3 z_+^2 x^2-20 z_1^3 z_+^3 x\ + \\[8pt] &&4 z_1^3 x^2+15 z_1^3 z_+^4+24 z_3 x^2+24 z_3 z_+^2+60 z_1 z_2 z_+^4 x\Bigr), \end{array} \end{equation} where $z_j$ denotes $z_j^+$. The coefficients $p_1$ and $q_1$ of the expansions in \eqref{eq:Jacasymp09} follow from $c_1^+=p_1+iq_1$.
\subsection{Expansion of the derivative}\label{sec:Pderiv} For the weights of the Gauss quadrature it is convenient to have an expansion of $\displaystyle{\frac{d}{dx}}P_n^{(\alpha,\beta)}(x)$. Of course this follows from using \eqref{eq:Jacasymp08} with different values of $\alpha$ and $\beta$ and the relation \begin{equation}\label{eq:Jacasymp12} \frac{d}{dx}P_n^{(\alpha,\beta)}(x)=\tfrac{1}{2}\left(\alpha+\beta+n+1\right)P_{n-1}^{(\alpha+1,\beta+1)}(x), \end{equation} but it is useful to have a representation in terms of the same parameters.
By straightforward differentiation of \eqref{eq:Jacasymp08} we obtain \begin{equation}\label{eq:Jacasymp13} \begin{array}{@{}r@{\;}c@{\;}l@{}} \displaystyle{\frac{d}{dx}P_n^{(\alpha,\beta)}(x)}&=& \displaystyle{-\sqrt{\frac{\kappa}{\pi}}\,2^{\frac12(\alpha+\beta+1)}e^{-\kappa\psi}\chi^\prime(x)A(x)
\ \times}\\[8pt] &&\displaystyle{\left(\sin\left(\kappa\chi(x)+\tfrac14\pi\right)R(x)-\cos\left(\kappa\chi(x)+\tfrac14\pi\right)S(x)\right)}, \end{array} \end{equation} where $\chi^\prime(x)$ is given in \eqref{eq:Jacasymp07} and \begin{equation}\label{eq:Jacasymp14} \begin{array}{@{}r@{\;}c@{\;}l@{}} A(x)&=&\displaystyle{\frac{1}{\sqrt{w(x)U(x)}}},\\[8pt] R(x)&=&\displaystyle{P(x)-\frac{1}{\kappa\chi^\prime(x)}Q^\prime(x)-\frac{A^\prime(x)}{\kappa A(x)\chi^\prime(x)}Q(x)},\\[8pt] S(x)&=&\displaystyle{Q(x)+\frac{1}{\kappa\chi^\prime(x)}P^\prime(x)+\frac{A^\prime(x)}{\kappa A(x)\chi^\prime(x)}P(x)}. \end{array} \end{equation} We have the expansions \begin{equation}\label{eq:Jacasymp15} R(x)\sim \sum_{j=0}^\infty \frac{r_{j}}{\kappa^j},\quad S(x)\sim \sum_{j=0}^\infty \frac{s_{j}}{\kappa^j}, \end{equation} where the coefficients follow from the relations in \eqref{eq:Jacasymp14}. The first coefficients are $r_0=p_0=1$, $s_0=q_0=0$, and \begin{equation}\label{eq:Jacasymp16} r_1=p_1,\quad s_1=q_1+\frac{A^\prime(x)}{A(x)\chi^\prime(x)}. \end{equation}
\section{Expansion of the zeros}\label{sec:Jacnabzer} A zero $x_\ell$, $1\le \ell\le n$, of $P_n^{(\alpha,\beta)}(x)$ follows from the zeros of (see \eqref{eq:Jacasymp08}) \begin{equation}\label{eq:Jaczeros01} W(x)=\cos\left(\kappa\chi(x)+\tfrac14\pi\right)P(x)+\sin\left(\kappa\chi(x)+\tfrac14\pi\right)Q(x), \end{equation} where $\chi(x)$ is defined in \eqref{eq:Jacasymp08}. For a first approximation we put the cosine term equal to zero. That is, we can write \begin{equation}\label{eq:Jaczeros02} \kappa\chi(x)+\tfrac14\pi=\tfrac12\pi-(n+1-\ell)\pi, \end{equation} where $\ell$ is some integer. It appears that this choice in the right-hand side is convenient for finding the $\ell$th zero.
Because the expansions in \eqref{eq:Jacasymp09} are valid for $x$ properly inside $(x_-,x_+)$, we may expect that the approximations of the zeros in the middle of this interval will be much better than those near the endpoints. We describe how to compute approximations of all $n$ zeros by considering the zeros of $\cos\left(\kappa\chi(x)+\tfrac14\pi\right)$.
We start with $\ell=1$ and using \eqref{eq:Jaczeros02} we compute $\chi_1=\left(\frac14-n\right)\pi/\kappa$. Next we compute an approximation of the zero $x_1$ by inverting the equation $\chi(x)=\chi_1$, where $\chi(x)$ is defined in \eqref{eq:Jacasymp08}. For a Newton procedure we can use $x_-+1/n$ as a starting value.
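In code the procedure just described reads, schematically (the sketch assumes the function invert_chi defined earlier):
\begin{verbatim}
# Illustrative sketch: first approximations to all n zeros from
# (eq:Jaczeros02), each zero seeding the Newton iteration for the next
# one; invert_chi is the helper defined in the sketch above.
import math

def jacobi_zeros_first_approx(n, alpha, beta):
    kappa = n + 0.5*(alpha + beta + 1.0)
    sigma = (alpha + beta)/(2.0*kappa)
    tau   = (alpha - beta)/(2.0*kappa)
    x_m   = -sigma*tau - math.sqrt((1 - sigma**2)*(1 - tau**2))   # x_-
    zeros = []
    x = x_m + 1.0/n            # starting value for the first zero
    for ell in range(1, n + 1):
        target = (0.25 - (n + 1 - ell))*math.pi/kappa
        x = invert_chi(target, x, sigma, tau)
        zeros.append(x)
    return zeros
\end{verbatim}
For $\alpha=50$, $\beta=41$, $n=25$ the starting value is $x_-+1/n\approx-0.76674$, and the sketch should reproduce the first approximations discussed in Example~\ref{exemp:ex01} below.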
\begin{example}\label{exemp:ex01} When we take $\alpha=50$, $\beta=41$, $n=25$, we have $\kappa=71$, $\sigma=91/142$, $\tau=9/142$. We find $\chi_1= -1.095133$ and the starting value of the Newton procedure is $x= -0.7667437$. We find $x_1\doteq -0.7415548$. Comparing this with the first zero computed by using the solver of Maple to compute the zeros of the Jacobi polynomial with Digits = 16, we find a relative error $0.00074$.
For the next zero $x_2$, we compute $\chi_2$ from \eqref{eq:Jaczeros02} with $\ell=2$, use $x_1$ as a starting value for the Newton procedure, and find $x_2 \doteq-0.682106$, with relative error $0.00032$. And so on. The best result is for $x_{13}$ with relative error $0.000013$, and the worst result is for $x_{25}$ with a relative error $0.0010$. \eoexample \end{example} \begin{remark}\label{rem:rem03} We do not have a proof that the zero found in this way always corresponds to the $\ell$th zero when we start with \eqref{eq:Jaczeros02}. In a number of tests we have found full agreement with this choice. \eoremark \end{remark}
To obtain higher approximations of the zeros, we use the method described in our earlier papers. We assume that the zero $x_\ell$ has an asymptotic expansion \begin{equation}\label{eq:Jaczeros03} x_\ell=\xi_0+{\varepsilon},\quad {\varepsilon}\sim \frac{\xi_2}{\kappa^2}+\frac{\xi_4}{\kappa^4}+\ldots, \end{equation} where $\xi_0$ is the value obtained as a first approximation by the method just described.
The function $W(x)$ defined in \eqref{eq:Jaczeros01} can be expanded at $\xi_0$ and we have \begin{equation}\label{eq:Jaczeros04} W(x_\ell)=W(\xi_0+{\varepsilon})=W(\xi_0)+\frac{{\varepsilon}}{1!}W^\prime(\xi_0)+ \frac{{\varepsilon}^2}{2!}W^{\prime\prime}(\xi_0)+\ldots = 0, \end{equation} where the derivatives are with respect to $x$. We find upon substituting the expansions of ${\varepsilon}$ and those of $P$ and $Q$ given \eqref{eq:Jacasymp09}, and comparing equal powers of $\kappa$, that the first coefficients are \begin{equation}\label{eq:Jaczeros05} \begin{array}{@{}r@{\;}c@{\;}l@{}} \xi_2&=&\displaystyle{\frac{\left(1-x^2\right)q_1(x)}{U(x)}},\\[8pt] \xi_4&=&\displaystyle{\frac{1}{6U(x)^4}}\Bigl(3x^5q_1^2+3x^4q_1^2\sigma\tau-6x^3q_1^2-6x^2q_1^2\sigma\tau+3q_1^2x+3q_1^2\sigma\tau\ +\\[8pt] &&\bigl(6q_1^\prime q_1x^4+6x^3q_1^2-12q_1^\prime x^2q_1-6xq_1^2+6q_1^\prime q_1\bigr)U(x)^2\ +\\[8pt] &&\bigl(6p_2x^2q_1+2q_1^3x^2+6q_3-6p_2q_1-6q_3x^2-2q_1^3\bigr)U(x)^3\Bigr), \end{array} \end{equation} where $U(x)$ is defined in \eqref{eq:int10}, and $x$ takes the value of the first approximation of the zero as obtained in Example~\ref{exemp:ex01}.
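Schematically, the first correction can be applied as follows; the function q1 is assumed to implement the coefficient $q_1=\Im c_1^+$ obtained from \eqref{eq:Jacasymp11} and is not coded here.
\begin{verbatim}
# Illustrative sketch: apply the xi_2/kappa^2 term of (eq:Jaczeros03),
# (eq:Jaczeros05) to a first approximation xi0 of a zero.  The callable
# q1 is assumed to evaluate Im c_1^+ of (eq:Jacasymp11).
import math

def corrected_zero(xi0, kappa, sigma, tau, q1):
    u   = math.sqrt(1 - 2*sigma*tau*xi0 - tau**2 - sigma**2 - xi0**2)  # U(xi0)
    xi2 = (1 - xi0*xi0)*q1(xi0)/u
    return xi0 + xi2/kappa**2
\end{verbatim}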
When we take the same values $\alpha=50$, $\beta=41$, $n=25$ as in Example~\ref{exemp:ex01}, and use \eqref{eq:Jaczeros03} with the term $\xi_2/\kappa^2$ included, we obtain for the zero $x_{13}$ a relative error $0.80\times10^{-9}$. With also the term $\xi_4/\kappa^4$ included we find for $x_{13}$ a relative error $0.13\times10^{-12}$.
A more extensive test of the expansion is shown in Figure ~\ref{fig:fig02}. The label $\ell$ in the abscissa represents the order of the zero (starting from $\ell = 1$ for the smallest zero). In this figure we compare the approximations to the zeros obtained with the asymptotic expansion against the results of a Maple implementation (with a large number of digits) of an iterative algorithm which uses the global fixed point method of \cite{Segura:2010:RCO}. The Jacobi polynomials used in this algorithm are computed by using the intrinsic Maple function. As before, we use \eqref{eq:Jaczeros03} with the term $\xi_2/\kappa^2$ included. As can be seen, for $n = 100$ the use of the expansion allows the computation of the zeros $x_\ell$, $10\le\ell\le90$, with absolute error less than $10^{-8}$. When $n = 1000$, an absolute accuracy better than $10^{-12}$ can be obtained for about 90\% of the zeros of the Jacobi polynomials. The results become less accurate for the zeros near the endpoints $\pm1$, as expected.
\begin{figure}
\caption{ Performance of the asymptotic expansion for computing the zeros of $P^{(\alpha,\beta)}_n(x)$ for
$\alpha=50$, $\beta=41$ and $n=100,\,1000$.}
\label{fig:fig02}
\end{figure}
In Figure~\ref{fig:fig03} we show the absolute errors for $n=100$ and $\alpha=50$, $\beta=41$ compared with $\alpha=150$, $\beta=141$. We see that the accuracy is slightly better for the larger parameters, and that the asymptotics is quite uniform when $\alpha$ and $\beta$ assume larger values. \begin{figure}
\caption{ Performance of the asymptotic expansion for computing the zeros for $n=100$ and $\alpha=50$, $\beta=41$ compared with $\alpha=150$, $\beta=141$. }
\label{fig:fig03}
\end{figure}
\section{The weights of the Gauss-Jacobi quadrature}\label{sec:weights}
As we did in \cite{Gil:2019:NIG}, and in our earlier paper \cite{Gil:2018:GHL} for the Gauss--Hermite and Gauss--Laguerre quadratures, it is convenient to introduce scaled weights. In terms of the derivatives of the Jacobi polynomials, the classical form of the weights of the Gauss-Jacobi quadrature can be written as \begin{equation}\label{eq:weights01} \begin{array}{@{}r@{\;}c@{\;}l@{}} w_\ell &=& \displaystyle{\frac{M_{n,\alpha,\beta}}{ \left(1-x_\ell^2\right) P_n ^{(\alpha ,\beta)\prime}(x_\ell)^2}},
\\ &&\\ M_{n,\alpha,\beta}&=&\displaystyle{2^{\alpha+\beta+1}\frac{\Gamma (n+\alpha+1)\Gamma (n+\beta+1)}{n! \Gamma (n+\alpha+\beta+1 )}}. \end{array} \end{equation}
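For large parameters the prefactor $M_{n,\alpha,\beta}$ is best evaluated in logarithmic form, for instance as in the following minimal sketch:
\begin{verbatim}
# Illustrative sketch: log of M_{n,alpha,beta} in (eq:weights01), via
# log-gamma to avoid overflow for large parameters.
from math import lgamma, log

def log_M(n, alpha, beta):
    return ((alpha + beta + 1)*log(2.0) + lgamma(n + alpha + 1)
            + lgamma(n + beta + 1) - lgamma(n + 1)
            - lgamma(n + alpha + beta + 1))
\end{verbatim}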
In Figure~\ref{fig:fig04} we show the relative errors in the computation of the weights $w_\ell$ defined in \eqref{eq:weights01}, with the derivative of the Jacobi polynomial computed by using the relation in \eqref{eq:Jacasymp12}. We have used the representation in \eqref{eq:Jacasymp08}, with the asymptotic series \eqref{eq:Jacasymp09} truncated after $j = 3$ and the expansion \eqref{eq:Jaczeros03} for the nodes with the term $\xi_2/\kappa^2$ included. The relative errors are obtained by comparison with high-precision results computed with Maple.
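For completeness, a minimal sketch of this computation of the weights, combining \eqref{eq:weights01} and \eqref{eq:Jacasymp12} with the leading-order evaluation of the polynomial sketched earlier; log_M and jacobi_leading_order are assumed to be the functions defined in the previous sketches.
\begin{verbatim}
# Illustrative sketch: weights (eq:weights01) with the derivative obtained
# from (eq:Jacasymp12); for very large parameters the scaled weights
# considered below avoid overflow problems.
import math

def weight_classical(x, n, alpha, beta):
    dP = 0.5*(alpha + beta + n + 1)*jacobi_leading_order(x, n - 1,
                                                         alpha + 1, beta + 1)
    return math.exp(log_M(n, alpha, beta))/((1 - x*x)*dP*dP)
\end{verbatim}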
\begin{figure}
\caption{ Performance of the computation of the weights $w_\ell$ by using the asymptotic expansion of the Jacobi polynomial for
$\alpha=50$, $\beta=41$ and $n=100,\,1000$.}
\label{fig:fig04}
\end{figure}
As an alternative we consider the scaled weights defined by \begin{equation}\label{eq:weights02} \omega_\ell=\frac{1}{v^\prime(x_\ell)^{2}}, \end{equation}
where
\begin{equation}\label{eq:weights03} v(x)=C_{n,\alpha,\beta}\, (1-x)^{a}(1+x)^{b} P_n^{(\alpha,\beta)}(x), \end{equation} and we choose $a$ and $b$ such that $v^{\prime\prime}(x_\ell)=0$; $C_{n,\alpha,\beta}$ does not depend on $x$, and will be chosen later. We have \begin{equation}\label{eq:weights04} \begin{array}{@{}r@{\;}c@{\;}l@{}} v^\prime(x)&=&C_{n,\alpha,\beta}\bigl( \left(-a(1-x)^{a-1}(1+x)^{b} +b(1-x)^{a}(1+x)^{b-1}\right)P_n^{(\alpha,\beta)}(x)\ +\\[8pt] &&(1-x)^{a}(1+x)^{b}P_n^{(\alpha,\beta)\prime}(x)\bigr). \end{array} \end{equation} Evaluating $v^{\prime\prime}(x_\ell)$, we find \begin{equation}\label{eq:weights05} \begin{array}{@{}r@{\;}c@{\;}l@{}} v^{\prime\prime}(x_\ell)&=&C_{n,\alpha,\beta}(1-x_\ell)^{a-1}(1+x_\ell)^{b-1}\ \times\\[8pt] &&\left((1-x_\ell^2)P_n^{(\alpha,\beta)\prime\prime}(x_\ell)+ 2\left(b-a-(a+b)x_\ell\right)P_n^{(\alpha,\beta)\prime}(x_\ell)\right), \end{array} \end{equation} where we skip the term containing $P_n^{(\alpha,\beta)}(x_\ell)$, because $x_\ell$ is a zero.
The differential equation of the Jacobi polynomials is \begin{equation}\label{eq:weights06} \left(1-x^2\right) y^{\prime\prime}(x)+\left(\beta-\alpha-(\alpha+\beta+2)x\right)y^\prime(x)+n(\alpha+\beta+n+1)y(x)=0, \end{equation} and we see that $v^{\prime\prime}(x_\ell)=0$ if we take $a=\frac12(\alpha+1)$, $b=\frac12(\beta+1)$.
We obtain
\begin{equation}\label{eq:weights07} v(x)=C_{n,\alpha,\beta}\, (1-x)^{\frac12(\alpha+1)}(1+x)^{\frac12(\beta+1)} P_n^{(\alpha,\beta)}(x), \end{equation} with properties
\begin{equation}\label{eq:weights08} v^\prime(x_\ell)=C_{n,\alpha,\beta}\, (1-x_\ell)^{\frac12(\alpha+1)}(1+x_\ell)^{\frac12(\beta+1)} P_n^{(\alpha,\beta)\prime}(x_\ell), \quad v^{\prime\prime}(x_\ell)=0. \end{equation} The weights $w_\ell$ are related with the scaled weights $\omega_\ell$ by \begin{equation}\label{eq:weights09} w_\ell =M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}(1-x_\ell)^{\alpha}(1+x_\ell)^{\beta} \omega_\ell. \end{equation}
The advantage of computing scaled weights is that, as described in \cite{Gil:2018:GHL}, scaled weights do not underflow/overflow for large parameters. In addition, they are well-conditioned as a function of the roots $x_\ell$. Indeed, introducing the notation \begin{equation}\label{eq:weights10} V(x)=\frac{1}{v^\prime(x)^{2}}, \end{equation} the scaled weights are $\omega_\ell=V(x_\ell)$ and $V^\prime(x_\ell)=0$ because $v^{\prime\prime}(x_\ell)=0$. The vanishing derivative of $V(x)$ at $x_\ell$ may result in a more accurate numerical evaluation of the scaled weights.
When considering the representation of the Jacobi polynomials in \eqref{eq:Jacasymp08}, the function $v(x)$ can be written as \begin{equation}\label{eq:weights11} v(x)= \frac{2^{\frac12(\alpha+\beta+1)}}{\sqrt{\pi \kappa}}\,C_{n,\alpha,\beta}e^{-\kappa\psi}Z(x)W(x),\quad Z(x)=\sqrt{\frac{1-x^2}{U(x)}}, \end{equation}
where $U(x)$ is defined in \eqref{eq:int10}. For scaling $v(x)$ we choose \begin{equation}\label{eq:weights12} C_{n,\alpha,\beta}=2^{-\frac12(\alpha+\beta+1)}e^{\kappa\psi}. \end{equation}
This gives \begin{equation}\label{eq:weights13} v(x)= \frac{Z(x)W(x)}{\sqrt{\pi \kappa}}. \end{equation}
For the numerical computation of $\psi$ defined in \eqref{eq:Jacasymp06} for small values of $\sigma$ or $\tau$, we can use the expansion \begin{equation}\label{eq:weights14} (1-x)\ln(1-x)+(1+x)\ln(1+x)=\sum_{k=1}^\infty\frac{x^{2k}}{k(2k-1)},\quad \vert x\vert < 1. \end{equation}
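For instance, writing $g(x)=(1-x)\ln(1-x)+(1+x)\ln(1+x)$, the constant $\psi$ of \eqref{eq:Jacasymp06} equals $\tfrac12\bigl(g(\sigma)-g(\tau)\bigr)$, and the series \eqref{eq:weights14} then gives a cancellation-free evaluation for small arguments; a minimal sketch follows (the number of terms and the switching point are illustrative).
\begin{verbatim}
# Illustrative sketch: g(x) = (1-x)ln(1-x)+(1+x)ln(1+x) via the series
# (eq:weights14) for small |x|; then psi = (g(sigma) - g(tau))/2.
import math

def g(x, nterms=40):
    if abs(x) > 0.5:          # direct evaluation is accurate enough here
        return (1 - x)*math.log(1 - x) + (1 + x)*math.log(1 + x)
    return sum(x**(2*k)/(k*(2*k - 1)) for k in range(1, nterms + 1))

def psi_stable(sigma, tau):
    return 0.5*(g(sigma) - g(tau))
\end{verbatim}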
For computing the modified Gauss weights it is convenient to have an expansion of the derivative of the function $v(x)$ of \eqref{eq:weights13}, with $W(x)$ defined in \eqref{eq:Jacasymp08} and $Z(x)$ in \eqref{eq:weights11}.
We have \begin{equation}\label{eq:weights20} \frac{d}{dx}v(x)=-\sqrt{\frac{\kappa}{\pi}}\chi^\prime(x)Z(x) \left(\sin\left(\kappa\chi(x)+\tfrac14\pi\right)M(x)-\cos\left(\kappa\chi(x)+\tfrac14\pi\right)N(x)\right), \end{equation} where $\chi^\prime(x)$ is given in \eqref{eq:Jacasymp07} and \begin{equation}\label{eq:weights21} \begin{array}{@{}r@{\;}c@{\;}l@{}} M(x)&=&\displaystyle{P(x)-\frac{1}{\kappa}p(x)Q^\prime(x)-\frac{1}{\kappa}q(x)Q(x)},\\[8pt] N(x)&=&\displaystyle{Q(x)+\frac{1}{\kappa} p(x)P^\prime(x)+\frac{1}{\kappa}q(x)P(x)}, \end{array} \end{equation} where \begin{equation}\label{eq:weights22} \begin{array}{@{}r@{\;}c@{\;}l@{}} p(x)&=&\displaystyle{\frac{1}{\chi^\prime(x)}=\frac{1-x^2}{U(x)},}\\[8pt] q(x)&=&\displaystyle{\frac{Z^\prime(x)}{Z(x)\chi^\prime(x)}=\frac{(1-x^2)(x+\sigma\tau)-2xU^2(x)}{2U^3(x)}}. \end{array} \end{equation} We have the expansions \begin{equation}\label{eq:weights23} M(x)\sim \sum_{j=0}^\infty \frac{m_{j}}{\kappa^j},\quad N(x)\sim \sum_{j=0}^\infty \frac{n_{j}}{\kappa^j}, \end{equation} where the coefficients follow from the relations in \eqref{eq:Jacasymp14}. The first coefficients are $m_0=p_0=1$, $n_0=q_0=0$, and for $j=1,2,3,\ldots$ \begin{equation}\label{eq:weights24} \begin{array}{@{}r@{\;}c@{\;}l@{}} m_j&=&\displaystyle{p_j-p(x)q_{j-1}^\prime-q(x)q_{j-1},}\\[8pt] n_j&=&\displaystyle{q_j+p(x)p_{j-1}^\prime+q(x)p_{j-1}.} \end{array} \end{equation}
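To leading order, that is with $M(x)\approx1$ and $N(x)\approx0$, the derivative \eqref{eq:weights20} and the scaled weights \eqref{eq:weights02} can be sketched as follows; U and chi are assumed to be the helpers defined earlier.
\begin{verbatim}
# Illustrative sketch: leading-order form of (eq:weights20) with M ~ 1,
# N ~ 0, and the scaled weight (eq:weights02); uses U and chi from the
# earlier sketches.
import math

def v_prime_leading(x, n, alpha, beta):
    kappa = n + 0.5*(alpha + beta + 1.0)
    sigma = (alpha + beta)/(2.0*kappa)
    tau   = (alpha - beta)/(2.0*kappa)
    u    = U(x, sigma, tau)
    Zx   = math.sqrt((1 - x*x)/u)
    dchi = u/(1 - x*x)
    return (-math.sqrt(kappa/math.pi)*dchi*Zx
            *math.sin(kappa*chi(x, sigma, tau) + 0.25*math.pi))

def scaled_weight(x, n, alpha, beta):
    return 1.0/v_prime_leading(x, n, alpha, beta)**2
\end{verbatim}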
As an example, Figure~\ref{fig:fig05} shows the performance of the asymptotic expansion \eqref{eq:weights20} for computing the scaled weights \eqref{eq:weights02} for $\alpha=50$, $\beta=41$ and $n=1000$. The computation of the non-scaled weights \eqref{eq:weights01} is shown as comparison.
\begin{figure}
\caption{ Comparison of the performance of the asymptotic expansions for computing non-scaled \eqref{eq:weights01} and scaled \eqref{eq:weights02} weights for
$\alpha=50$, $\beta=41$ and $n=1000$.}
\label{fig:fig05}
\end{figure}
In Figure~\ref{fig:fig06} and Figure~\ref{fig:fig07} we compare the effect of computing the weights $w_\ell$ defined in \eqref{eq:weights01} and the scaled weights $\omega_\ell$ defined in \eqref{eq:weights02} with the asymptotic expansion of the zeros in \eqref{eq:Jaczeros03} with the term $\xi_4/\kappa^4$ included or not included. From these computations it follows that the scaled weights are well-conditioned as a function of the nodes and are therefore not so critically dependent on the accuracy of the nodes. By contrast, the non-scaled weights are worse conditioned, and the accuracy of the nodes becomes more important.
\begin{figure}
\caption{ Performance of the computation of the weights $w_\ell$ defined in \eqref{eq:weights01} by using the asymptotic expansion of the Jacobi polynomial for
$\alpha=50$, $\beta=41$ and $n=1000$. The comparison is between the expansion of the zeros in \eqref{eq:Jaczeros03} with the term $\xi_4/\kappa^4$ included or not included.}
\label{fig:fig06}
\end{figure}
\begin{figure}
\caption{ Same as in Figure~\ref{fig:fig06} for the scaled weights $\omega_\ell$ defined in \eqref{eq:weights02}.}
\label{fig:fig07}
\end{figure}
\subsection{About quantities appearing in the weights}\label{sec:weightscoeff}
First we consider the term $e^{\kappa\psi}$, with $\psi$ given in \eqref{eq:Jacasymp06}. Using the relations in \eqref{eq:int07}, we have \begin{equation}\label{eq:weights15} \begin{array}{lll} &\kappa(1+\tau)=n+\alpha+\frac12,&\kappa(1-\tau)=n+\beta+\frac12,\\[8pt] &\kappa(1+\sigma)=n+\alpha+\beta+\frac12,&\kappa(1-\sigma)=n+\frac12, \end{array} \end{equation} and this gives \begin{equation}\label{eq:weights16} \begin{array}{@{}r@{\;}c@{\;}l@{}} e^{2\kappa\psi}&=&\displaystyle{\frac{\left(n+\alpha+\beta+\frac12\right)^{n+\alpha+\beta+\frac12} \left(n+\frac12\right)^{n+\frac12}} {\left(n+\alpha+\frac12\right)^{n+\alpha+\frac12} \left(n+\beta+\frac12\right)^{n+\beta+\frac12}}}\\[8pt] &=&\displaystyle{\frac{\Gamma\left( n+\alpha+\beta+\frac12\right)\Gamma\left(n +\frac12 \right)}{\Gamma\left(n+\alpha+\frac12 \right)\Gamma\left( n+\beta+\frac12\right)} \ \frac {\Gamma^*\left(n+\alpha+\frac12 \right)\Gamma^*\left( n+\beta+\frac12\right)} {\Gamma^*\left( n+\alpha+\beta+\frac12\right)\Gamma^*\left(n+\frac12 \right)}
\times}\\[8pt] &&\displaystyle{\sqrt{\frac{\left( n+\alpha+\beta+\frac12\right)\left( n+\frac12\right)}{\left( n+\alpha+\frac12\right)\left( n+\beta+\frac12\right)}}}, \end{array} \end{equation} where \begin{equation}\label{eq:weights17} \Gamma^*(z)=\sqrt{{z/(2\pi)}}\,e^z z^{-z}\Gamma(z),\quad {\rm ph}\,z\in(-\pi,\pi),\quad z\ne0. \end{equation} We have $\Gamma^*(z)=1+{\cal O}(1/z)$ as $z\to\infty$.
It follows that (see \eqref{eq:weights01} and \eqref{eq:weights09}) \begin{equation}\label{eq:weights18} \begin{array}{@{}r@{\;}c@{\;}l@{}} M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}&=& \displaystyle{ \frac {\Gamma\left(n+\alpha+1 \right)\Gamma\left( n+\beta+1\right)\Gamma\left( n+\alpha+\beta+\frac12\right)\Gamma\left(n+\frac12 \right)} {\Gamma\left(n+\alpha+\frac12 \right)\Gamma\left( n+\beta+\frac12\right)\Gamma\left( n+\alpha+\beta+1\right)\Gamma\left(n+1 \right)}}\ \times\\[8pt] &&\displaystyle{ \frac {\Gamma^*\left(n+\alpha+\frac12 \right)\Gamma^*\left( n+\beta+\frac12\right)} {\Gamma^*\left( n+\alpha+\beta+\frac12\right)\Gamma^*\left(n+\frac12 \right)} \sqrt{\frac{\left( n+\alpha+\beta+\frac12\right)\left( n+\frac12\right)}{\left( n+\alpha+\frac12\right)\left( n+\beta+\frac12\right)}}. }\end{array} \end{equation} Using $\Gamma\left(z+\frac12\right)/\Gamma(z)\sim z^{\frac12}$ as $z\to\infty$, we see that, in the case that $\alpha$, $\beta$ and $n$ are all large, we have $M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}\sim1$, and that, when using more details on expansions of gamma functions and ratios thereof (see \cite[\S6.5]{Temme:2015:AMI}), we can obtain \begin{equation}\label{eq:weights19} M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}\sim 1+\frac{\sigma^2-\tau^2}{12(1-\sigma^2)(1-\tau^2)\kappa}+ \frac{(\sigma^2-\tau^2)^2}{288(1-\sigma^2)^2(1-\tau^2)^2\kappa^2}+\ldots, \end{equation} again, when $\alpha$, $\beta$ and $n$ are all large.
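Numerically, the combination $M_{n,\alpha,\beta}C^2_{n,\alpha,\beta}$ is conveniently obtained in logarithmic form from \eqref{eq:weights01} and \eqref{eq:weights12}, and can then be compared with the estimate \eqref{eq:weights19}; a minimal sketch, assuming the functions log_M and psi_stable defined above:
\begin{verbatim}
# Illustrative sketch: M*C^2 of (eq:weights09), with C taken from
# (eq:weights12); uses log_M and psi_stable defined above.
import math

def M_C2(n, alpha, beta):
    kappa = n + 0.5*(alpha + beta + 1.0)
    sigma = (alpha + beta)/(2.0*kappa)
    tau   = (alpha - beta)/(2.0*kappa)
    return math.exp(log_M(n, alpha, beta) - (alpha + beta + 1)*math.log(2.0)
                    + 2.0*kappa*psi_stable(sigma, tau))
\end{verbatim}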
As observed in the first lines of Section~\ref{sec:Jacnabelfun}, in the present asymptotics we assume that $\sigma$ and $\vert\tau\vert$ are bounded away from $1$.
\section*{Acknowledgments}
We acknowledge financial support from Ministerio de Ciencia e Innovaci\'on, Spain, projects MTM2015-67142-P (MINECO/FEDER, UE) and PGC2018-098279-B-I00 (MCIU/AEI/FEDER, UE). NMT thanks CWI, Amsterdam, for scientific support.
\end{document} | arXiv |
17.5 Parallel resistors
The potential difference across the terminals of a battery when it is not in a complete circuit is the electromotive force (emf) measured in volts (\(\text{V}\)).
The potential difference across the terminals of a battery when it is in a complete circuit is the terminal potential difference measured in volts (\(\text{V}\)).
Voltage is a measure of the work required to move a certain amount of charge and is equivalent to \(\text{J·C$^{-1}$}\).
Current is the rate at which charge moves/flows and is measured in amperes (A) which is equivalent to \(\text{C·s$^{-1}$}\).
Conventional current flows from the positive terminal of a battery, through a circuit, to the negative terminal.
Ammeters measure current and must be connected in series.
Voltmeters measure potential difference (voltage) and must be connected in parallel.
Resistance is a measure of how much work must be done for charge to flow through a circuit element and is measured in ohms (\(\text{Ω}\)) and is equivalent to \(\text{V·A$^{-1}$}\).
Resistance of circuit elements is related to the material from which they are made as well as the physical characteristics of length and cross-sectional area.
Current is constant through resistors in series and they are called voltage dividers as the sum of the voltages is equal to the voltage across the entire set of resistors.
The total resistance of resistors in series is the sum of the individual resistances, \({R}_{S}={R}_{1}+{R}_{2}+\ldots\)
Voltage is constant across resistors in parallel and they are called current dividers because the sum of the currents through each resistor is the same as the total current through the circuit configuration.
The total resistance of resistors in parallel is calculated by using \(\frac{1}{{R}_{P}}=\frac{1}{{R}_{1}}+\frac{1}{{R}_{2}}+ \ldots\) which is \({R}_{P}=\frac{{R}_{1}{R}_{2}}{{R}_{1}+{R}_{2}}\) for two parallel resistors.
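For example, the two formulas can be checked with a short Python calculation (the resistor values below are made-up examples):

    # Illustrative sketch: equivalent resistance of resistors in series
    # and in parallel (the resistor values are made-up examples).
    def series(resistances):
        return sum(resistances)

    def parallel(resistances):
        return 1.0/sum(1.0/r for r in resistances)

    print(series([100.0, 200.0, 300.0]))   # 600.0 ohm
    print(parallel([100.0, 200.0]))        # 66.7 ohm, i.e. R1*R2/(R1+R2)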
Physical Quantities
Quantity: Unit symbol
Potential difference (\(\text{V}\)): \(\text{V}\)
Voltage (\(\text{V}\)): \(\text{V}\)
Current (\(\text{I}\)): \(\text{A}\)
Resistance (\(\text{R}\)): \(\text{Ω}\)
Table 17.1: Units used in electric circuits
| CommonCrawl
Kolmogorov Prize
The Kolmogorov Prize is a mathematical prize awarded by the Russian Academy of Sciences for outstanding results in the field of mathematics. It bears the name of the mathematician Andrey Kolmogorov.
The award was established by the Decree of the Presidium of the Russian Academy of Sciences on February 23, 1993.[1] As a rule, it is awarded every three years.
Awarded Scientists
The following scientists have won the award:[2][3]
• 1994 — Albert Shiryaev
• 1997 — Nikolay Nekhoroshev
• 2000 — Sergey Nikolsky
• 2003 — Anatoli Vitushkin
• 2006 — Alexei Semenov
• 2006 — Andrey Muchnik
• 2009 — Boris Gurevich
• 2009 — Valeriy Oseledets
• 2009 — Anatoly Styopin
• 2012 — Boris Kashin
• 2015 — Aleksandr Borovkov
• 2015 — Anatoly Mogulsky
• 2018 — Vladimir Bogachev
• 2018 — Stanislav Shaposhnikov
• 2018 — Andrey Kirillov
• 2021 — Alexander Bulinsky
References
1. Золотые медали Российской академии наук [Gold medals of the Russian Academy of Sciences] (in Russian). Russian Academy of Sciences.
2. Премия имени А.Н. Колмогорова [Prize named after A.N. Kolmogorov] (in Russian). Russian Academy of Sciences.
3. Именные премии и медали [Nominal awards and medals] (in Russian). Russian Academy of Sciences.
| Wikipedia |
Marion Beiter
Sister Marion Beiter OSF (August 23, 1907 – October 11, 1982), born Dorothy Katharine Beiter, was an American mathematician and educator. Her research focused on the area of cyclotomic polynomials.[1]
Sister
Marion Beiter
OSF
Born
Dorothy Katharine Beiter
August 23, 1907
Buffalo, New York
Died: October 11, 1982 (aged 75)
Stella Niagara, New York
Resting place: Sisters of St. Francis Cemetery, Stella Niagara, New York
Alma mater: Catholic University of America
Scientific career
Institutions: Rosary Hill College (later Daemen College)
Thesis: Coefficients in the cyclotomic polynomial for numbers with at most three distinct odd primes in their factorization (1960)
Beiter was born in Buffalo to Kathryn (née Kiel) and Edward Frederick Beiter, where she attended Sacred Heart Academy.[2] She entered the Sisters of St. Francis of Penance and Christian Charity in 1923, and professed her final vows in 1929.
She began her career in 1925 as a teacher in parochial and private schools, continuing in this capacity until 1952, when she was appointed chairwoman of the mathematics department of Rosary Hill College. She meanwhile graduated from Canisius College (1944) and St. Bonaventure University (1948), before obtaining a PhD from the Catholic University of America in 1960.[3] In her work on cyclotomic polynomials and their coefficients she made a conjecture referred to as the Sister Beiter conjecture.[4] Apart from a sabbatical year at the State University of New York at Buffalo in 1971–1972, Beiter remained at Rosary Hill until her retirement in May 1977.[1]
Beiter died in 1982 of a series of strokes.[5]
Publications
• Coefficients in the cyclotomic polynomial for numbers with at most three distinct odd primes in their factorization (Thesis). Washington, D.C.: The Catholic University of America Press. 1960.
• "The Midterm Coefficient of the Cyclotomic Polynomial $F_{pqr}(x)$". The American Mathematical Monthly. 71 (7): 769–770. September 1964. doi:10.2307/2310894. JSTOR 2310894.
• "Magnitude of the Coefficients of the Cyclotomic Polynomial $F_{pqr}(x)$". The American Mathematical Monthly. 75 (4): 370–372. April 1968. doi:10.2307/2313416. JSTOR 2313416.
• "Magnitude of the Coefficients of the Cyclotomic Polynomial $F_{pqr}(x)$, II". Duke Mathematical Journal. 38 (3): 591–594. September 1971. doi:10.1215/S0012-7094-71-03873-7.
• "Coefficients of the Cyclotomic Polynomial $F_{3qr}(x)$" (PDF). The Fibonacci Quarterly. 16 (4): 302–306. August 1978.
References
1. Doyle, Bill (November 4, 1982). "Former Daemen Prof. Dies". Daemen Ascent. Vol. 38, no. 4. Amherst, N.Y.: Daemen College. p. 4.
2. "Nun Gets Degree". Buffalo Courier-Express. June 13, 1960. p. 97.
3. Who's Who of American Women (9th ed.). Marquis Who's Who. 1975–1976. p. 61. ISBN 9780837904092.
4. Juran, Branko; Moree, Pieter; Riekert, Adrian; Schmitz, David; Völlmecke, Julian (2023). "A proof of the corrected Sister Beiter cyclotomic coefficient conjecture inspired by Zhao and Zhang". arXiv:2304.09250 [math.NT].
5. "Beloved Sister Passed Away". Daemen College Response. Vol. 3, no. 2. Amherst, N.Y.: Daemen College. November 1982. p. 6. Archived from the original on June 1, 2020.
| Wikipedia |
School of Sciences, UNAM
The Faculty of Sciences (Spanish: Facultad de Ciencias) at the National Autonomous University of Mexico (UNAM) is the entity where natural and exact science-based majors are taught. It has both undergraduate and graduate studies, some of the former in joint teaching with other faculties, most commonly the Faculty of Engineering. The Faculty of Sciences is the most important science school in the country by the number of students and the quality of its research. Together with the research institutes that surround it, it is considered one of the biggest research complexes of the UNAM.
Faculty of Sciences
Hallmark of the UNAM's Faculty of Sciences
Type: Faculty
Established: 1938[1]
President: Catalina Elizabeth Stern Forgach
Students: 9,578[2]
Undergraduates: 6,799
Postgraduates: 2,779
Location: Mexico City, Mexico
Colors: Blue and white
Website: www.fciencias.unam.mx
History
The history of this faculty is rather different from that of other schools that have their origins in former national schools. The study plan for this faculty was initially given in the Philosophy Faculty of the UNAM. The Biology major was the first one to have a structured study plan, originating around 1930.[3]
It was not until 1933 that the majors of Physics and Mathematics were founded. Formerly, the faculty was located in a small building between the Faculty of Engineering and Medicine.
Due to the increasing number of students, the Faculty had to construct new buildings, conveniently located among the research science institutes of Mathematics, Biology and Physics.
This faculty, along with Philosophy, is notable for its history of activism during the 1999 strike; these two faculties alone kept the strike going longer than any other school.
Organization & Departments
The faculty is run by the Faculty Dean, currently Catalina Stern.
Has 3 main divisions:
• Biology:
• This area covers the bachelor's degree in Biology, Environmental Sciences, and currently researches over Evolutionary Biology, Comparative Biology, Cell Biology and Ecology & Natural Resources
• Physics
• This area is in charge of the bachelor's degree of Physics, Biomedical Physics and Earth Science. Further research is done in the Physics Institute
• Mathematics
• This area currently offers bachelor's degrees in Applied Mathematics, Mathematics, Computer Science and Actuarial Science
Location and facilities
The Faculty is located in Ciudad Universitaria in Mexico City, across the street from the Faculty of Engineering and the Faculty of Administration. Its premises are located next to the Physics, Mathematics and Astronomy research institutes.
The Faculty occupies various buildings: buildings "O" and "P", which consist only of classrooms, and four other buildings housing the Faculty's departments and their faculty members. The newer part of the complex consists of two buildings: one, called Amoxcalli, holds the Faculty's library and the Computing Center; the newest one, called Tlahuizcalpan, hosts various labs and research facilities, as well as classrooms.
In recent years, an interdisciplinary extension of the Faculty was inaugurated in the port of Sisal, Yucatán, devoted to the research of coastal ecosystems and their species.
The National Herbarium of Mexico at UNAM in Mexico City houses the largest collection of plant specimens in all of Latin America and is one of the 10 most active herbaria in the world.
Graduate programs
The Faculty of Sciences offers graduate programs on computer science, material science, astronomy, biology, earth science, ocean science, physics, mathematics, statistics & actuarial science, history and philosophy of science and science education,[4] although most of these are run in collaboration with the nearby institutes, like Physics, Mathematics and IIMAS (Institute for applied mathematics and systems). Some programs are offered in conjunction with other faculties and institutes nationwide.
References
1. http://www.fciencias.unam.mx/historia.html
2. Facultad de Ciencias
External links
• http://www.fciencias.unam.mx/, official website (in Spanish)
• National Herbarium of Mexico at UNAM
National Autonomous University of Mexico
Faculties
• Engineering
• Accounting and Administration
• Architecture
• Chemistry
• Economics
• Law
• Medicine
• Odontology
• Philosophy and Letters
• Political and Social Sciences
• Psychology
• Sciences
• Veterinarian Medicine
FES
• Acatlán
• Aragón
• Cuautitlán
• Iztacala
• Zaragoza
Schools
• Arts and Design
• Music
• Nursery and Obstetrics
• Social Work
• National Preparatory School
Centres
• Cinematographic Studies
• DGSCA
• Centro de Relaciones Internacionales (CRI)
Institutes
• Applied Mathematics and Systems Research Institute
• Aesthetics Research Institute
• Engineering Institute
Buildings
• Central Library
Facilities
• Ciudad Universitaria (Main Campus)
• Olympic Stadium
• Radio UNAM (AM, FM)
• TV UNAM
• Museo Universitario Arte Contemporáneo (MUAC)
• National Observatory
• Kan Balam (Super Computer)
History
• 1999 students' strike
• 2018 students' protests
• National Autonomous University of Mexico
• Okupa Che
• Royal and Pontifical University of Mexico
Alumni
• Alumni
• Astronomical Society
Professors and researchers
• Axel Didriksson
• Luis E. Miramontes
• Francisco Gil Villegas
• Miguel Ángel Mancera
• Arturo Zaldívar Lelo de Larrea
• Fernando Quevedo
• Francisco González de la Vega
• Antonio Lazcano
Sports
• Football club
• Pumas Dorados de la UNAM
19.3244114°N 99.1791499°W / 19.3244114; -99.1791499
| Wikipedia |
How many positive and negative integers is $12$ a multiple of?
The number $12$ is a multiple of $-12, -6, -4, -3, -2, -1, 1, 2, 3, 4, 6,$ and $12,$ for a total of $\boxed{12}$ integers. | Math Dataset |
Simple Construction of a Frame which is $\epsilon$-nearly Parseval and $\epsilon$-nearly Unit Norm
Mohammad Ali Hasankhani Fard
Department of Mathematics Vali-e-Asr University, Rafsanjan, Iran.
In this paper, we will provide a simple method for starting with a given finite frame for an $n$-dimensional Hilbert space $\mathcal{H}_n$ with nonzero elements and producing a frame which is $\epsilon$-nearly Parseval and $\epsilon$-nearly unit norm. Also, the concept of the $\epsilon$-nearly equal frame operators for two given frames is presented. Moreover, we characterize all bounded invertible operators $T$ on the finite or infinite dimensional Hilbert space $\mathcal{H}$ such that $\left\{f_k\right\}_{k=1}^\infty$ and $\left\{Tf_k\right\}_{k=1}^\infty$ are $\epsilon$-nearly equal frame operators, where $\left\{f_k\right\}_{k=1}^\infty$ is a frame for $\mathcal{H}$. Finally, we introduce and characterize all operator dual Parseval frames of a given Parseval frame.
Parseval frame
$\epsilon$-nearly Parseval frame
$\epsilon$-nearly equal frame operators
Operator dual Parseval frames
Frame Theory
Hasankhani Fard, M. Simple Construction of a Frame which is $\epsilon$-nearly Parseval and $\epsilon$-nearly Unit Norm. Sahand Communications in Mathematical Analysis, 2019; 16(1): 57-67. doi: 10.22130/scma.2018.79613.374 | CommonCrawl |
Volume 5, Number 3, 2000
Inozemtsev V. I.
On a Set of Bethe-Ansatz Equations for Quantum Heisenberg Chains with Elliptic Exchange
The eigenvectors of the Hamiltonian $\mathscr{H}_N$ of $N$-site quantum spin chains with elliptic exchange are connected with the double Bloch meromorphic solutions of the quantum continuous elliptic Calogero–Moser problem. This fact allows one to find the eigenvectors via the solutions to a system of highly transcendental equations of Bethe-ansatz type, which is presented in explicit form.
Citation: Inozemtsev V. I., On a Set of Bethe-Ansatz Equations for Quantum Heisenberg Chains with Elliptic Exchange, Regular and Chaotic Dynamics, 2000, vol. 5, no. 3, pp. 243-250
Morales-Ruiz J. J.
Kovalevskaya, Liapounov, Painleve, Ziglin and the Differential Galois Theory
We give a review of the integrability of complex analytic dynamical systems, starting with the works of Kovalevskaya, Liapounov and Painleve, as well as those of Picard and Vessiot at the end of the XIX century. In particular, we state a new result which generalizes a theorem of Ramis and the author. This last theorem is itself a generalization of Ziglin's non-integrability theorem about the monodromy group of the first order variational equation. Also we try to point out some ideas about the connection of the above results with the Painleve property.
Citation: Morales-Ruiz J. J., Kovalevskaya, Liapounov, Painleve, Ziglin and the Differential Galois Theory, Regular and Chaotic Dynamics, 2000, vol. 5, no. 3, pp. 251-272
Kruskal M. D., Tamizhmani K. M., Grammaticos B., Ramani A.
Asymmetric Discrete Painleve Equations
We investigate the possible integrable nonautonomous forms of a given class of mappings involving more than one dependent variable. These integrable discrete systems define "asymmetric" Painlevé equations. Our main tool of investigation is the application of the singularity confinement discrete integrability criterion. A new way of implementing it, first proposed for the singularity analysis of continuous systems, is also introduced.
Citation: Kruskal M. D., Tamizhmani K. M., Grammaticos B., Ramani A., Asymmetric Discrete Painleve Equations, Regular and Chaotic Dynamics, 2000, vol. 5, no. 3, pp. 273-280
Chang C. H., Mayer D.
Thermodynamic Formalism and Selberg's Zeta Function for Modular Groups
In the framework of the thermodynamic formalism for dynamical systems [26] Selberg's zeta function [29] for the modular group $PSL(2,\mathbb{Z})$ can be expressed through the Fredholm determinant of the generalized Ruelle transfer operator for the dynamical system defined by the geodesic flow on the modular surface corresponding to the group $PSL(2,\mathbb{Z})$ [19]. In the present paper we generalize this result to modular subgroups $\Gamma$ with finite index of $PSL(2,\mathbb{Z})$. The corresponding surfaces of constant negative curvature with finite hyperbolic volume are in general ramified covering surfaces of the modular surface for $PSL(2,\mathbb{Z})$. Selberg's zeta function for these modular subgroups can be expressed via the generalized transfer operators for $PSL(2,\mathbb{Z})$ belonging to the representation of $PSL(2,\mathbb{Z})$ induced by the trivial representation of the subgroup $\Gamma$. The decomposition of this induced representation into its irreducible components leads to a decomposition of the transfer operator for these modular groups in analogy to a well known factorization formula of Venkov and Zograf for Selberg's zeta function for modular subgroups [34].
Citation: Chang C. H., Mayer D., Thermodynamic Formalism and Selberg's Zeta Function for Modular Groups, Regular and Chaotic Dynamics, 2000, vol. 5, no. 3, pp. 281-312
Varin V. P.
Degeneracies of Periodic Solutions to the Beletsky Equation
We suggest a new method of analysis of degeneracies in families of periodic solutions to an ODE, which is based upon the application of variational equations of higher order. The equation of oscillations of a satellite in the plane of its elliptic orbit (the Beletsky equation) is considered as a model problem. We study the degeneracies of arbitrary co-dimension in the families of its $2\pi$-periodic solutions and obtain explicit formulas for them, which allows us to localize the degeneracies with high accuracy and to give them a geometric interpretation.
Citation: Varin V. P., Degeneracies of Periodic Solutions to the Beletsky Equation, Regular and Chaotic Dynamics, 2000, vol. 5, no. 3, pp. 313-328
Ivanov A. V.
Study of the Double Mathematical Pendulum — III. Melnikov's Method Applied to the System In the Limit of Small Ratio of Pendulums Masses
We consider the double mathematical pendulum in the limit when the ratio of the pendulum masses is close to zero and the value of one of the other system parameters is close to a degenerate value (i.e., zero or infinity). We investigate homoclinic intersections using Melnikov's method and obtain an asymptotic formula for the homoclinic invariant in this case.
Citation: Ivanov A. V., Study of the Double Mathematical Pendulum — III. Melnikov's Method Applied to the System In the Limit of Small Ratio of Pendulums Masses, Regular and Chaotic Dynamics, 2000, vol. 5, no. 3, pp. 329-343
Bardin B. S., Maciejewski A. J.
Non-linear oscillations of a Hamiltonian system with one and half degrees of freedom
We study non-linear oscillations of a nearly integrable Hamiltonian system with one and a half degrees of freedom in a neighborhood of an equilibrium. We analyse the resonance case of order one. We perform a careful analysis of a small finite neighborhood of the equilibrium. We show that in the case considered the equilibrium is not stable; however, this instability is soft, i.e. trajectories of the system starting near the equilibrium remain close to it for an infinite period of time. We also discuss the effect of separatrix splitting occurring in the system. We apply our theory to study the motion of a particle in the field of a wave packet.
Citation: Bardin B. S., Maciejewski A. J., Non-linear oscillations of a Hamiltonian system with one and half degrees of freedom, Regular and Chaotic Dynamics, 2000, vol. 5, no. 3, pp. 345-360 | CommonCrawl |
Degravitation
Last time I scolded the speaker for giving an utterly unattractive seminar title. Degravitation - which is the title of Gia Dvali's talk two weeks ago - is on the other hand very catchy and will certainly attract many Roswell aficionados to my blog. But this post, I'm afraid, is not about classified experiments with gravity performed here at CERN but about a new interesting approach to solving the cosmological constant problem. Gia is going to be around for some time, so you may expect more posts with weird titles in future.
The cosmological constant problem is usually phrased as the question why the vacuum energy is so small. Formulated that way, it is very hard to solve, given large existing contributions (zero-point oscillations, vacuum condensates) and vicious no-go theorems set up by Weinberg. The problem has ruined many lives and transformed some weaker spirits into anthropic believers. Gia does not give up and attempts to tackle the problem from a different angle. He tries to construct a theory where the vacuum energy may be large but it does not induce large effects on the gravitational field. This is of course impossible in Einstein gravity where all forms of energy gravitate. The idea can be realized, however, in certain modified gravity theories.
Gia pursues theories where gravity is strongly modified at large distances, above some distance scale L usually assumed to be of similar size as the observable universe. The idea is to modify the equations of gravity so as to filter out sources whose characteristic length is larger than L. The gravitational field would then ignore the existence of a cosmological constant, which uniformly fills the entire universe.
On a slightly more formal level, Gia advocates a quite general approach where the equations for the gravitational fields can be written as
$ ( 1 - \frac{m^2(p^2)}{p^2} ) G_{\mu \nu} = \frac{1} {2} T_{\mu \nu}$
where, as usual, $G$ is the Einstein tensor and $T$ is the energy-momentum tensor. Deviations from the Einstein theory are parameterized by $m^2(p^2)$ which is a function of momentum (or a function of derivatives in the position-space picture). For $m^2=0$, the familiar Einstein equations are recovered. The effects of $m^2$ set in at large distance scales.
At low momenta/large distances one assumes $m^2 \sim L^{-2} (p^2 L^2)^\alpha$ with $0 \leq \alpha < 1$. The case $\alpha = 0$ corresponds to adding the graviton mass, the case $\alpha = 1/2$ corresponds to a certain 5D framework called the DGP model (where Gia is the D). In fact, the latter case is the only one for which the full, non-linear, generally covariant completion is known. Other values of $\alpha$ may or may not correspond to a sensible non-linear theory.
Gia argues that any consistent theory effectively described by this kind of filter equations has to be a theory of a massive or resonance graviton. This means that the graviton propagates 5 degrees of freedom and not 2 as in the Einstein theory. In addition to 2 tensor polarizations, there are 2 vector and 1 scalar polarization. The additional polarizations also couple to massive sources and their exchange contributes to the gravitational potential.
Everybody who ever played with modified gravity knows well that Einstein gravity reacts hysterically to all manipulations and often breaks down. In the present case what happens is that, once the theory is extended beyond the linear approximation, the scalar polarization gets strongly coupled far below the Planck scale. But Gia argues that one can live with it and, in fact, the strong coupling saves the theory. It has been well known for ages that massive gravity suffers from the so-called van Dam--Veltman discontinuity: the potential between two sources is different than in Einstein gravity, even in the zero-mass limit. The culprit is precisely the scalar polarization. The predictions from massive gravity are at odds with precise tests of gravity, for example with observations of the light-bending by the Sun. These predictions, however, are derived using the linear approximation which breaks down near massive sources. Gia argues that the effect of the strong coupling is to suppress the scalar polarization exchange near massive sources and there is no contradiction with experiment.
So the picture of the gravitational field around a massive source in massive or resonance gravity is more complex, as shown to the right. Apart from the Schwarzschild radius, there are two other scales. One is the scale L above which gravity shuts off. The other is the r* scale where the scalar polarization gets strongly coupled. At scales larger than r* we have a sort of scalar-tensor gravity that differs from Einstein gravity. At scales shorter than r* Einstein gravity is approximately recovered up to small corrections. Gia estimates that these latter corrections can be measured in the future by the lunar laser ranging experiment if $\alpha$ is of order 1/2.
Coming back to the cosmological constant problem, the analysis is complicated and depends on the non-linear completion of the theory. Gia's analysis shows that this class of theories can indeed degravitate the cosmological constant when $\alpha < 1/2$. I'm not sure if this conclusion is bulletproof since it is derived in a special limit where the equations for the tensor and scalar polarizations decouple. What is certain is that the complete non-linear DGP model (corresponding to $\alpha = 1/2$) does not enjoy the mechanism of degravitation. The hope is that theories with $\alpha < 1/2$ do exist and that a full non-linear analysis will demonstrate one day that the cosmological constant problem is solved.
Slides available here. The paper has been out for 6 months now. It is worth looking at the previous paper of Gia, where the strong coupling phenomenon is discussed at greater length. Try also to google degravitation to see what amazing paths the human mind may wander.
The problem has ruined many lives and transformed some weaker spirits into anthropic believers.
I look at this a little differently:
I see "anthropic selection" as a weaker person's bail-out on first principles, but I think that it requires an even weaker "soul" to reject the very obvious anthropic constraint on the forces as a plausible answer to the problem from first principles.
In other words, Jester, you act like the universe isn't observed to be anthropically constrained, so like Peter Woit, you sit in denial of the observed fact, and I'll bet that you know about as much about the anthropic physics as he does, which ain't much.
In other, other words.... we have David Gross and others who cry about how the biggest failure of science in the last twenty years is its inability to produce a dynamic stability principle, and yet... they refuse to recognize evidence that we might be directly related to it.
For twenty straight years the freaking math whizzes can't even add one plus one...
Sabine Hossenfelder said...
Yes, it's an interesting idea, isn't it? I've had a post on that as well, see Filtering Gravity. I hope there is some follow-up work on this out soon.
MATF: a multi-attribute trust framework for MANETs
Muhammad Saleem Khan1,
Majid Iqbal Khan1,
Saif-Ur-Rehman Malik1,
Osman Khalid2,
Mukhtar Azim1 &
Nadeem Javaid1
EURASIP Journal on Wireless Communications and Networking volume 2016, Article number: 197 (2016) Cite this article
To enhance the security of mobile ad hoc networks (MANETs), various trust-based security schemes have been proposed. However, in most of the trust-based security schemes, a node's trust is computed based on a single trust attribute, such as data forwarding. Using a single trust attribute may cause the bootstrapping problem, which refers to the time required by the trust-based scheme to build trust and reputation among nodes in the network. The bootstrapping problem in these schemes may provide more opportunities to misbehaving nodes to drop packets and remain undetected for a longer time in the network. Moreover, using a single trust attribute does not effectively deal with selective misbehavior by a smart malicious node.
In this work, we propose a scheme that is based on multi-attribute trust criteria to minimize the bootstrapping time, which ultimately improves the performance of the scheme in terms of a high malicious node detection rate, a low false-positive rate, and a low packet loss rate. The contributions of this paper are (a) identification of trust attributes along with the development of a comprehensive multi-attribute trust framework (MATF) using multiple watchdogs for malicious node identification and isolation, (b) formal modeling and verification of our proposed MATF using HLPN, SMT-Lib, and the Z3 Solver, and (c) simulation-based validation and evaluation of the proposed trust framework in the context of the optimized link state routing (OLSR) protocol against various security threats, such as message dropping, message modification, and link withholding attacks. The simulation results revealed that the proposed trust framework achieves about a 98 % detection rate of malicious nodes with only 1–2 % false positives. Moreover, the proposed MATF has an improved packet delivery ratio as compared to the single attribute-based scheme.
Due to the non-availability of a central authority and the unreliability of wireless links, the routing protocols in mobile ad hoc networks (MANETs) are vulnerable to various types of security threats [1]. The resource-constrained nature of MANETs, with continuously evolving topology and frequent network partitioning, complicates the security challenges in MANETs' routing. Most of the secure routing protocols for MANETs utilize some form of cryptography to ensure the network security [2–4]. However, there are scenarios where cryptographic techniques fail to capture the malicious behavior of a node. For example, (a) to disrupt the network topology, a node may provide falsified routing information to other nodes, (b) to preserve its battery, a node may not participate in the routing functions, and (c) a node may drop data packets instead of forwarding them because of malicious intention. To address these issues, trust-based security schemes [5–10] have been proposed to augment the security of traditional cryptography-based approaches.
In MANETs, trust can be defined as the extent to which a node can fulfill the expectations of other node(s) as per the specification of an underlying communication protocol [11]. In trust-based security schemes, each node within the network manages an independent trust table to compute and store the trust values of other nodes. The routing decisions are based on the computed trust values of the nodes. Although a lot of research work has been carried out in the field of trust- and reputation-based systems in MANETs, almost all the proposed schemes suffer from one basic problem known as the bootstrapping problem [12]. It refers to the time required by the trust-based scheme to build trust and reputation among nodes in the network. Such a delay in the accumulation of trust and reputation is often not acceptable in time-critical applications. Due to the slow trust building process, a misbehaving node may have more opportunities to drop packets before being detected as malicious. One of the basic reasons for the aforementioned bootstrapping problem is that in most of the trust-based security schemes, an evaluated node's trust is computed based on a single trust attribute, such as data forwarding [13–17]. Moreover, using a single trust attribute may not effectively deal with the problem of selective misbehavior [12]. A smart malicious node may misbehave in the context of one network function and behave properly for other network functions. For example, a node may misbehave in the context of data forwarding while demonstrating good behavior when dealing with control packet forwarding. As the existing schemes [7–10, 13–17] use a single trust attribute, the aforementioned selectively misbehaving node is declared as a malicious node and isolated from the routing path, and hence will no longer be available to be used for other network functions.
In trust-based security schemes, each node collects two major types of information about other nodes: first-hand information (based on self-observations) and second-hand information (based on other nodes' observations). In the literature, efforts have been made to minimize the bootstrapping time and to increase the detection rate by using second-hand information to evaluate the trustworthiness of the nodes [14, 17]. However, the aforementioned schemes still suffer from the data sparsity problem [14]. In trust-based security schemes, data sparsity is a situation where a lack of information or insufficient interaction experience makes it difficult to evaluate a node's trust, especially in the early time of network establishment. Moreover, using second-hand information without any filtration may cause bad-mouthing and false praise attacks [11], which ultimately cause high false positive and false negative rates. In a bad-mouthing attack, a misbehaving node propagates dishonest and unfair recommendations against an innocent node with a negative intention to confuse the trust model. Similarly, in a false praise attack, a misbehaving node propagates unfairly positive recommendations in favor of a malicious node to mislead the trust model.
It is also of critical importance to prove the correctness of the trust-based security schemes in dynamic and unpredictable environments, such as of MANETs. A well-established approach to prove the correctness of a system's model is by employing a formal verification process [18].
To minimize the bootstrapping time and expedite the trust building process, and to effectively deal with the selective misbehavior, there is a strong need for a mechanism that works on multi-attribute-based trust strategy. Each node should be observed in the context of all the possible network functions, such as control message generation, control message forwarding, and data packet forwarding. Moreover, an efficient recommendation filtration technique is required to filter the source of information and information itself. To avoid bad-mouthing and false praise attacks, second-hand information from only designated and trustworthy nodes must be considered in a trust computation process.
Our contributions: In this work, we address the bootstrapping and delusive trust dissemination problems when using second-hand information. We propose a trust-based security scheme which uses multi-attribute trust criteria, namely control packet generation, control packet forwarding, and data packet forwarding. Using multi-attribute trust criteria minimizes the bootstrapping time and expedites the trust building process, as nodes are assessed in the context of the different aforementioned network functions. Moreover, to avoid bad-mouthing and false praise attacks, second-hand information is considered from only those nodes, called watchdog nodes, whose trust values are above some threshold. Furthermore, only second-hand information from recommender nodes with a trust deviation (τ dev) value1 less than the deviation threshold (τ dev−th) will be considered in the trust computation process. This paper has the following major contributions.
Identification of the trust attributes for a node's trust building process.
Development of a comprehensive multi-attribute and multiple watchdog nodes trust framework (MATF) for malicious node detection and isolation.
Formal verification of our proposed MATF using high-level Petri nets (HLPNs), satisfiability modulo theories-library (SMT-Lib), and Z3 Solver.
Implementation of the proposed trust framework in the context of optimized link state routing (OLSR) protocol in NS-2 [19].
Simulation-based validation and evaluation of the proposed MATF in comparison with the recently proposed trust scheme by Shabut et al. [14] (single trust-attribute-based scheme), against various security threats, such as message dropping, message modification, and link withholding attacks.
Security analysis of the proposed MATF.
The rest of the paper is organized as follows. In Section 2, we present the related work. Section 3 presents the discussion on trust and its formulation in MANETs along with a multi-attribute trust framework. Section 4 presents the formal modeling and verification of the proposed framework. Section 5 presents the simulation results and summarizes the performance evaluation of the proposed model. Security analysis of the proposed scheme is presented in Section 6, and the paper is concluded in Section 7.
Trust-based security schemes are one of the active research areas for ensuring security in MANETs [20]. In recent years, different trust-based security schemes have been proposed to enhance the security in MANETs. In these schemes, nodes evaluate their neighbor nodes based on first-hand information or using recommendations from other nodes [12, 20]. Though these schemes paid some attention to the bootstrapping and delusive trust dissemination problems, an efficient mechanism to mitigate the aforementioned problems is still a challenging issue in MANETs. We categorize the state-of-the-art schemes in the following categories.
Watchdog and path-rater schemes
One of the key works in trust-based schemes was presented by Marti et al. [13]. They proposed watchdog and path-rater mechanisms implemented on the dynamic source routing (DSR) protocol to minimize the impact of malicious nodes on the throughput of the network. The aforementioned approach detects misbehaving nodes by using only the source node as a monitoring node. However, the proposed scheme has some major shortcomings, such as its inability to detect misbehaving nodes in the case of ambiguous collisions, receiver collisions, limited transmission power, partial dropping, and collaborative attacks [21]. Moreover, the watchdog and path-rater mechanisms utilize only first-hand information for node misbehavior detection, which causes the aforementioned issues.
Feedback-based schemes
To solve the issues in the watchdog and path-rater schemes, various approaches were proposed, such as acknowledgment-based detection systems, including two network-layer acknowledgment-based schemes termed TWOACK [22] and adaptive acknowledgment (AACK) [23], and enhanced adaptive acknowledgment (EAACK) [21]. The TWOACK scheme focuses on solving the receiver collision and limited transmission power problems of the watchdog and path-rater approach. Every data packet transmitted is acknowledged by every three consecutive nodes along the path from the source to the destination. Sheltami et al. [23] proposed an improved version of the acknowledgment-based scheme, AACK. The AACK is an intrusion detection system which is a combination of TWOACK and an end-to-end acknowledgement scheme. Although AACK has significantly reduced the overhead as compared to the TWOACK scheme, it still suffers from the problem of detecting malicious nodes generating false misbehavior reports and forged acknowledgment packets. To remove the shortcomings of the acknowledgement-based schemes, Shakshuki et al. [21] proposed the EAACK protocol to detect misbehaving nodes in MANET environments using digital signature algorithm (DSA) [24] and Rivest-Shamir-Adleman (RSA) [25] digital signatures. Although their technique can validate and authenticate the acknowledgement packets, it does so at the expense of extra resources and also requires pre-distributed keys for the digital signatures.
Network monitoring-based schemes
Buchegger et al. presented the cooperation of nodes-fairness in distributed ad-hoc networks (CONFIDANT) protocol [7] to detect misbehaving nodes in the network. In addition to first-hand information, second-hand information is also used while computing a node's trustworthiness. In the CONFIDANT protocol, first-hand information is propagated every 3 s, while the weight given to second-hand information is 20 %. To avoid the false praise attack [11], only negative experiences are shared as second-hand information among nodes. One of the shortcomings of the CONFIDANT protocol is that the ALARM messages used in the protocol can be exploited by bad-mouthing nodes. Bad-mouthing nodes may generate ALARM messages against legitimate nodes to bias the protocol's results [22]. Similarly, a collaborative reputation mechanism to enforce node cooperation in MANETs called CORE [9] also uses second-hand information to compute the reputation of a node. Only positive experiences are shared by a node with other nodes in the network to avoid the bad-mouthing attack.
In contrast to CONFIDANT and CORE [9], the observation-based cooperation enforcement in ad hoc networks (OCEAN) protocol [26] uses only first-hand observations to avoid false praising and bad-mouthing types of attack. In OCEAN, an avoid-list strategy is implemented so that traffic from misbehaving nodes is not forwarded. However, if a node identifies that its ID has been inserted into the avoid-list, it may change its strategy. Tamper-proof hardware is required to secure the avoid-list against the aforementioned incident.
To filter the second-hand information, [14] proposed a defence trust scheme based on three parameters: (a) a confidence value, indicating how many interactions took place between a recommender node and an evaluated node, (b) the deviation between the opinions of the recommender node and the evaluating node, and (c) a closeness value, indicating how close, distance-wise, the recommender node and the evaluating node are. On the basis of the aforementioned values, an evaluating node filters the second-hand information in the proposed trust scheme. However, the second-hand information filtration mechanism in the proposed scheme may not work well in some scenarios. For example, recommender nodes R 1,R 2…R N send a bad reputation value of misbehaving node M to evaluating node E, while node E has a good reputation value for node M based on its own first-hand information. In the aforementioned scheme [14], such recommendations are filtered out because of the larger deviation in the trust values. In contrast, our proposed MATF scheme filters the recommendations using the following methodology. When recommendations are received at the evaluating node from a recommender node about some particular evaluated node, the evaluating node averages the recommendations already received from all the watchdog nodes (recommender nodes) and then finds the trust deviation of the recommender node's trust value from the average trust value. If the deviation in trust values is less than a certain deviation threshold, weight is given to the recommendations in the trust computation; otherwise, no weight is given to these recommendations.
Li et al. [27] proposed a simple trust model which takes the packet forwarding ratio as the metric to evaluate the trustworthiness of neighbor nodes. A node's trust is computed as a weighted sum of packet forwarding ratios. To find a path trust, the continued product of the node trust values along a routing path is computed. The aforementioned approach only considers packet forwarding behavior as a trust metric. A trust prediction model based on a node's historical behavior, called the trust-based source routing (TSR) protocol, was presented in [28]. On the basis of assessment and prediction results, the nodes can select the shortest trusted route to transmit the required packets. One of the weaknesses of this work is that no second-hand information is considered for trust computation, which may result in the bootstrapping and data sparsity problems [14]. Trust-based security schemes like [16] only consider the security of data traffic, while schemes like [29, 30] only consider the security of control traffic. Moreover, the aforementioned solutions result in more energy consumption due to excessive information propagation and detection messages. In [31], energy efficiency is considered as one of the parameters, improving previously existing trust-based algorithms.
To summarize, the trust-based security schemes discussed in this section have some open problems that need to be solved. Most of the existing schemes use a single trust criterion for the trust building process, which causes the bootstrapping and data sparsity problems. Minimizing the bootstrapping time and the data sparsity problem is still an open issue [12, 14]. Moreover, using all the available information from each and every node in the network does help in building reputation and trust among nodes quickly, but as discussed earlier, it makes the system vulnerable to false report attacks. To mitigate the aforementioned false praise and bad-mouthing attacks, there should be a mechanism which filters out the spurious second-hand information. Although the aforementioned approaches suggest misbehavior detection schemes, these schemes use a single trust attribute, such as data forwarding. Moreover, second-hand information is considered from recommender nodes without any filtration, which can result in erroneous trust estimation, especially under high nodal mobility. In contrast, our proposed MATF is based on multiple trust attributes with multiple observer nodes, which results in better trust estimation. Second-hand information is considered from recommender nodes with deviation values less than the deviation threshold, which results in better trust estimation, especially under high nodal mobility.
MATF: the proposed scheme
In this section, we present the trust attributes, trust formulation in the proposed MATF, a mechanism for trust deviation test, and watchdog node selection process. In the proposed MATF, the watchdog node is the designated neighbor node of the evaluating node to monitor the activities of the evaluated node B on the basis of defined trust attributes and is represented by W. It can be the evaluating node itself or any other node that has been assigned the monitoring task by the evaluating node. The evaluating node computes the final trust of the evaluated node based on its own observations and those reported by the watchdog nodes. Our proposed trust model consists of three steps. The first step is the monitoring step, in which an evaluating node S and watchdog nodes W n observe the behavior of an evaluated node B in the context of trust attributes ρ. For clarity, in the following equations, we treat an evaluating node as one of the watchdog nodes. In the second step, an evaluating node aggregates its own observations and the watchdog nodes' observations in the context of each trust attribute. Finally, an evaluating node computes the final trust of an evaluated node in the context of all the trust attributes using the weighted sum. Also, the value range of ρ is [0,1], 0 being the minimum and 1 the maximum.
Trust attributes
Trust attributes are the factors responsible for shaping the trust levels and are denoted by ρ. Each trust attribute value ranges between 0 and 1. Before going into the details of how we apply trust in MANETs, we first discuss the basic trust attributes and then define our trust model.
We have identified the following trust attributes in the context of control and data traffic for the proposed trust model.
Control packet generation (ρ cpg)
Control packet is the protocol-specific information that nodes exchange to build routes and maintain topology. By using this trust attribute, an evaluating node assesses the trustworthiness of the evaluated node in the context of control packet generation behavior as specified in the underlying routing protocol. Observations of a node W about node B in terms of control packet generation is given in the following equation:
$$ \rho_{\text{cpg}}^{W,B}\left(t,t+\Omega \right)= \frac{p}{p_{\text{exp}}}, $$
where t is the current time, Ω is the trust update period, p is the total actual number of control messages generated in the time interval (t,t+Ω) by node B as observed by W, and p exp is the expected number of control messages that should have been generated by node B.
An evaluating node then aggregates its observations and the observations reported by the watchdog nodes to build a reputation about node B as shown in the following equation:
$$ \rho_{\text{cpg}}(t,t+\Omega)= \alpha\rho_{\text{cpg}}^{S,B}+(1-\alpha)\left(\frac{1}{n}\sum\limits_{{i}=1}^{n} \rho_{\text{cpg}}^{W_{i},B}\right), $$
where α is the weight given to the evaluating node's own observation, with (1−α) given to the watchdog nodes' observations.
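As a concrete illustration of Eqs. (1) and (2), a minimal sketch of the control packet generation metric is given below. This fragment is ours and is not part of the published NS-2 implementation; the names p_observed, p_expected, and alpha are placeholders for the quantities defined above, and the clamping to 1.0 merely keeps ρ within the [0,1] range stated earlier.

```python
def rho_cpg_firsthand(p_observed, p_expected):
    # Eq. (1): ratio of control packets actually generated by the evaluated
    # node B, as overheard by one watchdog W, to the number expected from
    # the routing protocol specification.
    if p_expected == 0:
        return 1.0  # nothing was expected, so no misbehavior can be observed
    return min(p_observed / p_expected, 1.0)

def rho_cpg_aggregate(own_obs, watchdog_obs, alpha):
    # Eq. (2): weight alpha for the evaluating node's own observation,
    # (1 - alpha) for the mean of the watchdog observations.
    if not watchdog_obs:
        return own_obs
    secondhand = sum(watchdog_obs) / len(watchdog_obs)
    return alpha * own_obs + (1 - alpha) * secondhand
```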
Control packet forwarding (ρ cpf)
Nodes in a MANET depend on mutual cooperation to forward traffic. A non-cooperative forwarding node may drop packets or forward control packets with delay, which can result in an inconsistent view of the network topology. Let us denote the packets that are successfully overheard as p ack. The observations of a node W regarding node B in terms of control packet forwarding can be computed using the following equation:
$$ \rho_{\text{cpf}}^{W,B}\left(t,t+\Omega \right) =1- \frac{p - p_{\text{ack}}}{p}. $$
According to the above equation, the minimum possible packet loss rate observed at an evaluating/watchdog node W is 0, while the maximum possible packet loss rate is equal to 1, i.e., all the sent packets are dropped by misbehaving nodes. An evaluating node then aggregates its own observations and that of watchdog nodes to obtain an aggregated reputation of node B in terms of control packet forwarding as follows:
$$ \rho_{\text{cpf}}\left(t,t+\Omega \right)=\alpha\rho_{\text{cpf}}^{(S,B)}+(1-\alpha)\frac{1}{n}\sum\limits_{{i}=1}^{n} \left(\rho_{\text{cpf}}^{(W_{i},B)}\right), $$
where, as in Eq. (2), α is the weight given to the evaluating node's own observations and (1−α) to the watchdog nodes' observations.
Data packet forwarding (ρ dpf)
In addition to control traffic, nodes are also responsible for relaying data packets. A node may drop data packets, or forward data packets with delay or with maliciously modified contents. The observations of node W regarding node B in terms of data packet forwarding can be computed using the following equation:
$$ \rho_{\text{dpf}}^{W,B}\left(t,t+\Omega \right) = 1-\frac{\xi - p_{\text{ack}}}{\xi}, $$
where ξ is the total number of data packet sent and p ack is the data packet successfully overheard at watchdog node W. Aggregating evaluating node's and watchdog node's observations, we get the aggregated reputation of an evaluated node in the context of data packet forwarding as given in the following equation:
$$ \rho_{\text{dpf}}\left(t,t+\Omega \right) =\alpha\rho_{\text{dpf}}^{(S,B)}+ (1-\alpha)\frac{1}{n}\sum\limits_{{i}=1}^{n} \left(\rho_{\text{dpf}}^{(W_{i},B)}\right). $$
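The two forwarding metrics share the same structure, so a single sketch covers Eqs. (3)–(6). Again, this is an illustrative fragment of ours rather than the authors' implementation; sent and overheard_acks stand for the per-interval counters p (or ξ) and p_ack, and the aggregation step reuses the weighting of Eq. (2).

```python
def forwarding_ratio(sent, overheard_acks):
    # Eqs. (3) and (5): one minus the observed loss rate, i.e. the fraction
    # of packets handed to B that the watchdog later overhears B forwarding.
    if sent == 0:
        return 1.0  # no traffic was relayed through B in this interval
    return 1.0 - (sent - overheard_acks) / sent

def forwarding_aggregate(own_obs, watchdog_obs, alpha):
    # Eqs. (4) and (6): same alpha / (1 - alpha) weighting as for rho_cpg.
    if not watchdog_obs:
        return own_obs
    return alpha * own_obs + (1 - alpha) * (sum(watchdog_obs) / len(watchdog_obs))
```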
Trust formulation and algorithm
We are now able to combine the equations introduced so far into our mathematical model for the multi-attribute trust computation. By combining Eqs. 2, 4 and 6, we obtain
$$ {\tau_{S}^{B}}\left(t,t+\Omega \right) = \frac{\delta\rho_{\text{cpg}}+\beta\rho_{\text{cpf}}+\gamma\rho_{\text{dpf}}}{\delta+\beta+\gamma}, $$
where δ, β, and γ are weight factors assigned to each metric and δ+β+γ=3. The weights can be tuned based on the specific security goal to be achieved. For example, if higher throughput and packet delivery are the concern, we consider the data traffic as vital, so the data forwarding parameter carries more weight than the other parameters, such as control packet generation and forwarding. An evaluating node S aggregates the trust computed for evaluated node B during the time interval (t,t+Ω) in the context of each trust attribute ρ and assigns weights to each of the aforementioned attributes in the above equation. The trust computed in Eq. (7) is compared with a threshold value to make a decision regarding the trustworthiness of a node.
Algorithm 1 presents the pseudo code for the MATF. In the proposed algorithm, an evaluating node and designated watchdog nodes observe the evaluated node in terms of different network functions during the monitoring period (lines 1–4). A filtration criteria is applied on the recommendations received from watchdog nodes (line 5). Based on the filtered recommendations, an evaluating node computes the trust of an evaluated node (lines 8–10). If the trust of an evaluated node is lower than a threshold (lines 12–13), it is isolated from the routing path and a new route selection process is initiated (Line 14).
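Since the pseudo code of Algorithm 1 is not reproduced here, the following sketch shows how its decision step might look when Eq. (7) is combined with the trust threshold used in Section 5. The helpers isolate() and trigger_route_discovery() are hypothetical stand-ins for the corresponding routing actions, stubbed out so the fragment runs on its own.

```python
TRUST_THRESHOLD = 0.4  # value used in the simulation experiments (Section 5)

def node_trust(rho_cpg, rho_cpf, rho_dpf, delta=1.0, beta=1.0, gamma=1.0):
    # Eq. (7): weighted sum over the three attributes, with delta+beta+gamma = 3.
    return (delta * rho_cpg + beta * rho_cpf + gamma * rho_dpf) / (delta + beta + gamma)

def isolate(node):
    # Placeholder for removing the node from the routing path.
    print("node", node, "isolated from routing paths")

def trigger_route_discovery():
    # Placeholder for initiating a new route selection.
    print("initiating new route selection")

def evaluate(node, rho_cpg, rho_cpf, rho_dpf):
    # Decision step of Algorithm 1 (lines 12-14), in simplified form.
    tau = node_trust(rho_cpg, rho_cpf, rho_dpf)
    if tau < TRUST_THRESHOLD:
        isolate(node)
        trigger_route_discovery()
    return tau
```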
Trust deviation
The trust computed by the watchdog nodes is used as second-hand information in the proposed scheme. To avoid bad-mouthing and false praise attacks, only information that is received from the designated nodes and has a trust deviation value less than the deviation threshold is used by the evaluating node. The trust deviation can be computed as given in the following equation.
$$ \tau_{\text{dev}}=\left|\left(\frac{1}{k-1}\sum\limits_{{i}=1}^{k-1}\tau_{(W_{i},j)}\right)-\tau_{(W_{k},j)}\right| \leq \tau_{\mathrm{dev-th}}, $$
where \(\tau _{(W_{i},j)}\), for i=1,…,k−1, are the trust values already received from the watchdog nodes about the evaluated node j (their average is taken in the equation above), and \(\tau _{(W_{k},j)}\) is the new trust recommendation received from watchdog node W k about the evaluated node j.
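A direct transcription of Eq. (8) into code is short; the sketch below (ours, with hypothetical names) accepts a new recommendation only if its deviation from the average of the recommendations already collected stays within the deviation threshold, for which 0.4 turns out to be the optimal value in Section 5.

```python
def accept_recommendation(previous_recs, new_rec, dev_threshold=0.4):
    # Eq. (8): compare the k-th watchdog's recommendation with the mean of
    # the k-1 recommendations already received about the same evaluated node.
    if not previous_recs:
        return True  # nothing to compare against yet
    avg = sum(previous_recs) / len(previous_recs)
    return abs(avg - new_rec) <= dev_threshold
```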
Watchdog selection process
In order to avoid the bad-mouthing and false praise attacks, the second-hand information in the proposed MATF is considered from only designated and trustworthy watchdog nodes, as discussed in the previous subsection. In this section, we discuss the selection process of the watchdog nodes, which perform the monitoring task. When the network is initialized, each evaluating node selects a set of neighboring nodes, called the watchdog set, to monitor the behavior of a particular evaluated node. The proposed security scheme allows flexibility in the watchdog selection. Depending on the available network topology, one or multiple watchdogs may be selected. There is no fixed ratio of watchdog nodes to be selected per node; it varies depending on the available network topology. It is worth mentioning that in case of any change in the network topology, an evaluating node re-computes the watchdog nodes. The criteria and selection process of watchdog nodes are presented in Algorithm 2, which is a modified version of the relay node selection algorithm presented in [32]. An example scenario of the detailed working of the watchdog selection algorithm is presented below.
In the given scenario, node S discovers its neighbors through exchange of control messages and calculates the one-hop neighbor set N 1 and the two-hop neighbor set N 2 (used as an input in Algorithm 2). From the set N 1, each evaluating node S computes the relay node set R(S) (lines 6–18) and the watchdog set W(S), having a trust value greater than the trust threshold (lines 20–26). R(S) is the smallest possible subset of N 1(S) required to reach all nodes in N 2(S).
As an example, in Fig. 1, R(S)={B,C,E} contains the minimum number of one-hop neighbors of S required to reach all two-hop neighbors of S. Thereafter, node S selects the watchdog set for each node present in R(S)={B,C,E}. To calculate the watchdog set for each node in the R(S), the node S takes the intersection of the one-hop neighbor set N1(S) and the one-hop neighbor set of each relay node.
An example working scenario of the MATF
Node S broadcasts the W(S) to the neighboring nodes by appending it in the periodic control messages along with R(S). This enables the neighboring nodes of S to check whether or not they have been selected as a watchdog. By utilizing the broadcast information sent by the node S, each node builds the watchdog selector set. The watchdog selector set consists of all those nodes that have selected node W as a watchdog. For example, as reflected in Fig. 1, node S populates the W(S){A,H} for the relay node C by taking the intersection of sets N 1(S)={A,B,C,D,E,H} and N 1(C)={A,S,H,X}. Thereafter, node S broadcasts the watchdog set to inform both nodes A and H that from now onward, these nodes have to monitor node C.
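The watchdog assignment illustrated in Fig. 1 boils down to set intersections over the one-hop neighbor lists. The fragment below is a simplified sketch of that part of Algorithm 2 written by us (the greedy relay selection is omitted); n1 maps each node to its one-hop neighbor set, and trust holds the current trust values.

```python
def select_watchdogs(source, relay_set, n1, trust, threshold=0.4):
    # For every relay node r chosen by the source S, the common one-hop
    # neighbours of S and r that are not relays themselves and whose trust
    # exceeds the threshold are asked to monitor r (cf. Fig. 1 and rule R2).
    watchdogs = {}
    for r in relay_set:
        common = n1[source] & n1[r]
        watchdogs[r] = {w for w in common
                        if w not in relay_set and trust.get(w, 1.0) > threshold}
    return watchdogs

# Reproducing the example of Fig. 1 for relay node C:
n1 = {'S': {'A', 'B', 'C', 'D', 'E', 'H'}, 'C': {'A', 'S', 'H', 'X'}}
print(select_watchdogs('S', {'C'}, n1, trust={}))   # {'C': {'A', 'H'}}
```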
Formal modeling and verification of the MATF
Formal verification is the process of verifying that algorithms work correctly with respect to some formal property [33]. Formally modeling systems helps to analyze the interconnection of components and processes and how the information is processed in the system [34]. Formal modeling provides valuable tools to design, evaluate, and verify such protocols [35]. To verify the correctness of the MATF, we use HLPNs for the modeling and analysis [18]. HLPNs provide a mathematical representation and help to analyze the behavior and structural properties of the system.
To perform a formal verification of the MATF, the HLPN models are first translated into SMT-Lib [36] using the Z3 Solver [37]. Then, the correctness properties were identified and verified to observe the expected behavior of the models.
In this section, we present a brief overview of HLPNs and a formal verification of the MATF.
High-level Petri nets
Petri nets are used to model systems which are non-deterministic, distributed, and parallel in nature. HLPNs are a variation of conventional Petri nets. A HLPN is a structure comprised of a seven-tuple, N=(P,T,F,φ,R,L,M 0). The meaning of each variable is provided in Table 1.
Table 1 Variables and meaning
SMT-Lib and Z3 Solver
SMT is an area of automated deduction for checking the satisfiability of formulas over some theories of interest and has its roots in Boolean satisfiability (SAT) solvers [34]. The SMT-Lib is an international initiative that provides a standard benchmarking platform that works on a common input/output framework. In this work, we used Z3, a high-performance theorem prover and satisfiability checker developed by Microsoft Research [38].
Modeling and verification of the MATF
To model and verify the design of the MATF, the places P and the associated types need to be specified. The data type refers to a non-empty set of data items associated with a P. The data types used in the HLPN model of the MATF are described in Table 2. Figure 2 a presents the HLPN model for the relay and watchdog node selection in the MATF. Moreover, message forwarding, trust computation, and malicious node isolation are depicted in the HLPN model shown in Fig. 2 b. As depicted in Fig. 2 a, there are six places in the relay and watchdog selection HLPN, whereas there are seven places in the HLPN model for trust computation, as shown in Fig. 2 b. The names of the places and their descriptions are given in Table 3. The next step is to define the set of rules, pre-conditions, and post-conditions to map to T. The mapping of transitions T to the processes used in the MATF is referred to as rules (R). After defining the notations, we can now define formulas (pre- and post-conditions) to map on transitions in the following. The set of transitions T ={Gen-Nlist, Gen-WDN, Gen-relay, Broadcast, Forward, Trust-obs, Comp-Mali}. The following are the rules used for modeling and verification.
HLPN of the MATF (a, b)
Table 2 Data types and their descriptions
Table 3 Places and mappings of data types to the places
The rule R1 depicts the HELLO message processing. When the network is initialized, nodes exchange HELLO messages with each other to discover the neighbors in the network. The HELLO message contains the list of one-hop neighbors of a node. On the basis of the received HELLO messages, a node computes its one-hop and two-hop neighbors.
$${} \begin{aligned} & \mathbf{R(Gen-Nlist)}=\forall hp\in HP,\forall gn\in G-Node\mid \forall 1hl\in 1HL\\ &| 1hl[2]:= GenNeighbour \left(hp,gn[1]\right)\wedge 1HL\prime=1HL\cup \\ &\left\{\left(gn[1],1hl[2]\right)\right\}\wedge \forall 2hl\in 2HL | 2hl[2]:=Gen-2HN\\ &(hp,\ gn[1])\wedge 2HL\prime=2HL\cup \{(gn[1],2hl[2])\} \end{aligned} $$
(R1)
After populating the one-hop and two-hop lists, watchdog nodes and relay nodes are selected for monitoring and packet relaying purposes, respectively, as depicted in Algorithm 2. In rule R2, the watchdog nodes are selected using the one-hop list. The nodes that are not relay nodes and are in the one-hop lists of both the relay node and the source node are selected as watchdogs. Also, the set of relay nodes is selected from the one-hop neighbor list so as to reach all two-hop neighbors. The transitions Gen-WDN and Gen-relay are mapped to the following rules R2 and R3, respectively.
$${} \begin{aligned} & \mathbf{R(Gen-WDN)}=\! \forall g1\in G-1HL, \forall \ mpl\in \!G-MPL,\forall wdl \\ &\in WDL,\forall gn\in Gn | gn[1] \notin mpl\longrightarrow wdl[1]:=mpl [1]\wedge\\ &wdl\left [2\right] :=mpl\left [2\right]\wedge wdl[3]:=gn[1] \wedge WDL\prime=WDL\cup\\ &\left\{\left(wdl[1], wdl[2], wdl [3] \right)\right\} \end{aligned} $$
$${} \begin{aligned} & \mathbf{R(Gen-relay)}=\forall \ g1\in G-1HL,\forall g2\in G-2HL,\forall gn\in\\ &GN, \forall mpl\in relay-L| \left[Con\left(g1{[1]}_{i:g1[1]}, gn[1], g2[1]\right)>\right.\\ &Con \left(g1{[1]}_{j:g1[1] \wedge j\ne i}, gn[1] g2[1]\right)\vee Con-iso\left(g1[1], gn\right.\\ &\left.\left.\![1], g2 [1]\right)=True\right] \longrightarrow \ mpl[\!1]:=gn[1]\wedge mpl[2]:=g1\\ & [1]\wedge relay-L\prime= relay-L\cup \{(mpl[1],mpl[2])\} \end{aligned} $$
At this point, the watchdog and relay nodes have been selected. Now, the source node generates a message and wants to broadcast it into the network. Rule R4 depicts this process: the source node generates the message, the watchdog nodes overhear it, and the respective relay node receives it.
$${} \begin{aligned} & \mathbf{R(Broadcast)}= \forall m\in Msg,\forall oh-sn\in OH-SN, \forall \ rm\in \\ &Rec-SN|oh-sn [1]:=m[1]\wedge oh-sn [2]:=m[2]\wedge oh-\\ &sn [4] :=m[4] \wedge oh-sn[5]:=m[5] \wedge OH-SN\prime=OH-\\ &SN \cup \lbrace(oh-sn[1],oh-sn[2],oh-sn[3],oh-sn[4],\\ &oh-sn[5],oh-sn[6],oh-sn [7])\rbrace \wedge rm[1]:=m[1]\wedge \\ &rm[2]:=m[2]\wedge rm[4]:=m[4]\wedge rm[5]:=m[5]\wedge Rec-\\ &SN\prime=Rec-SN \cup (rm[1], rm[2], rm[3],rm[4],rm[5], \\ &rm[6],rm[7],rm[8])\rbrace \end{aligned} $$
The relay node forwards the message that it received from the source node. When the relay node forwards the message, the watchdog nodes and the source node overhear the forwarded message. The same is depicted in rule R5. We compute the trust of the relay nodes by (a) counting the number of messages forwarded by the relay nodes through analysis of the messages overheard by the source node and the watchdog nodes, (b) checking the contents of the messages forwarded by the relay node, and (c) investigating whether the relay node generates its own control messages. The computations are performed the same way as explained in Algorithm 1.
$${} \begin{aligned} & \mathbf{R(Forward)}=\forall ohm \in OH-relay \forall rem \in Get-Msg,\forall f\\ &\in Flood,\forall ohsn \in OH-SNMP \mid rem[2]\neq NULL \wedge rem \\ &[8]=Send \left(\right) \longrightarrow (f[1]:=rem[1] \wedge f[2]:=rem[2]\wedge f[3]\\ &\!\!:=rem[4]\wedge f[4]:=rem[5] \wedge Flood=Flood \cup (f[1],f[2],\\ &f[3],f[4])\wedge (ohm[1]:=rem[1]\wedge ohm[2]:=rem[2]\wedge\\ &ohm[\!3]:=rem[\!3]\wedge ohm[4]:=rem[4]\wedge ohm[5]:=rem[5]\\ &\wedge ohm[6]:=rem[6]\wedge ohm[7]:=rem[7] \wedge OH-relay\prime=\\ &OH-relay \cup (ohm[\!1],ohm[\!2],ohm[3], ohm[\!4],ohm[\!5],\\ &ohm[6],ohm[\!7])ohsn[\!6]:=rem[2]\wedge OH-SNMP=OH\\ &\!-SNMP\! \cup \!\lbrace(ohsn[1],ohsn[2],ohsn[3],ohsn[4],ohsn[5],\\ &ohsn[6],ohsn[7]) \rbrace) \end{aligned} $$
In rule R6, the source node computes the trust of the relay node based on its own observations and those received from the watchdog nodes, according to Eq. 7. In rule R7, the trust computed in rule R6 is compared to the trust threshold, and if a node's trust falls below the threshold, the node is isolated from the routing path.
$${} \begin{aligned} & \mathbf{R(Trust-Obs)}=\forall gsno \in Get-SNO,\forall gwdo \in Get-\\ &WDO,\forall t\in Trust \mid gsno[\!4]=gwdo[\!4] \wedge gsno[\!2]=gwdo[\!2]\\ &\!\longrightarrow t[1](gsno[\!6])\cup (gwdo[6]) gsno[2] \alpha \wedge Content(gsno\\ &[2],gsno[6])= Content (gwdo[\!2],gwdo[\!6])\longrightarrow t[2] \beta \wedge\\ &gsno[5]=TC \wedge gwdo[5]=TC \wedge Gen-TC-Pack (gwdo\\ &[4],gsno[4])> gsno[7] \longrightarrow t[3]\gamma \wedge t[4],gsno[4]\wedge T\prime=\\ &T \cup \lbrace(t[1],t[2],t[3],t[4])\rbrace \end{aligned} $$
$${} \begin{aligned} & \mathbf{R(Comp-Mali)}=\forall \ gto \in Get-T,\forall gth \in Get-Th\forall\\ &cm \in Comp-Mali \mid Sum(gto[1], vgto[2],gto[3])< gth\\ &\!\longrightarrow \!cm[\!1]:= gto[\!4] \wedge cm[\!2]:=Sum(gto[\!1],gto[\!2],gto[\!3])\\ &CM\prime= CM\cup (cm[1],cm[2]) \end{aligned} $$
Verification of properties
In our analysis, we aim at verification of the following correctness properties.
Property 1:
Common neighbors of source node S and relay node x having a trust greater than the trust threshold must be selected as watchdog nodes.
Property 2:
Second-hand information must be considered from only those nodes which are designated as watchdog nodes.
Property 3:
Second-hand information is considered from only those nodes having a trust value greater than the trust threshold and whose trust deviation is less than the deviation threshold.
Property 4:
The trust of a malicious node M misbehaving in the context of one of the trust attributes must be decremented as per the specification of the MATF.
Verification results
To perform the verification of the HLPN models using Z3, we unroll the model M and the formula f (the properties), which provides M k and f k , respectively. The formulas are then passed to Z3 to check whether M k ⊧ f k , i.e., whether the formula f holds in the model M up to the bound k. The solver performs the verification and reports the result as satisfiable (s a t) or unsatisfiable (u n s a t). If the answer is sat, then the solver generates a counterexample, which depicts a violation of the property or formula f. If the answer is unsat, then the formula or property f holds in M up to the bound k (in our case, k is the execution time). In these verification results, we verify the properties mentioned previously. It is worth mentioning here that in our formal verification results, we verify the correctness properties of the proposed scheme, not its performance (for performance evaluation results, please refer to Section 5).
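To give a flavor of what such a check looks like, the toy fragment below encodes a heavily simplified version of Property 4 with the Z3 Python bindings: the behavior of rule R7 (a node whose trust falls below the threshold is flagged as malicious) is asserted, the negation of the property is added, and an unsat answer means no counterexample exists. This is only an illustration of ours; the actual verification uses the full SMT-Lib translation of the HLPN models.

```python
from z3 import Real, Bool, Solver, Implies, Not

trust = Real('trust')          # aggregated trust of the evaluated node
threshold = Real('threshold')  # trust threshold
flagged = Bool('flagged')      # whether the node is declared malicious

s = Solver()
s.add(threshold == 0.4)
s.add(flagged == (trust < threshold))            # behavior encoded by rule R7
s.add(Not(Implies(trust < threshold, flagged)))  # negation of Property 4
print(s.check())  # expected: unsat, i.e. the property holds in this toy model
```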
Because verification is a time-consuming process, execution time is an important metric for the verification of the MATF's properties. Figure 3 depicts the time taken by the Z3 Solver to prove that the properties discussed previously in Subsection 4.4 hold in the model.
Verification time taken by the Z3 Solver
Experimental performance analysis
In this section, we evaluate the performance of the MATF in comparison to the scheme proposed in [14], referred to as the single attribute-based trust framework (SATF) in what follows. Network Simulator 2 (NS-2) [19, 39] is used to implement and analyze the performance of the proposed MATF. For the simulation experiments, we have varied the mobility speed of the nodes between 1 and 10 m/s. For data traffic, 30 % of the total nodes in the network are selected as source-destination pairs (sessions), spread randomly over the network. Only 512-byte data packets are sent. The packet sending rates in each pair are varied to change the offered load in the network. All traffic sessions are established at random times near the beginning of the simulation run and stay active until the end. Moreover, a very popular and commonly used mobility model, the random waypoint mobility model [40], is used for node mobility. In the aforementioned mobility model, each node selects a random destination and starts moving with a randomly chosen speed (uniformly distributed between 0 and a predefined maximum speed).
The trust threshold value is 0.4 in this set of experiments [14], which corresponds to the maximum tolerated misbehavior for a node to be a part of the network [41]. A trust threshold value determines the trust level that a node has to maintain to be a legitimate node. To handle the high-dimensional parameter space, we define some commonly used simulation parameters, as stated in Table 4. The number of simulation experiments has been chosen sufficiently large in order to get a 95 % confidence interval for the results.
Table 4 Simulation parameters
Experimental adversarial model
In our adversarial model, the malicious node count is set to 10–30 % of the total nodes in the network. In order to evaluate the proposed scheme against the adversary nodes thoroughly, malicious nodes are selected randomly to keep their distribution uniform in the network. In our experiments, we simulated the packet dropping attack by having malicious nodes drop control and data packets randomly or selectively with 25 % probability. Moreover, malicious nodes also misbehave by launching the withholding attack against the legitimate nodes. In a withholding attack, a misbehaving node does not generate control traffic as per the specification of the routing protocol. Because of the aforementioned behavior of misbehaving nodes, legitimate nodes are unable to maintain a consistent and updated view of the network. Furthermore, the number of malicious nodes exercising bad-mouthing and false praise attacks in collusion is varied from 10 to 50 % of the total nodes in the simulation scenarios.
Simulation results and analysis
We now discuss the results of the comparison between the MATF and the SATF in terms of several performance metrics.
Impact of trust deviation threshold
The trust deviation threshold means that second-hand information whose deviation from the evaluating node's observations is greater than the aforementioned threshold is filtered out while computing the evaluated node's trustworthiness. To select the optimal trust deviation threshold for filtering second-hand information, we simulate the MATF for varying deviation thresholds with an increasing number of dishonest nodes. For this set of simulations, the mobility speed is set to 1–4 m/s. Dishonest nodes exercise the false praise and bad-mouthing attacks to show the impact on the detection rate and the false positive rate, respectively (Fig. 4).
Trust value computation vs. simulation time
Figure 5 a, b illustrates the impact of an increasing number of dishonest nodes on the false positive rate and detection rate under different trust deviation thresholds. It can be inferred from Fig. 5 a that the detection rate first increases up to the deviation threshold of 0.4 and then decreases with an increasing number of dishonest nodes. The reason is that with a higher trust deviation threshold, false recommendations from bad-mouthing nodes are not filtered out during the trust computation of evaluated nodes, which provides more opportunities for misbehaving nodes to remain undetected.
Impact of trust deviation threshold on detection rate and false positive rate (a, b)
Similarly, Fig. 5 b shows the impact of varying the trust deviation threshold for an increasing number of dishonest nodes. It is obvious from the figure that with an increasing trust deviation threshold, the false positive rate also increases. The reason is that with a higher deviation threshold, such as 0.5 or 0.6, only false recommendations from bad-mouthing nodes with very large deviations are filtered out; the remaining false recommendations cause legitimate nodes to be declared as misbehaving nodes, hence a higher false positive rate.
It can be summarized from the above results that 0.4 is an optimal trust deviation threshold in terms of detection rate and false positives. It is worth mentioning here that we will use the trust deviation threshold of 0.4 for the rest of the simulation scenarios.
Trust values
Figure 4 shows the trust value computed for a specific misbehaving node at different simulation time instances. As shown in the figure, the MATF decrements the trust of the misbehaving node toward the threshold more quickly because of its multi-attribute criteria and efficient filtration of dishonest recommendations, and hence makes more informed decisions. The MATF evaluates the evaluated node on the basis of different network functions, hence more informed and prompt decisions about the trustworthiness of nodes can be taken. However, in the case of the SATF, the trust is computed slowly due to the high bootstrapping time and the data sparsity problem. The reason for this behavior is that evaluated nodes are observed in the context of data forwarding only. It can be inferred from Fig. 4 that the MATF efficiently overcomes the bootstrapping and data sparsity problems at the start-up of the network as compared to the SATF.
Detection time and detection rate
Detection time refers to the time taken by the trust-based security scheme to detect and declare a misbehaving node as malicious. Similarly, the malicious node detection rate is calculated as the percentage of malicious nodes detected among the total number of malicious nodes within the network.
Figure 6 a shows the malicious node detection time for increasing node speed in the MATF and the SATF. The figure shows that the time required by the MATF for increasing node speed is smaller as compared to the SATF. The detection time required for misbehaving node detection in the SATF is almost double that of the MATF. The reason for this behavior is the slow trust building process, as discussed in the analysis of Fig. 4. Overall, the detection time increases with increasing node speed. This is because at higher node speeds, nodes have less time to interact; hence, it takes longer to build trust under high node mobility.
Effect on detection rate (a–e)
Figure 6 b shows the detection rate for increasing node speed. As shown in the figure, the detection rate is higher in the case of the MATF. The reason is that in the MATF, a node's trust is analyzed in multiple contexts, which expedites detection. Similarly, Fig. 6 c shows the malicious node detection rate over the simulation time. The figure shows that the percentage of malicious nodes detected is higher in the case of the MATF as compared to the SATF. The detection rate is 100 % at time t=500 s in the MATF, while only half of the malicious nodes are detected in the case of the SATF.
Figure 6 d illustrates the impact of increasing the number of nodes on the detection rate while keeping the mobility fixed at 1–6 m/s. It can be inferred from the figure that there is a slight increase in the detection rate with increasing node density. This is due to the fact that under high node density, a higher number of watchdogs will be available to observe the behavior of an evaluated node, which leads to a better detection rate.
The impact of colluding dishonest attackers on the detection rate is shown in Fig. 6 e. As the figure shows, the MATF scheme keeps the detection rate at about 90 % even for a higher number of false-praise nodes, unlike the SATF. The reason is the efficient trust deviation criteria, which lead to more confident decisions: recommendations from colluding dishonest attackers are filtered out and are not considered in the trust computation of an evaluated node.
False positive rate
The false positive rate is the ratio of the number of legitimate nodes declared malicious to the total number of legitimate nodes.
The effect of node speed on the false positive rate under the MATF and the SATF is shown in Fig. 7 a. The figure illustrates that the false positive rate is much lower in the MATF than in the SATF. The reason is that the MATF uses second-hand information only from designated nodes whose deviation in trust values is below the deviation threshold, hence more informed decisions about a node's trustworthiness. In the SATF, by contrast, second-hand information from all neighbor nodes is used to compute the trustworthiness of a node; since some nodes deployed in the network exercise the bad-mouthing attack against legitimate nodes, this causes a higher false positive rate in the SATF. Overall, the figure shows that the false positive rate increases with node speed. This is because an evaluating node and the watchdog nodes cannot differentiate between intentional and unintentional malicious activities: even if a node fails to forward a packet because of network conditions, this is regarded as a malicious activity. As a result, the false positive rate increases under high node speed.
Fig. 7 Effect on false positives (a–c)
Similarly, Fig. 7 b shows the effect of increasing node density on the false positive rate. The figure illustrates that, for increasing node density, the false positive rate is lower in the MATF than in the SATF. The reason is that more legitimate nodes are selected as watchdogs, which provide accurate and precise information about the trustworthiness of the evaluated nodes, and that an efficient filtration criterion is used to discard dishonest recommendations. In the SATF, the false positive rate increases because the number of bad-mouthing and false-praising nodes also increases, which causes false trust estimates for legitimate nodes.
Figure 7 c shows the impact of dishonest colluding attackers on the false positive rate. It is evident from the figure that the MATF withstands the increasing number of dishonest nodes effectively in terms of false positives. The reason is the efficient trust deviation criteria used in the proposed scheme, as previously discussed in the analysis of Fig. 6 e.
Packet delivery ratio
Packet delivery ratio (PDR) is the ratio of the number of data packets received at the destination to the number of data packets generated by the source node. With the malicious node count set to 20 % of the total number of deployed nodes, the control and data packet dropping and withholding attacks are implemented. Figure 8 a illustrates the effect of node mobility speed on the PDR while keeping the data rate constant at 4 kbps. Figure 8 a shows that the MATF has a higher PDR than the SATF because it isolates malicious nodes from the routing paths much earlier (as shown in Fig. 6 c). Moreover, the PDR decreases with increasing node speed, because at higher node speeds packets are dropped due to frequent link changes. These results illustrate that the MATF eliminates the malicious nodes from the network in time and improves the PDR by 10–12 % for varying node mobility speeds.
Fig. 8 Effect on PDR and packet loss rate (a, b)
In this section, we present the packet loss analysis of the proposed MATF. Although the packet delivery ratio provides the big picture of the efficiency and effectiveness of any scheme, we present the packet loss analysis to show the effectiveness of the MATF in reducing the packet loss caused by misbehaving nodes. There are many reasons for packet loss in MANETs, such as link errors, queue overflow, frequent link changes, and malicious drops [42, 43]; in these simulation results, we consider only the packet loss caused by malicious nodes dropping packets. Figure 8 b shows the packet loss rate for increasing node speed in the MATF and the SATF. The results show that the MATF has about 8–15 % less packet loss than the SATF. The reason is that misbehaving nodes are detected and isolated in time on the basis of the multi-attribute trust criteria, whereas in the SATF the misbehaving nodes are detected and isolated very late in the simulation (as shown in Fig. 6 c), which gives them more opportunities to drop packets.
Energy consumption
The major causes of energy consumption in MANETs are packet transmission and reception. To compute the energy consumed by the nodes in both the MATF and SATF schemes, we use the generic energy model supported by NS-2, which can estimate the energy consumption for continuous and variable transmission power levels. The parameters we used are as follows: 100 J of initial energy, 0.05 W for transmission, 0.02 W for reception, 0.01 W for idling, and 0.0 W when sleeping. The energy consumed is reported as a percentage of a node's initial energy. The energy consumption of the proposed MATF in comparison to the SATF is shown in Fig. 9 a. As the MATF involves no extra message communication compared to the SATF, the figure shows that its energy consumption is almost equal to that of the SATF. The slight increase in energy consumption in the MATF is due to the extra processing required to compute node trust on the basis of the multi-attribute trust criteria. Moreover, the packet delivery ratio is higher and the packet loss due to malicious nodes is lower in the MATF than in the SATF, so more packets travel along complete routing paths, which also increases the energy consumption at the nodes on those paths.
Fig. 9 Effect on energy consumption and NRL (a, b)
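The percentage figures can, in principle, be reproduced from the stated model parameters; the sketch below is our own bookkeeping illustration (the per-state times are made up, only the power levels and initial energy come from the text).

```python
# Energy bookkeeping with the parameters stated above.
INITIAL_ENERGY = 100.0                              # J
P_TX, P_RX, P_IDLE, P_SLEEP = 0.05, 0.02, 0.01, 0.0 # W

def consumed_percentage(t_tx, t_rx, t_idle, t_sleep=0.0):
    """Times are seconds spent in each radio state over the simulation."""
    consumed = P_TX * t_tx + P_RX * t_rx + P_IDLE * t_idle + P_SLEEP * t_sleep
    return 100.0 * consumed / INITIAL_ENERGY

# e.g. 200 s transmitting, 400 s receiving, 300 s idle -> 21 % of the budget
print(consumed_percentage(200, 400, 300))
```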
Normalized routing load
Normalized routing load (NRL) is the ratio of the total number of control packets transmitted by the nodes to the total number of received data packets at the destination nodes. It is used to evaluate the efficiency of a routing protocol.
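The ratio metrics defined above (false positive rate, PDR, and NRL) are simple quotients of simulation counters; a minimal sketch of how they would be computed is given below (our own illustration, with hypothetical counter values).

```python
# Simple ratio metrics as defined in the preceding subsections.

def false_positive_rate(legit_declared_malicious, total_legitimate):
    return 100.0 * legit_declared_malicious / total_legitimate

def packet_delivery_ratio(packets_received, packets_generated):
    return 100.0 * packets_received / packets_generated

def normalized_routing_load(control_packets_sent, data_packets_received):
    return control_packets_sent / data_packets_received

print(false_positive_rate(2, 100))            # 2 % false positives
print(packet_delivery_ratio(850, 1000))       # 85 % PDR
print(normalized_routing_load(3000, 850))     # ~3.5 control packets per delivered data packet
```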
Figure 9 b illustrates that the NRL is smaller in the MATF than in the SATF. The reason is the higher packet delivery per control packet in the MATF. Since the SATF suffers from more packet loss, as shown in the figure, the number of control packets sent per delivered data packet is higher, which causes a higher NRL in the SATF. Overall, the routing overhead increases in both schemes with increasing node speed, because more control packets are transmitted to maintain the routes under high node mobility.
Security analysis
In this section, we present the security analysis of the proposed MATF against the various attacks.
Security against bad-mouthing and false praise attack
In the MATF, second-hand information is considered only from nodes that are designated as watchdog nodes, have a trust value greater than the trust threshold, and have a trust deviation less than the deviation threshold. Due to these criteria for second-hand information, the MATF effectively withstands the bad-mouthing and false praise attacks.
Security against selective misbehavior
A smart adversary node may misbehave selectively, for example by dropping data packets while forwarding control packets. Depending upon the security requirements and the privileges provided by the MATF, an evaluating node can still selectively use such smart misbehaving nodes to perform the network functions they carry out correctly. For example, if an adversary node misbehaves by dropping data packets only, then an evaluating node can use such a node for other network functions, such as control packet forwarding.
Security against colluding attackers
In the proposed scheme, an evaluating node uses trust attributes based on local states and its own observations, so collusion attacks are not very effective against the scheme. The only collusion attack possible against the scheme is the publication of false-praise and bad-mouthing information against legitimate nodes. The proposed MATF uses efficient trust deviation criteria that filter out such false-praise and bad-mouthing information, as discussed for Figs. 6 e and 7 c. The results presented in those figures show that the proposed MATF withstands colluding attackers comprising up to 30 % of the total nodes.
Conclusion and future work
In this work, we proposed a scheme based on multi-attribute trust criteria to minimize the bootstrapping time and to deal with selective misbehavior. The proposed trust model augments the security of a MANET by enabling a node to identify and remove malicious nodes from the routing paths by overhearing transmissions at multiple nodes (the evaluating node and watchdog nodes). The proposed security scheme not only detects attacks and malicious behavior accurately and in time but also reduces the number of false positives by using the concept of multi-watchdogs. The proposed trust model is evaluated in the context of the OLSR routing protocol. Moreover, to prove the correctness of the proposed scheme, we also presented a formal verification of the MATF using HLPN, SMT-Lib, and the Z3 solver. The comparison between the MATF and the SATF has shown that our proposed scheme detects malicious nodes more efficiently. Moreover, the MATF has shown promising results under high node mobility and frequent topology changes.
Simulation results show that the proposed trust model achieves a 98–100 % detection rate of malicious nodes with only 1–2 % false positives. In a network with malicious nodes, the MATF achieves a packet delivery ratio of about 90–75 %, compared to about 80–65 % for the SATF.
We plan to extend our work by using an adaptive mechanism for assigning weights to the different trust attributes based on run-time network conditions. Moreover, we will evaluate our proposed scheme as an extension of a reactive routing protocol such as DSR to analyze the effect of the underlying routing protocol.
1 The difference between the trust values of a recommender node and an evaluating node about a particular evaluated node.
S Zhao, A Aggarwal, S Liu, H Wu, in IEEE Wireless Communications and Networking Conference (WCNC 2008). A secure routing protocol in proactive security approach for mobile ad-hoc networks (IEEE, Las Vegas, 2008), pp. 2627–2632. doi:10.1109/WCNC.2008.461.
YC Hu, A Perrig, DB Johnson, Ariadne: A secure on-demand routing protocol for ad hoc networks. Wirel. Netw. 11:, 21–38 (2005).
P Papadimitratos, ZJ Haas, in IEEE Applications and the Internet Workshops. Secure link state routing for mobile ad hoc networks (IEEE, Orlando, 2003), pp. 379–383.
MS Obaidat, I Woungang, SK Dhurandher, V Koo, A cryptography-based protocol against packet dropping and message tampering attacks on mobile ad hoc networks security and communication networks (John Wiley & Sons, Ltd, Malden MA, 2014).
T Zahariadis, P Trakadas, HC Leligou, S Maniatis, P Karkazis, A novel trust-aware geographical routing scheme for wireless sensor networks. Wirel. Pers. Commun. 69(2), 805–826 (2013).
G Zhan, W Shi, J Deng, Design and implementation of TARF: a Trust-Aware Routing Framework for WSNs. IEEE Trans. Dependable Secure Comput. 9(2), 184–197 (2012).
S Buchegger, JY Le Boudec, in Proceedings of the 3rd ACM international symposium on Mobile ad hoc networking & computing. Performance analysis of the CONFIDANT protocol (ACM, New York, 2002), pp. 226–236.
A Chakrabarti, V Parekh, A Ruia, in Advances in Computer Science and Information Technology. Networks and Communications. A trust based routing scheme for wireless sensor networks (Springer, Berlin Heidelberg, 2012), pp. 159–169.
P Michiardi, R Molva, in Advanced communications and multimedia security. Core: a collaborative reputation mechanism to enforce node cooperation in mobile ad hoc networks (Springer, USA, 2002), pp. 107–121.
S Ganeriwal, LK Balzano, MB Srivastava, Reputation-based framework for high integrity sensor networks. ACM Trans. Sens. Netw. (TOSN). 4(3), 15 (2008).
O Khalid, SU Khan, SA Madani, K Hayat, MI Khan, N MinAllah, J Kolodziej, L Wang, S Zeadally, D Chen, Comparative study of trust and reputation systems for wireless sensor networks. Secur. Commun. Netw. 6(6), 669–688 (2013).
A Ahmed A, KA Bakar, MI Channa, K Haseeb, AW Khan, A survey on trust based detection and isolation of malicious nodes in ad hoc and sensor networks. Front. Comput. Sci. 9(2), 280–296 (2015).
S Marti, TJ Giuli, K Lai, M Baker, in ACM Proceedings of the 6th annual international conference on Mobile computing and networking. Mitigating routing misbehavior in mobile ad hoc networks (ACM, New York, 2000), pp. 255–265.
AM Shabut, KP Dahal, SK Bista, IU Awan, Recommendation based trust model with an effective defence scheme for MANETs. IEEE Trans. Mob. Comput. 14(10), 2101–2115 (2015).
FS Proto, A Detti, C Pisa, G Bianchi, in IEEE International Conference on Communications (ICC). A framework for packet-droppers mitigation in OLSR wireless community networks (IEEE, Kyoto, 2011), pp. 1–6.
JM Robert, H Otrok, A Chriqi, RBC-OLSR: Reputation-based clustering OLSR protocol for wireless ad hoc networks. Comput. Commun. 35(4), 487–499 (2012).
D Zhang, CK Yeo, Distributed court system for intrusion detection in mobile ad hoc networks. Comput. Secur. 30(8), 555–570 (2011).
SU Malik, SU Khan, Formal methods in LARGE-SCALE computing systems. ITNOW. 55(2), 52–53 (2013).
T Issariyakul, E Hossain, Introduction to network simulator NS2 (Springer Science & Business Media, USA, 2011).
S Tan, X Li, Q Dong, Trust based routing mechanism for securing OSLR-based MANET. Ad Hoc Netw. 30:, 84–98 (2015).
EM Shakshuki, N Kang, TR Sheltami, EAACK—a secure intrusion-detection system for MANETs. IEEE Trans. Ind. Electron. 60(3), 1089–1098 (2013).
K Liu, J Deng, PK Varshney, K Balakrishnan, An acknowledgment-based approach for the detection of routing misbehavior in MANETs. IEEE Trans. Mob. Comput. 6(5), 536–550 (2007).
TR Sheltami, A Basabaa, EM Shakshuki, A3ACKs: adaptive three acknowledgments intrusion detection system for MANETs. J. Ambient Intell. Humanized Comput. 5(4), 611–620 (2014).
P Gallagher, C Furlani, Digital signature standard (DSS). Federal Information Processing Standards Publications, volume FIPS (2013), 186–3 (2013).
RL Rivest, A Shamir, L Adleman, A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM. 26(1), 96–99 (1983).
S Bansal, M Baker, Observation-based cooperation enforcement in ad hoc networks. Research Report cs.NI/0307012, Stanford University, 120–130 (2003).
X Li, Z Jia, P Zhang, R Zhang, H Wang, Trust-based on-demand multipath routing in mobile ad hoc networks. IET Inf. Secur. 4(4), 212–232 (2010).
H Xia, Z Jia, X Li, L Ju, EH Sha, Trust prediction and trust-based source routing in mobile ad hoc networks. Ad Hoc Netw. 11(7), 2096–2114 (2013).
A Adnane, C Bidan, RT de Sousa, Trust-based security for the OLSR routing protocol. Comput. Commun. 36(10), 1159–1171 (2013).
A Adnane, in Proceedings of the 2008 ACM symposium on Applied computing. Autonomic trust reasoning enables misbehavior detection in OLSR (ACM, New York, 2008), pp. 2006–2013.
D Kukreja, SK Dhurandher, BVR Reddy, Enhancing the Security of Dynamic Source Routing Protocol Using Energy Aware and Distributed Trust Mechanism in MANETs. Intelligent Distributed Computing (Springer International Publishing, Springer Switzerland, 2015).
R Abdellaoui, J Robert, in 4th Conference on Security in Network Architectures and Information Systems (SAR-SSI). Su-olsr: A new solution to thwart attacks against the olsr protocol (Luchon, 2009), pp. 239–245.
D Câmara, AA Loureiro, F Filali, in IEEE Global Telecommunications Conference (GLOBECOM'07). Methodology for formal verification of routing protocols for ad hoc wireless networks (IEEE, Washington, 2007), pp. 705–709.
SU Malik, SU Khan, SK Srinivasan, Modeling and analysis of state-of-the-art VM-based cloud management platforms. IEEE Trans. Cloud Comput. 1(1), 1–1 (2013).
F Ghassemi, S Ahmadi, W Fokkink, A Movaghar, Model checking MANETs with arbitrary mobility, (2013).
C Barrett, A Stump, C Tinelli, The Satisfiability Modulo Theories Library (SMT-LIB), (2010). http://smtlib.cs.uiowa.edu/. Accessed 15 Jan 2016.
L De Moura, N Bjørner, in Tools and Algorithms for the Construction and Analysis of Systems. Z3: An efficient SMT solver (Springer, Berlin Heidelberg, 2008), pp. 337–340.
SU Malik, SK Srinivasan, SU Khan, L Wang, in 12th International Conference on Scalable Computing and Communications (ScalCom). A methodology for OSPF routing protocol verification (IEEE, Changzhou, 2012).
P Whigham, The VINT project, the network simulator - ns-2 (University of Otago, 2003). http://www.isi.edu/nsnam/ns/. Accessed 05 Jan 2016.
J Broch, DA Maltz, DB Johnson, YC Hu, J Jetcheva, in Proceedings of the 4th annual ACM/IEEE international conference on Mobile computing and networking. A performance comparison of multi-hop wireless ad hoc network routing protocols (ACM, New York, 1998), pp. 85–97.
MS Khan, D Midi, MI Khan, E Bertino, in IEEE Trustcom/BigDataSE/ISPA, Vol. 1. Adaptive trust threshold strategy for misbehaving node detection and isolation (IEEE, Helsinki, 2015), pp. 718–725.
Z Wei, H Tang, FR Yu, P Mason, in IEEE Military Communications Conference (MILCOM). Trust establishment based on Bayesian networks for threat mitigation in mobile ad hoc networks (IEEE, Perundurai, 2014), pp. 171–177.
Y Lu, Y Zhong, B Bhargava, Packet Loss in Mobile Ad Hoc Networks (IEEE, Baltimore, 2003).
Technical Report CSD-TR 03-009. Department of Computer Science, Purdue University (2003). http://docs.lib.purdue.edu/cstech/1558/. Retrieved 26 Dec 2015.
The work reported in this paper has been partially supported by the Higher Education Commission (HEC), Pakistan.
Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad, Pakistan
Muhammad Saleem Khan, Majid Iqbal Khan, Saif-Ur-Rehman Malik, Mukhtar Azim & Nadeem Javaid
Department of Computer Sciences, COMSATS Institute of Information Technology, Abbottabad, Pakistan
Osman Khalid
Correspondence to Nadeem Javaid.
Khan, M., Khan, M., Malik, S. et al. MATF: a multi-attribute trust framework for MANETs. J Wireless Com Network 2016, 197 (2016). https://doi.org/10.1186/s13638-016-0691-4
Bootstrapping time
Security attacks
Intelligent Mobility Management for Future Wireless Mobile Networks | CommonCrawl |
\begin{definition}[Definition:Rounding]
'''Rounding''' is the process of approximating the value of a variable to a multiple of a given power of whatever number base one is working in (usually decimal).
Let $n \in \Z$ be an integer.
Let $x \in \R$ be a real number.
Let $y \in \R$ such that:
:$y = 10^n \floor {\dfrac x {10^n} + \dfrac 1 2}$
or:
:$y = 10^n \ceiling {\dfrac x {10^n} - \dfrac 1 2}$
where $\floor {\, \cdot \,}$ denotes the floor function and $\ceiling {\, \cdot \,}$ denotes the ceiling function.
Then $y$ is defined as '''$x$ rounded to the nearest $n$th power of $10$'''.
Both of these definitions amount to the same thing, except for when $\dfrac x {10^n}$ is exactly halfway between $\floor {\dfrac x {10^n} }$ and $\ceiling {\dfrac x {10^n} }$.
How these instances are treated is known as the '''treatment of the half'''.
\end{definition} | ProofWiki |
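As an aside (not part of the definition above), a minimal Python sketch of the two formulas makes the treatment of the half visible: the floor form rounds exact halves up, the ceiling form rounds them down, and away from halves they agree. The example values are our own.

```python
from fractions import Fraction
from math import floor, ceil

def round_floor(x, n):
    """y = 10^n * floor(x / 10^n + 1/2): exact halves are rounded up."""
    p = Fraction(10) ** n
    return p * floor(Fraction(x) / p + Fraction(1, 2))

def round_ceiling(x, n):
    """y = 10^n * ceil(x / 10^n - 1/2): exact halves are rounded down."""
    p = Fraction(10) ** n
    return p * ceil(Fraction(x) / p - Fraction(1, 2))

# Both definitions agree away from exact halves ...
print(round_floor(Fraction(237, 100), -1), round_ceiling(Fraction(237, 100), -1))  # 12/5, 12/5 (i.e. 2.4)
# ... and differ exactly at the half-way point.
print(round_floor(Fraction(5, 2), 0), round_ceiling(Fraction(5, 2), 0))            # 3, 2
```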
What transition in the hydrogen spectrum would have the same wavelength as the Balmer transition n = 4 to n = 2 of He+ spectrum?
n = 2 to n = 1
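This equality can be checked from the hydrogen-like Rydberg relation $1/\lambda \propto Z^2\left(1/n_1^2 - 1/n_2^2\right)$; the short Python sketch below (our own check, not part of the question set) shows that the two transitions give the same factor and hence the same wavelength.

```python
# 1/lambda is proportional to Z^2 * (1/n_low^2 - 1/n_high^2),
# so equal factors mean equal wavelengths.
def rydberg_factor(Z, n_low, n_high):
    return Z**2 * (1.0 / n_low**2 - 1.0 / n_high**2)

print(rydberg_factor(2, 2, 4))   # He+ Balmer transition 4 -> 2 : 0.75
print(rydberg_factor(1, 1, 2))   # H transition 2 -> 1          : 0.75
```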
Estimate the difference in energy between the 1st and 2nd Bohr orbits for a hydrogen atom. At what minimum atomic number would a transition from n = 2 to n = 1 result in the emission of X-rays with $\lambda = 3.0 \times 10^{-8}$ m? Which hydrogen-like species does this atomic number correspond to?
10.22 eV, 2, He+
According to Bohr's theory, the electronic energy of a hydrogen atom in the nth Bohr orbit is given by $E_n = \dfrac{-21.6 \times 10^{-19}}{n^2}\ \text{J}$. Calculate the longest wavelength of light that will be needed to remove an electron from the third Bohr orbit of the He+ ion.
2055 Å
What is the maximum number of electrons that may be present in all atomic orbitals with principal quantum number 3 and azimuthal quantum number 2?
Questions Asked from Structure of Atom
| CommonCrawl
Proportionality (mathematics)
In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio. The ratio is called coefficient of proportionality (or proportionality constant) and its reciprocal is known as constant of normalization (or normalizing constant). Two sequences are inversely proportional if corresponding elements have a constant product, also called the coefficient of proportionality.
This definition is commonly extended to related varying quantities, which are often called variables. This meaning of variable is not the common meaning of the term in mathematics (see variable (mathematics)); these two different concepts share the same name for historical reasons.
Two functions $f(x)$ and $g(x)$ are proportional if their ratio $ {\frac {f(x)}{g(x)}}$ is a constant function.
If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios is called a proportion, e.g., a/b = x/y = ⋯ = k (for details see Ratio). Proportionality is closely related to linearity.
Direct proportionality
See also: Equals sign
Given an independent variable x and a dependent variable y, y is directly proportional to x[1] if there is a non-zero constant k such that
$y=kx.$
The relation is often denoted using the symbols "∝" (not to be confused with the Greek letter alpha) or "~":
$y\propto x,$ or $y\sim x.$
For $x\neq 0$ the proportionality constant can be expressed as the ratio
$k={\frac {y}{x}}.$
It is also called the constant of variation or constant of proportionality.
A direct proportionality can also be viewed as a linear equation in two variables with a y-intercept of 0 and a slope of k. This corresponds to linear growth.
Examples
• If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality.
• The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π.
• On a map of a sufficiently small geographical area, drawn to scale distances, the distance between any two points on the map is directly proportional to the beeline distance between the two locations represented by those points; the constant of proportionality is the scale of the map.
• The force, acting on a small object with small mass by a nearby large extended mass due to gravity, is directly proportional to the object's mass; the constant of proportionality between the force and the mass is known as gravitational acceleration.
• The net force acting on an object is proportional to the acceleration of that object with respect to an inertial frame of reference. The constant of proportionality in this, Newton's second law, is the classical mass of the object.
Inverse proportionality
The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be "inversely proportional" to each other. If all other variables are held constant, the magnitude or absolute value of one inversely proportional variable decreases if the other variable increases, while their product (the constant of proportionality k) is always the same. As an example, the time taken for a journey is inversely proportional to the speed of travel.
Formally, two variables are inversely proportional (also called varying inversely, in inverse variation, in inverse proportion)[2] if each of the variables is directly proportional to the multiplicative inverse (reciprocal) of the other, or equivalently if their product is a constant.[3] It follows that the variable y is inversely proportional to the variable x if there exists a non-zero constant k such that
$y={\frac {k}{x}},$
or equivalently, $xy=k.$ Hence the constant "k" is the product of x and y.
The graph of two variables varying inversely on the Cartesian coordinate plane is a rectangular hyperbola. The product of the x and y values of each point on the curve equals the constant of proportionality (k). Since neither x nor y can equal zero (because k is non-zero), the graph never crosses either axis.
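As a quick illustration (not part of the article; function names and sample data are our own), the following sketch tests two finite sequences for direct proportionality (constant ratio) and inverse proportionality (constant product), returning the constant of proportionality k when it exists.

```python
from math import isclose

def direct_constant(xs, ys):
    """Return k with ys[i] = k * xs[i] for all i, or None if the ratio is not constant."""
    ratios = [y / x for x, y in zip(xs, ys)]
    k = ratios[0]
    return k if all(isclose(r, k) for r in ratios) else None

def inverse_constant(xs, ys):
    """Return k with xs[i] * ys[i] = k for all i, or None if the product is not constant."""
    products = [x * y for x, y in zip(xs, ys)]
    k = products[0]
    return k if all(isclose(p, k) for p in products) else None

speeds = [30.0, 60.0, 90.0]
times  = [6.0, 3.0, 2.0]          # journey times for a fixed 180 km distance
print(direct_constant(speeds, times))   # None: not directly proportional
print(inverse_constant(speeds, times))  # 180.0: speed * time is constant
```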
Hyperbolic coordinates
Main article: Hyperbolic coordinates
The concepts of direct and inverse proportion lead to the location of points in the Cartesian plane by hyperbolic coordinates; the two coordinates correspond to the constant of direct proportionality that specifies a point as being on a particular ray and the constant of inverse proportionality that specifies a point as being on a particular hyperbola.
Computer encoding
The Unicode characters for proportionality are the following:
• U+221D ∝ PROPORTIONAL TO
• U+007E ~ TILDE
• U+2237 ∷ PROPORTION
• U+223C ∼ TILDE OPERATOR
• U+223A ∺ GEOMETRIC PROPORTION
See also
• Linear map
• Correlation
• Eudoxus of Cnidus
• Golden ratio
• Inverse-square law
• Proportional font
• Ratio
• Rule of three (mathematics)
• Sample size
• Similarity
• Basic proportionality theorem
Growth
• Linear growth
• Hyperbolic growth
Notes
1. Weisstein, Eric W. "Directly Proportional". MathWorld – A Wolfram Web Resource.
2. "Inverse variation". math.net. Retrieved October 31, 2021.
3. Weisstein, Eric W. "Inversely Proportional". MathWorld – A Wolfram Web Resource.
| Wikipedia |
Gelfand–Raikov theorem
The Gel'fand–Raikov (Гельфанд–Райков) theorem is a theorem in the theory of locally compact topological groups. It states that a locally compact group is completely determined by its (possibly infinite dimensional) unitary representations. The theorem was first published in 1943.[1] [2]
A unitary representation $\rho :G\to U(H)$ of a locally compact group $G$ on a Hilbert space $H=(H,\langle \,,\rangle )$ defines for each pair of vectors $h,k\in H$ a continuous function on $G$, the matrix coefficient, by
$g\mapsto \langle h,\rho (g)k\rangle $.
The set of all matrix coefficients for all unitary representations is closed under scalar multiplication (because we can replace $k\to \lambda k$), addition (because of direct sum representations), multiplication (because of tensor product representations) and complex conjugation (because of the complex conjugate representations).
The Gel'fand–Raikov theorem now states that the points of $G$ are separated by its irreducible unitary representations, i.e. for any two group elements $g,h\in G$ there exist a Hilbert space $H$ and an irreducible unitary representation $\rho :G\to U(H)$ such that $\rho (g)\neq \rho (h)$. The matrix elements thus separate points, and it then follows from the Stone–Weierstrass theorem that on every compact subset of the group, the matrix elements are dense in the space of continuous functions, which determine the group completely.
See also
• Gelfand–Naimark theorem
• Representation theory
References
1. И. М. Гельфанд, Д. А. Райков, Неприводимые унитарные представления локально бикомпактных групп, Матем. сб., 13(55):2–3 (1943), 301–316, (I. Gelfand, D. Raikov, "Irreducible unitary representations of locally bicompact groups", Recueil Mathématique. N.S., 13(55):2–3 (1943), 301–316)
2. Yoshizawa, Hisaaki. "Unitary representations of locally compact groups. Reproduction of Gelfand–Raikov's theorem." Osaka Mathematical Journal 1.1 (1949): 81–89.
| Wikipedia |
\begin{document}
\title[Representations of Hopf algebras of dimension 72]{On a family of Hopf algebras of dimension 72} \author[andruskiewitsch and vay] {Nicol\'as Andruskiewitsch and Cristian Vay}
\address{FaMAF-CIEM (CONICET), Universidad Nacional de C\'ordoba, Medina A\-llen\-de s/n, Ciudad Universitaria, 5000 C\' ordoba, Rep\' ublica Argentina.} \email{[email protected], [email protected]}
\thanks{\noindent 2000 \emph{Mathematics Subject Classification.} 16W30. \newline This work was partially supported by ANPCyT-Foncyt, CONICET, Ministerio de Ciencia y Tecnolog\'{\i}a (C\'ordoba) and Secyt (UNC)}
\begin{abstract} We investigate a family of Hopf algebras of dimension 72 whose coradical is isomorphic to the algebra of functions on ${\mathbb S}_3$. We determine the lattice of submodules of the so-called Verma modules and as a consequence we classify all simple modules. We show that these Hopf algebras are unimodular (as well as their duals) but not quasitriangular; also, they are cocycle deformations of each other. \end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\section*{Introduction} The study of finite dimensional{} Hopf algebras over an algebraically closed field $\Bbbk$ of characteristic 0 is split into two different classes: the class of semisimple Hopf algebras and the rest. The Lifting Method from \cite{AS-cambr} is designed to deal with non-semisimple Hopf algebras whose coradical is a Hopf subalgebra\footnote{An adaptation to general non-semisimple Hopf algebras was recently proposed in \cite{AC}.}. Pointed Hopf algebras, that is Hopf algebras whose coradical is a group algebra, were intensively studied by this Method. It is natural to consider next the class of Hopf algebras whose coradical is the algebra $\Bbbk^G$ of functions on a non-abelian group $G$. This class seems to be interesting at least by the following reasons:
\bigbreak $\bullet$ The categories of Yetter-Drinfeld modules over the group algebra $\Bbbk G$ and $\Bbbk^G$, $G$ a finite group, are equivalent. Thence, a lot
sensible information needed for the Lifting Method (description of Yetter-Drinfeld modules, determination of finite dimensional{} Nichols algebras)
can be translated from the pointed case to this case --or vice versa.
\bigbreak $\bullet$ The representation theory of Hopf algebras whose coradical is the algebra of functions on a non-abelian group looks easier that the
the representation theory of pointed Hopf algebras with non-abelian group, because the representation theory of $\Bbbk^G$
is easier than that of $G$. Indeed, $\Bbbk^G$ is a semisimple abelian algebra and we may
try to imitate the rich methods in representation theory of Lie algebras, with $\Bbbk^G$ playing the role of the Cartan subalgebra.
We believe that the representation theory of Hopf algebras with coradical $\Bbbk^G$ might be helpful to study Nichols algebras and deformations.
We have started the consideration of this class in \cite{AV}, where finite dimensional{} Hopf algebras whose coradical is $\Bbbk^{{\mathbb S}_3}$ were classified and, in particular, a new family of Hopf algebras of dimension 72 was defined. The purpose of the present paper is to study these Hopf algebras. We first discuss in Section \ref{sect:preliminaries} some general ideas about modules induced from simple $\Bbbk^{G}$- modules, that we call Verma modules. We introduce in Section \ref{sect:gral-sn} a new family of Hopf algebras, as a generalization of the construction in \cite{AV}, attached to the class of transpositions in ${\mathbb S}_n$ and depending on a parameter $\mathbf{a}$.
Our main contributions are in Section \ref{sect:modules}: we determine the lattice of submodules of the various Verma modules and as a consequence we classify all simple modules over the Hopf algebras of dimension 72 introduced in \cite{AV}. Some further information on these Hopf algebras is given in Section \ref{sec: tipo de rep} and Section \ref{sect:more-info}.
We assume that the reader has some familiarity with Yetter-Drinfeld modules and Nichols algebras ${\mathcal B}(V)$; we refer to \cite{AS-cambr} for these matters.
\subsection*{Conventions}
\
If $V$ is a vector space, $T(V)$ is the tensor algebra of $V$. If $S$ is a subset of $V$, then we denote by $\langle S\rangle$ the vector subspace generated by $S$. If $A$ is an algebra and $S$ is a subset of $A$, then we denote by $(S)$ the two-sided ideal generated by $S$ and by $\Bbbk\langle S\rangle$ the subalgebra generated by $S$. If $H$ is a Hopf algebra, then $\Delta$, $\epsilon$, $\mathcal{S}$ denote respectively the comultiplication, the counit and the antipode. We denote by $\widehat R$ the set of isomorphism classes of a simple $R$-modules, $R$ an algebra; we identify a class in $\widehat{R}$ with a representative without further notice. If $S$, $T$ and $M$ are $R$-modules, we say that \emph{$M$ is an extension of $T$ by $S$} when $M$ fits into an exact sequence $0\rightarrow S\rightarrow M\rightarrow T\rightarrow 0$.
\section{Preliminaries}\label{sect:preliminaries}
\subsection{The induced representation}\label{subsect:induced-generalities}
\
We collect well-known facts about the induced representation. Let $B$ be a subalgebra of an algebra $A$ and let $V$ be a left $B$-module. The induced module is $\operatorname{Ind}_{B}^{A} V = A \o_{B} V$. The induction has the following properties:
\begin{itemize}
\item Universal property: if $W$ is an $A$-module and $\varphi: V \to W$ is morphism of $B$-modules, then it extends to a morphism of $A$-modules $\overline{\varphi}: \operatorname{Ind}_{B}^{A} V \to W$. Hence, there is a natural isomorphism (called Frobenius reciprocity): $\Hom_{B} (V, \operatorname{Res} _B^A W) \simeq \Hom_{A} (\operatorname{Ind}_{B}^{A} V, W)$. In categorical terms, \emph{induction is left-adjoint to restriction}.
\medbreak \item Any finite dimensional{} simple $A$-module is a quotient of the induced module of a simple $B$-module. \end{itemize}
Indeed, let $S$ be a finite dimensional{} simple $A$-module and let $T$ be a simple $B$-submodule of $S$. Then the induced morphism $\operatorname{Ind}_{B}^{A} T \to S$ is surjective.
\begin{itemize} \item If $B$ is semisimple, then any induced module is projective. \end{itemize}
The induction functor, being left adjoint to the restriction one, preserves projectives, and any module over a semisimple algebra is projective.
\begin{itemize} \item If $A$ is a free right $B$-module, say $A\simeq B^{(I)}$, then $\operatorname{Ind}_{B}^{A} V = B^{(I)} \o_{B} V = V^{(I)}$ as $B$-modules, and a fortiori as vector spaces. \end{itemize}
\smallbreak We summarize these basic properties in the setting of finite dimensional{} Hopf algebras, where freeness over Hopf subalgebras is known \cite{NZ}. Also, finite dimensional{} Hopf algebras are Frobenius, so that injective modules are projective and vice versa.
\begin{prop}\label{pr:induced} Let $A$ be a finite dimensional{} Hopf algebra and let $B$ be a semisimple Hopf subalgebra. \begin{itemize}
\item If $T\in \widehat B$, then $\dim \operatorname{Ind}_{B}^{A} T = \dfrac{\dim T\dim A}{\dim B}$.
\smallbreak \item Any finite dimensional{} simple $A$-module is a quotient of the induced module of a simple $B$-module.
\smallbreak \item The induced module of a finite dimensional{} $B$-module is injective and projective.\qed \end{itemize} \end{prop}
\subsection{Representation theory of Hopf algebras with coradical a dual group algebra}\label{subsect:U-modules}
\
An optimal situation to apply the Proposition \ref{pr:induced} is when the coradical of the finite dimensional{} Hopf algebra $A$ is a Hopf subalgebra; in this case $B =$ coradical of $A$ is the best choice. It is tempting to say that the induced module of a simple $B$-module is a \emph{Verma module} of $A$.
Assume now the coradical $B$ of the finite dimensional{} Hopf algebra $A$ is the algebra of functions $\Bbbk^{G}$ on a finite group $G$. In this case, we have:
\medbreak $\bullet$ Any simple $B$-module has dimension 1 and $\widehat{B} \simeq G$; for $g\in G$, the simple module $\Bbbk_g$ has the action $f\cdot 1 = f(g) 1$,
$f\in\Bbbk^{G}$. Thus any simple $A$-module is a quotient of a Verma module $M_{g} := \operatorname{Ind}_{\Bbbk^{G}}\Bbbk_g$, for some $g\in G$.
\medbreak $\bullet$ The ideal $A\delta_g$ is isomorphic to $M_g$ and $A \simeq \oplus_{g\in G} M_g$; here $\delta_g$ is the characteristic function of the subset $\{g\}$.
\medbreak $\bullet$\label{bullet:injective-hull} Let $g\in G$ such that $\delta_g$ is a primitive idempotent of $A$. Since $A$ is Frobenius, $M_g\simeq A\delta_g$ has a unique simple submodule $S$ and a unique maximal submodule $N$; $M_g$ is the injective hull of $S$ and the projective cover of $M_g/N$. See \cite[(9.9)]{CR}.
\medbreak $\bullet$ In all known cases, $\gr A \simeq {\mathcal B}(V) \# \Bbbk^{G}$, where $V$ belongs to a concrete and short list. Hence, $\dim M_{g} = \dim {\mathcal B}(V)$ for any $g\in G$. More than this, in all known cases we dispose of the following information:
\medbreak \begin{itemize}\renewcommand{$\circ$}{$\circ$}
\item There exists a rack $X$ and a 2-cocycle $q\in Z^2(X, \Bbbk^{\times})$ such that $V \simeq (\Bbbk X, c^q)$ as braided vector spaces, see \cite{AG-adv} for details.
\medbreak
\item\label{item:epimorphism}
There exists an epimorphism of Hopf algebras $\phi:T(V)\#\Bbbk^{G}\rightarrow A$, see \cite[Subsection 2.5]{AV} for details. Note that $\phi(f\cdot x)=\ad f(\phi(x))$ for all $f\in\Bbbk^{G}$ and $x\in T(V)$.
\medbreak
\item Let ${\mathbb X}$ be the set of words in $X$, identified with a basis of the tensor algebra $T(V)$. There exists ${\mathbb B} \subset {\mathbb X}$
such that the classes of the monomials in ${\mathbb B}$ form a basis of ${\mathcal B}(V)$. The corresponding classes in $A$ multiplied
with the elements $\delta_g\in \Bbbk^{G}$, $g\in G$, form a basis of $A$.
\medbreak
\item If $x\in X$, then there exists $g_x\in G$ such that $\delta_h\cdot x =\delta_{h, g_x}x$ for all $h\in G$.
We extend this to have $g_x\in G$ for any $x\in {\mathbb X}$.
\medbreak
\item If $x\in X$, then $x^2 = 0$ in ${\mathcal B}(V)$ and there exists $f_x\in \Bbbk^{G}$ such that $x^2 = f_x$ in $A$.
\end{itemize}
Let $g\in G$. If $x\in {\mathbb B}$, then we denote by $m_x$ the class of $x$ in $M_g$. Hence $(m_x)_{x\in {\mathbb B}}$ is a basis of $M_g$. We may describe the action of $A$ on this basis of $M_g$, at least when we know explicitly the relations of $A$ and the monomials in ${\mathbb B}$. To start with, let $f\in \Bbbk^{G}$ and $x\in {\mathbb B}$. Then \begin{equation}\label{eq:action-of-coradical} \begin{aligned} f\cdot m_x &= \overline{fx\otimes 1} = \overline{f\_{1}\cdot x f\_{2}\otimes 1} = \overline{f\_{1}\cdot x \otimes f\_{2}\cdot 1}\\ & = f(g_xg) \, m_x. \end{aligned} \end{equation}
Let now $x= x_1 \dots x_t$ be a monomial in ${\mathbb B}$, with $x_1, \dots, x_t\in X$. Set $y = x_2\dots x_t$; observe that $y$ need not be in ${\mathbb B}$. Then \begin{equation}\label{eq:action-of-first-letter} \begin{aligned} x_1\cdot m_x &= \overline{x_1^2x_2\dots x_t\otimes 1} = \overline{f_{x_1}y\otimes 1} = f_{x_1}(g_{y}g) \, \overline{y\otimes 1}. \end{aligned} \end{equation}
Let now $M$ be a finite dimensional $A$-module. It is convenient to consider the decomposition of $M$ in isotypic components as $\Bbbk^{G}$-module: $M = \oplus_{g\in G}M[g]$, where $M[g] = \delta_g\cdot M$. Note that \begin{align}\label{eq:comp-isotyp} x\cdot M[g] &= M[g_xg] & \text{for all } x\in {\mathbb B},\, g&\in G. \end{align}
For instance, \eqref{eq:action-of-coradical} says that the isotypic components of the Verma module $M_{g}$ are $M_g[h] = \langle m_x: x\in {\mathbb B}, \, g_xg = h\rangle$.
\section{Hopf algebras related to the class of transpositions in the symmetric group}\label{sect:gral-sn}
\subsection{Quadratic Nichols algebras}\label{subsect:nichols-gral-sn}
\
Let $n\ge 3$; denote by ${\mathcal O}_2^n$ the conjugacy class of $\mbox{\footnotesize (12)}$ in ${\mathbb S}_n$ and by $\sgn:C_{{\mathbb S}_n}\mbox{\footnotesize (12)} \rightarrow\Bbbk$ the restriction of the sign representation of ${\mathbb S}_n$ to the centralizer of $\mbox{\footnotesize (12)}$. Let $V_n = M(\mbox{\footnotesize (12)},\sgn) \in {}^{\Bbbk^{{\mathbb S}_n}}_{\Bbbk^{{\mathbb S}_n}}\mathcal{YD}$\label{V_n}; $V_n$ has a basis $(\xij{ij})_{(ij)\in{\mathcal O}_{2}^n}$ such that the action $\cdot$ and the coaction $\delta$ are given by \begin{align*} &\delta_h\cdot\xij{ij} =\delta_{h,(ij)}\,\xij{ij}\quad \forall h\in {\mathbb S}_n &\mbox{and}&& \delta(\xij{ij})=\sum_{h\in {\mathbb S}_n}\sgn(h)\delta_{h}{\otimes} x_{h^{-1}(ij)h}. \end{align*}
\bigbreak Let $n=3, 4, 5$. By \cite{milinskisch,grania}, we know that ${\mathcal B}(V_n)$ is quadratic and finite dimensional; actually, the ideal $\mathcal{J}_n$ of relations of ${\mathcal B}(V_n)$ is generated by \begin{align} \label{eq:rels-powers}\xij{ij}^2&,\\ \label{eq:rels-ijkl}R_{(ij)(kl)}&:=\xij{ij}\xij{kl}+\xij{kl}\xij{ij}, \\ \label{eq:rels-ijik}R_{(ij)(ik)}&:=\xij{ij}\xij{ik}+\xij{ik}\xij{jk}+\xij{jk}\xij{ij} \end{align} for $(ij),(kl),(ik)\in{\mathcal O}_2^n$ with $\#\{i,j,k,l\} = 4$.
For $n\geq 6$, we define the \emph{quadratic Nichols algebra} ${\mathcal B}_n$ in the same way, that is as the quotient of the tensor algebra $T(V_n)$ by the ideal generated by the quadratic relations \eqref{eq:rels-powers}, \eqref{eq:rels-ijkl} and \eqref{eq:rels-ijik} for $(ij),(kl),(ik)\in{\mathcal O}_2^n$ with $\#\{i,j,k,l\} = 4$. It is however open whether:
\begin{itemize}
\item ${\mathcal B}(V_n)$ is quadratic, i. e. isomorphic to ${\mathcal B}_n$;
\item the dimension of ${\mathcal B}(V_n)$ is finite;
\item the dimension of ${\mathcal B}_n$ is finite. \end{itemize}
But we do know that the only possible finite dimensional Nichols algebras\footnote{There is one exception when $n = 4$ that is finite dimensional{} and two exceptions when $n=5$ and 6 that are not known.} over ${\mathbb S}_n$ are related to the orbit of transpositions and a pair of characters \cite[Th. 1.1]{AFGV}. Also, the Nichols algebras related to these two characters are twist-equivalent \cite{ve}.
\subsection{The parameters}\label{subsect:parameters-gral-sn}
\
We consider the set of parameters $$ \gA_n :=\Big\{ \mathbf{a}=(\aij{ij})_{(ij)\in{\mathcal O}_2^n}\in\Bbbk^{{\mathcal O}_2^n}: \sum_{(ij)\in{\mathcal O}_2^n}\aij{ij}=0\Big\}. $$ The group $\Gamma_n := \Bbbk^{\times}\times\operatorname{Aut}({\mathbb S}_n)$ acts on $\gA_n$ by \begin{align}\label{equ:action} (\mu,\theta)\triangleright\mathbf{a}&=\mu(a_{\theta(ij)}), & \mu&\in \Bbbk^{\times}, & \theta &\in\operatorname{Aut}({\mathbb S}_n),& \mathbf{a} & \in \gA_n. \end{align} Let $[\mathbf{a}]\in \Gamma_n\backslash\gA_n$ be the class of $\mathbf{a}$ under this action. Let $\triangleright$ denote also the conjugation action of ${\mathbb S}_n$ on itself, so that\footnote{It is well-known that
${\mathbb S}_n$ identifies with the group of inner automorphisms and that this equals $\operatorname{Aut} {\mathbb S}_n$, except for $n=6$.} ${\mathbb S}_n < \{e\}\times \operatorname{Aut}({\mathbb S}_n)<\Gamma_n$. Let ${\mathbb S}_{n}^{\mathbf{a}}=\{g\in{\mathbb S}_n|g\triangleright\mathbf{a}=\mathbf{a}\}$ be the isotropy group of $\mathbf{a}$ under the action of ${\mathbb S}_n$.
\medbreak We fix $\mathbf{a}\in \gA_n$ and introduce \begin{align} \label{eq:fij-n} f_{ij} &= \sum_{g\in{\mathbb S}_n}(\aij{ij} - a_{g^{-1}(ij)g})\delta_g \in \Bbbk^{{\mathbb S}_n}, & (ij)\in{\mathcal O}_2^n. \end{align} Clearly, \begin{align}\label{eq:fij-propiedad} & &\fij{ij}(ts) &=\fij{ij}(s) & \forall &t\in C_{{\mathbb S}_n}\mbox{\footnotesize (ij)}, \quad s\in{\mathbb S}_n. \end{align}
\begin{Def}\label{def:linked} We say that $g$ and $h \in {\mathbb S}_n$ are $\mathbf{a}$-\emph{linked}, denoted $g\sim_{\mathbf{a}} h$, if either $g = h$ or else there exist $(i_mj_m)$, \dots, $(i_{1}j_{1})\in\mathcal{O}_2^n$ such that \begin{itemize}
\item $g = (i_mj_m)\cdots(i_{1}j_{1})h$,
\item $\fij{i_sj_s}((i_sj_s)(i_{s-1}j_{s-1})\cdots(i_{1}j_{1})h)\neq0$ for all $1\leq s\leq m$. \end{itemize} \end{Def}
In particular, $\fij{i_1j_1}(h) \neq 0$ by \eqref{eq:fij-propiedad}. We claim that $\sim_{\mathbf{a}}$ is an equivalence relation. For, if $g$ and $h \in {\mathbb S}_n$ are $\mathbf{a}$-linked, then $h=(i_1j_1)\cdots(i_{m}j_{m})g$ and \begin{align*}
\noalign{\smallbreak} \fij{i_{s}j_{s}}((i_{s}j_{s})(i_{s+1}j_{s+1})\cdots(i_{m}j_{m})g)&=\fij{i_{s}j_{s}}((i_{s-1}j_{s-1})\cdots(i_{1}j_{1})h)\\ &\overset{\eqref{eq:fij-propiedad}}=\fij{i_{s}j_{s}}((i_{s}j_{s})(i_{s-1}j_{s-1})\cdots(i_{1}j_{1})h)\neq0. \end{align*} In the same way, we see that if $g\sim_{\mathbf{a}} h$ and also $h\sim_{\mathbf{a}} z$, then $g\sim_{\mathbf{a}} z$.
\subsection{A family of Hopf algebras}\label{subsect:family-gral-sn}
\
We fix $\mathbf{a}\in \gA_n$; recall the elements $f_{ij}$ defined in \eqref{eq:fij-n}. Let $\mathcal{I}_{\mathbf{a}}$ be the ideal of $T(V_n)\#\Bbbk^{{\mathbb S}_n}$ generated by \eqref{eq:rels-ijkl}, \eqref{eq:rels-ijik} and \begin{align} \label{eq:rels-powers Aa} \xij{ij}^2& - f_{ij}, \end{align} for all $(ij),(kl),(ik)\in{\mathcal O}_2^n$ such that $\#\{i,j,k,l\} = 4$. Then $$\mathcal{A}_{[\mathbf{a}]}:= T(V_n)\#\Bbbk^{{\mathbb S}_n}/\mathcal{I}_{\mathbf{a}}$$ is a Hopf algebra, see Remark \ref{obs:Aa-Hopf}. Also, if $\gr\mathcal{A}_{[\mathbf{a}]}\simeq{\mathcal B}(V_n)\#\Bbbk^{{\mathbb S}_n}\simeq\gr\mathcal{A}_{[\mathbf{b}]}$, then $\mathcal{A}_{[\mathbf{a}]} \simeq \mathcal{A}_{[\mathbf{b}]}$ if and only if $[\mathbf{a}]=[\mathbf{b}]$, what justifies the notation. If $n =3$, then $\gr\mathcal{A}_{[\mathbf{a}]}\simeq{\mathcal B}(V_3)\#\Bbbk^{{\mathbb S}_3}$ and $\dim \mathcal{A}_{[\mathbf{a}]} = 72$ \cite{AV}; for $n =4,5$ the dimension is finite but we do not know if it is the "right" one; for $n \geq 6$, the dimension is unknown to be finite.
\begin{Rem}\label{obs:Aa-Hopf} A straightforward computation shows that \begin{align*} \Delta(\xij{ij}^2)&=\xij{ij}^2\ot1+\sum_{h\in{\mathbb S}_n}\delta_{h}{\otimes} x_{h^{-1}(ij)h}^2\quad\mbox{ and }\\ \Delta(\fij{ij})&=\fij{ij}\ot1+\sum_{h\in{\mathbb S}_n}\delta_{h}{\otimes} f_{h^{-1}(i)h^{-1}(j)}. \end{align*} Then $J=\langle\xij{ij}^2 - f_{ij}:(ij)\in\mathcal{O}_2^n\rangle$ is a coideal. Since $\fij{ij}(e)=0$, we have that $J\subset\ker\epsilon$ and $\mathcal{S}(J)\subseteq \Bbbk^{{\mathbb S}_n}J$. Thus $\mathcal{I}_{\mathbf{a}} = (J)$ is a Hopf ideal and $\mathcal{A}_{[\mathbf{a}]}$ is a Hopf algebra quotient of $T(V_n)\#\Bbbk^{{\mathbb S}_n}$. We shall say that \emph{$\Bbbk^{{\mathbb S}_n}$ is a subalgebra of $\mathcal{A}_{[\mathbf{a}]}$} to express that the restriction of the projection $T(V_n)\#\Bbbk^{{\mathbb S}_n} \twoheadrightarrow \mathcal{A}_{[\mathbf{a}]}$ to $\Bbbk^{{\mathbb S}_n}$ is injective. \end{Rem}
\bigbreak Let us collect a few general facts on the representation theory of $\mathcal{A}_{[\mathbf{a}]}$.
\begin{Rem}\label{obs:fij} Assume that $\Bbbk^{{\mathbb S}_n}$ is a subalgebra of $\mathcal{A}_{[\mathbf{a}]}$ and let $M$ be an $\mathcal{A}_{[\mathbf{a}]}$-module. Hence \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate}
\item\label{item:rem-fij-a} If $(ij)\in{\mathcal O}_2^n$ satisfies $\fij{ij}(h)\neq0$, then $\rho(\xij{ij}):M[h]\rightarrow M[\mbox{\footnotesize (ij)}h]$ is an isomorphism.
\smallbreak \item\label{item:rem-fij-b} Let $g\sim_{\mathbf{a}} h \in {\mathbb S}_n$. Then $\rho(\xij{i_mj_m})\circ\cdots\circ\rho(\xij{i_{1}j_{1}}):M[h]\rightarrow M[g]$ is an isomorphism. \end{enumerate} \end{Rem}
\begin{proof} $\rho(\xij{ij}):M[h]\rightarrow M[\mbox{\footnotesize (ij)}h]$ is injective and $\rho(\xij{ij}):M[\mbox{\footnotesize (ij)}h]\rightarrow M[h]$ is surjective, by \eqref{eq:rels-powers Aa}. Interchanging the roles of $h$ and $\mbox{\footnotesize (ij)}h$, we get \eqref{item:rem-fij-a}. Now \eqref{item:rem-fij-b} follows from \eqref{item:rem-fij-a}. \end{proof}
This Remark is particularly useful to compare Verma modules.
\begin{prop}\label{prop: g h linked then the Verma are isomorphic} Assume that $\dim \mathcal{A}_{[\mathbf{a}]} < \infty$ and $\Bbbk^{{\mathbb S}_n}$ is a subalgebra of $\mathcal{A}_{[\mathbf{a}]}$. If $g$ and $h$ are $\mathbf{a}$-linked, then the Verma modules $M_g$ and $M_h$ are isomorphic. \end{prop} \begin{proof} The Verma module $M_h$ is generated by $m_1=1{\otimes}_{\Bbbk^{{\mathbb S}_n}}1\in M_{h}[h]$. By Remark \ref{obs:fij} \eqref{item:rem-fij-b}, there exists $m\in M_h[g]$ such that $M_h=\mathcal{A}_{[\mathbf{a}]}\cdot m$. Therefore, there is an epimorphism $M_g\twoheadrightarrow M_h$. Since $\mathcal{A}_{[\mathbf{a}]}$ is finite dimensional, all the Verma modules have the same dimension; hence $M_g\simeq M_h$. \end{proof}
\begin{Def}\label{def:generic} We say that the parameter $\mathbf{a}$ is \emph{generic} when any of the following equivalent conditions holds.
\renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate}
\item\label{item:def-generic-a} $\aij{ij}\neq\aij{kl}$ for all $(ij) \neq (kl)\in\mathcal{O}^n_2$.
\item\label{item:def-generic-b} $\aij{ij}\neq a_{h\triangleright (ij)}$ for all $(ij)\in{\mathcal O}_2^n$ and all $h\in {\mathbb S}_n - C_{{\mathbb S}_n}\mbox{\footnotesize ($ij$)}$.
\item\label{item:def-generic-c} $\fij{ij}(h) \neq 0$ for all $(ij)\in{\mathcal O}_2^n$ and all $h\in {\mathbb S}_n - C_{{\mathbb S}_n}\mbox{\footnotesize ($ij$)}$. \end{enumerate} \end{Def}
\begin{proof} \eqref{item:def-generic-a} $\implies$ \eqref{item:def-generic-b} is clear, since $(ij) \neq h\triangleright (ij)$ by the assumption on $h$. \eqref{item:def-generic-b} $\implies$ \eqref{item:def-generic-a} follows since any $(kl) \neq (ij)$ is of the form $(kl) = h\triangleright (ij)$, for some $h\notin {\mathbb S}_n^{(ij)}$. \eqref{item:def-generic-b} $\iff$ \eqref{item:def-generic-c}: given $(ij)$, we have $$ \{h\in {\mathbb S}_n : \aij{ij} = a_{h\triangleright (ij)}\} = \{h\in {\mathbb S}_n : \fij{ij}(h) = 0\}; $$ hence, one of these sets equals $C_{{\mathbb S}_n}\mbox{\footnotesize ($ij$)}$ iff the other does.\end{proof}
\begin{lema}\label{le:bounded in the dimension of modules for aij all different} Assume that $\mathbf{a}$ is generic, so that $g\sim_{\mathbf{a}} h$ for all $g,h\in{\mathbb S}_n -\{e\}$. If $\Bbbk^{{\mathbb S}_n}$ is a subalgebra of $\mathcal{A}_{[\mathbf{a}]}$, then \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item\label{item:lema-generic-b} If $\mathcal{A}_{[\mathbf{a}]}$ is finite dimensional{}, then the Verma modules $M_g$ and $M_h$ are isomorphic, for all $g,h\in{\mathbb S}_n -\{e\}$.
\smallbreak
\item\label{item:lema-generic-a} If $M$ is an $\mathcal{A}_{[\mathbf{a}]}$-module, then $\dim M[h]=\dim M[g]$ for all $g,h\in{\mathbb S}_n -\{e\}$. Thus $\dim M = (n!-1)\dim M[(ij)] + \dim M[e]$.
\smallbreak \item\label{item:lema-generic-c} If $M$ is simple and $n=3$, then $\dim M[h]\leq1$ for all $h\in{\mathbb S}_3 - \{e\}$.
\end{enumerate}
\end{lema}
\begin{proof} Let $(ij)\in {\mathbb S}_n$ and $g\in{\mathbb S}_n -\{e\}$. \begin{itemize}
\item If $g=(ik)$, then $g\sim_{\mathbf{a}} (ij)$, as $(ik)=(jk)(ij)(jk)$ and $\mathbf{a}$ is generic.
\smallbreak \item If $g=(kl)$ with $\#\{i,j,l,k\}=4$, then $(ij)\sim_{\mathbf{a}} (ik)$ and $(ik)\sim_{\mathbf{a}} (kl)$, hence $(ij)\sim_{\mathbf{a}} (kl)$.
\smallbreak \item If $g=(i_1i_2\cdots i_r)$ is an $r$-cycle, then $g=(i_1i_r)(i_1i_2\cdots i_{r-1})$. Hence $g\sim_{\mathbf{a}} (ij)$ by induction on $r$.
\smallbreak \item Let $g = g_1 \cdots g_{m}$ be the product of the disjoint cycles $g_1, \dots, g_{m}$, with $m\geq 2$; say $g_1=(i_1\cdots i_r)$, $g_2 = (i_{r+1}\cdots i_{r+s})$ and denote $y = g_3 \cdots g_{m}$. Then $g=(i_1i_{r+1})(i_1\cdots i_{r+s})y$ and $y\in C_{{\mathbb S}_n}\mbox{\footnotesize ($i_1i_{r+1}$)}$. Hence $g$ and $(ij)$ are linked by induction on $m$. \end{itemize}
Now \eqref{item:lema-generic-b} follows from Proposition \ref{prop: g h linked then the Verma are isomorphic} and \eqref{item:lema-generic-a} from Remark \ref{obs:fij}. If $n =3$ and $M$ is simple, then $\dim\mathcal{A}_{[\mathbf{a}]}=72>(\dim M)^2\geq25(\dim M[\mbox{\footnotesize (12)}])^2$ and the last assertion of the lemma follows. \end{proof}
\medbreak The characterization of all one dimensional $\mathcal{A}_{[\mathbf{a}]}$-modules is not difficult. Let $\thickapprox$ be the equivalence relation in ${\mathcal O}_2^n$ given by $(ij)\thickapprox(kl)$ iff $\aij{ij}=\aij{kl}$. Let ${\mathcal O}_2^n =\coprod_{s\in \Upsilon} {\mathcal C}_s$ be the associated partition. If $h\in {\mathbb S}_n$, then \begin{align}\label{eq:conditions-h} \fij{ij}(h) &= 0 \,\forall (ij)\in{\mathcal O}_2^n & &\iff & h^{-1}{\mathcal C}_sh &= {\mathcal C}_s \,\forall s\in \Upsilon & &\iff & h&\in{\mathbb S}_n^\mathbf{a}. \end{align}
\begin{lema}\label{prop:dim-uno} Assume that $\Bbbk^{{\mathbb S}_n}$ is a subalgebra of $\mathcal{A}_{[\mathbf{a}]}$ and let $h\in {\mathbb S}_n^\mathbf{a}$. Then $\Bbbk_h$ is a $\mathcal{A}_{[\mathbf{a}]}$-module with the action given by the algebra map $\zeta_h:\mathcal{A}_{[\mathbf{a}]}\rightarrow\Bbbk$, \begin{align} &\zeta_h(\xij{ij})=0,& (ij)&\in{\mathcal O}_2^n &\mbox{ and }& &\zeta_h(f)=f(h),& &f\in\Bbbk^{{\mathbb S}_n}. \end{align}
The one-dimensional representations of $\mathcal{A}_{[\mathbf{a}]}$ are all of this form. \end{lema}
\begin{proof} Clearly, $\zeta_h$ satisfies the relations of $T(V_n)\#\Bbbk^{{\mathbb S}_n}$, \eqref{eq:rels-ijkl} and \eqref{eq:rels-ijik}; \eqref{eq:rels-powers Aa} holds because $h$ fulfills \eqref{eq:conditions-h}. Now, let $M$ be a module of dimension 1. Then $M = M[h]$ for some $h$; thus $\fij{ij}(h)=0$ for all $(ij)\in{\mathcal O}_2^n$ by Remark \ref{obs:fij}. \end{proof}
\section{Simple and Verma modules over Hopf algebras with coradical $\Bbbk^{{\mathbb S}_3}$}\label{sect:modules}
\subsection{Verma modules}\label{subsect: cAa1a2}
\
In this Section, we focus on the case $n =3$. Let $\mathbf{a}\in \gA_3$. Explicitly, $\mathcal{A}_{[\mathbf{a}]}$ is the algebra $(T(V_3)\#\Bbbk^{{\mathbb S}_3})/\mathcal{I}_{\mathbf{a}}$ where $\mathcal{I}_{\mathbf{a}}$ is the ideal generated by \begin{align}\label{eq:rels-ideal} &R_{(13)(23)}, & &R_{(23)(13)}, & &\xij{ij}^2 - \fij{ij}, & &(ij)\in {\mathcal O}_2^3, \end{align} where \begin{align}\label{eq:fij-3} \begin{aligned} \fij{13} &= (\aij{13} - \aij{23})(\delta_{(12)}+\delta_{(123)}) + (\aij{13} - \aij{12})(\delta_{(23)}+\delta_{(132)}), \\\fij{23} &= (\aij{23} - \aij{12})(\delta_{(13)}+\delta_{(123)})+(\aij{23} - \aij{13})(\delta_{(12)}+\delta_{(132)}),\\ \fij{12} &= (\aij{12} - \aij{13})(\delta_{(23)}+\delta_{(123)}) + (\aij{12} - \aij{23})(\delta_{(13)}+\delta_{(132)}). \end{aligned} \end{align}
We know from \cite{AV} that $\mathcal{A}_{[\mathbf{a}]}$ is a Hopf algebra of dimension 72 and coradical isomorphic to $\Bbbk^{{\mathbb S}_3}$, for any $\mathbf{a}\in\gA_3$. Furthermore, any finite dimensional{} non-semisimple Hopf algebra with coradical $\Bbbk^{{\mathbb S}_3}$ is isomorphic to $\mathcal{A}_{[\mathbf{a}]}$ for some $\mathbf{a}\in\gA_3$; $\mathcal{A}_{[\mathbf{b}]}\simeq\mathcal{A}_{[\mathbf{a}]}$ iff $[\mathbf{a}]=[\mathbf{b}]$. Let $\Omega= \fij{13}(\mbox{\footnotesize (12)}\underline{\quad})- \fij{13}$, that is \begin{equation}\label{eq:omega} \begin{aligned} \Omega = &(\aij{23} - \aij{13})(\dij{(12)}-\dij{e})\\ & + (\aij{13} - \aij{12})(\dij{(13)}-\dij{(132)}) + (\aij{12} - \aij{23})(\dij{(23)}-\dij{(123)}). \end{aligned} \end{equation} The following formulae follow from the defining relations: \begin{align} \label{eq: rel 12 13 12} \xij{12}\xij{13}\xij{12}=& \xij{13}\xij{12}\xij{13}+\xij{23}(\aij{13} - \aij{12}),\\ \label{eq: rel 23 12 23} \xij{23}\xij{12}\xij{23}=& \xij{12}\xij{23}\xij{12}-\xij{13}(\aij{23} - \aij{12})\,\mbox{ and}\\ \label{eq: rel 23 12 13} \xij{23}\xij{12}\xij{13}=& \xij{13}\xij{12}\xij{23}+\xij{12}\Omega. \end{align}
Let $$ {\mathbb B}=\left\{ \begin{matrix} 1, &\xij{13}, &\xij{13}\xij{12}, &\xij{13}\xij{12}\xij{13}, &\xij{13}\xij{12}\xij{23}\xij{12},\\
&\xij{23}, &\xij{12}\xij{13}, &\xij{12}\xij{23}\xij{12},\\
&\xij{12}, &\xij{23}\xij{12}, &\xij{13}\xij{12}\xij{23},\\
& &\xij{12}\xij{23} \end{matrix} \right\}. $$
Then $\{x\delta_g|x\in {\mathbb B},\,g\in{\mathbb S}_3\}$ is a basis of $\mathcal{A}_{[\mathbf{a}]}$ \cite{AV}. Fix $g\in G$. The classes of the monomials in ${\mathbb B}$ form a basis of the Verma module $M_g$. Denote by $m_{(ij)\dots (rs)}$ the class of $\xij{ij}\dots\xij{rs}$; we simply set $m_{\textsf{top}} = m_{(13)(12)(23)(12)}$. The action of $\mathcal{A}_{[\mathbf{a}]}$ on $M_g$ is described in this basis by the following formulae: \begin{align}\label{eq:action-Verma-group-uno} f\cdot m_1 &= f(g) m_1, & f&\in \Bbbk^{{\mathbb S}_3}; \\ \label{eq:action-Verma-group-letras} f\cdot m_{(ij)\dots (rs)} &= f(\mbox{\footnotesize (ij)\dots (rs)} g)\, m_{(ij)\dots (rs)}, & f&\in \Bbbk^{{\mathbb S}_3}; \\\label{eq:action-Verma-uno} \xij{ij}\cdot m_1 &= \mij{ij}, & (ij)&\in {\mathcal O}^{3}_2; \\ \label{eq:action-Verma-letras1} \xij{ij}\cdot\mij{ij} &= \fij{ij}(g)m_1, & (ij)&\in {\mathcal O}^{3}_2; \\ \label{eqn:cuentas-muno} \xij{13}\cdot\mij{23} &= -\mdos{23}{12}-\mdos{12}{13}, \\ \label{eqn:13 en 12} \xij{13}\cdot\mij{12} &= \mdos{13}{12}, \\ \xij{23}\cdot\mij{13} &= -\mdos{12}{23}-\mdos{13}{12}, \\ \label{eqn:23 en 12} \xij{23}\cdot\mij{12} &= \mdos{23}{12}, \\ \xij{12}\cdot\mij{13} &= \mdos{12}{13}, \\ \xij{12}\cdot\mij{23} &= \mdos{12}{23}; \end{align} \begin{align} \xij{13}\cdot\mdos{13}{12} & = \fij{13}(\mbox{\footnotesize (12)}g)\, \mij{12}, \\ \xij{13}\cdot \mdos{12}{13} & = \mtres{13}{12}{13}, \\ \xij{13}\cdot \mdos{23}{12} & = -\mtres{13}{12}{13} - \fij{13}(\mbox{\footnotesize (23)}g)\, \mij{23} \\ \xij{13}\cdot \mdos{12}{23} & = \mtres{13}{12}{23}; \\ \xij{23}\cdot\mdos{13}{12} & = -\mtres{12}{23}{12} - \fij{12} (g) \mij{13}, \\ \xij{23}\cdot \mdos{12}{13} & = \mtres{13}{12}{23} + \Omega(g) \mij{12}, \\ \xij{23}\cdot \mdos{23}{12} & = \fij{23}(\mbox{\footnotesize (12)}g)\mij{12}, \\ \xij{23}\cdot \mdos{12}{23} & = \mtres{12}{23}{12}-\mij{13}\fij{23}(\mbox{\footnotesize (13)}), \\ \xij{12}\cdot\mdos{13}{12} & = \mtres{13}{12}{13}+\mij{23}\fij{13}(\mbox{\footnotesize (23)}), \\ \xij{12}\cdot \mdos{12}{13} & = \fij{12}(\mbox{\footnotesize (13)}g) \mij{13}, \\ \xij{12}\cdot \mdos{23}{12} & = \mtres{12}{23}{12}, \\ \xij{12}\cdot \mdos{12}{23} & = \fij{12}(\mbox{\footnotesize (23)}g) \mij{23}; \end{align} \begin{align} \xij{13}\cdot\mtres{13}{12}{13} & = \fij{13}(\mbox{\footnotesize (12)}\mbox{\footnotesize (13)}g)\, \mdos{12}{13}, \\ \xij{13}\cdot \mtres{12}{23}{12} & = m_{\textsf{top}}, \\ \label{eq:13 act 13 12 23}\xij{13}\cdot \mtres{13}{12}{23} & = \fij{13}(\mbox{\footnotesize (12)}\mbox{\footnotesize (23)}g) \, \mdos{12}{23}, \\ \xij{23}\cdot\mtres{13}{12}{13} & = m_{\textsf{top}} - (\fij{12}\Omega + (\aij{13} - \aij{12})\fij{23})(g) m_1, \\ \xij{23}\cdot \mtres{12}{23}{12} & = \fij{12}(g)\mdos{12}{23} + (\aij{12} - \aij{23})\mdos{13}{12}, \\ \label{eq:23 act 13 12 23} \xij{23}\cdot \mtres{13}{12}{23} & = \fij{23}(\mbox{\footnotesize (23)}\mbox{\footnotesize (12)}g)\mdos{12}{13} - \Omega(g)\mdos{23}{12}, \\ \xij{12}\cdot\mtres{13}{12}{13} & = (\fij{13}(g) + \fij{12}(\mbox{\footnotesize (23)}))\mdos{13}{12}+ \fij{12}(\mbox{\footnotesize (23)})\mdos{12}{23}, \\ \xij{12}\cdot \mtres{12}{23}{12} & = \fij{12}(\mbox{\footnotesize (23)}\mbox{\footnotesize (12)}g)\, \mdos{23}{12}, \\\label{eqn:cuentas-mtres} \xij{12}\cdot \mtres{13}{12}{23} & = - m_{\textsf{top}} + (\fij{13}(\mbox{\footnotesize (23)})\fij{23} - \fij{12}(\mbox{\footnotesize (13)}\underline{\quad})\fij{13})(g) m_1; \end{align} \begin{align} \label{eq:mcuatro-b} \xij{13}\cdot m_{\textsf{top}} & = \fij{13}(g)\, 
\mtres{12}{23}{12}, \\ \label{eq:mcuatro-a}\xij{23}\cdot m_{\textsf{top}} & = \fij{23}(g)\mtres{13}{12}{13}+(\fij{13}(\mbox{\footnotesize (23)})\fij{23}+\Omega\fij{12})(g)\mij{23}, \\ \label{eq:mcuatro-c} \xij{12}\cdot m_{\textsf{top}} & =-\fij{12}(g)\mtres{13}{12}{23}\\ \notag &+ (\fij{13}(\mbox{\footnotesize (23)})\fij{23}(\mbox{\footnotesize (12)}\underline{\quad})-\fij{12}(\mbox{\footnotesize (23)}\underline{\quad}) \fij{13}(\mbox{\footnotesize (12)}\underline{\quad}))(g)\mij{12}; \end{align}
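As a sanity check of the formulae above, note that \eqref{eq:action-Verma-uno} and \eqref{eq:action-Verma-letras1} give $\xij{ij}^2\cdot m_1 = \xij{ij}\cdot \mij{ij} = \fij{ij}(g)\, m_1$, which is precisely the action that the relation $\xij{ij}^2=\fij{ij}$ in \eqref{eq:rels-ideal} prescribes on $m_1\in M_g[g]$, cf. \eqref{eq:action-Verma-group-uno}.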
\bigbreak To proceed with the description of the simple modules, we split the consideration of the algebras $\mathcal{A}_{[\mathbf{a}]}$ into several cases.
\begin{itemize} \medbreak
\item $\aij{13} = \aij{12} = \aij{23}$. In this case, there is a projection $\mathcal{A}_{[\mathbf{a}]}\to \Bbbk^{{\mathbb S}_3}$. It is easy to see that any simple $\mathcal{A}_{[\mathbf{a}]}$-module is obtained from a simple $\Bbbk^{{\mathbb S}_3}$-module by composing with this projection; thus, $\widehat{\mathcal{A}_{[\mathbf{a}]}} \simeq {\mathbb S}_3$.
\medbreak
\item $\aij{13} = \aij{12}$ or $\aij{23} = \aij{12}$ or $\aij{13} = \aij{23}$, but not in the previous case.
Up to isomorphism, cf. \eqref{equ:action}, we may assume $\aij{12}\neq\aij{13}=\aij{23}$. For brevity, we shall say that $\mathbf{a}$ is \emph{sub-generic}.
\medbreak
\item $\mathbf{a}$ is generic.
\end{itemize}
In the next subsections, we investigate these two different cases. Let us consider the decomposition of the Verma module $M_g$ in isotypic components as $\Bbbk^{{\mathbb S}_3}$-modules. The isotypic components of the Verma module $M_e$ are \begin{equation}\label{eq:isotypic} \begin{aligned} M_e[e] &= \langle m_1, m_{\textsf{top}}\rangle, & M_e[\mbox{\footnotesize (12)}] &= \langle \mij{12}, \mtres{13}{12}{23}\rangle, \\ M_e[\mbox{\footnotesize (13)}] &= \langle \mij{13}, \mtres{12}{23}{12} \rangle, & M_e[\mbox{\footnotesize (23)}] &= \langle \mij{23}, \mtres{13}{12}{13}\rangle, \\ M_e[\mbox{\footnotesize (123)}] &= \langle \mdos{13}{12}, \mdos{12}{23}\rangle, & M_e[\mbox{\footnotesize (132)}] &= \langle \mdos{12}{13}, \mdos{23}{12}\rangle. \end{aligned} \end{equation} Let $g, h \in {\mathbb S}_3$, $(ij)\in {\mathcal O}_2^3$. By \eqref{eq:action-Verma-group-letras} and \eqref{eq:comp-isotyp}, we have \begin{align}\label{eq:isotypic-gral} M_g[h] &= M_e[hg^{-1}], \\ \label{eq:action-monomials}\xij{ij}\cdot M_g[h] &\subseteq M_g[\mbox{\footnotesize (ij)}h]. \end{align} It is convenient to introduce the following elements: \begin{align}\label{eq:msoc} m_{\textsf{soc}} &= \fij{13}(\mbox{\rm\footnotesize (23)})\fij{23}(\mbox{\rm\footnotesize (13)}) m_1-m_{\textsf{top}}, \\\label{eq:mo} m_{\textsf{o}} &= \mtres{13}{12}{13}+\fij{13}(\mbox{\rm\footnotesize (23)})\mij{23}. \end{align}
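Let us point out two consistency checks. The underlying permutation of the monomial $\xij{13}\xij{12}\xij{23}\xij{12}$ is $(13)(12)(23)(12)=e$, so indeed $m_{\textsf{top}}\in M_e[e]$, as stated in \eqref{eq:isotypic}. Moreover, each isotypic component in \eqref{eq:isotypic} is two-dimensional, in agreement with $\dim M_e = |{\mathbb B}| = 12$.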
\subsection{Case $\mathbf{a}\in\gA_3$ generic.}\label{subsec: generic case}
\
To determine the simple $\mathcal{A}_{[\mathbf{a}]}$-modules, we just need to determine the maximal submodules of the various Verma modules. By Lemma \ref{le:bounded in the dimension of modules for aij all different} \eqref{item:lema-generic-b}, we are reduced to considering the Verma modules $M_e$ and $M_g$ for some fixed $g\neq e$. We choose $g = \mbox{\footnotesize (13)(23)}$; for ease of exposition, we write the elements of ${\mathbb S}_3$ as products of transpositions.
We start with the following observation. Let $M$ be a cyclic $\mathcal{A}_{[\mathbf{a}]}$-module, generated by $v\in M[\mbox{\footnotesize (13)(23)}]$. By \eqref{eq:action-monomials} and acting by the monomials in our basis of $\mathcal{A}_{[\mathbf{a}]}$, we see that $M[\mbox{\footnotesize (23)(13)}]=\langle \xij{13}\xij{23}\cdot v, \xij{23}\xij{12}\cdot v, \xij{12}\xij{13}\cdot v\rangle$. This weight space is $\neq 0$ by Lemma \ref{le:bounded in the dimension of modules for aij all different} \eqref{item:lema-generic-a}, and a further application of this Lemma gives the following result.
\begin{Rem}\label{obs:ciclico-generico} Let $M$ be a cyclic $\mathcal{A}_{[\mathbf{a}]}$-module, generated by $v\in M[\mbox{\footnotesize (13)(23)}]$. If $\dim M[\mbox{\footnotesize (23)(13)}]= 1$, then \begin{equation}\label{eq:module generated by 1 element} \begin{aligned} M[\mbox{\footnotesize (23)}]&=\langle\xij{13}\cdot v\rangle,& M[e]&=\langle \xij{12}\xij{23}\cdot v, \xij{13}\xij{12}\cdot v\rangle,\\ M[\mbox{\footnotesize (12)}]&=\langle \xij{23}\cdot v\rangle, & M[\mbox{\footnotesize (13)}]&=\langle\xij{12}\cdot v\rangle, \\ M[\mbox{\footnotesize (13)(23)}]&=\langle v\rangle, & M[\mbox{\footnotesize (23)(13)}]&=\langle \xij{13}\xij{23}\cdot v\rangle. \end{aligned} \end{equation} \end{Rem} Thus, any cyclic module as in the Remark has either dimension 5, 6 or 7. Moreover, there is a simple module $L$ like this;
$L$ has a basis $\{v_g|e\neq g\in{\mathbb S}_3\}$ and the action is given by \begin{align}\label{eq:def-L} v_g&\in L[g], & \xij{ij}\cdot v_g &=\begin{cases} v_{(ij)g} & \mbox{ if }\sgn g=1,\\ \fij{ij}(g) v_{(ij)g}& \mbox{ if }\sgn g=-1.\\ \end{cases} \end{align} Let $\Bbbk_e$ be as in Lemma \ref{prop:dim-uno}. We shall see that $L$ and $\Bbbk_{e}$ are the only simple modules of $\mathcal{A}_{[\mathbf{a}]}$.
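For instance, the compatibility of \eqref{eq:def-L} with the relation $\xij{ij}^2=\fij{ij}$ can be checked directly on $v_{(23)}\in L[\mbox{\footnotesize (23)}]$: since $(23)$ is odd and $(13)(23)$ is even, \eqref{eq:def-L} gives $\xij{13}\cdot v_{(23)}=\fij{13}((23))\, v_{(13)(23)}$ and $\xij{13}\cdot v_{(13)(23)}=v_{(23)}$, whence $\xij{13}^2\cdot v_{(23)}=\fij{13}((23))\, v_{(23)}$, which is exactly the prescribed action of $\fij{13}$ on $L[\mbox{\footnotesize (23)}]$.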
\medbreak The Verma module $M_e$ projects onto the simple module $\Bbbk_e$, hence the kernel of this projection is a maximal submodule; explicitly this is $$N_e=\mathcal{A}_{[\mathbf{a}]}\cdot M_e[\mbox{\footnotesize (13)(23)}]=\oplus_{g\sim_{\mathbf{a}}(13)(23)}M_e[g]\oplus\langle m_{\textsf{top}}\rangle.$$ We see that this is the unique maximal submodule, as a consequence of the following description of all submodules of $M_e$.
\begin{lema}\label{le:submodules Me in the generic case} The submodules of $M_e$ are \begin{align*} \langle m_{\textsf{top}}\rangle\subsetneq\mathcal{A}_{[\mathbf{a}]}\cdot v\subsetneq N_e\subsetneq M_e \end{align*} for any $v\in M_e[\mbox{\rm\footnotesize (13)(23)}] - 0$. The submodules $\mathcal{A}_{[\mathbf{a}]}\cdot v$ and $\mathcal{A}_{[\mathbf{a}]}\cdot u$ coincide iff $v\in\langle u\rangle$. The quotients $\mathcal{A}_{[\mathbf{a}]}\cdot v /\langle m_{\textsf{top}}\rangle$ and $N_e/ \mathcal{A}_{[\mathbf{a}]}\cdot v$ are isomorphic to $L$; and $M_e/N_e$ and $\langle m_{\textsf{top}}\rangle$ are isomorphic to $\Bbbk_e$. \end{lema}
\begin{proof} By \eqref{eq:mcuatro-a}, \eqref{eq:mcuatro-b} and \eqref{eq:mcuatro-c}, we have $\xij{ij}\cdot m_{\textsf{top}}=0$ for all $(ij)\in\mathcal{O}_2^3$. Let \begin{align*} v&=\lambda\mdos{23}{12}+\mu\mdos{12}{13} & &\in M_e[\mbox{\footnotesize (13)(23)}] - 0, \\w &=\mu\mdos{12}{23}+(\mu-\lambda)\mdos{13}{12} & &\in M_e[\mbox{\footnotesize (23)(13)}]. \end{align*} Using the formulae \eqref{eqn:cuentas-muno} to \eqref{eqn:cuentas-mtres}, we see that $\xij{13}\xij{23}\cdot v$, $\xij{23}\xij{12}\cdot v$ and $\xij{12}\xij{13}\cdot v$ are non-zero multiples of $w$. That is, $\dim (\mathcal{A}_{[\mathbf{a}]}\cdot v)[\mbox{\footnotesize (23)(13)}] = 1$. Also, $\xij{12}\xij{23}\cdot v=-\mu m_{\textsf{top}}$ and $\xij{13}\xij{12}\cdot v=\lambda m_{\textsf{top}}$. Hence $$\biggl\{v,\,\xij{23}\cdot v,\,\xij{12}\cdot v,\,\xij{13}\cdot v,\, w,\, m_{\textsf{top}}\biggr\}$$ is a basis of $\mathcal{A}_{[\mathbf{a}]}\cdot v$ by Remark \ref{obs:ciclico-generico}.
Now let $N$ be a (proper, non-trivial) submodule of $M_e$. If $N \neq \langle m_{\textsf{top}}\rangle$, then there exists $v\in N[\mbox{\footnotesize (13)(23)}] - 0$. Hence $\mathcal{A}_{[\mathbf{a}]}\cdot v$ is a submodule of $N$ and $N[e]=\langle m_{\textsf{top}}\rangle$ because $m_1\in M_e[e]$ and $\dim M_e[e]=2$. Therefore $N=\mathcal{A}_{[\mathbf{a}]}\cdot N[\mbox{\footnotesize (13)(23)}]$. \end{proof}
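In particular, reading dimensions along the chain in Lemma \ref{le:submodules Me in the generic case}, we have $\dim\langle m_{\textsf{top}}\rangle=1$, $\dim\mathcal{A}_{[\mathbf{a}]}\cdot v=6$, $\dim N_e=11$ and $\dim M_e=12$; thus the composition factors of $M_e$ are $\Bbbk_e$, $L$, $L$, $\Bbbk_e$, of total dimension $1+5+5+1=12$, as expected.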
It is convenient to introduce the following $\mathcal{A}_{[\mathbf{a}]}$-modules which we will use in the Section \ref{sec: tipo de rep}. \begin{definition}\label{def: Wt ext L by ke - a generic} Let $\mathbf{t}\in\gA_3$. We denote by $W_{\mathbf{t}}(L,\Bbbk_e)$ the $\mathcal{A}_{[\mathbf{a}]}$-module with basis $\{w_g:g\in{\mathbb S}_3\}$ and action given by \begin{align*} &w_g\in W_{\mathbf{t}}(L,\Bbbk_e)[g],& &\xij{ij}\cdot w_g =\begin{cases} 0 & \mbox{ if }g=e,\\ w_{(ij)g} & \mbox{ if }g\neq e\mbox{ and }\sgn g=1,\\ \fij{ij}(g) w_{(ij)g} & \mbox{ if }g\neq(ij)\mbox{ and }\sgn g=-1,\\ \tij{ij}w_e & \mbox{ if }g=(ij). \end{cases} \end{align*} \end{definition} The well-definition of $W_{\mathbf{t}}$ follows from the next lemma. \begin{lema}\label{le: Wt ext L by ke - a generic} Let $\mathbf{t}, \tilde{\mathbf{t}}\in\gA_3$. \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item\label{item: Wt ext L by ke - a generic: t=0} If $\mathbf{t}=(0,0,0)$, then $W_{\mathbf{t}}(L,\Bbbk_e)\simeq\Bbbk_e\oplus L$. \smallbreak \item \label{item: Wt ext L by ke - a generic: t non 0} If $\mathbf{t}\neq(0,0,0)$, then there exists $v\in M_e[\mbox{\rm\footnotesize (13)(23)}] - 0$ such that $W_{\mathbf{t}}(L,\Bbbk_e)\simeq\mathcal{A}_{[\mathbf{a}]}\cdot v$. \item \label{item: Wt ext L by ke - a generic: t non 0 reciprocal} If $v\in M_e[\mbox{\rm\footnotesize (13)(23)}] - 0$, then there exists $\mathbf{t}\neq(0,0,0)$ such that $W_{\mathbf{t}}(L,\Bbbk_e)\simeq\mathcal{A}_{[\mathbf{a}]}\cdot v$. \smallbreak \smallbreak \item \label{item: Wt ext L by ke - a generic: is an ext} $W_{\mathbf{t}}(L,\Bbbk_e)$ is an extension of $L$ by $\Bbbk_e$. \smallbreak \item \label{item: Wt ext L by ke - a generic: iso} $W_{\mathbf{t}}(L,\Bbbk_e)\simeq W_{\tilde{\mathbf{t}}}(L,\Bbbk_e)$ if and only if $\mathbf{t}=\mu\tilde{\mathbf{t}}$ with $\mu\in\Bbbk^\times$. \end{enumerate} \end{lema}
\begin{proof} \eqref{item: Wt ext L by ke - a generic: t=0} is immediate. If we prove \eqref{item: Wt ext L by ke - a generic: t non 0}, then \eqref{item: Wt ext L by ke - a generic: is an ext} follows from Lemma \ref{le:submodules Me in the generic case}.
\eqref{item: Wt ext L by ke - a generic: t non 0} We set $w_{(13)(23)}=\tij{13}\mdos{23}{12}-\tij{12}\mdos{12}{13}\in M_e[\mbox{\footnotesize \mbox{\footnotesize (13)}(23)}]-0$, \begin{align*} &w_{(23)}=\xij{13}\cdot w_{(13)(23)},\quad w_{(13)}=\xij{12}\cdot w_{(13)(23)},\quad w_{(12)}=\xij{23}\cdot w_{(13)(23)},\\ &w_{(23)(13)}=\frac{1}{\fij{23}(\mbox{\footnotesize (13)})}\xij{23}\xij{12}\cdot w_{(13)(23)}\quad\mbox{ and }\quad w_e=m_{\textsf{top}}. \end{align*} By the proof of Lemma \ref{le:submodules Me in the generic case} and \eqref{eq: rel 23 12 23}, we see that $W_{\mathbf{t}}(L,\Bbbk_e)\simeq\mathcal{A}_{[\mathbf{a}]}\cdot w_{(13)(23)}$. \eqref{item: Wt ext L by ke - a generic: t non 0 reciprocal} follows from the proof of Lemma \ref{le:submodules Me in the generic case}. \eqref{item: Wt ext L by ke - a generic: iso} Let $\{\tilde{w}_g:g\in{\mathbb S}_3\}$ be the basis of $W_{\tilde{\mathbf{t}}}(L,\Bbbk_e)$ according to Definition \ref{def: Wt ext L by ke - a generic}. Let $F:W_{\mathbf{t}}(L,\Bbbk_e)\rightarrow W_{\tilde{\mathbf{t}}}(L,\Bbbk_e)$ be an isomorphism of $\mathcal{A}_{[\mathbf{a}]}$-module. Since $F$ is an isomorphism of $\Bbbk^{{\mathbb S}_3}$-modules, there exists $\mu_{g}\in\Bbbk^\times$ for all $g\in{\mathbb S}_3$ such that $F(w_g)=\mu_g \tilde{w}_g$. In particular, $F$ induces an automorphism of $L$. Since $L$ is simple (cf. Theorem \ref{thm:simples in the generic case}), $\mu_{g}=\mu_L$ for all $g\neq e$. Since $F(\xij{ij}\cdot w_{(ij)})=\xij{ij}\cdot F(w_{(ij)})$, we see that $\mathbf{t}=\frac{\mu_L}{\mu_e}\tilde{\mathbf{t}}$. Conversely, $F$ is well defined for all $\mu_e$ and $\mu_L$ such that $\mu=\frac{\mu_L}{\mu_e}$. \end{proof}
The Verma module $M_{(13)(23)}$ projects onto the simple module $L$, hence the kernel of this projection is a maximal submodule; explicitly this is $$N_{(13)(23)} = \mathcal{A}_{[\mathbf{a}]}\cdot M_{(13)(23)}[e]=M_{(13)(23)}[e]\oplus\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}.$$ We see that this is the unique maximal submodule, as a consequence of the following description of all submodules of $M_{(13)(23)}$. Recall $m_{\textsf{soc}}$ from \eqref{eq:msoc}.
\begin{lema}\label{le:submodules Mg in the generic case} The submodules of $M_{(13)(23)}$ are \begin{align*} \mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}} \subsetneq\mathcal{A}_{[\mathbf{a}]}\cdot v\subsetneq N_{(13)(23)} \subsetneq M_{(13)(23)} \end{align*} for all $v\in M_{(13)(23)}[e]-0$. The submodules $\mathcal{A}_{[\mathbf{a}]}\cdot v$ and $\mathcal{A}_{[\mathbf{a}]}\cdot u$ coincide iff $v\in\langle u\rangle$. The quotients $\mathcal{A}_{[\mathbf{a}]}\cdot v /\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}$ and $N_{(13)(23)}/ \mathcal{A}_{[\mathbf{a}]}\cdot v$ are isomorphic to $\Bbbk_e$; and $M_{(13)(23)}/N_{(13)(23)}$ and $\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}$ are isomorphic to $L$. \end{lema}
\begin{proof} Let $v=\lambda m_1+\mu m_{\textsf{top}}\in M_{(13)(23)}[\mbox{\footnotesize (13)(23)}] - 0$ and $N = \mathcal{A}_{[\mathbf{a}]}\cdot v$. Using the formulae \eqref{eqn:cuentas-muno} to \eqref{eqn:cuentas-mtres}, we see that \begin{align*} \xij{12}\xij{13}\cdot v &=\lambda\mdos{12}{13}-\mu\fij{13}(\mbox{\footnotesize (23)})^2\mdos{23}{12}\,\mbox{ and}\\ \xij{23}\xij{12}\cdot v &=\mu\fij{23}(\mbox{\footnotesize (13)})^2\mdos{12}{13}+\bigl(\lambda+2\mu\fij{13}(\mbox{\footnotesize (23)}) \fij{23}(\mbox{\footnotesize (13)})\bigr)\mdos{23}{12}. \end{align*} Thus, $\dim N[\mbox{\footnotesize (23)(13)}]= 1$ iff $\lambda+\mu\fij{13}(\mbox{\footnotesize (23)}) \fij{23}(\mbox{\footnotesize (13)})=0$, that is iff $v \in \langle m_{\textsf{soc}}\rangle - 0$. In this case, $$\biggl\{v,\,\xij{23}\cdot v,\,\xij{12}\cdot v,\,\xij{13}\cdot v,\, \xij{12}\xij{13}\cdot v\biggr\}$$ is a basis of $\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}$ by Remark \ref{obs:ciclico-generico}.
Now let $N$ be an arbitrary submodule of $M_{(13)(23)}$. If $\dim N[\mbox{\footnotesize (13)(23)}]= 2$, then $N = M_{(13)(23)}$. If $\dim N[\mbox{\footnotesize (13)(23)}]= 0$, then $N\subset M_{(13)(23)}[e]$ by Lemma \ref{le:bounded in the dimension of modules for aij all different}. But this is not possible since $\ker\xij{13}\cap\ker\xij{23}\cap\ker\xij{12}=0$, which is checked using the formulae \eqref{eqn:cuentas-muno} to \eqref{eq:mcuatro-c}. It remains to consider the case $\dim N[\mbox{\footnotesize (13)(23)}]=1$. By the argument at the beginning of the proof, the lemma follows. \end{proof}
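Analogously, from the chain in Lemma \ref{le:submodules Mg in the generic case} we get $\dim\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}=5$, $\dim\mathcal{A}_{[\mathbf{a}]}\cdot v=6$, $\dim N_{(13)(23)}=7$ and $\dim M_{(13)(23)}=12$, so the composition factors of $M_{(13)(23)}$ are $L$, $\Bbbk_e$, $\Bbbk_e$, $L$, again of total dimension $5+1+1+5=12$.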
It is convenient to introduce the following $\mathcal{A}_{[\mathbf{a}]}$-modules which we will use in the Section \ref{sec: tipo de rep}. \begin{definition}\label{def: Wt ext ke by L - a generic} Let $\mathbf{t}\in\gA_3$. We denote by $W_{\mathbf{t}}(\Bbbk_e, L)$ the $\mathcal{A}_{[\mathbf{a}]}$-module with basis $\{w_g:g\in{\mathbb S}_3\}$ and action given by \begin{align*} &w_g\in W_{\mathbf{t}}(\Bbbk_e, L)[g],& &\xij{ij}\cdot w_g =\begin{cases} t_{(ij)} w_{(ij)} & \mbox{ if }g=e,\\ \fij{ij}(g) w_{(ij)g} & \mbox{ if }g\neq e\mbox{ and }\sgn g=1,\\ w_{(ij)g} & \mbox{ if }\sgn g=-1. \end{cases} \end{align*} \end{definition} The well-definition of $W_{\mathbf{t}}(\Bbbk_e, L)$ follows from the next lemma. \begin{lema}\label{le: Wt ext ke by L - a generic} Let $\mathbf{t}, \tilde{\mathbf{t}}\in\gA_3$. \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item\label{item: Wt ext ke by L - a generic: t=0} If $\mathbf{t}=(0,0,0)$, then $W_{\mathbf{t}}(\Bbbk_e, L)\simeq L\oplus\Bbbk_e$. \smallbreak \item \label{item: Wt ext ke by L - a generic: t non 0} If $\mathbf{t}\neq(0,0,0)$, then there exists $v\in M_{(13)(23)}[e] - 0$ such that $W_{\mathbf{t}}(\Bbbk_e, L)\simeq\mathcal{A}_{[\mathbf{a}]}\cdot v$. \smallbreak \item \label{item: Wt ext ke by L - a generic: t non 0 reciprocal} If $v\in M_{(13)(23)}[e] - 0$, then there exists $\mathbf{t}\neq(0,0,0)$ such that $W_{\mathbf{t}}(\Bbbk_e, L)\simeq\mathcal{A}_{[\mathbf{a}]}\cdot v$. \smallbreak \item \label{item: Wt ext ke by L - a generic: is an ext} $W_{\mathbf{t}}(\Bbbk_e, L)$ is an extension of $\Bbbk_e$ by $L$. \smallbreak \item \label{item: Wt ext ke by L - a generic: iso} $W_{\mathbf{t}}(\Bbbk_e, L)\simeq W_{\tilde{\mathbf{t}}}(\Bbbk_e, L)$ if and only if $\mathbf{t}=\mu\tilde{\mathbf{t}}$ with $\mu\in\Bbbk^\times$. \end{enumerate} \end{lema}
\begin{proof} \eqref{item: Wt ext ke by L - a generic: t=0} is immediate. If we prove \eqref{item: Wt ext ke by L - a generic: t non 0}, then \eqref{item: Wt ext ke by L - a generic: is an ext} follows from Lemma \ref{le:submodules Mg in the generic case}.
\eqref{item: Wt ext ke by L - a generic: t non 0} We set $w_{(13)(23)}=m_{\textsf{soc}}\in M_{(13)(23)}[\mbox{\footnotesize (13)(23)}]$, \begin{align*} &w_{(23)}=\frac{\xij{13}\cdot w_{(13)(23)}}{\fij{13}(\mbox{\footnotesize (13)(23)})},\, w_{(13)}=\frac{\xij{12}\cdot w_{(13)(23)}}{\fij{12}(\mbox{\footnotesize (13)(23)})},\quad w_{(12)}=\frac{\xij{23}\cdot w_{(13)(23)}}{\fij{23}(\mbox{\footnotesize (13)(23)})}, \end{align*} $w_{(23)(13)}=\xij{23}\xij{12}\cdot w_{(13)(23)}$ and $w_e=-\tij{12}\mdos{13}{12}+\tij{13}\mdos{12}{23}\neq0$. Using the formulae \eqref{eqn:cuentas-muno} to \eqref{eqn:cuentas-mtres}, it is not difficult to see that $W_{\mathbf{t}}(\Bbbk_e, L)\simeq\mathcal{A}_{[\mathbf{a}]}\cdot w_{e}$. \eqref{item: Wt ext ke by L - a generic: t non 0 reciprocal} follows using the formulae \eqref{eqn:cuentas-muno} to \eqref{eqn:cuentas-mtres}. The proof of \eqref{item: Wt ext ke by L - a generic: iso} is similar to the proof of Lemma \ref{le: Wt ext L by ke - a generic} \eqref{item: Wt ext L by ke - a generic: iso}. \end{proof}
\bigbreak
\begin{thm}\label{thm:simples in the generic case} Let $\mathbf{a}\in\gA_3$ be generic. There are exactly $2$ simple $\mathcal{A}_{[\mathbf{a}]}$-modules up to isomorphism, namely $\Bbbk_e$ and $L$. Moreover, $M_e$ is the projective cover, and the injective hull, of $\Bbbk_e$; also, $M_{(13)(23)}$ is the projective cover, and the injective hull, of $L$. \end{thm}
\begin{proof} We know that $\Bbbk_e$ and $L$ are the only two simple $\mathcal{A}_{[\mathbf{a}]}$-modules up to isomorphism by Proposition \ref{pr:induced} and Lemmata \ref{le:bounded in the dimension of modules for aij all different} \eqref{item:lema-generic-b}, \ref{le:submodules Me in the generic case} and \ref{le:submodules Mg in the generic case}. Hence, a set of primitive orthogonal idempotents has at most 6 elements \cite[(6.8)]{CR}. Since the $\delta_g$, $g\in {\mathbb S}_3$ are orthogonal idempotents, they must be primitive. Therefore
$M_e$ and $M_{(13)(23)}$ are the projective covers (and the injective hulls) of $\Bbbk_e$ and $L$, respectively by \cite[(9.9)]{CR}, see page \pageref{bullet:injective-hull}. \end{proof}
\subsection{Case $\mathbf{a}\in\gA_3$ sub-generic.}\label{subsec: non generic case}
\
Throughout this subsection, we suppose that $\aij{12}\neq\aij{13}=\aij{23}$. Then the equivalence classes of ${\mathbb S}_3$ under $\sim_{\mathbf{a}}$ are \begin{align*} &\{e\}, & &\{(12)\} & &\text{and } \{(13), (23), (13)(23), (23)(13)\}. \end{align*} In fact, \begin{itemize} \item $e$ and $(12)$ belong to the isotropy group ${\mathbb S}_3^{\mathbf{a}}$. \smallbreak
\item $\mbox{\footnotesize (13)}=\mbox{\footnotesize (23)(12)(23)}$ with $\fij{12}(\mbox{\footnotesize (23)})=\aij{12}-\aij{13}\neq0$ and\\ $\fij{23}(\mbox{\footnotesize (12)(23)})=\aij{23}-\aij{12}\neq0$.
\smallbreak
\item $\mbox{\footnotesize (123)} = \mbox{\footnotesize (13)(23)}$ with $\fij{13}(\mbox{\footnotesize (23)})=\aij{13}-\aij{12}\neq0$.
\smallbreak
\item $\mbox{\footnotesize (132)} =\mbox{\footnotesize (23)(13)}$ with $\fij{23}(\mbox{\footnotesize (13)})=\aij{23}-\aij{12}\neq0$. \end{itemize}
\smallbreak
To determine the simple $\mathcal{A}_{[\mathbf{a}]}$-modules, we proceed as in the subsection above; that is, we just need to determine the maximal submodules of the Verma modules $M_{e}$, $M_{(12)}$ and $M_{(13)(23)}$, see Proposition \ref{prop: g h linked then the Verma are isomorphic}.
\smallbreak
Let $M$ be a cyclic $\mathcal{A}_{[\mathbf{a}]}$-module generated by $v\in M[\mbox{\footnotesize (13)(23)}]$. Here again, we can describe the weight spaces of $M$. By \eqref{eq:action-monomials} and acting by the monomials in our basis, we see that $M[\mbox{\footnotesize (23)(13)}]=\langle \xij{13}\xij{23}\cdot v, \xij{23}\xij{12}\cdot v, \xij{12}\xij{13}\cdot v\rangle$. This weight space is $\neq 0$ by Remark \ref{obs:fij} applied to $(13)(23)\sim_{\mathbf{a}}(23)(13)$, and a further application of this Remark gives the following result.
\begin{Rem}\label{obs:ciclico-no-generico} Let $M$ be a cyclic $\mathcal{A}_{[\mathbf{a}]}$-module generated by $v\in M[\mbox{\footnotesize (13)(23)}]$. If $\dim M[\mbox{\footnotesize (23)(13)}]= 1$, then \begin{equation}\label{eq:module generated by 1 element in the non generic case} \begin{aligned} M[e] &= \langle\xij{23}\xij{13}\cdot v,\, \xij{12}\xij{23}\cdot v,\, \xij{13}\xij{12}\cdot v\rangle, & M[\mbox{\footnotesize (13)(23)}]&=\langle v\rangle,\\ M[\mbox{\footnotesize (12)}]&=\langle \xij{23}\cdot v,\, \xij{13}\xij{12}\xij{13}\cdot v\rangle, & M[\mbox{\footnotesize (23)}]&=\langle\xij{13}\cdot v\rangle, \\ M[\mbox{\footnotesize (23)(13)}]&=\langle \xij{12}\xij{13}\cdot v\rangle, & M[\mbox{\footnotesize (13)}]&=\langle\xij{12}\cdot v\rangle. \end{aligned} \end{equation} \end{Rem}
There is a simple module $L$ like this; $\{v_{(13)}, v_{(23)}, v_{(13)(23)}, v_{(23)(13)}\}$ is a basis of $L$ and the action is given by \begin{align}\label{eq:def-Ltilde} v_g&\in L[g], & \xij{ij}\cdot v_g =\begin{cases} 0 & \mbox{ if }g=(ij),\\ v_{(ij)g} & \mbox{ if }g\neq(ij),\,\sgn g=-1,\\ \fij{ij}(g) v_{(ij)g}& \mbox{ if }\sgn g=1.\\ \end{cases} \end{align} Let $\Bbbk_{(12)}$ and $\Bbbk_e$ be as in Lemma \ref{prop:dim-uno}. We shall see that $L$, $\Bbbk_{(12)}$ and $\Bbbk_e$ are the only simple modules of $\mathcal{A}_{[\mathbf{a}]}$.
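Again, a minimal compatibility check with the relation $\xij{ij}^2=\fij{ij}$: by the first case of \eqref{eq:def-Ltilde}, $\xij{13}\cdot v_{(13)}=0$, hence $\xij{13}^2\cdot v_{(13)}=0$; this agrees with the relation since $\fij{13}(\mbox{\footnotesize (13)})=0$ by \eqref{eq:fij-3}.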
\smallbreak The Verma module $M_e$ projects onto the simple module $\Bbbk_e$, hence the kernel of this projection is a maximal submodule; explicitly this is $$N_e=\mathcal{A}_{[\mathbf{a}]}\cdot \left(M_e[\mbox{\footnotesize (13)(23)}]\oplus M_e[\mbox{\footnotesize (12)}]\right)=\oplus_{g\sim_{\mathbf{a}}(13)(23)}M_e[g]\oplus M_e[\mbox{\footnotesize (12)}]\oplus\langle m_{\textsf{top}}\rangle.$$
We see that this is the unique maximal submodule, as a consequence of the following description of all submodules of $M_e$. \begin{lema}\label{le:submodules Me in the non generic case} The lattice of (proper, non-trivial) submodules of $M_{e}$ is displayed in \eqref{eq:lattice-Me-nongeneric}, where $v$ and $w$ satisfy $$M_e[\mbox{\rm\footnotesize (13)(23)}]=\langle v, \mdos{23}{12}\rangle, \qquad M_e[\mbox{\rm\footnotesize (12)}]=\hspace{-3pt}\langle w,\mtres{13}{12}{23}\rangle.$$ The submodules $\mathcal{A}_{[\mathbf{a}]}\cdot v$ (resp. $\mathcal{A}_{[\mathbf{a}]}\cdot w$) and $\mathcal{A}_{[\mathbf{a}]}\cdot v_1$ (resp. $\mathcal{A}_{[\mathbf{a}]}\cdot w_1$) coincide iff $v\in \langle v_1\rangle$ (resp. $w\in \langle w_1\rangle$). The labels on the arrows indicate the quotient of the module on top by the module on the bottom.
\end{lema}
\begin{equation}\label{eq:lattice-Me-nongeneric} \xymatrix{ & N_{e} \ar@{-}[1, -1]_{\Bbbk_{(12)}}\ar@{-}[1, 1]^L & \\ \mathcal{A}_{[\mathbf{a}]}\cdot M_{e}[\mbox{\rm\footnotesize (13)(23)}]\ar@{-}[d]_L \ar@{-}[1, 1]^L& & \mathcal{A}_{[\mathbf{a}]}\cdot M_{e}[\mbox{\rm\footnotesize (12)}] \ar@{-}[d]^{\Bbbk_{(12)}} \ar@{-}[1, -1]_{\Bbbk_{(12)}}\\ \mathcal{A}_{[\mathbf{a}]}\cdot v \ar@{-}[d]_L & \ar@{-}[1, -1]^L \hspace{-3pt}\mathcal{A}_{[\mathbf{a}]}\cdot\langle\mtres{13}{12}{23},\mdos{23}{12}\rangle \ar@{-}[1, 1]_{\Bbbk_{(12)}} & \mathcal{A}_{[\mathbf{a}]}\cdot w \ar@{-}[d]^{\Bbbk_{(12)}} \\ \mathcal{A}_{[\mathbf{a}]}\cdot \mtres{13}{12}{23} \quad \ar@{-}[1, 1]_{\Bbbk_{(12)}} & & \mathcal{A}_{[\mathbf{a}]}\cdot\mdos{23}{12}\ar@{-}[1, -1]^{L}\\ & \langle m_{\textsf{top}}\rangle & } \end{equation}
\begin{proof} Let \begin{align*} v&=\lambda\mdos{23}{12}+\mu\mdos{12}{13} & &\in M_e[\mbox{\footnotesize (13)(23)}] - 0,\\ \tilde{v} &=\mu\mdos{12}{23}+(\mu-\lambda)\mdos{13}{12} & &\in M_e[\mbox{\footnotesize (23)(13)}]. \end{align*} Using the formulae \eqref{eqn:cuentas-muno} to \eqref{eqn:cuentas-mtres}, we see that $\xij{23}\xij{12}\cdot v$ and $\xij{12}\xij{13}\cdot v$ are non-zero multiples of $\tilde{v}$. That is, $\dim (\mathcal{A}_{[\mathbf{a}]}\cdot v)[\mbox{\footnotesize (23)(13)}] = 1$. Moreover, $\xij{12}\xij{23}\cdot v=-\mu m_{\textsf{top}}$ and $\xij{13}\xij{12}\cdot v=\lambda m_{\textsf{top}}$; and $\xij{23}\cdot v$ and $(\xij{13}\xij{12}\xij{13})\cdot v$ are non-zero multiples of $\mu\mtres{13}{12}{23}$. By Remark \ref{obs:ciclico-no-generico}, we obtain a basis for $\mathcal{A}_{[\mathbf{a}]}\cdot v$: \begin{align}\label{eq:submodules Me in the non generic case 2} \biggl\{v,\,\xij{12}\cdot v,\,\xij{13}\cdot v,\, \tilde{v},\, m_{\textsf{top}},\,\mu\mtres{13}{12}{23}\biggr\}; \end{align} if $\mu=0$, we omit the last vector.
\smallbreak
By \eqref{eq:mcuatro-a}, \eqref{eq:mcuatro-b} and \eqref{eq:mcuatro-c}, $\xij{ij}\cdot m_{\textsf{top}}=0$ for all $(ij)\in\mathcal{O}_2^3$. Then $$\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{top}}=\langle m_{\textsf{top}}\rangle$$ and $\mathcal{A}_{[\mathbf{a}]}\cdot u=\mathcal{A}_{[\mathbf{a}]}\cdot m_1=M_e$ if $u\in M_e[e]$ is linearly independent of $m_{\textsf{top}}$.
\smallbreak
By \eqref{eq:13 act 13 12 23}, \eqref{eq:23 act 13 12 23} and \eqref{eqn:cuentas-mtres}, $\xij{ij}\cdot\mtres{13}{12}{23}=-\delta_{(12)}(\mbox{\footnotesize (ij)})m_{\textsf{top}}$ for all $(ij)\in\mathcal{O}_2^3$. Then $$\mathcal{A}_{[\mathbf{a}]}\cdot\mtres{13}{12}{23}=\langle m_{\textsf{top}}, \mtres{13}{12}{23}\rangle.$$ By \eqref{eq:action-Verma-letras1}, \eqref{eqn:13 en 12} and \eqref{eqn:23 en 12}, $\xij{ij}\cdot\mij{12}=\delta_{(13)}(\mbox{\footnotesize(ij)})\mdos{13}{12}+\delta_{(23)}(\mbox{\footnotesize(ij)})\mdos{23}{12}$ for all $(ij)\in\mathcal{O}_2^3$. Then $$\mathcal{A}_{[\mathbf{a}]}\cdot w=\mathcal{A}_{[\mathbf{a}]}\cdot\mdos{23}{12}\oplus\langle w\rangle$$ by \eqref{eq:submodules Me in the non generic case 2} and Remark \ref{obs:fij}, if $w\in M_e[\mbox{\footnotesize (12)}]$ is linearly independent of $\mtres{13}{12}{23}$.
\smallbreak
Now let $N$ be a (proper, non-trivial) submodule of $M_e$ which is not $\langle m_{\textsf{top}}\rangle$. We set $\widetilde{N} =\mathcal{A}_{[\mathbf{a}]}\cdot N[\mbox{\footnotesize (12)}]+ \mathcal{A}_{[\mathbf{a}]}\cdot N[\mbox{\footnotesize (13)(23)}]$. Then $\widetilde{N}[g]=N[g]$ for all $g\neq e$ by Remark \ref{obs:fij}. By the argument at the beginning of the proof, $\langle m_{\textsf{top}}\rangle\subset\widetilde{N}$. Then $\widetilde{N}[e]=\langle m_{\textsf{top}}\rangle=N[e]$ because otherwise $N=M_e$. Therefore $N=\widetilde{N}$. To finish, we have to calculate the submodules of $M_e$ generated by homogeneous subspaces of $M_e[\mbox{\footnotesize (12)}]\oplus M_e[\mbox{\footnotesize (13)(23)}]$; this follows from the argument at the beginning of the proof. \end{proof}
\smallbreak
The Verma module $M_{(13)(23)}$ projects onto the simple module $L$, hence the kernel of this projection is a maximal submodule; explicitly this is \begin{align*} N_{(13)(23)} &= \mathcal{A}_{[\mathbf{a}]}\cdot\left(M_{(13)(23)}[e]\oplus M_{(13)(23)}[\mbox{\footnotesize (12)}]\right)\\ &=M_{(13)(23)}[e]\oplus M_{(13)(23)}[\mbox{\footnotesize (12)}]\oplus\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}. \end{align*}
We see that this is the unique maximal submodule, as a consequence of the following description of all submodules of $M_{(13)(23)}$.
\begin{lema}\label{le:submodules Mg in the non generic case} The lattice of (proper, non-trivial) submodules of $M_{(13)(23)}$ is $$ \xymatrix{ & N_{(13)(23)} \ar@{-}[1, -1]_{\Bbbk_{(12)}} \ar@{-}[1, 1]^{\Bbbk_\epsilon} & \\ \mathcal{A}_{[\mathbf{a}]}\cdot M_{(13)(23)}[e]\ar@{-}[d]_{\Bbbk_e} \ar@{-}[1, 1]^{\Bbbk_e} & & \mathcal{A}_{[\mathbf{a}]}\cdot M_{(13)(23)}[\mbox{\rm\footnotesize (12)}] \ar@{-}[d]^{\Bbbk_{(12)}}\ar@{-}[1, -1]_{\Bbbk_{(12)}}\\ \mathcal{A}_{[\mathbf{a}]}\cdot v \ar@{-}[d]_{\Bbbk_e} & \ar@{-}[1, -1]^{\Bbbk_{e}}\mathcal{A}_{[\mathbf{a}]}\cdot\langle m_{\textsf{o}},\mdos{12}{23}\rangle\ar@{-}[1, 1]_{\Bbbk_{(12)}} & \mathcal{A}_{[\mathbf{a}]}\cdot w \ar@{-}[d]^{\Bbbk_{(12)}} \\ \mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{o}} \quad \ar@{-}[1, 1]_{\Bbbk_{(12)}} & & \mathcal{A}_{[\mathbf{a}]}\cdot\mdos{12}{23}\ar@{-}[1, -1]^{\Bbbk_e}\\ & \mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}} & } $$ Here $v$ and $w$ satisfy $M_{(13)(23)}[e]=\langle v,\mdos{12}{23}\rangle$, $M_{(13)(23)}[(12)]=\langle w, m_{\textsf{o}}\rangle$. The submodules $\mathcal{A}_{[\mathbf{a}]}\cdot v$ (resp. $\mathcal{A}_{[\mathbf{a}]}\cdot w$) and $\mathcal{A}_{[\mathbf{a}]}\cdot v_1$ (resp. $\mathcal{A}_{[\mathbf{a}]}\cdot w_1$) coincide iff $v\in \langle v_1\rangle$ (resp. $w\in \langle w_1\rangle$). The labels on the arrows indicate the quotient of the module on top by the module on the bottom.
\end{lema}
\begin{proof} Let $u=\lambda m_1+\mu m_{\textsf{top}}\in M_{(13)(23)}[\mbox{\footnotesize (13)(23)}] - 0$ and set $N=\mathcal{A}_{[\mathbf{a}]}\cdot u$. Using the formulae \eqref{eqn:cuentas-muno} to \eqref{eqn:cuentas-mtres}, we see that \begin{align*} \xij{12}\xij{13}\cdot u &=\lambda\mdos{12}{13}-\mu\fij{13}(\mbox{\footnotesize (23)})^2\mdos{23}{12}\,\mbox{ and}\\ \xij{23}\xij{12}\cdot u &=\mu\fij{23}(\mbox{\footnotesize (13)})^2\mdos{12}{13}+\bigl(\lambda+2\mu\fij{13}(\mbox{\footnotesize (23)}) \fij{23}(\mbox{\footnotesize (13)})\bigr)\mdos{23}{12}. \end{align*} Thus, $\dim N[\mbox{\footnotesize (23)(13)}]= 1$ iff $\lambda+\mu\fij{13}(\mbox{\footnotesize (23)}) \fij{23}(\mbox{\footnotesize (13)})=0$, that is iff $u\in\langle m_{\textsf{soc}}\rangle-0$. By Remark \ref{obs:ciclico-no-generico}, $$ \mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}=\langle m_{\textsf{soc}},\,\xij{12}\cdot m_{\textsf{soc}},\,\xij{13}\cdot m_{\textsf{soc}},\, \xij{12}\xij{13}\cdot m_{\textsf{soc}}\rangle $$ and $\mathcal{A}_{[\mathbf{a}]}\cdot u=\mathcal{A}_{[\mathbf{a}]}\cdot m_1 =M_{(13)(23)}$, if $u\in M_{(13)(23)}[\mbox{\footnotesize (13)(23)}]$ is linearly independent of $m_{\textsf{soc}}$.
\smallbreak
By the formulae \eqref{eqn:cuentas-muno} to \eqref{eq:mcuatro-c}, if $u\in\bigl( M_{(13)(23)}[e]\oplus M_{(13)(23)}[\mbox{\footnotesize (12)}]\bigr)-0$, then $0\neq\langle\xij{13}\cdot u, \xij{23}\cdot u\rangle\subset\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}$. Therefore $$\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}\subset\mathcal{A}_{[\mathbf{a}]}\cdot u$$ by Remark \ref{obs:fij}. Also, if $v$ and $w$ satisfy $M_{(13)(23)}[e]=\langle v,\mdos{12}{23}\rangle$ and $M_{(13)(23)}[(12)]=\langle w, m_{\textsf{o}}\rangle$, then $$ \langle\xij{12}\cdot v\rangle=\langle m_{\textsf{o}}\rangle\quad\mbox{ and }\quad\langle\xij{12}\cdot w\rangle=\langle\mdos{12}{23}\rangle. $$
\smallbreak
Now let $N$ be a (proper, non-trivial) submodule of $M_{(13)(23)}$ which is not $\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}$. We set $\widetilde{N}= \mathcal{A}_{[\mathbf{a}]}\cdot N[e]+\mathcal{A}_{[\mathbf{a}]}\cdot N[\mbox{\footnotesize (12)}]$. Then $\widetilde{N}[g]=N[g]$ for $g= e, (12)$ by Remark \ref{obs:fij}. By the argument at the beginning of the proof, $\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}\subset\widetilde{N}$. Then $\oplus_{g\sim_{\mathbf{a}}(13)(23)}N[g]=\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{soc}}=\oplus_{g\sim_{\mathbf{a}}(13)(23)}\widetilde{N}[g]$ because otherwise $N=M_{(13)(23)}$. Therefore $N=\widetilde{N}$. To finish, we have to calculate the submodules of $M_{(13)(23)}$ generated by homogeneous subspaces of $M_{(13)(23)}[\mbox{\footnotesize (12)}]\oplus M_{(13)(23)}[e]$; this follows from the argument at the beginning of the proof. \end{proof}
\smallbreak
The Verma module $M_{(12)}$ projects onto the simple module $\Bbbk_{(12)}$, hence the kernel of this projection is a maximal submodule; explicitly this is \begin{align*} N_{(12)} &= \mathcal{A}_{[\mathbf{a}]}\cdot\left(M_{(12)}[\mbox{\footnotesize (13)(23)}]\oplus M_{(12)}[e]\right)\\ &=\oplus_{g\sim_{\mathbf{a}}(13)(23)}M_{(12)}[g]\oplus M_{(12)}[e]\oplus\langle m_{\textsf{top}}\rangle. \end{align*}
We see that this is the unique maximal submodule, as a consequence of the following description of all submodules of $M_{(12)}$.
\begin{lema}\label{le:submodules M12 in the non generic case} The lattice of (proper, non-trivial) submodules of $M_{(12)}$ is $$ \xymatrix{ & N_{(12)} \ar@{-}[1, -1]_{\Bbbk_e}\ar@{-}[1, 1]^L & \\ \mathcal{A}_{[\mathbf{a}]}\cdot M_{(12)}[\mbox{\rm\footnotesize (13)(23)}]\ar@{-}[d]_L \ar@{-}[1, 1]^L & & \mathcal{A}_{[\mathbf{a}]}\cdot M_{(12)}[e] \ar@{-}[d]^{\Bbbk_e} \ar@{-}[1, -1]_{\Bbbk_e}\\ \mathcal{A}_{[\mathbf{a}]}\cdot v \ar@{-}[d]_L & \ar@{-}[1, -1]^L \mathcal{A}_{[\mathbf{a}]}\cdot\langle\mtres{13}{12}{23}, m_{\textsf{o}}\rangle \ar@{-}[1, 1]_{\Bbbk_e} & \mathcal{A}_{[\mathbf{a}]}\cdot w \ar@{-}[d]^{\Bbbk_e} \\ \mathcal{A}_{[\mathbf{a}]}\cdot\mtres{13}{12}{23}\ar@{-}[1, 1]_{\Bbbk_e} & & \mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{o}}\ar@{-}[1, -1]^L\\ & \langlem_{\textsf{top}}\rangle & } $$ Here $v$ and $w$ satisfy $M_{(12)}[\mbox{\rm\footnotesize (13)(23)}]=\langle v, m_{\textsf{o}}\rangle$, $M_{(12)}[e]=\langle w, \mtres{13}{12}{23}\rangle$. The submodules $\mathcal{A}_{[\mathbf{a}]}\cdot v$ (resp. $\mathcal{A}_{[\mathbf{a}]}\cdot w$) and $\mathcal{A}_{[\mathbf{a}]}\cdot v_1$ (resp. $\mathcal{A}_{[\mathbf{a}]}\cdot w_1$) coincide iff $v\in \langle v_1\rangle$ (resp. $w\in \langle w_1\rangle$). The labels on the arrows indicate the quotient of the module on top by the module on the bottom.
\end{lema}
\begin{proof} Let $v=\lambda\mij{23}+\mu\mtres{13}{12}{13}\in M_{(12)}[\mbox{\footnotesize (13)(23)}]$ be a non-zero element. By Remark \ref{obs:ciclico-no-generico} and using the formulae \eqref{eqn:cuentas-muno} to \eqref{eq:mcuatro-c}, we see that \begin{align} \notag (\mathcal{A}_{[\mathbf{a}]}\cdot v)[\mbox{\footnotesize (13)(23)}]&=\langle v\rangle,\\ \notag (\mathcal{A}_{[\mathbf{a}]}\cdot v)[\mbox{\footnotesize (13)}]&=\langle(\fij{13}(\mbox{\footnotesize (23)})\mu-\lambda)\mdos{12}{23} -\mu\fij{13}(\mbox{\footnotesize (23)})\mdos{13}{12}\rangle, \\ \label{eq:submodules M12 in the non generic case 1} (\mathcal{A}_{[\mathbf{a}]}\cdot v)[\mbox{\footnotesize (23)}]&=\langle(\fij{13}(\mbox{\footnotesize (23)})\mu-\lambda)\mdos{12}{13}-\lambda\mdos{23}{12}\rangle,\\ \notag (\mathcal{A}_{[\mathbf{a}]}\cdot v)[\mbox{\footnotesize (23)(13)}]&=\langle(\fij{13}(\mbox{\footnotesize (23)})\mu-\lambda)\fij{23}(\mbox{\footnotesize (13)})\mij{13}+ \lambda\mtres{12}{23}{12}\rangle,\\ \notag (\mathcal{A}_{[\mathbf{a}]}\cdot v)[\mbox{\footnotesize (12)}]&=\langlem_{\textsf{top}}\rangle\mbox{ and}\\ \notag (\mathcal{A}_{[\mathbf{a}]}\cdot v)[e]&=\langle(\fij{13}(\mbox{\footnotesize (23)})\mu-\lambda)\mtres{13}{12}{23}\rangle. \end{align} \smallbreak
By \eqref{eq:mcuatro-a}, \eqref{eq:mcuatro-b} and \eqref{eq:mcuatro-c}, $\xij{ij}\cdot m_{\textsf{top}}=0$ for all $(ij)\in\mathcal{O}_2^3$. Then $$\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{top}}=\langle m_{\textsf{top}}\rangle$$ and $\mathcal{A}_{[\mathbf{a}]}\cdot u=\mathcal{A}_{[\mathbf{a}]}\cdot m_1=M_{(12)}$, if $u\in M_{(12)}[\mbox{\footnotesize (12)}]$ is linearly independent of $m_{\textsf{top}}$. By \eqref{eq:13 act 13 12 23}, \eqref{eq:23 act 13 12 23} and \eqref{eqn:cuentas-mtres}, $\xij{ij}\cdot\mtres{13}{12}{23}=-\delta_{(12)}(\mbox{\footnotesize (ij)})m_{\textsf{top}}$ for all $(ij)\in\mathcal{O}_2^3$. Then $$\mathcal{A}_{[\mathbf{a}]}\cdot\mtres{13}{12}{23}=\langle m_{\textsf{top}}, \mtres{13}{12}{23}\rangle.$$ By \eqref{eq:action-Verma-letras1}, \eqref{eqn:13 en 12} and \eqref{eqn:23 en 12}, $\xij{ij}\cdot\mij{12}=\delta_{(13)}(\mbox{\footnotesize(ij)})\mdos{13}{12}+\delta_{(23)}(\mbox{\footnotesize(ij)})\mdos{23}{12}$ for all $(ij)\in\mathcal{O}_2^3$. Then $$\mathcal{A}_{[\mathbf{a}]}\cdot w=\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{o}}\oplus\langle w\rangle$$ by \eqref{eq:submodules M12 in the non generic case 1} and Remark \ref{obs:fij}, if $w\in M_{(12)}[e]$ is linearly independent of $\mtres{13}{12}{23}$.
\bigbreak
Now let $N$ be a (proper, non-trivial) submodule of $M_{(12)}$ which is not $\langle m_{\textsf{top}}\rangle$. We set $\widetilde{N}=\mathcal{A}_{[\mathbf{a}]}\cdot N[e]+ \mathcal{A}_{[\mathbf{a}]}\cdot N[\mbox{\footnotesize (13)(23)}]$. Then $\widetilde{N}[g]=N[g]$ for all $g\neq (12)$ by Remark \ref{obs:fij}. By the argument at the beginning of the proof, $\langle m_{\textsf{top}}\rangle\subset\widetilde{N}$. Then $N[\mbox{\footnotesize (12)}]=\langle m_{\textsf{top}}\rangle=\widetilde{N}[\mbox{\footnotesize (12)}]$ because otherwise $N=M_{(12)}$. Therefore $N=\widetilde{N}$. To finish, we have to calculate the submodules of $M_{(12)}$ generated by homogeneous subspaces of $M_{(12)}[\mbox{\footnotesize (13)(23)}]\oplus M_{(12)}[e]$; this follows from the argument at the beginning of the proof. \end{proof}
\bigbreak
As a consequence, we obtain the simple modules in the sub-generic case. The proof of the next theorem runs in the same way as that of Theorem \ref{thm:simples in the generic case}.
\begin{thm}\label{thm:simples in the non generic case} Let $\mathbf{a}\in\gA_3$ with $\aij{12}\neq\aij{13}=\aij{23}$. There are exactly $3$ simple $\mathcal{A}_{[\mathbf{a}]}$-modules up to isomorphism, namely $\Bbbk_e$, $\Bbbk_{(12)}$ and $L$. Moreover, $M_e$ is the projective cover, and the injective hull, of $\Bbbk_e$; $M_{(12)}$ is the projective cover, and the injective hull, of $\Bbbk_{(12)}$; and $M_{(13)(23)}$ is the projective cover, and the injective hull, of $L$. \end{thm}
\begin{proof} We know that $\Bbbk_e$, $\Bbbk_{(12)}$ and $L$ are the only simple $\mathcal{A}_{[\mathbf{a}]}$-modules up to isomorphism by Proposition \ref{pr:induced} and Lemmata \ref{le:submodules Me in the non generic case}, \ref{le:submodules Mg in the non generic case} and \ref{le:submodules M12 in the non generic case}. Hence, a set of primitive orthogonal idempotents has at most 6 elements \cite[(6.8)]{CR}. Since the $\delta_g$, $g\in {\mathbb S}_3$ are orthogonal idempotents, they must be primitive. Therefore
$M_e$, $M_{(12)}$ and $M_{(13)(23)}$ are respectively the projective covers (and the injective hulls) of $\Bbbk_e$, $\Bbbk_{(12)}$ and $L$ by \cite[(9.9)]{CR}, see page \pageref{bullet:injective-hull}. \end{proof}
\section{Representation type of $\mathcal{A}_{[\mathbf{a}]}$}\label{sec: tipo de rep}
In this section, we assume that $n=3$ as in the preceding one. We will determine the $\mathcal{A}_{[\mathbf{a}]}$-modules which are extensions of simple $\mathcal{A}_{[\mathbf{a}]}$-modules. As a consequence, we will show that $\mathcal{A}_{[\mathbf{a}]}$ is not of finite representation type for all $\mathbf{a}\in\gA_3$.
\subsection{Extensions of simple modules}\label{subsec: ext of simple mod}
By the following lemma, in order to determine the extensions of simple $\mathcal{A}_{[\mathbf{a}]}$-modules we are reduced to considering only submodules of the Verma modules. Then we shall split the consideration into three different cases, as in Section \ref{sect:modules}, and use the lemmata there.
\begin{lema}\label{le: extensions are ss or included in verma} Let $\mathbf{a}\in\gA_3$ be non-zero. Let $S$ and $T$ be simple $\mathcal{A}_{[\mathbf{a}]}$-modules and $M$ be an extension of $T$ by $S$. Then either $M\simeq S\oplus T$ as $\mathcal{A}_{[\mathbf{a}]}$-modules or $M$ is an indecomposable submodule of the Verma module which is the injective hull of $S$. \end{lema}
\begin{proof}
If there exists a proper submodule $N$ of $M$ which is not $S$, then $M\simeq S\oplus T$ as $\mathcal{A}_{[\mathbf{a}]}$-modules. In fact, $N\cap S$ is either $0$ or $S$ because $S$ is simple. Let $\pi$ be as in \eqref{eq:comm diagram}. Since $T$ is simple, $\pi_{|N}:N\rightarrow T$ is an epimorphism. Therefore $M\simeq S\oplus T$ since $\dim N=\dim(N\cap S)+\dim T$.
Let $M_S$ be the Verma module which is the injective hull of $S$. Then we have the following commutative diagram \begin{align}\label{eq:comm diagram}
\xymatrix{ 0\ar[r] & S\ar[r]^\imath \ar@{^{(}->}[1,0]& M\ar[r]^\pi \ar@{-->}[1,-1]^f& T\ar[r] & 0 \\
& M_S & & & } \end{align}
Therefore either $M\simeq S\oplus T$ as $\mathcal{A}_{[\mathbf{a}]}$-modules or $f$ is injective. If $f$ is injective, then $M$ is indecomposable by Lemmata \ref{le:submodules Me in the generic case} and \ref{le:submodules Mg in the generic case} in the generic case, and by Lemmata \ref{le:submodules Me in the non generic case}, \ref{le:submodules Mg in the non generic case} and \ref{le:submodules M12 in the non generic case} in the sub-generic case. \end{proof}
\smallbreak
Recall the modules $W_\mathbf{t}(L,\Bbbk_e)$ and $W_\mathbf{t}(\Bbbk_e, L)$ from Definitions \ref{def: Wt ext L by ke - a generic} and \ref{def: Wt ext ke by L - a generic}. The next results follow from Lemmata \ref{le:submodules Me in the generic case}, \ref{le:submodules Mg in the generic case}, \ref{le:submodules Me in the non generic case}, \ref{le:submodules Mg in the non generic case} and \ref{le:submodules M12 in the non generic case} by Lemma \ref{le: extensions are ss or included in verma}.
\begin{lema}\label{le: ext of simple modules a generic} Let $\mathbf{a}\in\gA_3$ be generic. Let $S$ and $T$ be simple $\mathcal{A}_{[\mathbf{a}]}$-modules and $M$ be an extension of $T$ by $S$. \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item\label{item: ext of simple modules a generic: S=T} If $S\simeq T$, then $M\simeq S\oplus S$. \item\label{item: ext of simple modules a generic: L ke} If $S\simeq\Bbbk_e$ and $T\simeq L$, then $M\simeq W_{\mathbf{t}}(L,\Bbbk_e)$ for some $\mathbf{t}\in\gA_3$. \item\label{item: ext of simple modules a generic: ke L} If $S\simeq L$ and $T\simeq\Bbbk_e$, then $M\simeq W_{\mathbf{t}}(\Bbbk_e, L)$ for some $\mathbf{t}\in\gA_3$. $\qed$ \end{enumerate} \end{lema}
\begin{lema}\label{le: ext of simple modules a sub generic} Let $\mathbf{a}\in\gA_3$ with $\aij{12}\neq\aij{13}=\aij{23}$. Let $S$ and $T$ be simple $\mathcal{A}_{[\mathbf{a}]}$-modules and $M$ be an extension of $T$ by $S$. \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item\label{item: ext of simple modules a sub generic: S=T} If $S\simeq T$, then $M\simeq S\oplus S$. \item\label{item: ext of simple modules a sub generic: k12 ke} If $S\simeq\Bbbk_e$ and $T\simeq \Bbbk_{(12)}$, then $M\simeq\mathcal{A}_{[\mathbf{a}]}\cdot\mtres{13}{12}{23}\subset M_e$. \item\label{item: ext of simple modules a sub generic: ke k12} If $S\simeq \Bbbk_{(12)}$ and $T\simeq\Bbbk_e$, then $M\simeq\mathcal{A}_{[\mathbf{a}]}\cdot\mtres{13}{12}{23}\subset M_{(12)}$. \item\label{item: ext of simple modules a sub generic: L ke} If $S\simeq\Bbbk_e$ and $T\simeq L$, then $M\simeq\mathcal{A}_{[\mathbf{a}]}\cdot\mdos{23}{12}\subset M_e$. \item\label{item: ext of simple modules a sub generic: ke L} If $S\simeq L$ and $T\simeq\Bbbk_e$, then $M\simeq\mathcal{A}_{[\mathbf{a}]}\cdot\mdos{12}{23}\subset M_{(13)(23)}$. \item\label{item: ext of simple modules a sub generic: L k12} If $S\simeq \Bbbk_{(12)}$ and $T\simeq L$, then $M\simeq\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{o}}\subset M_{(12)}$. \item\label{item: ext of simple modules a sub generic: k12 L} If $S\simeq L$ and $T\simeq\Bbbk_{(12)}$, then $M\simeq\mathcal{A}_{[\mathbf{a}]}\cdot m_{\textsf{o}}\subset M_{(13)(23)}$. $\qed$ \end{enumerate} \end{lema}
\smallbreak
\begin{lema}\label{le: extensions of one dimensional simple mod} Let $\Bbbk_g$ and $\Bbbk_h$ be one-dimensional simple $\mathcal{A}_{[(0,0,0)]}$-modules and $M$ be an extension of $\Bbbk_h$ by $\Bbbk_g$. Then \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item \label{item: extensions of one dimensional simple mod: same sgn} If $\sgn g=\sgn h$, then $M\simeq\Bbbk_g\oplus\Bbbk_h$. \item \label{item: extensions of one dimensional simple mod: dife sgn} If $\sgn g\neq\sgn h$ and $M$ is not isomorphic to $\Bbbk_g\oplus\Bbbk_h$, then $g=(st)h$ for a unique $(st)\in\mathcal{O}_2^3$ and $M$ has a basis $\{w_g, w_h\}$ such that $\langle w_g\rangle\simeq \Bbbk_g$ as $\mathcal{A}_{[\mathbf{a}]}$-modules, $w_h\in M[h]$ and $\xij{ij} w_h=\delta_{(ij), (st)}w_g$. \end{enumerate} \end{lema}
\begin{proof} $M=M[g]\oplus M[h]$ as $\Bbbk^{{\mathbb S}_3}$-modules and $M[g]\simeq\Bbbk_g$ as $\mathcal{A}_{[\mathbf{a}]}$-modules. Since $\xij{ij}\cdot M[h]\subset M[\mbox{\footnotesize (ij)}h]$, the lemma follows. \end{proof}
\subsection{Representation type} We summarize some facts about the representation type of an algebra.
Let $R$ be an algebra and $\{S_1, ..., S_t\}$ be a complete list of non-isomorphic simple $R$-modules. The \emph{separated quiver of} $R$ is constructed as follows. The set of vertices is $\{S_1, ..., S_t, S_1', ..., S_t'\}$ and we write $\dim\operatorname{Ext}_R^1(S_i, S_j)$ arrows from $S_i$ to $S_j'$, cf. \cite[p. 350]{ARS}. Let us denote by $\Gamma_R$ the underlying graph of the separated quiver of $R$.
A characterization of the hereditary algebras of finite and tame representation type is well known, see for example \cite{dlab-ringel2}. As a consequence, one obtains the next well-known result. In the case of finite representation type it is Theorem D of \cite{dlab-ringel1} or Theorem X.2.6 of \cite{ARS}; the proof given in \cite{ARS} adapts immediately to the case when $R$ is of tame representation type.
\begin{thm}\label{thm: rep type ARS} Let $R$ be a finite dimensional algebra with radical square zero. Then $R$ is of finite (resp. tame) representation type if and only if $\Gamma_R$ is a disjoint union of finite (resp. affine) Dynkin diagrams. $\qed$ \end{thm}
In order to apply the above theorem, we will use the following observation.
\begin{Rem}\label{obs: for apply thm rep type ARS} If $\mathfrak{r}$ is the radical of $R$, then the separated quiver of $R$ is equal to the separated quiver of $R/\mathfrak{r}^2$, see for example \cite[Lemma 4.5]{agustin}. \end{Rem}
We obtain the following result by combining Corollary VI.1.5 and Proposition VI.1.6 of \cite{ARS}.
\begin{prop}\label{prop:combined of ARS} Let $R$ be an artin algebra, $\chi$ an infinite cardinal and assume there are $\chi$ non-isomorphic indecomposable modules of length $n$. Then $R$ is not of finite representation type. $\qed$ \end{prop}
\smallbreak
Here is the announced result.
\begin{prop} $\mathcal{A}_{[(0,0,0)]}$ is of wild representation type. If $\mathbf{a}\in\gA_3$ is non-zero, then $\mathcal{A}_{[\mathbf{a}]}$ is not of finite representation type. \end{prop}
\begin{proof} If $\mathbf{a}\in\gA_3$ is generic, we can apply Proposition \ref{prop:combined of ARS} by Lemma \ref{le: Wt ext L by ke - a generic} and Lemma \ref{le: Wt ext ke by L - a generic}. Hence $\mathcal{A}_{[\mathbf{a}]}$ is not of finite representation type for all $\mathbf{a}\in\gA_3$ generic.
Let $\mathbf{a}\in\gA_3$ be sub-generic or zero. Then $\dim\operatorname{Ext}_{\mathcal{A}_{[\mathbf{a}]}}^1(T,S)=0$ if $S\simeq T$ by Lemmata \ref{le: ext of simple modules a sub generic} and \ref{le: extensions of one dimensional simple mod}, and $\dim\operatorname{Ext}_{\mathcal{A}_{[\mathbf{a}]}}^1(T,S)=1$ otherwise. In fact, suppose that $\aij{12}\neq\aij{13}=\aij{23}$, $S\simeq\Bbbk_{e}$ and $T\simeq L$. By Lemma \ref{le:submodules Mg in the non generic case} and Theorem \ref{thm:simples in the non generic case}, $L$ admits a projective resolution of the form $$ ... \longrightarrow P^2\longrightarrow M_e\oplus M_{(12)}\overset{F}{\longrightarrow} M_{(13)(23)}\longrightarrow L\longrightarrow 0, $$
where $F$ is defined by $F_{|M_e}(m_1)=v$ and $F_{|M_{(12)}}(m_1)=w$; here $v$ and $w$ satisfy $M_{(13)(23)}[e]=\langle v,\mdos{12}{23}\rangle$, $M_{(13)(23)}[(12)]=\langle w, m_{\textsf{o}}\rangle$. Then $$ 0\longrightarrow\Hom_{\mathcal{A}_{[\mathbf{a}]}}(M_{(13)(23)},\Bbbk_e)\overset{\partial_0}{\longrightarrow}\Hom_{\mathcal{A}_{[\mathbf{a}]}}(M_e\oplus M_{(12)},\Bbbk_e) \overset{\partial_1}{\longrightarrow} ... $$ and $\operatorname{Ext}_{\mathcal{A}_{[\mathbf{a}]}}^1(L,\Bbbk_e)=\ker\partial_1/\operatorname{Im}\partial_0$. Since $M_h$ is generated by $m_1\in M_h[h]$ for all $h\in{\mathbb S}_3$, $\Hom_{\mathcal{A}_{[\mathbf{a}]}}(M_{(13)(23)},\Bbbk_e)=0$ and $\dim\Hom_{\mathcal{A}_{[\mathbf{a}]}}(M_e\oplus M_{(12)},\Bbbk_e)=1$. By Lemma \ref{le: ext of simple modules a sub generic}, we know that there exists a non-trivial extension of $L$ by $\Bbbk_e$ and therefore $\dim\operatorname{Ext}_{\mathcal{A}_{[\mathbf{a}]}}^1(L,\Bbbk_e)=1$ because it is non-zero. For other $S$ and $T$ and for the case $\mathbf{a}=(0,0,0)$, the proof is similar.
\smallbreak Hence if $\mathbf{a}\in\gA_3$ is sub-generic and $\aij{12}\neq\aij{13}=\aij{23}$, the separated quiver of $\mathcal{A}_{[\mathbf{a}]}$ is $$ \xymatrix{ \Bbbk_e\ar@{->}[d]\ar@{->}[r] & \Bbbk_{(12)}' & L \ar@//[d] \ar@//[l] \\ L'& \Bbbk_{(12)} \ar@{->}[l]\ar@{->}[r]& \Bbbk_e'; } $$ and the separated quiver of $\mathcal{A}_{[(0,0,0)]}$ is $$ \xymatrix{ &\Bbbk_e\ar@{->}[d]\ar@{->}[dr]\ar@{->}[dl]& & &\Bbbk_{(12)}\ar@{->}[d]\ar@{->}[dr]\ar@{->}[dl]& \\ \Bbbk_{(12)}'& \Bbbk_{(13)}'& \Bbbk_{(23)}' & \Bbbk_{e}'& \Bbbk_{(13)(23)}'& \Bbbk_{(23)(13)}' \\ \Bbbk_{(13)(23)}\ar@{->}[u]\ar@{->}[ur]\ar@{->}[urr]& & \Bbbk_{(23)(13)}\ar@{->}[u]\ar@{->}[ul]\ar@{->}[ull]& \Bbbk_{(13)}\ar@{->}[u]\ar@{->}[ur]\ar@{->}[urr]& & \Bbbk_{(23)}.\ar@{->}[u]\ar@{->}[ul]\ar@{->}[ull] } $$ Therefore the proposition follows from Theorem \ref{thm: rep type ARS} and Remark \ref{obs: for apply thm rep type ARS}. \end{proof}
\begin{Rem} Let $\mathbf{a}\in\gA_3$ be generic. It is not difficult to prove that the separated quiver of $\mathcal{A}_{[\mathbf{a}]}$ is \begin{align*} &\xymatrix{ \Bbbk_e\ar@{->}[r]\ar@{->}@<-1ex>[r] & L'} & &\xymatrix{ L\ar@{->}[r]\ar@{->}@<-1ex>[r] & \Bbbk_e'.} \end{align*} \end{Rem}
\section{On the structure of $\mathcal{A}_{[\mathbf{a}]}$}\label{sect:more-info}
In this section, we assume that $n=3$ as in the preceding one.
\subsection{Cocycle deformations}
\
We show in this subsection that the algebras $\mathcal{A}_{[\mathbf{a}]}$ are cocycle deformations of each other. For this, we first recall the following theorem due to Masuoka.
\smallbreak If $K$ is a Hopf subalgebra of a Hopf algebra $H$ and $J$ is a Hopf ideal of $K$, then the two-sided ideal $(J)$ of $H$ is in fact a Hopf ideal of $H$.
\begin{thm}\label{thm:cocycle}\cite[Thm. 2]{masuoka}, \cite[Thm. 3.4]{bitidascarainu}. Suppose that $K$ is a Hopf subalgebra of a Hopf algebra $H$. Let $I,J$ be Hopf ideals of $K$. If there is an algebra map $\psi$ from $K$ to $\Bbbk$ such that \begin{itemize}
\item $J=\psi\rightharpoonup I\leftharpoonup\psi^{-1}$ and
\item $H/(\psi\rightharpoonup I)$ is nonzero, \end{itemize} then $H/(\psi\rightharpoonup I)$ is a $(H/(I),H/(J))$-biGalois object and so the quotient Hopf algebras $H/(I)$, $H/(J)$ are monoidally Morita-Takeuchi equivalent. If $H/(I)$ and $H/(J)$ are finite dimensional, then $H/(I)$ and $H/(J)$ are cocycle deformations of each other. {$\qed$} \end{thm}
We will need the following lemma in order to apply Masuoka's theorem.
\begin{lema}\label{le:tensor subalgebra} If $W$ is a vector space and $U$ is a vector subspace of $W^{{\otimes} n}$, then the subalgebra of $T(W)$ generated by $U$ is isomorphic to $T(U)$. \end{lema}
\begin{proof} It is enough to prove the lemma for $U=W^{{\otimes} n}$. Fix $n$ and let $(x_i)_{i\in I}$ be a basis of $W$. Then $\mathbf{B}=\{X_{\mathbf{i}}=x_{i_1} \cdots x_{i_n}: \mathbf{i}=(i_1, ..., i_n)\in I^{\times n} \}$ forms a basis of $W^{{\otimes} n}$. Since the $X_{\mathbf{i}}$'s are all homogeneous elements of the same degree in $T(W)$, we only have to prove that $\{X_{\mathbf{i}_1}\cdots X_{\mathbf{i}_m}:\mathbf{i}_1, ..., \mathbf{i}_m\in I^{\times n}\}$ is linearly independent in $T(W)$ for all $m\geq1$ and this is true because $\mathbf{B}$ is a basis of monomials of the same degree. \end{proof}
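For instance, if $W$ has basis $\{x,y\}$, $n=2$ and $U=\langle x\otimes y\rangle\subset W^{\otimes 2}$, then the subalgebra of $T(W)$ generated by $U$ is $\Bbbk[x\otimes y]\simeq T(U)$: the powers $(x\otimes y)^m$ are pairwise distinct monomials (of degree $2m$) in the monomial basis of $T(W)$, hence linearly independent.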
Here is the announced result. Observe that this gives an alternative proof of the fact that $\dim\mathcal{A}_{[\mathbf{a}]} = 72$, proved in \cite{AV} using the Diamond Lemma.
\begin{prop}\label{prop:cocycle deformations} For all $\mathbf{a}\in\gA_3$, $\mathcal{A}_{[\mathbf{a}]}$ is a Hopf algebra monoidally Morita-Takeuchi equivalent to ${\mathcal B}(V_3)\#\Bbbk^{{\mathbb S}_3}$. \end{prop}
\begin{proof} To start with, we consider the algebra $\mathcal{K}_{\mathbf{a}} := T(V_3)\#\Bbbk^{{\mathbb S}_3}/\mathcal{J}_{\mathbf{a}}$, $\mathbf{a}\in\gA_3$, where $\mathcal{J}_{\mathbf{a}}$ is the ideal generated by \begin{align} \label{eq:rels-powers Ka}R_{(13)(23)},\quad R_{(23)(13)}\quad\mbox{ and }\quad \xij{ij}^2+\sum_{g\in{\mathbb S}_3}a_{g^{-1}(ij)g}\,\delta_g,\quad (ij)\in\mathcal{O}_2^3. \end{align}
Let $M_3=\Bbbk^{{\mathbb S}_3}$ with the regular representation. For all $\mathbf{a}\in\gA_3$, $M_3$ is a $\mathcal{K}_{\mathbf{a}}$-module with action given by \begin{align*} \quad\xij{ij}\cdot m_g
=\begin{cases}
m_{(ij)g} & \mbox{ if }\sgn g=-1,\\
-a_{g^{-1}(ij)g}\,m_{(ij)g} & \mbox{ if }\sgn g=1.\\
\end{cases} \end{align*} We have to check that the relations defining $\mathcal{K}_{\mathbf{a}}$ hold in the action. Then \begin{align*} \delta_h(\xij{ij}\cdot m_g)&=\delta_h(\lambda_g m_{(ij)g})=\lambda_g\delta_h((ij)g) m_{(ij)g}=\lambda_g\delta_{(ij)h}(g)m_{(ij)g}\\ &=\xij{ij}\cdot(\delta_{(ij)h}\cdot m_{g}) \end{align*} with $\lambda_g\in\Bbbk$ according to the definition of the action. Note that $$ \xij{ij}\cdot(\xij{ik}\cdot m_g) =\begin{cases} -a_{g^{-1}(ik)(ij)(ik)g}\,m_{(ij)(ik)g} & \mbox{ if }\sgn g=-1,\\ -a_{g^{-1}(ik)g}\,m_{(ij)(ik)g} & \mbox{ if }\sgn g=1.\\ \end{cases} $$ In any case, we have that $\xij{ij}^2\cdot m_g=-a_{g^{-1}(ij)g}\,m_g$ and $$ R_{(ij)(ik)}\cdot m_{g}=-(\sum_{(st)\in\mathcal{O}_2^3}a_{g^{-1}(st)g})m_{(ij)(ik)g}=0. $$ Let $W=\langle R_{(13)(23)}, \, R_{(23)(13)},\, \xij{ij}^2: (ij)\in\mathcal{O}_2^3\rangle$ and $K$ be the subalgebra of $T(V_3)$ generated by $W$; $K$ is a braided Hopf subalgebra because $W$ is a Yetter-Drinfeld submodule contained in $\mathcal{P}(T(V_3))$ the primitive elements of $T(V_3)$. Then $K\#\Bbbk^{{\mathbb S}_3}$ is a Hopf subalgebra of $T(V_3)\#\Bbbk^{{\mathbb S}_3}$. For each $\mathbf{a}\in\gA_3$, by Lemma \ref{le:tensor subalgebra} we can define the algebra morphism $\psi=\psi_K\otimes\epsilon:K\#\Bbbk^{{\mathbb S}_3}\rightarrow\Bbbk$ where $$
\psi_{K|W[g]}=0\,\mbox{ if }\, g\neq e\,\mbox{ and }\, \psi_K(\xij{ij}^2)=-\aij{ij}\,\forall(ij)\in\mathcal{O}_2^3. $$
If $J$ denotes the ideal of $K\#\Bbbk^{{\mathbb S}_3}$ generated by the generators of $K$, then $\psi^{-1}\rightharpoonup J\leftharpoonup\psi$ is the ideal generated by the generators of $\mathcal{I}_\mathbf{a}$. In fact, $\psi^{-1}=\psi\circ\mathcal{S}$ is the inverse element of $\psi$ in the convolution group $\Alg(K\#\Bbbk^{{\mathbb S}_3},\Bbbk)$, $\mathcal{S}(W)[g]\subset(K\#\Bbbk^{{\mathbb S}_3})[g^{-1}]$ and $\mathcal{S}(\xij{ij}^2)=-\sum_{h\in{\mathbb S}_3}\delta_{h^{-1}} x_{h^{-1}(ij)h}^2$. Then our claim follows if we apply $\psi{\otimes}\id{\otimes}\psi^{-1}$ to $$ (\Delta{\otimes}\id)\Delta(\xij{ij}^2)=\xij{ij}^2\ot1\ot1+\sum_{h\in{\mathbb S}_3}\delta_{h}{\otimes} x_{h^{-1}(ij)h}^2\ot1 +\sum_{h,g\in{\mathbb S}_3}\delta_{h}{\otimes}\delta_{g}{\otimes} x_{g^{-1}h^{-1}(ij)hg}^2 $$ and $(\Delta{\otimes}\id)\Delta(x)=x{\otimes} 1\ot1+x_{-1}{\otimes} x_0\ot1+x_{-2}{\otimes} x_{-1}{\otimes} x_0$ for $g\neq e$ and $x\in W[g]$; note that also $x_0\in W[g]$.
The ideal $\psi^{-1}\rightharpoonup J$ is generated by $$ R_{(13)(23)},\quad R_{(23)(13)}\quad\mbox{ and }\quad\xij{ij}^2+\sum_{g\in{\mathbb S}_3}a_{g^{-1}(ij)g}\delta_g\quad \forall(ij)\in\mathcal{O}_2^3. $$
Now $\mathcal{K}_\mathbf{a} = T(V_3)\#\Bbbk^{{\mathbb S}_3}/\langle \psi^{-1}\rightharpoonup J\rangle \neq 0$ because it has a non-zero quotient in $\operatorname{End} (M_3)$. Hence $\mathcal{A}_{[\mathbf{a}]}$ is monoidally Morita-Takeuchi equivalent to ${\mathcal B}(V_3)\#\Bbbk^{{\mathbb S}_3}$, by Theorem \ref{thm:cocycle}. \end{proof}
\subsection{Hopf subalgebras and integrals of $\mathcal{A}_{[\mathbf{a}]}$}
\
We collect some information about $\mathcal{A}_{[\mathbf{a}]}$. Let $$\chi=\sum_{g\in{\mathbb S}_3}\sgn(g)\delta_g, \quad y = \sum_{(ij)\in\mathcal{O}_2^3}\xij{ij}.$$ It is easy to see that $\chi$ is a group-like element and that $y\in \mathcal{P}_{1,\chi}(\mathcal{A}_{[\mathbf{a}]})$.
\begin{prop}\label{lema:the unique one dimensional submodule} Let $\mathbf{a}\in\gA_3$. Then \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item\label{item:lema20-a} $G(\mathcal{A}_{[\mathbf{a}]})=\{1, \chi\}$.
\smallbreak \item\label{item:lema20-b} $\mathcal{P}_{1,\chi}(\mathcal{A}_{[\mathbf{a}]})=\langle 1 - \chi, y\rangle$.
\smallbreak \item\label{item:lema20-d} $\Bbbk\langle\chi, y\rangle$ is isomorphic to the 4-dimensional Sweedler Hopf algebra.
\smallbreak \item\label{item:lema20-i} The Hopf subalgebras of $\mathcal{A}_{[\mathbf{a}]}$ are $\Bbbk^{{\mathbb S}_3}$, $\Bbbk\langle \chi\rangle$ and $\Bbbk\langle\chi, y\rangle$.
\smallbreak \item\label{item:lema20-c} $\mathcal{S}^2(a)=\chi a\chi^{-1}$ for all $a\in\mathcal{A}_{[\mathbf{a}]}$.
\smallbreak \item\label{item:lema20-h} The space of left integrals is $\langle m_{\textsf{top}}\delta_e\rangle$; $\mathcal{A}_{[\mathbf{a}]}$ is unimodular.
\smallbreak \item\label{item:lema20-g} $(\mathcal{A}_{[\mathbf{a}]})^*$ is unimodular.
\smallbreak \item\label{item:lema20-j} $\mathcal{A}_{[\mathbf{a}]}$ is not a quasitriangular Hopf algebra. \end{enumerate} \end{prop}
\begin{proof}
We know that the coradical $(\mathcal{A}_{[\mathbf{a}]})_0$ of $\mathcal{A}_{[\mathbf{a}]}$ is isomorphic to $\Bbbk^{{\mathbb S}_3}$ by \cite{AV}. Since $G(\mathcal{A}_{[\mathbf{a}]}) \subset(\mathcal{A}_{[\mathbf{a}]})_0$, \eqref{item:lema20-a} follows.
\eqref{item:lema20-b} Recall that $V_3 = M((12),\sgn)\in{}^{\Bbbk^{{\mathbb S}_3}}_{\Bbbk^{{\mathbb S}_3}}\mathcal{YD}$, see Subsection \ref{subsect:nichols-gral-sn}. Then $\mathcal{P}_{1,\chi}(\mathcal{A}_{[\mathbf{a}]}) / \langle 1 - \chi\rangle$ is isomorphic to the isotypic component of the comodule $V_3$ of type $\chi$.
That is, if $z=\sum_{(ij)\in\mathcal{O}_2^3}\lambda_{(ij)} \xij{ij} \in (V_3)_{\chi}$, then
$$
\delta (z) = \sum_{h\in G, (ij)\in\mathcal{O}_2^3}\sgn(h) \lambda_{(ij)}\delta_{h}{\otimes} x_{h^{-1}(ij)h} = \chi\otimes z.
$$
Evaluating at $g\otimes \id$ for any $g\in {\mathbb S}_3$, we see that $\lambda_{(ij)}=\lambda_{(12)}$ for all $(ij)\in\mathcal{O}_2^3$. Then $z=\lambda_{(12)}y$. The proof of \eqref{item:lema20-d} is now evident.
\eqref{item:lema20-i} Let $A$ be a Hopf subalgebra of $\mathcal{A}_{[\mathbf{a}]}$. Then $A_{0} = A\cap(\mathcal{A}_{[\mathbf{a}]})_0 \subseteq \Bbbk^{{\mathbb S}_3}$ by \cite[Lemma 5.2.12]{mongomeri}. Hence $A_0$ is either $\Bbbk\langle \chi\rangle$ or else $\Bbbk^{{\mathbb S}_3}$. If $A_0=\Bbbk\langle \chi\rangle$, then $A$ is a pointed Hopf algebra with group ${\mathbb Z}/2$. Hence $A$ is either $\Bbbk\langle \chi\rangle$ or else $\Bbbk\langle\chi, y\rangle$ by \eqref{item:lema20-b} and \cite{N} or \cite{CD}\footnote{The classification of all finite-dimensional pointed Hopf algebras with group ${\mathbb Z}/2$ also follows easily by performing the Lifting method \cite{AS-cambr}.}. If $A_0 = \Bbbk^{{\mathbb S}_3}$, then $A$ is either $\Bbbk^{{\mathbb S}_3}$ or else $A = \mathcal{A}_{[\mathbf{a}]}$ by \cite{AV}.
To prove \eqref{item:lema20-c}, just note that $\chi\xij{ij}\chi^{-1}=-\xij{ij}$.
\eqref{item:lema20-h} follows from Subsections \ref{subsec: generic case} and \ref{subsec: non generic case}. Let $\Lambda$ be a non-zero left integral of $\mathcal{A}_{[\mathbf{a}]}$. By Lemma \ref{prop:dim-uno}, the distinguished group-like element of $(\mathcal{A}_{[\mathbf{a}]})^*$ is $\zeta_h$ for some $h\in{\mathbb S}_3^{\mathbf{a}}$, hence $\Lambda\delta_h=\zeta_h(\delta_h)\Lambda=\Lambda$. Let us consider $\mathcal{A}_{[\mathbf{a}]}$ as a left $\Bbbk^{{\mathbb S}_3}$-module via the left adjoint action, see page \pageref{item:epimorphism}. Let $\Lambda_g\in(\mathcal{A}_{[\mathbf{a}]})[g]$ be such that $\Lambda=\sum_{g\in{\mathbb S}_3}\Lambda_g$. Then $\Lambda=\delta_e\Lambda=\sum_{s,t\in{\mathbb S}_3}\ad\delta_s(\Lambda_{t})\delta_{s^{-1}}\delta_h=\Lambda_{h^{-1}}\delta_h$. Since $M_h\simeq\mathcal{A}_{[\mathbf{a}]}\delta_h$, we can use the lemmata of Section \ref{sect:modules} to compute $\Lambda$.
If $\mathbf{a}$ is generic, then $h=e$ by Theorem \ref{thm:simples in the generic case}. Since $\xij{ij}\Lambda=0$ for all $(ij)\in\mathcal{O}_2^3$, $\Lambda=m_{\textsf{top}}\delta_e$ by Lemma \ref{le:submodules Me in the generic case}.
If $\mathbf{a}$ is sub-generic, say $\aij{12}\neq\aij{13}=\aij{23}$, then either $\Lambda=\Lambda_e\delta_e$ or $\Lambda=\Lambda_{(12)}\delta_{(12)}$ by Theorem \ref{thm:simples in the non generic case}. Since $\xij{ij}\Lambda=0$ for all $(ij)\in\mathcal{O}_2^3$, $\Lambda=m_{\textsf{top}}\delta_e$ by Lemma \ref{le:submodules Me in the non generic case} and Lemma \ref{le:submodules M12 in the non generic case}.
\eqref{item:lema20-g} By \eqref{item:lema20-c}, $\mathcal{S}^4= \id$. By Radford's formula for the antipode and \eqref{item:lema20-h}, the distinguished group-like element of $\mathcal{A}_{[\mathbf{a}]}$ is central, hence trivial. Therefore, $(\mathcal{A}_{[\mathbf{a}]})^*$ is unimodular.
\eqref{item:lema20-j} If there exists $R\in\mathcal{A}_{[\mathbf{a}]}{\otimes}\mathcal{A}_{[\mathbf{a}]}$ such that $(\mathcal{A}_{[\mathbf{a}]}, R)$ is a quasitriangular Hopf algebra, then $(\mathcal{A}_{[\mathbf{a}]}, R)$ has a unique minimal subquasitriangular Hopf algebra $(A_R, R)$ by \cite{radford}. Using \eqref{item:lema20-i}, we shall show that no such Hopf subalgebra exists, and therefore $\mathcal{A}_{[\mathbf{a}]}$ is not a quasitriangular Hopf algebra.
\smallbreak
By \cite[Prop. 2, Thm. 1]{radford} we know that there exist Hopf subalgebras $H$ and $B$ of $\mathcal{A}_{[\mathbf{a}]}$ such that $A_R=HB$ and an isomorphism of Hopf algebras $H^{*\cop}\rightarrow B$. Then $A_R\neq\mathcal{A}_{[\mathbf{a}]}$. In fact, let $M(d,\Bbbk)$ denote the matrix algebra over $\Bbbk$ of dimension $d^2$. Then the coradical of $(\mathcal{A}_{[\mathbf{a}]})^{*}$ is isomorphic to \begin{itemize}
\item $\Bbbk^6$ if $\mathbf{a}=(0,0,0)$.
\item $\Bbbk\oplus M(5,\Bbbk)^*$ if $\mathbf{a}$ is generic by Theorem \ref{thm:simples in the generic case}.
\item $\Bbbk^2\oplus M(4,\Bbbk)^*$ if $\mathbf{a}$ is sub-generic by Theorem \ref{thm:simples in the non generic case}. \end{itemize} Since $(\mathcal{A}_{[\mathbf{a}]})_0\simeq\Bbbk^{{\mathbb S}_3}$, $\mathcal{A}_{[\mathbf{a}]}$ is not isomorphic to $(\mathcal{A}_{[\mathbf{a}]})^{*\cop}$ for all $\mathbf{a}\in\gA_3$. Clearly, $A_R$ cannot be $\Bbbk^{{\mathbb S}_3}$. Since $\mathcal{A}_{[\mathbf{a}]}$ is not cocommutative, $R$ cannot be $1\ot1$. The quasitriangular structures on $\Bbbk\langle\chi\rangle$ and $\Bbbk\langle\chi, y\rangle$ are well known, see for example \cite{radford}. Then it remains the case $A_R\subseteq\Bbbk\langle\chi, y\rangle$ with $R=R_0+R_{\alpha}$ where $R_0=\frac{1}{2}(1\ot1+1{\otimes}\chi+\chi\ot1-\chi{\otimes}\chi)$ and $R_{\alpha}=\frac{\alpha}{2}(y{\otimes} y+y{\otimes}\chi y+\chi y{\otimes}\chi y-\chi y{\otimes} y)$ for some $\alpha\in\Bbbk$. Since $\Delta(\delta_g)^{\cop}R=R\Delta(\delta_g)$ for all $g\in{\mathbb S}_3$, then \begin{align*} \Delta(\delta_g)^{\cop}R_0&=R_0\Delta(\delta_g)=\Delta(\delta_g)R_0\quad\mbox{in $\Bbbk^{{\mathbb S}_3}$;} \end{align*} but this is not possible because $R_0^2=1\ot1$ and $\Bbbk^{{\mathbb S}_3}$ is not cocommutative. \end{proof}
\end{document} | arXiv |
\begin{document}
\title{Conflict diagnostics for evidence synthesis in a multiple testing framework}
\author{Anne M. Presanis, David Ohlssen, Kai Cui, \\ Magdalena Rosinska,
Daniela De Angelis} \date{\today}
\maketitle
\begin{center}
\emph{Medical Research Council Biostatistics Unit, University of
Cambridge, U.K. \\ Novartis Pharmaceuticals Corporation, East
Hanover, NJ, U.S.A. \\ Department of Epidemiology, National
Institute of Public Health, \\ National Institute of Hygiene, Warsaw, Poland} \\
e-mail: [email protected]
\end{center}
\begin{abstract}\noindent Evidence synthesis models that combine multiple datasets of varying design, to estimate quantities that cannot be directly observed, require the formulation of complex probabilistic models that can be expressed as graphical models. An assessment of whether the different datasets synthesised contribute information that is consistent with each other, and in a Bayesian context, with the prior distribution, is a crucial component of the model criticism process. However, a systematic assessment of conflict suffers from the multiple testing problem, through testing for conflict at multiple locations in a model. We demonstrate the systematic use of conflict diagnostics, while accounting for the multiple hypothesis tests of no conflict at each location in the graphical model. The method is illustrated by a network meta-analysis to estimate treatment effects in smoking cessation programs and an evidence synthesis to estimate HIV prevalence in Poland.
\noindent{\it KEYWORDS: Conflict; evidence synthesis; graphical models; model criticism; multiple testing; network meta-analysis.} \end{abstract}
\section{Introduction \label{sec_intro}} Evidence synthesis refers to the use of complex statistical models that combine multiple, disparate and imperfect sources of evidence to estimate quantities on which direct information is unavailable or inadequate \citep[e.g.][]{AdesSutton2006,WeltonEtAl2012,DeAngelisEtAl2014}. Such evidence synthesis models are typically graphical models represented by a directed acyclic graph (DAG) $\mathcal{G}(\bs{V}, \bs{E})$, where $\bs{V}$ and $\bs{E}$ are sets of nodes and edges respectively, encoding conditional independence assumptions \citep{Lauritzen1996}. With increased computational power, models of the form of $\mathcal{G}(\bs{V}, \bs{E})$ have proliferated, requiring also the development of model criticism tools adapted to the challenges of evidence synthesis. In a Bayesian framework, any of the prior distribution, the assumed form of the likelihood and structural and functional assumptions may conflict with the observed data or with each other. To assess the consistency of each of these components, various mixed- or posterior-predictive checks have been proposed. In particular, the ``conflict p-value'' \citep{MarshallSpiegelhalter2007,GasemyrNatvig2009,PresanisEtAl2013,Gasemyr2015} is a diagnostic calculated by splitting $\mathcal{G}(\bs{V}, \bs{E})$ into two independent sub-graphs (``partitions'') at a particular ``separator'' node $\phi$, to measure the consistency of the information provided by each partition about the node (a ``node-split''). \citet{GasemyrNatvig2009} and \citet{PresanisEtAl2013} demonstrate how the conflict p-value may be evaluated in different contexts, including both one- and two-sided hypothesis tests, and \citet{Gasemyr2015} demonstrates the uniformity of the conflict p-value in a wide range of models.
The conflict p-value may be used in a targeted manner, searching for conflict at particular nodes in a DAG. However, in complex evidence syntheses, the location of potential conflict is often unclear. A systematic assessment of conflict throughout a DAG is then required to locate problem areas \citep[e.g.][]{KrahnEtAl2013}. Such systematic assessment, however, suffers from the multiple testing problem, either through testing for conflict at each node in $\mathcal{G}(\bs{V}, \bs{E})$ or through the separation of $\mathcal{G}(\bs{V}, \bs{E})$ into more than two partitions to simultaneously test for conflict between each pair of partitions. Here we account for these multiple tests by adopting the general hypothesis testing framework of \citet{HothornEtAl2008,BretzEtAl2011}, allowing for simultaneous multiple hypotheses in a parametric setting. They propose different possible tests to account for multiplicity: we concentrate here on maximum-T type tests.
In Section \ref{sec_egintro}, we define evidence synthesis before introducing the particular models that motivate our work on systematic conflict assessment: a network meta-analysis and a model for estimating HIV prevalence. Section \ref{sec_methods} describes the methods we use to test for conflict and account for the multiple tests we perform. We apply these methods to our examples in Section \ref{sec_applications} and end with a discussion in Section \ref{sec_discuss}.
\section{Motivating examples \label{sec_egintro}} Formally, our goal is to estimate $K$ \emph{basic} parameters $\bs{\theta} = (\theta_1, \ldots, \theta_K)$ given a collection of $N$ independent data sources $\bs{y} = (\bs{y}_1, \ldots, \bs{y}_N)$, where each $\bs{y}_i, i \in 1, \ldots, N$ may be a vector or array of data points. Each $\bs{y}_i$ provides information on a \emph{functional} parameter $\psi_i$ (or potentially a vector of functions $\bs{\psi}_i$). When $\psi_i = \theta_k$ is the identity function, the data $\bs{y}_i$ are said to \emph{directly} inform $\theta_k$. Otherwise, $\psi_i = \psi_i(\bs{\theta})$ is a function of multiple parameters in $\bs{\theta}$: the $\bs{y}_i$ therefore provide \emph{indirect} information on these parameters. Given the conditional independence of the datasets $\bs{y}_i$, the likelihood is $L(\bs{\theta}; \bs{y}) = \prod_{i = 1}^NL_i(\psi_i(\bs{\theta}); \bs{y}_i)$, where $L_i(\psi_i(\bs{\theta}); \bs{y}_i)$ is the likelihood contribution of $\bs{y}_i$ given the basic parameters $\bs{\theta}$. In a Bayesian context, for a prior distribution $p(\bs{\theta})$, the posterior distribution $p(\bs{\theta} \mid \bs{y}) \propto p(\bs{\theta})L(\bs{\theta}; \bs{y})$ summarises all information, direct and indirect, on $\bs{\theta}$. Let $\bs{\psi} = (\psi_1, \ldots, \psi_N)$ be the set of functional parameters informed by data and $\bs{\phi} = \{\bs{\theta}, \bs{\psi}\}$ be the set of all unknown quantities, whether basic or functional. In this setup, the DAG $\mathcal{G}(\bs{V},\bs{E})$ representing the evidence synthesis model has a set of nodes $\bs{V} = \{\bs{\phi}, \bs{y}\}$ representing either known or unknown quantities; and the directed edges $\bs{E}$ represent dependencies between nodes. Each `child' node is independent of its `siblings' conditional on their direct `parents'. The joint distribution of all nodes $\bs{V}$ is the product of the conditional distributions of each node given its direct parents. An example DAG of an evidence synthesis model is given in Figure \ref{fig_genSplit}(i). Circles denote unknown quantities: either basic parameters $\bs{\theta}$ that are `founder' nodes at the top of a DAG having a prior distribution (double circles); or functional parameters $\bs{\psi}$. Squares denote observed quantities, solid arrows represent stochastic distributional relationships, and dashed arrows represent deterministic functional relationships. This DAG could be extended to more complex hierarchical priors and models, where repetition over variables is represented by `plates', rounded rectangles around the repeated nodes, labelled by the range of repetition. In general, the set $\bs{V}$ may be larger than the set of basic and functional parameters, including also other intermediate nodes in the DAG, for example unit-level parameters in a hierarchical model. For brevity, from here on we will abbreviate any DAG to the notation $\mathcal{G}(\bs{\phi}, \bs{y})$.
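As a purely illustrative sketch of this setup, the following \texttt{R} code (with entirely hypothetical data and names) evaluates the unnormalised log-posterior of a toy synthesis in which one dataset directly informs a basic parameter, while a second dataset informs a functional parameter and therefore contributes only indirect information on the remaining basic parameter.
\begin{verbatim}
## Toy synthesis (hypothetical data): theta1 is directly informed by y1,
## while y2 informs the functional parameter psi = theta1 * theta2, so it
## provides only indirect information on theta2.
y1 <- c(events = 30, trials = 100)   # hypothetical data directly informing theta1
y2 <- c(events = 12, trials = 200)   # hypothetical data informing psi

log_posterior <- function(theta) {
  theta1 <- theta[1]; theta2 <- theta[2]
  if (any(theta <= 0 | theta >= 1)) return(-Inf)  # support of the uniform priors
  psi <- theta1 * theta2                          # functional parameter
  ## independent likelihood contributions multiply, so their logs add
  ll <- dbinom(y1["events"], y1["trials"], theta1, log = TRUE) +
        dbinom(y2["events"], y2["trials"], psi,    log = TRUE)
  lp <- dunif(theta1, log = TRUE) + dunif(theta2, log = TRUE)  # flat priors on (0,1)
  ll + lp
}

log_posterior(c(0.3, 0.2))   # evaluate at one (hypothetical) parameter value
\end{verbatim}
In this toy example $\theta_2$ is identified only through the functional parameter $\psi$, which is exactly the situation that makes conflict between direct and indirect evidence possible in larger syntheses.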
\subsection{Network meta-analysis}
Network meta-analysis (NMA) is a specific type of evidence synthesis \citep{Salanti2012}, that generalises meta-analysis from the synthesis of studies measuring a treatment effect (e.g. of treatment B versus treatment A in a randomised clinical trial), to the synthesis of data on more than two treatment arms. The studies included in the NMA may not all measure the same treatment effects, but each study provides data on at least two of the treatments. For example, considering a set of treatments $\{A,B,C,D\}$, the network of trials may consist of studies of different ``designs'', i.e. with different subsets of the treatments included in each trial \citep{JacksonEtAl2014}, such as $\{ABC,ABD,BD,CD\}$. As with meta-analysis, NMA models can be implemented in either a two-stage or single-stage approach, as described more comprehensively elsewhere \citep{Salanti2012,JacksonEtAl2014}. Here we concentrate on a single-stage approach, where the original data $Y_{di}^J$ for each treatment $J$ of study $i$ of design $d$ are available. A full likelihood model specifies $$ Y_{di}^{J} \sim f(p_{di}^{J} \mid w_{di}^{J}) $$ for some distribution $f(\cdot)$ and treatment outcome $p_{di}^{J}$ with associated information $w_{di}^{J}$. For example, if the data are numbers of events out of total numbers at risk of the event, then $w_{di}^{J}$ might be the denominator for treatment $J$. We might assume the data are realisations of a Binomial random variable, $Y_{di}^{J} \sim Bin(w_{di}^{J}, p_{di}^{J})$, where the proportion $p_{di}^{J}$ is a function of a study-specific baseline $\alpha_{di}$ representing a design/study-specific baseline treatment $B_d$ and a study-specific treatment contrast (log odds ratio) $\mu_{di}^{B_dJ}$, through a logistic model, $logit(p_{di}^J) = \alpha_{di} + \mu_{di}^{B_dJ}$. The intercept is $\alpha_{di} = logit(p_{di}^{B_d})$. To complete the model specification requires parameterisation of the treatment effects $\mu_{di}^{AJ}$. A common effect model, for a network-wide reference treatment $A$, is given by \begin{equation} \mu_{di}^{AJ} = \eta^{AJ} \label{eqn_consFE} \end{equation} for each $J \neq A$, i.e. assumes that all studies of all designs measure the same treatment effects. The $\eta^{AJ}$ are basic parameters, of which there are the number of treatments in the network minus 1, representing the relative effectiveness of treatment $J$ compared to the network baseline treatment $A$. All other contrasts $\eta^{JK}, J,K \neq A$ are functional parameters, defined by assuming a set of \emph{consistency} equations $\eta^{JK} = \eta^{AK} - \eta^{AJ}$ for each $J,K \neq A$. These equations define a transitivity property of the treatment effects. The extension to a random-effects model, still under the consistency assumption, implies \begin{equation} \mu_{di}^{AJ} = \eta^{AJ} + \beta_{di}^{AJ} \label{eqn_consRE} \end{equation} where usually the random effects $\beta_{di}^{AJ}$, reflecting between-study heterogeneity, are assumed normally distributed around $0$, with a covariance structure defined as a square matrix $\Sigma_{\beta}$ such that all entries on the leading diagonal are $\sigma_{\beta}^2$ and all remaining entries are $\sigma_{\beta}^2/2$ \citep{Salanti2012,JacksonEtAl2015}. Figure \ref{fig_nmaDAGs} of the Supplementary Material shows the DAG structure of both the common and random effects models for a full likelihood setting where the outcome is binomial. 
The set of \emph{basic} parameters is denoted $\bs{\eta}_b = (\eta^{AJ})_{J \neq A}$ and the corresponding set of \emph{functional} parameters is denoted $\bs{\eta}_f = (\eta^{JK} = \eta^{AK} - \eta^{AJ})_{J,K \neq A}$. Note that the common-effect model is a special case of the random-effects model. In the Bayesian paradigm, we specify prior distributions for the basic parameters $\bs{\eta}_b$, the (nuisance) study-specific baselines $\alpha_{di}$, and in the case of the random treatment effects model, the common standard deviation parameter $\sigma_{\beta}$ in terms of which the variance-covariance matrix $\Sigma_{\beta}$ is defined. Note that any change in parameterisation of the model, for example changing treatment labels, will affect the joint prior distribution, making invariance challenging or even impossible in a Bayesian setting.
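To make the consistency parameterisation concrete, the sketch below (in \texttt{R}, with a small hypothetical arm-level data layout rather than the smoking cessation data) evaluates the binomial log-likelihood of the common-effect model (\ref{eqn_consFE}); here the alphabetically first treatment in each study is taken as its baseline $B_d$, the basic parameters are the effects of each treatment relative to the network reference $A$, and all other contrasts are obtained from the consistency equations.
\begin{verbatim}
## Sketch of the common-effect consistency likelihood (hypothetical toy data,
## not the smoking cessation dataset). Each row is one study arm.
arms <- data.frame(
  study = c(1, 1, 2, 2, 3, 3),
  trt   = c("A", "C", "B", "D", "A", "B"),
  r     = c( 9, 23, 11, 12, 75, 79),      # events (hypothetical)
  n     = c(140, 140, 78, 85, 731, 702)   # denominators (hypothetical)
)

## alpha: one study-specific baseline per study (logit scale);
## eta: basic parameters, log odds ratios of B, C, D relative to A.
loglik_common <- function(alpha, eta, arms) {
  eta_A <- c(A = 0, eta)                   # effect of each treatment vs A
  base  <- tapply(arms$trt, arms$study, function(x) sort(x)[1])  # baseline B_d
  ## linear predictor: study baseline plus the consistency-equation contrast
  lp <- alpha[arms$study] +
        eta_A[arms$trt] - eta_A[base[as.character(arms$study)]]
  sum(dbinom(arms$r, arms$n, plogis(lp), log = TRUE))
}

loglik_common(alpha = c(-2.5, -1.9, -2.2),
              eta   = c(B = 0.5, C = 0.8, D = 1.1),
              arms  = arms)
\end{verbatim}
The random-effects model (\ref{eqn_consRE}) would replace the fixed contrast in the linear predictor by a study-specific draw centred on it, with covariance matrix $\Sigma_{\beta}$.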
\paragraph{A smoking cessation example} \citet{DiasEtAl2010}, amongst many others \citep{LuAdes2006,HigginsEtAl2012,JacksonEtAl2015}, considered an NMA of studies of smoking cessation. The network consists of 24 studies of 8 different designs, including 2 three-arm trials. Four smoking cessation counselling programs are compared (Figure \ref{fig_smkSpan}): A no intervention; B self-help; C individual counselling; D group counselling. The data (Supplementary Material Table \ref{tab_smkData}) are the number of individuals out of those participating who have successfully ceased to smoke at 6-12 months after enrollment. Here we fit the common- and random-effect models under a consistency assumption and diffuse priors: Normal$(0,10^2)$ on the log-odds scale for $\bs{\eta}_b$ and $\alpha_{di}$; and Uniform$(0,5)$ for $\sigma_{\beta}$. We find (Supplementary Material Table \ref{tab_nmaConsistent}) that the deviance information criterion ($DIC$, \citet{SpiegelhalterEtAl2002}) prefers the random-effect model, suggesting it is necessary to explain the heterogeneity in the network. The estimates of the treatment effects from the random-effect model are both somewhat different and more uncertain than those from the common-effect model, agreeing with estimates found by others, including \citet{DiasEtAl2010}. Moreover, the posterior expected deviance for the random-effect model, $\mathbb{E}_{\theta \mid y}(D) = 54$, is slightly larger than the number of observations (50), suggesting still some lack of fit to the data.
\paragraph{A single node-split model} This residual lack of fit and the general potential in NMA for variability between groups of direct and indirect information from multiple studies that is excess to between-study heterogeneity (``inconsistency'', \citet{LuAdes2006}) has motivated various approaches to the detection and resolution of inconsistency \citep{Lumley2002,LuAdes2006,DiasEtAl2010,HigginsEtAl2012,WhiteEtAl2012,JacksonEtAl2014}. \citet{DiasEtAl2010} apply the idea of node-splitting, based on \citet{MarshallSpiegelhalter2007}, to the NMA context, splitting a single mean treatment effect $\eta^{JK}$ in the random effects consistency model (\ref{eqn_consRE}). A DAG is partitioned into \emph{direct} evidence from studies directly comparing $J$ and $K$ versus \emph{indirect} evidence from all remaining studies. Specifically, for any study $i$ of design $d$ that directly compares $J$ and $K$, the study-specific treatment effect is expressed in terms of the direct treatment effect: $ \mu_{di}^{JK} = \eta^{JK}_{Dir} + \beta_{di}^{JK}; $ whereas the indirect version of the treatment effect is estimated from the remaining studies via the consistency equation: $ \eta^{JK}_{Ind} = \eta^{AK} - \eta^{AJ}. $ The posterior distribution of the contrast or inconsistency parameter $\delta^{JK} = \eta^{JK}_{Dir} - \eta^{JK}_{Ind}$ is then examined to check posterior support for the null hypothesis $\delta^{JK} = 0$.
\paragraph{Multiple node-splits} Although the single node-split approach in \citet{DiasEtAl2010} has been extended to automate the generation of different single node-splitting models for conflict assessment \citep{vanValkenhoefEtAl2016}, the simultaneous splitting of multiple nodes in a NMA has not yet been considered. In section \ref{sec_nmaSplit}, we use multiple splits to investigate conflict in the smoking cessation network beyond heterogeneity, accounting for the multiplicity.
\subsection{Generalised evidence synthesis \label{sec_ges}} As further illustration of systematic conflict detection, we consider an evidence synthesis approach to estimating HIV prevalence in Poland, among the exposure group of men who have sex with men (MSM) \citep{RosinskaEtAl2015}. The data aggregated to the national level are given in Supplementary Material Table \ref{tab_plResults}. There are three basic parameters to be estimated: the proportion of the male population who are MSM, $\rho$; the prevalence of HIV infection in the MSM group, $\pi$; and the proportion of those infected who are diagnosed, $\kappa$ (Figure \ref{fig_plDAGs}(a)).
\paragraph{Likelihood} The size of the male population of Poland, $N = 15{,}749{,}944$, is considered fixed. The remaining 5 data points $y_1,\ldots,y_5$ directly inform, respectively: $\rho$; prevalence of diagnosed infection $\pi\kappa$; prevalence of undiagnosed infection $\pi(1-\kappa)$; and lower ($D_L$) and upper ($D_U$) bounds for the number of diagnosed infections $D = N\rho\pi\kappa$ (Figure \ref{fig_plDAGs}(a), Supplementary Material Table \ref{tab_plResults}). These data are modelled independently as either Binomial ($y_1,y_2,y_3$) or Poisson ($y_4,y_5$).
\paragraph{Priors} The number diagnosed $D$ is constrained \emph{a priori} to lie between the stochastic bounds $D_L$ and $D_U$, which in turn are given vague log-normal priors. Since $D$ is already defined as a function of the basic parameters, the constraint is implemented via introduction of an auxiliary Bernoulli datum of observed value $1$, with probability parameter given by a functional parameter $c = Pr(D_L \leq D \leq D_U)$ (Figure \ref{fig_plDAGs}(a)). The basic parameters $\rho,\pi$ and $\kappa$ are given independent uniform prior distributions on $[0,1]$.
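A minimal sketch of the resulting unnormalised log-posterior in \texttt{R} is given below; the data values are hypothetical placeholders standing in for those in Supplementary Material Table \ref{tab_plResults}, and the interval constraint on $D$ is written directly as an indicator, which plays the same role as the auxiliary Bernoulli datum used in the MCMC implementation.
\begin{verbatim}
## Sketch of the unnormalised log-posterior (hypothetical placeholder data).
N <- 15749944                              # size of the male population (fixed)
y <- list(y1 = c(r = 240, n = 12000),      # informs rho
          y2 = c(r =  20, n =   900),      # informs pi * kappa
          y3 = c(r =  25, n =   900),      # informs pi * (1 - kappa)
          y4 = 7000,                       # count informing the lower bound D_L
          y5 = 7200)                       # count informing the upper bound D_U

## `pi` here is the model parameter (it shadows R's constant inside the function)
log_post <- function(rho, pi, kappa, DL, DU, y) {
  if (any(c(rho, pi, kappa) <= 0 | c(rho, pi, kappa) >= 1) || DL <= 0 || DU <= 0)
    return(-Inf)                           # outside the prior support
  D  <- N * rho * pi * kappa               # functional: number diagnosed
  ll <- dbinom(y$y1["r"], y$y1["n"], rho,              log = TRUE) +
        dbinom(y$y2["r"], y$y2["n"], pi * kappa,       log = TRUE) +
        dbinom(y$y3["r"], y$y3["n"], pi * (1 - kappa), log = TRUE) +
        dpois(y$y4, DL, log = TRUE) + dpois(y$y5, DU, log = TRUE)
  ## vague log-normal priors on the bounds; the constraint D_L <= D <= D_U is
  ## the indicator below (log(FALSE) = -Inf), mirroring the Bernoulli datum
  lp <- dlnorm(DL, 0, 100, log = TRUE) + dlnorm(DU, 0, 100, log = TRUE) +
        log(D >= DL && D <= DU)
  ll + lp                                  # uniform priors on rho, pi, kappa are constant
}

log_post(rho = 0.02, pi = 0.05, kappa = 0.45, DL = 6800, DU = 7300, y = y)
\end{verbatim}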
\paragraph{Exploratory model criticism} This initial analysis reveals a lack of fit to some of the data (Supplementary Material Table \ref{tab_plResults}), with particularly high posterior mean deviances for the data informing $\rho$ and $\pi\kappa$. This lack of fit in turn may suggest the existence of conflict in the DAG \citep{SpiegelhalterEtAl2002}. In \citet{RosinskaEtAl2015}, conflict between evidence sources was not directly considered or formally measured, instead resolving the lack of fit by modelling potential biases in the data in a series of sensitivity analyses. By contrast, in Section \ref{sec_plSplit} we systematically assess the consistency of evidence coming from the prior model and from each likelihood contribution, by splitting the DAG at each functional parameter (Figure \ref{fig_plDAGs}(b)).
\section{Methods \label{sec_methods}}
\subsection{A single conflict p-value} Briefly, as in \citet{PresanisEtAl2013}, consider partitioning a DAG $\mathcal{G}(\bs{\phi},\bs{y})$ into two independent partitions, at a separator node $\phi$. The separator could either be a founder node, i.e. a basic parameter, or a node internal to the DAG, and is split into two copies $\phi_a$ and $\phi_b$, one in each partition (Figure \ref{fig_genSplit}(ii,iii)). Suppose that partition $\mathcal{G}(\bs{\phi}_a, \bs{y}_a)$ contains the data vector $\bs{y}_a$ and provides inference resulting in a posterior distribution $p(\phi_{a} \mid \bs{y}_a)$, and that similarly partition $\mathcal{G}(\bs{\phi}_b, \bs{y}_b)$ results in $p(\phi_{b} \mid \bs{y}_b)$. The aim is to assess the null hypothesis that $\phi_{a} = \phi_{b}$. For $\phi$ taking discrete values, we can directly evaluate $p(\phi_{a} = \phi_{b} \mid \bs{y}_a, \bs{y}_b)$. If the support of $\phi$ is continuous, we consider the posterior probability of $\delta = h(\phi_{a}) - h(\phi_{b})$, where $h(\cdot)$ is a function that transforms $\phi$ to a scale for which a uniform prior is appropriate. The two-sided ``conflict p-value'' is defined as $ c = 2 \times \min\left\{\textrm{Pr}\{p_{\delta}(\delta \mid \bs{y}_a, \bs{y}_b) < p_{\delta}(0 \mid \bs{y}_a, \bs{y}_b)\}, 1 - \textrm{Pr}\{p_{\delta}(\delta \mid \bs{y}_a, \bs{y}_b) < p_{\delta}(0 \mid \bs{y}_a, \bs{y}_b)\} \right\} $, where $p_{\delta}$ is the posterior density of the difference $\delta$, so that the smaller $c$ is, the greater the conflict.
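For MCMC output, the two-sided conflict p-value can be approximated from posterior samples of the difference $\delta$; the short \texttt{R} sketch below does this with a kernel density estimate, using simulated draws as stand-ins for real MCMC samples.
\begin{verbatim}
## Sketch: two-sided conflict p-value from samples of delta = h(phi_a) - h(phi_b).
set.seed(1)
delta <- rnorm(10000, mean = 0.8, sd = 0.5)   # stand-in posterior samples

conflict_p <- function(delta) {
  dens <- density(delta)                          # kernel density estimate
  f    <- approxfun(dens$x, dens$y, yleft = 0, yright = 0)
  pr   <- mean(f(delta) < f(0))                   # Pr{p_delta(delta) < p_delta(0)}
  2 * min(pr, 1 - pr)                             # two-sided conflict p-value
}
conflict_p(delta)
\end{verbatim}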
\subsection{Defining multiple hypothesis tests of conflict} Generalising now to multiple tests of conflict, suppose that $\mathcal{G}(\bs{\phi}, \bs{y})$ is partitioned into $Q$ independent sub-graphs, $\mathcal{G}_1(\bs{\phi}_1, \bs{y}_1), \ldots, \mathcal{G}_Q(\bs{\phi}_Q, \bs{y}_Q)$, where each disjoint subset of the data $\bs{y}_q, q \in 1, \ldots, Q$ is chosen to identify part of the basic parameter space $\bs{\theta}_q = (\theta_{q1}, \ldots, \theta_{qb_q})$, where $b_q$ is the number of basic parameters in partition $q$. Note that $\bs{\theta}_q \subset \bs{\phi}_q$ for each $q \in 1, \ldots, Q$, whereas the complementary subset $\bs{\phi}_q \setminus \bs{\theta}_q$ consists of functional and other non-basic parameters. To test the consistency of information provided by each partition about a set of $J$ separator nodes $(\phi_1^{(s)}, \ldots, \phi_J^{(s)}) \subseteq \bs{\phi}$ from the original model, a set of contrasts $ \bs{\delta}_j = (\delta_{j1}, \ldots, \delta_{jC_j}) $ is formed for each $j \in 1, \ldots, J$, one contrast per pair of partitions in which $\phi_j$ appears. A maximum of ${Q \choose 2}$ contrasts are possible for each separator, i.e. $C_j \leq {Q \choose 2}$. Each contrast $\delta_{jc}$ is defined as $$ \delta_{jc} = h_{j}(\phi_{jq_A} \mid \bs{y_A}) - h_{j}(\phi_{jq_B} \mid \bs{y_B}) $$ for the pair of partitions $c = \{q_A,q_B\}$ and node-split
copies $\{\phi_{jq_A}, \phi_{jq_B}\}$. The functions $h_{j}(\cdot)$ transform the separator nodes $\{\phi_{jq_A}, \phi_{jq_B}\}$ to a scale on which a uniform (Jeffreys') prior is appropriate, if either copy is a founder node in its partition.
Denote the separator nodes in partition $q$ by $\bs{\phi_q^{(s)}} = \{\phi_{jq}: j = 1, \ldots, m_q\}$, for $q = 1, \ldots, Q$, where $m_q \leq J$ is the number of separator nodes in partition $q$. Writing these nodes as a stacked vector $\bs{\phi_S} = (\bs{\phi_1^{(s)}}, \ldots, \bs{\phi_Q^{(s)}}) = (\phi_{11}, \ldots, \phi_{m_11}, \phi_{12}, \ldots, \phi_{m_22}, \ldots, \phi_{1Q}, \ldots, \phi_{m_QQ})^T$, and the transformed version as $\bs{\phi_H} = \bs{h}(\bs{\phi_S})$, the total set of contrasts is $$ \bs{\Delta} = (\bs{\delta_1}, \ldots, \bs{\delta_J})^T = \bs{C_{\Delta}}^T\bs{\phi_H} $$ for an appropriate contrast matrix $\bs{C_{\Delta}}^T$ with entries in $\{1, -1, 0\}$. Note that not every separator node necessarily appears in every partition, so although $\bs{\phi_H}$ has maximum length $J \times Q$, in practice its length is $m = \sum_{q=1}^Q m_q \leq J \times Q$. The contrast matrix $\bs{C_{\Delta}}^T$ therefore has dimension $p \times m$, so that it maps from the space of the $m$ separator nodes (including node-split copies) to that of the $p = \sum_{j=1}^J C_j$ contrasts. A test for consistency of the information in each partition may be expressed as a test of the null hypothesis that \begin{equation} H_0: \bs{\Delta} = \bs{C_{\Delta}}^T\bs{\phi_H} = \bs{0} \label{eqn_nullhyp_bayes} \end{equation}
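As a small illustration (hypothetical numbers), the \texttt{R} sketch below builds $\bs{C_{\Delta}}^T$ for a single separator node split across three partitions, giving the three pairwise contrasts, and applies it to a vector of transformed separator copies $\bs{\phi_H}$.
\begin{verbatim}
## Contrast matrix for one separator phi appearing in three partitions
## (copies phi_1, phi_2, phi_3): the rows give phi_1 - phi_2, phi_1 - phi_3
## and phi_2 - phi_3.
C_Delta_t <- rbind(c(1, -1,  0),
                   c(1,  0, -1),
                   c(0,  1, -1))
colnames(C_Delta_t) <- c("phi_1", "phi_2", "phi_3")

phi_H <- c(phi_1 = 0.20, phi_2 = 0.35, phi_3 = 0.22)  # hypothetical h(phi) values
Delta <- drop(C_Delta_t %*% phi_H)                    # the contrasts to test
Delta
\end{verbatim}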
\subsection{Asymptotic theory}
Using standard asymptotic theory \citep[][see also derivation in Supplementary Material Appendix \ref{sec_AppBasymp}]{BernardoSmith1994}, it can be shown that if the joint posterior distribution of all parameters $\bs{\phi}$
in all partitions is asymptotically multivariate normal (i.e. if the
prior is flat enough relative to the likelihood), and if $\frac{\partial\bs{\Delta(\phi)}}{\partial\bs{\phi}} = \bs{C_{\Delta}}^T$ is non-singular with continuous entries, then the posterior mean of $\bs{\Delta}$ is $\overline{\bs{\Delta}} = \bs{C_{\Delta}}^T \overline{\bs{\phi_H}} \overset{a}{\approx} \bs{C_{\Delta}}^T \hat{\bs{\phi}}_H$ and the posterior variance-covariance matrix of $\bs{\Delta}$ is $\bs{S_{\Delta}} \overset{a}{\approx} \bs{C_{\Delta}}^T \bs{V_H} \bs{C_{\Delta}}$, where: $\bs{\hat{\phi}}_H$ is the maximum likelihood estimate of $\bs{\phi}_H$; the matrix $\bs{V_H} =
\bs{J_h}(\bs{\hat{\phi}_S})^T \bs{V_S} \bs{J_h}(\bs{\hat{\phi}_S})$;
$\bs{J_h}(\bs{\hat{\phi}_S})$ is the Jacobian of the transformation
$\bs{h}(\bs{\phi_S})$; and $\bs{V_S}$ is a blocked diagonal matrix consisting of the inverse observed information matrices for the separator nodes in each partition along the diagonal. The posterior summaries $\overline{\bs{\Delta}}$ and $\bs{S_{\Delta}}$, i.e. the Bayes' estimator under a mean-squared error Bayes' risk function and corresponding variance-covariance matrix, may therefore be used under the general simultaneous inference framework of \citet{HothornEtAl2008,BretzEtAl2011} to construct a multiplicity-adjusted test that $\bs{\Delta} = \bs{0}$.
\subsection{Simultaneous hypothesis testing}
Given the estimator $\overline{\bs{\Delta}}$ and corresponding variance-covariance matrix $\bs{S_{\Delta}}$, define a vector of test statistics $\bs{T}_n = \bs{D}_n^{-1/2} (\overline{\bs{\Delta}} - \bs{\Delta})$, where $n$ is the dimension of the data $\bs{y}$ and $\bs{D}_n = diag(\bs{S_{\Delta}})$. Then it can be shown \citep{HothornEtAl2008,BretzEtAl2011} that $\bs{T}_n$ tends in distribution to a multivariate normal distribution, $ \bs{T}_n \asym N_p(\bs{0}, \bs{R}) \label{eqn_T_n_refdist} $, where $\bs{R} := \bs{D}_n^{-1/2} \bs{S_{\Delta}} \bs{D}_n^{-1/2} \in \mathbb{R}^{p,p}$ is the posterior correlation matrix for the vector (length $p$) of contrasts $\bs{\Delta}$. Under the null hypothesis (\ref{eqn_nullhyp_bayes}), $\bs{T}_n = \bs{D}_n^{-1/2} \overline{\bs{\Delta}} \asym N_p(\bs{0}, \bs{R})$, and hence, assuming $\bs{S_{\Delta}}$ is fixed and known, the authors show that a global $\chi^2$-test of conflict can be formulated: $$ X^2 = \bs{T}_n^T\bs{R}^+\bs{T}_n \tendist \chi^2(Rank(\bs{R})) $$
where the superscript $^+$ denotes the Moore-Penrose inverse of the corresponding matrix and $Rank(\bs{R})$ is the degrees of freedom. Importantly, it is also possible to construct multiply-adjusted local (individual) conflict tests, based on the $p$ $z$-scores corresponding to $\bs{T}_n$ and the null distribution of the maximum of these, $Z_{max}$ \citep{HothornEtAl2008,BretzEtAl2011}. This latter null distribution is obtained by integrating the limiting $p$-dimensional multivariate normal distribution over the hypercube $[-z,z]^p$ to obtain the cumulative distribution function $\mathbb{P}(Z_{max} \leq z)$. The individual conflict p-values are then calculated as $\mathbb{P}(|z_k| < Z_{max}), k \in 1, \ldots, p$, with a corresponding global conflict p-value (an alternative to the $\chi^2$-test) given by $\mathbb{P}(|z_{max}| < Z_{max})$.
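The following \texttt{R} sketch illustrates these calculations for hypothetical values of $\overline{\bs{\Delta}}$ and $\bs{S_{\Delta}}$, using the \texttt{mvtnorm} package (assumed installed) to integrate the limiting multivariate normal distribution and \texttt{MASS::ginv} for the Moore-Penrose inverse; it is a sketch of the calculation rather than the code used for the analyses below.
\begin{verbatim}
## Max-T adjusted local and global conflict p-values (hypothetical inputs).
library(mvtnorm)   # provides pmvnorm()

Delta_bar <- c(0.9, -0.2, 1.4)                      # posterior means of the contrasts
S_Delta   <- matrix(c(0.25, 0.05, 0.02,
                      0.05, 0.30, 0.04,
                      0.02, 0.04, 0.40), nrow = 3)  # posterior covariance (hypothetical)

z <- Delta_bar / sqrt(diag(S_Delta))                # standardised statistics T_n
R <- cov2cor(S_Delta)                               # correlation matrix of the contrasts

## local p-values P(|z_k| < Z_max): integrate the MVN over the box [-|z_k|, |z_k|]^p
p_local <- sapply(abs(z), function(zk)
  1 - pmvnorm(lower = rep(-zk, length(z)), upper = rep(zk, length(z)), corr = R))
p_global <- min(p_local)                            # equals P(|z_max| < Z_max)

## global chi-squared (Wald) alternative, via the Moore-Penrose inverse of R
X2      <- drop(t(z) %*% MASS::ginv(R) %*% z)
p_chisq <- pchisq(X2, df = qr(R)$rank, lower.tail = FALSE)
c(p_local = p_local, p_global = p_global, p_chisq = p_chisq)
\end{verbatim}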
\section{Examples \label{sec_applications}}
We now illustrate the idea of systematic multiple node-splitting to assess conflict in our two motivating examples. All analyses were carried out in \texttt{OpenBUGS 3.2.2} \citep{LunnEtAl2009} and \texttt{R 3.2.3} \citep{Rproject2015}. We use the \texttt{R2OpenBUGS} package \citep{R2OpenBUGS2005} to run \texttt{OpenBUGS} from within \texttt{R} and the \texttt{multcomp} package \citep{BretzEtAl2011} to carry out the simultaneous local and global max-T tests.
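For reference, a sketch of how the \texttt{multcomp} machinery can be driven directly by posterior summaries of the contrasts is shown below: \texttt{parm()} wraps an estimate and its variance-covariance matrix so that \texttt{glht()} can compute single-step (max-T) adjusted p-values without refitting any model. The numbers are hypothetical and the snippet is indicative of the workflow, assuming the \texttt{parm()}/\texttt{glht()} interfaces of \texttt{multcomp}, rather than the exact code used for the analyses that follow.
\begin{verbatim}
## Driving multcomp from posterior summaries of the contrasts (hypothetical values).
library(multcomp)   # assumed installed; provides parm() and glht()

Delta_bar <- c(BC = 0.9, BD = -0.2, CD = 1.4)      # posterior mean contrasts
S_Delta   <- diag(c(0.25, 0.30, 0.40))             # posterior covariance (hypothetical)

K <- diag(length(Delta_bar))                       # test each contrast against zero
rownames(K) <- names(Delta_bar)                    # label the contrasts in the output

fit <- glht(parm(Delta_bar, S_Delta), linfct = K)  # estimate + covariance, no refit
summary(fit, test = adjusted("single-step"))       # max-T adjusted local p-values
summary(fit, test = Chisqtest())                   # global Wald-type test
\end{verbatim}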
\subsection{Network meta-analysis}
Consider first a NMA in general, and for simplicity, assume there are no multi-arm trials and a common-effect model (equation (\ref{eqn_consFE})) for the data. The basic parameters $\bs{\eta}_b$ form a spanning tree of the network of evidence (Figure \ref{fig_smkSpan}), i.e. a graph with no cycles, such that each node in the network can be reached from every other node, either directly or indirectly through other nodes \citep{vanValkenhoefEtAl2012}. Multiple possible partitionings of the evidence network exist, so a choice must be made (Figure \ref{fig_smkSpan}). Suppose the spanning tree $\bs{\eta}_b$ is identifiable by a set of evidence $\bs{Y}_b$ containing outcomes from all trials designed to directly estimate the treatment effects in $\bs{\eta}_b$. Then every treatment effect is identifiable from $\bs{Y}_b$, by definition of a spanning tree and the fact that each treatment effect represented by edges outside the spanning tree is a functional parameter in the set $\bs{\eta}_f$, equal to a linear combination of the basic parameters. The data $\bs{Y}_b$ therefore \emph{indirectly} inform the functional parameters $\bs{\eta}_f$, whereas the remaining data, $\bs{Y}_f = \bs{Y} \setminus \bs{Y}_b$ \emph{directly} inform $\bs{\eta}_f$. A comparison between the direct and indirect evidence on $\bs{\eta}_f$ is therefore possible, to assess conflict between the two types of evidence. The network is split into two partitions, $\{\bs{\eta}_f^{Dir},\bs{Y}_f\}$ (the ``direct evidence partition'', DE) and $\{\bs{\eta}_f^{Ind},\bs{Y}_b\}$ (the ``spanning tree partition'', ST) and the direct and indirect versions of the functional parameters compared: $ \bs{\Delta} = \bs{\eta}_f^{Dir} - \bs{\eta}_f^{Ind} . $ Depending on the studies that are in the DE partition, the basic parameters $\bs{\eta}_b$ may also be weakly identifiable in the DE partition, due to prior information. Since a NMA model may be formulated as a DAG, this Direct/Indirect partitioning is equivalent to a multi-node split in the DAG at the functional parameters (Supplementary Material Figure \ref{fig_nmaDAGsplit}).
Generalising now to more complex situations, if the direct data $\bs{Y}_f$ form a sub-network of evidence, the question arises of whether these data should be split into further partitions, by identifying a spanning tree for the sub-network. Then the vector $\bs{\Delta}$ of contrasts to test would involve comparisons between more than two partitions, e.g. for three partitions: $$ \bs{\Delta} = \left( \bs{\eta}_f^1 - \bs{\eta}_f^2, \bs{\eta}_f^1 - \bs{\eta}_f^3, \bs{\eta}_f^2 - \bs{\eta}_f^3 \right)^T $$
If we now consider a random-effects rather than a common-effect model for the between-study heterogeneity (equation (\ref{eqn_consRE})), a decision must be made on how to handle the variance components in $\Sigma_{\beta}$. One approach would be to split the variance components simultaneously with the means, so that $\bs{\Delta}$ also includes contrasts for the variances. Alternatively, if the variance components are not well identified by the evidence in a partition, a common variance component could be assumed. Such commonality could potentially allow for feedback between partitions, since they would not be fully independent \citep{MarshallSpiegelhalter2007,PresanisEtAl2013}.
Finally, for multi-arm trials, the key consideration is that multi-arm studies should have internal consistency, and hence their observations should not be split between partitions. A choice must therefore be made whether to initially include multi-arm data in the ST data $\bs{Y}_b$, in the DE data $\bs{Y}_f$, or in a third partition of their own. In the latter case, any study-specific treatment effect $\mu^{JK}_{di}$, where $d$ is a multi-arm design, could be compared at least with the ST partition, where $\eta^{JK}$ is definitely identified. Potentially, it could also be compared simultaneously with the DE partition, if the edge $JK$ is identifiable in the DE partition. The comparison can be made even if $JK$ is not identifiable, or only weakly identifiable from the prior, but if the prior is diffuse, then no conflict will be detected due to the uncertainty. Such a comparison is not therefore particularly meaningful, unless we are interested in prior-data conflict.
\paragraph{Smoking cessation example \label{sec_nmaSplit}} To illustrate concretely the above issues, we consider first the spanning tree $(AB,AC,AD)$ corresponding to the parameters $\bs{\eta_b} = \{\eta^{AB},\eta^{AC},\eta^{AD}\}$ for the smoking cessation example. Figures \ref{fig_smkSpan}(b-d) demonstrate different ways of splitting the evidence based on this spanning tree, depending on how we treat the evidence from multi-arm trials. In Figures \ref{fig_smkSpan}(b,c), we consider just two partitions, with the multi-arm evidence either left in the ST partition $\{\bs{\eta}_f^{Ind},\bs{Y}_b\}$ or included in the DE partition $\{\bs{\eta}_f^{Dir},\bs{Y}_f\}$, respectively. We compare the direct and indirect evidence on each of the edges or treatment comparisons $(BC,BD,CD)$. In Figure \ref{fig_smkSpan}(d), we consider a series of spanning trees ($(AB,AC,AD), (BC,BD)$ and $(CD)$), together with a final partition consisting of evidence from multi-arm trials, resulting in four partitions.
We also consider an alternative choice of spanning tree, $(AB,AC,BD)$, as in Figures \ref{fig_smkSpan}(e,f). In these two models, we again make a choice between including the multi-arm evidence in either the ST or DE partitions and compare the evidence in each partition on edges $(AD,BC,CD)$. In all cases, we assume random heterogeneity effects and make the choice to assume common variance components across the partitions, splitting only the means.
Table \ref{tab_nmaResults} gives posterior mean (sd) estimates of the treatment effects (log odds ratios) for edges outside the spanning tree, from each partition, where the subscript 1 denotes the ST partition and 2 denotes the DE partition for the two-partition models (b,c,e,f). For the four-partition model (d), 1-3 denote the sequential spanning tree partitions and 4 the multi-arm trial partition. Also given, for each edge outside the original spanning tree, are the posterior mean (sd) differences between partitions and both the local and global posterior probabilities of no difference, adjusted for the multiple tests and their correlation. First, note that the global test of no conflict varies by model, and hence by what partitions of evidence are compared with each other: the posterior probability of no conflict in model (b) is $94.7\%$, compared to only $23.4\%$ and $27.4\%$ for models (c) and (e). These latter two models appear to detect some mild evidence of conflict, despite the large uncertainty in many of the partition-specific treatment effect estimates, with several of the posterior standard deviations of the same order of magnitude as the corresponding posterior means, if not larger. The DIC is also slightly smaller for the two models (c) and (e) which detect potential conflict, compared to those that don't. This lack of invariance of the global test to the partitions employed suggests it is not enough to rely on a single node-splitting model to search for conflict in a DAG. Moreover, it motivates looking at local tests for conflict in different node-splitting models, to locate the specific items of evidence that may conflict with each other.
A closer look at the local posterior probabilities of no conflict for each edge outside the initial spanning tree reveals that the potential conflict detected by models (c) and (e) involves edges including treatment $D$ (posterior probabilities $17.8\%$ and $18.6\%$ for edges $BD$ and $CD$ in model (c), $12.4\%$ and $10.5\%$ for edges $AD$ and $CD$ in model (e)). Each of these four local tests involves a partition where the estimated treatment effect for the relevant edge is implausibly large ($>6$ on the log odds ratio scale, i.e. $>400$ on the odds ratio scale) and where the sample sizes of the studies involved are small (e.g. studies 7, 20, 23 and 24 in Supplementary Material Table \ref{tab_smkData}).
Unlike models (c), (e) and (f), where in both partitions, each sub-network spans all 4 treatments, in models (b) and (d), the spanning tree chosen, $(AB,AC,AD)$, is such that for each sub-network outside the spanning tree, not all the treatments are included (Figure \ref{fig_smkSpan}). This results in a lack of identifiability for the basic parameters $\bs{\eta_b}$ in partition 2 of model (b) and in partitions 2 and 3 of model (d) (Table \ref{tab_nmaResults}), where their estimates are dominated by their diffuse prior distribution (Normal$(0,10^2)$ on the log odds ratio scale). There is therefore no potential for detecting conflict about the basic parameters $\bs{\eta_b}$, only about the functional parameters $\bs{\eta_f}$.
The different results obtained from each of the five models are understandable, since each model partitions the evidence in a different way, and the detection of conflict relies on the conflicting evidence being in different rather than the same partitions. However, where the same evidence is in the same partition for different models --- for example, the evidence directly informing the $AC$ edge in models (c) and (d) --- approximately the same estimate is reached in each model, as expected ($0.81 (0.26)$ in model (c), $0.82 (0.28)$ in model (d), Table \ref{tab_nmaResults}).
\subsection{HIV prevalence evidence synthesis \label{sec_plSplit}}
Figure \ref{fig_plDAGs}(b) demonstrates the multiple node-splits we make to systematically assess conflict in the original DAG of Figure \ref{fig_plDAGs}(a), separating out the contributions of the prior model and each likelihood contribution. These node-splits result in 5 partitions, with 6 contrasts to test for equality to zero. Denoting the nodes in the ``prior'' partition (above the red arrows in Figure \ref{fig_plDAGs}(b)) by the subscript $p$ and the nodes in each ``likelihood'' partition (below the red arrows in Figure \ref{fig_plDAGs}(b)) by $d$, the vector of contrasts to test is then \begin{eqnarray*} \bs{\Delta} & = & (h(\rho_p) - h(\rho_d), h(\pi_p\kappa_p) - h([\pi\kappa]_d), h(\pi_p(1-\kappa_p)) - h([\pi(1-\kappa)]_d), \\ & & g(D_{L_p}) - g(D_{L_d}), g(D_{U_p}) - g(D_{U_d}), g(D_p) - g(D_d))^T \end{eqnarray*} where $h(\cdot)$ and $g(\cdot)$ denote the logit and log functions respectively. These contrasts are represented by the red dot-dashed arrows in Figure \ref{fig_plDAGs}(b). In the ``prior'' partition, the priors given to the basic parameters are those of the original model (Section \ref{sec_ges}). In each ``likelihood'' partition, the basic parameters are given Jeffreys' priors so that the posteriors represent only the likelihood. These priors are Beta$(\nicefrac{1}{2}, \nicefrac{1}{2})$ for the proportions and $p(D_{B_d}) \propto 1 / D_{B_d}^{1/2}$ for the lower and upper bounds ($B = L,U$) for $D$. $D_d$ is given a Uniform prior between $D_{L_d}$ and $D_{U_d}$.
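The mechanics of forming such contrasts from MCMC output are sketched below in \texttt{R} for two of the six contrasts; the draws are simulated stand-ins for the real posterior samples, and the resulting mean vector and covariance matrix are exactly the quantities that feed into the simultaneous tests of Section \ref{sec_methods}.
\begin{verbatim}
## Forming contrast samples from (stand-in) MCMC draws of the node-split copies.
set.seed(2)
n_iter <- 5000
draws <- data.frame(                      # simulated stand-ins for posterior draws
  rho_p = rbeta(n_iter, 3, 400),    rho_d = rbeta(n_iter, 5, 600),
  D_p   = rlnorm(n_iter, 9.0, 0.4), D_d   = rlnorm(n_iter, 8.6, 0.1)
)

## logit (h) for the proportions, log (g) for the counts
Delta_samp <- with(draws, cbind(rho = qlogis(rho_p) - qlogis(rho_d),
                                D   = log(D_p)      - log(D_d)))

Delta_bar <- colMeans(Delta_samp)         # posterior means of the contrasts
S_Delta   <- cov(Delta_samp)              # posterior covariance of the contrasts
Delta_bar; S_Delta                        # inputs to the max-T and Wald tests
\end{verbatim}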
Figure \ref{fig_plSatDiffs} shows the posterior distributions of the contrasts $\bs{\Delta}$, marking where 0 lies within each distribution, together with the corresponding unadjusted ($p_U$) and multiply-adjusted ($p_A$) individual conflict p-values testing for equality to $0$. A global $\chi^2$ (Wald) test gives a conflict p-value of $0.001$, suggesting conflict exists somewhere in the DAG. Examining the individual unadjusted (naive) conflict p-values would suggest prior-data conflict at the upper bound for the number diagnosed $D_U$ (posterior probability of zero difference is $p_U = 0.008$) and hence at the number diagnosed itself, $D$ ($p_U = 0.039$), as well as possibly at the proportion at risk, $\rho$ ($p_U = 0.078$). However, once the correlation between the individual tests has been taken into account, the posterior probabilities of no conflict increase for all contrasts, albeit the probabilities are still low for $D_U$ and $D$, at $p_A = 0.175$ and $p_A = 0.058$ respectively. Note that the posterior contrasts in Figure \ref{fig_plSatDiffs} are slightly non-normal, hence we interpret the adjusted posterior probabilities of no conflict as exploratory, rather than as absolute measures.
Examining more closely the posterior distributions of the ``prior'' and ``likelihood'' versions of the node $D$ (Supplementary Material Figure \ref{fig_plSatD}, upper panel), we can visualise the prior-data conflict better: the ``likelihood'' version lies very much in the lower tail of the ``prior'' version. This is in spite of -- or rather because of -- the flat Uniform priors of the prior model, which translate into a non-Uniform implied prior for the function $D_p = N\rho_p\pi_p\kappa_p$.
The ``saturated'' model splitting apart each component of evidence in the DAG allows us to assess prior-data conflict in this model, but not conflict between different combinations of the likelihood evidence, due to lack of identifiability: in each likelihood partition in Figure \ref{fig_plDAGs}(b), clearly only the parameter directly informed by the data, whether basic or functional, can be identified. To assess consistency of evidence between likelihood terms, we employ a cross-validatory ``leave-n-out'' approach, for $n = 1$ and $n = 2$, splitting in each case the relevant nodes \emph{directly} informed by the left-out data items. Note that other possibilities exist, such as splitting at the basic parameters, depending on which data are left out. Table \ref{tab_plResultsLnO} gives unadjusted ($p_U$) and various multiply-adjusted ($p_{AW},p_{AL},p_{AA}$) individual posterior probabilities of no difference between nodes split between partitions 1 (the ``left-out'' evidence) and 2 (the remaining evidence). These posterior probabilities highlight inconsistency in the network of evidence $\{y_1,y_2,y_4,y_5\}$, i.e. informing the three nodes $\rho, \pi\kappa$ and $D = N\rho\pi\kappa$. Splits at these three nodes demonstrate low posterior probabilities of no difference in the ``leave-1-out'' models (A), (B) and (E), and in the ``leave-2-out'' models (B), (C), and (J) in particular. There is no potential for the evidence $y_3$ on the prevalence of undiagnosed infection $\pi(1-\kappa)$ to conflict with any other evidence, since $\pi$ and $\kappa$ are not separately identifiable from the remaining evidence $\{y_1,y_2,y_4,y_5\}$ alone. Hence all of the posterior probabilities of no difference concerning the node $\pi(1-\kappa)$ are high.
The conflict in the $\{y_1,y_2,y_4,y_5\}$ network is well illustrated by the node-split model (J), where the count data on the lower and upper bounds for $D$ are ``left out'' in partition 1. Supplementary Material Figure \ref{fig_plSatD} (lower panel) shows the posterior distributions for each of $D_L, D_U$ and $D$ in both partitions. Since in partition 2 the data on the limits for $D$ have been excluded, the posterior distributions for the bounds (solid black and red lines) are flat and hugely variable. Despite this, the posterior distribution for $D_2$ is relatively tightly peaked, due to the indirect evidence on $D_2$ provided by the data informing $\rho_2$ and $\pi_2\kappa_2$. It is this indirect evidence that conflicts with the direct evidence informing $D_1$ via the data $\{y_4,y_5\}$ on the bounds for $D_1$.
\section{Discussion \label{sec_discuss}}
We have proposed here the systematic assessment of conflict in an evidence synthesis, in particular accounting for the multiple tests for consistency entailed, through the simultaneous inference framework proposed by \citet{HothornEtAl2008,BretzEtAl2011}. We have chosen max-T tests, which allow for multiplicity-adjusted local and global testing simultaneously.
Note that the use of this (typically classical) simultaneous inference framework relies on the asymptotic multivariate normality of the joint posterior distribution. In cases where the likelihood does not dominate the prior, resulting in a skewed or otherwise non-normal posterior, we treat the results of conflict analysis as exploratory, rather than absolute measures of conflict. If the posterior is skewed but still uni-modal, a global, implicitly multiply-adjusted, test for conflict can be formulated in terms of the Mahalanobis distance of each posterior sample from their mean, as we proposed in \citet{PresanisEtAl2013}. This is a multivariate equivalent of calculating the tail area probability for regions further away from the posterior mean than the point $\bs{0}$. However, the Mahalanobis-based test does not allow us to obtain local tests for conflict, nor does it apply in the case of a multi-modal posterior. In the latter case, kernel density estimation could be used to obtain the multivariate tail area probability, although such estimation is computationally challenging for large posterior dimension.
Although generalised evidence syntheses have mostly been carried out in a Bayesian framework, there are examples \citep[e.g.][]{CommengesHejblum2013} that are either frequentist or not fully Bayesian. In the NMA field, maximum likelihood and Bayesian methods are both common \citep[e.g.][]{WhiteEtAl2012,JacksonEtAl2014}. An advantage of the simultaneous inference framework \citep{HothornEtAl2008,BretzEtAl2011} is that, given any estimator $\bs{\overline{\Delta}}$ of a vector of differences and its corresponding variance-covariance matrix $\bs{S_{\Delta}}$, regardless of the method used to obtain the estimates, the global and local max-T tests can be formulated.
Conflict p-values can be seen as cross-validatory posterior predictive checks \citep{PresanisEtAl2013}. There is a large literature on various types of Bayesian predictive diagnostics, including prior-, posterior- and mixed-predictive checks \citep[e.g.][]{Box1980,GelmanEtAl1996,MarshallSpiegelhalter2007}. A key issue much discussed in this literature is the lack of uniformity of posterior predictive p-values under the null hypothesis \citep{Gelman2013}, with such p-values conservative due to the double use of data. Much work has therefore been devoted to either alternative p-values \citep[e.g.][]{BayarriBerger2000} or post-processing of p-values to calibrate them \citep[e.g.][]{SteinbakkStorvik2009}. \citet{Gelman2013} argues that the importance of uniformity depends on the context in which the model checks are conducted: in general non-uniformity is not an issue, but if the posterior predictive tests rely on parameters or imputed latent data, then care should be taken. Since conflict p-values are cross-validatory, the issue of conservatism and the double use of data does not apply. In fact, for a wide class of standard hierarchical models, \citet{Gasemyr2015} has demonstrated the uniformity of the conflict p-value.
As illustrated by both applications, the choice of different ways of partitioning the evidence in a DAG can lead to different conclusions over the existence of conflict. This is to be expected when considering the local conflict p-values, since conflicting evidence may need to be in different partitions in order to be detectable. This is analogous to the idea of ``masking'' in cross-validatory outlier detection, where outliers may not be detected if multiple outliers exist \citep{ChalonerBrant1988}. In the case of the global tests for conflict, the NMA example showed that these are also not invariant to the choice of partition. In the NMA literature, alternative methods accounting for inconsistency include models that introduce ``inconsistency parameters'' that absorb any variability due to conflict beyond between-study heterogeneity \citep{LuAdes2006,HigginsEtAl2012,JacksonEtAl2014}. \citet{HigginsEtAl2012,JacksonEtAl2014} have pointed out that the apparent algorithm that \citet{LuAdes2006} follow for identifying inconsistency parameters does not guarantee that all such parameters are identified, nor that the Lu-Ades model is invariant to the choice of baseline treatment. The authors further posit, and more recently have proved \citep{JacksonEtAl2015}, that their ``design-by-treatment interaction model'', which introduces an inconsistency parameter systematically for each non-baseline treatment within each design, contains each possible Lu-Ades model as a sub-model. In related ongoing work, we note that each Lu-Ades model corresponds to a particular choice of node-splitting model, one being a reparameterisation of the other. The lack of invariance of results of testing for inconsistency from one Lu-Ades model to another is therefore not surprising, since, as we illustrated here, different choices of node-splitting model correspond to different partitions of evidence being compared. The lack of invariance of a global test for conflict to the choice of node-splitting model, although unsurprising, is perhaps unsatisfactory: however, as we illustrated in this paper, this lack clearly emphasises the need for a more comprehensive and systematic assessment of conflict throughout a DAG, both at a local level and across different types of node-split model, than just a single global test can provide. We therefore recommend that although a global test may be an initial step in any conflict analysis, to be sure of detecting any potential conflict requires testing for conflict throughout a DAG. One strategy is to start from splitting every possible node in the DAG, as we did in the HIV example, before looking at more targeted leave-n-out approaches. The design-by-treatment interaction model provides a way of doing so and we are further investigating the relationship of the (fixed inconsistency effects) design-by-treatment interaction model to such a ``saturated'' node-splitting model.
Note that in the NMA example considered here, we have concentrated on a ``contrast-based'' as opposed to ``arm-based'' parameterisation \citep{HongEtAl2015,DiasAdes2015}. Also, we have considered the case where each study has a study-specific baseline treatment $B_d$ and the network as a whole has a baseline treatment $A$. However, alternative parameterisations could be considered, such as using a two-way linear predictor with main effects for both treatment and study, treating the counter-factual or missing treatment designs as missing data \citep{JonesEtAl2011,PiephoEtAl2012}. Although we have not yet explored alternative parameterisations, we posit that systematic node-splitting could be equally well applied.
As with any cross-validatory work, the systematic assessment of conflict at every node in a DAG can quickly become computationally burdensome as a model grows in dimension. An area for future research is the systematic analysis of conflict using efficient algorithms \citep{LunnEtAl2013,GoudieEtAl2015} in a Markov melding framework \citep{GoudieEtAl2016} which allows for an efficient modular approach to model building.
\begin{figure}\label{fig_genSplit}
\end{figure}
\begin{figure}
\caption{Smoking cessation evidence network, under (a) a consistency assumption; (b)-(f) inconsistency assumptions, where the evidence is partitioned in different ways. In (b), (c), (e) and (f), the direct evidence (dashed lines) is compared with the indirect evidence (solid lines) on each contrast where there is a dashed line. In (d), the evidence is separated into three spanning trees and a fourth partition for the multi-arm trial evidence. }
\label{fig_smkSpan}
\end{figure}
\begin{figure}
\caption{(a) DAG of initial model for synthesising Polish HIV
prevalence data. (b) DAG of multiple node-split model comparing
priors to each likelihood contribution. Note that the square
brackets are used in denoting the nodes in the likelihood
partition ($[\pi\delta]_d, [\pi(1-\delta)]_d$) to emphasise the
fact that these two nodes are independent parameters not
functionally related to each other. }
\label{fig_plDAGs}
\end{figure}
\begin{figure}
\caption{Posterior distributions of the contrasts $\bs{\Delta}$ for the HIV prevalence example. The red lines denote 0 difference, $p_U$ is the unadjusted and $p_A$ the multiply-adjusted individual conflict p-value respectively. }
\label{fig_plSatDiffs}
\end{figure}
\begin{table}[!p]
\caption{Multiply adjusted posterior mean (sd) estimates of conflict between partitions, for each model (b)-(f) respectively. In the two-partition models (b,c,e,f), partition 1 is the spanning tree (indirect) evidence partition and partition 2 is the direct data partition. In model (d), partitions 1-3 are the sequential spanning trees and partition 4 is the multi-arm study partition. \label{tab_nmaResults}}
\centering
\scriptsize
\begin{tabular}{lrrrrrrrrrr}
\toprule
\midrule
ST: & \multicolumn{6}{c}{\tiny{AB,AC,AD}} & \multicolumn{4}{c}{\tiny{AB,AC,BD}} \\
Model: & \multicolumn{2}{c}{(b)} & \multicolumn{2}{c}{(c)} & \multicolumn{2}{c}{(d)} & \multicolumn{2}{c}{(e)} & \multicolumn{2}{c}{(f)} \\
Posterior: & Mean & SD & Mean & SD & Mean & SD & Mean & SD & Mean & SD \\
\midrule
$AB_1$ & 0.472 & (0.489) & 0.338 & (0.534) & 0.329 & ( 0.568) & 0.259 & (0.429) & 0.334 & (0.566) \\
$AB_2$ &-0.415 & (5.276) & 0.319 & (0.983) &-0.230 & ( 5.849) & 6.261 & (3.251) & 1.513 & (1.041) \\
$AB_3$ & & & & &-0.044 & (10.009) & & & & \\
$AB_4$ & & & & & 0.456 & ( 1.247) & & & & \\
\midrule
$AC_1$ & 0.877 & (0.262) & 0.814 & (0.261) & 0.828 & ( 0.280) & 0.812 & (0.238) & 0.824 & (0.272) \\
$AC_2$ &-0.165 & (5.262) & 0.615 & (0.866) &-0.379 & ( 5.848) & 6.173 & (3.140) & 1.496 & (0.835) \\
$AC_3$ & & & & & 0.114 & ( 7.051) & & & & \\
$AC_4$ & & & & & 0.784 & ( 0.956) & & & & \\
\midrule
$AD_1$ & 1.010 & (0.598) & 9.337 & (4.999) & 9.508 & ( 5.330) & 0.908 & (0.794) & 1.748 & (1.526) \\
$AD_2$ & 0.262 & (5.266) & 0.679 & (0.871) & 0.871 & ( 5.859) & 12.712 & (6.225) & 3.102 & (1.690) \\
$AD_3$ & & & & & 0.319 & ( 7.044) & & & & \\
$AD_4$ & & & & & 0.439 & ( 0.956) & & & & \\
$\Delta_{AD_{1-2}}$ & & & & & & &-11.804 & (6.268) &-1.354 & (2.279) \\
$p_{AD_{1-2}}$ & & & & & & & 0.124 & & 0.806 & \\
\midrule
$BC_1$ & 0.405 & (0.527) & 0.476 & (0.590) & 0.499 & ( 0.633) & 0.553 & (0.461) & 0.490 & (0.629) \\
$BC_2$ & 0.251 & (0.808) & 0.296 & (0.577) &-0.149 & ( 1.015) & -0.087 & (0.951) &-0.017 & (0.693) \\
$BC_3$ & & & & & 0.158 & (12.270) & & & & \\
$BC_4$ & & & & & 0.329 & ( 0.957) & & & & \\
$\Delta_{BC_{1-2}}$ & 0.155 & (0.963) & 0.180 & (0.821) & 0.649 & ( 1.198) & 0.641 & (1.059) & 0.507 & (0.937) \\
$\Delta_{BC_{1-3}}$ & & & & & 0.341 & (12.290) & & & & \\
$\Delta_{BC_{1-4}}$ & & & & & 0.171 & ( 1.161) & & & & \\
$p_{BC_{1-2}}$ & 0.986 & & 0.971 & & 0.979 & & 0.807 & & 0.839 & \\
$p_{BC_{1-3}}$ & & & & & 1.000 & & & & & \\
$p_{BC_{1-4}}$ & & & & & 1.000 & & & & & \\
\midrule
$BD_1$ & 0.538 & (0.691) & 8.999 & (5.031) & 9.180 & ( 5.357) & 0.649 & (0.530) & 1.414 & (1.173) \\
$BD_2$ & 0.678 & (0.809) & 0.360 & (0.569) & 1.101 & ( 1.026) & 6.451 & (3.083) & 1.589 & (0.802) \\
$BD_3$ & & & & & 0.363 & (12.255) & & & & \\
$BD_4$ & & & & &-0.017 & ( 0.948) & & & & \\
$\Delta_{BD_{1-2}}$ &-0.140 & (1.067) & 8.639 & (5.069) & 8.079 & ( 5.444) & & & & \\
$\Delta_{BD_{1-3}}$ & & & & & 8.817 & (13.149) & & & & \\
$\Delta_{BD_{1-4}}$ & & & & & 9.196 & ( 5.430) & & & & \\
$p_{BD_{1-2}}$ & 0.991 & & 0.178 & & 0.491 & & & & & \\
$p_{BD_{1-3}}$ & & & & & 0.952 & & & & & \\
$p_{BD_{1-4}}$ & & & & & 0.355 & & & & & \\
\midrule
$CD_1$ & 0.133 & (0.594) & 8.523 & (5.011) & 8.680 & ( 5.337) & 0.095 & (0.778) & 0.924 & (1.553) \\
$CD_2$ & 0.427 & (0.680) & 0.063 & (0.460) & 1.250 & ( 1.438) & 6.539 & (3.198) & 1.606 & (1.081) \\
$CD_3$ & & & & & 0.204 & ( 0.771) & & & & \\
$CD_4$ & & & & &-0.345 & ( 0.714) & & & & \\
$\Delta_{CD_{1-2}}$ &-0.294 & (0.902) & 8.459 & (5.033) & 7.430 & ( 5.526) & -6.443 & (3.287) &-0.682 & (1.893) \\
$\Delta_{CD_{1-3}}$ & & & & & 8.476 & ( 5.385) & & & & \\
$\Delta_{CD_{1-4}}$ & & & & & 9.025 & ( 5.378) & & & & \\
$p_{CD_{1-2}}$ & 0.943 & & 0.186 & & 0.588 & & 0.105 & & 0.934 & \\
$p_{CD_{1-3}}$ & & & & & 0.430 & & & & & \\
$p_{CD_{1-4}}$ & & & & & 0.365 & & & & & \\
\midrule
Global p & 0.947 & & 0.234 & & 0.700 & & 0.274 & & 0.733 & \\
DIC &98.843 & &95.420 & &96.354 & & 95.745 & &98.351 & \\
\midrule
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!p]
\caption{Results from ``leave-n-out'' node-split models for the
Polish HIV data. $p_{U}$ denotes the unadjusted conflict p-value;
$p_{AW}$ is the p-value adjusted for the multiple tests carried
out \emph{within} each model (A)-(J) for the leave-2-out approach;
$p_{AL}$ is the p-value adjusted for the 23 tests carried
out in all models (A)-(J) for the leave-2-out approach; and
$p_{AA}$ is the p-value adjusted for 28 tests carried out in all
leave-1-out models (A)-(E) and all leave-2-out models (A)-(J). \label{tab_plResultsLnO}}
\centering
\begin{tabular}{ccccrrrr}
\toprule
\midrule
Model & Partition 1 & Partition 2 & Node split & $p_U$ & $p_{AW}$
& $p_{AL}$ & $p_{AA}$ \\
\midrule
\multicolumn{8}{c}{Leave-1-out} \\
\midrule
(A) & $y_1$ & $\{y_2,y_3,y_4,y_5\}$ & $\rho$ & $<0.0001$ &
& 0.0060 & 0.0311\\
(B) & $y_2$ & $\{y_1,y_3,y_4,y_5\}$ & $\pi\kappa$ & $<0.0001$
& & 0.0047 & 0.0246\\
(C) & $y_3$ & $\{y_1,y_2,y_4,y_5\}$ & $\pi(1-\kappa)$ & $0.6201$ &
& 0.9857 & 1.0000\\
(D) & $y_4$ & $\{y_1,y_2,y_3,y_5\}$ & $D_L$ & $0.1257$ &
& 0.6242 & 0.9934\\
(E) & $y_5$ & $\{y_1,y_2,y_3,y_4\}$ & $D_U$ & $<0.0001$
& & 0.5852 & 0.9890\\
\midrule
\multicolumn{8}{c}{Leave-2-out} \\
\midrule
(A) & $\{y_1,y_2\}$ & $\{y_3,y_4,y_5\}$ & $\rho$ &
0.6972 & 0.7480 & 1.0000 & 1.0000 \\
& & & $\pi\kappa$ &
0.2209 & 0.2230 & 0.9842 & 0.9937 \\
(B) & $\{y_1,y_3\}$ & $\{y_2,y_4,y_5\}$ & $\rho$ &
$<0.0001$ & 0.0023 & 0.0240 & 0.0294\\
& & & $\pi(1-\kappa)$ &
0.4906 & 0.7717 & 1.0000 & 1.0000 \\
(C) & $\{y_2,y_3\}$ & $\{y_1,y_4,y_5\}$ & $\pi\kappa$ &
$<0.0001$ & $<0.0010$ & $<0.0010$ & $<0.0010$\\
& & & $\pi(1-\kappa)$ & 0.8322 & 0.9000 & 1.0000 & 1.0000 \\
& & & $\pi$ & 0.9921 & 0.9490 & 1.0000 & 1.0000 \\
& & & $\kappa$ & 0.3329 & 0.6700 & 0.9998 & 1.0000 \\
\midrule
(D) & $\{y_1,y_4\}$ & $\{y_2,y_3,y_5\}$ & $\rho$ &
$<0.0001$ & 0.0779 & 0.5754 & 0.6499 \\
& & & $D_L$ &
0.0783 & 0.2851 & 0.9705 & 0.9866 \\
(E) & $\{y_1,y_5\}$ & $\{y_2,y_3,y_4\}$ & $\rho$ &
0.0745 & 0.1260 & 0.7614 & 0.8271 \\
& & & $D_U$ &
0.0026 & 0.0949 & 0.6543 & 0.7276 \\
\midrule
(F) & $\{y_2,y_4\}$ & $\{y_1,y_3,y_5\}$ & $\pi\kappa$ & 0.4682 & 0.9590 & 1.0000 & 1.0000 \\
& & & $D_L$ &
0.0869 & 0.3000 & 0.9764 & 0.9898 \\
(G) & $\{y_2,y_5\}$ & $\{y_1,y_3,y_4\}$ & $\pi\kappa$ & 0.4420 & 0.6690 & 1.0000 & 1.0000 \\
& & & $D_U$ &
0.0137 & 0.1970 & 0.9044 & 0.9434 \\
\midrule
(H) & $\{y_3,y_4\}$ & $\{y_1,y_2,y_5\}$ & $\pi(1-\kappa)$ &
0.1471 & 0.3330 & 0.9855 & 0.9944 \\
& & & $D_L$ &
0.1328 & 0.3280 & 0.9844& 0.9938 \\
(I) & $\{y_3,y_5\}$ & $\{y_1,y_2,y_4\}$ & $\pi(1-\kappa)$ & 0.5237 & 0.8100 & 1.0000 & 1.0000 \\
& & & $D_U$ &
$<0.0001$ & 0.2850 & 0.9702 & 0.9864 \\
\midrule
(J) & $\{y_4,y_5\}$ & $\{y_1,y_2,y_3\}$ & $D_L$ &
0.1958 & 0.5117 & 0.9933 & 0.9978 \\
& & & $D_U$ &
$<0.0001$ & 0.3963 & 0.9706 & 0.9866 \\
& & & $D$ &
$<0.0001$ & 0.0030 & 0.0213 & 0.0260 \\
\midrule
\bottomrule
\end{tabular}
\end{table}
\appendix
\section*{Supplementary Material}
\renewcommand{\thesubsection}{\Alph{subsection}} \renewcommand\thefigure{\Alph{subsection}.\arabic{figure}} \setcounter{figure}{0} \renewcommand\thetable{\Alph{subsection}.\arabic{table}} \setcounter{table}{0}
\subsection{Figures and Tables}
\begin{figure}
\caption{(a) DAG of NMA under assumptions of a common treatment effect $\eta^{JK}$ (no heterogeneity) and consistency $\eta^{JK} = \eta^{AK} - \eta^{AJ}$. (b) DAG of NMA under assumptions of random treatment effects, to account for heterogeneity, and consistency. }
\label{fig_nmaDAGs}
\end{figure}
\begin{table}
\caption{Smoking cessation data set}
\label{tab_smkData}
\centering
\begin{tabular}{rlrrrrrrrrrrrr}
\toprule
\midrule
Study & Design & $y_A$ & $n_A$ & $y_A/n_A$ & $y_B$ & $n_B$ & $y_B/n_B$ & $y_C$ & $n_C$ & $y_C/n_C$ & $y_D$ & $n_D$ & $y_D/n_D$ \\
\midrule
\midrule
1 & AB & 79 & 702 &0.113 & 77 & 694 &0.111 & . & . & . & . & . & . \\
2 & AB & 18 & 671 &0.027 & 21 & 535 &0.039 & . & . & . & . & . & . \\
3 & AB & 8 & 116 &0.069 & 19 & 149 &0.128 & . & . & . & . & . & . \\
\midrule
4 & AC & 75 & 731 &0.103 & . & . & . & 363 & 714 &0.508 & . & . & . \\
5 & AC & 2 & 106 &0.019 & . & . & . & 9 & 205 &0.044 & . & . & . \\
6 & AC & 58 & 549 &0.106 & . & . & . & 237 &1561 &0.152 & . & . & . \\
7 & AC & 0 & 33 &0.000 & . & . & . & 9 & 48 &0.188 & . & . & . \\
8 & AC & 3 & 100 &0.030 & . & . & . & 31 & 98 &0.316 & . & . & . \\
9 & AC & 1 & 31 &0.032 & . & . & . & 26 & 95 &0.274 & . & . & . \\
10 & AC & 6 & 39 &0.154 & . & . & . & 17 & 77 &0.221 & . & . & . \\
11 & AC & 64 & 642 &0.100 & . & . & . & 107 & 761 &0.141 & . & . & . \\
12 & AC & 5 & 62 &0.081 & . & . & . & 8 & 90 &0.089 & . & . & . \\
13 & AC & 20 & 234 &0.085 & . & . & . & 34 & 237 &0.143 & . & . & . \\
14 & AC & 95 &1107 &0.086 & . & . & . & 143 &1031 &0.139 & . & . & . \\
15 & AC & 15 & 187 &0.080 & . & . & . & 36 & 504 &0.071 & . & . & . \\
16 & AC & 78 & 584 &0.134 & . & . & . & 73 & 675 &0.108 & . & . & . \\
17 & AC & 69 &1177 &0.059 & . & . & . & 54 & 888 &0.061 & . & . & . \\
\midrule
18 & ACD & 9 & 140 &0.064 & . & . & . & 23 & 140 &0.164 & 10 & 138 &0.072 \\
19 & AD & 0 & 20 &0.000 & . & . & . & . & . & . & 9 & 20 &0.450 \\
\midrule
20 & BC & . & . & . & 20 & 49 &0.408 & 16 & 43 &0.372 & . & . & . \\
21 & BCD & . & . & . & 11 & 78 &0.141 & 12 & 85 &0.141 & 29 & 170 &0.171 \\
22 & BD & . & . & . & 7 & 66 &0.106 & . & . & . & 32 & 127 &0.252 \\
\midrule
23 & CD & . & . & . & . & . & . & 12 & 76 &0.158 & 20 & 74 &0.270 \\
24 & CD & . & . & . & . & . & . & 9 & 55 &0.164 & 3 & 26 &0.115 \\
\midrule
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Treatment effect posterior estimates (mean (sd)) for the common- and random-effect models respectively, with deviance summaries: posterior mean deviance $\mathbb{E}_{\theta \mid y}(D)$; deviance evaluated at posterior means $D(\mathbb{E}_{\theta \mid y}\theta)$; effective number of parameters $p_D$; and deviance information criterion $DIC$.}
\label{tab_nmaConsistent}
\centering
\begin{tabular}{lrrrr}
\toprule
\midrule
Model: & \multicolumn{2}{c}{Common-effect} & \multicolumn{2}{c}{Random-effect} \\
$\mu^{JK}$: & Posterior mean & Posterior sd & Posterior mean & Posterior sd \\
\midrule
$AB$ & 0.224 & (0.124) & 0.496 & (0.405) \\
$AC$ & 0.765 & (0.059) & 0.843 & (0.236) \\
$AD$ & 0.840 & (0.174) & 1.103 & (0.439) \\
$BC$ & 0.541 & (0.132) & 0.347 & (0.419) \\
$BD$ & 0.616 & (0.192) & 0.607 & (0.492) \\
$CD$ & 0.075 & (0.171) & 0.260 & (0.418) \\
\midrule
$\mathbb{E}_{\theta \mid y}(D)$ & 267 & & 54 & \\
$D(\mathbb{E}_{\theta \mid y}\theta)$ & 240 & & 10 & \\
$p_D$ & 27 & & 44 & \\
$DIC$ & 294 & & 98 & \\
\midrule
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Results from initial HIV model: observations; posterior mean (sd) estimates; posterior mean deviance $\mathbb{E}_{\theta \mid y}(D)$; deviance evaluated at posterior means $D(\mathbb{E}_{\theta \mid y}\theta)$; effective number of parameters $p_D$; and deviance information criterion $DIC$.}
\label{tab_plResults}
\centering
\begin{tabular}{crrrrrrrrr}
\toprule
\midrule
Parameter & \multicolumn{3}{c}{Data} & \multicolumn{2}{c}{Estimates} & \multicolumn{4}{c}{Deviance summaries} \\
$\theta$ & $y$ & $n$ & $y/n$ & $\hat{y}$ & $\hat{\theta}$ & $\mathbb{E}_{\theta \mid y}(D)$ & $D(\mathbb{E}_{\theta \mid y}\theta)$ & $p_D$ & $DIC$ \\
\midrule
$\rho$ & 35 & 1536 & 0.023 & 14.6 ( 1.5) & 0.010 (0.001) & 21.0 & 20.7 & 0.4 & 21.4 \\
$\pi\kappa$ & 113 & 2840 & 0.040 & 92.5 ( 8.9) & 0.033 (0.003) & 5.5 & 4.4 & 1.1 & 6.5 \\
$\pi(1-\kappa)$ & 136 & 2725 & 0.050 & 136.7 (11.3) & 0.050 (0.004) & 1.0 & 0.0 & 1.0 & 2.0 \\
$D_L$ & 836 & & & 836.2 (28.9) & 836.2 (28.9) & 1.0 & 0.0 & 1.0 & 2.0 \\
$D_U$ & 5034 & & & 5054.3 (70.8) & 5054.4 (70.8) & 1.1 & 0.1 & 1.0 & 2.1 \\
\midrule
Total & & & & & & 29.5 & 25.1 & 4.4 & 33.9 \\
\midrule
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\caption{DAG of common-effect network meta-analysis model, split into direct (DE) and indirect (ST) evidence informing the functional parameters $\bs{\eta}_f$, i.e. those edges outside of the spanning tree formed by the basic parameters $\bs{\eta}_b$. }
\label{fig_nmaDAGsplit}
\end{figure}
\begin{figure}
\caption{Upper panel: Posterior distributions of the nodes $D_p$ and $D_d$ for
the HIV prevalence example, on the log scale. The right-hand blue
line denotes where the total population of Poland ($N =
15,749,944$) lies, i.e. the maximum possible value \emph{a priori}
for the number diagnosed. The left-hand blue line denotes the
value $\log(N \times 0.5^3)$, i.e. the prior mean of $\log(D_p) =
\log(N\rho_p\pi_p\kappa_p)$. Lower panel: Posterior distributions
of the nodes $D_{L1}, D_{L2}, D_{U1}, D_{U2}, D_1$ and $D_2$ for
the HIV prevalence ``leave-2-out'' node-split model (J), on the
log scale. The dashed lines represent the nodes in partition 1, i.e. the ``left-out'' partition, where the posteriors are based only on the likelihood given by $\{y_4,y_5\}$ and Jeffreys' priors for $D_{L1}, D_{U1}$. The solid lines give the corresponding posteriors in partition 2, i.e. based on all the original model priors and on the dataset $\{y_1,y_2,y_3\}$. }
\label{fig_plSatD}
\end{figure}
\subsection{Asymptotics \label{sec_AppBasymp}}
Let $p(\bs{\theta}_1), \ldots, p(\bs{\theta}_Q)$ denote the set of prior distributions for the basic parameters $\bs{\theta}_q$ in each partition $q$. Then by the independence of each partition, the joint posterior distribution of all parameters $\bs{\phi}$ in all partitions satisfies
$$ p(\bs{\phi} \mid \bs{y}) \propto \prod_{q = 1}^Q p(\bs{\theta}_q) p(\bs{y}_q \mid \bs{\theta}_q). $$
If the joint prior distribution is dominated by the likelihood, then asymptotically \citep{BernardoSmith1994}, the joint posterior distribution of all nodes is multivariate normal:
$$ \bs{\phi} \mid \bs{y} \overset{a}{\sim} N_{\sum_q n_q} \left( (\bs{\hat{\phi}}_1, \ldots, \bs{\hat{\phi}}_Q), \bs{V} \right) $$
where $n_q$ is the total number of parameters in partition $q$, whether basic or not, and $\bs{V}$ is the inverse observed information matrix for the parameters $\bs{\phi}$. Since the vector of separator nodes, $\bs{\phi_S} = (\bs{\phi_1^{(s)}}, \ldots, \bs{\phi_Q^{(s)}})$, is a subset of $\bs{\phi}$, their joint posterior is also multivariate normal:
\begin{equation} \bs{\phi_S} \mid \bs{y} \overset{a}{\sim} N_{m} \left( (\bs{\hat{\phi}}_1^{(s)}, \ldots, \bs{\hat{\phi}}_Q^{(s)}), \bs{V_S} \right) \label{eqn_bayes_asym} \end{equation}
where $m = \sum_q m_q$ is the total number of separator nodes, including node-split copies, and $\bs{V_S}$ is the appropriate sub-matrix of $\bs{V}$. Since the partitions are independent, $\bs{V_S}$ is a block diagonal matrix consisting of the inverse observed information matrices for the separator nodes in each partition along the diagonal.
By theorem 5.17 of \citet{BernardoSmith1994}, since \eqref{eqn_bayes_asym} holds and if $\bs{J_h}(\bs{\phi_S}) =
\frac{\partial\bs{h}(\bs{\phi_S})}{\partial\bs{\phi_S}}$ is non-singular with continuous entries, then the posterior distribution of the transformed separator nodes, $\bs{\phi_H} = \bs{h}(\bs{\phi_S})$, is also asymptotically normal: $$ \bs{\phi_H} \mid \bs{y} \asym N_m \left(
\bs{h}(\bs{\hat{\phi}}_1^{(s)}, \ldots, \bs{\hat{\phi}}_Q^{(s)}),
\bs{J_h}(\bs{\hat{\phi}_S})^T \bs{V_S} \bs{J_h}(\bs{\hat{\phi}_S}) \right) $$ The Jacobian $\bs{J_h}(\bs{\phi_S})$ exists and is non-singular for the sorts of transformations we use in practice, for example log and logit transformations.
A further application of theorem 5.17 of \citet{BernardoSmith1994} results in a posterior distribution of the contrasts $\bs{\Delta}$ that is also asymptotically multivariate normal, if $\frac{\partial\bs{\Delta(\phi)}}{\partial\bs{\phi}} = \bs{C_{\Delta}}^T$ is non-singular with continuous entries, which, as a contrast matrix, it is: \begin{eqnarray} \bs{\Delta} \mid \bs{y} & \asym & N_p \left( \bs{C_{\Delta}}^T
\bs{h}(\bs{\hat{\phi}}_1^{(s)}, \ldots, \bs{\hat{\phi}}_Q^{(s)}),
\bs{C_{\Delta}}^T \bs{J_h}(\bs{\hat{\phi}_S})^T \bs{V_S}
\bs{J_h}(\bs{\hat{\phi}_S}) \bs{C_{\Delta}} \right) \label{eqn_bayes_asym_lincom_full} \\
& = & N_p \left( \bs{C_{\Delta}}^T \hat{\bs{\phi}}_H,
\bs{C_{\Delta}}^T \bs{V_H} \bs{C_{\Delta}}\right) \nonumber \end{eqnarray} for $\bs{V_H} = \bs{J_h}(\bs{\hat{\phi}_S})^T \bs{V_S}
\bs{J_h}(\bs{\hat{\phi}_S})$. Asymptotically, therefore, the posterior mean $\overline{\bs{\Delta}} = \bs{C_{\Delta}}^T \overline{\bs{\phi_H}} \overset{a}{\approx} \bs{C_{\Delta}}^T \hat{\bs{\phi}}_H$ and the posterior variance-covariance matrix of $\bs{\Delta}$ is $\bs{S_{\Delta}} \overset{a}{\approx} \bs{C_{\Delta}}^T \bs{V_H} \bs{C_{\Delta}}$.
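For illustration, the asymptotic normality of $\bs{\Delta}$ suggests a simple way to summarise multivariate conflict from MCMC output: estimate $\overline{\bs{\Delta}}$ and $\bs{S_{\Delta}}$ from the posterior samples and refer the quadratic form $\overline{\bs{\Delta}}^T \bs{S_{\Delta}}^{-1} \overline{\bs{\Delta}}$ to a $\chi^2_p$ reference distribution. The sketch below (in Python) is one possible operationalisation of this summary; it is illustrative only and not necessarily the exact procedure used for the p-values reported above.
\begin{verbatim}
# Chi-squared summary of multivariate conflict from posterior samples of
# the contrasts Delta (an iterations-by-p array). Illustrative use of the
# asymptotic normal result above, not the paper's exact code.
import numpy as np
from scipy import stats

def conflict_p_value(delta_samples):
    delta = np.atleast_2d(np.asarray(delta_samples, dtype=float))
    mean = delta.mean(axis=0)                         # posterior mean of Delta
    cov = np.atleast_2d(np.cov(delta, rowvar=False))  # posterior covariance S_Delta
    stat = float(mean @ np.linalg.solve(cov, mean))   # Mahalanobis-type statistic
    return stats.chi2.sf(stat, df=delta.shape[1])     # upper-tail chi^2_p probability
\end{verbatim}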
\end{document} | arXiv |
Measurement of ligand coverage on cadmium selenide nanocrystals and its influence on dielectric dependent photoluminescence intermittency

Aidan A. E. Fisher, Mark A. Osborne (ORCID: orcid.org/0000-0001-6660-2985), Iain J. Day & Guillermo Lucena Alcalde

Communications Chemistry volume 2, Article number: 63 (2019)
Photoluminescent quantum dots are used in a range of applications that exploit the unique size tuneable emission, light harvesting and quantum efficient properties of these semiconductor nanocrystals. However, optical instabilities such as photoluminescence intermittency, the stochastic switching between bright, emitting states and dark states, can hinder quantum dot performance. Correlations between this blinking of emission and the dielectric properties of the nanoenvironment between the quantum dot interface and host medium, suggest surface ligands play a role in modulating on-off switching rates. Here we elucidate the nature of the cadmium selenide nanocrystal surface, by combining magic angle spinning NMR and x-ray photoelectron spectroscopy to determine ligand surface densities, with molecular dynamics simulation to assess net ligand filling at the nanocrystal interface. Results support a high ligand coverage and are consistent with photoluminescence intermittency measurements that indicate a dominant contribution from surface ligand to the dielectric properties of the local quantum dot environment.
Semiconducting quantum dots (QDs) have become ubiquitous across a number of applications recently including areas as diverse as solar cells1,2, biological tags3,4 and within the display industry5. Unfortunately, QDs suffer from a number of often-undesirable instabilities. One such instability is the phenomenon of photoluminescence intermittency (PI), which manifests as stochastic blinking between a bright, emissive state and a dark, quenched state at the single molecule level6,7,8. The dark state has been attributed to a number of mechanisms including multiple (non-radiative) recombination centres (MRC)9 and diffusion-controlled electron transfer (DCET)10. Alternatively, the dark state can arise from QD charging and subsequent non-radiative relaxation to the valence band via an Auger quenching mechanism11,12. Importantly, former studies have shown that the bulk host-substrate dielectric constant, in which the QD is embedded, plays an important role in this stochastic switching process. Early work by Cichos and colleagues identified a linear correlation between the residence time of a QD in a dark state and reaction field factor, (1 − 1/εm) where εm is the dielectric constant of the substrate or embedding medium13,14. Further investigations by Osborne and Fisher led to the introduction of perturbations to the host-medium dielectric constant to account for coverage of the QD surface by stabilising ligands15,16. In this case, an effective dielectric constant was employed to define the self-trap energies for exciton charge-carriers in the vicinity of the QD interface, in the quantitative modelling of PI using a recently advanced charge-tunnelling and self-trapping (CTST) model15. To better understand the interplay between PI dynamics in QDs and the dielectric properties of the QD interface, we employ high resolution magic angle spinning (HR-MAS) NMR and XPS to measure the ligand surface population and molecular dynamics (MD) simulation to gauge ligand surface coverage and its contribution to the local QD nanoenvironment.
A number of analytical methods have emerged to study a wide variety of nanoparticles including gold, palladium and platinum nanoparticles using a range of techniques such as thermogravimetric analysis (TGA), Rutherford backscattering spectroscopy (RBS) and x-ray photoelectron spectroscopy (XPS)17. Typically, solid specimen methods, such as RBS and XPS, are utilised to interrogate surface coverage18,19. Unfortunately, results from both quantitative RBS and XPS, in this instance, require careful consideration to deconvolute uncoordinated ligand from ligand attached to the QD surface.
Here, we show that ligand coverage of the QD surface derived from our analysis of HR-MAS NMR and XPS spectra is consistent with dielectric dependent PI measurements on single CdSe QDs that support a lead contribution of the ligand to the effective dielectric properties of the QD nanoenvironment within the CTST framework. The study follows in the spirit of former research pioneered by Griffin, Bawendi and Alivisatos and colleagues to approximate ligand surface coverage on CdSe QDs20,21. The application of HR-MAS NMR permits accurate and rapid synthesis of spectrally resolved peaks assigned to surface coordinated and uncoordinated ligand. The use of an internal standard enables simple quantitative calculations to be performed by direct peak integration. Resolution of the surface bound ligand density, ultimately, provides the basis for determining coverage of the QD by considering the conformation of the ligand at the QD surface within a self-avoiding model (Flory theory) of polymer dynamics, as well as through MD simulation. Results are used to interpret the influence of the ligand-shell on PI dynamics within the CTST framework, key aspects of which are introduced below.
Charge-tunnelling and self-trapping at the QD-host interface
The photo-charging phenomenon in QDs and effects such as PI has long been a subject of interest since the early observations conducted by Brus and his colleagues22. Since then, a handful of models have evolved that capture the underlying photophysics. At the forefront of many of these descriptions is a tunnelling process of the exciton to trap states, either at the QD surface or externally in the supporting medium. In the latter case, this renders the parent QD charged and non-emissive due to a highly efficient Auger relaxation mechanism23,24. The nature of the trap state however, remains somewhat elusive. A recent study highlighted the intimate relationship between the stochastic blinking of single CdSe QDs and the substrate dielectric constant13,14. Notably, this may be further complicated by atmospheric effects, where it has been found that moisture in the air may passivate surface traps and alter PI blinking statistics25, although this was not found to impact the former dielectric dependence studies on polymer substrates performed under ambient air, where even a low concentration of water molecules might be expected to dominate the local dielectric constant \((\varepsilon _{{\mathrm{H}}_2{\mathrm{O}}} = 80)\)13,14,16. Osborne and Fisher further investigated the correlations between PI and the dielectric properties of the QD surround via a recently published charge-tunnelling and self-trapping (CTST) model15,16. In the CTST scheme (Fig. 1), the native, uncharged and emissive QD is denoted by the X00 state. The neutral QD may undergo stochastic transitions to either a charged core, non-emissive state (X+10) or an emissive charged surface-state (X+01) where the ejected electron is trapped by the host matrix. In the CTST nomenclature, subscripts represent the position of the exciton hole on the QD, either core (10) or surface (01), while superscripts indicate the charged state on the QD. The energetics (Fig. 1a) and kinetics (Fig. 1b) of the CTST model are formulated in terms of the QD and host-substrate macroscopic properties, namely QD size and host-dielectric properties. For example, the trapped electron is stabilised according to the host-substrate dielectric constant as alluded to by Cichos and colleagues13,14. Importantly, CTST simulations of PI in single QDs suggest that the surface ligands play a dominant role in dielectric effects of the medium surrounding the QD. The work herein investigates QD ligand coverage and identity using HR-MAS NMR spectroscopy and XPS to resolve both bound and unbound ligand fractions. Furthermore, the effects of altering the ligand surface coverage on PI is realised through simulation using the CTST model. At a more general level, understanding the impact of the nanoenvironment on the photophysical properties of CdSe QDs, may yield new information on the exciton dynamics for a number of systems including the recently reported perovskite QDs26,27.
Simplified energetic and kinetic scheme of the CTST model. a energetic scheme of the CTST framework for a general CdSe.ZnS QD. In the simplest QD system explored here, QDs were not coated with an epitaxial shell and the shell depth was set equal to zero. The band structure of the QD and trap state energies were estimated from former model iterations. In this scheme, the ground state may be excited by 473 nm light (cyan). The bulk band gap (red) and QD band gap (orange) are given by Eg and EQD, respectively. The tunnelling length of the excited state electron and hole are given by l and d respectively. b Reduced kinetic scheme, where the neutral ground state (X00) is excited with a rate constant, kx, yielding the excited neutral state. This may relax radiatively (kr), or undergo an ionisation process (ki+) to either the photoluminescent surface-charged state (X+01) or the dark core-charged state (X+10). The slow return of the electron renders the QD once again neutral and emissive. Importantly, the hole trap associated with the X+01, (red box), is influenced by the local dielectric environment. c The hole trap energetic profile for the charged surface-state. Note the hole trap depth ϕh is deeper when the surface ligands dielectric constant dominates (blue). If surface ligand effects are ignored the hole trap depth is shallower (red). Ultimately, the shallow hole trap, associated with no ligand perturbations, reduces the average potential energy barrier for electron return, ki−, resulting in short on times (large αon) in the PI profile. This was not observed experimentally and a nanocomposite dielectric environment was found to best simulate the blinking dynamics.
In a condensed CTST kinetic scheme (Fig. 1b), the PI is accounted for by charge-tunnelling of the photoexcited electron between the parent QD and the host matrix. Specifically, the neutral ground state QD (X00) may absorb a photon, yielding the bright (X00) native state. This may subsequently relax via radiative recombination producing the neutral ground state once more. Alternatively, the photoexcited neutral state (X00) may undergo a stochastic transition to either a photoluminescent charged surface-state (X+01) or dark charged core state (X+10), which upon further excitation relaxes through a dominant Auger process. The reader is referred to the literature for a complete model description15,16. Importantly, for the ionised QD, the trapping potential experienced by the surface-hole state (X+01) is described using elementary electrostatic theory for a dielectric interface and the method of image charges (Fig. 1a, c). At the simplest level the trapping potential is given (Gaussian units) by
$$\phi _{\mathrm{h}} = \frac{{Ke^2}}{{2r_{\mathrm{h}}}}\left( {\frac{1}{{\varepsilon _{\mathrm{m}}}} - \frac{1}{{\varepsilon _{{\mathrm{QD}}}}}} \right)$$
where the dielectric constants of the host-matrix and QD are given by εm and εQD, respectively, K = (εQD − εm)/(εQD + εm), the fundamental charge is given by e and rh is the hole trap radius28. In CTST, the surface hole trap differs from the more distant matrix-bound electron trap, since the stabilising ligands of the QD will influence the dielectric properties of the matrix in close proximity to the surface. To incorporate effects of surface ligand, εm is replaced with an effective dielectric constant, εeff, which is strictly a variable in CTST and generalises the self-trap energy of the hole to account for variations in the dielectric constant at the QD interface in mixed-media. HR-MAS was employed to evaluate the QD-ligand surface coverage and comparison made to CTST modelling of PI kinetics, which supports the ligand-shell as the dominant contributor to dielectric dependent QD blinking16.
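As a purely numerical illustration of Eq. (1), the short sketch below evaluates the hole self-trap depth for a ligand-like and a host-like dielectric environment. The trap radius rh = 0.5 nm is a hypothetical value chosen only for this illustration (the model parameters actually used are given in refs 15,16), while εQD ≈ 9 is the CdSe value quoted later in the text; the deeper trapping obtained for the low, ligand-dominated dielectric constant is what raises the barrier to electron return in the on-state (Fig. 1c).

```python
# Illustrative evaluation of Eq. (1): hole self-trap depth at the QD surface.
# Assumed values: r_h = 0.5 nm (hypothetical trap radius) and eps_QD = 9;
# eps_m spans the SA ligand (2.2), a ligand/host mix (3.4) and a PVA host (14).
E_COUL_EV_NM = 1.44  # e^2 / (4*pi*eps0) in eV nm, converts the Gaussian form to SI

def hole_trap_depth(eps_m, eps_qd=9.0, r_h_nm=0.5):
    k = (eps_qd - eps_m) / (eps_qd + eps_m)          # image-charge factor K
    return E_COUL_EV_NM * k / (2 * r_h_nm) * (1 / eps_m - 1 / eps_qd)

for eps_m in (2.2, 3.4, 14.0):
    print(f"eps_m = {eps_m:4.1f}  ->  phi_h = {hole_trap_depth(eps_m):.3f} eV")
```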
HR-MAS studies of single QD ligand density
The use of NMR has recently emerged as a spectroscopic tool for the study of QDs, where the ease of preparation compared with RBS has made NMR particularly attractive29,30,31. Furthermore, it was found that HR-MAS (Fig. 2) provided satisfactory deconvolution of the broad, pseudo solid-state signal, associated with coordinated ligand, and the sharp resonance of the free, solution phase ligand and residual non-coordinating octadecene (ODE). The spinning of the samples at a rate of 2 kHz at the magic angle removes any broadening arising from magnetic susceptibility differences due to the heterogeneous nature of the sample. A comparison between the NMR spectrum under MAS and a static solution spectrum is shown in Supplementary Fig. 1. In the case of the static solution spectrum, the bound ligand alkyl backbone signal is peak broadened and lost in the baseline. HR-MAS reduces broadening effects and an emergent bound signal is observed. No vortex effects were observed at the low spin rate. All measurements were performed at a temperature setting of 298 K, regulated by the spectrometer's variable temperature controller. The sample's actual temperature at this spin rate and temperature setting is approximately 305 K as measured by an ethylene glycol thermometer32.
1D proton NMR using MAS of free stearic acid ligand and CdSe QDs with ligand surfactant. a Proton NMR of the free stearic acid ligand dissolved in deuterated toluene. The triplet (0.9 ppm) was assigned as the –CH3 methyl terminus of the aliphatic chain, the broad poorly resolved peak (1.2 ppm) was assigned as the –(CH2)14– backbone of the long alkyl chain. The β protons appear at 1.48 ppm, while the α proton signal, inset, is at 2.03 ppm. The multiplet (2.05 ppm) arises from undeuterated toluene. b Proton NMR under HR-MAS of CdSe QDs in deuterated toluene. Note the broad shoulder (1.4 ppm) observed for the ligand signals, which is expected to arise from a slowly tumbling fraction of ligand bound to the QD surface. c Free SA ligand proton NMR, which has been expanded to show the scalar coupling. No broad downfield shoulder was detected for free SA. d Enlarged NMR spectrum of the CdSe QD sample. HR-MAS was employed to sharpen the broad resonance of the bound surface ligand, which appears downfield to the free ligand. This presented a unique opportunity to perform quantitative NMR using a benzene internal standard, where the broad shoulder peak area was integrated directly.
To calculate the ligand density, as-synthesised core QDs were analysed using UV-Vis spectroscopy. The empirical relations outlined by Peng and colleagues33 may be used to determine the QD concentration, number and diameter. We note that several different empirical functions can be found in the literature, each yielding subtly different QD numbers in solution. For example, Peng's relations give a QD population of 3.77 × 1015, whereas more recent work by Jasieniak and colleagues34 derive a QD population of 2.16 × 1015. Work reported here employs the latter, more recent fitting parameters. The HR-MAS rotor, specifically designed to handle liquids and slurries, was subsequently loaded with a known number of QDs and an internal benzene standard. It was observed in the NMR spectra that the sharp methyl terminus resonance of the rapidly tumbling ligands displayed evidence of a broad shoulder. Typically, broad resonances may be associated with slow molecular tumbling in solution phase NMR and it was conjectured that the signal was an indication of the coordinated ligand over the NMR timescale. Moreover, it was expected that the broadening of the methyl resonance is not indicative of multiple surface binding sites since the methyl terminus is >2 nm from the QD surface in solution phase. In order to confirm the nature of the broad resonance, we performed diffusion ordered NMR (DOSY) under MAS (Fig. 3a, b) in an effort to acquire the diffusion coefficient of the detected broad fraction. The measured diffusion coefficient, for the broad resonance, may be substituted into the Stokes Einstein equation to yield an estimate of the hydrodynamic radius. In this case, a radius of 2 nm was found to be consistent with data acquired using HR-TEM (Fig. 4). The sharp resonance of the free ligand yields a hydrodynamic radius of 0.4 nm. This suggests some coiling of the free aliphatic chain in solution, where for a fully extended ligand, based on the C-C bond length (0.154 nm), we would expect a hydrodynamic radius of about (0.154 × 18)/2 = 1.4 nm for the ligand. Lastly, to reinforce these measurements, we followed in the spirit of former publications35,36,37 and performed NOE under MAS, where we observed a distinctive negative NOE for the broad resonance and a positive peak for the sharp signal (Fig. 3c, d). Primarily, this arises from the dominant longitudinal relaxation pathway, where slow tumbling molecules favour the zero quantum transition pathway. In contrast, rapidly tumbling molecules, id est small molecules, predominantly relax via a double quantum transition. Ultimately, spectral resolution of the average free and bound ligand allows direct integration of the broad resonance and numerical calculation of the number of ligands bound to a single QD.
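The conversion from DOSY diffusion coefficients to the hydrodynamic radii quoted above follows the Stokes-Einstein relation; a minimal sketch is given below, in which the viscosity of toluene-d8 at the ~305 K sample temperature is an assumed value (~0.55 mPa s) rather than one measured here.

```python
# Stokes-Einstein estimate of hydrodynamic radius from a DOSY diffusion
# coefficient: r_H = k_B * T / (6 * pi * eta * D).
import math

K_B = 1.380649e-23     # J/K
T = 305.0              # K, approximate sample temperature under MAS
ETA = 0.55e-3          # Pa s, assumed viscosity of toluene-d8 near 305 K

def hydrodynamic_radius_nm(D_m2_per_s):
    return K_B * T / (6 * math.pi * ETA * D_m2_per_s) * 1e9

print(hydrodynamic_radius_nm(2e-10))    # bound-ligand resonance -> ~2 nm
print(hydrodynamic_radius_nm(12e-10))   # free ligand / ODE -> ~0.3-0.4 nm
```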
Deconvolution of the free and bound ligand on CdSe QDs using diffusion NMR and NOE techniques. a DOSY spectrum under MAS of the ligand methyl resonance. The sharp resonance of the triplet, associated with free ligand/residual ODE, diffuses at about 12 × 10−10 m2 s−1. Importantly, the broad shoulder at 0.95 ppm is associated with bound ligand diffusing at approximately 2 × 10−10 m2 s−1. The Stokes Einstein equation was employed to estimate QD size from the diffusion data yielding a hydrodynamic radius of 2 nm. b the alkyl backbone of the ligand similarly exhibits sharp and peak broadened resonances, which may be ascribed the free/residual ODE and bound ligand, respectively. c NOE spectrum of the CdSe QDs under MAS. The methyl resonance decomposed into both a positive (red) and negative (blue) NOE phase evidencing a rapid and slowly tumbling ligand population, respectively. d detailed NOE contour spectrum under MAS emphasising the methyl resonance. The sharp resonance (positive NOE) and broad shoulder (negative NOE) highlight the rapid tumbling of the free ligand/residual ODE (red) and slow tumbling of the bound ligand (blue).
HR-TEM image and lattice reconstruction of as-prepared CdSe QDs for use in NMR surface ligand density studies. a high resolution image of a single CdSe QD of diameter 3 nm. This QD size was used to estimate the number of surface atoms from Eq. (2) and hence ligand density. b reciprocal space image of the CdSe QD viewed along the [110] zone axis. c reconstructed real space lattice after noise filtering of the reciprocal lattice. d CdSe zinc blende atomic model (ICSD:620439) of the (110) lattice facet. The density of the zinc blende phase (5.66 g cm−3) was calculated and used in Eq. (2) to estimate the number of surface atoms. The red and black lines highlight the (002) and (\(\bar 1\)11) planes, respectively. The purple and green spheres represent selenium and cadmium lattice positions respectively. e schematic of the QDs dispersed in deuterated toluene undergoing NMR under MAS. The effective dielectric constant and matrix dielectric constant around the QD (ICSD:620439) are given as εeff and εmatrix, respectively. At the QD surface, coordinated ligands, resolved by NMR, modify the matrix dielectric constant, which leads to subtle perturbations in the hole trap depth defined by Eq. (1).
Attention is now turned to the composition of the ligand shell. We performed simple 1D NMR on stock octadecylamine (ODA), stearic acid (SA) and trioctylphosphine (TOP) (Supplementary Fig. 2) in a bid to detect unique NMR peaks between the various ligands. Importantly, SA exhibits a triplet (2.03 ppm), which is absent in our QD NMR spectra. This may be taken as evidence of the absence of SA in our QD samples. However, another explanation is spectral broadening of the signal, due to surface binding, and its loss in the background noise. To test this we used XPS elemental analysis. Each ligand (SA, ODA and TOP) contains a unique marker element (oxygen, nitrogen and phosphorus, respectively), each with a distinct binding energy. XPS data (Supplementary Fig. 3) shows strong signals for cadmium, selenium and oxygen with no evidence of nitrogen or phosphorus. This simplifies our ligand system to exclusively SA, which has a dielectric constant of εSA = 2.2 (ref. 38).
In the case of our QD NMR samples (Fig. 2), we detect a weak isolated signal at 1.95 ppm, which may be attributed to residual non-coordinating ODE from the QD synthesis. This peak integral (0.035) is used to infer the expected peak areas for the ODE alkyl backbone and methyl protons at 1.2 and 0.9 ppm respectively. This analysis highlights that the free alkyl (1.2 ppm) and methyl (0.9 ppm) proton signals for our QD sample, are predominantly non-coordinating residual ODE (Supplementary Note 1). For simplicity, we assume no free ligand is present in subsequent calculations for the QD sample. The NMR peak integration ratio for pure SA ligand is theoretically expected to be 1:10.0 (methyl:alkyl backbone protons excluding downfield alpha protons)(Supplementary Fig. 2). Experimentally, for the bound ligand fraction of our QD sample, we observe an integration ratio of 1:10.0 pointing toward an exclusive SA ligand surface. Hence, by exploiting the benzene internal standard we calculate a ligand surface coverage of 2.3 ligands nm−2 for the linear aliphatic SA chains. This appears to be in good agreement with literature estimates31 and supports findings from our XPS measurements.
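The internal-standard bookkeeping behind the 2.3 ligands nm−2 figure can be laid out explicitly. In the sketch below the benzene amount and the peak integrals are placeholders chosen only to illustrate the arithmetic (they are not the measured values); the QD number of 2.16 × 1015 from the Jasieniak sizing relation and the use of the bound CH3 shoulder follow the text.

```python
# Quantitative-NMR bookkeeping for the bound-ligand fraction (sketch only).
# Integrals and the amount of benzene standard are placeholders; the QD
# number follows the UV-Vis sizing described in the text.
import math

N_A = 6.022e23
n_qd = 2.16e15                # QDs loaded into the rotor (from UV-Vis sizing)
mol_std = 1.0e-6              # mol benzene internal standard (placeholder)
I_std, H_std = 1.00, 6        # benzene integral and proton count (placeholder)
I_bound, H_bound = 0.12, 3    # bound CH3 shoulder integral (placeholder)

mol_bound = (I_bound / H_bound) / (I_std / H_std) * mol_std
ligands_per_qd = mol_bound * N_A / n_qd
density = ligands_per_qd / (4 * math.pi * 1.5 ** 2)   # per nm^2, R_QD = 1.5 nm
print(f"{ligands_per_qd:.0f} ligands per QD, {density:.1f} ligands nm^-2")
```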
To further evaluate the ligand density, the density of a single CdSe QD crystal was calculated using diffraction data (Supplementary Fig. 4) and HR-TEM images (Fig. 4 and Supplementary Fig. 5). The number of CdSe ion pairs was then determined using the following relationship
$$\frac{{V_{{\mathrm{QD}}}\rho _{{\mathrm{CdSe}}}}}{{M_{r{\mathrm{CdSe}}}}}N_{\mathrm{A}} = 249\,{\mathrm{ion}}\,{\mathrm{pairs}}$$
where VQD is the volume of a single QD, the density of the QD is given by ρCdSe, the molecular weight is MrCdSe and NA is the Avogadro constant. The number of surface atoms coordinated to ligand was then approximated using a spherical cluster analysis. It can be shown, that for a cluster size of about 500 atoms (249 CdSe ion pairs), 45% of the atoms reside at the crystal surface (Supplementary Note 2). Assuming a crystal stoichiometry of approximately (1:1 = Cd:Se), as verified by inductively coupled plasma mass spectroscopy (ICP-MS) (Supplementary Fig. 6), then approximately 225 atoms (Cd + Se) exist at the QD surface, where a single QD accommodates about 65 SA ligands. Thus, one may naively assume one quarter coverage of the QD surface on a ligand per atom basis, leaving the surface largely exposed to intrusion by the host-embedding medium. However, results from CTST simulations agree somewhat poorly with experimental data when the dielectric constant of the QD surrounding-medium approaches that of the host-matrix alone16. To interpret this discrepancy we develop arguments using former communications, which report the effects of ligand folding dynamics of aliphatic amines. Strouse and colleagues29 made use of 77Se 1H heteronuclear correlation (HETCOR) NMR to investigate the interaction between the alkyl backbone of hexadecylamine (HDA) and surface selenium sites. The authors showed evidence of significant chain tilting towards the QD surface, where a total of five selenium surface sites were found to interact with the alkyl chain. Generally, photoluminescence studies on single QDs are performed in the solid state and it is reasoned here that the absence of solvent will accentuate collapse of the ligand onto the QD surface. Moreover, studies on surface coverage in the solid state, conducted by Rosenthal and colleagues19 using RBS, revealed the surface coverage to be in excess of 70%, where the surface ligand was a sterically bulky trioctylphosphine oxide (TOPO) ligand. Importantly, in the PI studies performed here, we found our CTST simulations were in agreement with experiment when a value of the effective dielectric constant, εeff, close to that of the ligand, was employed. Hence, we argue from our own data, and the findings of Strouse and Rosenthal, that ligand coverage should fill at least 70% of the surface, where in this case the QD is efficiently wrapped by the long aliphatic chain of SA.
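The arithmetic behind Eq. (2) and the naive per-atom coverage estimate is summarised below, using only the values quoted in the text (3 nm diameter, zinc-blende density of 5.66 g cm−3, ~45% surface atoms for a ~500-atom cluster and 65 SA ligands per QD).

```python
# Arithmetic behind Eq. (2) and the per-atom coverage estimate.
import math

N_A = 6.022e23
d_nm = 3.0                         # QD diameter from HR-TEM
rho = 5.66                         # g/cm^3, zinc-blende CdSe
M_CdSe = 112.41 + 78.97            # g/mol per CdSe ion pair

V_cm3 = (4 / 3) * math.pi * (d_nm / 2 * 1e-7) ** 3
ion_pairs = V_cm3 * rho / M_CdSe * N_A         # ~250, cf. 249 in Eq. (2)
surface_atoms = 0.45 * 2 * ion_pairs           # ~225 Cd + Se surface atoms
per_atom_coverage = 65 / surface_atoms         # ~0.29, i.e. roughly a quarter
print(round(ion_pairs), round(surface_atoms), round(per_atom_coverage, 2))
```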
Indeed it was found that within a modified-CTST model used to describe the shell-thickness dependent blinking statistics in core-shell CdS capped CdSe QDs, a ligand filling of almost 100% was required to faithfully reproduce trends in the power-law exponents and cut-off times with increasing CdS monolayers15. Again, the result highlights the need to consider ligand conformation on the QD surface when correlating coverage, as measured in solution phase, with photoluminescent (PL) blinking statistics measured under dry, ambient conditions.
QD blinking and effects of the nanoenvironment
Complete filling of the QD surface by collapse of the ligand is not wholly unexpected, given the long C18 backbone of the dominant SA [CH3(CH2)16CO2H], even at only 28% surface atom coverage (65 SA per 225 atoms). Within a self-avoiding model of polymer dynamics we can use the Flory radius, \(R_{\mathrm{F}} = l(d/3)^{\nu /3}N^{\nu}\), to estimate the area covered by a ligand, where l = 0.154 nm is the C-C bond-length, N = 18 is the number of carbon units and ν = 3/(d + 2) is the Flory exponent for a conformational space of dimension d (ref. 39). For tethered polymers the Flory exponent has been found to vary from the 3D to 2D limits, ν = 3/5 and ν = 3/4, respectively, with increasing surface interaction40. Using the 3D limit as a lower bound, a single ligand has a footprint of \(\pi R_{\mathrm{F}}^2 \simeq 2.4\,{\mathrm{nm}}^{\mathrm{2}}\), giving a total surface coverage of 156 nm2 for 65 SA per QD. For a mean QD radius of 1.5 nm and surface area \(4\pi R_{{\mathrm{QD}}}^2 \simeq 28\,{\mathrm{nm}}^2\), the ligand represents an >5× coverage of the surface. The excess is similarly significant from an atomistic perspective, with the 65×C18 atoms again representing more than five-fold the number of surface atoms. As such, relaxation of the ligand at the QD surface would be expected to provide more than monolayer capping of the surface. We illustrate this possibility with results from MD simulations (ChemSite Pro, ChemSW) of SA ligands on a CdSe surface (Fig. 5a, b). The simulation was reduced to a 4 × 4 array (5.76 nm2) of the CdSe zinc-blende unit cell with 13 associated SA molecules (2.3 ligands nm−2) to represent closely the NMR measured ligand densities. From an extended, "standing" start (Fig. 5a), the ligand conformation expected from swelling in the organic NMR solvent, SA molecules were relaxed at constant temperature (300 K) under periodic boundary conditions and without dielectric solvation to represent the ambient air under which PL blinking studies have been performed. We found the SA ligand backbone readily condensed with significant coverage of the QD surface, with typical exposures of only 2–3 CdSe units of the 32 surface pairs corresponding to >90% filling at the QD interface (Fig. 5b).
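The Flory-footprint argument above reduces to a few lines of arithmetic, reproduced in the sketch below for the 3D limit (d = 3); it recovers the ~2.4 nm2 footprint, ~156 nm2 total area and >5× nominal coverage quoted in the text.

```python
# Flory-radius estimate of the ligand footprint and nominal surface coverage.
import math

l, N, d = 0.154, 18, 3             # C-C bond length (nm), carbon units, dimension
nu = 3 / (d + 2)                   # Flory exponent (3/5 in the 3D limit)
R_F = l * (d / 3) ** (nu / 3) * N ** nu
footprint = math.pi * R_F ** 2                  # ~2.4 nm^2 per ligand
total = 65 * footprint                          # ~156 nm^2 for 65 SA per QD
qd_area = 4 * math.pi * 1.5 ** 2                # ~28 nm^2 for R_QD = 1.5 nm
print(f"R_F = {R_F:.2f} nm, footprint = {footprint:.1f} nm^2, "
      f"coverage = {total / qd_area:.1f}x the QD surface")
```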
QD-ligand surface coverage from MD simulation and CTST analysis of PI in CdSe. a Representation of SA molecules in extended conformations on a 4 × 4 supercell CdSe surface, at a density of 2.3 ligands nm−2 and b following relaxation of ligands under ambient, non-solvated conditions at constant temperature (300 K) with periodic boundary conditions. In the extended ligand conformation a significant fraction of the CdSe surface is exposed (with 13 ligands per 32 CdSe surface pairs), while in the collapsed state much of the surface is filled by the ligand backbone. c Extract of a single QD PL intensity trajectory from a sample of the same core-QDs analysed by NMR and d the corresponding on-times (blue) and off-times (red) PDDs compiled from 20+ individual QDs. e Dependence of the CTST mean electron tunnelling barrier potentials Von and Voff on the effective dielectric constant of the QD nanoenvironment. The A−B/εeff scaling (dotted lines with blue on, red off) in each case (Aon/off ≃ 7.37/7.44, Bon/off ≃ 3.0/3.6), arises from the self-energies that define the charge-carrier traps, notably the hole stabilisation energy ϕh (see text). f Dependence of TPL (see text) parameters αon and αoff on εeff. Error bars generated from fits to 30 on/off PDDs derived from CTST simulations. Inset: dependence of truncation time,\({\mathrm{\Gamma }}_{{\mathrm{on}}}^{ - 1}\) as defined in the CTST model on \(\varepsilon _{{\mathrm{eff}}}^{ - 1}\) (inset). Dotted lines represent linear least squares fit to αon (blue), αoff (red) and \({\mathrm{\Gamma }}_{{\mathrm{on}}}^{ - 1}\) (inset green).
The nature of the QD surface affects the range of PL-on and off switching rates in the QD PL intensity trajectory (Fig. 5c), as characterised by the probability density distributions (PDDs) of on and off-times extracted (Fig. 5d) by threshold analysis16. In the CTST model the effective dielectric constant influences QD PL-blinking through a dependency of the electron and hole tunnelling potentials on the self-trap energies of the charge-carriers in the host and at the QD surface, respectively. Specifically, the dependence of electron and hole trapping energies on the permittivity of the QD nanoenvironment, in this case εeff, gives rise to barrier heights that increase as Von/off = A−B/εeff with increasing dielectric constant (Fig. 5e), where constants A and B are functions of the electron affinity and the QD band gap (Supplementary Note 3). For electron recombination in the surface-charged, on-state, X+01, additional stabilisation of the hole at the QD-surface raises the overall barrier and reduces the downhill tunnelling gradient compared to electron return to the valence band hole in the core-charged, off-state, X+10. The effect on PI statistics is to increase the range of PL-on and off times with increasing εeff, as characterised by the decreasing exponents αon and αoff of the truncated power-law (TPL), \(P\left( t \right) = At^{ - \alpha _{{\mathrm{on/off}}}}e^{ - {\mathrm{\Gamma }}t}\), commonly used to describe the on and off-time PDDs (Fig. 5f). The saturation rate in the exponential component defines a cut-off time, Γ−1, at which the PL blinking statistics no longer follow power-law behaviour and is generally most evident in on-time distributions. Within the CTST framework, cut-off in the on-times evolves from a biexciton mechanism that leads to quenching of the X+01 on-state by recombination of a hot-electron at the surface trapped hole. Here, the net rate of biexciton formation (within the exciton lifetime) is given by \(k_{\mbox{x}}^2/k_{\mbox{r}}\), where the excitation and radiative relaxation rate constants kx and kr depend on the local field produced by the QD-ligand dielectric mismatch (εQD ~ 9ε0 vs εeff ~ 2ε0). Ultimately this leads to a cut-off rate that scales as the square of the local field factor, F = 3εeff/(εQD + 2εeff), and a cut-off time that increases near-linearly with 1/εeff (Fig. 5e inset). We find an effective dielectric constant εeff = 2.2, corresponding to that of the SA ligand, provides a best fit of simulated on- and off-time PDDs and associated TPL parameters to those derived from experiment. The dielectric constant is consistent with the model of ligand condensation and collapse at the QD surface. Values αon = 1.64 ± 0.18 and αoff = 1.59 ± 0.14 compare favourably (within error) with those obtained by Isaac et al.13 for TOPO stabilised core CdSe QDs in low dielectric polymeric media (1.56 and 1.62 respectively in p-terphenyl, εm = 2.1), although the authors observe a weaker inverse dielectric dependence of αoff on the dielectric environment than the CTST model immediately suggests and αon was observed to be almost independent of host permittivity.
However, for the range of QD host matrices analysed, with dielectric constants spanning εm = 2.1 (p-terphenyl) to 14 (PVA), we note that given TOPO and SA ligands have closely related dielectric constants (2.5 and 2.2, respectively), the effective constants reduce to the range εeff = 2.2 to 3.4, assuming a simple mean of ligand and host-dielectric constants and the lower estimate of 90% surface coverage obtained from our simulations. In this case, the corresponding values of αoff = 1.62 and 1.38 (from ref. 13) for these reduced εeff constants now fall in line with those predicted by the CTST model (Fig. 5f, blue dotted line), although corresponding on-time exponents αon = 1.56 and 1.58 remain less sensitive to εeff than expected from CTST. We further note that while our experimentally derived truncation time, \({\mathrm{\Gamma }}_{{\mathrm{on}}}^{ - 1} = 4.8 \pm 3.0\), is subject to large error and differs from typical values (1.2–1.8) measured by Isaac et al., it does fit within the εeff dependent local field factor that governs the biexciton generation and surface-hole quenching associated with the suppression of long on-times in the CTST model. In sum, the results suggest an effective dielectric constant, εeff, which accounts for a significant coverage of the QD surface by ligand and restricted exposure to the QD host substrate or embedding medium, is a relevant quantity in the analysis of QD optical properties in different dielectric environments.
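The dependences discussed in the last two paragraphs can be traced through numerically. The sketch below combines a coverage-weighted estimate of εeff (our reading of the "simple mean" with 90% ligand filling described above; the exact mixing rule is an assumption) with the barrier scaling Von/off = A − B/εeff using the constants quoted in Fig. 5e and the local field factor F = 3εeff/(εQD + 2εeff), taking εQD ≈ 9 from the text.

```python
# Coverage-weighted effective dielectric constant, CTST barrier heights and
# local field factor. A/B values are those quoted in Fig. 5e; the weighting
# rule is an assumption consistent with the quoted eps_eff range of 2.2-3.4.

def eps_eff(eps_ligand, eps_host, fill=0.9):
    return fill * eps_ligand + (1 - fill) * eps_host

def barrier(eps, A, B):
    return A - B / eps                        # V_on/off = A - B / eps_eff (eV)

def field_factor(eps, eps_qd=9.0):
    return 3 * eps / (eps_qd + 2 * eps)       # F, sets k_x, k_r and the cut-off

for host in (2.1, 14.0):                      # p-terphenyl and PVA hosts
    e = eps_eff(2.2, host)                    # SA-capped QDs, eps_SA = 2.2
    print(f"host {host:4.1f}: eps_eff = {e:.2f}, "
          f"V_on = {barrier(e, 7.37, 3.0):.2f} eV, "
          f"V_off = {barrier(e, 7.44, 3.6):.2f} eV, F = {field_factor(e):.2f}")
```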
To conclude, this communication has investigated the nature of single QD surfaces using HR-MAS to provide a quantitative measure of the average number of ligand molecules bound to the QD surface. Measured ligand densities have been used to estimate the QD ligand coverage, following condensation at the QD surface, through MD simulation. Finally, ligand coverage was used to interpret PI in single QDs within the CTST framework. Comparison between simulation and experiment found the CTST model consistent with QD blinking kinetics that are dependent on dielectric properties at the QD-surface, where the dominant contributor to the effective dielectric constant is the stabilising ligand.
Cadmium oxide (CdO) 99.5% trace metals basis, stearic acid (SA) 95% reagent grade, 1-octadecene (ODE) 90% technical grade, octadecylamine (ODA) 90% technical grade, trioctylphosphine (TOP) 90% technical grade and toluene-d8 99 atom % D were sourced from Sigma-Aldrich. Selenium 99.999% 200 mesh was sourced from Alfa Aesar. Ethanol and methanol were sourced from VWR. Chemicals were used without further purification.
Synthesis of CdSe QDs
Zinc-blende QDs were produced using a modified literature protocol41. Briefly, a three-neck round-bottom flask was charged with cadmium oxide (35 mg), stearic acid (0.35 g), 1-octadecene (4 mL) and a magnetic stir bar. A vacuum was applied to the sealed vessel for 15 mins followed by an argon stream. The mixture was heated to 240 °C and the temperature maintained until a clear colourless solution formed. The mixture was then cooled to room temperature and octadecylamine (1.4 g) was added. The flask was degassed and backfilled with argon. The mixture was subsequently heated to 270 °C for the injection of the selenium precursor.
The selenium precursor was prepared by dissolving selenium powder (0.12 g) in trioctylphosphine (2 mL). The rapid injection of the selenium precursor into the hot cadmium mixture induced QD nucleation, indicated by an immediate colour change from clear and colourless to yellow/red. Growth of the QDs proceeded for two minutes at 250 °C before cooling to room temperature. Flocculation of the QDs was achieved by the addition of a methanol/ethanol anti-solvent. The resulting suspension was centrifuged at 3000 rpm for 30 mins. The supernatant was discarded and the QD solid dissolved in deuterated toluene.
Single molecule photoluminescent studies
Photoluminescence experiments were conducted using an inverted optical microscope (Nikon, Eclipse TE2000, Japan) equipped with a high numerical aperture objective lens (Nikon, Plan Apo, 60x, NA 1.45, Japan). Excitation light from a CW laser (473 nm, Scitec Instruments, UK) was passed through a quarter-wave retarder before being focussed into the back aperture of the objective lens via a 200 mm plano-convex lens. Through-objective total internal reflection (TIR) was achieved by tilting the excitation beam off-axis. The excitation light and sample photoluminescence were separated by employing a dichroic mirror (Semrock, FF509-Di01, USA). The light collected from the QDs was subsequently passed through an emission filter (Semrock, Brightline 609/54, USA) before being projected onto a chilled ICCD camera (Princeton Instruments, PI-Max Gen 3). The data was collected using a camera integration time of 80 ms and a beam excitation power of ~100 W cm−2, after accounting for near-field enhancement. Laser powers were kept low to minimise any photo-oxidation effects. Typically, QD blinking profiles were acquired for at least 12,000 frames after which the QDs underwent significant photobleaching. Image focus was maintained with the assistance of a bespoke feedback loop, which monitors the z-plane stage displacement. The focus was restored using a motorised focus drive and control hub (Prior Scientific, Proscan 2, UK). The colloidal QD dispersions were spun cast at 3000 rpm onto a flame-cleaned coverslip (Menzel Glaser, 22 × 40, 1.5, EU), where QD concentrations were adjusted to produce a desirable density of about 0.1 QD μm−2 for single molecule measurements.
Image processing and data analysis
Collected image stacks of single QDs were analysed using ImageJ and a customised photoluminescence intermittency algorithm. Essentially, the algorithm detects single photoluminescent QDs and crops regions about them. The QD grey-scale counts were then plotted in the temporal domain to produce the intensity trajectory. The photoluminescently quenched dark state was distinguished from the bright state by establishing a threshold, where we adopt a threshold of 2σ above the background as is common within the literature. Furthermore, it was noted that grey-states were rare within the trajectory profiles and a separate treatment of these borderline intensity states was not required. This is shown in Supplementary Fig. 7. In this case, the grey state of bare CdSe QDs is only weakly modulated and is approximately the same intensity as the on state. In contrast, core-shell CdSe.CdS QDs exhibit a deeply modulated grey state, which approaches the on-off threshold for blinking analysis. The grey states arise naturally from the CTST description, where the excess hole equilibrium is shifted toward the core for thick shell QDs. This manifests as low intensity grey states over the camera exposure time. Since we investigate only bare CdSe QDs in detail, which show well-defined on/off states, we use the simple 2σ threshold to separate on and off events. The probability density distribution was calculated using the method outlined by Nesbitt and colleagues7, given by P(ti) = 2Ni/[(ti+1 − ti) + (ti − ti−1)], where Ni is the number of occurrences of the bright or dark event of duration ti while ti+1 and ti−1 represent the durations of the succeeding and preceding events, respectively. The PDD data were aggregated for a minimum of 30 individual QDs and the data was fitted using a truncated power law (TPL), \(P\left( t \right) = At^{ - \alpha }e^{ - t/\tau _{\mathrm{c}}}\), where fitting parameters α and τc were varied using a Levenberg-Marquardt algorithm for nonlinear least-squares minimisation.
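For orientation, a minimal sketch of this thresholding and fitting pipeline is given below in Python. It is an illustration only and not the customised algorithm used in this work: the array name trace, the frame time dt, the helper-function names and the use of SciPy's curve_fit are all our own choices.

import numpy as np
from scipy.optimize import curve_fit

def on_off_durations(trace, bg_mean, bg_sigma, dt):
    # Threshold the intensity trajectory at 2*sigma above background and
    # return the lists of on- and off-event durations (in seconds).
    on = trace > bg_mean + 2.0 * bg_sigma
    durations = {True: [], False: []}
    state, length = bool(on[0]), 1
    for s in on[1:]:
        if bool(s) == state:
            length += 1
        else:
            durations[state].append(length * dt)
            state, length = bool(s), 1
    durations[state].append(length * dt)
    return durations[True], durations[False]

def probability_density(times):
    # P(ti) = 2*Ni / [(ti+1 - ti) + (ti - ti-1)], with one-sided differences
    # used at the shortest and longest observed durations.
    t, counts = np.unique(np.asarray(times, dtype=float), return_counts=True)
    if len(t) < 2:
        return t, counts.astype(float)
    p = np.zeros_like(t)
    for i in range(len(t)):
        left = t[i] - t[i - 1] if i > 0 else t[i + 1] - t[i]
        right = t[i + 1] - t[i] if i < len(t) - 1 else t[i] - t[i - 1]
        p[i] = 2.0 * counts[i] / (left + right)
    return t, p

def truncated_power_law(t, A, alpha, tau_c):
    return A * t ** (-alpha) * np.exp(-t / tau_c)

# Example use on one trajectory (off-time statistics):
# on_times, off_times = on_off_durations(trace, bg_mean, bg_sigma, dt=0.08)
# t_off, p_off = probability_density(off_times)
# (A, alpha_off, tau_c), _ = curve_fit(truncated_power_law, t_off, p_off, p0=(1.0, 1.5, 10.0))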
UV-Vis measurements
UV-Vis data was obtained using a Thermo Scientific UV300 spectrometer. The data was collected using a 1 nm data interval across a single scan cycle at 240 nm/min.
NMR methods
NMR measurements were performed using a Varian VNMRS 600 spectrometer. All magic angle spinning experiments employed a 4 mm HR-MAS gHX Nano probe equipped with a magic angle gradient coil, producing up to 138 G cm−1. The spectrometer temperature was regulated at 298 K for all measurements with an actual sample temperature at low spin speeds of 305 K as calibrated using an ethylene glycol thermometer. The 1D NMR and NOESY data was processed using the MestreNova software. The raw DOSY data was processed using the DOSY toolbox42.
1D proton HR-MAS NMR data were acquired at a 1H frequency of 599.7 MHz with a spectral width of 9615 Hz, 32768 data points and up to 128 transients. Quantitative measurements were performed at a spin rate of 2 kHz using a small flip angle pulse enabling more rapid recycling of the magnetisation (recycle delay 1 s). An inversion recovery experiment was performed to determine the T1 of the slowest relaxing signal (0.74 s). The inter-pulse separation (recycle delay + acquisition time) was at least five times longer than the T1 of the slowest relaxing signal.
2D HR-MAS DOSY NMR was generated using the One-shot pulse sequence43, which reduces problematic eddy currents and does not require phase cycling. Importantly, the diffusion delay was 50 ms, the diffusion encoding gradient pulse was 1 ms, and a total of 15 gradient amplitudes were used from 5.09 G cm−1 up to 80.1 G cm−1. An imbalance factor of 0.2 was used for all experiments. The pulse sequence delays were rotor synchronised, with the spin rate being 2 kHz. The data was subsequently analysed using the modified Stejskal-Tanner equation in order to synthesise the 2D spectra.
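As a rough illustration of the underlying fit (not the DOSY Toolbox processing itself), the sketch below uses the standard Stejskal-Tanner decay S = S0 exp[-(γ g δ)² (Δ - δ/3) D] with the delays quoted above; the One-shot imbalance-factor corrections are deliberately omitted and the variable names are our own.

import numpy as np
from scipy.optimize import curve_fit

GAMMA_1H = 2.675e8       # 1H gyromagnetic ratio, rad s^-1 T^-1
LITTLE_DELTA = 1.0e-3    # diffusion encoding gradient pulse length, s
BIG_DELTA = 50.0e-3      # diffusion delay, s

def stejskal_tanner(g, s0, D):
    # g: gradient amplitude in T m^-1; D: diffusion coefficient in m^2 s^-1
    b = (GAMMA_1H * g * LITTLE_DELTA) ** 2 * (BIG_DELTA - LITTLE_DELTA / 3.0)
    return s0 * np.exp(-b * D)

# 15 gradient amplitudes from 5.09 to 80.1 G cm^-1, converted to T m^-1
g_vals = np.linspace(5.09, 80.1, 15) * 1.0e-2
# peak_areas would hold the integrated signal measured at each gradient amplitude:
# (s0_fit, D_fit), _ = curve_fit(stejskal_tanner, g_vals, peak_areas, p0=(peak_areas[0], 1e-10))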
HR-MAS NOESY NMR was achieved using a zero-quantum filter pulse sequence. The mixing delay was 200 ms with a relaxation time of 1 s using 200 t1 increments. Measurements were recorded at a spin rate of 2 kHz.
TEM techniques
High resolution TEM, high angle annular dark field and energy dispersive x-ray analyses were performed using an FEI Tecnai Osiris S/TEM equipped with an FEI extreme Schottky field-assisted thermionic emitter. EDX maps were acquired from four Bruker silicon drift detectors arranged about the central beam axis leading to a collection solid angle of about 0.9 sr. The electron beam was scanned over the sample with a dwell time of 200 ms/pixel.
HR-TEM images were captured using a Gatan Ultrascan 1000XP (2048 × 2048 pixels) camera with a high-speed upgrade. The images of the QDs were acquired at near-Scherzer defocus in order to maximise spatial resolution. The HR-TEM images were processed using the ImageJ software to generate fast Fourier transforms of the raw images.
X-ray diffraction was performed using a Siemens D500 powder x-ray diffractometer using copper K-alpha radiation (0.154 nm) operating at 40 kV and 30 mA. The CdSe QDs were ground to a fine powder using a pestle and mortar.
Inductively coupled plasma mass spectroscopy
Typically, CdSe QDs were dissolved using concentrated nitric acid overnight. The concentrated solution was diluted 10-fold prior to ICP-MS. Stoichiometric measurements were carried out using an Agilent 7500ce series ICP-MS operating in helium collision mode. All ICP-MS calibration standards were purchased from Sigma-Aldrich.
X-ray photoelectron spectroscopy
Samples were analysed using a Thermo Scientific K-alpha XPS instrument equipped with a micro-focused, monochromatic Al x-ray source. The Al x-ray source was operated at 12 keV using a 400-micron spot size. A constant analyser energy of 200 eV was used for survey scans and 50 eV for detailed scans. A low energy/ion flood source was used to achieve charge neutralisation.
QD blinking simulation algorithm
Stochastic simulations within the CTST framework were accomplished using a standard Gillespie algorithm. Importantly, the algorithm is well suited to describe the highly distributed QD kinetics and stochastically models both the residence time in a given state and transitions out of the current state. The simulated kinetics were advanced in time by sampling from an exponential distribution according to \(\tau = - \ln (u_1)/r_0\), where τ is the incremental time step, u1 is a random number in the interval (0,1) and r0 is the sum of the rates of all transitions out of the current state. The state to which the QD transitions is evaluated according to the condition \(\mathop {\sum}\limits_{i = 1}^m {r_i \, < \, r_0u_2 \le } \mathop {\sum}\limits_{i = 1}^{m + 1} {r_i}\), where u2 is a random number in the interval (0,1) and ri is the rate of the ith transition out of the current state. The simulation parameters were updated following each iteration and the procedure repeated. On and off-times were extracted from simulated PL intensity trajectories and PDDs constructed using the same procedures applied in the analysis of experimental PL trajectories. The key parameters αon, αoff and τc, describing the QD-blinking statistics, were obtained by non-linear least squares fit of the TPL to the PDDs as per experimental profiles.
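A bare-bones sketch of this stochastic stepping scheme is shown below in Python. It reproduces only the generic Gillespie bookkeeping described above; the two-state rate table at the end is a placeholder, since the actual CTST rate expressions are not reproduced here, and all names are our own.

import numpy as np

rng = np.random.default_rng(0)

def gillespie_step(rates):
    # One iteration: sample the residence time tau in the current state and
    # choose the outgoing transition via the cumulative-rate condition above.
    cum = np.cumsum(rates)
    r0 = float(cum[-1])
    tau = -np.log(rng.random()) / r0
    chosen = int(np.searchsorted(cum, r0 * rng.random()))
    return tau, chosen

def simulate(rate_table, state0, t_max):
    # rate_table[state] = (list of rates r_i, list of destination states)
    t, state = 0.0, state0
    trajectory = [(0.0, state0)]
    while t < t_max:
        rates, destinations = rate_table[state]
        tau, k = gillespie_step(rates)
        t += tau
        state = destinations[k]
        trajectory.append((t, state))
    return trajectory

# Placeholder two-state ("on" <-> "off") example; the rates are illustrative only.
rate_table = {"on": ([0.5], ["off"]), "off": ([2.0], ["on"])}
traj = simulate(rate_table, "on", t_max=100.0)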
Materials, data and codes are available upon request from the corresponding authors.
Huynh, W. U., Dittmer, J. J. & Alivisatos, A. P. Hybrid nanorod-polymer solar cells. Science 295, 2425–2427 (2002).
Shen, Q., Kobayashi, J., Diguna, L. J. & Toyoda, T. Effect of ZnS coating on the photovoltaic properties of CdSe quantum dot-sensitized solar cells. J. Appl. Phys. 103, 084304 (2008).
Gao, X., Cui, Y., Levenson, R. M., Chung, L. W. & Nie, S. In vivo cancer targeting and imaging with semiconductor quantum dots. Nat. Biotechnol. 22, 969–976 (2004).
Wu, X. et al. Immunofluorescent labeling of cancer marker Her2 and other cellular targets with semiconductor quantum dots. Nat. Biotechnol. 21, 41–46 (2003).
Jang, E. et al. White‐light‐emitting diodes with quantum dot color converters for display backlights. Adv. Mater. 22, 3076–3080 (2010).
Frantsuzov, P., Kuno, M., Janko, B. & Marcus, R. A. Universal emission intermittency in quantum dots, nanorods and nanowires. Nat. Phys. 4, 519–522 (2008).
Kuno, M., Fromm, D., Hamann, H., Gallagher, A. & Nesbitt, D. J. "On"/"off" fluorescence intermittency of single semiconductor quantum dots. J. Chem. Phys. 115, 1028–1040 (2001).
Neuhauser, R., Shimizu, K., Woo, W., Empedocles, S. & Bawendi, M. Correlation between fluorescence intermittency and spectral diffusion in single semiconductor quantum dots. Phys. Rev. Lett. 85, 3301–3304 (2000).
Frantsuzov, P. A., Volkán-Kacso, S. N. & Jankó, B. R. Universality of the fluorescence intermittency in nanoscale systems: experiment and theory. Nano Lett. 13, 402–408 (2013).
Tang, J. & Marcus, R. Mechanisms of fluorescence blinking in semiconductor nanocrystal quantum dots. J. Chem. Phys. 123, 054704 (2005).
Bae, W. K. et al. Controlled alloying of the core–shell interface in CdSe/CdS quantum dots for suppression of Auger recombination. ACS Nano 7, 3411–3419 (2013).
Efros, A. L. & Rosen, M. Random telegraph signal in the photoluminescence intensity of a single quantum dot. Phys. Rev. Lett. 78, 1110–1113 (1997).
Issac, A., Krasselt, C., Cichos, F. & Von Borczyskowski, C. Influence of the dielectric environment on the photoluminescence intermittency of CdSe quantum dots. ChemPhysChem 13, 3223–3230 (2012).
Issac, A., Von Borczyskowski, C. & Cichos, F. Correlation between photoluminescence intermittency of CdSe quantum dots and self-trapped states in dielectric media. Phys. Rev. B 71, 161302 (2005).
Fisher, A. A. E. & Osborne, M. A. Sizing up excitons in core–shell quantum dots via shell-dependent photoluminescence blinking. ACS Nano 11, 7829–7840 (2017).
Osborne, M. A. & Fisher, A. Charge-tunnelling and self-trapping: common origins for blinking, grey-state emission and photoluminescence enhancement in semiconductor quantum dots. Nanoscale 8, 9272–9283 (2016).
Smith, A. M., Johnston, K. A., Crawford, S. E., Marbella, L. E. & Millstone, J. E. Ligand density quantification on colloidal inorganic nanoparticles. Analyst 142, 11–29 (2017).
Katari, J. B., Colvin, V. L. & Alivisatos, A. P. X-ray photoelectron spectroscopy of CdSe nanocrystals with applications to studies of the nanocrystal surface. J. Phys. Chem. 98, 4109–4117 (1994).
Taylor, J., Kippeny, T. & Rosenthal, S. J. Surface stoichiometry of CdSe nanocrystals determined by Rutherford backscattering spectroscopy. J. Clust. Sci. 12, 571–582 (2001).
Sachleben, J. R. et al. NMR studies of the surface structure and dynamics of semiconductor nanocrystals. Chem. Phys. Lett. 198, 431–436 (1992).
Becerra, L. R., Murray, C. B., Griffin, R. G. & Bawendi, M. G. Investigation of the surface morphology of capped CdSe nanocrystallites by 31 P nuclear magnetic resonance. J. Chem. Phys. 100, 3297–3300 (1994).
Krauss, T. D. & Brus, L. E. Charge, polarizability, and photoionization of single semiconductor nanocrystals. Phys. Rev. Lett. 83, 4840–4843 (1999).
Gao, F., Bajwa, P., Nguyen, A. & Heyes, C. D. Shell-dependent photoluminescence studies provide mechanistic insights into the off–grey–on transitions of blinking quantum dots. ACS Nano 11, 2905–2916 (2017).
Verberk, R., van Oijen, A. M. & Orrit, M. Simple model for the power-law blinking of single semiconductor nanocrystals. Phys. Rev. B 66, 233202 (2002).
Müller, J. et al. Air-induced fluorescence bursts from single semiconductor nanocrystals. Appl. Phys. Lett. 85, 381–383 (2004).
Protesescu, L. et al. Nanocrystals of cesium lead halide perovskites (CsPbX3, X=Cl, Br, and I): novel optoelectronic materials showing bright emission with wide color gamut. Nano Lett. 15, 3692–3696 (2015).
Swarnkar, A. et al. Colloidal CsPbBr3 perovskite nanocrystals: luminescence beyond traditional quantum dots. Angew. Chem. 127, 15644–15648 (2015).
Cazaux, J. The electric image effects at dielectric surfaces. IEEE Trans. Dielectr. Electr. Insul. 3, 75–79 (1996).
Berrettini, M. G., Braun, G., Hu, J. G. & Strouse, G. F. NMR Analysis of Surfaces and Interfaces in 2-nm CdSe. J. Am. Chem. Soc. 126, 7063–7070 (2004).
Gomes, R. et al. Binding of phosphonic acids to CdSe quantum dots: a solution NMR study. J. Phys. Chem. Lett. 2, 145–152 (2011).
Hens, Z. & Martins, J. C. A solution NMR toolbox for characterizing the surface chemistry of colloidal nanocrystals. Chem. Mater. 25, 1211–1221 (2013).
Ammann, C., Meier, P. & Merbach, A. A simple multinuclear NMR thermometer. J. Magn. Reson. (1969) 46, 319–321 (1982).
Yu, W. W., Qu, L., Guo, W. & Peng, X. Experimental determination of the extinction coefficient of CdTe, CdSe, and CdS nanocrystals. Chem. Mater. 15, 2854–2860 (2003).
Jasieniak, J., Smith, L., Van Embden, J., Mulvaney, P. & Califano, M. Re-examination of the size-dependent absorption properties of CdSe quantum dots. J. Phys. Chem. C. 113, 19468–19474 (2009).
Fritzinger, B., Capek, R. K., Lambert, K., Martins, J. C. & Hens, Z. Utilizing self-exchange to address the binding of carboxylic acid ligands to CdSe quantum dots. J. Am. Chem. Soc. 132, 10195–10201 (2010).
Hassinen, A., Moreels, I., de Mello Donegá, C., Martins, J. C. & Hens, Z. Nuclear magnetic resonance spectroscopy demonstrating dynamic stabilization of CdSe quantum dots by alkylamines. J. Phys. Chem. Lett. 1, 2577–2581 (2010).
Morris-Cohen, A. J., Malicki, M., Peterson, M. D., Slavin, J. W. & Weiss, E. A. Chemical, structural, and quantitative analysis of the ligand shells of colloidal quantum dots. Chem. Mater. 25, 1155–1165 (2012).
Lizhi, H., Toyoda, K. & Ihara, I. Dielectric properties of edible oils and fatty acids as a function of frequency, temperature, moisture and composition. J. Food Eng. 2, 151–158 (2008).
Flory, P. J. Principles of polymer chemistry. (Cornell University Press, New York, 1953).
Li, H., Qian, C.-J., Sun, L.-Z. & Luo, M.-B. Conformational properties of a polymer tethered to an interacting flat surface. Polym. J. 42, 383 (2010).
Li, J. J. et al. Large-scale synthesis of nearly monodisperse CdSe/CdS core/shell nanocrystals using air-stable reagents via successive ion layer adsorption and reaction. J. Am. Chem. Soc. 125, 12567–12575 (2003).
Nilsson, M. The DOSY toolbox: a new tool for processing PFG NMR diffusion data. J. Magn. Reson. 200, 296–302 (2009).
Pelta, M. D., Morris, G. A., Stchedroff, M. J. & Hammond, S. J. A one‐shot sequence for high‐resolution diffusion‐ordered spectroscopy. Magn. Reson. Chem. 40, 147–152 (2002).
Electron microscopy was performed by Dr. C. Ducati and Dr. F. Wisnivesky-Rocca-Rivarola at the University of Cambridge, Department of Materials Science and Metallurgy, for which we are immensely grateful. ICP-MS measurements were kindly performed by Chris Dadswell. XRD measurements were acquired with assistance from the Chen group at Sussex.
Department of Chemistry, University of Sussex, Falmer, BN1 9QJ, United Kingdom
Aidan A. E. Fisher, Mark A. Osborne, Iain J. Day & Guillermo Lucena Alcalde
A.A.E.F. conducted the synthetic methodologies reported, recorded UV-Vis measurements and made a significant contribution to manuscript preparation. M.A.O. conceived the CTST model, performed the simulations reported and made a substantial contribution to manuscript preparation. I.J.D. and G.L.A. performed and processed the raw NMR data and assisted with the interpretation thereof.
Correspondence to Aidan A. E. Fisher or Mark A. Osborne.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Fisher, A.A.E., Osborne, M.A., Day, I.J. et al. Measurement of ligand coverage on cadmium selenide nanocrystals and its influence on dielectric dependent photoluminescence intermittency. Commun Chem 2, 63 (2019) doi:10.1038/s42004-019-0164-x
Received: 16 November 2018
Accepted: 17 May 2019
What is the average mass of galaxies according to Hubble Deep and Ultra Deep field observations?
It is very widely known among people interested in astronomy that there are 100-400 billion stars in the Milky Way galaxy and there are ~ 100 billion galaxies in the observable universe, which is usually calculated as $10^{22}$ stars in the observable universe. I have also seen estimates that are an order or even two orders of magnitude higher than this.
What I am thinking here is: why is it assumed that the Milky Way is the average galaxy? One way to check whether or not the Milky Way is an average galaxy in mass, and so in stellar mass, is to check the results of the Hubble deep and ultra deep field images. While searching for information, I came across this page, whose source of information I somewhat doubt, and it states: "Within the Hubble Ultra Deep Field there are approximately 10,000 discrete objects. Most of these objects are very small and likely have masses in the range of $10^5$ to $10^7$ solar masses. Note the mass of the Milky Way galaxy is $10^{12}$ solar masses."
So if "most" of the 10,000 galaxies seen in the Hubble ultra deep field image are that small in mass, then it is completely incorrect to consider the Milky Way an average mass galaxy. It is also clear from this diagram that the Hubble deep field extends to the "Normal Galaxies", while the Ultra deep field extends to the "First Galaxies". So it is expected to observe many small irregular and dwarf galaxies in the HUDF. As far as I understand, these small galaxies merge to form more massive spiral and elliptical galaxies that we see today. So again, if we are observing 10,000 small galaxies that should merge to form normal galaxies, we shouldn't be thinking that the image contains 10,000 galaxies similar to the Milky Way.
So, am I correct that the Milky Way is not an average mass galaxy in the observable universe and that we should use a smaller number when referring to the number of stars in the universe?
astronomy astrophysics galaxies observational-astronomy
Abanob Ebrahim
$\begingroup$ Who thinks the Milky Way is an average galaxy? It isn't. $\endgroup$ – Rob Jeffries Dec 17 '14 at 23:13
$\begingroup$ Might want to read Phil Plait and Emily Lackdalwalla's blogs. $\endgroup$ – Carl Witthoft Dec 17 '14 at 23:16
I think the following image, which comes from Tomczak et al. (2014) and the so-called ZFOURGE/CANDELS galaxy survey should do the trick.
It shows how the galaxy stellar mass function (i.e. the number of galaxies per unit mass per cubic megaparsec that have a certain stellar mass) evolves as a function of redshift. As you might imagine this is not just a case of counting galaxies and estimating their masses - you have to account for the fact that it is harder to see low-mass galaxies.
Anyway, these are their results and they clearly show that a galaxy like the Milky Way that has about 200 billion stars and a stellar mass of about $5\times10^{10}M_{\odot}$ (note that the total mass of the Milky Way is dominated by dark matter), is quite a massive galaxy (note the logarithmic y-axis).
In other words, small galaxies dominate the statistics. However, when you look at the Hubble Deep or Ultradeep fields, it is quite difficult to use this information. You will always tend to see the most luminous and most massive galaxies, and the low-mass galaxies will be under-represented relative to the mass functions shown in this picture. So there are actually two separate things here, and I'm not sure I can definitively answer either: (i) what is the average mass of a galaxy; (ii) what is the average mass of a galaxy seen in the Hubble Deep fields?
The answer to (ii) will obviously be much bigger than the answer to (i). Fortunately you can see from the plot that the straight(ish) line sections below about the mass of the Milky Way are power laws with slope $\sim -0.5$. That means that $M\Phi(M) \propto M^{+0.5}$ and when you integrate this over some range, it is the upper limit that dominates. So low-mass galaxies do not dominate the stellar mass. In fact, it is galaxies about the size of the Milky Way that dominate the stellar mass. Galaxies with $M>10^{11}M_{\odot}$ (in stars) become increasingly rare, so these do not contribute so much. Therefore, very roughly, the number of stars in the Universe will be given by the number of galaxies with mass within a factor of a few of the Milky Way multiplied by the number of stars in the Milky Way.
I cannot provide an answer for the average mass of a galaxy in the UDF or any other survey volume because it is unclear how many of the lowest mass objects there are or what lower mass cut-off to work with. The plots shown for the CANDELS field below will be perfectly representative of the UDF or any other deep observation; cosmic variance should not be an issue for order of magnitude estimates.
EDIT: As an example, let's take the average space density of $5\times 10^{10}M_{\odot}$ galaxies to be $10^{-2.5}$ per dex per Mpc$^{3}$ in the low redshift universe and assume galaxies over a 1 order of magnitude (1 dex) range of mass contribute almost all the stellar mass. If the observable universe has a radius of 46 billion light years ($\sim 15,000$ Mpc - see Size of the Observable Universe) and the average star is $0.25M_{\odot}$, there are: $$N_* = 10^{-2.5} \times 5\times10^{10} \times \frac{4\pi}{3} \times (15000)^3/0.25 \simeq 10^{22}$$ stars in the observable universe.
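(If you want to sanity-check that arithmetic, the two lines of Python below reproduce it; the inputs are just the rough round numbers quoted above.)

from math import pi
n_stars = 10**-2.5 * 5e10 * (4 * pi / 3) * 15000**3 / 0.25
print(f"{n_stars:.1e}")  # prints ~ 8.9e21, i.e. of order 10^22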
$\begingroup$ Great, that's exactly what I am looking for. But honestly I seem to be unable to read the plots so I hope you can help me with this. There are two things that should answer my question. 1) If there is a certain number of galaxies at redshifts < 3, what is the percentage of this number that should be in galaxies with stellar masses > $5\times10^{9}$ Solar masses ? 2) Is there a way to find the number of galaxies in the HUDF image with redshifts less than 3 ? $\endgroup$ – Abanob Ebrahim Dec 18 '14 at 0:35
$\begingroup$ @AbanobEbrahim Your (1) could be quite difficult. Where do you make the low-end cutoff? For a warmup, how many satellite galaxies orbit the Milky Way? This number was revised not long ago, and is still debated. And what is a galaxy anyway? Do you count globular clusters? $\endgroup$ – user10851 Dec 18 '14 at 4:37
$\begingroup$ @AbanobEbrahim You need to convert the differential frequency distributions to cumulative frequencies by integrating the mass functions. As Chris says, difficult unless you define a lower mass limit. Second point: you need to find papers on the UDF that do this - I'm sure you will find some. $\endgroup$ – Rob Jeffries Dec 18 '14 at 7:33
$\begingroup$ Let me try to make this a bit easier. Are there any examples of stellar masses of ANY galaxies observed in the UDF? I just need a general idea about how the stellar masses change with increasing redshift. $\endgroup$ – Abanob Ebrahim Dec 18 '14 at 20:49
$\begingroup$ Why would you think the UDF is different to the CANDELS fields? The redshift distribution might be different, but the mass function at a given redshift will be the same. $\endgroup$ – Rob Jeffries Dec 18 '14 at 21:21
Volume 5, Number 3 (1977), 319-350.
A Functional Law of the Iterated Logarithm for Empirical Distribution Functions of Weakly Dependent Random Variables
Walter Philipp
Let $\{\eta_k, k \geqq 1\}$ be a sequence of random variables uniformly distributed over $[0, 1]$ and let $F_N(t)$ be the empirical distribution function at stage $N$. Put $f_N(t) = N(F_N(t) - t)(N\log\log N)^{-\frac{1}{2}}, 0 \leqq t \leqq 1, N \geqq 3$. For strictly stationary sequences $\{\eta_k\}$ where $\eta_k$ is a function of random variables satisfying a strong mixing condition or where $\eta_k = n_k x \bmod 1$ with $\{n_k, k \geqq 1\}$ a lacunary sequence of real numbers a functional law of the iterated logarithm is proven: The sequence $\{f_N(t), N \geqq 3\}$ is with probability 1 relatively compact in $D\lbrack 0, 1\rbrack$ and the set of its limits is the unit ball in the reproducing kernel Hilbert space associated with the covariance function of the appropriate Gaussian process.
Ann. Probab., Volume 5, Number 3 (1977), 319-350.
Primary: 60F15: Strong theorems
Secondary: 10K05
Functional law of the iterated logarithm; empirical distribution functions; mixing random variables; lacunary sequences; reproducing kernel Hilbert space; uniform distribution mod 1
Philipp, Walter. A Functional Law of the Iterated Logarithm for Empirical Distribution Functions of Weakly Dependent Random Variables. Ann. Probab. 5 (1977), no. 3, 319--350. doi:10.1214/aop/1176995795. https://projecteuclid.org/euclid.aop/1176995795
Define the function $f(x) = 2x - 5$. For what value of $x$ is $f(x)$ equal to $f^{-1}(x)$?
Substituting $f^{-1}(x)$ into our expression for $f$ we get \[f(f^{-1}(x))=2f^{-1}(x)-5.\]Since $f(f^{-1}(x))=x$ for all $x$ in the domain of $f^{-1}$, we have \[x=2f^{-1}(x)-5.\]or \[f^{-1}(x)=\frac{x+5}2.\]We want to solve the equation $f(x) = f^{-1}(x)$, so \[2x-5=\frac{x+5}2.\]or \[4x-10=x+5.\]Solving for $x$, we find $x = \boxed{5}$.
\begin{document}
\begin{frontmatter}
\title{The Range Description of a Conical Radon Transform}
\author[main_address]{Weston Baines\corref{corresponding_author}} \cortext[corresponding_author]{Corresponding author}\ead{[email protected]} \address[main_address]{Department of Mathematics Mailstop 3368 Texas A\&M University College Station, TX 77843-3368}
\begin{abstract} In this work we consider the Conical Radon Transform, which integrates a function on $\mathbb{R}^n$ over families of circular cones. Transforms of this type are known to arise naturally as models of Compton camera imaging and single-scattering optical tomography (in the latter case, when $n=2$). The main results (which depend on the parity of $n$) provide a description of the range of the transform on the space $C_0^\infty(\mathbb{R}^n)$. \end{abstract}
\begin{keyword} Radon Transform \sep Cone Transform \sep Optical Tomography \sep Compton Camera Imaging \sep Wave Equation \end{keyword}
\end{frontmatter}
\nocite{*}
\section{Introduction}\label{S:intro}
In his seminal 1923 paper, Arthur Compton derived a physical model for the phenomenon by which X-rays scatter via interaction with charged particles \citep{Compton_1923}. This phenomenon has come to be known as \emph{Compton scattering}. The scattering interaction between a high-energy photon and an electron is modeled by the equation \begin{equation} \label{eq: Compton_Scattering}
E_{\gamma'} = \frac{E_\gamma}{1+(E_\gamma/(m_e c^2))(1-\cos \theta)}, \end{equation} (see Fig. \ref{fig: Compton_Scattering}) where $E_\gamma$ is the initial energy of the photon, $E_{\gamma'}$ is the energy of the photon after scattering, $m_e$ is the rest mass of an electron, $c$ is the speed of light, and $\theta$ is the scattering angle. \begin{figure}
\caption{Incident X-ray scatters off electron at angle $\theta$ via Compton Scattering, transferring $E_{\gamma}-E_{\gamma'}$ energy to the electron.}
\label{fig: Compton_Scattering}
\end{figure}
\begin{figure}
\caption{Schematic representation of a Compton camera.}
\label{fig: Compton_Camera}
\end{figure}
The recently popularized Compton camera utilizes Compton scattering to detect high energy photons. A Compton camera consists of a pair of parallel detector plates (see Fig. \ref{fig: Compton_Camera}). When a high energy photon hits the first plate, it undergoes Compton scattering and the scattered photon is absorbed in the second plate. The positions $\boldsymbol P_s$ and $\boldsymbol P_a$ of the scattering and absorption events, as well as the deposited energies $E_s$ and $E_a$, are recorded. This data allows one to determine a surface cone \begin{equation}
\mathfrak{C}(\boldsymbol P_s, \phi, \boldsymbol \beta)=\{ \boldsymbol y \in \mathbb{R}^n : (\boldsymbol y - \boldsymbol P_s) \cdot \boldsymbol \beta = |\boldsymbol y -\boldsymbol P_s| \cos \phi \}. \end{equation} containing the incident photon trajectory (see Fig. \ref{fig: Compton_Camera}). This cone has vertex $\boldsymbol P_s$, opening (scattering) angle $\phi$ and central axis direction $\boldsymbol \beta$ determined by \begin{equation}
\cos \phi = 1- \frac{m_e c^2 E_s}{(E_s+E_a)E_a} \qquad \boldsymbol \beta = \frac{\boldsymbol P_s-\boldsymbol P_a}{|\boldsymbol P_s-\boldsymbol P_a|}. \end{equation}
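As a quick numerical illustration (the energy values are chosen arbitrarily for this example and are not tied to any particular detector): an incident $662$ keV photon depositing $E_s = 200$ keV in the scattering plate and $E_a = 462$ keV in the absorbing plate gives, with $m_e c^2 \approx 511$ keV,
\begin{equation*}
\cos \phi = 1 - \frac{511 \cdot 200}{662 \cdot 462} \approx 0.666, \qquad \phi \approx 48^{\circ}.
\end{equation*}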
In this manuscript we will assume that high energy $\gamma$-photons are emitted from a radiating source with intensity source distribution $f$ that is smooth and compactly supported ($f\in C_{0}^{\infty}(\mathbb{R}^n)$). By exposing the Compton camera to $\gamma$-photons for a sufficiently long duration of time and counting the number of photons measured at each position $\boldsymbol x$ through a scattering angle $\phi$ and with central cone axis $\boldsymbol \beta$ we can approximate the integral of $f$ over surface cones: \begin{equation} \label{eq: CRT}
\mathcal{C}[f](\boldsymbol x, \phi, \boldsymbol \beta) = \int_{\mathfrak{C}(\boldsymbol x, \phi, \boldsymbol \beta)} f(\boldsymbol y) dS(\boldsymbol y), \end{equation} where $dS$ is the surface measure of the cone.
This is a Radon type transform, as it projects functions onto a parametric family of hypersurfaces. For this reason many works refer to this as the \emph{Conical Radon Transform} (CRT) \citep{GZ,Ambartsoumian2013,Palamodov2017}, as will we in this manuscript. Depending on the specific engineering of the Compton camera, the measure $dS$ may include additional weight terms, typically a power weight $|x-y|^{-k}$ \citep{Basko,Maxim,Smith2005}. We denote this \emph{Weighted Conical Radon Transform} as $\mathcal{C}^{k}$. When $k<n-1$ the transform $\mathcal{C}^{k}$ is called \emph{regular} and when $k=n-1$, it is called \emph{singular} \citep{Palamodov2017}.
The main advantage of Compton cameras over other traditional $\gamma$-photon detectors, such as Anger cameras, is that they give directional information without any need for collimation. In order for an Anger camera to detect trajectories of $\gamma$-photons, a mechanical filter must be placed in front of the detector to block all photons except those traveling along a particular path \citep{Peterson} (see also Fig. \ref{fig: Collimation}). Although this gives very precise directional information, the signal is significantly attenuated, making it unsuitable in cases of weak signals and strong background noise.
\begin{figure}
\caption{Schematic representation of mechanical collimation in an Anger camera.}
\label{fig: Collimation}
\end{figure}
An important application where traditional Anger cameras are not feasible is detecting the presence of illicit nuclear material \citep{ADHKK, cargo}. In such settings the nuclear source is shielded and very weak, and the background noise is very strong, with a signal-to-noise ratio (SNR) significantly less than 1\%. The difficulty is further compounded by the presence of complex configurations of scattering and absorbing materials.
The primary goal of Compton imaging is to recover $f$ from $\mathcal{C}[f]$, that is, inversion of the CRT. Inversion of Radon-type transforms is a rich area of study in inverse problems, where one must address questions of existence, uniqueness and stability of solution \citep{Kuchment,Natt}. The CRT is particularly interesting since the hypersurfaces under consideration (surface cones) have a singularity, and the transform is overdetermined (the CRT maps a function of $n$ variables to a function of $2n$ variables). The CRT, being overdetermined, has ``small'' range - in the sense that it has infinite co-dimension - thus, there are infinitely many left inverses, and in fact a variety of inversion formulae have been studied (see \citep{TKK} and the references therein).
An important topic in tomographic studies of Radon-type transforms is the description of their ranges \citep{Kuchment,Natt}, since they aid in improving inversion algorithms, completing incomplete data, correcting measurement errors, measuring sampling errors, etc. \citep{Natt,Fatma}.
\begin{figure}
\caption{3D cone with central axis aligned along the ``vertical'' coordinate and $\pi/2$ radians opening.}
\label{fig: Ex_Axis_Aligned_Cone}
\end{figure}
This manuscript is structured as follows. Section \ref{S:notation} contains the main notions and notations involved. In Section \ref{S:results} we describe the range of the CRT under the restriction of fixed cone axis on the space $C_0^\infty(\mathbb{R}^n)$ of smooth functions with compact support. We find that the description differs for even and odd dimensions, which one might expect, as the wave equation is relevant in the study of the CRT (see e.g. the discussion in Sec. 5 of \citep{Palamodov2017}). Proofs of these results are given in Section \ref{S:proofs}. These results are formulated and stated, to avoid complicating notations, for cones with the opening angle $\pi/2$ radians (half-opening angle $\pi/4$ radians). They, however, can be readily formulated for any opening angle, which is done in Section \ref{S:angle}. Since even restricted CRT data is sufficient for inversion, one might expect that studying symmetries of the CRT will reveal the full range description. Indeed, we show that this is possible in Section \ref{S: Full_Range}.
\section{Some Objects and Notations}\label{S:notation}
Throughout the following sections, we will denote vectors with a bold font, e.g. $\boldsymbol x \in \mathbb{R}^{n-1}$ and $\boldsymbol \omega \in \mathbb{C}^{n-1}$. For a complex number $z$ we denote its real and imaginary parts with $\Re{(z)}$ and $\Im{(z)}$ respectively.
When we discuss the restricted CRT, we will write $\mathbb{R}^n=\mathbb{R}^{n-1}_{\boldsymbol x}\times\mathbb{R}_t$, and thus vectors are represented as $(\boldsymbol x,t)$. We will assume that the axes of all cones are aligned with the $t$-direction, i.e. $$\boldsymbol \beta=\boldsymbol e_n=(\textbf{0},1) \in \mathbb{R}^{n-1}_{\boldsymbol x} \times \mathbb{R}_t.$$ For a fixed opening angle $\phi$ we will denote the restricted CRT by \begin{equation}
\mathcal{C}_{\phi}[f] (\boldsymbol x, t) = \mathcal{C}[f]((\boldsymbol x,t),\phi,\boldsymbol e_n). \end{equation} In this case we can identify the CRT of a function as its convolution with the distribution \begin{equation}
D_\phi(\boldsymbol x,t) = \delta(-t-|(\cot \phi) \boldsymbol x|) \end{equation} where $\delta$ is the Dirac delta distribution.\footnote{ Appearance of such cones suggests a possible difference in formulas of the CRT depending on the parity of the dimension. This difference does materialize. E.g., for odd dimensions inversion requires a non-local transformation.}
The weighted CRT will be important for our study of the range of the CRT, and under the restrictions on opening angle and axis direction, the weighted CRT of a function can be written as its convolution with the distribution \begin{equation}
D_{w,\phi} (\boldsymbol x, t,\phi) = w(\boldsymbol x,t,\phi) \delta(-t-|(\cot \phi) \boldsymbol x|) \end{equation}
where $w(\boldsymbol x,t,\phi)$ is a given weight. While a variety of weight functions can arise, in this work we will only need the power weight $w(\boldsymbol x,t,\phi)=|((\cot \phi) \boldsymbol x,t)|^{-1}$. We will denote the weighted CRT with power weight \begin{equation}
D_{\phi}^{1} \coloneqq D_{|((\cot \phi) \boldsymbol x,t)|^{-1}} = |((\cot \phi) \boldsymbol x,t)|^{-1}\delta(-t-|(\cot \phi) \boldsymbol x|) \end{equation} and the corresponding weighted CRT as \begin{equation}
\mathcal{C}_{\phi}^{1}[f] = D_{\phi}^{1} \ast f. \end{equation} For the special case discussed in Sections \ref{S:results} and \ref{S:proofs} we will frequently omit the subscript $\phi$ in the notation as we fix $\phi=\pi/4$: \begin{equation*}
\mathcal{C}[f] (\boldsymbol x, t) = \mathcal{C}[f]((\boldsymbol x,t),\pi/4,\boldsymbol e_n), \qquad D(\boldsymbol x,t) = \delta(-t-|\boldsymbol x|). \end{equation*} \begin{equation*}
\begin{equation*} D^{1}(\boldsymbol x, t) = |(\boldsymbol x,t)|^{-1}\delta(-t-|\boldsymbol x|), \qquad \mathcal{C}^{1}[f] = D^{1} \ast f. \end{equation*} We also introduce the standard d'Alembertian operator \begin{equation}
\square \coloneqq \frac{\partial^2}{\partial t^2} - \Delta_{\boldsymbol x}, \end{equation} where $\Delta_{\boldsymbol x}$ is the Laplacian with respect to the spatial variable $\boldsymbol x$.
The Heaviside function will be denoted as $$h(t) \coloneqq \begin{cases}
1, & t\geq 0 \\
0, & t<0
\end{cases} $$ and \begin{equation}
\Theta(\boldsymbol x,t): = h(t) \cdot \delta (\boldsymbol x). \end{equation} The distribution $\Theta$ is supported on a ray, and if we convolve $\Theta$ with a function $f$ we obtain \begin{equation} \label{eq: t_integral}
\Theta \ast f(\boldsymbol x, t) = \int_{-\infty}^{t}f(\boldsymbol x,z)dz. \end{equation}
We will use $\mathbbm{1}_{\Omega}$ to denote the indicator function of a domain $\Omega$: \begin{equation}
\mathbbm{1}_{\Omega}(\boldsymbol x) = \begin{cases}
1 \quad &\text{if} \, \boldsymbol x \in \Omega \\
0 \quad &\text{if} \, \boldsymbol x \notin \Omega. \\
\end{cases} \end{equation}
Let $T(\boldsymbol x,t)$ be a tempered distribution. When $T$ is supported in a half-space \begin{equation}\label{eq:Ht}
H_{t_0} \coloneqq \{(\boldsymbol{x},t)|\,t\leq t_0\}, \end{equation} its Fourier transform (which is a tempered distribution itself), \begin{equation}
\hat{T}(\boldsymbol \omega, \sigma) = \mathcal{F}[T] (\boldsymbol \omega, \sigma) = T(e^{-i(\boldsymbol x \cdot \boldsymbol \omega + \sigma t)}), \end{equation} has analytic extension to $\Im(\sigma) > 0$ (See \citep{Bremmerman, Sharyn1999, Strichartz}).
We thus can simplify our considerations by working with the Fourier transform on the open set $\mathbb{H}_+=\{(\boldsymbol \omega, \sigma):\Im(\sigma) > 0\}$ and taking the limit $\Im(\sigma) \searrow 0$ when needed. This applies to the distributions $D$ and $D^{1}$ introduced before. In particular (see, e.g. \citep{Stein}), $\widehat{D}$ on $\mathbb{H}_+$ can be computed as follows: \begin{align*}
\widehat{D}(\boldsymbol \omega,\sigma)
&= 2^{-\frac{1}{2}} \int_{-\infty}^{0} \int_{\partial B(0,1)} \left ( e^{i r \boldsymbol \omega \cdot \boldsymbol \theta -i r \sigma} \right ) d\boldsymbol \theta (-r)^{n-2} dr \\
&= 2^{-\frac{1}{2}} \int_{\mathbb{R}^{n-1}} \left ( e^{i \sigma |\boldsymbol u|} e^{-i \boldsymbol \omega \cdot \boldsymbol u} \right ) d \boldsymbol u \\
&=\alpha_n \frac{i\sigma}{(\sigma^2-|\boldsymbol \omega|^2)^{\frac{n}{2}}}, \end{align*} where $d \boldsymbol\theta$ is the surface measure on the sphere, $\boldsymbol u \in \mathbb{R}^{n-1}$, and \begin{equation} \label{eq: alpha}
\alpha_n \coloneqq -(-1)^{-\frac{n}{2}} 2^{\frac{2n-1}{2}} \pi^{\frac{n-2}{2}} \Gamma \left ( \frac{n}{2} \right). \end{equation}
Using the relation $\sqrt{2}tD^{1} = D$, we conclude that on $\mathbb{H}_+$ one has \begin{equation} \label{eq: FFT_D'}
\widehat{D^{1}}(\boldsymbol \omega,\sigma) = \frac{\beta_n}{(\sigma^2-|\boldsymbol \omega|^2)^{\frac{n-2}{2}}}, \end{equation} where \begin{equation} \label{eq: beta}
\beta_n \coloneqq -(-1)^{-\frac{n}{2}} \frac{2^{n-1}}{2-n} \pi^{\frac{n-2}{2}} \Gamma \left ( \frac{n}{2} \right). \end{equation} Finally, we record one last identity that will be needed in Section \ref{S:proofs} for $\widehat{\Theta \ast f}$ when $f \in C_{0}^{\infty}(\mathbb{R}^n)$ satisfies \begin{equation} \label{eq: vanishing_vertical_line_integral} \int_{-\infty}^{\infty} f(\boldsymbol x, t)dt = 0 \qquad \forall \boldsymbol x \in \mathbb{R}^{n-1}. \end{equation} In this case we compute \begin{align} \label{eq: FourierVerticalLineIdentity}
\widehat{\Theta \ast f} &= \int_{\mathbb{R}^{n-1}} \int_{-\infty}^{\infty} \int_{-\infty}^{t} f(\boldsymbol x, s)ds e^{-i \sigma t}dt e^{-i \boldsymbol \omega \cdot \boldsymbol x} d \boldsymbol x \nonumber \\
&= \int_{\mathbb{R}^{n-1}} -\frac{1}{i \sigma} \int_{-\infty}^{t} f(\boldsymbol x, s)ds e^{-i \sigma t}|_{t \rightarrow -\infty}^{t \rightarrow \infty} + \frac{1}{i \sigma} \int_{-\infty}^{\infty} f(\boldsymbol x, t) e^{-i \sigma t} dt e^{-i \boldsymbol \omega \cdot \boldsymbol x} d \boldsymbol x \nonumber \\
&=\frac{1}{i \sigma} \int_{\mathbb{R}^{n-1}} f(\boldsymbol x, t) e^{-i \boldsymbol \omega \cdot \boldsymbol x-i \sigma t} d\boldsymbol x dt \nonumber \\
&= \frac{1}{i \sigma} \hat{f} (\boldsymbol \omega, \sigma). \end{align} This identity holds for all $\sigma \in \mathbb{C} \backslash \{0\}$, and has analytic continuation to $\sigma = 0$. To see this observe that an equivalent expression for (\ref{eq: FourierVerticalLineIdentity}) is \begin{equation}
\int_{\mathbb{R}^{n-1}} \frac{1}{i \sigma} \int_{-\infty}^{\infty} f(\boldsymbol x, t) e^{-i \sigma t} dt e^{-i \boldsymbol \omega \cdot \boldsymbol x} d\boldsymbol x. \end{equation} Since $f$ is compactly supported $\hat{f}$ is entire according to the Paley-Wiener Theorem, and $f$ satisfies (\ref{eq: vanishing_vertical_line_integral}), therefore the inner integral vanishes when $\sigma=0$ and thus has power series representation with no zeroth order term: \begin{align*}
\int_{\mathbb{R}^{n-1}} \frac{1}{i \sigma} \int_{-\infty}^{\infty} f(\boldsymbol x, t) e^{-i \sigma t} dt e^{-i \boldsymbol \omega \cdot \boldsymbol x} d\boldsymbol x = \int_{\mathbb{R}^{n-1}} \frac{1}{i \sigma} \sum_{j=1}^{\infty} \xi_{j}(\boldsymbol x) \sigma^j e^{-i \boldsymbol \omega \cdot \boldsymbol x} d\boldsymbol x \\
= -i \int_{\mathbb{R}^{n-1}} \sum_{j=1}^{\infty} \xi_{j}(\boldsymbol x) \sigma^{j-1} e^{-i \boldsymbol \omega \cdot \boldsymbol x} d\boldsymbol x \overset{\sigma \rightarrow 0}{\rightarrow} -i \int_{\mathbb{R}^{n-1}} \xi_{1}(\boldsymbol x) e^{-i \boldsymbol \omega \cdot \boldsymbol x} d\boldsymbol x. \end{align*} Each term $\xi_j$ is compactly supported, thus the above expression is analytic.
\section{Range description for restricted CRT}\label{S:results}
As indicated in the Introduction, our first goal is to characterize the range of the restricted CRT $\mathcal{C}$ as a map from $C_{0}^{\infty}(\mathbb{R}^n)$ to $C^{\infty}(\mathbb{R}^n)$. As one might have expected, the answers differ in odd and even dimensions. \begin{theorem}\label{thm: even} Let $n=2k$ be even (and thus $\boldsymbol x\in\mathbb{R}^{2k-1}$). A function $g \in C^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ is in the range of $\mathcal{C}$ on $C_{0}^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ if and only if the following conditions are satisfied: \begin{enumerate}[(i)]
\item $\square^k g(\boldsymbol x,t)$ has compact support.
\item $\int_{-\infty}^{\infty} \square^k g(\boldsymbol x,t)dt=0$ for every $\boldsymbol x \in \mathbb{R}^{2k-1}$.
\item $\supp g \subseteq H_{t_0}$ (see (\ref{eq:Ht})) for some $t_0 \in \mathbb{R}$. \end{enumerate} \end{theorem} An analogous result for odd dimensions is as follows: \begin{theorem}\label{thm: odd} Let $n=2k+1$ be odd (and thus $\boldsymbol x\in\mathbb{R}^{2k}$). A function $g \in C^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ is in the range of $\mathcal{C}$ on $C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ if and only if \begin{enumerate}[(i)]
\item $\square^{2k} \mathcal{C}^{1}[g](\boldsymbol x,t)$ has compact support.
\item $\int_{-\infty}^{\infty} \square^{2k} \mathcal{C}^{1}[g](\boldsymbol x,t)dt=0$ for every $\boldsymbol x \in \mathbb{R}^{2k}$.
\item $\supp g \subseteq H_{t_0}$ for some $t_0 \in \mathbb{R}$. \end{enumerate} \end{theorem}
As seen in these theorems, there is a close relationship between the CRT and the weighted CRT $\mathcal{C}^{1}$. In fact the range of $\mathcal{C}^{1}$ is very similar to the range of $\mathcal{C}$.
\begin{theorem} \label{thm: weighted_even} Let $n=2k$ be even (and thus $\boldsymbol x\in\mathbb{R}^{2k-1}$). A function $g \in C^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ is in the range of $\mathcal{C}^{1}$ on $C_{0}^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ if and only if the following conditions are satisfied: \begin{enumerate}[(i)]
\item $\square^{k-1} g(\boldsymbol x,t)$ has compact support.
\item $\supp g \subseteq H_{t_0}$ (see (\ref{eq:Ht})) for some $t_0 \in \mathbb{R}$. \end{enumerate} \end{theorem}
An analogous result for odd dimensions is as follows: \begin{theorem}\label{thm: weighted_odd} Let $n=2k+1$ be odd (and thus $\boldsymbol x\in\mathbb{R}^{2k}$). A function $g \in C^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ is in the range of $\mathcal{C}^{1}$ on $C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ if and only if \begin{enumerate}[(i)]
\item $\square^{2k-1} \mathcal{C}^{1}[g](\boldsymbol x,t)$ has compact support.
\item $\supp g \subseteq H_{t_0}$ for some $t_0 \in \mathbb{R}$. \end{enumerate} \end{theorem}
\section{Proofs of Theorems}\label{S:proofs}
We start with the following auxiliary results:
\begin{lemma} \label{lm: dAlembert_Kernel} Let $g\in C^\infty(\mathbb{R}^n)$ be such that $\square^l g=0$ for some natural number $l$ and $\supp g\subset H_{t_0}$ for some $t_0 \in \mathbb{R}$, then $g\equiv 0$. \end{lemma} \begin{proof} Let $g$ satisfy the conditions of the lemma and $l=1$. Then for any $s>t_0$, $g$ solves the Cauchy problem \begin{equation} \label{eq: Cauchy_Homogeneous}
\left \{\begin{array}{c}
\square g = 0 \\
g(\boldsymbol x,s) = 0\\
g_t(\boldsymbol x,s) = 0\\
\end{array} \right . \end{equation} (See Fig. \ref{fig: WaveEqnDomain}). \begin{figure}
\caption{Domain of $g$ in Lemma \ref{lm: dAlembert_Kernel}. If $g$ vanishes outside a half-space, $g$ must vanish everywhere.}
\label{fig: WaveEqnDomain}
\end{figure} It is well known (see e.g. \citep{evans}) that the only solution to the Cauchy problem (\ref{eq: Cauchy_Homogeneous}) is $g \equiv 0$.
If $l>1$, then $v=(\square)^{l-1}g$ satisfies the same conditions of the lemma for $l=1$, and hence $v\equiv 0$. By induction we conclude that $g\equiv 0$. \end{proof}
\begin{corollary} \label{co: iterated_kernel} A smooth solution of $\square^l g=f$ supported in $H_{t_0}$ for some $t_0\in\mathbb{R}$ (if such a solution exists) is unique. \end{corollary}
\begin{lemma} \label{lm: bounded_even} Let $g \in C^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ satisfy the conditions of Theorem \ref{thm: even}, then $g$ and its derivatives are bounded. \end{lemma} \begin{proof}
To prove the claim that $g$ is bounded, due to Corollary \ref{co: iterated_kernel}, it will suffice to construct a solution of the PDE \begin{equation} \label{eq: pde_lemma}
\square^k h = f \end{equation} that is supported in some $H_{t_0}$ and is indeed bounded\footnote{Here $f$ is \textbf{defined} as $\square^k g$.}.
We achieve this by using a fundamental solution $\Phi^{(k,2k)}$ for $\square^k$. Then the needed solution of (\ref{eq: pde_lemma}) will be given by the convolution $\Phi^{(k,2k)} \ast f$, which will be supported on a half-space.
Fundamental solutions for $\square^\xi h$ for any complex number $\xi$ were studied extensively in \citep{Bollini}. In particular, for integer $k$ and dimension $2k$ the fundamental solution is given by \begin{equation}\label{eq: fundamental_solution}
\Phi^{(k,2k)} = (-1)^{k}\alpha_{2k}^{-1}\Theta \ast D. \end{equation} The fact that it is a fundamental solution is easily verified by making use of the Fourier transform for $\Im(\sigma) > 0$: \begin{equation} \label{eq: fundamental_computation_even}
\mathcal{F}[\square^k \Phi^{(k,2k)} \ast f] = \alpha_{2k}^{-1} (i \sigma)^{-1} (\sigma^2 - |\boldsymbol \omega|^2)^k \alpha_{2k} \frac{i \sigma}{(\sigma^2 - |\boldsymbol \omega|^2)^k} \hat{f} = \hat{f}, \end{equation} where $\hat{f}$ denotes the Fourier transform of $f$.
Furthermore, the convolution $\Theta \ast D$ is supported outside a cone, due to the geometry of the supports of $\Theta$ and $D$ (see Fig. \ref{fig: VolCone}). \begin{figure}
\caption{The supports of $\Theta$ (left), $D$ (center), and their convolution (right).}
\label{fig: VolCone}
\end{figure}
By construction, the function $h$ can be written $h=\Phi^{(k,2k)} \ast f$ (which is in fact closely related to the inversion formula for the conical Radon transform). Let us now show that $\Phi^{(k,2k)} \ast f$, and hence $g$, is bounded. Observe that
$$ |\Phi^{(k,2k)} \ast f(\boldsymbol x,t)| \leq |\alpha_{2k}|^{-1} \int_{|\boldsymbol x-\boldsymbol y| = t-s} dS \int_{-\infty}^{s} |f(\boldsymbol y,z)|dz. $$ Since the right hand side is a volume integral, we get the estimate \begin{equation}\label{eq: bound}
|\Phi^{(k,2k)} \ast f(\boldsymbol x,t)| \leq C \max (|f|) Vol(\supp (f)). \end{equation} This implies boundedness of $h=\Phi^{(k,2k)} \ast f$.
\end{proof} \begin{remark} By applying the same argument to $\partial^{m}\Phi^{(k,2k)} \ast f=\Phi^{(k,2k)} \ast \partial^m f$ where $m$ is a multiindex, one confirms that all derivatives are also bounded. \end{remark}
\begin{lemma} \label{lm: bounded_odd} Let $g \in C^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ satisfy the conditions of Theorem \ref{thm: odd}, then $g$ and its derivatives are bounded. \end{lemma} \begin{proof}
Similarly to the previous lemma, we shall prove this by studying the solution of the equation \begin{equation} \label{eq: ipde_lemma}
\square^{2k} \mathcal{C}^{1}[h] = f \qquad f \in C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R}) \end{equation} under the constraint \begin{equation} \label{eq: ipde_constraint}
\int_{-\infty}^{\infty} \square^{2k} \mathcal{C}^{1}[h](\boldsymbol x,t)dt = 0 \qquad \forall \boldsymbol x \in \mathbb{R}^{2k}. \end{equation} Our approach here is similar to the one above: finding an appropriate fundamental solution of the equation \begin{equation}\label{eq: ipde_fundamental}
\square^{2k} \mathcal{C}^{1}[\Phi^{\left (\frac{2k+1}{2},2k+1 \right )}] = \delta \end{equation} and then using Corollary \ref{co: iterated_kernel}.
The relevant fundamental solution is \begin{equation}\label{eq: fundamental_solution_odd}
\Phi^{\left (\frac{2k+1}{2},2k+1 \right )} = \alpha_{2k+1}^{-1}\beta_{2k+1}^{-1}\Theta \ast D. \end{equation} To verify this, we make use of the Fourier transform and (\ref{eq: FFT_D'}) with $\Im(\sigma) > 0$ to get \begin{align} \label{eq: FT_fundamental_solution}
\mathcal{F}[\square^{2k} \mathcal{C}^{1}[ \Phi^{\left (\frac{2k+1}{2},2k+1 \right )} \ast f]] = \nonumber \\
\alpha_{2k+1}^{-1} \beta_{2k+1}^{-1} \left (\frac{(\sigma^2 - |\boldsymbol \omega|^2)^{2k}}{i\sigma} \right) \alpha_{2k+1} \beta_{2k+1} \frac{i \sigma}{(\sigma^2 - |\boldsymbol \omega|^2)^{2k}} \hat{f} = \hat{f}. \end{align} Boundedness of $\Phi^{\left (\frac{2k+1}{2},2k+1 \right )} \ast f$ is proven exactly as in the previous case. \end{proof} \begin{remark} Although we only require that $\supp g \subset H_{t_0}$, if $g$ is the CRT of some compactly supported function $f$ then it must be that $\supp g \subset \supp f + C(0,-\boldsymbol e_n, \pi/4)$. One can easily verify that the conditions imposed on $g$ in Theorems \ref{thm: even} and \ref{thm: odd} guarantee this. \end{remark}
\begin{lemma} \label{lm: weighted_bounded_even} Let $g \in C^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ satisfy the conditions of Theorem \ref{thm: weighted_even}, then $g$ and its derivatives are bounded. \end{lemma} \begin{proof}
Again we seek a solution of the PDE \begin{equation} \label{eq: weighted_pde_lemma}
\square^{k-1} h = f \end{equation} that is supported in some $H_{t_0}$.
The relevant fundamental solution $\Phi^{(k,2k)}$ for $\square^{k-1}$ is given by \begin{equation}\label{eq: weighted_fundamental_solution}
\Phi^{(k-1,2k)} = (-1)^{k-1}\beta_{2k}^{-1} D^{1}. \end{equation} The fact that it is a fundamental solution is easily verified by making use of the Fourier transform for $\Im(\sigma) > 0$: \begin{equation} \label{eq: weighted_fundamental_computation_even}
\mathcal{F}[\square^{k-1} \Phi^{(k-1,2k)} \ast f] = \beta_{2k}^{-1} (\sigma^2 - |\boldsymbol \omega|^2)^{k-1} \frac{\beta_{2k}}{(\sigma^2 - |\boldsymbol \omega|^2)^{k-1}} \hat{f} = \hat{f}, \end{equation} where $\hat{f}$ denotes the Fourier transform of $f$.
By construction, the function $h$ can be written $h=\Phi^{(k-1,2k)} \ast f$. Let us now show that $\Phi^{(k-1,2k)} \ast f$, and hence $h$ is bounded. Observe that
$$ |\Phi^{(k-1,2k)} \ast f(\boldsymbol x,t)| \leq |\beta_{2k}|^{-1} \int_{|\boldsymbol x-\boldsymbol y| = t-s} |f(\boldsymbol y,s)| dS \leq |\beta_{2k}|^{-1} \int_{\mathbb{R}} \int_{\mathbb{R}^{n-1}} |f(\boldsymbol y,s)| d\boldsymbol y ds. $$ Since the right hand side is a volume integral, we get the estimate \begin{equation}\label{eq: weighted_bound}
|\Phi^{(k-1,2k)} \ast f(\boldsymbol x,t)| \leq C \max (|f|) Vol(\supp (f)). \end{equation} This implies boundedness of $h=\Phi^{(k-1,2k)} \ast f$.
\end{proof} \begin{remark} By applying the same argument to $\partial^{m}\Phi^{(k-1,2k)} \ast f=\Phi^{(k-1,2k)} \ast \partial^m f$ where $m$ is a multiindex, one confirms that all derivatives are also bounded. \end{remark}
\begin{lemma} \label{lm: weighted_bounded_odd} Let $g \in C^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ satisfy the conditions of Theorem \ref{thm: weighted_odd}, then $g$ and its derivatives are bounded. \end{lemma} \begin{proof}
Similarly to the previous lemma, we shall prove this by studying the solution of the equation \begin{equation} \label{eq: weighted_ipde_lemma}
\square^{2k-1} \mathcal{C}^{1}[h] = f \qquad f \in C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R}) \end{equation}
The relevant fundamental solution is \begin{equation}\label{eq: weighted_fundamental_solution_odd}
\Phi^{\left (\frac{2k-1}{2},2k+1 \right )} = (-1)^{2k-1} \beta_{2k+1}^{-2} D^{1}. \end{equation} To verify this, we make use of the Fourier transform and (\ref{eq: FFT_D'}) with $\Im(\sigma) > 0$ to get \begin{align} \label{eq: weighted_FT_fundamental_solution}
\mathcal{F}[\square^{2k-1} \mathcal{C}^{1}[ \Phi^{\left (\frac{2k-1}{2},2k+1 \right )} \ast f]] = \nonumber \\
\beta_{2k+1}^{-2} \left ((\sigma^2 - |\boldsymbol \omega|^2)^{2k-1} \right) \frac{\beta_{2k+1}}{(\sigma^2 - |\boldsymbol \omega|^2)^{\frac{2k-1}{2}}} \frac{\beta_{2k+1}}{(\sigma^2 - |\boldsymbol \omega|^2)^{\frac{2k-1}{2}}} \hat{f} = \hat{f}. \end{align} Boundedness of $\Phi^{\left (\frac{2k-1}{2},2k+1 \right )} \ast f$ is proven exactly as in the previous case. \end{proof}
The upshot of these lemmas is that $g$ is well-behaved enough for it and its derivatives to be identified with tempered distributions supported in a half-space, in which case we can utilize computations with Fourier transforms freely.
\subsection{Proof of Theorem \ref{thm: even}}
\begin{proof} Let us start proving the necessity of the conditions. Suppose $g= \mathcal{C}[f]$ for some $f \in C_{0}^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ such that $\supp{f} \subset H_{t_0}$. It follows from the definition of CRT that $g$ is smooth, bounded and $\supp g \subset H_{t_0}$ (in particular, the condition (iii) of the theorem holds)\footnote{Since $g=D \ast f$, the support of $g$ belongs to the sum of the supports of $f$ and the distribution $D$. Because $\supp f \subseteq H_{t_0}$, $\supp g \subseteq H_{t_0}+H_0=H_{t_0}$.}. Therefore, for $\Im(\sigma) > 0$ one has \begin{align*}
\widehat{\square^k g}(\boldsymbol \omega,\sigma) &= (-1)^{k} (\sigma^2 - |\boldsymbol \omega|^2)^k \alpha_{2k} \frac{i \sigma}{(\sigma^2 - |\boldsymbol \omega|^2)^k} \hat{f}(\boldsymbol \omega,\sigma) \\
&= (-1)^{k} \alpha_{2k} i \sigma \hat{f}(\boldsymbol \omega,\sigma). \end{align*} In the limit $\Im(\sigma) \rightarrow 0^+$, we obtain \begin{equation}
\square^k g(\boldsymbol x,t) = (-1)^{k} \alpha_{2k} f_t(\boldsymbol x,t). \end{equation} Conditions (i) and (ii) follow immediately from the previous identity, which finishes the proof of necessity of the conditions.
To prove the sufficiency of the conditions we will make use of the inverse CRT. Suppose $g$ is smooth and satisfies the conditions specified in Theorem \ref{thm: even}. Define \begin{equation}
f( \boldsymbol x,t): = (-1)^{k}\alpha_{2k}^{-1} \int_{-\infty}^{t} \square^k g(\boldsymbol x,z)dz. \end{equation} Then $f$ is smooth with compact support, as $g$ is smooth and satisfies conditions (i) and (ii). Moreover, we have for $\Im(\sigma) > 0$
\begin{align*}
\widehat{\mathcal{C}[f]}(\boldsymbol \omega,\sigma) &= \frac{\alpha_{2k}}{\alpha_{2k}} \frac{i \sigma}{(\sigma^2 - |\boldsymbol \omega|^2)^k} \left ( \frac{1}{i \sigma} (\sigma^2 - |\boldsymbol \omega|^2)^k \hat{g} \right )=\hat{g}. \end{align*} In the limit $\Im(\sigma) \rightarrow 0^+$ we find that in fact $\mathcal{C}[f]=g$ and thus $g$ is in the range\footnote{Note that this computation is justified by Lemma \ref{lm: bounded_even}.} of $\mathcal{C}$. \end{proof}
\subsection{Proof of Theorem \ref{thm: odd}}
\begin{proof} We proceed in the same vein as in the previous proof. Starting with proving necessity, suppose $g= \mathcal{C}[f]$ for some $f \in C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$, and $\supp{f} \subset H_{t_0}$. Then, as in the previous theorem, $g$ is smooth and bounded, and $\supp g\subset H_{t_0}$. Thus, for $\Im(\sigma) > 0$ we get \begin{align*}
\widehat{\square^{2k} \mathcal{C}^{1}[ g]}(\boldsymbol \omega,\sigma) &= \frac{\beta_{2k+1}}{(\sigma^2 - |\boldsymbol \omega|^2)^{k-1/2}} (\sigma^2 - |\boldsymbol \omega|^2)^{2k} \alpha_{2k+1} \frac{ i \sigma}{(\sigma^2 - |\boldsymbol \omega|^2)^{k+1/2}} \hat{f}(\boldsymbol \omega,\sigma) \\
&= \alpha_{2k+1} \beta_{2k+1} i \sigma \hat{f}(\boldsymbol \omega,\sigma). \end{align*} Sending $\Im(\sigma) \rightarrow 0^+$, we obtain \begin{equation}
\square^{2k} \mathcal{C}^{1}[ g](\boldsymbol x,t) = \alpha_{2k+1} \beta_{2k+1} f_t(\boldsymbol x,t). \end{equation} This implies the conditions (i) and (ii), while (iii) has already been established.
Let us turn to proving sufficiency of the conditions. Suppose $g$ satisfies the conditions of Theorem \ref{thm: odd}. We define \begin{equation}
f(\boldsymbol x,t): = \alpha_{2k+1}^{-1} \beta_{2k+1}^{-1} \int_{-\infty}^{t} \square^{2k} \mathcal{C}^{1}[ g](\boldsymbol x,z)dz. \end{equation} Then $f$ is smooth with compact support, as $g$ is smooth and satisfies conditions (i) and (ii). Moreover, we have for $\Im(\sigma) >0$
\begin{align*}
\widehat{\mathcal{C}[f]}(\boldsymbol \omega,\sigma) = \frac{\alpha_{2k+1}}{\alpha_{2k+1}\beta_{2k+1}} \frac{i\sigma}{(\sigma^2 - |\boldsymbol \omega|^2)^{k+1/2}} \left ( \frac{\beta_{2k+1} (\sigma^2 - |\boldsymbol \omega|^2)^{2k}}{ i \sigma (\sigma^2 - |\boldsymbol \omega|^2)^{k-1/2}} \right ) \hat{g}=\hat{g}. \end{align*} Then in the limit $\Im(\sigma) \rightarrow 0^+$ we find that $\mathcal{C}[f]=g$ and thus $g$ is in the range of $\mathcal{C}$. \end{proof}
\subsection{Proof of Theorem \ref{thm: weighted_even}}
\begin{proof} Once again, we begin with the proof of necessity. Suppose $g= \mathcal{C}^{1}[f]$ for some $f \in C_{0}^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ such that $\supp{f} \subset H_{t_0}$. It follows from the definition of weighted CRT that $g$ is smooth, bounded and $\supp g \subset H_{t_0}$ (in particular, the condition (ii) of the theorem holds). Therefore, for $\Im(\sigma) > 0$ one has \begin{align*}
\widehat{\square^{k-1} g}(\boldsymbol \omega,\sigma) &= (-1)^{k-1} (\sigma^2 - |\boldsymbol \omega|^2)^{k-1} \frac{\beta_{2k}}{(\sigma^2 - |\boldsymbol \omega|^2)^{k-1}} \hat{f}(\boldsymbol \omega,\sigma) \\
&= (-1)^{k-1} \beta_{2k} \hat{f}(\boldsymbol \omega,\sigma). \end{align*} In the limit $\Im(\sigma) \rightarrow 0^+$, we obtain \begin{equation}
\square^{k-1} g(\boldsymbol x,t) = (-1)^{k-1} \beta_{2k} f(\boldsymbol x,t). \end{equation} Condition (i) of the theorem follows immediately from the previous identity, which finishes the proof of necessity of the conditions.
Now to prove sufficiency, suppose $g$ is smooth and satisfies the conditions specified in Theorem \ref{thm: weighted_even}. Define \begin{equation}
f( \boldsymbol x,t): = (-1)^{k-1}\beta_{2k}^{-1} \square^{k-1} g(\boldsymbol x,t). \end{equation} Then $f$ is smooth with compact support, as $g$ is smooth and satisfies condition (i). Moreover, we have for $\Im(\sigma) > 0$
\begin{align*}
\widehat{\mathcal{C}^{1}[f]}(\boldsymbol \omega,\sigma) &= \frac{\beta_{2k}}{\beta_{2k}} \frac{1}{(\sigma^2 - |\boldsymbol \omega|^2)^{k-1}} \left ( (\sigma^2 - |\boldsymbol \omega|^2)^{k-1} \hat{g} \right )=\hat{g}. \end{align*} In the limit $\Im(\sigma) \rightarrow 0^+$ we find that in fact $\mathcal{C}^{1}[f]=g$ and thus $g$ is in the range of $\mathcal{C}^{1}$. \end{proof}
\subsection{Proof of Theorem \ref{thm: weighted_odd}}
\begin{proof} Suppose $g= \mathcal{C}^{1}[f]$ for some $f \in C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$, and $\supp{f} \subset H_{t_0}$. Then, $g$ is smooth and bounded, and $\supp g\subset H_{t_0}$. Thus, for $\Im(\sigma) > 0$ we get \begin{align*}
\widehat{\square^{2k-1} \mathcal{C}^{1}[ g]}(\boldsymbol \omega,\sigma) &= \frac{\beta_{2k+1}}{(\sigma^2 - |\boldsymbol \omega|^2)^{k-1/2}} (\sigma^2 - |\boldsymbol \omega|^2)^{2k-1} \frac{\beta_{2k+1} }{(\sigma^2 - |\boldsymbol \omega|^2)^{k-1/2}} \hat{f}(\boldsymbol \omega,\sigma) \\
&=\beta_{2k+1}^{2} \hat{f}(\boldsymbol \omega,\sigma). \end{align*} Sending $\Im(\sigma) \rightarrow 0^+$, we obtain \begin{equation}
\square^{2k-1} \mathcal{C}^{1}[ g](\boldsymbol x,t) = \beta_{2k+1}^{2} f(\boldsymbol x,t). \end{equation} This implies the condition (i).
Now suppose $g$ satisfies the conditions of Theorem \ref{thm: weighted_odd}. We define \begin{equation}
f(\boldsymbol x,t): = (-1)^{2k-1} \beta_{2k+1}^{-2} \square^{2k-1} \mathcal{C}^{1}[ g](\boldsymbol x,t). \end{equation} Then $f$ is smooth with compact support, as $g$ is smooth and satisfies condition (i). Moreover, we have for $\Im(\sigma) >0$
\begin{align*}
\widehat{\mathcal{C}^{1}[f]}(\boldsymbol \omega,\sigma) = \frac{\beta_{2k+1}}{\beta_{2k+1}^2} \frac{1}{(\sigma^2 - |\boldsymbol \omega|^2)^{k-1/2}} \left ( \frac{\beta_{2k+1} (\sigma^2 - |\boldsymbol \omega|^2)^{2k-1}}{ (\sigma^2 - |\boldsymbol \omega|^2)^{k-1/2}} \right ) \hat{g} =\hat{g}. \end{align*} Then in the limit $\Im(\sigma) \rightarrow 0^+$ we find that $\mathcal{C}^{1}[f]=g$ and thus $g$ is in the range of $\mathcal{C}^{1}$. \end{proof}
\section{Arbitrary angle of the cone}\label{S:angle}
For simplicity we have so far restricted our attention to cones with a right angle opening and central axis aligned with coordinate $t$ of our chosen coordinate system $(\boldsymbol x,t)$. The direction of the axis does not restrict generality. Let us now tackle an arbitrary half-opening angle $\phi$. The mapping $(\boldsymbol x,t)\mapsto ((\cot \phi) \boldsymbol x,t)$ transforms the $\pi/4$ half-opening angle to $\phi$.
This suggests consideration of the modified d'Alembertian \begin{equation} \label{eq: dAlembertian_COB}
\square_{\phi} \coloneqq \frac{\partial^2}{\partial t^{2}} - (\tan^2 \phi) \Delta_{\boldsymbol x} \end{equation} Arguing along the same lines as in the previous sections proves the results stated below\footnote{One could also obtain them by changing variables.} for the CRT $\mathcal{C}_\phi$ with half-opening angle $\phi$. \begin{theorem}\label{thm: even_COB} Let $n=2k$ be even. A function $g\in C^\infty(\mathbb{R}^{2k-1}\times \mathbb{R})$ is in the range of $\mathcal{C}_\phi$ acting on $C_0^\infty(\mathbb{R}^n)$ if and only if the following conditions are satisfied: \begin{enumerate}[(i)]
\item $\square_{\phi}^{k} g(\boldsymbol x,t)$ has compact support.
\item $\int_{-\infty}^{\infty} \square_{\phi}^{k} g(\boldsymbol x,t)dt=0$ for every $\boldsymbol x \in \mathbb{R}^{2k-1}$.
\item $\supp g \subseteq H_{t_0}$ for some $t_0 \in \mathbb{R}$. \end{enumerate} \end{theorem} An analogous result for odd dimensions is as follows: \begin{theorem}\label{thm: odd_COB} Let $n=2k+1$ be odd. A function $g \in C^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ is in the range of $\mathcal{C}_{\phi}$ on $C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ if and only if the following conditions are satisfied: \begin{enumerate}[(i)]
\item $\square_{\phi}^{2k} \mathcal{C}_{\phi}^{1}[g](\boldsymbol x,t)$ has compact support.
\item $\int_{-\infty}^{\infty} \square_{\phi}^{2k} \mathcal{C}_{\phi}^{1}[g](\boldsymbol x,t)dt=0$ for every $\boldsymbol x \in \mathbb{R}^{2k}$.
\item $\supp g \subseteq H_{t_0}$ for some $t_0 \in \mathbb{R}$. \end{enumerate} where $\mathcal{C}_{\phi}^{1}$ is the weighted CRT corresponding to the opening angle $\phi$: \begin{equation}
\mathcal{C}_{\phi}^{1}[f](\boldsymbol x, t) = D_{\phi}^{1} \ast f= \left [ |((\cot \phi) \boldsymbol x, t)|^{-1} \delta(t - \cot \phi |\boldsymbol x|) \right ] \ast f \end{equation} \end{theorem}
\begin{theorem} \label{thm: weighted_even_COB} Let $n=2k$ be even (and thus $\boldsymbol x\in\mathbb{R}^{2k-1}$). A function $g \in C^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ is in the range of $\mathcal{C}_{\phi}^{1}$ on $C_{0}^{\infty}(\mathbb{R}^{2k-1} \times \mathbb{R})$ if and only if the following conditions are satisfied: \begin{enumerate}[(i)]
\item $\square_{\phi}^{k-1} g(\boldsymbol x,t)$ has compact support.
\item $\supp g \subseteq H_{t_0}$ (see (\ref{eq:Ht})) for some $t_0 \in \mathbb{R}$. \end{enumerate} \end{theorem}
An analogous result for odd dimensions is as follows: \begin{theorem}\label{thm: weighted_odd_COB} Let $n=2k+1$ be odd (and thus $\boldsymbol x\in\mathbb{R}^{2k}$). A function $g \in C^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ is in the range of $\mathcal{C}_{\phi}^{1}$ on $C_{0}^{\infty}(\mathbb{R}^{2k} \times \mathbb{R})$ if and only if \begin{enumerate}[(i)]
\item $\square_{\phi}^{2k-1} \mathcal{C}_{\phi}^{1}[g](\boldsymbol x,t)$ has compact support.
\item $\supp g \subseteq H_{t_0}$ for some $t_0 \in \mathbb{R}$. \end{enumerate} \end{theorem}
\subsection{Proof of Theorems for an arbitrary angle of the cone}
First note that Lemma \ref{lm: dAlembert_Kernel} holds for any wave speed, in particular the result holds for $\square_\phi$. Next, observe that the critical computations in the proofs of Theorems \ref{thm: even} and \ref{thm: odd} are (\ref{eq: fundamental_computation_even}) and (\ref{eq: FT_fundamental_solution}), which are still valid due to the respective fundamental solutions being tempered distributions supported on a half-space. It will therefore suffice to produce tempered distributions which are fundamental solutions to the PDEs: \begin{equation} \label{eq: PDE_COB_even}
\square_{\phi}^k \Phi_{\phi}^{(k,2k)} = \delta \end{equation} and \begin{equation} \label{eq: PDE_COB_odd}
\square_{\phi}^{2k} \mathcal{C}_{\phi}^{1}[\Phi_{\phi}^{\left (\frac{2k+1}{2},2k+1 \right )}] = \delta. \end{equation} We can repeat the computation at the end of Section \ref{S:notation} to determine the Fourier transforms of $D_\phi$ and $D_{\phi}^{1}$ on $\mathbb{H}_+$: \begin{align*}
\widehat{D_\phi}(\boldsymbol \omega,\sigma)
&= \sin^{n-2} \phi \sec^{n-1} \phi \int_{-\infty}^{0} \int_{\partial B(0,1)} \left ( e^{ir \tan \phi \boldsymbol \omega \cdot \boldsymbol \theta -i r \sigma } \right ) d\boldsymbol \theta (-r)^{n-2} dr \\
&=\alpha_{n,\phi} \frac{i\sigma}{(\sigma^2-\tan^2 \phi|\boldsymbol \omega|^2)^{\frac{n}{2}}}, \end{align*} where now, \begin{equation} \label{eq: alpha_phi}
\alpha_{n,\phi} \coloneqq -\sin^{n-2} \phi \sec^{n-1} \phi (-1)^{-\frac{n}{2}} 2^{n-1} \pi^{\frac{n-2}{2}} \Gamma \left ( \frac{n}{2} \right). \end{equation} Using the relation $t \sec \phi D_{\phi}^{1} = D_\phi$, we conclude that on $\mathbb{H}_+$ one has \begin{equation} \label{eq: FFT_D'_phi}
\widehat{D_{\phi}^{1}}(\boldsymbol \omega,\sigma) = \frac{\beta_{n,\phi}}{(\sigma^2-\tan^2 \phi |\boldsymbol \omega|^2)^{\frac{n-2}{2}}}, \end{equation} where now \begin{equation} \label{eq: beta_phi}
\beta_{n,\phi} \coloneqq - \sin^{n-2} \phi \sec^{n-2} \phi (-1)^{-\frac{n}{2}} \frac{2^{n-1}}{2-n} \pi^{\frac{n-2}{2}} \Gamma \left ( \frac{n}{2} \right). \end{equation} The fundamental solution for (\ref{eq: PDE_COB_even}) is \begin{equation}\label{eq: fundamental_solution_COB_even}
\Phi_{\phi}^{(k,2k)} = (-1)^{k}\alpha_{2k,\phi}^{-1}\Theta \ast D_\phi \end{equation} and the fundamental solution for (\ref{eq: PDE_COB_odd}) is \begin{equation}\label{eq: fundamental_solution_COB_odd}
\Phi_{\phi}^{\left (\frac{2k+1}{2},2k+1 \right )} = \alpha_{2k+1,\phi}^{-1}\beta_{2k+1,\phi}^{-1}\Theta \ast D_\phi. \end{equation} The remainder of the proofs is identical to the proofs in Section \ref{S:proofs}.
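\begin{remark} As noted in the footnote earlier in this section, these results can also be obtained by a direct change of variables. Indeed, for a smooth $u$, setting $v(\boldsymbol x,t) \coloneqq u((\cot \phi)\boldsymbol x, t)$ yields $\Delta_{\boldsymbol x} v = (\cot^2 \phi)\, (\Delta_{\boldsymbol x} u)((\cot \phi)\boldsymbol x, t)$, and hence \begin{equation*}
\square_{\phi} v = \left ( \frac{\partial^2 u}{\partial t^2} - \tan^2 \phi \, \cot^2 \phi \, \Delta_{\boldsymbol x} u \right )((\cot \phi)\boldsymbol x, t) = (\square u)((\cot \phi)\boldsymbol x, t), \end{equation*} so statements about $\square$ and the cone with half-opening angle $\pi/4$ translate directly into statements about $\square_{\phi}$ and the cone with half-opening angle $\phi$. \end{remark}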
\section{A range description of the full CRT} \label{S: Full_Range}
We now turn our attention to the full CRT, i.e. the one that uses \emph{all} cones. Before we discuss its range, we recall the divergent beam and spherical mean transforms. The reason is that the CRT can be factored into the composition of the former two \citep{Fatma}. This will induce certain additional characteristics on the range of the CRT. These new features, along with Theorems \ref{thm: even_COB} and \ref{thm: odd_COB} will lead to a description of the range of the full CRT.
\subsection{The Divergent Beam Transform} \label{sec: divergent_beam} \begin{figure}
\caption{Schematic representation of the divergent beam transform.}
\label{fig: DivergentBeam}
\end{figure} The divergent beam transform $P[f](\boldsymbol x, \boldsymbol \beta)$ of $f\in C_0^{\infty}(\mathbb{R}^n)$ is the integral of $f$ over the ray originating at $\boldsymbol x$ and emanating in the direction $\boldsymbol \beta \in \mathbb{S}^{n-1}$ \citep{Natt} (see Fig. \ref{fig: DivergentBeam}): \begin{equation} \label{eq: divergent_beam_transform}
P[f](\boldsymbol x, \boldsymbol \beta)= \int_{0}^{\infty} f(\boldsymbol x + t \boldsymbol \beta) dt. \end{equation} It is evident from the definition that for $f\in C_0^{\infty}(\mathbb{R}^n)$, $P[f]\in C^{\infty}(\mathbb{R}^n \times \mathbb{S}^{n-1})$. \subsection{The Spherical Mean Transform} \label{sec: spherical_mean} \begin{figure}
\caption{Schematic representation of the spherical mean transform.}
\label{fig: SphericalMean}
\end{figure} The spherical mean transform $M_S[f](\boldsymbol x,r)$ of $f\in C^{\infty}(\mathbb{R}^n)$ is the average of $f$ over the sphere with center $\boldsymbol x$ and radius $r\in \mathbb{R}_+$ (see \citep{Fritz} and Fig. \ref{fig: SphericalMean}): \begin{equation} \label{eq: spherical_mean_transform}
M_S[f](\boldsymbol x, r)= \frac{1}{\omega_{n-1}} \int_{\partial B(\boldsymbol x, r)} f(\boldsymbol y) d\boldsymbol \theta (\boldsymbol y) \end{equation} where $\omega_n$ is the surface area of the $n-$sphere. The spherical mean transform is extremely interesting in its own right, and several works are devoted to its study (see e.g. \citep{Agranovsky2009,Helgason,Fritz,Rubin}). A key property of the spherical mean transform is that its range is contained in the kernel of the Euler--Poisson--Darboux operator \citep{Fritz,Rubin}: \begin{equation} \label{eq: EPD}
\mathcal{E} \coloneqq \frac{\partial ^2}{\partial r^2} +\frac{n-1}{r} \frac{\partial}{\partial r} - \Delta_{\boldsymbol x} \end{equation} \begin{figure}
\caption{Schematic representation of the spherical mean transform on the sphere.}
\label{fig: SphericalMeanOnSphere}
\end{figure} In fact, these results can be generalized to the spherical mean transform over a wide class of manifolds (namely, two-point homogeneous spaces) \citep{Helgason}. The case of the sphere will be of particular interest to us.
The spherical mean transform $M[f](\boldsymbol \beta,r)$ of $f\in C^{\infty}(\mathbb{S}^{n-1})$ is the average of $f$ over the $(n-2)$-sphere with center $\boldsymbol \beta \in \mathbb{S}^{n-1}$ and geodesic radius $r\in (0,\pi/2)$ (see Fig. \ref{fig: SphericalMeanOnSphere}): \begin{equation} \label{eq: spherical_mean_transform_on_sphere}
M[f](\boldsymbol \beta, r)= \frac{1}{A_{n-2}(r)} \int_{\partial B(\boldsymbol \beta, r)} f(\boldsymbol y) d\boldsymbol \theta (\boldsymbol y) \end{equation} where $A_n(r)$ is the surface area of the $n$-sphere with geodesic radius $r$. From this point forward we will refer to (\ref{eq: spherical_mean_transform_on_sphere}) simply as the spherical mean transform. Two key properties of this spherical mean transform which will be critical in our discussion of the range of the CRT are the following injectivity result and range description. \begin{theorem}[Injectivity (Helgason \citep{Helgason})] \label{thm: injectivity} For any fixed $r>0$ the transform $M[f](\boldsymbol \beta,r)$ is injective for $f \in L^1(\mathbb{S}^{n-1})$. \end{theorem} \begin{theorem}[Range (Helgason\protect\footnotemark \citep{Helgason})] \label{thm: range} The range of the transform $M[f](\boldsymbol \beta,r)$ over $C^2(\mathbb{S}^{n-1})$ is the kernel of the Euler--Poisson--Darboux type operator \begin{equation} \label{eq: EPD_sphere}
\mathcal{E}_S \coloneqq \frac{\partial^2 }{\partial r^2} + \frac{A'(r)}{A(r)} \frac{\partial }{\partial r} - \Delta_{S} \end{equation} where $\Delta_S$ is the Laplace-Beltrami operator on the sphere $\mathbb{S}^{n-1}$ and $A'(r)$ is the derivative of $A(r)$. \end{theorem} \footnotetext{Helgason in fact proved in \citep{Helgason} these two results for the general case of the spherical mean transforms on compact two-point homogeneous spaces.} \subsection{Factoring the CRT} \label{sec: factor} As before, the CRT can be factored into the composition of the divergent beam and spherical mean transforms. This can be seen through the following computation: \begin{align*}
\mathcal{C}[f](\boldsymbol x, \phi, \boldsymbol \beta) &= \sin \phi \int_{\mathbb{R}^n} f(\boldsymbol y) \delta ((\boldsymbol y - \boldsymbol x)\cdot \boldsymbol \beta - |\boldsymbol y - \boldsymbol x| \cos \phi)d\boldsymbol y \\
& = \sin \phi \int_{\mathbb{R}^n} f(\boldsymbol x -\boldsymbol y) \delta (\boldsymbol y\cdot \boldsymbol \beta - |\boldsymbol y| \cos \phi)d\boldsymbol y \\
& = \sin \phi \int_{\mathbb{S}^{n-1}} P[f](\boldsymbol x, \boldsymbol y) \delta (\boldsymbol y\cdot \boldsymbol \beta -\cos \phi)d\theta(y) \\
& = \sin \phi \int_{\mathbb{S}^{n-2}} P[f](\boldsymbol x,\cos \phi \boldsymbol \beta + \sin \phi \boldsymbol y) d\theta(y) \\
& = \sin \phi A(\phi) M[P[f](\boldsymbol x, \boldsymbol \alpha)](\boldsymbol \beta, \phi), \end{align*} where the spherical mean transform $M[P[f](\boldsymbol x,\boldsymbol \alpha)]$ is taken with respect to $\boldsymbol \alpha \in \mathbb{S}^{n-1}$. Geometrically this factoring makes sense, as the surface cone $\mathfrak{C}(\boldsymbol x, \phi, \boldsymbol \beta)$ is the union of rays originating from $\boldsymbol x$ and emanating in the directions $\boldsymbol \alpha$ satisfying $\boldsymbol \alpha \cdot \boldsymbol \beta = \cos \phi$. Accordingly, \begin{equation}
\frac{1}{\sin \phi A(\phi)} \mathcal{C}[f](\boldsymbol x, \phi, \boldsymbol \beta) \end{equation} is in the range of $M$, and thus must be in the kernel of (\ref{eq: EPD_sphere}). Indeed, it was shown in \citep{Fatma} that a generalization of this result holds for the weighted CRT when the weight is an integer power of the distance from the cone vertex.
\subsection{A symmetry of the CRT} \label{sec: symmetry} The final detail we will need to complete our description of the range of the CRT is to observe a symmetry. Namely, suppose we apply the CRT to $f\in C_{0}^{\infty}( \mathbb{R}^n)$ twice: \begin{align} \label{eq: CRT_twice}
& \mathcal{C}[\mathcal{C}[f]](\boldsymbol x, \phi,\psi, \boldsymbol \beta,\boldsymbol \gamma) \nonumber \\
& = \sin \phi \sin \psi \int_{\mathbb{R}^{2n}} f(\boldsymbol x - \boldsymbol y - \boldsymbol u) \delta (\boldsymbol y \cdot \boldsymbol \beta - |\boldsymbol y| \cos \phi) \delta(\boldsymbol u \cdot \boldsymbol \gamma - |\boldsymbol u| \cos \psi) d\boldsymbol y d\boldsymbol u. \end{align} Since $\mathcal{C}[f]$ is supported on a half-space, this expression makes sense when $\psi$ and $\boldsymbol \gamma$ are near $\phi$ and $\boldsymbol \beta$ respectively (see Fig. \ref{fig: DoubleCone}). Also, the roles of the inner and outer CRT are interchangeable, and we have the following identity: \begin{equation} \label{eq: symmetry}
\frac{\partial}{\partial \boldsymbol \beta} \mathcal{C}[\mathcal{C}[f]](\boldsymbol x, \phi,\phi, \boldsymbol \beta,\boldsymbol \gamma)|_{\boldsymbol \beta = \boldsymbol \gamma} = \frac{\partial}{\partial \boldsymbol \gamma} \mathcal{C}[\mathcal{C}[f]](\boldsymbol x, \phi,\phi, \boldsymbol \beta,\boldsymbol \gamma)|_{\boldsymbol \beta = \boldsymbol \gamma}. \end{equation} Here, by $\frac{\partial}{\partial \boldsymbol \beta}$ we mean the gradient with respect to $\boldsymbol \beta$. \begin{figure}
\caption{The CRT can be applied twice, provided the central axis direction and opening angle of the second cone are close to the first.}
\label{fig: DoubleCone}
\end{figure} \subsection{Range Description} \label{sec: Range_Description} We can now obtain a characterization of the range of the full CRT. \begin{theorem} The function $g \in C^{\infty}(\mathbb{R}^n \times (0, \pi/2) \times \mathbb{S}^{n-1})$ can be represented as $g=\mathcal{C}[f]( \boldsymbol x, \phi, \boldsymbol \beta)$ for some $f \in C_0^{\infty}(\mathbb{R}^n)$ if and only if $g$ satisfies the following conditions: \begin{enumerate}[(i)]
\item For any fixed $\phi_0$, $g(\boldsymbol x, \phi_0, \boldsymbol e_n)$ satisfies the conditions of Theorems \ref{thm: even_COB} and \ref{thm: odd_COB} (depending on the parity of $n$). \label{cond: one}
\item $\frac{1}{\sin \phi A(\phi)} g(\boldsymbol x, \phi, \boldsymbol \beta)$ is in the kernel of $\frac{\partial^2 }{\partial r^2} + \frac{A'(r)}{A(r)} \frac{\partial }{\partial r} - \Delta_{S}$ for each $\boldsymbol x$. \label{cond: two}
\item There is a fixed $t \in \mathbb{R}$ such that $g(\boldsymbol x, \phi, \boldsymbol \beta)$ is supported in a half-space $H_t^{\boldsymbol \beta} = \{\boldsymbol x \in \mathbb{R}^n : \boldsymbol x \cdot \boldsymbol \beta >t\}$. \label{cond: three}
\item For all $\phi$ and $\boldsymbol \beta$, $g(\boldsymbol x, \phi, \boldsymbol \beta)$ is bounded. \label{cond: four}
\item $\frac{\partial }{\partial \boldsymbol \beta}\mathcal{C}[g](\boldsymbol x, \phi, \phi, \boldsymbol \beta, \boldsymbol \gamma)|_{\boldsymbol \beta = \boldsymbol \gamma} =\frac{\partial }{\partial \boldsymbol \gamma}\mathcal{C}[g](\boldsymbol x, \phi, \phi, \boldsymbol \beta, \boldsymbol \gamma)|_{\boldsymbol \gamma = \boldsymbol \beta}$. \label{cond: five} \end{enumerate}
\end{theorem} \begin{proof} The necessity of each of these conditions has in fact been established throughout this manuscript. Condition \ref{cond: one} was established when we considered the restricted CRT. Condition \ref{cond: two} is a consequence of the factoring of the CRT into the divergent beam and spherical mean transforms and Theorem \ref{thm: range}. Condition \ref{cond: three} was established in the case $\boldsymbol \beta = \boldsymbol e_n$, and the argument for other directions is identical. Condition \ref{cond: four} was also established in the case $\boldsymbol \beta = \boldsymbol e_n$, and, again, the argument for other directions is identical. Condition \ref{cond: five} was established in the previous subsection. Thus we need only prove the sufficiency of these conditions.
Fix $\phi_0$ and let $f\in C_{0}^{\infty}(\mathbb{R}^n)$ satisfy $g(\boldsymbol x, \phi_0,\boldsymbol e_n)=\mathcal{C}[f](\boldsymbol x, \phi_0, \boldsymbol e_n)$. The existence of $f$ is guaranteed by Theorems \ref{thm: even_COB} and \ref{thm: odd_COB}. We will show that in fact $\mathcal{C}[f](\boldsymbol x, \phi, \boldsymbol \beta) = g(\boldsymbol x, \phi, \boldsymbol \beta)$. We first show that $\mathcal{C}[f](\boldsymbol x, \phi_0, \boldsymbol \beta)$ is the \emph{unique} solution to the boundary value problem \begin{equation} \label{eq: Symmetry_PDE}
\frac{\partial }{\partial \boldsymbol \beta}\mathcal{C}[u](\boldsymbol x, \phi_0, \phi_0, \boldsymbol \beta, \boldsymbol \gamma)|_{\boldsymbol \beta = \boldsymbol \gamma} =\frac{\partial }{\partial \boldsymbol \gamma}\mathcal{C}[u](\boldsymbol x, \phi_0, \phi_0, \boldsymbol \beta, \boldsymbol \gamma)|_{\boldsymbol \beta = \boldsymbol \gamma} \end{equation} with $u \in C^{\infty}(\mathbb{R}^n \times (0, \pi/2) \times \mathbb{S}^{n-1})$ satisfying Conditions \ref{cond: three} and \ref{cond: four} and $u(\boldsymbol x, \phi_0, \boldsymbol e_n) = g(\boldsymbol x, \phi_0, \boldsymbol e_n)$.
\begin{figure}
\caption{The CRT can be applied to $g$, provided the central axis direction and opening angle of the cone are close to $\boldsymbol \beta$ and $\phi$ respectively.}
\label{fig: DoubleConeData}
\end{figure} Under these conditions $\mathcal{C}[u](\boldsymbol x, \phi_0, \phi_0, \boldsymbol \beta, \boldsymbol \gamma)$ will be a tempered distribution in $\boldsymbol x$. Thus the application of the Fourier transform in $\boldsymbol x$ is justified, and therefore we can compute the Fourier transform of (\ref{eq: Symmetry_PDE}) to obtain the differential equation: \begin{equation} \label{eq: Fourier_Commutation}
\left [ \frac{\partial}{\partial \boldsymbol \gamma} \hat{D}(\boldsymbol \omega,\phi_0,\boldsymbol \gamma) \hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol \beta) \right ] _{\boldsymbol \beta = \boldsymbol \gamma}- \left [ \hat{D}(\boldsymbol \omega,\phi_0,\boldsymbol \gamma) \frac{\partial}{\partial \boldsymbol \beta} \hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol \beta) \right ]_{\boldsymbol \beta = \boldsymbol \gamma}=0 \end{equation} where \begin{equation}
\hat{D}(\boldsymbol \omega, \phi_0, \boldsymbol \gamma) = p(R_{\boldsymbol \gamma} \boldsymbol \omega,\phi_0) \end{equation} is the Fourier transform of the cone, with \begin{equation}
p(\boldsymbol \omega,\phi_0) = \frac{\alpha_{n,\phi_0}i \omega_n}{(\omega_n^2-\tan^2 \phi_0 |(\omega_1,\omega_2,...,\omega_{n-1})|^2)^{n/2}} \end{equation} and $R_{\boldsymbol \gamma}$ is any rotation matrix such that $R_{\boldsymbol \gamma} \boldsymbol e_n = \boldsymbol \gamma.$\footnote{Due to the symmetry of the cone, any rotation mapping $\boldsymbol e_n \rightarrow \boldsymbol \gamma$ will map the cone with central axis $\boldsymbol e_n$ to the cone with central axis $\boldsymbol \gamma$. Due to the rotational invariance of the Fourier transform, its Fourier transform exhibits this same symmetry.} As discussed earlier, these distributions have analytic continuation to the complex half-space ${\Im(\boldsymbol \omega \cdot \boldsymbol \gamma) > 0}$ and are continuous at the boundary $\Im(\boldsymbol \omega \cdot \boldsymbol \gamma)=0$. Moreover, $\hat{D}$ is smooth and non-vanishing when $\Im(\boldsymbol \omega \cdot \boldsymbol \gamma) > 0$, and because (\ref{eq: Fourier_Commutation}) is linear, it admits a unique solution $\hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol \beta)$ up to a choice of initial data $\hat{u}(\boldsymbol \omega, \phi_0 ,\boldsymbol e_n)$. In fact, the solution is given by \begin{equation} \label{eq: Fourier_Commutation_Solution}
\hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol \beta) = \hat{U}(\boldsymbol \omega, \phi_0) \hat{D} (\boldsymbol \omega, \phi_0, \boldsymbol \beta), \end{equation} as can be verified by direct computation. Given the initial condition $\hat{g}(\boldsymbol \omega, \phi_0, \boldsymbol e_n)$, $\hat{U}$ is determined by \begin{equation}
\hat{U}(\boldsymbol \omega, \phi_0) = \frac{\hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol e_n)}{\hat{D}(\boldsymbol \omega, \phi_0, \boldsymbol e_n)}=\frac{\hat{g}(\boldsymbol \omega, \phi_0, \boldsymbol e_n)}{\hat{D}(\boldsymbol \omega, \phi_0, \boldsymbol e_n)}=\frac{\widehat{\mathcal{C}[f]}(\boldsymbol \omega, \phi_0, \boldsymbol e_n)}{\hat{D}(\boldsymbol \omega, \phi_0, \boldsymbol e_n)}=\hat{f}. \end{equation} Furthermore, due to Condition \ref{cond: one}, $\hat{U}$ is everywhere analytic and has exponential order. Thus the solution (\ref{eq: Fourier_Commutation_Solution}) is analytic in $\Im(\boldsymbol \omega \cdot \boldsymbol \beta) > 0$. Hence, according to Conditions \ref{cond: three} and \ref{cond: four}, $\hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol \beta)$ for $\boldsymbol \omega \in \mathbb{R}^n$ is determined by $\lim_{\Im(\boldsymbol \omega \cdot \boldsymbol \beta) \rightarrow 0}\hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol \beta)$ and therefore \begin{equation}
\hat{u}(\boldsymbol \omega, \phi_0, \boldsymbol \beta) = \hat{f}(\boldsymbol \omega) \hat{D} (\boldsymbol \omega, \phi_0, \boldsymbol \beta) \overset{\mathcal{F}^{-1}}{\rightarrow} u(\boldsymbol x, \phi_0, \boldsymbol \beta)=\mathcal{C}[f](\boldsymbol x, \phi_0, \boldsymbol \beta). \end{equation} To finish the proof, we simply note that since $g(\boldsymbol x, \phi, \boldsymbol \beta)$ is in the kernel of the Euler--Poisson--Darboux operator for every point $\boldsymbol x \in \mathbb{R}^n$, by Theorems \ref{thm: injectivity} and \ref{thm: range} the map $g(\boldsymbol x, \phi_0, \boldsymbol \beta) \mapsto g(\boldsymbol x, \phi, \boldsymbol \beta)$ is bijective. Thus since $\mathcal{C}[f](\boldsymbol x, \phi_0, \boldsymbol \beta) = g(\boldsymbol x, \phi_0, \boldsymbol \beta)$ we can conclude that $\mathcal{C}[f](\boldsymbol x, \phi, \boldsymbol \beta) = g(\boldsymbol x, \phi, \boldsymbol \beta)$, as desired. \end{proof}
\section{Final remarks}\label{S:concl}
In this work we determined range descriptions of the Conical Radon Transform. In the case of the restricted CRT, an important connection to the d'Alembertian operator is observed, and as a result the range description depends on the parity of the dimension. Range descriptions for the restricted CRT with arbitrary opening angle and dimension are formulated. Using these descriptions, the factorization of the CRT into the composition of the divergent beam and spherical mean transforms, and the symmetry of the CRT, we then obtain a description of the range of the full CRT.
Another interesting problem would be to describe the range of the transform when the vertices of the cones are restricted to a given surface, while the axes and opening angles are arbitrary. This is the natural set-up in Compton camera imaging \citep{ADHKK,bkr2020,Maxim,cargo,Fatma,TKK,X}, in which the vertices of the cones are at the detector's surface. A step toward this was made in \citep{Fatma}.
\end{document} | arXiv |
Consensus dynamics in online collaboration systems
Ilire Hasani-Mavriqi, Dominik Kowald, Denis Helic & Elisabeth Lex
Computational Social Networks volume 5, Article number: 2 (2018)
In this paper, we study the process of opinion dynamics and consensus building in online collaboration systems, in which users interact with each other following their common interests and their social profiles. Specifically, we are interested in how user similarity and user social status in the community, as well as the interplay of those two factors, influence the process of consensus dynamics.
For our study, we simulate the diffusion of opinions in collaboration systems using the well-known Naming Game model, which we extend by incorporating an interaction mechanism based on user similarity and user social status. We conduct our experiments on collaborative datasets extracted from the Web.
Our findings reveal that when users are guided by their similarity to other users, the process of consensus building in online collaboration systems is delayed. A suitable increase of influence of user social status on their actions can in turn facilitate this process.
In summary, our results suggest that achieving an optimal consensus building process in collaboration systems requires an appropriate balance between those two factors.
In this work, we study opinion dynamics and consensus building in online collaboration systems. In collaboration systems such as online encyclopediae, question & answering (Q&A) sites or discussion forums, users engage in complex interactions with others to reach a common goal, such as to write an article or to answer a difficult question. Often, this is a long-lasting iterative process, in which users share their knowledge and opinions, discuss problems and solutions, write and edit joint articles, or vote on each others' contributions. Ideally, this process converges to a shared common result. However, many times, consensus cannot be reached and a given topic or question remains unresolved within the community.
Understanding the factors that govern a consensus building process in online collaboration systems, as well as the mechanisms that may turn such a process into a success or a failure, is one of the pressing questions that our research community has already recognized. In many studies, researchers analyzed the underlying dynamics of opinion formation to identify key factors that contribute to consensus building (see, e.g., [1] for an overview). Such studies may be seen as a first step toward a more ambitious goal of developing tools that promote consensus building processes in online communities. For example, connecting otherwise non-interacting users by recommendations may lead to discussions resolving issues that hinder consensus.
To study consensus building processes, researchers frequently apply agent-based models. In an agent-based model, opinions of individual agents are represented as states and agents interact with each other following a set of predefined interaction rules. Through such interactions, agents alter their states until some criteria are met or the system reaches an equilibrium state. The interactions between agents give rise to a particular behavior of the whole population. The Naming Game model [2] is among the most prominent agent-based models for studying opinion dynamics and consensus building in groups of interacting agents. Often, such studies simulate opinion dynamics on synthetic networks, see for example [3,4,5,6,7,8,9,10].
In one of our previous works [11], we studied the influence of social status on consensus building in online collaboration systems. In that study, we assumed that the underlying network of previous interactions determines the constraints on the possible future interactions. In other words, only users who have already interacted with each other in the past were allowed to interact in the future. For example, user interactions on Reddit include users writing comments or voting on postings of other users. Such interactions allow us to extract user interaction networks from the system logs. In such networks, users are nodes and two users are connected by a link if they interacted in the past. However, in real-world online collaboration systems, there are certain user actions and interactions, which leave no or inconclusive traces in the system logs. For example, when users on Reddit simply read submissions but never leave comments or votes, in general we do not know which particular comments and postings these users actually have read. Moreover, many real-world datasets contain inaccuracies and are therefore inherently uncertain [12].
In this paper, we set out to study consensus building by adopting a model of interacting agents, whose future interactions are not restricted to the edges of the observed interaction network. Rather, we allow interactions between all pairs of users with varying preferences. In particular, we apply the Naming Game model and extend it to reflect (i) latent similarities between users and (ii) observed social status of users in real-world systems. Technically, with those two factors, we parametrize a probability distribution over pairs of users, which determines the likelihood of a future interaction between any two given users. This results in a flexible approach that enables us to explore and analyze various interesting and realistic configurations as opposed to restricting interactions to the edges of the observed network, which fixes the interaction probabilities to zero for previously non-interacting users.
To that end, we investigate consensus building within different society forms, which we characterize according to user similarity into open, modular and closed societies and according to social status into egalitarian, ranked and stratified societies. Open and closed societies represent two extreme cases based on the influence of user similarity: in an open society, any pairs of users can interact and exchange opinions with each other regardless of their similarity, whereas in a closed society only highly similar users interact with each other. Between these two society forms we define a modular society, in which probability of users' interaction is proportional to their similarity. Similarly, egalitarian and stratified societies represent two extreme cases governed by configuring the influence of social status: in an egalitarian society, the influence of social status is neglected, indicating that users can interact and exchange opinions with each other regardless of their social status, whereas in a stratified society, opinions can flow only from users with a higher social status to those with a lower social status. Between these two extreme cases, we can model different situations (ranked societies) by tuning the influence of social status so that opinions are very likely to flow from individuals with a higher social status to those with a lower social status, but with small probability they can also flow into the other direction.
For our experiments, we extract 17 collaboration networks from the real-world systems Reddit and StackExchange. For each of these networks, we first determine user similarity and their social status. We determine user similarity by calculating their regular equivalence [13]. With regular equivalence, we capture global user similarities between non-interacting users as opposed to local similarity measures, which take into account only the immediate neighbors of a node. To determine social status of users, we use the built-in scoring schemes of Reddit and StackExchange. With these networks in place, we simulate opinion spreading among users to study how the process of consensus building is governed by configurable influences of user similarity, user social status and a complex interplay between those two factors.
The contributions of our work are twofold. First, we extend the Naming Game model with an interaction mechanism that is based on user similarities and their social status. With this extension, we conduct experiments on empirical collaboration networks and contribute in this way to the limited line of research on opinion dynamics in empirical networks. Second, our experimental results reveal interesting and non-trivial findings, namely, that user similarity and user social status are opposing forces with respect to consensus building. Whereas user social status may speed up the emergence of consensus, user similarities typically hinder that process. Thus, for an efficient consensus building the negative effect of similarity needs to be carefully compensated by the positive effect of social status.
At present, we identify three main lines of research related to our work: (i) social impact theory, (ii) works that study the interplay between user similarity and social status and its impact on user behavior in online systems, and (iii) opinion dynamics in interaction networks.
Social impact theory
In the field of social psychology, the social impact theory of Latané [14] attempts to explain how individuals are influenced by their social environments. According to it, the social impact felt by individuals can be explained in terms of social forces, to which they are exposed [15, 16]. Latané [14] compares these social forces to physical forces, such as electromagnetic forces or forces that govern the transmission of light, sound and gravity [15]. In this analogy, social forces felt by individuals are moderated by the strength, immediacy and number of other people present in their social environment. In relation to our work, the influence of users social status in our experiments refers to the strength of the impact of other people (e.g., their authority or power of persuasion), whereas the user similarity is analogous to the immediacy of the others (e.g., their closeness in space or time) [17]. Mathematically, the social impact felt by an individual, known also as a target, is a multiplicative function of the three features of a source person and is given in the following form: \({\text{Impact}} = f(S \cdot I \cdot N),\) where Impact is the social impact on the target person and S, I, and N, are the strength, immediacy and number of the source persons, respectively [14, 15]. The social impact function constitutes the theoretical basis for our agent-based model and its multiplicative effects.
Connecting the social impact theory with agent-based modeling has been also the aim of previous research [17], in which researchers applied computer simulations to examine the extent to which group-level phenomena are driven by individual-level processes. In synthetic datasets that represent sets of individuals, they studied the attitude change of individuals and group polarization with respect to binary opinion states. Similarly, in our work we apply agent-based modeling. However, we perform experiments on empirical datasets from online collaboration systems and consider more than two opinion states.
Recent work followed a theory-driven approach to conduct empirical analysis of Twitter data that supported the assumptions of the social impact theory [18]. In our work, however, we study the process of opinion dynamics in online collaboration systems, by applying a data-driven model as well as by simulating how opinions spread in those systems.
Cultural dynamics in society classes and their role on the adaption of fashion are the focus of the work of the sociologist Georg Simmel [19]. According to Simmel's theory the latest fashion is defined by the higher society classes and the lower ones imitate and copy the fashion from them. As soon as this happens, higher classes move from the current fashion and adopt a new style to differentiate them from the masses. Similarly, in our analysis, we define higher and lower social status classes and analyze the opinion flow between them. The effect of lower status agents inflicting opinions to the higher ones, observed in our experiments, is comparable to the phenomenon of imitation, whereas the effect of limiting the communication from low-status agents to high-status agents reflects the phenomenon of differentiation.
The work presented in [20] applies an agent-based model to simulate the effects of Simmel's theory by exploring its spatial dimension. While the authors use synthetic data and synthetic agent social statuses, we use empirical datasets from Reddit and StackExchange and apply the empirical reputation scores provided by both systems as a proxy for social status.
Research on how the position and social status of a node influence the network originates from network exchange theory [21,22,23]. Similarly, we study how the social status of a node in an interaction network affects the spread of opinion that leads to consensus building. Additionally, in our work we define classes of nodes based on the social status and determine how their interaction affects the process of consensus building.
The influence of the interplay between user similarity and social status on user behavior in online systems
In our previous work [11], we studied the impact of social status on opinion dynamics and consensus building in online collaboration systems. In contrast, in the present work, we study how latent user similarity and the interplay between the user similarity and user social status impact the process of consensus building.
In [24] the authors present a framework for link prediction in evolving networks and show that popularity is just one dimension of attractiveness, in the context of link creation, and another important dimension is the similarity between users. In other words, user similarity and user popularity are two main forces that drive people to form links in various networks. In our work, we also study the effect of user similarity and user social status, but in relation to dynamical processes that take place in online collaboration systems.
User similarity in online social networks has also been studied in [25]. Here, the authors present a method for evaluating social networks according to network connections and profile attributes. In [26], the effect of similarity (in terms of user characteristics) and social status, as well as their interplay is studied on online evaluations carried out among users. They found that when two users are similar social status plays less of a role when users evaluate each other. Major difference to our work is that the authors calculate user similarity as cosine similarity between user action vectors. User actions are, for example, editing an article on Wikipedia, asking or answering a question on a Q&A site or rating a review on Epinions. In our work, we calculate user similarities by applying the regular equivalence that captures latent similarities even between non-interacting users and users who do not share common actions. Similar work to [26] is described in [27], with the difference that the authors consider only the relative social status between two users (i.e., their comparative levels of status in the group) when studying how users evaluate each other. The authors found that users with comparable status hesitate to give positive evaluations to each other.
Opinion dynamics in interaction networks
Research on opinion dynamics in interaction networks builds upon insights from the field of statistical physics [1, 28]. In this field, social processes of interaction among individuals are modeled mathematically by representing how changes in the local and global state of an individual and a group take place. A well-known model following this approach, the Naming Game, has been introduced in the context of linguistics [2, 29] with the aim to demonstrate how autonomous agents can achieve a global agreement through pairwise communications without central coordination [30].
Recent research [9, 31] applies the mean field principle while using the Naming Game model for their experiments. For example, the work in [31] studies the impact of learning and the resistance toward learning (as two opposing factors) on consensus building among a population of agents. In [9], the authors consider the case of an arbitrary number of agent opinions and the presence of zealots in the Naming Game. They provide a methodology to numerically calculate critical points in two special cases: the case in which there exist zealots of only one type and the case in which there are an equal number of zealots for each opinion. Similarly to our approach, the work of Brigatti et al. [3] describes a variation of the Naming Game that incorporates the agent social status scores. In the beginning, social status is randomly distributed among the agents via a Gaussian distribution. Successful communication increases the agent social status and during each iteration, the agent with the higher social status acts as a teacher and the one with the lower status as a learner. In contrast to our work, the dynamic social status scores are synthetically created whereas we adopt empirical status scores.
We base our model on the Naming Game [2, 4, 32,33,34]. The Naming Game is an agent-based model, in which agents are represented as nodes in a network. Agents interact with each other by following a set of predefined rules, with the aim of giving a name to a single unknown object. Consensus is reached when all agents agree on a single name for the object.
Each agent possesses an inventory of names or words (i.e., opinions), which is initially empty. At each interaction step, two agents are randomly chosen to meet (i.e., to communicate), where one of them is designated the role of the speaker while the other one is the listener. If the speaker's inventory is empty, a word is invented and it is communicated to the listener, or otherwise the speaker selects randomly a word from her inventory and communicates it to the listener. If the communicated word is unknown to the listener (i.e., it does not exist in the listener's inventory), the listener adds this word to her inventory. Contrarily, if the communicated word is known to the listener, both speaker and listener agree on that word and drop all other words from their inventories.
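To make these rules concrete, a single speaker-listener interaction can be sketched in Python as follows (a minimal illustration; variable names are ours and the actual implementation may differ):

import random

def naming_game_step(speaker_inventory, listener_inventory, invent_word):
    # If the speaker knows no word yet, she invents one.
    if not speaker_inventory:
        speaker_inventory.add(invent_word())
    # The speaker utters a randomly chosen word from her inventory.
    word = random.choice(list(speaker_inventory))
    if word in listener_inventory:
        # Success: both agents keep only the agreed word.
        speaker_inventory.clear()
        listener_inventory.clear()
        speaker_inventory.add(word)
        listener_inventory.add(word)
        return True
    # Failure: the listener merely learns the new word.
    listener_inventory.add(word)
    return False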
In this work, we extend the Naming Game with an interaction mechanism that accounts for latent user similarities and social status. In [24], the authors have identified user similarity and user popularity as two main forces that drive people to form links in various networks. User similarity is a property of pairs of users whereas social status is a property of individual users. In general, in collaboration systems, users tend to connect with similar users, i.e., with those sharing similar interests, tastes or social backgrounds, as well as with users of a higher social status or a higher popularity [35].
Regular equivalence
To calculate the user similarity, we apply similarity measures from graph theory and social network analysis. In these fields, there are two main types of similarity: (i) structural similarity, and (ii) regular equivalence [13]. In particular, two nodes in a network are structurally similar if they share many common neighbors. On the other hand, two nodes are regularly equivalent if they have common neighbors that are themselves similar even if they do not share the same neighbors. Thus, regular equivalence quantifies not only observable but also latent similarities.
With regular equivalence, the basic idea is to define a similarity score \(\sigma _{ij}\) between nodes i and j, such that i and j are similar if i has a neighbor k that is similar to j [13]:
$$\begin{aligned} \sigma _{ij} = \alpha \sum _{k} A_{ik} \sigma _{kj} + \delta _{ij}, \end{aligned}$$
where \(\alpha\) is a constant known as damping factor, \(A_{ik}\) are elements of the adjacency matrix \(\mathbf {A}\) (with \(A_{ij}\ge 0\) if i and j are connected by an edge with a positive weight and \(A_{ij}=0\) otherwise), \(\sigma _{kj}\) is the similarity score between k and j, and \(\delta _{ij}\) is the Kronecker delta function, which is 1 for \(i=j\) and 0 otherwise. The damping factor \(\alpha\) should satisfy \(\alpha < 1/\kappa _{1}\) in order for similarity scores to converge, where \(\kappa _{1}\) is the largest eigenvalue of the adjacency matrix. The recursive calculation of the regular equivalence counts paths of all lengths between pairs of nodes. It assigns high similarity values to nodes that either share many common neighbors or to nodes that are connected by many longer paths, or both. By choosing \(\alpha\) closer to \(1/\kappa _{1}\), we assign more weight to longer paths, whereas smaller \(\alpha\) values prefer shorter paths. Since we want to capture as much of latent similarities as possible, we set \(\alpha = 0.9/\kappa _{1}\).
Probabilistic Meeting Rule—illustrative example. Top row: we depict an interaction network with five users, the social status of users (\(s_1\) to \(s_5\)) and the adjacency matrix \(\varvec{A}.\) All edge weights in \(\varvec{A}\) are 1, indicating that the corresponding users interacted only once with each other in the past. If we restrict meetings to the edges of the interaction network, the meeting probabilities are symmetric and equal to the entries of \(\varvec{A}.\) Thus, the users 1 and 4 cannot participate in a meeting since \(p_{14}=p_{41}=0\) (elements marked in red in \(\varvec{A}\)). The average meeting probability \(p_m\) corresponds to the network density and evaluates to 0.5. Middle row: we calculate the regular equivalence matrix \(\varvec{\sigma }\) and normalize it with the degrees and the minimal neighbor similarity (normalization results in asymmetric similarities). We set closeness factor \(\gamma =1/2\) (modular society) and calculate the matrix of meeting probabilities \(\varvec{P_{\sigma }}\) (we set zeros on the diagonal since each meeting requires two users). The rows correspond to the meeting probabilities of a user acting as the speaker. We observe now non-zero probabilities between users who are not connected by an edge. For example, for users 1 and 4 (cf. red-marked elements in \(\varvec{P_{\sigma }}\)), the meeting probability is \(p_{14}=0.31\) (user 1 acts as the speaker) and \(p_{41}=0.54\) (user 4 acts as the speaker). In this setting, the average meeting probability is significantly higher than previously \(p_m=0.69.\) Bottom row: the matrix \(\varvec{S}\) keeps the (asymmetric) social status differences between users. Again, the rows correspond to users acting as the speaker in a meeting. We set stratification factor \(\beta =1/2\) (ranked society) and calculate the matrix of the meeting probabilities \(\varvec{P_S}.\) The social status mechanism results in non-zero probabilities between all pairs of users. For example, for users 1 and 4 (cf. red-marked elements in \(\varvec{P_S}\)), the meeting probability is \(p_{14}=0.22\) (user 1 is the speaker) and \(p_{41}=1\) (user 4 is the speaker). The average meeting probability for this configuration is \(p_m=0.71.\) Finally, if similarity as well as social status rules apply, the final meeting probabilities are calculated by element-wise multiplication of \(\varvec{P_\sigma }\) and \(\varvec{P_S}\)
The formula for similarity scores tends to give higher similarity to high-degree nodes due to their many neighbors [13]. A standard approach to remedy this situation is to normalize the scores by dividing them with the node degree.
Furthermore, we once more normalize the similarity values by subtracting for each user the minimum similarity of her direct neighbors. This lets us take into account the social adaptation of individual agents to the local norms induced by their neighbors [36]. As a result, we have positive similarity values only for the direct neighbors, as well as for all other users that are more similar than the direct neighbors (see Fig. 1 for an example of regular equivalence calculation).
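For illustration, the similarity computation described above (the recursive scores followed by the two normalization steps) can be sketched with NumPy as follows. Here \(\mathbf{A}\) is assumed to be a dense weighted adjacency matrix, and dividing row \(i\) by the degree of node \(i\) is one possible reading of the degree normalization; this is a sketch rather than our exact implementation:

import numpy as np

def regular_equivalence_similarity(A, damping=0.9):
    # Closed form of sigma = alpha * A * sigma + I, i.e. sigma = (I - alpha*A)^(-1).
    kappa_1 = np.max(np.abs(np.linalg.eigvals(A)))
    alpha = damping / kappa_1                  # alpha = 0.9 / kappa_1 as in the text
    n = A.shape[0]
    sigma = np.linalg.inv(np.eye(n) - alpha * A)
    # Normalize by node degree to counter the bias towards high-degree nodes.
    degrees = np.maximum(A.sum(axis=1), 1e-12)
    sigma = sigma / degrees[:, None]
    # Subtract, for each user, the minimum similarity among her direct neighbors.
    for i in range(n):
        neighbors = np.nonzero(A[i])[0]
        if neighbors.size > 0:
            sigma[i] -= sigma[i, neighbors].min()
    return sigma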
Probabilistic Meeting Rule
Algorithm 1 describes the procedure of our extension of the Naming Game. In particular, we modify the meeting rule between two agents and replace it with our Probabilistic Meeting Rule, which defines the probability of a meeting taking place:
$$\begin{aligned} p_{sl} = \underbrace{{\text{min}}\, (1, e^{\gamma \sigma _{sl}})}_{{{\text{similarity}}}} \cdot \underbrace{{\text{min}}\, (1, e^{\beta (s_s - s_l)})}_{{{\text{social status}}}}. \end{aligned}$$
Here, \(\sigma _{sl}\) is the similarity score between speaker s and listener l, \(s_s\) is the speaker's social status, \(s_l\) is the listener's social status, \(\gamma \ge 0\) is the closeness factor and \(\beta \ge 0\) is the stratification factor. Note that those two factors are the controlling parameters in our model.
The Probabilistic Meeting Rule is a flexible rule that enables us to model various scenarios and society forms. The first term in the equation (\({\text{min}}\, (1, e^{\gamma \sigma _{sl}})\)) controls the degree of openness of a society. It evaluates to 1 for \(\gamma =0\). We refer to this scenario as open society, in which any pair of users (mean field approach) can interact independently of how similar are they to each other. In other words, in an open society, the similarity between users does not play a role and everybody can interact with everyone else. Open society thus reflects the Solaria world introduced by Watts [37, 38].
By increasing \(\gamma\), the influence of the user similarity becomes stronger indicating a so-called modular society (i.e., the first term in the Probabilistic Meeting Rule takes on a value between 0 and 1). In this scenario, highly similar users interact with each other with a high probability, whereas less similar users still interact with each other but with a smaller probability than highly similar users. By further increasing the closeness factor we arrive at a closed society, in which users interact only with other highly similar users and the interaction probability between less similar users evaluates to a value close to 0. This scenario is analogous to the Watts' caveman world, in which users who live in "caves" (i.e., closed communities) interact with each other but they never or rarely interact with users from other "caves" [37, 38].
Similarly, with the stratification factor, we can configure the level of influence of the users' social status on the probabilities of their interactions. In particular, if the speaker's social status is higher than the listener's, the second term (\({\text{min}}\, (1, e^{\beta (s_s - s_l)})\)) in Eq. 2 takes the value of 1. This means that a meeting between a speaker with a higher social status than the listener always takes place. When the listener has a higher social status than the speaker, several scenarios are possible, depending on the value of the stratification factor. For example, for \(\beta =0\), the second term evaluates to 1 and we call this scenario an egalitarian society. In an egalitarian society, everyone can talk to everyone else independently of their social status. If we increase the stratification factor, the second term starts to decay and, in general, takes a value between 0 and 1. We refer to this situation as a ranked society, in which opinions always flow from individuals with a higher social status to those with a lower social status. Flow in the other direction is also possible, however only with a small probability. By further increasing \(\beta\), we reach a situation where the second term always evaluates to a value close to 0 if the speaker's social status is smaller than the listener's. In other words, we have reached what we call a stratified society, where meetings take place only if the speaker's social status is higher than the listener's but never in the opposite case. Thus, with varying configurations of both terms, we can explore nine different combinations of the above-mentioned scenarios.
In Fig. 1, we show an illustrative example for the calculation of the meeting probabilities for a modular, ranked society. In general, we observe two effects of our approach: (i) the meeting probabilities increase as compared to a model which restricts interaction to the edges of the interaction network, and (ii) the meeting probabilities are asymmetric.
Datasets and experiments
In our experiments, we use 17 empirical datasets from Reddit and StackExchange, selected at random to ensure a broad coverage of different topics.
Extracting interaction networks
In Reddit, registered users post new submissions (typically links or texts), comment and discuss existing submissions, or create new communities (so-called subreddits), which revolve around a specific topic. For our experiments, we parsed the dumps of 16 different subreddits from the year 2014, which belong to four main categories (Footnote 1): Movies (Documentaries, True film, Movie details and Harry Potter), Politics (Political discussion, Political humor, Neutral politics and World politics), Programming (Julia, Python, Ruby and Compsci) and Sports (Skiing, Tennis, Badminton and Volleyball). To construct the Reddit interaction network, we extract the users' contributions from the submission (Footnote 2) and comment (Footnote 3) dumps. We then create an interaction network, in which users are represented as nodes and two users are connected by an edge if one user commented on the submission of another one, or if they both commented on the same submission of a third user. For each edge, we set a weight, which corresponds to the number of interactions between two given users.
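The construction of such a co-commenting network can be sketched as follows with NetworkX; the data structures for the parsed dumps (a mapping from submission id to author, and a list of comment records) are conveniences of this illustration, and the counting of repeated interactions is simplified.

```python
import itertools
import networkx as nx

def build_interaction_network(submissions, comments):
    """Build a weighted interaction network from parsed Reddit dumps.

    submissions : dict mapping submission_id -> author
    comments    : list of (submission_id, commenter) tuples
    """
    G = nx.Graph()

    def add_interaction(u, v):
        if u == v:
            return
        weight = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=weight)

    commenters_per_submission = {}
    for sub_id, commenter in comments:
        commenters_per_submission.setdefault(sub_id, set()).add(commenter)
        if sub_id in submissions:
            add_interaction(commenter, submissions[sub_id])   # commenter <-> submitter

    for sub_id, commenters in commenters_per_submission.items():
        for u, v in itertools.combinations(sorted(commenters), 2):
            add_interaction(u, v)                             # co-commenters on the same submission
    return G
```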
StackExchange (Footnote 4) is a Q & A site, where users collaboratively solve problems through asking and answering questions in posts. Similarly to the Reddit networks, we construct the StackExchange interaction networks to represent co-posting activities. Specifically, two nodes (i.e., users) are connected via a weighted edge if the users contributed to the same question. Correspondingly, the edge weight encodes the number of common contributions. We use the following StackExchange editions covering different topics for our experiments: English, Cooking, Academia, Movies, Politics, Music, German, Japanese, History, Chinese, Spanish, French, Sports (Footnote 5).
Finally, in all networks, we extract the largest connected component and perform all experiments on that component. We give the basic statistics of our empirical datasets such as the number of nodes (n) and edges (m), as well as average node degree (d), average social status (s), average edge weight (e) and density (\(\rho\)) in Table 1. The network density \(\rho\), calculated as \(2m/(n(n-1))\), is defined as the fraction of all possible edges that are present in a network. In the context of our model, density can be interpreted as an average meeting probability if meetings are restricted only to the edges of the network. In other words, the probability that a randomly selected pair of users participates in a joint meeting equals, on average, the network density.
Table 1 Dataset characteristics
In practice, the majority of social and other networks such as interaction networks are extremely sparse, with densities well below 1%. Thus, our empirical interaction networks indeed constitute a very rigid constraint on any possible interactions.
Determining social status
To determine the social status scores for users, we exploit the built-in user rewarding systems of Reddit and StackExchange. In Reddit, users can accumulate so-called "karma" scores that rise if their posts receive good ratings from other users. Thus, karma scores reflect how the community perceives a user and we use them as a proxy for social status. Since karma scores are not included in the publicly available Reddit dumps, we crawled those scores using the public API (Footnote 6) and the Python-based PRAW API wrapper (Footnote 7). In StackExchange, on the other hand, users are rewarded by the community with reputation scores for providing not only valuable answers but also valuable questions. As shown in [39], the scores given by this user-rewarding system highly correlate with the quality of the user content and thus, we assume that a high-reputation user contributes high-quality content to the community. Reputation scores are provided in the dataset dumps and we use them as a proxy for social status in StackExchange systems. This setup allows us to investigate the effect of social status from two viewpoints: in Reddit, social status is a reflection of how other persons experience a given user in the society (i.e., charisma), whereas in StackExchange, social status is more related to a position that users earn in a society based on the quality of their work (i.e., reputation).
Our experiments consist of six steps. First, for each interaction network, we construct a weighted adjacency matrix \(\mathbf {A}\) by setting \(A_{ij}\) to the edge weight between users i and j, if they are connected or to 0 otherwise. Second, we compute the matrix of similarity scores using the methodology described in "Methodology" section.
Third, we compute the closeness factor \(\gamma\) and the stratification factor \(\beta\) by estimating the expected meeting probability in our networks using a standard Monte Carlo method [11]. This enables us to control the communication intensity between users. For the closeness factor, we determine two parameter values to depict modular and closed societies by controlling the percentage of successful meetings induced by the first (similarity) factor in our multiplicative Probabilistic Meeting Rule: (i) for the modular society, we determine \(\gamma\) such that approximately 75% of all possible meetings (up to statistical fluctuations) take place, (ii) for the closed society, we determine \(\gamma\) for which approximately half of all meetings are successful on average. In addition, for the open society, in which all meetings take place, we set \(\gamma =0.\)
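A minimal sketch of such a Monte Carlo calibration is given below, assuming the normalized similarity matrix is available; the grid over \(\gamma\) and the sample size are illustrative choices, not values taken from the paper.

```python
import numpy as np

def estimate_success_rate(sigma, gamma, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the fraction of randomly sampled meetings that
    succeed when only the similarity term of the meeting rule is applied."""
    rng = np.random.default_rng(seed)
    n = sigma.shape[0]
    speakers = rng.integers(0, n, n_samples)
    listeners = rng.integers(0, n, n_samples)
    keep = speakers != listeners                      # a meeting needs two distinct users
    p = np.minimum(1.0, np.exp(gamma * sigma[speakers[keep], listeners[keep]]))
    return (rng.random(p.size) < p).mean()

def calibrate_gamma(sigma, target=0.75, grid=np.linspace(0.0, 5.0, 101)):
    """Pick the closeness factor whose expected meeting probability is closest
    to the target (e.g. ~0.75 for a modular, ~0.5 for a closed society)."""
    rates = np.array([estimate_success_rate(sigma, g) for g in grid])
    return grid[np.argmin(np.abs(rates - target))]
```

The stratification factor \(\beta\) can be calibrated analogously by sampling speaker–listener pairs and applying only the social status term of the rule.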
An average meeting probability of 50% is two orders of magnitude higher than the average network density of our empirical interaction networks (0.27%) (cf. Table 1). Thus, even though our model biases the user interactions toward more similar users, it is substantially less restrictive than an alternative model based solely on the interaction network. Another (simpler) alternative to avoid the restrictions of the interaction networks would be, for example, to allow every second interaction to take place between arbitrary pairs of (non-adjacent) users. Nevertheless, this approach would not allow us to induce similarity or social status biases.
Similarly to the closeness factor, we also estimate two values for the stratification factor \(\beta\) that correspond to the ranked and stratified society forms. Here, we control the opinion flow from low to high social status users and set \(\beta\) such that on average, 50% of meetings take place (ranked society) and so that none of the meetings takes place (stratified society) (again we only control the second social status factor in the multiplicative meeting rule). In addition, by setting \(\beta =0\) we achieve the egalitarian society, in which all meetings take place. Note that we define high social status users as users with a social status above the 90th percentile, whereas low social status users have a social status below the 90th percentile.
Fourth, we initialize agents' inventories by randomly selecting three words from a set of unique words for each agent. Fifth, we create a set of meetings, i.e., randomly selected pairs of users. From this set, we generate meeting sequences by picking meetings at random without repetition for each possible combination of closeness factor and stratification factor. This ensures that the randomness due to the meeting sequence remains insignificant across the various values of \(\gamma\) and \(\beta.\) We determine the length of the meeting sequence c (i.e., the maximum number of user interactions) based on the number of users in a given dataset; c is two orders of magnitude larger than the number of users n. For each configuration, we simulate the meetings 100 times and report the averaged simulation results.
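The core simulation loop can be sketched as follows. We assume the standard Naming Game update rule (the listener adopts an unknown uttered opinion, and both agents collapse their inventories onto a commonly known one), so this is only an approximation of Algorithm 1, whose full details are given earlier in the paper.

```python
import random

def run_simulation(meetings, inventories, P, seed=42):
    """Minimal sweep over a pre-generated meeting sequence.

    meetings    : list of (speaker, listener) index pairs
    inventories : dict user -> set of opinions (initialised with 3 random words)
    P           : matrix of meeting probabilities from the Probabilistic Meeting Rule
    """
    rng = random.Random(seed)
    for s, l in meetings:
        if rng.random() >= P[s][l]:
            continue                                   # the meeting does not take place
        word = rng.choice(sorted(inventories[s]))      # the speaker utters one of her opinions
        if word in inventories[l]:                     # success: both collapse to that opinion
            inventories[s] = {word}
            inventories[l] = {word}
        else:                                          # failure: the listener adopts the opinion
            inventories[l].add(word)
    return inventories
```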
Finally, we store the state of the agents' network after every c/100 interactions of our 100 runs, as well as for all values of the closeness factor and stratification factor. This enables us to investigate the number of distinct opinions adopted by each agent during the interactions. Additionally, we can derive the percentage of agents that have reached consensus on a particular opinion.
To ensure the reproducibility of our experiments, we provide our simulation framework as an open-source project. The source code can be downloaded from our Git repository (Footnote 8).
The influence of user similarity and social status on consensus dynamics
We show our simulation results in Fig. 2. The plots in Fig. 2a, b depict the evolution of the agents' mean inventory size (over 100 runs) as a function of the simulation progress for the Reddit Movies and StackExchange English datasets, respectively. All other empirical datasets exhibit comparable results, so we omit them from Fig. 2 and provide them in the Appendix in Fig. 5. Each line in the plots corresponds to the results obtained using one particular closeness factor and stratification factor. Line colors depict different values of the closeness factor, whereas line markers illustrate values of the stratification factor.
Due to our Probabilistic Meeting Rule, whenever we set one of the factors to 0, we can study the impact of the other factor on the process of consensus building. Thus, by analyzing society forms with \(\beta =0\) (egalitarian) and varying closeness factor, we can investigate the effect of user similarity on the consensus building process. Our results reveal that in (modular, egalitarian) and (closed, egalitarian) societies (cf. blue and red lines with circle markers in Fig. 2) consensus is slowed down as compared to (open, egalitarian), which represents a society where all meetings take place.
The influence of user similarity and social status on consensus dynamics. The plots show the mean size (100 runs) of the agents' inventories (y-axes) in relation to the number of interactions (x-axes) for Reddit Movies (a) and StackExchange English (b) datasets. Each line represents results for one particular \(\gamma\) and \(\beta\). The line colors represent three values of \(\gamma\) and line markers three different values of \(\beta.\) We notice that in (modular, egalitarian) and (closed, egalitarian) societies (marked with blue and red lines with circle markers), opinion convergence rates are slower than in (open, egalitarian) society marked with green and circle markers. This indicates that as soon as user similarity plays a role, consensus building is delayed. However, in (modular, ranked) and (closed, ranked) societies (blue and red lines with diamond markers) we observe faster consensus building. This means that by increasing the effect of social status, we are able to partially compensate the negative effect of similarity. By further increasing the impact of the social status through the stratification factor, the positive effect of social status dissolves. This is visible in the green, blue and red lines with star markers corresponding to (open, stratified), (modular, stratified) and (closed, stratified) societies. Thus, for a faster consensus building, a careful balancing between the influence of similarity and social status is needed. a Reddit Movies, b StackExchange English
Thus, as soon as user similarity starts to exhibit influence on the meeting probabilities, the consensus building process is delayed. Although the average meeting probability in modular society forms is still very high, even this slight preference toward meeting with more similar users is able to slow down the spread of opinions.
On the other hand, a weak increase in the influence of the user social status is beneficial for the consensus. In (modular, ranked) and (closed, ranked) societies (cf. blue and red lines with diamond markers in Fig. 2), we observe faster consensus building. Thus, by increasing the effect of social status, we can compensate the initial negative effect of similarity.
Nevertheless, the positive effect of social status diminishes quickly. In (modular, stratified) and (closed, stratified) societies (cf. blue and red lines with star markers in Fig. 2), the convergence rate again slows down. Thus, an initially positive effect of social status in ranked society forms undergoes a phase transition toward a negative effect in stratified societies.
Our simulation results indicate that user similarity and social status exhibit opposing forces with respect to consensus building in online collaborative systems. While an increase in the influence of user similarity has a negative effect, the social status exhibits both the phase of a positive effect as well as the phase of a negative effect. Consequently, an optimal configuration for a faster consensus requires a careful balance between those two factors.
Coarse analysis
We consider an average agent inventory size of 1 as a first criterion for consensus reached among agents (cf. Fig. 2). Further, we aim to determine the number of distinct opinions present in the agents' network and the consensus strength during the interactions. We define the consensus strength as the percentage of agents having one single opinion in their inventories over the course of the simulations. The consensus strength reaches its maximum when all agents unanimously agree on one particular opinion.
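A possible reading of these two quantities in code is sketched below; interpreting the consensus strength as the share of agents whose single remaining opinion is the currently most widespread one is our interpretation of the definition above.

```python
from collections import Counter

def mean_inventory_size(inventories):
    """Average number of opinions per agent (the quantity plotted in Fig. 2)."""
    return sum(len(inv) for inv in inventories.values()) / len(inventories)

def consensus_strength(inventories):
    """Percentage of agents whose inventory holds exactly one opinion,
    counted for the single opinion that is currently the most widespread one."""
    singles = [next(iter(inv)) for inv in inventories.values() if len(inv) == 1]
    if not singles:
        return 0.0
    _, count = Counter(singles).most_common(1)[0]
    return 100.0 * count / len(inventories)
```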
Figure 3 shows consensus strength (averaged over 100 runs) over the number of interactions for the Reddit Movies (Fig. 3a) and StackExchange English (Fig. 3b) datasets. Analogous to Fig. 2, each line represents results for one particular \(\gamma\) and \(\beta.\) The line colors represent three values of \(\gamma\) and line markers three different values of \(\beta.\)
For almost all societies except for (open, stratified), (modular, stratified) and (closed, stratified) (cf. green, blue and red lines with star markers in Fig. 3), there is a saturation of the consensus strength visible in the plots. The growth curves resemble logistic growth curves with a phase of quick initial growth and a saturation phase as the process reaches its equilibrium. The growth rates of the consensus strength lines determine how quickly agents reach consensus. The growth rates are faster for (open, ranked), (modular, ranked) and (closed, ranked) (cf. green, blue and red lines with diamond markers) compared to (open, egalitarian), (modular, egalitarian) and (closed, egalitarian) societies (cf. green, blue and red lines with circle markers). These results complement our findings presented in the previous subsection.
Coarse analysis. Percentages of consensus strength (averaged over 100 runs) reached among agents on one particular opinion (y-axes) are shown as a function of the number of interactions (x-axes) for different values of \(\gamma\) and \(\beta.\) The line colors illustrate three different values of \(\gamma\) and line markers three values of \(\beta.\) The plot in a illustrates Reddit Movies results, whereas the plot in b presents the results of StackExchange English dataset. Each line represents results for one particular configuration of \(\gamma\) and \(\beta\). We consider that the consensus strength reaches its maximum when all agents unanimously agree on one particular opinion. With each interaction, agents exchange opinions and the consensus strength increases, but with different growth rates for different configurations of \(\gamma\) and \(\beta\). The growth rates of the consensus strength lines determine how quickly agents reach consensus. A saturation of consensus strength lines is visible for almost all society forms except for (open, stratified), (modular, stratified) and (closed, stratified) (cf. green, blue and red lines with star markers). The growth rates are faster for (open, ranked), (modular, ranked) and (closed, ranked) (cf. green, blue and red lines with diamond markers) compared to (open, egalitarian), (modular, egalitarian) and (closed, egalitarian) societies (cf. green, blue and red lines with circle markers). These results complement our previous findings presented in Fig. 2 and reveal that the appropriate balance between user similarity and social status enables faster consensus strength growth rates in online collaborative systems. a Reddit Movies, b StackExchange English
Namely, with the increase of the influence of user similarity on the meeting probabilities, consensus building among agents is delayed. This negative effect is compensated to some extent by the increase of the influence of social status (ranked societies). A further increase of the influence of social status, however, hinders consensus building, which means that no saturation state can be observed in the case of stratified societies (at least not within the number of interactions that we simulate).
Our coarse analysis reveals that the optimal balance between user similarity and social status enables faster growth rates toward consensus building in our datasets.
Communication intensity between social classes
We are now interested in identifying the causes of these observed effects. For this, we investigate the communication intensity (i.e., the number of successful meetings) across the user social classes that we introduced earlier, namely the high social status class with users above the 90th percentile and the low social status class with all other users.
In our previous study [11], we found that the direction of opinion flow impacts how fast opinions converge. Specifically, the flow from low social status to high social status users, as well as from low social status users to other low social status users, is crucial. As described in [40], high social status users are typically able to impose their opinions on other users in a system. Thus, whenever the opinions of these high social status users frequently change, the system as a whole experiences oscillatory behavior and cannot reach consensus. Due to the heterogeneous distributions of user social status in many systems, the number of low social status users is substantially higher than the number of high social status users. Therefore, whenever the communication intensity in the direction from low social status users to high social status users is high, low social status users are able to cause oscillations in the opinions of high social status users and the consensus building process is delayed.
On the other hand, it is important that the communication from low social status users to other low social status users remains unhindered. Due to their high number, low social status users have to be able to communicate intensely among themselves to spread opinions. Low social status users cannot rely on a small number of high social status users to reach many low social status users and distribute opinions. In other words, the process of consensus building among low social status users is a two-phase process. First, high social status users impose their opinions onto a small fraction of low social status users, and second, those opinions are subsequently spread among low social status users themselves.
These mechanisms can potentially explain the results of our experiments. For example, due to their numerous previous interactions, high social status users are on average more similar to other users than low social status users are. Therefore, whenever user similarity is the driving force behind meetings taking place, we expect users with high social status to participate in a large number of meetings.
On the other hand, the number of low social status users is high and thus the second meeting participant is very likely a low social status user. Thus, we expect to observe many meetings with one high social status and one low social status user. This increases the communication intensity between low and high social status users, which leads to increased opinion fluctuations for high social status users. This, in turn, can slow down the consensus building process.
To further investigate this hypothesis, we analyze the percentages of users' interactions that turn into successful meetings after applying our Probabilistic Meeting Rule. Specifically, we analyze two important communication directions and their intensities: (i) low-to-high and (ii) low-to-low, where the first term refers to the speaker and the second to the listener.
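Such an analysis can be sketched as follows; the function assumes a list of attempted meetings together with a parallel list of flags marking whether each meeting actually took place, which is a convenience of this illustration rather than the paper's exact bookkeeping.

```python
from collections import Counter
import numpy as np

def communication_intensity(meetings, took_place, status, percentile=90):
    """Percentage of attempted meetings that actually took place, split by the
    (speaker class, listener class) direction, e.g. ('low', 'high')."""
    threshold = np.percentile(status, percentile)
    social_class = lambda u: "high" if status[u] > threshold else "low"
    attempted, successful = Counter(), Counter()
    for (s, l), ok in zip(meetings, took_place):
        direction = (social_class(s), social_class(l))
        attempted[direction] += 1
        successful[direction] += int(ok)
    return {d: 100.0 * successful[d] / attempted[d] for d in attempted}
```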
In Fig. 4, we show a heatmap with communication intensities between social classes for all nine combinations of society forms for the StackExchange English dataset. Again, here we only present the results for this dataset, since in all other datasets we obtain comparable results; we provide them in Appendix in Figs. 7 and 8. The heatmap in Fig. 4a depicts the percentages of successful meetings in the low-to-high class of users, whereas the heatmap in Fig. 4b depicts the percentages of successful meetings taking place in the low-to-low class. Columns of the heatmaps show the society forms based on similarity (i.e., open, modular and closed) and rows show the social status society forms (i.e., egalitarian, ranked and stratified).
The communication intensity from low to high social status users (cf. Fig. 4a) decreases when either the influence of user similarity (switch from open to modular society) or of social status (switch from egalitarian to ranked society) is increased. In the ranked society, we observe a slightly stronger reduction in the opinion flow from low to high social status users than in the modular society. Thus, even though high social status users are on average more similar to other users, an increase in the influence of similarity reduces the opinion flow from low social status to high social status users. Since this is a desired behavior, there seems to be some other mechanism causing the delay in opinion convergence.
Therefore, we now turn our attention to the low-to-low communication direction. By switching from an open to a modular society, we observe a decreasing opinion flow among low social status users (cf. Fig. 4b). However, for optimal consensus building, the communication in this class of users should not be disturbed. On the other hand, when we switch from an egalitarian to a ranked society, the intensity of the communication between users in the low-to-low class remains unchanged and we observe fast convergence rates. Thus, through the increase in similarity, the communication channel from low social status users to other low social status users is being closed and this slows down the consensus building process. A similar behavior can also be observed for the social status when we switch from the ranked to the stratified society form. Thus, a balanced influence of social status improves convergence rates, whereas even a low influence of similarity hinders the process.
Our analysis indicates that the increased influence of similarity reduces the communication intensity both between low and high social status users and between low social status users themselves. While the former has a positive effect on the spreading of opinions, the latter hinders that process and causes the delay in consensus. Meetings governed by similarity are locally contained to smaller groups of users and the communication between different user groups is less intensive.
Heatmaps of the communication intensity in (a) low-to-high and (b) low-to-low social status classes of users for the StackExchange English dataset. The columns represent three society forms based on similarity: open, modular and closed, whereas the rows show three social status society forms: egalitarian, ranked and stratified. The colors depict the intensity of the communication between users (i.e., the percentage of successful meetings taking place). In the plot in a, we notice that the communication intensity from low- to high-status users decreases when either the influence of user similarity (switch from open to modular society) or of social status (switch from egalitarian to ranked society) is increased. In b, we see that by switching from an open to a modular society the communication intensity from low- to low-status users decreases. But for optimal consensus building, the communication in this class of users should not be disturbed. When we switch from an egalitarian to a ranked society, the intensity of the communication between users in the low-to-low class remains unchanged. This is one of the reasons why we observe fast opinion convergence rates in ranked societies. To summarize, through the increase in similarity the communication channel from low-status users to other low-status users is being closed and this slows down the consensus building process. a Low-to-High, b Low-to-Low
Conclusion and future work
In this paper, we studied the process of opinion dynamics and consensus building in online collaboration systems. Specifically, we adopted a model of interacting agents, in which we allow interactions between all pairs of users with varying preferences beyond the observed interaction network. To that end, we presented an extension to the Naming Game model, i.e., the Probabilistic Meeting Rule that reflects (i) latent similarities between users and (ii) observed social status of users in real-world systems. We conducted our experiments on 17 empirical datasets from Reddit and StackExchange.
Our experimental results revealed that user similarity and social status exhibit opposing forces with respect to consensus building in online collaborative systems. Our main finding indicates that while an increase in the influence of user similarity has a negative effect, social status exhibits both the phase of a positive effect as well as the phase of a negative effect. Consequently, for a faster consensus, a careful balance between those two factors is required.
To explain our results, we further investigated the communication intensity (i.e., the number of successful meetings) between the social classes we defined. Our findings showed that the increased influence of similarity reduces the communication intensity between both low-status users and high-status users, as well as between low-status users and other low-status users. While the former has a positive effect on the spreading of opinions the latter hinders that process and causes the delay in consensus.
Our work has the following limitations. First, we neglected any dynamic changes of user similarity, social status and the networks as such. In reality, social networks constantly change as users may leave the system while others join. We could gain more realistic insights by comparing results of dataset snapshots taken at different points in time. Second, we simplified the opinions exchanged in online collaboration networks by representing them as a set of numbers. An alternative would be to use the real content exchanged among users.
Future work
For future work, we plan to use our insights to design personalized user recommendation algorithms. By identifying the factors that lead to barriers and conflicts in collaborations, we plan to design meaningful interventions that suggest possible collaborators with the goal of creating network structures in which consensus building is supported (e.g., recommending experts or high social status users as possible collaborators to speed up the process of consensus building). We also plan to verify our findings in other types of empirical networks, for example, gathered from the collaborative editing system Wikipedia, where we will investigate the dynamics of the editing process.
https://www.reddit.com/r/ListOfSubreddits/wiki/listofsubreddits/.
https://www.reddit.com/r/datasets/comments/3mg812/full_reddit_submission_corpus_now_available_2006/.
https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/.
https://stackexchange.com/.
https://archive.org/details/stackexchange.
https://www.reddit.com/dev/api/.
https://praw.readthedocs.io/en/stable/.
https://git.know-center.tugraz.at/summary/?r=SocialNetworkAnalysis.git.
Castellano C, Fortunato S, Loreto V. Statistical physics of social dynamics. Rev Mod Phys. 2009;81:591–646.
Baronchelli A, Felici M, Caglioti E, Loreto V, Steels L. Sharp transition towards shared vocabularies in multi-agent systems. J Stat Mech. 2006;2006:P06014.
Brigatti E. Consequence of reputation in an open-ended naming game. Phys Rev E. 2008;78(4):046108.
Dall'Asta L, Baronchelli A, Barrat A, Loreto V. Agreement dynamics on small-world networks. EPL Europhys Lett. 2006;73(6):969.
Li B, Chen G, Chow TWS. Naming game with multiple hearers. Comm Nonlinear Sci Numer Simul. 2013;18(5):1214–28.
Liu R-R, Wang W-X, Lai Y-C, Chen G, Wang B-H. Optimal convergence in naming game with geography-based negotiation on small-world networks. Phys Lett A. 2011;375(3):363–7.
Lu Q, Korniss G, Szymanski B. The naming game in social networks: community formation and consensus engineering. J Econ Interact Coord. 2009;4(2):221–35.
Gao Y, Chen G, Chan RHM. Naming game on networks: let everyone be both speaker and hearer. CoRR. 2013.
Waagen A, Verma G, Chan K, Swami A, D'Souza R. Effect of zealotry in high-dimensional opinion dynamics models. Phys Rev E Stat Nonlin Soft Matter Phys. 2015;91(2):022811.
Wang WX, Lin BY, Tang CL, Chen GR. Agreement dynamics of finite-memory language games on networks. Eur Phys J B. 2007;60(4):529–36.
Hasani-Mavriqi I, Geigl F, Pujari SC, Lex E, Helic D. The influence of social status and network structure on consensus building in collaboration networks. Soc Netw Anal Min. 2016;6(1):1–17.
Martin T, Ball B, Newman MEJ. Structural inference for uncertain networks. Phys Rev E. 2016;93:012306.
Newman M. Networks: an introduction. New York: Oxford University Press Inc; 2010.
Latané B. The psychology of social impact. Am Psychol. 1981;36:343–65.
Jackson JM. Social impact theory: a social forces model of influence. In: Mullen B, Goethals GR, editors. Theories of group behavior. New York: Springer; 1987. p. 111–24.
Pettijohn TF. Psychology: a connectext. New York City: McGraw-Hill Higher Education, Pennsylvania Plaza; 1998.
Nowak A, Szamrej J, Latané B. From private attitude to public opinion: a dynamic theory of social impact. Psychol Rev. 1990;97(3):362–76.
Garcia D, Mavrodiev P, Casati D, Schweitzer F. Understanding popularity, reputation, and social influence in the twitter society. Policy Internet. 2017;9(3):343–64.
Simmel G. Fashion. Am J Sociol. 1957;62(6):541–58.
Pedone R, Conte R. The Simmel effect: imitation and avoidance in social hierarchies. In: Moss S, Davidsson P, editors. Multi-agent based simulation. Heidelberg: Springer; 2001. p. 149–56.
Markovsky B, Skvoretz J, Willer D, Lovaglia MJ, Erger J. The seeds of weak power: an extension of network exchange theory. Am Sociol Rev. 1993;58(2):197–209.
Walker HA, Thye SR, Simpson B, Lovaglia MJ, Willer D, Markovsky B. Network exchange theory: recent developments and new directions. Soc Psychol Quart. 2000;63(4):324–37.
Willer D. Network Exchange Theory. Westport: Praeger; 1999.
Papadopoulos F, Kitsak M, Serrano M, Boguñá M, Krioukov D. Popularity versus similarity in growing networks. Nature. 2012;489:537–40.
Akcora CG, Carminati B, Ferrari E. User similarities on social networks. Soc Netw Anal Min. 2013;3(3):475–95.
Anderson A, Huttenlocher D, Kleinberg J, Leskovec J. Effects of user similarity in social media. In: Adar E, Teevan J, Agichtein E, Maarek Y, editors. WSDM '12. New York: ACM; 2012. p. 703–12.
Leskovec J, Huttenlocher D, Kleinberg J. Governance in social media: a case study of the Wikipedia promotion process. In: Proceedings of the International AAAI Conference on Web and Social Media. AAAI Press; 2010.
Iniguez G, Török J, Yasseri T, Kaski K, Kertesz J. Modeling social dynamics in a collaborative environment. EPJ Data Sci. 2014;3:7.
Dall'Asta L, Baronchelli A, Barrat A, Loreto V. Non-equilibrium dynamics of language games on complex networks. Phys Rev E. 2006;74:036105.
Zhang W, Lim CC, Korniss G, Szymanski BK. Opinion dynamics and influencing on random geometric graphs. Sci Rep. 2014;4:5568.
Maity SK, Porwal A, Mukherjee A. Understanding how learning affects agreement process in social networks. In: 2013 international conference on social computing (SocialCom); 2013. p. 228–35.
Baronchelli A, Dall'Asta L, Barrat A, Loreto V. Topology-induced coarsening in language games. Phys Rev E. 2006;73:015102.
Baronchelli A, Dall'Asta L, Barrat A, Loreto V. Strategies for fast convergence in semiotic dynamics. New York: MIT Press; 2005. p. 480–5.
Dall'Asta L, Baronchelli A, Barrat A, Loreto V. Nonequilibrium dynamics of language games on complex networks. Phys Rev E. 2006;74:036105.
Scholz M. Node similarity is the basic principle behind connectivity in complex networks. CoRR abs/1010.0803. 2010.
Sayama H, Sinatra R. Social diffusion and global drift on networks. Phys Rev E. 2015;91:032809.
Watts DJ. Networks, dynamics, and the small world phenomenon. Am J Sociol. 1999;105(2):493–527.
Watts DJ. Six degrees the science of a connected age. New York: W. W. Norton and Company, 500 Fifth Avenue; 2004.
Movshovitz-Attias D, Movshovitz-Attias Y, Steenkiste P, Faloutsos C. Analysis of the reputation system and user contributions on a question answering website: Stackoverflow. In: Proceedings of the 2013 IEEE/ACM international conference on advances in social networks analysis and mining. ASONAM '13. New York: ACM; 2013. p. 886–93.
Leskovec J, Adamic LA, Huberman BA. The dynamics of viral marketing. ACM Trans Web TWEB. 2007;1(1):5.
IHM implemented the proposed approach, carried out the experiments and drafted the first version of the manuscript. DK, DH and EL aided in defining the methodology, interpreting the results and contributed intellectually to all research phases. All authors edited the final manuscript. All authors read and approved the final manuscript.
This work is supported by the Know-Center Graz and the AFEL project funded from the European Union's Horizon 2020 research and innovation programme under grant agreement No 687916. The Know-Center is funded within the Austrian COMET Program—Competence Centers for Excellent Technologies—under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labor and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency (FFG).
Availability of supporting data
We provide our simulation framework as an open-source project. The source code can be downloaded from our Git repository: https://git.know-center.tugraz.at/summary/?r=SocialNetworkAnalysis.git.
Know-Center GmbH, Research Center for Data-Driven Business & Big Data Analytics, Inffeldgasse 13/6, 8010 Graz, Austria: Ilire Hasani-Mavriqi, Dominik Kowald
Institute of Interactive Systems and Data Science, Graz University of Technology, Inffeldgasse 13/6, 8010 Graz, Austria: Dominik Kowald, Denis Helic, Elisabeth Lex
Correspondence to Ilire Hasani-Mavriqi.
See Figs. 5, 6, 7, 8.
The influence of user similarity and social status on consensus dynamics. The plots show the mean size (100 runs) of the agents' inventories (y-axes) in relation to the number of interactions (x-axes) for all datasets from Table 1 not included in Fig. 2. The simulation results are similar to the ones presented in Fig. 2. a Reddit Politics, b Reddit Programming, c Reddit Sports, d StackExchange Cooking, e StackExchange Academia, f StackExchange Movies, g StackExchange Politics, h StackExchange Music, i StackExchange German, j StackExchange Japanese, k StackExchange History, l StackExchange Chinese, m StackExchange Spanish, n Stackexchange French, o StackExchange Sports
Coarse analysis. Percentages of consensus strength (averaged over 100 runs) reached among agents on one particular opinion (y-axes) are shown as a function of the number of interactions (x-axes) for different values of \(\gamma\) and \(\beta\) for all datasets from Table 1 not included in Fig. 3. Coarse analysis results are similar to the ones presented in Fig. 3. a Reddit Politics, b Reddit Programming, c Reddit Sports, d StackExchange Cooking, e StackExchange Academia, f StackExchange Movies, g StackExchange Politics, h StackExchange Music, i StackExchange German, j StackExchange Japanese, k StackExchange History, l StackExchange Chinese, m StackExchange Spanish, n Stackexchange French, o StackExchange Sports
Communication intensity between social classes. Heatmaps of the communication intensity in low-to-high (left) and low-to-low (right) social status classes of users for all Reddit datasets from Table 1 not included in Fig. 4. The results are very similar to those presented in Fig. 4. a Reddit Movies, b Reddit Politics, c Reddit Programming, d Reddit Sports
Communication intensity between social classes. Heatmaps of the communication intensity in low-to-high (left) and low-to-low (right) social status classes of users for StackExchange datasets from Table 1 not included in Fig. 4. The results are very similar to those presented in Fig. 4. a StackExchange Cooking, b StackExchange Academia, c StackExchange Movies, d StackExchange Politics, e StackExchange Music, f StackExchange German, g StackExchange Japanese, h StackExchange History, i StackExchange Chinese, j StackExchange Spanish, k Stackexchange French, l StackExchange Sports
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Hasani-Mavriqi, I., Kowald, D., Helic, D. et al. Consensus dynamics in online collaboration systems. Comput Soc Netw 5, 2 (2018) doi:10.1186/s40649-018-0050-1
Consensus dynamics
Online collaboration systems
Interaction networks
Volume 19 Supplement 9
Proceedings of the 2018 International Conference on Intelligent Computing (ICIC 2018) and Intelligent Computing and Biomedical Informatics (ICBI) 2018 conference: medical informatics and decision making
A comparison between two semantic deep learning frameworks for the autosomal dominant polycystic kidney disease segmentation based on magnetic resonance images
Vitoantonio Bevilacqua1,
Antonio Brunetti1,
Giacomo Donato Cascarano1,
Andrea Guerriero1,
Francesco Pesce2,
Marco Moschetta2 &
Loreto Gesualdo2
The automatic segmentation of kidneys in medical images is not a trivial task when the subjects undergoing the medical examination are affected by Autosomal Dominant Polycystic Kidney Disease (ADPKD). Several works dealing with the segmentation of Computed Tomography images from pathological subjects were proposed, showing high invasiveness of the examination or requiring interaction by the user for performing the segmentation of the images. In this work, we propose a fully-automated approach for the segmentation of Magnetic Resonance images, both reducing the invasiveness of the acquisition device and not requiring any interaction by the users for the segmentation of the images.
Two different approaches are proposed based on Deep Learning architectures using Convolutional Neural Networks (CNN) for the semantic segmentation of images, without needing to extract any hand-crafted features. In detail, the first approach performs the automatic segmentation of images without any pre-processing of the input. Conversely, the second approach performs a two-step classification strategy: a first CNN automatically detects Regions Of Interest (ROIs); a subsequent classifier performs the semantic segmentation on the ROIs previously extracted.
Results show that, even though the detection of ROIs produces an overall high number of false positives, the subsequent semantic segmentation on the extracted ROIs achieves high performance in terms of mean Accuracy. However, the segmentation of the entire images input to the network remains the more accurate and reliable of the two approaches.
The obtained results show that both the investigated approaches are reliable for the semantic segmentation of polycystic kidneys, since both strategies reach an Accuracy higher than 85%. Also, both the investigated methodologies show performance comparable and consistent with other approaches found in the literature working on images from different sources, while reducing both the invasiveness of the analyses and the interaction needed by the users for performing the segmentation task.
Autosomal Dominant Polycystic Kidney Disease (ADPKD) is a hereditary disease characterised by the onset of renal cysts that lead to a progressive increase of the Total Kidney Volume (TKV) over time. Specifically, ADPKD is a genetic disorder in which the renal tubules become structurally abnormal, resulting in the development and growth of multiple cysts within the kidney parenchyma [1]. The disease is characterised by the mutation of two different genes. ADPKD type I, which is caused by the PKD1 gene mutation, involves 85 - 90% of the cases, usually affecting people older than 30 years. The mutation of the PKD2 gene, instead, leads to ADPKD type II (affecting 10 - 15% of the cases), which mostly affects children, who develop cysts already in the maternal uterus and die within a year. The clinical characteristics of patients with PKD1 or PKD2 mutations are the same, even though the latter mutation is associated with a milder clinical phenotype and a later onset of End-Stage Kidney Disease (ESKD). In all cases, the size of cysts is extremely variable, ranging from some millimetres to 4 - 5 centimetres [2].
Currently, there is no specific cure for ADPKD and the TKV estimation over time allows clinicians to monitor the disease progression. Tolvaptan has been reported to slow the rate of cyst enlargement and, consequently, the progressive kidney function decline towards ESKD [3, 4]. Since all current pharmacological treatments aim at slowing the growth of the cysts, the design of a non-invasive and accurate assessment of the renal volume is of fundamental importance for the estimation and assessment of the ADPKD progression over time.
There are several methods in the literature for performing the TKV estimation; traditional methodologies requiring imaging acquisitions, such as Computed Tomography (CT) and Magnetic Resonance (MR), include stereology and manual segmentation [5, 6]. Also, several studies tried to correlate this metric with body surface and area measurements in order to obtain a non-invasive estimation of TKV [7, 8]. Stereology consists in the superimposition of a square grid, with specific cell positions and spacing, on each slice of the volumetric acquisition (CT or MR). The two-dimensional area obtained by counting all the cells containing parts of the kidneys, interpolated across the slices and taking into account the acquisition thickness, yields the final three-dimensional volume. Manual segmentation, instead, requires the manual contouring of the kidney regions contained in every slice. Several tools supporting this task have been developed, introducing digital free-hand contouring tools or interactive segmentation systems to assist the clinicians while delineating the region of interest.
Considering both the phenotype of the disease and the introduced approaches, the segmentation of biomedical images of kidneys is a tricky and troublesome task, strictly dependent on the human operator performing the segmentation and requiring expert training. In fact, co-morbidities and the presence of cysts in neighbouring organs or contact surfaces make it challenging to achieve an accurate and standardised assessment of the TKV.
To reduce the limitations of the previous methodologies, both in time and performance, due to the manual interaction, several approaches for the semi-automatic segmentation of kidneys have been investigated such as the mid-slice or the ellipsoid methods, allowing to estimate the TKV starting from a reduced number of selected slices [9–11]. Although the reported methodologies are faster and more compliant than the previous ones, these are far from being accurate enough to be used in clinical protocols [12, 13].
In recent years, innovative approaches based on Deep Learning (DL) strategies have been introduced for the classification and segmentation of images. In detail, deep architectures, such as Deep Neural Networks (DNNs) or Convolutional Neural Networks (CNNs), allow performing image classification tasks, detection of Regions Of Interest (ROIs) or semantic segmentation [14–17], reaching higher performance than traditional approaches [18]. The architecture of DL classifiers avoids the need to design procedures for extracting hand-crafted features, as the classifier itself generally learns the most characteristic features automatically for each specific dataset. These peculiarities led DL approaches to be investigated in different fields, including medical imaging, signal processing and gene expression analysis [19–23].
Lastly, recent studies about imaging acquisitions for assessing kidney growth suggested that MR should be preferred to other imaging techniques [24]. However, different research works estimated TKV starting from CT images thanks to the higher availability of the acquisition devices and the more accurate and reliable measurement of TKV and of the volume of cysts. On the other side, CT protocols for ADPKD are always contrast-enhanced using a contrast medium harmful for the health of the patient under examination; also, CT exposes the patients to ionising radiation. On these premises, the automatic, or semi-automatic, segmentation of images from MR acquisitions should be further investigated to improve the state-of-the-art TKV estimation performance.
Starting from a preliminary work performed on a small set of patients [25], we present two different approaches based on DL architectures to perform the automatic segmentation of kidneys affected by ADPKD starting from MR acquisitions. Specifically, we designed and evaluated several Convolutional Neural Networks for discriminating the class of each pixel of the images, in order to perform their segmentation; Fig. 1 represents the corresponding workflow. Subsequently, we investigated the object detection approach using the Regions with CNN (R-CNN) technique [26] to automatically detect ROIs containing parts of the kidneys, with the aim of subsequently performing the semantic segmentation only on the extracted regions; Fig. 2 shows a representation of the workflow implemented in this approach.
Workflow for the semantic segmentation starting from the full image
Workflow for the semantic segmentation of ROIs automatically detected with R-CNN
Patients and acquisition protocol
From February to July 2017, 18 patients affected by ADPKD (mean age 31.25 ± 15.52 years) underwent Magnetic Resonance examinations for assessing the TKV. The acquisition protocol was carried out by the physicians from the Department of Emergency and Organ Transplantations (DETO) of the Bari University Hospital.
Examinations for the acquisition of the images were performed on a 1.5 Tesla MR device (Achieva, Philips Medical Systems, Best, The Netherlands) by using a four-channel breast coil. The protocol did not use intravenous contrast material injection and consisted of:
Transverse and Coronal Short-TI Inversion Recovery (STIR) Turbo-Spin-Echo (TSE) sequences (TR/TE/TI = 3.800/60/165 ms, field of view (FOV) = 250 x 450 mm (AP x RL), matrix 168 x 300, 50 slices with 3 mm slice thickness and without gaps, 3 averages, turbo factor 23, resulting in a voxel size of 1.5 x 1.5 x 3.0 mm3; sequence duration of 4.03 min);
Transverse and Coronal T2-weighted TSE (TR/TE = 6.300/130 ms, FOV = 250 x 450 mm (AP x RL), matrix 336 x 600, 50 slices with 3 mm slice thickness and without gaps, 3 averages, turbo factor 59, SENSE factor 1.7, resulting in a voxel size of 0.75 x 0.75 x 3.0 mm3; sequence duration of 3.09 min);
Three-Dimensional (3D) T1-Weighted High Resolution Isotropic Volume Examination (THRIVE) sequence (TR/TE = 4.4/2.0 ms, FOV = 250 x 450 x 150 mm (AP x RL x FH), matrix 168 x 300, 100 slices with 1.5 mm slice thickness, turbo factor 50, SENSE factor 1.6, data acquisition time of 1 min 30 s).
In this work, only the coronal T2-Weighted TSE sequence was considered for the processing and classification strategies. In order to have the segmentation ground truth for all the acquired images, our framework included a preliminary step allowing the radiologists to manually contour all the ROIs using a digital tool specifically designed and implemented for this task.
After the manual contouring of the kidneys, 526 images, with the corresponding labelled samples, constituted the working dataset; Fig. 3 represents an MR image with the corresponding labelled sample, where white pixels belong to the kidneys whereas the black ones include the remaining parts of the image.
Example of an input image segmented manually; left: the representation of a DICOM image in greyscale; right: the mask obtained after the manual contouring of the selected slice
Segmentation approaches
Two different approaches based on DL techniques have been investigated to perform a fully-automated segmentation of polycystic kidneys without needing to design any procedure for the extraction of hand-crafted features. In detail, the first approach performed the semantic segmentation of the MR images, classifying each pixel as belonging to the kidney or not; the second methodology, instead, first detected reduced areas containing the kidneys and then performed their semantic segmentation.
Semantic segmentation
Semantic segmentation is a procedure that automatically classifies each pixel of an image, assigning it a specific label. Although the segmentation of images is a well-established process in the literature, counting a multitude of works and algorithms developed in several fields for different aims [27–29], the introduction and spread of DL architectures for performing this task, such as Convolutional Neural Networks, renewed the interest of the scientific community in image segmentation [30, 31].
According to different architectures designed in previous works, such as SegNet [32] and the Fully Convolutional Network (FCN) [33], the CNNs performing semantic segmentation tasks show an encoder-decoder design, as in the architecture represented in Fig. 4. Traditionally, this kind of classifier includes several encoders interspersed with pooling layers for downsampling the input; each encoder includes sequences of Convolutional layers, Normalisation layers and Linear layers. Mirroring the encoding part, there are specular decoders with up-sampling layers for reconstructing the input size. Finally, there are fully-connected neural units before the final classification layer, which labels each pixel of the input image.
Encoder–Decoder architecture for SegNet [32]
In this work, we designed and tested several CNN architectures for the segmentation of the images. Since optimising the architecture of classifiers is still an open problem [34–36], often faced with evolutionary approaches, we decided to start from a well-known general CNN, the VGG-16 [37], and modify its structure by varying several parameters.
These included the number of encoders (and decoders), the number of layers for each encoder, the number of convolutional filters for each layer and the learner used during the training (i.e., SGDM - stochastic gradient descent with momentum, or ADAM [38]). All the investigated architectures included convolutional layers with [3x3] kernels, stride [1 1] and padding [1 1 1 1], which keep the spatial dimensions of the input unchanged across each encoder; downsampling (and upsampling) was performed only in the [2x2] max-pooling layers (upsampling layers for the decoder) with stride [2 2].
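A minimal PyTorch-style sketch of one such encoder stage is given below; the framework, layer names and the example filter counts are illustrative assumptions, not details of the authors' original implementation, and each decoder stage mirrors this structure with up-sampling instead of pooling.

```python
import torch.nn as nn

def encoder_block(in_ch, out_ch, n_layers):
    """One encoder stage: n_layers x (3x3 conv, stride 1, padding 1 -> batch norm
    -> ReLU), followed by 2x2 max pooling with stride 2 for downsampling."""
    layers = []
    for i in range(n_layers):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                      kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Example (illustrative): a VGG-like encoding path with stages of 64 and 128 filters
# encoder = nn.Sequential(encoder_block(1, 64, 2), encoder_block(64, 128, 2))
```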
The semantic segmentation of the input images took into account two classes: Kidney and Background. Considering the example reported in Fig. 3, the white pixels were labelled as Kidney, whereas the remaining pixels as Background.
For training the classifiers, dataset augmentation was also performed, in line with recent works demonstrating the effectiveness of this procedure for improving the classification performance [31, 39, 40]; the following image transformations were randomly applied (a minimal sketch of this augmentation follows the list):
horizontal shift in the range [-200; 200] pixels;
horizontal flip;
scaling with factor ranging in [0.5; 4].
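The sketch below applies these transformations jointly to an image and its label mask using NumPy and SciPy; the probability of flipping and the final crop/pad of the transformed image to the network input size are simplifications of this illustration.

```python
import numpy as np
from scipy import ndimage

def augment(image, mask, rng=None):
    """Random augmentation for the full-image segmentation: horizontal shift in
    [-200, 200] px, horizontal flip, isotropic scaling in [0.5, 4]. The same
    transform is applied to the image and to its label mask."""
    rng = rng or np.random.default_rng()
    dx = int(rng.integers(-200, 201))
    image = ndimage.shift(image, (0, dx), order=1)
    mask = ndimage.shift(mask, (0, dx), order=0)      # nearest-neighbour for labels
    if rng.random() < 0.5:                            # horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    factor = rng.uniform(0.5, 4.0)                    # scaling
    image = ndimage.zoom(image, factor, order=1)
    mask = ndimage.zoom(mask, factor, order=0)
    return image, mask
```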
Table 1 reports the configurations designed and tested for performing this task in terms of number of layers per encoder, number of convolutional filters per layer and applied learner. The table reports only the three configurations showing the highest performance among all the investigated architectures.
Table 1 Configurations designed and tested for the semantic segmentation of the full image
Regions with convolutional neural networks
Due to the presence of cysts in the organs near the kidneys and of very similar structures located near the area of interest, which may affect the segmentation performance, we investigated a second approach based on the object detection strategy using R-CNN. In this approach, we designed a classifier for the automatic detection of smaller regions inside each input image, which are subsequently segmented according to the procedure described in the previous section.
Object detection is a technique for finding instances of specific classes in images or videos. Like semantic segmentation, object detection is a well-established process in the literature, employed in different fields [41, 42]. According to the literature, the CNNs for object detection include a region proposal algorithm, often based on EdgeBoxes or Selective Search [43, 44], as a pre-processing step before running the classification algorithm. Traditional R-CNN and Fast R-CNN are the most employed techniques [26, 45]. Recently, Faster R-CNN was also introduced, addressing the region proposal mechanism within the CNN itself, thus making the region proposal a part of the CNN training and prediction steps [46].
As for the previous approach, we investigated several CNN architectures for detecting areas containing the kidney, considering the Fast R-CNN approach. For creating the ground truth, the manual contour of each kidney was enclosed in a rectangular bounding box and used for training the network. Differently from the CNNs aimed at performing the semantic segmentation, these architectures have only the encoding part, where each encoder includes Convolutional layers and ReLU layers. Each encoder ends with a max-pooling layer to perform image sub-sampling (size [3x3] and stride [2 2]). At the end of the encoding part, there are two fully-connected layers before the final classification layer. Table 2 reports the configurations designed and tested for the detection purpose (in this case too, the table reports only the three configurations that reached the highest performance).
Table 2 Configurations designed and tested for the CNN in the ROI detector
After designing the classifier for the automatic detection of the ROIs, the same architectures designed for the segmentation of the whole images (reported in Table 1) were considered for performing the semantic segmentation of the ROIs. Furthermore, since the detected ROIs might have different sizes, a rescaling procedure was performed to adapt all the ROIs to the size required by the CNN for the segmentation task. Image augmentation was performed as well, considering the following transformations:
horizontal shift in the range [-25; 25] pixels;
vertical shift in the range [-25; 25] pixels;
scaling with scale factor ranging in [0.5; 1.1].
This section reports the results for both the investigated approaches. In particular, we describe the performance obtained considering the R-CNN approach and, subsequently, the results of the classifiers performing the semantic segmentation on both the full image and the automatically detected ROIs. The input dataset, which consisted of MR images from 18 patients, was randomly split to create the training and test sets considering data from 15 and 3 patients, respectively. For improving the generalisation capabilities of the segmentation system, we performed a 5-fold cross-validation for training the classifiers. The final segmentation on the images from the test set was obtained through majority voting computed among the segmentation results from each trained classifier.
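A minimal sketch of the majority-voting step over the five cross-validation models, assuming each model outputs a binary Kidney/Background mask; the function name is illustrative.

```python
import numpy as np

def majority_vote(fold_masks):
    """Combine binary masks predicted by the k cross-validation models:
    a pixel is labelled Kidney when most models agree on it."""
    stacked = np.stack(fold_masks, axis=0)           # shape: (k, H, W)
    votes = stacked.sum(axis=0)
    return (votes > stacked.shape[0] / 2).astype(np.uint8)

# Example with 5 folds on a 2x2 image: the top-left pixel gets 3/5 votes.
masks = [np.array([[1, 0], [0, 1]]),
         np.array([[1, 0], [0, 0]]),
         np.array([[1, 1], [0, 0]]),
         np.array([[0, 0], [0, 1]]),
         np.array([[0, 0], [1, 1]])]
print(majority_vote(masks))  # -> [[1 0], [0 1]]
```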
We considered several metrics for evaluating the classifiers; all the reported results refer to the performance obtained evaluating the networks only on the test set. Accuracy (Eq. 1), Boundary F1 Score, or BF Score (Eq. 2), and Jaccard Similarity Coefficient, or Intersection over Union - IoU (Eq. 3), were computed considering the number of instances of True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN), where the Positive label corresponds to a pixel belonging to the Kidney class for the semantic segmentation approach, or to a correctly detected ROI (confidence > 0.8) for the R-CNN approach.
Regarding the semantic segmentation, the BF Score measures how closely the predicted boundary of an object matches the corresponding ground truth; it is defined as the harmonic mean of the Precision (Eq. 5) and Recall (Eq. 6) values. The resulting score lies in the range [0, 1], from a poor to a perfect match. The Jaccard Similarity Coefficient, instead, is the ratio between the number of pixels belonging to the Positive class classified correctly (TP) and the sum of the number of pixels belonging to the Positive class (P = TP+FN) and the Negative pixels wrongly predicted as Positive (FP). Regarding the R-CNN performance, the Average Precision (Eq. 5) and the Log-Average Miss Rate were evaluated, considering the Miss Rate (MR) according to Eq. 4.
$$ Accuracy \;=\; \frac{TP+TN}{TP+TN+FP+FN} $$
$$ Boundary\;F1\;Score \;=\; \frac{2*Precision*Recall}{Precision+Recall} $$
$$ Jaccard\;Similarity\;Coefficient \;=\; \frac{TP}{TP+FP+FN} $$
$$ Miss\;Rate \;=\; \frac{FN}{FN+TP} $$
$$ Precision \;=\; \frac{TP}{TP+FP} $$
$$ Recall \;=\; \frac{TP}{TP+FN} $$
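A compact sketch of Eqs. 1-6 computed from given TP/TN/FP/FN counts; note that for the BF Score the counts must be taken on boundary pixels, whereas Accuracy and IoU use all pixels, and the small epsilon guarding empty denominators is an addition for numerical safety, not part of the original definitions.

```python
def segmentation_metrics(tp, tn, fp, fn, eps=1e-12):
    """Accuracy, BF Score (harmonic mean of precision and recall),
    Jaccard/IoU and Miss Rate as defined in Eqs. 1-6."""
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "bf_score": 2 * precision * recall / (precision + recall + eps),
        "jaccard_iou": tp / (tp + fp + fn + eps),
        "miss_rate": fn / (fn + tp + eps),
        "precision": precision,
        "recall": recall,
    }

# Example: 90 kidney pixels found, 10 missed, 20 false alarms on a 1000-pixel slice.
print(segmentation_metrics(tp=90, tn=880, fp=20, fn=10))
```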
R-CNN performance
For each R-CNN architecture reported in Table 2, the Precision-Recall plot, showing the Precision obtained at different Recall values, and the Log-Average Miss Rate plot, reporting how the miss rate varies at different levels of FP per image, are represented. Specifically, Figs. 5, 6 and 7 show the plots for R-CNN-1, R-CNN-2 and R-CNN-3, respectively. Figure 8, instead, shows the result obtained performing the detection of kidneys on an image sample. As represented in the plots, the average Precision for R-CNN-1 and R-CNN-3 is higher than 0.75, while the Log-Average Miss Rate remains low.
Precision – Recall plot and log Average Miss rate for R-CNN-1
Results from R-CNN classifier. Input image is on the left; the image on the right contains squares on the detected ROIs, each one is associated with a score
Since the aim of detecting ROIs was to identify regions with fewer Background pixels, with respect to the whole image, for the subsequent semantic segmentation step, R-CNN-1 proved to be the best candidate among all the analysed architectures. In fact, it reached a Recall value of about 0.8 with a Precision higher than 0.65, meaning that the classifier was able to detect 80% of the ROIs containing the kidneys, but with a high number of False Positives. However, this was not a problem, since the subsequent semantic segmentation step would detect all the pixels belonging to the Kidney class.
Semantic segmentation performance
Concerning the semantic segmentation, this section reports the performance obtained for the segmentation of both MR images and ROIs. Specifically, Table 3 shows the results obtained for each of the CNN architectures performing the semantic segmentation of the MR image, without performing any image processing procedure. As reported in the table, the architecture achieving the highest performance for the semantic segmentation of the full image is the S-CNN-1, showing an Accuracy higher than 88%.
Table 3 Performance indices for the classifiers working on MR images
The introduction of an additional layer into the first encoder of VGG-16 architecture allowed the network to create a set of features more significant and discriminative than those generated by the others, leading to more accurate classification performance. Conversely, increasing the number of convolutional filters in the first layer of the first encoder of S-CNN-1 did not improve the overall discrimination capabilities of the classifier. Table 4 reports the normalised confusion matrices obtained for the three considered cases in this approach, whereas Fig. 9 shows an example of the output generated by the implemented classifier performing the semantic segmentation of the MR images.
Result of the semantic segmentation considering an image sample. Top left: the MR slice represented in greyscale; top right: the segmentation result; bottom left: the ground-truth mask; bottom right: superimposition of the segmentation result to the ground-truth mask
Table 4 Normalized Confusion Matrix for VGG-16, S-CNN-1 and S-CNN-2 segmenting the MR images for the test set
As for the segmentation of the whole MR images, Table 5 reports the performance indices for the semantic segmentation of the ROIs automatically detected by the R-CNN-1, which showed the optimal trade-off in detecting ROIs considering the miss rate. As for the previous case, the S-CNN-1 architecture allowed achieving the highest Accuracy in performing the semantic segmentation of the ROIs. Table 6 reports the normalised confusion matrices for all the classifiers investigated. Figure 10, instead, shows the results obtained for the semantic segmentation of ROI extracted from an image sample.
Example result for ROI detection and semantic segmentation. Top left: the MR slice represented in greyscale; top right: the R-CNN detection result; middle left: one of the detected ROIs; middle right the segmentation result; bottom left: the ground-truth mask for the considered ROI; bottom right: superimposition of the classification result to the ground-truth mask
Table 5 Performance indices for the classifiers working on the ROIs
Table 6 Normalized Confusion Matrix for VGG-16, S-CNN-1 and S-CNN-2 segmenting the ROIs detected by the R-CNN-1 from the MR images of the test set
In recent years, several works have been proposed dealing with the segmentation of diagnostic images for assessing ADPKD. Since the most used imaging procedure involves CT scans, most studies consider this kind of image in order to support the clinical assessment of the pathology. In some cases, the proposed approaches need minimal interaction from the user for the complete segmentation of the kidneys [47, 48]. Also, some procedures in the literature dealt with the fully-automated segmentation of the images, some of them based on DL strategies [49, 50].
However, the proposed approaches for automatic segmentation show several limitations, including the invasiveness of the contrast medium used for enhancing CT acquisitions [51], or the need for a priori knowledge for the correct processing of the images [52]. In order to reduce the invasiveness of the imaging analysis, a fully automated approach for the segmentation of non-contrast-enhanced CT images was proposed very recently, showing good performance on a reduced cohort of patients [53].
In this work, instead, the developed classification systems reached an Accuracy of about 80% in performing the segmentation of MR images, without using any procedure for contrast enhancement. However, the segmentation of the entire MR image proved to be more reliable than that performed on the extracted ROIs. In fact, although the phase of extracting subregions from MR images showed an average Precision of 78%, it could still fail to find areas of interest, thus missing regions belonging to the kidneys.
According to the analysed literature, the reported results are consistent with other precursory investigations dealing with MR images, including the preliminary results presented in [25] on a reduced cohort of patients. Also, the proposed approaches overcome the limitations shown by manual or semi-automatic procedures in segmenting kidneys affected by ADPKD for evaluating diagnostic and prognostic parameters. In addition, the proposed methodologies did not use any contrast medium and did not expose the patients to any harmful or potentially lethal ionising radiation.
In this work, we investigated two strategies based on DL architectures for performing the automatic segmentation of MR images from people affected by ADPKD. Both strategies considered several Convolutional Neural Networks for classifying all the pixels in the images as either Kidney or Background.
In the first approach, we trained, validated and tested the classifiers considering the full MR image as input, without performing any image pre-processing. The second methodology, instead, investigated the object detection approach, using the Regions with CNN (R-CNN) technique to first detect ROIs containing parts of the kidneys. Subsequently, we employed (trained, validated and tested) the CNNs considered in the previous approach to perform the semantic segmentation on the ROIs automatically extracted by the R-CNN showing the most reliable performance.
The obtained results show that both the approaches are comparable and consistent with other methodologies reported in the literature, but dealing with images from different sources, such as CT scans. Also, the proposed approaches may be considered reliable methods to perform a fully-automated segmentation of kidneys affected by ADPKD.
In the future, the interaction between Deep Learning strategies and image processing techniques will be further investigated to improve the performance reached by the current classifiers. Moreover, evolutionary approaches for optimising the topology of the classifiers, or their hyper-parameters, will also be explored, considering the acquired images in a three-dimensional way.
The dataset employed for the current study is not publicly available due to restrictions associated with the anonymity of participants but could be made available from the corresponding author on reasonable request.
ADPKD:
Autosomal dominant polycystic kidney disease
CNN:
Convolutional neural network
DL:
Deep learning
DNN:
Deep neural network
ESKD:
End-stage kidney disease
R-CNN:
Regions with convolutional neural network
ROI:
Region of interest
TKV:
Total kidney volume
Grantham JJ. Autosomal dominant polycystic kidney disease. N Engl J Med. 2008; 359(14):1477–85. https://doi.org/10.1056/NEJMcp0804458.
Harris PC, Bae KT, Rossetti S, Torres VE, Grantham JJ, Chapman AB, Guay-Woodford LM, King BF, Wetzel LH, Baumgarten DA, Kenney PJ, Consugar M, Klahr S, Bennett WM, Meyers CM, Zhang Q, Thompson PA, Zhu F, Miller JP. Cyst number but not the rate of cystic growth is associated with the mutated gene in autosomal dominant polycystic kidney disease. J Am Soc Nephrol. 2006; 17(11):3013–9. https://doi.org/10.1681/ASN.2006080835.
Torres VE, Chapman AB, Devuyst O, Gansevoort RT, Grantham JJ, Higashihara E, Perrone RD, Krasa HB, Ouyang J, Czerwiec FS. Tolvaptan in patients with autosomal dominant polycystic kidney disease. N Engl J Med. 2012; 367(25):2407–18. https://doi.org/10.1056/NEJMoa1205511.
Irazabal MV, Torres VE, Hogan MC, Glockner J, King BF, Ofstie TG, Krasa HB, Ouyang J, Czerwiec FS. Short-term effects of tolvaptan on renal function and volume in patients with autosomal dominant polycystic kidney disease. Kidney Int. 2011; 80(3):295–301. https://doi.org/10.1038/ki.2011.119.
Bae KT, Commean PK, Lee J. Volumetric measurement of renal cysts and parenchyma using mri: Phantoms and patients with polycystic kidney disease. J Comput Assist Tomogr. 2000; 24(4):614–9. https://doi.org/10.1097/00004728-200007000-00019.
King BF, Reed JE, Bergstralh EJ, Sheedy PF, Torres VE. Quantification and longitudinal trends of kidney, renal cyst, and renal parenchyma volumes in autosomal dominant polycystic kidney disease. J Am Soc Nephrol. 2000; 11(8):1505–11.
Vauthey JN, Abdalla EK, Doherty DA, Gertsch P, Fenstermacher MJ, Loyer EM, Lerut J, Materne R, Wang X, Encarnacion A, Herron D, Mathey C, Ferrari G, Charnsangavej C, Do KA, Denys A. Body surface area and body weight predict total liver volume in western adults. Liver Transplant. 2002; 8(3):233–40. https://doi.org/10.1053/jlts.2002.31654.
Emamian SA, Nielsen MB, Pedersen JF, Ytte L. Kidney dimensions at sonography: Correlation with age, sex, and habitus in 665 adult volunteers. Am J Roentgenol. 1993; 160(1):83–6. https://doi.org/10.2214/ajr.160.1.8416654.
Higashihara E, Nutahara K, Okegawa T, Tanbo M, Hara H, Miyazaki I, Kobayasi K, Nitatori T. Kidney volume estimations with ellipsoid equations by magnetic resonance imaging in autosomal dominant polycystic kidney disease. Nephron. 2015; 129(4):253–62. https://doi.org/10.1159/000381476.
Irazabal MV, Rangel LJ, Bergstralh EJ, Osborn SL, Harmon AJ, Sundsbak JL, Bae KT, Chapman AB, Grantham JJ, Mrug M, Hogan MC, El-Zoghby ZM, Harris PC, Erickson BJ, King BF, Torres VE. Imaging classification of autosomal dominant polycystic kidney disease: A simple model for selecting patients for clinical trials. J Am Soc Nephrol. 2015; 26(1):160–72. https://doi.org/10.1681/ASN.2013101138.
Bae KT, Tao C, Wang J, Kaya D, Wu Z, Bae JT, Chapman AB, Torres VE, Grantham JJ, Mrug M, Bennett WM, Flessner MF, Landsittel DP. Novel approach to estimate kidney and cyst volumes using mid-slice magnetic resonance images in polycystic kidney disease. Am J Nephrol. 2013; 38(4):333–41. https://doi.org/10.1159/000355375. NIHMS150003.
Grantham JJ, Torres VE. The importance of total kidney volume in evaluating progression of polycystic kidney disease. Nat Rev Nephrol. 2016; 12(11):667–77. https://doi.org/10.1038/nrneph.2016.135. 15334406.
Grantham JJ, Torres VE, Chapman AB, Guay-Woodford LM, Bae KT, King BF, Wetzel LH, Baumgarten DA, Kenney PJ, Harris PC, Klahr S, Bennett WM, Hirschman GN, Meyers CM, Zhang X, Zhu F, Miller JP. Volume progression in polycystic kidney disease. N Engl J Med. 2006; 354(20):2122–30. https://doi.org/10.1056/NEJMoa054341.
Brunetti A, Carnimeo L, Trotta GF, Bevilacqua V. Computer-assisted frameworks for classification of liver, breast and blood neoplasias via neural networks: A survey based on medical images. Neurocomputing. 2019; 335:274–98. https://doi.org/10.1016/j.neucom.2018.06.080.
Biswas M, Kuppili V, Saba L, Edla D, Suri H, Cuadrado-Godia E, Laird J, Marinhoe R, Sanches J, Nicolaides A, et al. State-of-the-art review on deep learning in medical imaging. Front Biosci (Landmark Ed). 2019; 24:392–426.
Akkus Z, Galimzianova A, Hoogi A, Rubin DL, Erickson BJ. Deep learning for brain MRI segmentation: State of the art and future directions. J Digit Imaging. 2017; 30(4):449–59. https://doi.org/10.1007/s10278-017-9983-4.
Lecun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015; 521(7553):436–44. https://doi.org/10.1038/nature14539.
Bevilacqua V, Brunetti A, Guerriero A, Trotta GF, Telegrafo M, Moschetta M. A performance comparison between shallow and deeper neural networks supervised classification of tomosynthesis breast lesions images. Cogn Syst Res. 2019; 53:3–19. https://doi.org/10.1016/j.cogsys.2018.04.011.
Shen Z, Bao W, Huang D-S. Recurrent neural network for predicting transcription factor binding sites. Sci Rep. 2018; 8(1):15270.
Deng S-P, Cao S, Huang D-S, Wang Y-P. Identifying stages of kidney renal cell carcinoma by combining gene expression and dna methylation data. IEEE/ACM Trans Comput Biol Bioinforma. 2017; 14(5):1147–53.
Yi H-C, You Z-H, Huang D-S, Li X, Jiang T-H, Li L-P. A deep learning framework for robust and accurate prediction of ncrna-protein interactions using evolutionary information. Mol Therapy-Nucleic Acids. 2018; 11:337–44.
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal. 2017; 42:60–88. https://doi.org/10.1016/j.media.2017.07.005.
Schmidhuber J. Deep learning in neural networks: An overview. Neural Netw. 2015; 61:85–117. https://doi.org/10.1016/j.neunet.2014.09.003.
Magistroni R, Corsi C, Martí T, Torra R. A review of the imaging techniques for measuring kidney and cyst volume in establishing autosomal dominant polycystic kidney disease progression. Am J Nephrol. 2018; 48(1):67–78.
Bevilacqua V, Brunetti A, Cascarano GD, Palmieri F, Guerriero A, Moschetta M. A deep learning approach for the automatic detection and segmentation in autosomal dominant polycystic kidney disease based on magnetic resonance images In: Huang D, Jo K, Zhang X, editors. Intelligent Computing Theories and Application - 14th International Conference, ICIC 2018, Wuhan, China, August 15-18, 2018, Proceedings, Part II. Lecture Notes in Computer Science, vol. 10955. Cham: Springer: 2018. p. 643–9. https://doi.org/10.1007/978-3-319-95933-7_73.
Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. CVPR '14. Washington: IEEE Computer Society: 2014. p. 580–7. https://doi.org/10.1109/CVPR.2014.81.
Bevilacqua V, Dimauro G, Marino F, Brunetti A, Cassano F, Maio AD, Nasca E, Trotta GF, Girardi F, Ostuni A, Guarini A. A novel approach to evaluate blood parameters using computer vision techniques. In: 2016 IEEE International Symposium on Medical Measurements and Applications, MeMeA 2016, Benevento, Italy, May 15-18, 2016: 2016. p. 1–6. https://doi.org/10.1109/MeMeA.2016.7533760.
Bevilacqua V, Pietroleonardo N, Triggiani V, Brunetti A, Di Palma AM, Rossini M, Gesualdo L. An innovative neural network framework to classify blood vessels and tubules based on haralick features evaluated in histological images of kidney biopsy. Neurocomputing. 2017; 228:143–53. https://doi.org/10.1016/j.neucom.2016.09.091.
Bevilacqua V, Brunetti A, Trotta GF, Dimauro G, Elez K, Alberotanza V, Scardapane A. A novel approach for hepatocellular carcinoma detection and classification based on triphasic CT protocol. In: 2017 IEEE Congress on Evolutionary Computation, CEC 2017, Donostia, San Sebastián, Spain, June 5-8, 2017: 2017. p. 1856–63. https://doi.org/10.1109/CEC.2017.7969527.
Garcia-Garcia A, Orts-Escolano S, Oprea S, Villena-Martinez V, Rodríguez JG. A review on deep learning techniques applied to semantic segmentation. CoRR. 2017; abs/1704.06857. http://arxiv.org/abs/1704.06857.
Bevilacqua V, Altini D, Bruni M, Riezzo M, Brunetti A, Loconsole C, Guerriero A, Trotta GF, Fasano R, Pirchio MD, Tartaglia C, Ventrella E, Telegrafo M, Moschetta M. A supervised breast lesion images classification from tomosynthesis technique In: Huang D, Jo K, Figueroa-García JC, editors. Intelligent Computing Theories and Application - 13th International Conference, ICIC 2017, Liverpool, UK, August 7-10, 2017, Proceedings, Part II. Lecture Notes in Computer Science, vol. 10362. Cham: Springer: 2017. p. 483–9. https://doi.org/10.1007/978-3-319-63312-1_42.
Badrinarayanan V, Kendall A, Cipolla R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. CoRR. 2015; abs/1511.00561. http://arxiv.org/abs/1511.00561.
Brostow GJ, Fauqueur J, Cipolla R. Semantic object classes in video: A high-definition ground truth database. Pattern Recogn Lett. 2009; 30(2):88–97. https://doi.org/10.1016/j.patrec.2008.04.005.
Buongiorno D, Barsotti M, Barone F, Bevilacqua V, Frisoli A. A linear approach to optimize an emg-driven neuromusculoskeletal model for movement intention detection in myo-control: A case study on shoulder and elbow joints. Front Neurorobotics. 2018; 12:74.
Bortone I, Trotta GF, Brunetti A, Cascarano GD, Loconsole C, Agnello N, Argentiero A, Nicolardi G, Frisoli A, Bevilacqua V. A novel approach in combination of 3d gait analysis data for aiding clinical decision-making in patients with parkinson's disease. In: Intelligent Computing Theories and Application. ICIC 2017. Lecture Notes in Computer Science, vol 10362. Cham: Springer: 2017. p. 504–14. https://doi.org/10.1007/978-3-319-63312-1_44.
Bevilacqua V, Uva AE, Fiorentino M, Trotta GF, Dimatteo M, Nasca E, Nocera AN, Cascarano GD, Brunetti A, Caporusso N, et al.A comprehensive method for assessing the blepharospasm cases severity. In: Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2016. Communications in Computer and Information Science, vol 709. Singapore: Springer: 2016. p. 369–81. https://doi.org/10.1007/978-981-10-4859-3_33.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.
Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.
Wong SC, Gatt A, Stamatescu V, McDonnell MD. Understanding data augmentation for classification: When to warp? In: 2016 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2016, Gold Coast, Australia, November 30 - December 2, 2016: 2016. p. 1–6. https://doi.org/10.1109/DICTA.2016.7797091.
Xu Y, Jia R, Mou L, Li G, Chen Y, Lu Y, Jin Z. Improved relation classification by deep recurrent neural networks with data augmentation. In: COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16. Japan: ACL: 2016. p. 1461–70.
Brunetti A, Buongiorno D, Trotta GF, Bevilacqua V. Computer vision and deep learning techniques for pedestrian detection and tracking: A survey. Neurocomputing. 2018; 300:17–33. https://doi.org/10.1016/j.neucom.2018.01.092.
Kulchandani JS, Dangarwala KJ. Moving object detection: Review of recent research trends. In: 2015 International Conference on Pervasive Computing (ICPC), Pune. IEEE: 2015. p. 1–5. https://doi.org/10.1109/PERVASIVE.2015.7087138.
Zitnick CL, Dollár P. Edge boxes: Locating object proposals from edges In: Fleet DJ, Pajdla T, Schiele B, Tuytelaars T, editors. Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V. Lecture Notes in Computer Science, vol. 8693. Cham: Springer: 2014. p. 391–405. https://doi.org/10.1007/978-3-319-10602-1_26.
Uijlings JRR, van de Sande KEA, Gevers T, Smeulders AWM. Selective search for object recognition. Int J Comput Vis. 2013; 104(2):154–71. https://doi.org/10.1007/s11263-013-0620-5.
Girshick RB. Fast R-CNN. In: 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015: 2015. p. 1440–8. https://doi.org/10.1109/ICCV.2015.169.
Ren S, He K, Girshick RB, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017; 39(6):1137–49. https://doi.org/10.1109/TPAMI.2016.2577031.
Sharma K, Peter L, Rupprecht C, Caroli A, Wang L, Remuzzi A, Baust M, Navab N. Semi-automatic segmentation of autosomal dominant polycystic kidneys using random forests. arXiv e-prints. 2015:1510–06915. http://arxiv.org/abs/1510.06915.
Kline TL, Edwards ME, Korfiatis P, Akkus Z, Torres VE, Erickson BJ. Semiautomated segmentation of polycystic kidneys in t2-weighted mr images. Am J Roentgenol. 2016; 207(3):605–13.
Kline TL, Korfiatis P, Edwards ME, Blais JD, Czerwiec FS, Harris PC, King BF, Torres VE, Erickson BJ. Performance of an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys. J Digit Imaging. 2017; 30(4):442–8.
Kline TL, Korfiatis P, Edwards ME, Warner JD, Irazabal MV, King BF, Torres VE, Erickson BJ. Automatic total kidney volume measurement on follow-up magnetic resonance images to facilitate monitoring of autosomal dominant polycystic kidney disease progression. Nephrol Dial Transplant. 2015; 31(2):241–8.
Sharma K, Rupprecht C, Caroli A, Aparicio MC, Remuzzi A, Baust M, Navab N. Automatic segmentation of kidneys using deep learning for total kidney volume quantification in autosomal dominant polycystic kidney disease. Sci Rep. 2017; 7(1):2049.
Kim Y, Ge Y, Tao C, Zhu J, Chapman AB, Torres VE, Yu ASL, Mrug M, Bennett WM, Flessner MF, Landsittel DP, Bae KT, for the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease (CRISP). Automated segmentation of kidneys from mr images in patients with autosomal dominant polycystic kidney disease. Clin J Am Soc Nephrol. 2016; 11(4):576–84. https://doi.org/10.2215/CJN.08300815.
Turco D, Valinoti M, Martin EM, Tagliaferri C, Scolari F, Corsi C. Fully automated segmentation of polycystic kidneys from noncontrast computed tomography: A feasibility study and preliminary results. Acad Radiol. 2018; 25(7):850–5.
The authors wish to thanks the colleagues from University of Bari "Aldo Moro" who provided insight and expertise that greatly assisted the research.
About this supplement
This article has been published as part of BMC Medical Informatics and Decision Making Volume 19 Supplement 9, 2019: Proceedings of the 2018 International Conference on Intelligent Computing (ICIC 2018) and Intelligent Computing and Biomedical Informatics (ICBI) 2018 conference: medical informatics and decision making. The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-19-supplement-9.
Publication costs have been partially funded by the PON MISE 2014-2020 "HORIZON 2020" program, project PRE.MED.: Innovative and integrated platform for the predictive diagnosis of the risk of progression of chronic kidney disease, targeted therapy and proactive assistance for patients with autosomal dominant polycystic genetic disease.
Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari, 70125, Italy
Vitoantonio Bevilacqua
, Antonio Brunetti
, Giacomo Donato Cascarano
& Andrea Guerriero
D.E.T.O. University of Bari Medical School, Piazza Giulio Cesare, 11, Bari, 70124, Italy
Francesco Pesce
, Marco Moschetta
& Loreto Gesualdo
VB, LG and MM conceived the study and participated in its design and coordination. AB and GDC designed the classifiers and carried out the data classification and segmentation. MM and VB organized the enrolment of the patients and the acquisition of the data and then validated the final results. VB and AB drafted the manuscript and then all the authors read and approved the final manuscript.
Correspondence to Vitoantonio Bevilacqua.
The experimental procedures were conducted in accordance with the Declaration of Helsinki. All participants provided written informed consent.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Bevilacqua, V., Brunetti, A., Cascarano, G.D. et al. A comparison between two semantic deep learning frameworks for the autosomal dominant polycystic kidney disease segmentation based on magnetic resonance images. BMC Med Inform Decis Mak 19, 244 (2019) doi:10.1186/s12911-019-0988-4
Journal of Biomedical Semantics
Completing the is-a structure in light-weight ontologies
Patrick Lambrix,
Fang Wei-Kleiner &
Zlatan Dragisic
Journal of Biomedical Semantics volume 6, Article number: 12 (2015)
With the increasing presence of biomedical data sources on the Internet more and more research effort is put into finding possible ways for integrating and searching such often heterogeneous sources. Ontologies are a key technology in this effort. However, developing ontologies is not an easy task and often the resulting ontologies are not complete. In addition to being problematic for the correct modelling of a domain, such incomplete ontologies, when used in semantically-enabled applications, can lead to valid conclusions being missed.
We consider the problem of repairing missing is-a relations in ontologies. We formalize the problem as a generalized TBox abduction problem. Based on this abduction framework, we present complexity results for the existence, relevance and necessity decision problems for the generalized TBox abduction problem with and without some specific preference relations for ontologies that can be represented using a member of the \({\mathcal {EL}}\) family of description logics. Further, we present algorithms for finding solutions, a system as well as experiments.
Semantically-enabled applications need high quality ontologies and one key aspect is their completeness. We have introduced a framework and system that provides an environment for supporting domain experts to complete the is-a structure of ontologies. We have shown the usefulness of the approach in different experiments. For the two Anatomy ontologies from the Ontology Alignment Evaluation Initiative, we repaired 94 and 58 initial given missing is-a relations, respectively, and detected and repaired additionally, 47 and 10 missing is-a relations. In an experiment with BioTop without given missing is-a relations, we detected and repaired 40 new missing is-a relations.
With the increasing presence of biomedical data sources on the Internet more and more research effort is put into finding possible ways for integrating and searching such often heterogeneous sources. Semantic Web technologies such as ontologies, are becoming a key technology in this effort. Ontologies provide a means for modelling the domain of interest and they allow for information reuse, portability and sharing across multiple platforms. Efforts such as the Open Biological and Biomedical Ontologies (OBO) Foundry [1], BioPortal [2] and Unified Medical Language System (UMLS) [3] aim at providing repositories for biomedical ontologies and relations between these ontologies thus providing means for annotating and sharing biomedical data sources. Many of the ontologies in the biomedical domain, e.g., SNOMED [4] and Gene Ontology [5], are, regarding knowledge representation, light-weight ontologies. They are taxonomies or can be represented using the \({\mathcal {EL}}\) description logic or small extensions thereof (e.g. [6] and the TONES Ontology Repository [7])a. Therefore, in this paper, we consider ontologies that are represented by TBoxes in the \({\mathcal {EL}}\) family, which consist of axioms such as Carditis \(\sqsubseteq \) Fracture, with the intended meaning that Carditis is a Fracture, where Carditis and Fracture are concepts and the relationship is an is-a relation. (For detailed syntax see Section Preliminaries - description logics \({\mathcal {EL}}\) and \({\mathcal {EL}^{++}}\) ). A set of such terminological axioms is a TBox.
Developing ontologies is not an easy task and often the resulting ontologies (including their is-a structures) are not complete. In addition to being problematic for the correct modelling of a domain, such incomplete ontologies also influence the quality of semantically-enabled applications. Incomplete ontologies when used in semantically-enabled applications can lead to valid conclusions being missed. For instance, in ontology-based search, queries are refined and expanded by moving up and down the hierarchy of concepts. Incomplete structure in ontologies influences the quality of the search results. As an example, suppose we want to find articles in PubMed [8] using the MeSH [9] term Scleral Disease. By default the query will follow the hierarchy of MeSH and include more specific terms for searching, such as Scleritis. If the relation between Scleral Disease and Scleritis is missing in MeSH, we will miss 922 articles in the search result, which is about 57% of the original resultb. The structural information is also important information in ontology engineering research. For instance, most current ontology alignment systems use structure-based strategies to find mappings between the terms in different ontologies (e.g. overview in [10]) and the modeling defects in the structure of the ontologies have an important influence on the quality of the ontology alignment results.
In this paper we tackle the problem of completing the is-a structure of ontologies. Completing the is-a structure requires adding new correct is-a relations to the ontology. We identify two cases for finding relations which need to be added to an ontology. In case 1 missing is-a relations have been detected and the task is to find ways of making these detected is-a relations derivable in the ontology. There are many approaches to detect missing is-a relations, e.g., in ontology learning [11] or evolution [12], using linguistic [13] and logical [14,15] patterns, by using knowledge intrinsic to an ontology network [16-21], or by using machine learning and statistical methods [22-26]. However, in general, these approaches do not detect all missing is-a relations and in several cases even only few. Therefore, we assume that we have obtained a set of missing is-a relations for a given ontology (but not necessarily all). In the case where our set of missing is-a relations contains all missing is-a relations, completing the ontology is easy. We just add all missing is-a relations to the ontology and a reasoner can compute all logical consequences. However, when the set of missing is-a relations does not contain all missing is-a relations - and this is the common case - there are different ways to complete the ontology. The easiest way is still to just add the missing is-a relations to the ontology. For instance, T in Figure 1 (and Figure 2) represents a small ontology inspired by Galen ontology (http://www.openclinical.org/prj_galen.html), that is relevant for our discussions. Assume that we have detected that Endocarditis \(\sqsubseteq \) PathologicalPhenomenon and GranulomaProcess \(\sqsubseteq \) NonNormalProcess are missing is-a relations (M in Figure 1). Obviously, adding these relations to the ontology will repair the missing is-a structure. However, there are other more interesting possibilities. For instance, adding Carditis \(\sqsubseteq \) CardioVascularDisease and GranulomaProcess \(\sqsubseteq \) PathologicalProcess also repairs the missing is-a structure. Further, these is-a relations are correct according to the domain and constitute new is-a relations (e.g. Carditis \(\sqsubseteq \) CardioVascularDisease) that were not derivable from the ontology and not originally detected by the detection algorithmc. We also note that from a logical point of view, adding Carditis \(\sqsubseteq \) Fracture and GranulomaProcess \(\sqsubseteq \) NonNormalProcess also repairs the missing is-a structure. However, from the point of view of the domain, this solution is not correct. Therefore, as it is the case for all approaches for dealing with modeling defects, a domain expert needs to validate the logical solutions.
Small \({\mathcal {EL}}\) example. (C is the set of atomic concepts in the ontology. T is a TBox representing the ontology. M is a set of missing is-a relations. Or is the oracle representing the domain expert).
Graphical representation of the \({\mathcal {EL}}\) example in Figure 1. (Ovals represent concepts. Full arrows represent is-a relations between concepts in the ontology. Dashed arrows represent missing is-a relations).
In case 2 no missing is-a relations are given. In this case we investigate existing is-a relations in the ontology and try to find new ways of deriving these existing is-a relations. This might pinpoint to the necessity of adding new missing is-a relations to the ontology. As an example, let us assume that our ontology contains relations T∪M in Figure 1. If we assume now that we want to investigate new ways of deriving relations in M then obviously adding Carditis \(\sqsubseteq \) CardioVascularDisease and GranulomaProcess \(\sqsubseteq \) PathologicalProcess would be one possibility given that both are correct according to the domain.
The basic problem underlying the two cases can be formalized in the same way as a new kind of abduction problem (formal definitions in Section Abduction framework). Abduction is a reasoning method to generate explanations for observed symptoms and manifestations. When the application domain is described by a logical theory, it is called logic-based abduction [27]. Logic-based abduction is widely applied in diagnosis, planning, and database updates [28], among others. Further, as we have seen above, there may be different ways to complete the is-a structure of ontologies. Therefore, we propose two preference criteria on the solutions for this new abduction problem as well as different ways to combine them and conduct complexity analysis on important decision problems regarding the various preference criteria for ontologies represented using \({\mathcal {EL}}\) or \({\mathcal {EL}^{++}}\).
The contributions of this paper are the following.
We formalize the repairing of the missing is-a structure in an ontology as a generalized version of the TBox abduction problem (GTAP).
We present complexity results for the existence, relevance and necessity decision problems for GTAP in ontologies represented in \({\mathcal {EL}}\) and \({\mathcal {EL}^{++}}\) with and without the preference relations subset minimality and semantic maximality as well as three ways of combining these (maxmin, minmax, skyline). Subset minimality is a preference criterion that is often used in abductive reasoning problems. Semantic maximality is a new criterion that is important for GTAP.
We provide algorithms for finding a skyline optimal solution to GTAP in ontologies represented in \({\mathcal {EL}}\) and \({\mathcal {EL}^{++}}\). Although in theory, maxmin optimal solutions are normally preferred, in practice, they cannot be guaranteed and skyline optimal solutions are the best we can do.
We provide a system and show its usefulness through experiments.
Preliminaries - description logics \({\mathcal {EL}}\) and \({\mathcal {EL}^{++}}\)
Description logics are knowledge representation languages. In description logics concept descriptions are constructed inductively from a set \(N_{C}\) of atomic concepts, a set \(N_{R}\) of atomic roles and (possibly) a set \(N_{I}\) of individual names. The concept constructors for \({\mathcal {EL}^{++}}\) are the top concept ⊤, the bottom concept ⊥, nominals, conjunction, existential restriction and a restricted form of concrete domains. In this paper, we consider the version of \({\mathcal {EL}^{++}}\) without concrete domains. Note that this simplification does not affect the complexity results presented later on. For the syntax of the different constructors see Table 1.
Table 1 \(\boldsymbol{\mathcal {EL}}^{++}\) syntax and semantics
An interpretation consists of a non-empty set \(\Delta ^{{\mathcal {I}}}\) and an interpretation function \(\cdot ^{{\mathcal {I}}}\) which assigns to each atomic concept \(A \in N_{C}\) a subset \(A^{{\mathcal {I}}} \subseteq \Delta ^{{\mathcal {I}}}\), to each atomic role \(r \in N_{R}\) a relation \(r^{{\mathcal {I}}} \subseteq \Delta ^{{\mathcal {I}}} \times \Delta ^{{\mathcal {I}}}\), and to each individual name \(a \in N_{I}\) an element \(a^{{\mathcal {I}}} \in \Delta ^{{\mathcal {I}}}\). The interpretation function is straightforwardly extended to complex concepts. An \({\mathcal {EL}^{++}}\) TBox (named CBox in [6]) is a finite set of general concept inclusions (GCIs) and role inclusions (RIs) whose syntax can be found in the lower part of Table 1. Note that a finite set of GCIs is called a general TBox. An interpretation is a model of a TBox T if, for each GCI and RI in T, the conditions given in the third column of Table 1 are satisfied.
\({\mathcal {EL}}\) is the fragment of \({\mathcal {EL}^{++}}\) that allows only the concept constructors top concept ⊤, conjunction and existential restriction. An \({\mathcal {EL}}\) TBox contains only GCIs.
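As a minimal sketch, \({\mathcal {EL}}\) concept descriptions and GCIs can be represented with a few data structures; the class names and the example axiom are purely illustrative and are not taken from the paper or from Figure 1.

```python
from dataclasses import dataclass
from typing import Tuple

class Concept:            # base class for EL concept descriptions
    pass

@dataclass(frozen=True)
class Top(Concept):       # the top concept ⊤
    pass

@dataclass(frozen=True)
class Atomic(Concept):    # an atomic concept, e.g. Carditis
    name: str

@dataclass(frozen=True)
class And(Concept):       # conjunction C ⊓ D
    conjuncts: Tuple[Concept, ...]

@dataclass(frozen=True)
class Exists(Concept):    # existential restriction ∃r.C
    role: str
    filler: Concept

@dataclass(frozen=True)
class GCI:                # general concept inclusion C ⊑ D
    lhs: Concept
    rhs: Concept

# Hypothetical axiom used only to illustrate the syntax:
# Endocarditis ⊑ Carditis ⊓ ∃hasProcess.InflammationProcess
axiom = GCI(Atomic("Endocarditis"),
            And((Atomic("Carditis"),
                 Exists("hasProcess", Atomic("InflammationProcess")))))
print(axiom)
```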
The main reasoning task for description logics is subsumption in which the problem is to decide for a TBox T and concepts C and D whether \(T \models C {\sqsubseteq } D\). Subsumption in \({\mathcal {EL}^{++}}\) is polynomial even w.r.t. general TBoxes [6].
Abduction framework
In the following we explain how the problem of finding possible ways to repair the missing is-a structure in an ontology is formalized as a generalized version of the TBox abduction problem as defined in [29]. We assume that our ontology is represented using a TBox T in a language which in this paper is \({\mathcal {EL}}\) or \({\mathcal {EL}^{++}}\). Further, we have a set of missing is-a relations which are represented by a set M of atomic concept subsumptions. In case 1 in Section Background, these missing is-a relations were detected. In case 2 the elements in M are existing is-a relations in the ontology that are temporarily removed, and T represents the ontology that is obtained by removing the elements in M from the original ontology. (They can later be added again after completing the ontology.) To complete the is-a structure of an ontology, the ontology should be extended with a set S of atomic concept subsumptions (repair) such that the extended ontology is consistent and entails the missing is-a relations. However, the added atomic concept subsumptions should be correct according to the domain. In general, the set of all atomic concept subsumptions that are correct according to the domain is not known beforehand. Indeed, if this set were given then we would only have to add it to the ontology. The common case, however, is that we do not have this set, but instead can rely on a domain expert that can decide whether an atomic concept subsumption is correct according to the domain. In our formalization the domain expert is represented by an oracle Or that, when given an atomic concept subsumption, returns true or false. It is then required that for every atomic concept subsumption s ∈ S, we have that Or(s) = true. The following definition formalizes this.
(GENERALIZED TBOX ABDUCTION) Let T be a TBox in the considered representation language and let C be the set of all atomic concepts in T. Let \(M = \{ A_{i}~{\sqsubseteq }~B_{i}\}_{i = 1}^{n}\) with \(A_{i}, B_{i} \in C\) be a finite set of TBox assertions. Let \(\text{\textit{Or}} : \{ C_{i} ~{\sqsubseteq }~ D_{i} \mid C_{i}, D_{i}\!\! \in \!\! C \} \rightarrow \{ true, false \}\). A solution to the generalized TBox abduction problem (GTAP) (T, C, Or, M) is any finite set of TBox assertions \(S = \{ E_{i} ~{\sqsubseteq }~ F_{i}\}_{i = 1}^{k}\) such that \(\forall E_{i}, F_{i}: E_{i}, F_{i} \in C\), \(\forall E_{i}, F_{i}: Or(E_{i} ~{\sqsubseteq }~ F_{i}) = true\), T∪S is consistent and T∪S ⊧ M. The set of all such solutions is denoted as \({\mathcal {S}}(T, C, Or, M)\).
As an example, consider GTAP as defined in Figure 1. Then {Carditis \(\sqsubseteq \) CardioVascularDisease, InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess} is a solution for this GTAP. Another solution is {Carditis \(\sqsubseteq \) CardioVascularDisease, GranulomaProcess \(\sqsubseteq \) PathologicalProcess} as shown in Section Background.
There can be many solutions for a GTAP and, as explained in Section Background, not all solutions are equally interesting. Therefore, we propose two preference criteria on the solutions as well as different ways to combine them. The first criterion is a criterion that is not used in other abduction problems, but that is particularly important for GTAP. In GTAP it is important to find solutions that add to the ontology as much information as possible that is correct according to the domain. Therefore, the first criterion prefers solutions that imply more information.
(MORE INFORMATIVE) Let S and S′ be two solutions to the GTAP (T, C, Or, M). S is said to be more informative than S′ iff T∪S ⊧ S′ and T∪S′ ⊮ S.
Further, we say that S is equally informative as S′ iff T∪S ⊧ S′ and T∪S′ ⊧ S.
Consider two solutions to this GTAP, S1 = {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess}d and S2 = {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) PathologicalProcess}. In this case solution S1 is more informative than S2.
(SEMANTIC MAXIMALITY) A solution S to the GTAP (T, C, Or, M) is said to be semantically maximal iff there is no solution S′ which is more informative than S. The set of all semantically maximal solutions is denoted as \({\mathcal {S}}^{max}(T, C, Or, M)\).
An example of a semantically maximal solution to this GTAP is {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess, Carditis \(\sqsubseteq \) CardioVascularDisease}.
The second criterion is a classical criterion in abduction problems. It requires that no element in a solution is redundant.
(SUBSET MINIMALITY) A solution S to the GTAP (T, C, Or, M) is said to be subset minimal iff there is no proper subset \(S^{\prime } \subsetneq S\) such that S′ is a solution. The set of all subset minimal solutions is denoted as \({\mathcal {S}}_{\textit {min}}(T, C, Or, M)\).
An example of a subset minimal solution for this GTAP is {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess}. On the other hand, solution {Carditis \(\sqsubseteq \) CardioVascularDisease, InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess} is not subset minimal as it contains Carditis \(\sqsubseteq \) CardioVascularDisease, which is redundant for repairing the missing is-a relations.
In practice, both of the above two criteria are desirable. We therefore define ways to combine these criteria depending on what kind of priority we assign for the single preferences.
(COMBINING WITH PRIORITY FOR SEMANTIC MAXIMALITY) A solution S to the GTAP (T, C, Or, M) is said to be maxmin optimal iff S is semantically maximal and there does not exist another semantically maximal solution S′ such that S′ is a proper subset of S. The set of all maxmin optimal solutions is denoted as \({\mathcal {S}}_{\textit {min}}^{\mathbf {max}}(T, C, Or, M)\).
As an example, {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess, Carditis \(\sqsubseteq \) CardioVascularDisease} is a maxmin optimal solution for this GTAP. The advantage of maxmin optimal solutions is that a maximal body of correct information is added to the ontology without redundancy. For GTAP these are the most attractive solutions, but it is not clear how to generate such solutions, except for a brute-force methode that would query the oracle with, for larger ontologies, unfeasibly many questions.
(COMBINING WITH PRIORITY FOR SUBSET MINIMALITY) A solution S to the GTAP (T, C, Or, M) is said to be minmax optimal iff S is subset minimal and there does not exist another subset minimal solution S′ such that S′ is more informative than S. The set of all minmax optimal solutions is denoted as \({\mathcal {S}}_{\textbf {min}}^{max}(T, C, Or, M)\).
As an example, {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess} is a minmax optimal solution for this GTAP. In practice, minmax optimal solutions ensure that fewer is-a relations are added, thus avoiding redundancy. This is desirable if the domain expert prefers to look at solutions that are as small as possible. The disadvantage is that there may be correct relations that are not derivable when they are not included in the solution.
For the skyline interpretation, we consider the subset minimality and the semantic maximality as two dimensions for a solution S (see [30] for an explanation of how the definition satisfies the skyline interpretation).
(SKYLINE OPTIMAL) A solution S to the GTAP (T, C, Or, M) is said to be skyline optimal iff there does not exist another solution S′ such that S′ is a proper subset of S and S′ is equally informative as S. The set of all skyline optimal solutions is denoted as \({\mathcal {S}}_{\textit {min}}^{max}(T, C, Or, M)\).
All subset minimal, minmax optimal and maxmin optimal solutions are also skyline optimal solutions. However, there are semantically maximal solutions that are not skyline optimal. For example, {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess, Carditis \(\sqsubseteq \) CardioVascularDisease, Endocarditis \(\sqsubseteq \) CardioVascularDisease} is a semantically maximal solution for this GTAP, but it is not skyline optimal as its subset {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess, Carditis \(\sqsubseteq \) CardioVascularDisease} is equally informative. There also exist skyline optimal solutions that are not subset minimal solutions. For instance, {InflammationProcess \(\sqsubseteq \) PathologicalProcess, GranulomaProcess \(\sqsubseteq \) InflammationProcess, Carditis \(\sqsubseteq \) CardioVascularDisease} is a skyline optimal solution that is not subset minimal as removing Carditis \(\sqsubseteq \) CardioVascularDisease would still yield a solution (although not as informative). Skyline optimal is a relaxed criterion. It requires subset minimality for some level of informativeness.
Although maxmin or semantically maximal solutions are preferred, in practice, as mentioned before, it is not clear how to generate such solutions, except for a brute-force method that would query the oracle with, for larger ontologies, unfeasibly many questions. Therefore, a skyline solution is the next best thing and, in the case solutions exist, it is easy to generate a skyline optimal solution. However, the difficulty lies in reaching an as high level of informativeness as possible.
Complexity results
In addition to finding solutions, traditionally, there are three main decision problems for logic-based abduction: existence, relevance and necessity.
Given a GTAP (T, C, Or, M) we define the following decision problems:
Existence \({\mathcal {S}}(T, C, Or, M) \neq \emptyset \)?
Relevance Given ψ, does a solution \(S \in {\mathcal {S}}(T, C, Or, M)\) exist such that ψ∈S?
Necessity Given ψ, do all the solutions in \({\mathcal {S}}(T, C, Or, M)\) contain ψ?
If we replace \({\mathcal {S}}\) in Definition 8 with \({\mathcal {S}}_{\textit {min}}\), \({\mathcal {S}}^{max}\), \({\mathcal {S}}_{\textbf {min}}^{max}\), \({\mathcal {S}}_{\textit {min}}^{\textbf {max}}\) and \({\mathcal {S}}_{\textit {min}}^{max}\), respectively, we obtain the GTAP decision problems under the criteria of subset minimality, semantic maximality and the combinations.
We have proven complexity results for these GTAP decision problems and show the summary of the results in Tables 2 (\({\mathcal {EL}}\)) and 3 (\({\mathcal {EL}^{++}}\)). For the proofs we refer to the Appendix.
Table 2 Complexity results of GTAP for \({\boldsymbol{\mathcal {EL}}}\)
Table 3 Complexity results of GTAP for \({\boldsymbol{\mathcal {EL}}^{++}}\)
While it is not surprising that with either of the single preferences of subset minimality and semantic maximality, the complexity for \({\mathcal {EL}^{++}}\) remains the same as in the case without any preference, it is interesting to observe that combining the two preferences yields different complexity results. The combinations maxmin and skyline do not increase the complexity, while for minmax the complexity is higher, lying at the second level of the polynomial hierarchy. The intuition behind this can be explained informally as follows: for maxmin and skyline, the checking of both preference criteria can be conducted sequentially, while for minmax this is not possible. The complexity results provide a guideline for choosing suitable preference criteria when designing repairing algorithms in practice. As a result, the remaining part of the paper is dedicated to a concrete algorithm for finding one skyline optimal solution, together with a system based on the algorithm as well as experiments.
In this section we present algorithms for completing the is-a structure (solving GTAP (T, C, Or, M)) in light-weight ontologies. Based on lessons learned in [30], we require that the missing is-a relations are validated before the repairing and thus ∀m ∈ M: Or(m) = true. We also require that T∪M is consistent. For ontologies represented in \({\mathcal {EL}}\) this is trivially true as all TBoxes are consistent. For \({\mathcal {EL}^{++}}\) this is a requirement for the existence of a solution to GTAP. Given these assumptions we also know that M is a solution.
In general, we would like to find a solution for GTAP at the highest level of informativeness. However, this can only be guaranteed if we know all missing is-a relations. As discussed before, a way to obtain this is to use a brute-force method and ask Or for every pair in C×C whether it is a correct is-a relation according to the domain or not. In practice, for large ontologies this is not feasible. Therefore, the algorithms in this section initially compute a skyline optimal solution for GTAP (T, C, Or, M) and iteratively try to find other skyline optimal solutions at higher levels of informativeness.
As M is a solution, the algorithm will always return a result. The result can be a subset minimal solution that is a subset of M or a solution that is more informative than M.
In algorithm 1 we show the common part for the algorithms for the different representation languages. The algorithms contain 3 basic steps: finding a skyline-optimal solution for one missing is-a relation, finding a skyline-optimal solution for a set of missing is-a relations and finding a more informative skyline-optimal solution.
In RepairSingleIsa a skyline-optimal solution is found for a single missing is-a relation. This part of the algorithm is different for different knowledge representation languages and is discussed for \({\mathcal {EL}}\) and \({\mathcal {EL}^{++}}\) in Sections Algorithm - \({\mathcal {EL}}\) and Algorithm - \({\mathcal {EL}^{++}}\) , respectively.
In RepairMultipleIsa the algorithm collects for each missing is-a relation a solution from RepairSingleIsa and takes the union of these. Therefore, the following holds for Solution in line 6: T ∪ Solution ⊧ M and ∀s ∈ Solution: Or(s) = true. The statements in lines 7-8 (which are redundant for \({\mathcal {EL}}\)) guarantee consistency. This leads to the fact that Solution is a solution of GTAP (T, C, Or, M). Further, in line 9, we remove redundancy while keeping the same level of informativeness, and thus obtain a skyline optimal solution. (In the case where there are several ways to remove redundancy, one is chosen, as the extended ontologies will be equivalent in the sense that they entail the same statements.)
In Repair we try to improve the result from RepairMultipleIsa by trying to find a skyline optimal solution on a higher level of informativeness. Given that any element in the solution of RepairMultipleIsa that is not in M can be considered as a new missing is-a relation (which was not detected earlier), we can try to find additional more informative ways of repairing by solving a new GTAP problem for these new missing is-a relations (and continue as long as new missing is-a relations are detected). As a (skyline optimal) solution for the new GTAP is also a (skyline optimal) solution of the original GTAP, the solution found in Repair is a skyline optimal solution for the original GTAP.
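A condensed, runnable sketch of these three steps is shown below, restricted for readability to a TBox given as a set of atomic is-a edges; the function names, the reachability-based entailment check and the toy axioms are assumptions for illustration. A full implementation would use an \({\mathcal {EL}}\) reasoner for entailment and the domain expert as the oracle inside RepairSingleIsa.

```python
def entails(edges, a, b):
    """True if a is-a b follows from the atomic is-a edges by reflexivity
    and transitivity (a stand-in for a description-logic reasoner)."""
    seen, stack = set(), [a]
    while stack:
        x = stack.pop()
        if x == b:
            return True
        if x in seen:
            continue
        seen.add(x)
        stack.extend(d for (c, d) in edges if c == x)
    return False

def prune(tbox_edges, axioms):
    """Drop axioms entailed by the TBox plus the remaining axioms
    (redundancy removal at the same level of informativeness)."""
    kept = set(axioms)
    for axiom in sorted(axioms):
        if entails((kept - {axiom}) | set(tbox_edges), *axiom):
            kept.discard(axiom)
    return kept

def repair_multiple_isa(tbox_edges, missing, repair_single_isa):
    """Union of the single-relation repairs, followed by redundancy removal."""
    solution = set()
    for m in missing:
        solution |= repair_single_isa(tbox_edges, m)
    return prune(tbox_edges, solution)

def repair(tbox_edges, missing, repair_single_isa):
    """Validated relations that were not among the original missing ones are
    treated as new missing relations and repaired in turn, until no unseen
    relation appears."""
    solution = repair_multiple_isa(tbox_edges, missing, repair_single_isa)
    processed = set(missing)
    new = solution - processed
    while new:
        processed |= new
        extra = repair_multiple_isa(tbox_edges, new, repair_single_isa)
        solution = prune(tbox_edges, solution | extra)
        new = extra - processed
    return solution

# Toy run on a fragment loosely inspired by Figure 1 (not the actual TBox),
# with a trivial single-relation repair that just returns the relation itself;
# a real repair_single_isa would generate candidates and consult the oracle.
tbox = {("Endocarditis", "Carditis"), ("CardioVascularDisease", "PathologicalPhenomenon")}
missing = [("Endocarditis", "PathologicalPhenomenon")]
print(repair(tbox, missing, lambda _t, m: {m}))
```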
Algorithm - \({\mathcal {EL}}\)
We now present an algorithm for RepairSingleIsa for ontologies that are represented in \({\mathcal {EL}}\) and where the TBox is normalized as described in [6]. A normalized TBox T contains only axioms of the forms \(A_{1} \sqcap \dots \sqcap A_{n} \sqsubseteq B\), \(A \sqsubseteq \exists r.B\), and \(\exists r.A \sqsubseteq B\), where A, A 1, …, A n and B are atomic concepts and r is a role. Every \({\mathcal {EL}}\) TBox can in linear time be transformed into a normalized TBox that is a conservative extension, i.e., every model of the normalized TBox is also a model of the original TBox and every model of the original TBox can be extended to a model of the normalized TBox.
The algorithm in Algorithm 2 computes a solution for a GTAP with one missing is-a relation (i.e. GTAP \((T, C, Or, \{E \sqsubseteq F\})\)) in the following way. First, superconcepts of E are collected in a Source set and subconcepts of F are collected in a Target set (lines 3 and 4). Source contains expressions of the forms A and ∃r.A while Target contains expressions of the forms A, A 1⊓⋯⊓A n and ∃r.A where A, A 1, …, A n are atomic concepts and r is a role. Adding an is-a relation between an element in Source and an element in Target to the ontology would make \(E~ \sqsubseteq ~ F\) derivable (and thus this gives us logical solutions, but not necessarily solutions that are correct according to the domain). As we are interested in solutions containing is-a relations between atomic concepts, we check for every pair (A,B) ∈ Source × Target whether A and B are atomic concepts and Or(\(A ~\sqsubseteq ~B\)) = true (i.e. correct according to the domain). If so, then this is a possible solution for GTAP \((T, C, Or, \{E ~\sqsubseteq ~ F\})\). However, to conform to subset minimality and semantic maximality, if the current solution already contains is-a relations that would lead to the entailment of \(A~ \sqsubseteq ~ B\), then we do not use \(A~ \sqsubseteq ~ B\) (lines 8-9). Otherwise we use \(A~ \sqsubseteq ~ B\) and remove elements from the current solution that would be entailed if \(A~ \sqsubseteq ~ B\) is used (lines 10-12). Further, in the case where A is of the form ∃r.N and B is of the form ∃r.O, making \(N~ \sqsubseteq ~ O\) derivable would also make \(A ~\sqsubseteq ~ B\) derivable (lines 14-15)f. It is clear that for the result of RepairSingleIsa, i.e. Sol, the following holds: \(T \cup Sol \models E~ \sqsubseteq ~ F\) and ∀s∈Sol: Or(s) = true. Together with the fact that \(\mathcal {EL}\) TBoxes are consistent, this leads to the fact that Sol is a solution of GTAP \((T, C, Or, \{E ~\sqsubseteq ~ F\})\).
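A rough Python rendering of this procedure is given below; it is a sketch, not Algorithm 2 itself. Axioms are modelled as pairs (sub, sup), an existential expression ∃r.A as the tuple ('exists', r, A), conjunctions in the Target set are omitted for brevity, and `superconcepts`, `subconcepts`, `entails` and `oracle` are assumed helper functions standing in for reasoner and validation calls.

```python
# Sketch of RepairSingleIsa for a normalized EL TBox (not Algorithm 2 itself).

def is_atomic(concept):
    return not (isinstance(concept, tuple) and concept[0] == 'exists')

def repair_single_isa(tbox, oracle, missing,
                      superconcepts, subconcepts, entails):
    e, f = missing
    sol = set()

    def try_add(a, b):
        """Add A <= B unless it is already entailed; drop axioms that it
        makes redundant (the checks described for lines 8-12 above)."""
        nonlocal sol
        if not oracle((a, b)):
            return                                   # not correct in the domain
        if entails(tbox | sol, (a, b)):
            return                                   # already covered
        sol = {s for s in sol if not entails(tbox | {(a, b)}, s)}
        sol.add((a, b))

    for a in superconcepts(tbox, e):                 # Source set
        for b in subconcepts(tbox, f):               # Target set
            if is_atomic(a) and is_atomic(b):
                try_add(a, b)
            elif not is_atomic(a) and not is_atomic(b) and a[1] == b[1]:
                # A = exists r.N, B = exists r.O: adding N <= O would also
                # make A <= B derivable (the case of lines 14-15).
                try_add(a[2], b[2])
    return sol
```

To plug such a function into the loop sketched earlier, the extra helper arguments could be bound beforehand, for instance with functools.partial.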
As an example run of solving GTAP for \({\mathcal {EL}}\) ontologies, consider the GTAP in Figure 1. For a given ontology and set of missing is-a relations, the algorithm will first find solutions for repairing individual missing is-a relations using RepairSingleIsA. For the missing is-a relation Endocarditis \(\sqsubseteq \) PathologicalPhenomenon the following is-a relations, when added to the ontology, would make it possible to derive the missing is-a relation: Endocarditis \(\sqsubseteq \) PathologicalPhenomenon, Endocarditis \(\sqsubseteq \) Fracture, Endocarditis \(\sqsubseteq \) CardioVascularDisease, Carditis \(\sqsubseteq \) PathologicalPhenomenon, Carditis \(\sqsubseteq \) Fracture, Carditis \(\sqsubseteq \) CardioVascularDisease as well as InflammationProcess \(\sqsubseteq \) PathologicalProcess. As the first one is the missing is-a relation which was already validated, only the other six is-a relations are presented to the oracle for validation. Out of these six Endocarditis \(\sqsubseteq \) Fracture and Carditis \(\sqsubseteq \) Fracture are not correct according to the domain and are therefore not included in solutions. Further, the relations Endocarditis \(\sqsubseteq \) CardioVascularDisease, Endocarditis \(\sqsubseteq \) PathologicalPhenomenon, Carditis \(\sqsubseteq \) PathologicalPhenomenon are removed since they can be entailed from the ontology together with the remaining relations. Therefore, after validation, RepairSingleIsA returns {InflammationProcess \(\sqsubseteq \) PathologicalProcess, Carditis \(\sqsubseteq \) CardioVascularDisease}. The same process is repeated for the second missing is-a relation GranulomaProcess \(\sqsubseteq \) NonNormalProcess. In this case the following is-a relations, when added to the ontology, would make it possible to derive the missing is-a relation: GranulomaProcess \(\sqsubseteq \) NonNormalProcess and GranulomaProcess \(\sqsubseteq \) PathologicalProcess. GranulomaProcess \(\sqsubseteq \) NonNormalProcess is the missing is-a relation and was already validated as correct according to the domain. GranulomaProcess \(\sqsubseteq \) PathologicalProcess is presented to the oracle and validated as correct according to the domain. As GranulomaProcess \(\sqsubseteq \) NonNormalProcess can be entailed from the ontology together with GranulomaProcess \(\sqsubseteq \) PathologicalProcess, RepairSingleIsA returns {GranulomaProcess \(\sqsubseteq \) PathologicalProcess}. The solutions for the single is-a relations are then combined to form a solution for the set of missing is-a relations. In our case, there are no redundant relations and therefore RepairMultipleIsA returns {InflammationProcess \(\sqsubseteq \) PathologicalProcess, Carditis \(\sqsubseteq \) CardioVascularDisease, GranulomaProcess \(\sqsubseteq \) PathologicalProcess}. We note that this is a skyline optimal solution. In Repair the system tries to improve the acquired solution. This time the oracle is presented with a total of 13 relations for validation out of which only one is validated to be correct, i.e. GranulomaProcess \(\sqsubseteq \) InflammationProcess. This is added to the solution. Given this new is-a relation, GranulomaProcess \(\sqsubseteq \) PathologicalProcess is removed from the solution as it can now be entailed from the ontology and GranulomaProcess \(\sqsubseteq \) InflammationProcess. The new solution is {InflammationProcess \(\sqsubseteq \) PathologicalProcess, Carditis \(\sqsubseteq \) CardioVascularDisease, GranulomaProcess \(\sqsubseteq \) InflammationProcess}.
This is again a skyline optimal solution and it is more informative than the previous solution. As new missing is-a relations were detected, the repairing is run for the third time. However, in this run the solution is not improved and thus the algorithm outputs the final result. We note that in this example we found a skyline optimal solution that is also semantically maximal. In general, however, it is not possible to know whether the solution is semantically maximal without checking every possible is-a relation between atomic concepts in the ontology.
Algorithm - \({\mathcal {EL}^{++}}\)
We now present an algorithm for RepairSingleIsa for ontologies that are represented in \({\mathcal {EL}^{++}}\) (Algorithm 3) and where the TBox is normalized as described in [6]. A normalized TBox T contains only axioms of the forms \(A_{1} \sqcap \dots \sqcap A_{n} \sqsubseteq B\), \(A \sqsubseteq \exists r.B\), and \(\exists r.A \sqsubseteq B\), as well as role inclusions of the forms \(r \sqsubseteq s\) and \(r_{1} \circ r_{2} \sqsubseteq s\) where A, A 1, …, A n and B are atomic concepts and r, r 1, r 2 and s are roles. We note that, as for \({\mathcal {EL}}\) TBoxes, every \({\mathcal {EL}^{++}}\) TBox can in linear time be transformed into a normalized TBox that is a conservative extension of the original TBox.
The main difference with respect to the algorithm for \({\mathcal {EL}}\) ontologies is that the algorithm for \({\mathcal {EL}^{++}}\) needs to take into account role inclusions when searching for solutions which are found using axioms containing ∃ expressions. This is shown in lines 15-19 and FindExistsSolutions. As in the algorithm for \({\mathcal {EL}}\), if A is of the form ∃r.N and B is of the form ∃r.O, then making \(N~ \sqsubseteq ~ O\) derivable would also make \(A ~\sqsubseteq ~ B\) derivable. In \({\mathcal {EL}^{++}}\) there are two more possibilities when A is of the form ∃r.N and B is of the form ∃s.O. If T contains \(r\sqsubseteq s\), then making \(N~ \sqsubseteq ~ O\) derivable would also make \(A ~\sqsubseteq ~ B\) derivable. Further, if T contains \(r \circ r_{1} \sqsubseteq s\) and \(N \sqsubseteq \exists r_{1}.P\), then making \(P~ \sqsubseteq ~ O\) derivable would also make \(A ~\sqsubseteq ~ B\) derivable.
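The extra cases can be sketched as a generator of candidate atomic pairs, in the spirit of FindExistsSolutions but not the actual procedure. The helpers `role_hierarchy(tbox, r, s)` (is \(r \sqsubseteq s\) in T?), `role_chains(tbox, s)` (pairs (r0, r1) with \(r_0 \circ r_1 \sqsubseteq s\) in T) and `exists_fillers(tbox, n, r1)` (atomic P with \(N \sqsubseteq \exists r_1.P\) in T) are assumptions standing in for look-ups over the normalized TBox.

```python
# Sketch of the extra candidate pairs examined for EL++ (role inclusions).

def exists_candidates(tbox, a, b, role_hierarchy, role_chains, exists_fillers):
    """Yield atomic pairs (X, Y) such that adding X <= Y would make A <= B
    derivable, where A = ('exists', r, N) and B = ('exists', s, O)."""
    _, r, n = a
    _, s, o = b
    if r == s or role_hierarchy(tbox, r, s):        # r <= s (or the EL case r = s)
        yield (n, o)                                 # N <= O suffices
    for r0, r1 in role_chains(tbox, s):              # r0 o r1 <= s in T
        if r0 == r:
            for p in exists_fillers(tbox, n, r1):    # N <= exists r1.P in T
                yield (p, o)                         # P <= O suffices
```

Each yielded pair would then go through the same oracle check and redundancy handling as the atomic case in the \({\mathcal {EL}}\) sketch.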
As an example run of solving GTAP for \({\mathcal {EL}^{++}}\) ontologies, consider the GTAP in Figure 3 (and Figure 4). For a given ontology and set of missing is-a relations, the algorithm will first find solutions for repairing individual missing is-a relations using RepairSingleIsA. For the missing is-a relation Endocarditis \(\sqsubseteq \) PathologicalPhenomenon the following is-a relations, when added to the ontology, would make it possible to derive the missing is-a relation: Endocarditis \(\sqsubseteq \) PathologicalPhenomenon, Endocarditis \(\sqsubseteq \) Fracture, Endocarditis \(\sqsubseteq \) CardioVascularDisease, Carditis \(\sqsubseteq \) PathologicalPhenomenon, Carditis \(\sqsubseteq \) Fracture, Carditis \(\sqsubseteq \) CardioVascularDisease as well as InflammationProcess \(\sqsubseteq \) PathologicalProcess. As the first one is the missing is-a relation which was already validated, only the other six is-a relations are presented to the oracle for validation. Out of these six Endocarditis \(\sqsubseteq \) Fracture and Carditis \(\sqsubseteq \) Fracture are not correct according to the domain and are therefore not included in solutions. Further, the relations Endocarditis \(\sqsubseteq \) CardioVascularDisease, Endocarditis \(\sqsubseteq \) PathologicalPhenomenon, Carditis \(\sqsubseteq \) PathologicalPhenomenon are removed since they can be entailed from the ontology together with the remaining relations. Therefore, after validation, RepairSingleIsA returns {InflammationProcess \(\sqsubseteq \) PathologicalProcess, Carditis \(\sqsubseteq \) CardioVascularDisease}. The same process is repeated for the second missing is-a relation GranulomaProcess \(\sqsubseteq \) NonNormalProcess. In this case the following is-a relations, when added to the ontology, would make it possible to derive the missing is-a relation: GranulomaProcess \(\sqsubseteq \) NonNormalProcess and GranulomaProcess \(\sqsubseteq \) PathologicalProcess. GranulomaProcess \(\sqsubseteq \) NonNormalProcess is the missing is-a relation and was already validated as correct according to the domain. GranulomaProcess \(\sqsubseteq \) PathologicalProcess is presented to the oracle and validated as correct according to the domain. As GranulomaProcess \(\sqsubseteq \) NonNormalProcess can be entailed from the ontology together with GranulomaProcess \(\sqsubseteq \) PathologicalProcess, RepairSingleIsA returns {GranulomaProcess \(\sqsubseteq \) PathologicalProcess}. For the missing is-a relation Wound \(\sqsubseteq \) PathologicalPhenomenon the relations Wound \(\sqsubseteq \) PathologicalPhenomenon, SoftTissueTraumaProcess \(\sqsubseteq \) PathologicalProcess, Wound \(\sqsubseteq \) Fracture, Wound \(\sqsubseteq \) CardioVascularDisease, when added to the ontology, would make it possible to derive the missing is-a relation. Out of these, only Wound \(\sqsubseteq \) PathologicalPhenomenon and SoftTissueTraumaProcess \(\sqsubseteq \) PathologicalProcess are correct according to the oracle, and RepairSingleIsA therefore returns {Wound \(\sqsubseteq \) PathologicalPhenomenon, SoftTissueTraumaProcess \(\sqsubseteq \) PathologicalProcess}. For the remaining missing is-a relations BurningProcess \(\sqsubseteq \) SoftTissueTraumaProcess and BurningProcess \(\sqsubseteq \) TraumaticProcess the procedure RepairSingleIsA returns {BurningProcess \(\sqsubseteq \) SoftTissueTraumaProcess} and {BurningProcess \(\sqsubseteq \) TraumaticProcess}, respectively.
The solutions for the single is-a relations are then combined to form a solution for the set of missing is-a relations. In our case, Wound \(\sqsubseteq \) PathologicalPhenomenon is redundant and therefore RepairMultipleIsA returns {InflammationProcess \(\sqsubseteq \) PathologicalProcess, Carditis \(\sqsubseteq \) CardioVascularDisease, GranulomaProcess \(\sqsubseteq \) PathologicalProcess, BurningProcess \(\sqsubseteq \) TraumaticProcess, BurningProcess \(\sqsubseteq \) SoftTissueTraumaProcess, SoftTissueTraumaProcess \(\sqsubseteq \) PathologicalProcess}. We note that this is a skyline optimal solution. In Repair the system tries to improve the acquired solution. This time the oracle is presented with a total of 25 relations for validation out of which only two are validated to be correct, i.e. GranulomaProcess \(\sqsubseteq \) InflammationProcess and SoftTissueTraumaProcess \(\sqsubseteq \) TraumaticProcess. These are added to the solution. Given these new is-a relations, GranulomaProcess \(\sqsubseteq \) PathologicalProcess and BurningProcess \(\sqsubseteq \) TraumaticProcess are removed from the solution as they are redundant. The new solution is {InflammationProcess \(\sqsubseteq \) PathologicalProcess, Carditis \(\sqsubseteq \) CardioVascularDisease, GranulomaProcess \(\sqsubseteq \) InflammationProcess, SoftTissueTraumaProcess \(\sqsubseteq \) TraumaticProcess, BurningProcess \(\sqsubseteq \) SoftTissueTraumaProcess, SoftTissueTraumaProcess \(\sqsubseteq \) PathologicalProcess}. This is again a skyline optimal solution and it is more informative than the previous solution.
Small \({\mathcal {EL}^{++}}\) example. (C is the set of atomic concepts in the ontology. T is a TBox representing the ontology. M is a set of missing is-a relations. Or is the oracle representing the domain expert).
Graphical representation of the \({\mathcal {EL}^{++}}\) example in Figure 3. (Ovals represent concepts. Full arrows represent is-a relations between concepts in the ontology. Dashed arrows represent missing is-a relations).
As new missing is-a relations were detected, the repairing is run for the third time. In this iteration 5 relations required validation and only relation TraumaticProcess \(\sqsubseteq \) PathologicalProcess is validated as correct according to the domain. The new solution is {InflammationProcess \(\sqsubseteq \) PathologicalProcess, Carditis \(\sqsubseteq \) CardioVascularDisease, GranulomaProcess \(\sqsubseteq \) InflammationProcess, SoftTissueTraumaProcess \(\sqsubseteq \) TraumaticProcess, BurningProcess \(\sqsubseteq \) SoftTissueTraumaProcess, TraumaticProcess \(\sqsubseteq \) PathologicalProcess}. The relation SoftTissueTraumaProcess \(\sqsubseteq \) PathologicalProcess was removed from the solution as it is redundant.
The algorithm is run again and in this iteration no new is-a relations were validated to be correct so the solution from the previous iteration is returned as the final solution.
We have implemented a system for repairing missing is-a relations. The input to the system is an ontology in \({\mathcal {EL}}\) or \({\mathcal {EL}^{++}}\) and a set of validated missing is-a relations. The output is a solution to GTAP (called a repairing action). The system was implemented in Java and uses the ELK reasoner (version 0.4.1) [31] to detect implicit entailments in the ontology. The system is semi-automatic and requires interaction with a user, a domain expertg, who serves as an oracle and decides whether an is-a relation is correct according to the domain.
Once the ontology and the set of missing is-a relations are loaded, the user starts the debugging process by pressing the button Generate Repairing Actions (Figure 5). The system then removes redundant is-a relations and the non-redundant missing is-a relations are shown in a drop-down list allowing the user to switch between missing is-a relations. Additional relations acquired using ∃ expressions are also included in the drop-down list. It is also possible to scroll between relations using the arrow buttons in the bottom part of the screen.
Screenshot - repairing using source and target sets.
After selecting an is-a relation from the list, the user is presented with the Source and the Target set for that is-a relation. The user then needs to choose relations which are correct according to the domain for that is-a relation. Missing is-a relations are automatically validated to be correct according to the domain while the relations that were acquired using ∃ expressions have to be explicitly validated by the user.
In Figure 5 the user is presented with the Source and the Target set for the missing is-a relation Endocarditis \(\sqsubseteq \) PathologicalPhenomenon (concepts in the missing is-a relation are marked in red). In this case the user has selected {Carditis \(\sqsubseteq \) CardioVascularDisease} as a repairing action for the missing is-a relation (concepts marked in purple) and needs to confirm this by clicking the Validate button.
The user also has the option to check which relations have been validated so far and which relations can be validated, by clicking the Validate Is-a Relations button. In the pop-up window that appears the user can validate new relations, remove validations from already validated relations as well as ask for a recommendation by clicking the Recommend button (Figure 6). Recommendations are acquired by querying external sources (currently, WordNet [32], the UMLS Metathesaurus and Uberon [33]) and checking, for each pair consisting of a concept in Source and a concept in Target, whether there is an is-a relation between them in the external sourceh.
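As an illustration of this kind of look-up, the following sketch checks hypernymy in WordNet via NLTK; it assumes the WordNet corpus has been downloaded, uses made-up labels, and does not show the UMLS Metathesaurus or Uberon queries that the system also performs.

```python
# Illustrative WordNet-based recommendation check (requires the NLTK WordNet
# corpus, e.g. obtained via nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def wordnet_suggests_isa(source_label, target_label):
    """True if some sense of source_label has a sense of target_label among
    its transitive hypernyms in WordNet."""
    targets = set(wn.synsets(target_label.replace(' ', '_')))
    if not targets:
        return False
    for synset in wn.synsets(source_label.replace(' ', '_')):
        if targets & set(synset.closure(lambda s: s.hypernyms())):
            return True
    return False

# Example: collect recommended pairs from a Source set and a Target set.
source_set = ['carditis', 'inflammation']
target_set = ['disease', 'process']
recommended = [(a, b) for a in source_set for b in target_set
               if wordnet_suggests_isa(a, b)]
```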
Screenshot - validating is-a relations in a repairing action.
The validation phase is ended by clicking on the Validation Done button. The system then calculates the consequences of the chosen repairing actions and presents the user with a new set of is-a relations that need to be repaired. The validation phase and consequent computations represent one iteration of the Repair procedure in Algorithm 1. If the repairing did not change between two iterations the system outputs the repairing.
At any point the user can save validated relations from the "File" menu which makes it possible to do debugging across multiple sessions.
We have run several debugging experiments. Our goal was to investigate the usefulness of our approach in cases 1 and 2 and for real ontologies. Therefore, we developed experiments for cases 1 and 2 and used existing ontologies regarding anatomy (case 1) and Biotop (case 2). The question about usefulness was divided into two parts. First, we wanted an indication of the additional knowledge that was added to the ontology. For this we measure the number of newly found is-a relations. Further, we wanted an indication of the required user interaction with the domain expert who needs to validate the solutions. For this we measure the number of and sizes of Source and Target sets which represent all the logical solutions found by our system.
The experiments were performed on an Intel Core i7-2620M Processor at 3.07 GHz with 4 GB RAM under Windows 7 Professional and Java 1.7 compiler. In all experiments the validation phase took the most time while the computations between iterations took less than 10 seconds.
The results are summarized in Tables 4, 5, 6, 7 and 8. The 'It' columns represent the different iterations of Repair in Algorithm 1. The 'Missing' rows give the number of missing is-a relations in each iteration. For instance, in Table 5 in the first iteration, there are the 5 original missing is-a relations. Such a missing is-a relation can be repaired by adding itself ('Repaired by itself'), or by adding other is-a relations that were not derivable in the ontology extended with the missing is-a relations and thus represent new knowledge added to the ontology ('Repaired using new knowledge'). The 'New relations' row shows how many new is-a relations were added to the ontology to repair the missing is-a relations which were repaired using new knowledge. When such relations were found using ∃ (e.g., lines 14-15 in Algorithm 2 or lines 15-19 in Algorithm 3), then the number of such relations is shown in parentheses. For instance, in Table 5, in the first iteration 3 original missing is-a relations were repaired by adding 4 new relations representing new knowledge of which 2 were found using ∃. We note that for iteration i+1 the missing is-a relations (row 'Missing') are obtained by taking the union of the missing is-a relations repaired by themselves from iteration i and the new relations from iteration i that were used to repair the other missing is-a relations in iteration i, and then removing the redundant relations from this set. For instance, in Table 5, for the second iteration the missing is-a relations are the 2 original is-a relations that were repaired by adding themselves and the 4 new is-a relations that were added for repairing the 3 other original missing is-a relations. As there are no redundant relations among these, the number of missing is-a relations in iteration 2 is 6. We also note that in the last iteration all missing is-a relations from that iteration are always repaired by themselves and these represent the final repairing action.
Table 4 Results for the small ontology in Figure 1
Table 6 Results for debugging AMA - Mouse Anatomy ontology
Table 7 Results for debugging NCI-A - Human Anatomy ontology
Table 8 Results for debugging the Biotop ontology
For the example in Figure 1 the system behaves as explained in Section Algorithm - \({\mathcal {EL}}\) and the results are summarized in Table 4. The results for the example in Figure 3 are given in Table 5. Further, we performed experiments for the two different cases (missing is-a relations given or not) with existing biomedical ontologies.
During a session the user is presented with Source and Target sets for each of the current missing is-a relations. To add an is-a relation to the ontology the user chooses an element from the Source set and an element from the Target set. Multiple such is-a relations may be chosen for each shown pair of Source and Target set. In Tables 9, 10 and 11 we show the number of Source and Target sets of particular sizes for the different iterations of the algorithm. For instance, Table 9 shows that there were three iterations of the algorithm (cells have 3 values x/y/z). In the first iteration ('x' values), there were 56 Source sets of size 1 and 38 of size between 2 and 10, while there were 34 Target sets of size 1, 12 of size between 2 and 10, 10 of size between 11 and 20, 3 of size between 31 and 40, 6 of size between 41 and 50, 4 of size between 51 and 100, 21 of size between 101 and 200, and 4 of size between 301 and 400. The numbers for the second and third iteration are represented by the 'y' and 'z' values, respectively.
Table 9 Source and target set sizes for debugging AMA - Mouse Anatomy ontology
Table 10 Source and target set sizes for debugging NCI-A - Human Anatomy ontology
Table 11 Source and target set sizes for debugging the Biotop ontology
Case 1 experiment - OAEI anatomy
We debugged the two ontologies from the Anatomy track at the 2013 Ontology Alignment Evaluation Initiative, i.e. the Mouse Anatomy ontology (AMA) containing 2744 concepts and 4493 asserted is-a relations and a fragment of the NCI human anatomy ontology (NCI-A) containing 3304 concepts and 5423 asserted is-a relations. The input missing is-a relations for these two experiments were sets of 94 and 58 missing is-a relations for AMA and NCI-A, respectively. These missing is-a relations were obtained by a logic-based approach that uses an alignment between AMA and NCI-A [34] to generate candidate missing is-a relations, which were then validated by a domain expert to obtain actual missing is-a relations. Therefore, this experiment is related to case 1. We note that due to the lack of axioms involving ∃ in these ontologies, no solutions are found using ∃ (i.e., there are no numbers in parentheses in the 'New relations' rows).
Mouse anatomy
The results for debugging AMA are given in Table 6. Three iterations were required to reach the final solution. Out of 94 initial missing is-a relations 37 were repaired by repairing actions which add new knowledge to the ontology while 57 were repaired using only the missing is-a relation itself. There were no derivable relations. In total 44 new and non-redundant relations were added to the ontology in the first iteration. Out of 37 relations which were repaired by adding new relations, 22 had more than 1 non-redundant relation in the repairing action. For example, the missing is-a relation wrist joint \(\sqsubseteq \) joint is repaired by a repairing action {limb joint \(\sqsubseteq \) joint, wrist joint \(\sqsubseteq \) synovial joint}.
The set of missing is-a relations in the second iteration contains 101 relations, i.e. 57 relations which were repaired by adding the missing is-a relation itself and 44 newly added relations. In this iteration, 3 is-a relations were repaired by adding new knowledge to the ontology. All 3 of these is-a relations are is-a relations which were added in the previous iteration. For example, is-a relation wrist joint \(\sqsubseteq \) synovial joint is repaired by a repairing action {wrist joint \(\sqsubseteq \) hand joint} which is possible given that the is-a relation metacarpo-phalangeal joint \(\sqsubseteq \) joint from the initial set of missing is-a relations was repaired by a repairing action {hand joint \(\sqsubseteq \) synovial joint, limb joint \(\sqsubseteq \) joint} in the first iteration. Finally, the set of missing is-a relations containing 101 is-a relations in the third iteration is also the solution for the initial set of missing is-a relations given that no new relations were added in the third iteration.
The sizes for the Source and Target sets for the different iterations are given in Table 9. We note that many sets have size 1 and most of the sets have size up to 10. This means that it is easy to visualize these sets in the system and the cognitive effort for the user is not so high. For some sets there are too many elements to have a suitable visualization in the current system.
NCI - human anatomy
The results for debugging NCI-A are given in Table 7. The initial set of missing is-a relations contained 58 relations. Out of these 58 relations in the first iteration 9 were repaired by adding relations which introduce new knowledge to the ontology. In total 6 new is-a relations were added and 4 missing is-a relations were derivable.
In the second iteration, 5 out of 55 is-a relations were repaired by adding new relations while repairing actions for the 50 other is-a relations were unchanged. All 5 is-a relations which were repaired by adding new relations to the ontology are is-a relations which were repaired by repairing actions containing only the missing is-a relation from the first iteration. This exemplifies why it is beneficial to consider already repaired is-a relations in subsequent iterations as Source and Target sets for some missing is-a relations can change and more informative solutions might be identified.
The input to the third iteration is a set of 54 is-a relations and given that no changes were made, these relations are the final solution.
The sizes for the Source and Target sets for the different iterations are given in Table 10. The same comments as for the AMA experiment hold for this experiment.
Case 2 experiment - Biotop
This experiment relates to Case 2. In this experiment we used the Biotop ontology from the 2013 OWL Reasoner Evaluation Workshop dataset containing 280 concepts and 42 object properties as well as 267 asserted is-a relations and 65 asserted equivalence relations. For the set of missing is-a relations we randomly selected 47 is-a relations. Then the ontology was modified by removing is-a relations which would make the selected is-a relations derivable. The unmodified ontology was used as domain knowledge in the experiment. The results for debugging Biotop ontology are presented in Table 8.
The debugging process took 4 iterations. In the first iteration 28 relations were repaired by adding new relations. In total 26 new relations were added in the first iteration using axioms containing ∃ expressions. For example, for missing is-a relation GreatApe \(\sqsubseteq \) Primate we have a repairing action {FamilyHominidaeQuality \(\sqsubseteq \) OrderPrimatesQuality} given that the ontology contains axioms GreatApe \(\sqsubseteq \exists \)hasInherence.FamilyHominidaeQuality and ∃hasInherence.OrderPrimatesQuality \(\sqsubseteq \) Primate.
The input to the second iteration contained 41 non-redundant is-a relations (4 redundant is-a relations were removed from the solution in iteration 1). In total 10 is-a relations were repaired by adding new is-a relations. Out of these 10 repaired is-a relations, 5 are relations from the initial set of missing is-a relations while the other 5 are relations which were added in the first iteration. For example, is-a relation Atom \(\sqsubseteq \) Entity from the initial set of missing relations can be repaired with {Atom \(\sqsubseteq \) MaterialEntity} given that MaterialEntity \(\sqsubseteq \) Entity was added in the previous iteration.
In the third iteration, the input contained 42 is-a relations. In total 4 is-a relations (3 from the initial set of missing is-a relations and 1 from iteration 1) were repaired by adding 3 new relations. Out of the 3 new relations 1 is acquired using axioms containing ∃ expressions.
Finally, in the fourth iteration no new relations were added and the system outputs the solution.
During the repairing we found two new is-a relations that could not be derived from the original ontology and thus constitute new knowledge.
The sizes for the Source and Target sets for the different iterations are given in Table 11. Similar comments as for the AMA and NCI-A experiments hold for this experiment.
We have formalized the completing of missing is-a structure in ontologies as a GTAP, an abduction problem. However, there are several properties of completing the is-a structure in ontologies which distinguish it from the classic abduction framework. First, in the classic abduction framework there is a hypothesis H from which the solution S is chosen such that S⊆H holds. The corresponding component in the completing of is-a structure is the set of atomic concept subsumptions that should be correct according to the domain. In general, this set is not known beforehand. In the repairing scenario, a domain expert decides whether an atomic concept subsumption is correct according to the domain, and can return true or false like an oracle. Consequently, in the formalization we have an oracle Or rather than a hypothesis set H. This also has an impact on how solutions can be found. In the classic abduction problem finding solutions can start from H. In GTAP this is not possible, but (partial) solutions are validated using Or. Secondly, in completing missing is-a structure a more informative solution is preferred to a less informative one, where informativeness is a measure of how much information the added subsumptions (i.e. solution S) can derive. This is in contrast to the criteria of minimality (e.g. subset minimality, cardinality minimality) from the classic abduction framework. In principle this difference in preference stems from the original purpose of the two formalisms. The abduction framework is often used for diagnostic scenarios, where the essential goal is to confine the cause of the problem to as small a set as possible. For ontology repairing, in contrast, the goal is to add more subsumptions to enrich the ontology. As long as the added subsumptions are correct, a more informative repairing means more enrichment of the ontology.
The experiments have shown the usefulness of our approach. In each of the cases, whether missing is-a relations were identified, or whether we investigated existing is-a relations, our approach identified new information to be added to the ontologies.
The experiments have also shown that the iterative approach to repairing missing is-a relations is beneficial as in all our experiments additional relations were added to the ontology in subsequent iterations. Running the system on already repaired is-a relations gives the opportunity to identify new repairing actions which introduce new knowledge to the ontology. An example of this is found in the BioTop experiment where is-a relations from the initial set of missing is-a relations were repaired by more informative solutions in the third iteration.
High-quality debugging of modeling defects always requires validation by a domain expert and this is thus also the case for the completing of the is-a structure in ontologies. For each of the missing is-a relations a domain expert has to validate the generated solutions. In our system the solutions are shown in groups using the Source and Target sets. This allows the domain expert to (i) look at different related solutions at the same time and (ii) have a context for the solutions. For AMA the user looked at 94, 101 and 101 Source and Target sets in the three iterations, respectively. For NCI-A this was 58, 55 and 54, respectively. For these two ontologies the number of Source-Target set pairs is equal to the number of missing is-a relations in each iteration. For BioTop there are additionally the Source-Target pairs related to solutions based on ∃-expressions. The numbers for BioTop were 50, 62, 63 and 53 for the four iterations, respectively. The sizes of the Source and Target sets for the different iterations were small in most cases, with sizes up to 10. This means that it is easy to visualize these sets in the system and the cognitive effort for the user is not so high. For some sets there were too many elements to have a suitable visualization in the current system.
Currently, the system removes redundant is-a relations from a solution after every iteration. This step is crucial for producing skyline optimal solutions. The advantage of removing redundant relations is the reduction of computation time as well as the reduction of unnecessary user interaction. However, in some cases redundancy may be interesting. For instance, developers may want to have explicitly stated is-a relations in the ontologies even though they are redundant. This can happen, for instance, for efficiency reasons in applications, or because domain experts have validated asserted relations, which may then be considered more trusted than derived relations. In this case, the minimality criterion is not considered important and we may aim for semantically maximal solutions. Our algorithms can be adapted by removing the redundancy checking. The algorithms would then try to find solutions at as high a level of informativeness as possible, but would not take redundancy into account. Also for finding solutions it may be interesting to keep redundancy. For instance, in situations where an is-a relation is repaired by a relation acquired from the axioms containing ∃ expressions it might be advantageous to keep also the missing is-a relation in subsequent iterations even though it is redundant. The reason for this is that the Source set and the Target set for the missing is-a relation might get updated in later iterations and therefore new repairing actions might be identified. One way to solve this is to make it possible in the system to show these missing is-a relations with their Source and Target sets but not to include them in the solution unless they are repaired using new knowledge. For example, let us assume that the missing is-a relation Human \(\sqsubseteq \) Primate was repaired in one iteration by a repairing action {Human \(\sqsubseteq \) Primate, SpeciesHomoSapiensQuality \(\sqsubseteq \) OrderPrimatesQuality} in which case the second relation was found using ∃. In the next iteration the relation GreatApe \(\sqsubseteq \) Primate was added to the ontology. If the system removed the redundant relation Human \(\sqsubseteq \) Primate then the relation Human \(\sqsubseteq \) GreatApe would not be detected as a possible repairing action for Human \(\sqsubseteq \) Primate.
We note that our algorithms in every iteration except the last produce a skyline optimal solution that is on a higher level of informativeness than the solution in the previous iteration. This means that we get closer to a maxmin solution in every step. However, maxmin solutions are not guaranteed. Also, checking whether the solution in the final iteration is a maxmin solution would require full knowledge, which we in general do not have and which can only be obtained by a brute-force method that is infeasible for large ontologies. This problem is inherent in GTAPi.
There are several factors that influence the performance of our algorithms. Some of these cannot, in principle, be controlled. A first issue has to do with the domain expert. We assume that the domain expert answers correctly, but this is not guaranteed. We assume that the missing is-a relations have been validated, but also here mistakes could have been made. Further, we assume that the original ontology is correct. For flat ontologies (few levels in the is-a hierarchy) our algorithms will repair the missing structure, but the possibility of finding more informative solutions is higher when the area around the missing is-a relations is not flat. How flat the original ontology is depends on the domain as well as the original ontology development. Our approaches find solutions that contain 'contributing' is-a relations, i.e., they will not compute solutions for which some is-a relations in the solution do not help explain the repairing of the missing is-a relations.
Our approach assumes that the ontologies are represented in description logics. The advantage of this approach is that we can use the formal tools of logic to generate solutions as well as that we are able to prove properties about the problem (e.g. complexity, existence of solutions) and the algorithms (e.g. soundness, properties of the generated solutions). Although more and more ontologies can be represented as logic-based ontologies, this may not be the case for all. Our system can still be used for such ontologies that contain a hierarchical structure, but there is no guarantee for the quality of the output.
Further, we note that the 'is-a relation' is still not well-understood and/or used. For instance, [35] analyzed links in semantic networks and identified set/superset, generalization/specialization (based on predicates), 'a kind of', and conceptual containment (related to lambda-abstraction) as different uses of 'is-a', and in [36] genus-subsumption, determinable-subsumption, specification and specialization were proposed. The problem of 'is-a' overloading is also addressed in [17]. Different uses of 'is-a' may not have the same properties. For instance, multiple inheritance does not make sense for all uses of 'is-a'. These difficulties are not always recognized by ontology builders, and some may decide to focus on one use of 'is-a'. For instance, the Relation Ontology [37] for OBO defined the is-a relation for OBO ontologies, but it is now superseded by RO [38], in which no definition for is-a is given; instead the subclass construct of OWL is used. The work in this paper is based on logic and we assume that the is-a relation is reflexive, antisymmetric and transitive. The repairing of missing is-a relations in our work is based on logical reasoning. Our debugging tool does not take into account different uses of 'is-a'. Instead, it provides support for repairing missing structure that logically follows from decisions that were made by the developers of the ontologies.
For our algorithms we assume that the ontology extended with the missing is-a relations (T∪M) is consistent. This is important for \({\mathcal {EL}^{++}}\) ontologies as otherwise there is no solution. If T∪M is not consistent, we should first use approaches for debugging semantic defectsj. Further, we assume for the algorithms that the missing is-a relations are validated. If these are not validated there is a risk that we introduce modeling defects in our ontologies.
For our OAEI Anatomy experiment we used sets of missing is-a relations that were generated by using an alignment between the two ontologies. Using an alignment allows us to generate missing is-a relations that are logically derivable from the information in the ontologies and the alignment. Our system can, in addition, also find missing is-a relations that were not logically derivable. This is the case whenever a missing is-a relation is repaired by using 'new relations' (Tables 4, 5, 6, 7 and 8). Further, we note that even though the alignment that was used is a reference alignment that has been used for many years, this alignment may still be neither complete nor correctk. Therefore, even using the best ontology alignment systems may not provide us with complete alignments. Further, high-quality alignments may not always be available.
When alignments are available there could, however, be interesting ways of interaction between ontology alignment and ontology debugging. In [39] ontology alignment is considered as a special case of ontology debugging that focuses on completing the set of mappings between ontologies. A framework was proposed that unifies the phases of alignment and debugging and integrates them within one workflow. It is shown that debugging of the ontologies allows for improvement of the result of the alignment algorithms and vice versa.
The quality of the oracle also influences the quality of the repaired ontologies. In [30] different types of domain expert were discussed. The 'complete knowledge' expert always answers the question whether an is-a relation is correct or not according to the domain in a correct manner. This is the desired case, but may not be always achievable. (People make mistakes and domain experts may not always agree.) The 'partial correct' expert always gives correct answers, but may sometimes not give an answer. This represents a domain expert who knows a part of the domain well, but not the whole domain. To approximate this case we could use several domain experts and a skeptical approach. The 'Wrong' expert may give wrong answers which implies that defects may be introduced in the ontologies. The use of tools such as the one presented in this paper will, however, reduce the introduction of errors in the ontology by the domain expert.
There is not much work on the completing of missing is-a structure. In [19,34] this was addressed in the setting of taxonomies where the problem as well as some preference criteria were defined. Further, an algorithm was given and an implemented system was proposed. We note that the algorithm presented in this paper can be restricted to taxonomies and in that case finds more informative solutions than [19]. A later version of the [19] system, presented in [21], also deals with semantic defects, and was used for debugging ontologies related to a project for the Swedish National Food Agency [20]. An extension dealing with both ontology debugging and ontology alignment is described in [39]. In [40] an algorithm was given for finding solutions for \({\mathcal ALC}\) acyclic terminologies. In terms of the framework presented in this paper, those systems all returned solutions for GTAP, but there was no guarantee that the solutions were skyline optimal. Further, other heuristics were used.
There is no other work yet on GTAP. There is some work on TBox abduction. Hubauer et al. [41] propose an automata-based approach to TBox abduction in \(\mathcal {EL}\). It is based on a reduction to the axiom pinpointing problem, which is then solved with automata-based methods.
Further, there is work that addresses related topics but not directly the problem that is addressed in this paper.
Detection of missing (is-a) relations: In [14] the authors propose an approach for detecting modeling and semantic defects within an ontology based on patterns and antipatterns. The patterns and antipatterns are logic-based and mainly deal with logical constructs not available in taxonomies. Some suggestions for repairing are also given. In [18-21] detection is performed using the mappings between two ontologies. Given two pairs of terms between two ontologies which are linked by the same kind of relationship, if the two terms in one ontology are linked by an is-a relation while the corresponding terms in the other are not, then a candidate missing is-a relation is detected. The work in [16] discusses the alignment of AMA and NCI-A and uses the notion of structural validation to remove mappings that cannot be structurally validated. Structural validation could be used to detect candidate missing is-a relations.
The properties of is-a can be used for detecting modeling defects. For instance, based on the notions of identity, rigidity and dependence, not all is-a relations in existing ontologies make sense [17]. These is-a relations can be detected by checking these properties. In [15] two reasoning services are proposed for detecting flaws in OWL property expressions. The defects relate to the property is-a hierarchy, domain and range axioms and property chains.
Detecting missing is-a relations may be seen as a special case of detecting relations. There is much work on finding relationships between terms in the ontology learning area [11]. In this setting, new ontology elements are derived from text using knowledge acquisition techniques. There is, however, also work specifically focused on the discovery of is-a relations. One paradigm is based on linguistics using lexico-syntactic patterns. The pioneering research conducted in this line is in [13], which defines a set of patterns indicating is-a relationships between words in the text. However, depending on the chosen corpora, these patterns may occur rarely. Thus, though the approach has a reasonable precision, its recall is very low. Other linguistic approaches may make use of, for instance compounding, the use of background and itemization, term co-occurrence analysis or superstring prediction (e.g. [42,43]). Another paradigm is based on machine learning and statistical methods, such as k-nearest neighbors approach [23], association rules [22], bottom-up hierarchical clustering techniques [25], supervised classification [26] and formal concept analysis [24]. Ontology evolution approaches [12,44] allow for the study of changes in ontologies and using the change management mechanisms to detect candidate missing relations.
As mentioned before, these approaches, in general, do not detect all missing is-a relations.
Debugging of semantic defects: There is much work on debugging of semantic defects which is a dual problem to the one addressed in this paper. Most of the work on this topic aims at identifying and removing logical contradictions from an ontology [21,45-49], from mappings between ontologies [21,50-53] or ontologies in a network [20,21,54]. There is more work that addresses semantic defects in ontologies. Most of it aims at identifying and removing logical contradictions from an ontology. Standard reasoners are used to identify the existence of a contradiction, and provide support for resolving and eliminating it [49]. In [46] minimal sets of axioms are identified which need to be removed to render an ontology coherent. An algorithm for finding solutions is proposed which uses a variant of the single relation heuristic. Similarly, in [47,48] strategies are described for repairing unsatisfiable concepts detected by reasoners, explanation of errors, ranking erroneous axioms, and generating repair plans. The generated solutions, however, are based on other heuristics than [21,46]. In [45] the focus is on maintaining the consistency as the ontology evolves through a formalization of the semantics of change for ontologies. In [50-52] the setting is extended to repairing ontologies connected by mappings. In this case, semantic defects may be introduced by integrating ontologies. All approaches assume that ontologies are more reliable than the mappings and try to remove some of the mappings to restore consistency. In [50,52] the solutions are based on the computation of minimal unsatisfiability-preserving sets or minimal conflict sets. While [50] proposes solutions based on a heuristic using distance in WordNet, [52] allows the user to choose between all, some or one solution. In [51] the authors focus on the detection of certain kinds of defects and redundancy. The work in [53] further characterizes the problem as mapping revision. Using belief revision theory, the authors give an analysis for the logical properties of the revision algorithms. The approach in [54] deals with the inconsistencies introduced by the integration of ontologies, and unintended entailments validated by the user. We note that most of these approaches can deal with ontologies represented in more expressive languages than in our work. However, few of the early approaches have implemented systems and were usually only tested on small ontologies. Recently, several ontology alignment systems such as LogMap and AML manage to produce alignments with a low incoherence ratio for the Anatomy and the Large Biomedical Ontologies tracks of the OAEI (e.g. [55]). One remaining problem with these approaches is that the choice of which information to remove is completely logic-based and therefore may prefer solutions with modeling defects over solutions that are correct according to the domain [56].
Abductive reasoning in (simple) description logics: In addition to TBox abduction, [29] defines three more abduction problems. Concept abduction deals with finding sub-concepts. Abox abduction deals with retrieving instances of concepts or roles that, when added to the knowledge base, allow the entailment of a desired ABox assertion. Knowledge base abduction includes both ABox and TBox abduction. Most of the existing work deals with concept abduction and ABox abduction. The work on concept abduction is based on tableau-based (e.g. [57,58]) or structural subsumption (e.g. [59]) approaches. The work on Abox abduction often uses a tableau-based method (e.g. [60,61]) or an abductive logic programming approach (e.g. [62,63]). There is also work on the complexity of the ABox abduction (e.g. [64]) and concept abduction problems (e.g. [65]).
Conclusions and future work
In this paper we presented an approach for completing the is-a structure of \({\mathcal {EL}}\) and \({\mathcal {EL}^{++}}\) ontologies. Many biomedical ontologies can be represented by \({\mathcal {EL}}\) or a small extension thereof. We first defined a model of GTAP and extended it with various preferences. Then we presented complexity results on the existence, relevance and necessity decision problems for ontologies that can be represented as TBoxes using a member of the \({\mathcal {EL}}\) family. Unless the polynomial hierarchy collapses, GTAP is much harder than the classical deduction problem, which is tractable for \({\mathcal {EL}^{++}}\). Further, we provided algorithms and a system for finding skyline optimal solutions to the GTAP, and evaluated our approach on three biomedical ontologies. The evaluation has shown the usefulness of the system as in all experiments new is-a relations have been identified.
In the future, we are interested in studying the GTAP for other knowledge representation languages. Further, we will investigate variants of the GTAP with different preference relations and restrictions of the signature. Another interesting topic is to study the GTAP in the context of modular ontologies where it may not be possible to introduce changes in the imported ontologies. Further, we will look into the integration of different abduction frameworks to deal with both modeling and semantic defects.
a As an example, for SNOMED all constructors are in \({\mathcal {EL}^{++}}\). Also taxonomies can be represented in \({\mathcal {EL}}\). Gene Ontology has, in addition to \({\mathcal {EL}}\) constructs, some inverse roles and NCI Thesaurus has some disjunctions. We note that, although our approaches do not consider constructors outside \({\mathcal {EL}^{++}}\), our algorithms still will find correct solutions for these ontologies. Further, to deal with more expressive languages other less efficient techniques may be necessary such as in [40] where a tableau-based method is used for \({\mathcal ALC}\) acyclic terminologies. Another case is MeSH which is a thesaurus, but the hierarchical relation does not always express is-a, and therefore, although the algorithms can be applied to MeSH, the proposed solutions may not always be logically correct.
b PubMed accessed on 21-02-2014.
c Therefore, the approach in this paper can also be seen as a detection method that takes already found missing is-a relations as input.
d Observe that both missing is-a relations are derivable using S 1. GranulomaProcess \(\sqsubseteq \) NonNormalProcess is derivable as GranulomaProcess \(\sqsubseteq \) InflammationProcess (S 1), InflammationProcess \(\sqsubseteq \) PathologicalProcess (S 1), and PathologicalProcess \(\sqsubseteq \) NonNormalProcess (T). Endocarditis \(\sqsubseteq \) PathologicalPhenomenon is derivable as Endocarditis \(\sqsubseteq \exists \)hasAssociatedProcess.InflammationProcess (T), ∃hasAssociatedProcess.InflammationProcess \(\sqsubseteq \exists \)hasAssociatedProcess.PathologicalProcess (S 1), and ∃hasAssociatedProcess.PathologicalProcess \(\sqsubseteq \) PathologicalPhenomenon (T).
e For an ontology of 3000 concepts (similar in size as the ontologies in our OAEI Anatomy experiments) this method would need to ask the domain expert 9000000 questions. With a smart strategy this number can be reduced a lot. For instance, if we know that limb joint is a joint, then we also know that every subconcept of limb joint is a joint and thus we do not need to ask the domain expert. However, even if we can reduce the search space by 90% we would still need to ask the domain expert 900000 questions. This is not feasible. We also note that this brute-force method is essentially ontology development.
f The algorithm without lines 14-15 provides a RepairSingleIsa for taxonomies.
g Our aim is that a domain expert with ontology engineering expertise can use tools based on our approach without much introduction. If the domain expert lacks this expertise, an ontology engineer may work together with the domain expert. The domain expert needs to make the decisions on the validity of is-a relations, while the ontology engineer may help with understanding is-a (e.g. as opposed to part-of) and understanding the consequences of a particular repairing. In an earlier experiment for the Swedish National Food Agency [20] the domain expert had some expertise in ontology engineering and little help from us was needed.
h An optimized version of this approach is shown in [34].
i This relates also to the difference between the classic abduction problem where solutions can be constructed starting from H, while we can only validate solutions in GTAP using Or.
j A system that integrates completing of ontologies with debugging of semantic defects for taxonomies is presented in [21].
k In [21] it is suggested that 12 mappings in the alignment are not correct.
Appendix - complexity proofs
In this appendix we prove the complexity results shown in Tables 2 and 3.
The proof for the existence problem for the general case of GTAP follows the technique presented in Theorem 5.2 of [27]. In general, the existence problem is not harder than the relevance problem.
Since every definite Horn theory can be represented by a general \({\mathcal {EL}}\) TBox and every Horn theory can be represented by a general \({\mathcal {EL}^{++}}\) TBox [65], some existing complexity results on the abduction of Horn theories can be adapted here for the general existence and subset minimality cases. Note that this applies to the hardness proofs.
For convenience we primarily deal with dispensability rather than with necessity. Results for necessity are easy corollaries of our results on dispensability. Dispensability: Given ψ, does a solution \(S \in {\mathcal {S}}(T, C, Or, M)\) exist such that ψ∉S?
Complexity - \({\mathcal {EL}^{++}}\)
Deciding whether \({\mathcal {S}}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,Or,M) is NP-complete.
The entailment problem of \({\mathcal {EL}^{++}}\) is tractable [6]. Therefore the membership in NP follows.
NP-hardness of this problem is shown by a transformation from the well-known satisfiability problem (SAT), cf. [66]. Let \(Cl = \{{Cl}_{1}, \dots, {Cl}_{m}\}\) be a set of propositional clauses over \(X = \{x_{1}, \dots, x_{n}\}\). Let \(X' = \{x^{\prime}_{1}, \dots, x^{\prime}_{n}\}\), \(G = \{g_{1}, \dots, g_{m}\}\), \(R = \{r_{1}, \dots, r_{n}\}\) be sets of new concepts and c be a new concept. Then the GTAP (T,C,Or,M) is constructed as follows.
Note that in order to simplify the presentation, for the definition of the oracle, we write Or as a set containing the subsumptions that are true according to the oracle. We also apply this simplification in the other proofs of the paper.
$$\begin{aligned} C &= X \cup X' \cup G \cup R \cup \{c\} \\ M &= \{ c {\sqsubseteq} r_{i}: 1 \leq i \leq n, ~~ c {\sqsubseteq} g_{j} : 1 \leq j \leq m \} \\ Or &= \{ c {\sqsubseteq} x_{i}: 1 \leq i \leq n, ~~ c {\sqsubseteq} x^{\prime}_{i} : 1 \leq i \leq n \} \\ T &= \{ x_{i} \sqcap x^{\prime}_{i} {\sqsubseteq} \bot, x_{i} {\sqsubseteq} r_{i}, x'_{i} {\sqsubseteq} r_{i} : 1 \leq i \leq n \} \cup \{ c {\sqsubseteq} \top, \top {\sqsubseteq} c\}\\ &\quad \cup \bigcup\limits_{i=1}^{m} {\left(\{ x_{j} {\sqsubseteq} g_{i} : x_{j} \in {Cl}_{i} \} \cup \{ x^{\prime}_{j} {\sqsubseteq} g_{i} : \neg x_{j} \in {Cl}_{i} \}\right)} \end{aligned} $$
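To make the construction concrete, the following short Python sketch (our illustration, not part of the original proof) builds (T, C, O r, M) from a clause set given in DIMACS-style integer notation; axioms are represented as pairs of strings, with "&" standing for ⊓ and "TOP"/"BOT" standing for ⊤/⊥.

def sat_to_gtap(clauses, n):
    # clauses: list of clauses, each a list of non-zero integers over variables 1..n
    # (positive integer j encodes the literal x_j, negative -j encodes the literal not x_j)
    m = len(clauses)
    X, Xp = [f"x{i}" for i in range(1, n + 1)], [f"x'{i}" for i in range(1, n + 1)]
    G, R = [f"g{j}" for j in range(1, m + 1)], [f"r{i}" for i in range(1, n + 1)]
    C = X + Xp + G + R + ["c"]
    M = [("c", r) for r in R] + [("c", g) for g in G]
    Or = [("c", x) for x in X + Xp]
    T = [(f"x{i}&x'{i}", "BOT") for i in range(1, n + 1)]           # x_i and x'_i are disjoint
    T += [(f"x{i}", f"r{i}") for i in range(1, n + 1)]              # x_i subsumed by r_i
    T += [(f"x'{i}", f"r{i}") for i in range(1, n + 1)]             # x'_i subsumed by r_i
    T += [("c", "TOP"), ("TOP", "c")]
    for j, clause in enumerate(clauses, start=1):                   # clause gadgets
        T += [((f"x{lit}" if lit > 0 else f"x'{-lit}"), f"g{j}") for lit in clause]
    return T, C, Or, M

For instance, sat_to_gtap([[1, -2], [2]], 2) encodes the clause set (x 1 ∨ ¬x 2) ∧ (x 2).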
Next we prove that Cl is satisfiable iff (T,C,O r,M) has a solution. We first observe that for each \(S \in {\mathcal {S}}(T,C,Or,M)\), either \(c {\sqsubseteq } x_{i} \in S\) or \(c {\sqsubseteq } x'_{i} \in S\) (but not both) must hold, for 1≤i≤n, since otherwise \(T \cup S \not \models c {\sqsubseteq } r_{i}\).
Assume Cl is satisfiable. Let ψ be the truth assignment such that ψ(C l) is true. Define the solution S as
$$\begin{aligned} S & = \{ c {\sqsubseteq} x_{i} : \psi(x_{i}) = true, 1 \leq i \leq n \} \cup \\ &\quad\,\, \{ c {\sqsubseteq} x^{\prime}_{i} : \psi(x_{i}) = false, 1 \leq i \leq n \} \end{aligned} $$
Then \(T \cup S \models c {\sqsubseteq } r_{1} \wedge \ldots \wedge c {\sqsubseteq } r_{n}\). Moreover, because for every C l i (1≤i≤m)ψ(C l i ) is true, we have \(T \cup S \models c {\sqsubseteq } g_{1} \wedge \ldots \wedge c {\sqsubseteq } g_{m}\). Therefore T∪S⊧M holds.
Now suppose Cl is not satisfiable. For any solution S, either \(c {\sqsubseteq } x_{i}\) or \(c {\sqsubseteq } x^{\prime }_{i}\) must be in S for every i. Since there does not exist any truth assignment ψ such that ψ(C l) is true, there is no such S with \(T \cup S \models c {\sqsubseteq } g_{1} \wedge \ldots \wedge c {\sqsubseteq } g_{m}\). Therefore \({\mathcal {S}}(T, C, Or, M) = \emptyset \).
□
To decide if a given ψ is relevant for a given GTAP (T,C,O r,M) is NP-complete. To decide if a given ψ is dispensable for a given GTAP (T,C,O r,M) is NP-complete.
Guess a solution S which contains ψ (resp. does not contain ψ). Since checking whether \(S \in {\mathcal {S}}(T, C, Or, M)\) can be done in polynomial time, membership in NP follows.
Hardness can be proven by a slight modification of the reduction for the existence problem in Theorem 1. Define the GTAP (T ′,C ′,O r ′,M ′) as
$$\begin{aligned} C^{\prime}& = C \cup e \cup e^{\prime} \\ M^{\prime} & = M \cup h \\ Or^{\prime} & = Or \cup \{c {\sqsubseteq} e, c {\sqsubseteq} e^{\prime}\} \\ T^{\prime} & = T \setminus \{x_{i} {\sqsubseteq} r_{i}, x^{\prime}_{i} {\sqsubseteq} r_{i} : 1 \leq i \leq n \} \cup \\ &\quad \{x_{i} \sqcap e {\sqsubseteq} r_{i}, x^{\prime}_{i} \sqcap e {\sqsubseteq} r_{i} : 1 \leq i \leq n \} \cup \\ &\quad \{ e^{\prime} {\sqsubseteq} r_{i}: 1 \leq i \leq n, ~~ e^{\prime} {\sqsubseteq} g_{j} : 1 \leq j \leq m \} \cup \\ &\quad\{e \sqcap e^{\prime} {\sqsubseteq} \bot, e {\sqsubseteq} h, e^{\prime} {\sqsubseteq} h\} \end{aligned} $$
where e,e ′,h are new concepts not occurring in C.
We show that Cl is satisfiable if and only if (T ′,C ′,O r ′,M ′) has a solution containing \(c {\sqsubseteq } e\) and not containing \(c {\sqsubseteq } e'\).
Assume Cl is satisfiable and let ψ be a truth assignment such that ψ(C l) is true. Define the solution S as
$$\begin{array}{lll} S & = &\{ c {\sqsubseteq} x_{i} : \psi(x_{i}) = true, 1 \leq i \leq n \} \cup \\ &&\{ c {\sqsubseteq} x^{\prime}_{i} : \psi(x_{i}) = false, 1 \leq i \leq n \} \cup \{c {\sqsubseteq} e\} \end{array} $$
Then T ′∪S⊧M ′ holds. Note that one and only one of \(c {\sqsubseteq } e\) and \(c {\sqsubseteq } e^{\prime }\) is in any solution to (T ′,C ′,O r ′,M ′). Therefore, \(c {\sqsubseteq } e^{\prime } \not \in S\) holds.
Assume Cl is not satisfiable. Then \(\{ c {\sqsubseteq } e^{\prime }\}\) is a solution, while no solution can contain \(c {\sqsubseteq } e\). Hence \(c {\sqsubseteq } e \not \in S\) holds for every solution S. This concludes the proof.
Subset minimality
To decide if \({\mathcal {S}}_{\textit {min}}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is NP-complete.
We show that the problem is equivalent to the existence problem in general case. That is, \({\mathcal {S}}_{\textit {min}}(T, C, Or, M) \neq \emptyset \) iff \({\mathcal {S}}(T, C, Or, M) \neq \emptyset \). The 'only if' direction is trivial. Now we prove the 'if' direction. We show that if there is a solution \(S \in {\mathcal {S}}(T, C, Or, M)\), then there is a solution \(S^{\prime } \in {\mathcal {S}}_{\textit {min}}(T, C, Or, M)\) and S ′⊆S. If S is subset minimal, then S ′=S. Otherwise, let \(\mathcal {W}\) be the set of all solutions S ′′ such that S ′′⊂S. Since the empty set is not a solution, there exists an \(S^{\prime } \in \mathcal {W}\), such that \(\forall P \in \mathcal {W}\), P⊄S ′ holds. Then S ′ is a subset minimal solution.
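The 'if' direction is in effect constructive: from any solution a subset minimal one can be extracted by greedily discarding redundant axioms. A minimal sketch of this extraction (our illustration; entails(T, S, M) is assumed to be a black-box \({\mathcal {EL}^{++}}\) entailment test, e.g. a call to a DL reasoner):

def minimize(T, S, M, entails):
    # Greedily drop axioms from the solution S while T together with the remaining
    # axioms still entails M; by monotonicity of |= the result is subset minimal.
    S = list(S)
    for axiom in list(S):
        candidate = [a for a in S if a != axiom]
        if entails(T, candidate, M):
            S = candidate
    return S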
To decide if a given ψ is min-relevant for a given GTAP (T,C,O r,M) is NP-complete. To decide if a given ψ is min-dispensable for a given GTAP (T,C,O r,M) is NP-complete.
Membership: guess a set S which contains (resp. does not contain) ψ. Note that \(S \in {\mathcal {S}}_{\textit {min}}(T, C, Or, M)\) iff \(S \in {\mathcal {S}}(T, C, Or, M)\) and \(\forall h \in S: S \setminus \{h\} \not \in {\mathcal {S}}(T, C, Or, M)\). This is due to the monotonicity of ⊧ in \({\mathcal {EL}^{++}}\). The checking is in P, hence the membership in NP follows.
Hardness under the restrictions follows immediately by Theorem 2.
Semantic maximality
To decide if \({\mathcal {S}}^{max}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is NP-complete.
The proof is analogous to that of Theorem 3: we show that the problem is equivalent to the existence problem of the general case. That is, \({\mathcal {S}}^{max}(T, C, Or, M) \neq \emptyset \) iff \({\mathcal {S}}(T, C, Or, M) \neq \emptyset \). The 'only if' direction is trivial. Now we prove the 'if' direction. We show that if there is a solution \(S \in {\mathcal {S}}(T, C, Or, M)\), then there is a solution \(S^{\prime } \in {\mathcal {S}}^{max}(T, C, Or, M)\) and S⊆S ′. Let \(\mathcal {W}\) be the set of all solutions S ′′ such that S⊆S ′′. Then there exists \(S^{\prime } \in \mathcal {W}\), such that \(\forall P \in \mathcal {W}\), S ′⊄P holds. It is easy to show that S ′ is semantically maximal. Assume the opposite. There is another solution S 1 which is more informative than S ′. That is, there is a ψ such that T∪S 1⊧S ′∪{ψ} and T∪S ′⊮ψ. Then S ′∪S 1 should be a solution and it is a superset of S ′. ⇒ Contradiction.
To decide if a given ψ is max-relevant for a given GTAP (T,C,O r,M) is NP-complete. To decide if a given ψ is max-dispensable for a given GTAP (T,C,O r,M) is NP-complete.
Membership: guess a set S which contains (resp. does not contain) ψ. \(S \in {\mathcal {S}}^{max}(T, C, Or, M)\) iff \(S \in {\mathcal {S}}(T, C, Or, M)\) and ∀h∈O r s.t. T∪S⊮h:T∪S∪{h}⊧M. This is due to the monotonicity of ⊧ in \({\mathcal {EL}^{++}}\). The checking can be done in polynomial time since the number of possible TBox assertions is polynomial to C. Hence the membership follows.
Skyline optimality
To decide if \({\mathcal {S}}_{\textit {min}}^{max}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is NP-complete.
Since the set of skyline optimal solutions contains all subset minimal solutions, the existence problem follows trivially: if there exists a subset minimal solution, then there exists a skyline optimal solution.
To decide if a given ψ is skyline-relevant for a given GTAP (T,C,O r,M) is NP-complete. To decide if a given ψ is skyline-dispensable for a given GTAP (T,C,O r,M) is NP-complete.
Membership: guess a set S which contains (resp. does not contain) ψ. Note that \(S \in {\mathcal {S}}_{\textit {min}}^{max}(T, C, Or, M)\) iff \(S \in {\mathcal {S}}(T, C, Or, M)\) and ∀h∈S:T∪(S∖{h})⊮S. This is due to the monotonicity of ⊧ in \({\mathcal {EL}^{++}}\). The checking is in P, hence the membership in NP follows. Hardness under the restrictions follows immediately by Theorem 2.
Maxmin
To decide if \({\mathcal {S}}_{\textit {min}}^{\textbf {max}}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is NP-complete.
Again, we show that the problem is equivalent to the existence problem of the general case. Since the existence problem for \({\mathcal {S}}^{max}(T, C, Or, M)\) was shown to be equivalent to the general case, \({\mathcal {S}}^{max}(T, C, Or, M)\) is non-empty. Since \({\mathcal {S}}_{\textit {min}}^{\textbf {max}}(T, C, Or, M) \subseteq {\mathcal {S}}^{max}(T, C, Or, M)\) holds, we need to remove from \({\mathcal {S}}^{max}(T, C, Or, M)\) those solutions {S|∃S ′,s.t.S ′⊂S:T∪S ′⊧S}. Given a maximal solution S, we call such an S ′ a witness of S. Note that if \(S \in {\mathcal {S}}^{max}(T, C, Or, M)\), then all the witnesses of S as defined above are also in \({\mathcal {S}}^{max}(T, C, Or, M)\). Therefore, during the removal process, if S is removed, S must have a witness S ′ and S ′ is still in \({\mathcal {S}}^{max}(T, C, Or, M)\). As a result, at least one solution remains in \({\mathcal {S}}^{max}(T, C, Or, M)\) after the removal process. This concludes the proof.
To decide if a given ψ is maxmin-relevant for a given GTAP (T,C,O r,M) is NP-complete. To decide if a given ψ is maxmin-dispensable for a given GTAP (T,C,O r,M) is NP-complete.
Membership: guess a set S which contains (resp. does not contain) ψ. Note that \(S \in {\mathcal {S}}_{\textit {min}}^{\textbf {max}}(T, C, Or, M)\) iff \(S \in {\mathcal {S}}^{max}(T, C, Or, M)\) and ∀h∈S:T∪(S∖{h})⊮S.
To check whether \(S \in {\mathcal {S}}^{max}(T, C, Or, M)\) is feasible in polynomial time as shown in Theorem 6. The minimality check is also feasible in polynomial time as shown in Theorem 8, hence the membership in NP follows. Hardness under the restrictions follows immediately by Theorem 2.
Minmax
To decide if \({\mathcal {S}}_{\textbf {min}}^{max}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is NP-complete.
We show that the problem is equivalent to the existence problem of the general case. That is, \({\mathcal {S}}_{\textbf {min}}^{max}(T, C, Or, M) \neq \emptyset \) iff \({\mathcal {S}}(T, C, Or, M) \neq \emptyset \). If there is a solution \(S \in {\mathcal {S}}(T, C, Or, M)\), then from Theorem 3 there is a solution which is subset minimal. Let \(\mathcal W\) be the set of all the subset minimal solutions. Then we remove from \(\mathcal W\) the solutions which are less informative, in the sense that if there is \(S^{\prime }, S^{\prime \prime } \in \mathcal W\) such that S ′ is more informative than S ′′, then S ′′ is removed. Since the relation more informative is transitive, the removal process is confluent. Then there exists a unique non-empty set \(\mathcal W^{\prime } \subseteq \mathcal W\), such that no solution is more informative than another. It is obvious that \(\mathcal W^{\prime }\) is \({\mathcal {S}}_{\textbf {min}}^{max}(T, C, Or, M)\).
To decide if a given ψ is minmax-relevant for a given GTAP (T,C,O r,M) is \({\Sigma _{2}^{P}}\)-complete. To decide if a given ψ is minmax-dispensable for a given GTAP (T,C,O r,M) is \({\Sigma _{2}^{P}}\)-complete.
Membership can be shown by first guessing a solution S containing (resp. not containing) ψ, then verifying if \(S \in {\mathcal {S}}_{\textbf {min}}^{max}(T, C, Or, M)\). That is, to check whether there does not exist a subset minimal solution which is more informative than S. The check can be done by a co-NP oracle, since checking that there does exist such a solution can be done in NP (we guess a solution S ′. Checking S ′ is subset minimal and S ′ is more informative than S can be done in polynomial time). Therefore, the membership in \({\Sigma _{2}^{P}}\) follows.
\({\Sigma _{2}^{P}}\)-hardness of this problem is shown by a transformation from deciding Φ∈ QBF 2,∃. Let Φ without loss of generality be a QBF ∃x 1…∃x n ∀y 1…∀y m E. Let E be in disjunctive normal form D 1∨⋯∨D l where D i (1≤i≤l) is a conjunction of literals. Let X={x 1,…,x n }, Y={y 1,…,y m }, \(X^{\prime } = \{x^{\prime }_{1}, \ldots, x^{\prime }_{n}\}\), and \(Y^{\prime } = \{y^{\prime }_{1}, \ldots, y^{\prime }_{m}\}\). Let further G={g 1,…,g m }, R={r 1,…,r n } be sets of new concepts and h, e, e ′, c be new concepts. Then, the GTAP (T,C,O r,M) is constructed as follows.
$$\begin{aligned} C & = X \cup X^{\prime} \cup Y \cup Y^{\prime} \cup G \cup R \cup h \cup c \cup e \cup e^{\prime}\\ M & = \{ c {\sqsubseteq} h\}\\ Or & = \{ c {\sqsubseteq} e, c {\sqsubseteq} e^{\prime}, c {\sqsubseteq} x_{i}: 1 \leq i \leq n, \\ & \quad\, c {\sqsubseteq} x^{\prime}_{i}: 1 \leq i \leq n, c {\sqsubseteq} y_{j}: 1 \leq j \leq m, \\ & \quad\, c {\sqsubseteq} y^{\prime}_{j}: 1 \leq j \leq m\}\\ \end{aligned} $$
$$\begin{aligned} T & = \{c{\sqsubseteq} \top, \top {\sqsubseteq} c\} \\ & \cup \{ x_{i} \sqcap x^{\prime}_{i} {\sqsubseteq} \bot, x_{i} \sqcap e {\sqsubseteq} r_{i}, x^{\prime}_{i} \sqcap e {\sqsubseteq} r_{i} : 1 \leq i \leq n \} \\ & \cup \{ r_{1} \sqcap \ldots \sqcap r_{n} {\sqsubseteq} h\} \\ & \cup \{ y_{i} \sqcap y^{\prime}_{i} {\sqsubseteq} \bot, y_{i} \sqcap e^{\prime} {\sqsubseteq} g_{i}, y^{\prime}_{i} \sqcap e^{\prime} {\sqsubseteq} g_{i} : 1 \leq i \leq m \} \\ & \cup \{g_{1} \sqcap \ldots \sqcap g_{m} {\sqsubseteq} e\} \cup T^{\prime} \cup T^{\prime\prime}\\ \end{aligned} $$
$$\begin{aligned} T^{\prime} & = \bigcup\limits_{i=1}^{l} \bigcup\limits_{j=1}^{s} {\big(\{ y_{i_{1}} \sqcap \ldots \sqcap y_{i_{p}} \sqcap y^{\prime}_{i_{p+1}} \sqcap \ldots \sqcap y^{\prime}_{i_{q}}} \\ & \quad \sqcap_{k=1, k\neq j}^{s} {x_{i_{k}}} \sqcap_{k=s+1}^{t} {x^{\prime}_{i_{k}}} ~~~ {\sqsubseteq} ~~~ x^{\prime}_{i_{j}}: \\ D_{i} &= y_{i_{1}} \wedge \ldots \wedge y_{i_{p}} \wedge \neg y_{i_{p+1}} \wedge \ldots \wedge \neg y_{i_{q}} \\ & \quad\wedge x_{i_{1}} \wedge \ldots \wedge x_{i_{s}} \wedge \neg x_{i_{s+1}} \wedge \ldots \wedge \neg x_{i_{t}} \}\big) \\ T^{\prime\prime}& = \bigcup\limits_{i=1}^{l} \bigcup\limits_{j=s+1}^{t} {\big(\{ y_{i_{1}} \sqcap \ldots \sqcap y_{i_{p}} \sqcap y^{\prime}_{i_{p+1}} \sqcap \ldots \sqcap y^{\prime}_{i_{q}}} \\ &\quad \sqcap_{k=1}^{s} {x_{i_{k}}} \sqcap_{k=s+1, k\neq j}^{t} {x^{\prime}_{i_{k}}} ~~~ {\sqsubseteq} ~~~ x_{i_{j}}: \\ D_{i} &= y_{i_{1}} \wedge \ldots \wedge y_{i_{p}} \wedge \neg y_{i_{p+1}} \wedge \ldots \wedge \neg y_{i_{q}} \\ &\quad \wedge x_{i_{1}} \wedge \ldots \wedge x_{i_{s}} \wedge \neg x_{i_{s+1}} \wedge \ldots \wedge \neg x_{i_{t}} \}\big) \end{aligned} $$
Intuitively, for each disjunct D i in E, for each x literal in D i , T ′ and T ′′ consists of a subsumption where the negated form of x is at the right hand side. More precisely, if x is of the form x i , then \(x^{\prime }_{i}\) occurs at the right hand side; if x is of the form ¬x i , then x i occurs at the right hand side. For instance, assume D i =y 1∧¬y 2∧x 1∧¬x 2. Then T ′ consists of the subsumption \(y_{1} \sqcap y_{2}^{\prime } \sqcap x^{\prime }_{2} {\sqsubseteq } x^{\prime }_{1}\), and T ′′ consists of \(y_{1} \sqcap y_{2}^{\prime } \sqcap x_{1} {\sqsubseteq } x_{2}\).
Note that T is consistent and that (T,C,O r,M) is constructible in polynomial time. We show that Φ∈ QBF 2,∃ holds iff there is a solution \(S \in {\mathcal {S}}_{\textbf {min}}^{max}(T, C, Or, M)\) such that (\(c {\sqsubseteq } e) \in S\) (resp. (\(c {\sqsubseteq } e^{\prime }) \not \in S\)).
"Only if": Assume Φ∈ QBF 2,∃ holds. Hence, there exists a truth assignment ϕ(X) such that ∀y 1…∀y m E ϕ (X)∈ QBF 1,∀ holds. Define the solution S as
\(S = \{ c {\sqsubseteq } x_{i} : \phi (x_{i}) = true, 1 \leq i \leq n \} \cup \{ c {\sqsubseteq } x^{\prime }_{i} : \phi (x_{i}) = false, 1 \leq i \leq n \} \cup \{ c {\sqsubseteq } e\}\).
Then T∪S⊧M. Moreover, S is subset minimal. Next we show there is no other subset minimal solution which is more informative than S. Other than ϕ, there are 2^n−1 possible truth assignments over X. For each such truth assignment ψ, we can obtain the corresponding solution S ′ analogously to the way S was obtained, by replacing ϕ with ψ. Then every such S ′ is a subset minimal solution. However, it is obvious that T∪S ′⊮S, since S≠S ′ and there is at least one variable x i such that ϕ(x i )≠ψ(x i ).
Let μ be an arbitrary truth assignment over Y. Define S ′ as
\(S^{\prime } = \{ c {\sqsubseteq } y_{i} : \mu (y_{i}) = true, 1 \leq i \leq m \} \cup \{ c {\sqsubseteq } y^{\prime }_{i} : \mu (y_{i}) = false, 1 \leq i \leq m \} \cup \{ c {\sqsubseteq } e^{\prime }\}\). Any other subset minimal solution S ′′ which does not contain \(c {\sqsubseteq } e\) must contain such an S ′. Note that we do not fix S ′ since μ is arbitrary. To prove S is a minmax solution, we need to show that there does not exist such a subset minimal solution S ′′ such that T∪S ′′⊧S holds. In the following we show that for every such a possible solution S ′′, T∪S ′′∪S is inconsistent.
Since ∀y 1…∀y m E ϕ (X)∈ QBF 1,∀ holds, there exists a disjunct D i ∈E, such that D i ϕ,μ (X,Y) is true. That is, for every z∈D i , \(c {\sqsubseteq } z \in S \cup S^{\prime \prime }\) and for every ¬z∈D i , \(c {\sqsubseteq } z^{\prime } \in S \cup S^{\prime \prime }\). Let ρ be a rule in T ′∪T ′′ regarding D i (w. l. o. g.) with the form:
$$\begin{array}{l} y_{i_{1}} \sqcap \ldots \sqcap y_{i_{p}} \sqcap y^{\prime}_{i_{p+1}} \sqcap \ldots \sqcap y^{\prime}_{i_{q}} \\ \sqcap_{k=1, k\neq j}^{s} x_{i_{k}} \sqcap_{k=s+1}^{t} {x^{\prime}_{i_{k}}} {\sqsubseteq} x^{\prime}_{i_{j}} \end{array} $$
Since ρ∈T, we have \(T \cup S^{\prime \prime } \cup S \models c {\sqsubseteq } x^{\prime }_{i_{j}}\). On the other hand, \(T \cup S^{\prime \prime } \cup S \models c {\sqsubseteq } x_{i_{j}}\) holds too, because \(x_{i_{j}} \in D_{i}\). Therefore T∪S ′′∪S is not consistent, hence T∪S ′′⊮S.
"If": Assume Φ∈ QBF 2,∃ does not hold. Hence, for every truth assignment ϕ(X), there exists a truth assignment μ(Y), such that E ϕ,μ (X,Y) is false. That is, each D i ϕ,μ (X,Y) (1≤i≤l) is false. We prove that there does not exist a minmax solution which contains \(c {\sqsubseteq } e\) (resp. does not contain \(c {\sqsubseteq } e^{\prime }\)). Define the solution S as
\(S = \{ c {\sqsubseteq } x_{i} : \phi (x_{i}) = true, 1 \leq i \leq n \} \cup \{ c {\sqsubseteq } x^{\prime }_{i} : \phi (x_{i}) = false, 1 \leq i \leq n \} \cup \{ c {\sqsubseteq } e\}\). Then T∪S⊧M. Moreover, S is subset minimal. Next we show that there exists another subset minimal solution which is more informative than S. Define S ′ as
\(S^{\prime }= \{ c {\sqsubseteq } y_{i} : \mu (y_{i}) = true, 1 \leq i \leq m \} \cup \{ c {\sqsubseteq } y^{\prime }_{i} : \mu (y_{i}) = false, 1 \leq i \leq m \} \cup \{ c {\sqsubseteq } e^{\prime }\}\). First we show that T∪S∪S ′ is consistent. From the construction of T, we notice that inconsistency can only occur if there is an x j ∈X (resp. \(x^{\prime }_{j} \in X^{\prime }\)) such that \(c {\sqsubseteq } x_{j} \in S\) (resp. \(c {\sqsubseteq } x^{\prime }_{j} \in S\)), and \(T \cup S \cup S^{\prime } \models c {\sqsubseteq } x_{j}^{\prime }\) (resp. \(T \cup S \cup S^{\prime } \models c {\sqsubseteq } x_{j}\)) also holds.
Consider any subsumption \(\rho = Q {\sqsubseteq } p\) in T ′∪T ′′. Assume ρ is regarding the disjunct D i . If for every z∈Q, \((c {\sqsubseteq } z) \in S \cup S^{\prime }\) holds, then except for one literal (we call it z 1), the truth assignments enable all other literals in D i to be true. Since D i ϕ,μ (X,Y) is false, z 1 has to be false. If z 1 is a positive literal with the form of x, then x is assigned as false in ϕ. Therefore \(c {\sqsubseteq } x^{\prime }\) is in S. From the construction of ρ we obtain that p is in fact x ′. Thus \(T \cup S \cup S^{\prime } \models c {\sqsubseteq } x^{\prime }\) holds, and T∪S∪S ′ is consistent. Analogously, if z 1 is a negative literal with the form of ¬x, then x is assigned as true in ϕ. Therefore \(c {\sqsubseteq } x\) is in S. From the construction of ρ we obtain that p is in fact x. Thus \(T \cup S \cup S^{\prime } \models c {\sqsubseteq } x\) holds, and T∪S∪S ′ is consistent.
Now that T∪ S∪S ′ is consistent, T∪S∪S ′⊧S holds. Further, (\(S\, \cup S^{\prime } \setminus \{c {\sqsubseteq } e\}\)) is a subset minimal solution. Moreover, it is straightforward to verify that \(T \cup (S \cup S^{\prime } \setminus \{c {\sqsubseteq } e\}) \models S\). This concludes the proof.
Complexity - \({\mathcal {EL}}\)
In the following proofs we define the solution S or as \(S_{\textit {or}} = \{ P_{i} ~{\sqsubseteq }~ Q_{i} \mid \forall P_{i}, Q_{i} \in C : Or(P_{i} ~{\sqsubseteq }~ Q_{i}) = true\}\) with the intended meaning that S or consists of all the subsumptions that are true according to the domain expert.
To decide if \({\mathcal {S}}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is in P.
To decide the existence problem, we need to test whether T∪S or ⊧M, and the entailment problem of \({\mathcal {EL}}\) is tractable [6]. Note that T∪S or is consistent, thus if T∪S or ⊮M, then there does not exist a solution.
To decide if a given ψ is relevant for a given GTAP (T,C,O r,M) is in P.
We assume O r(ψ) is true; otherwise ψ cannot be relevant. The problem is then equivalent to the existence problem. That is, if there exists a solution S, then S∪{ψ} is also a solution. If there does not exist a solution, then ψ is not relevant.
To decide if a given ψ is in all the solutions for a given GTAP (T,C,O r,M) is in P.
Two entailment tests are performed: (1) \(T \cup S_{\textit{or}} \models M\) and (2) \(T \cup (S_{\textit{or}} \setminus \{\psi\}) \not\models M\). If both (1) and (2) hold, then ψ is in every solution. Otherwise, either there does not exist a solution ((1) does not hold), or \(S_{\textit{or}} \setminus \{\psi\}\) is a solution that does not contain ψ.
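A minimal sketch of these polynomial-time checks (our illustration, not from the paper; oracle(p, q) stands for the domain expert's answer to "p ⊑ q?" over the concepts in C, and entails(T, S, M) for a black-box \({\mathcal {EL}}\) entailment test):

def build_S_or(C, oracle):
    # all atomic subsumptions over C that the oracle validates
    return {(p, q) for p in C for q in C if oracle(p, q)}

def solution_exists(T, C, oracle, M, entails):
    # existence check: a solution exists iff T together with S_or entails M
    return entails(T, build_S_or(C, oracle), M)

def in_every_solution(psi, T, C, oracle, M, entails):
    # psi is necessary iff a solution exists and dropping psi from S_or breaks M
    S_or = build_S_or(C, oracle)
    return entails(T, S_or, M) and not entails(T, S_or - {psi}, M)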
To decide if \({\mathcal {S}}_{\textit {min}}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is in P.
The problem is equivalent to the existence problem in the general case; for the detailed proof see Theorem 3.
To decide if a given ψ is min-relevant for a given GTAP (T,C,O r,M) is NP-complete.
Hardness follows immediately from the fact that the min-relevance problem for definite Horn theories is NP-complete [65,67]. For the upper bound, we can guess a solution S which contains ψ, and test whether \(S \in {\mathcal {S}}_{\textit {min}}(T, C, Or, M)\). Note that \(S \in {\mathcal {S}}_{\textit {min}}(T, C, Or, M)\) iff T∪S⊧M and ∀h∈S:T∪(S∖{h})⊮M. Thus the problem is in NP.
To decide if a given ψ is in every minimal solution for a given GTAP (T,C,O r,M) is in P.
The upper bound follows the proof for the general case in Theorem 15. That is, two entailment tests are performed: (1) \(T \cup S_{\textit{or}} \models M\) and (2) \(T \cup (S_{\textit{or}} \setminus \{\psi\}) \not\models M\). If both (1) and (2) hold, then ψ is in every solution, thus also in every solution in \({\mathcal {S}}_{\textit {min}}(T, C, Or, M)\). Otherwise, \(S = S_{\textit{or}} \setminus \{\psi\}\) is a solution which does not contain ψ. Then there is a subset minimal solution S ′⊆S. Obviously S ′ does not contain ψ either.
For an \({\mathcal {EL}}\) TBox, if \(T \cup S_{\textit{or}} \models M\) then \(S_{\textit{or}}\) is the most informative solution. Therefore all the decision problems are trivial.
To decide if \({\mathcal {S}}_{\textbf {min}}^{max}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is in P.
The proof follows its counterpart in \({\mathcal {EL}^{++}}\), see Theorem 11.
To decide if a given ψ is minmax-relevant for a given GTAP (T,C,O r,M) is NP-complete.
Hardness follows from the NP-completeness of the min-relevance problem. In the following we prove the upper bound. First a subset minimal solution S that contains ψ can be guessed and tested. Given a solution S, we define \(\textit{closure}(S) = \{x : T \cup S \models x\}\). Next we prove that S is minmax optimal iff \(\forall h \in S: T \cup (S_{\textit{or}} \setminus \textit{closure}(S)) \cup (S \setminus \{h\}) \not\models h\). If: if \(\forall h \in S: T \cup (S_{\textit{or}} \setminus \textit{closure}(S)) \cup (S \setminus \{h\}) \not\models h\), then no element of S can be derived from outside the closure of S. Thus no more informative solution exists. Only if: assume \(\exists h \in S: T \cup (S_{\textit{or}} \setminus \textit{closure}(S)) \cup (S \setminus \{h\}) \models h\) holds. Then \(S^{\prime} = (S_{\textit{or}} \setminus \textit{closure}(S)) \cup (S \setminus \{h\})\) is a solution and T∪S ′⊧S. We first reduce S ′ to S ′′ such that T∪S ′′⊧S holds and S ′′ is subset minimal. Next we show that S ′′ is more informative than S. Since S is subset minimal, T∪(S∖{h})⊮h holds. Then from S ′′ we know that there must be an h ′∈S ′′ such that \(h^{\prime} \in (S_{\textit{or}} \setminus \textit{closure}(S))\). Then it follows that T∪S⊮h ′.
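The minmax-optimality test used in this proof can be phrased as a short procedure. A sketch (our illustration; entails(T, S, G) is again an assumed black-box \({\mathcal {EL}}\) entailment test and S_or the set of oracle-validated subsumptions):

def is_minmax_optimal(T, S, S_or, entails):
    S = set(S)
    # closure(S) restricted to the oracle-validated subsumptions
    closure = {x for x in S_or if entails(T, S, {x})}
    outside = S_or - closure
    # S is minmax optimal iff no h in S is derivable from T, the axioms outside
    # closure(S), and the remaining axioms of S
    return all(not entails(T, outside | (S - {h}), {h}) for h in S)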
To decide if a given ψ is in every minmax solution for a given GTAP (T,C,O r,M) is in P.
The upper bound follows the proof in minimal case in Theorem 18.
To decide if \({\mathcal {S}}_{\textit {min}}^{max}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is in P.
The problem is equivalent to the existence problem in general case, thus the upper bound follows immediately.
To decide if a given ψ is skyline-relevant for a given GTAP (T,C,O r,M) is NP-complete.
The upper bound follows from the NP-completeness of the skyline-relevance problem for \({\mathcal {EL}^{++}}\), see Theorem 8. To prove the hardness, we construct a reduction from the relevance problem under subset minimality for \({\mathcal {EL}}\) as follows. Given a GTAP (T,C,O r,M) (denoted P1) where T is a TBox in \({\mathcal {EL}}\) and \(M = \{ A~{\sqsubseteq }~B\}\). Note that this simplification does not affect the NP-hardness of the problem. We construct another GTAP (T ′,C,O r,M) (denoted P2), with \(T^{\prime } = T \cup \{P_{i} {\sqsubseteq } A, B {\sqsubseteq } Q_{i} | P_{i} {\sqsubseteq } Q_{i} \in S_{\textit {or}}\}\). The intuition behind P2 is that if there is a solution S such that T∪S⊧M, then both T ′∪S⊧M and T ′∪S⊧S or hold.
In the following we prove that a given ψ is subset minimal relevant to P1 if and only if ψ is skyline relevant to P2.
If: Assume ψ is skyline relevant to P2. There exists a solution S 2 containing ψ, such that there does not exist any solution \(S_{2}^{\prime } \subset S_{2}\) with \(S_{2}^{\prime }\) equally informative to S 2. Now we show that S 2 is also a subset minimal solution to P1. First we prove that T∪S 2⊧M. Assume the opposite, that T∪S 2⊮M holds. Then T ′∪S 2⊮M follows, because extending T with \(\{P_{i} {\sqsubseteq } A, B {\sqsubseteq } Q_{i}\}\) cannot produce the subsumption \(A {\sqsubseteq } B\); this contradicts S 2 being a solution to P2. Next, assume S 2 is not subset minimal in P1. Then there is another solution \(S_{2}^{\prime \prime } \subset S_{2}\), such that \(T \cup S_{2}^{\prime \prime } \models M\). Then it follows that \(T^{\prime } \cup S_{2}^{\prime \prime } \models M\) and \(T^{\prime } \cup S_{2}^{\prime \prime } \models S_{\textit {or}}\). Note that T ′∪S 2⊧M and T ′∪S 2⊧S or also hold, thus S 2 and \(S_{2}^{\prime \prime }\) are equally informative in P2, a contradiction.
Only if: Assume ψ is subset minimal relevant to P1. Then there exists a solution S 1 containing ψ and S 1 is a minimal solution. Next we show that S 1 is also a skyline solution to P2. Since T⊆T ′, S 1 is also a solution to P2. Since S 1 is minimal for P1, for any proper subset \(S_{1}^{\prime }\) of S 1, we have \(T \cup S_{1}^{\prime } \not \models M\). It follows that \(T^{\prime } \cup S_{1}^{\prime } \not \models M\), because extending T with \(\{P_{i} {\sqsubseteq } A, B {\sqsubseteq } Q_{i}\}\) cannot produce the subsumption \(A {\sqsubseteq } B\). Thus \(S_{1}^{\prime }\) is not a solution to P2. Therefore S 1 is a skyline solution to P2. □
To decide if a given ψ is in every skyline solution for a given GTAP (T,C,O r,M) is in P.
Follows Theorem 18.
To decide if \({\mathcal {S}}_{\textit {min}}^{\textbf {max}}(T, C, Or, M) \neq \emptyset \) for a given GTAP (T,C,O r,M) is in P.
To decide if a given ψ is maxmin-relevant for a given GTAP (T,C,O r,M) is in P.
To decide if a given ψ is in every maxmin solution for a given GTAP (T,C,O r,M) is in P.
OBO. The open biological and biomedical ontologies. http://www.obofoundry.org/.
BioPortal. http://bioportal.bioontology.org/.
UMLS. Unified medical language system. http://www.nlm.nih.gov/research/umls/about_umls.html.
SNOMED Clinical Terms. http://www.ihtsdo.org/snomed-ct/.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, et al. Gene Ontology: Tool for the Unification of Biology. Nat Genet. 2000; 25(1):25–29.
Baader F, Brandt S, Lutz C. Pushing the \(\mathcal {EL}\) envelope. In: 19th International Joint Conference on Artificial Intelligence: 2005. p. 364–9.
TONES Ontology Repository. http://www.w3.org/2001/sw/wiki/TONES.
PubMed. http://www.ncbi.nlm.nih.gov/pubmed/.
MeSH. Medical subject headings. http://www.nlm.nih.gov/mesh/.
Lambrix P, Strömbäck L, Tan H. Information Integration in Bioinformatics with Ontologies and Standards In: Bry and Maluszynski, editor. Semantic Techniques for the Web: The REWERSE perspective, chapter 8. Springer: 2009. p. 343–76.
Cimiano Ph, Buitelaar P, Magnini B. Ontology Learning from Text: Methods, Evaluation and Applications. IOS Press. 2005. ISBN: 978-1-58603-523-5.
Hartung M, Terwilliger J, Rahm E. Recent advances in schema and ontology evolution. In: Schema Matching and Mapping: 2011. p. 149–90.
Hearst M. Automatic acquisition of hyponyms from large text corpora. In: 14th International Conference on Computational Linguistics: 1992. p. 539–45.
Corcho O, Roussey C, Vilches LM, Pérez I. Pattern-based OWL ontology debugging guidelines. In: Workshop on Ontology Patterns: 2009. p. 68–82.
Keet M. Detecting and revising flaws in OWL object property expressions. In: 18th International Conference on Knowledge Engineering and Knowledge Management: 2012. p. 252–66.
Bodenreider O, Hayamizu T, Ringwald M, De Coronado S, Zhang S. Of mice and men: Aligning mouse and human anatomies. In: Proceedings of AMIA Annual Symposium: 2005. p. 61–5.
Guarino N. Some ontological principles for designing upper level lexical resources. In: 1st International Conference on Language Resources and Evaluation: 1998.
Bada M, Hunter L. Identification of OBO nonalignments and its implication for OBO enrichment. Bioinformatics. 2008; 24(12):1448–55.
Lambrix P, Liu Q, Tan H. Repairing the Missing is-a Structure of Ontologies. In: 4th Asian Semantic Web Conference: 2009. p. 76–90.
Ivanova V, Laurila Bergman J, Hammerling U, Lambrix P. Debugging taxonomies and their alignments: the ToxOntology - MeSH use case. In: 1st International Workshop on Debugging Ontologies and Ontology Mappings: 2012. p. 25–36.
Lambrix P, Ivanova V. A unified approach for debugging is-a structure and mappings in networked taxonomies. J Biomed Semantics. 2013; 4:10.
Maedche A, Staab S. Discovering conceptual relations from text. In: 14th European Conference on Artificial Intelligence: 2000. p. 321–5.
Maedche A, Pekar V, Staab S. Ontology learning part one - on discovering taxonomic relations from the web. In: Zhong, Liu, Yao, editors. Web Intelligence. Heidelberg: Springer: 2003. p. 301–20.
Cimiano Ph, Hotho A, Staab S. Learning concept hierarchies from text corpora using formal concept analysis. J Artif Intelligence Res. 2005; 24:305–39.
Zavitsanos E, Paliouras G, Vouros GA, Petridis S. Discovering subsumption hierarchies of ontology concepts from text corpora. In: IEEE/WIC/ACM International Conference on Web Intelligence: 2007. p. 402–8.
Spiliopoulos V, Vouros G, Karkaletsis V. On the discovery of subsumption relations for the alignment of ontologies. J Web Semantics. 2010; 8:69–88.
Eiter T, Gottlob G. The complexity of logic-based abduction. J ACM. 1995; 42(1):3–42.
Kakas AC, Mancarella P. Database updates through abduction. In: 16th International Conference on Very Large Data Bases: 1990. p. 650–61.
Elsenbroich C, Kutz O, Sattler U. A case for abductive reasoning over ontologies. In: OWL: Experiences and Directions: 2006.
Lambrix P, Wei-Kleiner F, Dragisic Z, Ivanova V. Repairing missing is-a structure in ontologies is an abductive reasoning problem. In: 2nd International Workshop on Debugging Ontologies and Ontology Mappings: 2013. p. 33–44.
Kazakov Y, Krötzsch M, Simančík F. Concurrent classification of \(\mathcal {EL}\) ontologies. In: 10th International Semantic Web Conference: 2011. p. 305–20.
WordNet. http://wordnet.princeton.edu/.
Uberon. http://uberon.org/.
Lambrix P, Liu Q. Debugging the missing is-a structure within taxonomies networked by partial reference alignments. Data & Knowledge Eng. 2013; 86:179–205.
Brachman RJ. What IS-A is and isn't: An analysis of taxonomic links in semantic networks. IEEE Comput. 1983; 16(10):30–6.
Johansson I, Klein B. Four kinds of "is-a" relations: genus-subsumption, determinable subsumption, specification, and specialization. In: 3rd International Workshop on Philosopy and Informatics: 2006.
Smith B, Ceusters W, Klagges B, Köhler J, Kumar A, Lomax J, Mugall C, Neuhaus F, Rector AL, Rosse C. Relations in biomedical ontologies. Genome Biol. 2005; 6:R46.
OBO RO. http://code.google.com/p/obo-relations/.
Ivanova V, Lambrix P. A unified approach for aligning taxonomies and debugging taxonomies and their alignments. In: 10th Extended, Semantic Web Conference: 2013. p. 1–15.
Lambrix P, Dragisic Z, Ivanova V. Get my pizza right: Repairing missing is-a relations in \(\mathcal {ALC}\) ontologies. In: 2nd Joint International Semantic Technology Conference: 2012. p. 17–32.
Hubauer T, Lamparter S, Pirker M. Automata-based abduction for tractable diagnosis. In: International Workshop on Description Logics: 2010. p. 360–71.
Wächter T, Tan H, Wobst A, Lambrix P, Schroeder M. A corpus-driven approach for design, evolution and alignment of ontologies. In: Winter Simulation Conference: 2006. p. 1595–602.
Arnold P, Rahm E. Semantic enrichment of ontology mappings: A linguistic-based approach. In: 17th East European Conference on Advances in Databases and Information Systems: 2013. p. 42–55.
Dos Reis JC, Dinh D, Pruski C, Da Silveira M, Reynaud-Delaitre C. Mapping adaptation actions for the automatic reconciliation of dynamic ontologies. In: 22nd ACM International, Conference on Information and Knowledge Management: 2013. p. 599–608.
Haase P, Stojanovic L. Consistent Evolution of OWL Ontologies. In: 2nd European, Semantic Web Conference: 2005. p. 182–97.
Schlobach S. Debugging and Semantic Clarification by Pinpointing. In: 2nd European Semantic Web Conference: 2005. p. 226–40.
Kalyanpur A, Parsia B, Sirin E, Hendler J. Debugging Unsatisfiable Classes in OWL Ontologies. J Web Semantics. 2006; 3(4):268–93.
Kalyanpur A, Parsia B, Sirin E, Cuenca-Grau B. Repairing Unsatisfiable Concepts in OWL Ontologies. In: 3rd European Semantic Web Conference: 2006. p. 170–84.
Flouris G, Manakanatas D, Kondylakis H, Plexousakis D, Antoniou G. Ontology Change: Classification and Survey. Knowledge Eng Rev. 2008; 23(2):117–52.
Meilicke C, Stuckenschmidt H, Tamilin A. Repairing Ontology Mappings. In: 22th National Conference on Artificial Intelligence: 2007. p. 1408–13.
Wang P, Xu B. Debugging ontology mappings: a static approach. Comput Inf. 2008; 27:21–36.
Ji Q, Haase P, Qi G, Hitzler P, Stadtmuller S. RaDON - repair and diagnosis in ontology networks. In: 6th European Semantic Web Conference: 2009. p. 863–7.
Qi G, Ji Q, Haase P. A Conflict-Based Operator for Mapping Revision. In: 8th International Semantic Web Conference: 2009. p. 521–36.
Jimenez-Ruiz E, Cuenca Grau B, Horrocks I, Berlanga R. Ontology Integration Using Mappings: Towards Getting the Right Logical Consequences. In: 6th European Semantic Web Conference: 2009. p. 173–87.
Cuenca Grau B, Dragisic Z, Eckert K, Euzenat J, Ferrara A, Granada R, et al.Results of the ontology alignment evaluation initiative 2013. In: 8th International Workshop on Ontology Matching: 2013. p. 61–100.
Pesquita C, Faria D, Santos E, Couto FM. To repair or not to repair: reconciling correctness and coherence in ontology reference alignments. In: 8th International Workshop on Ontology Matching: 2013. p. 13–24.
Colucci S, Di Noia T, Di Sciascio E, Donini F, Mongiello M. A uniform tableaux-based approach to concept abduction and contraction in \(\mathcal {ALN}\). In: International Workshop on Description Logics: 2004. p. 158–67.
Donini F, Colucci S, Di Noia T, Di Sciasco E. A tableaux-based method for computing least common subsumers for expressive description logics. In: 21st International Joint Conference on Artificial Intelligence: 2009. p. 739–45.
Di Noia T, Di Sciascio E, Donini F. Semantic matchmaking as non-monotonic reasoning: A description logic approach. J Artif Intelligence Res. 2007; 29:269–307.
Klarman S, Endriss U, Schlobach S. Abox abduction in the description logic \(\mathcal {ALC}\). J Autom Reasoning. 2011; 46:43–80.
Halland K, Britz K. Naive abox abduction in \(\mathcal {ALC}\) using a DL tableau. In: 25th International Workshop on Description Logics: 2012. p. 443–53.
Du J, Qi G, Shen Y-D, Pan J. Towards practical Abox abduction in large OWL DL ontologies. In: 25th AAAI Conference on Artificial Intelligence: 2011. p. 1160–5.
Du J, Wang K, Shen Y. A tractable approach to abox abduction over description logic ontologies. In: Proceedings of the 28th AAAI Conference on Artificial Intelligence: 2014. p. 1034–40.
Calvanese D, Ortiz M, Simkus M, Stefanoni G. The complexity of explaining negative query answers in DL-Lite. In: 13th International Conference on Principles of Knowledge Representation and Reasoning: 2012. p. 583–7.
Bienvenu M. Complexity of abduction in the \(\mathcal {EL}\) family of lightweight description logics: 2008. p. 220–30.
Garey MR, Johnson DS. Computers and Intractability: A Guide to the Theory of NP-Completeness. New York, NY, USA: W. H. Freeman & Co; 1979. ISBN: 978-0716710455.
Friedrich G, Gottlob G, Nejdl W. Hypothesis classification, abductive diagnosis and therapy. In: International Workshop on Expert Systems in Engineering: Principles and Applications: 1990. p. 69–78.
Wei-Kleiner F, Dragisic Z, Lambrix P. Abduction framework for repairing incomplete \(\mathcal {EL}\) ontologies: Complexity results and algorithms. In: 28th AAAI Conference on, Artificial Intelligence: 2014. p. 1120–7.
We thank the Swedish Research Council (Vetenskapsrådet), the Swedish e-Science Research Centre (SeRC) and the Swedish National Graduate School in Computer Science (CUGS) for financial support.
This paper is a revised and extended version of a paper presented at DILS 2014, 10th International Conference on Data Integration in the Life Sciences. It extends the original paper with results of [68] as well as some new experimental results, discussion and proofs of the complexity results.
Department of Computer and Information Science, Linköping University, Linköping, Sweden
Patrick Lambrix, Fang Wei-Kleiner & Zlatan Dragisic
Swedish e-Science Research Centre, Linköping University, Linköping, Sweden
Patrick Lambrix & Zlatan Dragisic
Fang Wei-Kleiner
Zlatan Dragisic
Correspondence to Patrick Lambrix.
PL defined the problem and a large part of the theory. FW proved the complexity results. ZD did the implementation work and most of the analysis. The other tasks were performed by all authors. All authors read and approved the final manuscript.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Lambrix, P., Wei-Kleiner, F. & Dragisic, Z. Completing the is-a structure in light-weight ontologies. J Biomed Semant 6, 12 (2015). https://doi.org/10.1186/s13326-015-0002-8
Ontology engineering
Ontology debugging | CommonCrawl |
\begin{document}
\hypersetup{pageanchor=false} \pagestyle{empty}
\begin{center} {\Large CZECH TECHNICAL UNIVERSITY IN PRAGUE} \\[3mm] {\Large Faculty of Nuclear Sciences and Physical Engineering}
}
{\huge\bf DIPLOMA THESIS}
}
\end{center} {\Large \hspace*{1cm} 2013
Dominik \v{S}afr\'{a}nek \hspace*{1cm}}
\begin{center} {\Large CZECH TECHNICAL UNIVERSITY IN PRAGUE} \\[3mm] {\Large Faculty of Nuclear Sciences and Physical Engineering}
}
}
{\huge\bf DIPLOMA THESIS}
}
{\huge Delayed Choice experiments}
{\huge {and causality in Quantum mechanics}}
}
\end{center} \begin{tabular}{ll} \large Author: &\large Dominik \v{S}afr\'{a}nek\\ \large Supervisor: &\large Ing. Petr Jizba, PhD.\\ \large Consultants: &\large Dr. Jacob Dunningham\\ \large Year: &\large 2013\\ \end{tabular}
\pagestyle{empty}
\section*{Prohlá\v{s}ení} Prohla\v{s}uji, \v{z}e jsem sv\r{u}j výzkumný úkol vypracoval samostatn\v{e} a pou\v{z}il jsem pouze literaturu uvedenou v p\v{r}ilo\v{z}eném seznamu.
Nemám záva\v{z}ný d\r{u}vod proti u\v{z}ití \v{s}kolního díla ve smyslu §60 Zákona \v{c}. 121/2000 Sb., o právu autorském, o právech souvisejících s právem autorským a o zm\v{e}n\v{e} n\v{e}kterých zákon\r{u} (autorský zákon).
\section*{Declaration} I declare, I wrote my Research Project independently and exclusively with the use of cited bibliography.
I agree with the usage of this thesis in the purport of the §60 Act 121/2000 (Copyright Act). \\ \\
V Praze dne ...........................\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ......................................
\noindent \begin{tabular}{ll} \emph{Název práce:}& \textbf{``Delayed choice'' experimenty}\\
& \textbf{a kauzalita v Kvantové mechanice}\\ \emph{Autor:}& Dominik \v Safránek\\ \emph{Obor:}& Matematické in\v{z}enýrství\\ \emph{Druh práce:}& Výzkumný úkol\\ \emph{Vedoucí práce:}& Ing. Petr Jizba, PhD. \\ \emph{Konzultanti:}& Dr. Jacob Dunningham \\ & Katedra fyziky, Fakulta jaderná a fyzikáln\v{e} in\v{z}enýrská \\ \end{tabular}
\begin{center}\textbf{Abstrakt}\end{center} \noindent
P\v resto, \v ze se m\r u\v ze zdát, \v ze kvantové experimenty se zpo\v zd\v eným výb\v erem naru\v sují klasickou kauzalitu, (tj. \v ze zdánliv\v e lze sestrojit experiment, který m\r u\v ze ovliv\v novat minulost), pomocí kvantové teorie ``mnoha sv\v et\r u'' dokazujeme, \v ze tomu tak nem\r u\v ze být. Dále matematicky formulujeme koncept ``which-path'' informace a ukazujeme, \v ze její potenciální získatelnost (byt' \v cáste\v cná) zp\r usobuje vyhasnutí interference. Pro lep\v sí názornost konstruujeme my\v slenkový systém, který vykazuje jak interferenci, tak kvantovou korelaci, a demonstrujeme, \v ze interference a korelace jsou nevyhnuteln\v e komplementární koncepty. S pou\v zitím konceptu ``Kvantové gumy'' demonstrujeme, \v ze neexistuje objektivní reality ve smyslu Einsteina, Podolského a Rosena. Dále diskutujeme rozdíl mezi ``vn\v ej\v sím'' (unitárním) a ``vnit\v rním'' (neunitárním) pozorovatelem. Práce je vybavena 3 apendixy, které rozvádí technické detaily této diplomové práce. \\ \\ \\ \noindent \begin{tabular}{ll} \emph{Klí\v cová slova:}& kauzalita, korelace \\ & which-path informace \\ & experiment se zpo\v zd\v eným výb\v erem\\ & realita, Many World\\ & problém kvantového m\v e\v rení\\ & Kvantová guma \end{tabular}
\noindent \\ \\ \begin{tabular}{ll} \emph{Title:}& \textbf{Delayed Choice Experiments }\\ &\textbf{and Particle Entanglement}\\ \emph{Author:}& Dominik \v Safránek\\ \end{tabular} \\ \begin{center}\textbf{Abstract}\end{center} \noindent
Although it may seem that Delayed Choice experiments contradict causality and that one could construct an experiment which could possibly affect the past, using the Many Worlds interpretation we prove that this is not possible. We also give a mathematical background to which-path information and show why its obtainability prevents the system from interfering. We construct a system which exhibits both interference and correlation and show why one-particle interference and correlations are complementary: a better visible interference pattern leads to worse correlations and vice versa. Then, using knowledge gained from the Quantum Eraser and Delayed Choice experiments, we prove that there is no objective reality in the sense of Einstein, Podolsky and Rosen. Furthermore, we discuss the difference between the ``outer'' (non-interacting) and ``inner'' (interacting) observer. We find the mathematical relationship between the ``universal'' wave function used by the ``outer'' observer and the processes the ``inner'' observer sees, which is our small contribution to the measurement problem. \\ \\ \\ \noindent \begin{tabular}{ll} \emph{Keywords:}& causality, correlations \\ & which-path information \\ & Delayed Choice, Quantum Eraser\\ & reality, Many Worlds interpretation\\ & measurement problem \\ \end{tabular}
\noindent
\tableofcontents \noindent
\hypersetup{pageanchor=true} \pagestyle{plain} \pagenumbering{arabic} \setcounter{page}{1} \chapter*{Preface}\addcontentsline{toc}{chapter}{Preface}
This thesis is the result of my three-year-long research on the foundations of Quantum Mechanics. I have always been interested in fundamental issues and the wonderful ideas that enter the realm of the quantum world. In the presented work I have strived to explain some of the pressing issues of contemporary Quantum Mechanics in a simple and possibly non-confusing manner. After exploring the Delayed Choice and Quantum Eraser experiments I started to think about what it is all about and what happens when we generalize the knowledge we gained. When you want to understand properly, take it to the extreme, do not fear the results you can get, do not fear to talk about them. When you find the slightest inconsistency, do not move away from it; try to solve it. That is why I introduced the concept of causality, tests of free will, the idea of brain-washing and relative realities. Yet, throughout this thesis I have always tried to support my statements and suggestions by rigorous mathematics or at least by ``thought'' experiments.
\chapter*{Introduction}\addcontentsline{toc}{chapter}{Introduction}
In this thesis we will elaborate on the inner workings of the ``delayed choice quantum eraser''. This will first be worked out in detail on selected examples and then generalized further. After this we will elaborate on the philosophical consequences which the aforementioned experiments provide. We will try to justify the general idea of the Many Worlds interpretation, still with some reservations of ours. Most physicists believe this interpretation cannot be proven directly, probably not even indirectly, yet we will try to show that this approach elegantly solves quantum eraser experiments while avoiding the vague and unnatural statement ``possibly obtainable which-path information prevents the system from interfering''. This approach also shows that the complementarity between interference and two-particle interference (correlations) is not just an experimental fact but also a simple result of the Many Worlds and Wigner's friend approach to Quantum Mechanics.
Then we move to an even more surprising issue, which we will show is again a simple consequence of the existence of quantum eraser experiments and of the assumption that people are made of the very same atoms as the world around them, so they also behave in the very same way and follow the same laws of physics.
\textbf{Observers follow the same laws of physics. Observers are just another quantum system.}
We show that there is no objective reality, but rather that each person can see reality differently. Reality is relative: not only in how we feel about things, but also in whether these things exist or not. They may exist for one observer and not exist for another. Or they may exist for one observer, but after the information about their existence is erased from his brain, they may even cease to exist again.
We illustrate that the ``worlds'' that enter the Many Worlds interpretation are not immune from seeing each other, contrary to what is usually believed. Yet, we show that in order to pass between such distinct worlds we need to follow very special rules, which are necessary in order to prevent us from experiencing paradoxes.
At the end of the last chapter we also elaborate on the notorious measurement problem. In particular, we suggest how and when the measurement is done, and we find the relationship between the universal wave function (describing both the observer and the observed system) and the reality the observer sees. To this end we introduce the ``passive Zeno effect'', that is, the effect where the observer does not notice any change and does not detect anything, but still destroys a possible interference.
\chapter{Delayed Choice and Quantum Eraser experiments}
Delayed choice experiments were thought experiments at first, but have lately received many experimental realizations \cite{JaquesExperimentalRealizationDelayedChoice, PeruzzoDelayedChoiceExperimental, KaiserDelayedChoiceExperimental, ZeilingerDelayedChoiceExperimentalSwapping}. They show that non-locality is an inherent part of quantum particles and that asking where exactly the quantum particle was before the actual measurement is irrelevant. We can say where the particle is only when a position measurement has been done, but not before and not even after (due to wave-packet spreading).
Quantum Eraser experiments show that when the measurement device itself is a quantum system, it is possible to cancel a possible interference by measurement, but we can regain it when we erase the information from the measuring device. We will see that some of the eraser experiments do not fit this description very well, so maybe it is better to refer to these experiments in a more general way as processes of destroying and regaining interference patterns.
In the last sections we will deal with delayed choice quantum erasers, which combine the previous two, and with the Free Will experiment, a type of delayed choice quantum eraser which could possibly determine whether a person does or does not have free will.
\section{Wheeler's delayed choice}\label{sectiondelayedchoice}
The first Delayed Choice thought experiment was proposed by Wheeler \cite{WheelersDelayedChoice}, and a slightly modified version of it is shown in figure \ref{Wheelers}.
\begin{figure}
\caption{Wheeler's delayed choice experiment.}
\label{Wheelers}
\end{figure}
The beam reflected from the beam-splitter has changed polarization, so we put a half-wave plate there to change it back; otherwise the beams would not interfere, because beams with perpendicular polarizations cannot interfere. This is a special case of in-principle distinguishable histories, which cannot interfere. We will talk about this experimental fact later in the quantum eraser experiments.
For simplicity let us consider only one photon going through the optical system at a time. It leaves the pump beam and arrives at the beam splitter. With the classical (Newtonian) approach there is a fifty percent chance of the particle going through and a fifty percent chance of the particle being reflected. So within this approach the photon goes either through the upper path or through the lower path. Then the particle arrives at the lens and is focused on a rotatable mirror. With the correct rotation of the mirror we can decide which measurement we make. For example, if the mirror is vertically positioned, we measure an interference pattern (which defies the Newtonian approach): the particle must have gone through both paths at once, otherwise it could not form the interference pattern. We say that the particle behaves as a wave here. Also, do not forget we had only one particle in the system, so in fact the particle hits the interference plate somewhere with a certain probability and the interference pattern arises only when many such experiments are done. To illustrate that, let us look at figure \ref{doubleslit} from the realization of the double slit experiment with electrons.
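The statistical character of the pattern is easy to reproduce numerically. The following short Python sketch (our illustration only; the slit separation, wavelength and screen distance are arbitrary values) samples single-photon hits from the two-slit probability density and shows that individual hits look random while their histogram builds up the fringes:
\begin{verbatim}
import numpy as np

# toy far-field two-slit intensity: I(x) ~ cos^2(pi*d*x/(lambda*L))
d, lam, L = 1e-4, 5e-7, 1.0            # slit separation, wavelength, screen distance
x = np.linspace(-0.02, 0.02, 2000)     # positions on the screen
p = np.cos(np.pi * d * x / (lam * L)) ** 2
p /= p.sum()                           # normalise to a probability distribution

rng = np.random.default_rng(0)
hits = rng.choice(x, size=5000, p=p)   # 5000 independent single-photon detections

counts, edges = np.histogram(hits, bins=80)
print(counts)                          # the fringes appear only in the statistics
\end{verbatim}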
\begin{figure}
\caption{Experimental setup of the double slit experiment and results of Akira Tonomura's realization using electrons as quantum particles. Detected particles are highlighted in red.}
\label{doubleslit}
\end{figure}
When the mirror is horizontally positioned, we do not measure the interference pattern but rather which path the photon took. So in this experiment we decide which measurement we want to do, either the interference measurement or the path measurement, by rotating the mirror. Here comes the delayed choice. We can rotate the mirror not only before the photon has been released, but even after the photon has passed through the beam-splitter and before it reaches the lens. So by rotating the mirror we decide whether the photon went through one path only (by choosing the path measurement) or through both paths at once (by choosing the interference measurement). But we did that after the particle's ``decision'' whether to behave like a particle or like a wave had already been made, since it had already passed through the beam-splitter.
Now we have surely caused a lot of confusion. It is because in the classical wave-particle duality interpretation one presumes it is the experimental setup which decides whether the particle is going to behave like a wave (interfere) or like a particle (go through one path only). But this delayed choice experiment shows that such a prediction must be false, or at least misleading to the intuition (how could a particle go through both paths at once and through only one path simultaneously?).
Now we reach the conclusion. It is not the measurement which causes the particle to behave like a wave or like a particle. It is only our interpretation that we give to the particle's behavior. In fact the particle always behaves like a wave (going through every possible history with a certain probability), but when we look at it, we only see a particle. We see one hit at the interference plate or one hit at the detector. In other words, particles always behave like waves when we are not looking, but when we look at them, we see only particles. The interference pattern arises only when more such experiments with more particles are done. Only the statistics of a large number of particles gives us the information about the wave-like behavior. Moreover, in this experiment there is no real qualitative difference between the which-path measurement and the interference measurement. Both types of detectors measure the position of the particles. The only difference is that in the which-path measurement we have two spatially separated detectors. Still, the saying ``the photon hit the detector D1 and thus it must have gone through the lower path'' is false. All we can say is that most of the probability of hitting detector D1 comes from the lower path, but the photon could have gone through the upper path too. Whether the particle hits detector D1 or detector D2 is not decided when the particle goes through the system. It is decided at the place and the time of the measurement. This also has multiple philosophical consequences which we will deal with in the following chapters, namely in the quantum eraser experiments and the relative realities approach.
\section{Quantum Eraser}
As said in the introduction, Quantum Eraser experiments are experiments where we have some interfering system, we cancel the interference in various ways, and then with certain actions on the system we may regain the interference. We will present three examples here. Other erasers including some kind of delayed choice will be examined in the next chapter. We will see that the first two examples are not very satisfying in some sense, especially because we lose some information through the process of regaining the interference, and one may wonder whether the recovery is not just a consequence of throwing out this information. As most articles \cite{ComplementarityHerzog, MultiparticleInterferometryGreenbergerZeilinger, KimDelayedChoiceQuantumEraser} use the experimental fact ``only indistinguishable histories can interfere'', we will use it here too, but soon we will move to a less vague and more theoretical explanation.
\subsection{Double-slit Eraser}\label{doublesliteraserSection}
The first example is a very simple one -- just a slightly modified double-slit experiment depicted in figure \ref{eraser1}.
\begin{figure}
\caption{Double-slit experiment; double-slit experiment where we marked the lower path using a half-wave plate; double-slit experiment with erasure of which-path information using a polarization filter.}
\label{eraser1}
\end{figure}
For simplicity suppose the pump beam produces linearly polarized light with polarization ``up''. The two paths -- going through the upper slit or the lower slit -- are now in principle indistinguishable and thus interfere.\footnote{Do not forget it works in the very same way when only one photon is present.} But if we use a quantum marker, a half-wave plate, to turn the polarization of the light by $90^{\circ}$, the two paths become in principle distinguishable and thus the interference pattern disappears. Nevertheless, when we erase the which-path information from the system by using a polarization filter, the interference reappears.
Still, this experiment is not very satisfying, because the polarization filter actually filters out half of the incident photons. One may ask whether the reappearance of the interference pattern is not just a simple consequence of this filtering. To correct this problem we may use a different instrument instead of the polarization filter: we can use another half-wave plate to turn the polarization back, or we can use two connected quarter-wave plates, which in fact must be differently oriented. The alternative erasures are shown in figure \ref{eraser1c}.
\begin{figure}
\caption{Alternative erasures. Additional half-wave plate on the upper path, lower path and two connected quarter-wave plates.}
\label{eraser1c}
\end{figure}
Why could we not find something similar to the polarization filter which acts in the same way on both paths? Before we answer that question, we have to explain ``distinguishable histories'' in a proper way. ``Distinguishable histories'' means that the which-path information about these histories is also encoded in some other states, which are orthogonal to each other but do not themselves contribute to the actual interference. For example, in the double-slit eraser experiment we just presented we have interference in position. The interference here is arranged by two interfering orthogonal states $\ket{\psi_{u}}_0$ and $\ket{\psi_{d}}_0$ describing the particle ``being in the upper slit'' (up) and ``being in the lower slit'' (down) respectively. These states evolve through some unitary and thus remain orthogonal, as we will see later. Still they interfere. Let us denote by $\ket{\psi_{u}}$ and $\ket{\psi_{d}}$ the evolved states and by $\ket{\psi}=\ket{\psi_{u}}+\ket{\psi_{d}}$ the wave function describing a particle going through the double slit at the time of hitting the interference plate. The intensity at point $\ket{x}$ on the interference plate is then proportional to \begin{equation*} \begin{split}
|\braket{x}{\psi}|^2&=|\braket{x}{\psi_u}+\braket{x}{\psi_d}|^2\\
&=|\braket{x}{\psi_u}|^2+\braket{\psi_u}{x}\braket{x}{\psi_d}+\braket{\psi_d}{x}\braket{x}{\psi_u}+|\braket{x}{\psi_d}|^2\\
&=|\braket{x}{\psi_u}|^2+2\mathrm{Re}(\braket{\psi_u}{x}\braket{x}{\psi_d})+|\braket{x}{\psi_d}|^2\ , \end{split} \end{equation*} where we see the interference term $2\mathrm{Re}(\braket{\psi_u}{x}\braket{x}{\psi_d})$.
However, in our case the wave function $\ket{\psi}$ does not describe everything about our photon. The photon also has polarization states, so the total wave function is \begin{equation} \ket{\tilde{\psi}}_1=\ket{\uparrow}\ket{\psi}\ , \end{equation} if we put nothing in front of the slits (first picture in figure \ref{eraser1}) and \begin{equation} \ket{\tilde{\psi}}_2=\ket{\uparrow}\ket{\psi_{u}}+\ket{\rightarrow}\ket{\psi_{d}}\ , \end{equation} if we put the half-wave plate in front of the lower slit (second picture in figure \ref{eraser1}). If we take the state $\ket{\tilde{\psi}}_1$ and calculate the intensity at point $\ket{x}$ on the interference plate we get the same result as in the previous case without polarization. However, the intensity at point $\ket{x}$ with $\ket{\tilde{\psi}}_2$ is proportional to \begin{equation*} \begin{split}
|\braket{x}{\tilde{\psi}}_2|^2&=|\ket{\uparrow}\braket{x}{\psi_u}+\ket{\rightarrow}\braket{x}{\psi_d}|^2\\ &=\left(\braket{\psi_d}{x}\bra{\rightarrow}+\braket{\psi_u}{x}\bra{\uparrow}\right)\left(\ket{\uparrow}\braket{x}{\psi_u}+\ket{\rightarrow}\braket{x}{\psi_d}\right)\\
&=1|\braket{x}{\psi_u}|^2+0+0+1|\braket{x}{\psi_d}|^2\\
&=|\braket{x}{\psi_u}|^2+|\braket{x}{\psi_d}|^2\ \end{split} \end{equation*} and the interference term disappeared.
Notice that although the interfering wave functions $\ket{\psi_{u}}$, $\ket{\psi_d}$ were orthogonal, it was not them which destroyed the interference pattern. It was the orthogonal states $\ket{\uparrow}$, $\ket{\rightarrow}$ which carried the information about the interfering states but did not directly take part in the interference itself (in the calculations there was nothing like $\braket{x}{\uparrow}$).
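To make the role of the marker states explicit, here is a minimal numerical sketch (not part of the original derivation): the spatial amplitudes are modelled as two displaced Gaussian wave packets, an illustrative assumption, and the cross term is weighted by the polarization overlap $\braket{\uparrow}{\rightarrow}$, which is zero.

\begin{verbatim}
import numpy as np

# Illustrative sketch: Gaussian envelopes stand in for psi_u(x), psi_d(x);
# these are assumptions for the demo, not the states used in the experiment.
x = np.linspace(-10, 10, 2000)
psi_u = np.exp(-(x - 1.5)**2) * np.exp(1j * 3 * x)   # "upper slit" amplitude
psi_d = np.exp(-(x + 1.5)**2) * np.exp(-1j * 3 * x)  # "lower slit" amplitude

# No marker: both paths carry the same polarization, the cross term survives.
I_no_marker = np.abs(psi_u + psi_d)**2

# Half-wave plate on the lower path: the cross term is weighted by the
# polarization overlap <up|right> = 0, so only the two single-path
# intensities remain (no fringes).
up, right = np.array([1.0, 0.0]), np.array([0.0, 1.0])
cross = 2 * np.real(np.conj(psi_u) * psi_d) * np.vdot(up, right).real
I_marker = np.abs(psi_u)**2 + np.abs(psi_d)**2 + cross

print(np.allclose(cross, 0.0))   # True: the interference term vanishes
\end{verbatim}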
In the third picture in figure \ref{eraser1} the interference reappears. It is because the polarization filter acts on $\ket{\tilde{\psi}}_2$ as the projector \begin{equation} P=\frac{1}{2}\left(\ket{\uparrow}+\ket{\rightarrow}\right)\left(\bra{\uparrow}+\bra{\rightarrow}\right)\ . \end{equation} As we said before, the problem is that it filters out half of the incident photons, which also shows up in the fact that the resulting state $P\ket{\tilde{\psi}}_2$ is not normalized anymore.
Again, the two paths in the second picture in figure \ref{eraser1} do not interfere because the polarization states $\ket{\uparrow}$ and $\ket{\rightarrow}$ are orthogonal to each other. It also means that we can have ``slightly distinguishable'', ``modestly distinguishable'' and ``very distinguishable'' histories, which always refers to the degree of overlap (non-orthogonality) of the marker states.\footnote{For example $\ket{0},\ket{1}$ are only slightly distinguishable when $\braket{0}{1}=0.9$, modestly distinguishable when $\braket{0}{1}=0.4$, etc.} We can also have slight, modest and total interference patterns as the degree of non-orthogonality rises. Now we see that the terms ``distinguishable histories'' or ``distinguishable processes'' are not very satisfactory. Also some experimental realizations show that there is a continuous transition between wave-like (interference) and particle-like (non-interference) behavior, for example \cite{KaiserDelayedChoiceExperimental}.
Now back to the earlier posed question. Why can there not be a single substitute for the polarization filter acting identically on both paths, so that we need two differently oriented quarter-wave plates glued together instead? We do not want to throw any information out of our system as the polarization filter did, and every such instrument must be expressed by a unitary operator, as that is the only kind which preserves all the information in the system.\footnote{We do not consider anti-unitary operators, which are typically
reserved for time reversal situations.} Let $\ket{0}$, $\ket{1}$ be some orthogonal states (standing for $\ket{\uparrow}$, $\ket{\rightarrow}$). We want to find one unitary operator such that after acting on both (as the polarization filter did) the system starts to interfere again. But \begin{equation} \bra{0}U^{\dagger}U\ket{1}=\braket{0}{1}=0\ . \end{equation} So the transformed states are orthogonal again and the system cannot interfere. Any instrument which preserves the information in a system (does not filter out particles) and is used to regain the interference must act differently on the different parts of the system.
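As a small consistency check of this argument, one may verify numerically that a single unitary applied to both marker states never changes their overlap; the sketch below draws a random $2\times 2$ unitary (an illustrative choice, not any particular optical element) and confirms the states stay orthogonal.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)            # a random 2x2 unitary (illustrative)

ket0 = np.array([1.0, 0.0])       # marker state |0> (e.g. |up>)
ket1 = np.array([0.0, 1.0])       # marker state |1> (e.g. |right>)

print(abs(np.vdot(ket0, ket1)))          # 0.0 before
print(abs(np.vdot(U @ ket0, U @ ket1)))  # 0.0 after: still orthogonal,
                                         # so the cross term stays zero
\end{verbatim}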
\subsection{BBO Eraser}\label{Herzogsection}
The second Quantum Eraser experiment we mention here, and use later in this thesis, is the experimental realization in \cite{ComplementarityHerzog}. The experimental scheme without erasure is in figure \ref{Herzog1}.
\begin{figure}
\caption{Herzog et al. experiment without erasure. Idler interferes.}
\label{Herzog1}
\end{figure}
Suppose the pump beam pumps one photon at a time. This initial photon goes to the Beta-Barium Borate (BBO) crystal and has some chance to produce a pair of entangled photons with correlated polarizations via parametric down-conversion. The production destroys the original photon. However, the original photon may also pass through the crystal, be reflected by a mirror and produce the pair on a second run. So at the beginning there is one high-energy photon and at the end there are two lower-energy photons. Nevertheless, in quantum mechanics every possible history is maintained, so the pair \textbf{is} produced in the first and the second run simultaneously and these histories interfere. In other words, the photons produced in the first run (light green) of the initial high-energy photon (red) interfere with the photons produced in the second run (dark green) by the same initial high-energy photon. We will come back to this interesting fact later in the last chapter \ref{sectionRelative Realities}, which is the main reason why we mention this particular experiment here.
The upper photon (the one which goes to detector D1) is usually called the idler and the lower one (going to detector D2) is called the signal. The signal photon will be used here as a reference frame from which the interference on the idler can be observed. I.e., according to the article, there is a $5\,\mathrm{ns}$ time window which tells us the idler and signal photons are from the same pair. We assume the lengths between the BBO crystal and all of the mirrors are initially the same. However, if we move the lowest mirror a little, it causes time interference on the idler. The idler photon can be detected before or after the signal photon is detected, or even not detected within the time window at all. As we move the lowest mirror, the number of detected idlers within the time window will rise and fall again, finally tracing out a sinusoid. This is the interference as we see it.
Nevertheless, when we put the quarter-wave plate at a proper angle in front of the lowest mirror, it will turn the polarization to the perpendicular one upon double passage (figure \ref{Herzog2}).
\begin{figure}
\caption{Herzog et al. experiment with marked path. Idler does not interfere.}
\label{Herzog2}
\end{figure}
The which-path information on the idler can be in principle retrieved so it cannot interfere.
At last, we can erase the which-path information again using the same method we used in the previous example. We put the polarization filter on the idler path (at $45°$) and regain the interference (figure \ref{Herzog3}).
\begin{figure}
\caption{Herzog et al. experiment, regained interference with the help of polarization filter.}
\label{Herzog3}
\end{figure}
Again, we can understand what happens without any which-path reasoning, using just a simple mathematical analysis. Now we write down all of the final states. Let $\ket{\psi}_{i,1}$, $\ket{\psi}_{i,2}$ be the wave functions of the idler photon produced upon the first and second passage through the BBO crystal, respectively. Similarly for the signal photon $\ket{\psi}_{s,1}$, $\ket{\psi}_{s,2}$. Since the signal photon is the reference one, we can presume $\ket{\psi}_{s,1}=\ket{\psi}_{s,2}\equiv\ket{\psi}_s$.
The final state from the first experimental setup is \begin{equation} \ket{\psi}_{i,1}\ket{\uparrow}_i\ket{\psi}_s\ket{\uparrow}_s+\ket{\psi}_{i,2}\ket{\uparrow}_i\ket{\psi}_s\ket{\uparrow}_s= (\ket{\psi}_{i,1}+\ket{\psi}_{i,2})\ket{\uparrow}_i\ket{\psi}_s\ket{\uparrow}_s\ . \end{equation} The joint wave function can be written as a product of the idler position wave function and the rest, and when we calculate probabilities in such a state, an interference term appears. The idler interferes.
We get the final state for the second experimental setup by a slight modification of the previous one: \begin{equation} \ket{\psi}_{i,1}\ket{\uparrow}_i\ket{\psi}_s\ket{\uparrow}_s+\ket{\psi}_{i,2}\ket{\rightarrow}_i\ket{\psi}_s\ket{\uparrow}_s\ . \end{equation} This cannot be written in product form; in other words, the interference terms disappear as they are equal to zero due to $\braket{\uparrow}{\rightarrow}=0$. The idler does not interfere.
We get the final state of the third setup from the previous one simply by applying the polarization filter projector \begin{equation} P=\frac{1}{2}(\ket{\uparrow}+\ket{\rightarrow})(\bra{\uparrow}+\bra{\rightarrow})_i\ . \end{equation} The final state is then proportional to \begin{equation} (\ket{\psi}_{i,1}+\ket{\psi}_{i,2})(\ket{\uparrow}+\ket{\rightarrow})_i\ket{\psi}_s\ket{\uparrow}_s \end{equation} and the interference pattern reappears.
Note that this eraser experiment suffers from the same deficiency as the previous one. By putting the polarization filter there we actually throw out half of the initial photons.
\section{Delayed Choice Quantum Eraser experiments, Free will test}
One may wonder, could delayed choice be used for predicting the future? Could we use such experiments to determine what a person is going to do? When the experimenter's decision affects the behavior of the particle, could we not just look at the particle's behavior and find out what the experimenter is going to do? Could a consequence precede its cause? Recent thought experiments which could possibly achieve that were proposed \cite{AharonovCanaFutureChoiceAffect,ZeilingerDelayedChoiceExperimentalSwapping}; nevertheless, no violation of causality has been discovered. It always seems there is some tiny little thing which forbids it, though it may be pretty hard to find. Here we will present our own thought experiment -- an attempt to beat causality -- and again we will show that in this particular experiment it is not possible. However, in the next chapter we present a proof, based on the Many-worlds interpretation, which tells us all attempts to violate causality and predict the future are doomed to failure.
\subsection{Free Will test} Our experiment was inspired by the delayed choice quantum eraser experiment from \cite{DemystifyingTheDelayedChoice}. The scheme is in figure \ref{FreeWill}.
\begin{figure}
\caption{Free Will experiment. The tested person decides whether to push the button or not. M-mirror, BS-beam-splitter.}
\label{FreeWill}
\end{figure}
First to the delayed choice quantum eraser itself. The photon is produced and goes through the double slit, implemented here with a beam splitter. We already know from the previous section \ref{sectiondelayedchoice} that both of these histories are real; in other words, the photon actually goes through both slits simultaneously. Even more interesting, although not so surprising, is the fact that this superposition is hereditary. After the initial photon hits the BBO crystal, it produces a pair of entangled photons of which one goes up and one goes down. The initial photon went through both of the slits, so the production of the pair happened in two places too. Thus the produced pair is in the Bell (maximally entangled) state \begin{equation}\label{freewill1} \ket{UP1}\ket{UP2}+\ket{DOWN1}\ket{DOWN2}\ . \end{equation} We presume all of the states for each particle have the same energy, so the time evolution is just an overall phase factor we can suppress (see Appendix \ref{timeEvolution}). The UP state refers to the higher place of the pair production, DOWN to the lower; 1 refers to the first photon of the pair, 2 to the second.
The idea of the free will experiment is the following. If the experimenter decides to measure the which-path information of the second photon (he pushes the red button which moves the detectors D3 and D4 into the way of the approaching photon), the place of the creation of the pair will thereby be determined. For example, if he detects the photon at detector D4, the pair must have been created at the higher place (UP) and the first photon cannot interfere, since there is only one possible history. Put mathematically, detection of the second photon is equivalent to projecting the state (\ref{freewill1}): \begin{multline} \ket{UP2}\bra{UP2}\left(\ket{UP1}\ket{UP2}+\ket{DOWN1}\ket{DOWN2}\right)\\ =\ket{UP1}\ket{UP2}\ . \end{multline}
However, if the experimenter decides not to push the button the which-path information is erased and the first photon can interfere.
The interesting fact is that we can make the path of the second photon much longer than the path of the first, so we make sure the decision of the experimenter -- our tested person -- may come after the first photon has hit the detector. There is the delayed choice.\footnote{In our experiment we assume the energies of all photons are the same, so the evolution yields only a physically irrelevant overall phase factor.}
So could we in principle look at the interference plate and see what the tested person is going to do? When we see the interference pattern, the tested person will decide not to push the button; when we do not see it, he will decide to push it. So we could in principle determine what the tested person is going to do before he actually does it.
The problem is, it does not work, because we will never see the interference pattern on the interference plate. Let us first analyze the case where the person decides to push the button.
In 50 percent of cases it is detector D3 which clicks. In this case we know the pair was created at the lower place and the first photon also hits the lower part of the interference plate. In the other 50 percent of cases D4 detects the particle. The resulting intensities in these cases and their sum are depicted in figure \ref{freewillint1}.
\begin{figure}
\caption{Intensity patterns of the first photon when detector D3 clicks, D4 clicks and added together.}
\label{freewillint1}
\end{figure}
If the tested person decides not to push the button, the which-path information is erased and the resulting state is \begin{equation} \ket{D1}(\ket{UP1}+\ket{DOWN1})+\ket{D2}(\ket{UP1}-\ket{DOWN1})\ , \end{equation} where $\ket{D1}$ and $\ket{D2}$ are the states referring to the detection of the second photon at detector D1, D2 respectively. So in fact the first photon interferes no matter which of the detectors D1 and D2 clicks. But if detector D1 clicks, the first photon interferes differently compared to the case when D2 receives the particle. In fact, the interference patterns are complementary and, added together, they form the very same shape as in the non-interfering case (figure \ref{freewillint2}).
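The cancellation of the two conditional patterns can be checked directly; the sketch below, with Gaussian envelopes assumed purely for illustration, verifies that the D1-conditioned fringes and the D2-conditioned anti-fringes add up to the fringe-free sum of single-path intensities.

\begin{verbatim}
import numpy as np

x = np.linspace(-10, 10, 2000)
up1 = np.exp(-(x - 1.0)**2) * np.exp(1j * 4 * x)     # assumed envelope for UP1
down1 = np.exp(-(x + 1.0)**2) * np.exp(-1j * 4 * x)  # assumed envelope for DOWN1

I_d1 = np.abs(up1 + down1)**2      # coincidences with D1: fringes
I_d2 = np.abs(up1 - down1)**2      # coincidences with D2: shifted fringes

no_fringe = 2 * (np.abs(up1)**2 + np.abs(down1)**2)
print(np.allclose(I_d1 + I_d2, no_fringe))  # True: summed, the fringes cancel
\end{verbatim}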
\begin{figure}
\caption{Intensity patterns of the first photon when detector D1 clicks, D2 clicks and added together they form the very same pattern as in the previous case.}
\label{freewillint2}
\end{figure}
Now let us focus a little on the conception of Delayed Choice in this experiment. Are we saying that the decision in the future (to push or not to push the button?) affected the result of the experiment in the past (the first photon hit in a way that forms either a non-interference pattern or an interference pattern)? No, this is very wrong. First of all, we can say nothing about the decision of the tested person just from viewing the result on the first photon. We could only say something once we already have the results measured by the second person. After we receive these results, we can retrospectively filter our measurements on the first photon and the pattern appears. For example, if the tested person just gives us a list of his results in the form of zeros and ones, but does not tell us whether he left the button alone or kept it pushed, we could tell what he did just by filtering half of our results: we throw out all of the results from pairs where he received zero. Then, if we see the interference pattern, we can be sure he did not push the button; if we see just one peak, we can be sure he kept the button pushed. But this can happen only after we received his results, and as we know, he cannot give us his results faster than the speed of light. So no prediction of the future is happening, because to predict what will happen we would first need to know something from that future.
Another reason why the usual interpretation of Delayed Choice experiments -- as experiments where future results affect the past -- fails is that the situation is totally symmetrical. For instance, suppose the tested person decided not to push the button. We say that if detector D1 clicked (in the future) then we have quite a big chance that the first photon hit somewhere (in the past) where the intensity depicted in the leftmost picture of figure \ref{freewillint2} is the highest. But we can also turn this around so that we do not get any strange interpretation. We can say that if the first photon hit somewhere where the intensity of the leftmost picture is the highest, we have quite a big chance that the second photon will be detected at D1. And this statement is not paradoxical at all.
Entanglement is symmetric and somehow out of time. It does not matter which particle from the pair we measure first. It is just our interpretation that changes, not the real measurable quantities. The entangled nature comes out only when both experimenters controlling parts of the entangled pair come together and compare their results. From the measurement of one particle of the pair we cannot say what someone decides to do to the second. We can only predict what this person will measure, and only if we know in advance what he is going to do.
The entangled nature is somehow different from the everyday world. As the EPR paradox and the violation of Bell's inequalities \cite{PalssonViolatingBellExperimental} show, there must be some hidden connection within the pair, seemingly faster than light, but we cannot use this connection for any causality violation nor for predicting the future. We will come back to this in the next chapter and generalize it from the one example shown here to any entangled system, using the Many-worlds interpretation and the presumption of unitary evolution of a closed system \ref{sectionNoCommunication}.
In the next section we will present the last delayed choice quantum eraser we will deal with here. It is another, and perhaps even better, example demonstrating the ideas gained here.
\subsection{Delayed Choice entanglement swapping}
The basic idea of delayed choice entanglement swapping is to swap the entanglement \emph{after} one of the particles of an entangled pair has already been measured and thus the entanglement may no\footnote{in the classical point of view where the measurement leads to the instantaneous collapse of the wave function} longer exist. The original thought experiment was introduced by Peres \cite{PeresDelayedChoiceEntanglementSwapping}, but we will mainly utilize the experimental realization presented in \cite{ZeilingerDelayedChoiceExperimentalSwapping}.
The basic scheme is in figure \ref{entanglementswapping}.
\begin{figure}
\caption{Delayed choice entanglement swapping.}
\label{entanglementswapping}
\end{figure}
In this experiment we produce two pairs of entangled photons; Alice measures the polarization of the first photon and Bob of the fourth. The second and third photons (one from each pair) go to Victor, and Victor decides whether to subject them to an entangled-state measurement (projecting them onto one of the Bell states) or a separable-state measurement (projecting them onto a tensor product of polarizations of the second and third). Note that this is the direct analog of the decision of our tested person in the Free will experiment.
Now we present the confusing explanation of what is happening, to clarify it later. If Victor decides to do the entangled-state measurement, the first and fourth photons will become entangled, even though they have never met. And according to the experimental setup, Victor can entangle them even after the polarizations of the first and the fourth have actually been measured!
To clarify, let us follow the same reasoning we used in the Free Will experiment. First of all, if Alice and Bob were able to say whether their photons are entangled or not, they would be able to predict the (future) decision of Victor. From our former experience we assume that this is not possible. Indeed, the analysis of this experiment shows that they cannot predict what Victor will do. The analysis is quite straightforward but very long, so we will not deal with it here. Actually, to be able to say the first and fourth photons are entangled we need to have Victor's measurement results first. After we have them, we can sort Alice's and Bob's measurements into two groups -- the group where Victor measured zeros and the group where Victor measured ones. After we have these two groups we can say whether the first and fourth photons are entangled or not. Without much surprise we discover that in the case when Victor decided to do the entangled-state measurement the first and fourth photons are correlated (and thus must have been entangled), and when Victor decided to do the separable-state measurement there is no correlation at all.
So did we discover that a future decision affected the entanglement of the already destroyed (measured) states? Sort of. Still, such a statement is very misleading, since there is again no causality issue here (we need Victor's results first to say whether or not they are entangled) and again, as in the Free Will experiment, the influence is symmetrical. We can also say that the results of the experiments Alice and Bob did on their first and fourth photons affected the result Victor received. And this is indeed true. In the confusing explanation we say Victor's decision affected the entanglement of Alice's and Bob's photons, but to prove that we need to have Victor's measurement results first. But these results were affected by the results of Alice's and Bob's measurements! It is the same as saying the consequence affected the cause. In fact, it does not matter what we call the cause and what the consequence, since there is no causal relation here at all. Only our interpretation is sometimes confusing, because we are trying, without much success, to interchange correlation and causal influence, which in essence must not be confused.
\chapter{General view on Delayed Choice and Quantum eraser experiments}
In the previous chapter we presented many Delayed Choice experiments and we showed that in these particular experiments causality is not violated. We cannot use them to predict the future, we cannot use them to send information back in time, and we cannot use them to determine whether the tested person has free will or not. But is it a general rule that entanglement cannot be used for such wonderful purposes? Yes, it is -- at least when you accept the presumption of unitary evolution of a closed system and the concept that measurement is just an interaction between the quantum system and the observer, who is a quantum system himself.
\section{Why does predicting the future not work? Answer: the No-communication Theorem}\label{sectionNoCommunication}
There were two kinds of Delayed Choice experiments. The first type, such as Wheeler's delayed choice, involves only one particle and as such cannot be used to predict the future in any manner. However, the second type, using entangled particles, could in principle be used for prediction. We already presented two such experiments -- the Free will experiment and Delayed Choice Entanglement Swapping -- and in both of them we showed that we cannot beat causality. We will now generalize this to any system using entanglement. To do that, let us present the generalized scheme in figure \ref{generalizedscheme}.
\begin{figure}
\caption{Generalized Scheme of Delayed Choice experiments used for predicting the future using entanglement.}
\label{generalizedscheme}
\end{figure}
At the beginning there is some process producing an entangled system of two or more particles. It does not matter how the two systems $A$ and $B$ became entangled, nor what kind of entanglement we deal with here. We suppose the joint state of systems $A$ and $B$ is pure and is described by a wave function $\ket{\psi}$. If Alice could observe the wave function itself, she would be able to notice its instantaneous collapse when Bob measures his part, even if the measurement were done in the future. That itself is an interesting fact, so we will highlight it here.
\textbf{Direct observing of a wave function would lead to predicting the future.}
Nevertheless, she is not able to see it. All she can see are some hits or other types of measurement results, and these results are somewhat random, although following some pattern. We will assume that at the time of the measurement the systems $A$ and $B$ are no longer interacting. If this were not true, some information from Alice's system could reach Bob's and affect his decision, and we want to let him decide on his own. Let us derive whether Alice will be able to determine from her results what Bob is going to do.
The whole system is in a pure state $\ket{\psi}$, but Alice has access only to a part of it, the system $A$. The whole system is thus described by the density operator \begin{equation} \rho=\ket{\psi}\bra{\psi}\ , \end{equation} and her system before the measurement is described by this density operator traced over the system $B$, \begin{equation} \rho_A=\mathrm{Tr}_B(\rho)\ . \end{equation} All probabilities of her results are governed by this density operator, so the question is: Will she be able to notice the change when Bob chooses to push the red button? In other words, will her density operator change?
Let us first suppose Bob decides whether to apply some unitary operator $U$ to his part $B$ of the system or not to apply anything. The unitary operator will act on the whole system as a tensor product $\tilde{U}=\mathbf{1}\otimes U$. Alice's density operator after his decision to apply it is
\begin{equation}\label{doesnotchange} \rho_{A\text{ after application}}=\mathrm{Tr}_B(\tilde{U}\rho\tilde{U}^{\dagger})=\mathrm{Tr}_B(\rho\tilde{U}^{\dagger}\tilde{U}) =\mathrm{Tr}_B(\rho) \end{equation}
We owe the second equality to the special form of $\tilde{U}$ as a tensor product, since in general a partial trace is not invariant under cyclic permutations. We see that in this case Alice's density operator describing her system $A$ does not change, so she observes the very same results as if Bob had decided to do nothing. She cannot predict anything.
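Equation (\ref{doesnotchange}) can also be checked numerically. The following sketch (with illustrative dimensions and a random state, chosen purely for the demonstration) builds $\tilde{U}=\mathbf{1}\otimes U$ for a random unitary $U$ on Bob's side and confirms that Alice's reduced density operator does not change.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 2, 3                                  # illustrative dimensions

psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)                     # a random pure state of A+B
rho = np.outer(psi, psi.conj())

B = rng.normal(size=(dB, dB)) + 1j * rng.normal(size=(dB, dB))
U, _ = np.linalg.qr(B)                         # random unitary on Bob's side
U_tilde = np.kron(np.eye(dA), U)               # acts as identity on Alice's side

def partial_trace_B(r):
    # Trace out subsystem B from a (dA*dB) x (dA*dB) density matrix.
    return r.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)

before = partial_trace_B(rho)
after = partial_trace_B(U_tilde @ rho @ U_tilde.conj().T)
print(np.allclose(before, after))   # True: Alice cannot notice Bob's action
\end{verbatim}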
Maybe you noticed we silently omitted the unitary evolution of the system itself. Let us set that right now. We assumed our system is no longer interacting; in other words, it evolves through a unitary which is the tensor product of unitary operators acting on system $A$ and system $B$. If we take the time evolution into consideration, the derivation of (\ref{doesnotchange}) will be the very same, only a bit longer because we would need to keep track of the time.
Now to the generalized operation. Let us suppose Bob is going to do something non-unitary, i.e., a measurement. He decides whether to do one type of measurement or the other. In the Free will experiment it was the decision whether to push the button and measure the which-path information or not push anything and erase this information. In Delayed Choice Entanglement Swapping it was Victor's decision whether to do the separable-state or the entangled-state measurement (and the role of Alice in our generalized scheme is played by Alice and Bob together). Now we use the basic presumption of the Many-worlds interpretation: measurement is just an interaction between the observer and the system, from the point of view of the observer. It seems non-unitary to the observer because the observer is involved in it. But from the outside, the interaction between the observer and his measured system is just a unitary evolution of the joint system observer $+$ observed system. So here Bob and system $B$ together act as a closed system (not interacting with other systems, in particular with system $A$) and from the point of view of Alice the joint system evolves through a unitary. We can just expand the system $B$ to a new one, $\tilde{B}=B+\mathrm{Bob}$, and derive the very same equality for the density operator describing the system $A$. Again, Alice is not able to notice any change and thus cannot determine what Bob is going to do.
The derived theorem is called the no-communication theorem and is usually proven using a set of Kraus operators\footnote{$\{K_1,\dots,K_m\}$ such that $K_1^{\dagger}K_1+K_2^{\dagger}K_2+\cdots+K_m^{\dagger}K_m=\mathbf{1}$} which describe the most general measurement one is able to do. The advantage of our proof is that we do not have to presume any form of the most general measurement; instead we take the much simpler presumption of unitary evolution of a closed system and the presumption that people, or any other observers, behave in the very same way as everything else, since they are made of the same atoms and there is no reason why they should behave differently.
The name of the theorem fits very well. It says that no information can be passed through entanglement alone. In other words, two parties cannot communicate using entangled systems alone.
This theorem settles the causality questions in our Free Will experiment and in many articles too, namely \cite{ZeilingerDelayedChoiceExperimentalSwapping, KaiserDelayedChoiceExperimental, AharonovCanaFutureChoiceAffect, PeruzzoDelayedChoiceExperimental}. In all of these experiments it was shown that there is no causal link. Now we know that if we design a new experiment on similar principles, it will behave in the same way. There will be no causal link between two entangled particles and one cannot use entanglement alone to predict what the other person is going to do.
Notice one interesting thing. We presumed the unitary evolution of Bob is entirely deterministic; in other words, his unitary evolution together with the system $B$ guaranteed that Alice was not able to determine his decision -- she cannot predict what he is going to do, so his decision is entirely non-deterministic for her. It seems the deterministic evolution of Bob gives him a chance to have free will. Still, strictly speaking, if there was even a minimal chance he decides either way, both of his decisions are made with a certain probability and both are real, as in quantum mechanics every possibility exists simultaneously, as argued in the first section on Wheeler's delayed choice experiment. Maybe free will is just the ability to choose the outcome of your measurement.
Let us summarize this whole section in one last sentence.
\textbf{Entanglement alone in Delayed Choice experiments cannot be used for prediction of the future, all such experiments trying to do that are doomed to failure.}
\section{Complementarity of one-particle interference and correlations}\label{sectionComplementarityofinterferenceandcorrelations}
Here we will show, using the same presumption as in the previous section, the often used experimental fact that one-particle interference and two-particle interference (correlations between two particles) are complementary; in other words, we cannot observe the interference pattern of one particle while it is correlated with some other. More precisely, we can observe it, but a better visible interference pattern leads to worse visible correlations and vice versa. A perfect interference pattern of one particle leads to no correlation with any other.\footnote{In the observable we measure. For example, if we measure the polarization of a photon, it does not mean its energy is not entangled and thus correlated with some other particles.}
First we demonstrate the basic ideas on a well known experiment -- the double-slit experiment. We use just a slight modification of this experiment, as we also did in the Wheeler's delayed choice. The scheme is in figure \ref{entanglementdoubleslit}. It is a well known fact that even a non-destructive measurement of the which-path information prevents the system from interfering. Here we show that the complementarity of one-particle and two-particle interference and the non-destructive measurement destroying the interference are basically the same thing.
\begin{figure}
\caption{Double-slit experiment with interfering paths.}
\label{entanglementdoubleslit}
\end{figure}
In the figure we see the double-slit experiment with beam-splitters instead of slits; still, the idea is identical. Let us denote by $\ket{\psi_u(0)}$ the upper path of the photon and by $\ket{\psi_d(0)}$ the lower path. We also have some non-destructive measurement device there which could measure which way the photon goes.\footnote{Since photons interact only weakly one can, for the sake of argument, assume electrons instead. Our reasoning will not be influenced by this assumption.} If we do not put it in the photon's way, the two waves evolved from $\ket{\psi_u(0)}$ and $\ket{\psi_d(0)}$ finally overlap and interfere on the interference plate. So we have one-particle interference. Let us denote the evolved states by $\ket{\psi_u}$, $\ket{\psi_d}$.\footnote{Now we will apply the very same derivation we used in section \ref{doublesliteraserSection} of the first chapter when we described distinguishable histories. The difference now is that instead of the polarization states of our interfering particle we use some additional particle/particles we call the measurement device, and we will write $\ket{0}$ instead of $\ket{\uparrow}$ and $\ket{1}$ instead of $\ket{\rightarrow}$. So the difference is in the type of system we describe and in the goal; the math remains the same, at least in the beginning.} The final state of the particle is then the simple sum $\ket{\psi}=\ket{\psi_u}+\ket{\psi_d}$. The probability of measuring the particle at position $x$ on the interference plate is then proportional to\footnote{We say proportional because we did not normalize our states. In our considerations we do not need to work with normalized states and so, for the sake of simplicity, we refrain from using the normalization.} \begin{equation*} \begin{split}
|\braket{x}{\psi}|^2&=|\braket{x}{\psi_u}+\braket{x}{\psi_d}|^2\\
&=|\braket{x}{\psi_u}|^2+\braket{\psi_u}{x}\braket{x}{\psi_d}+\braket{\psi_d}{x}\braket{x}{\psi_u}+|\braket{x}{\psi_d}|^2\\
&=|\braket{x}{\psi_u}|^2+2\mathrm{Re}(\braket{\psi_u}{x}\braket{x}{\psi_d})+|\braket{x}{\psi_d}|^2\ , \end{split} \end{equation*} where we can see the interference component $2\mathrm{Re}(\braket{\psi_u}{x}\braket{x}{\psi_d})$.
However, if we put the detector in the way of the photon, the interference disappears. The interesting fact is that our detector need not consist of millions of particles; one or a few particles totally suffice. In the beginning of this thesis we assumed that, since everything is made of the same type of particles, we can treat a detector consisting of millions of particles in the same way as a single particle. This approach will be further justified in the next chapter where we present the Wigner's friend approach to measurement. Suppose that the detector can be in two orthonormal states: $\ket{0}$, which is its initial state and in which it remains if it detected nothing, and $\ket{1}$, if it detected a particle.
These are the basis states we measure when we look at our detector, and it is unimportant whether it is only a few particles or a macroscopic system.
So the whole system after the measurement (the unitary interaction \ref{unitary1} between the particle and the measurement device, which itself can be a single particle) is in the joint state \begin{equation} \ket{\psi}=\ket{0}\ket{\psi_u}+\ket{1}\ket{\psi_d}\ . \end{equation} For now, for simplicity, we suppose the evolution of the measuring device after the interaction (or of the measuring particle, if you like) is trivial, i.e. it consists only of an overall phase factor we can suppress. We will come to that later.
Now our particle cannot interfere, since the probability is now proportional to \begin{equation*} \begin{split}
|\braket{x}{\psi}|^2&=|\ket{0}\braket{x}{\psi_u}+\ket{1}\braket{x}{\psi_d}|^2\\ &=(\braket{\psi_d}{x}\bra{1}+\braket{\psi_u}{x}\bra{0})(\ket{0}\braket{x}{\psi_u}+\ket{1}\braket{x}{\psi_d})\\
&=1|\braket{x}{\psi_u}|^2+0+0+1|\braket{x}{\psi_d}|^2\\
&=|\braket{x}{\psi_u}|^2+|\braket{x}{\psi_d}|^2\ , \end{split} \end{equation*} and the interference term disappeared. So we do not have one-particle interference in this case, but we have correlations -- two-particle interference, in other words. We may not be able to distinguish between the states $\ket{\psi_u}$ and $\ket{\psi_d}$ perfectly; for example, in this double-slit experiment the probability amplitudes may overlap as in figure \ref{freewillint1}. Still, for a position $\ket{x}$ sufficiently high on the interference plate we have \begin{equation}
P(0|x)=P(x|0)\approx 1\ . \end{equation}
So the complementarity of one-particle and two-particle interference is equivalent to the complementarity between interaction and non-interaction of the two particles\footnote{In our case it was one photon/electron and a measurement device consisting of one or more particles.}, or between common and non-common origin\footnote{As the entanglement of two photons produced via parametric down-conversion in a BBO crystal.}.
We must note one important fact. If we choose to measure our measurement device in some different basis, for example \begin{equation} \ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1}),\ \ket{-}=\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})\ , \end{equation} the correlations will change as well. For example, $\ket{+}$ will now be correlated with $(\ket{\psi_u}+\ket{\psi_d})$, which forms one of the sinusoids depicted in figure \ref{freewillint2}. The correlations would also appear differently if the evolution of the detector were not trivial. In which form the correlations appear depends on both the unitary evolution and the basis in which we measure, but the fact remains that no unitary evolution acting on the measurement device alone (of the form $U\otimes\mathbf{1}$ on the whole system) can restore the interference, since unitary evolution preserves orthogonality. You may also ask: ``Could some not necessarily unitary action on the measuring device alone restore the interference of our formerly interfering particle?'' No, because this non-unitary action still must be unitary on some bigger system. This reasoning led us to the No-communication theorem, which by itself answers the posed question. Here we just see another explicit application of it.
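The basis dependence of the correlations can be made concrete with the following sketch (Gaussian envelopes assumed purely for illustration): projecting the detector onto $\ket{+}$ or $\ket{-}$ leaves the particle in $\ket{\psi_u}\pm\ket{\psi_d}$, i.e. in one of the two complementary fringe patterns, while projecting onto $\ket{0}$ or $\ket{1}$ leaves a single path only.

\begin{verbatim}
import numpy as np

x = np.linspace(-10, 10, 2000)
psi_u = np.exp(-(x - 1.0)**2) * np.exp(1j * 4 * x)   # illustrative envelopes
psi_d = np.exp(-(x + 1.0)**2) * np.exp(-1j * 4 * x)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# Joint amplitude of |0>|psi_u> + |1>|psi_d>, indexed [detector, position].
joint = np.outer(ket0, psi_u) + np.outer(ket1, psi_d)

cond_plus = plus.conj() @ joint     # particle amplitude given outcome |+>
cond_minus = minus.conj() @ joint   # particle amplitude given outcome |->

print(np.allclose(cond_plus, (psi_u + psi_d) / np.sqrt(2)))   # True: fringes
print(np.allclose(cond_minus, (psi_u - psi_d) / np.sqrt(2)))  # True: anti-fringes
\end{verbatim}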
Now we will show on an example that we can continuously move between fully correlation-like behavior and fully interference-like behavior, and also that the exact balance between interference and correlation depends on the interaction between the two particles. In other words, we will show that a better interference pattern leads to worse correlations and vice versa.
Let us suppose the initial state (before the interaction) \begin{equation} \ket{i}=N_i(\ket{0}+\ket{1})(\ket{\psi_1}+\ket{\psi_2}) \end{equation} goes onto the final state (after the interaction) \begin{equation} \ket{f}=N_f(\ket{0}\ket{\psi_1}+\ket{1}\ket{\psi_2}) \end{equation} via some unitary transformation, where $N_i,\ N_f$ are normalization constants, \begin{equation} \ket{\psi_1}=\ket{L}\ , \end{equation} \begin{equation} \ket{\psi_2}=\frac{3}{5}\ket{L}+\frac{4}{5}\ket{R}\ , \end{equation} and $\ket{L},\ \ket{R}$ are orthonormal vectors. In the previous case $\ket{0},\ket{1}$ were two states of the detector, but now, strictly speaking, we cannot call this a detector anymore, since a detector should project onto orthogonal states, and $\ket{\psi_1}$, $\ket{\psi_2}$ are no longer orthogonal here. Still, a unitary transformation between the initial and final state exists, and one of the many possible ones is written in appendix \ref{unitary2}. After normalization we get \begin{equation} \ket{i}=\frac{1}{\sqrt{10}}\left(\ket{0}+\ket{1}\right)\left(2\ket{L}+\ket{R}\right) \end{equation} and \begin{equation} \ket{f}=\frac{1}{\sqrt{2}}\ket{0}\ket{L}+\frac{3}{5\sqrt{2}}\ket{1}\ket{L}+\frac{2\sqrt{2}}{5}\ket{1}\ket{R}\ . \end{equation}
From the look of the final state we are sure that we should see some correlations. The exact amount of entanglement could be calculated using, for example, the Von Neumann entropy of the reduced state \cite{NielsenChuang}, but we will not do it here. We will rather look at the change of the visibility of the interference pattern. Let us suppose we measure an observable $A$ on the second system. The probability of obtaining a value $a$ in the initial state is proportional to \begin{equation*} \begin{split}
|\braket{a}{i}|^2&\propto|2\braket{a}{L}+\braket{a}{R}|^2\\
&=4|\braket{a}{L}|^2+2\braket{a}{L}\braket{R}{a}+2\braket{L}{a}\braket{a}{R}+|\braket{a}{R}|^2\\
&=4|\braket{a}{L}|^2+4\mathrm{Re}(\braket{L}{a}\braket{a}{R})+|\braket{a}{R}|^2\\
&\propto\frac{4}{9}|\braket{a}{L}|^2+\frac{4}{9}\mathrm{Re}(\braket{L}{a}\braket{a}{R})+\frac{1}{9}|\braket{a}{R}|^2\ . \end{split} \end{equation*}
The probability of obtaining the same value in the final state is proportional to \begin{equation*} \begin{split}
|\braket{a}{f}|^2&\propto|5\ket{0}\braket{a}{L}+3\ket{1}\braket{a}{L}+4\ket{1}\braket{a}{R}|^2\\ &=(4\braket{R}{a}\bra{1}+3\braket{L}{a}\bra{1}+5\braket{L}{a}\bra{0})\cdot\\ &~\cdot(5\ket{0}\braket{a}{L}+3\ket{1}\braket{a}{L}+4\ket{1}\braket{a}{R})\\
&=34|\braket{a}{L}|^2+24\mathrm{Re}(\braket{L}{a}\braket{a}{R})+16|\braket{a}{R}|^2\\
&\propto\frac{17}{37}|\braket{a}{L}|^2+\frac{12}{37}\mathrm{Re}(\braket{L}{a}\braket{a}{R})+\frac{8}{37}|\braket{a}{R}|^2\ . \end{split} \end{equation*}
Now we see that we were able to find a state which interferes while the results of its measurements are at the same time correlated with another system. The initial state was not correlated at all. However, to impose some correlation between the two systems we had to pay with a less visible interference pattern on the second one. Our trade-off is \begin{equation} \frac{12}{37}\approx 0.32<0.44\approx\frac{4}{9}\ . \end{equation}
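The two numbers can be reproduced with a short numerical sketch. The code below is an independent check (not part of the derivation above), using the same convention as the text: the weight of the interference term divided by the sum of the three coefficients. It returns $4/9$ for $\ket{i}$ and $12/37$ for $\ket{f}$.

\begin{verbatim}
import numpy as np

L, R = np.array([1.0, 0.0]), np.array([0.0, 1.0])
k0, k1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

psi1 = L                               # |psi_1> = |L>
psi2 = 0.6 * L + 0.8 * R               # |psi_2> = 3/5 |L> + 4/5 |R>

i_state = np.kron(k0 + k1, psi1 + psi2)
i_state /= np.linalg.norm(i_state)     # normalized initial (product) state

f_state = np.kron(k0, psi1) + np.kron(k1, psi2)
f_state /= np.linalg.norm(f_state)     # normalized final (entangled) state

def fringe_weight(state):
    # Reduced density matrix of the second system, then the relative weight
    # of the Re(<L|a><a|R>) term, normalized as in the text.
    rho2 = np.outer(state, state.conj()).reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    c_LL, c_RR, c_int = rho2[0, 0].real, rho2[1, 1].real, 2 * rho2[0, 1].real
    return c_int / (c_LL + c_int + c_RR)

print(fringe_weight(i_state))   # 0.444... = 4/9
print(fringe_weight(f_state))   # 0.324... = 12/37
\end{verbatim}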
It would be interesting to find a precise relationship between the amount of correlation and the amount of interference. This is a task for the future.
At the end of this section let us note that in the beginning we used the very same method the Decoherence Theory uses to explain the measurement \cite{SchlosshauerDecoherenceMeasurementInterpretations}. The Decoherence Theory says measurement is a random unitary interaction of our system (particle) with an environment consisting of millions of particles. To properly describe the system we should work with a huge wave function including our system and the millions of particles, but because we are interested only in our system, we trace over the environment and acquire the reduced density operator, generally a mixed state. Due to the vast number of particles in the environment our particle ends up in one of the orthogonal states -- one of the eigenvectors of the reduced density operator. The difference is that our environment, i.e. the non-destructive measurement device, did not necessarily consist of a vast number of particles; one or a few particles were sufficient. That was also the reason why we were able to describe something as a correlation. Loosely speaking, the information about the correlation between the particle and an environment consisting of millions of particles is lost in the numbers. However, even though Decoherence Theory describes why the particle passes onto one of the orthogonal states after the measurement, it still does not explain why it passes onto one particular state.
\chapter{Relative Realities}
In this chapter we take the knowledge gained from studying delayed choice and mainly quantum eraser experiments to introduce a new way of looking at Quantum Mechanics. There are many theories, or rather meta-theories, describing what is really happening, each with its own view of the measurement problem. Namely the Decoherence theory, which explains the measurement as an act of a particle randomly interacting with huge amounts of other particles; from this concept one is able to derive the Born projection rule, but it does not explain the measurement itself and its probabilistic character. On the other hand the well known Many-worlds interpretation goes further (although it is older) and says an act of measurement is an act where one world splits in two, each of the new worlds yielding a different outcome of the measurement. The World splits infinitely into new worlds, in which every possibility is realized. Yet another interpretation, the Consistent Histories approach, rather uses propositions as the fundamental notion in Quantum Mechanics, where some propositions together do make sense (the histories are consistent) and some do not.
There are many theories, but the goal of this chapter is not to contradict any of them, rather to broaden their perspective. We will be closest to the Many-worlds interpretation; still, we will show that this interpretation leads to some unsolved questions, mainly due to the lack of a precise explanation of what the measurement is. The Many-worlds interpretation also says the split worlds are independent, cannot communicate with each other, and once the splitting is done there is no turning back. We will disprove the last statement and rather propose another approach. We will show that the turning back does not contradict anything and does not lead to any kind of logical paradox, and we will also have experimental support for this hypothesis. The tool of this turning back will be nothing else than the well-understood Quantum Eraser experiments, only applied to bigger systems, which we assume must follow the same laws as the tiny quantum world.
Since antiquity people have been dealing with the issue of relativity. Maybe one of the first discoveries was the relativity of distances: one thing may be closer for one person and farther for another. There is no doubt already cavemen used this idea. Still, it took centuries before people discovered that movement is also relative. We should not say ``The car is moving at 100 km per hour.'', because this sentence does not make sense. What we should say is ``The car is moving at 100 km per hour with respect to the ground.'' When we talk about cars we usually mean it in the latter sense, but we could also think about the car moving with respect to the Sun at $107{,}300$ km per hour. The proper understanding of this knowledge came after Newton in the seventeenth century. Two hundred years later Einstein discovered that not only movement is relative, but also lengths and time, and some people could possibly use this knowledge to outlive their great-great-grandchildren. For the same reason we also know wave functions must be relative for different observers: for a moving observer the wave function of some wave packet is squeezed, and for observers who possess some additional information the wave function is the conditional one -- for example, knowing the outcome of a measurement on the entangled pair allows us to describe our particle better, while others are forced to use a density operator to describe the same particle. We will show that reality is also relative. For one observer there may exist a particle which does not exist for another, and the same particle may exist for you at one time but not at another. But to do these tricks we are forced to follow rules which guarantee that no logical paradox can occur. The main conclusion of this chapter is that there is no absolute reality, just as there is no absolute time. We always need to relate reality to a concrete observer, as we do with time. We call it the Relative Realities approach.
\section{Wigner's friend}\label{sectionWigner}
The Wigner's friend approach to measurement is the first step towards the relative realities approach. When we are talking about measurement, we always need to refer to the observer who is measuring. Just as, strictly speaking, ``The car is moving.'' is nonsense, because we should rather say ``The car is moving relative to the ground.'', we should always say ``Bob is measuring the position of a particle'' instead of ``the position of a particle is being measured''.
We already talked about measurement as entangling the observer with the system, from the observer's point of view. To illustrate the Wigner's friend approach let us suppose we have a closed box with a dead-alive cat and Bob, who does not like cats and is happy when he sees the cat dead after he opens the box; before that he is in a neutral mood. So the whole system $cat + Bob$ is described by \begin{equation} (\ket{\text{alive cat}}+\ket{\text{dead cat}})\ket{\neutranie}\ , \end{equation} before opening the box and \begin{equation}\label{smileyfrownie} \ket{\text{dead cat}}\ket{\smiley}+\ket{\text{alive cat}}\ket{\frownie}\ , \end{equation} after opening the box.
From the point of view of Bob it is just the cat that he is observing; he cannot observe himself. But when some other observer, let us call her Alice, comes in and takes a look, she does not observe only the cat, but the whole Bell state (\ref{smileyfrownie}). She can look at Bob himself, see his sad face, and because Alice knows Bob hates cats, she also knows the cat is alive. Or she can take a look at the alive cat and will know Bob is not smiling. The fact remains that no matter what Alice does, she does not measure only the cat as Bob did, but the joint system $cat + Bob$. By measuring it, either by looking at Bob or at the cat, Alice entangles herself with this joint system. So the state (\ref{smileyfrownie}) is the system $cat + Bob$ from the point of view of an outer observer, in our case from Alice's view.
If we suppose Alice loves cats, for some other outer observer the system $Cat$+$Bob$+$Alice$ is then described by \begin{equation} \ket{\text{dead cat}}\ket{\smiley}\ket{\frownie}_{A}+\ket{\text{alive cat}}\ket{\frownie}\ket{\smiley}_{A}\ . \end{equation}
We can do the same for an arbitrary number of observers, each of them looking at something different. The more people come to measure, the more people become entangled with the original system and with the observers who measured before.
Another consequence of this approach is a quite strange and stunning discovery. When we have some measuring device and the system which is measured by it, the actual measurement from our point of view happens not when the measuring device entangles itself with the system, but when we look at the device and entangle ourselves with it. If we have some device measuring the dead-alive cat and we come to take a look at the display whether it shows $0$ or $1$, the system we are observing is \begin{equation} \ket{\text{dead cat}}\ket{0}+\ket{\text{alive cat}}\ket{1}\ . \end{equation} It does not matter how big the measuring device is; it is in the superposition anyway until we look at it, or until the information about the dead-alive cat leaks from some other part of the device already entangled with the cat, for instance by emission of light.
How exactly this process of entangling happens is called the measurement problem and it is not yet entirely solved; however, we try to put some insight into it in the last section of this chapter, \ref{staringzenoeffect}.
\section{Brainwash experiment}
We have already talked about the presumption that a macroscopic system behaves in the very same way as a tiny one, since it is made of the very same atoms and thus must obey the same laws of physics. Now we will use that assumption to construct the brainwash experiment. It is nothing else than a quantum eraser experiment with a person playing the main role -- the role of the observer whose knowledge is going to be erased. Let Alice be in that unpleasant position.
We suppose Alice has only one qubit of memory, spanned by $\{\ket{0},\ket{1}\}$. It is not much, but it will suffice for our cause. Now we will use her as an observer. For simplicity let us suppose she observes a system (let us call it a particle for now) entirely described by another qubit $\{\ket{L},\ket{R}\}$ (left, right). She observes this system, or in other words she measures it, but we already know that for some observer looking from the outside at the joint system of Alice and the particle it just behaves like a unitary entangling operation between the two.
Suppose the initial state is \begin{equation} \ket{\psi_i}=\frac{1}{\sqrt{2}}\ket{0}(\ket{L}+\ket{R}) \end{equation} resulting in the state \begin{equation} \ket{\psi_f}=\frac{1}{\sqrt{2}}(\ket{0}\ket{L}+\ket{1}\ket{R})=U\ket{\psi_i} \end{equation} after the observation. One of the unitary operators able to do this is for example \begin{equation} \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right) \end{equation} in the basis $\{\ket{0}\ket{L},\ket{0}\ket{R},\ket{1}\ket{L},\ket{1}\ket{R}\}$, but this operator is not uniquely defined. Some other unitary able to do this is for example \begin{equation}\label{otherunitarychoice} \left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 1 & 0 & 0 & 0 \end{array} \right). \end{equation}
The evolution may describe Alice looking at one slit in the double-slit experiment, or controlling one path in our which-path experiment in section \ref{doublesliteraserSection}, or opening the box with the dead-alive cat, or looking at a car to see whether it is parked on the left or right side of the road.
Before the observation Alice knows nothing about the system, but after it she knows whether the car is on the left $\ket{L}$ or on the right $\ket{R}$ and remembers it. If the car is on the left her brain remains in the state $\ket{0}$ and when it is on the right her brain goes to the state $\ket{1}$. To her it appears the car is either on the left side or on the right side, but for an outer observer both are true. We know both possibilities are ``real'' and we have to take them into account, because otherwise the particle in the double-slit experiment would not interfere, and neither would the two photons in the Herzog et al. quantum eraser of section \ref{Herzogsection}. The funny thing is we can erase her memory and undo her observation, so she may be able to observe the car again. The only thing we need to do is find some unitary $\tilde{U}$ which, applied to the final state, gives us the initial one: \begin{equation} \tilde{U}U\ket{\psi_i}=\ket{\psi_i}\ . \end{equation} We know some certainly exist; for example we may put $\tilde{U}=U^{\dagger}=U^{-1}$, which actually acts as a propagator taking the system back in time, undoing the observation as if it had never happened. But this is not the only way to do it. For example, the inverse of the unitary in (\ref{otherunitarychoice}) does the same thing and it does not need to propagate anything back in time. Another example of an erasing device which does not need to propagate anything back in time is a beam-splitter. A beam splitter acts as a simple Hadamard gate \begin{equation} \frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right) \end{equation} and is its own inverse, so upon double passage everything goes back to the initial state.
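A short numerical sketch (an independent check, not part of the experimental proposal itself) verifies that the first unitary above records the car's side in Alice's memory, that applying its inverse erases the record, and that the Hadamard gate is indeed its own inverse.

\begin{verbatim}
import numpy as np

# Basis order: |0L>, |0R>, |1L>, |1R>.
U = np.array([[1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0]], dtype=complex)

psi_i = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)   # |0>(|L>+|R>)/sqrt(2)
psi_f = U @ psi_i                                            # observation recorded

target = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|0L>+|1R>)/sqrt(2)
print(np.allclose(psi_f, target))              # True
print(np.allclose(U.conj().T @ psi_f, psi_i))  # True: U^dagger erases the memory

# The beam splitter (Hadamard gate) is likewise its own inverse.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
print(np.allclose(H @ H, np.eye(2)))           # True
\end{verbatim}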
The second unitary $\tilde{U}$ erases Alice's memory and now, when everything is back in the initial state, she can again look at the car and may see it on the opposite side this time. The unitary $\tilde{U}$ brain-washed her, and that is why we call this the brain-wash experiment. So she may have seen the car on the left side the first time, and after we brainwashed her she may see it on the right. Now we are getting to the true relative realities. For her, at some time, the car was on the left. It was her personal reality. But after we erased her memory, we also changed the possible reality she is able to see. At a later time she may see the car on the right side, so her personal reality will be exactly the opposite. It seems we can change the realities as long as we erase the memory of everyone who is involved, and that is how any possible paradox is avoided. No one can say that at some earlier time his reality was different, because to change the reality we need to erase his memory first.
We already constructed a brain-washing machine, but we can also do it differently, with more complexity. We can construct a device that first takes Alice's memories away and erases them at a later time. For now suppose Alice has a qutrit memory $\{\ket{0},\ket{1},\ket{2}\}$ and her initial state is $\ket{0}$, so she can also remember whether she has already observed the car or not. Suppose we have another ``switching unit'' system consisting of $\{\ket{u},\ket{d}\}$ (up, down). Now we can temporarily move Alice's memories to the switching unit, the switching unit moves to the car, disentangles it, and everything is back in the initial state: \begin{equation} \begin{split} &\frac{1}{2}\ket{0}(\ket{L}+\ket{R})(\ket{u}+\ket{d})\\ \stackrel{U_1}{\longrightarrow} &\frac{1}{2}(\ket{1}\ket{L}+\ket{2}\ket{R})(\ket{u}+\ket{d})\\ \stackrel{U_2}{\longrightarrow} &\frac{1}{\sqrt{2}}\ket{0}(\ket{L}\ket{u}+\ket{R}\ket{d})\\ \stackrel{U_3}{\longrightarrow} &\frac{1}{2}\ket{0}(\ket{L}+\ket{R})(\ket{u}+\ket{d}) \end{split} \end{equation}
One can easily find unitary operators achieving this; an example is given in appendix \ref{unitary3}. The same switching device could be used to move Alice's memory to Bob. Note that we cannot simply erase the quantum information (no-deleting theorem \cite{PatiNoDeleting}) nor copy it in general (no-cloning theorem \cite{NielsenChuang}).
Nevertheless, although this may seem revolutionary, it may be very hard to carry out such an experiment. This is mainly due to the continuous interaction with other systems and the resulting decoherence of the car, and we already know that to perform a successful eraser experiment we need to involve every particle interacting with the car. This may be overcome by using a very small system instead of a whole car, for example a single particle, but there is still the problem of how to implement the proper erasing unitary $\tilde{U}$. The interaction has to be mediated by something, otherwise Alice would not see anything, for example by a photon emitted upon de-excitation. But then the erasing unitary would represent pulling the photon back from Alice's eye and stuffing it back into the observed particle. Also, no one could tell anyone whether the experiment about relative realities has been successful, because to make it successful the memories of all participants have to be erased. Its usefulness is also questionable; nevertheless, when you have a chance to save the life of a cat, why not use it?
\section{Relative Realities}\label{sectionRelative Realities}
This chapter is motivated by the quantum eraser experiment of section \ref{Herzogsection}. Let us reintroduce this experiment, depicted in figure \ref{Herzog1}. In this experiment we had a high-energy photon which transformed into two low-energy photons. Twice. At the first passage there was some probability that the pair of lower-energy photons is created and some probability that it is not created and the high-energy photon passes through. Afterwards it is reflected back and has a second chance to create the same pair of photons. However, at the end we see interference between the pair created upon the first passage and the pair created upon the second. So both of these possibilities of creation were actually true. There also must have been a time when both the high-energy photon and the created pair coexisted, otherwise the system would not interfere. And it does interfere. This is not just a thought experiment but a real one with a well-documented interference pattern \cite{ComplementarityHerzog}. So this experiment seems to violate the law of energy conservation, since at the beginning we have one high-energy photon and at some later time we have this photon plus another pair of the same total energy. In fact, the law of energy conservation has a quite different meaning here. If we take a look at an arbitrary part of the system, we never see both the high-energy photon and the pair at the same time. Only one possibility comes true. Also, as we showed in section \ref{sectionComplementarityofinterferenceandcorrelations}, this observation prevents the system from interfering. By observing it we banned the second possibility and thus also saved the law of energy conservation. At any given time we never see any paradox, because we see either the high-energy photon or the pair. We can never see both at once. The law of energy conservation here just says that the mean value of energy is conserved: when there is a three percent chance that the pair is produced upon one passage, this possibility also carries only three percent of the energy, while the other possibility carries the remaining 97 percent.
In the previous section we also showed that in principle there is a way to take everything back. If Alice looks at this experiment, trying to determine whether she sees only one high-energy photon or one pair of low-energy photons, only one possibility comes true. But by applying some unitary operator to the whole system including Alice we may be able to take everything back. Then she may look again. She may have seen the high-energy photon the first time, but the second time she may see a pair of low-energy photons.
So even the existence of a particle is relative. From the point of view of some outer observer (Bob, for example) the system behaves nicely under some unitary evolution and can show interference, but to the observer stuck inside the system itself (Alice) it appears very different -- it seems only one reality (history, possibility) is true.
For Alice, being in the system and observing it means the system does not evolve unitarily anymore. It is because she interacts with the system and gets entangled with it; this is usually called measurement. In Everett's Many World interpretation we assume that in every measurement the world splits into several worlds with certain probabilities, where the resulting worlds differ in the measurement outcome. Thus the whole evolution of the universe consists of infinite and irreversible splitting. Nevertheless, in the previous section we showed that we can go back from one such world to another by a simple quantum erasure. Also, measurement is a strictly local issue, since it involves only people interacting with the observed system, not people far away who have nothing to do with it.
In the relative realities approach we suggest there is only one world: a world consisting of every possibility, where these possibilities interfere with each other. However, by looking at this world from the inside, only some possibilities come true. By looking at this world we build our own reality\footnote{And we will not discuss whether we have any control over how it is built or not, whether we can choose the outcome of our measurement or not.} while interacting with it. However, the reality we build is not irreversible. Someone from the outside can change it for us by a quantum erasure and give us a second chance to observe.
We illustrate this in the comics ``Alice in Wonderland'' (figures \ref{aliceinwonderland4} and \ref{aliceinwonderland5}) and ``Personal realities spreading'' (figure \ref{personalrealities}) at the end of this chapter.
\section{Continuous evolution and Quantum Zeno effect}
Another problem in the Many World interpretation, and thus also in the Relative realities approach, which we intentionally avoided until now, is the problem of a continuously evolving system, where the observer continuously entangles with the observed system. Until now we worked with unitary evolution independent of time, represented by some unitary jump (appendix \ref{listofUnitaries}), rather than a unitary evolution continuously transforming away from the identity operator. But a lot of unitary evolution is time-dependent. Imagine, for example, Alice slowly opening the box with the dead-alive cat or Bob looking at a decaying atom.
Suppose Bob is happy when he sees the alpha particle from the atom decay, because after that he can finally go home. Bob together with the atom is then described by \begin{equation}\label{continuousBob} \sqrt{e^{-\lambda t}}\ket{U}\ket{\frownie}+\sqrt{1-e^{-\lambda t}}\ket{Th}\ket{\smiley}\ . \end{equation}
If we suppose the initial state \begin{equation} \ket{U}\ket{\frownie}, \end{equation} the unitary matrix between the initial and final state is then, for example, \begin{equation} \left( \begin{array}{cccc} \sqrt{e^{-\lambda t}} & \sqrt{1-e^{-\lambda t}} & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \sqrt{1-e^{-\lambda t}} & -\sqrt{e^{-\lambda t}} & 0 & 0 \end{array} \right), \end{equation} in the basis $\{\ket{U}\ket{\frownie},\ket{U}\ket{\smiley},\ket{Th}\ket{\frownie},\ket{Th}\ket{\smiley}\}$.
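One can again check numerically, in a minimal NumPy sketch with arbitrarily chosen values of $\lambda$ and $t$, that this matrix is unitary at any time and reproduces the state (\ref{continuousBob}) when applied to $\ket{U}\ket{\frownie}$:
\begin{verbatim}
import numpy as np

lam, t = 0.7, 1.3                   # arbitrary test values
a = np.sqrt(np.exp(-lam*t))
b = np.sqrt(1 - np.exp(-lam*t))

# basis: |U,frown>, |U,smile>, |Th,frown>, |Th,smile>
U = np.array([[a,  b, 0, 0],
              [0,  0, 0, 1],
              [0,  0, 1, 0],
              [b, -a, 0, 0]])

assert np.allclose(U @ U.T, np.eye(4))      # unitary at any time t
psi0 = np.array([1.0, 0, 0, 0])             # |U>|frown>
assert np.allclose(U @ psi0, [a, 0, 0, b])  # matches eq. (continuousBob)
\end{verbatim}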
Alice continuously opening the box can be described, for instance, by the continuous evolution \begin{equation}\label{continuousAlice} \begin{split} \frac{e^{-it}}{\sqrt{2}}&(\cos t \ket{\text{alive}}\ket{\neutranie}+\cos t \ket{\text{dead}}\ket{\neutranie}\\ +&i\sin t \ket{\text{dead}}\ket{\smiley}+i\sin t \ket{\text{alive}}\ket{\frownie}) \end{split} \end{equation} with the initial state \begin{equation} \frac{1}{\sqrt{2}}(\ket{\text{alive}}+\ket{\text{dead}})\ket{\neutranie}\ , \end{equation} at $t=0$ and the final state \begin{equation}\label{smileyfrownie} \frac{1}{\sqrt{2}}(\ket{\text{dead cat}}\ket{\smiley}+\ket{\text{alive cat}}\ket{\frownie})\ , \end{equation} at $t=\frac{\pi}{2}$, which is the very same as the one presented in section \ref{sectionWigner}. A unitary matrix able to do that is written in appendix \ref{unitary4}.
Both of these examples show that the usual interpretation of splitting into two worlds with some probabilities seems somehow inadequate. How could Alice or Bob enter different worlds with certain probabilities when these probabilities change continuously? Or do they enter just once and then stay in this new world? When exactly do they jump into their new world? Or do they keep jumping from one world to another?
All these questions arise because we tried to describe what Alice and Bob see using the knowledge of some other, outer observer. But the states (\ref{continuousBob}) and (\ref{continuousAlice}) do not describe what Bob and Alice see; they describe what the outer observer, observing both Bob and the atom or Alice and the cat, will see when he decides to look at them. The question is: is there any relationship between these states and what Bob and Alice actually see? Can we derive when Bob will detect an alpha particle and when Alice will finally see the dead or alive cat just from this outer observer's wave function?
One example which shows that there is a difference between what the outer observer and the inner observer see is the Quantum Zeno effect. Simply speaking, it says that very frequent measurements\footnote{This is usually referred to as {\em continuous measurement}, but it is always a large but finite number of measurements with small time lags between them.} of the same system inhibit the evolution of this system. This is mainly due to the fact that the wave function of a quantum system changes as $\sim \Delta t$ while the probability of measuring a state different from the one measured in the previous measurement is proportional to $(\Delta t)^2$, where $\Delta t$ is the delay between two consecutive measurements. The probability that the state remains the same throughout all $N=T/\Delta t$ measurements performed within a fixed time $T$ is then \begin{equation} P\approx\left(1-c\,(\Delta t)^2\right)^{T/\Delta t}\approx 1-c\,T\,\Delta t\ , \end{equation} for some constant $c$, which goes to $1$ when the delay is sufficiently small.
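The limit can be illustrated with a small numerical sketch (assuming, for concreteness, $c=1$ and $T=1$ in arbitrary units):
\begin{verbatim}
import numpy as np

c, T = 1.0, 1.0                       # assumed illustrative constants
for dt in (0.1, 0.01, 0.001):
    n = int(T/dt)
    survival = (1 - c*dt**2)**n       # probability that no jump is ever seen
    print(dt, survival)               # ~0.904, ~0.990, ~0.999 -> approaches 1
\end{verbatim}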
This effect has been demonstrated experimentally, for example in \cite{ItanoZenoEffectExperimental}, where the authors observed it in an rf transition between two $^{9}$Be$^{+}$ ground-state hyperfine levels. Short pulses of light, applied at the same time as the rf field, made the measurements. There are also theoretical works, for example on a particle in a double-well potential \cite{GagenContinuousPositionMeasurements}, where the authors show that frequent position measurements of a particle in such a potential inhibit tunneling from one well to the other. However, all of these quantum Zeno effects require \emph{active} measurements, where the experimenter keeps pumping some sort of energy into the system. In the first example it is the laser beam which keeps electrons jumping from a low orbit to a higher one and back by spontaneous de-excitation, and in the second example the position measurements keep pushing the particle to higher and higher energy levels, which finally exceed the barrier between the two wells and set the particle free.
However, such a Quantum Zeno effect does not solve our problem, because in our problem we do not put any energy in; we just passively look around. It is a simple fact that just by looking at some unstable material we are not able to inhibit its decay.\footnote{I experienced it myself: holding Americium-243 in my hand and looking at it did not prevent my Geiger--M\"{u}ller counter from clicking.} To solve our problem, that is, to answer what observers are experiencing, we need something else. We introduce it in the next section.
\section{Possible solution to what Observers are experiencing, the true form of wave-particle duality, passive Zeno effect}\label{staringzenoeffect}
Wave-particle duality is usually explained as complementarity of wave and particle behavior, in other words, that both wave-like and particle-like behavior cannot be observed simultaneously. As some experiments show \cite{KaiserDelayedChoiceExperimental} and as we showed in the first chapter \ref{doublesliteraserSection} there is a continuous transformation between wave-like and particle-like behavior.
One very simple example is the double-slit experiment. Normally, when we send photons through a double slit they interfere and we see interference fringes on our detection plate -- we have wave-like behavior. But when we put a correctly oriented half-wave plate in front of one slit, it turns the polarization of light coming through this slit perpendicular to the polarization of light coming through the second slit and the interference disappears -- we have particle-like behavior. But we can also do something in between. We can orient the half-wave plate on the first slit in such a way that it turns the polarization by just $45^\circ$. Then we see a less visible interference pattern with a hint of non-interfering intensity peaks in front of each slit -- we have something between particle-like and wave-like behavior. We could do the same calculations we did in the second chapter \ref{sectionComplementarityofinterferenceandcorrelations} when deriving the complementarity of correlations and one-particle interference. But all the calculations were made using waves, not particles. Note that particle behavior comes out only when we measure. As in Tonomura's double-slit experiment using electrons, the interference pattern was not seen at once, but had to be filled in by single hits finally forming the well-known fringes. Thus we suggest the true form of wave-particle duality:
\textbf{Particles always evolve as waves; their particle nature comes out only when we entangle with them by measuring them. When we describe a system we do not interact with, particles go through every possible history and each of these histories is real; each of them contributes to some measurable quantities. Quantization is a result of entangling ourselves with the system. When we interact with the system or, in other words, measure the system, we see only quanta -- something we call particles. Wave behavior thus refers to the system we do not interact with, while particle behavior refers to our observation of the system.}
Now we can finally answer what Bob and Alice see. From the outside, Bob is described by \begin{equation}\label{continuousBob2} \sqrt{e^{-\lambda t}}\ket{U}\ket{\frownie}+\sqrt{1-e^{-\lambda t}}\ket{Th}\ket{\smiley}\ . \end{equation} His knowledge about the change of the state must be carried by some interaction particle, some quantum he is able to detect. In this case it is an alpha particle, which carries the information about the atom decay. From the view of the outer observer the system Bob+atom evolves continuously (wave behavior), but from the point of view of Bob it is strictly a jump effect. He just detects an alpha particle at some point. To derive when he detects the particle we use the wave function describing the overall state (\ref{continuousBob2}). From the form of the wave function we can be sure that at time $t_0=0$ the atom has not decayed yet. The probability that it decays in the interval $(0,t)$ is equal to\footnote{U as uranium, Th as thorium.} \begin{equation}
p(U\rightarrow Th(0,t)|U(t_0=0))=\left(\sqrt{1-e^{-\lambda t}}\right)^2=1-e^{-\lambda t}, \end{equation} which is the very same probability that Bob detects an alpha particle in $(0,t)$. Let us denote the probability that he detects the particle at time $\tau$ as \begin{equation}
p(U\rightarrow Th(\tau)|U(t_0=0))=\rho(U\rightarrow Th(\tau)|U(t_0=0))\mathrm{d}\tau\ , \end{equation} where $\rho(\cdot)$ is a probability density. From the construction it is obvious that \begin{equation}
\int_0^t\rho(U\rightarrow Th(\tau)|U(t_0=0))\mathrm{d}\tau=p(U\rightarrow Th(0,t)|U(t_0=0))\ . \end{equation}
After differentiation we get the expression for $\rho$, \begin{equation}
\rho(U\rightarrow Th(t)|U(t_0=0))=\frac{\mathrm{d}p(U\rightarrow Th(0,t)|U(t_0=0))}{\mathrm{d}t}=\lambda e^{-\lambda t}. \end{equation}
So we derived that Bob will detect the incoming particle at time $t$ with probability \begin{equation}
p(U\rightarrow Th(t)|U(t_0=0))=\lambda e^{-\lambda t}\mathrm{d}t\ . \end{equation}
From the general wave function describing both Bob and the atom we were able to derive what Bob is going to see and when. Now back to the relativity of realities. For an outer observer looking\footnote{Strictly speaking, looking is the wrong word here. The outer observer should not look, because looking usually refers to some sort of measurement, some type of interaction with the system. To be precise, we should rather talk about an observer not looking at but describing the system Bob+atom.} at both Bob and the atom the whole system is in a superposition, while for Bob the atom stays in the same state $\ket{U}$ until he detects an alpha particle. After the detection it stays in $\ket{Th}$. This illustrates the profound difference between an observer describing the system while not interacting with it and an observer interacting with the system. While for the outer observer the evolution is a continuous transformation, for the inner observer the evolution is a jump process.\footnote{As the reader probably noticed, we have just paved our path to the well-known Born rule.}
Now suppose Bob has not detected an alpha particle up until $t_0>0$. Knowing this, we can modify his probability density to better correspond to his current knowledge. Now the probability density is \begin{equation}
\rho(U\rightarrow Th(t)|U(t_0))=\frac{\mathrm{d}p(U\rightarrow Th(0,t)|U(t_0))}{\mathrm{d}t}=\lambda e^{-\lambda (t-t_0)}\ . \end{equation} This is quite interesting. Not observing anything gives Bob some additional information about the system and he is able to modify his probabilities accordingly. Not only detecting a particle but also not detecting anything gives Bob some information, and the wave function describing his observed system changes to the conditional one.
Now we can also compute the probability of detecting a particle in the very near future, given that it has not been detected before. For $\Delta t=t-t_0\ll 1$ \begin{equation}
p(U\rightarrow Th(t)|U(t_0))=\lambda e^{-\lambda \Delta t}\Delta t\doteq(1-\lambda \Delta t)\lambda \Delta t\doteq\lambda \Delta t\ . \end{equation}
Now we get back to Alice's observation of the alive-dead cat, namely the wave function \begin{equation}\label{continuousAlice2} \begin{split} \frac{e^{-it}}{\sqrt{2}}&(\cos t \ket{\text{alive}}\ket{\neutranie}+\cos t \ket{\text{dead}}\ket{\neutranie}\\ +&i\sin t \ket{\text{dead}}\ket{\smiley}+i\sin t \ket{\text{alive}}\ket{\frownie})\ . \end{split} \end{equation} In Bob's case we had a particle which tells us that the state has changed. We need to have one here too. For that reason suppose the dead cat turns blue and thus emits blue light, while the alive cat remains ginger and emits red light. Now when Alice opens the box, by seeing the color of the first photon she can tell whether the cat is alive or dead. After receiving this first photon Alice will keep receiving many others of the same color, because the original Quantum Zeno effect takes place. Simply speaking, if the cat proves to be alive, there is a much greater chance it will stay alive if the photon emission is frequent enough. If the cat proves to be dead, it will most likely still be dead some time later.
Now the state $\ket{\text{alive}}\ket{\neutranie}$ refers to the situation where the cat is alive but we have not received our photon yet, so we do not know it yet. Similarly for the state $\ket{\text{dead}}\ket{\neutranie}$. Finally, the states $\ket{\text{alive}}\ket{\frownie}$ and $\ket{\text{dead}}\ket{\smiley}$ refer to the cases when we received the photon and found out whether the cat is alive or dead.
Analogously to Bob's case, the probability of finding out in the interval $(0,t)$ that the cat is dead, with the initial state \begin{equation} \ket{i}=\frac{1}{\sqrt{2}}(\ket{\text{alive}}+\ket{\text{dead}}) \end{equation} when slowly opening the box, is \begin{equation}
p(blue(0,t)|\ket{i}(t_0=0))=\frac{e^{-it}}{\sqrt{2}}(i \sin t)\frac{e^{it}}{\sqrt{2}}(-i \sin t)=\frac{1}{2}\sin^2 t \end{equation} and the respective probability density of finding out that the cat is dead at time $t$ is \begin{equation} \begin{split}
\rho(blue(t)|\ket{i}(t_0=0))&=\frac{\mathrm{d}p(blue(0,t)|\ket{i}(t_0=0))}{\mathrm{d}t}\\ &=\sin t \cos t=\frac{1}{2}\sin (2t)\ . \end{split} \end{equation} We get the same probability density when calculating for the alive cat, \begin{equation}
\rho(red(t)|\ket{i}(t_0=0))=\frac{1}{2}\sin (2t)\ . \end{equation} So at any time $0\leq t\leq\frac{\pi}{2}$ the probability that we find the cat dead or that we find it alive is the same.
We know that if some outer observer looks at Alice after $t>\frac{\pi}{2}$ he must find Alice either happy or sad, nothing in between. That simply means that the probability of detecting a blue photon in the next short interval $(t_0,t)$, $\Delta t=t-t_0$, supposing we have not detected any particle until $t_0$, must go to infinity as $t_0$ goes to $\frac{\pi}{2}$. Also, not detecting a particle until $t_0$ means that we just have to throw out some possible outcomes from our sample space and renormalize the probabilities of the others. We throw out all outcomes of the type ``Alice detected a particle at time $\tau<t_0$''. The probability density of detecting a blue photon at time $t>t_0$, given that Alice has not detected any particle until $t_0$, is thus\footnote{We could have used the same reasoning for Bob and his unstable atom and obtained the same result.} \begin{equation} \begin{split}
\rho(blue(t)|\ket{i}(t_0))&=\frac{\rho(blue(t)|\ket{i}(0))}{\int_{t_0}^\frac{\pi}{2}\rho(blue(\tau)|\ket{i}(0))\mathrm{d}\tau+\int_{t_0}^\frac{\pi}{2}\rho(red(\tau)|\ket{i}(0))\mathrm{d}\tau}\\ &=\frac{\frac{1}{2}\sin (2t)}{2\int_{t_0}^\frac{\pi}{2}\frac{1}{2}\sin (2\tau)\mathrm{d}\tau} =\frac{\sin(2t)}{1+\cos(2t_0)}\ . \end{split} \end{equation}
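As a consistency check, the conditional density should still integrate to $\frac{1}{2}$ over the remaining interval $(t_0,\frac{\pi}{2})$, since the cat may just as well turn out to be alive. A minimal SciPy sketch (with an arbitrarily chosen $t_0$) confirms this, together with the unconditional value $p(blue(0,\frac{\pi}{2}))=\frac{1}{2}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

rho_blue = lambda t: 0.5*np.sin(2*t)                    # unconditional density
rho_cond = lambda t, t0: np.sin(2*t)/(1 + np.cos(2*t0)) # conditional density

p_blue, _ = quad(rho_blue, 0, np.pi/2)
assert np.isclose(p_blue, 0.5)                          # blue photon half the time

t0 = 1.0                                                # arbitrary test value
p_cond, _ = quad(rho_cond, t0, np.pi/2, args=(t0,))
assert np.isclose(p_cond, 0.5)                          # still one half after t0
\end{verbatim}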
After a short derivation we get the probability of detecting a blue photon in the next short interval $(t_0,t)$, $t_0<t<\frac{\pi}{2}$, supposing we have not detected anything until $t_0$,\footnote{The reader may have noticed that the probability does not have the correct dimension. It is because we work with $\sin(t)$ instead of $\sin(\omega t)$.} \begin{equation}
p(blue(t)|\ket{i}(t_0))=\frac{\sin(2t)}{1+\cos(2t_0)}\Delta t\doteq\frac{\sin(2t_0)+2\cos(2t_0)\Delta t}{1+\cos(2t_0)}\Delta t\ . \end{equation}
For $t_0\approx\frac{\pi}{2}$, using a Taylor series we get \begin{equation}
p(blue(t)|\ket{i}(t_0))=\frac{\Delta t^2}{2(\frac{\pi}{2}-t_0)^2}\ . \end{equation}
Note that the actual state of the cat does not matter for Alice's view anymore. All she can tell about the cat is mediated by the particles she is receiving. She supposes the cat is dead when she has received the blue photon, and that is nearly all she can do. Whether the blue photon really tells her that is a totally different question. It just seems we are bounded by our limited ability to perceive, and all we infer about the surrounding world is just an interpretation of the little information we receive. What we see is no longer about the actual object but rather about the particles mediating the interaction. A blue photon just makes Alice happy and a red photon makes her sad. However, these mediating particles carry some information about the object they have been emitted from, and that is why we can say something about it.
To better illustrate the passive Zeno effect, let us present the last example in this thesis. Imagine a particle going through two paths at once, while Bob monitors only one path. It is just like looking into one of two slits, with one slight difference: we now presume a rather idealized rectangular wave packet. Suppose that even though Bob looks into one slit he does not detect any particle; the particle just went the other way. An illustration is given in figure \ref{passiveZeno}.
\begin{figure}
\caption{As we look at one part of the system and do not receive anything, the probability of the particle being somewhere else rises. If we do not detect anything at all, we can be sure the particle is somewhere on the other side.}
\label{passiveZeno}
\end{figure}
If we omit from our reasoning the actual horizontal position of the particle and rather focus on what Bob is seeing, the wave function is \begin{equation} \sqrt{\frac{1}{2}\left(1-\frac{t}{T}\right)}\ket{u}\ket{0}+\sqrt{\frac{1}{2}\frac{t}{T}}\ket{u}\ket{1}+\sqrt{\frac{1}{2}}\ket{d}\ket{0}\ , \end{equation} where $T=\frac{l}{v}$ is the duration of the interaction, $l$ the initial length of the wave packet, $v$ the wave packet velocity, $\ket{u},\ket{d}$ the particle position referring to up or down, and $\ket{0},\ket{1}$ the excitation of Bob's eye, where $0$ refers to a non-excited eye (no particle detected) and $1$ to an excited eye (particle detected).
The interesting fact about this experiment is that even though Bob did not detect anything, he entangled himself with the system and destroyed the possible interference. He did not notice any change, yet there was a physical consequence. That is what we call the passive Zeno effect: similarly to the original Quantum Zeno effect the wave function collapsed, but now it was not a consequence of an active contribution of the observer; it was rather an amusing circumstance, and Bob did not even notice he had achieved something like that.
Since this experiment is mathematically somewhat different from the previous examples, let us calculate the respective probabilities for the last time.
With the initial state \begin{equation} \frac{1}{\sqrt{2}}(\ket{u}+\ket{d})\ket{0}, \end{equation} the probability density of detecting a particle at time $t$, $0<t<T$, is \begin{equation} \begin{split}
\rho(0\rightarrow 1(t)|\ket{i}(t_0=0))&=\frac{\mathrm{d}p(0\rightarrow 1(0,t)|\ket{i}(t_0=0))}{\mathrm{d}t}\\ &=\frac{1}{2T}\ . \end{split} \end{equation} The probability density of detecting the particle at time $t>t_0$, given that Bob has not detected anything until $t_0$, is \begin{equation} \begin{split}
\rho(0\rightarrow 1(t)|\ket{i}(t_0))&=\frac{\rho(0\rightarrow 1(t)|\ket{i}(0))}{1-p(0\leq t\leq t_0|\ket{i}(0))}\\ &=\frac{\frac{1}{2T}}{1-\int_{0}^{t_0}\frac{1}{2T}\mathrm{d}\tau}=\frac{1}{2T-t_0}\ , \end{split} \end{equation}
where the denominator $1-p(0<t<t_0|\ket{i}(0))$ is equal to the sum of the probabilities of the outcomes still possible on the presumption that Bob has not detected anything until $t_0$. As before, we have just renormalized the probabilities of the possible events in our new sample space. The probability of finding the particle in the next short interval $(t_0,t)$, when Bob has not detected any particle until $t_0$, is then \begin{equation}
P(0\rightarrow 1(t)|\ket{i}(t_0))=\frac{1}{2T-t_0}\Delta t\ . \end{equation}
To show that renormalization of probabilities is the right way of calculating conditional probabilities, let us do it for the atom decay again, where $\rho(U\rightarrow Th(t)|U(0))=\lambda e^{-\lambda t}$. Our new conditional probability density of detecting the alpha particle, on the presumption that Bob has not detected it until $t_0$, is then \begin{equation} \begin{split}
\rho(U\rightarrow Th(t)|U(t_0))&=\frac{\lambda e^{-\lambda t}}{\int_{t_0}^\infty\lambda e^{-\lambda \tau}\mathrm{d}\tau}= \frac{\lambda e^{-\lambda t}}{1-\int_{0}^{t_0}\lambda e^{-\lambda \tau}\mathrm{d}\tau}\\ &=\frac{\lambda e^{-\lambda t}}{e^{-\lambda t_0}}=\lambda e^{-\lambda (t-t_0)}\ , \end{split} \end{equation}
which is the very same result we received before directly from differentiation of $p(U\rightarrow Th(0,t)|U(t_0))$.
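The memoryless property recovered here can also be illustrated by a small Monte Carlo sketch (with arbitrary values of $\lambda$ and $t_0$): sampling exponentially distributed decay times and keeping only those that survive past $t_0$ reproduces the shifted exponential density $\lambda e^{-\lambda(t-t_0)}$, whose mean is $t_0+1/\lambda$.
\begin{verbatim}
import numpy as np

lam, t0 = 0.5, 2.0                                 # assumed test values
rng = np.random.default_rng(0)
decays = rng.exponential(1/lam, size=1_000_000)    # density lam*exp(-lam*t)

survivors = decays[decays > t0]                    # nothing detected until t0
print(survivors.mean(), t0 + 1/lam)                # both close to 4.0
\end{verbatim}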
At the end of this chapter let us summarize what we learned here.
\textbf{For an outer observer who does not interact with the system, the system transforms continuously through some unitary evolution, without any jumps, while an observer interacting with the system never sees any continuous transformation. Instead, the information he receives about the system is delivered to him by quanta -- indivisible amounts of energy. The probabilities of receiving these quanta at a time $t$ can be calculated by differentiating the squared coefficients of the wave function describing both the observer and the system he interacts with.}
We added the presumption that for the outer observer there is always a continuous transformation, without any jumps. One can immediately argue that, for example, applying the unitary matrix \begin{equation}\label{jumpunitary} U=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) \end{equation} is not a continuous transformation. But using Occam's razor we suggest that all such jumping unitary matrices are limits of continuous ones, for example (\ref{jumpunitary}) being reached by \begin{equation} U=e^{-i\omega t}\left( \begin{array}{cc} \cos (\omega t) & i\sin(\omega t) \\ i\sin(\omega t) & \cos (\omega t) \end{array} \right) \end{equation} at $\omega t=\frac{\pi}{2}$; for $\omega\rightarrow\infty$ the continuous transformation is so fast that it appears instantaneous. That is why we said: ``For the outer observer the evolution is always continuous.'' ``For the inner (interacting) observer the evolution is always jumping.''
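That the continuous family indeed passes exactly through the jump unitary (\ref{jumpunitary}) at $\omega t=\frac{\pi}{2}$ can be verified with one more short NumPy sketch:
\begin{verbatim}
import numpy as np

def U(omega, t):
    c, s = np.cos(omega*t), np.sin(omega*t)
    return np.exp(-1j*omega*t) * np.array([[c, 1j*s],
                                           [1j*s, c]])

swap = np.array([[0, 1],
                 [1, 0]], dtype=complex)
# at omega*t = pi/2 the continuous evolution equals the "jump" unitary
assert np.allclose(U(np.pi/2, 1.0), swap)
\end{verbatim}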
\begin{figure}
\caption{Alice in Wonderland. Alice looks at the car and sees it on the left side. She does not like it, because it is much easier for her to drive from the right side. Thus she decides to go to the brainwash unit. Her knowledge about the parked car is passed to the switching unit.}
\label{aliceinwonderland4}
\end{figure}
\begin{figure}
\caption{Alice in Wonderland. The switching unit flies back to the parked car and makes everything undecided again. Alice can take a look for a second time. She sees the car on the right side now.}
\label{aliceinwonderland5}
\end{figure}
\begin{figure}
\caption{Personal (relative) realities of three people (highly entangled and continuously interacting systems). As these people interact with the world and observe it, they get entangled with it. As shown in Delayed Choice Entanglement Swapping, they can also entangle indirectly; they can be entangled with things they have never met before. That is why we chose plain black lines to represent entanglement. The amount and specific type of entanglement depends on the type of the specific interaction. In the last picture we also depict two quantum erasures.}
\label{personalrealities}
\end{figure}
\chapter{Conclusion}
We began this thesis with the presumption ``An observer is just another quantum system'' and later added ``A closed system evolves through unitary evolution''. These two presumptions accompanied us on our way to understanding.
Using them, we were able to mathematically describe why obtainable which-path information prevents a system from interfering, and we found that there is a continuous transformation between wave-like and particle-like behavior.
Using these two presumptions we were able to prove why the Free will test, the experiment which tests one's free will based on quantum entanglement, is not possible. We were able to prove that we cannot predict the future using quantum entanglement alone. That was the No-communication theorem.
Using these two presumptions we were able to prove the formerly experimental concept of complementarity of one-particle interference and correlations. We proved that better correlations lead to a less visible interference pattern and vice versa, and we found a system which exhibits both interference and correlations with some other system. This derivation was mathematically the very same as the proof of why which-path information prevents a system from interfering. It is basically the same phenomenon.
These two presumptions are the basic building blocks of the Many World interpretation. With the knowledge gained during our study of quantum eraser experiments we used this interpretation as a starting point for a more complex view, the Relative Realities approach. We proved there is a way to pass between two worlds, which is the opposite of what the original Many World interpretation suggests. Still, to do it we need to erase all knowledge about the former world first, to prevent paradoxes. We showed there is a different reality for each observer. At different times people can see the same thing differently. If we erase someone's memory, he may be able to do the same measurement a second time and get a different result. Still, no paradox occurs, since the involved person does not remember anything from his former experience. Also, two people cannot see different results of the same experiment, because the latter does not measure only the system, but both the system and the person who measured first.
In the Relative realities approach, we suggest there is only one world, full of all possible superpositions and all possible outcomes. By observing it we entangle ourselves with it and reduce our scope. Nevertheless, there is a way back, by disentangling ourselves using quantum erasure. Thus, in contrast to the Many World interpretation, this approach fits better with the time-symmetric nature of Schrödinger's equation.
Finally, we showed that the probability calculation in the Many World interpretation does not fit very well with a time-dependent interaction between the system and the observer. We tried to fix this problem and suggested a solution. While for the ``outer'' observer, who does not interact with the joint system of the ``inner'' observer and the system the ``inner'' observer interacts with, the evolution is always continuous, for the ``inner'' observer interacting with the system the evolution is always a jump process. The information about the observed system is delivered to him by quanta -- indivisible packages of energy (or information). The probability of receiving a quantum at a given time $t$ can be calculated from the universal wave function\footnote{Here we mean the wave function that describes both the ``inner'' observer and the ``observed system'' itself from the point of view of the ``outer'' (or reference) observer.} by differentiating the squared coefficients belonging to the observer's state. Calculating conditional probabilities of the type ``Bob will detect a particle at a time $t$ given that he has not detected anything until now'' is done by renormalizing the remaining probabilities, or in other words, by renormalizing the probabilities which refer to the remaining events of our sample space. We successfully used this computation to describe Bob observing an atom decay, and we calculated two other interactions, namely Alice slowly opening a box with the dead-alive cat and Bob looking into one of two slits. We also introduced the passive Zeno effect, which says an observer can destroy a possible interference pattern even though he did not notice any change and did not register anything. That was our small contribution to the measurement problem.
\appendix
\chapter{Time evolution of some special systems}\label{timeEvolution}
Here we will show that for systems where each particle has only one energy level and these particles no longer interact, the time ordering of measurements on exclusive parts (different particles) has no particular importance, and thus we can omit it in our derivations. All of the experiments in the first chapter satisfy this presumption. For example, in Wheeler's quantum eraser it does not matter whether the photon goes through the upper or the lower part, since its energy is determined only by its wavelength and it is the same in both cases. In the Free will experiment the reasoning is the same. In Delayed Choice Entanglement Swapping we do not measure position but polarization instead. In this special case it does not matter, but if we took electrons with spins and did the same experiment in a magnetic field, it would behave differently, since the energy of spin up would be different from the energy of spin down.
Suppose we have $n$ non-interacting particles and each can have only one value of energy. The governing Hamiltonian is then just a multiple of the identity operator, \begin{equation} H=H_1+H_2+\cdots+H_n \end{equation} where $$H_1=E_1\bold{1}\otimes\bold{1}\otimes\cdots\otimes\bold{1},$$ $$H_2=\bold{1} \otimes E_2\bold{1}\otimes\cdots\otimes\bold{1},$$ $$\dots$$ $$H_n=\bold{1}\otimes\bold{1}\otimes\cdots\otimes E_n\bold{1},$$ and the time evolution is simply multiplication by an overall phase factor, \begin{equation} \ket{\psi(t)}=e^{-i(E_1+E_2+\cdots+E_n)(t-t_0)}\ket{\psi_0}\ , \end{equation} where \begin{equation} \ket{\psi_0}=\sum_{i_1,i_2...,i_n}\alpha_{i_1i_2...i_n}\ket{i_1}\ket{i_2}...\ket{i_n}\ . \end{equation}
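Since the evolution is only a global phase, no measurement statistics can depend on when the measurement is made. A minimal NumPy sketch with two toy particles (assumed energies $E_1=1.0$ and $E_2=2.5$) illustrates this:
\begin{verbatim}
import numpy as np

E_tot = 1.0 + 2.5                           # H = E_tot * identity on the joint space
psi0 = np.full(4, 0.5, dtype=complex)       # arbitrary normalized joint state

t = 3.7
psi_t = np.exp(-1j*E_tot*t) * psi0          # evolution is an overall phase only

P = np.diag([1, 0, 0, 0]).astype(complex)   # projector on one basis state
p0 = np.vdot(psi0, P @ psi0).real
pt = np.vdot(psi_t, P @ psi_t).real
assert np.isclose(p0, pt)                   # statistics independent of time
\end{verbatim}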
All operators, and in particular projectors, commute with such a Hamiltonian, and thus in such experiments it does not matter at all when the measurements on the exclusive parts of the system happen. Suppose we measure observable $A$ on the first particle and observable $B$ on the second. Observable $A$ acts as the identity on the rest of the system (particles $2,\dots,n$) and the same holds for $B$; $A$ and $B$ thus commute. The probability of measuring the value $a$ of observable $A$ at a time $t_1$, given the measured value $b$ of observable $B$ at a time $t_2$, remains the same if we choose any other times, for example $t_3$ and $t_4$ respectively. For simplicity suppose our initial time is $t_0=0$. \begin{equation*} \begin{split}
P&(A=a(t_1)|B=b(t_2))\\ &=\bra{\psi_0}e^{iHt_2}P_{B=b}e^{iH(t_1-t_2)}P_{A=a}e^{-iH(t_1-t_2)}P_{B=b}e^{-iHt_2}\ket{\psi_0}\\ &=\bra{\psi_0}P_{B=b}P_{A=a}P_{B=b}\ket{\psi_0}\\ &=\bra{\psi_0}P_{A=a}P_{B=b}P_{B=b}\ket{\psi_0}\\ &=\bra{\psi_0}P_{A=a}P_{B=b}\ket{\psi_0}\\ &=\bra{\psi_0}P_{B=b}P_{A=a}\ket{\psi_0}\\
&=P(A=a(t_3)|B=b(t_4)) \end{split} \end{equation*}
Now we see that in such systems it does not matter when the system is measured, nor does the time ordering of the measurements matter.
\chapter{List of unitary transformations used in the thesis}\label{listofUnitaries} \section{Unitary 1}\label{unitary1} The states just before the interaction, $\ket{\psi_u(0)}$ and $\ket{\psi_d(0)}$, are orthogonal, and if we suppose they are also normalized, the unitary operator changing \begin{equation} \frac{1}{\sqrt{2}}\ket{0}(\ket{\psi_u(0)}+\ket{\psi_d(0)})\ , \end{equation} to \begin{equation} \frac{1}{\sqrt{2}}(\ket{0}\ket{\psi_u(0)}+\ket{1}\ket{\psi_d(0)})\ , \end{equation} is for example\footnote{The unitary transformation doing this is not unique.} of the form \begin{equation*} \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right), \end{equation*} in the basis $\{\ket{0}\ket{\psi_u(0)},\ket{0}\ket{\psi_d(0)},\ket{1}\ket{\psi_u(0)},\ket{1}\ket{\psi_d(0)}\}$.
\section{Unitary 2}\label{unitary2} The unitary matrix between the vectors $\ket{i}=\left(\sqrt{\frac{2}{5}},\frac{1}{\sqrt{10}},\sqrt{\frac{2}{5}},\frac{1}{\sqrt{10}}\right)^T$ and $\ket{f}=\left(\frac{1}{\sqrt{2}},0,\frac{3}{5\sqrt{2}},\frac{2\sqrt{2}}{5}\right)^T$ (in the basis $\{\ket{0}\ket{L},\ket{0}\ket{R},\ket{1}\ket{L},\ket{1}\ket{R}\}$) is for example \begin{equation} U=P_2P_1^{-1}=P_2P_1^T\ , \end{equation} where \begin{equation*} P_1=\left( \begin{array}{cccc} \sqrt{\frac{2}{5}} & \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{10}} \\ \frac{1}{\sqrt{10}} & 0 & \frac{1}{\sqrt{2}} & \sqrt{\frac{2}{5}} \\ \sqrt{\frac{2}{5}} & -\frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{10}} \\ \frac{1}{\sqrt{10}} & 0 & -\frac{1}{\sqrt{2}} & \sqrt{\frac{2}{5}} \end{array} \right),\ \ \ P_2=\left( \begin{array}{cccc} \frac{1}{\sqrt{2}} & 0 & \frac{3}{5\sqrt{2}} & \frac{2}{\sqrt{17}} \\ 0 & 1 & 0 & 0 \\ \frac{3}{5\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} & \frac{6}{5\sqrt{17}} \\ \frac{2\sqrt{2}}{5} & 0 & 0 & -\frac{\sqrt{17}}{5} \end{array} \right)\ . \end{equation*}
\section{Unitary 3}\label{unitary3} \begin{equation*} U_1=\left( \begin{array}{cccccc} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{array} \right), \end{equation*} acting between Alice and Car in basis $\{0L,0R,1L,1R,2L,2R\}$ (schematically).
\begin{equation*} U_2=\left( \begin{array}{cccccc} 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{array} \right), \end{equation*} acting between Alice and the Switching unit in basis $\{0u,0d,1u,1d,2u,2d\}$.
\begin{equation*} U_3=\frac{1}{\sqrt{2}}\left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 1 & 1 \end{array} \right), \end{equation*} acting between Car and Switching unit in basis $\{Lu,Ld,Ru,Rd\}$.
\section{Unitary 4}\label{unitary4} \begin{equation*} U=e^{-it}\left( \begin{array}{cccccc} \cos t & 0 & i \sin t & 0 & 0 & 0 \\ 0 & \cos t & 0 & i\sin t & 0 & 0 \\ i\sin t & 0 & \cos t & 0 & 0 & 0 \\ 0 & i \sin t & 0 & \cos t & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right), \end{equation*} in basis $$\{\neutranie \text{alive},\neutranie \text{dead},\smiley \text{dead},\frownie \text{alive},\smiley \text{alive},\frownie \text{dead}\}$$ (schematically).
\chapter{Very short introductions to some interpretations of Quantum Mechanics}
\section{Everett's Many World interpretation}
Hugh Everett's Relative states formulation, later renamed by Bryce DeWitt as the Many World interpretation, is based on the idea that the whole of quantum mechanics, together with the Born rule of probability, can be derived using only unitary evolution. There is no general consensus on whether this was successful \cite{LandsmanManyWorldBornRuleSuccesful}, \cite{KentManyWorldCritique}. The process of measurement in the Many World interpretation is just the entangling of the observer with the system, from the point of view of the observer.
Let the resulting state be for example \begin{equation} \frac{1}{2}\ket{\text{dead cat}}\ket{\smiley}+\frac{\sqrt{3}}{2}\ket{\text{alive cat}}\ket{\frownie}\ . \end{equation} The Many World interpretation says the world splits into two worlds. The observer enters the world where the cat dies with probability $(\frac{1}{2})^2=\frac{1}{4}$ and is smiling because he does not like cats, and with probability $(\frac{\sqrt{3}}{2})^2=\frac{3}{4}$ the observer enters the world where the cat is still alive and he is unhappy.
\section{Decoherence theory}
Decoherence theory sees the measurement as an interaction of the particle with the environment, which is a complex system usually consisting of millions of particles. Still, the environment can be described by some state (although we do not know it exactly), and interacting with the particle changes this state in some way. Put mathematically, let \begin{equation} \ket{\tilde\psi_0}=\sum_i\ket{i}\braket{i}{\psi}\ , \end{equation} where $\{\ket{i}\}$ is the einselected basis (environmentally induced selected basis). The total wave function of the particle plus the environment is \begin{equation} \ket{\psi_0}=\sum_i\ket{i}\ket{\epsilon}\braket{i}{\psi}\ , \end{equation} where $\ket{\epsilon}$ is the initial state of the environment. Each state $\ket{i}\ket{\epsilon}$ evolves to $\ket{\epsilon_i}$ through some unitary evolution. Thus the final state is \begin{equation} \ket{\psi_f}=\sum_i\ket{\epsilon_i}\braket{i}{\psi}\ . \end{equation} Since unitary evolution conserves orthogonality, we have \begin{equation} \braket{\epsilon_i}{\epsilon_j}=\braket{i}{j}=\delta_{i,j}\ . \end{equation}
We cannot effectively control all degrees of freedom of the environment, and that is why we need to use the density matrix and trace over the environment to describe our particle. But now the particle is no longer in a pure state; it is in a mixed state, which is known as decoherence. The eigenvectors of this new density matrix are exactly the states our former state $\ket{\tilde\psi_0}$ can pass into, which fits perfectly with the well-known Born rule.
\section{Consistent Histories}
The consistent histories approach is based on the notion of a proposition, such as ``The particle went through the upper slit at a time $t$''. A set of propositions forms a history. A \emph{homogeneous history} $H_i$ is a sequence of propositions $P_{i,j}$, where the index $j$ refers to time, \begin{equation} H_i=(P_{i,1},P_{i,2},\dots, P_{i,n_i}), \end{equation} meaning proposition $P_{i,1}$ is true at time $t_1$, then proposition $P_{i,2}$ is true at time $t_2$, etc.
Each proposition can be represented by a projection operator $\hat P_{i,j}$ acting on a Hilbert space. A homogeneous history is represented by a tensor product of propositions, \begin{equation} \hat H_i=\hat P_{i,1}\otimes\hat P_{i,2}\otimes\dots\otimes\hat P_{i,n_i}\ . \end{equation}
We define the \emph{class operator} of a history as \begin{equation} \hat C_{\hat H_i}=T\prod_{j=1}^{n_i}\hat P_{i,j}=\hat P_{i,1}\hat P_{i,2}\dots\hat P_{i,n_i}\ , \end{equation} where $T$ orders the projectors $\hat P_{i,j}$ chronologically, i.e. $t_j\geq t_{j+1}$.
A set of histories $\{H_i\}$ is \emph{consistent} if \begin{equation} \Tr{\hat C_{\hat H_i}\rho\hat C_{\hat H_j}^{\dagger}}=0 \end{equation} for all $i\neq j$, where $\rho$ is the initial density operator.
The probability of a history $\hat H_i$ is then \begin{equation} Pr(\hat H_i)=\Tr{\hat C_{\hat H_i}\rho\hat C_{\hat H_i}^{\dagger}}\ . \end{equation}
\addcontentsline{toc}{chapter}{Bibliography}
\end{document} | arXiv |
Nuclear gene proximity and protein interactions shape transcript covariations in mammalian single cells
Marcel Tarbier ORCID: orcid.org/0000-0003-0556-25311,
Sebastian D. Mackowiak1 na1,
João Frade ORCID: orcid.org/0000-0003-3961-76412 na1,
Silvina Catuara-Solarz2,
Inna Biryukova ORCID: orcid.org/0000-0003-0701-28081,
Eleni Gelali ORCID: orcid.org/0000-0003-0067-54733,
Diego Bárcena Menéndez2,
Luis Zapata ORCID: orcid.org/0000-0002-1386-20192,4,
Stephan Ossowski2,5,6,
Magda Bienko ORCID: orcid.org/0000-0002-6499-90823,
Caroline J. Gallant7 &
Marc R. Friedländer ORCID: orcid.org/0000-0001-6577-43631
Nature Communications volume 11, Article number: 5445 (2020)
Single-cell RNA sequencing studies on gene co-expression patterns could yield important regulatory and functional insights, but have so far been limited by the confounding effects of differentiation and cell cycle. We apply a tailored experimental design that eliminates these confounders, and report thousands of intrinsically covarying gene pairs in mouse embryonic stem cells. These covariations form a network with biological properties, outlining known and novel gene interactions. We provide the first evidence that miRNAs naturally induce transcriptome-wide covariations and compare the relative importance of nuclear organization, transcriptional and post-transcriptional regulation in defining covariations. We find that nuclear organization has the greatest impact, and that genes encoding for physically interacting proteins specifically tend to covary, suggesting importance for protein complex formation. Our results lend support to the concept of post-transcriptional RNA operons, but we further present evidence that nuclear proximity of genes may provide substantial functional regulation in mammalian single cells.
Two genes that increase or decrease coordinately in expression over multiple conditions are said to covary. Gene expression covariation can be studied over two conditions (e.g., healthy and diseased tissue), in time-series experiments, or in metastudies spanning hundreds of tissues and cell types, for instance from public expression repositories1,2,3. Over the last 20 years, such studies have yielded numerous important biological insights due to the fact that covarying genes are often functionally related, and commonly share the same gene regulatory mechanisms.
In the last ten years, single-cell sequencing methods have emerged, making it possible to profile the entire transcriptomes of individual cells4,5,6. This makes it possible to identify the genes that covary in expression across individual cells, considering in effect every cell as a distinct condition. This research direction holds great promise, since it could reveal biological covariations that are not detectable in analyses of bulk cell populations. First, differences in cellular compositions between samples may disturb covariation analyses in bulk tissues7. And second, transcripts can appear to be constantly and moderately expressed in all studied tissues or cell cultures, but may in fact display temporally fluctuating and covarying expression in single cells. This type of covariation may never be detected in bulk tissues. Until now, however, transcriptome-wide single-cell studies of such intrinsic gene covariation patterns have been limited by confounding factors such as cell cycle progression and cell differentiation, which are extrinsic to the genes of interest8,9. These confounding factors have a strong impact on the global covariation patterns and could overshadow the more subtle—and potentially more interesting—underlying patterns.
Here, we apply carefully designed experimental conditions to remove the confounding extrinsic effects of differentiation and cell cycle progression, and apply sensitive Smart-Seq2 single-cell sequencing to profile the transcriptomes of hundreds of mouse embryonic stem cells (mESCs). Specifically, using stringent cut-offs we report >67,000 gene pairs that intrinsically covary in expression—more than have been described in previous single-cell studies. These covarying gene pairs interlink to form a network with well-established biological features, following a so-called power-law distribution, and recover known regulatory patterns and pathways. We further apply a novel computational framework to study the relative importance of distinct regulatory mechanisms for gene expression covariation, and find that genes regulated by the same transcription factors or miRNAs tend to covary. We validate that a subset of the covariations is directly induced by miRNAs by repeating our entire experiment in miRNA-deficient cells. The strongest effect, however, is seen between genes that are in nuclear proximity on the same chromosomes, and a similar but weaker effect is seen for genes that are in nuclear proximity but located on distinct chromosomes.
Finally, we test two competing hypotheses regarding the putative function of these gene expression covariations. The first hypothesis states that genes covary in expression to ensure stoichiometric abundances of proteins that function in the same pathway, while the second hypothesis proposes that covariations are important for proper stoichiometry of proteins that are part of the same complexes. We find that covarying genes only tend to share the same function if their encoded proteins also physically interact, lending evidence to the protein complex hypothesis.
In summary, we have combined single-cell RNA sequencing with a tailored experimental design and computational framework to quantify regulatory drivers in single mammalian embryonic stem cells, highlighting the importance of nuclear proximity for gene expression covariations. Additionally, we present evidence that these covariations play a role in ensuring stoichiometry between interacting proteins.
Smart-Seq2 sequencing of mouse single-cell transcriptomes
To obtain reliable and reproducible measurements of gene expression for our study, we applied the Smart-Seq2 protocol to sequence the transcriptomes of 567 individual mouse embryonic stem cells divided between three well-plates which serve as biological replicates (Supplementary Table 1). While labor-intensive and not easily scalable, Smart-seq2 is highly sensitive and precise6,10,11. It also reliably detects both exons and introns, which is useful for distinguishing between transcriptional and post-transcriptional regulation12. We performed strict quality filtering on the initial set of cells, which resulted in a total of 355 cells considered (see "Methods" section, Supplementary Fig. 1 and Supplementary Table 2). Gene expression values in each cell were normalized to the sum of mRNA sequence reads in the given cell (see "Methods" section, Supplementary Fig. 2), and only genes that displayed substantial biological variation above technical noise, as estimated by artificial ERCC spike-ins (see "Methods" section, Supplementary Fig. 3), were retained. Overall, our analysis yielded reliable gene expression measurements for 8989 genes (Supplementary Table 3 and Supplementary Data 1).
Homogenous cell population unconfounded by dynamic processes
For the sequencing experiment, we took several precautions to eliminate the confounding extrinsic effects of cell cycle and differentiation. First, all cells were cultured in 2i+LIF medium, which is a well-established protocol to maintain embryonic stem cells in a homogeneous pluripotent state—excluding potential differentiation effects13. Second, we used fluorescence-activated cell sorting to specifically select cells in G2/M phase of the cell cycle, thus excluding major cell cycle effects. This exact combination of growth medium (2i + LIF) and cell cycle phase (G2/M) has been reported to generate particularly homogeneous cell populations with regard to their transcriptome signatures9. Indeed, our cell population forms a single cluster when common dimensionality reductions are applied (Supplementary Fig. 4). Using published marker genes, we confirmed that our cells were in the correct cell cycle phase13,14 and expressed pluripotency but not differentiation marker genes9,15 (Supplementary Fig. 5). Altogether our cells comprise a homogenous population, unconfounded by cell cycle or differentiation effects.
Discovery of >67,000 significant positive and negative gene covariations
To study pairwise gene covariations we calculated Spearman's rank correlation coefficient for all possible gene pairs. We chose this procedure for its ability to detect nonlinear monotonous dependencies and for its robustness towards outliers (see "Methods" section). The measured correlation coefficients were centered around zero (Fig. 1a, left), indicating the absence of overall confounding factors. Importantly, the observed coexpression values had a greater spread than those of permutated controls (Fig. 1a), suggesting the presence of numerous nonrandom biological covariations. Sixty-seven thousand three hundred and twenty-eight gene pairs were considered significantly covarying (42,938 positively and 24,390 negatively) after stringent covariation calling (see "Methods" section). We randomly permuted the count matrix one thousand times and found that the highest number of significant covariations observed was ∼2000, which corresponds to only 3.1% of the covariations observed in the original data (see Online Methods, Supplementary Table 4 and Supplementary Data 2). As an additional benchmark, we performed correlation calculations on the pooled replicates and applied multiple-hypothesis testing (Benjamini–Hochberg). Around 90% of the covariations called by our approach are supported by pooled and corrected covariations. In fact, our approach is stricter than simple multiple-hypothesis testing since fewer gene pairs are considered significant. An example of a highly significant gene pair is shown in Fig. 1b, wherein each data point represents expression measures in one individual cell. Significant covariations identified using Spearman's ranked correlation coefficient have a high overlap with those retrieved by Pearson's correlation coefficient and with dependency measures recovered through Hoeffding's D statistics (Fig. 1c), showing the robustness of the approach. Finally, we validated several of the gene expression covariations using single-molecule FISH (Supplementary Figs. 6, 7) and single-cell quantitative RT-PCR (Supplementary Fig. 8). In summary, we present >67,000 high-confidence gene pair covariations—more than have been reported in previous single-cell studies.
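To make the covariation-calling step concrete, a hedged Python sketch of the core computation (all pairwise Spearman correlations plus a per-gene permutation control) could look as follows; the matrix here is random toy data standing in for the normalized genes-by-cells expression table.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
counts = rng.poisson(5, size=(100, 355)).astype(float)   # toy genes x cells matrix

rho, pval = spearmanr(counts, axis=1)                    # all pairwise gene-gene rho

perm = np.apply_along_axis(rng.permutation, 1, counts)   # shuffle cells per gene
rho_perm, _ = spearmanr(perm, axis=1)

iu = np.triu_indices(counts.shape[0], 1)
print(np.abs(rho[iu]).mean(), np.abs(rho_perm[iu]).mean())  # observed vs permuted spread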
Fig. 1: Covariation network reflects biological features.
a Transcriptome-wide covariation (co-expression) values for all possible gene pairs. Violin plot of Spearman's rank correlation coefficients (rho values) for the entire transcriptome (blue) and for a permuted control matrix (gray). The value for the gene pair Npm1–Ppia is highlighted. Arrows indicate at which rho value p-values become smaller than 0.01 (rho ∼ 0.253). b Covariation of the genes Ppia and Npm1. Abundances for the two genes are in reads per million (RPM) and plotted on a log scale. Each data point represents their respective measurement in the same single cell. Spearman's rank correlation was applied. c Spearman's rank correlation coefficients are in accordance with other covariation and dependency measures. d Gene covariation network is scale-free (γ ≈ 2.1). Number of significant covariations per gene against the number of genes with that number of covariations (blue points). Green line illustrates the degree distribution of a random network with the same number of genes (nodes) and covariations (edges) as the observed network. e Cholesterol biosynthesis pathway is highly enriched for gene pair covariations. Genes involved in cholesterol biosynthesis from acetyl-CoA. Only genes that were robustly detected in our sequencing data are shown. Arrows indicate the flow of metabolites, lines indicate significant covariation between genes. Gene names in bold indicate direct targets of Srebf1, a transcription factor that is well known to regulate cholesterol biosynthesis. f Gene sets that share functional annotations are enriched for covariations. Gene covariation enrichment scores (CES) for gene sets sharing the same Gene Ontology annotation or the same KEGG pathway annotation, as well as respective controls (p-values represent a two-sided independent two-group t-test). Gene covariation enrichment scores indicate the ratio of observed significant covariations relative to the amount of expected covariations (see main text). g Example subnetworks (subset of significant covariations; selected functional subnetworks are highlighted).
Covariation network features reflect biological functions
We observed that the covarying gene pairs link together in complex patterns that can be described as a network. It is well-established that biological networks, such as those arising from transcription factor targeting or protein interactions, have properties that differ from those of random networks16. For instance, biological networks tend to be scale-free, following so-called power-law distributions, such that most genes have only a few interactions with other genes, while a few genes represent hubs in the network, interacting with many other genes. Consistent with our network having biological rather than technical origins, we found that our covariation network follows such a power-law distribution (γ ≈ 2.1, Fig. 1d, blue). Importantly, this network structure is distinctly different from that of a random network with the same overall connectivity (Fig. 1d, green). Further network features are listed in Supplementary Fig. 9c.
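The degree-distribution check underlying Fig. 1d can be illustrated with a short R sketch. The data frame sig_pairs and the simple log-log linear fit below are hypothetical stand-ins; the estimator used for the published exponent may differ.

```r
# Minimal sketch of the scale-free check: count covariations per gene and
# fit the degree distribution on log-log scales (toy data; names hypothetical).
set.seed(1)
sig_pairs <- data.frame(geneA = sample(paste0("g", 1:500), 2000, replace = TRUE),
                        geneB = sample(paste0("g", 1:500), 2000, replace = TRUE),
                        stringsAsFactors = FALSE)

degree <- table(c(sig_pairs$geneA, sig_pairs$geneB))  # covariations per gene
dist   <- table(as.integer(degree))                   # number of genes per degree

k  <- as.numeric(names(dist))
nk <- as.numeric(dist)
fit <- lm(log10(nk) ~ log10(k))
gamma_hat <- -unname(coef(fit)[2])  # negative slope approximates the exponent
gamma_hat
```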
Within this covariation network we identified many biologically meaningful subnetworks, such as the one formed by genes involved in cholesterol biosynthesis (Fig. 1e). These genes are known to be activated when the SREBF1 transcription factor is cleaved from Golgi membranes and shuttled to the nucleus in response to a lack of cholesterol17; they can therefore be expected to covary in expression, depending on the localization of the SREBF1 protein. Another notable subnetwork is formed by genes involved in the formation of the TCP1 ring complex, a chaperone involved, e.g., in tubulin biogenesis18 (Supplementary Fig. 10).
A substantial proportion of the observed covariations (∼14,500 gene pairs, not included in the overall counts listed above) are between ribosomal proteins. These covariations have previously been reported for bulk cell populations19 and likely have functions in proteostasis20. It was recently reported that four of these proteins (RPL10, RPL38, RPS7, and RPS25) are optional components of the ribosome whose inclusion or exclusion can influence which pools of transcripts are preferentially translated21. We find that these four ribosomal proteins all covary positively and significantly with each other, providing evidence that they may not function by a mutually exclusive either-or logic in single mouse embryonic stem cells under steady-state conditions. In the following sections, we exclude ribosomal protein genes and focus on other types of covariations.
Applying our method for measuring covariation enrichment over large gene sets (see the section on the CES score below), we find that genes sharing common Gene Ontology terms are 1.47-fold more likely to be covarying (we observe 47% more significant covariations than we would expect by chance), while permuted control sets show no such enrichment (Fig. 1f). The same holds true for genes sharing a common KEGG pathway annotation, where the enrichment is 1.86-fold (Fig. 1f). We reason that genes sharing functions or pathways are more likely to be regulated in a similar fashion and thus tend to covary. In conclusion, the covarying gene pairs form a comprehensive scale-free network which is associated with annotated cellular functions and pathways.
Covariations retrace known aspects of stem cell biology
The pluripotency of mouse embryonic stem cells has been studied extensively, and several studies focus on characterizing their transcriptomes and gene regulatory circuits13,19,22,23,24. The network we observe recapitulates many known relationships between pluripotency markers in mouse embryonic stem cells. For instance, positive covariations support the activation of Fgf4 through Nanog and Sox225,26, while negative covariations support the inhibition of Dnmt3a/b/l by Prdm1415,27 and of Dppa3 by Tbx328. While our data support previous claims that Nanog is positively covarying with Klf4, Sox2, Tet2, and Kat6b9, we see little support for a covariation with Esrrb, Zfp42 and Tet1, and we observe significant negative covariations with Pou5f1 and Dnmt3a in single cells. With regard to predicted pluripotency genes, we can confirm strong covariations between Etv5 and other pluripotency genes, and weak covariations between Ptma and Zfp710 and other pluripotency genes. Covariations of pluripotency genes can be found in Supplementary Table 5. In summary, the detected covariations are in accordance with known gene expression patterns in stem cell biology and hint at new connections.
Covariation enrichment score (CES)
To systematically investigate the functional and regulatory implications of expression covariations, we defined the CES for gene sets of interest. It indicates whether, for a given gene set, we observe more or fewer significant covariations between the genes than we would expect based on a simple background model. The CES provides a single, easily interpretable metric (a fold-enrichment rather than coefficients and p-values, which can be difficult to interpret). It also allows for easy visualization and comparison of covariations in gene sets.
Our background model considers the total number of significant covariations for each gene as well as the number of covariations of all its potential pairing genes. In other words, for each gene pair it is the product of the probabilities of the two genes being involved in a covariation if covariations were distributed randomly [Eq. (1)], summed over all possible pairs in the gene set (Supplementary Fig. 11).
$$P\left(\mathrm{sigCov}(g_a, g_b)\right) = \frac{\sum_{i=1}^{N} \mathrm{sigCov}(g_a, g_i) \times \sum_{j=1}^{N} \mathrm{sigCov}(g_b, g_j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \mathrm{sigCov}(g_i, g_j)}$$
We can then test whether genes that are regulated by the same regulatory factor, e.g., a transcription factor or a miRNA, tend to covary as a consequence of varying abundance or activity of said factor in individual cells.
MiRNAs induce transcriptome-wide gene expression covariation
We first apply the CES to study the regulatory impact of miRNAs, which are important post-transcriptional regulators of gene expression29. In most conditions, these small RNAs downregulate the expression of protein coding genes by binding their mRNA transcripts and leading to their degradation30. This targeting takes place in the cytoplasm and is therefore spatially decoupled from transcriptional regulation.
We speculate that miRNA regulation of gene expression may be a source of gene covariations. For instance, if a miRNA is highly abundant in a given cell, its targets may be coordinately repressed, and we expect an enrichment of covariations across single cells for these targets. To test this hypothesis, we investigated the top-ranking miRNA targets according to TargetScan31, the most widely used catalog of miRNA-target interactions. In this study, we focused on the seven most highly expressed conserved miRNA families (including the miR-15 and miR-290 families) in mouse embryonic stem cells.
Strikingly, miRNA gene target sets are significantly enriched for gene covariations. In median, the top 200 targets of each of the seven miRNA families are 28% more likely to covary with each other than expected (p = 0.032). The enrichments exhibit a gradient such that the top-ranking targets show a stronger enrichment in comparison to sets that include lower-ranking targets (Fig. 2a). As introns are spliced out in the nucleus, their abundances cannot be impacted by miRNA action in the cytosol. Consistent with this, miRNA targets do not significantly covary at the intron level (Fig. 2b).
Fig. 2: miRNAs, transcription factors and nuclear organization define covariations.
a miRNA targets tend to covary. Covariation enrichment scores (CES) for the top 200, 300, and 500 ranked miRNA targets according to TargetScan, for the seven highest expressed conserved miRNA families, and for a control set of 500 randomly selected targets for comparison. p-values refer to respective controls. b miRNA target covariations occur post-transcriptionally and are miRNA-dependent. Enrichment in sets of the top 200 ranked miRNA targets in parental cells (WT) and Drosha KO cells, which are devoid of canonical miRNAs. Enrichments are color coded for exonic reads, representing post-transcriptional regulation (orange), or intronic reads, representing transcriptional regulation (yellow). c Covarying genes are enriched for shared miRNA targeting. Reverse covariation enrichment shows the log2 ratio between covariations that share a common miRNA and permuted covariations that share a common miRNA. d Transcription factor targets are enriched for gene covariations. Enrichment in sets of the top 200, 300, and 500 transcription factor targets, for 145 transcription factors profiled with ChIP-seq. A control for comparison is shown for 500 randomly selected targets. p-values refer to respective controls. e Transcription factor target covariations are transcriptional and miRNA-independent. Enrichment in sets of the top 200 ranked transcription factor targets in parental cells (WT) and Drosha KO cells. Enrichments are color coded for exonic reads (dark green) or intronic reads (light green). f Covarying genes are enriched for shared transcription factor targeting (figure similar to c). g Genes that are in close nuclear proximity and locate to the same chromosome are enriched for covariations. The range categories are mutually exclusive; for instance, pairs of genes that are <5 MB apart are not included in the <25 MB category. h Gene regions that are in close nuclear proximity and locate to different chromosomes are enriched for covariations. Since relatively few interchromosomal Hi-C contacts were identified, we here used a less stringent cut-off (p-value <0.05) to robustly identify significant covariations. i Circos plot showing significant covariations and Hi-C contacts for chromosomes 15, 17, and 19. Significantly covarying gene pairs are connected by a light blue line. Inter-chromosomal Hi-C contacts are shown as gray lines. j Covarying genes are enriched for interchromosomal Hi-C contacts (figure similar to c). a, b, d, e All p-values represent two-sided independent two-group t-tests between target sets and respective controls (see "Methods" section).
To exclude the possibility that these covariations originate from other post-transcriptional effectors, we investigated cells that are devoid of canonical miRNAs. DROSHA is an endonuclease involved in the biogenesis of miRNAs, without which canonical miRNAs cannot be produced. We used an inducible Drosha knock-out cell line to validate the miRNA dependence of these covariations (see "Methods" section) and sequenced the transcriptomes of 343 of these knock-out cells, using clonal expansion from a single cell and sorting of cells in G2/M phase as described above. We have previously demonstrated the global loss of miRNAs in this particular cell line32. Furthermore, predicted miRNA targets are specifically upregulated in cells devoid of miRNAs as a result of their de-repression (Supplementary Fig. 12). As expected, there is no covariation enrichment in miRNA target sets in Drosha knock-out cells (Fig. 2b), demonstrating that these covariations are directly caused by miRNA activity.
We additionally investigated what we call the reverse covariation enrichment. Here, we observe whether significantly covarying gene pairs are regulated by the same miRNA more often than a permuted background set (see "Methods" section). We find that covarying genes are 12% more likely to be co-regulated by the top 16 miRNAs and 35% more likely to be regulated by the seven most highly expressed conserved miRNAs (Fig. 2c), showing the importance of miRNA conservation and abundance in inducing covariations. It has previously been reported that individual miRNAs can induce gene covariations33, but here we show that this in fact holds true for many miRNAs, transcriptome-wide. We also present evidence that natural (noninduced) fluctuations of miRNA abundance or activity are sufficient to cause gene expression covariations.
From a network perspective, we found that >6000 high-confidence gene covariations were lost in the cells devoid of miRNAs, while less than 3000 new covariations were gained (Supplementary Fig. 9a). A substantial number of the genes that ceased to covary were miRNA targets and the ratio of lost to gained covariations increased when high-confidence targets were considered (Supplementary Fig. 9b). The genes that lost covariations were enriched in functions in RNA biology (Supplementary Fig. 9d), including PolII regulation. The average number of covariations per gene decreased significantly in the miRNA-depleted cells, from 10.1 to 8.4 covariations, and the number of genes without covariations increased from 2265 to 2866 (Supplementary Fig. 9c). Overall, this indicates a global loss of gene expression coordination in cells devoid of miRNAs.
Genes regulated by the same TFs covary with each other
To investigate how regulation by transcription factors influences covariation patterns, we studied the binding sites of 145 transcription factors for which mouse ES cell ChIP-seq data were deposited in the Cistrome database34. As for miRNAs, we observe a gradient in covariation enrichment which is stronger for the top-ranking transcription factor targets compared to lower-ranking targets (Fig. 2d). Importantly, transcription factor target sets are significantly enriched for gene covariations both at the exon and the intron level (Fig. 2e), consistent with transcriptional regulation. In median, the top 200 ranked targets of these transcription factors are 1.38-fold enriched for coexpression at the exon level (p-value <2.3 × 10^−16) and 1.22-fold enriched for coexpression at the intron level (p-value <2.3 × 10^−16). As expected, the covariation enrichment of transcription factor targets is not significantly lowered in Drosha knock-out cells (Fig. 2e). In conclusion, genes that are regulated by the same transcription factor tend to covary, possibly due to stochastic variations in transcription factor abundance and activity between individual cells. This effect acts on millions of gene pairs, while the mean magnitude of the regulation is similar to what we describe for miRNA-specific regulation.
Proximal genes on the same or different chromosomes covary
Genes that neighbor on the same chromosome are known to show coexpression35; this also holds for genes within the same chromatin loop or within the same topologically associated domain (TAD). Furthermore, the concept of transcription factories covers dynamically assembled complexes that facilitate transcription and are dependent on intrachromosomal or interchromosomal interactions36. To investigate the covariation enrichment for genomic regions that are in proximity within the nucleus, we analyzed mouse embryonic stem cell Hi-C-seq data37,38. From here on, we define proximal genes as those whose interaction is supported by Hi-C data, whether the interaction is intrachromosomal or interchromosomal (see "Methods" section). Our data show that genes which are proximal and located on the same chromosome are highly enriched for covariations (Fig. 2g). Genes that are close in linear distance on the chromosome (<5 MB) are enriched ∼4-fold in covariations, while genes that are distal (>50 MB) are enriched 2.1-fold. This observation is robust to changes in the computational analysis and normalization (Supplementary Figs. 13, 14). The effect is also detectable at the intron level, confirming an origin in transcriptional regulation at the level of nascent transcripts. Genes that are on the same chromosome are almost twice (1.9-fold) as likely to covary as expected, even when their proximity is not supported by Hi-C (Fig. 2g, far right). The highest enrichment was detected for genes that are both close in linear distance on the same chromosome and are predicted to be in the same TAD, which are ∼15-fold more likely to covary (Fig. 2g, far left). Intriguingly, proximal genes on different chromosomes also show substantial covariation enrichment (Fig. 2h–j), supporting the notion of transcription factories that incorporate areas from multiple chromosomes.
Next, we ranked the relative importance of transcription factors, miRNAs, and nuclear proximity for the regulation of covariation (Fig. 3). We observed that miRNA targets were 1.28-fold, transcription factor targets were 1.38-fold, genes in nuclear proximity on different chromosomes were 1.46-fold, and, remarkably, genes that are proximal on the same chromosome were 4.3-fold more likely to covary. While the exact enrichments are likely specific to the study system and target selection strategy, it is clear that transcriptional regulation, miRNA-mediated regulation and, surprisingly, interchromosomal nuclear proximity all play important roles. In our setting, however, intrachromosomal nuclear proximity is the strongest predictor of gene expression covariations.
Fig. 3: Relative importance of miRNAs, transcription factors and nuclear proximity for covariations.
Comparison of covariation enrichment (CES) scores for genes that are either regulated by the same miRNA, regulated by the same transcription factor or that are in nuclear proximity—divided into intrachromosomal and interchromosomal pairs.
Protein interaction drives gene covariations
Next, we examined putative functions of the covariations that we observe in single cells. We formulated two hypotheses. The first hypothesis is the pathway hypothesis: that genes involved in the same pathway are coordinated in expression, for instance to avoid bottlenecks in the production of metabolic intermediates39. The second hypothesis is the complex hypothesis: that covariations ensure correct stoichiometry among proteins that are part of the same heteromeric protein complex, since surplus proteins may misfold or even cause aggregates40.
As previously stated, genes that share the same Gene Ontology function or process or the same KEGG pathway annotation are significantly enriched for gene covariations (Fig. 1f). The same is true for genes that physically interact on the protein level according to experimental evidence gathered by the STRING database (Fig. 4a). For these interactions we observe a gradient in which those interactions with the highest confidence/affinity score also have the highest enrichment for covariations, consistent with previous findings in single cancer cells41. We then determined that genes that contribute to the same complex are 2.6-fold more likely to covary, compared to just 1.6-fold for genes that are part of the same pathway (Fig. 4b), lending support to the complex hypothesis.
Fig. 4: Proteins that physically interact are specifically enriched for covariations at the RNA level.
a Genes that interact on the protein level are highly enriched for covariations. Covariation enrichment of genes that are annotated to be interacting on the protein level according to the STRING database. b Covariation enrichment for genes whose protein products are part of the same physical complex and for genes that are part of the same reaction (pathway). c Gene covariations are mainly driven by protein interaction. Covariation enrichment of genes sharing the same GO annotation or KEGG pathway annotation. GO and KEGG annotations are stratified into pairs with shared annotation and experimentally identified protein interaction (+PI) and shared annotation but lack of experimentally identified protein interaction (−PI). d Model. Heteromeric protein complexes require proper stoichiometry of protein components. Proteins that are in surplus can be degraded, misfold or form aggregates.
To further test the two hypotheses, we split the set of functionally related genes into one set with genes that share a functional annotation as well as a protein interaction and one set with genes that share a functional annotation but no protein interaction. If the pathway hypothesis holds, we would expect both gene sets to covary, since they share functions. If the complex hypothesis holds, we would expect only the genes whose proteins physically interact to covary, since the covariations are needed for proper stoichiometry of proteins in the complexes. Strikingly, when genes that physically interact at the protein level are excluded from the analysis, we find no covariation enrichment for either the GO or the KEGG functional annotations (Fig. 4c). In other words, there is no indication of covariation enrichment for proteins in the same pathway without evidence that they physically interact. Altogether, our data suggest that direct interactions between proteins in the same complex, rather than pathway stoichiometry, act as the selector for covariations (Fig. 4b–d).
Predictive power of gene covariations
We next investigated if our observed covariations can be used to predict genes that share upstream regulators through a guilt-by-association principle. We hypothesize that if a gene of interest covaries with numerous known targets of a transcription factor, it is likely a target of said factor. To test this hypothesis, we noted all genes that had been identified as targets of the transcription factor Ctnnb1 in a mouse ES cell ChIP-seq experiment42. This gene is known to regulate cell adhesion and has been linked to various cancer phenotypes43. We then ranked all other genes according to how many of the top 100 Ctnnb1 targets they covary with, and observed that the more covariations a gene exhibited, the more likely it was to be bound by Ctnnb1 in a second ChIP-seq experiment42 (Fig. 5a). In other words, the more significant covariations a gene had with the high-confidence targets identified in the first experiment, the more often it was observed among the high-confidence targets identified in a second independent experiment. While the predictive power of our method is limited (target probability ∼25%, Fig. 5a), it serves as a proof of principle that single-cell transcriptome data can be used for predicting regulatory relations even in a homogeneous cell population. This approach could be used to make sparse data sets more complete through guilt-by-association with previously identified targets, or to identify targets that escape current technologies due to biases. Last, we found that the function of genes could be inferred by surveying the functional annotations of covarying genes (Fig. 5b, see "Methods" section). This may not only aid functional annotation but could also reveal hidden gene functions, so-called moonlighting. In conclusion, knowledge of gene covariations across homogeneous single cells can be used to infer gene function and regulation through associations.
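The ranking step behind this prediction can be sketched as follows in R; the adjacency matrix sig and the vector chip_targets are hypothetical placeholders for the significant-covariation matrix and the top 100 ChIP-seq targets.

```r
# Minimal sketch of guilt-by-association target prediction (toy data;
# object names are hypothetical).
set.seed(1)
genes <- paste0("g", 1:300)
sig <- matrix(rbinom(300^2, 1, 0.02), 300, dimnames = list(genes, genes))
sig[lower.tri(sig)] <- t(sig)[lower.tri(sig)]  # make the matrix symmetric
diag(sig) <- 0
chip_targets <- sample(genes, 100)  # stand-in for the top 100 ChIP-seq targets

# Score every other gene by the number of significant covariations it shares
# with the known targets; high-scoring genes are predicted targets.
candidates <- setdiff(genes, chip_targets)
score <- rowSums(sig[candidates, chip_targets, drop = FALSE])
head(sort(score, decreasing = TRUE))
```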
Fig. 5: Gene covariation information can predict regulatory targets.
a Genes that covary with transcription factor targets are likely targets of the same factor (Ctnnb1) and can be validated by ChIP-seq. Probability for genes that share a certain number of significant covariations with the top 100 targets identified via ChIP-seq to be identified de novo in an independent second ChIP-seq experiment. b Examples of genes whose canonical function was retrieved purely from the functional annotations of covarying genes (p-values determined by Fisher's exact test).
We show that statistically robust and biologically meaningful gene covariations can be detected in homogeneous, nondynamic single-cell populations. Evidence to support this claim includes the validation by statistical methods, a low estimated false discovery rate, the recovery of known regulatory patterns, and a power-law distribution of network edges commonly found in biological networks. Our experimental set-up allows for the study of widespread gene expression covariations unrelated to cell cycle and other dynamic changes in the cells such as differentiation. Strikingly, all major regulatory mechanisms (post-transcriptional regulation, transcriptional regulation and regulation by nuclear proximity) influence covariation patterns. We experimentally confirmed the importance of post-transcriptional regulation through miRNAs by showing that depletion of miRNAs results in a specific loss of a subset of covariations.
Based on our findings, we propose a hierarchy of gene covariation regulation in mESCs. We place regulation via intrachromosomal proximity first due to the strength of the effect, and transcription factors second because of the size of the affected target pool (Fig. 3). The influence of interchromosomal proximity and miRNA regulation is comparatively smaller though still substantial.
As targets of the same regulator, as well as genes that are part of the same functional units, tend to covary, covariations can be used to predict gene function and regulation. We not only recover known gene functions and transcription factor targeting but also demonstrate, as a proof of principle, the predictive potential for both gene function and regulation.
Importantly, we find that covarying genes only tend to share the same function if their encoded proteins also physically interact, suggesting a role in protein complex stoichiometry. The induction of gene expression covariation could be beneficial to cells as it is well understood that the formation of heteromeric protein complexes is often needed for proper folding and for the stability of the proteins involved44. In bacteria, spatial separation of the translation of such proteins leads to misfolding events45. It is conceivable that temporal separation might result in similar effects. The production of misfolded proteins that must be removed by degradation is costly from an energetic point of view, and the accumulation of misfolded protein can have lethal consequences for cells (Fig. 4d). We suggest, therefore, that establishing expression covariation of such genes already on the RNA level might be an advantage in evolutionary terms.
In this study, we measure RNA rather than protein with the latter being closer to the cellular phenotype. However, when inferring upstream regulation, it may be more informative to measure RNA. Furthermore, many of the most interesting and biologically meaningful covariations that we discover may not be detectable at the protein level, even in single cells. For instance, transcript covariations may be important for cofolding, but they may not be visible at the proteome level for proteins that have long half-lives and that are therefore more stably expressed. It will be exciting to study covariations at the protein level, when technologies to accurately profile hundreds of proteins in single cells become available.
A caveat of our study is that different published data sources and methods were used to identify miRNA targets, transcription factor targets and genes that are in close proximity, complicating direct comparisons among them. For example, public ChIP-seq was used to infer transcription factor targets, while Hi-C was used to detect genes in the same nuclear vicinity. These methods have distinct limitations and ranges of sensitivity. However, all of the methods we employ are state-of-the-art in their respective fields, and in some cases the methods have limitations even within those areas. For instance, there is evidence that Hi-C may underestimate interchromosomal contacts46, and it is well-established that even the best miRNA target predictions have imperfect accuracy47,48, meaning that the effects of nuclear proximity and miRNA repression on gene expression covariations may be more profound than we estimate here.
Gene coexpression studies have been conducted on pools of cells for decades, yielding important insights into covariations and network properties. These studies, however, have been limited in their capacity to study changes in network properties following a genetic perturbation. For instance, to study the effects of Drosha knockout using pooled cells, it would be necessary to ablate the gene in dozens or hundreds of cell lines in parallel to have the statistical strength to call covariations. In contrast, our study serves as a proof-of-concept that it is possible to delete a gene in a single cell line, and then consider each of hundreds of individual cells as an independent condition, thus obtaining the statistical power to resolve network properties in a single experiment. In our study, we find that many more covariations are lost than gained in the Drosha knockout cells, and we observe a general loss of network connectivity. This highlights the importance of miRNAs in maintaining gene expression synchronicity and global gene network connectivity—an insight that would be difficult to obtain with bulk cell or classical single-gene approaches. In summary, we demonstrate that the combination of single-cell sequencing, gene covariation analysis and genetic perturbations can yield insights into the robustness of regulatory networks with unprecedented ease and depth.
A previous study of RNA and protein covariations using samples from bulk cell populations35 found that neighboring genes on the same chromosome are often coexpressed at the RNA level, but are not functionally related, and that the covariations do not translate to the protein level. On the contrary, we observe that gene pairs in nuclear proximity that share an interaction on the protein level are in fact 7.5-fold enriched for covariations, suggesting a specific co-occurrence of nuclear proximity, RNA co-expression and shared function. Using a database for bulk cell protein expression covariations49, we further find that 21% of our observed proximity-related RNA covariations translate to the protein level, compared to 6% for background gene pairs. The apparent contrast between these results may derive from the fact that the previous study was conducted in immortalized primary cell lines from human individuals35, where genetic variants that strongly impact protein levels may have been specifically selected against by evolution. In contrast, temporal fluctuations of protein levels may be tolerated in individual cells from cell lines, allowing more refined measurements. It is possible that nuclear proximity limits independent regulation of physically close genes rather than enabling active coregulation. Genes that need tight coregulation may therefore tend, over evolutionary time, to locate in proximity on the same chromosome. Regardless of the causality, genes that share a protein interaction and therefore need stoichiometric coexpression may be placed in nuclear proximity through evolution. Surprisingly, a recent study provides evidence that chromosome rearrangements do not substantially impact gene expression in Drosophila50. These findings are not inconsistent with ours, however, and may reflect differences between mammals and invertebrates, or between measuring averages of gene expression in tissues and expression covariations in single cells. Overall, our findings highlight the advantages of studying variation of gene regulation at the single-cell level.
It has been proposed that while prokaryotes use cotranscribed operons to ensure synchronized expression and stoichiometry of proteins in common pathways or complexes, eukaryotes use post-transcriptional regulation to ensure a similar outcome at the RNA level. The integrated effect of dispersed transcription and coordinated post-transcriptional regulation has been named RNA operons or Regulons51. Our results support the idea that eukaryotic post-transcriptional regulators such as miRNAs can coordinate gene expression at the RNA level. Finally, we provide evidence that substantial functional regulation occurs at the level of nuclear organization, by genes on the same chromosome or by genes that are in proximity although on distinct chromosomes.
Drosha knock-out
The DroshaF E14 129Sv-derived mouse embryonic stem cell line (mESC) was provided by M. Chong52. DroshaKO cells were generated using the DroshaF cell line containing the tamoxifen-inducible LoxP-exon9-LoxP and a neomycin selection cassette. The tamoxifen induction was performed in standard serum-containing media. After 48 h of incubation with tamoxifen, single cells were FACS-sorted, clonally expanded and selected for deletion of exon 9, yielding the null allele of Drosha.
The mESCs were maintained in (1) standard serum-containing media (DMEM media, Gibco): 1× nonessential amino acids (Gibco), 1000 U ml−1 ESGRO mouse LIF medium supplement (Millipore), 15% heat-inactivated fetal bovine serum (Gibco), 2 mM glutamine (Gibco), 1 mM sodium pyruvate (Gibco), 0.1 mM β-mercaptoethanol (Gibco), 1× penicillin-streptomycin (Gibco). The standard serum-containing media was supplemented with 250 μg ml−1 neomycin (Sigma) for maintenance of DroshaF cells; (2) 2i media containing Ndiff227 medium (Cellartis Takara Bio), 1000 U ml−1 ESGRO mouse LIF medium supplement (Millipore), 1 μM PD0325901 (Selleckchem), 3 μM CHIR-99021 (Selleckchem) and 1× penicillin-streptomycin (Gibco), onto feeder-free 0.1% EmbryoMax gelatine (Merck Millipore)-coated flasks at 5% CO2 and 37 °C. Cells were tested for mycoplasma contamination and propagated in serum-containing media for three passages before switching to 2i media. After adapting to the serum-free media, the cells were propagated for at least three passages in 2i media and used for single-cell sequencing when 50–70% confluency was reached. To harvest cells, they were incubated with Accutase (Sigma) at 37 °C for 5 min followed by centrifugation.
Cells were resuspended in 2i media (∼10^6 cells ml−1), DNA stained with 10 μg ml−1 Hoechst-33342 (Sigma) at 37 °C for 15 min, and then stained with 1 μg ml−1 propidium iodide (Sigma) to reveal cell viability. Single cells were sorted in G2/M using a BD Influx (BD Bioscience) into 384-well plates containing 2.3 μl of lysis buffer with ERCC spike-ins (1:40,000 dilution) in each well.
cDNA library preparation and RNA sequencing
Dual-indexed cDNA libraries were prepared using the Illumina Nextera XT dual index library prep kit following the Smart-Seq2 protocol6. cDNA libraries were pooled and 50-bp single-end sequencing was performed on an Illumina HiSeq 2000 platform.
Read mapping and counting
An extended mouse genome assembly was created by concatenating the ERCC spike-in sequences (obtained from thermofisher.com) to the mouse genome assembly mm10 (obtained from genome.ucsc.edu). This assembly was indexed using bowtie2 (2.2.3) and reads were mapped to the indexed genome using TopHat (2.0.12): tophat2 --GTF transcriptome.gtf mm10_genome_index_dir raw_reads.fastq. After mapping, PCR duplicates were removed with SAMtools rmdup (version 0.1.19-44428cd): samtools rmdup -s infile.bam rmdup_file.bam. For the read count assignment to gene loci, a custom Perl script (available upon request) was used to intersect reads with gene annotation (RefSeq, obtained from genome.ucsc.edu, 28.08.2015). Specifically, to ensure stringency in the covariation analysis, only reads having a unique genome mapping position and gene annotation were considered. Transcript isoforms from the same gene locus were fused, so that two overlapping exons from different isoforms were combined into a single exon which comprises the start of the upstream exon and the end of the downstream exon. Gene models called GmNumber (e.g., Gm10024) were excluded. A similar analysis was performed separately for reads mapping to intron annotations.
Quality control of sequencing data
Cells were kept for further analysis if they fulfilled the following criteria: cells had more than 200,000 sequenced reads, more than 80% of the reads mapped to the transcriptome or genome, more than 40% of 19,127 genes were detected, the spike-in fraction of reads was less than 1%, the mitochondrial mapped reads were below 5% and the PCR duplicates per cell were below 30% of all reads. This procedure also removed empty wells, doublets and incompletely lysed cells due to their different mapping statistics. Mapping statistics are shown in Supplementary Fig. 1. Additionally, cells that were flagged as S or G1 phase using the cyclone function of the SCRAN package for R were excluded53. Remaining cells showed homogeneous expression of pluripotency and G2/M-phase markers, while differentiation markers and G1/S-phase markers were largely absent (Supplementary Fig. 5). Individual genes were excluded from analysis if they were not expressed in at least half of all cells in all subsets of conditions (control and Drosha knock-out, respectively) and replicates. This resulted in 8989 genes (9105 in Drosha KO, 8501 in both) that were considered reliably expressed. These genes generally exhibited mean abundances higher than 16 RPM (see below), which we considered to be the cut-off for reliably detected genes above technical background (Supplementary Fig. 3).
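For illustration, the cell-level filters can be expressed as a single logical expression in R; the data frame qc and its column names below are hypothetical stand-ins for the per-cell mapping statistics.

```r
# Minimal sketch of the cell quality filters (toy data; column names are
# hypothetical stand-ins for the per-cell mapping statistics).
qc <- data.frame(total_reads    = c(5e5, 1.5e5),
                 mapped_frac    = c(0.90, 0.70),
                 genes_detected = c(9000, 5000),
                 spikein_frac   = c(0.005, 0.020),
                 mito_frac      = c(0.02, 0.08),
                 dup_frac       = c(0.20, 0.40))

keep <- with(qc,
             total_reads    > 200000       &   # >200,000 sequenced reads
             mapped_frac    > 0.80         &   # >80% mapped reads
             genes_detected > 0.40 * 19127 &   # >40% of 19,127 genes detected
             spikein_frac   < 0.01         &   # spike-in fraction <1%
             mito_frac      < 0.05         &   # mitochondrial reads <5%
             dup_frac       < 0.30)            # PCR duplicates <30%
sum(keep)  # number of cells passing all filters
```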
Gene expression normalization
To ensure that the normalization method did not bias the covariation analyses, the following well-established normalization methods were all tested and considered: normalization by overall read count, overall spike-in count, overall count of endogenous reads, and fraction of spike-in reads. To avoid over-fitting, we further investigated whether the normalization factors, which were applied to all observations (individual cells), themselves correlated with the features (genes) after normalization. Normalization methods that induced correlations in our data or failed to remove correlation with technical factors were excluded. Simple normalization by sequencing depth was most suitable for removing technical effects while avoiding overfitting (Supplementary Fig. 2). For this normalization, read counts for each gene were divided by the total sum of mapped reads in the respective cell and multiplied by 10^6, resulting in reads per million (RPM). No length normalization was applied for two reasons. First, with regard to the ERCC spike-ins, an RPKM normalization was found to overestimate the abundance of short transcripts. Second, when analyzing pair-wise correlations between genes with a ranked approach, the relative abundance of each gene across cells is sufficient.
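The chosen normalization reduces to a one-line operation in R; counts is a hypothetical genes-by-cells matrix of unique-mapping read counts.

```r
# Minimal sketch of reads-per-million (RPM) normalization (toy data).
set.seed(1)
counts <- matrix(rpois(200 * 10, lambda = 20), nrow = 200)

# Divide each cell (column) by its total mapped reads and scale to one million.
rpm <- sweep(counts, 2, colSums(counts), "/") * 1e6
colSums(rpm)  # every cell now sums to 1e6
```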
Covariation
Covariations were calculated using Spearman's rank correlation coefficient. p-values were determined via a z-score for the Fisher transformation of the correlation coefficient rho (ρ). For a more stringent analysis, we only considered gene pairs that were significantly correlated (p < 0.01) in at least two of our three replicates. Furthermore, covariations were discarded if the sign of the correlation coefficient differed in one of the replicates.
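A minimal R sketch of the per-pair test follows, assuming two expression vectors measured across the same cells (hypothetical data). The Fisher z approximation shown is one standard way to derive a p-value for Spearman's rho; the published pipeline may differ in detail, and the replicate-consensus filter described above is applied on top of it.

```r
# Minimal sketch of the significance test for one gene pair (toy data).
set.seed(1)
x <- rexp(355)
y <- 0.5 * x + rexp(355)

rho <- cor(x, y, method = "spearman")
n   <- length(x)

# Fisher transformation of rho; the resulting z-score is approximately
# standard normal under the null hypothesis of no association.
z <- atanh(rho) * sqrt(n - 3)
p <- 2 * pnorm(-abs(z))   # two-sided p-value

c(rho = rho, z = z, p = p)
```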
We note that ribosomal proteins make up a sizeable proportion of the positive covariations, with 68% of all possible riboprotein gene pairs being significantly covarying, whereas only 0.3% of all other gene pairs covary. Riboproteins are known to be coordinately expressed; however, the underlying regulatory mechanism is not well understood, especially in mammals54. We therefore excluded riboproteins from the following analysis, resulting in 67,328 remaining covariations, of which 42,938 are positive and 24,390 are negative.
Covariation enrichment score
We developed a simple covariation enrichment approach. The probability of a gene pair covarying by chance can be calculated as the product of the sums of significant covariations of each individual gene, divided by the total sum of significant covariations within the whole data set [Eq. (2)]. For a set of multiple genes, the sum of the individual pair probabilities will be referred to as the "expected covariations".
$$P\left(\mathrm{sigCov}(g_a, g_b)\right) = \frac{\sum_{i=1}^{N} \mathrm{sigCov}(g_a, g_i) \times \sum_{j=1}^{N} \mathrm{sigCov}(g_b, g_j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \mathrm{sigCov}(g_i, g_j)}.$$
This estimation works remarkably well for random gene sets (Supplementary Fig. 11). An enrichment for a gene set of interest can be calculated as the fold-change (or log2 fold-change) of the expected and the observed significant covariations within the aforementioned gene set. As an intuitive control for the CES a reverse enrichment approach is applied. While the CES describes enrichment for covariations in a gene pair set, e.g., genes that are targeted by the same miRNA, the reverse enrichment describes how often a certain gene pair feature, e.g., shared miRNA targeting, is found in the set of all significantly covarying gene pairs in comparison to a permuted set of these pairs.
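A minimal R implementation of Eq. (2) and of the resulting enrichment score is sketched below; the adjacency matrix sig and the vector gene_set are hypothetical placeholders, and the reverse-enrichment control is not shown.

```r
# Minimal sketch of the covariation enrichment score (CES) for one gene set
# (toy data; object names are hypothetical).
set.seed(1)
genes <- paste0("g", 1:300)
sig <- matrix(rbinom(300^2, 1, 0.02), 300, dimnames = list(genes, genes))
sig[lower.tri(sig)] <- t(sig)[lower.tri(sig)]  # symmetric 0/1 covariation matrix
diag(sig) <- 0
gene_set <- sample(genes, 30)

deg   <- rowSums(sig)  # significant covariations per gene
total <- sum(sig)      # denominator of Eq. (2): sum over all ordered pairs

# Expected covariations: Eq. (2) summed over all pairs in the gene set.
pairs    <- t(combn(gene_set, 2))
expected <- sum(deg[pairs[, 1]] * deg[pairs[, 2]] / total)

# Observed covariations within the set (each unordered pair counted once).
observed <- sum(sig[gene_set, gene_set]) / 2

ces <- observed / expected  # CES > 1 indicates enrichment
ces
```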
Binning approaches
Covariation enrichment scores can only be calculated on reasonably sized sets of gene pairs. Each set needs to contain one, and preferably multiple, significant covariations. Therefore, individual pairs have to be binned together. Depending on the overall sparseness of significant covariations in a given array of gene pairs, varying bin sizes may be applied. Binning is performed in a randomized manner. The details of each binning operation are described in the respective sections.
Transcription (co-)factor targets
Transcription factor and cofactor targets were determined from public chromatin immunoprecipitation with massively parallel DNA sequencing (ChIP-seq) data. Transcription factor lists were retrieved from the Cistrome database55, setting "Mus musculus" for species and "Embryonic Stem Cell" for biological sources. Only sets passing 4 out of the 6 listed quality scores were considered. Putative target lists were downloaded and used for analysis. Target rank was determined by the assigned score. Covariation enrichment was calculated for each target set using the indicated number of top-ranking targets. For transcription factors with multiple putative target sets from the same or multiple studies, enrichments were calculated individually and the median of all enrichments was used as representative for that transcription factor.
MiRNA targets
MiRNA targets were obtained from TargetScanMouse (targetscan.org/mmu_71) Release 7.131. Targets were ranked first by the total number of 8mer sites, then by the total number of 7mer-m8 sites, and finally according to the cumulative weighted context score. Only miRNAs with at least 4000 RPM in mESCs56 were considered. Covariation enrichment was calculated for each target set using the indicated number of top-ranking targets.
Nuclear proximity
Nuclear gene proximity was estimated using chromosome conformation capture via high-throughput sequencing (HiC). Public preprocessed data for intrachromosomal37 (retrieved from the HiC project website of the Bing Ren lab) and interchromosomal38 contacts were integrated. Annotations mapping to mouse genome assembly mm9 were lifted to assembly mm10 using UCSC Genome Browser Utilities (liftOver). Normalized HiC reads for the assessment of intrachromosomal proximity were binned into 40 kilobase (kb) regions. Reads were considered supporting the proximity between two genes if their gene bodies (entire gene annotation plus a 2 kb region upstream of the gene) were each within 20 kb distance from the respective regions. Only contacts that were supported by more than one normalized read were considered. For the covariation enrichment analysis gene pairs were stratified according to their linear proximity (their absolute distance along the genome in base pairs) and according to whether there was evidence for 3D proximity based on HiC data as described above. Topologically associated domains were based on HiC experiments performed with HindIII restriction enzyme37. HiC reads for the assessment of interchromosomal proximity were binned into 500 kilobase (kb) regions. Reads were considered supporting the proximity between two genes if their gene bodies (entire gene annotation plus a 2 kb region upstream of the gene) were each within 500 kb distance from the respective regions. Ranking is based on the total number of reads supporting each gene pair.
Functional gene annotation and protein interaction
Gene ontology and pathway annotations were retrieved from the Gene Ontology project57 (version 2.1, via MGI) and the KEGG PATHWAY Database58 (via KEGGREST for R), respectively. Exceedingly small (fewer than ten genes) or large (more than 100 genes) gene ontologies were discarded. Each gene ontology and each KEGG pathway was considered as a gene set for enrichment analysis. Protein interactions were retrieved from the STRING Database59 (v10). Only interactions supported by experimental evidence were considered. For enrichment plots, gene pairs were randomly binned together (bin size 500). Further information on protein interactions in pathways and in complexes was retrieved from the Reactome project60. Reactome data were stratified according to whether genes interact in a complex or in a reaction (in this paper the latter is termed pathway for reasons of consistency). Pairs were binned together randomly for enrichment plots (bin size 100).
Regulatory and functional predictions
Potential novel transcription factor targets are predicted as those genes that covary significantly with a certain number of the top 100 targets that were identified in one ChIP-seq experiment. Furthermore, novel predictions had to be absent from the top 1000 targets of this first experiment. Prediction success was scored as the fraction of these genes that could be identified as targets via ChIP-seq in a second experiment. For this, the top 1000 targets of this second experiment were considered. To test whether gene function can be deduced from neighboring genes in a covariation network, we picked nine genes that are well characterized but act in different pathways and complexes. We then performed gene ontology enrichment with topGO on the ten genes that show the highest covariation coefficients with the gene in question. If the bona fide function of said gene was among the top ten enrichments, we list the function with the associated p-value.
Single-cell quantitative PCR (sc-qPCR)
sc-qPCR was performed on preamplified cDNA (Illumina Nextera XT dual index library prep kit following the Smart-Seq2 protocol) for 21 genes and 112 cells. Applied Biosystems TaqMan assays (FAM-MGB) were used and reactions were run on a Fluidigm Biomark microfluidic qPCR chip (PN 100-6170 C1). Spearman's ranked coefficient was used for estimating covariations between sc-qPCR and single-cell RNA sequencing data. Limit of detection (LoD) was set at 39 cycles for covariations. Importantly, the results were overall consistent when using lower LoDs (incl. 25 cycles).
Single-molecule RNA fluorescence in situ hybridization
Custom mouse specific probes for single-molecule fluorescence in situ hybridization of gene pairs of interest were designed and produced in-house using an analytic software and production pipeline61. Each probe consists of 42–86 oligos (47, 51, 63, 42, and 86 oligos for the probes targeting Adam19, Adam23, Dnmt3, Mme, and Tmem2, respectively) and each oligo consists of four parts (from 5′ to 3′): (1) a 20 nt adapter, C, for probe visualization; (2) a 20 nt adapter, F, for PCR amplification during probe synthesis; (3) a 30 nt T sequence complementary to the target; and (4) a 20 nt adapter, R, for PCR amplification during probe synthesis. The T sequences were designed using a custom-made pipeline run with the following parameters: (1) Targeting the longest transcript. (2) Having GC content between 40 and 60%. (3) Having up to 4 nt long homopolymers. (4) Allowing for at least 3 nt between consecutive oligos. The T sequences for all the probes as well as the transcript variant which they are targeting can be found in Supplementary Data 3. The C, F, and R adapter sequences were designed as explained in Gelali et al.61, with the difference that the sequences were checked for orthogonality against the mouse genome. The probes were produced using a pipeline for large-scale enzymatic production of hundreds of probes in parallel that is described in detail in Gelali et al.61. More specifically, the F and R sequences are used as barcodes for the selective amplification of the desired oligonucleotides from an array-synthesized complex oligo pool (containing thousands of different species of oligonucleotides). During the PCR reaction the C and the T7 promoter sequences are incorporated into the PCR products. The T7 promoter sequence is used for the in vitro transcription (IVT) step that follows the PCR. The RNA product of the IVT is used as a template for the reverse transcription and in the final step it is removed via alkaline hydrolysis, resulting in the production of the desired ssDNA probes containing the C sequence of interest. Mouse ESCs were fixed on coverslips immobilized onto a silicon gasket. The first hybridization was performed at 37 °C for 16–18 h using a hybridization buffer containing 45% Formamide, 2× SSC, 10% Dextran sulfate, 1 mg ml−1 E. coli tRNA, 0.02% bovine serum albumin, 10 mM Vanadyl-ribonucleoside complex; the subsequent wash was performed at 37 °C using a wash buffer containing 35% Formamide, 2× SSC. The hybridization of a fluorescently labelled oligo to the C adapter was performed at 37 °C for 16–18 h using a hybridization buffer containing 30% Formamide, 2× SSC, 10% Dextran sulfate, 1 mg ml−1 E. coli tRNA, 0.02% bovine serum albumin, 10 mM Vanadyl-ribonucleoside complex, and the subsequent wash was performed at 37 °C using a wash buffer containing 25% Formamide, 2× SSC. All solutions were prepared in RNase-free water. The transcripts were probed as follows: Tmem2 with A594 and Mme with Cy5 (representing a negatively covarying gene pair); Adam19 with Cy7 (measured on the ir800 channel) and Adam23 with Cy5 (representing a positively covarying gene pair). To quantify the smFISH signal, all mRNA molecules in each field of view were counted using custom scripts in MATLAB®. To get an estimate of the mRNA counts per cell, the z-projection of the mRNA dots identified in each stack was split into a regular grid of square pseudo-cells.
All computational analyses were performed in R Studio (v1.2.1335, R v3.6.1). All values (e.g., p-values) that are estimated to be smaller than the machine epsilon (machine precision) of 2.220446 × 10^−16 are conservatively rounded up to 2.3 × 10^−16.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Sequencing data have been deposited at NCBI SRA under the BioProject ID PRJNA592852. Gene Ontology data can be retrieved via MGI http://www.informatics.jax.org/faq/GO_dload.shtml. KEGG data was retrieved using KEGGREST in R. STRING DB data can be downloaded via http://version10.string-db.org/download/protein.links.detailed.v10/10090.protein.links.detailed.v10.txt.gz. Other data is available at the indicated locations (see "Methods" section). Preprocessed data tables can be provided upon request. All data is available from the corresponding author upon reasonable request.
iFISH software can be found at http://ifish4u.org. R code is available upon request.
Segal, E. et al. Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data. Nat. Genet. 34, 166–176 (2003).
Stuart, J. M., Segal, E., Koller, D. & Kim, S. K. A gene-coexpression network for global discovery of conserved genetic modules. Science 302, 249–255 (2003).
De Smet, R. & Marchal, K. Advantages and limitations of current network inference methods. Nat. Rev. Microbiol. 8, 717–729 (2010).
Tang, F. et al. mRNA-Seq whole-transcriptome analysis of a single cell. Nat. Methods 6, 377–382 (2009).
Islam, S. Characterization of the single-cell transcriptional landscape by highly multiplex RNA-seq. Genome Res. 7, 194–196 (2010).
Picelli, S. et al. Smart-seq2 for sensitive full-length transcriptome profiling in single cells. Nat. Methods 10, 1096–1098 (2013).
Farahbod, M. & Pavlidis, P. Untangling the effects of cellular composition on coexpression analysis. Genome Res. 30, 849–859 (2019).
Buettner, F. et al. Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat. Biotechnol. 33, 155–160 (2015).
Kolodziejczyk, A. A. et al. Single cell RNA-sequencing of pluripotent states unlocks modular transcriptional variation. Cell Stem Cell 17, 471–485 (2015).
Svensson, V. et al. Power analysis of single-cell RNA-sequencing experiments. Nat. Methods 14, 381–387 (2017).
Natarajan, K. N. et al. Comparative analysis of sequencing technologies for single-cell transcriptomics. Genome Biol. 20, 1–8 (2019).
La Manno, G. et al. RNA velocity of single cells. Nature 560, 494–498 (2018).
Marks, H. et al. The transcriptional and epigenomic foundations of ground state pluripotency. Cell 149, 590–604 (2012).
Santos, A., Wernersson, R. & Jensen, L. J. Cyclebase 3.0: a multiorganism database on cell-cycle regulation and phenotypes. Nucleic Acids Res. 43, D1140–D1144 (2015).
Yamaji, M. et al. PRDM14 ensures naive pluripotency through dual regulation of signaling and epigenetic pathways in mouse embryonic stem cells. Cell Stem Cell 12, 368–382 (2013).
Albert, R. Scale-free networks in cell biology. J. Cell Sci. 118, 4947–4957 (2005).
Yokoyama, C. et al. SREBP-1, a basic-helix-loop-helix-leucine zipper protein that controls transcription of the low density lipoprotein receptor gene. Cell 75, 187–197 (1993).
Yaffe, M. B. et al. TCP1 complex is a molecular chaperone in tubulin biogenesis. Nature 358, 245–248 (1992).
Dunn, S.-J., Martello, G., Yordanov, B., Emmott, S. & Smith, A. G. Defining an essential transcription factor program for naïve pluripotency. Science 344, 1156–1160 (2014).
Tye, B. W. et al. Proteotoxicity from aberrant ribosome biogenesis compromises cell fitness. Elife 8, 1–29 (2019).
Shi, Z. et al. Heterogeneous ribosomes preferentially translate distinct subpools of mRNAs genome-wide. Mol. Cell 67, 71–83 (2017).
Kushwaha, R. et al. Interrogation of a context-specific transcription factor network identifies novel regulators of pluripotency. Stem Cells 33, 367–377 (2013).
Gosline, S. J. C. et al. Elucidating microRNA regulatory networks using transcriptional, post-transcriptional, and histone modification measurements. Cell Rep. 14, 310–319 (2016).
Ha, M. & Hong, S. Gene-regulatory interactions in embryonic stem cells represent cell-type specific gene regulatory programs. Nucleic Acids Res. 45, 10428–10435 (2017).
Herberg, M. et al. Dissecting mechanisms of mouse embryonic stem cells heterogeneity through a model-based analysis of transcription factor dynamics. J. R. Soc. Interface 13, 20160167 (2016).
Herberg, M., Kalkan, T., Glauche, I. & Smith, A. A model-based analysis of culture-dependent phenotypes of mESCs. PLoS ONE 9, 1–12 (2014).
Leitch, H. G. et al. Naive pluripotency is associated with global DNA hypomethylation. Nat. Struct. Mol. Biol. 20, 311–316 (2013).
Waghray, A. et al. Tbx3 controls Dppa3 levels and exit from pluripotency toward mesoderm. Stem Cell Rep. 5, 97–110 (2015).
Bartel, D. P. Metazoan MicroRNAs. Cell 173, 20–51 (2018).
Eichhorn, S. W. et al. mRNA destabilization is the dominant effect of mammalian microRNAs by the time substantial repression ensues. Mol. Cell 56, 104–115 (2014).
Agarwal, V., Bell, G. W., Nam, J. W. & Bartel, D. P. Predicting effective microRNA target sites in mammalian mRNAs. Elife 4, e05005 (2015).
Bonath, F., Domingo-Prim, J., Tarbier, M., Friedländer, M. R. & Visa, N. Next-generation sequencing reveals two populations of damage-induced small RNAs at endogenous DNA double-strand breaks. Nucleic Acids Res. 46, 11869–11882 (2018).
Rzepiela, A. J. et al. Single‐cell mRNA profiling reveals the hierarchical response of mi RNA targets to mi RNA induction. Mol. Syst. Biol. 14, 1–15 (2018).
Mei, S. et al. Cistrome data browser: a data portal for ChIP-Seq and chromatin accessibility data in human and mouse. Nucleic Acids Res. 45, D658–D662 (2017).
Kustatscher, G., Grabowski, P. & Rappsilber, J. Pervasive coexpression of spatially proximal genes is buffered at the protein level. Mol. Syst. Biol. 13, 937 (2017).
Fanucchi, S. et al. Chromosomal contact permits transcription between coregulated genes. Cell 155, 606–620 (2013).
Dixon, J. R. et al. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature 485, 376–380 (2012).
Kaufmann, S. et al. Inter-chromosomal contact networks provide insights into mammalian chromatin organization. PLoS ONE 10, 1–25 (2015).
Lalanne, J. B. et al. Evolutionary convergence of pathway-specific enzyme expression stoichiometry. Cell 173, 749–761 (2018).
Papp, B., Pál, C. & Hurst, L. D. Dosage sensitivity and the evolution of gene families in yeast. Nature 424, 194–197 (2003).
Wang, J. et al. Single-cell co-expression analysis reveals distinct functional modules, co-regulation mechanisms and clinical outcomes. PLoS Comput. Biol. 12, 1–18 (2016).
Zhang, X., Peterson, K. A., Liu, X. S., Mcmahon, A. P. & Ohba, S. Gene regulatory networks mediating canonical wnt signal-directed control of pluripotency and differentiation in embryo stem cells. Stem Cells 31, 2667–2679 (2013).
Salas, S. et al. Molecular characterization by array comparative genomic hybridization and DNA sequencing of 194 desmoid tumors. Genes Chromosomes Cancer 56, 89–116 (2017).
Webb, E. C. & Westhead, D. R. The transcriptional regulation of protein complexes; a cross-species perspective. Genomics 94, 369–376 (2009).
Shieh, Y.-W. et al. Operon structure and cotranslational subunit association direct protein assembly in bacteria. Science 350, 678–680 (2015).
Maass, P. G., Barutcu, A. R., Weiner, C. L. & Rinn, J. L. Inter-chromosomal contact properties in live-cell imaging and in Hi-C. Mol. Cell 69, 1039–1045.e3 (2018).
Baek, D. et al. The impact of microRNAs on protein output. Nature 455, 64–71 (2008).
Selbach, M. et al. Widespread changes in protein synthesis induced by microRNAs. Nature 455, 58–63 (2008).
Wang, M., Herrmann, C. J., Simonovic, M., Szklarczyk, D. & von Mering, C. Version 4.0 of PaxDb: protein abundance data, integrated across model organisms, tissues, and cell-lines. Proteomics 15, 3163–3168 (2015).
Ghavi-Helm, Y. et al. Highly rearranged chromosomes reveal uncoupling between genome topology and gene expression. Nat. Genet. 51, 1272–1282 (2019).
Keene, J. D. RNA regulons: coordination of post-transcriptional events. Nat. Rev. Genet. 8, 533–543 (2007).
Chong, M. M. W., Rasmussen, J. P., Rudensky, A. Y. & Littman, D. R. The RNAseIII enzyme Drosha is critical in T cells for preventing lethal inflammatory disease. J. Exp. Med. 205, 2005–2017 (2008).
Scialdone, A. et al. Computational assignment of cell-cycle stage from single-cell transcriptome data. Methods 85, 54–61 (2015).
Hu, H. & Li, X. Transcriptional regulation in eukaryotic ribosomal protein genes. Genomics 90, 421–423 (2007).
Liu, T. et al. Cistrome: an integrative platform for transcriptional regulation studies. Genome Biol. 12, R83 (2011).
Leung, A. K. L. et al. Genome-wide identification of Ago2 binding sites from mouse embryonic stem cells with and without mature microRNAs. Nat. Struct. Mol. Biol. 18, 237–244 (2011).
Ashburner, M. et al. Gene ontology: tool for the unification of biology. Nat. Genet. 25, 25–29 (2000).
Kanehisa, M. & Goto, S. Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 28, 27–30 (2000).
Szklarczyk, D. et al. STRING v10: protein–protein interaction networks, integrated over the tree of life. Nucleic Acids Res. 43, D447–D452 (2015).
Fabregat, A. et al. The reactome pathway knowledgebase. Nucleic Acids Res. 46, D649–D655 (2018).
Gelali, E. et al. iFISH is a publically available resource enabling versatile DNA FISH to study genome architecture. Nat. Commun. 10, 1–15 (2019).
We acknowledge the following funding sources: ERC Starting Grant 758397, "miRCell"; Swedish Research Council (VR) grant 2015-04611, "MapToCleave"; and funding from the Strategic Research Area (SFO) program of the Swedish Research Council through Stockholm University. The computations were performed on resources provided by SNIC through Uppsala Multidisciplinary Center for Advanced Computing Science (UPPMAX). The Smart-Seq2 data were generated by the Eukaryotic Single Cell Genomics (ESCG) and sc-qPCR was facilitated by the Single Cell Proteomics facilities at Science for Life Laboratory. We also thank CRG (the Center for Genomic Regulation) for support in the early pilot phase of the project. Gratitude to Franziska Bonath for designing Fig. 4D and for helping with figure design. Special thanks to Marie Öhman, Johan Elf, and Claes Andréasson, as well as the Friedländer, Kutter, and Pelechano labs for their comments and suggestions.
Open Access funding provided by Stockholm University.
These authors contributed equally: Sebastian D. Mackowiak, João Frade.
Science for Life Laboratory, Department of Molecular Biosciences, The Wenner-Gren Institute, Stockholm University, Stockholm, Sweden
Marcel Tarbier, Sebastian D. Mackowiak, Inna Biryukova & Marc R. Friedländer
Centre for Genomic Regulation (CRG), The Barcelona Institute for Science and Technology, Barcelona, Spain
João Frade, Silvina Catuara-Solarz, Diego Bárcena Menéndez, Luis Zapata & Stephan Ossowski
Science for Life Laboratory, Department of Medical Biochemistry and Biophysics, Karolinska Institute, Stockholm, Sweden
Eleni Gelali & Magda Bienko
Center for Evolution and Cancer, The Institute of Cancer Research, London, UK
Luis Zapata
Department of Experimental and Health Sciences, University Pompeu Fabra, Barcelona, Spain
Stephan Ossowski
Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany
Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
Caroline J. Gallant
J.F., S.C.-S., D.B.M., L.Z., and M.R.F. conceived the project. S.D.M. performed sequence mapping, expression quantification and quality control and M.T. performed all further computational analyses, which were supervised by M.R.F. J.F. and S.C.-S. performed cell perturbation and sorting experiments. E.G. performed and M.B. supervised smFISH experiments. I.B. performed smFISH image analysis. I.B. and C.G. performed single-cell qPCR experiments. L.Z. performed and S.O. supervised early pilot phase computational analyses. M.T. and M.R.F. wrote the manuscript, with contributions from all authors.
Correspondence to Marc R. Friedländer.
Peer review information Nature Communications thanks Jack Keene and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Tarbier, M., Mackowiak, S.D., Frade, J. et al. Nuclear gene proximity and protein interactions shape transcript covariations in mammalian single cells. Nat Commun 11, 5445 (2020). https://doi.org/10.1038/s41467-020-19011-5
Transfinite number
In mathematics, transfinite numbers or infinite numbers are numbers that are "infinite" in the sense that they are larger than all finite numbers. These include the transfinite cardinals, which are cardinal numbers used to quantify the size of infinite sets, and the transfinite ordinals, which are ordinal numbers used to provide an ordering of infinite sets.[1][2] The term transfinite was coined in 1895 by Georg Cantor,[3][4][5][6] who wished to avoid some of the implications of the word infinite in connection with these objects, which were, nevertheless, not finite. Few contemporary writers share these qualms; it is now accepted usage to refer to transfinite cardinals and ordinals as infinite numbers. Nevertheless, the term transfinite also remains in use.
Notable work on transfinite numbers was done by Wacław Sierpiński: Leçons sur les nombres transfinis (1928), much expanded into Cardinal and Ordinal Numbers (1958,[7] 2nd ed. 1965[8]).
Definition
Any finite natural number can be used in at least two ways: as an ordinal and as a cardinal. Cardinal numbers specify the size of sets (e.g., a bag of five marbles), whereas ordinal numbers specify the order of a member within an ordered set[9] (e.g., "the third man from the left" or "the twenty-seventh day of January"). When extended to transfinite numbers, these two concepts are no longer in one-to-one correspondence. A transfinite cardinal number is used to describe the size of an infinitely large set,[2] while a transfinite ordinal is used to describe the location within an infinitely large set that is ordered.[9] The most notable ordinal and cardinal numbers are, respectively:
• $\omega $ (Omega): the lowest transfinite ordinal number. It is also the order type of the natural numbers under their usual linear ordering.
• $\aleph _{0}$ (Aleph-null): the first transfinite cardinal number. It is also the cardinality of the natural numbers. If the axiom of choice holds, the next higher cardinal number is aleph-one, $\aleph _{1}.$ If not, there may be other cardinals which are incomparable with aleph-one and larger than aleph-null. Either way, there are no cardinals between aleph-null and aleph-one.
The continuum hypothesis is the proposition that there are no intermediate cardinal numbers between $\aleph _{0}$ and the cardinality of the continuum (the cardinality of the set of real numbers),[2] or, equivalently, that $\aleph _{1}$ is the cardinality of the set of real numbers. In Zermelo–Fraenkel set theory, neither the continuum hypothesis nor its negation can be proved.
Some authors, including P. Suppes and J. Rubin, use the term transfinite cardinal to refer to the cardinality of a Dedekind-infinite set in contexts where this may not be equivalent to "infinite cardinal"; that is, in contexts where the axiom of countable choice is not assumed or is not known to hold. Given this definition, the following are all equivalent:
• ${\mathfrak {m}}$ is a transfinite cardinal. That is, there is a Dedekind infinite set $A$ such that the cardinality of $A$ is ${\mathfrak {m}}.$
• ${\mathfrak {m}}+1={\mathfrak {m}}.$
• $\aleph _{0}\leq {\mathfrak {m}}.$
• There is a cardinal ${\mathfrak {n}}$ such that $\aleph _{0}+{\mathfrak {n}}={\mathfrak {m}}.$
Although transfinite ordinals and cardinals both generalize only the natural numbers, other systems of numbers, including the hyperreal numbers and surreal numbers, provide generalizations of the real numbers.[10]
Examples
In Cantor's theory of ordinal numbers, every integer must have a successor.[11] The next integer after all the regular ones, that is, the first infinite integer, is named $\omega $. In this context, $\omega +1$ is larger than $\omega $, and $\omega \cdot 2$, $\omega ^{2}$ and $\omega ^{\omega }$ are larger still. Arithmetic expressions containing $\omega $ specify an ordinal number, and can be thought of as the set of all integers up to that number. A given number generally has multiple expressions that represent it; however, there is a unique Cantor normal form that represents it,[11] essentially a finite sequence of digits that give coefficients of descending powers of $\omega $.
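As a concrete illustration (an example added here, not taken from the cited sources), the ordinal $\omega ^{\omega }\cdot 3+\omega ^{2}\cdot 5+\omega \cdot 2+7$ is already in Cantor normal form: its exponents $\omega, 2, 1, 0$ strictly decrease and its coefficients $3, 5, 2, 7$ are ordinary natural numbers. Every ordinal below $\varepsilon _{0}$ (defined in the next paragraph) admits exactly one such representation.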
Not every infinite integer can be captured by such a finite expression, however; the first one that cannot is the limit $\omega ^{\omega ^{\omega ^{...}}}$, termed $\varepsilon _{0}$.[11] $\varepsilon _{0}$ is the smallest solution to $\omega ^{\varepsilon }=\varepsilon $, and the following solutions $\varepsilon _{1},...,\varepsilon _{\omega },...,\varepsilon _{\varepsilon _{0}},...$ give larger ordinals still, and can be followed until one reaches the limit $\varepsilon _{\varepsilon _{\varepsilon _{...}}}$, which is the first solution to $\varepsilon _{\alpha }=\alpha $. This means that in order to specify all transfinite integers, one must think up an infinite sequence of names: if one were to specify a single largest integer, one would always be able to mention its larger successor. But as noted by Cantor, even this only allows one to reach the lowest class of transfinite numbers: those corresponding to sets whose size is the cardinal number $\aleph _{0}$.
See also
• Actual infinity
• Beth number
• Epsilon number
• Infinitesimal
References
1. "Definition of transfinite number | Dictionary.com". www.dictionary.com. Retrieved 2019-12-04.
2. "Transfinite Numbers and Set Theory". www.math.utah.edu. Retrieved 2019-12-04.
3. "Georg Cantor | Biography, Contributions, Books, & Facts". Encyclopedia Britannica. Retrieved 2019-12-04.
4. Georg Cantor (Nov 1895). "Beiträge zur Begründung der transfiniten Mengenlehre (1)". Mathematische Annalen. 46 (4): 481–512.
5. Georg Cantor (Jul 1897). "Beiträge zur Begründung der transfiniten Mengenlehre (2)". Mathematische Annalen. 49 (2): 207–246.
6. Georg Cantor (1915). Philip E.B. Jourdain (ed.). Contributions to the Founding of the Theory of Transfinite Numbers (PDF). New York: Dover Publications, Inc. English translation of Cantor (1895, 1897).
7. Oxtoby, J. C. (1959), "Review of Cardinal and Ordinal Numbers (1st ed.)", Bulletin of the American Mathematical Society, 65 (1): 21–23, doi:10.1090/S0002-9904-1959-10264-0, MR 1565962
8. Goodstein, R. L. (December 1966), "Review of Cardinal and Ordinal Numbers (2nd ed.)", The Mathematical Gazette, 50 (374): 437, doi:10.2307/3613997, JSTOR 3613997
9. Weisstein, Eric W. (3 May 2023). "Ordinal Number". mathworld.wolfram.com.
10. Beyer, W. A.; Louck, J. D. (1997), "Transfinite function iteration and surreal numbers", Advances in Applied Mathematics, 18 (3): 333–350, doi:10.1006/aama.1996.0513, MR 1436485
11. John Horton Conway, (1976) On Numbers and Games. Academic Press, ISBN 0-12-186350-6. (See Chapter 3.)
Bibliography
• Levy, Azriel, 2002 (1978) Basic Set Theory. Dover Publications. ISBN 0-486-42079-5
• O'Connor, J. J. and E. F. Robertson (1998) "Georg Ferdinand Ludwig Philipp Cantor," MacTutor History of Mathematics archive.
• Rubin, Jean E., 1967. "Set Theory for the Mathematician". San Francisco: Holden-Day. Grounded in Morse–Kelley set theory.
• Rudy Rucker, 2005 (1982) Infinity and the Mind. Princeton Univ. Press. Primarily an exploration of the philosophical implications of Cantor's paradise. ISBN 978-0-691-00172-2.
• Patrick Suppes, 1972 (1960) "Axiomatic Set Theory". Dover. ISBN 0-486-61630-4. Grounded in ZFC.
Braille in Modern World
You may have seen braille in elevators and on ATMs and door signs, but brushed it off as something that's for blind people and not for you. As a student, you may have seen braille in a math problem involving patterns and binary choices. As a puzzle enthusiast, you may have seen braille in a decryption challenge. But is that all there is to braille?
For an upcoming Toastmasters speech, I decided to get to know braille, by researching and interviewing locals who professionally work with people who are blind and visually impaired. Surprisingly, the more I looked into braille, the more I realized its diminishing role in the modern world. I want to address the problem today.
1. How to read braille
First, not every blind person can read and write in braille. An alarming statistic comes from the National Federation of the Blind (NFB) in their 2009 report. Less than 10% of blind people in the U.S. are literate in braille, and the rate is similar with blind children, which points to a dismal future. The NFB concluded with a hopeful plan to double the literacy rate by 2015, but has yet to indicate any change.
If we are to suggest using braille to people who are blind and visually impaired, we had better understand how it works. Braille is a 6-dot system:
A set of 6 dots is called a cell. The dots are numbered 1 through 6: dots 1, 2, and 3 run down the left column, and dots 4, 5, and 6 run down the right. Each dot in a cell can be raised or flat, so there are 2^6 = 64 possible cells. That does not seem like a whole lot to work with.
But braille, in fact, represents many things, including multiple languages, math, music, and even programming. The trick is to allow assigning multiple roles to a cell and allow considering a group of cells as one entity. It's ingenious, but at the same time, you can imagine how trying to represent everything with just 6 dots can cause problems.
With this in mind, let us look at the English language and math in braille.
a. English braille
English braille consists of two levels: Grade 1 and Grade 2. In Grade 1 braille, we turn each letter, number, and punctuation mark in a sentence into a cell, in a one-to-one fashion. Hence, if you already understand how words are spelled and sentences are constructed in English, you can write in Grade 1 braille.
Let us first examine the English letters in braille:
Figure 1. Letters in braille.
At a glance, this looks like a lot to remember. However, there are a couple of patterns. First, the bottom three lines of cells are copies of the top line, with one or both of dots 3 and 6 raised. Second, the "corner pieces" are assigned to the 4th, 6th, 8th, and 10th letters, and show a counterclockwise rotation when taken as a sequence. The letter w appears as an afterthought, because it is not used natively in French. (Braille started in France in the 1820s.)
We can also "lower" the dots on the a-j line to write ten more cells. Many of these are used to show punctuation marks.
Figure 2. Punctuation marks in braille.
And here are the remaining fourteen cells, which involve dots 3-6:
Figure 3. The remaining cells.
Three of these merit a special mention.
Dots-6 (called the capital sign) capitalizes the letter that follows. Placing two of these in a row puts the entire next word in "all caps."
Placing dots-3456 (called the number sign) before one of the letters a-j creates a digit between 0 and 9. The digits are arranged in "keyboard" order. In other words, 1 and a share the same cell, 2 and b, and so on, with 0 and j sharing the last cell.
The cell with no raised dots indicates a space.
Now, you can imagine that long words would be cumbersome to write. We also know from experience that certain letters tend to appear together (e.g. as a prefix or suffix). Lastly, there may be words that are more useful to the blind. These words should be easier to read and write.
Figure 4. A sentence written in Grade 1 braille.
Grade 2 braille addresses these problems by introducing contractions. Contractions occur in two ways:
A cell represents a group of letters, sometimes an entire word.
A group of cells forms the abbreviation of a word (e.g. bl for blind, brl for braille).
Almost every cell takes on a double duty to accomplish these two goals.
Figure 5. The same sentence in Grade 2 braille.
Unfortunately, the contractions also create problems. There are many, carefully laid down rules for when to use contractions (based on spelling, phonetics, or optimality) and which contraction comes first. Imagine that you are a software engineer whose job is to parse a word into cells while accounting for all these rules. It is not easy to take the plunge and write code that can translate English to braille and vice versa.
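To give a feel for the easy half of the problem, here is a minimal sketch of an uncontracted (Grade 1) transcriber. It is my own illustration rather than anyone's production code: it handles only lowercase letters, digits, and spaces, renders cells with the Unicode braille patterns block, and leaves out every contraction, capitalization, and punctuation rule that makes real transcription hard.

```python
# Minimal Grade 1 (uncontracted) braille sketch: letters, digits, spaces only.
# Cells are shown using the Unicode braille patterns block (U+2800-U+283F),
# where bit k-1 of the code point offset corresponds to dot k.

LETTER_DOTS = {
    'a': (1,), 'b': (1, 2), 'c': (1, 4), 'd': (1, 4, 5), 'e': (1, 5),
    'f': (1, 2, 4), 'g': (1, 2, 4, 5), 'h': (1, 2, 5), 'i': (2, 4),
    'j': (2, 4, 5), 'k': (1, 3), 'l': (1, 2, 3), 'm': (1, 3, 4),
    'n': (1, 3, 4, 5), 'o': (1, 3, 5), 'p': (1, 2, 3, 4),
    'q': (1, 2, 3, 4, 5), 'r': (1, 2, 3, 5), 's': (2, 3, 4),
    't': (2, 3, 4, 5), 'u': (1, 3, 6), 'v': (1, 2, 3, 6), 'w': (2, 4, 5, 6),
    'x': (1, 3, 4, 6), 'y': (1, 3, 4, 5, 6), 'z': (1, 3, 5, 6),
}
NUMBER_SIGN = (3, 4, 5, 6)                                # dots-3456
DIGIT_TO_LETTER = dict(zip("1234567890", "abcdefghij"))   # "keyboard" order

def cell(dots):
    """Return the Unicode braille character with the given dots raised."""
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

def to_grade1(text):
    """Transcribe lowercase letters, digits, and spaces into braille cells."""
    out, in_number = [], False
    for ch in text.lower():
        if ch.isdigit():
            if not in_number:                 # announce digits once per run
                out.append(cell(NUMBER_SIGN))
                in_number = True
            out.append(cell(LETTER_DOTS[DIGIT_TO_LETTER[ch]]))
        elif ch in LETTER_DOTS:
            in_number = False
            out.append(cell(LETTER_DOTS[ch]))
        elif ch == ' ':
            in_number = False
            out.append(cell(()))              # blank cell for a space
        # everything else is silently dropped in this sketch
    return ''.join(out)

print(to_grade1("braille has 63 raised cells"))
```

Grade 2 is where it gets hard: the same dictionary would need context-dependent entries for whole words and letter groups, plus the ordering and usage rules mentioned above.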
Since 1992, efforts have been put into modernizing and standardizing English braille. The result is Unified English Braille (UEB). UEB removes and simplifies some of the contractions and punctuation marks in Grade 2 braille, in order to better reflect ideas that are relevant in the modern world and to pave a lasting future for braille. (The punctuation marks above remained the same.) In particular, standardization allows books and materials in braille to be more easily shared among the countries that use English. The schools for the blind, book lending programs, and braille certification programs in the U.S. are transitioning to UEB now.
b. Nemeth braille
Nemeth braille (pronounced ne-meth, not nee-muth) uses the six dots to represent ideas and notations that are common in math. Using the number sign (dots-3456), letter sign (dots-56), and punctuation sign (dots-456), we can write math along with English in a sentence, much like I do on this blog.
Figure 6. Special signs in Nemeth braille.
Again, there is a long list of rules for representing math. We consider a small handful below.
First, the cells for numbers are not the ones used in Grade 2 braille and UEB. Instead, we use the "lowered" cells that we had previously used for punctuation marks:
Figure 7. Numbers in Nemeth braille.
The number sign and punctuation sign allow us to understand whether we are looking at a number or a punctuation mark.
Next, let us consider operators. Note that two cells are needed to create the equal sign: dots-46, followed by dots-13.
Figure 8. Basic operators in Nemeth braille.
If you are familiar with LaTeX, you will feel more at ease when you read and write an expression in Nemeth braille. Even if you aren't, you can with practice by looking at examples. The key is to think about how you would describe an expression to a blind person or a computer, who cannot see the expression in print.
Consider writing a fraction in LaTeX. No matter how complicated the expressions in the numerator and denominator may be, we would write \frac{numerator}{denominator}. This line of code captures the essence of the fraction. We are telling LaTeX that there is a fraction ahead with the frac command, the numerator looks like numerator, and the denominator looks like denominator. Furthermore, we can write additional LaTeX code in numerator and denominator, so that we can describe how their expressions look in print more precisely.
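As a tiny worked example (mine, not the post's), the fraction one half is written in LaTeX as

```latex
\frac{1}{2}
```

and it is dictated in Nemeth braille in the same spirit, cell by cell: the opening fraction sign (dots-1456), the lowered cell for 1 (dots-2), the slash sign (dots-34), the lowered cell for 2 (dots-23), and the closing fraction sign (dots-3456). The two digit cells are my reading of the lowered a and b patterns, so treat them as illustrative rather than authoritative.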
We write a fraction in Nemeth braille in a similar manner:
Figure 9. An equation that involves a fraction.
The fraction signs (dots-1456 to open, dots-3456 to close) indicate that we are writing a fraction, and the slash sign (dots-34) separates the expressions for the numerator and denominator. The numerator and denominator may hold additional braille code.
At the Texas School for the Blind and Visually Impaired, the math teachers and braillists create and distribute homework written in Nemeth braille. In return, the students write their answers in braille using a typewriter such as Perkins Brailler.
Figure 10. A homework problem in Nemeth braille.
As you can see above, Nemeth braille works well. Word problems and multiple-choice questions can be given easily. Tables of information can be included as well, although they may require more space due to formatting.
One major obstacle is conveying visual information, such as drawings of geometric shapes, graphs of functions in 2D and 3D, and colors, shadows, and transparencies to highlight certain ideas. We may try to approximate the contour with dotted cells or explain what is shown in words. However, we must wonder how much information gets lost in doing so.
2. The decline of braille
Earlier I mentioned that the braille literacy rate among the blind is estimated to be 10%. This is a significant drop when you consider the rate in the 1960s, which passed 50%. What happened?
According to Ava Smith, the Director of the Talking Book Program, audiobooks became popular around the 1970s, and a disastrous decision followed: blind children would no longer be taught braille, since they could supposedly learn English by listening to audiobooks. As it turned out, listening by itself did not help them develop literacy skills.
Audiobooks provide one form of learning but not a holistic one. (source)
When we read a sentence by sight, we observe how to spell, how to follow grammar, how to format a text, how to use punctuation marks, etc. Oftentimes, particularly in poems, the author uses these in a very deliberate manner to highlight his or her ideas. You can't take in these ideas and learn to create your own, when you only listen to the sentence and never see it written.
There is much information in writing that we can take for granted.
In addition, when you listen to someone speak the sentence, you must depend on that person's interpretation of the sentence. The pronunciation, the inflection, the emotion, the pace—they are theirs, not yours. How will you give that sentence your own voice if you never learn to read?
In 1975, the Individuals with Disabilities Education Act (IDEA) allowed students with disabilities to attend a public school. Unfortunately, most teachers in public schools did not know braille, and there were simply not enough outside resources—braille books, braillists, and Teachers of the Visually Impaired (TVIs)—to help the blind students use braille to learn as well as sighted students.
Furthermore, alternatives to braille began to appear. People who had some sight could choose to read large print books. Compared to large print and audio, braille books are more costly to produce, bulkier in weight and number of volumes, and more crippling in the case of damage or loss. Refreshable braille displays—electronic braille—would certainly eliminate a lot of these problems. However, they are expensive (Humanware and Freedom Scientific sell their mid-ranged, 40-cell displays for about $3,000), can show only one line at a time, and are prone to failure.
An electronic braille display in use. (source)
If we are to advocate using braille daily, we need a display that is cheap (Transforming Braille Group is aiming for $320 for a 20-cell display), can show a full page of braille (without raising the price significantly), and is reliable. Computers and cell phones are almost universal now. They, by default, include accessibility options like high contrast, magnifier, and screen reader, as well as personal apps, for people who are blind and visually impaired. Braille is simply lagging behind in technology.
3. What can we do?
As sighted people, how can we help further braille? For the most part, awareness is key. Knowing what braille is—braille allows blind people to learn various ideas and share their own—is good, but knowing how to read and write in braille is even better. (It's easier than learning a foreign language, in my opinion.)
We can start out small. The Talking Book Program does community outreach and teaches kids how to write secret messages in braille to their friends. Puzzled Pint loves to make adults read braille (albeit Grade 1) by hiding the solution to a puzzle in braille. Ask, what can you do to get you and others interested in learning braille?
If you work at a restaurant or a company, offer braille copies of your restaurant menu or company brochure. Blind people are like everybody else. They eat, they drink, and they conduct business. There are many braille production groups that can help you with creating braille copies.
You can get creative with braille menus and brochures. (source)
We should also advocate for the inclusion of braille in mail and currency. Blind people get mail like everybody else, but they cannot see what they have just received. The U.S. is the only country whose paper bills are all of the same size, shape, color, and feel. There is no way a blind person can tell the denomination, unless the person has systematically placed the bills in a wallet or gets help from a money reader (which takes time).
There are a few additional things that we can do to help people who are blind and visually impaired. If you work in design—websites, games, electronics, and mobile apps—make sure that they can use your products with ease. Knowbility provides training for creating websites that are accessible, and World Wide Web Consortium (W3C) a list of links for mobile apps.
Please spread word about programs like Bookshare and National Library Service (NLS). NLS suggests that 1.4% of the population in any state may be eligible for their program. However, the Talking Book Program serves fewer than 20,000 people in Texas, out of the possible 378,000 or so according to the formula given by NLS. With limited funds, the Talking Book Program cannot advertise itself. The only way to be known and heard by people with disabilities is word of mouth.
Lastly, treat people with blindness (and any other disability) with respect and kindness as you would any other person, and don't be afraid to say words related to sight to them. If you are not sure whether you should help a blind person, just ask. Every one of us knows what help we want.
a. Helping the blind and visually impaired
Accessibility for mobile apps (World Wide Web Consortium)
Accessibility for websites (Knowbility)
Assistive technology funding
Braille production groups
National Library Service
Talking Book Program
b. Learning braille
Braille certification programs
Rules of English braille (comprehensive, full)
Rules of UEB (comprehensive, full)
Rules of Nemeth braille (comprehensive, full)
UEB chart
Nemeth braille chart
I want to thank Gloria Bennett of the Texas School for the Blind and Visually Impaired, and Ava Smith and Dina Abramson of the Talking Book Program. They were integral to my understanding of what is happening to braille nationwide and in the state of Texas, and were more than happy to give me a tour and introduce me to various equipment for printing braille and recording digital audio.
There is a lot of information that I did not cover here. If you are interested in learning more about what they do, please see the interview transcripts below.
Ava Smith and Dina Abramson, Talking Book Program, Interview.
Charles Petzold, Code: The Hidden Language of Computer Hardware and Software.
Gloria Bennett, Texas School for the Blind and Visually Impaired, Interview.
National Federation of the Blind, Blindness Statistics.
National Federation of the Blind, The Braille Literacy Crisis in America.
Perkins School for the Blind, A Low Cost Revolution in Refreshable Braille.
TIME, Blind People Tell Money Bills Apart.
Author: Isaac. Posted on June 7, 2016 (updated April 1, 2018). Categories: Blog posts. Tags: Braille, Nemeth Braille, Speeches, Unified English Braille.
\begin{document}
\begin{frontmatter}
\title{The vertex-cut-tree of Galton--Watson trees converging to a stable tree} \runtitle{Vertex-cut-tree}
\begin{aug} \author[A]{\fnms{Daphn\'{e}}~\snm{Dieuleveut}\corref{}\ead[label=e1]{[email protected]}}
\runauthor{D. Dieuleveut} \affiliation{Universit\'{e} Paris-Sud}
\address[A]{Equipe de Probabilit\'{e}s,\\ \quad Statistiques et Mod\'{e}lisation\\ Universit\'{e} Paris-Sud\\ B\^{a}timent 430\\ 91405 Orsay Cedex\\ France\\ \printead{e1}}
\end{aug}
\received{\smonth{12} \syear{2013}} \revised{\smonth{5} \syear{2014}}
\begin{abstract} We consider a fragmentation of discrete trees where the internal vertices are deleted independently at a rate proportional to their degree. Informally, the associated cut-tree represents the genealogy of the nested connected components created by this process. We essentially work in the setting of Galton--Watson trees with offspring distribution belonging to the domain of attraction of a stable law of index $\alpha\in(1,2)$. Our main result is that, for a sequence of such trees $\mathcal{T}_n$ conditioned to have size $n$, the corresponding rescaled cut-trees converge in distribution to the stable tree of index $\alpha$, in the sense induced by the Gromov--Prokhorov topology. This gives an analogue of a result obtained by Bertoin and Miermont in the case of Galton--Watson trees with finite variance. \end{abstract}
\begin{keyword}[class=AMS]
\kwd{60F05} \kwd{60J80}
\end{keyword}
\begin{keyword} \kwd{Galton--Watson tree} \kwd{cut-tree} \kwd{stable continuous random tree} \end{keyword}
\end{frontmatter}
\section{Introduction and main result}\label{sec1}
Fragmentations of random trees were first introduced in the work of Meir and Moon \cite{MM} as a recursive random edge-deletion process on discrete trees. Since then, it has been recognized that fragmentations of discrete and continuous trees appear in several natural contexts; see, for example, \cite{BerFires,DroSch} for a connection with forest fire models, \cite{AldPit,BerFragMB} for fragmentations of the Brownian tree \cite{AldCRT3} and its relation to the additive coalescent, and \cite{AbrDV,Mi03,Mi05} for fragmentations of the stable tree of index $\alpha\in(1,2)$ \cite{DuqLG02}. The fragmentations considered in the last two cases, which arise naturally in the setting of Brownian and stable trees, are self-similar fragmentations as studied by Bertoin \cite{BerFragAS}, whose characteristics are explicitly known.
Several recent articles investigated the question of the asymptotic distribution of the number of cuts needed to isolate a specific vertex, for various classes of random trees. In specific cases, Panholzer \cite {Pan06} showed that the Rayleigh distribution arises naturally as a limit in this context, and Janson \cite{Jan} showed that this limiting result holds for general Galton--Watson trees with a finite variance offspring distribution, using a method of moments. He also established a connection to the Brownian tree, which is natural since the Rayleigh distribution is the law of the distance between two uniformly chosen vertices in the CRT. Later, Addario-Berry, Broutin and Holmgren \cite {ABBH} provided a different proof giving a more concrete connection to the Brownian tree. Bertoin and Miermont \cite{BerMi} then studied the genealogy of the cutting procedure in itself, which is related to the problem of the isolation of several vertices rather than just the root (certain of these ideas were implicitly present in former papers, including \cite{ABBH,BerFires}). This allows to code the discrete cutting procedure in terms of a ``cut-tree,'' whose scaling limit is shown to be a Brownian tree that describes in some sense the genealogy of the Aldous--Pitman fragmentation~\cite{AldPit}.
Note that the results of \cite{ABBH}, by introducing a reversible transformation of the Brownian tree, can be understood as building the ``first branch'' of the limiting cut-tree, the latter being a kind of iteration \textit{ad libitum} of this transformation. This transformation was extended in \cite{AbrDFARPLT} in the context of a fragmentation of stable trees. The main goal of the present work is to show that the approach of Bertoin and Miermont \cite{BerMi} can also be adapted to Galton--Watson trees with offspring distribution in the domain of attraction of a non-Gaussian stable law, showing the convergence of the whole discrete cut-tree to a limiting stable tree. This gives in passing a natural definition of the continuum cut-tree for the fragmentation studied in \cite{Mi05}.
Let us describe more precisely the result of \cite{BerMi} we are interested in. Consider a sequence of Galton--Watson trees $\mathcal{T}_n$, conditioned to have exactly $n$ edges, with critical offspring distribution having finite variance $\sigma^2$. The associated cut-trees $\operatorname{Cut}(\mathcal{T}_n)$ describe the genealogy of the fragments obtained by deleting the edges in a uniform random order. It is well known that the rescaled trees $(\sigma/\sqrt{n}) \cdot\mathcal{T}_n$ converge in distribution to the Brownian tree $\mathcal{T}$; see \cite {AldCRT3} for the convergence of the associated contour functions, which implies that this convergence holds for the commonly used Gromov--Hausdorff topology, and for the Gromov--Prokhorov topology. In the present work, we will mainly use the latter. Bertoin and Miermont showed that there is in fact the joint convergence
\[ \biggl(\frac{\sigma}{\sqrt{n}} \mathcal{T}_n, \frac{1}{\sigma \sqrt{n}} \operatorname{Cut}(\mathcal{T}_n) \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl(\mathcal{T}, \operatorname{Cut} (\mathcal{T}) \bigr), \]
where $\operatorname{Cut}(\mathcal{T})$ is the so-called cut-tree of $\mathcal{T}$. Informally, $\operatorname{Cut}(\mathcal{T})$ describes the genealogy of the fragments obtained by cutting $\mathcal{T}$ at points chosen according to a~Poisson point process on its skeleton. Moreover, $\operatorname{Cut}(\mathcal{T})$ has the same law as~$\mathcal{T}$.
Our goal is to show an analogue result in the case where the $\mathcal{T}_n$ are Galton--Watson trees with offspring distribution belonging to the domain of attraction of a stable law of index $\alpha\in(1,2)$, and $\mathcal{T}$ is the stable tree of index $\alpha$. For the stable tree, a self-similar fragmentation arises naturally by splitting at branching points with a rate proportional to their ``width,'' as shown in \cite {Mi05}. This will lead us to modify the edge-deletion mechanism for the discrete trees, so that the rate at which internal vertices are removed increases with their degree. Therefore, we call \emph {edge-fragmentation} the fragmentation studied in \cite{BerMi}, and \emph{vertex-fragmentation} our model. Note that more general fragmentations of the stable tree can be constructed by splitting both at branching points and at uniform points of the skeleton, as in \cite {AbrDV}. However, these fragmentations are not self-similar (see \cite {Mi05}), and will not be studied here.
In the rest of the \hyperref[sec1]{Introduction}, we will describe our setting more precisely and give the exact definition of the cut-trees, both in the discrete and the continuous cases. This will enable us to state our main results in Section~\ref{SThm}.
\subsection{Vertex-fragmentation of a discrete tree} \label{SFragTn}
We begin with some notation. Let~$\mathbb{T}$ be the set of all finite plane rooted trees. For every $T \in\mathbb{T}$, we call $E (T)$ the set of edges of $T$, $V (T)$ the set of vertices of $T$, and $\rho(T)$ the root-vertex of $T$. For each vertex $v \in V (T)$, $\deg(v,T)$ denotes the number of children of $v$ in $T$ (or $\deg v$, if this notation is not ambiguous), and for each edge $e \in E (T)$, $e^-$ (resp.,~$e^+$) denotes the extremity of $e$ which is closest to (resp., furthest away from) the root.
For any tree $T$ with $n$ edges, we label the vertices of $T$ by $v_0, v_1, \ldots, v_n$, and the edges of $T$ by $e_1, \ldots, e_n$, in the depth-first order. Note that the planar structure of $T$ gives an order on the offspring of each vertex, say ``from left to right,'' hence the depth-first order is well defined. With this notation, we have $v_j = e_j^+$ for all $j \in\{1,\ldots,n\}$.
We let $T \in\mathbb{T}$ be a finite tree with $n$ edges. We consider a discrete-time fragmentation on $T$, which can be described as follows:
\begin{itemize}
\item at each step, we mark a vertex of $T$ at random, in such a way that the probability of marking a given vertex $v$ is proportional to $\deg v$;
\item when a vertex $v$ is marked, we delete all the edges $e$ such that $e^- = v$. \end{itemize}
Note that the total number of steps $N$ is at most $n$. To keep track of the genealogy induced by this edge-deletion process, we introduce a new structure called the cut-tree of $T$, denoted by $\operatorname{Cut}_{\mathrm{v}}(T)$.
For all $r \in\{1,\ldots,N\}$, we let $v (r)$ be the vertex which receives a mark at step~$r$, $E_r = \{ e \in E(T)\dvtx e^- = v (r) \}$ be the set of the edges which are deleted at step~$r$, $k_r = \vert E_r\vert$, and $D_r = \{ i \in\{1,\ldots,n\}\dvtx e_i \in\bigcup _{r' \leq r} E_{r'} \}$. We say that $j \sim_r j'$ if and only if $e_j$ and $e_{j'}$ are still connected in the forest obtained from $T$ by deleting the edges in $D_r$. Thus, $\sim_r$ is an equivalence relation on $\{1,\ldots,n\} \setminus D_r$. The family of the equivalence classes (without repetition) of the relations $\sim_r$ for $r = 1,\ldots,N$ forms the set of internal nodes of $\operatorname{Cut}_{\mathrm{v}}(T)$. The initial block $\{1,\ldots,n\}$ is seen as the root, and the leaves of $\operatorname{Cut}_{\mathrm{v}}(T)$ are given by $1, \ldots, n$. We stress that we distinguish the leaves $i$ and the internal nodes $\{i\}$.
We now build the cut-tree $\operatorname{Cut}_{\mathrm{v}}(T)$ inductively. At the $r$th step, we let $B$ be the equivalence class for $\sim_{r-1}$ containing the indices $i$ such that $e_i \in E_r$. Deleting the edges in $E_r$ splits the block $B$ into $k'_r$ equivalence classes $B_1, \ldots, B_{k'_r}$ for $\sim_r$, with $k'_r \leq k_r +1$. We draw $k'_r$ edges between $B$ and the sets $B_1, \ldots, B_{k'_r}$, and $k_r$ edges between $B$ and the leaves $i$ such that $e_i \in E_r$. Thus, the graph-distance between the leaf $i$ and the root in $\operatorname{Cut}_{\mathrm{v}}(T)$ is the number of cuts in the component of $T$ containing the edge $e_i$ before $e_i$ itself is removed. Note that $\operatorname{Cut}_{\mathrm{v}}(T)$ does not have a natural planar structure, but that the actual embedding does not intervene in our work. Figure~\ref{FCutT} gives an example of this construction for a tree $T$ with 16 edges.
If $T$ is a random tree, the fragmentation of $T$ and the cut-tree $\operatorname{Cut}_{\mathrm{v}}(T)$ are defined similarly, by conditioning on $T$ and performing the above construction.
Note that, equivalently, we could mark the edges of $T$ in a uniform random order, and delete all the edges $e$ such that $e^- = e_i^-$, as soon as $e_i$ is marked. The cut-tree $\operatorname{Cut}_{\mathrm{v}}(T)$ would then be obtained by performing the same construction with $E_r = \{ e \in E(T)\dvtx e^- = e_{i_r}^-\}$. This procedure sometimes adds ``neutral steps,'' which have no effect on the fragmentation, but this does not change the cut-tree. It will sometimes be more convenient to work with this point of view, for example, in Sections~\ref{SModDist} and~\ref{SBrownianCase}.
\begin{figure}\label{FCutT}
\end{figure}
\subsection{Fragmentation and cut-tree of the stable tree of index \texorpdfstring{$\alpha\in(1,2)$}{$alpha in(1,2)$}} \label{SFragT}
Following Duquesne and Le Gall (see, e.g., \cite{DuqLG05}), we see stable trees as random rooted $\mathbb{R}$-trees.
\begin{defn} A metric space $(T,d)$ is an $\mathbb{R}$-tree if, for every $u,v \in T$:
\begin{itemize}
\item There exists a unique isometric map $f_{u,v}$ from $[0,d(u,v)]$ into $T$ such that $f_{u,v}(0) = u$ and $f_{u,v}(d(u,v)) = v$.
\item For any continuous injective map $f$ from $[0,1]$ into $T$, such that $f(0)=u$ and $f(1)=v$, we have
\[ f\bigl([0,1]\bigr) = f_{u,v} \bigl(\bigl[0,d(u,v)\bigr]\bigr):= [\![ u, v ]\!]. \]
\end{itemize}
A rooted $\mathbb{R}$-tree is an $\mathbb{R}$-tree $(T,d,\rho)$ with a distinguished point $\rho$ called the root. \end{defn}
The trees we will work with can be seen as $\mathbb{R}$-trees coded by continuous functions from $[0,1]$ into $\mathbb{R}_+$, as in \cite{DuqLG05}. In particular, the stable tree $(\mathcal{T},d)$ of index $\alpha$ is the $\mathbb{R} $-tree coded by the excursion of length $1$ of the height process $H^{(\alpha)}$, defined as follows in \cite{DuqLG02}. Let $X^{(\alpha )}$ be a stable spectrally positive L\'{e}vy process with parameter $\alpha$, whose normalization will be prescribed in Section~\ref{SLimLocale}. For every $t >0$, let $\widehat{X}{}^{(\alpha,t)}$ be the process defined by
\begin{eqnarray*} \widehat{X}{}^{(\alpha,t)}_s = \cases{ X^{(\alpha)}_t - X^{(\alpha)}_{(t-s)^-}, &\quad if $0 \leq s < t$, \vspace*{2pt}\cr X^{(\alpha)}_t, &\quad if $s=t$,} \end{eqnarray*}
and write $\hat{S}^{(\alpha,t)}_s = \sup_{0 \leq r \leq s} \hat{X}^{(\alpha,t)}_r$ for all $s \in[0,t]$.
\begin{defn} The height process $H^{(\alpha)}$ is the real-valued process such that $H_0 = 0$ and, for every $t>0$, $H_t$ is the local time at level $0$ at time $t$ of the process~$\widehat{X}{}^{(\alpha,t)}-\hat{S}^{(\alpha,t)}$. \end{defn}
The normalization of local time, and the proof of the existence of a continuous modification of this process, are given in \cite{DuqLG02}, Section~1.2. This definition of $\mathcal{T}$ allows us to introduce the canonical projection $p\dvtx [0,1] \rightarrow\mathcal{T}$. We endow $\mathcal{T}$ with a probability mass-measure $\mu$ defined as the image of the Lebesgue measure on $[0,1]$ under $p$, and say that the root of $\mathcal{T}$ is the unique point which has height $0$.
For the fragmentation of the stable tree, we will use a process introduced and studied by Miermont in \cite{Mi05}, which consists in deleting the nodes of $\mathcal{T}$ in such a way that the fragmentation is self-similar. We first recall that the multiplicity of a point $v$ in an $\mathbb{R}$-tree $T$ can be defined as the number of connected components of $T \setminus\{v\}$. To be consistent with the definitions of Section~\ref{SFragTn}, we define the degree of a point as its multiplicity minus $1$, and say that a branching point of $T$ is a point $v$ such that $\deg(v,T) \geq2$. Duquesne and Le Gall have shown in \cite{DuqLG05}, Theorem 4.6, that $\mbox{a.s.}$ the branching points in $\mathcal{T} $ form a countable set, and that these branching points have infinite degree. We let $\mathcal{B}$ denote the set of these branching points. For any $b \in\mathcal{B}$, one can define the local time, or width of $b$ as the almost sure limit
\[ L (b) = \lim_{\varepsilon\rightarrow0^+} \varepsilon^{-1} \mu\bigl\{ v \in\mathcal{T}\dvtx b \in[\![ \rho, v ]\!], d (b,v) < \varepsilon \bigr\}, \]
where $\rho$ is the root of the stable tree $\mathcal{T}$. The existence of this quantity is justified in \cite{Mi05}, Proposition 2, (see also \cite{DuqLG05}).
We can now describe the fragmentation we are interested in. Conditionally on~$\mathcal{T}$, we let $(t_i,b_i)_{i \in I}$ be the family (indexed by a countable set $I$) of the atoms of a Poisson point process with intensity $dt \otimes\sum_{b \in\mathcal{B}} L (b) \delta_b (dv)$ on $\mathbb{R}_+ \times\mathcal{B}$.\vspace*{2pt} Seeing these atoms as marks on the branching points of $\mathcal{T}$, we let $\overline {\mathcal{T}} (t) = \mathcal{T} \setminus\{b_i\dvtx t_i \leq t\}$.
For every $x \in\mathcal{T}$, we let $\mathcal{T}_x (t)$ be the connected component of $\overline{\mathcal{T}} (t)$ containing~$x$, with the convention that $\mathcal{T}_x (t) = \varnothing$ if $x \notin\overline{\mathcal{T}} (t)$. We\vspace*{1pt} also let $\mu_x (t) = \mu (\mathcal{T}_x (t))$. Adding a distinguished point $0$ to $\mathcal {T}$, we define a function $\delta$ from $(\mathcal{T}\sqcup\{0\})^2$ into $\mathbb {R}_+ \cup\{\infty \}$, such that for all $x,y \in\mathcal{T}$,
\begin{eqnarray*} \delta(0,0) & =& 0, \qquad\delta(0,x) = \delta(x,0) = \int_0^{\infty} \mu_x (t) \,dt, \\ \delta(x,y) &=& \int_{t (x,y)}^{\infty} \bigl( \mu_x (t) + \mu_y (t) \bigr) \,dt, \end{eqnarray*}
where $t (x,y):= \inf\{t \in\mathbb{R}_+\dvtx \mathcal{T}_x (t) \neq \mathcal{T}_y (t)\}$ is $\mbox{a.s.}$ finite. We think of $\delta$ as our new ``distance'' in the cut-tree. This definition might seem surprising, but the results of Section~\ref{SModDist} will show that it provides an analogue of the distance we defined in the discrete case, in terms of number of cuts; as will be explained in Section~\ref{SEqldeltad}, it also has a natural interpretation as a time-change between two fragmentation processes of the stable tree, studied in \cite{Mi03} and \cite{Mi05}. The role of the extra point $0$ in our (time-changed) fragmentation will be similar to the role played by the root of $\mathcal{T}$ in the ``fragmentation at heights'' which will be introduced in Section~\ref{SEqldeltad}.
A first idea would be to build the vertex-cut-tree $\operatorname{Cut}_{\mathrm{v}}(\mathcal {T})$ as a completion of $(\mathcal{T}\sqcup\{0\}, \delta)$. However, making this idea rigorous is difficult, since it is not clear whether $\delta$ is $\mbox{a.s.} $ finite, and defines a distance on $\mathcal{T}\sqcup\{0\}$. We will instead use an approach introduced by Aldous, which consists in building a continuous random tree such that the subtrees determined by $k$ randomly chosen leaves have the right distribution. To this end, we use the conditions given by Aldous in \cite{AldCRT3}, Theorem~3.
Set $\xi(0) = 0$, and let $(\xi(i))_{i \in\mathbb{N}}$ be an i.i.d. sequence distributed according to $\mu$, conditionally on $\mathcal {T}$. The key argument of our construction is the identity in law
\[ \bigl( \delta\bigl(\xi(i), \xi(j)\bigr) \bigr)_{i,j \geq0} \stackrel{(d)} {=} \bigl( d \bigl(\xi(i+1), \xi(j+1)\bigr) \bigr)_{i,j \geq0}, \]
which will be proven in Section~\ref{SEqldeltad}. In particular, it implies that almost surely, for all $i,j \geq0$, $\delta(\xi(i), \xi (j))$ is finite, and that $\delta$ is $\mbox{a.s.}$ a distance on $\{ \xi(i), i \geq0\}$. This allows us to see the spaces $\mathcal{R} (k):= (\{ \xi(i), 0 \leq i \leq k\}, \delta)$, for all $k \in\mathbb{N}$, as random rooted trees with $k$ leaves. Using the terminology of Aldous, $(\mathcal{R} (k), k \in\mathbb{N})$ forms a \emph{consistent} family of random rooted trees which satisfies the \emph{leaf-tight condition}:
\[ \min_{1 \leq j \leq k} \delta\bigl(\xi(0), \xi(j)\bigr) \mathop{ \longrightarrow}_{k \rightarrow\infty}^{\mathbb{P}} 0. \]
Indeed, the second part of Theorem~3 of \cite{AldCRT3} shows that these conditions hold for the reduced trees $( \{\xi(i), 1 \leq i \leq k+1\}, d )$. As a consequence, the family $(\mathcal{R} (k), k \in \mathbb{N} )$ can be represented as a continuous random tree $\operatorname{Cut}_{\mathrm{v}}(\mathcal {T})$, and $( \delta(\xi(i), \xi(j)) )_{i,j \geq0}$ is the matrix of mutual distances between the points of an i.i.d. sample of $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$. This tree $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$ is called the cut-tree of $\mathcal {T}$. Note that $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$ depends on $\mathcal{T}$ and on the extra randomness of the Poisson process.
\subsection{Fragmentation and cut-tree of the Brownian tree} \label{SFragBr}
We will also work on the Brownian tree $(\mathcal{T}^{\mathrm{br}}, d^{\mathrm{br}}, \rho^{\mathrm{br}})$, which was defined by Aldous (see \cite{AldCRT3}) as the \mbox{$\mathbb{R}$-}tree coded by $(H_t)_{0 \leq t \leq1} = (2 B_t)_{0 \leq t \leq1}$, where $B$ denotes the standard Brownian excursion of length $1$. This tree can be seen as the stable tree of index $\alpha= 2$ (up to a scale factor, with the normalization we will use). In particular, we have a probability mass-measure $\mu^{\mathrm{br}}$ on $\mathcal{T}^{\mathrm{br}}$, defined as the image of the Lebesgue measure on $[0,1]$ under the canonical projection. We also define a length-measure $l$ on $\mathcal{T}^{\mathrm{br}}$, which is the sigma-finite measure such that, for all $u,v \in\mathcal{T}^{\mathrm{br}}$, $l ( [\![ u, v ]\!]) = d^{\mathrm{br}} (u,v)$.
The fragmentation of the Brownian tree we consider is the same as in \cite{BerMi}: conditionally on $\mathcal{T}^{\mathrm{br}}$, we let $(t_i,b_i)_{i \in I}$ be the family of the atoms of a Poisson point process with intensity $dt \otimes l(dv)$ on $\mathbb{R}_+ \times\mathcal{T}^{\mathrm{br}}$. As for the stable tree, we let $\mathcal{T}^{\mathrm{br}}_x (t)$ be the connected component of $\mathcal {T}^{\mathrm{br}}\setminus\{b_i\dvtx t_i \leq t\}$, and $\mu^{\mathrm{br}}_x (t) = \mu^{\mathrm{br}} (\mathcal{T}^{\mathrm{br}}_x (t))$, for every $x \in\mathcal{T}^{\mathrm{br}}$. Adding a distinguished point $0$ to $\mathcal{T}^{\mathrm{br}}$, we define a function $\delta^{\mathrm{br}}$ on $(\mathcal{T}^{\mathrm{br}}\sqcup\{0\})^2$ such that for all $x,y \in\mathcal{T}^{\mathrm{br}}$,
\begin{eqnarray*} \delta^{\mathrm{br}} (0,0) &=& 0, \qquad\delta^{\mathrm{br}} (0,x) = \delta^{\mathrm{br}} (x,0) = \int_0^{\infty} \mu ^{\mathrm{br}}_x (t) \,dt, \\ \delta^{\mathrm{br}} (x,y) &=& \int_{t^{\mathrm{br}} (x,y)}^{\infty} \bigl( \mu^{\mathrm{br}}_x (t) + \mu^{\mathrm{br}}_y (t) \bigr) \,dt, \end{eqnarray*}
where $t^{\mathrm{br}} (x,y):= \inf\{t \in\mathbb{R}_+\dvtx \mathcal{T}^{\mathrm{br}}_x (t) \neq\mathcal{T}^{\mathrm{br}}_y (t)\}$ is $\mbox{a.s.}$ finite. As shown in \cite{BerMi}, we can define a new tree $\operatorname{Cut}(\mathcal{T}^{\mathrm{br}})$ for which the matrix of mutual distances between the points of an i.i.d. sample of $\operatorname {Cut}(\mathcal{T}^{\mathrm{br}})$ is $(\delta(\xi(i), \xi(j)) )_{i,j \geq0}$,\vspace*{1pt} where $\xi(0) = 0$ and $(\xi(i))_{i \in \mathbb{N} }$ is an i.i.d. sequence distributed according to $\mu^{\mathrm{br}}$, conditionally on~$\mathcal{T}^{\mathrm{br}}$. Moreover, $\operatorname {Cut}(\mathcal{T}^{\mathrm{br}})$ has the same law as~$\mathcal{T}^{\mathrm{br}}$.
\subsection{Main results} \label{SThm}
As stated in the \hyperref[sec1]{Introduction}, we mainly work in the setting of Galton--Watson trees with critical offspring distribution $\nu$, where $\nu$ is a probability distribution belonging to the domain of attraction of a stable law of index $\alpha\in(1,2) $. We shall also assume that $\nu$ is aperiodic. Finally, for a technical reason, we will need the additional hypothesis
\begin{equation} \label{HMajPZ=r} \sup_{r \geq1} \biggl(\frac{r \mathbb{P}(\hat{Z} = r)}{\mathbb {P}(\hat{Z} > r)} \biggr) < \infty, \end{equation}
where $\hat{Z}$ is a random variable such that $\mathbb{P}(\hat {Z}=r) = r \nu(\{r\})$. For example, this is the case if $\nu(\{r\})$ is equivalent to $c/r^{\alpha+ 1}$ as $n \rightarrow\infty$, for a constant $c \in(0,\infty)$. In all our work, we shall implicitly work for values of $n$ such that, for a Galton--Watson tree $T$ with offspring distribution $\nu$, $\mathbb{P} (\vert E(T)\vert=n ) \neq0$. We let $\mathcal{T} _n$ be a $\nu$-Galton--Watson tree, conditioned to have exactly $n$ edges. We let $\delta_n$ denote the graph-distance on $\{0,1,\ldots,n\} $ induced by $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n)$. We will use the notation $\rho_n$ for the root of $\mathcal{T}_n$, and $\mu_n$ for the uniform distribution on $E (\mathcal{T}_n)$ (by slight abuse, $\mu_n$ will also sometimes be used for the uniform distribution on $\{1, \ldots, n\}$).
Our main goal is to study the asymptotic behavior of $\operatorname{Cut}_{\mathrm{v}}(\mathcal {T}_n)$ as $n \rightarrow\infty$. To this end, it will be convenient to see trees as pointed metric measure spaces, and work with the Gromov--Prokhorov topology on the set of (equivalence classes of) such spaces. Let us recall a few definitions and facts on these objects (see, e.g., \cite {GPW08} for details).
A pointed metric measure space is a quadruple $(X, D, m, x)$, where $m$ is a Borel probability measure on the metric space $(X,D)$, and $x$ is a point of $X$. These objects are considered up to a natural notion of isometry-equivalence. One says that a sequence $(X_n, D_n, m_n, x_n)$ of pointed metric measure spaces converges in the Gromov--Prokhorov sense to $(X_{\infty}, D_{\infty}, m_{\infty}, x_{\infty})$ if and only if the following holds: for $n \in\mathbb{N}\cup\{\infty\}$, set $\xi_n(0) = x_n$ and let $\xi_n(1), \xi_n(2), \ldots$ be a sequence of i.i.d. random variables with law $m_n$; then, for every $k \geq1$, the vector $(D_n(\xi_n(i), \xi_n(j))\dvtx 0 \leq i, j \leq k)$ converges in distribution to $(D_{\infty} (\xi_{\infty} (i), \xi_{\infty} (j))\dvtx 0 \leq i, j \leq k)$. The space $\mathbb{M}$ of (isometry-equivalence classes of) pointed metric measure spaces, endowed with the Gromov--Prokhorov topology, is a Polish space.
In this setting, the stable tree $\mathcal{T}$ with index $\alpha$ can be seen as a scaling limit of the Galton--Watson trees $\mathcal{T}_n, n \in \mathbb{N}$. More precisely, we endow the discrete trees $\mathcal{T}_n$ with the associated graph-distance $d_n$ and the uniform distribution $m_n$ on $V(\mathcal{T}_n) \setminus\{\rho_n\}$. Note that $m_n$ is uniform on $\{v_1 (\mathcal{T} _n),\ldots,v_n(\mathcal{T}_n)\}$; by slight abuse, it will sometimes be identified with the uniform distribution on $\{1,\ldots,n\}$. For any pointed metric measure space $\mathbf{X} = (X,D,m,x)$ and any $a \in (0,\infty)$, we let $a \mathbf{X} = (X,a D,m,x)$. With this formalism, there exists a sequence $(a_n)_{n \in\mathbb{N}}$ such that
\begin{equation} \label{ECvTn} \frac{a_n}{n} \mathcal{T}_n \mathop{ \longrightarrow} ^{(d)} \mathcal{T}, \end{equation}
in the sense of the Gromov--Prokhorov topology, and $a_n = n^{1/\alpha} f(n)$ for a slowly-varying function $f$. This is a consequence of the convergence of the contour functions associated with the trees $\mathcal{T}_n$, shown in \cite{Duq}, Theorem 3.1. We will give a slightly more precise version of this result in Section~\ref{SCodingTrees}.
We can now state our main result.
\begin{teo}\label{TMainThm} Let $(a_n)_{n \in\mathbb{N}}$ be a sequence such that (\ref{ECvTn}) holds. Then we have the following joint convergence in distribution:
\[ \biggl(\frac{a_n}{n} \mathcal{T}_n, \frac{a_n}{n} \operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n) \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}\, \bigl(\mathcal{T}, \operatorname{Cut}_{\mathrm{v}}(\mathcal{T}) \bigr), \]
where $\mathbb{M}$ is endowed with the Gromov--Prokhorov topology and $\mathbb{M} \times\mathbb{M}$ has the associated product topology. Furthermore, the cut-tree $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$ has the same distribution as $\mathcal{T}$. \end{teo}
Note that this generalizes Proposition 1.4 of \cite{AbrDBC&GWT}, which gave the scaling limit of the number of cuts needed to isolate the root in a stable Galton--Watson tree.
In the following sections, we fix the sequence $(a_n)$. For some of the preliminary results, we will use a particular choice of this sequence, detailed in Section~\ref{SLimLocale}. Nevertheless, it is easy to check that the theorem holds for any equivalent sequence.
To complement this result, we will study the limit of the cut-tree obtained for the vertex-fragmentation, in the case where the offspring distribution $\nu$ has finite variance (still assuming that $\nu$ is critical and aperiodic). More precisely, we will show the following.
\begin{teo} \label{TBrownianCase} If the offspring distribution $\nu$ has finite variance $\sigma^2$, then we have the joint convergence in distribution
\[ \biggl(\frac{\sigma}{\sqrt{n}} \mathcal{T}_n, \frac{1}{\sqrt{n}} \biggl( \sigma+\frac{1}{\sigma} \biggr) \operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n) \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}\, \bigl(\mathcal{T}^{\mathrm{br}}, \operatorname{Cut}\bigl(\mathcal{T}^{\mathrm{br}}\bigr) \bigr) \]
in $\mathbb{M} \times\mathbb{M}$. \end{teo}
Let us explain informally why we get a factor $\sigma+1/\sigma$, instead of the $1/\sigma$ we had in the case of the edge-fragmentation. In the vertex-fragmentation, the average number of deleted edges at each step is roughly $\sum_k k \nu(\{k\}) \times k = \sigma^2 + 1$. Thus, the edge-deletions happen $\sigma^2+1$ times faster than for the edge-fragmentation. As a consequence, $(1/\sqrt{n}) \cdot\,\operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n)$ behaves approximately like $(1/((\sigma^2+1) \sqrt{n})) \cdot\,\operatorname{Cut}(\mathcal{T}_n)$, that is, $(\sigma+ 1/\sigma)^{-1}(1/(\sigma\sqrt{n})) \cdot\,\operatorname{Cut}(\mathcal{T}_n)$.
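Spelled out, the two elementary computations behind this heuristic are
\begin{eqnarray*}
\sum_k k \nu\bigl(\{k\}\bigr) \times k &=& \operatorname{Var}(\nu) + \biggl(\sum_k k \nu\bigl(\{k\}\bigr) \biggr)^2 = \sigma^2 + 1, \\
\frac{1}{(\sigma^2+1) \sqrt{n}} &=& \frac{1}{\sigma(\sigma+ 1/\sigma) \sqrt{n}} = \biggl(\sigma+ \frac{1}{\sigma} \biggr)^{-1} \frac{1}{\sigma\sqrt{n}},
\end{eqnarray*}
where the first line uses the criticality assumption $\sum_k k \nu(\{k\}) = 1$.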
Also note that we would need additional hypotheses to extend this result to the more general case of an offspring distribution belonging to the domain of attraction of a Gaussian distribution. Indeed, as will be seen in Section~\ref{SBrownianCase}, the proof of this result relies on the convergence of the coefficients $n/a_n^2$: if $\nu$ has finite variance, we may and will take $a_n = \sigma\sqrt{n}$, but in the general case, this convergence is not guaranteed.
For both of these theorems, it is known that the first component converges in the stronger sense of the Gromov--Hausdorff--Prokhorov topology. However, as in the case studied by Bertoin and Miermont, the question of whether the joint convergences hold in this sense remains open.
In the following sections, we will first work on the proof of Theorem \ref{TMainThm}: preliminary results will be given in Section~\ref{SPrelims}, and the proof will be completed in Section~\ref{SProof}. The global structure of this proof is close to that of \cite{BerMi}, although the technical arguments differ, especially in Section~\ref{SPrelims}. Section~\ref{SBrownianCase} will be devoted to the study of the finite variance case.
\section{Preliminary results} \label{SPrelims}
\subsection{Modified distance on $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n)$}\label{SModDist}
We begin by introducing a new distance $\delta_n '$ on $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T} _n)$, defined in a similar way as the distance $\delta$ for a continuous tree. We show that this distance is ``close'' enough to $(a_n/n) \cdot\delta_n$, which will enable us to work on the modified cut-tree $\operatorname{Cut}_{\mathrm{v}}'(\mathcal{T}_n):= (\operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n),\delta_n')$.
Recall the fragmentation of $\mathcal{T}_n$ introduced in Section~\ref{SFragTn}. We now turn this process into a continuous-time fragmentation, by saying that each vertex $v \in V(\mathcal{T}_n)$ is marked independently, with rate $\deg v / a_n$. Equivalently, this can be seen as marking each edge of $\mathcal{T}_n$ independently with rate $1/a_n$, and deleting all the edges $e$ such that $e^- = e_i^-$ as soon as $e_i$ is marked. Thus, we obtain a forest $\overline{\mathcal{T}}_n (t)$ at time $t$. For every $i \in\{1, \ldots, n\}$, we let $\mathcal{T}_{n,i} (t)$ denote the component of $\overline{\mathcal{T}}_n (t)$ containing the edge $e_i$, with the convention $\mathcal{T}_{n,i}(t) = \varnothing$ if $e_i \notin \overline{\mathcal{T}}_n(t)$, and $\mu_{n,i} (t) = \mu_n (\mathcal{T}_{n,i} (t))$. Note that $n \mu_{n,i} (t)$ is the number of edges in $\mathcal{T}_{n,i} (t)$. For all $i,j \in\{1,\ldots, n\}$, we now define
\begin{eqnarray*} \delta_n ' (0,0) & =& 0, \qquad\delta_n ' (0, i) = \delta_n ' (i, 0) = \int _0^{\infty} \mu_{n,i} (t) \,dt, \\ \delta_n ' (i,j) &=& \int_{t_n (i,j)}^{\infty} \bigl( \mu_{n,i} (t) + \mu_{n,j} (t) \bigr) \,dt, \end{eqnarray*}
where $t_n (i,j)$ denotes the first time when the components $\mathcal{T}_{n,i} (t)$ and $\mathcal{T}_{n,j} (t)$ become disjoint.
\begin{lem} \label{TModDist} For all $i,j \in\{1,\ldots,n\}$, we have
\[ \mathbb{E} \biggl[\biggl\vert\frac{a_n}{n} \delta_n (0,i) - \delta_n ' (0,i)\biggr\vert^2 \biggr] = \frac{a_n}{n} \mathbb{E} \bigl[\delta_n ' (0,i) \bigr] \]
and
\[ \mathbb{E} \biggl[\biggl\vert\frac{a_n}{n} \delta_n (i,j) - \delta_n ' (i,j)\biggr\vert^2 \biggr] \leq\frac{a_n}{n} \mathbb{E} \bigl[\delta_n ' (0,i) + \delta_n ' (0,j) \bigr]. \]
\end{lem}
\begin{pf} We work conditionally on $\mathcal{T}_n$. Fix $i \in\{1,\ldots,n\}$. For all $t \in\mathbb{R}_+$, we let $N_i (t)$ be the number of cuts happening in the component containing $e_i$ up to time $t$. Since each edge of $\mathcal{T}_n$ is marked independently with rate $1/a_n$, the process $(M_i (t))_{t \geq0}$, where
\begin{eqnarray*} M_i (t) &:=& \frac{a_n}{n} N_i (t) - \int _0^t \mu_i (s) \,ds, \end{eqnarray*}
is a purely discontinuous martingale.
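Indeed, at time $s$ the component containing $e_i$ consists of $n \mu_i (s)$ edges, each carrying an independent mark of rate $1/a_n$, so that cuts affecting this component occur at rate $n \mu_i (s)/a_n$; the compensator of $N_i$ is therefore
\begin{eqnarray*}
&& \int_0^t \frac{n}{a_n} \mu_i (s) \,ds,
\end{eqnarray*}
and each such cut makes $M_i$ jump by $a_n/n$.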
Its predictable quadratic variation can be written as
\begin{eqnarray*} \langle M_i \rangle_t &=& \frac{a_n}{n} \int _0^t \mu_i (s) \,ds. \end{eqnarray*}
As a consequence, we have $\mathbb{E}[\vert M_i (\infty)\vert^2] = \mathbb{E} [ \langle M_i \rangle _{\infty} ]$.
Since
\begin{eqnarray*} \lim_{t \rightarrow\infty} N_i (t) = \delta_n (0,i) \quad\mbox{and} \quad\lim_{t \rightarrow\infty} \int_0^t \mu_i (s) \,ds &=& \delta_n ' (0,i), \end{eqnarray*}
we get
\begin{eqnarray*} \mathbb{E} \biggl[\biggl\vert\frac{a_n}{n} \delta_n (0,i) - \delta_n ' (0,i)\biggr\vert^2 \biggr] &=& \frac{a_n}{n} \mathbb{E} \bigl[\delta_n ' (0,i) \bigr]. \end{eqnarray*}
For the second part, we use similar arguments. We fix $i \neq j \in\{ 1,\ldots,n\}$, and we write $t_{ij}$ instead of $t_n (i,j)$. For all $t \geq0$, let $\mathcal{F}_t$ denote the $\sigma$-algebra generated by $\mathcal{T}_n$ and the atoms $\{(t_r, e_{i_r})\dvtx t_r \leq t \}$ of the Poisson point process of marks on the edges introduced in Section~\ref{SFragTn}. Conditionally on $\mathcal{F}_{t_{ij}}$,
\begin{eqnarray*} && M_{ij} (t):= M_i (t_{ij} + t) - M_i (t_{ij}) + M_j (t_{ij} + t) - M_j (t_{ij}) \end{eqnarray*}
defines a purely discontinuous martingale such that
\begin{eqnarray*} \lim_{t \rightarrow\infty} M_{ij} (t) & =& \frac{a_n}{n} \bigl( \delta_n (b_{ij}, i ) + \delta_n (b_{ij}, j ) \bigr) - \int_{t_{ij}}^{\infty} \mu_i (s) \,ds - \int_{t_{ij}}^{\infty} \mu_j (s) \,ds \\ & =& \frac{a_n}{n} \delta_n (i,j) - \delta_n ' (i,j), \end{eqnarray*}
where $b_{ij}$ denotes the most recent common ancestor of the leaves $i$ and $j$ in $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n)$.
Besides, since the edges of $\mathcal{T}_{n,i}$ and $\mathcal {T}_{n,j}$ are marked independently after time $t_{ij}$, the predictable quadratic variation of $M_{ij}$ is
\begin{eqnarray*} \langle M_{ij} \rangle_t &=& \frac{a_n}{n} \mathbb{E} \biggl[\int_{t_{ij}}^{t_{ij}+t} \bigl(\mu_i (s) + \mu_j (s) \bigr) \,ds \biggr]. \end{eqnarray*}
As in the first part of the proof, this implies $\mathbb{E} [\vert({a_n}/{n}) \delta_n (i,j) - \delta_n ' (i,j)\vert^2 ] = ({a_n}/{n}) \mathbb{E} [\delta_n ' (i,j) ]$. Since $\delta_n ' (i,j) = \delta_n ' (0,i) + \delta_n ' (0,j) - 2 \delta_n ' (0,b_{ij}) \leq\delta_n ' (0,i) + \delta_n ' (0,j)$, this yields
\begin{eqnarray*} \mathbb{E} \biggl[\biggl\vert\frac{a_n}{n} \delta_n (i,j) - \delta_n ' (i,j)\biggr\vert^2 \biggr] &\leq&\frac{a_n}{n} \mathbb{E} \bigl[\delta_n ' (0,i) + \delta_n ' (0,j) \bigr]. \end{eqnarray*}\upqed
\end{pf}
\subsection{A first joint convergence} \label{SFstJointCv}
In this section, we first state precisely the convergence theorems we will rely on to prove the following lemmas. To this end, we work in the setting of sums of i.i.d. random variables $S_n = Z_1 + \cdots + Z_n$, where the laws of the $Z_i$ are in the domain of attraction of a stable law. Under additional hypotheses, Theorem~\ref{TLimLocale} below gives a choice of scaling constants $a_n$ for which $S_n / a_n$ converges in law to a stable variable, and a formulation of Gnedenko's local limit theorem in this setting. Next, we will recall a result of Duquesne which shows, in particular, the convergence (\ref{ECvTn}). The version we will use is a joint convergence of three functions encoding the trees $\mathcal{T}_n$ and $\mathcal{T}$. These results will allow us to prove a first joint convergence for the fragmented trees in Proposition~\ref{TFstJointCv}.
\subsubsection{Local limit theorem} \label{SLimLocale}
We say that a measure $\pi$ on $\mathbb{Z}$ is lattice if there exist integers $b \in\mathbb{Z}$, $d \geq2$ such that $\operatorname{supp}(\pi) \subset b + d \mathbb{Z}$. We know from our hypotheses that $\nu$ is critical and aperiodic, and that $\nu(\{0\}) >0$; these three conditions imply that $\nu$ is nonlattice.
For\vspace*{1pt} any $\beta\in(1,2)$, we let $X^{(\beta)}$ be a stable spectrally positive L\'{e}vy process with parameter $\beta$, and $p_t^{(\beta)} (x)$ the density of the law of $X^{(\beta)}_t$. Similarly, for $\beta \in(0,1)$, we let $X^{(\beta)}$ be a stable subordinator with parameter $\beta$, and $q_t^{(\beta)} (x)$ be the density of the law of $X^{(\beta)}_t$. We fix the normalization of these processes by setting, for all $\lambda\geq0$,
\begin{eqnarray*} \mathbb{E} \bigl[e^{-\lambda X^{(\beta)}_t} \bigr] &=& e^{t \lambda ^{\beta}} \qquad\mbox{if } \beta\in(1,2), \\ \mathbb{E} \bigl[e^{-\lambda X^{(\beta)}_t} \bigr] &=& e^{-t \lambda ^{\beta}} \qquad\mbox{if } \beta\in(0,1). \end{eqnarray*}
We also introduce the set $R_{\rho}$ of regularly varying functions with index $\rho$.
\begin{teo} \label{TLimLocale} Let $(Z_i, i \in\mathbb{N})$ be an i.i.d. sequence of random variables in $\mathbb{N}\cup\{-1,0\}$. We denote by $Z$ a random variable having the same law as the $Z_i$. Suppose that the law of $Z$ belongs to the domain of attraction of a stable law of index $\beta\in(0,2) \setminus\{1\}$, and is nonlattice. If $\beta\in(1,2)$, we also suppose that $Z$ is centered. We introduce
\begin{eqnarray*} S_n &=& \sum_{i=1}^n Z_i, \qquad n \geq0. \end{eqnarray*}
Then there exists an increasing function $A \in R_{\beta}$ and a constant $c$ such that:
\begin{longlist}[(ii)]
\item[(i)] It holds that
\begin{equation} \label{ELimLocale1} \mathbb{P} (Z > r ) \sim\frac{c}{A (r)}\qquad\mbox{as } r \rightarrow\infty. \end{equation}
\item[(ii)] Letting $a$ be the inverse function of $A$, and $a_n = a(n)$ for all $n \in\mathbb{N}$, we have
\begin{eqnarray} \label{ELimLocale2} \lim_{n \rightarrow\infty} \sup_{k \in\mathbb{N}} \biggl\vert a_n \mathbb{P} (S_n = k ) - p_1^{(\beta)} \biggl(\frac {k}{a_n} \biggr)\biggr\vert&=& 0. \end{eqnarray}
\end{longlist}
\end{teo}
\begin{pf} Theorem 8.3.1 of \cite{BGT} shows that, since $Z \geq-1$ a.s., the law of $Z$ belongs to the domain of attraction of a stable law of index $\beta$ if and only if $\mathbb{P} (Z>r ) \in R_{-\beta}$. By Theorem 1.5.3 of \cite{BGT}, we may replace $\mathbb{P} (Z>r )$ by an asymptotically equivalent monotone function, hence the existence of an increasing function $A$ such that (\ref{ELimLocale1}) holds, with a constant $c$ which will be chosen hereafter.
The remarks following Theorem 8.3.1 in \cite{BGT} give a characterization of the $a_n$ such that $S_n / a_n$ converges in law to a stable variable of index $\beta$. In particular, it is enough to take $a_n$ such that $n / A(a_n)$ converges, so $a = A^{-1}$ is a suitable choice. We now choose the constant $c$ such that $S_n / a_n$ converges to $X_1^{(\beta)}$. The second point of the theorem is given by Gnedenko's local limit theorem (see, e.g., Theorem 4.2.1 of \cite{IL}). \end{pf}
\subsubsection{Coding the trees $\mathcal{T}_n$ and $\mathcal{T}$} \label{SCodingTrees}
We now recall three classical ways of coding a tree $T \in\mathbb {T}$, namely the associated contour function, height function and Lukasiewicz path. Detailed descriptions and properties of these objects can be found, for example, in \cite{Duq}.
To define the contour function $C^{[n]}$ of $\mathcal{T}_n$, we see $\mathcal{T}_n$ as the embedded tree in the oriented half-plane, with each edge having length $1$. We consider a particle that visits continuously all edges at unit speed, from the left to the right, starting from the root. Then, for every $t \in[0,2n]$, we let $C^{[n]}_t$ be the \emph {height} of the particle at time~$t$, that is, its distance to the root. The height function is defined by\vspace*{1pt} letting $H^{[n]}_j$ be the height of the vertex $v_j$. Finally, for all $i \in\{ 0, \ldots, n \} $, we let $Z^{[n]}_{i+1}$ be the number of offspring of the vertex $v_i$. Then the Lukasiewicz path of $\mathcal{T}_n$ is defined~by
\begin{eqnarray*} W^{[n]}_j &=& \sum_{i=1}^j Z^{[n]}_i -j, \qquad j = 0, \ldots, n+1. \end{eqnarray*}
With this definition, we have $\deg(v_j, \mathcal{T}_n) = W^{[n]}_{j+1} - W^{[n]}_j +1$. We extend $C^{[n]}$ and~$H^{[n]}$ by setting $C^{[n]}_t = 0$ for all $t \in[2n, 2n+2]$ and $H^{[n]}_{n+1}=0$ (this will allow us to keep similar scaling factors for the rescaled functions we introduce in Theorem~\ref{TCvC,H,X}). Figure~\ref{FCodingFunctions} gives the contour function, height function and Lukasiewicz path associated to the tree we used in Figure~\ref{FCutT}.
\begin{figure}
\caption{The contour function $(C^{[n]}_t, 0 \leq t \leq2n+2)$, height function $(H^{[n]}_j, j=0,\ldots,n+1)$ and Lukasiewicz path $(W^{[n]}_j, j=0,\ldots,n+1)$ coding a realization of $\mathcal{T}_n$.}
\label{FCodingFunctions}
\end{figure}
We also use a random walk $(W_j)_{j \geq0}$ whose jump distribution is given by $\mathbb{P} (W_{j+1} - W_j = k ) = \nu(\{k+1\})$ for $k \geq-1$:
\begin{eqnarray*} W_j &=& \sum_{i=1}^j Z_i -j, \qquad j \geq0, \end{eqnarray*}
where $(Z_i)_{i \in\mathbb{N}}$ are i.i.d. variables having law $\nu$. Note that $(W^{[n]}_j, j = 0,\ldots,n+1)$ has the same law as $(W_j, j= 0,\ldots,n+1)$ conditionally on $W_{n+1} = -1$ and \mbox{$W_{j} \geq0$} for all $j \leq n$. In other terms, $(W_n)_{n \geq0}$ has the same law as the Lukasiewicz path associated with a sequence of Galton--Watson trees with offspring distribution $\nu$. From now on, we let $A$ and $a$ be functions given by Theorem~\ref{TLimLocale} for the sequence of i.i.d. variables $(Z_i -1)_{i \in\mathbb{N}}$. Thus, we have the convergence
\begin{equation} \label{ECvW} \frac{1}{a_n} W_n \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)} X^{(\alpha)}_1. \end{equation}
Finally, let $(X_t)_{0 \leq t \leq1}$ be the excursion of length $1$ of the L\'{e}vy process $X^{(\alpha)}$, and~$(H_t)_{0 \leq t \leq1}$ be the excursion of length $1$ of the process $H^{(\alpha)}$ defined in Section~\ref{SFragT}. We will use the following adaptation of the results shown by Duquesne in~\cite{Duq}:
\begin{teo}[(Duquesne)] \label{TCvC,H,X} Consider the rescaled functions $C^{(n)}$, $H^{(n)}$ and~$X^{(n)}$, defined by
\begin{eqnarray*} C^{(n)}_t &=& \frac{a_n}{n} C^{[n]}_{(2n+2)t}, \qquad H^{(n)}_t = \frac{a_n}{n} H^{[n]}_{\lfloor(n+1)t \rfloor}, \qquad X^{(n)}_t = \frac{1}{a_n} W^{[n]}_{\lfloor(n+1)t \rfloor} \end{eqnarray*}
for all $t \in[0,1]$. If $\nu$ is aperiodic and hypothesis (\ref {ECvW}) holds, then we have the joint convergence
\[ \bigl(C^{(n)}_t, H^{(n)}_t, X^{(n)}_t \bigr)_{0 \leq t \leq1} \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, (H_t, H_t, X_t )_{0 \leq t \leq1}. \]
\end{teo}
Proposition 4.3 of \cite{Duq} shows the convergence of the corresponding bridges (with a change of index which comes from the fact that we are working on trees conditioned to have $n$ edges instead of $n$ vertices). Using the continuity of the Vervaat transform as in the proof of \cite{Duq}, Theorem 3.1, then gives the result.
The fact that these convergences hold jointly will be used in the proof of Lemma~\ref{TCvXtilde} below. Apart from this, we will mainly use the convergence of the rescaled Lukasiewicz paths $X^{(n)}$, because of the following link between the rates of our fragmentation and the jumps of $X^{(n)}$. Recall from Section~\ref{SFragT} that $p\dvtx [0,1] \rightarrow\mathcal{T}$ denotes the canonical projection from $[0,1]$ onto $\mathcal{T} $. Now, the set of the branching points of $\mathcal{T}$ is $\{p(t)\dvtx t \in [0,1] \mbox{ s.t. } \Delta X_t >0 \}$, and the associated local times are $L (p(t)) = \Delta X_t$ (see \cite{DuqLG05}, proof of Theorem 4.7, and \cite{Mi05}, Proposition 2). Similarly, we introduce the projection $p_n$ from $K_n:= \{1/(n+1), \ldots, 1\}$ onto $V(\mathcal{T}_n)$, such that $p_n (j/(n+1))$ is the vertex $v_{j-1}$ of $\mathcal{T}_n$. Thus, for all $t \in K_n$, we have
\begin{equation} \label{ElinkX,deg} \Delta X^{(n)}_t = \frac{1}{a_n} \bigl( \deg\bigl( p_n (t), \mathcal{T}_n\bigr) - 1\bigr). \end{equation}
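Let us briefly verify this relation. For $t = j/(n+1) \in K_n$, we have $p_n (t) = v_{j-1}$, and the definitions of $X^{(n)}$ and of the Lukasiewicz path give
\begin{eqnarray*}
\Delta X^{(n)}_{j/(n+1)} &=& \frac{1}{a_n} \bigl(W^{[n]}_{j} - W^{[n]}_{j-1} \bigr) = \frac{1}{a_n} \bigl(Z^{[n]}_{j} - 1 \bigr) = \frac{1}{a_n} \bigl(\deg\bigl(v_{j-1}, \mathcal{T}_n\bigr) - 1 \bigr),
\end{eqnarray*}
since $Z^{[n]}_{j}$ is the number of offspring of $v_{j-1}$.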
We conclude this part by showing another result of joint convergence, for the Lukasiewicz paths of two symmetric sequences of trees. For all $n \in\mathbb{N}$, we introduce the symmetrized tree $\widetilde {\mathcal{T}}_n$, obtained\vspace*{1pt} by reversing the order of the children of~each vertex of $\mathcal{T} _n$. We let $\widetilde{W}{}^{[n]}$ denote the Lukasiewicz path of $\widetilde{\mathcal{T}}_n$. (We would obtain the same process by visiting the vertices of $\mathcal{T}_n$ ``from right to left'' in the depth-first search.) Finally, we define the rescaled process $\widetilde{X}{}^{(n)}$ by
\begin{eqnarray*} \widetilde{X}{}^{(n)}_t &=& \frac{1}{a_n} \widetilde{W}{}^{[n]}_{\lfloor(n+1)t \rfloor} \qquad\forall t \in[0,1]. \end{eqnarray*}
\begin{lem} \label{TCvXtilde} There exists a process $(\widetilde{X}_t)_{0 \leq t \leq1}$ such that there is the joint convergence
\begin{equation} \label{ECvX,Xtilde} \bigl(X^{(n)}, \widetilde{X}{}^{(n)}\bigr) \mathop{ \longrightarrow}_{n \rightarrow \infty}^{(d)}\, (X, \widetilde{X} ). \end{equation}
Moreover:
\begin{itemize}
\item The processes $\widetilde{X}$ and $X$ have the same law.
\item For every jump-time $t$ of $X$,
\begin{eqnarray*} \Delta\widetilde{X}_{1-t-l(t)} &=& \Delta X_t \qquad\mbox{a.s.}, \end{eqnarray*}
where $l(t) = \inf\{ s>t\dvtx X_s = X_{t^-}\} - t$. \end{itemize}
\end{lem}
\begin{pf} Since $\mathcal{T}_n$ and $\widetilde{\mathcal{T}}_n$ have the same law, $\widetilde{X}{}^{(n)}$ converges in distribution to an excursion of the L\'{e}vy process $X^{(\alpha)}$ in the Skorokhod space $\mathbb{D}$. Thus the sequence of the laws of the processes $(X^{(n)}, \widetilde{X}{}^{(n)})$ is tight in $\mathbb{D} \times\mathbb{D}$. Up to extracting a subsequence, we can assume that $(X^{(n)}, \widetilde{X}{}^{(n)})$ converges in distribution to a pair of processes $(X,\widetilde{X})$.
For all $n \in\mathbb{N}$, $j \in\{0,\ldots,n\}$, a simple\vspace*{1pt} computation shows that the vertex $v_j (\mathcal{T}_n)$ corresponds to $v_{\tilde {j}}(\widetilde{\mathcal{T}}_n)$, where
\begin{eqnarray*} \tilde{j} &=& n - j + H^{[n]}_j - D^{[n]}_j, \end{eqnarray*}
and $D^{[n]}_j$ is the number of strict descendants of $v_j (\mathcal{T}_n)$. Note that $D^{[n]}_j$ is the largest integer such that $W^{[n]}_{i} \geq W^{[n]}_j$ for all $i \in[j, j+D^{[n]}_j]$. Then (\ref {ElinkX,deg}) shows that we have
\begin{equation} \label{EJumpsX,Xtilde} \Delta\widetilde{X}{}^{(n)}_{(n - j + H^{[n]}_j - D^{[n]}_j+1)/(n+1)} = \Delta X^{(n)}_{(j+1)/(n+1)}. \end{equation}
For all $n \in\mathbb{N}\cup\{\infty\}$, we let $(s^{(n)}_i)_{i \in \mathbb{N}}$ be the sequence of the times where $X^{(n)}$ has a positive jump, ranked in such a way that the sequence of the jumps $(\Delta X^{(n)}_{s^{(n)}_i})_{i \in\mathbb{N}}$ is nonincreasing. We define the $(\tilde{s}^{(n)}_i)_{i \in\mathbb{N}}$ in a similar way for the $\widetilde {X}{}^{(n)}$, $n \in\mathbb{N}\cup\{\infty\}$. Fix $i \in\mathbb {N}$. Then (\ref{EJumpsX,Xtilde}) can be translated into
\begin{equation} \label{ELocJumpsX,Xtilde} \tilde{s}^{(n)}_i = 1-s^{(n)}_i+ \frac{1}{n+1} \bigl(1 + H^{[n]}_{(n+1) s^{(n)}_i - 1}-D^{[n]}_{(n+1) s^{(n)}_i - 1} \bigr). \end{equation}
Using the Skorokhod representation theorem, we now work under the hypothesis
\[ \bigl(H^{(n)}_t, X^{(n)}_t \bigr)_{0 \leq t \leq1} \mathop{ \longrightarrow}_{n \rightarrow\infty}\, (H_t, X_t )_{0 \leq t \leq1} \qquad\mbox{a.s.} \]
Then the following convergences hold $\mbox{a.s.}$, for all $i \geq1$:
\begin{eqnarray*} s^{(n)}_i &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty}& s_i, \\ \Delta X^{(n)}_{s^{(n)}_i} &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty }& \Delta X_{s_i}, \\ \frac{1}{n+1} H^{[n]}_{(n+1) s^{(n)}_i - 1} &\displaystyle\mathop{ \longrightarrow }_{n \rightarrow\infty}& 0, \\ \frac{1}{n+1} D^{[n]}_{(n+1) s^{(n)}_i - 1} &\displaystyle\mathop{ \longrightarrow }_{n \rightarrow\infty}& l(s_i). \end{eqnarray*}
The first two convergences hold because the $\Delta X_{s_i}$ are distinct, and the last one uses the fact that $\mbox{a.s.}$
\[ \inf_{0 \leq u \leq\varepsilon} X_{s_i+l(s_i)+u} < X_{(s_i)^-} \qquad \forall\varepsilon> 0. \]
As\vspace*{1.5pt} a consequence, $\tilde{s}^{(n)}_i$ converges $\mbox{a.s.}$ to $1-s_i-l(s_i)$. Thus, $\tilde{s}_i = 1-s_i-l(s_i)$ a.s., and $\Delta \widetilde{X}_{\tilde{s}_i} = \Delta X_{s_i}$ a.s. (Since the discontinuity points are countable, this holds jointly for all $i$.)
The L\'{e}vy--It\^{o} representation theorem shows that $\widetilde{X}$ can be written as a measurable function of $(\tilde{s}_i,\Delta \widetilde{X}_{\tilde{s}_i})_{i \in\mathbb{N}}$. This identifies uniquely the law of $(X,\widetilde{X})$, hence~(\ref{ECvX,Xtilde}). \end{pf}
\subsubsection{Joint convergence of the subtree sizes}
Recall from Section~\ref{SFragT} that $(\xi(i), i \in\mathbb{N})$ is a sequence of i.i.d. variables in $\mathcal{T}$, with distribution the mass-measure $\mu$, and $\xi(0) = 0$. For all $n \in\mathbb{N}$, we introduce independent sequences $(\xi_n (i), i \in\mathbb{N})$ of i.i.d. uniform integers in $\{1,\ldots, n\}$, and set $\xi_n (0) = 0$. Recalling the notation of Section~\ref{SModDist}, we let $\tau_n (i,j) = t_n (\xi_n (i), \xi_n (j))$ be the first time when the components $\mathcal{T}_{n,\xi_n (i)} (t)$ and $\mathcal{T}_{n,\xi _n (j)} (t)$ become disjoint. Similarly, $\tau(i,j)$ will denote the first time when the components containing $\xi(i)$ and $\xi(j)$ become disjoint in the fragmentation of $\mathcal{T}$. Our goal is to prove the following result.
\begin{prop} \label{TFstJointCv} As $n \rightarrow\infty$, we have the following weak convergences
\begin{eqnarray*} \frac{a_n}{n} \mathcal{T}_n &\displaystyle\mathop{ \longrightarrow} ^{(d)} &\mathcal{T}, \\ \bigl(\tau_n (i,j) \bigr)_{i,j \in\mathbb{N}} &\displaystyle\mathop{ \longrightarrow} ^{(d)}& \bigl(\tau(i,j) \bigr)_{i,j \in\mathbb{N}}, \\ \bigl(\mu_{n,\xi_n (i)} (t) \bigr)_{i \in\mathbb{N}, t \geq0} &\displaystyle\mathop{ \longrightarrow} ^{(d)}& \bigl(\mu_{\xi(i)} (t) \bigr)_{i \in\mathbb{N}, t \geq0}, \end{eqnarray*}
where the three convergences hold jointly. \end{prop}
For the proof of this proposition, it will be convenient to identify the $\xi_n (i)$ with vertices of $\mathcal{T}_n$ instead of edges. As noted in \cite{BerMi}, proof of Lemma 2, this makes no difference for the result we seek.
We let
\[ t^{(n)}_i = \frac{\xi_n (i)+1}{n+1}, \]
so that $p_n (t^{(n)}_i) = v_{\xi_n (i)} (\mathcal{T}_n)$. Furthermore, we may and will take $\xi(i) = p (t_i)$, with a sequence $(t_i, i \in\mathbb{N})$ of independent uniform variables in $[0,1]$. The sequence $(t^{(n)}_i, i \in\mathbb{N})$ converges in distribution to $(t_i, i \in\mathbb {N})$. Since these sequences are independent of the trees $\mathcal{T}_n$ and $\mathcal{T}$, the Skorokhod representation theorem allows us to assume
\begin{equation} \label{HasCvX,t} \cases{ \displaystyle\bigl( X^{(n)}, \widetilde{X}{}^{(n)} \bigr) \mathop{ \longrightarrow}_{n \rightarrow\infty}\, (X, \widetilde{X} )\qquad\mbox{a.s.}, \vspace*{3pt}\cr \displaystyle\bigl(t^{(n)}_i, i \in\mathbb{N} \bigr) \mathop{ \longrightarrow}_{n\rightarrow\infty}\, (t_i, i \in\mathbb{N} )\qquad \mbox{a.s.}} \end{equation}
We will sometimes write $X^{(\infty)}_t$ and $t^{(\infty)}_i$ for $X_t$ and $t_i$, when it makes notation easier.
For any two vertices $u,v$ of a discrete tree $T$, we introduce the notation
\begin{eqnarray*} [\![ u, v ]\!]_V &=& [\![ u, v ]\!] \cap V(T) \quad\mbox{and}\quad ]\!] u, v [\![_V = [\![ u, v ]\!]_V \setminus\{u,v\}, \end{eqnarray*}
where $ [\![ u, v ]\!]$ is the segment between $u$ and $v$ in $T$ (seen as an $\mathbb{R}$-tree).
\begin{defn} Fix $T \in\mathbb{T}$. The shape of $T$ is the discrete tree $S(T)$ such that
\begin{eqnarray*} V\bigl(S(T)\bigr) &=& \bigl\{v \in V(T)\dvtx \deg v \neq1 \bigr\}, \\ E\bigl(S(T)\bigr) &=& \bigl\{\{u,v\} \in V\bigl(S(T)\bigr)^2\dvtx \forall w \in\,]\!] u, v [\![_V, \deg w = 1\bigr\}. \end{eqnarray*}
\end{defn}
Note that this definition can easily be extended to the case of an $\mathbb{R} $-tree $(T,d)$ having a finite number of leaves, by using the ``convention'' $V(T) = \{v \in T\dvtx \deg v \neq1 \}$ in the previous definition.
For all $n,k \in\mathbb{N}$, we let $\mathcal{R}_n (k)$ denote the shape of the subtree of $\mathcal{T}_n$ spanned by the vertices $\xi_n (1), \ldots, \xi_n (k)$ and the root. Similarly, $\mathcal{R}_{\infty} (k)$ will denote the shape of the subtree of $\mathcal{T}$ spanned by $\xi(1), \ldots, \xi(k)$ and the root. For all $n \in\mathbb{N}\cup\{\infty\}$, we let $V_n (k)$ be the set of the vertices of $\mathcal{R}_n (k)$, and we identify the edges of $\mathcal{R}_n (k)$ with the corresponding segments in $\mathcal{T}_n$. In particular, for any edge $e = \{u,v\}$ of $\mathcal{R}_n (k)$, we write $w \in e$ if $w \in\,]\!] u, v [\![_V$. We let $L_n (v)$ denote the rate at which a vertex $v$ is deleted in $\mathcal{T} _n$. Recall from Section~\ref{SModDist} that $L_n (v) = \deg(v, \mathcal{T}_n)/a_n$.
\begin{lem} \label{TCvL} Fix $k \in\mathbb{N}$. Under (\ref{HasCvX,t}), $\mathcal{R}_n (k)$ is $\mbox{a.s.} $ constant for all $n$ large enough (say $n \geq N$). Identifying $V_n (k)$ with $V_{\infty} (k)$ for all $n \geq N$, we have
\[ \bigl( L_n (v), v \in V_n (k) \bigr) \mathop{ \longrightarrow}_{n \rightarrow\infty}\, \bigl( L (v), v \in V_{\infty} (k) \bigr) \qquad\mbox{a.s.} \]
\end{lem}
The above convergence can be written more rigorously by numbering the vertices of $\mathcal{R}_n (k)$ and $\mathcal{R}_{\infty} (k)$, and indexing on $i \in\{1, \ldots, \vert V_{\infty} (k)\vert\}$, but we keep this form to make the notation easier.
\begin{pf*}{Proof of Lemma~\ref{TCvL}} For all $n \in\mathbb{N}\cup\{\infty\}$, $s<t \in[0,1]$, we let
\begin{eqnarray*} I^{(n)}_{s,t} & =& \inf_{s < u < t} X^{(n)}_u, \end{eqnarray*}
and for all $i,j \in\mathbb{N}$,
\begin{eqnarray*} t^{(n)}_{ij} & =& \sup\bigl\{ s \in \bigl[0,t^{(n)}_i \wedge t^{(n)}_j \bigr]\dvtx I^{(n)}_{s,t^{(n)}_i} = I^{(n)}_{s,t^{(n)}_j}\bigr \}. \end{eqnarray*}
Note that $p_n (t^{(n)}_{ij})$ is the most recent common ancestor of the vertices $\xi_n (i)$ and $\xi_n (j)$ in $\mathcal{T}_n$.
If, for example, $t^{(n)}_i < t^{(n)}_j$, we can rewrite $t^{(n)}_{ij}$ as
\[ \sup\bigl\{ s \in \bigl[0,t^{(n)}_i \bigr]\dvtx X^{(n)}_{s^-} \leq I^{(n)}_{t^{(n)}_i, t^{(n)}_j}\bigr\}. \]
Besides, for $n=\infty$, we can replace the inequality in the broad sense by a strict inequality:
\begin{eqnarray*} t_{ij} &=& \sup\bigl\{ s \in [0,t_i ]\dvtx X_{s^-} < I_{t_i, t_j}\bigr\}. \end{eqnarray*}
With this notation, it is elementary to show that the following properties hold $\mbox{a.s.}$ for all $i,j,i',j' \geq0$:
\begin{longlist}[(iii)]
\item[(i)] $X$ is continuous at $t_i$, and $X^{(n)}_{t^{(n)}_i}$ converges to $X_{t_i}$ as $n \rightarrow\infty$.
\item[(ii)] $t^{(n)}_{ij}$ converges to $t_{ij}$ as $n \rightarrow \infty$.
\item[(iii)] $X^{(n)}_{t^{(n)}_{ij}}$ converges to $X_{t_{ij}}$ and $X^{(n)}_{(t^{(n)}_{ij})^-}$ converges to $X_{(t_{ij})^-}$ as $n \rightarrow\infty$.
\item[(iv)] If $t_{ij} = t_{i'j'}$, then $t^{(n)}_{ij} = t^{(n)}_{i'j'}$ for all $n$ large enough. \end{longlist}
We now fix $k \in\mathbb{N}$. We introduce the set
\begin{eqnarray*} B_n (k) &=& \bigl\{t^{(n)}_{i}\dvtx i \in\{1, \ldots, k\}\bigr\} \cup\bigl\{t^{(n)}_{ij}\dvtx i,j \in\{1, \ldots, k\} \bigr\} \cup\{0\} \end{eqnarray*}
of the times coding the vertices of $\mathcal{R}_n (k)$. We let $N_n (k)$ be the number of elements of $B_{n} (k)$, and $b^{(n,k)}_i$ be the $i$th element of $B_n (k)$.
Properties (i)--(iv) can be translated into the $\mbox{a.s.}$ properties:
\begin{longlist}[(ii)$'$]
\item[(i)$'$] For $n$ large enough, $N_n (k)$ is constant.
\item[(ii)$'$] For all $i \in\{1, \ldots, N_{\infty} (k)\}$,
\begin{eqnarray*} b^{(n,k)}_i &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty}& b^{(\infty,k)}_i, \\ X^{(n)}_{b^{(n,k)}_i} &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty }& X_{b^{(\infty,k)}_i}, \\ X^{(n)}_{(b^{(n,k)}_i)^-} &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow \infty}& X_{(b^{(\infty,k)}_i)^-}. \end{eqnarray*}
\end{longlist}
Moreover, $\mathcal{R}_n (k)$ and the $L_n (v)$, $v \in V_n (k)$, can be recovered in a simple way using $B_n (k)$ and the $X^{(n)}_b$, $b \in B_n (k)$:
\begin{itemize}
\item Construct a graph with vertices labeled by $B_n (k)$, the root having label $0$.
\item For every $b \in B_n (k) \setminus\{0\}$, let $b'$ denote the largest $b'' < b$ such that $b'' \in B_n (k)$ and $X^{(n)}_{b''} \leq X^{(n)}_b$, then draw an edge between the vertices labelled $b$ and $b'$.
\item For each vertex $v$ labeled by $b \in B_n (k)$, let $L_n (v) = \Delta X^{(n)}_{b} + 1/a_n$. \end{itemize}
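As a quick consistency check, the last rule agrees with the definition $L_n (v) = \deg(v, \mathcal{T}_n)/a_n$: for instance, for $b = t^{(n)}_i$, which labels the vertex $v_{\xi_n (i)} (\mathcal{T}_n)$, relation (\ref{ElinkX,deg}) gives
\begin{eqnarray*}
\Delta X^{(n)}_{t^{(n)}_i} + \frac{1}{a_n} &=& \frac{1}{a_n} \deg\bigl(v_{\xi_n (i)} (\mathcal{T}_n), \mathcal{T}_n \bigr) = L_n \bigl(v_{\xi_n (i)} (\mathcal{T}_n) \bigr).
\end{eqnarray*}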
This entails the lemma. \end{pf*}
This first lemma allows us to control the rate at which fragmentations happen at the vertices of $\mathcal{R}_n (k)$. We now need another quantity for the fragmentations happening ``on the branches'' of $\mathcal{R}_n (k)$, that is, at vertices $v \in V (\mathcal{T}_n) \setminus V_n (k)$. For every $n \in\mathbb{N}\cup\{\infty\}$, we let
\begin{eqnarray*} \sigma_n (t) &=& \mathop{\sum_{0 < s < t}}_{X^{(n)}_{s-} < I^{(n)}_{s,t}} \Delta X^{(n)}_s \qquad\forall t \in[0,1]. \end{eqnarray*}
If $n \in\mathbb{N}$, the quantity $a_n \sigma_n (t)$ is the sum of the quantities $\deg v-1$ over all strict ancestors $v \neq\rho_n$ of $p_n (t)$ in $\mathcal{T}_n$. Similarly, $\sigma_{\infty} (t)$ is the (infinite) sum of the $L(v)$ over all branching points $v$ of $\mathcal{T}$ that lie on the path $[\![ p(t), \rho]\!]$.
\begin{lem} \label{TCvsigma} With the preceding notation, in the setting of (\ref{HasCvX,t}), for all $i \in\{1, \ldots, N_{\infty}(k)\}$, we have the convergence
\[ \sigma_n \bigl(b^{(n,k)}_i\bigr) \mathop{ \longrightarrow}_{n \rightarrow\infty } \sigma_{\infty } \bigl(b^{(\infty,k)}_i \bigr)\qquad\mbox{a.s.} \]
\end{lem}
\begin{pf} We fix $i \in\mathbb{N}$ and, to simplify the notation, let $b_n = b^{(n,k)}_i$ for $n \in\mathbb{N}$ and $b = b^{(\infty,k)}_i$. For all $n \in\mathbb{N}\cup\{\infty\}$, we write $\sigma_n (t) = \sigma_n^- (t) + \sigma_n^+ (t)$, where
\begin{eqnarray*} \sigma_n^+ (t) &=& \mathop{\sum_{0 < s < t}}_{X^{(n)}_{s-} < I^{(n)}_{s,t}}\bigl(X^{(n)}_{s} - I^{(n)}_{s,t} \bigr), \\ \sigma_n^- (t) &=& \mathop{\sum_{0 < s < t}}_{X^{(n)}_{s-} < I^{(n)}_{s,t}} \bigl(I^{(n)}_{s,t} - X^{(n)}_{s^-} \bigr). \end{eqnarray*}
For any $s,t$ such that $0 < s < t$ and $X^{(n)}_{s-} < I^{(n)}_{s,t}$, the term $a_n (X^{(n)}_{s} - I^{(n)}_{s,t} )$ corresponds to the number of children of $p_n(s)$ that are visited before $p_n(t)$ in the depth-first search, and $a_n (I^{(n)}_{s,t} - X^{(n)}_{s^-} )$ is the number of children of $p_n(s)$ that are visited after $p_n(t)$. Writing the same decomposition $\tilde{\sigma}_n (t) = \tilde{\sigma}_n^- (t) + \tilde{\sigma}_n^+ (t)$ for the trees $\widetilde{\mathcal {T}}_n$, and recalling (\ref{ELocJumpsX,Xtilde}), we thus get
\begin{eqnarray*} \sigma_n^+ (b_n) &=& \tilde{\sigma}_n^- ( \tilde{b}_n ), \end{eqnarray*}
where
\begin{eqnarray*} \tilde{b}_n &=& 1-b_n+\frac{1}{n+1} \bigl(1 + H^{[n]}_{(n+1) b_n - 1}-D^{[n]}_{(n+1) b_n - 1} \bigr). \end{eqnarray*}
Now we note that for all $t \geq0$, we have $\sigma_n^- (t) = X^{(n)}_{t^-}$ and $\sigma_{\infty}^- (t) = X_{t^-}$. As a consequence, using (\ref{HasCvX,t}), we get
\[ \sigma_n^- (b_n) \mathop{ \longrightarrow}_{n \rightarrow\infty} X_{b^-} \qquad\mbox{a.s.} \]
The same relation for $\tilde{\sigma}_n^-$ and $\widetilde{X}{}^{(n)}$, and the fact that $\tilde{b}_n$ converges $\mbox{a.s.}$ to $\tilde {b}:= 1-b-l(b)$, show that
\begin{eqnarray*} \sigma_n^+ (b_n) &=& \tilde{\sigma}_n^- ( \tilde{b}_n) \mathop{ \longrightarrow}_{n \rightarrow\infty} \widetilde{X}_{\tilde{b}^-} \qquad\mbox{a.s.} \end{eqnarray*}
Thus, $\sigma_n (b_n)$ converges $\mbox{a.s.}$ to $\sigma_{\infty }^- (b) + \tilde{\sigma}_{\infty}^- (\tilde{b})$. To show that this quantity is equal to $\sigma_{\infty} (b)$, we introduce the ``truncated'' sums $\sigma_{n,\varepsilon} (t)$, $\sigma_{n,\varepsilon}^+ (t)$, $\sigma_{n,\varepsilon}^- (t)$, obtained by taking into account only the $s \in(0,t)$ such that $X^{(n)}_{s-} < I^{(n)}_{s,t}$ and $\Delta X^{(n)}_s > \varepsilon$. For all $n \in\mathbb{N}\cup\{\infty\}$, these quantities are finite sums. Therefore, the $\mbox{a.s.}$ convergence~(\ref{HasCvX,t}) implies that for all $\varepsilon> 0$,
\[ \sigma_{\infty,\varepsilon}^+ (b) = \lim_{n \rightarrow\infty} \sigma_{n,\varepsilon}^+ (b_n) = \lim_{n \rightarrow\infty} \tilde{ \sigma}_{n,\varepsilon}^- (\tilde{b}_n) =\tilde{\sigma}_{\infty,\varepsilon}^- (\tilde{b}). \]
Thus, $\sigma_{\infty,\varepsilon} (b) = \sigma_{\infty,\varepsilon}^- (b) + \tilde{\sigma}_{\infty,\varepsilon}^- (\tilde{b})$. By letting $\varepsilon\rightarrow0$, we get $\sigma_{\infty} (b) = \sigma _{\infty}^- (b) + \tilde{\sigma}_{\infty}^- (\tilde{b})$. \end{pf}
We now come back to the proof of Proposition~\ref{TFstJointCv}.
\begin{pf*}{Proof of Proposition~\ref{TFstJointCv}} For all $n \in\mathbb{N}\cup\{\infty\}$, we add edge-lengths to the discrete tree $\mathcal{R}_n (k)$ by letting
\begin{eqnarray*} \ell_n \bigl(\{u,v\}\bigr) &=& d_n (u,v)\qquad\mbox{if } n \in\mathbb{N}, \\ \ell_{\infty} \bigl(\{u,v\}\bigr) &=& d (u,v), \end{eqnarray*}
for every edge $\{u,v\}$. Let $\mathcal{R}'_n (k)$ denote the resulting tree with edge-lengths. We now write $\mathcal{R}_n (k,t)$ for the tree $\mathcal{R}'_n (k)$ endowed with point processes of marks on its edges and vertices, defined as follows:
\begin{itemize}
\item The marks on the vertices of $\mathcal{R}_n (k)$ appear at the same time as the marks on the corresponding vertices of $\mathcal{T}_n$.
\item Each edge receives a mark at its midpoint at the first time when a vertex $v$ of $\mathcal{T}_n$ such that $v \in e$ is marked in $\mathcal{T}_n$. \end{itemize}
For each $n$, these two point processes are independent, and their rates are the following:
\begin{itemize}
\item Each vertex $v \in V_n (k)$ is marked at rate $L_n (v)$, independently of the other vertices.
\item For each edge $e$ of $\mathcal{R}_n (k)$, letting $b, b'$ denote the points of $B_n (k)$ corresponding to $e^-, e^+$ (as explained in the proof of Lemma~\ref{TCvL}), the edge $e$ is marked at rate $\Sigma L_n (e)$, independently of the other edges, with
\begin{eqnarray*} \Sigma L_n (e) & =& \sum_{v \in V (\mathcal{T}_n) \cap e} L_n (v) \\ & =& \sigma_n \bigl(b'\bigr) - \sigma_n (b) + \frac{n}{a_n^2} \bigl( H^{(n)}_{(b')^-} - H^{(n)}_{b^-} \bigr) - L_n \bigl(e^-\bigr) \end{eqnarray*}
if $n \in\mathbb{N}$, and
\begin{eqnarray*} \Sigma L_{\infty} (e) &=& \Sigma L (e) = \sum_{v \in V (\mathcal{T}) \cap e} L (v) = \sigma_{\infty} \bigl(b'\bigr) - \sigma_{\infty} (b) - L\bigl(e^-\bigr). \end{eqnarray*}
\end{itemize}
Now Lemmas~\ref{TCvL} and~\ref{TCvsigma} show that $L_n (v)$ and $\Sigma L_n (e)$ converge to $L (v)$ and $\Sigma L (e)$ (resp.) as $n \rightarrow\infty$.
Therefore, we have the convergence
\begin{equation} \label{ECvRmarques} \biggl( \frac{a_n}{n} \mathcal{R}_n (k,t), t \geq0 \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl( \mathcal {R}_{\infty} (k,t), t \geq0 \bigr), \end{equation}
where $(a_n/n) \cdot\mathcal{R}_n (k,t)$ and $\mathcal{R}_{\infty} (k,t)$ can be seen as random variables in $\mathbb{T} \times(\mathbb{R}_+ \cup\{-1\})^{\mathbb{N}} \times\{-1,0,1\}^{\mathbb{N}^2}$, for example,
\[ (a_n/n) \cdot\mathcal{R}_n (k,t) = \bigl( \mathcal{R}_n (k), (l_i)_{i \geq1}, \bigl( \delta_V (i,t)\bigr)_{i \geq0}, \bigl(\delta_E (i,t)\bigr)_{i \geq1} \bigr), \]
where
\begin{eqnarray*} l_i &=& \cases{ (a_n/n) \cdot\ell\bigl(e_i \bigl(\mathcal{R}_n (k)\bigr)\bigr), &\quad if $i < N_n(k)$, \vspace*{2pt}\cr -1, &\quad if $i \geq N_n (k)$,} \\ \delta_V (i,t) &=& \cases{ 1, &\quad if $i < N_n (k)$ and the vertex $v_i \bigl(\mathcal{R}_n (k)\bigr)$ \cr & \qquad\quad has been marked before time $t$, \vspace*{2pt}\cr 0, &\quad if $i < N_n (k)$ and the vertex $v_i \bigl(\mathcal{R}_n (k)\bigr)$ \cr & \qquad\quad has not been marked before time $t$, \vspace*{2pt}\cr -1,&\quad if $i \geq N_n (k)$,} \\ \delta_E (i,t) &=& \cases{ 1, &\quad if $i < N_n (k)$ and the edge $e_i \bigl(\mathcal{R}_n(k)\bigr)$ \cr &\qquad\quad has been marked before time $t$, \vspace*{2pt}\cr 0, &\quad if $i < N_n (k)$ and the edge $e_i \bigl(\mathcal{R}_n(k)\bigr)$ \cr &\qquad\quad has not been marked before time $t$, \vspace*{2pt}\cr -1,&\quad if $i \geq N_n (k)$} \end{eqnarray*}
[recall that $N_n (k)$ is the number of vertices of $\mathcal{R}_n (k)$]. Note that we could keep working under (\ref{HasCvX,t}) to get an $\mbox{a.s.}$ convergence, but this is no longer necessary.
The rest of the proof goes as in \cite{BerMi}. For every $i \in \mathbb{N}$, we let $\eta_n (k,i,t)$ denote the number of vertices among $\xi_n (1), \ldots, \xi_n (k)$ in the component of $\mathcal{R}_n (k)$ containing $\xi_n (i)$ at time $t$. Similarly, denote by $\eta _{\infty} (k,i,t)$ the number of vertices among $\xi(1), \ldots, \xi (k)$ in the component of $\mathcal{R}_{\infty} (k)$ containing $\xi (i)$ at time $t$. It follows from (\ref{ECvRmarques}) that we have the joint convergences
\begin{eqnarray*} \frac{a_n}{n} \mathcal{T}_n &\displaystyle\mathop{ \longrightarrow} ^{(d)}& \mathcal{T}, \\ \bigl(\eta_n (k,i,t)\bigr)_{t \geq0, i \in\mathbb{N}} &\displaystyle\mathop{\longrightarrow } ^{(d)}& \bigl(\eta_{\infty} (k,i,t) \bigr)_{t \geq0, i \in\mathbb{N}}, \\ \bigl(\tau_n (i,j)\bigr)_{i,j \in\mathbb{N}} &\displaystyle\mathop{ \longrightarrow} ^{(d)}& \bigl(\tau(i,j)\bigr)_{i,j \in\mathbb{N}}. \end{eqnarray*}
Besides, the law of large numbers gives that for each $i \in\mathbb {N}$ and $t \geq0$,
\[ \frac{1}{k} \eta_{\infty} (k,i,t) \mathop{ \longrightarrow}_{k \rightarrow\infty} \mu_{\xi(i)} (t) \qquad\mbox{a.s.} \]
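Indeed, $\eta_{\infty} (k,i,t)$ counts the indices $j \in\{1, \ldots, k\}$ such that $\xi(j)$ lies, at time $t$, in the same component as $\xi(i)$, that is, such that $\tau(i,j) > t$ (counting $j=i$). Conditionally on $\mathcal{T}$, its fragmentation and $\xi(i)$, the points $\xi(j)$, $j \neq i$, are i.i.d. with law $\mu$, and
\begin{eqnarray*}
\mathbb{P} \bigl(\tau(i,j) > t \mid\mathcal{T}, \mbox{its fragmentation}, \xi(i) \bigr) &=& \mu_{\xi(i)} (t) \qquad\mbox{for } j \neq i,
\end{eqnarray*}
so the stated convergence is the conditional strong law of large numbers as $k \rightarrow\infty$.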
Thus, for every fixed integer $l$ and times $0 \leq t_1 \leq\cdots \leq t_l$, we can construct a sequence $k_n \rightarrow\infty$ sufficiently slowly, such that
\[ \biggl(\frac{1}{k_n}\eta_n (k_n,i,t_j) \biggr)_{i,j \in\{1, \ldots,l\}} \mathop{ \longrightarrow} ^{(d)}\, \bigl(\mu_{\xi(i)} (t_j)\bigr)_{i,j \in \{1, \ldots,l\}}, \]
or equivalently (see \cite{AldPit}, Lemma 11)
\begin{eqnarray*} &&\bigl( \mu_{n,\xi_n (i)} (t_j) \bigr)_{i,j \in\{1, \ldots,l\}} \mathop{ \longrightarrow} ^{(d)}\, \bigl(\mu_{\xi(i)} (t_j)\bigr)_{i,j \in\{1, \ldots,l\}}, \end{eqnarray*}
both holding jointly with the preceding convergences. This entails the proposition. \end{pf*}
\subsection{Upper bound for the expected component mass} \label{SKeyEstimates}
To get the convergence of $(\mathcal{T}_n,\operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n))$, we will finally need to control the quantities
\begin{eqnarray*} &&\mathbb{E} \biggl[\int_{2^l}^{\infty} \mu_{n,\xi_n} (t) \,dt \biggr], \end{eqnarray*}
where $\xi_n$ is a uniform random integer in $\{1, \ldots, n\}$. Our main goal is to show that these quantities converge to $0$ as $l$ tends to $\infty$, uniformly in $n$, as stated in Corollary~\ref{TCor1}.
To this end, we will sometimes work under the size-biased measure $\mathrm{GW}^{\ast}$, defined as follows. We recall that a pointed tree is a pair $(T,v)$, where $T$ is a rooted planar tree and $v$ is a vertex of $T$. The measure $\mathrm{GW}^{\ast}$ is the sigma-finite measure such that, for every pointed tree $(T,v)$,
\begin{eqnarray*} \mathrm{GW}^{\ast} (T,v) &=& \mathbb{P} (\mathbf{T}=T ), \end{eqnarray*}
where $\mathbf{T}$ is a Galton--Watson tree with offspring distribution $\nu$. We let $\mathbb{E}^{\ast}$ denote the expectation under this ``law.'' In particular, the conditional law $\mathrm{GW}^{\ast}$ given $\vert V (T)\vert= n+1$ is well-defined, and corresponds to the distribution of a pair $(\mathcal{T}_n,v)$ where, given $\mathcal{T}_n$, $v$ is a uniform random vertex of $\mathcal{T}_n$. Hereafter, $T$ will denote a \mbox{$\nu$-}Galton--Watson tree, and expectations involving $T$ will be taken either under the unbiased law or under a conditioned version of the law $\mathrm{GW}^{\ast}$. Recall that we only consider values of $n$ such that $P_n = \mathbb{P} (\vert V (T)\vert= n+1 ) \neq0$.
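In particular, for every nonnegative function $f$ of pointed trees, the definition of $\mathrm{GW}^{\ast}$ gives
\begin{eqnarray*}
\mathbb{E}^{\ast} \bigl[f (T,v ) \bigr] &=& \mathbb{E} \biggl[\sum_{v \in V (\mathbf{T})} f (\mathbf{T},v ) \biggr],
\end{eqnarray*}
and it is in this form that the size-biased measure will be used below.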
For all $m, n \in\mathbb{N}$ such that $m \leq n$ and $P_m \neq0$, for all $t \in\mathbb{R}_+$, we define
\begin{equation} \label{EDefEn} E_{m,n} (t)= \frac{1}{m} \mathbb{E} \biggl[\sum _{e \in E (\mathcal {T}_m)} \exp\biggl(- \sum _{u \in[\![ \rho_m, e^- ]\!]_V} \deg(u, \mathcal{T}_m) \frac{t}{a_n} \biggr) \biggr], \end{equation}
and $E_n (t) = E_{n,n} (t)$. Equivalently, we can write
\begin{eqnarray*} E_{m,n} (t)&=& \frac{1}{m} \mathbb{E}^{\ast} \biggl[\sum _{e \in E (T)} \exp\biggl(- \sum _{u \in[\![ \rho(T), e^-
]\!]_V} \deg(u, T) \frac{t}{a_n} \biggr) \Big| \bigl\vert V(T)\bigr\vert=m+1 \biggr]. \end{eqnarray*}
For all $m<n$, we also use the notation
\begin{eqnarray*}
&&P^{\ast}_{m,n}:= \mathbb{P}^{\ast} \bigl(\bigl\vert V (T_v)\bigr\vert= m+1 | \bigl\vert V (T)\bigr\vert= n+1 \bigr), \end{eqnarray*}
where $T_v$ denotes the tree formed by $v$ and its descendants. Our first step is to show the following.
\begin{lem} \label{TMajEspMu} Let $\xi_n$ be a uniform random edge of $\mathcal{T}_n$. Using the previous notation, we have
\begin{equation} \label{EmajEspMu} \mathbb{E} \bigl[\mu_{n,\xi_n} (t) \bigr] \leq \frac{1}{n} e^{-t/a_n} + 2 \biggl(E_n (t) + \mathop{\sum _{m=1}}_{P_m \neq0}^{n-1} P^{\ast}_{m,n} \frac {m}{n} E_{m,n} (t) \biggr). \end{equation}
\end{lem}
The proof of this lemma will use Proposition~\ref{TLoisTv} below. Let us first introduce some notation. For all $v \in V (T)$, we let $T^v$ be the subtree obtained by deleting all the strict descendants of $v$ in $T$, and as before, $T_v$ be the tree formed by $v$ and its descendants. We define a new tree $\hat{T}^{\hat{v}}$, constructed by taking $T^v$ and modifying it as follows:
\begin{itemize}
\item we remove the edge $e(v)$ between $v$ and $p(v)$;
\item we add a new child $\hat{v}$ to the root, and let $\hat {e}_{\hat{v}}$ denote the edge between $\hat{v}$ and the root;
\item we reroot the tree at $p(v)$. \end{itemize}
An example of this construction is given in Figure~\ref {FTransformations}. Note that we have natural bijective correspondences between\vspace*{2pt} $V (T)$, $( V (T^v) \setminus\{v\} ) \sqcup V (T_v)$ and $( V (\hat{T}^{\hat{v}}) \setminus\{\hat{v}\} ) \sqcup V (T_v)$, and between $E (T)$, $E (T^v) \sqcup E (T_v)$ and $E (\hat{T}^{\hat{v}}) \sqcup E (T_v)$. Furthermore, one can easily check that for all $u \in V (\hat{T}^{\hat{v}}) \setminus\{\hat{v}\}$, we have $\deg(u,\hat {T}^{\hat{v}}) = \deg(u,T)$, and for all $u \in V (T_v)$, $\deg(u,T_v) = \deg(u,T)$.
\begin{figure}\label{FTransformations}
\end{figure}
This transformation is the same as in \cite{BerMi}, page~21, except that we work with rooted trees instead of planted trees. In our case, adding the edge $\hat{e}_{\hat{v}}$ and deleting $e(v)$ mimics the existence of a base edge. Thus, we can use Proposition 2 of \cite{BerMi}.
\begin{prop} \label{TLoisTv} Under $\mathrm{GW}^{\ast}$, $(\hat{T}^{\hat{v}}, T_v)$ and $(T^v,T_v)$ have the same ``law,'' and the trees $T^v$ and $T_v$ are independent, with $T_v$ being a Galton--Watson tree. \end{prop}
\begin{pf*}{Proof of Lemma~\ref{TMajEspMu}} In this proof, we identify $\xi_n$ with the edge $e_{\xi_n}$, to make notation easier. We first note that for each edge $e \in E (\mathcal{T}_n)$, $e$ belongs to the component $\mathcal{T}_{n,\xi_n} (t)$ if and only if no vertex on the path $ [\![ e^-, \xi_n^- ]\!]_V$ has been removed at time $t$. Given $\mathcal{T}_n$ and $\xi_n$, this happens with probability
\begin{eqnarray*} &&\exp\biggl(- \sum_{u \in[\![ e^-, \xi_n^- ]\!]_V} \deg u \cdot \frac {t}{a_n} \biggr) \end{eqnarray*}
[for any vertex $u$, at time $t$, $u$ has been deleted from the initial tree with probability $1-\exp(- \deg u \cdot t/a_n)$]. Thus,
\begin{eqnarray*} \mathbb{E} [n \mu_{n,\xi_n} ] &=& \mathbb{E} \biggl[\sum _{e \in E (\mathcal{T}_n)} \mathbh{1}_{e \in\mathcal{T}_{n,\xi_n} (t)} \biggr] \\ &=& \mathbb{E} \biggl[ \sum_{e \in E (\mathcal{T}_n)} \exp\biggl(- \sum _{u \in[\![ e^-, \xi_n^- ]\!]_V} \deg u \cdot\frac{t}{a_n} \biggr) \biggr]. \end{eqnarray*}
Since the edge $\xi_n$ is chosen uniformly in $E (\mathcal{T}_n)$, this yields
\begin{eqnarray*} \mathbb{E} [n \mu_{n,\xi_n} ] & =& \frac{1}{n} \mathbb{E} \biggl[ \sum_{e,\xi\in E (\mathcal{T}_n)} \exp\biggl(- \sum _{u \in[\![ e^-, \xi^- ]\!]_V} \deg u \frac{t}{a_n} \biggr) \biggr] \\ & =& \frac{1}{n} \mathbb{E} \biggl[\sum_{v \in V (\mathcal{T}_n)} \mathbh{1}_{v \neq\rho(\mathcal{T} _n)} \sum_{e \in E (\mathcal {T}_n)} \exp\biggl(- \sum_{u \in[\![ e^-, p (v) ]\!]_V} \deg u \frac{t}{a_n} \biggr) \biggr], \end{eqnarray*}
where $p (v)$ denotes the parent of vertex $v$. Hence, calling $A_n(T)$ the event $\{\vert V (T)\vert= n+1\}$,
\begin{eqnarray*} && \mathbb{E} [n \mu_{n,\xi_n} ] = \frac{n+1}{n} \mathbb {E}^{\ast} \biggl[\mathbh{1}_{v \neq\rho(T)} \sum _{e \in E (T)} \exp\biggl(- \sum_{u \in[\![ e^-, p (v) ]\!]_V} \deg u \frac
{t}{a_n} \biggr) \Big| A_n(T) \biggr]. \end{eqnarray*}
Distinguishing the cases for which $e \in E (T_v), e \in E (T^v) \setminus\{e(v)\}$ and $e = e(v)$, we split this quantity into three terms:
\begin{equation} \label{EDecEspMu} \mathbb{E} [n \mu_{n,\xi_n} ] = \biggl(1+\frac {1}{n} \biggr) \bigl(\Sigma_v + \Sigma^v + \varepsilon_v \bigr), \end{equation}
where
\begin{eqnarray*} \Sigma_v & =& \mathbb{E}^{\ast} \biggl[\mathbh{1}_{v \neq\rho(T)} \sum_{e \in E (T_v)} \exp\biggl(- \sum _{u \in[\![ e^-, v ]\!]_V} \bigl(\deg(u,T_v) + \deg p (v) \bigr)
\frac{t}{a_n} \biggr) \Big| A_n(T) \biggr], \\ \Sigma^v & =& \mathbb{E}^{\ast} \biggl[\mathbh{1}_{v \neq\rho(T)} \sum_{e \in E (T^v) \setminus\{e(v)\}} \exp\biggl(- \sum _{u \in [\![ e^-, p (v) ]\!]_V} \deg\bigl(u,T^v\bigr) \frac{t}{a_n}
\biggr) \Big| A_n(T) \biggr] \end{eqnarray*}
and
\begin{eqnarray*} \varepsilon_v &=& \mathbb{E}^{\ast} \biggl[ \mathbh{1}_{v \neq\rho
(T)} \exp\biggl(-\deg p (v) \frac{t}{a_n} \biggr) \Big| A_n(T) \biggr]. \end{eqnarray*}
For the first term, we have
\begin{eqnarray*} &&\Sigma_v \leq\mathbb{E}^{\ast} \biggl[\mathbh{1}_{v \neq\rho(T)} \sum_{e \in E (T_v)} \exp\biggl(- \sum _{u \in[\![ \rho(T_v), e^- ]\!]_V} \deg(u,T_v) \frac
{t}{a_n} \biggr) \Big| A_n(T) \biggr]. \end{eqnarray*}
Since $\vert V(T)\vert= \vert V(T_v)\vert + \vert V (T^v)\vert- 1$, this gives
\begin{eqnarray*} \Sigma_v &\leq& \mathop{\sum_{m=1}}_{P_m \neq0}^{n-1} P^{\ast}_{m,n} \mathbb{E}^{\ast} \left[ \sum_{e \in E (T_v)} \exp\biggl(- \sum _{u \in[\![ \rho(T_v), e^- ]\!]_V} \deg(u,T_v) \frac {t}{a_n} \biggr)
\bigg|\right. \\ &&\hspace*{159pt} \left.\begin{array} {l} \bigl\vert V (T_v)\bigr\vert= m+1, \\[3pt] \bigl\vert V \bigl(T^v\bigr)\bigr\vert= n - m + 1 \end{array}
\right] \end{eqnarray*}
[$m=n$ would correspond to the case where $v = \rho(T)$, and $m=0$ to the case where $E(T_v) = \varnothing$]. Proposition~\ref{TLoisTv} gives that the trees $T_v$ and $T^v$ are independent, with $T_v$ being a Galton--Watson tree. Hence,
\begin{eqnarray}\label{EMajEspMu1} \qquad\Sigma_v & \leq&\mathop{\sum_{m=1}}_{P_m \neq0}^{n-1} P^{\ast}_{m,n} \mathbb{E}^{\ast} \biggl[\sum _{e \in E (T)} \exp\biggl(- \sum_{u \in [\![ \rho(T), e^- ]\!]_V} \deg(u,T) \frac
{t}{a_n} \biggr) \Big| A_m(T) \biggr] \nonumber\\[-8pt]\\[-8pt]\nonumber & \leq&\mathop{\sum_{m=1}}_{P_m \neq0}^{n-1} P^{\ast}_{m,n} m E_{m,n} (t). \end{eqnarray}
For the second term, we use the correspondence between $E (T^v) \setminus\{e(v)\}$ and $E (\hat{T}^{\hat{v}}) \setminus\{\hat {e}_{\hat{v}}\}$, and the fact that $\rho(\hat{T}^{\hat{v}}) = p(v)$:
\begin{eqnarray*} \Sigma^v &=& \mathbb{E}^{\ast} \biggl[\mathbh{1}_{v \neq\rho(T)} \sum_{e \in E (\hat{T}^{\hat{v}}) \setminus\{\hat{e}_{\hat {v}}\}} \exp\biggl(- \sum _{u \in[\![ \rho(\hat {T}^{\hat{v}}), e^- ]\!]_V} \deg\bigl(u,\hat{T}^{\hat{v}}\bigr)
\frac{t}{a_n} \biggr) \Big| A_n(T) \biggr]. \end{eqnarray*}
This gives
\begin{eqnarray*} \Sigma^v &\leq&\mathbb{E}^{\ast} \biggl[\sum _{e \in E (\hat{T}^{\hat {v}})} \exp\biggl(- \sum_{u \in[\![ \rho(\hat{T}^{\hat{v}}), e^- ]\!]_V}
\deg\bigl(u,\hat{T}^{\hat{v}}\bigr) \frac{t}{a_n} \biggr) \Big| A_n(T) \biggr]. \end{eqnarray*}
Using the fact that $T^v$ and $\hat{T}^{\hat{v}}$ have the same law under $\mathrm{GW}^{\ast}$, we get
\begin{eqnarray*} &&\Sigma^v \leq\mathbb{E}^{\ast} \biggl[\sum _{e \in E (T^v)} \exp\biggl(- \sum_{u \in[\![ \rho(T^v), e^- ]\!]_V}
\deg\bigl(u,T^v\bigr) \frac{t}{a_n} \biggr) \Big| A_n(T) \biggr]. \end{eqnarray*}
Seeing $E(T^v)$ as a subset of $E(T)$, we can write
\begin{equation} \label{EMajEspMu2}
\qquad\Sigma^v \leq\mathbb{E}^{\ast} \biggl[\sum _{e \in E (T)} \exp\biggl(- \sum _{u \in[\![ \rho(T), e^- ]\!]_V} \deg(u,T) \frac{t}{a_n} \biggr) \Big| A_n(T) \biggr] = n E_n (t). \end{equation}
For the third term, we simply notice that
\begin{equation} \label{EMajEspMu3} \varepsilon_v \leq\frac{n}{n+1} e^{-t/a_n}. \end{equation}
Putting together (\ref{EMajEspMu1}), (\ref{EMajEspMu2}) and (\ref {EMajEspMu3}) into (\ref{EDecEspMu}), we finally get
\begin{eqnarray*} &&\mathbb{E} \bigl[n \mu_{n,\xi_n} (t) \bigr] \leq e^{-t/a_n} + \biggl(1+\frac {1}{n} \biggr) \biggl(n E_n (t) + \mathop{\sum _{m=1}}_{P_m \neq 0}^{n-1} P^{\ast}_{m,n} m E_{m,n} (t) \biggr). \end{eqnarray*}
Thus,
\begin{eqnarray*} &&\mathbb{E} \bigl[\mu_{n,\xi_n} (t) \bigr] \leq\frac{1}{n} e^{-t/a_n} + \biggl(1+\frac {1}{n} \biggr) \biggl(E_n (t) + \mathop{\sum_{m=1}}_{P_m \neq 0}^{n-1} P^{\ast}_{m,n} \frac{m}{n} E_{m,n} (t) \biggr). \end{eqnarray*}\upqed
\end{pf*}
Next, we compute $E_{m,n} (t)$. To this end, we introduce two new independent sequences of i.i.d. variables:
\begin{itemize}
\item$(\hat{Z}_i)_{i \geq1}$ with law $\hat{\nu}$, where $\hat {\nu}$ is the size-biased version of $\nu$;
\item$(N_i)_{i \geq1}$, with same law as the number of vertices of a Galton--Watson tree with offspring distribution $\nu$. \end{itemize}
For all $k,h \in\mathbb{N}$, we also write
\begin{eqnarray*} \hat{S}_h &=& \sum_{i=1}^{h} \hat{Z}_i \quad\mbox{and} \quad Y_k = \sum _{i=1}^k N_i. \end{eqnarray*}
\begin{lem} For every $m,n \in\mathbb{N}$ such that $m \leq n$ and $P_m \neq0$, one has
\begin{equation} \label{EExprEn} E_{m,n} (t) = \frac{1}{m P_m} \sum _{1 \leq h \leq k \leq m} e^{-kt/a_n} \mathbb{P}(\hat{S}_h = k) \mathbb{P} (Y_{k-h+1} = m-h+1 ). \end{equation}
\end{lem}
\begin{pf} We first note that relation (\ref{EDefEn}) can be written otherwise, using the one-to-one correspondence $e \mapsto e^+$ between $E (T)$ and $V (T) \setminus\{\rho(T)\}$:
\begin{eqnarray*} E_{m,n} (t) & =& \frac{1}{m} \mathbb{E} \biggl[\sum _{v \in V (T) \setminus{\rho(T)}} \exp\biggl(- \sum_{u \in[\![ \rho(T), p(v) ]\!]_V}
\deg(u, T) \frac{t}{a_n} \biggr) \Big| \bigl\vert E (T)\bigr\vert= m \biggr]. \end{eqnarray*}
We thus have
\begin{eqnarray*} E_{m,n} (t) & =& \frac{1}{m P_m} \mathbb{E} \biggl[\sum _{v \in V (T) \setminus{\rho(T)}} \exp\biggl(- \sum_{u \in [\![ \rho(T), p(v) ]\!]_V} \deg(u, T) \frac{t}{a_n} \biggr), \bigl\vert E (T)\bigr\vert= m \biggr] \\ & =& \frac{1}{m P_m} \mathbb{E}^{\ast} \biggl[ \mathbh{1}_{v \neq\rho(T)} \exp\biggl(- \sum_{u \in [\![ \rho(T), p(v) ]\!]_V} \deg(u, T) \frac{t}{a_n} \biggr), \bigl\vert E (T)\bigr\vert= m \biggr]. \end{eqnarray*}
We now use the following description of a typical pointed tree $(T,v)$ under $\mathrm{GW}^{\ast}$ (see the proof of Proposition 2 of \cite{BerMi} and \cite{LPP}):
\begin{itemize}
\item The ``law'' under $\mathrm{GW}^{\ast}$ of the distance $h(v)$ of the pointed vertex $v$ to the root is the counting measure on $\mathbb {N}\cup\{0\}$.
\item Conditionally on $h(v)=h$, the subtrees $T_v$ and $T^v$ are independent, with $T_v$ being a Galton--Watson tree with offspring distribution $\nu$, and $T^v$ having $\mathrm{GW}_h^{\ast}$ law, which can be described as follows. $T^v$ has a distinguished branch $B = \{ u_1 = \rho(T^v), u_2, \ldots, u_{h+1}=v \}$ of length $h$. Every vertex of $T^v$ has a number of offspring that is distributed independently of the other vertices, with offspring distribution $\nu$ for the vertices in $V(T^v) \setminus B$ and $\hat{\nu}$ for the vertices $u_1,\ldots,u_h$, while $u_{h+1}$ has no descendants. The tree $T^v$ can thus be constructed inductively from the root $u_1$, by choosing the $i$th vertex $u_i$ of the distinguished branch uniformly at random from the children of $u_{i-1}$. \end{itemize}
In this representation, conditionally on having $h(v)=h$, $ [\![ \rho(T), p (v) ]\!]_V$ equals $\{u_1, \ldots, u_h \}$ and, for every $i \in\{ 1,\ldots,h\}$,
\[ \deg(u_i,T) = \hat{Z}_i. \]
Besides, the total number of vertices of $T$ is the sum of the number of vertices $h$ of $B \setminus\{v\}$, of $\vert V(T_v)\vert$, and of the\vspace*{1pt} $\vert V(T_u)\vert$ for $u$ such that $p(u) \in B \setminus\{v\}$ and $u \notin B$. There are $\sum_{i=1}^{h} (\hat{Z}_i - 1)$ such trees $T_u$. Hence, under $\mathrm{GW}^{\ast}$:
\begin{eqnarray*} &&\bigl\vert E(T)\bigr\vert= \bigl\vert V(T)\bigr\vert-1 \stackrel{(d)} {=}Y_{\sum_{i=1}^{h} (\hat{Z}_i - 1 ) + 1} + h - 1. \end{eqnarray*}
Thus,
\begin{eqnarray*} E_{m,n} (t) & =& \frac{1}{m P_m} \sum_{1 \leq h} \mathbb{E} \Biggl[\exp\Biggl(-\sum_{i=1}^{h} \hat{Z}_i \frac{t}{a_n} \Biggr), Y_{\sum_{i=1}^{h} \hat{Z}_i - h + 1} = m-h+1 \Biggr] \\ & =& \frac{1}{m P_m} \sum_{1 \leq h \leq k \leq m} e^{-kt/a_n} \mathbb{P}(\hat{S}_h = k) \mathbb{P} (Y_{k-h+1} = m-h+1 ). \end{eqnarray*}\upqed
\end{pf}
We now compute upper bounds for the terms $\mathbb{P} (Y_{k-h+1} = m-h+1 )$, $\mathbb{P}(\hat{S}_h = k)$ and $(m P_m)^{-1}$.
\subsubsection*{Upper bound for $\mathbb{P}(Y_{k-h+1}=m-h+1)$} Recalling the notation of Section~\ref{SCodingTrees}, we have
\begin{eqnarray*} \mathbb{P} (Y_k = n ) & =& \mathbb{P} (W_n = -k \mbox{ and, }\forall p < n, W_p > -k ) \\ & =& \frac{k}{n} \mathbb{P} (W_n = -k ). \end{eqnarray*}
The second equality is given by the cyclic lemma (see \cite{PitCSP}, Lemma~6.1): on the event $\{W_n = -k\}$, exactly $k$ of the $n$ cyclic shifts of the increment sequence stay strictly above $-k$ before time $n$, and all these shifts have the same law. We will now use the fact, given by Theorem~\ref {TLimLocale}, that
\begin{equation} \label{ELimLocaleZ} \lim_{n \rightarrow\infty} \sup_{k \in\mathbb{N}} \biggl\vert a_n \mathbb{P} (W_n = -k ) - p_1^{(\alpha)} \biggl(- \frac {k}{a_n} \biggr)\biggr\vert= 0. \end{equation}
For all $s,x \in(0,\infty)$, we have
\begin{eqnarray*} x p_s^{(\alpha)} (-x) &=& s q_{x}^{(1/\alpha)} (s) \end{eqnarray*}
(see, e.g., \cite{BerLP}, Corollary VII.1.3). Taking $s=1$ and $x=k/a_n$, this gives
\begin{eqnarray*} && \frac{k}{a_n} p_1^{(\alpha)} \biggl(-\frac{k}{a_n} \biggr) = q_{k/a_n}^{(1/\alpha)} (1). \end{eqnarray*}
Thus,
\begin{eqnarray*} && n \mathbb{P} (Y_k = n ) - q_{k/a_n}^{(1/\alpha)} (1) = \frac{k}{a_n} \biggl(a_n \mathbb{P} (W_n = -k ) - p_1^{(\alpha)} \biggl(-\frac {k}{a_n} \biggr) \biggr), \end{eqnarray*}
and we get
\begin{eqnarray*} \mathbb{P} (Y_k = n ) & \leq&\frac{1}{n} \bigl(\bigl\vert n \mathbb{P} (Y_k = n ) - q_{k/a_n}^{(1/\alpha)} (1)\bigr \vert+ q_{k/a_n}^{(1/\alpha)} (1) \bigr) \\ & \leq&\frac{k}{n a_n} \biggl( \biggl\vert a_n \mathbb{P} (W_n = -k ) - p_1^{(\alpha)} \biggl(- \frac{k}{a_n} \biggr)\biggr\vert+ p_1^{(\alpha)} \biggl(- \frac {k}{a_n} \biggr) \biggr). \end{eqnarray*}
Since $p_1^{(\alpha)}$ is bounded and (\ref{ELimLocaleZ}) holds, there exists a constant $M \in(0,\infty)$ such that, for all $k, n \in\mathbb{N}$,
\[ \mathbb{P} (Y_k = n ) \leq\frac{k}{n a_n} M. \]
Thus, we have the following upper bound:
\begin{equation} \label{EmajPY} \mathbb{P} (Y_{k-h+1} = m-h+1 ) \leq\frac{k-h+1}{(m-h+1) a_{m-h+1}} M. \end{equation}
\subsubsection*{Upper bound for $\mathbb{P}(\hat{S}_h=k)$} We use Theorem~\ref{TLimLocale} for the i.i.d. variables $(\hat {Z}_i)_{i \in\mathbb{N}}$. Let $\hat{A} \in R_{\alpha-1}$ be an increasing function given by (i), such that
\begin{eqnarray*} &&\mathbb{P} (\hat{Z}_1 > r ) \sim\frac{1}{\hat{A} (r)}, \end{eqnarray*}
and $\hat{a}$ be the inverse function of $\hat{A}$. Then
\begin{eqnarray*} &&\lim_{h \rightarrow\infty} \sup_{k \in\mathbb{N}} \biggl\vert \hat{a}_h \mathbb{P} (\hat{S}_h = k ) - q_1^{(\alpha- 1)} \biggl(\frac{k}{\hat{a}_h} \biggr)\biggr\vert= 0. \end{eqnarray*}
Using the fact that $q_1^{(\alpha- 1)}$ is bounded, and writing
\begin{eqnarray*} &&\mathbb{P} (\hat{S}_h = k ) \leq\frac{1}{\hat{a}_h} \biggl( \biggl \vert\hat{a}_h \mathbb{P} (\hat{S}_h = k ) - q_1^{(\alpha- 1)} \biggl(\frac{k}{\hat{a}_h} \biggr)\biggr\vert+ q_1^{(\alpha- 1)} \biggl(\frac{k}{\hat{a}_h} \biggr) \biggr), \end{eqnarray*}
we get the existence of a constant $M' \in(0,\infty)$ such that, for all $h, k \in\mathbb{N}$,
\begin{equation} \label{EmajPS} \mathbb{P} (\hat{S}_h = k ) \leq\frac{M'}{\hat{a}_h}. \end{equation}
Furthermore, when $h$ is small enough, we have a better bound for $\mathbb{P}(\hat{S}_h = k)$:
\begin{lem} \label{TDoney} Using the previous notation, if hypothesis (\ref{HMajPZ=r}) holds, then there exist constants $B, C$ such that for all $k \in\mathbb {N}$, for all $h$ such that $k / \hat{a}_h \geq B$,
\[ \mathbb{P} (\hat{S}_h = k ) \leq C \frac{h}{k \hat{A} (k)}. \]
\end{lem}
This result is an adaptation of a theorem by Doney \cite{Don}. The main ideas of the proof, which is rather technical, will be given in the \hyperref[app]{Appendix}.
Besides, using the fact that $A$ is regularly varying and an Abel transformation of $\mathbb{P}(\hat{Z}>r)$, we get that
\begin{equation} \label{ELienA-hatA} \frac{1}{\hat{A} (r)} \sim\frac{\alpha r}{A(r)} \qquad\mbox{as } r \rightarrow\infty. \end{equation}
\subsubsection*{Upper bound for $(mP_m)^{-1}$} We have
\begin{eqnarray*} P_m &=& \mathbb{P} \bigl(\bigl\vert E (T)\bigr\vert= m \bigr) \sim\frac{p_1^{(\alpha)} (0)}{m a_m} \end{eqnarray*}
(this is a straightforward consequence of the cyclic lemma and the local limit theorem). This gives the existence of a constant $K \in (0,\infty)$ which verifies, for all $m$ such that $P_m \neq0$,
\begin{equation} \label{EmajPCardT} \frac{1}{m P_m} \leq K a_m. \end{equation}
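For the reader's convenience, here is a brief sketch of the equivalent above. The number of vertices of an unconditioned Galton--Watson tree with offspring distribution $\nu$ has the law of $Y_1$, so that, by the cyclic lemma,
\begin{eqnarray*} P_m &=& \mathbb{P} (Y_1 = m+1 ) = \frac{1}{m+1} \mathbb{P} (W_{m+1} = -1 ). \end{eqnarray*}
The local limit theorem (\ref{ELimLocaleZ}), together with the continuity of $p_1^{(\alpha)}$ at $0$, then gives $a_{m+1} \mathbb{P} (W_{m+1} = -1 ) \rightarrow p_1^{(\alpha)} (0)$, and since $a_{m+1} \sim a_m$ by regular variation, the stated equivalent follows.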
Before coming back to the proof of Corollary~\ref{TCor1}, we give another useful result on regularly varying functions.
\begin{lem} \label{TLemmeFVR} Fix $\beta\in(0,\infty)$. Let $f$ be a positive increasing function in $R_{\beta}$ on $\mathbb{R}_+$, and $x_0$ a positive constant. For every $\delta\in(0,\beta)$, there exists a constant $C_{\delta} \in (0,\infty)$ such that, for all $x' \geq x \geq x_0$,
\begin{eqnarray*} && C_{\delta}^{-1} \biggl(\frac{x'}{x} \biggr)^{\beta- \delta} \leq\frac{f(x')}{f(x)} \leq C_{\delta} \biggl(\frac{x'}{x} \biggr)^{\beta+ \delta}. \end{eqnarray*}
\end{lem}
This result is a consequence of the Potter bounds (see, e.g., Theorem 1.5.6 of Bingham et~al. \cite{BGT}). In particular, it implies that for all $x$ bounded away from $0$, for all $z \geq1$,
\begin{equation} \label{EFVR1} C_{\delta}^{-1} z^{\beta- \delta} \leq \frac{f(x z)}{f(x)} \leq C_{\delta} z^{\beta+ \delta}, \end{equation}
and likewise, for all $x \in(0,\infty)$, $z \leq1$ such that $xz$ is bounded away from $0$,
\begin{equation} \label{EFVR2} C_{\delta}^{-1} z^{\beta+ \delta} \leq \frac{f(x z)}{f(x)} = \frac{f(x z)}{f (x z z^{-1})} \leq C_{\delta} z^{\beta- \delta}. \end{equation}
We can finally state the following.
\begin{lem} \label{TLimUnifIn,l} We have
\begin{equation} \label{ELimUnifIn,l1} \lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} \int _{2^l}^{\infty} E_n (t) \,dt = 0 \end{equation}
and
\begin{eqnarray*}
&&\lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} \mathop{ \sup_{1 \leq m \leq n}}_{P_m \neq0} \int_{2^l}^{\infty} \frac{m}{n} E_{m,n} (t) \,dt = 0. \end{eqnarray*}
\end{lem}
\begin{pf} For every $n, l \in\mathbb{N}$, we let
\begin{eqnarray*} && I_{n,l} = \int_{2^l}^{\infty} E_n (t) \,dt. \end{eqnarray*}
Putting together (\ref{EExprEn}) and (\ref{EmajPCardT}), we have
\begin{eqnarray*} && E_n (t) \leq K a_n \sum_{k=1}^{n} \sum_{h=1}^{k} e^{-kt/a_n} \mathbb {P} (\hat{S}_h = k ) \mathbb{P} (Y_{k-h+1} = n-h+1 ). \end{eqnarray*}
This yields
\begin{eqnarray*} I_{n,l} &\leq& K a_n^2 \sum _{k=1}^{n} \sum_{h=1}^{k} \frac{1}{k} e^{-2^l k/a_n} \mathbb{P} (\hat{S}_h = k ) \mathbb{P} (Y_{k-h+1} = n-h+1 ). \end{eqnarray*}
Writing $h (n,k) = \hat{A} (k/B)\wedge\lfloor n/2 \rfloor$ and $h' (n,k) = k \wedge\lfloor n/2 \rfloor$, we split this sum into three parts:
\begin{eqnarray*} I^1_{n,l} & =& a_n^2 \sum _{k=1}^{n} \sum_{h=1}^{h (n,k)} \frac{1}{k} e^{-2^l k/a_n} \mathbb{P} (\hat{S}_h = k ) \mathbb{P} (Y_{k-h+1} = n-h+1 ), \\ I^2_{n,l} & =& a_n^2 \sum _{k=1}^{n} \sum_{h=h (n,k) + 1}^{h' (n,k)} \frac{1}{k} e^{-2^l k/a_n} \mathbb{P} (\hat{S}_h = k ) \mathbb{P} (Y_{k-h+1} = n-h+1 ), \\ I^3_{n,l} & =& a_n^2 \sum _{k=1}^{n} \sum_{h=h' (n,k) + 1}^{k} \frac {1}{k} e^{-2^l k/a_n} \mathbb{P} (\hat{S}_h = k ) \mathbb{P} (Y_{k-h+1} = n-h+1 ). \end{eqnarray*}
Our first goal is to show that, for $i = 1, 2, 3$,
\begin{eqnarray*} &&\lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} I_{n,l}^i = 0. \end{eqnarray*}
Let us first examine $I_{n,l}^1$. Since $a$ is increasing, the upper bound (\ref{EmajPY}) gives, for $n-h+1 \geq n/2$,
\begin{eqnarray}\label{EmajPY1-2} \mathbb{P} (Y_{k-h+1} = n-h+1 ) & \leq& M \frac {k-h+1}{(n-h+1)a_{n-h+1}} \nonumber\\[-8pt]\\[-8pt]\nonumber & \leq&2M \frac{k}{n a_{n/2}}. \end{eqnarray}
Thus, we have
\begin{eqnarray*} && I_{n,l}^1 \leq2M \frac{a_n^2}{n a_{n/2}} \sum _{k=1}^{n} e^{-2^l k / a_n} \sum _{h=1}^{h (n,k)} \mathbb{P} (\hat{S}_h = k ). \end{eqnarray*}
Turning the first sum into an integral, and using the substitution $y' = y /a_n$, we get
\begin{eqnarray*} I_{n,l}^1 & \leq&2M \frac{a_n^2}{n a_{n/2}} \int _1^{\infty} dy\,e^{- 2^l \lfloor y \rfloor/ a_n} \Biggl(\sum _{h=1}^{h (n,\lfloor y \rfloor)} \mathbb{P} \bigl(\hat{S}_h = \lfloor y \rfloor\bigr) \Biggr) \\ & =& 2M \frac{a_n^3}{n a_{n/2}} \int_{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \Biggl(\sum_{h=1}^{h (n,\lfloor a_n y \rfloor)} \mathbb{P} \bigl(\hat{S}_h = \lfloor a_n y \rfloor \bigr) \Biggr). \end{eqnarray*}
Since $\hat{a}$ is increasing, for all $h \leq h (n,k)$, we have $\hat {a}_h \leq k / B$. Therefore, Lemma~\ref{TDoney} gives
\[ \mathbb{P} (\hat{S}_h = k ) \leq C \frac{h}{k \hat{A} (k)}. \]
This yields
\begin{eqnarray*} I_{n,l}^1 & \leq&2CM \frac{a_n^3}{n a_{n/2}} \int _{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \Biggl(\sum _{h=1}^{h (n,\lfloor a_n y \rfloor)} \frac{h}{\lfloor a_n y \rfloor\hat{A} (\lfloor a_n y \rfloor)} \Biggr) \\ & \leq&2CM \frac{a_n^3}{n a_{n/2}} \int_{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \biggl(\frac{\hat{A} (\lfloor a_n y \rfloor/B)^2}{\lfloor a_n y \rfloor\hat{A} (\lfloor a_n y \rfloor )} \biggr). \end{eqnarray*}
We fix $\delta\in(0,(\alpha-1) \wedge(2-\alpha))$. Since $\hat {A}$ is regularly varying with index $\alpha- 1$, for all $y \geq1 / a_n$, we have
\begin{eqnarray*} &&\frac{\hat{A} (\lfloor a_n y \rfloor/B)}{\hat{A} (\lfloor a_n y \rfloor)} \leq\frac{C_{\delta}^{-1}}{B^{\alpha-1-\delta}} \end{eqnarray*}
[we can use (\ref{EFVR1}) because $\lfloor a_n y \rfloor/B \geq1/B$ for all $y \in(1/a_n, \infty), n \in\mathbb{N}$]. As a~consequence, there exists a positive constant $K_1$ such that
\begin{eqnarray*} I_{n,l}^1 & \leq& K_1 \frac{a_n^3}{n a_{n/2}} \int _{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \biggl( \frac{\hat{A} (\lfloor a_n y \rfloor)}{\lfloor a_n y \rfloor} \biggr) = K_1 J_{n,l}. \end{eqnarray*}
Therefore, it suffices to show that
\begin{equation} \label{ELimUnifJn,l} \lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} J_{n,l} = 0. \end{equation}
To this end, we use the upper bounds~(\ref{EFVR1}) and~(\ref{EFVR2}), with $x = a_n$ and $z = \lfloor a_n y \rfloor/ a_n$ ($x$~and~$xz$ being, resp., greater than $a_0$ and $1$):
\begin{eqnarray*} &&\frac{\hat{A} (\lfloor a_n y \rfloor)}{\hat{A} (a_n)} \leq C_{\delta} \biggl( \biggl(\frac{\lfloor a_n y \rfloor }{a_n} \biggr)^{\alpha-1+\delta} \vee\biggl(\frac{\lfloor a_n y \rfloor }{a_n} \biggr)^{\alpha-1-\delta} \biggr). \end{eqnarray*}
Thus,
\begin{eqnarray*} &&J_{n,l} \leq\frac{a_n^2 \hat{A} (a_n)}{n a_{n/2}} \int_{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \biggl( \biggl(\frac {a_n}{\lfloor a_n y \rfloor} \biggr)^{2-\alpha-\delta} \vee \biggl(\frac {a_n}{\lfloor a_n y \rfloor} \biggr)^{2-\alpha+\delta} \biggr). \end{eqnarray*}
Using the fact that $\lfloor a_n y \rfloor\geq a_n y - 1$, and the change of variable $y' = y - 1/a_n$, we get
\begin{eqnarray*} J_{n,l} & \leq&\frac{a_n^2 \hat{A} (a_n)}{n a_{n/2}} \int_0^{\infty }dy\,e^{- 2^l y} \biggl(\frac{1}{y^{2-\alpha-\delta}} \vee\frac {1}{y^{2-\alpha+\delta}} \biggr). \end{eqnarray*}
Now (\ref{ELienA-hatA}) gives that $\hat{A} (a_n) / n = \hat{A} (a_n) / A (a_n) \sim1/ \alpha a_n$, so we have
\begin{eqnarray*} &&\frac{a_n^2 \hat{A} (a_n)}{n a_{n/2}} \sim\frac{a_n}{\alpha a_{n/2}}. \end{eqnarray*}
Since $a$ is regularly varying with index $1/\alpha$, we have $a_n / a_{n/2} \rightarrow2^{1/\alpha}$, so the right-hand term has a finite limit as $n$ goes to infinity. Therefore, $a_n^2 \hat {A} (a_n) / n a_{n/2}$ is bounded uniformly in $n$. Hence, there exists a constant $K \in(0,\infty)$ such that
\begin{eqnarray*} &&\sup_{n \in\mathbb{N}} J_{n,l} \leq K \int_0^{\infty} dy\,e^{- 2^l y} \biggl(\frac{1}{y^{2-\alpha -\delta}} \vee\frac{1}{y^{2-\alpha+\delta}} \biggr). \end{eqnarray*}
This yields (\ref{ELimUnifJn,l}): since $\delta< \alpha- 1$, we have $2-\alpha+\delta< 1$, so the integrand for $l=0$ is integrable, and the integral tends to $0$ as $l$ goes to infinity by dominated convergence.
For the second part, we can still use (\ref{EmajPY1-2}). As in the first step, we get
\begin{eqnarray*} &&I_{n,l}^2 \leq2M \frac{a_n^3}{n a_{n/2}} \int _{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \Biggl(\sum _{h=h (n,\lfloor a_n y \rfloor) + 1}^{h' (n,\lfloor a_n y \rfloor)} \mathbb{P} \bigl(\hat{S}_h = \lfloor a_n y \rfloor\bigr) \Biggr). \end{eqnarray*}
Since the sum is null if $\hat{A} (\lfloor a_n y \rfloor/ B) > \lfloor n/2 \rfloor$, we have
\begin{eqnarray*} &&I_{n,l}^2 \leq2M \frac{a_n^3}{n a_{n/2}} \int _{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \Biggl(\sum _{h=\hat{A} (\lfloor a_n y \rfloor/ B) + 1}^{\infty} \mathbb{P} \bigl(\hat{S}_h = \lfloor a_n y \rfloor\bigr) \Biggr). \end{eqnarray*}
We now turn the remaining sum into an integral:
\begin{eqnarray*} &&I_{n,l}^2 \leq2M \frac{a_n^3}{n a_{n/2}} \int _{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \int _{\hat{A} (\lfloor a_n y \rfloor/ B)}^{\infty} dx\, \mathbb{P} \bigl( \hat{S}_{\lfloor x+1 \rfloor} = \lfloor a_n y \rfloor\bigr). \end{eqnarray*}
Using the change of variable $x' = \hat{A} (\lfloor a_n y \rfloor/ B) x$ and the upper bound (\ref{EmajPS}), this gives
\begin{eqnarray*} &&I_{n,l}^2 \leq2MM' \frac{a_n^3}{n a_{n/2}} \int _{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \int _{1}^{\infty} dx\, \frac {\hat{A} (\lfloor a_n y \rfloor/ B)}{\hat{a} (\lfloor\hat{A} (\lfloor a_n y \rfloor/ B) x + 1 \rfloor)}. \end{eqnarray*}
Since $\hat{a}$ is increasing, for all $x, y$, we have
\begin{eqnarray*} &&\hat{a} \bigl(\bigl\lfloor\hat{A} \bigl(\lfloor a_n y \rfloor/ B \bigr) x + 1 \bigr\rfloor\bigr) \geq\hat{a} \bigl(\hat{A} \bigl(\lfloor a_n y \rfloor/ B\bigr) x \bigr). \end{eqnarray*}
Fix $\delta\in(0,1/(\alpha-1)-1)$. Inequality (\ref{EFVR1}) then gives, for all $x \geq1$, $y \geq1/a_n$,
\begin{eqnarray*} \hat{a} \bigl(\bigl\lfloor\hat{A} \bigl(\lfloor a_n y \rfloor/ B \bigr) x + 1 \bigr\rfloor\bigr) & \geq& c_{\delta}^{-1} \hat{a} \bigl(\hat{A} \bigl(\lfloor a_n y \rfloor/ B\bigr) \bigr) x^{1/(\alpha-1)-\delta} \\ & =& c_{\delta}^{-1} \frac{\lfloor a_n y \rfloor}{B} x^{1/(\alpha -1)-\delta}. \end{eqnarray*}
Thus, there exist constants $K_2, K'_2 \in(0,\infty)$ such that
\begin{eqnarray*} &&I_{n,l}^2 \leq K_2 \frac{a_n^3}{n a_{n/2}} \int _{1/a_n}^{\infty} dy\,e^{- 2^l \lfloor a_n y \rfloor/ a_n} \frac{\hat{A} (\lfloor a_n y \rfloor/ B)}{\lfloor a_n y \rfloor} \int_{1}^{\infty} \frac{dx}{x^{1/(\alpha-1)-\delta}} = K'_2 J_{n,l}, \end{eqnarray*}
and (\ref{ELimUnifJn,l}) also gives the conclusion.
For the third part, since the terms with indices $k \leq\lfloor n/2 \rfloor$ are null, we simply use the bounds $\mathbb{P} (Y_{k-h+1}=n-h+1 ) \leq1$ and $\mathbb{P}(\hat{S}_h=k) \leq1$:
\begin{eqnarray*} I_{n,l}^3 & \leq& a_n^2 \sum _{k=\lfloor n/2 \rfloor+1}^{n} \sum _{h=1}^{k} \frac {1}{k} e^{-2^l k/a_n} \\ & \leq& a_n^2 e^{-n 2^l/2 a_n} \sum _{k=\lfloor n/2 \rfloor+1}^{n} 1 \\ & \leq& n a_n^2 e^{-n 2^l/2 a_n}. \end{eqnarray*}
This quantity tends to $0$ as $l$ goes to infinity, uniformly in $n$. Indeed, for any $\kappa> 0$, the function $g_{\kappa}\dvtx x \mapsto x^{\kappa} e^{-x}$ is bounded by a constant $G_{\kappa}$, hence
\begin{eqnarray*} &&I_{n,l}^3 \leq G_{\kappa} \frac{2^{\kappa} a_n^{2+\kappa}}{n^{\kappa-1}} \cdot2^{- l \kappa}. \end{eqnarray*}
For any $\varepsilon> 0$, there exists a constant $C_{\varepsilon}$ such that $a_n \leq C_{\varepsilon} n^{1/\alpha+ \varepsilon}$ for all $n \in \mathbb{N}$. Therefore, the quantity $a_n^{2+\kappa} / n^{\kappa-1}$ is bounded as soon as $\kappa> (2+\alpha) / (\alpha- 1)$. This completes the proof of (\ref{ELimUnifIn,l1}).
For the second limit, we note that (\ref{EExprEn}) yields
\begin{eqnarray*} &&\int_{2^l}^{\infty} E_{m,n} (t) \,dt = \frac{a_n}{a_m} \int_{2^l}^{\infty} E_m (t) \,dt, \end{eqnarray*}
for all $m \leq n$ such that $P_m \neq0$. Thus,
\begin{eqnarray*} &&\sup_{n \in\mathbb{N}} \mathop{\sup_{1 \leq m \leq n}}_{P_m \neq 0} \int_{2^l}^{\infty} \frac{m}{n} E_{m,n} (t) \,dt = \sup_{n \in\mathbb{N}} \mathop{\sup _{1 \leq m \leq n}}_{P_m \neq0} \frac{m a_n}{n a_m} I_{m,l}. \end{eqnarray*}
As a consequence, it is enough to show that $m a_n / n a_m$ is bounded over $\{(m,n) \in\mathbb{N}^2\dvtx m \leq n\}$. Now,
\begin{eqnarray*} \sup\biggl\{ \frac{m a_n}{n a_m}\dvtx m,n \in\mathbb{N}, m\leq n \biggr\}& \leq&\sup \biggl\{ \frac{m a_{\lambda m}}{\lambda m a_m}\dvtx m \in\mathbb{N}, \lambda\in(1,\infty) \biggr\} \\ & \leq&\sup\biggl\{ \frac{a_{\lambda m}}{\lambda a_m}\dvtx m \in\mathbb{N}, \lambda\in(1,\infty) \biggr\}. \end{eqnarray*}
Fix $\delta\in(0,1-1/\alpha)$. Since $a$ is a positive increasing function in $R_{1/\alpha}$, Lemma~\ref{TLemmeFVR} shows the existence of a constant such that, for all $m \in\mathbb{N}$, $\lambda\in(1, \infty)$,
\begin{eqnarray*} &&\frac{a_{\lambda m}}{a_m} \leq C_{\delta} \lambda^{1/\alpha+ \delta}. \end{eqnarray*}
Hence, for all $\lambda\in(1, \infty)$,
\begin{eqnarray*} &&\sup_{m \in\mathbb{N}} \frac{a_{\lambda m}}{\lambda a_m} \leq C_{\delta} \lambda^{1/\alpha+ \delta- 1} \leq C_{\delta}. \end{eqnarray*}\upqed
\end{pf}
\subsubsection*{Key estimates for the proof of Theorem \protect\ref{TMainThm}} We conclude this section by giving two consequences of Lemma~\ref {TLimUnifIn,l} which will be used in the proof of Theorem~\ref{TMainThm}.
\begin{cor} \label{TCor1} It holds that
\begin{eqnarray*} &&\lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} \mathbb{E} \biggl[ \int_{2^l}^{\infty} \mu_{n,\xi_n} (t) \,dt \biggr] = 0. \end{eqnarray*}
\end{cor}
\begin{pf} Using (\ref{EmajEspMu}), we get
\begin{eqnarray*} \sup_{n \in\mathbb{N}} \mathbb{E} \biggl[\int_{2^l}^{\infty} \mu_{n,\xi_n} (t) \,dt \biggr] &\leq&\sup_{n \in\mathbb{N}} \frac{a_n}{n} e^{-2^l/a_n} + 2 \sup_{n \in \mathbb{N}} \int _{2^l}^{\infty} E_n (t) \,dt \\ &&{} + 2 \sup_{n \in\mathbb{N}} \sup_{1 \leq m \leq n} \int _{2^l}^{\infty} \frac{m}{n} E_{m,n} (t) \,dt. \end{eqnarray*}
Lemma~\ref{TLimUnifIn,l} shows that the last two terms tend to $0$ as $l$ goes to infinity. For the first term, we use again the fact that for any $\kappa> 0$, the function $g_{\kappa}\dvtx x \mapsto x^{\kappa} e^{-x}$ is bounded by a constant $G_{\kappa}$. Hence, for all $n \in \mathbb{N}$,
\begin{eqnarray*} &&\frac{a_n}{n} e^{-2^l/a_n} \leq G_{\kappa} \frac{a_n^{\kappa+1}}{n} \cdot2^{-\kappa l}. \end{eqnarray*}
Taking $\kappa< \alpha-1$, we get that $a_n^{\kappa+1}/n$ is bounded, which completes the proof. \end{pf}
\begin{cor} \label{TCor2} There exists a constant $C$ such that, for all $n \in\mathbb{N}$,
\begin{eqnarray*} &&\mathbb{E} \bigl[\delta'_n (0,\xi_n) \bigr] \leq C. \end{eqnarray*}
\end{cor}
\begin{pf} Recalling the definition of $\delta'_n$, we get
\begin{eqnarray*} \mathbb{E} \bigl[\delta'_n (0,\xi_n) \bigr] & =& \mathbb{E} \biggl[\int_0^{\infty} \mu_{n,\xi_n} (t) \,dt \biggr]. \end{eqnarray*}
Now the upper bound (\ref{EmajEspMu}) gives
\begin{eqnarray*} \mathbb{E} \bigl[\delta'_n (0,\xi_n) \bigr] & \leq&1 + \mathbb{E} \biggl[\int_1^{\infty} \mu_{n,\xi_n} (t) \,dt \biggr] \\ & \leq&1 + \frac{a_n}{n} e^{-1/a_n} + 2 \int_{1}^{\infty} E_n (t) \,dt + 2 \sup_{1 \leq m \leq n} \int _{1}^{\infty} \frac{m}{n} E_{m,n} (t) \,dt. \end{eqnarray*}
The second term is bounded as $n \rightarrow\infty$. Recall from the proof of Lemma~\ref{TLimUnifIn,l} that
\begin{eqnarray*} \int_{1}^{\infty} E_n (t) \,dt &=& I_{n,0} \leq K \bigl(I_{n,0}^1 + I_{n,0}^2 + I_{n,0}^3\bigr) \leq K\bigl(K_1 + K'_2\bigr) J_{n,0} + K I_{n,0}^3. \end{eqnarray*}
Moreover, we have seen that for $\delta\in(0,(\alpha-1) \wedge(2-\alpha))$ as above, there exists a constant $K$ such that
\begin{eqnarray*} &&\sup_{n \in\mathbb{N}} J_{n,0} \leq K \int_0^{\infty} dy\,e^{-y} \biggl( \frac{1}{y^{2-\alpha-\delta}} \vee\frac {1}{y^{2-\alpha+\delta }} \biggr) < \infty, \end{eqnarray*}
and
\begin{eqnarray*} &&I_{n,0}^3 \leq n a_n^2 e^{-n/2 a_n} \end{eqnarray*}
is bounded as $n \rightarrow\infty$. Since we have seen at the end of the proof of Lemma~\ref{TMajEspMu} that there exists a constant $K'$ such that for all $n \in\mathbb{N}$, $m \leq n$ such that $P_m \neq0$,
\begin{eqnarray*} &&\int_{1}^{\infty} \frac{m}{n} E_{m,n} (t) \,dt \leq K' \int_{1}^{\infty} E_m (t) \,dt, \end{eqnarray*}
this implies the corollary. \end{pf}
\section{Proof of Theorem \texorpdfstring{\protect\ref{TMainThm}}{1.3}} \label{SProof}
\subsection{Identity in law between $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$ and $\mathcal{T}$}\label{SEqldeltad}
In this section, we show that the semi-infinite matrices of the mutual distance of uniformly sampled points in $\mathcal{T}$ and $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$ have the same law. This justifies the existence of $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$, as explained in Section~\ref{SFragT}, and shows the identity in law between $\mathcal{T}$ and $\operatorname{Cut}_{\mathrm{v}}(\mathcal{T})$. The structure of the proof will be similar to that of Lemma 4 in \cite{BerMi}. Precise descriptions of the fragmentation processes we consider can be found in \cite{Mi03} and \cite{Mi05}.
Recall that $(\xi(i))_{i \in\mathbb{N}}$ is a sequence of i.i.d. random variables in $\mathcal{T}$, with law $\mu$, and $\xi(0) = 0$. Since the law of $\mathcal{T}$ is invariant under uniform rerooting (see, e.g., \cite{DuqLG05}, Proposition 4.8), and the definition of $\delta$ does not depend on the choice of the root of $\mathcal{T}$, we may assume that $\xi(1)= \rho$.
\begin{prop} \label{TEqldeltad} It holds that
\[ \bigl( \delta\bigl(\xi(i), \xi(j)\bigr) \bigr)_{i,j \geq0} \stackrel{(d)} {=} \bigl( d \bigl(\xi(i+1), \xi(j+1)\bigr) \bigr)_{i,j \geq0}. \]
\end{prop}
\begin{pf} Here, it is convenient to work on fragmentation processes taking values in the set of the partitions of $\mathbb{N}$.
First, we introduce a process $\Pi$ which corresponds to our fragmentation of $\mathcal{T}$ by saying that $i, j \in\mathbb{N}$ belong to the same block of $\Pi(t)$ if and only if the path $ [\![ \xi(i), \xi(j) ]\!]_V$ does not intersect the set $\{b_k\dvtx k \in I, t_k \leq t\}$ of the points marked before time $t$. For every $i \in\mathbb{N}$, we let $B_i (t)$ be the block of the partition $\Pi(t)$ containing $i$. Note that the partitions $\Pi(t)$ are exchangeable, which justifies the existence of the asymptotic frequencies $\lambda(B_i (t))$ of the blocks $B_i (t)$, where
\begin{eqnarray*} \lambda(B) &=& \lim_{n \rightarrow\infty} \frac{1}{n} \bigl\vert B \cap\{1,\ldots,n\}\bigr\vert. \end{eqnarray*}
Then we define
\begin{eqnarray*} \sigma_i (t) &=& \inf\biggl\{ u \geq0\dvtx \int_0^u \lambda\bigl(B_i (s)\bigr) \,ds >t \biggr\}. \end{eqnarray*}
We use $\sigma_i$ as a time-change, letting $\Pi' (t)$ be the partition whose blocks are the sets $B_i (\sigma_i (t))$ for $i \in \mathbb{N} $. Note that this is possible because $B_i (\sigma_i (t))$ and $B_j (\sigma_j (t))$ are either equal or disjoint.
We define a second fragmentation $\Gamma$, which results from cutting the stable tree $\mathcal{T}$ at its heights. For every $x, y \in \mathcal{T}$, we let $x \wedge y$ denote the branch-point between $x$ and $y$, that is, the unique point such that $ [\![ \rho, x \wedge y ]\!]_V = [\![ \rho, x ]\!]_V \cap[\![ \rho, y ]\!]_V$. With this notation, we say that $i, j \in\mathbb{N}$ belong to the same block of $\Gamma(t)$ if and only if $d (\rho,\xi (i+1) \wedge\xi(j+1)) > t$.
Then we have the following link between the two fragmentations.
\begin{lem} The fragmentation processes $\Pi'$ and $\Gamma$ have the same law. \end{lem}
\begin{pf} Miermont has shown in \cite{Mi05}, Theorem 1, that the process $\Pi$ is a self-similar fragmentation with index $1/\alpha$, erosion coefficient $0$ and dislocation measure $\Delta_{\alpha}$ known explicitly. Applying Theorem 3.3 in \cite{BerFCP}, we get that the time-changed fragmentation $\Pi'$ is still self-similar, with index $1/\alpha-1$, erosion coefficient $0$ and the same dislocation measure $\Delta_{\alpha}$. Now the process $\Gamma$ is also self-similar, with the same characteristics as $\Pi'$ (see \cite{Mi03}, Proposition 1, Theorem 1). Thus, $\Gamma$ and $\Pi'$ have the same law. \end{pf}
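Heuristically, the shift of the self-similarity index from $1/\alpha$ to $1/\alpha - 1$ can be understood as follows: in $\Pi$, a block of asymptotic frequency $\lambda$ is dislocated at a rate proportional to $\lambda^{1/\alpha}$, and the time change $\sigma_i$ makes the clock of this block run at speed $\lambda^{-1}$, so that in the new time scale the block is dislocated at a rate proportional to $\lambda^{1/\alpha} \cdot\lambda^{-1} = \lambda^{1/\alpha- 1}$.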
Using the law of large numbers, we note that $\lambda(B_i (s)) = \mu _{\xi(i)} (s)$ almost surely. As a consequence, $\sigma_i (t) = \infty$ for $t = \int_0^{\infty} \lambda(B_i (s)) \,ds = \delta (0,\xi(i))$, which means that $\delta(0,\xi(i))$ can be seen as the first time when the singleton $\{i\}$ is a block of~$\Pi'$. Recalling that $d (\rho,\xi(i+1)) = d (\xi(1),\xi(i+1))$ is the first time when $\{i\}$ is a block of $\Gamma$, we get
\begin{equation} \label{EEql1} \bigl(\delta\bigl(0,\xi(i)\bigr) \bigr)_{i \geq1} \stackrel{(d)} {=} \bigl(d \bigl(\xi(1),\xi(i+1)\bigr) \bigr)_{i \geq1}. \end{equation}
Similarly, for any $i \neq j \in\mathbb{N}$,
\begin{eqnarray*} \delta\bigl(0,\xi(i) \wedge\xi(j)\bigr) & =& \frac{1}{2} \bigl(\delta \bigl(0,\xi(i)\bigr) + \delta\bigl(0,\xi(j)\bigr) - \delta\bigl(\xi (i), \xi(j) \bigr)\bigr) \\ & =& \int_0^{\tau(i,j)} \lambda\bigl(B_i (s)\bigr) \,ds, \end{eqnarray*}
where $\tau(i,j)$ denotes the first time when a mark appears on the segment $ [\![ \xi(i), \xi(j) ]\!]_V$. Thus, $\delta(0,\xi(i) \wedge \xi(j))$ is the first time when the blocks containing $i$ and $j$ are separated in $\Pi'$. In terms of the fragmentation $\Gamma$, this corresponds to $d (\rho,\xi(i+1) \wedge\xi(j+1))$. Hence,
\begin{eqnarray*} && \bigl(\delta\bigl(0,\xi(i) \wedge\xi(j)\bigr) \bigr)_{i,j \geq1} \stackrel{(d)} {=} \bigl(d \bigl(\xi(1),\xi(i+1) \wedge\xi(j+1)\bigr) \bigr)_{i,j \geq1}, \end{eqnarray*}
and this holds jointly with (\ref{EEql1}). This entails the proposition. \end{pf}
\subsection{Weak convergence}
We first establish the convergence for the cut-tree $\operatorname{Cut}_{\mathrm{v}}'(\mathcal{T}_n)$ endowed with the modified distance $\delta'_n$, as defined in Section~\ref{SModDist}.
\begin{prop} There is the joint convergence
\[ \biggl( \frac{a_n}{n} \mathcal{T}_n, \operatorname{Cut}_{\mathrm{v}}' (\mathcal{T}_n) \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl(\mathcal{T}, \operatorname{Cut}_{\mathrm{v}}(\mathcal{T}) \bigr) \]
in $\mathbb{M} \times\mathbb{M}$. \end{prop}
\begin{pf} Proposition~\ref{TFstJointCv} shows that for every fixed integer $l$, there is the joint convergence
\begin{eqnarray*} \frac{a_n}{n} \mathcal{T}_n &\displaystyle \mathop{ \longrightarrow}_{n \rightarrow \infty}^{(d)}& \mathcal{T}, \\ \Biggl( 2^{-l} \sum_{j=1}^{4^l} \mu_{n,\xi_n (i)} \bigl(j 2^{-l} \bigr) \Biggr)_{i \in\mathbb{N}} &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}& \Biggl( 2^{-l} \sum _{j=1}^{4^l} \mu_{\xi(i)} \bigl(j 2^{-l} \bigr) \Biggr)_{i \in\mathbb{N}}. \end{eqnarray*}
Let
\begin{eqnarray*} \Delta_{n,l} (i) &=& \mathbb{E} \Biggl[\Biggl\vert\int _0^{\infty} \mu_{n,\xi_n (i)} (t) \,dt - 2^{-l} \sum_{j=1}^{4^l} \mu_{n,\xi_n (i)} \bigl(j2^{-l} \bigr)\Biggr\vert\Biggr]. \end{eqnarray*}
For any nonincreasing function $f\dvtx \mathbb{R}_+ \rightarrow
[0,1 ]$, we have the upper bound
\begin{equation} \label{EMajIntf} \Biggl\vert\int_0^{\infty} f (t) \,dt - 2^{-l} \sum_{j=1}^{4^l} f \bigl(j2^{-l} \bigr)\Biggr\vert\leq2^{-l} + \int _{2^l}^{\infty} f (t) \,dt. \end{equation}
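Indeed, since $f$ is nonincreasing, for every $j \in\{1,\ldots,4^l\}$ we have $2^{-l} f (j2^{-l} ) \leq\int_{(j-1)2^{-l}}^{j2^{-l}} f (t) \,dt \leq2^{-l} f ((j-1)2^{-l} )$, so that
\begin{eqnarray*} 0 \leq\int_0^{2^l} f (t) \,dt - 2^{-l} \sum_{j=1}^{4^l} f \bigl(j2^{-l} \bigr) \leq2^{-l} \bigl(f (0) - f \bigl(2^l \bigr) \bigr) \leq2^{-l}, \end{eqnarray*}
and adding the tail $\int_{2^l}^{\infty} f (t) \,dt$ gives (\ref{EMajIntf}).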
Applying this inequality to $\mu_{n,\xi_n (i)}$ yields
\begin{eqnarray*} &&\Delta_{n,l} (i) \leq2^{-l} + \mathbb{E} \biggl[\int _{2^l}^{\infty} \mu_{n,\xi_n} (t) \,dt \biggr]. \end{eqnarray*}
Corollary~\ref{TCor1} now shows that
\begin{eqnarray*} &&\lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} \Delta_{n,l} (i) = 0, \end{eqnarray*}
and $\Delta_{n,l} (i)$ does not depend on $i$. Besides, Proposition \ref{TEqldeltad} shows that
\begin{eqnarray*} &&\delta\bigl(0,\xi(i)\bigr) = \int_0^{\infty} \mu_{\xi(i)} (t) \,dt \end{eqnarray*}
has the same law as $d (0,\xi(i))$ and, therefore, has finite mean.
As a consequence,
\begin{eqnarray*} && \mathbb{E} \Biggl[\Biggl\vert\int_0^{\infty} \mu_{\xi(i)} (t) \,dt - 2^{-l} \sum_{j=1}^{4^l} \mu_{\xi(i)} \bigl(j2^{-l} \bigr)\Biggr\vert\Biggr] \\ &&\qquad \leq 2^{-l} + \mathbb{E} \biggl[\int_{2^l}^{\infty} \mu_{\xi(i)} (t) \,dt \biggr] \\ &&\qquad \mathop{ \longrightarrow}_{l \rightarrow\infty} 0, \end{eqnarray*}
and the left-hand side does not depend on $i$. We conclude that
\[ \bigl(\delta'_n \bigl(0, \xi_n(i)\bigr) \bigr)_{i \in\mathbb{N}} \mathop{ \longrightarrow}_{n \rightarrow\infty }^{(d)}\, \bigl( \delta\bigl(0, \xi(i)\bigr) \bigr)_{i \in\mathbb{N}}, \]
jointly with $(a_n/n) \cdot\mathcal{T}_n \mathop{ \longrightarrow }\limits ^{(d)} \mathcal{T}$.
Using in addition the convergence of the $\tau_n (i,j)$ shown in Proposition~\ref{TFstJointCv}, a~similar argument shows that the preceding convergences also hold jointly with
\[ \bigl(\delta'_n \bigl(\xi_n (i), \xi_n(j)\bigr) \bigr)_{i,j \in\mathbb {N}} \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl(\delta\bigl(\xi(i), \xi(j)\bigr) \bigr)_{i,j \in\mathbb{N}}. \]
This entails the proposition. \end{pf}
The convergence stated in Theorem~\ref{TMainThm} now follows immediately. Indeed, Lemma~\ref{TModDist} and Corollary~\ref{TCor2} show that
\[ \mathbb{E} \biggl[\biggl\vert\frac{a_n}{n} \delta_n (i,j) - \delta_n ' (i,j)\biggr\vert^2 \biggr] \leq\frac{2 C a_n}{n} \]
for all $i,j \geq0$ [recalling that $\xi_n (0) = 0$]. Thus, the preceding proposition gives the joint convergence
\[ \biggl( \frac{a_n}{n} \mathcal{T}_n, \frac{a_n}{n} \operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n) \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl(\mathcal{T}, \operatorname{Cut}_{\mathrm{v}}( \mathcal{T}) \bigr). \]
\section{The finite variance case} \label{SBrownianCase}
In this section, we assume that the offspring distribution $\nu$ of the Galton--Watson trees $\mathcal{T}_n$ has finite variance $\sigma^2$. Theorem 23 of~\cite{AldCRT3} shows that $(\sigma/ \sqrt{n}) \cdot \mathcal{T} _n$ converges to the Brownian tree $\mathcal{T}^{\mathrm{br}}$. More precisely, still using the three processes described in Section~\ref{SCodingTrees} to encode the trees $\mathcal{T}_n$, the joint convergence stated in Theorem \ref {TCvC,H,X} holds with $a_n = \sigma\sqrt{n}$, and limit processes defined by $X_t = B_t$ and $H_t = 2 B_t$ for all $t \in[0,1]$. (Recall that $B$ denotes the excursion of length 1 of the standard Brownian motion.) Note that the normalization of $X$ is not exactly the same as the one we used for the stable tree, since the Laplace transform of a standard Brownian motion $B'$ is $\mathbb{E}[e^{-\lambda B'_t}] = e^{\lambda ^2 t / 2}$. The fact that the height process $H$ is equal to $2 X$ can be seen from the definition of $H$ as a local time, as explained in \cite{DuqLG02}, Section~1.2.
Given these results, the proof of Theorem~\ref{TBrownianCase} follows the same structure as that of the main theorem. We first note that the results on the modified distance, introduced in Section~\ref {SModDist}, still hold. In the next two sections, we will see that we also have analogues for Proposition~\ref{TFstJointCv}, and Corollaries \ref{TCor1} and~\ref{TCor2}.
\subsection{Convergence of the component masses}
We use the same notation as in Section~\ref{SFstJointCv}. Recall in particular that $\mu_{n,\xi_n (i)}$ denotes the mass of the component $\mathcal{T}_{n,\xi_n (i)} (t)$, and that $\tau_n (i,j)$ denotes the first time when the components $\mathcal{T}_{n,\xi_n (i)} (t)$ and $\mathcal{T}_{n,\xi_n (j)} (t)$ become disjoint. To simplify, we drop the superscript $\mathrm{br}$ for the quantities associated to the Brownian tree (e.g., the mass-measure, the mass of a component, etc.), keeping the notation we used in the case of the stable tree. Our first step is to prove the following result.
\begin{prop} \label{TJointCvBr} As $n \rightarrow\infty$, we have the following weak convergences:
\begin{eqnarray*} \frac{\sigma}{\sqrt{n}} \mathcal{T}_n &\displaystyle\mathop{ \longrightarrow } ^{(d)}& \mathcal{T}^{\mathrm{br}}, \\ \bigl(\tau_n (i,j) \bigr)_{i,j \geq0} &\displaystyle\mathop{ \longrightarrow } ^{(d)}& \biggl( \biggl(1+\frac{1}{\sigma^2} \biggr)^{-1} \tau(i,j) \biggr)_{i,j \geq0}, \\ \bigl(\mu_{n,\xi_n (i)} (t) \bigr)_{i \geq0, t \geq0} &\displaystyle\mathop{ \longrightarrow} ^{(d)}& \biggl(\mu_{\xi(i)} \biggl( \biggl(1+\frac {1}{\sigma ^2} \biggr) t \biggr) \biggr)_{i \geq0, t \geq0}, \end{eqnarray*}
where the three hold jointly. \end{prop}
We begin by showing the same kind of property as in Lemma~\ref{TCvXtilde}. For all $n \in\mathbb{N}$, we let $\widetilde{X}{}^{(n)}$ and $\widetilde {C}{}^{(n)}$ denote the rescaled Lukasiewicz path and contour function of the symmetrized tree $\widetilde{\mathcal{T}}_n$.
\begin{lem} We have the joint convergence
\[ \bigl(X^{(n)}, C^{(n)}, \widetilde{X}{}^{(n)}, \widetilde{C}{}^{(n)}\bigr) \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, (X, H, \widetilde{X}, \widetilde{H} ), \]
where $\widetilde{H}_t = H_{1-t}$ and $\widetilde{X}_t = \widetilde {H}_t / 2$ for all $t \in[0,1]$. \end{lem}
\begin{pf} Since $\mathcal{T}_n$ and $\widetilde{\mathcal{T}}_n$ have the same law, $(\widetilde{X}{}^{(n)}, \widetilde{C}{}^{(n)})$ converges in distribution to a couple of processes having the same law as $(X,H)$ in $\mathbb{D} \times\mathbb{D}$. Thus, the sequence of the laws of the processes $(X^{(n)}, C^{(n)}, \widetilde{X}{}^{(n)}, \widetilde{C}{}^{(n)})$ is tight in $\mathbb{D}^4$. Up to extraction, we can assume that $(X^{(n)}, C^{(n)}, \widetilde{X}{}^{(n)}, \widetilde{C}{}^{(n)})$ converges in distribution to $(X, H,\widetilde{X}, \widetilde{H})$.
Fix\vspace*{1.5pt} $t\in[0,1]$. The definition of the contour function shows that for all $n \in\mathbb{N}$, we have $\widetilde{C}{}^{(n)}_t = C^{(n)}_{1-t}$. Since $H$ and $\widetilde{H}$ are $\mbox{a.s.}$ continuous, taking the limit yields $\widetilde{H}_t = H_{1-t}$ almost surely.\vspace*{1pt} Besides, since $(X,H)$ and $(\widetilde{X},\widetilde{H})$ have the same law, we have $\widetilde {X}_t = \widetilde{H}_t /2$ a.s. for all $t \in[0,1]$.
These equalities also hold $\mbox{a.s.}$, simultaneously for a countable number of times~$t$, and the continuity of $H$, $X$, $\widetilde{H}$ and $\widetilde{X}$ give that $\mbox{a.s.}$, they hold for all $t \in [0,1]$. This identifies uniquely the law of $(X,H, \widetilde{X}, \widetilde{H})$, hence the lemma. \end{pf}
This lemma shows that we can still work in the setting of
\begin{equation} \label{HasCvBr} \cases{ \displaystyle \bigl( X^{(n)}, \widetilde{X}{}^{(n)} \bigr) \mathop{ \longrightarrow}_{n\rightarrow\infty}\, (X, \widetilde{X} )\qquad\mbox{a.s.}, \vspace*{3pt}\cr \displaystyle\bigl(t^{(n)}_i, i \in\mathbb{N} \bigr) \mathop{ \longrightarrow}_{n \rightarrow\infty}\, (t_i, i \in\mathbb{N} )\qquad \mbox{a.s.},} \end{equation}
where $t^{(n)}_i = (\xi_n (i)+1)/(n+1)$ for all $n \in\mathbb{N}$, $i \geq 0$, and $(t_i, i \in\mathbb{N})$ is a sequence of independent uniform variables in $[0,1]$ such that $\xi(i) = p (t_i)$.
Recall the notation $\mathcal{R}_n (k)$ for the shape of the subtree of $\mathcal{T}_n$ (or $\mathcal{T}^{\mathrm{br}}$ if $n = \infty$) spanned by the root and the vertices $\xi_n (1), \ldots, \xi_n (k)$ [or $\xi(1), \ldots, \xi (k)$ if $n = \infty$]. We also keep the notation $L_n (v)= \deg(v, \mathcal{T} _n) / a_n$ for the rate at which a vertex $v$ is deleted in $\mathcal {T}_n$ (if $n \in\mathbb{N}$), and
\begin{eqnarray*} \sigma_n (t) &=& \mathop{\sum_{0 < s < t}}_{X^{(n)}_{s-} < I^{(n)}_{s,t}} \Delta X^{(n)}_s \qquad\forall t \in[0,1], \end{eqnarray*}
where $I^{(n)}_{s,t} = \inf_{s < u < t} X^{(n)}_u$, and $X^{(\infty)} = X$.
As in Section~\ref{SFstJointCv}, we state two lemmas which allow us to control the rates at which the fragmentations happen on the vertices and the edges of $\mathcal{R}_n (k)$.
\begin{lem} \label{TCvLBr} Fix $k \in\mathbb{N}$. Under (\ref{HasCvBr}), $\mathcal{R}_n (k)$ is $\mbox{a.s.}$ constant for all $n$ large enough (say $n \geq N$). Identifying the vertices of $\mathcal{R}_n (k)$ with $\mathcal{R}_{\infty} (k)$ for all $n \geq N$, we have the $\mbox{a.s.}$ convergence
\begin{eqnarray*} &&L_n (v) \mathop{ \longrightarrow}_{n \rightarrow\infty} 0 \qquad\forall v \in V \bigl(\mathcal{R}_{\infty} (k)\bigr). \end{eqnarray*}
\end{lem}
\begin{pf} The proof is the same as that of Lemma~\ref{TCvL}. In particular, we get that if the $b^{(n,k)}$ are the times encoding the ``same'' vertex $v$ of $\mathcal{R}_n (k)$, for $n \geq N$, then we have the $\mbox{a.s.}$ convergences
\begin{eqnarray*} b^{(n,k)} &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty}& b^{(\infty,k)}, \\ X^{(n)}_{b^{(n,k)}} &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty}& X_{b^{(\infty,k)}}, \\ X^{(n)}_{(b^{(n,k)})^-} &\displaystyle\mathop{ \longrightarrow}_{n \rightarrow\infty }& X_{(b^{(\infty,k)})^-}. \end{eqnarray*}
Since $X$ is now continuous, this yields
\begin{eqnarray*} L_n (v) &=& \Delta X^{(n)}_{b^{(n,k)}} + \frac{1}{a_n} \mathop{ \longrightarrow}_{n \rightarrow\infty} \Delta X_{b^{(\infty,k)}} = 0. \end{eqnarray*}\upqed
\end{pf}
\begin{lem} \label{TCvsigmaBr} Let $(b_n)_{n \geq1} \in[0,1]^{\mathbb{N}}$ be a converging sequence in $[0,1]$, and let $b$ denote its limit. Then
\begin{eqnarray*} &&\sigma_n (b_n) \mathop{ \longrightarrow}_{n \rightarrow\infty} H_b \qquad\mbox{a.s.} \end{eqnarray*}
\end{lem}
\begin{pf} As in the proof of Lemma~\ref{TCvsigma}, for all $n \in\mathbb {N}\cup\{ \infty\}$, we write $\sigma_n (t) = \sigma_n^- (t) + \sigma_n^+ (t)$, where
\begin{eqnarray*} &&\sigma_n^+ (t) = \mathop{\sum_{0 < s < t}}_{X^{(n)}_{s-} < I^{(n)}_{s,t}} \bigl(X^{(n)}_{s} - I^{(n)}_{s,t} \bigr) \quad\mbox{and} \quad\sigma_n^- (t) = \mathop{\sum _{0 < s < t}}_{X^{(n)}_{s-} < I^{(n)}_{s,t}} \bigl(I^{(n)}_{s,t} - X^{(n)}_{s^-} \bigr). \end{eqnarray*}
For all $t \geq0$, $n \in\mathbb{N}$, we have $\sigma_n^- (t) = X^{(n)}_{t^-}$. As a consequence, (\ref{HasCvBr}) gives
\begin{eqnarray*} &&\sigma_n^- (b_n) \mathop{ \longrightarrow}_{n \rightarrow\infty} X_b \qquad\mbox{a.s.} \end{eqnarray*}
Besides, we still have $\sigma_n^+ (b_n) = \tilde{\sigma}_n^- ( \tilde{b}_n )$, with
\begin{eqnarray*} &&\tilde{b}_n = 1-b_n+\frac{1}{n+1} \bigl(1 + H^{[n]}_{(n+1) b_n - 1}-D^{[n]}_{(n+1) b_n - 1} \bigr). \end{eqnarray*}
Now
\begin{eqnarray*} &&\tilde{b}_n \mathop{ \longrightarrow}_{n \rightarrow\infty} 1 - b - l(b), \end{eqnarray*}
where $l(b) = \inf\{ s>b\dvtx X_s = X_{b}\} - b$. Using (\ref{HasCvBr}) again, we get
\begin{eqnarray*} &&\sigma_n^+ (b_n) \mathop{ \longrightarrow}_{n \rightarrow\infty} \widetilde{X}_{1-b-l(b)} = X_{b+l(b)} = X_b \qquad \mbox{a.s.} \end{eqnarray*}
Thus, we have the $\mbox{a.s.}$ convergence
\begin{eqnarray*} &&\sigma_n (b_n) \mathop{ \longrightarrow}_{n \rightarrow\infty} 2 X_b = H_b. \end{eqnarray*}\upqed
\end{pf}
We can now give the proof of Proposition~\ref{TJointCvBr}.
\begin{pf*}{Proof of Proposition~\ref{TJointCvBr}} Fix $n \in\mathbb{N}\cup\{\infty\}$. As in the proof of Proposition~\ref{TFstJointCv}, we write $\mathcal{R}_n (k,t)$ for the reduced tree with edge-lengths, endowed with point processes of marks on its edges and vertices such that:
\begin{itemize}
\item The marks on the vertices of $\mathcal{R}_n (k)$ appear at the same time as the marks on the corresponding vertices of $\mathcal{T}_n$.
\item Each edge receives a mark at its midpoint at the first time when a vertex $v$ of $\mathcal{T}_n$ such that $v \in e$ is marked in $\mathcal{T}_n$. \end{itemize}
These two point processes are independent, and their rates are the following:
\begin{itemize}
\item If $n \in\mathbb{N}$, each vertex $v$ of $\mathcal{R}_n (k)$ is marked at rate $L_n (v)$, independently of the other vertices. If $n = \infty $, there are no marks on the vertices.
\item For each edge $e$ of $\mathcal{R}_n (k)$, letting $b, b'$ denote the points of $B_n (k)$ corresponding to $e^-, e^+$, the edge $e$ is marked at rate $\Sigma L_n (e)$, independently of the other edges, with
\begin{eqnarray*} \Sigma L_n (e) & =& \sum_{v \in V (\mathcal{T}_n) \cap e} L_n (v) \\ & =& \sigma_n \bigl(b'\bigr) - \sigma_n (b) + \frac{n}{a_n^2} \bigl( H^{(n)}_{(b')^-} - H^{(n)}_{b^-} \bigr) - L_n \bigl(e^-\bigr) \end{eqnarray*}
if $n \in\mathbb{N}$, and
\begin{eqnarray*} &&\Sigma L_{\infty} (e) = H_{b'} - H_b. \end{eqnarray*}
\end{itemize}
We see from Lemmas~\ref{TCvLBr} and~\ref{TCvsigmaBr} that $L_n (v)$ converges to $0$ as $n \rightarrow\infty$, and that
\begin{eqnarray*} &&\Sigma L_n (e) \mathop{ \longrightarrow}_{n \rightarrow\infty}\, \biggl(1 + \frac {1}{\sigma^2} \biggr) \Sigma L_{\infty} (e). \end{eqnarray*}
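Indeed, with $a_n = \sigma\sqrt{n}$ we have $n/a_n^2 = 1/\sigma^2$; moreover, Lemma~\ref{TCvsigmaBr} gives $\sigma_n (b') - \sigma_n (b) \rightarrow H_{b'} - H_b$, the convergence of the rescaled height processes (together with the continuity of $H$) gives $H^{(n)}_{(b')^-} - H^{(n)}_{b^-} \rightarrow H_{b'} - H_b$, and Lemma~\ref{TCvLBr} gives $L_n (e^-) \rightarrow0$, so that $\Sigma L_n (e) \rightarrow(1+\sigma^{-2}) (H_{b'} - H_b )$.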
As a consequence, we have the convergence
\begin{equation} \label{ECvRBr} \biggl( \frac{a_n}{n} \mathcal{R}_n (k,t), t \geq0 \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \biggl( \mathcal{R}_{\infty} \biggl(k, \biggl(1+\frac{1}{\sigma^2} \biggr)t \biggr), t \geq0 \biggr). \end{equation}
[As in the case $\alpha\in(1,2)$, $(a_n/n) \cdot\mathcal{R}_n (k,t)$ and $\mathcal{R}_{\infty} (k,t)$ can be seen as random variables in $\mathbb{T} \times(\mathbb{R}_+ \cup\{-1\})^{\mathbb {N}} \times\{ -1,0,1\}^{\mathbb{N}^2}$.]
For all $i \in\mathbb{N}$, we let $\eta_n (k,i,t)$ denote the number of vertices among $\xi_n (1), \ldots, \xi_n (k)$ in the component of $\mathcal{R}_n (k)$ containing $\xi_n (i)$ at time $t$, and similarly $\eta_{\infty} (k,i,t)$ the number of vertices among $\xi(1), \ldots, \xi(k)$ in the component of~$\mathcal{R}_{\infty} (k)$ containing $\xi(i)$ at time $t$. It follows from (\ref{ECvRBr}) that we have the joint convergences
\begin{eqnarray*} \frac{a_n}{n} \mathcal{T}_n &\displaystyle \mathop{ \longrightarrow} ^{(d)}& \mathcal{T}^{\mathrm{br}}, \\ \bigl(\eta_n (k,i,t)\bigr)_{t \geq0, i \in\mathbb{N}} &\displaystyle \mathop{\longrightarrow } ^{(d)}& \biggl(\eta_{\infty} \biggl(k,i, \biggl(1+\frac{1}{\sigma^2} \biggr)t \biggr) \biggr)_{t \geq0, i \in \mathbb{N}}, \\ \bigl(\tau_n (i,j)\bigr)_{i,j \in\mathbb{N}} &\displaystyle \mathop{ \longrightarrow} ^{(d)}& \biggl( \biggl(1+\frac {1}{\sigma^2} \biggr)^{-1}\tau(i,j) \biggr)_{i,j \in\mathbb{N}}. \end{eqnarray*}
The end of the proof is the same as for Proposition~\ref{TFstJointCv}. \end{pf*}
\subsection{Upper bound for the expected component mass}
The second step is to show that, as in Section~\ref{SKeyEstimates}, the following properties hold.
\begin{lem} \label{TKEBr} It holds that
\begin{eqnarray*} &&\lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} \mathbb{E} \biggl[ \int_{2^l}^{\infty} \mu_{n,\xi_n} (t) \,dt \biggr] = 0. \end{eqnarray*}
Besides, there exists a constant $C$ such that, for all $n \in\mathbb{N}$,
\begin{eqnarray*} &&\mathbb{E} \bigl[\delta'_n (0,\xi_n) \bigr] \leq C. \end{eqnarray*}
\end{lem}
\begin{pf} We use the fact that there exists a natural coupling between the edge-fragmentation and the vertex-fragmentation of $\mathcal{T}_n$. Indeed, both can be obtained by a deterministic procedure, given $\mathcal{T}_n$ and a uniform permutation $(i_1,\ldots,i_n)$ of $\{1,\ldots,n\}$. More precisely, in the edge-fragmentation, at each step $k$ we delete the edge $e_{i_k}$, thus splitting the component containing it into two connected components, whereas in the vertex-fragmentation, we delete all the edges $e$ such that $e^- = e_{i_k}^-$. Thus, at each step, the connected component containing a given edge $e$ for the vertex-fragmentation is included in the component containing $e$ for the edge-fragmentation.
Now consider the continuous-time versions of these fragmentations: each edge is marked independently with rate $a_n/n = \sigma/\sqrt{n}$ in our case, and $1/\sqrt{n}$ in \cite{BerMi}. We let $\mathcal{T}_{n,i}^{E} (t)$ and $\mathcal{T}_{n,i}^{V} (t)$ denote the connected components containing the edge $e_i$ at time $t$, respectively, for the edge-fragmentation and the vertex-fragmentation. Then the preceding remark shows that there exists a coupling such that $\mathcal{T}_{n,i}^{V} (t) \subset \mathcal{T} _{n,i}^{E} (\sigma t)$ a.s., and thus $\mu_n (\mathcal {T}_{n,i}^{V} (t)) \leq\mu_n (\mathcal{T}_{n,i}^{E} (\sigma t))$ almost surely.
Lemma 3 and Corollary 1 of \cite{BerMi} show that the two announced properties hold for the case of the edge-fragmentation. Therefore, they also hold for the vertex-fragmentation. \end{pf}
\subsection{Proof of Theorem \texorpdfstring{\protect\ref{TBrownianCase}}{1.4}}
As before, the proof of Theorem~\ref{TBrownianCase} now relies on showing a joint convergence for the rescaled versions of $\mathcal {T}_n$ and the modified cut-tree $\operatorname{Cut}_{\mathrm{v}}' (\mathcal{T}_n)$:
\begin{equation} \label{EJointCvCut} \biggl( \frac{a_n}{n} \mathcal{T}_n, \biggl(1+ \frac{1}{\sigma ^2} \biggr) \operatorname{Cut}_{\mathrm{v}}' (\mathcal{T}_n) \biggr) \mathop{ \longrightarrow}_{n \rightarrow \infty}^{(d)}\, \bigl( \mathcal{T}^{\mathrm{br}}, \operatorname{Cut}\bigl(\mathcal{T}^{\mathrm{br}} \bigr) \bigr) \end{equation}
in $\mathbb{M} \times\mathbb{M}$. Indeed, Lemma~\ref{TModDist} and the second part of Lemma~\ref{TKEBr} show that
\begin{eqnarray*} &&\mathbb{E} \biggl[\biggl\vert\frac{a_n}{n} \delta_n (i,j) - \delta_n ' (i,j)\biggr\vert^2 \biggr] \leq\frac{2 C a_n}{n} \end{eqnarray*}
for all $i,j \geq0$. Thus, (\ref{EJointCvCut}) entails the joint convergence
\begin{eqnarray*} &&\biggl( \frac{a_n}{n} \mathcal{T}_n, \frac{a_n}{n} \biggl(1+ \frac {1}{\sigma ^2} \biggr) \operatorname{Cut}_{\mathrm{v}}(\mathcal{T}_n) \biggr) \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl(\mathcal{T}^{\mathrm{br}}, \operatorname{Cut}\bigl(\mathcal{T}^{\mathrm{br}}\bigr) \bigr). \end{eqnarray*}
Since $a_n = \sigma\sqrt{n}$, this gives Theorem~\ref{TBrownianCase}.
Let us finally justify why (\ref{EJointCvCut}) holds. Proposition~\ref {TJointCvBr} shows that for every fixed integer $l$, there is the joint convergence
\begin{eqnarray*} \frac{a_n}{n} \mathcal{T}_n &\displaystyle \mathop{ \longrightarrow}_{n \rightarrow \infty}^{(d)}& \mathcal{T}^{\mathrm{br}}, \\ \Biggl( 2^{-l} \sum_{j=1}^{4^l} \mu_{n,\xi_n (i)} \bigl(j 2^{-l} \bigr) \Biggr)_{i \in\mathbb{N}} &\displaystyle \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}& \Biggl( 2^{-l} \sum _{j=1}^{4^l} \mu_{\xi(i)} \bigl(C_{\sigma} j 2^{-l} \bigr) \Biggr)_{i \in\mathbb{N}}, \end{eqnarray*}
where $C_{\sigma} = 1+ 1/\sigma^2$. Using the upper bound (\ref {EMajIntf}) and the first part of Lemma~\ref{TKEBr}, we get that
\begin{eqnarray*} &&\lim_{l \rightarrow\infty} \sup_{n \in\mathbb{N}} \mathbb{E} \Biggl[ \Biggl\vert\int_0^{\infty} \mu_{n,\xi_n (i)} (t) \,dt - 2^{-l} \sum_{j=1}^{4^l} \mu_{n,\xi_n (i)} \bigl(j2^{-l} \bigr)\Biggr\vert\Biggr] = 0, \end{eqnarray*}
and these expectations do not depend on $i$. Proposition 3.1 of \cite {BerMi} shows that $\delta(0,\xi(i))$ has the same law as $d (0,\xi (i))$ and, therefore, has finite mean. Thus,
\begin{eqnarray*} &&\mathbb{E} \Biggl[\Biggl\vert\int_0^{\infty} \mu_{\xi(i)} (C_{\sigma} t ) \,dt - 2^{-l} \sum_{j=1}^{4^l} \mu_{\xi(i)} \bigl(C_{\sigma} j2^{-l} \bigr)\Biggr\vert\Biggr]
\leq\underbrace{2^{-l} + \mathbb{E} \biggl[\int_{2^l}^{\infty} \mu_{\xi(i)} (C_{\sigma} t ) \,dt \biggr]}_{\mathop{\longrightarrow}\limits_{l \rightarrow\infty} 0}, \end{eqnarray*}
and the left-hand side does not depend on $i$. Since
\begin{eqnarray*} &&\int_0^{\infty} \mu_{\xi(i)} (C_{\sigma} t ) \,dt = C_{\sigma}^{-1} \int _0^{\infty} \mu_{\xi(i)} (t) \,dt = C_{\sigma }^{-1} \delta\bigl(0,\xi(i)\bigr), \end{eqnarray*}
we conclude that
\begin{eqnarray*} &&\bigl(C_{\sigma} \delta'_n \bigl(0, \xi_n(i)\bigr) \bigr)_{i \in\mathbb {N}} \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl(\delta\bigl(0, \xi(i)\bigr) \bigr)_{i \in \mathbb{N}}, \end{eqnarray*}
jointly with $(a_n/n) \cdot\mathcal{T}_n \mathop{ \longrightarrow }\limits ^{(d)} \mathcal{T}^{\mathrm{br}}$. Using in addition the convergence of the $\tau_n (i,j)$ shown in Proposition~\ref {TJointCvBr}, we see that the preceding convergences also hold jointly with
\begin{eqnarray*} &&\bigl( C_{\sigma} \delta'_n \bigl( \xi_n (i), \xi_n(j)\bigr) \bigr)_{i,j \in\mathbb{N}} \mathop{ \longrightarrow}_{n \rightarrow\infty}^{(d)}\, \bigl(\delta\bigl(\xi(i), \xi(j) \bigr) \bigr)_{i,j \in\mathbb{N}}, \end{eqnarray*}
and this gives the convergence (\ref{EJointCvCut}).
\setcounter{teo}{0} \begin{appendix}\label{app}
\section*{Appendix: Adaptation of Doney's result}
We rephrase Lemma~\ref{TDoney} using the notation of \cite{Don}.
\begin{lem} Let $(X_i)_{i \in\mathbb{N}}$ be a sequence of i.i.d. variables in $\mathbb{N}\cup \{0\}$, whose law belongs to the domain of attraction of a stable law of index $\hat{\alpha}\in(0,1)$, and $S_n = X_1 + \cdots+ X_n$. We also let $A \in R_{\hat{\alpha}}$ be a positive increasing function such that
\begin{equation} \label{HDoney1} \mathbb{P} (X > r ) \sim\frac{1}{A (r)}, \end{equation}
and $a$ the inverse function of $A$. Besides, we suppose that the additional hypothesis
\begin{equation} \label{HDoney2} \sup_{r \geq1} \biggl(\frac{r \mathbb{P}(X = r)}{\mathbb {P}(X > r)} \biggr) < \infty \end{equation}
holds. Then there exist constants $B, C$ such that for all $r \in \mathbb{N} $, for all $n$ such that $r / a_n \geq B$,
\[ \mathbb{P} (S_n = r ) \leq C \frac{n}{r A (r)}. \]
\end{lem}
This result is an adaptation of a theorem shown by Doney in \cite {Don}, which gives an equivalent for $\mathbb{P} (S_n = r )$ as $n\rightarrow \infty$, uniformly in $n$ such that $r / a_n \rightarrow\infty$, using the slightly stronger hypothesis
\[ \mathbb{P} (X=r ) \sim\frac{1}{r A (r)} \qquad\mbox{as } r \rightarrow \infty \]
instead of (\ref{HDoney2}).
\begin{pf*}{Sketch of the proof} The main idea is to split up $\mathbb{P} (S_n = r )$ into four terms, depending upon the values taken by $M_n = \max\{X_i\dvtx i=1, \ldots, n \}$ and $N_n = \vert\{ m \leq n\dvtx X_m > z \}\vert$. More precisely, letting $\eta$ and $\gamma$ be constants in $(0,1)$, $w = r / a_n$ and $z = a_n w^{\gamma}$, we have
\[ \mathbb{P} (S_n = r ) = \sum_{i=0}^3 \mathbb{P} \bigl(\{ S_n = r\} \cap A_i \bigr), \]
where $A_i = \{M_n \leq\eta r, N_n = i \}$ for $i = 0,1$, $A_2 = \{M_n \leq\eta r, N_n \geq2 \}$ and $A_3 = \{M_n > \eta r \}$. For our purposes, it is enough to show that there exist constants $c_i$ such that
\[ q_i:= \mathbb{P} \bigl(\{S_n = r\} \cap A_i \bigr) \leq c_i \frac {n}{r A(r)} \qquad\forall i \in\{0,1,2,3\}. \]
The constants $\gamma$ and $\eta$ are fixed, with conditions that will be given later (see the detailed version of the proof for explicit conditions). In the whole proof, we suppose that $w \geq B$, for $B$ large enough (possibly depending on the values of $\eta$ and $\gamma $). Note that hypotheses (\ref{HDoney1}) and (\ref{HDoney2}) imply the existence of a constant $c$ such that
\begin{equation} \label{EMajPr&Fr} p_r = \mathbb{P} (X=r ) \leq\frac{c}{r A(r)} \quad\mbox{and}\quad\overline{F} (r) = \mathbb{P} (X>r ) \leq\frac {c}{A (r)}. \end{equation}
The first calculations of \cite{Don} show that we have the following inequalities:
\begin{eqnarray*} q_3 &\leq& n \sup_{l > \eta r} p_l, \\ q_2 &\leq&\frac{1}{2} n^2 \overline{F} (z) \sup _{l>z} p_l, \\ q_1 &\leq& n \mathbb{P} \bigl(M_{n-1} \leq z, S_{n-1} > (1-\eta) r \bigr) \sup_{l>z} p_l. \end{eqnarray*}
We now use (\ref{EMajPr&Fr}), and apply Lemma~\ref{TLemmeFVR} for the regularly varying function $A$. The first inequality thus yields the existence of a constant $c_3$ which only depends on the value of $\eta $. Similarly, the second inequality gives the existence of $c_2$, provided~$\gamma$ is large enough (independently of $B$) and $B \geq1$.
To get the existence of $c_1$, we first apply Lemma 2 of \cite{Don}, which gives an upper bound for the quantity $\mathbb{P} (M_{n-1} \leq z, S_{n-1} > (1-\eta) r )$ provided $z$ is large enough and $(1-\eta)r \geq z$. Since $a_1 w^{\gamma} \leq z \leq r / w^{1-\gamma}$, these conditions can be achieved by taking $B$ large enough. The lemma gives
\[ q_1 \leq c \frac{n}{z A(z)} \cdot\biggl(\frac{c' z}{(1-\eta) r} \biggr)^{(1-\eta)r/z},
\]
where $c'$ is a constant. Now, applying Lemma~\ref{TLemmeFVR}, we get the existence of a constant $c'_1$ such that
\[ q_1 \leq c'_1 \frac{n}{r A(r)} \cdot w^{\kappa}, \]
where $\kappa$ depends on the values of $\eta$, $\gamma$ and $B$. For a given choice of $\eta$ and $\gamma$, and for $B$ large enough, $\kappa$ is negative, hence the existence of $c_1$.
For $q_0$, getting the upper bound goes by first showing that we can work under the hypotheses $r \leq nz$ and $r \leq n a_n/2$ (instead of the hypotheses $n \rightarrow\infty$ and \mbox{$r / n a_n \rightarrow0$} of \cite{Don}). Indeed, if $r > nz$, then $q_0 = 0$, and if $r > n a_n/2$, another application of Lemma 2 of \cite{Don} and of Lemma~\ref {TLemmeFVR} yields the result. The rest of the proof relies on replacing the $X_i$ by truncated variables $\widehat{X}_i$, and using an exponentially biased probability law. This last part is long and technical, but it is rather easy to check that each step still holds with our hypotheses, for $B$ large enough and with an appropriate choice of $\eta$ (independently of $B$). \end{pf*} \end{appendix}
\printaddresses
\end{document} | arXiv |
Henry Berthold Mann
Henry B. Mann, our friend and former colleague, passed away on February 1, 2000 in Tucson, Arizona. A mathematician of international fame, Mann, in a career of more than fifty years, made significant contributions to algebra, number theory, statistics, and combinatorics.
Henry Mann was born October 27, 1905, in Vienna to Oscar and Friedrike (Schönnhof) Mann. He received his Ph.D. degree in mathematics in 1935 from the University of Vienna where, as a student of Philipp Furtwängler, he wrote his dissertation in algebraic number theory. After a year of teaching school in Vienna and a couple of years spent in research and tutoring, he emigrated in 1938 to the United States.
In New York he earned his living for several years primarily by tutoring. He had by then developed an interest in mathematical statistics, particularly in the analysis of variance, and in the problem of designing experiments with a view to their statistical analysis. He later contributed to this subject in a number of research papers and in his book (1949) "Analysis and Design of Experiments."
One of Mann's most remarkable achievements was his discovery in 1941 of a proof of a celebrated conjecture of Schnirelmann and Landau in additive number theory. This conjecture had its origin in the work of L. Schnirelmann in the early 1930s. Let $A$ (and similarly $B$, $C$) be sets of positive integers. Form $A^0$, $B^0$ by adjoining $0$ to $A$ and $B$ respectively. Let $A(n)$ be the number of positive integers in $A$ that are $\le n$. The greatest lower bound of the quotients $A(n)/n$ is called the density of $A$. Let $C^0$ consist of all integers of the form $a+b$ ($a\in A^0$, $b\in B^0$).
Let $\alpha$ be the density of $A$, $\beta$ the density of $B$, and $\gamma$ the density of $C$. It had been conjectured by E. Landau, I. Schur, and A. Khintchine that
$$\gamma \ge \alpha + \beta \quad\mbox{or}\quad \gamma = 1. \qquad (*)$$
Approximations to this inequality had been obtained by Landau in 1930, who showed that $\gamma \ge \alpha + \beta -\alpha \beta$, and by A. Brauer in 1941, who showed that $\gamma \ge (9/10)(\alpha + \beta)$. Schnirelmann had shown that
(1) $\gamma \ge \alpha+\beta - \alpha \beta$.

(2) $C$ contains all positive integers if $\alpha + \beta \ge 1$.
From these two rules Schnirelmann obtained (readily) the result that any set having positive density is a basis for the integers (that is, if $\alpha > 0$, then the sum of $A$ with itself sufficiently many times contains all positive integers). As an application of these ideas, Schnirelmann proved (for the first time) the existence of a value $k$ such that every integer greater than 1 is the sum of at most $k$ primes. This he did by showing that $P + P$, where $P$ is the set of primes together with $1$, has positive density, hence is a basis of the integers.
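In outline, the deduction of the basis property from (1) and (2) runs as follows (a standard argument, sketched here for the reader's convenience). Write $\gamma_j$ for the density of the $j$-fold sum $A^0 + \cdots + A^0$. Rule (1) gives $1-\gamma_{j+1} \le (1-\gamma_j)(1-\alpha)$, hence $1-\gamma_j \le (1-\alpha)^j$; as soon as $\gamma_j \ge 1/2$, rule (2), applied to two copies of the $j$-fold sum, shows that the $2j$-fold sum contains all positive integers.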
Out of further study of these ideas by Schnirelmann and by E. Landau, there arose the conjecture that (1) and (2) may be replaced by the much stronger statement $(*)$: Either $\gamma \ge \alpha + \beta$ or $C$ contains all positive integers.
This conjecture, appealing in its apparent simplicity, soon attracted wide attention. Many distinguished mathematicians attempted to find a proof; indeed, partial results were obtained over the next decade by E. Landau, A. Khintchine, A. Besicovitch, I. Schur, and A. Brauer.
It was this conjecture that Mann succeeded in proving in 1941. His interest in the problem had been aroused through the lectures of A. Brauer at New York University. Actually, he proved the still sharper statement:
$$\frac{C(n)}{n}\geq\min_{\substack{0 < m \le n \\ m\not\in C}}\left(1,\frac{A(m)+B(m)}{m}\right).$$
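Since $A(m)/m\ge \alpha$ and $B(m)/m\ge \beta$ for every $m$, this inequality gives $C(n)/n\ge \min(1,\alpha+\beta)$ for every $n$, and hence the conjectured statement $(*)$.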
For his proof he was awarded the Cole Prize in Number Theory by the American Mathematical Society in 1946. The technique that Mann introduced in his proof, and its various modifications, have led to further important results in additive number theory and have also proved useful in the more general setting of additive problems in groups.
In 1942 Mann was the recipient of a Carnegie Fellowship for the study of statistics at Columbia University. At Columbia he had the opportunity of working with Abraham Wald in the department of economics, which at that time was headed by Harold Hotelling. He taught for a year (1943-1944) in the Army Specialized Training Program at Bard College; he spent a year (1944-1945) as research associate at Ohio State University, and six months as research associate at Brown University. In 1946 he returned to Ohio State to join the mathematics faculty where, as associate professor (1946-1948) and full professor (1948-1964), he was actively engaged in teaching and research for many years. After retiring from Ohio State, he held professorships at the University of Wisconsin Mathematics Research Center from 1964 to 1971, and at the University of Arizona from 1971 until his second retirement in 1975.
Mann's research interests in algebra and combinatorics covered a wide range. He had a special fondness, though, for algebraic number theory and Galois theory, and imparted his enthusiasm for these subjects to many students over the years. Besides his dozen or so papers that contribute directly to these subjects, several of his papers on difference sets and coding theory contain beautiful applications of theorems on algebraic numbers and Galois theory.
Mann married Anna Löffler on July 19, 1935, and had one son Michael.
Bibliography of Henry B. Mann
Ein Satz über Normalteiler, Anz. Österreich. Akad. Wiss. Math.-Naturwiss. Kl. (1935), Nr. 6, 49-50.
Über eine notwendige Bedingung für die Ordnung einfacher Gruppen, Anz. Österreich. Akad. Wiss. Math.-Naturwiss. Kl. (1935), Nr. 19, 209-210.
Untersuchungen über Wabenzellen bei allgemeiner Minkowskischer Metrik, Mh. Math. Phys. 42 (1935), 417-424.
Über die Erzeugung von Darstellungen von Gruppen durch Darstellungen von Untergruppen, Mh. Math. Phys. 46 (1937), 74-83.
A proof of the fundamental theorem on the density of sums of sets of positive integers. Ann. of Math. 43 (1942), 523-527.
On the choice of the number of class intervals in the application of the chi square test. Ann. Math. Stat. 13 (1942), 306-317 (with A. Wald).
The construction of orthogonal Latin squares, Ann. Math. Stat. 13 (1942), 418-423.
Quadratic forms with linear constraints, Amer. Math. Monthly 50 (1943), 430-433.
On stochastic limit and order relationships, Ann. Math. Stat. 14 (1943), 217-226 (with A. Wald).
On the statistical treatment of linear stochastic difference equations. Econometrica 11 (1943), 173-220 (with A. Wald).
On the construction of sets of orthogonal Latin squares, Ann. Math. Stat. 14 (1943), 401-414.
On orthogonal Latin squares, Bull. Amer. Math. Soc. 50 (1944), 249-257.
On certain systems which are almost groups, Bull. Amer. Math. Soc. 50 (1944), 879-881.
On a problem of estimation occurring in public opinion polls, Ann. Math. Stat. 16 (1945), 85-90. [A correction appears in Ann. Math. Stat. 17 (1946), 87-88.]
On a test for randomness based on signs of differences, Ann. Math. Stat. 16 (1945), 193-199.
Note on a paper by C. W. Cotterman and L. U. Snyder, Ann. Math. Stat. 16 (1945), 311-312.
Nonparametric tests against trend, Econometrica 13 (1945), 245-259.
Correction of G-M counter data, Phys. Rev. 68(1945), 40-43 (with J. D. Kurbatov).
A note on the correction of Geiger-Müller counter data, Quart. J. Mech. Appl. Math. 4 (1946), 307-309.
On a test of whether one of two random variables is stochastically larger than the other, Ann. Math. Stat. 18 (1947), 50-60 (with D. R. Whitney).
Integral extensions of a ring, Bull. Amer. Math. Soc. 55 (1949), 592-594 (with H. Chatland).
On the field of origin of an ideal, Canad. J. Math. 2 (1950), 16-21.
On the number of integers in the sum of two sets of positive integers, Pacific J. Math. 1(1951), 249-253.
On the realization of stochastic processes by probability distributions in function spaces, Sankhya 11(1951), 3-8.
The estimation of parameters in certain stochastic processes, Sankhya 11(1951), 97-106.
On simple difference sets, Sankhya 11(1951), 357-364 (with T. A. Evans).
On products of sets of group elements, Canad. J. Math. 4 (1952), 64-66.
Some theorems on difference sets, Canad. J. Math. 4 (1952), 222-226.
On the estimation of parameters determining the mean value function of a stochastic process, Sankhya 12 (1952), 117-120.
An addition theorem for sets of elements of Abelian groups, Proc Amer. Math Soc. 4(1953), 423.
Systems of distinct representatives, Amer. Math. Monthly 60 (1953), 397-401 (with H. J. Ryser).
On the moments of stochastic integrals, Sankhya 12 (1953), 347-350 (with A. P. Calderón).
On integral closure, Canad. J. Math. 6 (1954), 471-473 (with H. S. Butts and M. Hall Jr.).
On an exceptional phenomenon in certain quadratic extensions. Canad. J. Math. 6 (1954), 474-476.
A generalization of a theorem of Ankeny and Rogers, Rend. Circ. Mat. Palermo 3(1954), 106-108.
A theory of estimation for the fundamental random process and the Ornstein Uhlenbeck process, Sankhya 13 (1954), 325-350.
On the efficiency of the least square estimates of parameters in the Ornstein Uhlenbeck process, Sankhya 13 (1954). 351-358 (with P. B. Moranda).
Corresponding residue systems in algebraic number fields, Pacific J. Math. 6 (1956), 211-224 (with H. S. Butts).
On integral bases, Proc. Amer. Math. Soc. 9 (1958), 167-172.
A note to the paper "On integral bases" by H. B. Mann, Proc Amer. Math. Soc. 9(1958), 173-174 (with V. Hanly).
Some applications of the Cauchy-Davenport theorem, Norske Vid. Selsk. Forh. (Trondheim) 32 (1959), 74-80 (with S. Chowla and E. G. Straus).
The algebra of a linear hypothesis. Ann. Math. Stat. 31(1960), 1-15.
A refinement of the fundamental theorem on the density of the sum of two sets of integers, Pacific J. Math. 10 (1960), 909-915.
Intrablock and interblock estimates, in "Contributions to Probability and Statistics," pp. 293-298. Stanford Univ. Press, Stanford, California, 1960 (with M. V. Menon).
On modular computation, Math. Comput. 15 (1961), 190-192.
An inequality suggested by the theory of statistical inference, Illinois J. Math. 6 (1962), 131-136.
On the number of information symbols in Bose-Chaudhuri codes, Information and Control 5 (1962), 153-162.
Main effects and interactions, Sankhya Ser. A 24 (1962), 185-202.
Balanced incomplete block designs and Abelian difference sets, Illinois J. Math. 8 (1964), 252-261.
On the casus irreducibilis, Amer. Math. Monthly 71(1964), 289-290.
Decomposition of sets of group elements, Pacific J. Math. 14 (1964), 547-558 (with W. B. Laffer).
On multipliers of difference sets, Canad. J. Math. 17 (1965), 541-542 (with R. L. McFarland).
Difference sets in elementary Abelian groups, Illinois J. Math. 9(1965), 212-219.
On linear relations between roots of unity, Mathematika 12 (1965), 107-117.
Recent advances in difference sets, Amer. Math. Monthly 74 (1967), 229-235.
On canonical bases of ideals, J. Combinatorial Theory Ser. 0 2 (1967), 71-76 (with K. Yamamoto).
Sums of sets in the elementary Abelian group of type (p,p), J. Combinatorial Theory 2 (1967), 275-284 (with J. E. Olson).
Two addition theorems, J. Combinatorial Theory 3 (1967), 233-235.
Properties of differential forms in n real variables. Pacific J. Math. 21(1967), 525-529 (with J. Mitchell and L. Schoenfeld). [A correction appears in Pacific J. Math. 23 (1967), 631.]
On the p-rank of the design matrix of a difference set, Information and Control 12 (1968), 474-488 (with F. J. MacWilliams).
On orthogonal m-pods on a cone, J. Combinatorial Theory Ser. 0 5 (1968), 302-307.
A new proof of the maximum principle for doubly-harmonic functions, Pacific J. Math. 27 (1968). 567-571 (with J. Mitchell and L. Schoenfeld).
On canonical bases for subgroups of an Abelian group. in "Combinatorial Mathematics and its Applications" (Proc. Conf., Univ. North Carolina, Chapel Hill, 1967), 38-54, Univ. of North Carolina Press, Chapel Hill, 1969.
A note on balanced incomplete block designs, Ann. Math. Stat. 40 (1969), 679-680.
On multipliers of difference sets, Illinois J. Math. 13(1969), 378-382 (with S. K. Zaremba).
On the difference between the geometric and the arithmetic mean of n quantities, Advances in Math. 5 (1970), 472-473 (with C. Loewner).
Linear equations over a commutative ring, J. Algebra 18(1971), 432-446 (with P. Camion and L. S. Levy).
Antisymmetric difference sets, J. Number Theory 4 (1972), 266-268 (with P. Camion).
Representations by kth powers in GF(q), J.Number Theory 4 (1972), 269-273 (with G. T. Diderrich).
A necessary and sufficient condition for primality and its source, J. Combinatorial Theory Ser. A. 13(1972), 131-134 (with D. Shanks).
Combinatorial problems in finite Abelian groups, in "A Survey of Combinatorial Theory" (Proc. Internat. Symp. Combinatorial Math, and Its Appl., Colorado State Univ.. Fort Collins, Colo. 1971), pp.95-100. North-Holland. Amsterdam, 1973 (with G.T. Diderrich).
On Hadamard difference sets, in "A Survey of Combinatorial Theory" (Proc. Internat. Symp. Combinatorial Math. and Its Appl. Colorado State Univ., Fort Collins, 1971). pp. 333-334. North-Holland, Amsterdam, 1973 (with R. L. McFarland).
Prüfer rings, J. Number Theory 5 (1973), 132-138 (with P. Camion and L. S. Levy).
Additive group theory- a progress report, Bull. Amer. Math. Soc. 79 (1973), 1069-1075.
The solution of equations by radicals, J. Algebra 29(1974), 551-554.
Lectures on error correcting codes, The University of Arizona Department of Mathematics Lecture Note Series, University of Arizona, Tucson, Ariz., 1974, iii+88 pp. (with D. K. Ray-Chaudhuri).
On normal radical extensions of the rationals, Linear and Multilinear Algebra 3 (1975), 73-80 (with W. Y. Vélez).
Prime ideal decomposition in F, Monatsh. Math. 81 (1976), 131-139 (with W. Y. Vélez).
A short proof of Fermat's theorem for n = 3, Math. Student 46 (1978), 103-104 (with W. A. Webb).
An addition theorem for the elementary abelian group of type (p,p), Monatsh. Math. 102(1986), 273-308 (with Y. F. Wou).
"Analysis and Design of Experiments," Dover, New York, 1949.
"Introduction to Algebraic Number Theory," Ohio State Univ. Press, Columbus, 1955.
"Addition Theorems: The Addition Theorems of Group Theory and Number Theory," Wiley (Interscience), New York, 1965.
Ph. D. Students of Henry B. Mann
Donald Ransom Whitney, Ohio State University, 1949
George Marsaglia, Ohio State University, 1951
Hubert Spence Butts, Jr., Ohio State University, 1953
Walter Wilson Hoy, Ohio State University, 1953
Chio-Shih Lin, Ohio State University, 1955
Leon Royce McCulloh, Ohio State University, 1959
Manavazhi Vijaya Krishna Menon, Ohio State University, 1959
Walter Ball Laffer, I, Ohio State University, 1963
George T. Diderrich, University of Wisconsin-Madison, 1972
William Yslas Vélez, University of Arizona, 1975
Ying Fou Wou, University of Arizona, 1980 | CommonCrawl |
\begin{document}
\sloppy
\newenvironment{proo}{\begin{trivlist} \item{\sc {Proof.}}}
{
$\square$ \end{trivlist}}
\long\def\symbolfootnote[#1]#2{\begingroup \def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup}
\title{An explicit two step quantization of Poisson structures\\ and Lie bialgebras}
\author{Sergei~Merkulov} \address{Sergei~Merkulov: Mathematics Research Unit, Luxembourg University, Grand Duchy of Luxembourg } \email{[email protected]}
\author{Thomas~Willwacher} \address{Thomas~Willwacher: Institute of Mathematics, University of Zurich, Zurich, Switzerland} \email{[email protected]}
\begin{abstract} We develop a new approach to deformation quantizations of Lie bialgebras and Poisson structures which goes in two steps.
In the first step one associates to any Poisson (resp.\ Lie bialgebra) structure a so called {\em quantizable}\, Poisson (resp.\ Lie bialgebra) structure. We show explicit transcendental formulae for this correspondence.
In the second step one deformation quantizes a {\em quantizable}\, Poisson (resp.\ Lie bialgebra) structure. We show again explicit transcendental formulae for this second step correspondence (as a byproduct we obtain configuration space models for biassociahedron and bipermutohedron).
In the Poisson case the first step is the most non-trivial one and requires a choice of an associator while the second step quantization is essentially unique, it is independent of a choice of an associator and can be done by a trivial induction. We conjecture that similar statements hold true in the case of Lie bialgebras.
The main new result is a surprisingly simple explicit universal formula (which uses only smooth differential forms) for universal quantizations of finite-dimensional Lie bialgebras.
\end{abstract}
\maketitle \markboth{Sergei Merkulov and Thomas Willwacher}{Explicit deformation quantization of Lie bialgebras}
{\small {\small
\tableofcontents } }
{\Large \section{\bf Introduction} }
\subsection{Two classical deformation quantization problems} There are two famous deformation quantization problems, one deals with quantization of Poisson structures on finite dimensional manifolds and another with quantization of Lie bialgebras.
A lot is known by now about the first deformation quantization problem: we have an explicit formula for a universal deformation quantization \cite{Ko}, we also know that all homotopy inequivalent universal deformation quantizations are classified by the set of Drinfeld associators and that, therefore, the Grothendieck-Teichm\"uller group acts on such quantizations.
Also much is known about the second quantization problem. Etingof and Kazhdan proved in \cite{EK} that, for any choice of a Drinfeld associator, there exists a universal quantization of an arbitrary Lie bialgebra. Later Tamarkin gave a second proof of the Etingof-Kazhdan deformation quantization theorem in \cite{Ta}, and very recently Severa found a third proof \cite{Se}. The theorem follows furthermore from the more general results of \cite{GY}. All these proofs give us existence theorems for deformation quantization maps, but give no hint of how such a quantization might look explicitly at any order in $\hbar$.
In this paper we show a new transcendental explicit formula for universal quantization of finite-dimensional Lie bialgebras. This work is based on the study of compactified configuration spaces in ${\mathbb R}^3$ which was motivated by (but not identical to) an earlier work of Boris Shoikhet \cite{Sh1}; it gives in particular a new proof of the Etingof-Kazhdan existence theorem. The methods used in the construction of that formula work well also in two dimensions, and give us new explicit formulae for a universal quantization of Poisson structures. Let us explain the main ideas of the paper first in this very popular case.
\subsection{Deformation quantization of Poisson structures} Let $C^\infty({\mathbb R}^n)$ be the commutative algebra of smooth functions in ${\mathbb R}^n$. A {\em star product} in $C^\infty({\mathbb R}^n)$ is an associative product, $$ \begin{array}{rccc} *_\hbar: & C^\infty({\mathbb R}^n) \times C^\infty({\mathbb R}^n) & \longrightarrow & C^\infty({\mathbb R}^n)[[\hbar]]\\
& (f(x),g(x)) & \longrightarrow & f *_\hbar g = fg + \sum_{k\geq 1} \hbar^k B_k(f,g) \end{array} $$ where all operators $B_k$ are bi-differential. One can check that the associativity condition for $*_\hbar$ implies that $\pi(f,g):= B_1(f,g) - B_1(g,f)$ is a Poisson structure in ${\mathbb R}^n$; then $*_\hbar$ is called a {\em deformation quantization}\, of $\pi\in {\mathcal T}_{poly}({\mathbb R}^n)$.
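The simplest example (standard, and recalled here only for orientation) is the Moyal-type star product associated with a constant skew-symmetric matrix $(\pi^{ij})$, $\pi^{ij}=-\pi^{ji}\in {\mathbb R}$,
$$
f *_\hbar g := fg + \sum_{k\geq 1} \frac{\hbar^k}{k!} \sum_{i_1,j_1,\ldots, i_k,j_k} \pi^{i_1j_1}\cdots \pi^{i_kj_k}\, \frac{{\partial}^k f}{{\partial} x^{i_1}\cdots {\partial} x^{i_k}}\, \frac{{\partial}^k g}{{\partial} x^{j_1}\cdots {\partial} x^{j_k}},
$$
which is associative because the coefficients $\pi^{ij}$ are constant, and for which $B_1(f,g)-B_1(g,f)=2\sum_{i,j}\pi^{ij}\,{\partial}_i f\, {\partial}_j g$ is the associated constant Poisson bracket (up to the factor of $2$).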
The {\em deformation quantization problem}\, addresses the question: given a Poisson structure in ${\mathbb R}^n$, does there exist a star product $*_\hbar$ in $C^\infty({\mathbb R}^n)$ which is its deformation quantization?
This problem was solved by Maxim Kontsevich \cite{Ko} by giving an explicit direct map between the two sets
$$ \xymatrix{ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Poisson} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n\ \end{array}$}}\end{array} \ \ \ \ \ \
\ar[r]^{\mathit{depends\ on}}_{\mathit{associators}} & \ \ \ \ \ \ \ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Star\ products} \ \\ \ *_\hbar\ \mathrm{in}\ C^\infty({\mathbb R}^n)[[\hbar]] \end{array}$}} \end{array}
} $$ In fact, a stronger correspondence was proven --- the formality theorem. Later Dmitry Tamarkin proved \cite{Ta2} an existence theorem for deformation quantizations which exhibited a non-trivial role of Drinfeld's associators.
In this paper we consider an intermediate object --- a {\em quantizable}\, Poisson structure --- so that the quantization process splits in two steps as follows $$ \xymatrix{ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Poisson} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n\ \end{array}$}}\end{array} \ \ \ \ \ \
\ar[r]^{\mathit{depends\ on}}_{\mathit{associators}} & \ \ \ \ \ \ \begin{array}{c} \frame{\mbox{$\begin{array}{c} \ \mathrm{Quantizable} \ \\ \mathrm{Poisson} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n \end{array}$}} \end{array} \ \ \ \ \ \ \ar[r]^{\mathit{easy:\, no\,\, need}}_{\mathit{for\,\, associators}} & \ \ \ \ \ \ \ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Star\ products} \ \\ \ *_\hbar\ \mathrm{in}\ C^\infty({\mathbb R}^n)[[\hbar]] \end{array}$}} \end{array}
} $$
If an ordinary Poisson structure is a Maurer-Cartan element $\pi \in {\mathcal T}_{poly}({\mathbb R}^n)$ of the classical Schouten bracket $[\ ,\ ]_S$, $$ [\pi,\pi]_S=0, $$ a quantizable Poisson structure $\pi^{\diamond}$ is a bivector field in ${\mathcal T}_{poly}({\mathbb R}^n)[[\hbar]]$ which is a Maurer-Cartan element, \begin{equation}\label{1: eqn for pi^diamond} \frac{1}{2}[\pi^{\diamond},\pi^{\diamond}]_S + \frac{\hbar}{4!} [\pi^{\diamond},\pi^{\diamond},\pi^{\diamond},\pi^{\diamond}]_4 + \frac{\hbar^2}{6!} [\pi^{\diamond},\pi^{\diamond},\pi^{\diamond},\pi^{\diamond},\pi^\diamond,\pi^\diamond]_6 + \ldots =0, \end{equation} of a certain ${\mathcal L} ie_\infty$ structure in ${\mathcal T}_{poly}({\mathbb R}^n)$, $$ \left\{[\, \ , \ldots ,\ ]_{2k}: {\mathcal T}_{poly}({\mathbb R}^n)^{\otimes 2k}\rightarrow {\mathcal T}_{poly}({\mathbb R}^n)[3-4k] \right\}_{k\geq 1} $$ which we call the {\em Kontsevich-Shoikhet ${\mathcal L} ie_\infty$ structure}\, as it was introduced by Boris Shoikhet in \cite{Sh} with a reference to an important contribution by Maxim Kontsevich via an informal communication. Like the Schouten bracket, this ${\mathcal L} ie_{\infty}$ structure makes sense in infinite dimensions. It was proven in \cite{Wi2} that the Kontsevich-Shoikhet structure is {\em the unique}\, non-trivial deformation of the standard Schouten bracket in ${\mathcal T}_{poly}({\mathbb R}^n)$ in the class of universal ${\mathcal L} ie_\infty$ structures which make sense in {\em any}\, (including infinite) dimension (it is a folklore conjecture that in finite dimensions the Schouten bracket $[\ ,\ ]$ is rigid, i.e. admits no universal homotopy non-trivial deformations).
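Note that modulo $\hbar$ a quantizable Poisson structure is an ordinary one: writing $\pi^\diamond=\pi_0+\hbar\pi_1+\hbar^2\pi_2+\ldots$, equation (\ref{1: eqn for pi^diamond}) gives $[\pi_0,\pi_0]_S=0$ at order $\hbar^0$, while at order $\hbar^1$ the bracket $[\ ,\ ,\ ,\ ]_4$ enters for the first time,
$$
\frac{1}{2}\left([\pi_0,\pi_1]_S+[\pi_1,\pi_0]_S\right) + \frac{1}{4!}[\pi_0,\pi_0,\pi_0,\pi_0]_4=0.
$$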
A map \begin{equation}\label{1: from qua Poisson to star products} \xymatrix{ \begin{array}{c} \frame{\mbox{$\begin{array}{c} \ \mathrm{Quantizable} \ \\ \mathrm{Poisson} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n \end{array}$}} \end{array} \ \ \ \ \ \ \ar[r]^{} & \ \ \ \ \ \ \ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Star\ products} \ \\ \ *_\hbar\ \mathrm{in}\ C^\infty({\mathbb R}^n)[[\hbar]] \end{array}$}} \end{array} } \end{equation} was constructed in \cite{Sh} for any $n$ (including the case $n=+\infty$) with the help of the hyperbolic geometry and transcendental formulae. It was shown in \cite{Wi2,B} that this universal map (which comes in fact from a ${\mathcal L} ie_\infty$ morphism) is essentially unique and can, in fact, be constructed by a trivial (in the sense that no choice of an associator is needed) induction.
What is new in our paper is the following Theorem proven in Section 4 below.
\subsubsection{\bf Theorem}\label{1: Theorem on ordinary and qua Poisson} {\em For any finite $n$ and any choice of an associator, there is 1-1 correspondence between the two sets, $$ \xymatrix{ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Poisson} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n\ \end{array}$}}\end{array} \ \ \ \
\leftrightarrow & \begin{array}{c} \frame{\mbox{$\begin{array}{c} \ \mathrm{Quantizable} \ \\ \mathrm{Poisson} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n \end{array}$}} \end{array} } $$ More precisely, there is a ${\mathcal L} ie_\infty$ isomorphism, $$ F: \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,\ ]_S\right) \longrightarrow \left({\mathcal T}_{poly}({\mathbb R}^n)[[\hbar]], \{ [\ , \ , \ldots ,\ ]_{2k}\}_{k\geq 1}\right) $$
from the Schouten algebra to the Kontsevich-Shoikhet one. }
We show explicit transcendental formulae for the ${\mathcal L} ie_\infty$ morphism $F$ in (\ref{4: F_k,l from_g}). Composing this morphism with the essentially unique arrow in (\ref{1: from qua Poisson to star products}) we obtain the announced new explicit formula for universal quantization of Poisson structures. In fact we obtain a family of such formulae parameterized by smooth functions on $S^1:=\{(x,y)\in {\mathbb R}^2| x^2+y^2=1\}$ with compact support in the upper $(y>0)$ half-circle; all the associated maps $F$ are homotopy equivalent to each other.
\subsection{Deformation quantization of Lie bialgebras}
Let $V$ be a ${\mathbb Z}$-graded real vector space, and let ${\mathcal O}_V:= {\odot^{\bullet}} V= \oplus_{n\geq 0} \odot^n V$ be the space of polynomial functions on $V^*$ equipped with the standard graded commutative and cocommutative bialgebra structure. If ${\mathcal A} ss{\mathcal B} $ stands for the prop of bialgebras, then the standard product and coproduct in ${\mathcal O}_V$ give us a representation, \begin{equation}\label{1: rho_0} \rho_0: {\mathcal A} ss{\mathcal B} \longrightarrow {\mathcal E} nd_{{\mathcal O}_V}. \end{equation} A {\em formal deformation}\, of the standard bialgebra structure in ${\mathcal O}_V$ is a continuous morphism of props, $$ \rho_\hbar: {\mathcal A} ss{\mathcal B}[[\hbar]] \longrightarrow {\mathcal E} nd_{{\mathcal O}_V}[[\hbar]], $$
$\hbar$ being a formal parameter, such that $\rho_\hbar|_{\hbar=0}=\rho_0$. It is well-known \cite{D}
that if $\rho_\hbar$ is a formal deformation of $\rho_0$, then $\frac{d \rho_\hbar}{d\hbar}|_{\hbar=0}$ makes the vector space $V$ into a Lie bialgebra, that is, induces a representation, $$ \nu: {\mathcal L} ie {\mathcal B} \longrightarrow {\mathcal E} nd_V $$ of the prop of Lie bialgebras ${\mathcal L} ie {\mathcal B}$ in $V$. Thus Lie bialgebra structures, $\nu$, in $V$ control infinitesimal formal deformations of $\rho_0$. Drinfeld formulated a deformation quantization problem:
given $\nu$ in $V$, does $\rho_\hbar$ exist such that $\frac{d \rho_\hbar}{d\hbar}|_{\hbar=0}$ induces $\nu$? This problem was solved affirmatively in \cite{EK,Ta2,Se}. In this paper we give a new proof of the Etingof-Kazhdan theorem which shows such an explicit formula in the form $\sum_\Gamma c_\Gamma \Phi_\Gamma$, where the sum runs over a certain family of graphs, $\Phi_\Gamma$ is a certain operator uniquely determined by each graph $\Gamma$ and $c_\Gamma$ is an absolutely convergent integral, $\int_{{C}_{\bullet,\bullet}(\Gamma)}\Omega_\Gamma$, of a smooth differential form $\Omega_\Gamma$ over a certain configuration space of points in a 3-dimensional subspace, ${\mathcal H}$, of the Cartesian product, $\overline{{\mathbb H}}\times \overline{{\mathbb H}}$, of two copies of the closed upper-half plane. Our construction goes in two steps, $$ \xymatrix{ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Lie\ bialgebra} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n\ \end{array}$}}\end{array} \ \ \ \ \ \
\ar[r]^{} & \ \ \ \ \ \ \begin{array}{c} \frame{\mbox{$\begin{array}{c} \ \mathrm{Quantizable} \ \\ \mathrm{Lie\ bialgebra} \ \\
\ \mathrm{structures\ in}\ {\mathbb R}^n \end{array}$}} \end{array} \ \ \ \ \ \ \ar[r]^{} & \ \ \ \ \ \ \ \begin{array}{c}\frame{\mbox{$\begin{array}{c} \ \mathrm{Bialgebra\ structures}\\ \ (*_\hbar, \Delta_\hbar)\ \mathrm{in}\ \odot^\bullet({\mathbb R}^n)[[\hbar]] \end{array}$}} \end{array}
} $$ as in the case of quantization of Poisson structures. We show in \S 5 an explicit universal formula for the first
arrow (behind which lies a ${\mathcal L} ie_\infty$ morphism in full analogy with the Poisson case), and then in \S 6 an explicit universal formula for the second arrow. The composition of the two gives us an explicit formula for a universal quantization of an arbitrary finite-dimensional Lie bialgebra, one of the main results of our paper. This result raises, however, open questions on the classification theory of both maps above, and on the graph cohomology description of a quantizable Lie bialgebra structure; here the situation is much less clear than in the Poisson case discussed above.
We remark that an explicit configuration space integral formula (based on a propagator which is a generalized function rather than a smooth differential form) for the quantization of finite dimensional Lie bialgebras was claimed in B.\ Shoikhet's preprint \cite{Sh1}. Furthermore, an odd analog of the properad governing quantizable Lie bialgebras has been investigated in \cite{KMW}.
\subsection{Structure of the paper} \S 2 is a self-contained reminder on graph complexes and configuration space models for the 1-coloured operad ${\mathcal H} olie_d$ of (degree shifted) strongly homotopy Lie algebras, and for the 2-coloured operad ${\mathcal M} or({\mathcal H} olie_d)$ of their morphisms.
In \S 3 we obtain explicit universal formulae for ${\mathcal L} ie_\infty$ morphisms relating Poisson (resp., Lie bialgebra) structures with their {\em quantizable}\, counterparts.
\S 4 shows a new explicit two-step formula for universal quantization of Poisson structures (depending only on a choice of a smooth function on the circle $S^1$ with support in the upper half of $S^1$), and proves classification claims (made in \S 1.2) about every step in that construction.
\S 5 reminds key facts about the minimal resolutions, $\mathcal{A}\mathit{ssb}_\infty$ and $\mathcal{L}\mathit{ieb}_\infty^{\mathrm{min}}$, of the prop $\mathcal{A}\mathit{ssb}$ of associative bialgebras and, respectively, of the prop $\mathcal{L}\mathit{ieb}$ of Lie bialgebras, and introduces a prop $\widehat{\LB}_\infty^{\mathrm{quant}}$ of strongly homotopy {\em quantizable}\, Lie bialgebras. We use results of \S 3 to give an explicit transcendental morphism of dg props $\widehat{\LB}_\infty^{\mathrm{quant}} \rightarrow \widehat{\LB}_\infty^{\mathrm{min}, \circlearrowright}$, where $\widehat{\LB}_\infty^{\mathrm{min}, \circlearrowright}$ is the wheeled closure of the completed version of the dg prop $\widehat{\LB}_\infty^{\mathrm{min}}$, and hence an explicit morphism $ \widehat{\LB}^{\mathrm{quant}} \rightarrow \widehat{\LB}^\circlearrowright$ from the prop of quantizable Lie bialgebras into the wheeled closure of the completed prop of ordinary Lie bialgebras.
In \S 6 we show an explicit transcendental formula for a morphism of props $\mathcal{A}\mathit{ssb} \longrightarrow {\mathcal D}(\widehat{\LB}^{\mathrm{quant}})$, where ${\mathcal D}$ is the polydifferential endofunctor on props introduced in \cite{MW2}, and show that it lifts by induction to a morphism of dg props $\mathcal{A}\mathit{ssb}_\infty \longrightarrow {\mathcal D}(\widehat{\LB}^{\mathrm{quant}}_\infty)$. This gives us explicit formulae for a universal quantization of quantizable Lie bialgebras. Combining this formula with the explicit formula from \S 5, we obtain finally an explicit transcendental formula for a universal quantization of ordinary finite-dimensional Lie bialgebras.
In Appendix A we prove a number of Lemmas on vanishing of some classes of integrals involved into our formula for quantization of Lie bialgebras.
In Appendix B we construct surprisingly simple configuration space models for the bipermutahedron and biassociahedron posets introduced by Martin Markl in \cite{Ma} following an earlier work by Samson Saneblidze and Ron Umble \cite{SU}.
\subsection{Some notation}
The set $\{1,2, \ldots, n\}$ is abbreviated to $[n]$; its group of automorphisms is denoted by ${\mathbb S}_n$; the trivial one-dimensional representation of
${\mathbb S}_n$ is denoted by ${\mbox{1 \hskip -7pt 1}}_n$, while its one dimensional sign representation is
denoted by ${\mathit s \mathit g\mathit n}_n$. The cardinality of a finite set $A$ is denoted by $\# A$. For a graph $\Gamma$ its set of vertices (resp., edges) is denoted by $V(\Gamma)$ (resp., $E(\Gamma)$).
We work throughout in the category of ${\mathbb Z}$-graded vector spaces over a field ${\mathbb K}$ of characteristic zero. If $V=\oplus_{i\in {\mathbb Z}} V^i$ is a graded vector space, then
$V[k]$ stands for the graded vector space with $V[k]^i:=V^{i+k}$ and $s^k$ for the associated isomorphism $V\rightarrow V[k]$; for $v\in V^i$ we set $|v|:=i$. For a pair of graded vector spaces $V_1$ and $V_2$, the symbol ${\mathrm H\mathrm o\mathrm m}_i(V_1,V_2)$ stands for the space of homogeneous linear maps of degree $i$, and ${\mathrm H\mathrm o\mathrm m}(V_1,V_2):=\bigoplus_{i\in {\mathbb Z}}{\mathrm H\mathrm o\mathrm m}_i(V_1,V_2)$; for example, $s^k\in {\mathrm H\mathrm o\mathrm m}_{-k}(V,V[k])$.
For a prop(erad) ${\mathcal P}$ we denote by ${\mathcal P}\{k\}$ a prop(erad) which is uniquely defined by
the following property: for any graded vector space $V$ a representation of ${\mathcal P}\{k\}$ in $V$ is identical to a representation of ${\mathcal P}$ in $V[k]$.
The degree shifted operad of Lie algebras ${\mathcal L} \mathit{ie}\{d\}$ is denoted by ${\mathcal L} ie_{d+1}$ while its minimal resolution by ${\mathcal H} olie_{d+1}$; representations of ${\mathcal L} ie_{d+1}$ are vector spaces equipped with Lie brackets of degree $-d$.
For a right (resp., left) module $V$ over a group $G$ we denote by $V_G$ (resp.\ $_G\hspace{-0.5mm}V$)
the ${\mathbb K}$-vector space of coinvariants:
$V/\{g(v) - v\ |\ v\in V, g\in G\}$ and by $V^G$ (resp.\ $^GV$) the subspace of invariants: $\{\forall g\in G\ :\ g(v)=v,\ v\in V\}$. If $G$ is finite, then these spaces are canonically isomorphic as $char({\mathbb K})=0$.
{\Large \section{\bf Graph complexes and configuration spaces} }
\subsection{Directed graph complexes}\label{2: subsec on DRGC} Let ${\mathcal G}_{k,l}$ be a set of directed graphs $\Gamma$ with $k$ vertices and $l$ edges such that some bijections $V(\Gamma)\rightarrow [k]$ and $E(\Gamma)\rightarrow [l]$ are fixed, i.e.\ every edge and every vertex of $\Gamma$ has a numerical label. There is a natural right action of the group ${\mathbb S}_k \times {\mathbb S}_l$ on the set ${\mathcal G}_{k,l}$ with ${\mathbb S}_k$ acting by relabeling the vertices and ${\mathbb S}_l$ by relabeling the edges. For each fixed integer $d$, consider a collection of ${\mathbb S}_k$-modules ${\mathcal D}{\mathcal G} ra_{d}=\{{\mathcal D}{\mathcal G} ra_d(k)\}_{k\geq 1}$, where $$
{\mathcal D}{\mathcal G} ra_d(k):= \prod_{l\geq 0} {\mathbb K} \langle {\mathcal G}_{k,l}\rangle \otimes_{ {\mathbb S}_l} {\mathit s \mathit g\mathit n}_l^{\otimes |d-1|} [l(d-1)]. $$
It has an operad structure with the composition rule, $$ \begin{array}{rccc} \circ_i: & {\mathcal D}{\mathcal G} ra_d(p) \times {\mathcal D}{\mathcal G} ra_d(q) &\longrightarrow & {\mathcal D}{\mathcal G} ra_d(p+q-1), \ \ \forall\ i\in [p]\\
& (\Gamma_1, \Gamma_2) &\longrightarrow & \Gamma_1\circ_i \Gamma_2, \end{array} $$ given by substituting the graph $\Gamma_2$ into the $i$-labeled vertex $v_i$ of $\Gamma_1$ and taking the sum over re-attachments of dangling edges (attached before to $v_i$) to vertices of $\Gamma_2$ in all possible ways.
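Thus, if the vertex $v_i$ of $\Gamma_1$ has $m$ incident edges and $\Gamma_2$ has $q$ vertices, the composition $\Gamma_1\circ_i\Gamma_2$ is a sum of $q^{m}$ graphs with $p+q-1$ vertices each, since every dangling edge can be re-attached to any vertex of $\Gamma_2$ independently of the others.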
For any operad ${\mathcal P}=\{{\mathcal P}(k)\}_{k\geq 1}$ in the category of graded vector spaces, the linear map $$ \begin{array}{rccc} [\ ,\ ]:& {\mathsf P} \otimes {\mathsf P} & \longrightarrow & {\mathsf P}\\ & (a\in {\mathcal P}(p), b\in {\mathcal P}(q)) & \longrightarrow &
[a, b]:= \sum_{i=1}^p a\circ_i b - (-1)^{|a||b|}\sum_{i=1}^q b\circ_i a\ \in {\mathcal P}(p+q-1) \end{array} $$ makes a graded vector space $ {\mathsf P}:= \prod_{k\geq 1}{\mathcal P}(k)$ into a Lie algebra \cite{KM}; moreover, these brackets induce a Lie algebra structure on the subspace of invariants $ {\mathsf P}^{\mathbb S}:= \prod_{k\geq 1}{\mathcal P}(k)^{{\mathbb S}_k}$. In particular, the graded vector space $$ \mathsf{dfGC}_{d}:= \prod_{k\geq 1} {\mathcal D}{\mathcal G} ra_{d}(k)^{{\mathbb S}_k}[d(1-k)] $$ is a Lie algebra with respect to the above Lie brackets, and as such it can be identified with the deformation complex $\mathsf{Def}({\mathcal L} ie_d\stackrel{0}{\rightarrow} {\mathcal D}{\mathcal G} ra_{d})$ of the zero morphism. Hence non-trivial Maurer-Cartan elements of $(\mathsf{dfGC}_{d}, [\ ,\ ])$ give us non-trivial morphisms of operads \begin{equation}\label{2: morphism from Lie to dGra} i:{\mathcal L} ie_d {\longrightarrow} {\mathcal D}{\mathcal G} ra_{d}. \end{equation}
One such non-trivial morphism $i$ is given explicitly on the generator of ${\mathcal L} ie_{d}$ by \cite{Wi} \begin{equation}\label{2: map from Lie to dgra} i \left(\begin{array}{c}\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^{_2}}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^{_1}}**@{}, \end{xy}\end{array}\right)= \begin{array}{c}\resizebox{6.3mm}{!}{\xy (0,1)*+{_1}*\cir{}="b", (8,1)*+{_2}*\cir{}="c",
\ar @{->} "b";"c" <0pt> \endxy} \end{array} + (-1)^d \begin{array}{c}\resizebox{7mm}{!}{\xy (0,1)*+{_2}*\cir{}="b", (8,1)*+{_1}*\cir{}="c",
\ar @{->} "b";"c" <0pt> \endxy} \end{array}=:\xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy \end{equation} Note that elements of $\mathsf{dfGC}_{d}$ can be identified with graphs from ${\mathcal D}{\mathcal G} ra_d$ whose vertices' labels are symmetrized (for $d$ even) or skew-symmetrized (for $d$ odd) so that in pictures we can forget about labels of vertices and denote them by unlabelled black bullets as in the formula above. Note also that graphs from $\mathsf{dfGC}_{d}$ come equipped with an orientation, $or$, which is a choice of ordering of edges (for $d$ even) or a choice of ordering of vertices (for $d$ odd) up to an even permutation in both cases. Thus every graph $\Gamma\in \mathsf{dfGC}_{d}$ has at most two different orientations, $or$ and $or^{opp}$, and one has the standard relation, $(\Gamma, or)=-(\Gamma, or^{opp})$; as usual, the data $(\Gamma, or)$ is abbreviated by $\Gamma$ (with some choice of orientation implicitly assumed). Note that the homological degree of graph $\Gamma$ from $\mathsf{dfGC}_{d}$ is given by $
|\Gamma|=d(\# V(\Gamma) -1) + (1-d) \# E(\Gamma). $
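For example, the two-vertex one-edge graph from (\ref{2: map from Lie to dgra}) has degree $d(2-1)+(1-d)\cdot 1=1$ for any $d$; for $d=2$ a graph has degree one precisely when $\# E(\Gamma)=2\# V(\Gamma)-3$, which is the case for every graph appearing in the Maurer-Cartan element $\Upsilon_{KS}$ of the next subsection.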
The above morphism (\ref{2: map from Lie to dgra}) makes
$(\mathsf{dfGC}_{d}, [\ ,\ ])$ into a {\em differential}\, Lie algebra with the differential
$$
\delta:= [\xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy ,\ ].
$$
This dg Lie algebra contains a dg subalgebra $\mathsf{dGC}_{d}$ spanned by connected graphs with at least bivalent vertices.
It was proven in \cite{Wi} that $$ H^\bullet(\mathsf{dfGC}_{d})= \odot^{\bullet\geq 1}\left(\mathsf{dGC}_{d}[-d]\right)[d] $$ so that there is no loss of generality in working with $\mathsf{dGC}_{d}$ instead of $\mathsf{dfGC}_{d}$. Moreover, one has an isomorphism of Lie algebras \cite{Wi}, $$ H^0(\mathsf{dGC}_{2})={\mathfrak g}{\mathfrak r}{\mathfrak t}_1, $$ where ${\mathfrak g}{\mathfrak r}{\mathfrak t}_1$ is the Lie algebra of the Grothendieck-Teichm\"uller group $GRT_1$ introduced by Drinfeld in the context of deformation quantization of Lie bialgebras. Nowadays, this group plays an important role in many areas of mathematics.
\subsection{Oriented graph complexes}\label{2: subsec on oriented graph complexes} A graph $\Gamma$ from the set ${\mathcal G}_{k,l}$ is called {\em oriented}\, if it contains no {\em wheels}, that is, directed paths of edges forming a closed circle; the subset of ${\mathcal G}_{k,l}$ spanned by oriented graphs is denoted by ${\mathcal G}^{or}_{k,l}$. It is clear that the subspace ${\mathcal G} ra_d^{or}\subset {\mathcal D} {\mathcal G} ra_d$ spanned by oriented graphs is a suboperad. The morphism (\ref{2: map from Lie to dgra}) factors through the inclusion ${\mathcal G} ra_d^{or}\subset {\mathcal D}{\mathcal G} ra_d$ so that one can consider a graph complex $$ \mathsf{fGC}^{or}_d:=\mathsf{Def}\left({\mathcal L} ie_d \stackrel{i}{\rightarrow} {\mathcal G} ra_d^{or}\right) $$ and its subcomplex $\mathsf{GC}^\mathit{or}_d$ spanned by connected graphs with at least bivalent vertices and with no bivalent vertices of the form $\xy
(0,0)*{}="a", (4,0)*{\bullet}="b", (8,0)*{}="c", \ar @{->} "a";"b" <0pt> \ar @{->} "b";"c" <0pt> \endxy$. This subcomplex determines the cohomology of the full graph complex, $H^\bullet(\mathsf{fGC}^{or}_d)=\odot^{\bullet\geq 1} (H^\bullet(\mathsf{GC}^\mathit{or}_d)[-d])[d]$. It was proven in \cite{Wi2} that $$ H^\bullet(\mathsf{GC}^\mathit{or}_{d+1})=H^\bullet(\mathsf{dGC}_d). $$ In particular, one has a remarkable isomorphism of Lie algebras, $ H^0(\mathsf{GC}^\mathit{or}_3)={\mathfrak g}{\mathfrak r}{\mathfrak t}$. It was also proven in \cite{Wi2} that the cohomology group $H^1(\mathsf{GC}^\mathit{or}_2)=H^1(\mathsf{dGC}_1)$ is one-dimensional and is spanned by the following graph $$ \Upsilon_4:= \lambda\begin{array}{c}\resizebox{11mm}{!}{\xy
(0,0)*{\bullet}="1", (-7,16)*{\bullet}="2", (7,16)*{\bullet}="3", (0,10)*{\bullet}="4",
\ar @{<-} "2";"4" <0pt> \ar @{<-} "3";"4" <0pt> \ar @{<-} "4";"1" <0pt> \ar @{<-} "2";"1" <0pt> \ar @{<-} "3";"1" <0pt> \endxy}\end{array} +
2\lambda \begin{array}{c}\resizebox{11mm}{!}{\xy
(0,0)*{\bullet}="1", (-6,6)*{\bullet}="2", (6,10)*{\bullet}="3", (0,16)*{\bullet}="4",
\ar @{<-} "4";"3" <0pt> \ar @{<-} "4";"2" <0pt> \ar @{<-} "3";"2" <0pt> \ar @{<-} "2";"1" <0pt> \ar @{<-} "3";"1" <0pt> \endxy}\end{array} +
\lambda
\begin{array}{c}\resizebox{11mm}{!}{\xy
(0,16)*{\bullet}="1", (-7,0)*{\bullet}="2", (7,0)*{\bullet}="3", (0,6)*{\bullet}="4",
\ar @{->} "2";"4" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"1" <0pt> \ar @{->} "2";"1" <0pt> \ar @{->} "3";"1" <0pt> \endxy}\end{array}
\ \ \ \ \ \forall \lambda\in {\mathbb R}\setminus 0. $$
Moreover $H^2(\mathsf{GC}^\mathit{or}_2)={\mathbb K}$ and is spanned by a graph with four vertices. This means that one can construct by induction a new Maurer-Cartan element in the Lie algebra $\mathsf{GC}^\mathit{or}_2$ (the integer subscript in the summand $\Upsilon_n$ stands for the number of vertices of graphs) \begin{equation}\label{2: KS MC element Upsilon} \Upsilon_{KS}= \xy
(0,0)*{\bullet}="a", (6,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy + \Upsilon_4 + \Upsilon_6 + \Upsilon_8 + \ldots \end{equation} as all obstructions have more than $7$ vertices and hence do not hit the unique cohomology class in $H^2(\mathsf{GC}^\mathit{or}_2)$. Up to gauge equivalence, this new Maurer-Cartan element $\Upsilon$ is the {\em only}\, non-trivial deformation of the standard Maurer-Cartan element $\xy
(0,0)*{\bullet}="a", (6,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy$. We call this element {\em Kontsevich-Shoikhet}\, one as it was first found by Boris Shoikhet in \cite{Sh} with a reference to an important contribution by Maxim Kontsevich via an informal communication.
\subsection{On a class of representations of graph complexes} Consider a formal power series algebra $$ {\mathcal O}_n:={\mathbb K}[[x^1,\ldots, x^n]] $$
in $n$ formal homogeneous variables and let $\mathrm{Der}({\mathcal O}_n)$ be the Lie algebra of continuous derivations of ${\mathcal O}_n$. Then, for any integer $d\geq 2$, the completed vector space $$ {\mathbb A}_d^{(n)}:= \widehat{\odot^\bullet} \left( \mathrm{Der}({\mathcal O}_n)[d-1]\right) $$ is canonically a $d$-algebra, that is, a graded commutative algebra equipped with compatible Lie brackets $[\ ,\ ]_S$ of degree $1-d$ (here $\widehat{\odot^\bullet}$ stands for the completed graded symmetric tensor algebra functor). One can identify ${\mathbb A}_d^{(n)}$ with the ring of formal power series, $$ {\mathbb A}_{d}^{(n)}:={\mathbb K}[[x^1,\ldots, x^n, \psi_1,\ldots, \psi_n]] $$
generated by formal variables satisfying the condition $$
|x^i| + |\psi_i|=d-1, \ \ \ d\in {\mathbb Z}, $$ Then Lie bracket (of degree $1-d$) is given explicitly by \begin{equation}\label{2: standard Lie bracket in A_d}
[f_1, f_2]_S=\sum_{i=1}^n\frac{f_1\overleftarrow{{\partial}}}{{{\partial}} \psi_i}\frac{\overrightarrow{{\partial}} f_2}{{\partial} x^i} + (-1)^{|f_1||f_2|+ (d-1)(|f_1|+|f_2|)} \frac{f_2\overleftarrow{{\partial}}}{{{\partial}} \psi_i}\frac{\overrightarrow{{\partial}} f_1}{{\partial} x^i} \end{equation}
A degree $d$ element $\gamma\in {{\mathbb A}}_d^{(n)}$ is called {\em Maurer-Cartan}\, if it satisfies the condition $[\gamma,\gamma]_S=0$.
We are interested in an $n\rightarrow \infty$ version of ${\mathbb A}_d^{(n)}$ which retains the above canonical $d$-algebra structure. Clearly, the sequence of canonical projections of graded vector spaces, $$ \ldots \longrightarrow {\mathbb A}_{d}^{(n+2)} \longrightarrow {\mathbb A}_{d}^{(n)} \longrightarrow {\mathbb A}_{d}^{(n-1)} $$ does not respect the above Lie bracket, so that the associated inverse limit $\displaystyle\lim_{\leftarrow} {\mathbb A}_{d}^{(n)}$ can not be a $d$-algebra. There is a chain of injections of formal power series algebras, $$ \ldots \longrightarrow {\mathcal O}_n \longrightarrow {\mathcal O}_{n+1} \longrightarrow {\mathcal O}_{n+2}\longrightarrow \ldots $$ and we denote the associated {\em direct}\, limit by $$ {\mathcal O}_\infty:= \lim_{n\longrightarrow \infty} {\mathcal O}_{n}. $$ Let $V_\infty$ stand for the infinite-dimensional graded vector space with the infinite basis $\{x_1,x_2,\dots \}$ and set $$
{\mathbb A}_d^{\infty}:= \prod_{m\geq 0} {\mathrm H\mathrm o\mathrm m}\left(\odot^m(V_\infty[1-d]), {\mathcal O}_\infty\right) $$ This is a vector subspace of the inverse limit $$ \lim_{\leftarrow} {\mathbb A}_{d}^{(n)}={\mathbb K}[[x^1,x^2,\ldots, \psi_1,\psi_2,\ldots]] $$ spanned by formal power series in two infinite sets of graded commutative generators $X=\{x^1,x^2,\ldots\}$ and $\Psi=\{ \psi_1,\psi_2,\ldots\}$ with the property that every monomial in generators from the set $\Psi$ has as a coefficient a formal power series from the ring ${\mathcal O}_k$ for some finite number $k$. Clearly, the subspace ${\mathbb A}_d^{\infty}$ is a well-defined $d$-algebra.
The first case interesting for applications has $d=2$, $|x^i|=0$ and $|\psi_i|=1$. The associated 2-algebra ${{\mathbb A}}_{2}^{(n)}$ can be identified with the Gerstenhaber algebra ${\mathcal T}_{poly}({\mathbb R}^n)$ of formal polyvector fields on ${\mathbb R}^n$
so that its Maurer-Cartan elements are formal power series Poisson structures on ${\mathbb R}^n$. Its $n\rightarrow \infty$ version ${{\mathbb A}}_{2}^{\infty}$ gives us the Gerstenhaber algebra of polyvector fields on the infinite-dimensional space ${\mathbb R}^\infty$.
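Concretely, a degree two element $\pi=\frac{1}{2}\sum_{i,j}\pi^{ij}(x)\,\psi_i\psi_j\in {\mathbb A}_2^{(n)}$ corresponds to the bivector field $\frac{1}{2}\sum_{i,j}\pi^{ij}(x)\frac{{\partial}}{{\partial} x^i}\wedge \frac{{\partial}}{{\partial} x^j}$, and the Maurer-Cartan equation $[\pi,\pi]_S=0$ is then nothing but the Jacobi identity for the bracket $\{f,g\}:=\sum_{i,j}\pi^{ij}(x)\frac{{\partial} f}{{\partial} x^i}\frac{{\partial} g}{{\partial} x^j}$ on ${\mathcal O}_n$.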
The second interesting example has $d=3$ and $|x^i|=|\psi_i|=1$. In this case Maurer-Cartan elements of ${\mathbb A}_{3}^{(n)}$
satisfying the conditions $\gamma|_{x^i=0}=0$ and $\gamma|_{\psi_i=0}=0$, $\forall\ i\in [n]$, are cubic polynomials $$ \gamma:=\sum_{i,j,k\in [n]}\left( C_{ij}^k \psi_kx^i x^j + \Phi_k^{ij}x^k \psi_i\psi_j \right), $$ and the equation $[\gamma,\gamma]_S=0$ implies that the linear maps associated to the structure constants $\Phi_k^{ij}$ and, respectively, $C_{ij}^k$, $$ \bigtriangleup:{\mathbb R}^n\rightarrow \wedge^2 {\mathbb R}^n,\ \ \ [\ ,\ ]:\wedge^2{\mathbb R}^n \rightarrow {\mathbb R}^n $$ define a Lie bialgebra structure in ${\mathbb R}^n$.
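Explicitly, collecting the monomials of $[\gamma,\gamma]_S$ of the three possible types $\psi x^3$, $x\psi^3$ and $x^2\psi^2$, one recovers, respectively, the Jacobi identity for the bracket $[\ ,\ ]$, the co-Jacobi identity for the cobracket $\bigtriangleup$, and the standard compatibility (cocycle) condition $\bigtriangleup([a,b])=[\bigtriangleup(a),b]+[a,\bigtriangleup(b)]$ for $a,b\in {\mathbb R}^n$, with the bracket extended to $\wedge^2{\mathbb R}^n$ by the Leibniz rule.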
The above Lie brackets $[\ ,\ ]_S$ give us a representation $$ {\mathcal L} ie_d \longrightarrow {\mathcal E} nd_{{\mathbb A}_d^{(n)}} $$ for any $n\geq 1$.
In fact, this representation factors through morphism (\ref{2: morphism from Lie to dGra}) and a canonical representation $\Phi$ \begin{equation}\label{2: lie to dGra to End} \begin{array}{cccc}
\Phi: & d{\mathcal G} ra_d & {\longrightarrow} & {\mathcal E} nd_{{\mathbb A}_{d}^{(n)}}\\
& \Gamma & \longrightarrow & \Phi_\Gamma
\end{array} \end{equation} of the operad $d{\mathcal G} ra_d$ in ${\mathbb A}_{d}^{(n)}$ defined, for any $\Gamma\in d{\mathcal G} ra_d(k)$, by a linear map \begin{equation}\label{2: representation Phi_Gamma} \begin{array}{rccc} \Phi_\Gamma: & \otimes^k {\mathbb A}_{d}^{(n)} & \longrightarrow & {\mathbb A}_{d}^{(n)}\\
& (f_1,f_2,\ldots,f_k) & \longrightarrow & \rho_\Gamma(f_1,f_2,\ldots,f_k) \end{array} \end{equation} where $$ \Phi_\Gamma(f_1,\ldots, f_k) :=m\left(\prod_{e\in E(\Gamma)} \Delta_e \left(f_1(x, \psi)\otimes f_2(x, \psi)\otimes \ldots\otimes f_k(x, \psi) \right)\right) $$ and, for a directed edge $e$ connecting vertices labeled by integers $i$ and $j$, $$ \Delta_e:= \sum_{a=1}^n \frac{{\partial}}{{\partial} x_{(i)}^a} \otimes \frac{{\partial}}{{\partial} \psi_{a(j)}} $$ with the subscript $(i)$ or $(j)$ indicating that the derivative operator is to be applied to the $i$-th or $j$-th factor in the tensor product. The symbol $m$ denotes the multiplication map, $$ \begin{array}{rccc} m:& \otimes^k {\mathbb A}_{d}^{(n)} & \longrightarrow & {\mathbb A}_{d}^{(n)}\\
& f_1\otimes f_2\otimes \ldots \otimes f_k &\longrightarrow & f_1f_2\cdots f_k. \end{array} $$ The morphism of dg operads (\ref{2: lie to dGra to End}) induces a morphism of the dg Lie algebras $$ s: \mathsf{dfGC}_d:=\mathsf{Def}\left({\mathcal L} ie_d \stackrel{i}{\rightarrow} d{\mathcal G} ra_d\right) \longrightarrow CE^\bullet\left({\mathbb A}_{d}^{(n)}, {\mathbb A}_d^{(n)}\right):= \mathsf{Def}\left({\mathcal L} ie_d \stackrel{\Phi \circ i}{\longrightarrow} {\mathcal E} nd_{{\mathbb A}_d^{(n)}}\right). $$ Here $$ CE^\bullet\left({\mathbb A}_d^{(n)}, {\mathbb A}_d^{(n)}\right)=\mbox{Coder}\left(\odot^{\bullet\geq 1}({\mathbb A}_d^{(n)}[d])\right) $$ is the standard Chevalley-Eilenberg deformation complex of the Lie algebra ${\mathbb A}_d^{(n)}$, that is, the dg Lie algebra of coderivations of the graded co-commutative coalgebra $\odot^{\bullet\geq 1}({\mathbb A}_d^{(n)}[d])$. Therefore any Maurer-Cartan element $\gamma$ in the graph complex $\mathsf{dfGC}_d$ gives a Maurer-Cartan element $s(\Gamma)$ in $\mbox{Coder}(\odot^{\bullet\geq 1}(A_{d}^{(n)}[d]))$, that is a ${\mathcal H} olie_d$ algebra structure in ${\mathbb A}_d^{\infty}$, for any {\em finite}\, number $n$. Moreover, if $\gamma$ belongs to the Lie subalgebra $\mathsf{fGC}^{or}_d$, then the associated ${\mathcal H} olie_d$ structure remains well-defined
in the limit $n\rightarrow +\infty$, i.e.\ it is well-defined in ${\mathbb A}_d^{\infty}$.
\subsubsection{\bf Example} The Maurer-Cartan element $\xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy \in \mathsf{fGC}_d^{or}\subset \mathsf{fGC}_d$ (see (\ref{2: lie to dGra to End})) gives rise to the standard Lie brackets (\ref{2: standard Lie bracket in A_d}) in ${\mathbb A}_d^{(n)}$.
\subsubsection{\bf Example} The Maurer-Cartan element $\Upsilon_{KS}\in \mathsf{fGC}_2^{or}$ from (\ref{2: KS MC element Upsilon}) gives rise to the {\em Kontsevich-Shoikhet}\, ${\mathcal L} ie_\infty$ structure in ${\mathbb A}_2^{(n)}={\mathcal T}_{poly}({\mathbb R}^n)$, $$ \left\{[\, \ , \ldots ,\ ]_{2k}: ({\mathbb A}_2^{(n)})^{\otimes 2k}\rightarrow {\mathbb A}_2^{(n)}[3-4k] \right\}_{k\geq 1} $$ where $$ [\, \ , \ldots ,\ ]_{2k}:=\Phi_{\Upsilon_{2k}}. $$
It was introduced by Boris Shoikhet in \cite{Sh} with a reference to an important contribution by Maxim Kontsevich via a private communication. This ${\mathcal L} ie_\infty$ structure is well defined in the limit $n\rightarrow +\infty$.
We shall consider in the next section some transcendental constructions of Maurer-Cartan elements in $\mathsf{fGC}_d$ and
$\mathsf{fGC}_d^{or}$ in which we shall use heavily the following configuration space models of classical operads.
\subsection{Configuration space model for the operad $\mathcal{H}\mathit{olie}_d$} Let $$
{\mathit{Conf}}_k({\mathbb R}^d):=\{p_1, \ldots, p_k\in {\mathbb R}^d\ |\, p_i\neq p_j\ \mbox{for}\ i\neq j\} $$ be the configuration space of $k$ pairwise distinct points in ${\mathbb R}^d$, $d\geq 2$. The group ${\mathbb R}^+ \ltimes {\mathbb R}^d$ acts freely on each configuration space ${\mathit{Conf}}_k({\mathbb R}^d)$ for $k\geq 2$, $$ (p_1, \ldots, p_k) \longrightarrow (\lambda p_1 + a, \ldots, \lambda p_k+a),\ \ \ \ \forall \lambda\in {\mathbb R}^+, a\in {\mathbb R}^d, $$ so that the space of orbits,
$$ C_{k}({\mathbb R}^d):={\mathit{Conf}}_k({\mathbb R}^d)/{{\mathbb R}^+ \ltimes {\mathbb R}^d}, $$ is a smooth real $(kd-d-1)$-dimensional manifold. The space $C_2({\mathbb R}^d)$ is homeomorphic to the sphere $S^{d-1}$ and hence is compact.
For $k\geq 3$ the compactified configuration space $\overline{C}_k({\mathbb R}^{d})$ is defined as the closure of an embedding \cite{Ko, Ga} $$ \begin{array}{ccccc} C_k({\mathbb R}^d) & \longrightarrow & (S^{d-1} )^{k(k-1)} &\times& ({\mathbb R}{\mathbb P}^2)^{k(k-1)(k-2)}\\
(p_1, \ldots, p_k) & \longrightarrow & \prod_{i\neq j} \frac{p_i-p_j}{|p_i-p_j|} &\times&
\prod_{i\neq j\neq l\neq i}\left[|p_{i}-p_{j}| :
|p_{j}-p_{l}|: |p_{i}-p_{l}|\right] \end{array} $$ The space $\overline{C}_k({\mathbb R}^d)$ is a smooth (naturally oriented) manifold with corners. Its codimension 1 strata are given by $$ {\partial} \overline{C}_k({\mathbb R}^d) = \bigsqcup_{A\subset [k]\atop \# A\geq 2} C_{k - \# A + 1}({\mathbb R}^d)\times
C_{\# A}({\mathbb R}^d) $$ where the summation runs over all possible proper subsets of $[k]$ with cardinality $\geq 2$. Geometrically, each such stratum corresponds to the $A$-labeled elements of the set $\{p_1, \ldots, p_k\}$ moving very close to each other. If we represent $\overline{C}_k({\mathbb R}^d)$ by the (skew)symmetric $k$-corolla of degree\footnote{We prefer working with {\em co}chain complexes, and hence adopt gradings accordingly.}
$k+1-kd$ \begin{equation}\label{2: Lie_inf corolla} \begin{array}{c}\resizebox{21mm}{!}{ \xy (1,-5)*{\ldots}, (-13,-7)*{_1}, (-8,-7)*{_2}, (-3,-7)*{_3}, (7,-7)*{_{k-1}}, (13,-7)*{_k},
(0,0)*{\circ}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt>
\endxy}\end{array}=(-1)^{d|\sigma|} \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-6)*{\ldots}, (-13,-7)*{_{\sigma(1)}}, (-6.7,-7)*{_{\sigma(2)}},
(13,-7)*{_{\sigma(k)}},
(0,0)*{\circ}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt> \endxy}\end{array}, \ \ \ \forall \sigma\in {\mathbb S}_k,\ k\geq2 \end{equation} then the boundary operator in the associated face complex of $\overline{C}_\bullet({\mathbb R}^d)$ takes a familiar form \begin{equation}\label{3: Lie_infty differential} {\partial}\hspace{-3mm} \begin{array}{c}\resizebox{21mm}{!}{ \xy (1,-5)*{\ldots}, (-13,-7)*{_1}, (-8,-7)*{_2}, (-3,-7)*{_3}, (7,-7)*{_{k-1}}, (13,-7)*{_k},
(0,0)*{\circ}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt> \endxy}\end{array} = \sum_{A\varsubsetneq [k]\atop \# A\geq 2} \pm \begin{array}{c}\resizebox{25mm}{!}{ \begin{xy} <10mm,0mm>*{\circ}, <10mm,0.8mm>*{};<10mm,5mm>*{}**@{-}, <0mm,-10mm>*{...}, <14mm,-5mm>*{\ldots}, <13mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ }}, <14mm,-10mm>*{_{[k]\setminus A}}; <10.3mm,0.1mm>*{};<20mm,-5mm>*{}**@{-}, <9.7mm,-0.5mm>*{};<6mm,-5mm>*{}**@{-}, <9.9mm,-0.5mm>*{};<10mm,-5mm>*{}**@{-}, <9.6mm,0.1mm>*{};<0mm,-4.4mm>*{}**@{-}, <0mm,-5mm>*{\circ}; <-5mm,-10mm>*{}**@{-}, <-2.7mm,-10mm>*{}**@{-}, <2.7mm,-10mm>*{}**@{-}, <5mm,-10mm>*{}**@{-}, <0mm,-12mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ }}, <0mm,-15mm>*{_{A}}, \end{xy}} \end{array} \end{equation} implying the following
\subsubsection{\bf Proposition \cite{GJ}} {\em The fundamental chain complex
of the family of compactified configuration spaces, $\{\overline{C}_k({\mathbb R}^d)\}_{k\geq 2}$, has the structure of a dg free non-unital operad canonically isomorphic to the operad $\mathcal{H}\mathit{olie}_d$ of degree shifted strongly homotopy Lie algebras.}
\subsection{Configuration space model for the operad ${\mathcal M} or(\mathcal{H}\mathit{olie}_d)$} Let ${\mathcal M} or(\mathcal{H}\mathit{olie}_d)$ be a two-coloured operad whose representation in a pair of dg vector spaces $V_{in}$ and $V_{out}$ is a triple $(\mu_{in}, \mu_{out}, F)$ consisting of $\mathcal{H}\mathit{olie}_d$ structures $\mu_{in}$ on $V_{in}$ and $\mu_{out}$ on $V_{out}$, and of a $\mathcal{H}\mathit{olie}_d$ morphism, $F:(V_{in},\mu_{in})\rightarrow (V_{out},\mu_{out})$, between them. There is a configuration space model \cite{Me1} for this operad which plays one of the central roles in this paper.
The Abelian group ${\mathbb R}^d$ acts freely, $$ \begin{array}{ccc} {\mathit{Conf}}_k({\mathbb R}^d) \times {\mathbb R}^d& \longrightarrow & {\mathit{Conf}}_k({\mathbb R}^d)\\ (p=\{p_i\}_{i\in [k]},a) &\longrightarrow & p+a:= \{p_i+a\}_{i\in [k]} \end{array} $$ on the configuration space ${\mathit{Conf}}_k({\mathbb R}^d)$ for any $k\geq 1$
so that the quotient $$ {\mathfrak C}_k({\mathbb R}^d):={\mathit{Conf}}_k({\mathbb R}^d)/{\mathbb R}^d $$ is a $(k-1)d$-dimensional manifold. There is a diffeomorphism, $$ \begin{array}{rccccc} \Psi_k: & {\mathfrak C}_k({\mathbb R}^d) & \longrightarrow & C_k({\mathbb R}^d) & \times & (0,1)\\
& p & \longrightarrow & \frac{p-p_c}{|p-p_c|} && \frac{|p-p_c|}{1+ |p-p_c|} \end{array} $$ where $$ p_c:=\frac{1}{k}(p_1+\ldots + p_k). $$
Note that the configuration $\frac{p-p_c}{|p-p_c|}$ is invariant under the larger group ${\mathbb R}^+\ltimes {\mathbb R}^d$ and hence belongs to $C_k({\mathbb R}^d)$. For any non-empty subset $A\subseteq [k]$ there is a natural map $$ \begin{array}{rccc} \pi_A : & {\mathfrak C}_k({\mathbb R}^d) & \longrightarrow & {\mathfrak C}_A({\mathbb R}^d)\\
& p=\{p_i\}_{i\in [k]} & \longrightarrow & p_A:=\{p_i\}_{i\in A} \end{array} $$ which forgets all the points labeled by elements of the complement $[k]\setminus A$.
The space ${\mathfrak C}_{1}({\mathbb R}^d)$ is a point and hence is compact. For $k\geq 2$ a {\em semialgebraic compactification} $\overline{{\mathfrak C}}_k({\mathbb R}^d)$ of ${\mathfrak C}_{k}({\mathbb R}^d)$ can be defined as the closure of a composition \cite{Me1}, \begin{equation}\label{2: first compactifn of fC(C)} {\mathfrak C}_{k}({\mathbb R}^d)\stackrel{\prod \pi_A}{\longrightarrow} \prod_{A\subseteq [k]\atop \# A\geq 2} {\mathfrak C}_{\# A}({\mathbb R}^d) \stackrel{\prod \Psi_A}{\longrightarrow} \prod_{A\subseteq [k]\atop \# A\geq 2}
C_{\# A}({\mathbb R}^d)\times (0, 1) \hookrightarrow \prod_{A\subseteq [k]\atop \# A\geq 2}
\overline{C}_{\# A}({\mathbb R}^d)\times [0,1]. \end{equation} Thus all the limiting points in this compactification come from configurations in which a group or groups of points move too {\em close}\, to each other
within each group and/or a group or groups of points which are moving too {\em far}\, (with respect to the standard Euclidean distance) away from each other. The codimension one boundary strata in $\overline{{\mathfrak C}}_{k}({\mathbb R}^d)$ correspond to the limit values $0$ or $+\infty$
of the parameters $|p-p_c|$, and are given by \cite{Me1} \begin{equation}\label{2:codimension 1 boundary in widehat{C}_n(C)} \displaystyle {\partial} \overline{{\mathfrak C}}_{k}({\mathbb R}^d) = \bigsqcup_{A\subseteq [k]\atop \# A\geq 2} \left(\overline{{\mathfrak C}}_{k - \# A + 1}({\mathbb R}^d)\times
\overline{C}_{\# A}({\mathbb R}^d)\right)\
\bigsqcup_{[k]=B_1\ \sqcup \ldots \sqcup B_l\atop{
2\leq l\leq k \atop \#B_1,\ldots, \#B_l\geq 1}}\left( \overline{C}_{l}({\mathbb R}^d)\times \overline{{\mathfrak C}}_{\# B_1}({\mathbb R}^d)\times \ldots\times
\overline{{\mathfrak C}}_{\# B_l}({\mathbb R}^d)
\right) \end{equation} where \begin{itemize} \item the first summation runs over all possible subsets $A$ of $[k]$ and each summand corresponds to $A$-labeled elements of the set $\{p_1, \ldots, p_k\}$ moving {\em close}\, to each other, \item the second summation runs over all possible decompositions of $[k]$ into $l\geq 2$ disjoint non-empty subsets $B_1, \ldots, B_l$, and each summand corresponds to $l$ groups of points (labeled, respectively,
by disjoint ordered subsets $B_1, \ldots B_l$ of $[k]$) moving {\em far}\, from each other while keeping relative
distances within each group finite.
\end{itemize}
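For instance, for $k=3$ the boundary (\ref{2:codimension 1 boundary in widehat{C}_n(C)}) of $\overline{{\mathfrak C}}_3({\mathbb R}^d)$ consists of eight codimension one strata: four of the first type, labelled by the subsets $A=\{1,2\}$, $\{1,3\}$, $\{2,3\}$ and $\{1,2,3\}$ of $[3]$, and four of the second type, labelled by the three decompositions of $[3]$ into a two-element block and a singleton together with the decomposition of $[3]$ into three singletons.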
Note that the faces of the type $\overline{C}_\bullet({\mathbb R}^d)$ appear in (\ref{2:codimension 1 boundary in widehat{C}_n(C)})
in two different ways --- as the strata describing collapsing points and as the strata controlling groups of points going infinitely far away from each other --- and they do {\em not}\, intersect in $\overline{{\mathfrak C}}_{\bullet}({\mathbb R}^d)$. For that reason one has to assign to these two groups of faces different colours and represent the collapsing $\overline{C}_k({\mathbb R}^d)$-stratum by, say, a white corolla with straight legs as in (\ref{2: Lie_inf corolla}), and the $\overline{C}_k({\mathbb R}^d)$-stratum at ``infinity" by, say,
a version of (\ref{2: Lie_inf corolla}) with ``broken" legs, $ \hspace{-3mm} \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-5)*{\ldots}, (-13,-7)*{_{i_1}}, (-8,-7)*{_{i_2}}, (-3,-7)*{_{i_3}}, (7,-7)*{_{i_{q-1}}}, (13,-7)*{_{i_q}},
(0,0)*{\circ}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{--} "a";"0" <0pt> \ar @{--} "a";"b_2" <0pt> \ar @{--} "a";"b_3" <0pt> \ar @{--} "a";"b_1" <0pt> \ar @{--} "a";"b_4" <0pt> \ar @{--} "a";"b_5" <0pt> \endxy}\end{array} $. The face $\overline{{\mathfrak C}}_{n}({\mathbb C})$ can be represented pictorially by a 2-coloured (skew)symmetric corolla with black vertex, $\hspace{-3mm} \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-5)*{\ldots}, (-13,-7)*{_{i_1}}, (-8,-7)*{_{i_2}}, (-3,-7)*{_{i_3}}, (7,-7)*{_{i_{k-1}}}, (13,-7)*{_{i_k}},
(0,0)*{\bullet}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{--} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt> \endxy}\end{array} $ of degree $d(1-k)$.
Each space $\overline{{\mathfrak C}}_{k}({\mathbb R}^d)$ has a natural structure of a smooth manifold with corners.
\subsubsection{\bf Proposition \cite{Me1}}\label{2: Propos on the face complex of Mor(Lie_infty)} {\em The disjoint union \begin{equation}\label{3: Lie_infty config topol operad} \underline{{\mathfrak C}}({\mathbb R}^d):=\overline{C}_\bullet({\mathbb R}^d)\sqcup \overline{{\mathfrak C}}_{\bullet}({\mathbb R}^d)\sqcup \overline{C}_\bullet({\mathbb R}^d) \end{equation} is a 2-coloured operad in the category of semialgebraic manifolds (or smooth manifolds with corners). Its complex of fundamental chains can be identified with the operad ${\mathcal M} or(\mathcal{H}\mathit{olie}_d)$ which is a dg free non-unital 2-coloured operad generated by the corollas, $$ {\mathcal M} or(\mathcal{H}\mathit{olie}_d):= {\mathcal F} ree \left\langle \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-5)*{\ldots}, (-13,-7)*{_1}, (-8,-7)*{_2}, (-3,-7)*{_3}, (7,-7)*{_{p-1}}, (13,-7)*{_p},
(0,0)*{\circ}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt> \endxy}\end{array}, \ \ \ \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-5)*{\ldots}, (-13,-7)*{_1}, (-8,-7)*{_2}, (-3,-7)*{_3}, (7,-7)*{_{k-1}}, (13,-7)*{_k},
(0,0)*{\bullet}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{--} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt> \endxy}\end{array} \ \ \ , \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-5)*{\ldots}, (-13,-7)*{_1}, (-8,-7)*{_2}, (-3,-7)*{_3}, (7,-7)*{_{q-1}}, (13,-7)*{_q},
(0,0)*{\circ}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{--} "a";"0" <0pt> \ar @{--} "a";"b_2" <0pt> \ar @{--} "a";"b_3" <0pt> \ar @{--} "a";"b_1" <0pt> \ar @{--} "a";"b_4" <0pt> \ar @{--} "a";"b_5" <0pt> \endxy}\end{array} \right\rangle_{p,q\geq 2, k\geq 1} $$ and equipped with a differential which is given on white corollas of both colours by formula (\ref{3: Lie_infty differential}) and on the black corollas by the following formula
\begin{eqnarray}\label{2: d on MorLie corollas} {\partial} \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-5)*{\ldots}, (-13,-7)*{_1}, (-8,-7)*{_2}, (-3,-7)*{_3}, (7,-7)*{_{k-1}}, (13,-7)*{_k},
(0,0)*{\bullet}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{--} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt> \endxy}\end{array} &=& - \sum_{A\varsubsetneq [n]\atop \# A\geq 2} \begin{array}{c}\resizebox{25mm}{!}{ \begin{xy} <10mm,0mm>*{\bullet}, <10mm,0.8mm>*{};<10mm,5mm>*{}**@{--}, <0mm,-10mm>*{...}, <12mm,-5mm>*{\ldots}, <13mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ }}, <14mm,-10mm>*{_{[k]\setminus A}}; <10.0mm,0mm>*{};<20mm,-5mm>*{}**@{-}, <10.0mm,-0mm>*{};<5mm,-5mm>*{}**@{-}, <10.0mm,-0mm>*{};<8mm,-5mm>*{}**@{-}, <10.0mm,0mm>*{};<0mm,-4.4mm>*{}**@{-}, <10.0mm,0mm>*{};<16.5mm,-5mm>*{}**@{-}, <0mm,-5mm>*{\circ}; <-5mm,-10mm>*{}**@{-}, <-2.7mm,-10mm>*{}**@{-}, <2.7mm,-10mm>*{}**@{-}, <5mm,-10mm>*{}**@{-}, <0mm,-12mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ }}, <0mm,-15mm>*{_{A}}, \end{xy}} \end{array} \nonumber\\ && +\ \, \sum_{l=2}^n \sum_{[k]=B_1\sqcup\ldots\sqcup B_l \atop \inf B_1<\ldots< \inf B_l} \pm \begin{array}{c}\resizebox{32mm}{!}{ \xy (-15.5,-7)*{...}, (19,-7)*{...}, (7.5,0)*{\ldots}, (-17.8,-12)*{_{B_1}}, (-3.2,-12)*{_{B_2}}, (17.8,-12)*{_{B_l}}, (-1.8,-7)*{...},
(-3.2,-9)*{\underbrace{\ \ \ \ \ \ \ \ \ \ }},
(-17.8,-9)*{\underbrace{\ \ \ \ \ \ \ \ \ \ }},
(16.8,-9)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }},
(0,7)*{\circ}="a", (-14,0)*{\bullet}="b_0",
(-4.5,0)*{\bullet}="b_2", (14,0)*{\bullet}="b_3",
(0,13)*{}="0", (1,-7)*{}="c_1", (-8,-7)*{}="c_2", (-5,-7)*{}="c_3", (-22,-7)*{}="d_1", (-19,-7)*{}="d_2", (-13,-7)*{}="d_3", (12,-7)*{}="e_1", (15,-7)*{}="e_2", (22,-7)*{}="e_3", \ar @{--} "a";"0" <0pt> \ar @{--} "a";"b_0" <0pt> \ar @{--} "a";"b_2" <0pt> \ar @{--} "a";"b_3" <0pt> \ar @{-} "b_2";"c_1" <0pt> \ar @{-} "b_2";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_0";"d_1" <0pt> \ar @{-} "b_0";"d_2" <0pt> \ar @{-} "b_0";"d_3" <0pt> \ar @{-} "b_3";"e_1" <0pt> \ar @{-} "b_3";"e_2" <0pt> \ar @{-} "b_3";"e_3" <0pt> \endxy} \end{array}. \end{eqnarray} }
\subsubsection{\bf Example} As $\overline{C}_2({\mathbb R}^d)=S^{d-1}$, the space
$\overline{{\mathfrak C}}_2({\mathbb R}^d)$ is the closure of the image of the embedding $$ \begin{array}{ccccccc} {\mathfrak C}_2({\mathbb R}^d) & \longrightarrow & S^{d-1} &\times & (0,1) &\hookrightarrow & S^{d-1} \times [0,1]\\
(p_1,p_2) &\longrightarrow & \frac{p_1-p_2}{|p_1-p_2|} &\times & \frac{|p_1-p_2|}{1+ |p_1-p_2|} && \end{array} $$ and hence can be identified with the closed $d$-dimensional cylinder \begin{equation}\label{2: fC_2(C) cylinder} \overline{{\mathfrak C}}_2({\mathbb R}^d)=\ \begin{array}{c} {S^{d-1}_{out}}\\ \xy (-8,0)*{}="a", (8,0)*{}="b", (-8,-15)*{}="a1", (8,-15)*{}="b1", (-8,0)*-{};(8,0)*-{}; **\crv{(0,6)} **\crv{(0,-6)}; \ar @{-} "a";"a1" <0pt> \ar @{-} "b";"b1" <0pt> \endxy
\\ \xy (-8,0)*{}, (8,0)*{}, (-8,0)*-{};(8,0)*-{}; **\crv{(0,6)} **\crv{(0,-6)}; \endxy
\\ {{S^{d-1}_{in}}} \end{array}.
\end{equation} where $S^{d-1}_{in}$ is the boundary corresponding to $|p_1-p_2|\rightarrow 0$, and $S^{d-1}_{out}$ is the boundary corresponding to $|p_1-p_2|\rightarrow +\infty$. This is in accordance with
the r.h.s.\ of (\ref{2: d on MorLie corollas}) for $k=2$ which is the sum of two terms, the first term corresponding to the bottom ``in" sphere $S^{d-1}$ (``two points collapsing to each other") and the second term to the upper ``out" sphere $S^{d-1}$ (``two points going infinitely far away from each other").
{\Large \section{\bf Transcendental formulae for a class of $\mathcal{H}\mathit{olie}_d$ morphisms} }
\subsection{De Rham theories on operads of manifolds with corners} Let $X=\{X_k\}$ be a (possibly coloured) operad on the category of semialgebraic manifolds (or smooth manifolds with corners), and ${\mathfrak G}=\{{\mathfrak G}(k)\}$ some dg cooperad of graphs with the same set of colours (e.g., the dual cooperad of the operad ${\mathcal D}{\mathcal G} ra_d$ or ${\mathcal D} {\mathcal G} ra_d^{or}$ from \S 2). A {\em de Rham ${\mathfrak G}$-theory on the operad $X$}\, is by definition a collection of ${\mathbb S}_k$-equivariant (and respecting colours) morphisms of complexes, $$ \begin{array}{rccc} \Omega_k: & {\mathfrak G}(k)& \longrightarrow & \Omega^\bullet(X_k)\\ & \Gamma & \longrightarrow & \Omega_\Gamma \end{array} $$ where $\Omega^\bullet(X_k)$ stands for the de Rham algebra of piecewise semialgebraic differential forms on $X_k$, which satisfy the following compatibility condition: for any $k,l\in {\mathbb N}$ and any $i\in [k]$ the associated operad composition $$ \circ_i: X_k \times X_l \longrightarrow X_{k+l-1} $$ and the cooperad co-composition $$ \Delta_i: {\mathfrak G}(k+l-1) \longrightarrow {\mathfrak G}(k)\otimes {\mathfrak G}(l) $$ make the following diagram commutative, \[
\xymatrix{
{\mathfrak G}(k+l-1)\ar[r]^{\Omega_{k+l-1}}\ar[d]_{\Delta_i} & \Omega^\bullet(X_{k+l-1})\ar[r]^{\circ^*_i} & \Omega^\bullet(X_{k}\times X_l) \\
{\mathfrak G}(k)\otimes_{\mathbb K} {\mathfrak G}(l)\ar[r]_{\Omega_{k}\otimes \Omega_l\ \ } & \ \Omega^\bullet(X_k)\otimes_{\mathbb K} \Omega^\bullet(X_{l})\ar[ur]_i & } \] where $$ \begin{array}{rccc} i: & \Omega^\bullet(X_k)\otimes_{\mathbb K} \Omega^\bullet(X_{l}) & \longrightarrow & \Omega^\bullet(X_{k}\times X_l)\\ & \omega_1 \otimes \omega_2 &\longrightarrow & \omega_1 \wedge \omega_2 \end{array} $$ is the natural inclusion.
\subsubsection{\bf Proposition}\label{3: From DRhamFT to MC element} {\em Let ${\mathfrak G}$ be the cooperad dual to the operad ${\mathcal D} {\mathcal G} ra_d$ (resp., to ${\mathcal D}{\mathcal G} ra^{or}_d$) equipped with the trivial differential. Then a de Rham ${\mathfrak G}$-theory on the operad of configuration spaces
$\overline{C}_\bullet({\mathbb R}^d)=\{\overline{C}_k({\mathbb R}^d)\}_{k\geq 2}$ gives rise to the following
Maurer-Cartan element
\begin{equation}\label{3: MC element from DRham theory}
\Upsilon:=\sum_{k\geq 2} \sum_{\Gamma\in {\mathfrak G}(k)} \left(\int_{\overline{C}_k({\mathbb R}^d)} \Omega_\Gamma\right) \Gamma
\end{equation} in the (non-differential) Lie algebra\, $\mathsf{dfGC}_d$ (respectively, in\, $\mathsf{fGC}_d^{or}$).}
The second summation in (\ref{3: MC element from DRham theory}) runs over the set of generators of the vector space ${\mathcal D} {\mathcal G} ra_d(k)$ (resp., ${\mathcal D}{\mathcal G} ra^{or}_d(k)$), and we assume $\int_{\overline{C}_k({\mathbb R}^d)} \Omega_\Gamma=0$ if $\deg \Omega_\Gamma\neq \dim \overline{C}_k({\mathbb R}^d)$. This proposition is just a reformulation of Theorem 4.2.1 in \cite{Me0}, and we refer to that paper for its proof. It is worth noting that only connected graphs can give a non-zero contribution to the sum (\ref{3: MC element from DRham theory}).
\subsection{De Rham ${\mathfrak G}$-theories from propagators} There is a large class of de Rham ${\mathfrak G}$-theories on $\overline{C}_\bullet({\mathbb R}^d)=\{\overline{C}_k({\mathbb R}^d)\}_{k\geq 2}$ constructed as follows. Let $\omega$ be an arbitrary top degree differential form on the sphere $$ {C}_2({\mathbb R}^d)=\overline{C}_2({\mathbb R}^d)=S^{d-1} $$ normalized so that $$ \int_{S^{d-1}} \omega=1. $$ We call such a differential form a {\em propagator}. For any ordered pair of distinct numbers $(i,j)$ with $i,j\in [k]$, consider the smooth map $$ \begin{array}{rccc} \pi_{ij}: & C_{k}({\mathbb R}^d) &\longrightarrow & C_{2}({\mathbb R}^d) \\
& (p_1, \ldots, p_k) & \longrightarrow & \frac{p_i-p_j}{|p_i-p_j|}. \end{array} $$ The pullback $\pi_{ij}^*(\omega)$ defines a degree $d-1$ differential form on $C_{k}({\mathbb R}^d)$ which extends smoothly to the compactification $\overline{C}_k({\mathbb R}^d)$. In particular, for any directed graph $\Gamma$ with $k$ labelled vertices and any edge $e\in E(\Gamma)$ there is an associated
differential form ${\pi}_e^*(\omega)\in \Omega^{d-1}_{\overline{C}_k({\mathbb R}^d)}$, where ${\pi}_e:={\pi}_{ij}$ if the edge $e$ begins at the vertex labelled by
$i$ and ends at the vertex labelled by $j$. Then, for ${\mathfrak G}$ being the cooperad dual to the operad ${\mathcal D} {\mathcal G} ra_d$, consider a collection of maps $$ \begin{array}{rccc} \Omega_k: & {\mathfrak G}(k) & \longrightarrow & \Omega^\bullet_{\overline{C}_k({\mathbb R}^d)}\\ & \Gamma & \longrightarrow & \Omega_\Gamma:=\displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega\right). \end{array} $$ It defines a de Rham ${\mathfrak G}$-theory on the operad $\overline{C}_\bullet({\mathbb R}^d)$ which gives rise to a Maurer-Cartan element (\ref{3: MC element from DRham theory}) in $\mathsf{dfGC}_d$ and hence induces a $\mathcal{H}\mathit{olie}_d$ structure in ${\mathbb A}_d^{(n)}$, \begin{equation}\label{3: om Lie infty structure} \mu^{\omega}=\left\{\mu^{\omega}_k: \otimes^k {\mathbb A}_d^{(n)} \longrightarrow {\mathbb A}_d^{(n)}[ d+1-kd] \right\} \end{equation} given explicitly by \begin{equation}\label{3: mu_k^omega Lie infty structure} \mu^{\omega}_k=\sum_{\Gamma\in {\mathfrak G}(k)} \left(\int_{\overline{C}_k({\mathbb R}^d)} \Omega_\Gamma\right) \Phi_\Gamma. \end{equation} As $\wedge^N \omega=0$ for sufficiently large $N$, graphs with too many edges between any pair of vertices do not contribute to the sum in the r.h.s.\ of (\ref{3: mu_k^omega Lie infty structure}) so that the sum is finite and the formula is well-defined.
Note that a directed (or oriented) graph $\Gamma$ with $k$ vertices can make a non-zero contribution to (\ref{3: MC element from DRham theory}) or to $\mu^{\omega}_k$ only if $(d-1)\mid (kd-d-1)$, that is, only if
$k=(d-1)l+2$ for some $l\in {\mathbb N}$;
in that case the number of edges
of $\Gamma$ must be equal to $\frac{kd-d-1}{d-1}=dl+1$.
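For example, for $d=2$ only graphs with $k=l+2$ vertices and $2l+1$ edges can contribute (three vertices and three edges, four vertices and five edges, and so on), while for $d=3$ only graphs with $k=2l+2$ vertices and $3l+1$ edges can contribute.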
Denote by ${\mathsf G}_{k,l}$ (respectively, ${\mathsf G}_{k,l}^{or}$) the subset of the set ${\mathcal G}_{k,l}$ (respectively, ${\mathcal G}_{k,l}^{or}$)
of directed (oriented) graphs consisting of connected graphs $\Gamma$ such that every vertex of $\Gamma$ has valency $\geq 2$. Then we have the following sharpening of Proposition {\ref{3: From DRhamFT to MC element}}.
\subsubsection{\bf Proposition}\label{3: Prop on Upsilon^om and mu^om} {\em For any propagator $\omega$ on $S^{d-1}$, $d\geq 2$, there is an associated Maurer-Cartan element \begin{equation}\label{3: Upsilon^omega} \Upsilon^{\omega}= \begin{array}{c}\resizebox{7mm}{!}{\xy (0,2)*+{_1}*\cir{}="b", (8,2)*+{_2}*\cir{}="c",
\ar @{->} "b";"c" <0pt> \endxy} \end{array} - (-1)^d \begin{array}{c}\resizebox{7mm}{!}{\xy (0,2)*+{_2}*\cir{}="b", (8,2)*+{_1}*\cir{}="c",
\ar @{->} "b";"c" <0pt> \endxy} \end{array} + \sum_{l\geq 1} \sum_{\Gamma\in {\mathsf G}_{l(d-1)+2,ld+1}} \left(\int_{\overline{C}_{l(d-1)+2}({\mathbb R}^d)} \displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega\right)\right) \Gamma \end{equation} in $\mathsf{dfGC}_d$, and an associated $\mathcal{H}\mathit{olie}_d$ algebra structure (\ref{3: om Lie infty structure}) can have $\mu_k^\omega$ non-vanishing only for $k= l(d-1)+2$ for some $l\in {\mathbb N}$, and with $\mu_2^{\omega}$ given by the standard Schouten bracket (\ref{2: standard Lie bracket in A_d}).}
\begin{proof}
It remains to check that (i) disconnected graphs and (ii) connected directed graphs with univalent vertices do not contribute to the sum over $l\geq 1$. Let us show the second claim, the proof of the first claim being analogous (cf.\ \cite{Ko}).
Let $\Gamma\in {\mathcal G}_{l(d-1)+2,ld+1}$, $l\geq 1$, be a connected directed graph with a univalent vertex $v\in V(\Gamma)$, and let $v'$ be the unique vertex connected to $v$ by the unique edge $e_{v,v'}$. Note that $v'$ has valency at least $2$ (as $\Gamma$ is connected and has $\geq 3$ vertices) so that
there is a vertex $v''\in V(\Gamma)\setminus v $ which is connected by an edge to $v'$.
Let $p_{v'}$ and $p_{v''}$ be the two distinct points in ${\mathbb R}^d$ corresponding to the vertices $v'$ and $v''$ respectively.
Using the action of the group ${\mathbb R}^+ \ltimes {\mathbb R}^d$ on ${\mathbb R}^d$ we can put $p_{v'}$ into $0\in {\mathbb R}^d$ and $p_{v''}$ on the unit sphere $S^{d-1}$ with center at $0$. The integral factorizes as follows $$ \int_{{C}_{l(d-1)+2}({\mathbb R}^d)} \displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega\right)= \int_{{\mathit{Conf}}_1({\mathbb R}^d)} \pi^*_{e_{v,v'}}(\omega) \cdot \int_{M\subset {\mathit{Conf}}_{l(d-1)}({\mathbb R}^d) } \displaystyle \bigwedge_{e\in E(\Gamma)\setminus e_{v,v'}}\hspace{-2mm} {\pi}^*_e\left(\omega\right). $$ The form $\bigwedge_{e\in E(\Gamma)\setminus e_{v,v'}}\hspace{-2mm} {\pi}^*_e\left(\omega\right)$ has degree $ld(d-1)$ and $M$ is a subspace in ${\mathit{Conf}}_{l(d-1)}({\mathbb R}^d)$ of dimension $ld(d-1)-1$ (as one of the configuration points, $p_{v''}$, is restricted to lie on $S^{d-1}$). Hence the form $\bigwedge_{e\in E(\Gamma)\setminus e_{v,v'}}\hspace{-2mm} {\pi}^*_e\left(\omega\right)$ vanishes identically on $M$ and the claim is proven. \end{proof}
\subsubsection{\bf Example: the standard Schouten type bracket} If one chooses the propagator $$ \omega_0:=\mathrm{Vol}_{S^{d-1}} $$ to be the standard homogeneous (normalized to $1$) volume form on $S^{d-1}$ then, thanks to Kontsevich's Vanishing Lemma (proven for $d=2$ case in \cite{Ko} and for $d\geq 3$ in \cite{Ko0}), all integrals in the sum (\ref{3: Upsilon^omega}) over $l\geq 1$ vanish so that \begin{equation}\label{3: Upsilon_0} \Upsilon^{\omega_0}= \begin{array}{c}\resizebox{6.3mm}{!}{\xy (0,1)*+{_1}*\cir{}="b", (8,1)*+{_2}*\cir{}="c",
\ar @{->} "b";"c" <0pt> \endxy} \end{array} - (-1)^d \begin{array}{c}\resizebox{7mm}{!}{\xy (0,1)*+{_2}*\cir{}="b", (8,1)*+{_1}*\cir{}="c",
\ar @{->} "b";"c" <0pt> \endxy}= \end{array}=:\xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy \end{equation} The associated $\mathcal{H}\mathit{olie}_d$ structure $\mu^{\omega_0}$ in ${\mathbb A}_d^{(n)}$ is just the standard Schouten bracket (\ref{2: standard Lie bracket in A_d}).
\subsubsection{\bf Example: a class of ${\mathcal L} ie_\infty$ structures given by oriented graphs} Let $g(x)$ be a non-negative function on the sphere $$ S^{d-1}=\{(x_1,\ldots, x_d)\in {\mathbb R}^d\ \mid x_1^2 + \ldots + x_d^2=1\} $$ with compact support in the upper ($x_d>0$) half of $S^{d-1}$ and normalized so that $$ \int_{S^{d-1}} g\, \mathrm{Vol}_{S^{d-1}} =1. $$ We can and will assume from now on that the function $g(x)$ on $S^{d-1}$ is invariant under the reflection in the $x_d$-axis, $$ \sigma: \{x_i\rightarrow -x_i\}_{1\leq i \leq d-1}, \ \ x_d\rightarrow x_d, $$ so that the propagator \begin{equation}\label{3: omega_g propagator} \omega_g:= g\, \mathrm{Vol}_{S^{d-1}} \end{equation} satisfies \begin{equation}\label{3: reflection sigma} \sigma^*(\omega_g)=(-1)^{d-1} \omega_g. \end{equation} It is clear that only {\em oriented}\, graphs can give a non-trivial contribution to the associated Maurer-Cartan element (\ref{3: Upsilon^omega}) (so that $\Upsilon^{\omega_g}\in \mathsf{dfGC}_d^{or}$) and that the associated $\mathcal{H}\mathit{olie}_d$ structure $\mu^{\omega_g}$ on ${\mathbb A}_d^{(n)}$ is well-defined in the limit $n\rightarrow +\infty$.
The imposed symmetry property (\ref{3: reflection sigma}) leads to the vanishing of many terms in the sum (\ref{3: Upsilon^omega}).
\subsubsection{\bf Proposition}\label{3: Prop on Upsilon^om_g} {\em For any propagator $\omega_g$ as above the associated MC element in $\mathsf{dfGC}_d^{or}$ has the form \begin{equation}\label{3: Usilon^omega_g} \Upsilon^{\omega_g}= \xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy + \sum_{p\geq 1} \sum_{\Gamma\in {\mathsf G}^{or}_{2p(d-1)+2,2pd+1}} \left(\int_{\overline{C}_{2p(d-1)+2}({\mathbb R}^d)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right) \Gamma \end{equation} so that the associated $\mathcal{H}\mathit{olie}_d$ structure in ${\mathbb A}_d^{(n)}$ can have linear maps $ \mu_k^{\omega_g}\neq 0 $ only for $k=2p(d-1)+2 $, $p\in N$. }
\begin{proof} By Proposition {\ref{3: Prop on Upsilon^om and mu^om}}, $\mu_k^{\omega_g}$ can be non-zero only if $k=(d-1)l+2$ for some $l\in {\mathbb N}$. Let $$ C_\Gamma:=\int_{\overline{C}_{(d-1)l+2}({\mathbb R}^d)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right) $$ be the weight of a summand $\Gamma\in {\mathsf G}_{(d-1)l+2,dl+1}$ in $\mu_{(d-1)l+2}^{\omega_g}$ or in $\Upsilon^g$. Using the translation freedom we can fix one of the vertices of $\Gamma$ at $0\in {\mathbb R}^d$. If $\sigma$ stands for the reflection in the $x_d$-axis we have (cf.\ \cite{Ko,Sh}), $$ \int_{\sigma(\overline{C}_{(d-1)l+2}({\mathbb R}^d))} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)= \int_{\overline{C}_{(d-1)l+2}({\mathbb R}^d)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} \sigma^*\left({\pi}^*_e\left(\omega_g\right)\right). $$ As $\sigma(\overline{C}_{(d-1)l+2}({\mathbb R}^d))$ is equal to $\overline{C}_{(d-1)l+2}({\mathbb R}^d)$ with orientation changed by the factor $(-1)^{(k-1)(d-1)}$ and as $\sigma^*(\omega_g)=(-1)^{d-1}\omega_g$, we obtain an equality $$ (-1)^{((d-1)l+2-1)(d-1)} C_\Gamma= (-1)^{(dl+1)(d-1)}C_\Gamma $$ which implies $C_\Gamma=0$ unless $$ (d-1)l+1\equiv dl+1\bmod 2{\mathbb Z} $$ i.e. unless $l=2p$ for some $p\in {\mathbb N}$. \end{proof}
\subsubsection{\bf Example: a Kontsevich-Shoikhet ${\mathcal L} ie_\infty$ structure} If $d=2$, then only oriented graphs $\Gamma$ with an even number $2p$ of vertices contribute to $\Upsilon^g$, and the leading terms are given explicitly by \cite{Sh} \begin{equation}\label{3: Upsilon_g^2} \Upsilon_{KS}^g:=\Upsilon^{\omega_g}= \xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy \ + \ \lambda\begin{array}{c}\resizebox{11mm}{!}{\xy
(0,0)*{\bullet}="1", (-7,16)*{\bullet}="2", (7,16)*{\bullet}="3", (0,10)*{\bullet}="4",
\ar @{<-} "2";"4" <0pt> \ar @{<-} "3";"4" <0pt> \ar @{<-} "4";"1" <0pt> \ar @{<-} "2";"1" <0pt> \ar @{<-} "3";"1" <0pt> \endxy}\end{array} +
2\lambda \begin{array}{c}\resizebox{11mm}{!}{\xy
(0,0)*{\bullet}="1", (-6,6)*{\bullet}="2", (6,10)*{\bullet}="3", (0,16)*{\bullet}="4",
\ar @{<-} "4";"3" <0pt> \ar @{<-} "4";"2" <0pt> \ar @{<-} "3";"2" <0pt> \ar @{<-} "2";"1" <0pt> \ar @{<-} "3";"1" <0pt> \endxy}\end{array} +
\lambda
\begin{array}{c}\resizebox{11mm}{!}{\xy
(0,16)*{\bullet}="1", (-7,0)*{\bullet}="2", (7,0)*{\bullet}="3", (0,6)*{\bullet}="4",
\ar @{->} "2";"4" <0pt> \ar @{->} "3";"4" <0pt> \ar @{->} "4";"1" <0pt> \ar @{->} "2";"1" <0pt> \ar @{->} "3";"1" <0pt> \endxy}\end{array} \ \ \ + \ldots =:\sum_{p\geq 1} \Upsilon_{2p} \end{equation} for some $\lambda\in {\mathbb R}\setminus 0$. In view of the homotopy uniqueness of the Kontsevich-Shoikhet element $\Upsilon_{KS}\in \mathsf{fGC}_3^{or}$, the sum $\Upsilon_{KS}^g$ must be gauge equivalent (with the gauge depending on the choice of a function $g$) to an element $\Upsilon_{KS}$ constructed by induction in \S {\ref{2: subsec on oriented graph complexes}}.
Thus the propagator $\omega_g$ induces a Kontsevich-Shoikhet $\mathcal{H}\mathit{olie}_2$ structure $\mu_{KS}^g$ in ${\mathbb A}_2^{(n)}= {\mathcal T}_{poly}({\mathbb R}^{n})$ \begin{equation}\label{3: KS g-structure in T_poly} [\ ,...,\ ]_{2p}:= \sum_{\Gamma\in {\mathsf G}_{2p, 4p-3}^{or}} \left(\int_{\overline{C}_{2p}({\mathbb R}^2)} \displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right) \Phi_\Gamma: {\mathcal T}_{poly}({\mathbb R}^{n})^{\otimes 2p} \rightarrow {\mathcal T}_{poly}({\mathbb R}^n)[3-4p] \end{equation} which is isomorphic to the one introduced in \cite{Sh}.
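Note that the first of these operations, $[\ ,\ ]_2$, is a sum over oriented graphs with two vertices and one edge and coincides with the standard Schouten bracket, while the first higher operation $[\ ,\ ,\ ,\ ]_4$ is a sum over oriented graphs with four vertices and five edges --- the same graphs which enter the summand $\Upsilon_4$ of (\ref{3: Upsilon_g^2}).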
We have two explicit $\mathcal{H}\mathit{olie}_2$ structures in ${\mathbb A}_2^{(n)}$, the standard one (\ref{2: standard Lie bracket in A_d}) corresponding to the propagator $\omega_0$ and the Kontsevich-Shoikhet one corresponding to the propagator $\omega_g$ (\ref{3: omega_g propagator}). Shoikhet conjectured in \cite{Sh} that for $d=2$ these two structures are $\mathcal{H}\mathit{olie}_2$ isomorphic, i.e.\ that there is a $\mathcal{H}\mathit{olie}_2$ isomorphism $$ F: \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,\ ]_S\right) \longrightarrow \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,...,\ ]_{2p}, p\geq 1\right). $$
Stated in terms of graphs, this conjecture says that as Maurer-Cartan elements in $\mathsf{dfGC}_2$ the expressions (\ref{3: Upsilon_0}) and (\ref{3: Upsilon_g^2}) are gauge-equivalent to each other, \begin{equation}\label{2: gauge equiv of d and d_KS} \Upsilon_S = e^{\mathrm{ad}_\Theta} \Upsilon_{KS}^g= e^{\mathrm{ad}_\Theta}\left(\sum_{p=1}^\infty \Upsilon_{2p}\right) \end{equation} for some degree zero element $\Theta$ in $\mathsf{fGC}_2$. That this relation holds true is far from obvious. Indeed, let us attempt to construct $\Theta$ by induction on the number of vertices (as we managed to construct $\Upsilon_{KS}$ above). The first step is easy --- the term $\Upsilon_4$ is $\delta$-exact in $\mathsf{dfGC}_2$, $$ \Upsilon_4=\lambda \delta \left(\begin{array}{c}\resizebox{10mm}{!}{\xy
(0,0)*{\bullet}="1", (-12,6)*{\bullet}="2", (0,12)*{\bullet}="3",
{\ar@/^0.6pc/(0,12)*{\bullet};(0,0)*{\bullet}};
{\ar@/^0.6pc/(0,0)*{\bullet};(0,12)*{\bullet}};
\ar @{->} "2";"3" <0pt> \ar @{<-} "1";"2" <0pt> \endxy}\end{array} + \begin{array}{c}\resizebox{10mm}{!}{\xy
(0,0)*{\bullet}="1", (-12,6)*{\bullet}="2", (0,12)*{\bullet}="3",
{\ar@/^0.6pc/(0,12)*{\bullet};(0,0)*{\bullet}};
{\ar@/^0.6pc/(0,0)*{\bullet};(0,12)*{\bullet}};
\ar @{<-} "2";"3" <0pt> \ar @{->} "1";"2" <0pt> \endxy}\end{array}
\right) $$ and we can use the sum of two graphs of degree zero inside the brackets to gauge away $\Upsilon_4$. However the next obstruction becomes a {\em wheeled}\, graph $\Upsilon_6'$ from $\mathsf{dfGC}_2$ so that starting with the second step all the obstruction classes land in $H^1(\mathsf{dfGC}_2)=H^1(\mathsf{GC}_2)$ (rather than in $H^1(\mathsf{GC}_2^{or})$), the same cohomology group where, according to Kontsevich \cite{Ko-f}, all the obstructions for the universal deformation quantization of Poisson structures lie. Therefore, the formula for $\Theta$ must be as highly non-trivial as the Kontsevich quantization formula in \cite{Ko}. One of our main results in this paper is such an explicit formula for $\Theta$ proving thereby Shoikhet's conjecture (in fact, we show that it holds true for {\em any}\, value of the integer parameter $d$).
An MC element of the $\mathcal{H}\mathit{olie}_2$ algebra $\mu_{KS}^{\omega_g}$ can be defined (to ensure convergence) as a degree $2$ formal power series $\pi=\hbar \pi^\diamond(\hbar)$ for some $\pi^\diamond(\hbar) \in {\mathcal T}_{poly}({\mathbb R}^n)[[\hbar]]$ satisfying the equation $$ \frac{1}{2}[\pi,\pi]_2 + \frac{1}{4!} [\pi,\pi,\pi,\pi]_4 + \ldots =0, $$ or, equivalently, $$ \frac{1}{2}[\pi^\diamond,\pi^\diamond]_2 + \frac{\hbar^2}{4!} [\pi^\diamond,\pi^\diamond,\pi^\diamond,\pi^\diamond]_4 + \frac{\hbar^4}{6!} [\pi^\diamond,\pi^\diamond,\pi^\diamond,\pi^\diamond,\pi^\diamond,\pi^\diamond]_6 + \ldots =0. $$ The equation is invariant under $\hbar\rightarrow -\hbar$ so that it makes sense to look for solutions $\pi^\diamond(\hbar)$ which are also invariant under $\hbar\rightarrow -\hbar$, i.e.\ which are formal power series in $\hbar^2$. Such solutions are precisely what we call quantizable Poisson structures, and making the change $\hbar^2 \rightarrow \hbar$ we arrive at the defining equations in Subsection 1.2.
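In the lowest orders in $\hbar$ the latter equation unpacks as follows: writing $\pi^\diamond=\pi^\diamond_0+\hbar^2\pi^\diamond_1+\hbar^4\pi^\diamond_2+\ldots$, the $\hbar^0$-part says that the leading term is an ordinary Poisson structure, $$ \frac{1}{2}[\pi^\diamond_0,\pi^\diamond_0]_2=0, $$ while the $\hbar^2$-part constrains the next coefficient, $$ [\pi^\diamond_0,\pi^\diamond_1]_2 + \frac{1}{4!}[\pi^\diamond_0,\pi^\diamond_0,\pi^\diamond_0,\pi^\diamond_0]_4=0 $$ (we used the symmetry of the Schouten bracket on bivector fields), and so on.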
\subsubsection{\bf Example: a class of Kontsevich-Shoikhet type ${\mathcal L} ie_\infty$ structures in the case $d=3$} In this case one can apply a refined version {\ref{A: propos on hat{sG}}} of Proposition {\ref{3: Prop on Upsilon^om_g}} and write explicitly the associated Maurer-Cartan element \begin{eqnarray} \Upsilon^{\omega_g}&=&\xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy + \sum_{p\geq 2} \sum_{\Gamma\in \hat{{\mathsf G}}^{or}_{4p+2,6p+1}} \left(\int_{\overline{C}_{4p+2}({\mathbb R}^3)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right) \Gamma \label{3: Upsilon om_g for d=3} \end{eqnarray} and the associated ${\mathcal H} olie_3$ structure $\mu^{\omega_{\bar{g}}}=\{\mu^{\omega_{\bar{g}}}_{4p+2}\}_{p\geq 2}$ in ${\mathbb A}_3^{(n)}$ for any $n\in N$ \begin{equation}\label{3: Holied om_bar{g} structure in A_3} \mu_2=[\ ,\ ]_S \ \ \text{and}\ \ \mu^{\omega_{\bar{g}}}_{4p+2}:= \sum_{\Gamma\in \hat{{\mathsf G}}^{or}_{4p+2, 6p+1}} \left(\int_{\overline{C}_{4p+2}({\mathbb R}^3)} \displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right) \Phi_\Gamma
\ \ \text{for}\ p\geq 2, \end{equation} using the subset of graphs $\hat{{\mathsf G}}_{4p+2,6p+1}\subset {\mathsf G}_{4p+2,6p+1}$ introduced in Appendix A. This gives us a $3$-dimensional analogue of the Kontsevich-Shoikhet structure on polyvector fields.
Maurer-Cartan elements of the Lie algebra $({\mathbb A}_3^{(n)}, [\ ,\ ]_S)$ are precisely (strongly homotopy) Lie bialgebra structures in the vector space $V=\mathrm{span}\langle x_1,\ldots, x_n\rangle$. Maurer-Cartan elements in the continuous $\mathcal{H}\mathit{olie}_3$ algebra $({\mathbb A}_3^{(n)}[[\hbar]], \mu^{\omega_{g}})$, that is, degree $3$ elements $\pi^\diamond\in {\mathbb A}_3^{(n)}[[\hbar]]$ satisfying the equation $$ [\pi^\diamond,\pi^\diamond]_S+ \sum_{p\geq 2} \frac{\hbar^{p}}{(4p+2)!} \mu_{4p+2}^{\omega_{{g}}}(\pi^\diamond,\pi^\diamond,\ldots, \pi^\diamond) =0, $$ are called {\em quantizable Lie bialgebras}. We show in Section 7 below that the latter structures can be easily deformation quantized via an explicit formula. We also show below an explicit formula for a universal (i.e.\ independent of a particular value of $n$) $\mathcal{H}\mathit{olie}_3$ morphism $$ ({\mathbb A}_3^{(n)}, [\ ,\ ]_S) \longrightarrow ({\mathbb A}_3^{(n)}[[\hbar]], \mu^{\omega_{{g}}}). $$ The two formulae provide us with an explicit universal quantization of ordinary Lie bialgebras.
We shall be interested below in a special class of propagators of type $\omega_{g}$ on $S^2$ constructed as follows. Consider the upper hemisphere, $$ S^2_+:=\{(x,y,z)\in {\mathbb R}^3 \ \mid \ x^2+y^2+z^2=1,\ z>0\} $$ and the well-defined smooth map $$ \begin{array}{rccc} \nu_+: & S_+^2 & \longrightarrow & S^1\times S^1\\
& (x,y,z) & \longrightarrow & ({\mathrm{Arg}}(x+iz), {\mathrm{Arg}}(y+iz)) \end{array} $$
Let $\bar{g}(\theta)d\theta$ be a normalized volume form on the circle $S^1=\{e^{i\theta} \ |\ \theta \in [0,2\pi]\}$ as in (\ref{3: omega_g propagator}), i.e.\ the function $\bar{g}(\theta)$ has compact support in the open interval $(0, \pi)$ and satisfies the standard conditions for a $d=2$ propagator, $$ \bar{g}(\theta)=\bar{g}(\pi-\theta), \ \ \ \ \int_{0}^{2\pi} \bar{g}(\theta)d\theta =1. $$ Then \begin{equation}\label{3: propagator omega_bar{g}} \omega_{\bar{g}}:=\nu_+^*\left(\bar{g}(x+iz)\bar{g}(y+iz) dArg(x+iz)\wedge dArg(y+iz) \right) \end{equation} is a smooth differential form on $S_+^2$ which extends by zero to a smooth differential form on $S^2$ and which satisfies the standard conditions for a $d=3$ propagator,
$$
\int_{S^2}\omega_{\bar{g}} =1,
$$
and
\begin{eqnarray*}
\sigma^*\left(\omega_{\bar{g}}\right)&=&\sigma^*\left(\nu_+^*\left(\bar{g}(x+iz)\bar{g}(y+iz) dArg(x+iz)\wedge dArg(y+iz) \right)\right)\\
&=& \nu_+^*\left(\bar{g}(-x+iz)\bar{g}(-y+iz) dArg(-x+iz)\wedge dArg(-y+iz) \right)\\
&=& \nu_+^*\left(\bar{g}(x+iz)\bar{g}(y+iz) (-1)^2 dArg(x+iz)\wedge dArg(y+iz) \right)\\
&=& \omega_{\bar{g}}.
\end{eqnarray*} Hence the propagator $\omega_{\bar{g}}$ belongs to the family of propagators\footnote{We apologize for some abuse of notations --- the propagator $\omega_{\bar{g}}$ is {\em not}\, equal to $\bar{g}\mathrm{Vol}_{S^2}$; the role of the bar in the notation $\omega_{\bar{g}}$ is to emphasize this difference.} (\ref{3: omega_g propagator}) so that all the above claims hold true for $\omega_{\bar{g}}$.
The $1$-form $$ \Omega_{\bar{g}}(\theta):= \bar{g}(\theta)d\theta $$ has support in the open interval $(0,\pi)$ and hence it makes sense to consider its {\em iterated integrals}, \begin{equation}\label{3: numbers Lambda_g} \Lambda_{\bar{g}}^{(p)}:= \int_{0}^\pi\underbrace{\Omega_{\bar{g}}\Omega_{\bar{g}}\ldots \Omega_{\bar{g}}}_{p\ \text{times}} \end{equation} which are some positive real numbers with $\Lambda_{\bar{g}}^{(1)}=1$.
\subsection{Transcendental formula for a class of $\mathcal{H}\mathit{olie}_d$ morphisms} Consider an operad ${\mathcal D} {\mathcal G} ra_d$ and its 2-coloured version \begin{equation}\label{3: 2-colourd operad DGrad} \underline{{\mathcal D} {\mathcal G} ra}_d=\left({\mathcal D} {\mathcal G} ra^{out}_d, {\mathcal D} {\mathcal G} ra^{mor}_d, {\mathcal D} {\mathcal G} ra^{in}_d \right) \end{equation} consisting of three copies of ${\mathcal D} {\mathcal G} ra_d$: one copy is denoted by ${\mathcal D} {\mathcal G} ra^{out}_d$ and has inputs and outputs in ``dashed" colour, the second copy is denoted by ${\mathcal D} {\mathcal G} ra^{mor}_d$ and has inputs in ``solid" colour and the output in the dashed colour, and the third copy is denoted by ${\mathcal D} {\mathcal G} ra^{in}_d$ and has both inputs and outputs in the ``solid" colour (cf.\ Proposition {\ref{2: Propos on the face complex of Mor(Lie_infty)}}). Therefore for any $n,m\in {\mathbb N}$ and any $i\in [n]$ the only non-trivial operadic compositions are of the form $$ \circ_i: {\mathcal D} {\mathcal G} ra^{out}_d(n)\otimes {\mathcal D} {\mathcal G} ra^{out}_d(m) \longrightarrow {\mathcal D} {\mathcal G} ra^{out}_d(n+m-1), \ \ \ \ \ \ \circ_i: {\mathcal D} {\mathcal G} ra^{in}_d(n)\otimes {\mathcal D} {\mathcal G} ra^{in}_d(m) \longrightarrow {\mathcal D} {\mathcal G} ra^{in}_d(n+m-1), $$ $$ \circ_i: {\mathcal D} {\mathcal G} ra^{out}_d(n)\otimes {\mathcal D} {\mathcal G} ra^{mor}_d(m) \longrightarrow {\mathcal D} {\mathcal G} ra^{mor}_d(n+m-1), \ \ \ \ \ \ \circ_i: {\mathcal D} {\mathcal G} ra^{mor}_d(n)\otimes {\mathcal D} {\mathcal G} ra^{in}_d(m) \longrightarrow {\mathcal D} {\mathcal G} ra^{mor}_d(n+m-1). $$ Similarly one defines a 2-coloured operad $\underline{{\mathcal G} ra}_d^{or}$ of oriented graphs. Let $\underline{{\mathfrak G}}$ and $\underline{{\mathfrak G}}^{or}$ be the cooperads dual to the operads $\underline{{\mathcal D} Gra}_d$ and $\underline{{\mathcal G} ra}_d^{or}$ respectively.
\subsubsection{\bf Proposition} {\em Let $ \underline{{\mathfrak G}}=\left({\mathfrak G}^{in}, {\mathfrak G}^{mor}, {\mathfrak G}^{out}\right) $ be the 2-coloured cooperad dual to the operad (\ref{3: 2-colourd operad DGrad}). Then a de Rham $\underline{{\mathfrak G}}$-theory, $$ \Omega=\left(\Omega^{in}, \Omega^{mor}, \Omega^{out}\right): \left({\mathfrak G}^{in}, {\mathfrak G}^{mor}, {\mathfrak G}^{out}\right) \longrightarrow \left(\Omega^\bullet_{\overline{C}_\bullet({\mathbb R}^d)}, \Omega^\bullet_{\overline{{\mathfrak C}}_{\bullet}({\mathbb R}^d)}, \Omega^{\bullet}_{\overline{C}_\bullet({\mathbb R}^d)}\right) $$
on the 2-coloured operad of compactified configuration spaces $\underline{{\mathfrak C}}({\mathbb R}^d)$ (see (\ref{3: Lie_infty config topol operad})) provides us with a $\mathcal{H}\mathit{olie}_d$-isomorphism between $\mathcal{H}\mathit{olie}_d$-algebras, $$ F: \left({\mathbb A}_d^{(n)}, \mu_\bullet^{\Upsilon_{in}}\right) \longrightarrow \left({\mathbb A}_d^{(n)}, \mu_\bullet^{\Upsilon_{out}}\right) \ \ \ \ \ \forall\ n\in {\mathbb N} $$ associated to the Maurer-Cartan elements $$ \Upsilon_{in}:=\sum_{k\geq 2} \sum_{\Gamma\in {\mathfrak G}^{in}(k)} \left(\int_{\overline{C}_k({\mathbb R}^d)} \Omega_\Gamma^{in}\right) \Gamma \ \ \ \ \text{and} \ \ \ \ \ \Upsilon_{out}:=\sum_{k\geq 2} \sum_{\Gamma\in {\mathfrak G}^{out}(k)} \left(\int_{\overline{C}_k({\mathbb R}^d)} \Omega_\Gamma^{out}\right) \Gamma $$ in $\mathsf{dfGC}_d$. This isomorphism is given explicitly by the following formulae, \begin{equation}\label{3: F=F_k for morphism} F=\left\{ F_k: \otimes^k {\mathbb A}_d^{(n)} \longrightarrow {\mathbb A}_d^{(n)}[d-dk]\right\}_{k\geq 1} \end{equation} where $$ F_k:= \sum_{\Gamma\in {\mathfrak G}^{mor}(k)} \left(\int_{\overline{{\mathfrak C}}_k({\mathbb R}^d)} \Omega_\Gamma^{mor}\right) \Phi_\Gamma. $$ } \begin{proof} The claim follows from the de Rham theorem applied to the family of the compactified configuration spaces
$\overline{{\mathfrak C}}_\bullet({\mathbb R}^d)$ and Proposition {\ref{2: Propos on the face complex of Mor(Lie_infty)}} (see \S 10.1 in \cite{Me1} for details). \end{proof}
\subsection{An example} Consider a smooth degree $d-1$ differential form $\varpi$ on $\overline{{\mathfrak C}}_2({\mathbb R}^d)= S^{d-1}\times [0,1]$ whose restrictions $\omega_{in}:= \varpi\mid_{t=0}$ and $\omega_{out}:=\varpi\mid_{t=1}$ give us top degree differential forms on $\overline{C}_2({\mathbb R}^d)=S^{d-1}$ normalized so that $\int_{S^{d-1}}\omega_{in}=1$ and
$\int_{S^{d-1}}\omega_{out}=1$. Then the collections of maps, $k\geq 1$, $$ \begin{array}{rccc} \Omega_k^{in}: & {\mathfrak G}^{in}(k) & \longrightarrow & \Omega^\bullet_{\overline{C}_k({\mathbb R}^d)}\\ & \Gamma & \longrightarrow & \Omega_\Gamma:=\displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_{in}\right) \end{array} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \begin{array}{rccc} \Omega_k^{out}: & {\mathfrak G}^{out}(k) & \longrightarrow & \Omega^\bullet_{\overline{C}_k({\mathbb R}^d)}\\ & \Gamma & \longrightarrow & \Omega_\Gamma:=\displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_{out}\right) \end{array} $$ $$ \begin{array}{rccc} \Omega_k^{mor}: & {\mathfrak G}^{mor}(k) & \longrightarrow & \Omega^\bullet_{\overline{{\mathfrak C}}_k({\mathbb R}^d)}\\ & \Gamma & \longrightarrow & \Omega_\Gamma:=\displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\varpi\right) \end{array} $$ define a de Rham $\underline{{\mathfrak G}}$-theory on the 2-coloured operad $\underline{{\mathfrak C}}({\mathbb R}^d)$ (see Theorem 10.1.1 in \cite{Me1} for a proof), and hence a $\mathcal{H}\mathit{olie}_d$ isomorphism (\ref{3: F=F_k for morphism}) of the associated $\mathcal{H}\mathit{olie}_d$ algebra structures in ${\mathbb A}_d^{(n)}$ for any $n$.
The propagator (\ref{3: omega_g propagator}) satisfies the following equation $$ \omega_g = \mathrm{Vol}_{S^{d-1}} + d\Psi_g $$ for some degree $d-2$ differential form $\Psi_g$ on $S^{d-1}$. As $H^{d-2}(S^{d-1})$ equals zero for $d\geq 3$ and ${\mathbb R}$ for $d=2$, we can (and will) choose $\Psi_g$ in such a way that (cf.\ (\ref{3: reflection sigma}))
$$ \sigma^*(\Psi_g)=(-1)^{d-1} \Psi_g, $$ where $\sigma: S^{d-1}\rightarrow S^{d-1}$ is the reflection in the $x_d$-axis.
Consider next a differential form on $\overline{{\mathfrak C}}_2({\mathbb R}^d)$, $$
\varpi_g:= \mathsf{Vol}_{S^{d-1}}\left(\frac{p_1-p_2}{|p_1-p_2|}\right) + \frac{|p_1-p_2|}{1+ |p_1-p_2|} d\Psi_g\left(\frac{p_1-p_2}{|p_1-p_2|}\right) +
(-1)^{d-1}\Psi_g\left(\frac{p_1-p_2}{|p_1-p_2|}\right)\wedge d \left(\frac{|p_1-p_2|}{1+ |p_1-p_2|}\right). $$ As it satisfies the conditions $$ \varpi_g\mid_{S^{d-1}_{in}} = \mathsf{Vol}_{S^{d-1}}, \ \ \ \ \ \ \ \ \varpi_g\mid_{S^{d-1}_{out}} = \mathsf{Vol}_{S^{d-1}} + d\Psi_g= \omega_g $$ and \begin{equation}\label{reflection for varpi_g} \sigma^*(\varpi_g)=(-1)^{d-1}\varpi_g, \end{equation} the associated $\underline{{\mathfrak G}}$-theory on $\underline{{\mathfrak C}}({\mathbb R}^d)$ gives us almost immediately the following result (which for $d=2$ proves the Shoikhet conjecture).
\subsubsection{\bf Theorem}\label{3: Theorem on KS iso for A_d^n} {\em For any $d\geq 2$ and any $n\geq 1$ there is a $\mathcal{H}\mathit{olie}_d$ isomorphism between the $\mathcal{H}\mathit{olie}_d$ algebras, $$ F^{\omega_g}: \left({\mathbb A}_d^{(n)}, [\ ,\ ]_S\right) \longrightarrow \left({\mathbb A}_d^{(n)}, \mu^{\omega_g} \right) $$ which is given by (\ref{3: F=F_k for morphism}) with $F_k^{\omega_g}$ possibly non-zero only for $k=1+2q(d-1)$, $q\in {\mathbb Z}^{\geq 0}$, \begin{equation}\label{3: formula for Holie_2 F_k} F^{\omega_g}_{1+2q(d-1)}=\sum_{\Gamma\in {\mathsf G}_{1+2q(d-1), 2qd}} \left(\int_{\overline{{\mathfrak C}}_{1+2q(d-1)}({\mathbb R}^d)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\varpi_g\right) \right) \Phi_\Gamma. \end{equation} } \begin{proof} We have only to check that a connected directed graph $\Gamma$ with all vertices of valency $\geq 2$ can give a non-trivial contribution to the above formulae if and only if it belongs to the set ${\mathsf G}_{1+2q(d-1), 2qd}$ for some non-negative integer $q$.
As $\dim {\mathfrak C}_k({\mathbb R}^d)= kd -d=d(k-1)$ a directed graph $\Gamma$ with $k$ vertices can have non-zero weight $$ c_\Gamma:=\int_{\overline{{\mathfrak C}}_k({\mathbb R}^d)}\bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\varpi_g\right) $$ if and only if its number of edges, say $l$, satisfies the equation
$$
d(k-1)=(d-1)l. $$ Thus $l=pd$ for some $p\in {\mathbb Z}^{\geq 0}$ and hence $k-1=p(d-1)$. Thus only graphs $\Gamma$ from ${\mathsf G}_{1+ p(d-1), pd}$ can have $c_\Gamma\neq 0$.
Using the translation freedom we can fix one of the vertices of $\Gamma$ at $0\in {\mathbb R}^d$. Using the reflection $\sigma$ in the $x_d$-axis as in the proof of Proposition {\ref{3: Prop on Upsilon^om_g}} and formula (\ref{reflection for varpi_g}), we obtain an equality
$$
(-1)^{p(d-1)(d-1)} c_\Gamma= (-1)^{(d-1)pd}c_\Gamma
$$ which implies $c_\Gamma=0$ unless $ p=2q $
for some non-negative integer $q$. \end{proof}
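For instance, for $d=2$ the isomorphism $F^{\omega_g}$ can have non-vanishing components only in odd arities $k=1,3,5,\ldots$, the component $F^{\omega_g}_{1+2q}$ being a sum over connected directed graphs with $1+2q$ vertices and $4q$ edges.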
This Theorem gives us an explicit gauge equivalence between the Maurer-Cartan elements $\Upsilon_S$ and $\Upsilon_{KS}^g$. We use it below in the case $d=2$ to show that such gauge equivalences (and hence the homotopy classes of the associated universal $\mathcal{H}\mathit{olie}_d$ morphisms) are classified by the set of Drinfeld associators. In particular, the Grothendieck-Teichm\"uller group $GRT_1$ acts effectively and transitively on such gauge equivalences.
\subsubsection{\bf Corollary} {\em Given a Maurer-Cartan element $\pi\in {\mathbb A}_d^{(n)}$, $$ [\pi,\pi]_S=0 $$ of the Lie algebra $({\mathbb A}_d^{(n)}, [\ ,\ ]_S)$, the associated formal power series \begin{equation}\label{3: pi-diamond formula} \pi^\diamond=\pi + \sum_{q=1}^\infty \frac{\hbar^q}{(1+2q(d-1))!} F^{\omega_g}_{1+2q(d-1)}(\pi,\ldots,\pi) \end{equation} in ${\mathbb A}_d^{(n)}[[\hbar]]$ satisfies the equation \begin{equation}\label{3: Mc eqn for quantizable str in general d} [\pi^\diamond, \pi^\diamond]_S + \sum_{p\geq 1} \frac{\hbar^p}{(2p(d-1)+2)!} \mu_{2p(d-1)+2}^{\omega_g}(\pi^\diamond,\ldots,\pi^\diamond)=0 \end{equation} }
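For $d=2$, say, the series (\ref{3: pi-diamond formula}) starts as $$ \pi^\diamond=\pi + \frac{\hbar}{3!}\, F^{\omega_g}_{3}(\pi,\pi,\pi) + \frac{\hbar^2}{5!}\, F^{\omega_g}_{5}(\pi,\pi,\pi,\pi,\pi)+\ldots . $$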
In particular, the transcendental morphism $F^{\omega_g}$ sends ordinary Poisson and Lie bialgebra structures into {\em quantizable}\, ones
thereby establishing a 1-1 correspondence between their gauge equivalence classes: (i) given an ordinary Poisson/Lie bialgebra structure $\pi$ in ${\mathbb R}^n$, the above formal power series gives us a quantizable Poisson/Lie bialgebra structure $\pi^\diamond$; (ii) given a quantizable Poisson/Lie bialgebra structure $\pi^\diamond$ in ${\mathbb R}^n$, the initial term $\pi:=\pi^\diamond|_{\hbar=0}$ is an ordinary Poisson/Lie bialgebra structure.
\subsection{Remark} The Kontsevich-Shoikhet $\mathcal{H}\mathit{olie}_2$ structure on polyvector fields and the associated $\mathcal{H}\mathit{olie}_2$ isomorphism (\ref{3: formula for Holie_2 F_k}) have been defined above on the {\em affine}\, space ${\mathbb R}^n$ (as the formulae are invariant only under the affine group, not under the group of diffeomorphisms). However both structures can be globalized, i.e.\ can be well-defined on an arbitrary manifold $M$ using a torsion-free connection on $M$, as neither of them involves graphs with vertices which are univalent or which have precisely one incoming edge and precisely one outgoing edge.
{\Large \section{\bf A new explicit formula for universal quantizations of Poisson structures} }
\subsection{The Kontsevich formula for a formality map} Let ${\mathit{Conf}}_{n,m}(\overline{{\mathbb H}})$ be the configuration space of injections $z:[m+n]\hookrightarrow \overline{{\mathbb H}}$ of the set $[m+n]$ into the closed upper-half plane such that the following conditions are satisfied: \begin{itemize} \item[(i)] for $1 \leq i \leq m$ one has $z_i:=z(i)\in {\mathbb R}={\partial} \overline{{\mathbb H}}$\, and $z_1<z_2<\ldots < z_m$; \item[(ii)] for $m+1 \leq i \leq m+n$ one has $z_i\in {\mathbb H}$. \end{itemize} The group ${\mathbb R}^{+}\ltimes {\mathbb R}$ acts on this configuration space freely via $z_i\rightarrow \lambda z_i +a$, $\lambda\in {\mathbb R}^+$, $a\in {\mathbb R}$, so that the quotient space $$ C_{n,m}({{\mathbb H}}):= \frac{{\mathit{Conf}}_{n,m}(\overline{{\mathbb H}})}{ {\mathbb R}^{+}\ltimes {\mathbb R}}, \ \ \ \ \ \ 2n+m\geq 2, $$ is a $(2n+m-2)$-dimensional manifold. Maxim Kontsevich constructed in \cite{Ko} its compactification $\overline{C}_{n,m}({{\mathbb H}})$ as a smooth manifold with corners, and used it to construct an explicit $\mathcal{H}\mathit{olie}_2$ quasi-isomorphism of dg Lie algebras (for any $n\in {\mathbb N})$, $$
{\mathcal F}^K: \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,\ ]_S\right) \longrightarrow \left(C^\bullet({\mathcal O}_{{\mathbb R}^n}, {\mathcal O}_{{\mathbb R}^n})[1], d_H,\ \ [\ ,\ ]_{\mathrm{G}}\right)
$$ where $(C^\bullet({\mathcal O}_{{\mathbb R}^n}, {\mathcal O}_{{\mathbb R}^n})[1], d_H)$ is the (degree shifted) Hochschild complex of the graded commutative algebra ${\mathcal O}_{{\mathbb R}^n}={\mathbb K}[[x_1,\ldots,x_n]]$ and $[\ ,\ ]_G$ are the Gerstenhaber brackets. This quasi-isomorphism \begin{equation}\label{4: F_k,l} {\mathcal F}^K=\left\{ {\mathcal F}_{k,l}^K: \otimes^k{\mathcal O}_{{\mathbb R}^n} \bigotimes \otimes^l {\mathcal T}_{poly}({\mathbb R}^n)\longrightarrow {\mathcal O}_{{\mathbb R}^n}\right\}_{2l+k\geq 2} \end{equation} is given explicitly by $$ {\mathcal F}_{k,l}^K=\sum_{\Gamma\in G_{k+l, 2l+k-2}} \left(\int_{\overline{C}_{l,k}({{\mathbb H}})} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\nu}^*_e\left(\omega_H\right) \right) \Phi_\Gamma $$ where \begin{itemize} \item $G_{k+l, 2l+k-2}$ is the set of directed graphs with $k+l$ numbered vertices and $2l+k-2$ edges such that the vertices with labels in the range from $1$ to $k$ have no outgoing edges, and for any $\Gamma\in G_{k+l, 2l+k-2}$ the associated operator $\Phi_\Gamma:\otimes^k{\mathcal O}_{{\mathbb R}^n} \bigotimes \otimes^l {\mathcal T}_{poly}({\mathbb R}^n)\longrightarrow {\mathcal O}_{{\mathbb R}^n}$ is given explicitly in \cite{Ko};
\item for an edge $e\in \Gamma$ connecting a vertex with label $i$ to the vertex labelled $j$ $$ \nu_e: \overline{C}_{l,k}({{\mathbb H}}) \rightarrow \overline{C}_{2,0}({{\mathbb H}}) $$ is the map forgetting all the points in the configuration space except $z_i$ and $z_j$; \item $\omega_H$ is a smooth 1-form on $\overline{C}_{2,0}({{\mathbb H}})$ given explicitly by $$ \omega_H(z_i,z_j)=\frac{1}{2\pi} dArg\frac{z_i-z_j}{\overline{z}_i - z_j} $$ \end{itemize}
\subsection{A new explicit formula for the formality map} Note that the 1-form (cf.\ (\ref{3: omega_g propagator})) $$
\omega_g(z_i,z_j)=g\left(\frac{\overline{z}_j-\overline{z}_i}{|z_i-z_j|}\right) dArg(\bar{z}_j -\bar{z}_i) $$ is well defined on $\overline{C}_{2,0}({{\mathbb H}})$ so that it makes sense to consider a collection of maps $\bar{{\mathcal F}}=\{\bar{{\mathcal F}}_{k,l}\}_{2l+k\geq 2}$ as in (\ref{4: F_k,l}) with \begin{equation}\label{4: F_k,l from_g} \bar{{\mathcal F}}_{k,l}:=\sum_{\Gamma\in G_{k+l, 2l+k-2}} \left(\int_{\overline{C}_{l,k}(\overline{{\mathbb H}})} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\nu}^*_e\left(\omega_g\right) \right) \Phi_\Gamma. \end{equation} The propagator $\omega_g$ does {\em not}\, satisfy Kontsevich's Vanishing Lemma 6.4 in \cite{Ko} so that many graphs $\Gamma$ have non-trivial weights on the strata corresponding to groups of points collapsing to a point inside ${\mathbb H}$; however all such graphs $\Gamma$ are easy to describe --- they are precisely the ones which generate the Kontsevich-Shoikhet $\mathcal{H}\mathit{olie}_2$ structure $\{[\ ,...,\ ]_{2p}\}_{p\geq 1}$ in ${\mathcal T}_{poly}({\mathbb R}^n)$ so that Kontsevich's arguments lead us to the following
\subsubsection{\bf Proposition \cite{B}} {\em The formulae ({\ref{4: F_k,l from_g}}) provide us with an explicit $\mathcal{H}\mathit{olie}_2$ quasi-isomorphism of $\mathcal{H}\mathit{olie}_2$ algebras \begin{equation}\label{4: F quasi-iso from KS to Hochshield} \bar{{\mathcal F}}: \left({\mathcal T}_{poly}({\mathbb R}^n), \{[\ ,...,\ ]_{2p}\}_{p\geq 1}\right) \longrightarrow \left(C^\bullet({\mathcal O}_{{\mathbb R}^n}, {\mathcal O}_{{\mathbb R}^n})[1], d_H,\ \ [\ ,\ ]_{\mathrm{G}}\right). \end{equation} Moreover, this quasi-isomorphism remains well-defined in infinite dimensions, i.e.\ in the limit $n\rightarrow +\infty$.}
\begin{proof} It remains to show the last claim about the limit $n\rightarrow +\infty$. However it is obvious, as the only graphs $\Gamma$ which can give a non-trivial contribution to the formula (\ref{4: F_k,l from_g}) are {\em oriented}\, graphs, i.e.\ the ones which have no closed paths of directed edges. \end{proof}
The above formulae are transcendental, i.e.\ they involve integration over configuration spaces. However this $\mathcal{H}\mathit{olie}_2$ quasi-isomorphism can also be constructed by a trivial (in the sense of being independent of the choice of an associator) induction \cite{Wi2,B}.
\subsubsection{\bf Theorem} \label{4: Theorem on uniquenes of KS quantizations} {\em For any $n$ (including the limit $n\rightarrow +\infty$) there is, up to homotopy equivalence, a unique $\mathcal{H}\mathit{olie}_2$ quasi-isomorphism of $\mathcal{H}\mathit{olie}_2$ algebras as in (\ref{4: F quasi-iso from KS to Hochshield}).}
We refer to \cite{Wi2} and \cite{B} for two different proofs of this Theorem.
Now we can assemble the previous results into a new proof of the Kontsevich formality theorem which also gives us a new explicit formula for such a formality map ({\em not}\, involving 2-dimensional hyperbolic geometry).
\subsubsection{\bf Kontsevich Formality Theorem} {\em For any finite natural number $n$ there is a $\mathcal{H}\mathit{olie}_2$ quasi-isomorphism of dg Lie algebras} $$
{\mathcal F}: \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,\ ]_S\right) \longrightarrow \left(C^\bullet({\mathcal O}_{{\mathbb R}^n}, {\mathcal O}_{{\mathbb R}^n})[1], d_H,\ \ [\ ,\ ]_{\mathrm{G}}\right)
$$ \begin{proof} Let $g$ be an arbitrary smooth function on the circle $S^1$ with compact support in the upper half of $S^1$ and normalized so that $\int_{S^1} g \mathrm{Vol_{S^1}}=1$. Then there is an associated $\mathcal{H}\mathit{olie}_2$ isomorphism of $\mathcal{H}\mathit{olie}_2$ algebras $$ F: \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,\ ]_S\right) \longrightarrow \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,...,\ ]_{2p}, p\geq 1, \right) $$ given explicitly by formulae (\ref{3: F=F_k for morphism}), and a quasi-isomorphism of $\mathcal{H}\mathit{olie}_2$ algebras (\ref{4: F quasi-iso from KS to Hochshield}) given by explicit formulae (\ref{4: F_k,l from_g}). Hence we obtain the required $\mathcal{H}\mathit{olie}_2$ quasi-isomorphism as the composition \begin{equation}\label{4: composition cF} {\mathcal F}: \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,\ ]_S\right) \stackrel{F}{\longrightarrow} \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,...,\ ]_{2p}, p\geq 1, \right) \stackrel{\bar{{\mathcal F}}}{\longrightarrow} \left(C^\bullet({\mathcal O}_{{\mathbb R}^n}, {\mathcal O}_{{\mathbb R}^n})[1], d_H,\ \ [\ ,\ ]_{\mathrm{G}}\right) \end{equation} which is also given by explicit formulae with weights obtained from integrations on two different families of configuration spaces. \end{proof}
It was proven in \cite{Do,Wi3} that the set of homotopy classes of universal formality maps $\{{\mathcal F}\}$ can be identified with the set of Drinfeld associators, i.e.\ it is a torsor over the Grothendieck-Teichm\"uller group $GRT$. It follows from Theorem {\ref{3: Theorem on KS iso for A_d^n}} for $d=2$ that every such quasi-isomorphism can be split as the composition (\ref{4: composition cF}) with, by Theorem {\ref{4: Theorem on uniquenes of KS quantizations}}, the map $\bar{{\mathcal F}}$ being unique (up to homotopy). Hence we obtain the following result.
\subsubsection{\bf Corollary}\label{4: Corollary on Holie_2 iso and Drinfeld ass} {\em The set of homotopy classes of universal $\mathcal{H}\mathit{olie}_2$ isomorphisms $$ F: \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,\ ]_S\right) \longrightarrow \left({\mathcal T}_{poly}({\mathbb R}^n), [\ ,...,\ ]_{2p}, p\geq 1\right), \ \ \ \forall\ n\in {\mathbb N}, $$ can be identified with the set of Drinfeld associators, i.e.\ it is a torsor over the Grothendieck-Teichm\"uller group $GRT_1$.}
We conclude that a construction of a non-commutative associative star product in ${\mathcal O}_{{\mathbb R}^n}$ out of an arbitrary ordinary Poisson structure $\pi$ can be split into two steps: \begin{itemize} \item[\sc Step 1] Associate to $\pi$ a quantizable Poisson structure $\pi^\diamond$. This step is the most non-trivial one and requires a choice of an associator; it can be given by an explicit formula (\ref{3: pi-diamond formula}).
\item[\sc Step 2] Construct a star product in ${\mathcal O}_{{\mathbb R}^n}$ using the unique (up to homotopy) quantization formulae (\ref{4: F_k,l from_g}). \end{itemize}
We shall use a similar procedure below to obtain explicit and relatively simple formulae for the universal deformation quantization of arbitrary finite-dimensional Lie bialgebras.
{\Large \section{\bf Props governing associative bialgebras, Lie bialgebras\\ and the formality maps} }
\subsection{Prop of associative bialgebras and its minimal resolution.} A prop $\mathcal{A}\mathit{ssb}$ governing associative bialgebras is the quotient, $$ \mathcal{A}\mathit{ssb}:= {{\mathcal F} ree\langle A_0 \rangle}/(R) $$ of the free prop ${\mathcal F} ree\langle A_0 \rangle$ generated
by an ${\mathbb S}$-bimodule $A_0=\{A_0(m,n)\}$\footnote{Here and everywhere all internal edges and legs in the graphical representation of an element of a prop are assumed to be implicitly oriented from the bottom of a graph to its top.}, \[ A_0(m,n):=\left\{ \begin{array}{rr} {\mathbb K}[{\mathbb S}_2]\otimes {\mbox{1 \hskip -7pt 1}}_1\equiv\mbox{span}\left\langle \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{},
\end{xy} \, ,\, \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^1}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^2}**@{},
\end{xy}
\right\rangle & \mbox{if}\ m=2, n=1,
\\ {\mbox{1 \hskip -7pt 1}}_1\otimes {\mathbb K}[{\mathbb S}_2]\equiv \mbox{span}\left\langle \begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy} \, ,\, \begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^1}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^2}**@{}, \end{xy} \right\rangle \ & \mbox{if}\ m=1, n=2,
\\ 0 & \mbox{otherwise} \end{array} \right. \] modulo the ideal generated by relations \begin{equation}\label{2: bialgebra relations} R:\left\{ \begin{array}{c} \begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{.},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{.},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{.},
<-2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{.},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{.},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{},
\end{xy}\end{array} \ - \ \begin{array}{c} \begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{.},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{.},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{.},
<2.3mm,2.3mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{.},
<2.8mm,2.9mm>*{};<4.6mm,4.9mm>*{}**@{.},
<0.49mm,0.49mm>*{};<-2.7mm,2.3mm>*{^1}**@{},
<-1.8mm,2.8mm>*{};<0mm,5.3mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<5.1mm,5.3mm>*{^3}**@{},
\end{xy}\end{array}=0, \ \ \ \ \
\begin{array}{c}\begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{.},
<-2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{.},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}\end{array} \ - \
\begin{array}{c}\begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{.},
<2.4mm,-2.4mm>*{\circ};<-2.4mm,-2.4mm>*{}**@{},
<2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{.},
<2.8mm,-2.9mm>*{};<4.7mm,-4.9mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<-3mm,-4.0mm>*{^1}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<5.2mm,-6.7mm>*{^3}**@{},
\end{xy}\end{array}=0,\ \ \ \ \ \
\begin{array}{c} \begin{xy}
<0mm,2.47mm>*{};<0mm,-0.5mm>*{}**@{.},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{.},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{.},
<0mm,3mm>*{\circ};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <0mm,-0.8mm>*{};<-2.2mm,-3.5mm>*{}**@{.},
<0mm,-0.8mm>*{};<2.2mm,-3.5mm>*{}**@{.},
<0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{},
<-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{},
<0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{},
<0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy}\end{array} \ - \ \begin{array}{c}\begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{.},
<-0.5mm,0.5mm>*{};<-3mm,2mm>*{}**@{.},
<-3mm,2mm>*{};<0mm,4mm>*{}**@{.},
<0mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<0mm,7.4mm>*{}**@{.}, <0mm,0mm>*{};<2.2mm,1.5mm>*{}**@{.},
<6mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<6mm,4mm>*{};<3.8mm,2.5mm>*{}**@{.},
<6mm,4mm>*{};<6mm,7.4mm>*{}**@{.},
<6mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<6mm,0mm>*{}**@{.}, <6mm,4mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<6mm,-3mm>*{}**@{.},
<-1.8mm,2.8mm>*{};<0mm,7.8mm>*{^1}**@{},
<-2.8mm,2.9mm>*{};<0mm,-4.3mm>*{_1}**@{}, <-1.8mm,2.8mm>*{};<6mm,7.8mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<6mm,-4.3mm>*{_2}**@{},
\end{xy} \end{array}=0 \right. \end{equation}
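For the reader's orientation: in any representation of $\mathcal{A}\mathit{ssb}$ in a graded vector space $A$, with the $(1,2)$-generators acting as a product $\cdot$ and the $(2,1)$-generators as a coproduct $\Delta$, the three families of relations above say precisely that
$$
(\Delta\otimes \mathrm{id})\circ \Delta=(\mathrm{id}\otimes \Delta)\circ \Delta, \ \ \ \ (a\cdot b)\cdot c=a\cdot(b\cdot c), \ \ \ \ \Delta(a\cdot b)=\Delta(a)\cdot \Delta(b), \ \ \ \forall\ a,b,c\in A,
$$
where in the last identity $A\otimes A$ is equipped with the standard (graded) product; that is, they are the usual coassociativity, associativity and compatibility axioms of an associative bialgebra (without unit and counit).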
Note that the relations are not quadratic (it is proven, however, in \cite{MV} that $\mathcal{A}\mathit{ssb}$ is {\em homotopy Koszul}). A minimal resolution, $(\mathcal{A}\mathit{ssb}_\infty,\delta)$, of $\mathcal{A}\mathit{ssb}$ exists \cite{Ma1} and is generated by the ${\mathbb S}$-bimodule $ A=\{ A(m,n)\}_{m,n\geq 1, m+n\geq 3}$, \[
A(m,n):= {\mathbb K}[{\mathbb S}_m]\otimes {\mathbb K}[{\mathbb S}_n][m+n-3]=\mbox{span}\left\langle \begin{array}{c} \resizebox{19mm}{!}{\begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,0mm>*{};<-8mm,5mm>*{}**@{.},
<0mm,0mm>*{};<-4.5mm,5mm>*{}**@{.},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,5mm>*{}**@{.},
<0mm,0mm>*{};<8mm,5mm>*{}**@{.},
<0mm,0mm>*{};<-10.5mm,5.9mm>*{^{\tau(1)}}**@{},
<0mm,0mm>*{};<-4mm,5.9mm>*{^{\tau(2)}}**@{},
<0mm,0mm>*{};<10.0mm,5.9mm>*{^{\tau(m)}}**@{},
<0mm,0mm>*{};<-8mm,-5mm>*{}**@{.},
<0mm,0mm>*{};<-4.5mm,-5mm>*{}**@{.},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0mm,0mm>*{};<4.5mm,-5mm>*{}**@{.},
<0mm,0mm>*{};<8mm,-5mm>*{}**@{.},
<0mm,0mm>*{};<-10.5mm,-6.9mm>*{^{\sigma(1)}}**@{},
<0mm,0mm>*{};<-4mm,-6.9mm>*{^{\sigma(2)}}**@{},
<0mm,0mm>*{};<10.0mm,-6.9mm>*{^{\sigma(n)}}**@{},
\end{xy}}\end{array} \right\rangle_{\tau\in {\mathbb S}_m\atop \sigma\in {\mathbb S}_n}. \] The differential $\delta$ in $\mathcal{A}\mathit{ssb}_\infty$ is not quadratic, and its explicit value on a generic $(m,n)$-corolla is not known at present, but we can (and will) assume from now on that $\delta$ preserves the {\em path grading}\, of $\mathcal{A}\mathit{ssb}_\infty$ (which associates to any decorated graph $G$ from $\mathcal{A}\mathit{ssb}_\infty$ the total number of directed paths connecting input legs of $G$ to the output ones).
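For example, the generating $(m,n)$-corolla shown above has path grading
$$
m\cdot n,
$$
as each of its $n$ input legs is connected through the unique vertex to each of the $m$ output legs; the assumption on $\delta$ then says that every graph appearing in $\delta$ of this corolla has again exactly $mn$ directed paths from input legs to output legs.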
Let $V$ be a ${\mathbb Z}$-graded vector space over a field ${\mathbb K}$ of characteristic zero. The associated symmetric tensor algebra ${\mathcal O}_V:= {\odot^{\bullet}} V= \oplus_{n\geq 0} \odot^n V$ comes equipped with the standard graded commutative and co-commutative bialgebra structure, i.e.\ there is a non-trivial representation, \begin{equation}\label{2: rho_0} \rho_0: \mathcal{A}\mathit{ssb} \longrightarrow {\mathcal E} nd_{{\mathcal O}_V}. \end{equation} According to \cite{MV}, the (extended) deformation complex $$ C_{GS}^\bullet\left({\mathcal O}_V,{\mathcal O}_V\right):=\mathsf{Def}\left(\mathcal{A}\mathit{ssb} \stackrel{\rho_0}{\longrightarrow} {\mathcal E} nd_{{\mathcal O}_V}\right)\simeq \prod_{m,n\geq 1}{\mathrm H\mathrm o\mathrm m}({\mathcal O}_V^{\otimes m}, {\mathcal O}_V^{\otimes n})[2-m-n] $$ and its polydifferential subcomplex $C_{poly}^\bullet\left({\mathcal O}_V,{\mathcal O}_V\right)$ come equipped with a ${\mathcal L} ie_\infty$ algebra structure, $ \left\{\mu_n: \wedge^n C_{GS}^\bullet({\mathcal O}_V,{\mathcal O}_V)\longrightarrow C_{GS}^\bullet({\mathcal O}_V,{\mathcal O}_V)[2-n]\right\}_{n\geq 1}, $ such that $\mu_1$ coincides precisely with the Gerstenhaber-Schack differential \cite{GS}. According to \cite{GS}, the cohomology of the complex $(C_{GS}^\bullet({\mathcal O}_V,{\mathcal O}_V), \mu_1)$ is precisely the deformation complex $$ {\mathfrak g}_V:= \mathsf{Def}(\mathcal{L}\mathit{ieb}\stackrel{0}{\longrightarrow} {\mathcal E} nd_V) $$ controlling deformations of the zero morphism $0: \mathcal{L}\mathit{ieb}\rightarrow {\mathcal E} nd_V$, where $\mathcal{L}\mathit{ieb}$ is the prop of Lie bialgebras which we discuss below.
\subsection{Prop governing Lie bialgebras and its minimal resolution} The prop $\mathcal{L}\mathit{ieb}$ is defined \cite{D} as a quotient, $$ \mathcal{L}\mathit{ieb}:= {{\mathcal F} ree \langle E_0\rangle}/(R) $$ of the free prop generated by an ${\mathbb S}$-bimodule $E_0=\{E_0(m,n)\}$, \begin{equation}\label{5: module E_0 generating Lieb} E_0(m,n):=\left\{ \begin{array}{rr} sgn_2\otimes {\mbox{1 \hskip -7pt 1}}_1\equiv\mbox{span}\left\langle \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{},
\end{xy} =- \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^1}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^2}**@{},
\end{xy}
\right\rangle & \mbox{if}\ m=2, n=1,
\\ {\mbox{1 \hskip -7pt 1}}_1\otimes sgn_2\equiv \mbox{span}\left\langle \begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}=- \begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^1}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^2}**@{}, \end{xy} \right\rangle \ & \mbox{if}\ m=1, n=2,
\\ 0 & \mbox{otherwise} \end{array} \right. \end{equation} modulo the ideal generated by the following relations \begin{equation}\label{3: LieB relations} R:\left\{ \begin{array}{l} \begin{array}{c}\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bullet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{},
\end{xy} \ + \ \begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bullet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^2}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^1}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^3}**@{},
\end{xy} \ + \ \begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bullet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^1}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^3}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^2}**@{},
\end{xy}\end{array} =0
\ \ \ \ , \ \ \
\begin{array}{c}\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy} \ + \
\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^2}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^1}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^3}**@{},
\end{xy} \ + \
\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^1}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^3}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^2}**@{},
\end{xy}\end{array} =0
\\
\begin{xy}
<0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-},
<0mm,3mm>*{\bullet};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\bullet};<0mm,-0.8mm>*{}**@{}, <-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-},
<0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{},
<-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{},
<0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{},
<0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy} \ - \ \begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy} \ + \ \begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy} \ - \ \begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy} \ + \ \begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy}=0 \end{array} \right. \end{equation} Its minimal resolution,
${\LB}^{\mathrm{min}}_\infty$, is a dg free prop, $$ {\LB}^{\mathrm{min}}_\infty={\mathcal F} ree \langle E\rangle, $$ generated by the ${\mathbb S}$--bimodule $ E=\{ E(m,n)\}_{m,n\geq 1, m+n\geq 3}$, \begin{equation}\label{5: generators of Lieb_infty}
E(m,n):= sgn_m\otimes sgn_n[m+n-3]=\mbox{span}\left\langle \begin{array}{c}\resizebox{14mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array} \right\rangle, \end{equation} and with the differential given on generating corollas by \cite{MaVo,Va} \begin{equation}\label{3: differential in LieBinfty} \delta \begin{array}{c}\resizebox{14mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array} \ \ = \ \
\sum_{[1,\ldots,m]=I_1\sqcup I_2\atop
{|I_1|\geq 0, |I_2|\geq 1}}
\sum_{[1,\ldots,n]=J_1\sqcup J_2\atop
{|J_1|\geq 1, |J_2|\geq 1} }\hspace{0mm}
(-1)^{\sigma(I_1\sqcup I_2)+ |I_1||I_2|+|J_1||J_2|} \begin{array}{c}\resizebox{20mm}{!}{ \begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<0mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<12.4mm,4.8mm>*{}**@{-},
<0mm,0mm>*{};<-2mm,7mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ }}**@{},
<0mm,0mm>*{};<-2mm,9mm>*{^{I_1}}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<0mm,-7mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \
}}**@{},
<0mm,0mm>*{};<0mm,-10.6mm>*{_{J_1}}**@{},
<13mm,5mm>*{};<13mm,5mm>*{\bullet}**@{},
<12.6mm,5.44mm>*{};<5mm,10mm>*{}**@{-},
<12.6mm,5.7mm>*{};<8.5mm,10mm>*{}**@{-},
<13mm,5mm>*{};<13mm,10mm>*{\ldots}**@{},
<13.4mm,5.7mm>*{};<16.5mm,10mm>*{}**@{-},
<13.6mm,5.44mm>*{};<20mm,10mm>*{}**@{-},
<13mm,5mm>*{};<13mm,12mm>*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }}**@{},
<13mm,5mm>*{};<13mm,14mm>*{^{I_2}}**@{},
<12.4mm,4.3mm>*{};<8mm,0mm>*{}**@{-},
<12.6mm,4.3mm>*{};<12mm,0mm>*{\ldots}**@{},
<13.4mm,4.5mm>*{};<16.5mm,0mm>*{}**@{-},
<13.6mm,4.8mm>*{};<20mm,0mm>*{}**@{-},
<13mm,5mm>*{};<14.3mm,-2mm>*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ }}**@{},
<13mm,5mm>*{};<14.3mm,-4.5mm>*{_{J_2}}**@{},
\end{xy}}\end{array} \end{equation} where $\sigma(I_1\sqcup I_2)$ and $\sigma(J_1\sqcup J_2)$ are the signs of the shuffles $[1,\ldots,m]\rightarrow I_1\sqcup I_2$ and, respectively, $[1,\ldots,n]\rightarrow J_1\sqcup J_2$.
Let $V$ be a dg vector space. According to the general theory \cite{MV}, there is a one-to-one correspondence between the set of representations, $\{ \rho: {\LB}^{\mathrm{min}}_\infty \rightarrow {\mathcal E} nd_V\}$, and the set of Maurer-Cartan elements in the dg Lie algebra \begin{equation}\label{2: fl_V'} \mathsf{Def}({\LB}^{\mathrm{min}}_\infty \stackrel{0}{\rightarrow} {\mathcal E} nd_V)\simeq \prod_{m,n\geq 1} \wedge^mV^*\otimes \wedge^n V[2-m-n]= \prod_{m,n\geq 1} \odot^m(V^*[-1])\otimes \odot^n(V[-1]) [2] =: {\mathfrak g}_V \end{equation} controlling deformations of the zero map $\mathcal{L}\mathit{ieb}_\infty \stackrel{0}{\rightarrow} {\mathcal E} nd_V$. The differential in ${\mathfrak g}_V$ is induced by the differential in $V$, while the Lie bracket can be described explicitly as follows. First one notices that the completed graded vector space $$ {\mathfrak g}_V[-2]= \prod_{m,n\geq 1} \odot^m(V^*[-1])\otimes \odot^n(V[-1])=\widehat{\odot^{\bullet \geq 1}}\left( V^*[-1]\oplus V[-1]\right) $$ is naturally a 3-algebra with degree $-2$ Lie brackets, $\{\ ,\ \}$, given on generators by \[ \{sv, sw\}=0,\ \ \{s\alpha, s\beta\}=0, \ \ \{s\alpha, sv\}=\langle\alpha,v\rangle, \ \ \forall\ v,w\in V,\ \alpha,\beta\in V^*, \] where $s: V\rightarrow V[-1]$ and $s: V^*\rightarrow V^*[-1]$ are the natural isomorphisms. Maurer-Cartan elements in ${\mathfrak g}_V$, that is, degree 3 elements $\nu$ satisfying the equation $$ \{\nu,\nu\}=0, $$
are in 1-1 correspondence with representations $\nu: \mathcal{L}\mathit{ieb}_\infty \rightarrow {\mathcal E} nd_V$. Such elements satisfying the condition $$ \nu \in \odot^2(V^*[-1])\otimes V[-1] \ \oplus\ V^*[-1]\otimes \odot^2(V[-1]) $$ are precisely Lie bialgebra structures in $V$.
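Spelling this out (in a form equivalent to the classical definition of a Lie bialgebra), such an element decomposes as $\nu=\nu_{[\, ,\,]}+\nu_\triangle$ with $\nu_{[\, ,\,]}\in \odot^2(V^*[-1])\otimes V[-1]$ encoding a graded Lie bracket $[\ ,\ ]:\wedge^2 V\rightarrow V$ and $\nu_\triangle\in V^*[-1]\otimes \odot^2(V[-1])$ encoding a Lie cobracket $\triangle: V\rightarrow \wedge^2 V$; as the three summands of $\{\nu,\nu\}$ land in pairwise different direct summands of ${\mathfrak g}_V$, the single equation $\{\nu,\nu\}=0$ is equivalent to the system
$$
\{\nu_{[\, ,\,]},\nu_{[\, ,\,]}\}=0, \ \ \ \ \{\nu_\triangle,\nu_\triangle\}=0, \ \ \ \ \{\nu_{[\, ,\,]},\nu_\triangle\}=0,
$$
that is, to the Jacobi identity for the bracket, the co-Jacobi identity for the cobracket, and the Drinfeld compatibility condition between the two.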
The properads $\mathcal{L}\mathit{ieb}$ and ${\LB}^{\mathrm{min}}_\infty$ admit filtrations by the number of vertices and we denote by $\widehat{\LB}$ and $\widehat{\LB}^{\mathrm{min}}_\infty$ their completions with respect to these filtrations.
\subsection{Formality maps as morphisms of props} We introduced in \cite{MW2} an endofunctor ${\mathcal D}$ in the category of augmented props with the property that for any representation of a prop ${\mathcal P}$ in a vector space $V$ the associated prop ${\mathcal D}{\mathcal P}$ admits an induced representation on the graded commutative algebra $\odot^\bullet V$ given in terms of polydifferential operators. Moreover, we proved that
\begin{itemize}
\item[(i)] For any choice of a Drinfeld associator ${\mathfrak A}$ there is an associated highly non-trivial (in the sense that it is non-zero on every generator of $\mathcal{A}\mathit{ssb}_\infty$, see formula (\ref{5: Boundary cond for formality map}) below) morphism of dg props, \begin{equation}\label{1: formality map F_A} F_{\mathfrak A}: \mathcal{A}\mathit{ssb}_\infty \longrightarrow {\mathcal D}\widehat{\LB}^{\mathrm{min}}_\infty, \end{equation} where $\mathcal{A}\mathit{ssb}_\infty$ stands for a minimal resolution of the prop of associative bialgebras, and the construction of the polydifferential prop ${\mathcal D}\widehat{\LB}^{\mathrm{min}}_\infty$ out of $\widehat{\LB}^{\mathrm{min}}_\infty$ is explained below.
\item[(ii)] For any graded vector space $V$, each morphism $F_{\mathfrak A}$ induces a ${\mathcal L} ie_\infty$ quasi-isomorphism (called a {\em formality map}) between the dg ${\mathcal L} ie_\infty$ algebra $$ C_{GS}^\bullet({\mathcal O}_V,{\mathcal O}_V)= \mathsf{Def}(\mathcal{A}\mathit{ssb}\stackrel{\rho_0}{\longrightarrow} {\mathcal E} nd_{{\mathcal O}_V}) $$ controlling deformations of the standard graded commutative and co-commutative bialgebra structure $\rho_0$ in ${\mathcal O}_V$, and the Lie algebra $$ {\mathfrak g}_V=\mathsf{Def}(\mathcal{L}\mathit{ieb}\stackrel{0}{\longrightarrow} {\mathcal E} nd_V) $$ controlling deformations of the zero morphism $0: \mathcal{L}\mathit{ieb}\rightarrow {\mathcal E} nd_V$.
\item[(iii)] For any formality morphism $F_{\mathfrak A}$ there is a canonical morphism of complexes
$$
\mathsf{fGC}_3^{or} \longrightarrow \mathsf{Def}\left(\mathcal{A}\mathit{ssb}_\infty \stackrel{F_{\mathfrak A}}{\longrightarrow} {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty\right)
$$
which is a quasi-isomorphism up to one class corresponding to the standard rescaling automorphism
of the prop of Lie bialgebras $\mathcal{L}\mathit{ieb}$.
\item[(iv)]
The set of homotopy classes of universal formality maps as in (\ref{1: formality map F_A}) can be identified with the set of Drinfeld associators. In particular, the Grothendieck-Teichm\"uller group $GRT=GRT_1\rtimes {\mathbb K}^*$ acts faithfully and transitively on such universal formality maps. \end{itemize}
In the proof of item (i) in \cite{MW2} we used the Etingof-Kazhdan theorem \cite{EK} which says that any Lie bialgebra can be deformation quantized in the sense explained by Drinfeld in \cite{D}, and which can be reformulated in our language as a morphism of props $$ f_{\mathfrak A}: \mathcal{A}\mathit{ssb} \longrightarrow {\mathcal D} \widehat{\LB} $$ satisfying a certain non-triviality condition (see below). This morphism gives us universal quantizations of arbitrary, possibly infinite-dimensional, Lie bialgebras. If one is interested in the universal quantization of {\em finite-dimensional}\, Lie bialgebras only, then the above morphism should be replaced by a map $$ f^\circlearrowright:\mathcal{A}\mathit{ssb} \longrightarrow {\mathcal D} \widehat{\LB}^\circlearrowright $$ to the polydifferential extension of the {\em wheeled}\, closure $\widehat{\LB}^\circlearrowright$ (see \cite{MMS}) of the prop $\widehat{\LB}$. The morphism $f_{\mathfrak A}$ yields the morphism $f^\circlearrowright$ via the canonical injection ${\mathcal D} \widehat{\LB} \rightarrow {\mathcal D} \widehat{\LB}^\circlearrowright$, but not vice versa. In this paper we give a new proof of the Etingof-Kazhdan theorem for finite-dimensional Lie bialgebras by presenting an explicit formula for the morphism $f^\circlearrowright$ above. We also show that the morphism $f^\circlearrowright$ can be lifted by a trivial induction to a morphism of dg props \begin{equation} \label{5: F from Assb-infty to FLieb-wheeld} F^\circlearrowright: \mathcal{A}\mathit{ssb}_\infty \longrightarrow {\mathcal D} \widehat{\LB}^{\mathrm{min},\circlearrowright}_\infty \end{equation} satisfying the conditions
\begin{equation}\label{5: Boundary cond for formality map} \pi_1\circ F^\circlearrowright\left(\begin{array}{c}\resizebox{13mm}{!}{ \xy
(0,7)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,9)*{^m},
(0,3)*{^{...}},
(0,-3)*{_{...}},
(0,-7)*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,-9)*{_n},
(0,0)*{\circ}="0", (-7,5)*{}="u_1", (-4,5)*{}="u_2", (4,5)*{}="u_3", (7,5)*{}="u_4", (-7,-5)*{}="d_1", (-4,-5)*{}="d_2", (4,-5)*{}="d_3", (7,-5)*{}="d_4",
\ar @{.} "0";"u_1" <0pt> \ar @{.} "0";"u_2" <0pt> \ar @{.} "0";"u_3" <0pt> \ar @{.} "0";"u_4" <0pt> \ar @{.} "0";"d_1" <0pt> \ar @{.} "0";"d_2" <0pt> \ar @{.} "0";"d_3" <0pt> \ar @{.} "0";"d_4" <0pt> \endxy}\end{array}\right)= \lambda \begin{array}{c}\resizebox{16mm}{!}{\xy (0,7.5)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,9.5)*{^m},
(0,-7.5)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,-9.9)*{_n},
(-6,5)*{...},
(-6,-5)*{...},
(-3,5)*{\circ}="u1",
(-3,-5)*{\circ}="d1",
(-6,5)*{...},
(-6,-5)*{...},
(-9,5)*{\circ}="u2",
(-9,-5)*{\circ}="d2",
(3,5)*{\circ}="u3",
(3,-5)*{\circ}="d3",
(6,5)*{...},
(6,-5)*{...},
(9,5)*{\circ}="u4",
(9,-5)*{\circ}="d4",
(0,0)*{\bullet}="a",
\ar @{-} "d1";"a" <0pt> \ar @{-} "a";"u1" <0pt> \ar @{-} "d2";"a" <0pt> \ar @{-} "a";"u2" <0pt> \ar @{-} "d3";"a" <0pt> \ar @{-} "a";"u3" <0pt> \ar @{-} "d4";"a" <0pt> \ar @{-} "a";"u4" <0pt> \endxy}\end{array}\ \ \text{for some non-zero} \lambda\in {\mathbb R}, \end{equation} for all $m+n\geq 3$, $m,n\geq 1$; here $\pi_1$ is the projection to the
vector subspace in $\widehat{\LB}^{\mathrm{min},\circlearrowright}_\infty$ spanned by graphs with precisely one black vertex. Moreover, we conjecture an explicit formula for such an extension $F^\circlearrowright$.
Morphisms of dg props (\ref{5: F from Assb-infty to FLieb-wheeld}) satisfying the condition (\ref{5: Boundary cond for formality map}) can be called {\em formality morphisms in finite dimensions}\, as every such morphism gives rise to a quasi-isomorphism of ${\mathcal L} ie_\infty$-algebras introduced in item (ii) above, but only for {\em finite-dimensional}\, graded vector spaces $V$ (cf.
\cite{MW2}).
\subsection{Polydifferential functor}\label{5: functor D} We refer to \cite{MW2} for a detailed definition of the endofunctor ${\mathcal D}$. In this paper we apply this functor to the props $\widehat{\LB}$ and $\widehat{\LB}^{\mathrm{min}}_\infty$, their wheeled closures $\widehat{\LB}^\circlearrowright$, $\widehat{\LB}_\infty^{\mathrm{min}, \circlearrowright}$, and their quantized versions $\widehat{\LB}^{\mathrm{quant}}$ and $\widehat{\LB}_\infty^{\mathrm{quant}}$. It is enough to explain the action of ${\mathcal D}$ on the prop $\widehat{\LB}^{\mathrm{min}}_\infty$, the other cases being completely analogous.
Roughly speaking, ${\mathcal D}\widehat{\LB}^{\mathrm{min}}_\infty$ is spanned as a vector space by graphs from $\widehat{\LB}^{\mathrm{min}}_\infty$ whose input and output legs are labeled by {\em not necessarily different integers}; input legs labelled by the same integer $i$ we show as attached to a new white {\em in-vertex}\, to which we assign label $i$; the same procedure applies to output legs giving us new white {\em out-vertices}. Moreover, we allow these new white in-vertices and out-vertices with no legs attached. For example, $$ \begin{array}{c}\resizebox{8mm}{!}{ \xy (-3,0)*{_{_1}}, (3,0)*{_{_2}}, (0,8)*{^{^1}},
(0,7)*{\circ}="a", (-3,2)*{\circ}="b_1", (3,2)*{\circ}="b_2",
\endxy}\end{array}, \ \ \ \begin{array}{c}\resizebox{12mm}{!}{ \xy (-5,0)*{_{_1}}, (5,0)*{_{_2}}, (0,14)*{^{^1}},
(0,13)*{\circ}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2",
(-8,-2)*{}="c_1", (-2,-2)*{}="c_2", (2,-2)*{}="c_3", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \endxy} \end{array}\ \ , \ \ \ \begin{array}{c}\resizebox{10mm}{!}{ \xy
(-5,-8)*{^{_1}}, (5,-8)*{^{_2}}, (0,14)*{^{^1}},
(0,13)*{\circ}="0",
(0,7)*{\bullet}="a", (5,-5)*{\circ}="b_1", (-5,-5)*{\circ}="b_2", (-5,2)*{\bullet}="c", (5,2)*{}="o",
(-5,2)*{}="c_1", (2,2)*{}="c_2", (-2,2)*{}="c_3", \ar @{-} "a";"0" <0pt> \ar @{-} "o";"b_1" <0pt> \ar @{-} "c";"b_2" <0pt> \ar @{-} "c";"b_1" <0pt> \ar @{-} "a";"c" <0pt> \ar @{-} "a";"o" <0pt> \endxy} \end{array} \ \ \in {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(1,2), \ \ \ \ \ \begin{array}{c}\resizebox{22mm}{!}{ \xy (-5,-5)*{_{_1}}, (5,-5)*{_{_2}}, (-8,16.5)*{^{^1}}, (0,16.5)*{^{^2}}, (8,16.5)*{^{^3}},
(0,15)*{\circ}="u", (-8,15)*{\circ}="uL", (8,15)*{\circ}="uR",
(0,7)*{\bullet}="a", (-10,7)*{\bullet}="L", (12,7)*{\bullet}="R", (-5,2)*{\bullet}="b_1", (5,2)*{\bullet}="b_2",
(-5,-3)*{\circ}="c_1", (5,-3)*{\circ}="c_3", \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_3" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "R";"c_3" <0pt> \ar @{-} "b_1";"L" <0pt> \ar @{-} "u";"L" <0pt> \ar @{-} "b_2";"R" <0pt> \ar @{-} "R";"u" <0pt> \ar @{-} "a";"u" <0pt> \ar @{-} "a";"uL" <0pt> \ar @{-} "a";"uL" <0pt> \ar @{-} "L";"uL" <0pt> \endxy}\end{array}\in {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(3,2). $$ The linear span of graphs obtained in this way from elements of $\widehat{\LB}^{\mathrm{min}}_\infty$ with $n$ in-vertices and $m$-out vertices is denoted by ${\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(m,n)$; it is clearly an ${\mathbb S}_m^{op}\times {\mathbb S}_n$ module (with elements of the permutation groups acting by relabelling of the in- and out vertices). The ${\mathbb S}$-bimodule ${\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(m,n)$ has a natural basis $\{{\mathcal G}_{k;m,n}\}$ where ${\mathcal G}_{k;m,n}$ is the set of oriented graphs with $n$ labelled white in-vertices, $m$ labelled white out-vertices and $k$ unlabeled internal (black) vertices and with no edges connecting in-vertices directly to out-vertices. Any graph $\Gamma\in {\mathcal G}_{k;m,n}$ has its set of edges $E(\Gamma)$ decomposed canonically into the disjoint union $$ E(\Gamma)=E_{int}(\Gamma) \coprod E_{in}(\Gamma)\coprod E_{out}(\Gamma) $$ where $E_{int}(\Gamma)$ is the subset of edges connecting two internal vertices, $E_{in}(\Gamma)$ is the subset of edges connecting in-vertices to internal ones, and $E_{out}(\Gamma)$ is the subset of edges connecting internal vertices to out-vertices. As a ${\mathbb Z}$-graded vector space ${\mathcal D} \widehat{\LB}_\infty(m,n)$ is defined by $$ {\mathcal D} \widehat{\LB}_\infty(m,n)=\prod_{k\geq 0}{\mathbb K}\langle{\mathcal G}_{k;m,n}^{or}\rangle $$ where a graph $\Gamma\in {\mathcal G}_{k;m,n}$ is assigned the following homological degree $$
|\Gamma|=3|V_{int}(\Gamma)| -2|E_{int}(\Gamma)| -|E_{in}(\Gamma)|-|E_{out}(\Gamma)|. $$
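As a direct check of this degree formula on simple graphs: a graph $\Gamma_1$ with a single internal vertex connected by two edges to in-vertices and by one edge to an out-vertex (the second of the $(1,2)$-examples pictured above), and a graph $\Gamma_2$ with two internal vertices joined by one internal edge and having two in-edges and two out-edges in total, have degrees
$$
|\Gamma_1|=3\cdot 1-2\cdot 0-2-1=0, \ \ \ \ \ |\Gamma_2|=3\cdot 2-2\cdot 1-2-2=0,
$$
so both encode degree zero polydifferential operators.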
The horizontal composition in ${\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty$ $$ \begin{array}{rccc} \boxtimes: & {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(m,n) \otimes {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(m',n') &\longrightarrow & {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(m+m',n+n')\\
& \Gamma\otimes \Gamma' & \longrightarrow & \Gamma\boxtimes \Gamma' \end{array} $$ is given just by taking the disjoint union of the graphs $\Gamma$ and $\Gamma'$ and relabelling in- and out-vertices of the graph $\Gamma'$ accordingly. The vertical composition, $$ \begin{array}{rccc} \circ: & {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(m,n) \otimes {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(n,l) &\longrightarrow & {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(m,l)\\
& \Gamma\otimes \Gamma' & \longrightarrow & \Gamma\circ\Gamma', \end{array} $$ is given by the following two step procedure: (a) erase all $n$ in-vertices of $\Gamma$ and all $n$ out-vertices of $\Gamma'$, (b) take a sum over all possible ways of attaching the hanging out-legs of $\Gamma$ to hanging in-legs of $\Gamma'$ (with the same numerical label) as well as to the out-vertices of $\Gamma'$, and also attaching the remaining in-legs of $\Gamma'$ to in-vertices of $\Gamma$ (see \S 2.2.2 in \cite{MW2} for more details). For example, a vertical composition of the following two graphs, $$ \begin{array}{rccc} \circ: & {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(2,1) \otimes {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(1,2) &\longrightarrow & {\mathcal D} \widehat{\LB}^{\mathrm{min}}_\infty(2,2)\\ & \begin{array}{c}\resizebox{7mm}{!}{ \xy (-5,0)*{_1}, (5,0)*{_2},
(0,13)*{\circ}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \endxy}\end{array} \otimes \begin{array}{c} \resizebox{7mm}{!}{\xy (-5,15)*{_1}, (5,15)*{_2},
(0,2)*{\circ}="0",
(0,8)*{\bullet}="a", (-5,13)*{\circ}="b_1", (5,13)*{\circ}="b_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \endxy} \end{array}
&\longrightarrow & \Gamma \end{array} $$ is given by the following sum $$ \Gamma= \begin{array}{c}\resizebox{7mm}{!}{\xy (-5,0)*{_1}, (5,0)*{_2}, (-5,20)*{^1}, (5,20)*{^2},
(0,13)*{\bullet}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "0";"u_1" <0pt> \ar @{-} "0";"u_2" <0pt> \endxy} \end{array}\ \ \ +\ \ \ \begin{array}{c} \resizebox{8mm}{!}{\xy (-5,0)*{_1}, (5,0)*{_2}, (-5,20)*{^1}, (5,20)*{^2},
(4,10)*{\bullet}="0",
(-4,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{-} "b_1";"0" <0pt> \ar @{-} "u_1";"a" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "0";"u_1" <0pt> \ar @{-} "0";"u_2" <0pt> \endxy} \end{array}\ \ \ +\ \ \ \begin{array}{c} \resizebox{8mm}{!}{\xy (-5,0)*{_1}, (5,0)*{_2}, (-5,20)*{^1}, (5,20)*{^2},
(4,10)*{\bullet}="0",
(-4,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{-} "b_2";"0" <0pt> \ar @{-} "u_1";"a" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "0";"u_1" <0pt> \ar @{-} "0";"u_2" <0pt> \endxy} \end{array}\ \ \ +\ \ \ \begin{array}{c} \resizebox{8mm}{!}{\xy (-5,0)*{_1}, (5,0)*{_2}, (-5,20)*{^1}, (5,20)*{^2},
(4,10)*{\bullet}="0",
(-4,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{-} "b_2";"0" <0pt> \ar @{-} "u_2";"a" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "0";"u_1" <0pt> \ar @{-} "0";"u_2" <0pt> \endxy} \end{array}\ \ \ +\ \ \ \begin{array}{c} \resizebox{8mm}{!}{\xy (-5,0)*{_1}, (5,0)*{_2}, (-5,20)*{^1}, (5,20)*{^2},
(4,10)*{\bullet}="0",
(-4,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{-} "b_1";"0" <0pt> \ar @{-} "u_2";"a" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "0";"u_1" <0pt> \ar @{-} "0";"u_2" <0pt> \endxy}\end{array} $$ The differential $\delta$ in ${\mathcal D} \mathcal{L}\mathit{ieb}_\infty$ acts only on black vertices and splits them as shown in (\ref{3: differential in LieBinfty}).
For any given representation $\nu: \mathcal{L}\mathit{ieb}_\infty^{\mathrm{min}}\rightarrow {\mathcal E} nd_V$, i.e.\ for any Maurer-Cartan element $\nu$ in the Lie algebra ${\mathfrak g}_V$, there is an associated representation $\rho^\nu: {\mathcal D} \mathcal{L}\mathit{ieb}_\infty \rightarrow {\mathcal E} nd_{{\mathcal O}_V}$ in ${\mathcal O}_V=\odot^\bullet V$ given in terms of polydifferential operators as explained in full detail in \S 5.4 of \cite{MW2}. If, for example, $V={\mathbb R}^n$ with the standard basis denoted by $(x_1, \ldots, x_n)$ (so that ${\mathcal O}_V={\mathbb K}[x_1, \ldots, x_n]$), and $\nu$ is a Lie bialgebra structure in $V$ with the structure constants for the Lie bracket and, respectively, Lie cobracket given by $$ [x_i,x_j]=:\sum_{k=1}^n C_{ij}^k x_k, \ \ \ \ \ \triangle(x_k)=\sum_{i,j=1}^n \Phi^{ij}_k x_i\wedge x_j $$ then one has $$ \begin{array}{rccc} \rho^\nu\left( \begin{array}{c}\resizebox{7mm}{!}{ \xy (-5,0)*{_1}, (5,0)*{_2},
(0,13)*{\circ}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \endxy}\end{array}\right): & {\mathcal O}_V\otimes {\mathcal O}_V & \longrightarrow {\mathcal O}_V\\ & f_1 \otimes f_2 & \longrightarrow & \displaystyle\sum_{i,j,k=1}^n x_k C^k_{ij} \frac{{\partial} f_1}{{\partial} x_i} \frac{{\partial} f_2}{{\partial} x_j} \end{array} $$ $$ \begin{array}{rccc} \rho^\nu\left( \begin{array}{c} \resizebox{7mm}{!}{ \xy (-5,0)*{_1}, (5,0)*{_2}, (-5,20)*{^1}, (5,20)*{^2},
(0,13)*{\bullet}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "0";"u_1" <0pt> \ar @{-} "0";"u_2" <0pt> \endxy} \end{array} \right): & {\mathcal O}_V & \longrightarrow & {\mathcal O}_V\otimes {\mathcal O}_V\\ & f & \longrightarrow & \displaystyle\sum_{i,j,k,m,n=1}^n (x_m\otimes x_n)\cdot \Phi_k^{mn} C^k_{ij} \Delta( \frac{{\partial} f_1}{{\partial} x_i} \frac{{\partial} f_2}{{\partial} x_j}) \end{array} $$ while $ \rho^\nu\left( \begin{array}{c}\resizebox{6mm}{!}{ \xy (-3,0)*{_1}, (3,0)*{_2},
(0,7)*{\circ}="a", (-3,2)*{\circ}="b_1", (3,2)*{\circ}="b_2",
\endxy}\end{array} \right): {\mathcal O}_V^{\otimes 2}\rightarrow {\mathcal O}_V$ and $\Delta:=\rho^\nu\left( \begin{array}{c}\resizebox{6mm}{!}{ \xy (-3,9)*{^1}, (3,9)*{^2},
(0,2)*{\circ}="a", (-3,7)*{\circ}="b_1", (3,7)*{\circ}="b_2",
\endxy}\end{array} \right): {\mathcal O}_V\rightarrow {\mathcal O}_V^{\otimes 2}$ are the standard commutative multiplication and, respectively, co-commutative comultiplication
in ${\mathcal O}_V$. Representations of the completed props $\widehat{\LB}$ and $\widehat{\LB}^{\mathrm{min}}_\infty$ (and hence of ${\mathcal D}\widehat{\LB}$ and
${\mathcal D}\widehat{\LB}^{\mathrm{min}}_\infty$) are considered in \S {\ref{5: subsec on repr of wLBq}} below --- they require the introduction of a formal parameter to ensure convergence.
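As a quick sanity check of the first explicit formula above: applying the operator associated to the one-black-vertex graph to the coordinate functions $f_1=x_a$, $f_2=x_b$ gives
$$
\sum_{i,j,k=1}^n x_k\, C^k_{ij}\, \frac{{\partial} x_a}{{\partial} x_i}\, \frac{{\partial} x_b}{{\partial} x_j}=\sum_{k=1}^n C^k_{ab}\, x_k=[x_a,x_b],
$$
so the polydifferential representation restricts on linear functions to the original Lie bracket, as it should.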
\subsection{Properad of quantizable Lie bialgebras} Let us denote by $\widehat{\mathcal{L}\mathit{ieb}}_\infty$ the {\em non-differential}\, properad $\widehat{\LB}^{\mathrm{min}}_\infty$, i.e.\ the completed free properad generated by the same ${\mathbb S}$-bimodule $E$ but with the differential set to zero. Let $\widehat{\mathcal{L}\mathit{ieb}}_\infty^+$ be the free extension of $\widehat{\mathcal{L}\mathit{ieb}}_\infty$ by one extra $(1,1)$ generator $\begin{array}{c} \xy
(0,0)*{\bullet}="a", (0,3)*{}="b", (0,-3)*{}="c",
\ar @{-} "a";"b" <0pt> \ar @{-} "a";"c" <0pt> \endxy\end{array}$ of homological degree one. In \cite{MW} (see formula (11) there) we constructed a map of Lie algebras $$ \begin{array}{rccc} f: & \mathsf{dfGC}_3^{or} & \longrightarrow & \mathrm{Der}(\widehat{\mathcal{L}\mathit{ieb}}_\infty^+) \\ & \Gamma & \longrightarrow & \displaystyle\sum_{m,n\geq 1} \sum_{s:[n]\rightarrow V(\Gamma)\atop \hat{s}:[m]\rightarrow V(\Gamma)} \begin{array}{c}\resizebox{11mm}{!} {\xy
(-6,7)*{^1}, (-3,7)*{^2}, (2.5,7)*{}, (7,7)*{^m}, (-3,-8)*{_2}, (3,-6)*{}, (7,-8)*{_n}, (-6,-8)*{_1},
(0,4.5)*+{...}, (0,-4.5)*+{...},
(0,0)*+{\Gamma}="o", (-6,6)*{}="1", (-3,6)*{}="2", (3,6)*{}="3", (6,6)*{}="4", (-3,-6)*{}="5", (3,-6)*{}="6", (6,-6)*{}="7", (-6,-6)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} \end{array} $$
where the second sum in the r.h.s.\ is taken over all ways of attaching the incoming and outgoing legs to the graph $\Gamma$, and then setting to zero every resulting graph if it contains a vertex with valency $\leq 2$ or with no input legs or no output legs
(there is an implicit rule of signs in-built into this formula and its version in Proposition {\ref{5: Prop on isomorphisms of lieb properads}} below which is completely analogous to the one explained in \S 7 of \cite{MaVo}). Here $\mathrm{Der}(\widehat{\mathcal{L}\mathit{ieb}}_\infty^+)$ is the Lie algebra of continuous derivations of the topological properad $\widehat{\mathcal{L}\mathit{ieb}}_\infty^+$. Note that for many graphs $\Gamma\in \mathsf{dfGC}_3^{or}$ the associated $(m=1,n=1)$ summand $ \begin{array}{c}\resizebox{3mm}{!} {\xy
(0,5)*{^1}, (0,-5)*{_1},
(0,0)*+{\Gamma}="o", (0,4)*{}="1", (0,-4)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} $
in $f(\Gamma)$ can be highly non-trivial, and this phenomenon explains the need for the extension $\widehat{\mathcal{L}\mathit{ieb}}_\infty\rightarrow \widehat{\mathcal{L}\mathit{ieb}}_\infty^+$ above.
If $\Upsilon$ is a Maurer-Cartan element in $\mathsf{dfGC}_3^{or}$, then $f(\Upsilon)$ is a differential in $\widehat{\mathcal{L}\mathit{ieb}}_\infty^+$ which acts on the generating $(m,n)$-corolla as follows $$ f(\Upsilon)\left( \begin{array}{c}\resizebox{14mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array} \right)= \sum_{s:[n]\rightarrow V(\Gamma)\atop \hat{s}:[m]\rightarrow V(\Gamma)} \begin{array}{c}\resizebox{11mm}{!} {\xy
(-6,7)*{^1}, (-3,7)*{^2}, (2.5,7)*{}, (7,7)*{^m}, (-3,-8)*{_2}, (3,-6)*{}, (7,-8)*{_n}, (-6,-8)*{_1},
(0,4.5)*+{...}, (0,-4.5)*+{...},
(0,0)*+{\Upsilon}="o", (-6,6)*{}="1", (-3,6)*{}="2", (3,6)*{}="3", (6,6)*{}="4", (-3,-6)*{}="5", (3,-6)*{}="6", (6,-6)*{}="7", (-6,-6)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} $$
If $\Upsilon$ is such that the summand $ \begin{array}{c}\resizebox{3mm}{!} {\xy
(0,5)*{^1}, (0,-5)*{_1},
(0,0)*+{\Upsilon}="o", (0,4)*{}="1", (0,-4)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} $
in $f(\Upsilon)$ contains at least one vertex of the form
$\begin{array}{c} \xy
(0,0)*{\bullet}="a", (0,3)*{}="b", (0,-3)*{}="c",
\ar @{-} "a";"b" <0pt> \ar @{-} "a";"c" <0pt> \endxy\end{array}$, then the ideal $I^+\subset \widehat{\mathcal{L}\mathit{ieb}}_\infty^+$ generated by this extra generator
$\begin{array}{c} \xy
(0,0)*{\bullet}="a", (0,3)*{}="b", (0,-3)*{}="c",
\ar @{-} "a";"b" <0pt> \ar @{-} "a";"c" <0pt> \endxy\end{array}$ is respected by the differential $f(\Upsilon)$ so that the latter induces a differential in the quotient properad $$
\widehat{\mathcal{L}\mathit{ieb}}_\infty= \widehat{\mathcal{L}\mathit{ieb}}_\infty^+/I^+. $$ For example, the standard Maurer-Cartan element $$ \Upsilon_S:=\xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy $$ in $\mathsf{dfGC}_3^{or}$ does have this property as $$ \begin{array}{c}\resizebox{7mm}{!} {\xy
(0,5)*{^1}, (0,-5)*{_1},
(0,0)*+{\ \Upsilon_S}="o", (0,4)*{}="1", (0,-4)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} = \begin{xy} <0mm,7mm>*{^1};
<0mm,-4.4mm>*{_1};
<0mm,0mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0mm>*{};<0mm,6mm>*{}**@{-},
<0mm,0mm>*{\bullet};
<0mm,3mm>*{\bullet};
\end{xy} $$ and induces the standard differential $\delta$ in $\widehat{\mathcal{L}\mathit{ieb}}_\infty$ given by the formula (\ref{3: differential in LieBinfty}).
Another Maurer-Cartan element of interest to us is given explicitly by (\ref{3: Upsilon om_g for d=3}). It was proven in Lemma {\ref{A: lemma on 4 binary vertices}} that every graph $\Gamma\in \hat{{\mathsf G}}_{4p+2,6p+1}$, $p\geq 2$, contributing to $\Upsilon^{\omega_g}$ has at least 4 binary vertices, so that again $$ \begin{array}{c}\resizebox{9mm}{!} {\xy
(0,5)*{^1}, (0,-5)*{_1},
(0,0)*+{\ \ \Upsilon^{\omega_g}}="o", (0,4)*{}="1", (0,-4)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} = \begin{xy} <0mm,7mm>*{^1};
<0mm,-4.4mm>*{_1};
<0mm,0mm>*{};<0mm,-3mm>*{}**@{-},
<0mm,0mm>*{};<0mm,6mm>*{}**@{-},
<0mm,0mm>*{\bullet};
<0mm,3mm>*{\bullet};
\end{xy} $$ implying that $\Upsilon^{\omega_g}$ induces the following differential in $\widehat{\mathcal{L}\mathit{ieb}}_\infty$ $$ \delta^{\omega_g}=f(\Upsilon^{\omega_g})\bmod I^+ = \delta + \sum_{p\geq 2} \sum_{\Gamma\in \hat{{\mathsf G}}^{or}_{4p+2,6p+1}}\displaystyle\sum_{m,n\geq 1\atop m+n\geq 4} \left(\int_{\overline{C}_{4p+2}({\mathbb R}^3)}\bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right) \sum_{s:[n]\rightarrow V(\Gamma)\atop \hat{s}:[m]\rightarrow V(\Gamma)} \begin{array}{c}\resizebox{9mm}{!} {\xy
(-6,7)*{^1}, (-3,7)*{^2}, (2.5,7)*{}, (7,7)*{^m}, (-3,-8)*{_2}, (3,-6)*{}, (7,-8)*{_n}, (-6,-8)*{_1},
(0,4.5)*+{...}, (0,-4.5)*+{...},
(0,0)*+{\Gamma}="o", (-6,6)*{}="1", (-3,6)*{}="2", (3,6)*{}="3", (6,6)*{}="4", (-3,-6)*{}="5", (3,-6)*{}="6", (6,-6)*{}="7", (-6,-6)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array}. $$ As every graph in the sum over $p\geq 2$ has at least $4$ bivalent vertices (see Appendix A), we have, in particular, $$ \delta^{\omega_g}\left(\begin{array}{c} \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{},
\end{xy}
\end{array}\right)=\delta\left(\begin{array}{c} \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{-},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{-},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.55mm>*{};<0mm,-3.8mm>*{_1}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{},
\end{xy}
\end{array}\right)=0
\ \ \ \ , \ \ \ \
\delta^{\omega_g} \left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}\right)= \delta \left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{-},
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.66mm>*{};<0mm,3.4mm>*{^1}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}\right)=0 $$
The first differential $\delta$ makes $\widehat{\mathcal{L}\mathit{ieb}}_\infty$ into the standard minimal resolution of the completed properad $\widehat{\mathcal{L}\mathit{ieb}}$ of Lie bialgebras. The second differential $\delta^{\omega_g}$ makes $\widehat{\mathcal{L}\mathit{ieb}}_\infty$ into a resolution of a properad $\widehat{\mathcal{L}\mathit{ieb}}^{
\mathrm{quant}}$ which we call the {\em properad of quantizable Lie bialgebras}\, and which can be defined as follows.
By contrast to $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}}:= (\widehat{\mathcal{L}\mathit{ieb}}_\infty, \delta)$, let us abbreviate from now on $$ \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}:= (\widehat{\mathcal{L}\mathit{ieb}}_\infty, \delta^{\omega_g}) $$ Let $J$ be the differential closure of the ideal in $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ generated by $(m,n)$-corollas with $m+n\geq 4$. The quotient $$
\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}:= \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}/J $$ is a properad which is concentrated in homological degree zero, and which is generated by the ${\mathbb S}$-bimodule (\ref{5: module E_0 generating Lieb}) modulo the following three relations, $$ 0= \begin{array}{c}\resizebox{6mm}{!}{\begin{xy}
<0mm,2.47mm>*{};<0mm,0.12mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{-},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{-},
<0mm,3mm>*{\bullet};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\bullet};<0mm,-0.8mm>*{}**@{}, <-0.39mm,-1.2mm>*{};<-2.2mm,-3.5mm>*{}**@{-},
<0.39mm,-1.2mm>*{};<2.2mm,-3.5mm>*{}**@{-},
<0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{},
<-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{},
<0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{},
<0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy}}\end{array}
- \begin{array}{c}\resizebox{8mm}{!}{\begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy}}\end{array}
+ \begin{array}{c}\resizebox{8mm}{!}{\begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^2}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^1}**@{},
\end{xy}}\end{array}
- \begin{array}{c}\resizebox{8mm}{!}{\begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^2}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^1}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy}}\end{array} + \begin{array}{c}\resizebox{8mm}{!}{\begin{xy}
<0mm,-1.3mm>*{};<0mm,-3.5mm>*{}**@{-},
<0.38mm,-0.2mm>*{};<2.0mm,2.0mm>*{}**@{-},
<-0.38mm,-0.2mm>*{};<-2.2mm,2.2mm>*{}**@{-}, <0mm,-0.8mm>*{\bullet};<0mm,0.8mm>*{}**@{},
<2.4mm,2.4mm>*{\bullet};<2.4mm,2.4mm>*{}**@{},
<2.77mm,2.0mm>*{};<4.4mm,-0.8mm>*{}**@{-},
<2.4mm,3mm>*{};<2.4mm,5.2mm>*{}**@{-},
<0mm,-1.3mm>*{};<0mm,-5.3mm>*{^1}**@{},
<2.5mm,2.3mm>*{};<5.1mm,-2.6mm>*{^2}**@{},
<2.4mm,2.5mm>*{};<2.4mm,5.7mm>*{^1}**@{},
<-0.38mm,-0.2mm>*{};<-2.8mm,2.5mm>*{^2}**@{},
\end{xy}}\end{array}
+ \ \sum_{p\geq 2} \sum_{\Gamma\in \hat{{\mathsf G}}^{\leq 3}_{4p+2,6p+1}}\displaystyle \hspace{-2mm} \left(\int_{\overline{C}_{4p+2}({\mathbb R}^3)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right)\hspace{-1mm} \sum_{s:[2]\rightarrow V(\Gamma)\atop \hat{s}:[2]\rightarrow V(\Gamma)} \begin{array}{c}\resizebox{10mm}{!} {\xy
(-6,7)*{^1},
(2.5,7)*{}, (7,7)*{^2},
(3,-6)*{}, (7,-8)*{_2}, (-6,-8)*{_1},
(0,0)*+{\Gamma}="o", (-5,5)*{}="1",
(5,5)*{}="4",
(5,-5)*{}="7", (-5,-5)*{}="8",
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array}\ , $$
$$ 0=\begin{array}{c}\resizebox{9mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^3}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^2}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^1}**@{},
\end{xy}}\end{array} \ + \ \begin{array}{c}\resizebox{9mm}{!}{ \begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^2}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^1}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^3}**@{},
\end{xy}}\end{array} \ + \ \begin{array}{c}\resizebox{9mm}{!}{ \begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,0.69mm>*{};<0mm,3.0mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<2.4mm,-2.4mm>*{}**@{-},
<-0.35mm,-0.35mm>*{};<-1.9mm,-1.9mm>*{}**@{-},
<-2.4mm,-2.4mm>*{\bullet};<-2.4mm,-2.4mm>*{}**@{},
<-2.0mm,-2.8mm>*{};<0mm,-4.9mm>*{}**@{-},
<-2.8mm,-2.9mm>*{};<-4.7mm,-4.9mm>*{}**@{-},
<0.39mm,-0.39mm>*{};<3.3mm,-4.0mm>*{^1}**@{},
<-2.0mm,-2.8mm>*{};<0.5mm,-6.7mm>*{^3}**@{},
<-2.8mm,-2.9mm>*{};<-5.2mm,-6.7mm>*{^2}**@{},
\end{xy}}\end{array} \
+ \ \sum_{p\geq 2} \sum_{\Gamma\in \hat{{\mathsf G}}^{\leq 3}_{4p+2,6p+1}}\displaystyle \left(\int_{\overline{C}_{4p+2}({\mathbb R}^3)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right) \sum_{s:[2]\rightarrow V(\Gamma)\atop \hat{s}:[2]\rightarrow V(\Gamma)} \begin{array}{c}\resizebox{10mm}{!} {\xy
(0,7)*{^1},
(2.5,7)*{}, (0,-8)*{_2},
(3,-6)*{}, (7,-8)*{_3}, (-6,-8)*{_1},
(0,0)*+{\Gamma}="o", (0,5)*{}="1",
(0,-5)*{}="4",
(5,-5)*{}="7", (-5,-5)*{}="8",
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array}\ , $$
$$ 0= \begin{array}{c}\resizebox{9mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bullet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^3}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^1}**@{},
\end{xy}}\end{array} \ + \ \begin{array}{c}\resizebox{9mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bullet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^2}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^1}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^3}**@{},
\end{xy}}\end{array} \ + \ \begin{array}{c}\resizebox{9mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{-},
<0.49mm,0.49mm>*{};<1.9mm,1.9mm>*{}**@{-},
<-0.5mm,0.5mm>*{};<-1.9mm,1.9mm>*{}**@{-},
<-2.3mm,2.3mm>*{\bullet};<-2.3mm,2.3mm>*{}**@{},
<-1.8mm,2.8mm>*{};<0mm,4.9mm>*{}**@{-},
<-2.8mm,2.9mm>*{};<-4.6mm,4.9mm>*{}**@{-},
<0.49mm,0.49mm>*{};<2.7mm,2.3mm>*{^1}**@{},
<-1.8mm,2.8mm>*{};<0.4mm,5.3mm>*{^3}**@{},
<-2.8mm,2.9mm>*{};<-5.1mm,5.3mm>*{^2}**@{},
\end{xy}}\end{array}\
+ \ \sum_{p\geq 2} \sum_{\Gamma\in \hat{{\mathsf G}}^{\leq 3}_{4p+2,6p+1}}\displaystyle \left(\int_{\overline{C}_{4p+2}({\mathbb R}^3)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)\right) \sum_{s:[2]\rightarrow V(\Gamma)\atop \hat{s}:[2]\rightarrow V(\Gamma)} \begin{array}{c}\resizebox{10mm}{!} {\xy
(-7,7)*{^1},
(2.5,7)*{}, (0,7)*{^2},
(7,7)*{_3}, (0,-8)*{_1},
(0,0)*+{\Gamma}="o", (0,5)*{}="1",
(-5,5)*{}="4",
(5,5)*{}="7", (0,-5)*{}="8",
\ar @{-} "o";"1" <0pt>
\ar @{-} "o";"4" <0pt>
\ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array}\ , $$ where $\hat{{\mathsf G}}^{\leq 3}_{4p+2,6p+1}$ is the subset of $\hat{{\mathsf G}}^{or}_{4p+2,6p+1}$ consisting of graphs with vertices of valency $\leq 3$ (it was shown in Appendix A that such graphs have precisely $4$ bivalent vertices which explains why there no other relations than the ones shown above).
\subsubsection{\bf Theorem}\label{5: Prop cohomology of LB infty quant} {\em The natural epimorphism of props $$ \nu: \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}} \longrightarrow \widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}} $$ is a quasi-isomorphism.}
\begin{proof} The morphism $\nu$ respects complete and exhaustive filtrations of both sides by the number of vertices, hence it induces a morphism of the associated spectral sequences, $$ \nu^r: {\mathcal E}^r\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}} \longrightarrow {\mathcal E}^r\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}} $$ The term ${\mathcal E}^0\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ has trivial differential, while the term ${\mathcal E}^1\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ can be identified with the dg prop $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}}$ so that ${\mathcal E}^2\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}=\widehat{\mathcal{L}\mathit{ieb}}$ as an ${\mathbb S}$-bimodule. On the other hand ${\mathcal E}^2\widehat{\LB}^{\mathrm{quant}}$ can also be identified with ${\widehat{\LB}}$ as an ${\mathbb S}$-bimodule. Hence the morphism $\nu^2$ is an isomorphism so that, by the Eilenberg-Moore Comparison Theorem 5.5.11 (see \S 5.5 in \cite{Wei}), the morphism $\nu$ is a quasi-isomorphism. \end{proof}
Note that graphs in (\ref{3: formula for Holie_2 F_k}) may contain closed paths of directed edges in general and hence belong to the graph complex $\mathsf{dfGC}_d$ rather than to $\mathsf{dfGC}_d^{or}$. Therefore in order to see the meaning of Theorem {{\ref{3: Theorem on KS iso for A_d^n}}} in terms of props one has to consider the wheeled closure \cite{MMS} of the prop $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}}$ which we denote by $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}, \circlearrowright}$; by definition, it is generated by the same ${\mathbb S}$-bimodule (\ref{5: generators of Lieb_infty}) but now using directed graphs with possibly {\em closed}\, directed paths of internal edges.
Theorem {\ref{3: Theorem on KS iso for A_d^n}} implies almost immediately the following
\subsubsection{\bf Proposition}\label{5: Prop on isomorphisms of lieb properads} {\em There is a morphism of dg props $$ F: \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}} \longrightarrow \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}, \circlearrowright} $$ given by the following transcendental formula (cf.\ (\ref{3: formula for Holie_2 F_k})) \begin{equation}\label{5: explicit map F from LB^q_infty to LB_infty wheeled} F\left( \begin{array}{c}\resizebox{14mm}{!}{\begin{xy}
<0mm,0mm>*{\bullet};<0mm,0mm>*{}**@{},
<-0.6mm,0.44mm>*{};<-8mm,5mm>*{}**@{-},
<-0.4mm,0.7mm>*{};<-4.5mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,5mm>*{\ldots}**@{},
<0.4mm,0.7mm>*{};<4.5mm,5mm>*{}**@{-},
<0.6mm,0.44mm>*{};<8mm,5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,5.5mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,5.5mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,5.5mm>*{^{m\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,5.5mm>*{^m}**@{},
<-0.6mm,-0.44mm>*{};<-8mm,-5mm>*{}**@{-},
<-0.4mm,-0.7mm>*{};<-4.5mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-1mm,-5mm>*{\ldots}**@{},
<0.4mm,-0.7mm>*{};<4.5mm,-5mm>*{}**@{-},
<0.6mm,-0.44mm>*{};<8mm,-5mm>*{}**@{-},
<0mm,0mm>*{};<-8.5mm,-6.9mm>*{^1}**@{},
<0mm,0mm>*{};<-5mm,-6.9mm>*{^2}**@{},
<0mm,0mm>*{};<4.5mm,-6.9mm>*{^{n\hspace{-0.5mm}-\hspace{-0.5mm}1}}**@{},
<0mm,0mm>*{};<9.0mm,-6.9mm>*{^n}**@{},
\end{xy}}\end{array}
\right)= \sum_{q\geq 0} \sum_{\Gamma\in {\mathsf G}_{1+4q, 6q}} \left(\int_{\overline{{\mathfrak C}}_{1+4q}({\mathbb R}^d)} \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\varpi_g\right) \right) \sum_{s:[n]\rightarrow V(\Gamma)\atop \hat{s}:[m]\rightarrow V(\Gamma)} \begin{array}{c}\resizebox{9mm}{!} {\xy
(-6,7)*{^1}, (-3,7)*{^2}, (2.5,7)*{}, (7,7)*{^m}, (-3,-8)*{_2}, (3,-6)*{}, (7,-8)*{_n}, (-6,-8)*{_1},
(0,4.5)*+{...}, (0,-4.5)*+{...},
(0,0)*+{\Gamma}="o", (-6,6)*{}="1", (-3,6)*{}="2", (3,6)*{}="3", (6,6)*{}="4", (-3,-6)*{}="5", (3,-6)*{}="6", (6,-6)*{}="7", (-6,-6)*{}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"2" <0pt> \ar @{-} "o";"3" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"6" <0pt> \ar @{-} "o";"7" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} \end{equation}
where the third sum in the r.h.s.\ is taken over all ways of attaching the incoming and outgoing legs to the graph $\Gamma$, and we set to zero every resulting graph which contains a vertex of valency $<3$ or a vertex without at least one incoming and at least one outgoing edge.}
\subsubsection{\bf Corollary}\label{5: coroll on f from LB^q to LB^min} {\em The explicit morphism $F$ in Proposition {\ref{5: Prop on isomorphisms of lieb properads}} induces an explicit morphism $f: \widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}} \rightarrow \widehat{\mathcal{L}\mathit{ieb}}^\circlearrowright$. }
\subsubsection{\bf Representations of $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ and quantizable Lie bialgebras}\label{5: subsec on repr of wLBq} As the properads $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ and $\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}$ are vertex completed, one must be careful when defining their representations in a dg vector space $V$.
Let $F_p\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$, $F_p\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}}$, $F_p\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}$ and $F_p\widehat{\mathcal{L}\mathit{ieb}}$ be the sub-properads generated by graphs with $\geq p$ vertices, and let $\lambda$ be a formal parameter of homological degree zero. By a representation of, say, $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ in a dg vector space $V$ we mean a morphism of properads $$ \rho: \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}} \longrightarrow {\mathcal E} nd_V[[\lambda]] $$ such that $\rho(F_p\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}})\subset \lambda^p {\mathcal E} nd_V[[\lambda]]$ where ${\mathcal E} nd_V[[\lambda]]$ is the properad of formal power series in $\lambda$ with coefficients in ${\mathcal E} nd_V$, and $\lambda^p {\mathcal E} nd_V[[\lambda]]\subset {\mathcal E} nd_V[[\lambda]]$ is a sub-properad generated by formal power series which are divisible by $\lambda^p$. Representations of $\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}$, $\widehat{\LB}^{\mathrm{min}}_\infty$, $\widehat{\mathcal{L}\mathit{ieb}}$ and of their wheeled versions are defined similarly.
It is clear that there is a 1-1 correspondence between representations of $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ in $V$ and elements $\pi^\diamond\in {\mathfrak g}_V[-2][[\lambda]]\simeq {\mathbb A}_3^{(n)}[[\lambda]]$ (for some $n$ including the case $n=+\infty$) such that the equation holds $$ [\pi^\diamond, \pi^\diamond]_S + \sum_{p\geq 2} \frac{\lambda^{4p}}{(4p+2)!} \mu_{4p+2}^{\omega_g}(\pi^\diamond,\ldots,\pi^\diamond)=0. $$ As this equation involves only powers of $\lambda^4$, it makes sense to introduce $\hbar:=\lambda^4$ and consider a subclass of solutions $\pi^\diamond$ which belong to ${\mathbb A}_3^{(n)}[[\hbar]]$; in the case $V={\mathbb R}^n$ these are precisely {\em quantizable Lie bialgebra}\, structures introduced above.
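For orientation, rewriting this equation in the parameter $\hbar=\lambda^4$ and keeping only the lowest terms (this is just the summand $p=2$ made explicit; no further information is used) one gets $$ [\pi^\diamond, \pi^\diamond]_S + \frac{\hbar^{2}}{10!}\, \mu_{10}^{\omega_g}(\pi^\diamond,\ldots,\pi^\diamond) + O(\hbar^{3})=0, $$ so that modulo $\hbar^{2}$ the equation reduces to the ordinary Maurer-Cartan equation $[\pi^\diamond,\pi^\diamond]_S=0$.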
In the next subsection we construct an explicit morphism of props $$ f^q: {\mathcal A} ssB \longrightarrow {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}} $$ and show that it lifts by a naive induction to a morphism of dg props $$ {\mathcal F}^q: {\mathcal A} ssb_\infty \longrightarrow {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}} $$ satisfying the boundary condition (\ref{5: Boundary cond for formality map}). Such a morphism composed with the explicit isomorphism ${\mathcal D} F: {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}\longrightarrow {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min, \circlearrowright}}$ from Proposition {\ref{5: Prop on isomorphisms of lieb properads}}, gives us the required formality map, $$ {\mathcal D} F \circ {\mathcal F}^q: {\mathcal A} ssb_\infty \longrightarrow {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min},\circlearrowright} $$ for finite-dimensional Lie bialgebras.
\subsection{Open problems} The prop $\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}$ and the dg prop $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ have been defined with the help of explicit transcendental formulae. However, it is very hard to compute the integrals appearing in those formulae. For example, the weights of the graphs $\gamma_{10}^{2,2}$,
$\gamma_{10}^{1,3}$ and $\gamma_{10}^{3,1}$ (the first possibly non-trivial contributions) introduced in the Appendix A involve
integrals of top-degree differential forms over $24$-dimensional configuration spaces.
In principle all these weights might be zero so that $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ might be identical to $\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}}$. If this is the case, then our explicit formulae
for universal quantization of Lie bialgebras become even simpler --- the quantization job would be done solely by the map $f^q$ given by the explicit formula
(\ref{7: explicit morphism f^q}).
We conjecture, however, that the situation is quite the opposite:
\subsubsection{\bf Conjectures} {\em (i) The set of homotopy classes of morphisms of dg
props $ F: \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}} \longrightarrow \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}, \circlearrowright} $ is a torsor over the Grothendieck-Teichm\"uller group $GRT$.
(ii) The set of homotopy classes of morphisms of dg props ${\mathcal F}^q: {\mathcal A} ssb_\infty \longrightarrow {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}}$ consists of a single point. }
These are open problems which we hope to address in the future. Another open problem is to construct an {\em explicit}\, isomorphism of dg props $$ \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{quant}} \longrightarrow \widehat{\mathcal{L}\mathit{ieb}}_\infty^{\mathrm{min}}, $$ i.e.\ to construct an analogue of our explicit morphism $F$ in (\ref{5: explicit map F from LB^q_infty to LB_infty wheeled}) which does {\em not}\, involve graphs with wheels.
{\Large \section{\bf An explicit formula for universal quantizations of Lie bialgebras} }
\subsection{Kontsevich compactified configuration spaces}
Let $\overline{{\mathbb H}}=\{z=x+it\in {\mathbb C} | t\geq 0\}$ be the closed upper-half plane. Its open subset $\{z=x+it\in {\mathbb C} | t> 0\}$ is denoted by ${\mathbb H}$; we also consider ${\partial}\overline{{\mathbb H}}:=\overline{{\mathbb H}}\setminus {\mathbb H}\simeq {\mathbb R}$. The group
$G_2:={\mathbb R}^+\rtimes {\mathbb R}$ acts on $\overline{{\mathbb H}}$ $$ \begin{array}{ccc} G_2 \times \overline{{\mathbb H}} &\longrightarrow & \overline{{\mathbb H}}\\ (\lambda\in {\mathbb R}^+,h\in {\mathbb R})\times z &\longrightarrow & \ \lambda z + h. \end{array} $$
Let $A$ and $I$ be some finite sets, and let $$ {\mathit{Conf}}_{A,I}(\overline{{\mathbb H}}):=\{f: A\hookrightarrow {\mathbb H}, \ i: I\hookrightarrow {\partial}\overline{{\mathbb H}}\} $$ be the configuration space of injections of $A$ into the upper half-plane, and of $I$ into the real line ${\mathbb R}\simeq {\partial}\overline{{\mathbb H}}$.
This is a smooth manifold of dimension $2\# A+ \#I$. The group $G_2$ acts naturally on it, $(f(A), i(I))\rightarrow (\lambda f(A)+ h, \lambda i(I) +h)$, and this action is free provided $2|A|+ |I|\geq 2$. The quotient space $$ C_{A,I}({\mathbb H}):= {\mathit{Conf}}_{A,I}(\overline{{\mathbb H}}) /G_2 $$
is a smooth manifold of dimension $2|A|+|I|-2$. Kontsevich constructed in \cite{Ko}
its compactification, $\overline{C}_{A,I}({\mathbb H})$, which is a smooth manifold with corners, and which we use below for a construction of a new family of compactified configuration spaces. If $A=[k]$
and $I=[n]$, we abbreviate $C_{A,I}({\mathbb H})$ to $C_{k,n}({\mathbb H})$.
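For example, the dimension formula above gives $\dim C_{1,1}({\mathbb H})=2\cdot 1+1-2=1$ and $\dim C_{2,0}({\mathbb H})=2\cdot 2+0-2=2$; the compactification $\overline{C}_{2,0}({\mathbb H})$ is the well-known ``eye'' of \cite{Ko}.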
\subsection{Configuration spaces of points in ${\mathbb R}\times {\mathbb R}$} Let $C_n({\mathbb R})$, $n\geq 2$, be the configuration space of injections $\{p: [n]\rightarrow {\mathbb R}\}$ modulo the action of the group ${\mathbb R}^+\ltimes {\mathbb R}$ sending an injection $p$ into an injection $\lambda p + \nu$, $\lambda\in {\mathbb R}^+$, $\nu\in {\mathbb R}$. We recall in Appendix B its compactification $\overline{C}_n({\mathbb R})$ which gives us a geometric realization (in the category of semialgebraic manifolds) of Jim Stasheff's {\em associahedra}.
Boris Shoikhet introduced in \cite{Sh1} (with a reference to Maxim Kontsevich's informal suggestion) the configuration space $C_{m,n}({\mathbb R}\times {\mathbb R})$ of pairs of injections $\{p': [n]\rightarrow {\mathbb R},\ p'': [m]\rightarrow {\mathbb R}\}$, $m,n\geq 1$, $m+n\geq 3$, modulo the action of the group ${\mathbb R}^+\ltimes {\mathbb R}^2$ sending a pair of injections $(p',p'')$ into $(\lambda p' + \nu', \lambda^{-1} p''+ \nu'' )$ for any $\lambda\in {\mathbb R}^+$, $\nu',\nu''\in {\mathbb R}$. We recall its compactification $\overline{C}_{m,n}({\mathbb R}\times {\mathbb R})$ in Appendix B, and also prove that the family of compactifications $\{\overline{C}_{m,n}({\mathbb R}\times {\mathbb R})\}$ gives us a geometric realization (in the category of semialgebraic manifolds) of the (pre)biassociahedra posets introduced by Martin Markl in \cite{Ma} following an earlier work by Samson Saneblidze and Ron Umble \cite{SU}. This result gives us a nice combinatorial tool to control the boundary strata of the semialgebraic manifolds $\overline{C}_{m,n}({\mathbb R}\times {\mathbb R})$.
\subsection{Configuration space $C_{A;I,J}({\mathcal H})$ and its compactification}\label{7: subsection on C_A;I,J} Let ${\mathbb H}'=\{(x,t)\in {\mathbb R}\times {\mathbb R}^{> 0}\}$ and ${\mathbb H}''=\{(y,\hat{t})\in {\mathbb R}\times {\mathbb R}^{> 0}\}$ be two copies of the upper-half plane, and let $\overline{{\mathbb H}}'=\{(x,t)\in {\mathbb R}\times {\mathbb R}^{\geq 0}\}$ and $\overline{{\mathbb H}}''=\{(y,\hat{t})\in {\mathbb R}\times {\mathbb R}^{\geq 0}\}$ be their closures. Consider a subspace ${\mathcal H} \subset {\mathbb H}'\times {\mathbb H}''$ given by the equation
$t\widehat{t}=1$, and denote by $\overline{{\mathcal H}}$ its closure under the embedding into $\overline{{\mathbb H}}'\times \overline{{\mathbb H}}''$. The space $\overline{{\mathcal H}}$ has two distinguished lines, ${\mathbf X}:=\{(x\in {\mathbb R},y=0, t=0\}$ and ${\mathbf Y}:=\{(x=0,y\in {\mathbb R}, \widehat{t}=0\}$; it also has a natural structure of a smooth manifold with boundary. $$ \overline{{\mathcal H}}:\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \begin{array}{c}\resizebox{50mm}{!}{ \xy (37,30)*{^{\mathbf Y}}, (-17,-17)*{^{\mathbf X}}, (-35,30)*{}="a", (35,30)*{}="b", (-35,0)*{}="a1", (35,0)*{}="b1", (0,30)*{}="c", (0,0)*{}="d", (-15,-15)*{}="a'", (15,15)*{}="b'", (0,0)*{}="c'", (0,30)*{}="d'", (-15,15)*{}="a''", (15,45)*{}="b''", \ar @{<-} "a";"b" <0pt> \ar @{.} "a1";"b1" <0pt> \ar @{.} "a1";"a" <0pt> \ar @{.} "b1";"b" <0pt> \ar @{-->} "c";"d" <0pt> \ar @{<-} "a'";"b'" <0pt> \ar @{.} "a''";"b''" <0pt> \ar @{.} "a''";"a'" <0pt> \ar @{.} "b''";"b'" <0pt> \ar @{-->} "c'";"d'" <0pt> \endxy} \end{array} $$
The group $G_3:={\mathbb R}^+ \rtimes {\mathbb R}^2$ acts on $\overline{{\mathcal H}}$, $$ \begin{array}{ccccc} {\mathbb R}^+ \rtimes {\mathbb R}^2 &\times& \overline{{\mathcal H}} &\longrightarrow & \overline{{\mathcal H}}\\ (\lambda, a,b) &\times& (x,y,t) &\longrightarrow & (\lambda x + a, \lambda^{-1}y +b , \lambda t). \end{array} $$ For finite sets $A$, $I$ and $J$ let us consider a configuration space $$ {\mathit{Conf}}_{A;I,J}({\mathcal H}):= \{i: A\hookrightarrow {\mathcal H}, i':J\hookrightarrow {\mathbf X}, i'': I\hookrightarrow {\mathbf Y} \} $$ of injections. This is a $(3\# A + \# I + \# J)$-dimensional smooth manifold. The group $G_3$ acts on it smoothly and, in the case $3\# A + \# I + \# J\geq 3$ freely. {\em We assume from now on that conditions\, $3\# A + \# I + \# J\geq 3$, $\# I\geq 1$ and $\# J\geq 1$ hold true}, and denote by $$ C_{A;I,J}({\mathcal H})= {\mathit{Conf}}_{A;I,J}({\mathcal H})/G_3 $$ the associated smooth manifold of $G_3$-orbits.
If $A=[k]$, $I=[m]$ and $J=[n]$ for some non-negative integers $k,m,n\in {\mathbb Z}^{\geq 0}$ (with $3k+m+n\geq 3$, $m,n\geq 1$), then we abbreviate $C_{[k];[m],[n]}({\mathcal H})$ to $C_{k;m,n}({\mathcal H})$.
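For example, the dimension count above gives $$ \dim C_{1;1,1}({\mathcal H})=3\cdot 1+1+1-3=2, \qquad \dim C_{2;2,2}({\mathcal H})=3\cdot 2+2+2-3=7, $$ the latter space being the one underlying the two worked examples below.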
A point $p\in C_{A;I,J}({\mathcal H})$ can be understood as a collection of numbers
$$
p=\left\{(x_a,y_a,t_a=\frac{1}{\hat{t}_a}), x^0_{\alpha}, y^0_{\beta}\right\}_{a\in [A], \alpha\in [J], \beta\in [I]}
$$ defined modulo the following transformation $$ \left\{(x_a,y_a,t_a), x^0_{\alpha}, y^0_{\beta}\right\}\longrightarrow \left\{(\lambda x_a+ h',\lambda^{-1}y_a +h'',\lambda t_a), \lambda x^0_{\alpha}+h', \lambda^{-1} y^0_{\beta} + h''\right\} $$ for some $\lambda\in {\mathbb R}^+$, $h',h''\in {\mathbb R}$.
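Note that, whenever $A\neq \emptyset$, the three parameters $(\lambda, h',h'')$ can be used to normalize one chosen internal point, say $$ (x_{a_0},y_{a_0},t_{a_0})=(0,0,1) \ \ \text{for some fixed}\ a_0\in A, $$ a normalization which is used repeatedly in the boundary analysis below.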
The space $C_{0;I,J}({\mathcal H})$ can be identified with $C_{I,J}({\mathbb R}\times {\mathbb R})\simeq C_{m,n}({\mathbb R}\times {\mathbb R})$ studied in detail in Appendix B, and we define its compactification $\overline{C}_{0;I,J}({\mathcal H})$ as $\overline{C}_{I,J}({\mathbb R}\times {\mathbb R})$.
The space $C_{A;I,J}({\mathcal H})$ with $\# A\geq 1$ admits a canonical projection $$ \pi: C_{A;I,J}({\mathcal H}) \longrightarrow C_{I,J}({\mathbb R}\times {\mathbb R}) $$ which forgets {\em internal}\, points in ${\mathcal H}$ (where we assume $C_{1,1}({\mathbb R}\times {\mathbb R})$ to be the one point set for consistency), and, for any $a\in A$, the following two projections $$ \begin{array}{rccc} \pi_a': & C_{A;I,J}({\mathcal H}) &\longrightarrow & C_{a,J}({\mathbb H}')\simeq C_{1,n}({\mathbb H})
\\
& p & \longrightarrow & \{z_a':=x_a+it_a, x^0_{\alpha}\}_{\alpha\in J}
\end{array} $$
$$ \begin{array}{rccc} \pi_a'': & C_{A;I,J}({\mathcal H}) &\longrightarrow & C_{a,I}({\mathbb H}'')\simeq C_{1,m}({\mathbb H})
\\
& p & \longrightarrow & \{z_a'':=y_a+i\frac{1}{t_a}, y^0_{\beta}\}_{\beta\in I}.
\end{array} $$ We use these projections to construct the following continuous map for $\# A\geq 1$
$$ \begin{array}{ccccccccccc} f: C_{A;I,J}({\mathcal H}) \hspace{-2mm} &\longrightarrow & \hspace{-2mm} \displaystyle\prod_{a\in A} \overline{C}_{a,J}({\mathbb H}') \hspace{-2mm}&\times &\hspace{-2mm}\displaystyle \prod_{a\in A} \overline{C}_{a,I}({\mathbb H}'')\hspace{-2mm} &\times& \hspace{-2mm} \overline{C}_{I,J}({\mathbb R}\times {\mathbb R}) &\times & \hspace{-2mm} (S^2)^{k(k-1)} \hspace{-2mm}& \times & \hspace{-2mm}[0,+\infty]^{ k(k-1)(k-2)}
\\ p\hspace{-2mm} &\longrightarrow& \hspace{-2mm}\displaystyle \sqcap_{a\in A}\pi'_a(p) && \displaystyle \sqcap_{a\in A}\pi''_a(p) && \hspace{-4mm} \pi(p)
&& \hspace{-4mm} {\displaystyle \underset{_{a,b\in A\atop a\neq b}}{\sqcap}} \pi_{ab}(p)
&& \hspace{-7mm}{\displaystyle \underset{_{a,b,c\in A \atop \#\{a,b,c\}=3}}{\sqcap}}
\pi_{abc}(p) \\ \end{array} $$ where \begin{equation}\label{5: map pi_a,b to the sphere}
\pi_{ab}(p):= \frac{\left(x_a-x_b, t_at_b(y_a-y_b), t_a-t_b\right)}{\sqrt{(x_a-x_b)^2 + (t_a-t_b)^2 + t^2_at^2_b(y_a-y_b)^2 }}, \ \ \ \end{equation} $$
\pi_{abc}(p):= \frac{\sqrt{(x_a-x_b)^2 + (t_a-t_b)^2 + t^2_at^2_b(y_a-y_b)^2 }} {\sqrt{(x_b-x_c)^2 + (t_b-t_c)^2 + t^2_bt^2_c(y_b-y_c)^2 }}. $$ Here we assume that the last factor in the r.h.s.\ is omitted for $k<3$, and the last two factors are omitted for $k<2$ (as they make no sense in these cases). It is not hard to check that the above map is an embedding (it is essentially enough to check the cases $C_{1;1,1}({\mathcal H})$ and $C_{2;1,1}({\mathcal H})$) so that we can define a {\em compactified}\, configuration space $\overline{C}_{A;I,J}({\mathcal H})$ as the closure of the image of ${C}_{A;I,J}({\mathcal H})$ under the map $f$. It clearly has the structure of an oriented smooth manifold with corners and also of a semi-algebraic manifold.
\subsection{A class of differential forms on $\overline{C}_{A;I,J}({\mathcal H})$} Consider the circle $$ S^1=\{z\in {\mathbb C}: z=e^{i\theta}, \theta=Arg(z)\in [0,2\pi]\} $$ and a 1-form on $S^1$ of the form $\frac{1}{2\pi}\bar{g}(\theta)d\theta$ which satisfies the conditions $$ \int_0^{2\pi} \frac{1}{2\pi}\bar{g}(\theta)d\theta=1 $$
and
$$
\text{supp} (\bar{g}(\theta))\subset (0,\pi).
$$
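For concreteness, one admissible choice of such a function (nothing in what follows depends on the particular choice) is, for instance, $$ \bar{g}(\theta):= c\, e^{-\frac{1}{\theta(\pi-\theta)}}\ \ \text{for}\ \theta\in (0,\pi), \qquad \bar{g}(\theta):=0\ \ \text{otherwise}, $$ with the constant $c>0$ fixed by the normalization condition above.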
Thus this $1$-form is concentrated in the upper-half of the circle. We shall use this $1$-form
to construct a class of closed differential forms $\Omega_\Gamma$ on $\overline{C}_{k;n,m}({\mathcal H})$
parameterized by a set of graphs $\Gamma$ we describe next.
\subsubsection{\bf A family of graphs ${\mathcal G}_{k;m,n}$} The prop ${\mathcal D} \widehat{\LB}^{\mathrm{quant}}=\{{\mathcal D} \widehat{\LB}^{\mathrm{quant}}(m,n)\}$ introduced in \S {\ref{5: functor D}} is identical, as a graded vector space, to the prop $\widehat{\LB}^{\mathrm{min}}_\infty$ and hence admits the same set $\{{\mathcal G}_{k;m,n}\}$ of basis vectors.
For example (we omit labellings of white vertices by integers), $$ \begin{array}{c}\resizebox{11mm}{!}{ \xy
(0,13)*{\circ}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2",
(-8,-2)*{}="c_1", (-2,-2)*{}="c_2", (2,-2)*{}="c_3", \ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "a";"b_2" <0pt> \endxy} \end{array}\in {\mathcal G}_{1;2,1}, \ \ \ \
\begin{array}{c}\resizebox{10mm}{!}{ \xy
(0,13)*{\bullet}="0",
(0,7)*{\bullet}="a", (5,12)*{}="R", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "a";"b_2" <0pt> \ar @{->} "0";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt> \ar @{-} "a";"R" <0pt> \ar @{->} "R";"u_2" <0pt> \endxy} \end{array}\in {\mathcal G}_{2;2,2}, \ \ \ \ \ \ \begin{array}{c}\resizebox{14mm}{!}{ \xy
(0,17)*{\circ}="u",
(0,7)*{\bullet}="a", (-10,7)*{}="L", (10,7)*{}="R", (-5,2)*{\bullet}="b_1", (5,2)*{\bullet}="b_2",
(-5,-3)*{\circ}="c_1", (5,-3)*{\circ}="c_3", \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "a";"b_2" <0pt> \ar @{<-} "b_1";"c_1" <0pt> \ar @{<-} "b_2";"c_3" <0pt> \ar @{-} "b_1";"L" <0pt> \ar @{<-} "u";"L" <0pt> \ar @{-} "b_2";"R" <0pt> \ar @{->} "R";"u" <0pt> \ar @{->} "a";"u" <0pt> \endxy} \end{array}\in {\mathcal G}_{3;2,1}, \ \ \ \begin{array}{c}\resizebox{14mm}{!}{ \xy
(0,17)*{\circ}="u", (-5,12)*{\bullet}="0",
(0,7)*{\bullet}="a", (-10,7)*{}="L", (10,7)*{}="R", (-5,2)*{\bullet}="b_1", (5,2)*{\bullet}="b_2",
(-5,-3)*{\circ}="c_1", (5,-3)*{\circ}="c_3", \ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "a";"b_2" <0pt> \ar @{<-} "b_1";"c_1" <0pt> \ar @{<-} "b_2";"c_3" <0pt> \ar @{-} "b_1";"L" <0pt> \ar @{<-} "0";"L" <0pt> \ar @{-} "b_2";"R" <0pt> \ar @{->} "R";"u" <0pt> \ar @{->} "0";"u" <0pt> \endxy} \end{array}\in {\mathcal G}_{4;2,1}
$$
Thus graphs from ${\mathcal G}_{k;m,n}$ admit a flow which we always assume in our pictures to be directed from the bottom to the top (so that there is no need to show directions of the edges anymore). As before, $E_{int}(\Gamma)$ stands for the set of {\em internal}\, edges, $E_{in}(\Gamma)$ for the set of in-legs,
$E_{out}(\Gamma)$ for the set of out-legs.
\subsubsection{\bf From graphs to differential forms} Consider a graph $\Gamma\in {\mathcal G}_{k;m,n}$ with $3k+m+n\geq 3$, and an associated configuration space $$ C(\Gamma):=C_{E_{int}(\Gamma); E_{out}(\Gamma), E_{in}(\Gamma)}({\mathcal H})\simeq C_{k;n,m}({\mathcal H}). $$ Let ${\mathsf C}(\Gamma)$ be a subspace of $C(\Gamma)$ consisting of points
$$
p=\left\{(x_a,y_a,t_a=\frac{1}{\hat{t}_a}), x^0_{\alpha}, y^0_{\beta}\right\}_{a\in E_{int}(\Gamma), \alpha\in E_{in}(\Gamma), \beta\in E_{out}(\Gamma)}
$$ with $$ z'_a(p):=x_a+it_a \neq z'_b(p):=x_b+it_b \ \ \text{and}\ \ \ z''_a(p):=y_a+i\frac{1}{t_a} \neq z''_b(p):=y_b+i\frac{1}{t_b}\ \ \ \ \ \ \forall\ a\neq b \in V_{int}(\Gamma), $$ i.e.\ with projections of internal vertices on planes ${\mathbb H}'$ and ${\mathbb H}''$ being different (so that differential forms $dArg(z'_a(p) - z'_b(p))$ and $dArg(z''_a(p) - z''_b(p))$ are well-defined on ${\mathsf C}(\Gamma)$).
We define a smooth differential form $\Omega_\Gamma$ on ${\mathsf C}(\Gamma)$, \begin{equation}\label{1> Def of Omega_Ga} \Omega_\Gamma:= \bigwedge_{e\in E_{in}(\Gamma)} \omega'_e\ \ \wedge\ \ \bigwedge_{e\in E_{int}(\Gamma)}\Omega_e\ \ \wedge \ \
\bigwedge_{e\in E_{out}(\Gamma)} \omega''_e \end{equation} where $\omega'_e$ and $\omega''_e$ are 1-forms and $\Omega_e$ is a 2-form defined as follows.
Identifying vertices of $\Gamma$ with their images in $\overline{{\mathcal H}}$ under injections $(i,i',i'')$,
we define, \begin{itemize} \item[(i)] for any in-leg $e=\xy (0,2)*{^{v_1}}, (8,2)*{^{v_2}},
(0,0)*{\circ}="0",
(8,0)*{\bullet}="a"
\ar @{->} "0";"a" <0pt> \endxy\in E_{in}(\Gamma)$, $\omega'_e:=\frac{1}{2\pi}\bar{g}\left(Arg\left(z'_{v_2} - x^0_{v_1}\right)\right)d Arg\left(z'_{v_2} - x^0_{v_1}\right)$, \item[(ii)] for any out-leg $e=\xy (0,2)*{^{v_1}}, (8,2)*{^{v_2}},
(0,0)*{\bullet}="0",
(8,0)*{\circ}="a"
\ar @{->} "0";"a" <0pt> \endxy\in E_{out}(\Gamma)$, $\omega''_e:=\frac{1}{2\pi}\bar{g}\left(Arg\left(\overline{y^0_{v_2} - z''_{v_1}}\right)\right)dArg\left(\overline{y^0_{v_2} - z''_{v_1}}\right)$, \item[(iii)] for any internal edge $e=\xy (0,2)*{^{v_1}}, (8,2)*{^{v_2}},
(0,0)*{\bullet}="0",
(8,0)*{\bullet}="a"
\ar @{->} "0";"a" <0pt> \endxy\in E_{int}(\Gamma)$,
$$ \Omega_e:=\frac{1}{(2\pi)^2}\bar{g}\left(Arg\left(z'(v_2) - z'(v_1)\right)\right) \bar{g}\left(Arg\left(\overline{z''(v_2) - z''(v_1)}\right)\right)d Arg\left(z'(v_2) - z'(v_1)\right)\wedge dArg\left(\overline{z''(v_2) - z''(v_1)}\right) $$ \end{itemize} As the function $\bar{g}$ has support in the upper-half of the circle, the differential form $ \Omega_\Gamma$ extends smoothly to the configuration space $C(\Gamma)$ and even to its compactification $\overline{C}(\Gamma):=\overline{C}_{E_{int}(\Gamma); E_{out}(\Gamma), E_{in}(\Gamma)}({\mathcal H})$.
A subset of ${\mathcal G}_{k;m,n}$ consisting of graphs $\Gamma$ satisfying the condition $$ 2\# E_{int}(\Gamma) + \# E_{in}(\Gamma) + \# E_{out}(\Gamma)= 3k+m+n-3 $$ is denoted by ${\mathcal G}_{k;m,n}^{top}$ as the associated differential forms $\Omega_\Gamma$ (of degree $2\# E_{int}(\Gamma) + \# E_{in}(\Gamma) + \# E_{out}(\Gamma)$) give us top-degree forms on the configuration space $\overline{C}(\Gamma)$.
Notice that if a graph $\Gamma\in {\mathcal G}_{k;m,n}$ satisfies the condition $$
2\# E_{int}(\Gamma) + \# E_{in}(\Gamma) + \# E_{out}(\Gamma)= 3k+m+n-4 $$ then the associated differential form $\Omega_\Gamma$ has degree $\dim C(\Gamma)-1$ and hence one can apply the Stokes theorem to $d\Omega_\Gamma$ which is a top degree form. As $\Omega_\Gamma$ is closed, we obtain $$ 0=\int_{\overline{C}(\Gamma)} d\Omega_\Gamma = \int_{{\partial}\overline{C}(\Gamma)} \Omega_\Gamma. $$ Let us check in a few concrete examples all the boundary strata in ${\partial}\overline{C}(\Gamma)$ on which the form $\Omega_\Gamma$ does not vanish identically.
\subsubsection{\bf Example} Consider $$ \Gamma= \begin{array}{c}\resizebox{9mm}{!}{ \xy
(0,13)*{\bullet}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "a";"b_2" <0pt> \ar @{->} "0";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array}\in {\mathcal G}_{2;2,2}. $$ The associated $7$-dimensional configuration space $C(\Gamma)$ is given by the data, \begin{equation}\label{7: C(Ga) for the first example} \left\{ \left[\begin{array}{c}z_1'=x_1+it_1 \\ z_2'=x_2+ i{t_2}\\ x_1^0, x_2^0\in {\mathbb R} \end{array}\right], \left[\begin{array}{c}z_1''=y_1+ \frac{i}{t_1} \\ z_2''=y_2+ \frac{i}{t_2}\\ y_1^0, y_2^0\in {\mathbb R} \end{array}\right]\ \text{with}\ x_1^0<x_2^0, y_1^0< y_2^0 \right\} \end{equation} modulo the action of the 3-dimensional group $G_3$. The $6$-form $\Omega_\Gamma$ is given by $$ \Omega_\Gamma=\Omega_\Gamma'\wedge \Omega_\Gamma'' $$ where $$ \Omega_\Gamma':=\Omega_{\bar{g}}(z'_2-z_1')\wedge \Omega_{\bar{g}}(z_1'-x_1^0)\wedge \Omega_{\bar{g}}(z_1'-x_2^0), \ \ \ \Omega_{\Gamma}'':=\Omega_{\bar{g}}(\overline{z_2''-z_1''})\wedge \Omega_{\bar{g}}(\overline{y_1^0-z_2''})\wedge \Omega_{\bar{g}}(\overline{y_2^0-z_2''}) $$
and the 1-form $\Omega_{\bar{g}}$ is given by $$ \Omega_{\bar{g}}(z_1-z_2):= \frac{1}{2\pi}{\bar{g}}(Arg(z_1-z_2))dArg(z_1-z_2). $$ Let us classify the boundary strata in ${\partial} \overline{C}(\Gamma)$ on which the form $\Omega_\Gamma$ does not vanish identically.
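Before doing so, note as a consistency check that $\deg \Omega_\Gamma= 2+2+2\cdot 1=6=\dim C(\Gamma)-1$: each of the two in-legs and each of the two out-legs of $\Gamma$ contributes a $1$-form, and the single internal edge contributes a $2$-form, so the Stokes argument above indeed applies to this graph.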
{\sc Case I}. Consider the boundary stratum in which two internal vertices collapse into one internal vertex, that is, the limit $\varepsilon\rightarrow 0$ of the configuration in which $(x_1^0, x_2^0, y_1^0, y_2^0)$ stay constant, and
$$ \left(z_a' = x_* + it_* + \varepsilon ({\mathbf x}_a + i{\mathbf t}_a), z_a''= y_* + \varepsilon {\mathbf y}_a +\frac{i}{t_* + \varepsilon {\mathbf t}_a}\right)_{a=1,2} $$
It is isomorphic to $C_2({\mathbb R}^3)\times C(\Gamma/\Gamma_{V_{int}(\Gamma)})$ where $\Gamma_{V_{int}(\Gamma)}= \xy
(0,0)*{\bullet}="a", (5,0)*{\bullet}="b",
\ar @{->} "a";"b" <0pt> \endxy $ is the complete subgraph of $\Gamma$ spanned by the two internal vertices, and $$ \Gamma/\Gamma_{V_{int}(\Gamma)}=\begin{array}{c}\resizebox{11mm}{!}{ \xy
(0,7)*{\bullet}="0", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,12)*{\circ}="u_1", (5,12)*{\circ}="u_2",
\ar @{<-} "0";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "0";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt> \endxy} \end{array} $$ is the quotient graph obtained from $\Gamma$ by collapsing the subgraph $\Gamma_{V_{int}(\Gamma)}$ into a single internal vertex. As we can fix the unique internal vertex at of the latter graph $(x_*=0,y_*=0,t_*=1)$ and $$ \lim_{\varepsilon\rightarrow 0} Arg(z_2'-z_1')=Arg({\mathbf x}_2-{\mathbf x}_1 + i({\mathbf t}_2 - {\mathbf t}_1)) $$ and \begin{eqnarray*} \lim_{\varepsilon\rightarrow 0} Arg(\overline{z_2''-z_1''})&=&\lim_{\varepsilon\rightarrow 0} Arg({\mathbf y}_2 - {\mathbf y}_1 - \frac{i}{t_* + \varepsilon {\mathbf t}_2} + \frac{i}{t_* + \varepsilon {\mathbf t}_1})\\ &=&Arg({\mathbf y}_2 -{\mathbf y}_1 + i({\mathbf t}_2 - {\mathbf t}_1)) \end{eqnarray*} we obtain a factorization $$ \int_{C_2({\mathbb R}^3)\times C(\Gamma/\Gamma_{V_{int}(\Gamma)})}\Omega_\Gamma=\int_{C_2({\mathbb R}^3)=S^2}\omega_{\bar{g}} \cdot \int_{C(\Gamma/\Gamma_{V_{int}(\Gamma)})}\Omega_{\Gamma/\Gamma_{V_{int}(\Gamma)}}=(\Lambda_{\bar{g}}^{(2)})^2. $$
{\sc Case II}. Using invariance under the group $G_3$ we can always assume that the point $(x_2,y_2,t_2)$ is fixed at, say, $(0,0,1)$. Thus it remains to consider limit configurations in which the projection $z_1'$ collapses to a point $x_*$ in the boundary $t=0$ of $\overline{{\mathbb H}}'$,
$$ \left(z_1'= x_* + \varepsilon ({\mathbf x}_1 + i{\mathbf t}_1), z_2'=x_2 + i t_2, z_1''={\mathbf y}_1(\varepsilon) + \frac{i}{\varepsilon {\mathbf t}_1}, z_2''= y_2^* +\frac{i}{t_2}\right)\ \ \text{with} \ \varepsilon\rightarrow 0. $$ for some function ${\mathbf y}_1^*(\varepsilon)$ of the parameter $\varepsilon$. The limit $$ \lim_{\varepsilon\rightarrow 0} dArg(z_1' - x_1^0)\wedge dArg(z_1' - x_2^0) $$ can be non-zero if and only if the boundary points $x_2^0$ and $x_1^0$ also collapse to $x_*$, $$ x_1^0=x_* + \varepsilon {\mathbf x}_1^0, \ \ \ \ x_2^0= x_* + \varepsilon {\mathbf x}_2^0, $$ so that we get in that limit $$ \Omega_\Gamma' \ \underset{\varepsilon\rightarrow 0}{\longrightarrow} \ \Omega_{\bar{g}}(z'_2-x_*)\wedge \Omega_{\bar{g}}({\mathbf z}_1'-{\mathbf x}_1^0)\wedge \Omega_{\bar{g}}({\mathbf z}_1'-{\mathbf x}_2^0) $$ where ${\mathbf z}_1={\mathbf x}_1+i{\mathbf t}_1$. To make the form $$ d Arg(\overline{z_2''-z_1})= dArg (y_2-{\mathbf y}_1(\varepsilon)-\frac{i}{t_2} + \frac{i}{\varepsilon {\mathbf t}_1}) $$ non-zero in the limit $\varepsilon\rightarrow 0$, we have to assume $$ {\mathbf y}_1(\varepsilon) \thicksim \text{const} + \frac{{\mathbf y}_*}{\varepsilon} \ \text{for some}\ {\mathbf y}_*\in {\mathbb R} $$ and then get in the limit $$ \Omega_\Gamma'' \ \underset{\varepsilon\rightarrow 0}{\longrightarrow} \ \Omega_{\bar{g}}(\overline{{\mathbf y}_1-{\mathbf z}_1''})\wedge \Omega_{\bar{g}}(\overline{y_1^0-z_2''})\wedge \Omega_{\bar{g}}(\overline{y_2^0- z_2''}) $$ where ${\mathbf z}_1''={\mathbf y}_1 + \frac{i}{{\mathbf t}_1}$. We conclude that this boundary strata is isomorphic to $C_{1;1,2}({\mathcal H})\times C_{1;2,1}({\mathcal H})$ and the integral over it factorizes as follows $$ \int_{C_{1;1,2}({\mathcal H})\times C_{1;2,1}({\mathcal H})}\Omega_{\Gamma_1}=-\int_{C_{1;1,2}({\mathcal H})}\Omega_{\Gamma_1} \cdot \int_{C_{1;2,1}({\mathcal H})}\Omega_{\Gamma_2}=-(\Lambda_{\bar{g}}^{(2)})^2, $$ where \begin{equation}\label{7: graphs Ga_1 and Ga_2} \Gamma_1=
\begin{array}{c}\resizebox{11mm}{!}{ \xy
(0,13)*{\circ}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2",
(-8,-2)*{}="c_1", (-2,-2)*{}="c_2", (2,-2)*{}="c_3", \ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "a";"b_2" <0pt> \endxy} \end{array} \ \ \ \ \ \ \ \ \ \ \Gamma_2= \begin{array}{c}\resizebox{11mm}{!}{ \xy
(0,-6)*{\circ}="0",
(0,0)*{\bullet}="a", (-5,5)*{\circ}="b_1", (5,5)*{\circ}="b_2",
(-8,9)*{}="c_1", (-2,9)*{}="c_2", (2,9)*{}="c_3", \ar @{<-} "a";"0" <0pt> \ar @{->} "a";"b_1" <0pt> \ar @{->} "a";"b_2" <0pt> \endxy} \end{array} \end{equation} As expected,
$
\int_{{\partial}\overline{C}(\Gamma)}\Omega_\Gamma= - (\Lambda_{\bar{g}}^{(2)})^2 + (\Lambda_{\bar{g}}^{(2)})^2=0.$
\subsubsection{\bf Example}\label{7: example 2} Consider $$ \Gamma= \begin{array}{c}\resizebox{9mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array}\in
{\mathcal G}_{2;2,2}. $$ The associated $7$-dimensional configuration space $C(\Gamma)$ is given by the same data as in (\ref{7: C(Ga) for the first example}), while the $6$-form $\Omega_\Gamma$ is given by
$$ \Omega_\Gamma:= \Omega_{\bar{g}}(z_1'-x_1^0)\wedge \Omega_{\bar{g}}(z_2'-x_2^0)\wedge \Omega_{\bar{g}}(z'_2-z_1')\wedge\Omega_{\bar{g}}(\overline{z_2''-z_1''})\wedge \Omega_{\bar{g}}(\overline{y_1^0-z_1''})\wedge \Omega_{\bar{g}}(\overline{y_2^0-z_2''}) $$
Let us classify again the boundary strata in ${\partial} \overline{C}(\Gamma)$ which can contribute non-trivially into the vanishing integral $\int_{{\partial}\overline{C}(\Gamma)} \Omega_\Gamma$.
{\sc Case 0}. Consider the boundary configurations in which the internal points stay invariant while (i) $|x_2^0-x_1^0| \rightarrow 0$, or (ii) $|y_2^0-y_1^0|\rightarrow 0$, or (iii) $|x_2^0-x_1^0| \rightarrow +\infty$, or (iv) $|y_2^0-y_1^0|\rightarrow +\infty$. The form $\Omega_\Gamma$ vanishes identically on boundary strata of types (iii) and (iv), while on strata of types (i) and, respectively, (ii) one obtains the integrals $$ \int_{C_{2;2,1}({\mathcal H})} \Omega_{\Gamma_1'}\ \ \text{and}\ \ \int_{C_{2;1,2}({\mathcal H})} \Omega_{\Gamma_2'} \ \ , \ \ \text{where} \ \ \ \ \
\Gamma_1'= \begin{array}{c}\resizebox{9mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (0,2)*{\circ}="b_1", (0,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array}\ \ \ , \ \ \ \Gamma_2'= \begin{array}{c}\resizebox{9mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (0,18)*{\circ}="u_1", (0,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} $$ (which happen to vanish identically --- one can use the standard reflection argument to check this claim which plays no role below).
{\sc Case I} is exactly the same as Case I in the previous example. The boundary stratum is isomorphic to $C_2({\mathbb R}^3)\times C(\Gamma/\Gamma_{V_{int}(\Gamma)})$ and one has $$ \int_{C_2({\mathbb R}^3)\times C(\Gamma/\Gamma_{V_{int}(\Gamma)})}\Omega_\Gamma=\int_{C_2({\mathbb R}^3)}\omega_{\bar{g}} \cdot \int_{C(\Gamma/\Gamma_{V_{int}(\Gamma)})}\Omega_{\Gamma/\Gamma_{V_{int}(\Gamma)}}=(\Lambda_{\bar{g}}^{(2)})^2. $$
{\sc Case II}. Using invariance under the group $G_3$ we can always assume that the point $(x_2,y_2,t_2)$ is fixed at, say, $(0,0,1)$. Thus it remains to consider limit configurations in which the projection $z_1'$ collapses to a point $x_*$ in the boundary $t=0$ of $\overline{{\mathbb H}}'$,
$$ \left(z_1'= x_* + \varepsilon ({\mathbf x}_1 + i{\mathbf t}_1), z_2'=x_2 + i t_2, z_1''={\mathbf y}_1(\varepsilon) + \frac{i}{\varepsilon {\mathbf t}_1}, z_2''= y_2^* +\frac{i}{t_2}\right)\ \ \text{with} \ \varepsilon\rightarrow 0, $$ for some function ${\mathbf y}_1(\varepsilon)$ of the parameter $\varepsilon$. Arguing as in Case II of the previous example, we conclude that for $\Omega_\Gamma$ not to vanish identically we have to assume $$ x_1^0=x_* +\varepsilon {\mathbf x}_1^0 \ , \ {\mathbf y}_1(\varepsilon) = \text{const} + \frac{{\mathbf y}_1}{\varepsilon}\ , \ y_1^0=\text{const}+
\frac{{\mathbf y}_1^0}{\varepsilon}\ \text{for some}\ {\mathbf x}_1^0,{\mathbf y}_1, {\mathbf y}_1^0\in {\mathbb R} $$ so that we get in the limit $$ \lim_{\varepsilon \rightarrow 0} \Omega_\Gamma=-\Omega_{\Gamma_2}\wedge \Omega_{\Gamma_1} $$ where $$ \Omega_{\Gamma_2}:=\Omega_{\bar{g}}({\mathbf z}'_1-{\mathbf x}_1^0)\wedge \Omega_{\bar{g}}(\overline{{\mathbf y}_1^0-{\mathbf z}''_1})\wedge \Omega_{\bar{g}}(\overline{0-{\mathbf z}_1''})\ \text{and}\ \Omega_{\Gamma_1}:=\Omega_{\bar{g}}(\overline{z_2''-x_*})\wedge \wedge\Omega_{\bar{g}}(\overline{z_2''-x_2^0})\wedge \Omega_{\bar{g}}(\overline{y_2^0-z_2''}) $$ are the differential forms associated to the graphs in (\ref{7: graphs Ga_1 and Ga_2}). This boundary stratum is isomorphic to $C_{1;2,1}({\mathcal H})\times C_{1;1;2}({\mathcal H})=C(\Gamma_2)\times C(\Gamma_1)$ and we get $$ \int_{C_{1;2,1}({\mathcal H})\times C_{1;1,2}({\mathcal H})}\Omega_{\Gamma}=-\int_{C(\Gamma_2)}\Omega_{\Gamma_2}\ \cdot \ \int_{C(\Gamma_1)}\Omega_{\Gamma_1}=-(\Lambda_{\bar{g}}^{(2)})^2. $$ \subsubsection{\bf A useful observation}\label{6: useful observation} Notice that the only boundary strata in the above two examples which lie in the fibre of the surjection $$ \pi: \overline{C}(\Gamma)) \longrightarrow C_{2,2}({\mathbb R}\times{\mathbb R}) $$ over a generic point in the base and contributes non-trivially into the integral is the boundary strata of type $I$.
Analyzing similarly the graphs $$ \begin{array}{c}\resizebox{10mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_2", (5,2)*{\circ}="b_1", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} \ \ \ \ \begin{array}{c}\resizebox{10mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_2", (5,18)*{\circ}="u_1",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} \ \ \ \ \begin{array}{c}\resizebox{10mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_2", (5,2)*{\circ}="b_1", (-5,18)*{\circ}="u_2", (5,18)*{\circ}="u_1",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} $$ we obtain the following result for the sum of the push-forwards along the map $ \pi: \overline{C}(\Gamma)\rightarrow C_{2,2}({\mathbb R}\times{\mathbb R}) $ and its boundary version $\pi_{{\partial}}: {\partial} \overline{C}(\Gamma)\rightarrow C_{2,2}({\mathbb R}\times{\mathbb R})$, \begin{eqnarray*} \sum_{\Gamma\in {\mathcal G}_{2;2,2}} \pi_{*{\partial}}\left(\Omega_\Gamma \right)\Gamma&=&
\pi_*(\Omega_{\Gamma_0})\left( \begin{array}{c}\resizebox{8mm}{!}{ \xy
(0,13)*{\bullet}="0",
(0,7)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "a";"b_2" <0pt> \ar @{->} "0";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array}
- \begin{array}{c}\resizebox{8mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} + \begin{array}{c}\resizebox{10mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_2", (5,2)*{\circ}="b_1", (-5,18)*{\circ}="u_1", (5,18)*{\circ}="u_2",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} + \begin{array}{c}\resizebox{8mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,18)*{\circ}="u_2", (5,18)*{\circ}="u_1",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} - \begin{array}{c}\resizebox{8mm}{!}{ \xy
(+3,11)*{\bullet}="0",
(-3,8)*{\bullet}="a", (-5,2)*{\circ}="b_2", (5,2)*{\circ}="b_1", (-5,18)*{\circ}="u_2", (5,18)*{\circ}="u_1",
\ar @{->} "a";"0" <0pt> \ar @{<-} "a";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "a";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt>
\endxy} \end{array} \right) \\ &=& \pi_*(\Omega_{\Gamma_0})\delta \Gamma_0 \end{eqnarray*} where $\delta$ is the differential in ${\mathcal D} \mathcal{L}\mathit{ieb}_\infty$ and $ \Gamma_0=\begin{array}{c}\resizebox{10mm}{!}{ \xy
(0,7)*{\bullet}="0", (-5,2)*{\circ}="b_1", (5,2)*{\circ}="b_2", (-5,12)*{\circ}="u_1", (5,12)*{\circ}="u_2",
\ar @{<-} "0";"b_1" <0pt> \ar @{<-} "0";"b_2" <0pt> \ar @{->} "0";"u_1" <0pt> \ar @{->} "0";"u_2" <0pt> \endxy} \end{array} $.
\subsection{An explicit formula for quantization of Lie bialgebras} Let ${\mathcal G}_{k;m,n}^{(3)}$ be a subset of ${\mathcal G}_{k;m,n}$ consisting of graphs forming a basis of the ${\mathbb S}$-bimodule ${\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}$ (these graphs have, in particular, all their internal vertices $3$-valent).
\subsubsection{\bf Theorem}\label{7: Theorem on f^q} {\em There is a morphism of props \begin{equation}\label{7: explicit morphism f^q} f^{q}: {\mathcal A} ss{\mathcal B} \longrightarrow {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}} \end{equation} given explicitly on the generators of ${\mathcal A} ss{\mathcal B}$ as follows, $$ f^{q}\left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}\right):= \begin{array}{c}\resizebox{8mm}{!}{ \xy (-3,0)*{_1}, (3,0)*{_2},
(0,7)*{\circ}="a", (-3,2)*{\circ}="b_1", (3,2)*{\circ}="b_2",
\endxy}\end{array} \ +\ \sum_{k\geq 1} \sum_{\Gamma \in {\mathcal G}_{k;1,2}^{(3)}} \left(\int_{\overline{C}_{k;1,2}({\mathcal H})}\Omega_\Gamma\right) \Gamma =: \begin{array}{c}\resizebox{8mm}{!}{ \xy (-3,0)*{_1}, (3,0)*{_2},
(0,7)*{\circ}="a", (-3,2)*{\circ}="b_1", (3,2)*{\circ}="b_2",
\endxy}\end{array} \ +\ f^{q}_{\geq 1}\left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}\right) $$ $$ f^{q}\left(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{},
\end{xy}\right):=
\begin{array}{c}\resizebox{8mm}{!}{ \xy (-3,9)*{_1}, (3,9)*{_2},
(0,0)*{\circ}="a", (-3,7)*{\circ}="b_1", (3,7)*{\circ}="b_2",
\endxy}\end{array} \ +\ \sum_{k\geq 1} \sum_{\Gamma \in {\mathcal G}_{k;2,1}^{(3)}} \left(\int_{\overline{C}_{k;2,1}({\mathcal H})}\Omega_\Gamma\right) \Gamma =: \begin{array}{c}\resizebox{8mm}{!}{ \xy (-3,9)*{_1}, (3,9)*{_2},
(0,0)*{\circ}="a", (-3,7)*{\circ}="b_1", (3,7)*{\circ}="b_2",
\endxy}\end{array} \ +\ f^{q}_{\geq 1}\left(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{},
\end{xy}\right) $$ where the differential form $\Omega_\Gamma$ is defined in (\ref{1> Def of Omega_Ga}).}
\begin{proof} If $\Gamma\in {\mathcal G}_{k;m,n}^{(3)}$ with $m+n=4$ then $\deg \Omega_\Gamma=3k=\dim \overline{C}_{k;m,n}({\mathcal H}) -1$ so that it makes sense to apply the Stokes theorem to the vanishing differential form $d\Omega_\Gamma$, \begin{equation}\label{6: Stokes for Omega_Ga trivalent} 0=\int_{ \overline{C}_{k;m,n}({\mathcal H})} d\Omega_\Gamma=\int_{{\partial} \overline{C}_{k;m,n}({\mathcal H})} \Omega_\Gamma, \ \ \ m+n=4, m,n\geq 1. \end{equation} We claim that each of the following equations has the indicated implication: \begin{itemize} \item[(i)] $ 0=\sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;1,3}^{(3)}} \int_{{\partial} \overline{C}_{k;1,3}({\mathcal H})} \Omega_\Gamma \Gamma$ implies that $f^{q}$ respects the first (associativity) relations in (\ref{2: bialgebra relations}), \item[(ii)] $ 0=\sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;3,1}^{(3)}} \int_{{\partial} \overline{C}_{k;3,1}({\mathcal H})} \Omega_\Gamma \Gamma$ implies that $f^{q}$ respects the second (co-associativity) relations in (\ref{2: bialgebra relations}), \item[(iii)] $ 0=\sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}} \int_{{\partial} \overline{C}_{k;2,2}({\mathcal H})} \Omega_\Gamma \Gamma$ implies that $f^{q}$ respects the third (compatibility) relations in (\ref{2: bialgebra relations}). \end{itemize}
We give the proof of the most difficult step (iii); the proofs of the first two steps (i) and (ii) are analogous.
Let us classify all the boundary strata on which the differential forms $\Omega_\Gamma$ do not vanish identically. Notice that the product $|x_2^0-x_1^0||y_2^0-y_1^0|$ can behave in one of the following ways on the codimension 1 boundary configurations: \begin{itemize}
\item[I:] the value $|x_2^0-x_1^0||y_2^0-y_1^0|$ stays finite; \item[II:]
$|x_2^0-x_1^0|\rightarrow 0$ while $|y_2^0-y_1^0|$ stays finite, or
$|y_2^0-y_1^0|\rightarrow 0$ while $|x_2^0-x_1^0|$ stays finite; \item[III:]
$|y_2^0-y_1^0|\rightarrow +\infty$ while $|x_2^0-x_1^0|$ stays finite, or
$|x_2^0-x_1^0|\rightarrow +\infty$ while $|y_2^0-y_1^0|$ stays finite. \end{itemize} Let us consider each case separately.
{\bf Case I} corresponds to the boundary strata --- which we denote by ${\partial}_I\overline{C}_{k;2,2}({\mathcal H})\subset {\partial}\overline{C}_{k;2,2}({\mathcal H})$ --- in which several internal points collapse into an internal point (see examples in \S {\ref{6: useful observation}}). By Proposition {\ref{3: Prop on Upsilon^om_g}} for the case $d=3$ the following sum $$ \sum_{k\geq 0} \sum_{\Gamma\in {\mathcal G}^{(3)}_{k;2,2}} \left(\int_{{\partial}_I \overline{C}_{k;2,2}({\mathcal H})}\Omega_\Gamma\right) \Gamma =
\sum_{s:[2]\rightarrow V(\Gamma)\atop \hat{s}:[2]\rightarrow V(\Gamma)}
\begin{array}{c}\resizebox{9mm}{!} {\xy
(-6,7)*{^1}, (7,7)*{^2}, (7,-8)*{_2}, (-6,-8)*{_1},
(0,0)*+{\Gamma^{\omega_{\bar{g}}}}="o", (-6,6)*{\circ}="1", (6,6)*{\circ}="4", (-6,-6)*{\circ}="5", (6,-6)*{\circ}="8",
\ar @{-} "o";"1" <0pt> \ar @{-} "o";"4" <0pt> \ar @{-} "o";"5" <0pt> \ar @{-} "o";"8" <0pt> \endxy}\end{array} \equiv 0 $$ gives an identically vanishing element in $ {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}$ (here the sum is taken over all possible ways of attaching four legs to the MC element $\Gamma^{\omega_{\bar{g}}}$ and setting to zero every graph which has at least one non-trivalent internal vertex or an internal vertex with no at least ingoing half-edge and at least one outgoing half-edge). Hence we can skip type I boundary strata in equation (\ref{6: Stokes for Omega_Ga trivalent}).
{\bf Case II}. Denote the associated boundary strata by ${\partial}_{II}\overline{C}_{k;2,2}({\mathcal H})$. If, for example, we consider a limit configuration with $|x_2^0-x_1^0|\rightarrow 0$ but $|y_2^0-y_1^0|$ finite, then the boundary points $x_1^0$, $x_2^0$ and, perhaps, some (possibly empty) subset $I\subset V_{int}(\Gamma)$ of internal points tend in the limit $\varepsilon\rightarrow 0$ to a point $x_*\in {\mathbf X}$, \begin{eqnarray*} z_i'&=& x_*+ \varepsilon({\mathbf x}_i + i{\mathbf t}_i), \ \ \ z_i''= y_i(\varepsilon) + \frac{i}{\varepsilon{\mathbf t}_i}\ , \ \ i\in I,\\ x_1^0&=& x_* + \varepsilon {\mathbf x}_1^0\\ x_2^0&=& x_* + \varepsilon {\mathbf x}_2^0 \end{eqnarray*} for some functions $y_i(\varepsilon)$ of the parameter $\varepsilon$ (it is easy to see that if $I\neq \emptyset$, then the differential form $\Omega_\Gamma$ has a chance not to vanish identically on such a boundary stratum if and only if $y_i(\varepsilon)\simeq \frac{{\mathbf y}_i}{\varepsilon}\ \text{as}\ \varepsilon\rightarrow 0$ for some ${\mathbf y}_i\in {\mathbb R}$).
Consider (as an elementary illustration) the special case $I=\emptyset$ (and denote the associated stratum in ${\partial}_{II}\overline{C}_{k;2,2}({\mathcal H})$ by ${\partial}_{II\emptyset}\overline{C}_{k;2,2}({\mathcal H})$). It is clear that in this case we have $$ \sum_{k\geq 1} \sum_{\Gamma \in {\mathcal G}_{k;2,2}^{(3)}} \left(\int_{{\partial}_{II\emptyset}C(\Gamma)}\Omega_\Gamma\right)\Gamma= -\frac{f^{q}_{\geq 1}\left(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}\right)}{\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}} $$
An analogue of this formula in the case $|y_2^0-y_1^0|\rightarrow 0$ while $|x_2^0-x_1^0|$ stays finite and no internal vertices collapse to the line $\mathbf Y$ would be of course the following one $$ \sum_{k\geq 1} \sum_{\Gamma \in {\mathcal G}_{k;2,2}^{(3)}} \left(\int_{{\partial}_{IIa}C(\Gamma)}\Omega_\Gamma\right)\Gamma= -\frac{\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}}{f^q_{\geq 1}\left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}\right)} $$ where we use fraction type notation for prop compositions introduced in \cite{Ma1} e.g.\ $$ \frac{\begin{array}{c} \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}
\ \\ \ \end{array}}{\begin{array}{c}\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\end{array}}:= \begin{array}{c}\begin{xy}
<0mm,2.47mm>*{};<0mm,-0.5mm>*{}**@{.},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{.},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{.},
<0mm,3mm>*{\circ};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <0mm,-0.8mm>*{};<-2.2mm,-3.5mm>*{}**@{.},
<0mm,-0.8mm>*{};<2.2mm,-3.5mm>*{}**@{.}, \end{xy}\end{array} \ \ \ \ \ , \ \ \ \ \ \ \ \begin{array}{c}\resizebox{12mm}{!}{\xy (-10,0)*{}="1L", (10,0)*{}="1R",
(4,10)*{}="0",
(4,6)*{\circ}="a", (1,2)*{}="u_1", (7,2)*{}="u_2",
(-4,10)*{}="0'",
(-4,6)*{\circ}="a'", (-1,2)*{}="u_1'", (-7,2)*{}="u_2'",
(-1,-2)*{}="du1", (-7,-2)*{}="du2", (-4,-6)*{\circ}="vu",
(-4,-10)*{}="vd",
(4,-10)*{}="xd",
(4,-6)*{\circ}="x", (1,-2)*{}="x_1", (7,-2)*{}="x_2",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"u_1" <0pt> \ar @{.} "a";"u_2" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"u_1'" <0pt> \ar @{.} "a'";"u_2'" <0pt>
\ar @{.} "vd";"vu" <0pt> \ar @{.} "vu";"du1" <0pt> \ar @{.} "vu";"du2" <0pt>
\ar @{.} "x";"xd" <0pt> \ar @{.} "x";"x_1" <0pt> \ar @{.} "x";"x_2" <0pt>
\ar @{-} "1L";"1R" <0pt> \endxy}\end{array} := \begin{array}{c}\resizebox{10mm}{!}{\begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{.},
<-0.5mm,0.5mm>*{};<-3mm,2mm>*{}**@{.},
<-3mm,2mm>*{};<0mm,4mm>*{}**@{.},
<0mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<0mm,7.4mm>*{}**@{.}, <0mm,0mm>*{};<2.2mm,1.5mm>*{}**@{.},
<6mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<6mm,4mm>*{};<3.8mm,2.5mm>*{}**@{.},
<6mm,4mm>*{};<6mm,7.4mm>*{}**@{.},
<6mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<6mm,0mm>*{}**@{.}, <6mm,4mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<6mm,-3mm>*{}**@{.},
\end{xy}}\end{array} $$
The general case is no more difficult. Let $J:=V_{int}(\Gamma)\setminus I$ be the complementary subset corresponding to points which have ${\mathbb H}'$-projections {\em not}\, tending to $x_*$ as $\varepsilon\rightarrow 0$. We can represent each graph $\Gamma$ in the sum $$ \sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}} \int_{{\partial}_{II} \overline{C}_{k;2,2}({\mathcal H})} \Omega_\Gamma \Gamma $$
in the form
$$ \Gamma= \begin{array}{c}\resizebox{23mm}{!}{ \xy
(0,28.5)*{_{J}}, (-5,32)*{}="ulUL", (5,32)*{}="urUL", (-5,25)*{}="dlUL", (5,25)*{}="drUL",
(0,11.5)*{_{I}}, (-5,15)*{}="ulDL", (5,15)*{}="urDL", (-5,8)*{}="dlDL", (5,8)*{}="drDL",
(-2,32)*{}="UL1L", (2,32)*{}="UL1R", (0,25)*{}="UL2", (-5,25)*{}="UL2L", (5,25)*{}="UL2R",
(0,15)*{}="DL1", (-5,15)*{}="DL1L", (5,15)*{}="DL1R", (-2,8)*{}="DL2L", (2,8)*{}="DL2R",
(-15,0)*{\circ}="b_1", (15,0)*{\circ}="b_2", (-15,40)*{\circ}="u_1", (15,40)*{\circ}="u_2",
\ar @{-} "ulUL";"urUL" <0pt> \ar @{-} "ulUL";"dlUL" <0pt> \ar @{-} "dlUL";"drUL" <0pt> \ar @{-} "drUL";"urUL" <0pt>
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{=>} "UL1L";"u_1" <0pt> \ar @{=>} "DL1L";"u_1" <0pt> \ar @{=>} "DL1R";"u_2" <0pt> \ar @{=>} "DL1";"UL2" <0pt> \ar @{=>} "b_1";"DL2L" <0pt> \ar @{=>} "b_1";"UL2L" <0pt> \ar @{=>} "b_2";"UL2R" <0pt>
\ar @{=>} "UL1R";"u_2" <0pt> \ar @{=>} "b_2";"DL2R" <0pt> \endxy} \end{array} $$ where directed double edges stand for (possibly empty) sets of directed edges. Let $\Gamma'$ (resp., $\Gamma''$ ) be the element of ${\mathcal D} \widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}(1,2)$ (resp., of $\in {\mathcal D} \widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}(2,1)$ defined as the complete subgraph of $\Gamma$ spanned by vertices from the set $I$ (resp., $J$), together with {\em all}\, edges attached to this set, $$ \Gamma'= \begin{array}{c}\resizebox{18mm}{!}{ \xy
(0,11.5)*{_{I}}, (-5,15)*{}="ulDL", (5,15)*{}="urDL", (-5,8)*{}="dlDL", (5,8)*{}="drDL",
(0,25)*{\circ}="UL2", (0,15)*{}="DL1", (-5,15)*{}="DL1L", (5,15)*{}="DL1R", (-2,8)*{}="DL2L", (2,8)*{}="DL2R",
(-12,0)*{\circ}="b_1", (12,0)*{\circ}="b_2",
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{=>} "DL1";"UL2" <0pt> \ar @{=>} "b_1";"DL2L" <0pt>
\ar @{=>} "b_2";"DL2R" <0pt> \endxy} \end{array}, \ \ \ \ \ \ \Gamma''= \begin{array}{c}\resizebox{18mm}{!}{ \xy
(0,-11.5)*{_{J}}, (-5,-15)*{}="ulDL", (5,-15)*{}="urDL", (-5,-8)*{}="dlDL", (5,-8)*{}="drDL",
(0,-25)*{\circ}="UL2", (0,-15)*{}="DL1", (-5,-15)*{}="DL1L", (5,-15)*{}="DL1R", (-2,-8)*{}="DL2L", (2,-8)*{}="DL2R",
(-12,0)*{\circ}="b_1", (12,0)*{\circ}="b_2",
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{<=} "DL1";"UL2" <0pt> \ar @{<=} "b_1";"DL2L" <0pt>
\ar @{<=} "b_2";"DL2R" <0pt> \endxy} \end{array} $$ Note that out-legs in $\Gamma'$ are formed by three types of edges in $\Gamma$ (and denoted in $\Gamma$ by three different double arrows), the ones which connect vertices of $I$ to the left out-vertex, to the vertices of $J$, and to the right out-vertex. Similarly, the set of in-legs of $\Gamma''$ encompasses three different double arrows in $\Gamma$. Many different graphs $\Gamma$ produce {\em identical}\, associated graphs $\Gamma'$ and $\Gamma''$ and it is easy to describe this family --- it is precisely the set of non-vanishing summands in the prop composition $\Gamma''\ _1\circ_1 \Gamma'$! As
$$
\Omega_\Gamma|_{{\partial}_{II}\overline{C}_{k;2,2}({\mathcal H})}=\lim_{\varepsilon \rightarrow 0} \Omega_\Gamma = \Omega_{\Gamma'}\wedge \Omega_{\Gamma''}, $$ we finally get \begin{eqnarray*} -\sum_{k\geq 0} \sum_{\Gamma\in {\mathcal G}^{(3)}_{k;2,2}}\int_{{\partial}_{II} \overline{C}_{k;2,2}({\mathcal H})} \Omega_\Gamma\, \Gamma &=&\sum_{k',k''\geq 0} \sum_{\Gamma'\in {\mathcal G}^{(3)}_{k';1,2}} \sum_{\Gamma''\in {\mathcal G}^{(3)}_{k'';2,1}}\left(\int_{\overline{C}_{k';1,2}({\mathcal H})} \Omega_{\Gamma'}\right) \cdot \left(\int_{\overline{C}_{k'';2,1}({\mathcal H})} \Omega_{\Gamma''}\right) \Gamma''\ _1\circ_1\ \Gamma'\\ &=& f^{q}\left(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0.5mm,0.5mm>*{};<2.7mm,2.8mm>*{^2}**@{},
<-0.48mm,0.48mm>*{};<-2.7mm,2.8mm>*{^1}**@{},
\end{xy}\right) \ _1\circ_1\ f^{q}\left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0.39mm,-0.39mm>*{};<2.9mm,-4mm>*{^2}**@{},
<-0.35mm,-0.35mm>*{};<-2.8mm,-4mm>*{^1}**@{}, \end{xy}\right) \\ &=& f^{q}\left(\begin{array}{c}\begin{xy}
<0mm,2.47mm>*{};<0mm,-0.5mm>*{}**@{.},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{.},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{.},
<0mm,3mm>*{\circ};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <0mm,-0.8mm>*{};<-2.2mm,-3.5mm>*{}**@{.},
<0mm,-0.8mm>*{};<2.2mm,-3.5mm>*{}**@{.}, \end{xy}\end{array}\right). \end{eqnarray*}
{\bf Case III}.
Denote the associated boundary strata by ${\partial}_{III}\overline{C}_{k;2,2}({\mathcal H})$, and consider for concreteness the limit configuration with $|y_2^0-y_1^0|\rightarrow +\infty$ and $x_2^0$, $x_1^0$ staying constant (the other subcase can be treated similarly). In general a (possibly empty) subset $I_1\subset V_{int}(\Gamma)$ (resp., $I_2$) can collapse to the boundary point $x_1^0$ (resp., $x_2^0$), and a (possibly empty) subset $K_1\subset V_{int}(\Gamma)$ (resp., $K_2$) can tend as $\varepsilon \rightarrow 0$ to the boundary point $y_1^0$ (resp., $y_2^0$), \begin{eqnarray*} z_{i_1}'&=& x_1^0+ \varepsilon({\mathbf x}_{i_1} + i{\mathbf t}_{i_1}), \ \ \ z_{i_1}''= y_{i_1}(\varepsilon) + \frac{i}{\varepsilon{\mathbf t}_{i_1}} \ , \ \ i_1\in I_1,\\ z_{i_2}'&=& x_2^0+ \varepsilon({\mathbf x}_{i_2} + i{\mathbf t}_{i_2}), \ \ \ z_{i_2}''= y_{i_2}(\varepsilon) + \frac{i}{\varepsilon{\mathbf t}_{i_2}} \ , \ \ i_2\in I_2,\\ y_1^0&=& \frac{{\mathbf y}_1^0}{\varepsilon}\ \ , \ \ \ \ y_2^0= \frac{{\mathbf y}_2^0}{\varepsilon}\\ z_{k_1}'&=& x_{k_1}+ \frac{i{\mathbf t}_{k_1}}{\varepsilon}, \ \ \ z_{k_1}''= \frac{{\mathbf y}_1^0}{\varepsilon} + \varepsilon( \Delta {\mathbf y}_1^0 + \frac{i}{{\mathbf t}_{k_1}}) \ , \ \ k_1\in K_1,\\ z_{k_2}'&=& x_{k_2}+ \frac{i{\mathbf t}_{k_2}}{\varepsilon}, \ \ \ z_{k_2}''= \frac{{\mathbf y}_2^0}{\varepsilon} + \varepsilon( \Delta {\mathbf y}_2^0 + \frac{i}{{\mathbf t}_{k_2}}) \ , \ \ k_2\in K_2,\\ z'_j&=&x_j + it_j\ , \ \ z''_j=y_j(\varepsilon) + \frac{i}{t_j}\ ,\ \ j\in J:=V_{int}(\Gamma)\setminus I_1\sqcup I_2 \sqcup K_1 \sqcup K_2 \end{eqnarray*} for some functions $y_\bullet(\varepsilon)$ of the parameter $\varepsilon$ (which we have yet to understand) and some arbitrary constants in bold letters.
We claim that it is enough to consider the case when the sets $K_1$ and $K_2$ are both empty. Indeed, if at least one of the sets, say $K_1$, is not empty, then it has a vertex $k\in K_1$ connected by an edge to a vertex $i$ in the set $J\sqcup I_1\sqcup I_2\sqcup \{x_1^0\}\sqcup \{x_2^0\}$
which contributes to the form $\Omega_\Gamma$ the factor $$ \lim_{\varepsilon\rightarrow 0}dArg\left({z_k'-z_i'}\right)= \lim_{\varepsilon\rightarrow 0}dArg\left({x_{k} + \frac{i{\mathbf t}_{k}}{\varepsilon} - z_i'}\right)= 0. $$
Hence, for $\Omega_\Gamma|_{{\partial}_{III}\overline{C}_{k;2,2}({\mathcal H})}$ not to vanish identically, we can assume that $\Gamma$ has the form $$ \Gamma= \begin{array}{c}\resizebox{25mm}{!}{ \xy
(-15,37)*{_{a_1}}, (15,37)*{_{a_2}}, (-15,3)*{_{c_1}}, (15,3)*{_{c_2}},
(-7,20)*{_{b_1}}, (7,20)*{_{b_2}},
(0,28.5)*{_{J}}, (-15,32)*{}="ulUL", (-5,32)*{}="urUL", (-15,25)*{}="dlUL", (-5,25)*{}="drUL",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(15,32)*{}="ulUR", (5,32)*{}="urUR", (15,25)*{}="dlUR", (5,25)*{}="drUR",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,11.5)*{_{I_2}}, (15,15)*{}="ulDR", (5,15)*{}="urDR", (15,8)*{}="dlDR", (5,8)*{}="drDR",
(-10,32)*{}="UL1", (-10,25)*{}="UL2", (-10,15)*{}="DL1", (-10,8)*{}="DL2",
(10,32)*{}="UR1", (10,25)*{}="UR2", (10,15)*{}="DR1", (10,8)*{}="DR2",
(-15,0)*{\circ}="b_1", (15,0)*{\circ}="b_2", (-15,40)*{\circ}="u_1", (15,40)*{\circ}="u_2",
\ar @{-} "ulUL";"urUR" <0pt> \ar @{-} "ulUL";"dlUL" <0pt> \ar @{-} "dlUL";"drUR" <0pt>
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{-} "ulUR";"urUR" <0pt> \ar @{-} "ulUR";"dlUR" <0pt> \ar @{-} "dlUR";"drUR" <0pt>
\ar @{-} "ulDR";"urDR" <0pt> \ar @{-} "ulDR";"dlDR" <0pt> \ar @{-} "dlDR";"drDR" <0pt> \ar @{-} "drDR";"urDR" <0pt>
\ar @{=>} "UL1";"u_1" <0pt>
\ar @{=>} "DL1";"UL2" <0pt> \ar @{=>} "b_1";"DL2" <0pt>
\ar @{=>} "UR1";"u_2" <0pt> \ar @{=>} "DR1";"UR2" <0pt>
\ar @{=>} "b_2";"DR2" <0pt>
\endxy} \end{array} $$ where some edges ingoing into a box can continue as outgoing edges without ``hitting'' an internal vertex inside the box. Note that no edge can connect a vertex $i_1$ from $I_1$ to a vertex $i_2$ from $I_2$ as otherwise the differential form $\Omega_\Gamma$ vanishes identically in the limit $\varepsilon\rightarrow 0$ due to the presence of the factor $$ \lim_{\varepsilon\rightarrow 0} dArg(z_{i_1}' - z_{i_2}')=dArg(x_1^0-x_2^0)=0. $$
If the set $J$ is empty, then $\Gamma$ takes the form $$ \Gamma= \begin{array}{c}\resizebox{25mm}{!}{ \xy
(-15,32)*{}="ulUL", (-5,32)*{}="urUL", (-15,25)*{}="dlUL", (-5,25)*{}="drUL",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(15,32)*{}="ulUR", (5,32)*{}="urUR", (15,25)*{}="dlUR", (5,25)*{}="drUR",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,11.5)*{_{I_2}}, (15,15)*{}="ulDR", (5,15)*{}="urDR", (15,8)*{}="dlDR", (5,8)*{}="drDR",
(-10,32)*{}="UL1", (-10,25)*{}="UL2", (-10,15)*{}="DL1", (-10,8)*{}="DL2",
(10,32)*{}="UR1", (10,25)*{}="UR2", (10,15)*{}="DR1", (10,8)*{}="DR2",
(-15,0)*{\circ}="b_1", (15,0)*{\circ}="b_2", (-15,27)*{\circ}="u_1", (15,27)*{\circ}="u_2",
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{-} "ulDR";"urDR" <0pt> \ar @{-} "ulDR";"dlDR" <0pt> \ar @{-} "dlDR";"drDR" <0pt> \ar @{-} "drDR";"urDR" <0pt>
\ar @{=>} "DL1";"u_1" <0pt> \ar @{=>} "DL1";"u_2" <0pt> \ar @{=>} "b_1";"DL2" <0pt> \ar @{=>} "DR1";"u_2" <0pt> \ar @{=>} "DR1";"u_1" <0pt> \ar @{=>} "b_2";"DR2" <0pt> \endxy} \end{array} $$ Let $G_{k;2,2}\subset {\mathcal G}_{k;2,2}^{(3)}$ be the subset of graphs of this special form with $k$ internal vertices. It is clear that $$ \sum_{k\geq 0} \sum_{\Gamma \in G_{k;2,2}} \left(\int_{{\partial}_{III}C(\Gamma)}\Omega_\Gamma\right)= \frac{\begin{array}{c}\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\end{array} \begin{array}{c}\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\end{array}} {f^q\left(\begin{array}{c} \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}\end{array}\right) f^q\left( \begin{array}{c} \begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}\end{array}\right)} $$
Consider next the more general case $J\neq \emptyset$. Let $J_1\subset J$
(resp., $J_2\subset J$) be the subset of vertices which can be connected by a directed path of edges to the out-vertex $y_1^0$ (resp., $y_2^0$). At least one of the sets $J_1$ and $J_2$ is non-empty. It is easy to see that for $\Omega_\Gamma|_{{\partial}_{III}\overline{C}_{k;2,2}({\mathcal H})}$ not to vanish identically, the functions $y_{j_1}(\varepsilon)$ and $y_{j_2}(\varepsilon)$ in the formulae above must be, as $\varepsilon\rightarrow 0$, of the form $$ y_{j_1}(\varepsilon)=\frac{{\mathbf y}_1^0}{\varepsilon} + {\mathbf y}_{j_1}, \ \ \ \ \ y_{j_2}(\varepsilon)=\frac{{\mathbf y}_2^0}{\varepsilon} + {\mathbf y}_{j_2}, \ \ \ \forall j_1\in J_1,\ \forall j_2\in J_2, $$ for some constants ${\mathbf y}_{j_1}$ and ${\mathbf y}_{j_2}$. In particular,
$J_1\cap J_2=\emptyset$, so that for $\Omega_\Gamma|_{{\partial}_{III}\overline{C}_{k;2,2}({\mathcal H})}$ not to vanish identically, the graph $\Gamma$ must be of the form $$ \Gamma= \begin{array}{c}\resizebox{23mm}{!}{ \xy
(-10,28.5)*{_{J_1}}, (-15,32)*{}="ulUL", (-5,32)*{}="urUL", (-15,25)*{}="dlUL", (-5,25)*{}="drUL",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,28.5)*{_{J_2}}, (15,32)*{}="ulUR", (5,32)*{}="urUR", (15,25)*{}="dlUR", (5,25)*{}="drUR",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,11.5)*{_{I_2}}, (15,15)*{}="ulDR", (5,15)*{}="urDR", (15,8)*{}="dlDR", (5,8)*{}="drDR",
(-10,32)*{}="UL1", (-10,25)*{}="UL2", (-10,15)*{}="DL1", (-10,8)*{}="DL2",
(10,32)*{}="UR1", (10,25)*{}="UR2", (10,15)*{}="DR1", (10,8)*{}="DR2",
(-15,0)*{\circ}="b_1", (15,0)*{\circ}="b_2", (-15,40)*{\circ}="u_1", (15,40)*{\circ}="u_2",
\ar @{-} "ulUL";"urUL" <0pt> \ar @{-} "ulUL";"dlUL" <0pt> \ar @{-} "dlUL";"drUL" <0pt> \ar @{-} "drUL";"urUL" <0pt>
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{-} "ulUR";"urUR" <0pt> \ar @{-} "ulUR";"dlUR" <0pt> \ar @{-} "dlUR";"drUR" <0pt> \ar @{-} "drUR";"urUR" <0pt>
\ar @{-} "ulDR";"urDR" <0pt> \ar @{-} "ulDR";"dlDR" <0pt> \ar @{-} "dlDR";"drDR" <0pt> \ar @{-} "drDR";"urDR" <0pt>
\ar @{=>} "UL1";"u_1" <0pt>
\ar @{=>} "DL1";"UL2" <0pt> \ar @{=>} "b_1";"DL2" <0pt>
\ar @{=>} "UR1";"u_2" <0pt> \ar @{=>} "DR1";"UR2" <0pt>
\ar @{=>} "b_2";"DR2" <0pt> \ar @{=>} "DR1";"UL2" <0pt> \ar @{=>} "DL1";"UR2" <0pt> \endxy} \end{array} $$ where some edges ingoing into a box can continue as outgoing edges without ``hitting" an internal vertex inside the box (note that some of sets $I_1$, $I_2$, $J_1$ and $J_i$ can be empty!).
If $\Gamma$ is a disjoint union of two graphs, say $\Gamma_1$ and $\Gamma_2$, from ${\mathcal G}^{or}_{n;1,1}$, i.e.\ if it has one of the following two structures, $$ \Gamma= \begin{array}{c}\resizebox{22mm}{!}{ \xy
(-10,28.5)*{_{J_1}}, (-15,32)*{}="ulUL", (-5,32)*{}="urUL", (-15,25)*{}="dlUL", (-5,25)*{}="drUL",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,28.5)*{_{J_2}}, (15,32)*{}="ulUR", (5,32)*{}="urUR", (15,25)*{}="dlUR", (5,25)*{}="drUR",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,11.5)*{_{I_2}}, (15,15)*{}="ulDR", (5,15)*{}="urDR", (15,8)*{}="dlDR", (5,8)*{}="drDR",
(-10,32)*{}="UL1", (-10,25)*{}="UL2", (-10,15)*{}="DL1", (-10,8)*{}="DL2",
(10,32)*{}="UR1", (10,25)*{}="UR2", (10,15)*{}="DR1", (10,8)*{}="DR2",
(-15,0)*{\circ}="b_1", (15,0)*{\circ}="b_2", (-15,40)*{\circ}="u_1", (15,40)*{\circ}="u_2",
\ar @{-} "ulUL";"urUL" <0pt> \ar @{-} "ulUL";"dlUL" <0pt> \ar @{-} "dlUL";"drUL" <0pt> \ar @{-} "drUL";"urUL" <0pt>
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{-} "ulUR";"urUR" <0pt> \ar @{-} "ulUR";"dlUR" <0pt> \ar @{-} "dlUR";"drUR" <0pt> \ar @{-} "drUR";"urUR" <0pt>
\ar @{-} "ulDR";"urDR" <0pt> \ar @{-} "ulDR";"dlDR" <0pt> \ar @{-} "dlDR";"drDR" <0pt> \ar @{-} "drDR";"urDR" <0pt>
\ar @{=>} "UL1";"u_1" <0pt>
\ar @{=>} "DL1";"UL2" <0pt> \ar @{=>} "b_1";"DL2" <0pt>
\ar @{=>} "UR1";"u_2" <0pt> \ar @{=>} "DR1";"UR2" <0pt>
\ar @{=>} "b_2";"DR2" <0pt>
\endxy} \end{array} \ \ \ \ \ \ \ \ \mbox{or}\ \ \ \ \ \ \ \ \Gamma= \begin{array}{c}\resizebox{22mm}{!}{ \xy
(-10,28.5)*{_{J_1}}, (-15,32)*{}="ulUL", (-5,32)*{}="urUL", (-15,25)*{}="dlUL", (-5,25)*{}="drUL",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,28.5)*{_{J_2}}, (15,32)*{}="ulUR", (5,32)*{}="urUR", (15,25)*{}="dlUR", (5,25)*{}="drUR",
(-10,11.5)*{_{I_1}}, (-15,15)*{}="ulDL", (-5,15)*{}="urDL", (-15,8)*{}="dlDL", (-5,8)*{}="drDL",
(10,11.5)*{_{I_2}}, (15,15)*{}="ulDR", (5,15)*{}="urDR", (15,8)*{}="dlDR", (5,8)*{}="drDR",
(-10,32)*{}="UL1", (-10,25)*{}="UL2", (-10,15)*{}="DL1", (-10,8)*{}="DL2",
(10,32)*{}="UR1", (10,25)*{}="UR2", (10,15)*{}="DR1", (10,8)*{}="DR2",
(-15,0)*{\circ}="b_1", (15,0)*{\circ}="b_2", (-15,40)*{\circ}="u_1", (15,40)*{\circ}="u_2",
\ar @{-} "ulUL";"urUL" <0pt> \ar @{-} "ulUL";"dlUL" <0pt> \ar @{-} "dlUL";"drUL" <0pt> \ar @{-} "drUL";"urUL" <0pt>
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{-} "ulUR";"urUR" <0pt> \ar @{-} "ulUR";"dlUR" <0pt> \ar @{-} "dlUR";"drUR" <0pt> \ar @{-} "drUR";"urUR" <0pt>
\ar @{-} "ulDR";"urDR" <0pt> \ar @{-} "ulDR";"dlDR" <0pt> \ar @{-} "dlDR";"drDR" <0pt> \ar @{-} "drDR";"urDR" <0pt>
\ar @{=>} "UL1";"u_1" <0pt>
\ar @{=>} "b_1";"DL2" <0pt>
\ar @{=>} "UR1";"u_2" <0pt>
\ar @{=>} "b_2";"DR2" <0pt> \ar @{=>} "DR1";"UL2" <0pt> \ar @{=>} "DL1";"UR2" <0pt> \endxy} \end{array} $$
then $\Omega_{\Gamma}|_{{\partial}\overline{C}_{k;2,2}({\mathcal H})}=0$ because of the following
{\sc Claim.} {\em For any $\Gamma\in {\mathcal G}_{n;1,1}$ the associated integral $$ \int_{C_{n;1,1}({\mathcal H})}\Omega_{\Gamma} $$ vanishes.} Indeed, let $l'$ be the number of in-legs of $\Gamma$, $l''$ the number of out-legs, and $k$ the number of internal edges. The integral $\int_{C_{n;1,1}({\mathcal H})}\Omega_{\Gamma}$ can be non-zero if and only if $\Omega_\Gamma$ has top degree, i.e.\ if and only if $$ 3n-3+2= 2k+ l'+l'' $$ On the other hand, as every internal vertex of $\Gamma$ is at least trivalent, one must have $$ 2k+l'+l''\geq 3n $$ These two conditions are incompatible, since $3n-1<3n$, which proves the {\sc Claim}.
Combining all the above observations, we conclude that $\Omega_{\Gamma}|_{{\partial}_{III}\overline{C}_{k;2,2}({\mathcal H})}$ can be non-vanishing only on the boundary strata of the form $$ {\partial}_{I_1,I_2,J_1,J_2}\overline{C}_{k;2,2}({\mathcal H}):=\overline{C}_{\# I_1;2,1} \times \overline{C}_{\# I_2;2,1} \times \overline{C}_{\# J_1;1,2} \times \overline{C}_{\# J_2;1,2} $$ and $$
\Omega_{\Gamma}|_{{\partial}_{I_1,I_2,J_1,J_2} \overline{{\mathfrak C}}(\Gamma)}=\Omega_{\Gamma_{I_1}}\wedge \Omega_{\Gamma_{I_2}}\wedge \Omega_{\Gamma_{J_1}}\wedge \Omega_{\Gamma_{J_2}} $$ where the graphs $\Gamma_{I_i}$ and $\Gamma_{J_i}$ , $i=1,2$, are given by, $$ \Gamma_{I_i}=\begin{array}{c}\resizebox{14mm}{!}{ \xy
(0,11.5)*{_{I_i}}, (-5,15)*{}="ulDL", (5,15)*{}="urDL", (-5,8)*{}="dlDL", (5,8)*{}="drDL",
(0,15)*{}="DL1", (0,8)*{}="DL2",
(0,0)*{\circ}="b_1", (-8,25)*{\circ}="u_1", (8,25)*{\circ}="u_2",
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{=>} "DL1";"u_1" <0pt> \ar @{=>} "DL1";"u_2" <0pt> \ar @{=>} "b_1";"DL2" <0pt>
\endxy} \end{array}\in {\mathcal G}^{(3)}_{\# I_i;2,1}, \ \ \ \ \ \ \ \ \ \ \Gamma_{J_i}=\begin{array}{c}\resizebox{14mm}{!}{ \xy
(0,11.5)*{_{J_i}}, (-5,15)*{}="ulDL", (5,15)*{}="urDL", (-5,8)*{}="dlDL", (5,8)*{}="drDL",
(0,15)*{}="DL1", (0,8)*{}="DL2",
(0,25)*{\circ}="u_1", (-8,0)*{\circ}="b_1", (8,0)*{\circ}="b_2",
\ar @{-} "ulDL";"urDL" <0pt> \ar @{-} "ulDL";"dlDL" <0pt> \ar @{-} "dlDL";"drDL" <0pt> \ar @{-} "drDL";"urDL" <0pt>
\ar @{=>} "DL1";"u_1" <0pt> \ar @{=>} "b_2";"DL2" <0pt> \ar @{=>} "b_1";"DL2" <0pt>
\endxy} \end{array}\in {\mathcal G}^{(3)}_{\# J_i;1,2}. $$ Note that if $I_i$, respectively $J_i$, is empty, then we have to set $$ \Gamma_{I_i}= \begin{array}{c}\resizebox{9mm}{!}{ \xy
(0,2)*{\circ}="a", (-3,7)*{\circ}="b_1", (3,7)*{\circ}="b_2",
\endxy}\end{array} \ \ \ , \ \ \ \mbox{respectively}\ \
\Gamma_{J_i}= \begin{array}{c}\resizebox{9mm}{!}{ \xy
(0,7)*{\circ}="a", (-3,2)*{\circ}="b_1", (3,2)*{\circ}="b_2",
\endxy}\end{array} $$ and $\Omega_{\Gamma_{I_i}}=1$, resp.\ $\Omega_{\Gamma_{J_i}}=1$. Therefore we conclude that $$ \sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}}\left(\sum_{V_{int}(\Gamma)=I_1\sqcup I_2\sqcup J_1\sqcup J_2\atop
|I_1|+|I_2|\geq 1, |J_1|+|J_2|\geq 1} \int_{{\partial}_{I_1,I_2,J_1,J_2} \overline{{\mathfrak C}}(\Gamma)} \Omega_\Gamma\right) \Gamma =\frac{f^{q}\left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\right) f^{q}\left(\begin{xy}
<0mm,0.66mm>*{};<0mm,3mm>*{}**@{.},
<0.39mm,-0.39mm>*{};<2.2mm,-2.2mm>*{}**@{.},
<-0.35mm,-0.35mm>*{};<-2.2mm,-2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{}, \end{xy}\right)}{f^{q}\left(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}\right) f^{q}\left(\begin{xy}
<0mm,-0.55mm>*{};<0mm,-2.5mm>*{}**@{.},
<0.5mm,0.5mm>*{};<2.2mm,2.2mm>*{}**@{.},
<-0.48mm,0.48mm>*{};<-2.2mm,2.2mm>*{}**@{.},
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
\end{xy}\right)}=
f^{ex}\left(\begin{array}{c}\resizebox{15mm}{!}{\begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{.},
<-0.5mm,0.5mm>*{};<-3mm,2mm>*{}**@{.},
<-3mm,2mm>*{};<0mm,4mm>*{}**@{.},
<0mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<0mm,7.4mm>*{}**@{.}, <0mm,0mm>*{};<2.2mm,1.5mm>*{}**@{.},
<6mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<6mm,4mm>*{};<3.8mm,2.5mm>*{}**@{.},
<6mm,4mm>*{};<6mm,7.4mm>*{}**@{.},
<6mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<6mm,0mm>*{}**@{.}, <6mm,4mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<6mm,-3mm>*{}**@{.},
\end{xy}}\end{array}\right) $$ where the middle expression means the fraction type composition in the prop $\mathcal{L}\mathit{ieb}^{\mathrm{quant}}$. Finally, we conclude \begin{eqnarray*} 0&=&\sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}}\left( \int_{{\partial} \overline{C}_{k;2,2}({\mathcal H})} \Omega_\Gamma\right) \Gamma\\
&=& \sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}}\left( \int_{{\partial}_{II} \overline{C}_{k;2,2}({\mathcal H})} \Omega_\Gamma\right) \Gamma +
\sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}}\left( \int_{{\partial}_{III} \overline{C}_{k;2,2}({\mathcal H})} \Omega_\Gamma\right) \Gamma\\
&=& \sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}}\left( \int_{{\partial}_{II} \overline{C}_{k;2,2}({\mathcal H})} \Omega_\Gamma\right) \Gamma +
\sum_{k\geq 0}\sum_{\Gamma\in {\mathcal G}_{k;2,2}^{(3)}}\sum_{V_{int}(\Gamma)=I_1\sqcup I_2\sqcup J_1\sqcup J_2\atop
|I_1|+|I_2|\geq 1, |J_1|+|J_2|\geq 1} \left(\int_{{\partial}_{I_1,I_2,J_1,J_2} \overline{{\mathfrak C}}(\Gamma)}\Omega_\Gamma\right) \Gamma\\ &=& f^{ex}\left( -\begin{array}{c} \begin{xy}
<0mm,2.47mm>*{};<0mm,-0.5mm>*{}**@{.},
<0.5mm,3.5mm>*{};<2.2mm,5.2mm>*{}**@{.},
<-0.48mm,3.48mm>*{};<-2.2mm,5.2mm>*{}**@{.},
<0mm,3mm>*{\circ};<0mm,3mm>*{}**@{},
<0mm,-0.8mm>*{\circ};<0mm,-0.8mm>*{}**@{}, <0mm,-0.8mm>*{};<-2.2mm,-3.5mm>*{}**@{.},
<0mm,-0.8mm>*{};<2.2mm,-3.5mm>*{}**@{.},
<0.5mm,3.5mm>*{};<2.8mm,5.7mm>*{^2}**@{},
<-0.48mm,3.48mm>*{};<-2.8mm,5.7mm>*{^1}**@{},
<0mm,-0.8mm>*{};<-2.7mm,-5.2mm>*{^1}**@{},
<0mm,-0.8mm>*{};<2.7mm,-5.2mm>*{^2}**@{}, \end{xy}\end{array} \ + \ \begin{array}{c}\begin{xy}
<0mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<0mm,-0.49mm>*{};<0mm,-3.0mm>*{}**@{.},
<-0.5mm,0.5mm>*{};<-3mm,2mm>*{}**@{.},
<-3mm,2mm>*{};<0mm,4mm>*{}**@{.},
<0mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<0mm,7.4mm>*{}**@{.}, <0mm,0mm>*{};<2.2mm,1.5mm>*{}**@{.},
<6mm,0mm>*{\circ};<0mm,0mm>*{}**@{},
<6mm,4mm>*{};<3.8mm,2.5mm>*{}**@{.},
<6mm,4mm>*{};<6mm,7.4mm>*{}**@{.},
<6mm,4mm>*{\circ};<-2.3mm,2.3mm>*{}**@{},
<0mm,4mm>*{};<6mm,0mm>*{}**@{.}, <6mm,4mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<9mm,2mm>*{}**@{.}, <6mm,0mm>*{};<6mm,-3mm>*{}**@{.},
<-1.8mm,2.8mm>*{};<0mm,7.8mm>*{^1}**@{},
<-2.8mm,2.9mm>*{};<0mm,-4.3mm>*{_1}**@{}, <-1.8mm,2.8mm>*{};<6mm,7.8mm>*{^2}**@{},
<-2.8mm,2.9mm>*{};<6mm,-4.3mm>*{_2}**@{},
\end{xy} \end{array} \right) \end{eqnarray*} which proves claim (iii). \end{proof}
\subsubsection{\bf Main Corollary} {\em Composition of the explicit morphism (\ref{7: explicit morphism f^q}) with the explicit morphism ${\mathcal D}(f)$ (see \S {\ref{5: coroll on f from LB^q to LB^min}}(ii)) gives us an explicit transcendental morphism of props \begin{equation}\label{7: explicit map from Assb to LB wheeled} {\mathcal D}(f) \circ f^q: \mathcal{A}\mathit{ssb} \longrightarrow \widehat{\LB}^\circlearrowright \end{equation} and hence an explicit universal quantization of finite-dimensional Lie bialgebras.}
The main purpose of this paper is achieved.
\subsubsection{\bf Other Corollaries}
(i) As the differential 2-forms $\omega_g$ and $\varpi_g$ used in the constructions of the maps $f^q$ and $f$ are simple, {\em graphs with multiple edges do not contribute to the map (\ref{7: explicit map from Assb to LB wheeled})}. Essentially this observation says that {\em our universal quantization formula does not involve graphs which contain a subgraph of the form}\, $\begin{array}{c}\xy
(0,0)*{}="0",
(0,3)*{\bullet}="1", (-3,5)*{}="L", (3,5)*{}="R", (0,7)*{\bullet}="2", (0,10)*{}="00",
\ar @{-} "0";"1" <0pt> \ar @{-} "1";"L" <0pt> \ar @{-} "1";"R" <0pt> \ar @{-} "2";"L" <0pt> \ar @{-} "2";"R" <0pt> \ar @{-} "2";"00" <0pt> \endxy\end{array}$. It also follows from our explicit formula that all graphs with at least one black vertex contributing to the universal quantization morphism are {\em connected}.
(ii) {\em The explicit map (\ref{7: explicit morphism f^q}) lifts by a trivial induction to a morphism of dg props ${\mathcal F}^q$ which
fits into a commutative diagram, $$
\xymatrix{ \mathcal{A}\mathit{ssb}_\infty\ar[r]^{{\mathcal F}^q}\ar[d]_p & {\mathcal D}\widehat{\LB}^{\mathrm{quant}}_\infty\ar[d]^\pi\\
\mathcal{A}\mathit{ssb} \ar[r]_{f^q} &
{\mathcal D}\widehat{\LB}^{\mathrm{quant}}} $$ and which satisfies the condition $$ \pi_1\circ {\mathcal F}^q\left(\begin{array}{c}\resizebox{13mm}{!}{ \xy
(0,7)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,9)*{^m},
(0,3)*{^{...}},
(0,-3)*{_{...}},
(0,-7)*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,-9)*{_n},
(0,0)*{\circ}="0", (-7,5)*{}="u_1", (-4,5)*{}="u_2", (4,5)*{}="u_3", (7,5)*{}="u_4", (-7,-5)*{}="d_1", (-4,-5)*{}="d_2", (4,-5)*{}="d_3", (7,-5)*{}="d_4",
\ar @{.} "0";"u_1" <0pt> \ar @{.} "0";"u_2" <0pt> \ar @{.} "0";"u_3" <0pt> \ar @{.} "0";"u_4" <0pt> \ar @{.} "0";"d_1" <0pt> \ar @{.} "0";"d_2" <0pt> \ar @{.} "0";"d_3" <0pt> \ar @{.} "0";"d_4" <0pt> \endxy}\end{array}\right)= \lambda \begin{array}{c}\resizebox{16mm}{!}{\xy (0,7.5)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,9.5)*{^m},
(0,-7.5)*{\underbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,-9.9)*{_n},
(-6,5)*{...},
(-6,-5)*{...},
(-3,5)*{\circ}="u1",
(-3,-5)*{\circ}="d1",
(-6,5)*{...},
(-6,-5)*{...},
(-9,5)*{\circ}="u2",
(-9,-5)*{\circ}="d2",
(3,5)*{\circ}="u3",
(3,-5)*{\circ}="d3",
(6,5)*{...},
(6,-5)*{...},
(9,5)*{\circ}="u4",
(9,-5)*{\circ}="d4",
(0,0)*{\bullet}="a",
\ar @{-} "d1";"a" <0pt> \ar @{-} "a";"u1" <0pt> \ar @{-} "d2";"a" <0pt> \ar @{-} "a";"u2" <0pt> \ar @{-} "d3";"a" <0pt> \ar @{-} "a";"u3" <0pt> \ar @{-} "d4";"a" <0pt> \ar @{-} "a";"u4" <0pt> \endxy}\end{array}\ \ \text{for some non-zero}\ \lambda\in {\mathbb R}, $$ for all $m+n\geq 3$, $m,n\geq 1$}. Here $\pi_1$ is the projection to the vector subspace in ${\mathcal D}\widehat{\LB}^{\mathrm{quant}}_\infty$ spanned by graphs with precisely one black vertex.
This claim is obvious as the surjections $p$ and $\pi$ are quasi-isomorphisms.
(iii) {\em Composition of the maps ${\mathcal F}^q$ and ${\mathcal D}(F)$, where $F$ is given by the explicit formula (\ref{5: explicit map F from LB^q_infty to LB_infty wheeled}), gives us a formality map} $$ {\mathcal D}(F) \circ {\mathcal F}^q: \mathcal{A}\mathit{ssb}_\infty \longrightarrow {\mathcal D}\widehat{\LB}^\circlearrowright_\infty $$ and hence {\em a universal quantization of finite-dimensional strongly homotopy Lie bialgebras}.
\subsection{An open problem} The above Corollary (ii) gives us an inductive extension of the explicit morphism (\ref{7: explicit morphism f^q}) to some morphism of dg props ${\mathcal F}^q: \mathcal{A}\mathit{ssb}_\infty\rightarrow \widehat{\LB}_\infty^{\mathrm{quant}}$. Can this extension be given by an explicit formula similar to the one for $f^q$? Here is a conjectural answer.
\subsubsection{\bf Conjecture}\label{7: Conjecture on F^q} {\em There is a morphism of props \begin{equation}\label{7: explicit morphism F^q} {\mathcal F}^{q}: {\mathcal A} ss{\mathcal B}_\infty \longrightarrow {\mathcal D}\widehat{\mathcal{L}\mathit{ieb}}^{\mathrm{quant}}_\infty \end{equation} given explicitly on the generators of ${\mathcal A} ss{\mathcal B}_\infty$ as follows, $$ {\mathcal F}^q\left(\begin{array}{c}\resizebox{13mm}{!}{ \xy
(0,7)*{\overbrace{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,9)*{^m},
(0,3)*{^{...}},
(0,-3)*{_{...}},
(0,-7)*{\underbrace{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }},
(0,-9)*{_n},
(0,0)*{\circ}="0", (-7,5)*{}="u_1", (-4,5)*{}="u_2", (4,5)*{}="u_3", (7,5)*{}="u_4", (-7,-5)*{}="d_1", (-4,-5)*{}="d_2", (4,-5)*{}="d_3", (7,-5)*{}="d_4",
\ar @{.} "0";"u_1" <0pt> \ar @{.} "0";"u_2" <0pt> \ar @{.} "0";"u_3" <0pt> \ar @{.} "0";"u_4" <0pt> \ar @{.} "0";"d_1" <0pt> \ar @{.} "0";"d_2" <0pt> \ar @{.} "0";"d_3" <0pt> \ar @{.} "0";"d_4" <0pt> \endxy}\end{array}\right) := \sum_{k\geq 1} \sum_{\Gamma \in {\mathcal G}_{k;m,n}^{or}} \left(\int_{\overline{C}_{k;m,n}({\mathcal H})}\Omega_\Gamma\right) \Gamma \ \ \ + \ \ \left\{\begin{array}{cl} \begin{array}{c}\resizebox{6mm}{!}{ \xy (-3,9)*{_1}, (3,9)*{_2},
(0,0)*{\circ}="a", (-3,5)*{\circ}="b_1", (3,5)*{\circ}="b_2",
\endxy}\end{array}& \text{if}\ m=2,n=1
\\
\begin{array}{c}\resizebox{6mm}{!}{ \xy (-3,0)*{_1}, (3,0)*{_2},
(0,6)*{\circ}="a", (-3,2)*{\circ}="b_1", (3,2)*{\circ}="b_2",
\endxy}\end{array} & \text{if}\ m=1,n=2\\
0 & \text{otherwise}
\end{array}\right. $$ where the differential form $\Omega_\Gamma$ is defined in (\ref{1> Def of Omega_Ga}).}
Let us provide strong evidence for this conjecture and elucidate a particular problem which requires a better understanding.
By construction of the compactified space $\overline{C}_{k;m,n}({\mathcal H})$, we have a natural semialgebraic fibration (see \cite{HLTV}) $$ \pi: \overline{C}_{k;m,n}({\mathcal H}) \longrightarrow \overline{C}_{m,n}({\mathbb R}\times {\mathbb R}) $$ and hence a push-forward map of piecewise semi-algebraic differential forms $$ \pi_*: \Omega_{ \overline{C}_{k;m,n}({\mathcal H})}^\bullet \longrightarrow \Omega^\bullet_{\overline{C}_{m,n}({\mathbb R}\times {\mathbb R})} $$ such that for any semialgebraic chain $$ \phi: M \rightarrow \overline{C}_{m,n}({\mathbb R}\times {\mathbb R}) $$ the integral $$ \int_M \phi^*(\pi_*(\Omega_\Gamma)) $$ is well-defined (i.e.\ convergent) for any $\Gamma\in {\mathcal G}_{k;m,n}$. Hence we can consider an ${\mathbb S}_m^{op}\times {\mathbb S}_n$ equivariant map \begin{equation}\label{7: map Phi_n^m from chains to DLieb} \begin{array}{rccc} \Phi_{n}^m: & Chains(\overline{C}_{m,n}) & \longrightarrow & \widehat{\LB}^{\mathrm{quant}}_\infty(m,n)\\
& \phi: M\rightarrow \overline{C}_{m,n}({\mathbb R}\times {\mathbb R}) & \longrightarrow & \displaystyle \sum_{k\geq 0}\sum_{\Gamma\in
{\mathcal G}_{k;m,n}} \left(\int_M \phi^*\left( \pi_*(\Omega_\Gamma)\right) \right)\Gamma
\end{array} \end{equation} Note that in our grading conventions the chain complex $(Chains(\overline{C}_{m,n}),{\partial})$ is non-positively graded so that the standard boundary differential ${\partial}$ has degree $+1$. Using arguments almost identical to the ones employed in the proof of Theorem {\ref{7: Theorem on f^q}} one can show the following
\subsubsection{\bf Theorem}\label{7: Theorem on Chains to Dlieb} {\em For any $m,n\geq 1$ with $m+n\geq 3$ the collection of maps $\Phi_{n}^m: Chains(\overline{C}_{m,n})\longrightarrow {\mathcal D} \widehat{\LB}^{\mathrm{quant}}_\infty(m,n)$ commutes with the differentials, $$ \delta^{\omega_{\bar{g}}}\circ \Phi_n^m = \Phi_n^m \circ {\partial} $$ and hence gives us an equivariant morphism of differential $\frac{1}{2}$-props $$ \Phi: Chains(\overline{C}_{\bullet,\bullet}({\mathbb R}\times{\mathbb R}))\rightarrow {\mathcal D}\widehat{\LB}^{\mathrm{quant}}_\infty. $$}
The restriction of the map $\Phi$ to the Saneblidze-Umble cell complex $({\mathcal C} ell({\mathsf K}_\bullet^\bullet), {\partial}_{cell}) \subset Chains(\overline{C}_{\bullet,\bullet}({\mathbb R}\times{\mathbb R}))$ (see Appendix B) gives us precisely the map ${\mathcal F}^q$ in Conjecture {\ref{7: Conjecture on F^q}}. This map respects the differentials but at the moment we cannot claim it respects {\em all}\, prop compositions as the isomorphism $({\mathcal C} ell({\mathsf K}_\bullet^\bullet), {\partial}_{cell}) \simeq \mathcal{A}\mathit{ssb}_\infty$ (which is claimed in \cite{SU}) should be understood better in this context.
\appendix \renewcommand{{\bf A.\arabic{subsection}}}{{\bf A.\arabic{subsection}}} \renewcommand{{\bf A.\arabic{subsection}.\arabic{subsubsection}}}{{\bf A.\arabic{subsection}.\arabic{subsubsection}}}
{\Large \section{\bf Some vanishing Lemmas}\label{App: A} }
Let $\omega_g$ be a top degree form on $S^2$ given by (\ref{3: omega_g propagator}) for $d=3$. We shall prove some vanishing results for the weights $$ C_\Gamma=\int_{\overline{C}_{4p+2}({\mathbb R}^3)} \displaystyle \bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right) $$ of graphs $\Gamma\in {\mathsf G}_{4p+2,6p+1}$ with $p\geq 1$ contributing to the formulae given in Proposition {\ref{3: Prop on Upsilon^om_g}}.
\subsection{Lemma on binary vertices}\label{A: lemma on 4 binary vertices} {\em Any graph $\Gamma\in {\mathsf G}_{4p+2,6p+1}$ with $p\geq 1$ has at least 4 binary vertices. Moreover, if $\Gamma\in {\mathsf G}_{4p+2,6p+1}$ has precisely $4$ binary vertices, then all other vertices must be trivalent}.
\begin{proof} For a vertex $v\in V(\Gamma)$ its valency can be represented as the sum $2+ \Delta v$ for some non-negative integer $\Delta v$. The graph $\Gamma$ has $12p+2$ half-edges so we have an equality $$ \sum_{v\in V(\Gamma)} (2+ \Delta v)= 2+12p, $$ i.e. $$ \sum_{v\in V(\Gamma)} \Delta v=2+12p - 2(2+4p)=4p-2 $$ Therefore at most $4p-2$ vertices can have $\Delta v\geq 1$ which implies that $\Gamma$ has at least $4p+2 -(4p-2)=4$ binary vertices. Moreover, if $\Gamma$ has precisely $4$ bivalent vertices, then the remaining $4p-2$ vertices $v$ must have $\Delta v=1$. \end{proof}
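For illustration, in the first non-trivial case $p=1$ the above count reads as follows: a graph $\Gamma\in {\mathsf G}_{6,7}$ has $6$ vertices and $7$ edges, hence $14$ half-edges, so that $$ \sum_{v\in V(\Gamma)} \Delta v= 14-2\cdot 6 =2, $$ and therefore at least $6-2=4$ of its vertices must be binary.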
Therefore every graph $\Gamma\in {\mathsf G}_{4p+2,6p+1}$ with $p\geq 1$ has at least four complete\footnote{For a graph $\Gamma$ and its pair of vertices $v_1,v_2\in V(\Gamma)$ denote by $E_\Gamma(v_1,v_2)$ the set of edges connecting $v_1$ to $v_2$. A subgraph $\Gamma'$ of a graph $\Gamma$ is called {\em complete}\, if
between any pair of its vertices $v_1,v_2 \in V(\Gamma')$ we have $E_{\Gamma'}(v_1,v_2)=E_{\Gamma}(v_1,v_2)$.} subgraphs of one of the following forms, $$ \begin{array}{c}\resizebox{10mm}{!}{ \xy (-1,2)*{_{v_1}}, (2,16)*{_{v_2}}, (9,8)*{_{v}},
(0,0)*{\bullet}="a", (7,7)*{\bullet}="b", (3,14)*{\bullet}="c",
\ar @{->} "a";"b" <0pt> \ar @{->} "b";"c" <0pt> \endxy}\end{array} \ \ \ \ , \ \ \ \ \begin{array}{c}\resizebox{10mm}{!}{\xy (7,5)*{_{v}}, (15,20)*{_{v_2}}, (3,16)*{_{v_1}},
(14,18)*{\bullet}="a", (7,7)*{\bullet}="b", (3,14)*{\bullet}="c",
\ar @{<-} "a";"b" <0pt> \ar @{->} "b";"c" <0pt> \endxy} \end{array} \ \ \ \ , \ \ \ \ \begin{array}{c}\resizebox{15mm}{!}{\xy (-1,2)*{_{v_1}}, (17,-3)*{_{v_2}}, (7,9)*{_{v}},
(0,0)*{\bullet}="a", (7,7)*{\bullet}="b", (14,-3)*{\bullet}="c",
\ar @{->} "a";"b" <0pt> \ar @{<-} "b";"c" <0pt> \endxy}\end{array} $$ where the vertex $v$ has no other attached edges except the ones shown in the pictures.
\subsection{Vanishing Lemma}\label{4: vanishing of one bivalent} {\em If\, $\Gamma\in {\mathsf G}_{4p+2,6p+1}$\, with $p\geq 1$ admits a binary vertex $v$ of the form $\begin{array}{c}\resizebox{10mm}{!}{ \xy (-1,2)*{_{v_1}}, (2,16)*{_{v_2}}, (9,8)*{_{v}},
(0,0)*{\bullet}="a", (7,7)*{\bullet}="b", (3,14)*{\bullet}="c",
\ar @{->} "a";"b" <0pt> \ar @{->} "b";"c" <0pt> \endxy}\end{array}$, then its
weight $C_\Gamma$ vanishes.}
\begin{proof} We assume here that the propagators are chosen $O(2)$-anti-invariantly, i.e., invariantly for the $SO(2)$ action on the sphere $S^2$, and anti-invariantly for a reflection across a plane containing both poles. Now, integrating over the position of (the point in a configuration associated to) vertex $v$, the above graph yields a $1$-form on the configuration space of $v_1$ and $v_2$, i.e., on $S^2$. This 1-form is easily checked to be $O(2)$-anti-invariant, and furthermore closed by Stokes' Theorem. Using standard cylindrical coordinates $(Z,\phi)$, the $O(2)$-anti-invariance implies that the form can be written as \[ f(Z)\, d\phi \] for some function $f(Z)$, vanishing at the sphere's poles $Z=\pm 1$ to ensure continuity. The closedness gives $0=d(f(Z)\, d\phi)=f'(Z)\, dZ\wedge d\phi$, so that $f$ is constant, and hence $f(Z)\equiv 0$ by the vanishing at the poles.
\end{proof}
\subsection{Vanishing Lemma}\label{Lemma on triangles} {\em If $\Gamma\in {\mathsf G}_{4p+2,6p+1}$ admits a 3-vertex complete graph (with any possible choice of directions of edges), $$ \begin{array}{c}\resizebox{10mm}{!}{ \xy (9,8)*{^{v_2}}, (0,18)*{^{v_3}}, (0,-3)*{^{v_1}},
(0,0)*{\bullet}="d", (0,16)*{\bullet}="u", (7,8)*{\bullet}="R",
\ar @{-} "d";"u" <0pt> \ar @{-} "d";"R" <0pt> \ar @{-} "R";"u" <0pt> \endxy} \end{array}, $$ as a subgraph, then its weight $C_\Gamma$ vanishes.}
\begin{proof} The integrand $\Omega_\Gamma:=\bigwedge_{e\in E(\Gamma)}\hspace{-2mm} {\pi}^*_e\left(\omega_g\right)$ is invariant under the action of the gauge group $p\rightarrow {\mathbb R}^+p + {\mathbb R}^3$ on points in ${\mathbb R}^3$. Hence we can place vertex $v_1$ at $0\in {\mathbb R}^3$, and normalize the Euclidean
distance $|v_2-v_1|$ to be equal to $1$. Then the $6$-form
$$
\pi_{v_1,v_2}^*(\omega_g)\wedge \pi_{v_1,v_3}^*(\omega_g)\wedge \pi_{v_2,v_3}^*(\omega_g)
$$ depends only on $5$ parameters and hence vanishes identically for degree reasons. Hence the form $\Omega_\Gamma$ is zero. \end{proof}
\subsection{Vanishing Lemma}{\em Assume $\Gamma\in {\mathsf G}_{4p+2,6p+1}$ has two bivalent vertices $v'$ and $v''$ connected by an edge. Then its weight $C_\Gamma$ vanishes.}
\begin{proof} It is enough to consider the case when orientations on the subgraph containing $v'$ and $v''$ and their neighbouring (not necessarily binary) vertices $v_1$ and $v_2$ are as in the following oriented graph, $$ \Gamma_{v_1,v',v'',v_2}:= \begin{array}{c}\resizebox{18mm}{!}{ \xy (-2,1)*{^{v_1}}, (7,9)*{^{v'}}, (15,3)*{^{v''}}, (21,9)*{^{v_2}},
(0,0)*{\bullet}="1", (7,7)*{\bullet}="2",
(14,0)*{\bullet}="3", (21,7)*{\bullet}="4",
\ar @{->} "1";"2" <0pt> \ar @{->} "3";"2" <0pt> \ar @{->} "3";"4" <0pt> \endxy} \end{array}\ , $$ for all other inequivalent choices the vanishing claim follows from Lemma~{\ref{4: vanishing of one bivalent}}, and, in the case $v_1=v_2$, from Lemma~{\ref{Lemma on triangles}}.
Let us fix all vertices of the graph except $v'$ and $v''$. We can also fix without loss of generality the vertex $v_1$ at $0\in {\mathbb R}^3$ and the vertex $v_2$ at the unit Euclidean distance from $v_1$. Consider a projection \begin{equation}\label{map pi} \pi: \overline{C}({\Gamma_{v_1,v',v'',v_2}}) \longrightarrow { C}_{v_1,v_2}({\mathbb R}^3) \end{equation} and the function $$ f:= \pi_*( \underbrace{\pi_{v_1,v'}^*(\omega_g)\wedge \pi_{v',v''}^*(\omega_g)\wedge \pi_{v'',v_2}^*(\omega_g)}_{\Omega_{\Gamma_{v_1,v',v'',v_2}}} ) $$ on $C_{v_1,v_2}({\mathbb R}^3)$. By the generalized Stokes Theorem, $$ d\circ \pi_* =\pm \pi_*\circ d + \pi_{{\partial} *}, $$ so that we have \begin{equation}\label{df} df= \pi_{{\partial} *}\left(\Omega_{\Gamma_{v_1,v',v'',v_2}}\right)=\alpha_*(\Omega_{\Gamma_{v_1,v'',v_2}}) - \beta_*(\Omega_{\Gamma_{v_1,v,v_2}}) + \gamma_*(\Omega_{\Gamma_{v_1,v',v_2}}) \end{equation} where $$ \Gamma_{v_1,v'',v_2}:= \begin{array}{c}\resizebox{13mm}{!}{ \xy (7,9)*{^{v_1}}, (15,3)*{^{v''}}, (21,9)*{^{v_2}},
(7,7)*{\bullet}="2",
(14,0)*{\bullet}="3", (21,7)*{\bullet}="4",
\ar @{->} "3";"2" <0pt> \ar @{->} "3";"4" <0pt> \endxy} \end{array}\ ,\ \ \Gamma_{v_1,v,v_2}:= \begin{array}{c}\resizebox{10mm}{!}{ \xy (-1,2)*{_{v_1}}, (2,16)*{_{v_2}}, (9,8)*{_{v}},
(0,0)*{\bullet}="a", (7,7)*{\bullet}="b", (3,14)*{\bullet}="c",
\ar @{->} "a";"b" <0pt> \ar @{->} "b";"c" <0pt> \endxy}\end{array} \ , \ \ \Gamma_{v_1,v',v_2}:= \begin{array}{c}\resizebox{16mm}{!}{ \xy (-2,1)*{^{v_1}}, (7,9)*{^{v'}}, (16,1)*{^{v_2}},
(0,0)*{\bullet}="1", (7,7)*{\bullet}="2",
(14,0)*{\bullet}="3",
\ar @{->} "1";"2" <0pt> \ar @{->} "3";"2" <0pt> \endxy} \end{array} $$ and $$ \alpha: C(\Gamma_{v_1,v'',v_2}) \rightarrow C_{v_1,v_2}({\mathbb R}^3),\ \ \beta: C(\Gamma_{v_1,v,v_2}) \rightarrow C_{v_1,v_2}({\mathbb R}^3), \ \ \ \gamma: C(\Gamma_{v_1,v',v_2}) \rightarrow C_{v_1,v_2}({\mathbb R}^3) $$ are the natural forgetful maps. By Lemma~{\ref{4: vanishing of one bivalent}}, the middle term $\beta_*(\Omega_{\Gamma_{v_1,v,v_2}})$ vanishes. On the other hand the sum, $$ \alpha_*(\Omega_{\Gamma_{v_1,v',v_2}}) + \gamma_*(\Omega_{\Gamma_{v_1,v'',v_2}}) $$ equals the push down, $$ p_*\left(\pi_{v_1,v}^*(\omega_g)\wedge \pi_{v,v_2}^*(\omega_g)\right)
$$
of the $4$-form $\pi_{v_1,v}^*(\omega_g)\wedge \pi_{v,v_2}^*(\omega_g)$ along the 3-dimensional fiber of the natural projection, $$ p: C_{v_1,v,v_2}({\mathbb R}^3) \longrightarrow C_{v_1,v_2}({\mathbb R}^3). $$ The latter vanishes by the standard argument using the reflection in the line through vertices $v_1$ and $v_2$ (cf.\ \cite{Ko0}).
Therefore we conclude that $$ df=0, $$ i.e.\ the function $f$ is a constant independent of the particular position of the vertex $v_2$ (on the sphere). Let us choose $v_2$ to lie in the $(x,t)$-plane. Then the reflection in this plane preserves the orientation of the fiber of the map (\ref{map pi}) but changes the differential form $$ \Omega_{\Gamma_{v_1,v',v'',v_2}} \longrightarrow - \Omega_{\Gamma_{v_1,v',v'',v_2}}. $$ Hence $f=0$ and the proof is complete. \end{proof}
Let $\hat{{\mathsf G}}_{4p+2,6p+1}^{or}$ be the subset of the set of oriented graphs ${\mathsf G}_{4p+2,6p+1}^{or}$ consisting of graphs $\Gamma$ which have \begin{itemize} \item no binary vertices of arity $(1,1)$, i.e.\ of the form $\begin{array}{c}\resizebox{2mm}{!}{ \xy
(0,0)*{\bullet}="a", (0,5)*{}="b", (0,-5)*{}="c",
\ar @{->} "a";"b" <0pt> \ar @{<-} "a";"c" <0pt> \endxy}\end{array}$ \item no complete subgraphs of the form $ \begin{array}{c}\resizebox{10mm}{!}{ \xy (9,8)*{^{v_2}}, (0,18)*{^{v_3}}, (0,-3)*{^{v_1}},
(0,0)*{\bullet}="d", (0,16)*{\bullet}="u", (7,8)*{\bullet}="R",
\ar @{-} "d";"u" <0pt> \ar @{-} "d";"R" <0pt> \ar @{-} "R";"u" <0pt> \endxy} \end{array}, $ \item no two binary vertices connected by an edge. \end{itemize}
We proved in this Appendix the following
\subsection{Proposition}\label{A: propos on hat{sG}} {\em In the case $d=3$ Proposition {\ref{3: Prop on Upsilon^om_g}} holds true with the set of graphs ${\mathsf G}_{4p+2,6p+1}^{or}$ replaced by its subset $\hat{{\mathsf G}}_{4p+2,6p+1}^{or}$.}
A quick inspection of the case $p=1$ shows that there are no graphs in ${\mathsf G}_{6,7}^{or}$ which satisfy the above three properties, so that one gets the following
\subsection{Lemma}\label{A: lemma on p=1} {\em The set $\hat{{\mathsf G}}_{6,7}^{or}$\, is empty}.
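One way to see this: by Lemma~{\ref{A: lemma on 4 binary vertices}}, any graph $\Gamma\in {\mathsf G}_{6,7}^{or}$ has at least four binary vertices. A binary vertex of arity $(2,0)$ or $(0,2)$ cannot carry a loop, and if no two binary vertices of $\Gamma$ are connected by an edge, then every edge of $\Gamma$ is incident to at most one binary vertex; hence $\Gamma$ would need at least $4\cdot 2=8$ pairwise distinct edges, while it has only $7$.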
In the case $p=2$ one has non-trivial examples, e.g.
$$ \Upsilon_{10}^{2,2}:=\begin{array}{c}\resizebox{27mm}{!}{ \xy (-6,0)*{\bullet}="1l", (6,0)*{\bullet}="1r", (0,11)*{\bullet}="u", (-19,-3)*{\bullet}="2l", (19,-3)*{\bullet}="2r", (-9,-10)*{\bullet}="3l", (9,-10)*{\bullet}="3r", (-10,-20)*{\bullet}="4l", (10,-20)*{\bullet}="4r", (0,-16)*{\bullet}="d",
\ar @{->} "1l";"u" <0pt> \ar @{->} "1r";"u" <0pt> \ar @{<-} "1l";"2l" <0pt> \ar @{->} "1r";"2r" <0pt> \ar @{<-} "3l";"2l" <0pt> \ar @{->} "3r";"2r" <0pt> \ar @{->} "3l";"4l" <0pt> \ar @{<-} "3r";"4r" <0pt> \ar @{->} "d";"4l" <0pt> \ar @{<-} "d";"4r" <0pt> \ar @{->} "u";"d" <0pt> \ar @{->} "1l";"3r" <0pt> \ar @{->} "1r";"3l" <0pt> \endxy} \end{array} \in \hat{{\mathsf G}}_{10,13}^{or}, \hspace{10mm} \Upsilon_{10}^{3,1}:=\begin{array}{c} \resizebox{27mm}{!}{\xy (-6,0)*{\bullet}="1l", (6,0)*{\bullet}="1r", (0,11)*{\bullet}="u", (-19,-3)*{\bullet}="2l", (19,-3)*{\bullet}="2r", (-9,-10)*{\bullet}="3l", (9,-10)*{\bullet}="3r", (-10,-20)*{\bullet}="4l", (10,-20)*{\bullet}="4r", (0,-16)*{\bullet}="d",
\ar @{->} "1l";"u" <0pt> \ar @{->} "1r";"u" <0pt> \ar @{<-} "1l";"2l" <0pt> \ar @{->} "1r";"2r" <0pt> \ar @{<-} "3l";"2l" <0pt> \ar @{->} "3r";"2r" <0pt> \ar @{->} "3l";"4l" <0pt> \ar @{->} "3r";"4r" <0pt> \ar @{->} "d";"4l" <0pt> \ar @{->} "d";"4r" <0pt> \ar @{->} "u";"d" <0pt> \ar @{->} "1l";"3r" <0pt> \ar @{->} "1r";"3l" <0pt> \endxy} \end{array} \in \hat{{\mathsf G}}_{10,13}^{or} $$
The first graph $\Upsilon_{10}^{2,2}$ has two binary vertices of type $(2,0)$ and two binary vertices of type $(0,2)$. The second graph $\Upsilon_{10}^{3,1}$ has three vertices of type $(2,0)$ and one vertex of type $(0,2)$. Reversing all arrows in $\Upsilon_{10}^{3,1}$ one obtains a graph $$ \Upsilon_{10}^{1,3}=\begin{array}{c}\resizebox{27mm}{!}{ \xy (-6,0)*{\bullet}="1l", (6,0)*{\bullet}="1r", (0,11)*{\bullet}="u", (-19,-3)*{\bullet}="2l", (19,-3)*{\bullet}="2r", (-9,-10)*{\bullet}="3l", (9,-10)*{\bullet}="3r", (-10,-20)*{\bullet}="4l", (10,-20)*{\bullet}="4r", (0,-16)*{\bullet}="d",
\ar @{<-} "1l";"u" <0pt> \ar @{<-} "1r";"u" <0pt> \ar @{->} "1l";"2l" <0pt> \ar @{<-} "1r";"2r" <0pt> \ar @{->} "3l";"2l" <0pt> \ar @{<-} "3r";"2r" <0pt> \ar @{<-} "3l";"4l" <0pt> \ar @{<-} "3r";"4r" <0pt> \ar @{<-} "d";"4l" <0pt> \ar @{<-} "d";"4r" <0pt> \ar @{<-} "u";"d" <0pt> \ar @{<-} "1l";"3r" <0pt> \ar @{<-} "1r";"3l" <0pt> \endxy} \end{array} \in \hat{{\mathsf G}}_{10,13}^{or} $$
with three vertices of type $(0,2)$ and one vertex of type $(2,0)$.
{\Large \section{\bf Configuration space models for bipermutahedra\\ and biassociahedra} }\label{App B}
\subsection{Associahedron, permutahedron and configuration spaces} Here we recall two well-known constructions \cite{St,Ko, LTV} (see also lecture notes \cite{Me1}) which will be used later.
Let $$ {\mathit{Conf}}_n({\mathbb R}):=\{[n]\hookrightarrow {\mathbb R}\}, $$ be the space of all possible injections of the set $[n]:=\{1,2,\ldots, n\}$ into the real line ${\mathbb R}$. This space is a disjoint union of $n!$ connected components each of which is isomorphic
to the space $$
{\mathit{Conf}}_n^{o}({\mathbb R})=\{x_{1}< x_{2} <\ldots < x_{n}\}.
$$
The set ${\mathit{Conf}}_n({\mathbb R})$ has a natural structure of an oriented $n$-dimensional manifold
with orientation on ${\mathit{Conf}}_n^{o}({\mathbb R})$ given by the volume form $dx_1\wedge dx_2\wedge\ldots\wedge dx_n$;
orientations of all other connected components are then fixed once we assume that the natural smooth action of ${\mathbb S}_n$ on ${\mathit{Conf}}_n({\mathbb R})$ is orientation preserving.
In fact, we can (and often do) label points by an arbitrary finite set $I$, that is, consider the space
of injections of sets, $$ {\mathit{Conf}}_I({\mathbb R}):=\{I\hookrightarrow {\mathbb R}\}. $$
A $2$-dimensional Lie group $G_{2}={\mathbb R}^+ \ltimes {\mathbb R}$ acts on ${\mathit{Conf}}_n({\mathbb R})$ by the law,
$$ \begin{array}{ccccc} {\mathit{Conf}}_n({\mathbb R}) & \times & {\mathbb R}^+ \ltimes {\mathbb R} & \longrightarrow & {\mathit{Conf}}_n({\mathbb R})\\ p=\{x_1,\ldots,x_n\}&& (\lambda,\nu) &\longrightarrow & \lambda p+\nu:= \{\lambda x_1+\nu, \ldots, \lambda x_n+\nu\}. \end{array} $$ The action is free so that the quotient space, $$ C_n({\mathbb R}):= {\mathit{Conf}}_n({\mathbb R})/G_{2},\ \ \ n\geq 2, $$ is naturally an $(n-2)$-dimensional real oriented manifold equipped with a smooth orientation preserving action of the group ${\mathbb S}_n$. In fact, $$ C_n({\mathbb R})=C_n^o({\mathbb R})\times {\mathbb S}_n $$ with orientation, $\Omega_n$, defined on $C_n^o({\mathbb R}):={\mathit{Conf}}^o_n({\mathbb R})/G_{2}$ as follows: identify $C_n^o({\mathbb R})$ with the subspace of ${\mathit{Conf}}^o_n({\mathbb R})$ consisting of points $\{0=x_{1}< x_{2} <\ldots < x_{n}=1\}$ and then set $\Omega_n:= dx_2\wedge\ldots\wedge dx_{n-1}$.
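For example, for $n=3$ the normalization $x_1=0$, $x_3=1$ identifies $C_3^o({\mathbb R})$ with the open interval $\{0<x_2<1\}$, a $1$-dimensional manifold in agreement with $\dim C_n({\mathbb R})=n-2$, and the orientation form is simply $\Omega_3=dx_2$.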
The space $C_2({\mathbb R})$ is closed as it is the disjoint union, $C_2({\mathbb R})\simeq {\mathbb S}_2$, of two points. The topological compactification, $\overline{C}_n({\mathbb R})$, of $C_n({\mathbb R})$ for higher $n$
can be defined as $\overline{C}_n^o({\mathbb R})\times {\mathbb S}_n$ where $\overline{C}_n^o({\mathbb R})$ is, by definition, the closure of an embedding, $$ \begin{array}{ccc} C_n^o({\mathbb R}) & \longrightarrow & ({\mathbb R}{\mathbb P}^2)^{n(n-1)(n-2)}\\
(x_{1}, \ldots, x_{n}) & \longrightarrow & \displaystyle\prod_{\#\{i,j,k\}=3}\left[|x_{i}-x_{j}| :
|x_{i}-x_{k}|: |x_{j}-x_{k}|\right]. \end{array} $$ Its codimension one strata are given by $$ {\partial} \overline{C}_n^o({\mathbb R}) = \bigsqcup_{A} \overline{C}^o_{n - \# A + 1}({\mathbb R})\times
\overline{C}_{\# A}^o({\mathbb R}), $$ where the union runs over {\em connected}\, proper subsets, $A$, of the ordered set $\{1,2,\ldots,n\}$. The associated collection $\overline{C}({\mathbb R})=\{\overline{C}_n({\mathbb R})\}$ is a free operad with the set of generators, $$ \left\{ {C}_n^o({\mathbb R}) \simeq \begin{array}{c}\resizebox{21mm}{!}{\xy (1,-5)*{\ldots}, (-13,-7)*{_{1}}, (-8,-7)*{_{2}}, (-3,-7)*{_{3}}, (8,-7)*{_{{n-1}}}, (14,-7)*{_{n}},
(0,0)*{\circ}="a", (0,5)*{}="0", (-12,-5)*{}="b_1", (-8,-5)*{}="b_2", (-3,-5)*{}="b_3", (8,-5)*{}="b_4", (12,-5)*{}="b_5",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_4" <0pt> \ar @{-} "a";"b_5" <0pt> \endxy}\end{array}\right\}_{n\geq 2} $$ With the above graphical notations for the generators, the compactified configuration space is the disjoint union of sets parameterized by planar rooted (equivalently, directed) trees $$
\overline{C}_n^o({\mathbb R})=\coprod_{T\in {\mathcal T} ree_n} T({\mathbb R}) $$ where $ {\mathcal T} ree_n$ is the set of all planar trees with $n$ input legs whose vertices are at least trivalent (i.e.\ have at least two input half-edges)\footnote{The set of internal edges of a rooted tree is denoted by $E(T)$, its set of legs by $Leg(T)$, and the set of vertices by $V(T)$; for example, picture (\ref{2: metric tree T}) below shows a rooted tree (with directions of edges tacitly chosen to run from bottom to the top) with $\# E(T)=3$, $\# Leg(T)=7$ and $\# V(T)=4$. There is a natural partial order on the set $V(T)$: $v_1> v_2$ if and only if there is a directed path of internal edges starting at $v_2$ and ending at $v_1$. The set ${\mathcal T} ree_n$ also admits a partial order: $T_1> T_2$ if and only if $T_2$ can be obtained from $T_1$ by contraction of at least one internal edge.} and $$ T({\mathbb R}):=\prod_{v\in V(T)}{C}_{\# v} ^o({\mathbb R}) $$
is a set, better to say, a tree ``decorated'' by sets. In this decomposition the one-vertex tree corresponds to the big open cell ${C}_n^o({\mathbb R})\subset \overline{C}_n^o({\mathbb R})$, while trees with a larger number of vertices correspond to the boundary components of the closed topological space $\overline{C}_n({\mathbb R})$. Therefore the compactified space $\overline{C}_n^o({\mathbb R})$ is homeomorphic, as a stratified topological space, to the $n$-th Stasheff associahedron ${\mathcal K}_n$, and the operad of fundamental chains associated to $\overline{C}_n({\mathbb R})$ gives the minimal resolution, ${\mathcal A} ss_\infty$, of the operad of associative algebras.
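For instance, for $n=3$ the space $\overline{C}_3^o({\mathbb R})$ is the closed interval ${\mathcal K}_3$ whose two boundary points correspond to the connected subsets $A=\{1,2\}$ and $A=\{2,3\}$, while for $n=4$ the five connected proper subsets $\{1,2\}$, $\{2,3\}$, $\{3,4\}$, $\{1,2,3\}$, $\{2,3,4\}$ of cardinality at least two label the five codimension one faces of the pentagon ${\mathcal K}_4$.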
The trees parameterizing the boundary strata of $\overline{C}_n^o({\mathbb R})$
can also be used to define a structure of a smooth manifold with corners on $\overline{C}_n^o({\mathbb R})$ \cite{Ko}. In particular, a decoration of internal edges of such a tree $T$ with ``small'' real parameters defines a smooth open coordinate chart, ${\mathcal U}_T$, of the boundary strata corresponding to $T$ in $\overline{C}_n^o({\mathbb R})$ as follows (see \cite{Ko,Ga} and lecture notes \cite{Me1} for details)
$$
\alpha_T: [0,\varepsilon)^{\# E(T)}\times \prod_{v\in V(T)} {C}^{st}_{\# In(v)}({\mathbb R})\simeq {\mathcal U}_T\subset
\overline{C}_n({\mathbb R})
$$ where $E(T)$ is the set of internal edges of $T$, $V(T)$ the set of vertices, $\varepsilon\in {\mathbb R}$ is a sufficiently small number (which in fact depends on coordinates in the factors
${C}^{st}_{\# In(v)}({\mathbb R})$, i.e.\ strictly speaking the left hand side is a subset of a smooth bundle over
$\prod_{v\in V(T)} {C}^{st}_{\# In(v)}({\mathbb R})$ but we ignore these unimportant subtleties here),
and ${C}^{st}_{k}({\mathbb R})$ is the image of an ${\mathbb S}_k$-equivariant section, $\tau: C_k({\mathbb R})\rightarrow {\mathit{Conf}}_k({\mathbb R})$, of the natural projection ${\mathit{Conf}}_k({\mathbb R})\rightarrow C_k({\mathbb R})$ defined, for example, by the equations $\sum_{i=1}^k x_i=0$ and $\sum_{i}|x_i|^2=1$; clearly, the image of such a section is a smooth manifold so that the l.h.s.\ of the isomorphism $\alpha_T$ is a smooth manifold with corners and can serve as a coordinate chart indeed.
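For instance, for $k=2$ the constraints $x_1+x_2=0$ and $|x_1|^2+|x_2|^2=1$ leave precisely the two configurations $(x_1,x_2)=(-\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}})$ and $(x_1,x_2)=(\tfrac{1}{\sqrt{2}},-\tfrac{1}{\sqrt{2}})$, so that ${C}^{st}_{2}({\mathbb R})$ consists of two points, in agreement with $C_2({\mathbb R})\simeq {\mathbb S}_2$.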
For example, a tree \cite{Me1}
\begin{equation}\label{2: metric tree T} T= \begin{array}{c}\resizebox{23mm}{!}{\xy (2.0,3.0)*{_{\varepsilon_{1}}}, (11.2,3.7)*{_{\varepsilon_{2}}}, (-6.8,-4)*{_{\varepsilon_{3}}},
(-10.5,-2)*{_1}, (-11,-17)*{_3}, (-2,-17)*{_5}, (3,-10)*{_6}, (8,-10)*{_2}, (14,-10)*{_4}, (21,-10)*{_7},
(0,14)*{}="0",
(0,8)*{\circ}="a", (-10,0)*{}="b_1", (-2,0)*{\circ}="b_2", (12,0)*{\circ}="b_3",
(2,-8)*{}="c_1", (-7,-8)*{\circ}="c_2", (8,-8)*{}="c_3", (14,-8)*{}="c_4", (20,-8)*{}="c_5", (-11,-15)*{}="d_1", (-3,-15)*{}="d_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt> \ar @{-} "b_2";"c_1" <0pt> \ar @{-} "b_2";"c_2" <0pt> \ar @{-} "b_3";"c_3" <0pt> \ar @{-} "b_3";"c_4" <0pt> \ar @{-} "b_3";"c_5" <0pt> \ar @{-} "c_2";"d_1" <0pt> \ar @{-} "c_2";"d_2" <0pt> \endxy}\end{array} \ \ \ \ \ \ \varepsilon_{1}, \varepsilon_{2}, \varepsilon_{3}\in [0,\varepsilon) \ \mbox{for some}\ 0\leq \varepsilon \ll +\infty; \end{equation}
gives a coordinate chart, $$ \begin{array}{ccccccccccc}
[0,\varepsilon)^3 & \hspace{-2mm} \times \hspace{-2mm}& C^{st}_3({\mathbb R}) & \hspace{-2mm} \times\hspace{-2mm} & C^{st}_2({\mathbb R}) & \hspace{-2mm} \times \hspace{-2mm} & C^{st}_3({\mathbb R}) &\hspace{-2mm} \times\hspace{-2mm} & C^{st}_2({\mathbb R}) & {\longrightarrow} & \overline{C}_7({\mathbb R}) \\
(\varepsilon_1,\varepsilon_2,\varepsilon_3) & \hspace{-2mm} \times \hspace{-2mm} & (x_1, x',x'') & \hspace{-2mm}\times
\hspace{-2mm}& (x''', x_6) &\hspace{-2mm} \times \hspace{-2mm}& (x_2,x_4,x_7) &\hspace{-2mm} \times\hspace{-2mm} & (x_3,x_5) &\longrightarrow& (y_1, y_3, y_5,y_6, y_2,y_4, y_7) \end{array} $$ given explicitly as follows, $$ \begin{array}{llllllllrrrr} y_1 &=& x_1 & & y_3 &=& x'+ \varepsilon_1(x'''+ \varepsilon_3 x_3) && y_2 &=& x''+ \varepsilon_2 x_2 \\ &&&& y_5 &=& x'+ \varepsilon_1(x''' + \varepsilon_3 x_5) && y_4 &=& x''+ \varepsilon_2 x_4 \\ &&&& y_6 &=& x'+ \varepsilon_1 x_6, && y_7 &=& x'' + \varepsilon_2 x_7 \\ \end{array} $$ The boundary stratum corresponding to $T$ is given in ${\mathcal U}_T$ by the equations $\varepsilon_1=\varepsilon_2=\varepsilon_3=0$. In this atlas the boundary strata get interpreted as the limit configurations of {\em collapsing}\, points. However, our configurations are considered only up to an action of the group $G_2$, so that the above 3-parameter family of configurations can be equivalently rewritten as $$ \begin{array}{llllllllrrrr} y_1 &=& \frac{1}{\varepsilon_1\varepsilon_2\varepsilon_3} x_1 & & y_3 &=& \frac{1}{\varepsilon_1\varepsilon_2\varepsilon_3} x'+ \frac{1}{\varepsilon_2\varepsilon_3}x'''+ \frac{1}{\varepsilon_2} x_3 && y_2 &=& \frac{1}{\varepsilon_1\varepsilon_2\varepsilon_3}x''+ \frac{1}{\varepsilon_1\varepsilon_3} x_2 \\ &&&& y_5 &=& \frac{1}{\varepsilon_1\varepsilon_2\varepsilon_3} x'+ \frac{1}{\varepsilon_2\varepsilon_3}x''' + \frac{1}{\varepsilon_2} x_5 && y_4 &=& \frac{1}{\varepsilon_1\varepsilon_2\varepsilon_3} x''+ \frac{1}{\varepsilon_1\varepsilon_3} x_4 \\ &&&& y_6 &=& \frac{1}{\varepsilon_1\varepsilon_2\varepsilon_3} x'+ \frac{1}{\varepsilon_2\varepsilon_3} x_6, && y_7 &=& \frac{1}{\varepsilon_1\varepsilon_2\varepsilon_3} x'' + \frac{1}{\varepsilon_1\varepsilon_3} x_7 \\ \end{array} $$ and hence in the corresponding coordinate chart the limit configurations correspond to points going in groups {\em infinitely far away from each other}\, (with different relative speeds), i.e.\ as ``exploded'' configurations. We shall work below with configuration spaces of points on a {\em pair of lines}, ${\mathbb R}\times {\mathbb R}$, whose boundary strata are parameterized by pairs of trees (with some extra structure); then it will sometimes be useful to interpret the limit configurations as collapsing ones for one tree (i.e.\ on one copy of the real line), and as exploded ones for another tree (i.e.\ on another copy of ${\mathbb R}$).
\subsubsection{\bf Permutahedron}\label{2: subsection permutahedron} The $n$-dimensional permutahedron ${\mathcal P}_n$ is defined as the convex hull in ${\mathbb R}^{n+1}$ of the set $\{(\sigma(1), \sigma(2), \ldots, \sigma(n+1))\}_{\sigma\in {\mathbb S}_{n+1}}$ of $(n+1)!$ points. The faces of ${\mathcal P}_n$ are encoded by the ordered partitions of the set $\{1, 2, \ldots , n + 1\}$, or equivalently, by the set of {\em leveled}\, planar trees with $n+1$ legs (see, e.g., \cite{LTV} or \cite{Ma} for examples and explanations). We recall that a {\em leveled planar $n$-tree}\, is a rooted $n$-tree $T$ together with a surjective map, $L: V(T) \rightarrow [l]$, from the set of its vertices to some finite ordinal $[l]=\{1,2,\ldots, l\}$ that respects the standard partial order on $V(T)$. The set, ${\mathcal L} {\mathcal T} ree_{n}$, of leveled planar trees is partially ordered: $(T,L) > (T',L')$ if $(T',L')$ is obtained from $(T,L)$ by a contraction of levels. In particular $(T,L) > (T',L')$ implies $T \geq T'$. For a level tree $(T,L: V(T)\rightarrow [l])$ we set $$
|L|:=-l+ \sum_{i=1}^l \#L^{-1}(i). $$
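For example, ${\mathcal P}_2$ is the hexagon given by the convex hull of the six points $(\sigma(1),\sigma(2),\sigma(3))$, $\sigma\in {\mathbb S}_3$, in ${\mathbb R}^3$: its six vertices correspond to the ordered partitions of $\{1,2,3\}$ into three singletons, its six edges to the ordered partitions into two blocks, and its unique $2$-cell to the trivial one-block partition.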
The configuration space model for the permutahedron was given in \cite{LTV}. In our context (when we want to keep the freedom of interpreting the limit configurations either as collapsing or as exploded) it is useful to consider the closure, $\widehat{C}_n^o({\mathbb R})$, of
${C}_n^o({\mathbb R})$ under the following embedding (cf.\ \cite{LTV}), $$ \begin{array}{ccccc} C_n^o({\mathbb R}) & \longrightarrow & ({\mathbb R}{\mathbb P}^2)^{n(n-1)(n-2)} & \times & [0,\infty]^{n(n-1)(n-2)(n-3)}\\
(x_{1}, \ldots, x_{n}) & \longrightarrow & \displaystyle \prod_{\#\{i,j,k\}=3}\left[|x_{i}-x_{j}| :
|x_{i}-x_{k}|: |x_{j}-x_{k}|\right] &\times&\displaystyle \prod_{\#\{i,j,k,l\}=4}{\frac{|x_i-x_j|}{|x_k-x_l|}} \end{array} $$ where $[0,\infty]$ is a 1-dimensional compact smooth manifold with corners with a defining coordinate chart given by $$ \begin{array}{ccc} [0,\infty] &\longrightarrow & [0,1]\\
t & \longrightarrow & \frac{t}{t+1} \end{array} $$ The set $\widehat{C}_n^o({\mathbb R})$ is the disjoint union of sets parameterized by planar rooted level trees $$
\widehat{C}_n^o({\mathbb R})=\coprod_{T\in {\mathcal L} {\mathcal T} ree_n} T({\mathbb R}), $$ and, as a smooth manifold with corners, can be identified with the permutahedron ${\mathcal P}_{n-1}$. For example, the following level trees, $$ T_1=\begin{array}{c}\resizebox{16mm}{!}{\xy (12,0)*{^{_{1}}}, (12,-3)*{^{_{2}}}, (12,-6)*{^{_{3}}},
(0,4)*{}="0",
(0,0)*{\circ}="a", (-5,-3)*{\circ}="b_1", (5,-6)*{\circ}="b_2",
(-8,-10)*{}="c_1", (-2,-10)*{}="c_2", (8,-10)*{}="c_3", (2,-10)*{}="c_4",
(-10,0)*{}="1L", (10,0)*{}="1R",
(-10,-3)*{}="2L", (10,-3)*{}="2R",
(-10,-6)*{}="3L", (10,-6)*{}="3R", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_2";"c_4" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \ar @{-} "3L";"3R" <0pt> \endxy}\end{array} \ \ \ \ \ \ \ \ \ \ \ T_2=\begin{array}{c}\resizebox{18mm}{!}{\xy (12,0)*{^{_{1}}}, (12,-5)*{^{_{2}}},
(-10,0)*{}="1L", (10,0)*{}="1R",
(-10,-5)*{}="2L", (10,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-5,-5)*{\circ}="b_1", (5,-5)*{\circ}="b_2",
(-8,-10)*{}="c_1", (-2,-10)*{}="c_2", (8,-10)*{}="c_3", (2,-10)*{}="c_4", (20,-8)*{}="c_5", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_2";"c_4" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \endxy}\end{array} \ \ \ \ \ \ \ \ \ \ \ T_3=\begin{array}{c}\resizebox{16mm}{!}{\xy (12,0)*{^{_{1}}}, (12,-3)*{^{_{2}}}, (12,-6)*{^{_{3}}},
(0,4)*{}="0",
(0,0)*{\circ}="a", (-5,-6)*{\circ}="b_1", (5,-3)*{\circ}="b_2",
(-8,-10)*{}="c_1", (-2,-10)*{}="c_2", (8,-10)*{}="c_3", (2,-10)*{}="c_4",
(-10,0)*{}="1L", (10,0)*{}="1R",
(-10,-3)*{}="2L", (10,-3)*{}="2R",
(-10,-6)*{}="3L", (10,-6)*{}="3R", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_2";"c_4" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \ar @{-} "3L";"3R" <0pt> \endxy}\end{array} $$ encode, respectively, the following limit configurations (as well as coordinate charts near the limit configurations) in ${\mathcal P}_3= \widehat{C}_4^o({\mathbb R})$: \begin{itemize} \item[(i)] $T_1$ corresponds to the point in ${\mathcal P}_3$ obtained in the limit $\varepsilon_1,\varepsilon_2\rightarrow +0$ from the configurations, $$ x_1=-1 - \varepsilon_1,\ \ \ \ x_2=-1 + \varepsilon_1,\ \ \ x_3=1 - \varepsilon_1\varepsilon_2,\ \ \ x_4=1 + \varepsilon_1\varepsilon_2,\ \ \ $$ \item[(ii)] $T_2$ corresponds to the 1-dimensional strata in ${\mathcal P}_3$ obtained in the limit $\varepsilon\rightarrow +0$ from the configurations, $$ x_1=-1 - \varepsilon x,\ \ \ \ x_2=-1 + \varepsilon x,\ \ \ x_3=1 - \varepsilon x,\ \ \ x_4=1 + \varepsilon x,\ \ \ \ \ x=\frac{x_4-x_3}{x_2-x_1}\in (0,+\infty). $$ \item[(iii)] $T_3$ corresponds to the point in ${\mathcal P}_3$ obtained in the limit $\varepsilon_1,\varepsilon_2\rightarrow +0$ from the configurations, $$ x_1=-1 - \varepsilon_1\varepsilon_2,\ \ \ \ x_2=-1 + \varepsilon_1\varepsilon_2,\ \ \ x_3=1 - \varepsilon_2,\ \ \ x_4=1 + \varepsilon_2,\ \ \ $$ \end{itemize}
For future reference we outline
a general pattern which associates to a limit configuration, $p=\lim \{x_1,\ldots,x_n\}$, in $\widehat{C}_n^o({\mathbb R})$ a {\em levelled}\, tree: \begin{itemize} \item[(a)] there is a natural projection $\pi:\widehat{C}_n^o({\mathbb R})\rightarrow \overline{C}_n^o({\mathbb R})$ which associates to $p$ its image $\pi(p)$ in the associahedron and hence a unique maximal (with respect to the standard partial order in the poset ${\mathcal T} ree_n$) unlevelled $n$-tree $T\in {\mathcal T} ree_n$ such that $p\in T({\mathbb R})\subset \widehat{C}_n^o({\mathbb R})$; the legs of $T$ are naturally labelled by the set $[n]$.
\item[(b)] every vertex $v$ of the unlevelled tree $T$ from (a) stands for a collection of points $\{x_{i_v}\in {\mathbb R}\}_{i_v\in H(v)}$ parameterized by the set $H(v)$ of input half edges at $v\in T$ which collapse to a single point $x_v$ in ${\mathbb R}$; we introduce an equivalence relation
in the set $V(T)$ of vertices of the tree $T$: $v'\sim v''$ if and only if $\lim \frac{|x_{i_{v'}}-x_{j_{v'}}|}{|x_{k_{v''}}-x_{l_{v''}}|}$ is a non-zero finite number
for some (and hence all) $i_{v'}\neq j_{v'}\in H(v')$ and $k_{v''}\neq l_{v''}\in H(v'')$; the associated equivalence classes $[v']$ are called {\em levels}; we say that equivalent vertices {\em lie on the same level};
\item[(c)] the natural partial ordering in the set of vertices, $V(T)$, induces a well-defined {\em total}\, ordering on the set of its levels.
Indeed, if $v'$ and $v''$ belong to different levels, then either
$\lim \frac{|x_{i_{v'}}-x_{j_{v'}}|}{|x_{k_{v''}}-x_{l_{v''}}|}=+\infty$ (in which case the level $[v']$ lies above the level $[v'']$ in the standard pictorial representation of a tree) or $\lim \frac{|x_{i_{v'}}-x_{j_{v'}}|}{|x_{k_{v''}}-x_{l_{v''}}|}=0$ (in which case the level $[v']$ lies below the level $[v'']$). \end{itemize} As a result we get a natural partition of the permutahedron, $$
\widehat{C}_n^o({\mathbb R})=\coprod_{(T,L)\in {\mathcal L}{\mathcal T} ree_n} T({\mathbb R}) \times ({\mathbb R}^+)^{|L|}, $$ parameterized by leveled trees; by analogy to the case of the associahedron, one can use this partition to introduce a smooth (with corners) atlas on
$\widehat{C}_n^o({\mathbb R})$ in which each leveled tree $(T,L)$ (with edges decorated by sufficiently small parameters and with levels decorated by arbitrary non-negative parameters) gives us a coordinate chart near the boundary stratum $T({\mathbb R}) \times ({\mathbb R}^+)^{|L|}\subset \widehat{C}_n^o({\mathbb R})$. Thus $\widehat{C}_n^o({\mathbb R})={\mathcal P}_{n-1}$ can be given the structure of a smooth manifold with corners (we do not use in this paper the finer fact that ${\mathcal P}_{n-1}$ can be identified with a polytope).
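To illustrate the pattern (a)--(c), consider the limit configuration (i) above: the clusters $\{x_1,x_2\}$ and $\{x_3,x_4\}$ collapse at rates $\varepsilon_1$ and, respectively, $\varepsilon_1\varepsilon_2$, so that $\lim \frac{|x_3-x_4|}{|x_1-x_2|}=\lim \varepsilon_2=0$; by (b) and (c) the two corresponding vertices lie on different levels, the $\{x_3,x_4\}$-vertex below the $\{x_1,x_2\}$-vertex, and one recovers in this way the leveled tree $T_1$.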
\subsection{Bipermutahedron}\label{2: subsection bipermutohedron} In this and the next subsections we give a configuration space interpretation of the bipermutahedron and biassociahedron posets, ${\mathcal P}^m_n$ and, respectively, ${\mathcal K}_n^m$, which were introduced and studied by Martin Markl in \cite{Ma}. We show that these posets can be identified with the boundary posets of certain smooth manifolds with corners (which come equipped with a natural structure of semialgebraic manifolds).
Consider a configuration space $$ {\mathit{Conf}}^o_{m,n}({\mathbb R}\times {\mathbb R}):={\mathit{Conf}}^o_{m}({\mathbb R}) \times {\mathit{Conf}}^o_{n}({\mathbb R}). $$ A point $p\in {\mathit{Conf}}^o_{m,n}({\mathbb R}\times {\mathbb R})$ is a pair $(p',p'')$ of collections of real numbers, $$ p'=\{x_1<\ldots< x_m\}, \ \ \ p''= \{y_1<\ldots< y_n\}. $$
The group $G_3:={\mathbb R}^+\rtimes {\mathbb R}^2$ acts freely on ${\mathit{Conf}}^o_{m,n}({\mathbb R}\times {\mathbb R})$ for all $m+n\geq 3$ by rescalings and translations, $$ \begin{array}{ccccc} G_3 &\times & {\mathit{Conf}}^o_{m,n}({\mathbb R}\times {\mathbb R}) & \longrightarrow & {\mathit{Conf}}^o_{m,n}({\mathbb R}\times {\mathbb R})\\ (\lambda,a,b) & & (p',p'') & \longrightarrow & (\lambda p'+ a; \lambda^{-1}p'' +b) \end{array}. $$ The space of orbits, $$ C_{m,n}({\mathbb R}\times {\mathbb R}):=\frac{{\mathit{Conf}}^o_{m,n}({\mathbb R}\times {\mathbb R})}{G_3} $$ is an $(m+n-3)$-dimensional oriented manifold. It is clear that $$ C_{1,n}({\mathbb R}\times {\mathbb R})=C_{n,1}({\mathbb R}\times {\mathbb R})=C_n^o({\mathbb R}) $$ and we define their compactifications $\widehat{C_{1,n}}({\mathbb R}\times {\mathbb R})$ and $\widehat{C_{n,1}}({\mathbb R}\times {\mathbb R})$ as the permutahedron $\widehat{C}_{n}^o({\mathbb R})$. For $m,n\geq 2$, there are canonical projections $$ \pi': C_{m,n}({\mathbb R}\times {\mathbb R})\rightarrow C_{m}({\mathbb R}), \ \ \ \ \pi'': C_{m,n}({\mathbb R}\times {\mathbb R})\rightarrow C_{n}({\mathbb R}) $$ which can be used to construct the following embedding \[ \begin{array}{ccccccc}
C_{m,n}({\mathbb R}\times {\mathbb R})\hspace{0mm} & \longrightarrow &\hspace{0mm} \widehat{C}_m({\mathbb R}) & \times & \hspace{0mm}
\widehat{C}_n({\mathbb R}) &\times& \hspace{0mm} [0,\infty]^{\frac{nm(n-1)(m-1)}{4}}
\\
(p',p'')
\hspace{0mm} & \longrightarrow & \hspace{0mm} p' &\times&
\hspace{0mm} p'' \hspace{0mm} &\times &\hspace{0mm}
\displaystyle \prod_{i>j, \alpha>\beta}
{{|x_{i}-x_{j}||y_{\alpha}-y_{\beta}|}} \hspace{0mm} \end{array} \] and define the compactified configuration space $\widehat{C_{m,n}}({\mathbb R}\times {\mathbb R})$ as the closure of the image of $C_{m,n}({\mathbb R}\times {\mathbb R})$ under this embedding. By analogy to the case of permutahedra, the compact space $\widehat{C_{m,n}}({\mathbb R}\times {\mathbb R})$ can naturally be given the structure of a smooth manifold with corners; in particular, this space comes with a stratification,
$$
\widehat{C_{m,n}}({\mathbb R}\times {\mathbb R})\supset {\partial} \widehat{C_{m,n}}({\mathbb R}\times {\mathbb R}) \supset {\partial}^2 \widehat{C_{m,n}}({\mathbb R}\times {\mathbb R})\supset \ldots,
$$
and it is not hard to check that the poset associated with this stratification is precisely the bipermutahedron poset ${\mathcal P}_m^n$ from \cite{Ma}. Let us first recall from \cite{Ma} the definition of the poset ${\mathcal P}_m^n$, $m\geq 1$, $n\geq 1$, $m+n\geq 3$. For $m,n\geq 2$ the set ${\mathcal P}_m^n$ is defined as the set of all triples, $(T^\uparrow, T_\downarrow, \ell)$, consisting of an up-rooted tree $T^\uparrow\in {\mathcal T} ree_n$, of a down-rooted tree $T_\downarrow\in {\mathcal T} ree_m$, and a strictly order preserving\footnote{i.e.\ if $v>u$ then $\ell(v)>\ell(u)$.} surjective {\em level function}\, $\ell: V(T^\uparrow) \cup V(T_\downarrow)\rightarrow [l]$. For example $$ \begin{array}{c}\resizebox{20mm}{!}{\xy
(-10,0)*{}="1L", (19,0)*{}="1R",
(-10,5)*{}="2L", (19,5)*{}="2R",
(-10,-5)*{}="3L", (19,-5)*{}="3R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-5,-5)*{\circ}="b_1", (5,-5)*{\circ}="b_2",
(-8,-10)*{}="c_1", (-2,-10)*{}="c_2", (8,-10)*{}="c_3", (2,-10)*{}="c_4", (20,-8)*{}="c_5",
(10,-9)*{}="0'",
(10,-5)*{\circ}="a'", (7,0)*{\circ}="b_1'", (15.5,5)*{}="b_2'", (4,5)*{}="c_1'", (9.5,5)*{}="c_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_2";"c_4" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \ar @{-} "b_1'";"c_1'" <0pt> \ar @{-} "b_1'";"c_2'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{-} "3L";"3R" <0pt> \endxy}\end{array} \in {\mathcal P}_4^3 $$ We define $$
|\ell|:=-l+ \sum_{i=1}^l \#\ell^{-1}(i). $$ The set ${\mathcal P}_m^n$ is partially ordered: $(T^\uparrow, T_\downarrow, \ell)> (\tilde{T}^\uparrow, \tilde{T}_\downarrow, \tilde{\ell})$ if the latter
can be obtained from the former by contraction of levels.
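For instance, for the element of ${\mathcal P}_4^3$ depicted above one has $l=2$, the two levels containing $2$ and $3$ vertices respectively, so that $|\ell|=-2+(2+3)=3$; equivalently, $|\ell|=\# V(T^\uparrow)+\# V(T_\downarrow)-l$.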
The posets ${\mathcal P}_n^1$ and ${\mathcal P}^n_1$ are identified with ${\mathcal L} {\mathcal T} ree_n$
but their elements are still represented as {\em pairs}\, of trees with the help
of the singular tree $|$ which has no vertices, for example
$$
\begin{array}{c} \resizebox{6mm}{!}{\xy
(-4,0)*{}="1L", (7,0)*{}="1R",
(-4,-4)*{}="2L", (7,-4)*{}="2R", (0,3)*{}="0",
(0,0)*{\circ}="a", (-2,-4)*{\circ}="b_1", (3,-7)*{}="b_2",
(5,3)*{}="r0", (5,-7)*{}="r00",
(-3.7,-7)*{}="c_1", (-0.5,-7)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "r0";"r00" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \endxy}\end{array}\in {\mathcal P}_3^1.
$$
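Since the singular tree $|$ has no vertices, the level function of such an element is just a level structure on the non-trivial tree in the pair, and $|\ell|$ coincides with the quantity $|L|$ introduced for the permutahedron above.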
To each (limit) configuration, $p=\lim \{x_1,\ldots,x_m; y_1,\ldots,y_n\}$, in $\widehat{C}_{m,n}({\mathbb R}\times {\mathbb R})$ we associate a uniquely defined leveled bi-tree from
${\mathcal P}_m^n$ by a procedure which is completely analogous to the one described at the end of \S {\ref{2: subsection permutahedron}} and get, therefore, a decomposition, \begin{equation}\label{2: partition of bipermutohedron}
\widehat{C}_{m,n}({\mathbb R}\times {\mathbb R})=\coprod_{(T^\uparrow,T_\downarrow,\ell)\in {\mathcal P}_m^n} T^\uparrow({\mathbb R})\times T_\downarrow({\mathbb R}) \times ({\mathbb R}^+)^{|\ell|}. \end{equation}
This decomposition can be used to define a smooth (with corners) atlas on
the bipermutahedron $\widehat{C}_{m,n}({\mathbb R}\times {\mathbb R})$.
\subsection{Biassociahedron}\label{6: subsec on biassociahedron} Compactifications $\overline{C_{1,n}}({\mathbb R}\times {\mathbb R})$ and $\overline{C_{n,1}}({\mathbb R}\times {\mathbb R})$ of the configuration spaces ${C_{1,n}}({\mathbb R}\times {\mathbb R})$ and respectively ${C_{n,1}}({\mathbb R}\times {\mathbb R})$ are defined as the associahedron $\overline{C}^o_n({\mathbb R})$. For $m,n\geq 2$ we define a compactification $\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})$ of the configuration space $C_{m,n}({\mathbb R}\times {\mathbb R})$ as the closure of the image of $C_{m,n}({\mathbb R}\times {\mathbb R})$ under the following embedding (cf.\ \cite{Sh1}), \[ \begin{array}{ccccccc}
C_{m,n}({\mathbb R}\times {\mathbb R})\hspace{0mm} & \longrightarrow &\hspace{0mm} \overline{C}_m({\mathbb R}) & \times & \hspace{0mm}
\overline{C}_n({\mathbb R}) &\times& \hspace{0mm} [0,\infty]^{\frac{nm(n-1)(m-1)}{4}}
\\
(p',p'')
\hspace{0mm} & \longrightarrow & \hspace{0mm} p' &\times&
\hspace{0mm} p'' \hspace{0mm} &\times &\hspace{0mm}
\displaystyle \prod_{i>j, \alpha>\beta}
{{|x_{i}-x_{j}||y_{\alpha}-y_{\beta}|}} \hspace{0mm} \end{array} \]
There is a natural surjection $$ P: \widehat{C_{m,n}}({\mathbb R}\times {\mathbb R}) \longrightarrow \overline{C_{m,n}}({\mathbb R}\times {\mathbb R}) $$ so that the partition (\ref{2: partition of bipermutohedron}) induces a partition of $\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})$. The induced partition is again parameterized by pairs of trees with an extra structure. The difference of the compactification formula for $\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})$ from the one for
$\widehat{C_{m,n}}({\mathbb R}\times {\mathbb R})$ is that we have no factors $\frac{|x_{i}-x_{j}|}{|x_k-x_l|}$ and $\frac{|y_{\alpha}-y_{\beta}|}{|y_\gamma-y_\delta|}$ which measure relative speeds of collapsing/exploding groups of points belonging solely to one of the factors in ${\mathbb R}\times {\mathbb R}$. Hence the projection $P$
applied to the stratum $T^\uparrow({\mathbb R})\times T_\downarrow({\mathbb R}) \times ({\mathbb R}^+)^{|\ell|}$ contracts to single points those factors of ${\mathbb R}^+$ which correspond to the levels $i\in [l]$ with the property that either $\ell^{-1}(i)\cap V(T^\uparrow)=\emptyset$ or $\ell^{-1}(i)\cap V(T_\downarrow)=\emptyset$. However such levels do not disappear completely from the induced stratification formula as it still makes sense to compare $\ell^{-1}(i)$ with $\ell^{-1}(j)$ in the cases when $\ell^{-1}(i)\cap V(T_\downarrow)=\emptyset$ and $\ell^{-1}(j)\cap V(T^\uparrow)=\emptyset$. Thus after the projection $P$ the level function on $V(T^\uparrow)\sqcup V(T_\downarrow)$ gets transformed into
a so-called {\em zone function}\, \cite{Ma} which, by definition, is a surjection,
$$
\zeta: V(T^\uparrow)\sqcup V(T_\downarrow) \longrightarrow [l]
$$
satisfying the following conditions:
\begin{itemize}
\item[(i)] if $v<u$, then $\zeta(v)\leq \zeta (u)$,
\item[(ii)] for any pair of distinct elements $i,j\in [l]$ such that both $\zeta^{-1}(i)$ and $\zeta^{-1}(j)$
contain vertices from {\em both}\, sets $V(T^\uparrow)$ and $V(T_\downarrow)$, the inequality $i<j$ implies
$v< u$ for every vertex $v\in \zeta^{-1}(i)$ and every vertex $u\in \zeta^{-1}(j)$ which are comparable;
\item[(iii)] there is no $i\in [l]$ such that both subsets $\zeta^{-1}(i)$ and $\zeta^{-1}(i+1)$
belong to $V(T^\uparrow)$ or both belong to $V(T_\downarrow)$.
\end{itemize} Elements $i\in [l]$ with $\zeta^{-1}(i)\cap V(T^\uparrow)\neq \emptyset$ and $\zeta^{-1}(i)\cap V(T_\downarrow)\neq \emptyset$ are called {\em barriers}\, and are depicted as solid horizontal lines. Elements $i\in [l]$ with $\zeta^{-1}(i)\cap V(T^\uparrow)= \emptyset$ are called {\em down-zones}, while elements $i\in [l]$ with $\zeta^{-1}(i)\cap V(T_\downarrow)= \emptyset$ are called {\em up-zones}; they are depicted as dashed horizontal lines. Thus condition (i) says that the zone function is order preserving, condition (ii) says that it is strictly order preserving on barriers, and condition (iii) says that there are no adjacent zones of the same type. Here are examples, $$ \begin{array}{c}\resizebox{25mm}{!}{\xy
(-10,6)*{}="1L", (19,6)*{}="1R",
(-10,-5)*{}="2L", (19,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-5,-4)*{\circ}="b_1", (5,-6.6)*{\circ}="b_2",
(-8,-10)*{}="c_1", (-2,-10)*{}="c_2", (8,-10)*{}="c_3", (2,-10)*{}="c_4", (20,-8)*{}="c_5",
(10,-1)*{}="0'",
(10,3)*{\circ}="a'", (7,8)*{\circ}="b_1'", (15.5,13)*{}="b_2'", (4,13)*{}="c_1'", (9.5,13)*{}="c_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_2";"c_4" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \ar @{-} "b_1'";"c_1'" <0pt> \ar @{-} "b_1'";"c_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\end{array} \hspace{10mm}
\begin{array}{c}\resizebox{25mm}{!}{\xy
(-10,0)*{}="1L", (19,0)*{}="1R",
(-10,5)*{}="2L", (19,5)*{}="2R",
(-10,-5)*{}="3L", (19,-5)*{}="3R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-5,-4)*{\circ}="b_1", (5,-6.5)*{\circ}="b_2",
(-8,-10)*{}="c_1", (-2,-10)*{}="c_2", (8,-10)*{}="c_3", (2,-10)*{}="c_4", (20,-8)*{}="c_5",
(10,-4)*{}="0'",
(10,0)*{\circ}="a'", (7,5)*{\circ}="b_1'", (15.5,10)*{}="b_2'", (4,10)*{}="c_1'", (9.5,10)*{}="c_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_2";"c_4" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \ar @{-} "b_1'";"c_1'" <0pt> \ar @{-} "b_1'";"c_2'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{.} "3L";"3R" <0pt> \endxy}\end{array}
\ \ \ \ \ \ \ \begin{array}{c}\resizebox{25mm}{!}{\xy
(-10,0)*{}="1L", (19,0)*{}="1R",
(-10,5)*{}="2L", (19,5)*{}="2R",
(-10,-5)*{}="3L", (19,-5)*{}="3R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-5,-5)*{\circ}="b_1", (5,-5)*{\circ}="b_2",
(-8,-10)*{}="c_1", (-2,-10)*{}="c_2", (8,-10)*{}="c_3", (2,-10)*{}="c_4", (20,-8)*{}="c_5",
(10,-9)*{}="0'",
(10,-5)*{\circ}="a'", (7,0)*{\circ}="b_1'", (15.5,5)*{}="b_2'", (4,5)*{}="c_1'", (9.5,5)*{}="c_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt> \ar @{-} "b_2";"c_4" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \ar @{-} "b_1'";"c_1'" <0pt> \ar @{-} "b_1'";"c_2'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{-} "3L";"3R" <0pt> \endxy}\end{array} $$ of a fixed pair of trees and three different zone functions on the set of their vertices. For a zone function $\zeta$ on $V(T^\uparrow)\sqcup V(T_\downarrow)$
we denote by $B(\zeta)$ the set of its barriers, and by $|\zeta|$ the non-negative integer, $$
|\zeta|:=-\# B(\zeta) + \sum_{i\in B(\zeta)} \# \zeta^{-1}(i). $$ The compactified configuration space $\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})$, the {\em biassociahedron} (cf.\ \cite{Ma}), comes therefore equipped with the induced stratification \begin{equation}\label{2: C m,n stratification formula}
\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})=\bigcup_{(T^\uparrow, T_\downarrow, \zeta)} T^\uparrow({\mathbb R})\times T_\downarrow({\mathbb R}) \times (0,+\infty)^{|\zeta|} \end{equation} which is parameterized by the poset ${\mathcal K}_m^n$ consisting of triples $(T^\uparrow, T_\downarrow, \zeta)$. Therefore we often denote $\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})$ by ${\mathsf K}_m^n$. This decomposition can be used to define in a standard way a smooth (with corners) atlas on
the biassociahedron ${\mathsf K}_m^n=\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})$ such that the poset associated with the induced filtration
$$
\overline{C_{m,n}}({\mathbb R}\times {\mathbb R})\supset {\partial} \overline{C_{m,n}}({\mathbb R}\times {\mathbb R}) \supset {\partial}^2 \overline{C_{m,n}}({\mathbb R}\times {\mathbb R})\supset \ldots,
$$ is precisely the poset ${\mathcal K}_m^n$ from \cite{Ma}.
\subsection{Example: $m+n=4$} This is the first non-trivial case. It is clear that $$ \overline{C_{3,1}}({\mathbb R}\times{\mathbb R} )\simeq \overline{C_{1,3}}({\mathbb R}\times {\mathbb R})\simeq \overline{C_{3}}({\mathbb R})\simeq [0,1].
$$
Therefore in the cases $(m=3,n=1)$ and $(m=1,n=3)$ the combinatorics of the natural stratification of the compactified configuration spaces can be coded by the following pairs of trees (each pair is equipped with the only possible zone function), $$ \overline{C_{3,1}}({\mathbb R}\times{\mathbb R} )=\hspace{3mm} \begin{array}{c} \hspace{-3mm}\resizebox{12mm}{!}{\xy
(-3,-2)*{}="1L", (7,-2)*{}="1R", (0,3)*{}="0",
(0,0)*{\circ}="a", (-2,-4)*{\circ}="b_1", (3,-7)*{}="b_2",
(5,3)*{}="r0", (5,-7)*{}="r00",
(-3.7,-7)*{}="c_1", (-0.5,-7)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "r0";"r00" <0pt>
\ar @{.} "1L";"1R" <0pt> \endxy} \ \ \ \ \ \ \ \ \ \ \resizebox{12mm}{!}{\xy (-3,-1.5)*{}="1L", (7,-1.5)*{}="1R",
(0,3)*{}="0",
(0,-1.5)*{\circ}="a", (3,-7)*{}="b_2",
(5,3)*{}="r0", (5,-7)*{}="r00",
(-3.7,-7)*{}="c_1", (-0.5,-7)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "a";"c_2" <0pt> \ar @{-} "r0";"r00" <0pt>
\ar @{.} "1L";"1R" <0pt> \endxy} \ \ \ \ \ \ \ \ \ \ \ \resizebox{12mm}{!}{\xy
(-3,-2)*{}="1L", (7,-2)*{}="1R", (0,3)*{}="0",
(0,0)*{\circ}="a", (2,-4)*{\circ}="b_1", (-3,-7)*{}="b_2",
(5,3)*{}="r0", (5,-7)*{}="r00",
(3.7,-7)*{}="c_1", (0.5,-7)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt> \ar @{-} "r0";"r00" <0pt>
\ar @{.} "1L";"1R" <0pt> \endxy}\\
\\ {\xy
(-5,0)*{^0}="0",
(44,0)*{^1}="a", \ar @{-} "a";"0" <0pt> \endxy} \end{array} $$ The left pair corresponds to the point $0\in [0,1]$, the middle one to the open interval $(0,1)$, and the right pair of trees to the point $1\in [0,1]$. Turning the trees above upside down, we get a ``pairs of trees" stratification of $\overline{C_{1,3}}({\mathbb R}\times{\mathbb R} )$. The trees are not leveled, but it will be useful to understand these trees as trivially {\em zoned} (cf.\ \cite{Ma}), i.e.\ as the ones in which all vertices are assigned one an the same zone value $1$. We shall see below examples of trees with more than one zone.
The compactification formula says that $\overline{C_{2,2}}({\mathbb R}\times{\mathbb R} )$ is the closure of the embedding, $$ \begin{array}{cccc} {C_{2,2}}({\mathbb R}\times{\mathbb R} ) &\longrightarrow & [0,+\infty]\\
(x_1,x_2),(y_1,y_2) & \longrightarrow & |x_2-x_1||y_2-y_1| \end{array} $$ Thus $\overline{C_{2,2}}({\mathbb R}\times{\mathbb R} )\simeq [0,1]$, and the stratification $[0,1]=0\sqcup(0,1)\sqcup 1$ can be represented in terms of the pair of trees and three possible zone functions as follows, $$ \overline{C_{2,2}}({\mathbb R}\times{\mathbb R} )=\begin{array}{c} \resizebox{10mm}{!}{\xy (-3,-1)*{}="1L", (7,-1)*{}="1R",
(-3,1)*{}="2L", (7,1)*{}="2R", (0,4)*{}="0",
(0,-1)*{\circ}="a", (-2,-4)*{}="b_1", (2,-4)*{}="b_2",
(4,-3)*{}="00",
(4,1)*{\circ}="a0", (2,4)*{}="c_1", (6,4)*{}="c_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a0";"c_1" <0pt> \ar @{-} "a0";"c_2" <0pt> \ar @{-} "a0";"00" <0pt> \ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy} \ \ \ \ \ \ \ \ \ \ \resizebox{11mm}{!}{ \xy (-3,0)*{}="1L", (7,0)*{}="1R",
(0,3)*{}="0",
(0,0)*{\circ}="a", (-2,-4)*{}="b_1", (2,-4)*{}="b_2",
(4,-3)*{}="00",
(4,0)*{\circ}="a0", (2,4)*{}="c_1", (6,4)*{}="c_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a0";"c_1" <0pt> \ar @{-} "a0";"c_2" <0pt> \ar @{-} "a0";"00" <0pt> \ar @{-} "1L";"1R" <0pt> \endxy} \ \ \ \ \ \ \ \ \ \ \ \resizebox{11mm}{!}{ \xy (-3,-1)*{}="1L", (7,-1)*{}="1R",
(-3,1)*{}="2L", (7,1)*{}="2R", (0,4)*{}="0",
(0,1)*{\circ}="a", (-2,-4)*{}="b_1", (2,-4)*{}="b_2",
(4,-4)*{}="00",
(4,-1)*{\circ}="a0", (2,4)*{}="c_1", (6,4)*{}="c_2",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a0";"c_1" <0pt> \ar @{-} "a0";"c_2" <0pt> \ar @{-} "a0";"00" <0pt> \ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\\
\\ \ \ {\xy (-12,0)*{^0}="0",
(46,0)*{^1}="a", \ar @{-} "a";"0" <0pt> \endxy} \end{array} $$ The left pair of trees corresponds to the limit $\varepsilon\rightarrow 0$ configuration $$ (x_1=-\varepsilon, x_2=\varepsilon),\ (y_1=-1, y_2=1)\ \ \ \sim \ \ \ (x_1=-1, x_2=1),\ (y_1=-\varepsilon, y_2=\varepsilon), $$
with $|x_2-x_1||y_2-y_1|\rightarrow 0$.
The middle pair of trees corresponds to the generic configurations, $$ (x_1=- x, x_2= x),\ (y_1=-y, y_2=y)\ \ \ \sim \ \ \ (x_1=-\varepsilon x, x_2=\varepsilon x),\ (y_1=-\frac{1}{\varepsilon}y, y_2=\frac{1}{\varepsilon}y), \ \ \ x,y\in {\mathbb R}^+, $$
with $|x_2-x_1||y_2-y_1|$ a positive finite number (so that $|x_2-x_1|\sim |y_2-y_1|$ and the associated vertices are on the same level ). The right pair of trees corresponds to the limit $\varepsilon\rightarrow 0$ of the configuration $$ (x_1=-1, x_2=1), \ \ (y_1=-\frac{1}{\varepsilon}, y_2=\frac{1}{\varepsilon}) \ \ \ \sim \ \ \ (x_1=-\frac{1}{\varepsilon}, x_2=\frac{1}{\varepsilon}), \ (y_1=-1, y_2=1) $$
with $|x_2-x_1||y_2-y_1|\rightarrow +\infty $.
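As before, the identification $\overline{C_{2,2}}({\mathbb R}\times{\mathbb R} )\simeq [0,1]$ uses the coordinate chart $[0,+\infty]\rightarrow [0,1]$, $t\rightarrow \frac{t}{t+1}$, under which the value $0$, the finite positive values and the value $+\infty$ of the product $|x_2-x_1||y_2-y_1|$ correspond, respectively, to the point $0$, the open interval $(0,1)$ and the point $1$.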
\subsection{Example: $m+n=5$} The cases $(m=1,n=4)$ and $(m=4,n=1)$ are completely analogous to the example discussed above. The cases $(m=2,n=3)$ and $(m=3,n=2)$ are similar so that we shall study in detail only one of them. The compactification $\overline{C_{3,2}}({\mathbb R}\times{\mathbb R} )$ is the closure of the embedding, $$ \begin{array}{ccccc} {C_{3,2}}({\mathbb R}\times{\mathbb R} ) &\longrightarrow & {\mathbb R}{\mathbb P}^2 &\times & [0,+\infty]^3\\
(x_1,x_2,x_3),(y_1,y_2) & \longrightarrow & \left[|x_1-x_2|:|x_1-x_3|:|x_2-x_3|\right] &\times&
\left\{{\begin{array}{c} |x_2-x_1||y_2-y_1|\\ |x_3-x_1||y_2-y_1| \\ |x_2-x_3||y_2-y_1|\end{array}}\right. \end{array} $$ There are three possible pairs of trees in this case, $$ \begin{array}{c}\resizebox{12mm}{!}{\xy
(0,2)*{}="0",
(0,-3)*{\circ}="a", (-4,-8)*{}="b_1", (0,-8)*{}="b_2", (4,-8)*{}="b_3",
(8,-8)*{}="0'",
(8,-3)*{\circ}="a'", (5,2)*{}="b_1'", (11,2)*{}="b_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}\end{array} \ \ \ \ \ \ \begin{array}{c}\resizebox{12mm}{!}{\xy
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-9)*{}="0'",
(8,-3)*{\circ}="a'", (5,4)*{}="b_1'", (11,4)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}\end{array} \ \ \ \ \ \ \ \ \ \ \ \begin{array}{c}\resizebox{12mm}{!}{\xy
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_2", (-5.5,-10)*{}="c_1",
(8,-9)*{}="0'",
(8,-3)*{\circ}="a'", (5,4)*{}="b_1'", (11,4)*{}="b_2'",
(6,-10)*{}="c_3", (0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_2";"c_2" <0pt> \ar @{-} "b_2";"c_3" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}\end{array} $$ To check claim (\ref{2: C m,n stratification formula}) we have to consider the list of all possible zone functions on these pairs, together with the associated boundary strata.
1) To the zone function $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,-3)*{}="1L", (12,-3)*{}="1R",
(0,2)*{}="0",
(0,-3)*{\circ}="a", (-4,-8)*{}="b_1", (0,-8)*{}="b_2", (4,-8)*{}="b_3",
(8,-8)*{}="0'",
(8,-3)*{\circ}="a'", (5,2)*{}="b_1'", (11,2)*{}="b_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt>
\ar @{-} "1L";"1R" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}\end{array}$
we associate, in accordance with (\ref{2: C m,n stratification formula}), the $2$-dimensional big cell $$ C_{3,2}({\mathbb R}\times{\mathbb R} )\simeq \left\{\begin{array}{c}(x_1=0,x_2=x,x_3=1)\\ (y_1=-y,y_2=y)\end{array}\right. \simeq (0,1)\times (0,+\infty) $$
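Here the zone function consists of a single barrier containing both vertices, so that $B(\zeta)=\{1\}$ and $|\zeta|=-1+2=1$; the factor $({\mathbb R}^+)^{|\zeta|}$ in (\ref{2: C m,n stratification formula}) is precisely the coordinate $y\in (0,+\infty)$ above.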
2) The zone function $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,-2)*{}="1L", (12,-2)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R",
(0,0)*{}="0",
(0,-5)*{\circ}="a", (-4,-10)*{}="b_1", (0,-10)*{}="b_2", (4,-10)*{}="b_3",
(8,-7)*{}="0'",
(8,-2)*{\circ}="a'", (5,3)*{}="b_1'", (11,3)*{}="b_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}\end{array}$ corresponds to the $1$-dimensional cell $$ \lim_{\varepsilon\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=x,x_3=1)\\ (y_1=-\varepsilon,y_2=\varepsilon)\end{array}\right. \simeq (0,1) $$
3) The zone function $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,-2)*{}="1L", (12,-2)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R",
(0,3)*{}="0",
(0,-2)*{\circ}="a", (-4,-7)*{}="b_1", (0,-7)*{}="b_2", (4,-7)*{}="b_3",
(8,-10)*{}="0'",
(8,-5)*{\circ}="a'", (5,0)*{}="b_1'", (11,0)*{}="b_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}\end{array}$ corresponds to the $1$-dimensional cell $$ \lim_{\varepsilon\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=x,x_3=1)\\ (y_1=-\frac{1}{\varepsilon},y_2=\frac{1}{\varepsilon})\end{array}\right.
\simeq (0,1) $$
4) The zone functions $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,6)*{}="1L", (12,6)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,0)*{}="0'",
(8,6)*{\circ}="a'", (5,13)*{}="b_1'", (11,13)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\end{array}$ and, respectively, $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,6)*{}="1L", (12,6)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,0)*{}="0'",
(8,6)*{\circ}="a'", (5,13)*{}="b_1'", (11,13)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\end{array}$ correspond to $2$ points which are boundaries of the closure of the strata 2) in $\overline{C_{3,2}}({\mathbb R}\times {\mathbb R})$, i.e.\ they correspond, respectively, to the following two limit configuration $$ \lim_{\varepsilon_1,\varepsilon_2\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=\varepsilon_1,x_3=1)\\ (y_1=-\varepsilon_2,y_2=\varepsilon_2)\end{array}\right. \ \ \ \ \ \ \ \ \ \ \lim_{\varepsilon_1,\varepsilon_2\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=1-\varepsilon_1,x_3=1)\\ (y_1=-\varepsilon_2,y_2=\varepsilon_2)\end{array}\right. $$
5) The zone functions $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,-7)*{}="1L", (12,-7)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-13)*{}="0'",
(8,-7)*{\circ}="a'", (5,0)*{}="b_1'", (11,0)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\end{array}$ and, respectively, $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,-7)*{}="1L", (12,-7)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-13)*{}="0'",
(8,-7)*{\circ}="a'", (5,0)*{}="b_1'", (11,0)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\end{array}$ correspond to $2$ points which are boundaries of the closure of the strata 3) in $\overline{C_{3,2}}({\mathbb R}\times {\mathbb R})$, i.e.\ they correspond, respectively, to the following two limit configuration $$ \lim_{\varepsilon_1,\varepsilon_2\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=\varepsilon_1,x_3=1)\\ (y_1=-\frac{1}{\varepsilon_1\varepsilon_2},y_2=\frac{1}{\varepsilon_1\varepsilon_2})\end{array}\right. \ \ \ \ \ \ \ \ \ \ \lim_{\varepsilon_1,\varepsilon_2\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=1-\varepsilon_1\,x_3=1)\\ (y_1=-\frac{1}{\varepsilon_1\varepsilon_2},y_2=\frac{1}{\varepsilon_1\varepsilon_2})\end{array}\right. $$
6) The zone functions $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-6)*{}="0'",
(8,0)*{\circ}="a'", (5,7)*{}="b_1'", (11,7)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\end{array}$ and, respectively, $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-6)*{}="0'",
(8,0)*{\circ}="a'", (5,7)*{}="b_1'", (11,7)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}\end{array}$ correspond, respectively, to the following $1$-dimensional cells,
$$ \lim_{\varepsilon\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=\varepsilon,x_3=1)\\ (y_1=-y,y_2=y) \end{array}\right.\simeq (0,+\infty) \ \ \ \ \ \ \ \ \ \ \lim_{\varepsilon\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=1-\varepsilon,x_3=1)\\ (y_1=-y,y_2=y)\end{array}\right.\simeq (0,+\infty) $$
7) The zone functions $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R", (-4,-3)*{}="3L", (12,-3)*{}="3R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-9)*{}="0'",
(8,-3)*{\circ}="a'", (5,4)*{}="b_1'", (11,4)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{.} "3L";"3R" <0pt> \endxy}\end{array}$ and, respectively, $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R", (-4,-3)*{}="3L", (12,-3)*{}="3R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-9)*{}="0'",
(8,-3)*{\circ}="a'", (5,4)*{}="b_1'", (11,4)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{.} "3L";"3R" <0pt> \endxy}\end{array}$ correspond, respectively, to the following points in $\overline{C_{3,2}}({\mathbb R}\times {\mathbb R})$, $$ \lim_{\varepsilon_1,\varepsilon_2\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=\varepsilon_1\varepsilon_2,x_3=1)\\ (y_1=-\frac{1}{\varepsilon_2},y_2=\frac{1}{\varepsilon_2})\end{array}\right. \ \ \ \ \ \ \ \ \ \ \lim_{\varepsilon_2\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=1-\varepsilon_1\varepsilon_2\,x_3=1)\\ (y_1=-\frac{1}{\varepsilon_2},y_2=\frac{1}{\varepsilon_2})\end{array}\right. $$
8) The zone functions $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-5,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-11)*{}="0'",
(8,-5)*{\circ}="a'", (5,2)*{}="b_1'", (11,2)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \endxy}\end{array}$ and, respectively, $\begin{array}{c}\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-5,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-11)*{}="0'",
(8,-5)*{\circ}="a'", (5,2)*{}="b_1'", (11,2)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \endxy}\end{array}$ correspond, respectively, to the following $1$-dimensional cells, $$ \lim_{\varepsilon\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=\varepsilon ,x_3=1)\\ (y_1=-\frac{y}{\varepsilon},y_2=\frac{y}{\varepsilon})\end{array}\right.\simeq (0,+\infty) \ \ \ \ \ \ \ \ \ \ \lim_{\varepsilon\rightarrow 0} \left\{\begin{array}{c}(x_1=0,x_2=1-\varepsilon ,x_3=1)\\ (y_1=-\frac{y}{\varepsilon},y_2=\frac{y}{\varepsilon})\end{array}\right.\simeq (0,+\infty) $$
This list exhausts all possible natural strata of $\overline{C_{3,2}}({\mathbb R}\times{\mathbb R} )$ and all possible triples $(T^\uparrow\in {\mathcal T} ree_3, T_\downarrow\in {\mathcal T} ree_2, \zeta)$. The stratification formula (\ref{2: C m,n stratification formula}) holds true in this case. Not surprisingly, $\overline{C_{3,2}}({\mathbb R}\times{\mathbb R} )$ is the hexagon from the multiplihedra family \cite{Ma,SU}
$$ \begin{array}{c} \resizebox{90mm}{!}{\xy
(0,16)*{\resizebox{12mm}{!}{\xy (-4,-3)*{}="1L", (12,-3)*{}="1R",
(0,2)*{}="0",
(0,-3)*{\circ}="a", (-4,-8)*{}="b_1", (0,-8)*{}="b_2", (4,-8)*{}="b_3",
(8,-8)*{}="0'",
(8,-3)*{\circ}="a'", (5,2)*{}="b_1'", (11,2)*{}="b_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt>
\ar @{-} "1L";"1R" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}},
(0,-7)*{\resizebox{12mm}{!}{\xy (-4,-2)*{}="1L", (12,-2)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R",
(0,0)*{}="0",
(0,-5)*{\circ}="a", (-4,-10)*{}="b_1", (0,-10)*{}="b_2", (4,-10)*{}="b_3",
(8,-7)*{}="0'",
(8,-2)*{\circ}="a'", (5,3)*{}="b_1'", (11,3)*{}="b_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}},
(0,38)*{\resizebox{12mm}{!}{\xy (-4,-2)*{}="1L", (12,-2)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R",
(0,3)*{}="0",
(0,-2)*{\circ}="a", (-4,-7)*{}="b_1", (0,-7)*{}="b_2", (4,-7)*{}="b_3",
(8,-10)*{}="0'",
(8,-5)*{\circ}="a'", (5,0)*{}="b_1'", (11,0)*{}="b_2'",
\ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "a";"b_3" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt> \endxy}},
(27,3)*{\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-5,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-6)*{}="0'",
(8,0)*{\circ}="a'", (5,7)*{}="b_1'", (11,7)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}},
(-25,3)*{\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-6)*{}="0'",
(8,0)*{\circ}="a'", (5,7)*{}="b_1'", (11,7)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}},
(25,29)*{\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-5,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-11)*{}="0'",
(8,-5)*{\circ}="a'", (5,2)*{}="b_1'", (11,2)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \endxy}},
(-27,28)*{\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-5,-5)*{}="2L", (12,-5)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-11)*{}="0'",
(8,-5)*{\circ}="a'", (5,2)*{}="b_1'", (11,2)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{-} "2L";"2R" <0pt> \endxy}},
(-31,-26)*{\resizebox{12mm}{!}{ \xy (-4,6)*{}="1L", (12,6)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,0)*{}="0'",
(8,6)*{\circ}="a'", (5,13)*{}="b_1'", (11,13)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy }},
(29,-22)*{\resizebox{12mm}{!}{\xy (-4,6)*{}="1L", (12,6)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,0)*{}="0'",
(8,6)*{\circ}="a'", (5,13)*{}="b_1'", (11,13)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}},
(54,16)*{\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R", (-4,-3)*{}="3L", (12,-3)*{}="3R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-9)*{}="0'",
(8,-3)*{\circ}="a'", (5,4)*{}="b_1'", (11,4)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{.} "3L";"3R" <0pt> \endxy}},
(-56,16)*{\resizebox{12mm}{!}{\xy (-4,0)*{}="1L", (12,0)*{}="1R", (-4,-5)*{}="2L", (12,-5)*{}="2R", (-4,-3)*{}="3L", (12,-3)*{}="3R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-9)*{}="0'",
(8,-3)*{\circ}="a'", (5,4)*{}="b_1'", (11,4)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \ar @{.} "3L";"3R" <0pt> \endxy}},
(27,54)*{\resizebox{12mm}{!}{\xy (-4,-7)*{}="1L", (12,-7)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(8,-13)*{}="0'",
(8,-7)*{\circ}="a'", (5,0)*{}="b_1'", (11,0)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"b_2" <0pt> \ar @{-} "b_1";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}},
(-24,54)*{\resizebox{12mm}{!}{\xy (-4,-7)*{}="1L", (12,-7)*{}="1R", (-4,-3)*{}="2L", (12,-3)*{}="2R",
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="c_3",
(8,-13)*{}="0'",
(8,-7)*{\circ}="a'", (5,0)*{}="b_1'", (11,0)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{-} "a";"0" <0pt> \ar @{-} "a";"b_1" <0pt> \ar @{-} "a";"c_3" <0pt> \ar @{-} "a";"c_1" <0pt> \ar @{-} "b_1";"c_2" <0pt>
\ar @{-} "a'";"0'" <0pt> \ar @{-} "a'";"b_1'" <0pt> \ar @{-} "a'";"b_2'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "2L";"2R" <0pt> \endxy}},
(-25,-15)*{\bullet}="1",
(25,-15)*{\bullet}="2",
(45,15)*{\bullet}="3",
(25,45)*{\bullet}="4",
(-25,45)*{\bullet}="6", (-45,15)*{\bullet}="7",
\ar @{-} "1";"2" <0pt> \ar @{-} "2";"3" <0pt> \ar @{-} "3";"4" <0pt> \ar @{-} "4";"6" <0pt> \ar @{-} "6";"7" <0pt> \ar @{-} "7";"1" <0pt> \endxy}\\ \\ \mbox{\sc{Fig.}\ 1:\ Biassociahedron ${\mathsf K}_3^2$} \end{array} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \begin{array}{c} \resizebox{60mm}{!}{\xy
(0,16)*{\resizebox{9mm}{!}{\xy
(3,2)*{}="u1", (-3,2)*{}="u2",
(0,-3)*{\circ}="a", (-4,-8)*{}="b_1", (0,-8)*{}="b_2", (4,-8)*{}="b_3",
\ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "a";"u1" <0pt> \ar @{.} "a";"u2" <0pt> \endxy}},
(0,-7)*{\resizebox{7mm}{!}{\xy
(0,-5)*{\circ}="a", (-4,-10)*{}="b_1", (0,-10)*{}="b_2", (4,-10)*{}="b_3",
(0,0)*{\circ}="a'", (-3,4)*{}="b_1'", (3,4)*{}="b_2'",
\ar @{.} "a";"a'" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "a'";"b_1'" <0pt> \ar @{.} "a'";"b_2'" <0pt> \endxy}},
(0,40)*{\resizebox{12mm}{!}{\xy (-12,0)*{}="1L", (12,0)*{}="1R",
(-4,7)*{}="0",
(5,12)*{}="0",
(5,7)*{\circ}="a", (1,2)*{}="b_1", (5,2)*{}="b_2", (9,2)*{}="b_3",
(-5,12)*{}="0'",
(-5,7)*{\circ}="a'", (-9,2)*{}="b_1'", (-5,2)*{}="b_2'", (-1,2)*{}="b_3'",
(0,-12)*{}="01",
(0,-7)*{\circ}="a1", (-3,-2)*{}="b_11", (3,-2)*{}="b_21",
(-7,-12)*{}="02",
(-7,-7)*{\circ}="a2", (-10,-2)*{}="b_12", (-3,-2)*{}="b_22",
(7,-12)*{}="03",
(7,-7)*{\circ}="a3", (10,-2)*{}="b_13", (3,-2)*{}="b_23",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"b_1'" <0pt> \ar @{.} "a'";"b_2'" <0pt> \ar @{.} "a'";"b_3'" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "a1";"01" <0pt> \ar @{.} "a1";"b_11" <0pt> \ar @{.} "a1";"b_21" <0pt>
\ar @{.} "a2";"02" <0pt> \ar @{.} "a2";"b_12" <0pt> \ar @{.} "a2";"b_22" <0pt>
\ar @{.} "a3";"03" <0pt> \ar @{.} "a3";"b_13" <0pt> \ar @{.} "a3";"b_23" <0pt> \endxy}},
(29,3)*{\resizebox{10mm}{!}{ \xy
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(-3,5)*{}="b_1'", (3,5)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "b_1";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\ar @{.} "a";"b_1'" <0pt> \ar @{.} "a";"b_2'" <0pt> \endxy}},
(-27,3)*{\resizebox{10mm}{!}{ \xy
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (-5.5,-10)*{}="b_2",
(-3,5)*{}="b_1'", (3,5)*{}="b_2'",
(6,-10)*{}="c_1", (0.5,-10)*{}="c_2", \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "b_1";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\ar @{.} "a";"b_1'" <0pt> \ar @{.} "a";"b_2'" <0pt> \endxy }},
(27,29)*{\resizebox{13mm}{!}{ \xy (-10,0)*{}="1L", (12,0)*{}="1R",
(4,10)*{}="0",
(4,6)*{\circ}="a", (1,2)*{}="u_1", (7,2)*{}="u_2",
(-4,10)*{}="0'",
(-4,6)*{\circ}="a'", (-1,2)*{}="u_1'", (-7,2)*{}="u_2'",
(-1,-2)*{}="du1", (-7,-2)*{}="du2", (-4,-6)*{\circ}="v",
(-1,-10)*{}="dd1", (-7,-10)*{}="dd2",
(4,-10)*{}="xd",
(4,-6)*{\circ}="x", (1,-2)*{}="x_1", (7,-2)*{}="x_2",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"u_1" <0pt> \ar @{.} "a";"u_2" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"u_1'" <0pt> \ar @{.} "a'";"u_2'" <0pt>
\ar @{.} "v";"du1" <0pt> \ar @{.} "v";"du2" <0pt> \ar @{.} "v";"dd1" <0pt> \ar @{.} "v";"dd2" <0pt>
\ar @{.} "x";"xd" <0pt> \ar @{.} "x";"x_1" <0pt> \ar @{.} "x";"x_2" <0pt>
\ar @{.} "1L";"1R" <0pt> \endxy }},
(-29,28)*{\resizebox{13mm}{!}{ \xy (-10,0)*{}="1L", (12,0)*{}="1R",
(4,10)*{}="0",
(4,6)*{\circ}="a", (1,2)*{}="u_1", (7,2)*{}="u_2",
(-4,10)*{}="0'",
(-4,6)*{\circ}="a'", (-1,2)*{}="u_1'", (-7,2)*{}="u_2'",
(1,-2)*{}="du1", (7,-2)*{}="du2", (4,-6)*{\circ}="v",
(1,-10)*{}="dd1", (7,-10)*{}="dd2",
(-4,-10)*{}="xd",
(-4,-6)*{\circ}="x", (-1,-2)*{}="x_1", (-7,-2)*{}="x_2",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"u_1" <0pt> \ar @{.} "a";"u_2" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"u_1'" <0pt> \ar @{.} "a'";"u_2'" <0pt>
\ar @{.} "v";"du1" <0pt> \ar @{.} "v";"du2" <0pt> \ar @{.} "v";"dd1" <0pt> \ar @{.} "v";"dd2" <0pt>
\ar @{.} "x";"xd" <0pt> \ar @{.} "x";"x_1" <0pt> \ar @{.} "x";"x_2" <0pt>
\ar @{.} "1L";"1R" <0pt> \endxy }},
(-28,-22)*{\resizebox{9mm}{!}{ \xy
(0,4)*{}="0",
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(0,5)*{\circ}="a'", (-3,10)*{}="b_1'", (3,10)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{.} "a";"a'" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\ar @{.} "a'";"b_1'" <0pt> \ar @{.} "a'";"b_2'" <0pt> \endxy }},
(28,-20)*{\resizebox{9mm}{!}{ \xy
(0,4)*{}="0",
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(0,5)*{\circ}="a'", (-3,10)*{}="b_1'", (3,10)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{.} "a";"a'" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "b_1";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\ar @{.} "a'";"b_1'" <0pt> \ar @{.} "a'";"b_2'" <0pt> \endxy }},
(52,16)*{\resizebox{12mm}{!}{\xy (-10,0)*{}="1L", (12,0)*{}="1R",
(4,10)*{}="0",
(4,6)*{\circ}="a", (1,2)*{}="u_1", (7,2)*{}="u_2",
(-4,10)*{}="0'",
(-4,6)*{\circ}="a'", (-1,2)*{}="u_1'", (-7,2)*{}="u_2'",
(-1,-2)*{}="du1", (-7,-2)*{}="du2", (-4,-6)*{\circ}="vu",
(-4,-10)*{\circ}="vd", (-1,-14)*{}="dd1", (-7,-14)*{}="dd2",
(4,-10)*{}="xd",
(4,-6)*{\circ}="x", (1,-2)*{}="x_1", (7,-2)*{}="x_2",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"u_1" <0pt> \ar @{.} "a";"u_2" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"u_1'" <0pt> \ar @{.} "a'";"u_2'" <0pt>
\ar @{.} "vd";"vu" <0pt> \ar @{.} "vu";"du1" <0pt> \ar @{.} "vu";"du2" <0pt> \ar @{.} "vd";"dd1" <0pt> \ar @{.} "vd";"dd2" <0pt>
\ar @{.} "x";"xd" <0pt> \ar @{.} "x";"x_1" <0pt> \ar @{.} "x";"x_2" <0pt>
\ar @{-} "1L";"1R" <0pt> \endxy}},
(-52,16)*{\resizebox{12mm}{!}{ \xy (-10,0)*{}="1L", (12,0)*{}="1R",
(4,10)*{}="0",
(4,6)*{\circ}="a", (1,2)*{}="u_1", (7,2)*{}="u_2",
(-4,10)*{}="0'",
(-4,6)*{\circ}="a'", (-1,2)*{}="u_1'", (-7,2)*{}="u_2'",
(1,-2)*{}="du1", (7,-2)*{}="du2", (4,-6)*{\circ}="vu",
(4,-10)*{\circ}="vd", (1,-14)*{}="dd1", (7,-14)*{}="dd2",
(-4,-10)*{}="xd",
(-4,-6)*{\circ}="x", (-1,-2)*{}="x_1", (-7,-2)*{}="x_2",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"u_1" <0pt> \ar @{.} "a";"u_2" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"u_1'" <0pt> \ar @{.} "a'";"u_2'" <0pt>
\ar @{.} "vd";"vu" <0pt> \ar @{.} "vu";"du1" <0pt> \ar @{.} "vu";"du2" <0pt> \ar @{.} "vd";"dd1" <0pt> \ar @{.} "vd";"dd2" <0pt>
\ar @{.} "x";"xd" <0pt> \ar @{.} "x";"x_1" <0pt> \ar @{.} "x";"x_2" <0pt>
\ar @{-} "1L";"1R" <0pt> \endxy}},
(25,52)*{\resizebox{12mm}{!}{ \xy (-14,0)*{}="1L", (14,0)*{}="1R",
(-6.5,16)*{}="l0",
(-6.5,12)*{\circ}="la", (-9.5,7)*{\circ}="lb_1", (-1,2)*{}="lb_2", (-12.5,2)*{}="lc_1", (-7,2)*{}="lc_2",
(7,16)*{}="r0",
(7,12)*{\circ}="ra", (4,7)*{\circ}="rb_1", (12.5,2)*{}="rb_2", (1,2)*{}="rc_1", (6.5,2)*{}="rc_2",
(0,-12)*{}="01",
(0,-7)*{\circ}="a1", (-3,-2)*{}="b_11", (3,-2)*{}="b_21",
(-7,-12)*{}="02",
(-7,-7)*{\circ}="a2", (-10,-2)*{}="b_12", (-3,-2)*{}="b_22",
(7,-12)*{}="03",
(7,-7)*{\circ}="a3", (10,-2)*{}="b_13", (3,-2)*{}="b_23",
\ar @{.} "la";"l0" <0pt> \ar @{.} "la";"lb_1" <0pt> \ar @{.} "la";"lb_2" <0pt> \ar @{.} "lb_1";"lc_1" <0pt> \ar @{.} "lb_1";"lc_2" <0pt>
\ar @{.} "ra";"r0" <0pt> \ar @{.} "ra";"rb_1" <0pt> \ar @{.} "ra";"rb_2" <0pt> \ar @{.} "rb_1";"rc_1" <0pt> \ar @{.} "rb_1";"rc_2" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "a1";"01" <0pt> \ar @{.} "a1";"b_11" <0pt> \ar @{.} "a1";"b_21" <0pt>
\ar @{.} "a2";"02" <0pt> \ar @{.} "a2";"b_12" <0pt> \ar @{.} "a2";"b_22" <0pt>
\ar @{.} "a3";"03" <0pt> \ar @{.} "a3";"b_13" <0pt> \ar @{.} "a3";"b_23" <0pt> \endxy }},
(-25,53)*{\resizebox{12mm}{!}{ \xy (-14,0)*{}="1L", (14,0)*{}="1R",
(6.5,16)*{}="l0",
(6.5,12)*{\circ}="la", (9.5,7)*{\circ}="lb_1", (1,2)*{}="lb_2", (12.5,2)*{}="lc_1", (7,2)*{}="lc_2",
(-7,16)*{}="r0",
(-7,12)*{\circ}="ra", (-4,7)*{\circ}="rb_1", (-12.5,2)*{}="rb_2", (-1,2)*{}="rc_1", (-6.5,2)*{}="rc_2",
(0,-12)*{}="01",
(0,-7)*{\circ}="a1", (-3,-2)*{}="b_11", (3,-2)*{}="b_21",
(-7,-12)*{}="02",
(-7,-7)*{\circ}="a2", (-10,-2)*{}="b_12", (-3,-2)*{}="b_22",
(7,-12)*{}="03",
(7,-7)*{\circ}="a3", (10,-2)*{}="b_13", (3,-2)*{}="b_23",
\ar @{.} "la";"l0" <0pt> \ar @{.} "la";"lb_1" <0pt> \ar @{.} "la";"lb_2" <0pt> \ar @{.} "lb_1";"lc_1" <0pt> \ar @{.} "lb_1";"lc_2" <0pt>
\ar @{.} "ra";"r0" <0pt> \ar @{.} "ra";"rb_1" <0pt> \ar @{.} "ra";"rb_2" <0pt> \ar @{.} "rb_1";"rc_1" <0pt> \ar @{.} "rb_1";"rc_2" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "a1";"01" <0pt> \ar @{.} "a1";"b_11" <0pt> \ar @{.} "a1";"b_21" <0pt>
\ar @{.} "a2";"02" <0pt> \ar @{.} "a2";"b_12" <0pt> \ar @{.} "a2";"b_22" <0pt>
\ar @{.} "a3";"03" <0pt> \ar @{.} "a3";"b_13" <0pt> \ar @{.} "a3";"b_23" <0pt> \endxy }},
(-25,-15)*{\bullet}="1",
(25,-15)*{\bullet}="2",
(45,15)*{\bullet}="3",
(25,45)*{\bullet}="4",
(-25,45)*{\bullet}="6", (-45,15)*{\bullet}="7",
\ar @{-} "1";"2" <0pt> \ar @{-} "2";"3" <0pt> \ar @{-} "3";"4" <0pt> \ar @{-} "4";"6" <0pt> \ar @{-} "6";"7" <0pt> \ar @{-} "7";"1" <0pt> \endxy}\\ \\ \mbox{{\sc Fig}.\ 2:\ $r_3^2\left({\mathcal F} Chains({\mathsf K}_3^2)\right)$} \end{array} $$
\subsection{From biassociahedra to strongly homotopy bialgebras}
As we saw in the previous subsection, the biassociahedron ${\mathsf K}_m^n$ is a smooth manifold with corners which
comes equipped with a boundary stratification parameterized by
Markl's poset ${\mathcal K}_m^n$. In fact, we constructed ${\mathsf K}_m^n$ as a closed semi-algebraic subset in the product of copies of 2-spheres $S^2$ and the intervals $[0,1]$. Hence ${\mathsf K}_m^n$ comes equipped with a structure of a semialgebraic set (which is finer than just the structure of a smooth manifold with corners). Kontsevich and Soibelman introduced in Appendix 8 of \cite{KS} a suitable theory of singular chains for such semialgebraic spaces $X$ (see \cite{HLTV} for full details); in this theory $Chains(X)$ is a vector space over a field ${\mathbb K}$ generated by (equivalence classes of) semialgebraic maps $f:Y\rightarrow X$ from oriented compact semialgebraic spaces $Y$. As in \cite{KS} we assume that the semialgebraic chain complex $(Chains(X),{\partial})$ is negatively graded so that the boundary operator has degree $+1$.
This canonical stratification of the biassociahedron ${\mathsf K}_m^n$ in terms of zoned trees gives us (i) an obvious $\frac{1}{2}$-prop structure on the collection of dg ${\mathbb S}$-bimodules $\{Chains({\mathsf K}_m^n)\}_{m,n\in {\mathbb N}}$, and (ii) a $\frac{1}{2}$-subprop
${\mathcal F} Chains({\mathsf K}_m^n)\subset Chains({\mathsf K}_m^n)$ spanned by fundamental chains which is called the dg $\frac{1}{2}$-prop of {\em fundamental or cellular chains}\, of the biassociahedron. Unfortunately,
the ${\mathbb S}$-submodule $\{{\mathcal F} Chains({\mathsf K}_m^n)\}_{m,n\in {\mathbb N}}$ is not a prop.
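Note that with these conventions the fundamental chain of a $k$-dimensional cell of ${\mathsf K}_m^n$ sits in degree $-k$; for the hexagon ${\mathsf K}_3^2$, say, the vertices, the edges and the $2$-cell give generators of ${\mathcal F} Chains({\mathsf K}_3^2)$ of degrees $0$, $-1$ and $-2$ respectively.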
Martin Markl constructed by induction a collection $r=\{r_m^n\}$ of linear {\em monomorphisms} of graded vector spaces in \cite{Ma},
$$
r_m^n: {\mathcal F} Chains({\mathsf K}_m^n) \hookrightarrow {\mathcal A} ss {\mathcal B}_\infty.
$$
The image under $r_3^2$ of generators of
${\mathcal F} Chains({\mathsf K}_3^2)$ is given in \mbox{{\sc Fig.} \hspace{-2mm} 2}. As we see from this example, the monomorphism $r$ is not even homogeneous: the upper edge of ${\mathsf K}_3^2$ (which is a degree $-1$ element in ${\mathcal F} Chains({\mathsf K}_3^2)$) gets mapped into a degree $-2$ element\footnote{We also use here fraction notations for elements of ${\mathcal A} ss {\mathcal B}_\infty$ introduced in \cite{Ma1}.} $\begin{array}{c}{\resizebox{11mm}{!}{\xy (-12,0)*{}="1L", (12,0)*{}="1R",
(-4,7)*{}="0",
(5,12)*{}="0",
(5,7)*{\circ}="a", (1,2)*{}="b_1", (5,2)*{}="b_2", (9,2)*{}="b_3",
(-5,12)*{}="0'",
(-5,7)*{\circ}="a'", (-9,2)*{}="b_1'", (-5,2)*{}="b_2'", (-1,2)*{}="b_3'",
(0,-12)*{}="01",
(0,-7)*{\circ}="a1", (-3,-2)*{}="b_11", (3,-2)*{}="b_21",
(-7,-12)*{}="02",
(-7,-7)*{\circ}="a2", (-10,-2)*{}="b_12", (-3,-2)*{}="b_22",
(7,-12)*{}="03",
(7,-7)*{\circ}="a3", (10,-2)*{}="b_13", (3,-2)*{}="b_23",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"b_1'" <0pt> \ar @{.} "a'";"b_2'" <0pt> \ar @{.} "a'";"b_3'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "a1";"01" <0pt> \ar @{.} "a1";"b_11" <0pt> \ar @{.} "a1";"b_21" <0pt>
\ar @{.} "a2";"02" <0pt> \ar @{.} "a2";"b_12" <0pt> \ar @{.} "a2";"b_22" <0pt>
\ar @{.} "a3";"03" <0pt> \ar @{.} "a3";"b_13" <0pt> \ar @{.} "a3";"b_23" <0pt> \endxy}}\end{array}$ in ${\mathcal A} ss {\mathcal B}_\infty$. Thus we can not use the map $r$ to make ${\mathcal F} Chains({\mathsf K}_\bullet^\bullet)$ into a prop (however the collection of maps $\{r_{\bullet}^\bullet\}$ respects $\frac{1}{2}$-prop compositions in the dg ${\mathbb S}$-bimodules ${\mathcal F} Chains({\mathsf K}_\bullet^\bullet)\}$ and ${\mathcal A} ss {\mathcal B}_\infty$).
It is not hard to see how the complex ${\mathcal F} Chains({\mathsf K}_3^2)$ should be modified in order to make the map $r_3^2: {\mathcal F} Chains({\mathsf K}_3^2) \rightarrow {\mathcal A} ss {\mathcal B}_\infty $ into a degree zero morphism of {\em complexes}. One has to subdivide the upper edge of ${\mathsf K}_3^2$ into the union of two edges by adding a new vertex in the middle. Equivalently, one has to replace the degree $-2$ element $\begin{array}{c}{\resizebox{11mm}{!}{\xy (-12,0)*{}="1L", (12,0)*{}="1R",
(-4,7)*{}="0",
(5,12)*{}="0",
(5,7)*{\circ}="a", (1,2)*{}="b_1", (5,2)*{}="b_2", (9,2)*{}="b_3",
(-5,12)*{}="0'",
(-5,7)*{\circ}="a'", (-9,2)*{}="b_1'", (-5,2)*{}="b_2'", (-1,2)*{}="b_3'",
(0,-12)*{}="01",
(0,-7)*{\circ}="a1", (-3,-2)*{}="b_11", (3,-2)*{}="b_21",
(-7,-12)*{}="02",
(-7,-7)*{\circ}="a2", (-10,-2)*{}="b_12", (-3,-2)*{}="b_22",
(7,-12)*{}="03",
(7,-7)*{\circ}="a3", (10,-2)*{}="b_13", (3,-2)*{}="b_23",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"b_1'" <0pt> \ar @{.} "a'";"b_2'" <0pt> \ar @{.} "a'";"b_3'" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "a1";"01" <0pt> \ar @{.} "a1";"b_11" <0pt> \ar @{.} "a1";"b_21" <0pt>
\ar @{.} "a2";"02" <0pt> \ar @{.} "a2";"b_12" <0pt> \ar @{.} "a2";"b_22" <0pt>
\ar @{.} "a3";"03" <0pt> \ar @{.} "a3";"b_13" <0pt> \ar @{.} "a3";"b_23" <0pt> \endxy}}\end{array}$ with a {\em sum}\, of degree $-1$ elements,
$ {\resizebox{11mm}{!}{\xy (-12,0)*{}="1L", (12,0)*{}="1R",
(-4,7)*{\mbox{$\Delta$}}="0",
(0,12)*{}="0",
(0,7)*{\circ}="a", (-4,2)*{}="b_1", (0,2)*{}="b_2", (4,2)*{}="b_3",
(0,-12)*{}="01",
(0,-7)*{\circ}="a1", (-3,-2)*{}="b_11", (3,-2)*{}="b_21",
(-7,-12)*{}="02",
(-7,-7)*{\circ}="a2", (-10,-2)*{}="b_12", (-3,-2)*{}="b_22",
(7,-12)*{}="03",
(7,-7)*{\circ}="a3", (10,-2)*{}="b_13", (3,-2)*{}="b_23",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{-} "1L";"1R" <0pt> \ar @{.} "a1";"01" <0pt> \ar @{.} "a1";"b_11" <0pt> \ar @{.} "a1";"b_21" <0pt>
\ar @{.} "a2";"02" <0pt> \ar @{.} "a2";"b_12" <0pt> \ar @{.} "a2";"b_22" <0pt>
\ar @{.} "a3";"03" <0pt> \ar @{.} "a3";"b_13" <0pt> \ar @{.} "a3";"b_23" <0pt> \endxy}} $, where $\Delta $ stands for a $A_\infty$ diagonal \cite{SU2,MS}, $$ \Delta\begin{array}{c}\resizebox{5mm}{!} {\xy
(0,12)*{}="0",
(0,7)*{\circ}="a", (-4,2)*{}="b_1", (0,2)*{}="b_2", (4,2)*{}="b_3",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\endxy}\end{array} = \begin{array}{c}\resizebox{7mm}{!}{\xy
(0,5)*{}="0",
(0,0)*{\circ}="a", (-3,-4)*{\circ}="b_1", (5.5,-8)*{}="b_2",
(-6,-8)*{}="c_1", (-0.5,-8)*{}="c_2", \ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "b_1";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\endxy}\end{array} \otimes \begin{array}{c}\resizebox{5mm}{!} {\xy
(0,12)*{}="0",
(0,7)*{\circ}="a", (-4,2)*{}="b_1", (0,2)*{}="b_2", (4,2)*{}="b_3",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\endxy}\end{array} \ \ \ \ + \ \ \ \ \begin{array}{c}\resizebox{5mm}{!} {\xy
(0,12)*{}="0",
(0,7)*{\circ}="a", (-4,2)*{}="b_1", (0,2)*{}="b_2", (4,2)*{}="b_3",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\endxy}\end{array} \otimes
\begin{array}{c}\resizebox{7mm}{!}{\xy
(0,5)*{}="0",
(0,0)*{\circ}="a", (3,-4)*{\circ}="b_1", (-5.5,-8)*{}="b_2",
(6,-8)*{}="c_1", (0.5,-8)*{}="c_2", \ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "b_1";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\endxy}\end{array} $$
After this subdivision one reads from ${\mathsf K}_3^2$ the correct formula for the value of the differential in ${\mathcal A} ss {\mathcal B}_\infty$,
$$ \delta\ \begin{array}{c} \resizebox{4.6mm}{!} {\xy
(3,2)*{}="u1", (-3,2)*{}="u2",
(0,-3)*{\circ}="a", (-4,-8)*{}="b_1", (0,-8)*{}="b_2", (4,-8)*{}="b_3",
\ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "a";"u1" <0pt> \ar @{.} "a";"u2" <0pt> \endxy}\end{array} \ = \
\begin{array}{c}\resizebox{5.6mm}{!} {\xy
(0,-5)*{\circ}="a", (-4,-10)*{}="b_1", (0,-10)*{}="b_2", (4,-10)*{}="b_3",
(0,0)*{\circ}="a'", (-3,4)*{}="b_1'", (3,4)*{}="b_2'",
\ar @{.} "a";"a'" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "a'";"b_1'" <0pt> \ar @{.} "a'";"b_2'" <0pt> \endxy}\end{array}
\ - \
\begin{array}{c}\resizebox{8mm}{!}{ \xy
(0,0)*{\circ}="a", (-3,-5)*{\circ}="b_1", (5.5,-10)*{}="b_2",
(-3,5)*{}="b_1'", (3,5)*{}="b_2'",
(-6,-10)*{}="c_1", (-0.5,-10)*{}="c_2", \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "b_1";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\ar @{.} "a";"b_1'" <0pt> \ar @{.} "a";"b_2'" <0pt> \endxy}\end{array}
\ +\
\begin{array}{c}\resizebox{8mm}{!}{ \xy
(0,0)*{\circ}="a", (3,-5)*{\circ}="b_1", (-5.5,-10)*{}="b_2",
(-3,5)*{}="b_1'", (3,5)*{}="b_2'",
(6,-10)*{}="c_1", (0.5,-10)*{}="c_2", \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "b_1";"c_1" <0pt> \ar @{.} "b_1";"c_2" <0pt>
\ar @{.} "a";"b_1'" <0pt> \ar @{.} "a";"b_2'" <0pt> \endxy}\end{array}
\ - \
\begin{array}{c}\resizebox{14mm}{!}{ \xy (-10,0)*{}="1L", (12,0)*{}="1R",
(4,10)*{}="0",
(4,6)*{\circ}="a", (1,2)*{}="u_1", (7,2)*{}="u_2",
(-4,10)*{}="0'",
(-4,6)*{\circ}="a'", (-1,2)*{}="u_1'", (-7,2)*{}="u_2'",
(-1,-2)*{}="du1", (-7,-2)*{}="du2", (-4,-6)*{\circ}="v",
(-1,-10)*{}="dd1", (-7,-10)*{}="dd2",
(4,-10)*{}="xd",
(4,-6)*{\circ}="x", (1,-2)*{}="x_1", (7,-2)*{}="x_2",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"u_1" <0pt> \ar @{.} "a";"u_2" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"u_1'" <0pt> \ar @{.} "a'";"u_2'" <0pt>
\ar @{.} "v";"du1" <0pt> \ar @{.} "v";"du2" <0pt> \ar @{.} "v";"dd1" <0pt> \ar @{.} "v";"dd2" <0pt>
\ar @{.} "x";"xd" <0pt> \ar @{.} "x";"x_1" <0pt> \ar @{.} "x";"x_2" <0pt>
\ar @{-} "1L";"1R" <0pt> \endxy}\end{array}
\ +\
\begin{array}{c} \resizebox{14mm}{!}{ \xy (-10,0)*{}="1L", (12,0)*{}="1R",
(4,10)*{}="0",
(4,6)*{\circ}="a", (1,2)*{}="u_1", (7,2)*{}="u_2",
(-4,10)*{}="0'",
(-4,6)*{\circ}="a'", (-1,2)*{}="u_1'", (-7,2)*{}="u_2'",
(1,-2)*{}="du1", (7,-2)*{}="du2", (4,-6)*{\circ}="v",
(1,-10)*{}="dd1", (7,-10)*{}="dd2",
(-4,-10)*{}="xd",
(-4,-6)*{\circ}="x", (-1,-2)*{}="x_1", (-7,-2)*{}="x_2",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"u_1" <0pt> \ar @{.} "a";"u_2" <0pt>
\ar @{.} "a'";"0'" <0pt> \ar @{.} "a'";"u_1'" <0pt> \ar @{.} "a'";"u_2'" <0pt>
\ar @{.} "v";"du1" <0pt> \ar @{.} "v";"du2" <0pt> \ar @{.} "v";"dd1" <0pt> \ar @{.} "v";"dd2" <0pt>
\ar @{.} "x";"xd" <0pt> \ar @{.} "x";"x_1" <0pt> \ar @{.} "x";"x_2" <0pt>
\ar @{-} "1L";"1R" <0pt> \endxy}\end{array}
\ - \
\begin{array}{c}\resizebox{14mm}{!}{\xy (-12,0)*{}="1L", (12,0)*{}="1R",
(-4,7)*{\mbox{$\Delta$}}="0",
(0,12)*{}="0",
(0,7)*{\circ}="a", (-4,2)*{}="b_1", (0,2)*{}="b_2", (4,2)*{}="b_3",
(0,-12)*{}="01",
(0,-7)*{\circ}="a1", (-3,-2)*{}="b_11", (3,-2)*{}="b_21",
(-7,-12)*{}="02",
(-7,-7)*{\circ}="a2", (-10,-2)*{}="b_12", (-3,-2)*{}="b_22",
(7,-12)*{}="03",
(7,-7)*{\circ}="a3", (10,-2)*{}="b_13", (3,-2)*{}="b_23",
\ar @{.} "a";"0" <0pt> \ar @{.} "a";"b_1" <0pt> \ar @{.} "a";"b_2" <0pt> \ar @{.} "a";"b_3" <0pt>
\ar @{.} "1L";"1R" <0pt> \ar @{.} "a1";"01" <0pt> \ar @{.} "a1";"b_11" <0pt> \ar @{.} "a1";"b_21" <0pt>
\ar @{.} "a2";"02" <0pt> \ar @{.} "a2";"b_12" <0pt> \ar @{.} "a2";"b_22" <0pt>
\ar @{.} "a3";"03" <0pt> \ar @{.} "a3";"b_13" <0pt> \ar @{.} "a3";"b_23" <0pt> \endxy}\end{array} $$ on the $(2,3)$-corolla.
Note that the definition of the ${\mathcal A} ss_\infty$ diagonal $\Delta$ involves choices, so that the best one can hope for is to find a (non-uniquely) defined cellular refinement, $({\mathcal C} ell({\mathsf K}_\bullet^\bullet), {\partial}_{cell})$, of the fundamental chain complex of the biassociahedron together with a monomorphism of complexes $$ r: {\mathcal C} ell({\mathsf K}_\bullet^\bullet)\longrightarrow {\mathcal A} ss {\mathcal B}_\infty $$ such that the free properad generated by ``big'' cells ${\mathsf K}^m_n$ and equipped with the differential ${\partial}_{cell}$ can be identified via $r$ with some minimal resolution ${\mathcal A} ss {\mathcal B}_\infty$ of ${\mathcal A} ss{\mathcal B}$. The existence of such an intermediate complex $$ {\mathcal F}{\mathcal C} hains({\mathsf K}_\bullet^\bullet) \subset {\mathcal C} ell({\mathsf K}_\bullet^\bullet) \subset {\mathcal C} hains({\mathsf K}_\bullet^\bullet) $$ was claimed by Samson Saneblidze and Ron Umble in \cite{SU}.
\end{document} | arXiv |
\begin{document}
\begin{frontmatter}
\title{Koml\'os--Major--Tusn\'ady approximation under dependence} \runtitle{Koml\'os--Major--Tusn\'ady approximation}
\begin{aug} \author[A]{\fnms{Istv\'an} \snm{Berkes}\corref{}\thanksref{t1}\ead[label=e1]{[email protected]}}, \author[B]{\fnms{Weidong} \snm{Liu}\thanksref{t2}\ead[label=e2]{[email protected]}} \and \author[C]{\fnms{Wei Biao} \snm{Wu}\thanksref{t3}\ead[label=e3]{[email protected]}} \runauthor{I. Berkes, W. Liu and W. B. Wu} \affiliation{Graz University of Technology, Shanghai Jiao Tong University and University~of~Chicago} \thankstext{t1}{Supported by FWF Grant P 24302-N18 and OTKA Grant K~108615.} \thankstext{t2}{Supported by NSFC Grant 11201298.} \thankstext{t3}{Supported in part from DMS-09-06073 and DMS-11-06970.}
\address[A]{I. Berkes\\ Institute of Statistics\\ Graz University of Technology\\ Kopernikusgasse 24\\ 8010 Graz\\ Austria\\ \printead{e1}} \address[B]{W. Liu\\ Department of Mathematics\\ Shanghai Jiao Tong University\\ 800 Dongchuan Road Minhang\\ Shanghai\\ China\\ \printead{e2}} \address[C]{W. B. Wu\\ Department of Statistics\\ University of Chicago\\ 5734 S. University Avenue\\ Chicago, Illinois 60637\\ USA\\ \printead{e3}} \end{aug}
\received{\smonth{2} \syear{2012}} \revised{\smonth{2} \syear{2013}}
\begin{abstract} The celebrated results of Koml\'os, Major and Tusn\'ady [\textit{Z. Wahrsch. Verw. Gebiete} \textbf{32} (1975) 111--131; \textit{Z. Wahrsch. Verw. Gebiete} \textbf{34} (1976) 33--58] give optimal Wiener approximation for the partial sums of i.i.d. random variables and provide a powerful tool in probability and statistics. In this paper we extend the KMT approximation to a large class of dependent stationary processes, solving a long-standing open problem in probability theory. Under the framework of stationary causal processes and functional dependence measures of Wu [\textit{Proc. Natl. Acad. Sci. USA} \textbf{102} (2005) 14150--14154],
we show that, under natural moment conditions, the partial sum processes can be approximated by a Wiener process at an optimal rate. Our dependence conditions are mild and easily verifiable. The results are applied to ergodic sums, as well as to nonlinear time series and Volterra processes, an important class of nonlinear processes. \end{abstract}
\begin{keyword}[class=AMS] \kwd{60F17} \kwd{60G10} \kwd{60G17} \end{keyword}
\begin{keyword} \kwd{Stationary processes} \kwd{strong invariance principle} \kwd{KMT approximation} \kwd{weak dependence} \kwd{nonlinear time series} \kwd{ergodic sums} \end{keyword}
\end{frontmatter}
\section{Introduction}\label{seintro}
Let $X_1, X_2, \ldots$ be independent, identically distributed random variables with $\mathsf{E} X_1=0$, $\mathsf{E} X_1^2=1$. In their seminal papers, Koml\'os, Major and Tusn\'ady (\citeyear{KomMajTus75,KomMajTus76}) proved that under $\mathsf{E}|X_1|^p<\infty$, $p>2$, there exists, after suitably enlarging the probability space, a Wiener process $\{\mathbb{B}(t), t\ge0\}$ such that, setting $S_n = \sum_{k=1}^n X_k$, we have
\begin{equation} \label{kmt1} S_n=\mathbb{B}(n)+o\bigl(n^{1/p}\bigr) \qquad\mbox{a.s.} \end{equation}
Assuming $\mathsf{E} e^{t|X_1|}<\infty$ for some $t>0$, they obtained the approximation
\begin{equation} \label{kmt2} S_n=\mathbb{B}(n)+O(\log n) \qquad\mbox{a.s.} \end{equation}
The remainder terms in (\ref{kmt1}) and (\ref{kmt2}) are optimal. These results conclude a long development in probability theory starting with the classical paper of Erd\H{o}s and Kac (\citeyear{ErdKac46}) introducing the method of the \textit{invariance principle}. The ideas of Erd\H{o}s and Kac were developed further by \citet{Doo49}, \citet{Don52}, \citet{Pro56} and others and led to the theory of weak convergence of probability measures on metric spaces; see, for example, \citet{Bil68}. In another direction, \citet{Str64} used the Skorohod representation theorem to get an almost sure approximation of the partial sums of i.i.d. random variables by a Wiener process. Cs\"org\H{o} and R\'ev\'esz (\citeyear{CsoRev74}) showed that using the quantile transform instead of Skorohod embedding yields better approximation rates under higher moment conditions, and, developing this idea further, Koml{\'o}s, Major and Tusn{\'a}dy (\citeyear{KomMajTus75,KomMajTus76}) reached the final result in the i.i.d. case.
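The necessity of the moment condition $\mathsf{E}|X_1|^p<\infty$ for the rate in (\ref{kmt1}) can be seen from the following standard argument, sketched here only for orientation. If $S_n=\mathbb{B}(n)+o(n^{1/p})$ a.s., then $X_n=S_n-S_{n-1}=[\mathbb{B}(n)-\mathbb{B}(n-1)]+o(n^{1/p})$ a.s.; since the Gaussian increments satisfy $\mathbb{B}(n)-\mathbb{B}(n-1)=O((\log n)^{1/2})$ a.s., the independent events $\{|X_n|\ge n^{1/p}\}$ can occur only finitely often, and the second Borel--Cantelli lemma forces
\[
\sum_{n\ge1}\mathsf{P}\bigl(|X_1|\ge n^{1/p}\bigr)<\infty, \qquad\mbox{that is, } \mathsf{E}|X_1|^p<\infty.
\]
Thus the moment assumption in the theorem of Koml\'os, Major and Tusn\'ady is also necessary for the rate $o(n^{1/p})$.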
Their results were extended to the independent, nonidentically distributed case and for random variables taking values in ${\mathbb R}^d$, $d\ge2$, by Sakhanenko, Einmahl and Zaitsev; see G\"otze and Zaitsev (\citeyear{GotZai08}) for history and references.
Due to the powerful consequences of KMT approximation [see, e.g., Cs\"org\H{o} and Hall (\citeyear{CsoHal84}) or the books of Cs\"org\H{o} and R\'ev\'esz (\citeyear{CsoRev81}) and \citet{ShoWel86} for the scope of its applications], extending these results to dependent random variables would be of great importance, but until recently, little progress has been made in this direction. The dyadic construction of Koml\'os, Major and Tusn\'ady is highly technical and utilizes conditional large deviation techniques, which makes it very difficult to extend to dependent processes. Recently a new proof of the KMT result for the simple random walk via Stein's method was given by \citet{Cha12}. The main motivation of his paper was, as stated by the author, to get ``a more conceptual understanding of the problem that may allow one to go beyond sums of independent random variables.'' Using martingale approximation and Skorohod embedding, \citet{ShaLu87} and \citet{Wu07} proved the approximation
\begin{equation} \label{wu2007} S_n=\sigma\mathbb{B}(n)+o\bigl(n^{1/p}(\log n)^\gamma\bigr) \qquad\mbox{a.s.} \end{equation}
with some $\sigma\ge0$, $\gamma>0$ for some classes of stationary sequences $(X_k)$ satisfying $\mathsf{E} X_1=0$, $\mathsf{E}|X_1|^p<\infty$ for some $2<p\le4$. \citet{LiuLin09} removed the logarithmic term from (\ref{wu2007}), reaching the KMT bound $o(n^{1/p})$. Recently \citet{MerRio12} and Dedecker, Doukhan and Merlev{\`e}de (\citeyear{DedDouMer12}) extended these results to a much larger class of weakly dependent processes. Note, however, that all existing results in the dependent case concern the case $2\le p\le 4$ and the applied tools (e.g., Skorohod representation) limit the accuracy of the approximation to $o(n^{1/4})$, regardless of the moment assumptions on $X_1$.
The purpose of the present paper is to develop a new approximation technique enabling us to prove the KMT approximation (\ref{kmt1}) for all $p>2$ and for a large class of dependent sequences. Specifically, we will deal with stationary sequences allowing the representation
\begin{equation} \label{eqS41055} X_k=G(\ldots, \varepsilon_{k-1}, \varepsilon_k, \varepsilon_{k+1}, \ldots),\qquad k \in{\mathbb Z}, \end{equation}
where $\varepsilon_i$, $i \in{\open\mathbb{Z}}$, are i.i.d. random variables, and $G\dvtx{\mathbb R}^{\mathbb Z}\to{\mathbb R}$ is a measurable function. Sequences of this type have been studied intensively in weak dependence theory [see, e.g., \citet{Bil68} or \citet{IbrLin71}], and many important time series models also have a representation (\ref{eqS41055}). Processes of the type (\ref{eqS41055}) also play an important role in ergodic theory, as sequences generated by Bernoulli shift transformations. Bernoulli shifts form a very important class of dynamical systems; see \citet{Orn74} and \citet{Shi73} for the deep Kolmogorov--Sinai--Ornstein isomorphism theory. There is a substantial amount of research showing that various dynamical systems are isomorphic to Bernoulli shifts. As a step further, Weiss (\citeyear{Wei75}) asked,
\begin{quote} ``having shown that some physical system is Bernoullian, what does that allow one to say about the system itself? To answer such questions one must dig deeper and gain a better understanding of a Bernoulli system.'' \end{quote}
\noindent Naturally, without additional assumptions one cannot hope to prove KMT-type results (or even the CLT) for Bernoulli systems; the representation (\ref{eqS41055}) allows stationary processes that can exhibit markedly non-i.i.d. behavior. For limit theorems under dynamic assumptions, see \citet{HofKel82}, \citet{DenPhi84}, \citet{Den89}, Voln\'y (\citeyear{Vol99}), \citet{MerRio12}. The classical approach to dealing with systems (\ref{eqS41055}) is to assume that $G$ is approximable by finite-dimensional functions in a certain technical sense; see \citet{Bil68} or \citet{IbrLin71}. However, this approach leads to a substantial loss of accuracy and does not yield optimal results. In this paper we introduce a new, triadic decomposition scheme enabling one to deduce directly, under the dependence measure (\ref{eqpdm}) below, the asymptotic properties of $X_n$ in (\ref{eqS41055}) from those of the~$\varepsilon_n$. In particular, this allows us to carry over the KMT approximation from the partial sums of the $\varepsilon_n$ to those of $X_n$.
To state our weak dependence assumptions on the process in (\ref{eqS41055}), assume $X_i \in{\mathcal L}^p$, $p
> 2$, namely $\| X_i \|_p := [\mathsf{E}(|X_i|^p) ]^{1/p} < \infty$. For $i\in{\open\mathbb{Z}}$ define the shift process ${\mathcal F}_i = (\varepsilon_{l+i}, l \in{\open\mathbb{Z}})$. The central element of ${\mathcal F}_i$ (belonging to $l=0$) is $\varepsilon_i$, and thus by (\ref{eqS41055}) we have $X_i=G({\mathcal F}_i)$. Let $(\varepsilon'_j)_{j \in{\open\mathbb{Z}}}$ be an i.i.d. copy of $(\varepsilon_j)_{j \in{\open\mathbb{Z}}}$, and for $i, j \in{\open\mathbb{Z}}$ let ${\mathcal F}_{i, \{j\}}$ denote the process obtained from ${\mathcal F}_i$ by replacing the coordinate $\varepsilon_j$ by $\varepsilon_j'$. Put
\begin{equation}
\label{eqpdm} \delta_{i, p} = \|X_{i} - X_{i, \{0\}}
\|_p,\qquad \mbox{where } X_{i, \{0\}} = G({\mathcal F}_{i, \{0\}}). \end{equation}
The above quantity can be interpreted as the dependence of $X_i$ on $\varepsilon_0$ and $X_{i, \{0\}}$ is a coupled version of $X_{i}$ with $\varepsilon_0$ in the latter replaced by $\varepsilon'_0$. If $G({\mathcal F}_i)$ does not functionally depend on $\varepsilon_0$, then $\delta_{i, p} = 0$. Throughout the paper, for a random variable $W = H({\mathcal F}_i)$, we use the notation $W_{ \{j\} } = H({\mathcal F}_{i, \{j\} })$ for the $j$-coupled version of $W$.
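For orientation we record a simple illustrative special case (used only as an example and not needed in the sequel): for the two-sided linear process $X_i=\sum_{j\in{\open\mathbb{Z}}}a_j\varepsilon_{i-j}$ with $\varepsilon_0\in{\mathcal L}^p$ and $\sum_{j\in{\open\mathbb{Z}}}|a_j|<\infty$, we have $X_i-X_{i,\{0\}}=a_i(\varepsilon_0-\varepsilon'_0)$, so that
\[
\delta_{i,p}=|a_i| \bigl\|\varepsilon_0-\varepsilon'_0\bigr\|_p\le 2|a_i| \|\varepsilon_0\|_p .
\]
In this case the functional dependence measure simply reproduces the decay of the coefficients $a_i$, and the cumulative quantity $\sum_{|j|\ge i}\delta_{j,p}$ appearing in (\ref{eqcpdm}) below equals $\|\varepsilon_0-\varepsilon'_0\|_p\sum_{|j|\ge i}|a_j|$, so that the short-range dependence condition (\ref{eqsrdpdm}) below reduces to the absolute summability of the coefficients imposed here.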
The functional dependence measure (\ref{eqpdm}) is easy to work with, and it is directly related to the underlying data-generating mechanism. In our main result Theorem~\ref{thopsip}, we express our dependence condition in terms of
\begin{equation}
\label{eqcpdm} \Theta_{i,p} = \sum_{|j|\ge i} \delta_{j, p},\qquad i\ge0, \end{equation}
which can be interpreted as the cumulative dependence of $(X_j)_{|j|
\ge i}$ on $\varepsilon_0$, or equivalently, the cumulative dependence of $X_0$ on $\varepsilon_j$, $|j|\ge i$. Throughout the paper we assume that the short-range dependence condition
\begin{equation} \label{eqsrdpdm} \Theta_{0,p} < \infty \end{equation}
holds. If (\ref{eqsrdpdm}) fails, then the process $(X_i)$ can be long-range dependent, and the partial sum processes behave no longer like Brownian motions. Our main result is introduced in Section~\ref{secmain}, where we also include some discussion on the conditions. The proof is given in Section~\ref{secproof}, with the proof of some useful lemmas postponed until Section~\ref{secuselem}.
\section{Main results} \label{secmain} We introduce some notation. For $u \in{\open\mathbb{R}}$, let $\lceil u \rceil=\break \min\{ i \in{\open\mathbb{Z}}\dvtx i \ge u \}$ and $\lfloor u \rfloor= \max\{ i \in{\open\mathbb{Z}}\dvtx i \le u \}$. Write the ${\mathcal L}^2$
norm $\| \cdot\| = \| \cdot\|_2$. Denote by ``$\Rightarrow$'' the weak convergence. Before stating our main result, we first introduce a central limit theorem for $S_n$. Assume that $X_i$ has mean zero, $\mathsf{E}(X_i^2) < \infty$, with covariance function $\gamma_i = \mathsf{E} (X_0 X_i)$, $i \in{\open\mathbb{Z}}$. Further assume that
\begin{equation}
\label{eqA181016} \sum_{i=-\infty}^\infty \bigl\|
\mathsf{E}(X_i | {\mathcal G}_0) - \mathsf{E}(X_i|{\mathcal G}_{-1})\bigr\| < \infty, \end{equation}
where ${\mathcal G}_i = (\ldots, \varepsilon_{i-1}, \varepsilon_i)$. Then we have
\begin{equation} \label{eqA181022} { {S_n} \over\sqrt n} \Rightarrow N\bigl(0, \sigma^2\bigr) \qquad\mbox{where } \sigma^2 = \sum _{i \in{\open\mathbb{Z}}} \gamma_i. \end{equation}
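As a simple illustration (with the assumptions imposed only for this example), consider again the linear process $X_i=\sum_{j\in{\open\mathbb{Z}}}a_j\varepsilon_{i-j}$ with $\mathsf{E}\varepsilon_0=0$, $\mathsf{E}\varepsilon_0^2<\infty$ and $\sum_{j\in{\open\mathbb{Z}}}|a_j|<\infty$. Here $\mathsf{E}(X_i|{\mathcal G}_0)-\mathsf{E}(X_i|{\mathcal G}_{-1})=a_i\varepsilon_0$, so (\ref{eqA181016}) holds, and (\ref{eqA181022}) applies with
\[
\sigma^2=\sum_{i\in{\open\mathbb{Z}}}\gamma_i =\mathsf{E}\bigl(\varepsilon_0^2\bigr)\biggl(\sum_{j\in{\open\mathbb{Z}}}a_j\biggr)^2 .
\]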
Results of the above type have been known for several decades; see
\citet{Han79}, \citet{Woo92}, Voln\'y (\citeyear{Vol93}) and Dedecker and Merlev\`ede (\citeyear{DedMer03}) among others. \citet{Wu05} pointed out the inequality $\|\mathsf{E}(X_i | {\mathcal G}_0) - \mathsf{E}(X_i |{\mathcal G}_{-1})\| \le \delta_{i,2}$. Hence~(\ref{eqA181016}) follows from $\Theta_{0, 2} < \infty$. With stronger moment and dependence conditions, the central limit theorem (\ref{eqA181022}) can be improved to strong invariance principles.
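For the reader's convenience we recall the short argument behind the inequality $\|\mathsf{E}(X_i | {\mathcal G}_0) - \mathsf{E}(X_i |{\mathcal G}_{-1})\| \le \delta_{i,2}$ quoted above: since $\varepsilon'_0$ is independent of ${\mathcal G}_0$ and has the same distribution as $\varepsilon_0$, we have $\mathsf{E}(X_{i,\{0\}} | {\mathcal G}_0) = \mathsf{E}(X_i | {\mathcal G}_{-1})$ a.s., and hence, by Jensen's inequality,
\[
\bigl\|\mathsf{E}(X_i | {\mathcal G}_0) - \mathsf{E}(X_i |{\mathcal G}_{-1})\bigr\| = \bigl\|\mathsf{E}(X_i - X_{i,\{0\}} | {\mathcal G}_0)\bigr\| \le \|X_i - X_{i,\{0\}}\| = \delta_{i,2}.
\]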
There is a huge literature on central limit theorems and invariance principles for stationary processes; see, for example, the monographs of \citet{IbrLin71}, \citet{EbTa86}, \citet{Bra07}, \citet{Dedetal07} and \citet{Bil68}, among others. To establish strong invariance principles, here we shall use the framework of the stationary process (\ref{eqS41055}) and its associated functional dependence measures (\ref{eqpdm}). Many important processes in probability and statistics assume this form; see the examples at the end of this section, where estimates for the functional dependence measure $\delta_{i, p}$ are also given. The following theorem, which is the main result of our paper, provides optimal KMT approximation for processes (\ref{eqS41055}) under suitable assumptions on the functional dependence measure.
\begin{theorem}\label{thopsip} Assume that $X_i \in{\mathcal L}^p$ with mean $0$, $p > 2$, and there exists $\alpha> p$ such that
\begin{equation} \label{eqsrdsip} \Xi_{\alpha, p}:= \sum_{j=-\infty}^\infty
|j|^{1/2 - 1/\alpha} \delta_{j, p}^{p/\alpha} < \infty. \end{equation}
Further assume that there exists a positive integer sequence $(m_k)_{k=1}^\infty$ such that
\begin{eqnarray} \label{eqmk1} M_{\alpha, p}:= \sum_{k=1}^\infty3^{k - k\alpha/ p} m_k^{\alpha/2 -1} &<& \infty, \\ \label{eqmap} \sum_{k=1}^\infty { {3^{k p/2} \Theta_{m_k, p}^p} \over{3^k}} &<& \infty \end{eqnarray}
and
\begin{equation} \label{eqA18705p} \Theta_{m_k, p}+ \min_{l \ge0} \bigl( \Theta_{l, p} + l 3^{k (2/p-1)}\bigr) = o \biggl( { {3^{k(1/p-1/2)}}\over{(\log k)^{1/2}}} \biggr). \end{equation}
Then there exists a probability space $(\Omega_c, {\mathcal A}_c, \mathsf{P}_c)$ on which we can define random variables $X^{c}_i$ with the partial sum process $S^{c}_{n}=\sum_{i=1}^{n} X^{c}_i$, and a standard Brownian motion $\mathbb{B}_c(\cdot)$, such that $(X^{c}_i)_{i \in{\mathbb Z}} \stackrel{\cal D}{=}(X_i)_{i \in{\mathbb Z}}$ and
\begin{equation} \label{eqsipA181103} S^{c}_{n} - \sigma \mathbb{B}_c(n) = o_{ a.s.}\bigl(n^{1/p}\bigr) \qquad\mbox{in } (\Omega_c, {\mathcal A}_c, \mathsf{P}_c). \end{equation}
\end{theorem}
Gaussian approximation results of type (\ref{eqsipA181103}) have many applications in statistics. For example, \citet{WuZha07} dealt with simultaneous inference of trends in time series. \citet{EubSpe93} considered a similar problem for independent observations. As pointed out by \citet{WuChiHoo98}, basic difficulties in the theory of simultaneous inference under dependence are due to the lack of a suitable Gaussian approximation. Using a recent ``split'' form of approximation, Berkes, H\"ormann and Schauer (\citeyear{BerHorSch11}) obtained asymptotic estimates for increments of stationary processes with applications to change point tests. Theorem \ref{thopsip} improves these results and provides optimal rates. Many further applications of the KMT theory for i.i.d. sequences also extend easily to dependent samples via Theorem \ref{thopsip}.\vadjust{\goodbreak}
A crucial issue in applying Theorem \ref{thopsip} is to find the sequence $m_k$ and to verify conditions (\ref{eqsrdsip}), (\ref{eqmk1}), (\ref{eqmap}) and (\ref{eqA18705p}). If $\Theta_{m, p}$ decays to zero at the rate $O(m^{-\tau} (\log m)^{-A})$, where $\tau> 0$, then we have the following corollary. An explicit form of $m_k$ can also be given. Let
\begin{equation} \label{eqA19901p} \tau_p = { {p^2-4+(p-2)\sqrt{p^2 + 20p + 4}} \over{8 p}}. \end{equation}
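For orientation, $\tau_4=1$, which matches the exponent in case (ii) of the corollary below, $\tau_6=(32+4\sqrt{160})/48\approx1.72$, and $\tau_p$ grows like $p/4$ for large $p$.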
\begin{corollary} \label{corsip} Assume that any one of the following holds: \begin{longlist}[(iii)] \item[(i)] $p > 4$ and $\Theta_{m, p} = O(m^{-\tau_p} (\log m)^{-A})$, where $A > {2\over3} (1/p +1 + \tau_p)$;
\item[(ii)] $p = 4$ and $\Theta_{m, p} = O(m^{-1} (\log m)^{-A})$ with $A > 3/2$;
\item[(iii)] $2 < p < 4$ and $\Theta_{m, p} = O(m^{-1} (\log m)^{-1/p})$. \end{longlist}
Then there exists $\alpha> p$ and an integer sequence $m_k$ such that (\ref{eqsrdsip}), (\ref{eqmk1}), (\ref{eqmap}) and (\ref{eqA18705p}) are all satisfied. Hence the strong invariance principle (\ref{eqsipA181103}) holds. \end{corollary}
\begin{pf}If $\Theta_{m, p} = O(m^{-\tau} (\log m)^{-A})$, then
\begin{eqnarray*} \Xi_{\alpha, p} &\le& \sum_{l=1}^\infty2^{l(1/2 - 1/\alpha)} \sum_{j=2^{l-1}}^{2^l-1} \bigl(\delta_{j, p}^{p/\alpha} + \delta_{-j, p}^{p/\alpha}\bigr) \\ &\le& \sum _{l=1}^\infty2^{l(1/2 - 1/\alpha)} 2^{(l-1) (1-p/\alpha)} \Biggl( \sum_{j=2^{l-1}}^{2^l-1} (\delta_{j, p} + \delta_{-j, p}) \Biggr)^{p/\alpha} \\ &\le& \sum _{l=1}^\infty2^{l(3/2 - 1/\alpha-p/\alpha)} \Theta_{2^{l-1}, p}^{p/\alpha} \\ &=& \sum_{l=1}^\infty2^{l(3/2 - 1/\alpha-p/\alpha)} O \bigl[\bigl(2^{-l \tau} l^{-A}\bigr)^{p/\alpha}\bigr], \end{eqnarray*}
which is finite if $3/2 < (1+p + p\tau) / \alpha$ or $3/2 = (1+p + p\tau) / \alpha$ and $A p/\alpha> 1$.
(i) Write $\tau= \tau_p$. The quantity $\tau_p$ satisfies the following equation:
\begin{equation} \label{eqA17816} { {\tau-(1/2-1/p)} \over{\tau/p-1/4+1/(2p)}} = {2\over3}(1+p + p\tau). \end{equation}
Let $\alpha= {2\over3} (1+p + p \tau_p)$. Then (\ref{eqsrdsip}) requires that $A p/\alpha> 1$, or $A > \alpha/ p$. Let
\begin{equation} \label{eqA181015pm} m_k = \bigl\lfloor3^{ k(\alpha/p-1)/(\alpha/2-1)} k^{-1/(\alpha/2-1)} (\log k)^{-1/(p/2-1)} \bigr\rfloor, \end{equation}
which satisfies (\ref{eqmk1}). Then $\Theta_{m_k, p} = O(m_k^{-\tau} k^{-A})$. If $A > \tau/ (\alpha/2 -1)$, then (\ref{eqA18705p}) holds. If $A > \tau/ (\alpha/2 -1) + 1/ p$, then (\ref{eqmap}) holds. Combining these three inequalities on $A$, we have (i), since $\alpha/ p > \tau/ (\alpha/2 -1) + 1/ p$.
(ii) In this case we can choose $\alpha= 6$ and $m_k = \lfloor 3^{k/4} / k \rfloor$.
(iii) Since $2 < p < 4$, we can choose $\alpha$ such that $(2+p)/(3-p/2) < \alpha< (2+4p)/3$ and $m_k = \lfloor3^{k (1/2-1/p)} \log k \rfloor$. \end{pf}
Corollary \ref{corsip} indicates that, to establish Gaussian approximation for a Bernoulli shift process, one only needs to compute the functional dependence measure $\delta_{i, p}$ in~(\ref{eqpdm}). In the following examples we shall deal with some special Bernoulli processes. Example~\ref{exnts} concerns some widely used nonlinear time series, and Example~\ref{exVolterra} deals with Volterra processes, which play an important role in the study of nonlinear systems.
\begin{example} Consider the measure-preserving transformation $T x = 2 x\, \operatorname{mod}\, 1$ on $([0,1], {\mathcal B}, \mathsf{P})$, where $\mathsf{P}$ is the Lebesgue measure on $[0, 1]$. Let $U_0 \sim\operatorname{uniform} (0,1)$ have the dyadic expansion $U_0 = \sum_{j=0}^\infty\varepsilon_j / 2^{1+j}$, where $\varepsilon_j$ are i.i.d. Bernoulli random variables with $\mathsf{P}( \varepsilon_j = 0) = \mathsf{P}( \varepsilon_j = 1) = 1/2$. Then $U_i = T^i U_0 = \sum_{j=i}^\infty\varepsilon_j / 2^{1+j-i}$, $i \ge0$; see
\citet{DenKel86} for a more detailed discussion. We now compute the functional dependence measure for $X_i = g(U_i)$. Assume that $\int_0^1 g(u) \,d u = 0$ and $\int_0^1 |g(u)|^p\, d u < \infty$, $p > 2$. Then $\delta_{i,p} = 0$ if $i > 0$, and for $i \ge0$ we get by stationarity
\begin{eqnarray}
\label{eqS5943} \delta_{-i,p}^p &=& \mathsf{E}\bigl|g(U_0)
- g(U_{0, \{i\}})\bigr|^p \nonumber \\[-8pt] \\[-8pt] \nonumber &=&{1\over2} \sum _{j=1}^{2^i} \int_0^1
\biggl|g\biggl( { j \over{2^i}}+ {u \over{2^{i+1}}}\biggr) - g\biggl(
{ {j-1} \over{2^i}}+ {u \over{2^{i+1}}}\biggr)\biggr|^p\, d u. \end{eqnarray}
If $X_i = g(U_i) = K(\sum_{j=i}^\infty a_{j-i} \varepsilon_j)$, where $K$ is a Lipschitz continuous function and $\sum_{j=0}^\infty
|a_j| < \infty$, then $\delta_{i, p} = O(|a_i|)$. If $g$ has the Haar wavelet expansion
\begin{equation} \label{eqS71208} g(u) = \sum_{i=0}^\infty \sum_{j=1}^{2^i} c_{i,j} \phi_{i,j}(u), \end{equation}
where $\phi_{i,j}(u) = 2^{i/2} \phi(2^i u-j)$ and $\phi(u) = \mathbf{ 1}_{0\le u < 1/2} - \mathbf{ 1}_{1/2 \le u < 1}$, then for $i \ge0$,
\begin{equation} \label{eqS71209} \delta_{-i, p}^p = O\bigl(2^{i(p/2-1)}
\bigr) \sum_{j=1}^{2^i} |c_{i,j}|^p. \end{equation}
\end{example}
\begin{example}[(Nonlinear time series)] \label{exnts} Consider the iterated random function
\begin{equation} \label{eqJ0210431} X_i = G(X_{i-1}, \varepsilon_i), \end{equation}
where $\varepsilon_i$ are i.i.d. and $G$ is a measurable function [\citet{DiaFre99}]. Many nonlinear time series, including ARCH, threshold autoregressive, random coefficient autoregressive and bilinear autoregressive processes, are of the form (\ref{eqJ0210431}). If there exist $p > 2$ and $x_0$ such that $G(x_0, \varepsilon_0) \in{\mathcal L}^p$ and
\begin{equation} \label{eqJ0210481} \ell_p = \sup_{x\not=x'} {
{\| G(x, \varepsilon_0)-G(x', \varepsilon_0)\|_p}
\over{ |x-x'|}} < 1, \end{equation}
then $\delta_{m, p} = O(\ell^m_p)$ and also $\Theta_{m, p} = O(\ell^m_p)$ [\citet{WuSha04}]. Hence conditions in Corollary \ref{corsip} are trivially satisfied, and thus (\ref{eqsipA181103}) holds. \end{example}
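A simple concrete instance (recorded only as an illustration) is the AR(1) recursion $X_i=\theta X_{i-1}+\varepsilon_i$ with $|\theta|<1$ and $\varepsilon_0\in{\mathcal L}^p$, so that $G(x,\varepsilon)=\theta x+\varepsilon$, $G(0,\varepsilon_0)=\varepsilon_0\in{\mathcal L}^p$ and $\ell_p=|\theta|<1$ in (\ref{eqJ0210481}); consequently $\Theta_{m,p}=O(|\theta|^m)$ decays geometrically.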
\begin{example} \label{exVolterra} In the study of nonlinear systems, Volterra processes are of fundamental importance; see \citet{Sch80}, \citet{Rug81}, \citet{Cas85}, \citet{Pri88} and \citet{Ben90}, among others. We consider the discrete-time process
\begin{equation} \label{eqVolterra} X_n = \sum_{k=1}^\infty \sum_{0 \le j_1 < \cdots< j_k} g_k(j_1, \ldots, j_k) \varepsilon_{n-j_1} \cdots \varepsilon_{n-j_k}, \end{equation}
where $\varepsilon_i$ are i.i.d. with mean $0$, $\varepsilon_i \in {\mathcal L}^p$, $p > 2$, and $g_k$ is called the $k$th order Volterra kernel. Let
\begin{equation} \label{eqQnk} Q_{n, k} = \sum_{n \in\{j_1, \ldots, j_k\},\ 0 \le j_1 < \cdots< j_k} g^2_k(j_1, \ldots, j_k). \end{equation}
Assume for simplicity that $p$ is an even integer. Elementary calculations show that there exists a constant $c_p$, only depending on $p$, such that
\begin{equation} \label{eqpdmVolterra} \delta_{n, p}^2 \le c_p
\sum_{k=1}^\infty\| \varepsilon_0
\|_p^{2k} Q_{n, k}. \end{equation}
Assume that for some $\tau> 0$ and $A$,
\begin{equation}
\label{eqF51036}\qquad \sum_{k=1}^\infty\|
\varepsilon_0\|_p^{2k} \sum _{j_k\ge m,\ 0 \le j_1 < \cdots< j_k} g^2_k(j_1, \ldots, j_k) = O\bigl(m^{-1-2\tau}(\log m)^{-2A}\bigr) \end{equation}
as $m \to\infty$. Then
\begin{equation} \label{eqF51042} \sum_{n=m}^\infty
\delta_{n, p}^2 \le c_p \sum _{k=1}^\infty\| \varepsilon_0
\|_p^{2k} \sum_{n=m}^\infty Q_{n, k} = O\bigl(m^{-1-2\tau}(\log m)^{-2A}\bigr), \end{equation}
which, by the Cauchy--Schwarz inequality applied on the dyadic blocks $\{n: 2^j m \le n < 2^{j+1} m\}$, $j \ge0$, implies $\Theta_{m, p} = O(m^{-\tau} (\log m)^{-A})$, and hence Corollary \ref{corsip} is applicable. \end{example}
For further examples of processes allowing the representation (\ref{eqS41055}), we refer to \citet{Wie58}, \citet{Ton90}, \citet{Pri88}, \citet{ShaWu07}, \citet{Wu11} and the examples in Berkes, H\"ormann and Schauer (\citeyear{BerHorSch11}).
\section{\texorpdfstring{Proof of Theorem \protect\ref{thopsip}}{Proof of Theorem 2.1}} \label{secproof} The proof of Theorem \ref{thopsip} is quite intricate. To simplify the notation, we assume that $(X_i)$ is a function of a one-sided Bernoulli shift,
\begin{equation} \label{eqscp} X_{i} = G({\mathcal F}_i), \qquad\mbox{where } {\mathcal F}_i = (\ldots, \varepsilon_{i-1}, \varepsilon_{i}),\vadjust{\goodbreak} \end{equation}
where $\varepsilon_{k}, k \in{\open\mathbb{Z}}$, are i.i.d. Clearly, in this case in (\ref{eqpdm}) we have $\delta_{i, p}=0$ for $i<0$.
As argued in \citet{Wu11}, (\ref{eqscp}) itself defines a very large class of stationary processes, and many widely used linear and nonlinear processes fall within the framework of (\ref{eqscp}). Our argument can be extended to the two-sided process (\ref{eqS41055}) in a straightforward manner since our primary tool is the $m$-dependence approximation technique. In Section~\ref{secTMB} we shall handle the pre-processing work of truncation, $m$-dependence approximation and blocking, and in Section~\ref{secGAA} we shall apply Sakhanenko's (\citeyear{Sak06}) Gaussian approximation result to the transformed processes and establish conditional Gaussian approximations. Section~\ref{secUGA} removes the conditioning, and an unconditional Gaussian approximation is obtained. In Section~\ref{secA18RGA} we refine the unconditional Gaussian approximation in Section~\ref{secUGA} by linearizing the variance function, so that one can have the readily applicable form (\ref{eqsipA181103}).
\subsection{Truncation, $m$-dependence approximation and blocking} \label{secTMB} For $a > 0$, define the truncation operator $T_a$ by
\begin{equation} \label{eqtruncA13917} T_a(w) = \max\bigl(\min(w, a), -a\bigr), \qquad w \in{\open\mathbb{R}}. \end{equation}
Then $T_a$ is Lipschitz continuous and the Lipschitz constant is $1$. For $n \ge2$ let $h_n = \lceil(\log n) / (\log3) \rceil$, so that $3^{h_n-1} < n \le3^{h_n}$. Define
\begin{equation} \label{eqA14725p} W_{k, l} = \sum_{i=1+3^{k-1}}^{l+3^{k-1}} \bigl[T_{3^{k/p}} (X_i) - \mathsf{E} T_{3^{k/p}} (X_i)\bigr] \end{equation}
and the $m_k$-dependent process
\begin{equation} \label{eqtruncSA13933} \tilde X_{k,j} = \mathsf{E}\bigl[ T_{3^{k/p}}
(X_j) | \varepsilon_{j-m_k}, \ldots, \varepsilon_{j-1}, \varepsilon_j\bigr] - \mathsf{E} T_{3^{k/p}} (X_j). \end{equation}
Let
\begin{equation} \label{eqtruncSA13923} S_n^\dag= \sum _{k=1}^{h_n-1} W_{k, 3^k-3^{k-1}} + \sum _{i=1+3^{h_n-1}}^n \bigl[T_{3^{h_n/p}} (X_i) - \mathsf{E} T_{3^{h_n/p}} (X_i)\bigr] \end{equation}
and
\begin{equation} \label{eqA14814p} \tilde S_n = \sum_{k=1}^{h_n-1} \tilde W_{k, 3^k-3^{k-1}} + \tilde W_{h_n, n - 3^{h_n-1}} \qquad\mbox{where } \tilde W_{k, l} = \sum_{i=1+3^{k-1}}^{l+3^{k-1}} \tilde X_{k,i}. \end{equation}
If $n = 1$, we let $S_1^\dag= \tilde S_1 = 0$. Since $X_i \in{\mathcal L}^p$, we have
\begin{equation}
\label{eqtruncSA13930} \max_{1\le i \le n} \bigl|S_i -
S_i^\dag\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr). \end{equation}
Note that there exists a constant $c_p$ such that, for all $k \ge 1$,
\begin{equation}
\label{eqA14820p} \Bigl\Vert \max_{3^{k-1} < l \le3^k} |\tilde W_{k, l} - W_{k, l}| \Bigr\Vert _p \le c_p \bigl(3^k-3^{k-1}\bigr)^{1/2} \Theta_{1+m_k, p}. \end{equation}
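Indeed, by Markov's inequality, (\ref{eqA14820p}) yields, for every fixed $\epsilon>0$,
\[
\mathsf{P} \Bigl( \max_{3^{k-1} < l \le3^k} |\tilde W_{k, l} - W_{k, l}| \ge\epsilon 3^{k/p} \Bigr) \le c_p^p \epsilon^{-p} 3^{k(p/2-1)} \Theta_{1+m_k, p}^p ,
\]
which, since $\Theta_{1+m_k, p}\le\Theta_{m_k, p}$, is up to a constant factor dominated by the summand in (\ref{eqmap}).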
Hence, by the Borel--Cantelli lemma and condition (\ref{eqmap}), we have
\begin{equation}
\label{eqA14825p} \max_{1\le i \le n} \bigl|\tilde S_i -
S_i^\dag\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr). \end{equation}
Let $q_k = \lfloor2 \times3^{k-2} / m_k \rfloor- 2$. By (\ref{eqmk1}), $m_k = o(3^{k(\alpha/p-1)/(\alpha/2-1)})$. Hence\break $\lim_{k \to\infty} q_k = \infty$. Choose $K_0 \in{\open\mathbb{N}}$ such that $q_k \ge2$ whenever $k \ge K_0$, and let $N_0 = 3^{K_0}$. For $k \ge K_0$ define
\begin{equation} \label{eqA14923p} B_{k, j} = \sum_{i = 1 + 3 j m_k + 3^{k-1}}^{ 3(j+1)m_k + 3^{k-1}} \tilde X_{k,i},\qquad j=1, 2, \ldots, q_k. \end{equation}
Let $B_{k, j} \equiv0$ if $k < K_0$. In the sequel we assume throughout that $k \ge K_0$ and $n \ge N_0$. By Markov's inequality and the stationarity of the process $(\tilde X_{k,i})_{i \in{\open\mathbb{Z}}}$,
\begin{eqnarray} \label{eqA14927p}&& \mathsf{P} \Biggl( \max_{1 \le l \le2 \times3^{k-1}} \Biggl\vert \tilde W_{k, l} - \sum_{j=1}^{\lfloor l/(3 m_k) \rfloor} B_{k, j} \Biggr\vert \ge3^{k/p} \Biggr) \nonumber\\ &&\qquad\le
{ {2 \times3^{k-1}} \over{m_k}} \mathsf{P} \Bigl( \max_{1 \le l \le3 m_k} | \tilde W_{k, l}| \ge3^{k/p} \Bigr) \\ && \qquad\le{ {3^k \mathsf{E}(\max_{1 \le l \le3 m_k}
| \tilde W_{k, l}|^\alpha)} \over{m_k 3^{k\alpha/p}}}.\nonumber \end{eqnarray}
We define the functional dependence measure for the process $(T_{3^{k/p}} (X_i))_{i \in{\open\mathbb{Z}}}$ as
\begin{equation}
\label{eqtruncSA14738} \delta_{k, j, \iota} = \bigl\|T_{3^{k/p}}
(X_i) - T_{3^{k/p}} (X_{i, \{i-j\}})\bigr\|_\iota, \end{equation}
where $\iota\ge2$, and similarly the functional dependence measure for $(\tilde X_{k, i})$ as
\begin{equation}
\tilde\delta_{k, j, \iota} = \|\tilde X_{k, i} - \tilde X_{k, i, \{i-j\}}\|_\iota. \end{equation}
For those dependence measures, we can easily have the following simple relation:
\begin{equation} \label{eqA181112} \tilde\delta_{k, j, \iota} \le\delta_{k, j, \iota}, \delta_{k, j, p} \le\delta_{j, p} \quad\mbox{and}\quad \delta_{k, j, 2} \le\delta_{j, 2}. \end{equation}
By the above relation, a careful check of the proof of Lemma \ref{lemmomentA14} below indicates that, under (\ref{eqsrdsip}) and (\ref{eqmk1}), there exists a constant $c = c_{\alpha, p}$ such that
\begin{equation} \label{eqA14951p} \qquad\sum_{k=K_0}^\infty
{ {3^k}\over{m_k}} { {\mathsf{E}(\max_{1\le l \le3 m_k} |\tilde W_{k, l}|^\alpha)} \over{ 3^{k\alpha/p} }} \le c \bigl(M_{\alpha, p} \Theta_{0, 2}^\alpha + \Xi_{\alpha, p}^\alpha +
\|X_1\|_p^p\bigr). \end{equation}
The above inequality plays a critical role in our proof, and it will be used again later. In (\ref{eqA14927p}), the largest index $j$ is ${\lfloor2 \times3^{k-1} /(3 m_k) \rfloor} = q_k + 2$. Note that $B_{k, q_k}$ is independent of $B_{k+1, 1}$. This motivates us to define the sum
\begin{equation} \label{eqA141010p} \qquad S_n^\diamond= \sum _{k=K_0}^{h_n-1} \sum_{j=1}^{q_k} B_{k,j} + \sum_{j=1}^{\tau_n} B_{h_n, j},\qquad \mbox{where } \tau_n = \biggl\lfloor { {n-3^{h_n-1}} \over{3 m_{h_n}}} \biggr\rfloor- 2.\vadjust{\goodbreak} \end{equation}
We emphasize that the sums $\sum_{j=1}^{q_k} B_{k,j}$, $k = 1, 2, \ldots, h_n-1$ and $\sum_{j=1}^{\tau_n} B_{h_n, j}$ are mutually independent. By (\ref{eqA14927p}), (\ref{eqA14951p}) and the Borel--Cantelli lemma, we have
\begin{equation}
\label{eqA17109p} \max_{N_0 \le i \le n} \bigl|\tilde S_i -
S_i^\diamond\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr), \end{equation}
where we recall $N_0 = 3^{K_0}$. Summarizing the truncation approximation (\ref{eqtruncSA13930}), the $m$-dependence approximation (\ref{eqA14825p}) and the block approximation (\ref{eqA17109p}), we have
\begin{equation}
\label{eqA20226} \max_{N_0 \le i \le n} \bigl|S_i -
S_i^\diamond\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr), \end{equation}
and by Lemma \ref{lemcnstctn} in Chapter~\ref{s4} it remains to show that (\ref{eqsipA181103}) holds with~$S_n^\diamond$.
\subsection{Conditional Gaussian approximation} \label{secGAA} For $3^{k-1} < i \le3^k$, $k \ge K_0$, let $G_k$ be a measurable function such that
\begin{equation} \label{eqmdrv} \tilde X_{k, i} = G_{k}(\varepsilon_{i - m_{k}}, \ldots, \varepsilon_i). \end{equation}
Recall $q_k = \lfloor2 \times3^{k-2} / m_k \rfloor- 2$. For $j = 1, 2, \ldots, q_k$ define
\begin{equation} \label{eqbbs1} {\mathcal J}_{k, j} = \bigl\{3^{k-1}+ (3j-1)m_k + l, l=1, 2, \ldots, m_k\bigr\}. \end{equation}
Let $\mathbf{ a} = (\mathbf{ a}_{k, 3j}, 1\le j \le q_k)_{k=K_0}^\infty $ be a vector of real numbers, where $\mathbf{ a}_{k, 3j} = (a_l, l \in {\mathcal J}_{k, j})$, $j=1, \ldots, q_k$. Define the random functions
\begin{eqnarray*} F_{k, 3j}(\mathbf{a}_{k, 3j}) &=& \sum _{i=1+(3j-1)m_k}^{3j m_k} G_k(a_{i+3^{k-1}}, \ldots,a_{3 j m_k+3^{k-1}}, \\ & & \hspace*{77pt}\varepsilon_{3j m_k+1+3^{k-1}},\ldots, \varepsilon_{i+m_k+3^{k-1}}); \\ F_{k,3j+1}&=&\sum _{i=1+3jm_k}^{(3j+1)m_k} G_k(\varepsilon_{i+3^{k-1}}, \ldots, \varepsilon_{(3j+1)m_k+3^{k-1}}, \\ & &\hspace*{60pt} \varepsilon_{(3j+1)m_k+1+3^{k-1}},\ldots, \varepsilon_{i+m_k+3^{k-1}}); \\ F_{k,3j+2}(\mathbf{a}_{k,3j+3}) &=&\sum_{i=1+(3j+1)m_k}^{(3j+2)m_k} G_k( \varepsilon_{i+3^{k-1}},\ldots,\varepsilon_{(3j+2)m_k+3^{k-1}}, \\ &&\hspace*{78pt} a_{(3j+2)m_k+1+3^{k-1}},\ldots,a_{i+m_k+3^{k-1}}). \end{eqnarray*}
Let $\bolds{\eta}_{k, 3j} = (\varepsilon_l, l \in{\mathcal J}_{k, j})$, $j=1, \ldots, q_k$, and $\bolds{\eta}= (\bolds{\eta}_{k, 3j}, 1\le j \le q_k)_{k=K_0}^\infty$. Then
\begin{equation} \label{eqA16744} B_{k, j} = F_{k, 3j}(\bolds{\eta}_{k, 3j}) + F_{k,3j+1} + F_{k,3j+2}(\bolds{\eta}_{k, 3j+3}). \end{equation}
Note that $\mathsf{E} F_{k,3j+1} = 0$. Define the mean functions
\[ \Lambda_{k, 0} (\mathbf{a}_{k, 3j}) = \mathsf{E} F_{k, 3j}( \mathbf{a}_{k, 3j}),\qquad \Lambda_{k, 2} (\mathbf{a}_{k, 3j+3}) = \mathsf{E} F_{k,3j+2} (\mathbf{a}_{k,3j+3}). \]
Introduce the centered process
\begin{eqnarray} \label{eqA161219} \qquad Y_{k, j}(\mathbf{a}_{k, 3j}, \mathbf{a}_{k, 3j+3}) &=& \bigl[F_{k, 3j}(\mathbf{a}_{k, 3j}) - \Lambda_{k,0}(\mathbf{a}_{k, 3j})\bigr] \nonumber \\[-8pt] \\[-8pt] \nonumber && {}+ F_{k, 3j+1} + \bigl[F_{k, 3j+2}(\mathbf{a}_{k, 3j+3}) - \Lambda_{k,2}(\mathbf{a}_{k, 3j+3})\bigr]. \end{eqnarray}
Then $Y_{k, j}(\mathbf{a}_{k, 3j}, \mathbf{a}_{k, 3j+3})$, $j = 1, \ldots, q_k$, $k \ge K_0$, are mean zero independent random variables with variance function
\begin{eqnarray} \label{eqA16752} V_k(\mathbf{a}_{k, 3j},
\mathbf{a}_{k, 3j+3}) &=& \bigl\|Y_{k, j}(\mathbf{a}_{k, 3j},
\mathbf{a}_{k, 3j+3})\bigr\|^2 \nonumber\\
&=& \bigl\|F_{k, 3j}( \mathbf{a}_{k, 3j}) - \Lambda_{k,0}(\mathbf{a}_{k, 3j})
\bigr\|^2 + \|F_{k, 3j+1}\|^2 \nonumber\\ &&{} + 2 \mathsf{E}\bigl \{F_{k, 3j+1} \bigl[F_{k, 3j}(\mathbf{a}_{k, 3j}) - \Lambda_{k,0}(\mathbf{a}_{k, 3j})\bigr] \bigr\} \\
&&{} + \bigl\| F_{k, 3j+2}(\mathbf{a}_{k, 3j+3}) - \Lambda_{k,2}(
\mathbf{a}_{k, 3j+3}) \bigr\|^2\nonumber \\ &&{} + 2 \mathsf{E}\bigl \{F_{k, 3j+1} \bigl[F_{k, 3j+2}(\mathbf{a}_{k, 3j+3}) - \Lambda_{k,2}(\mathbf{a}_{k, 3j+3})\bigr] \bigr\},\nonumber \end{eqnarray}
since $[F_{k, 3j}(\mathbf{a}_{k, 3j}) - \Lambda_{k,0}(\mathbf{a}_{k, 3j})]$ and $[F_{k, 3j+2}(\mathbf{a}_{k, 3j+3}) - \Lambda_{k,2}(\mathbf{a}_{k, 3j+3})]$ are independent. Following the definition of $S_n^\diamond$ in (\ref{eqA141010p}), we let
\begin{eqnarray} \label{eqF281} H_n(\mathbf{ a})& =& \sum_{k=K_0}^{h_n-1} \sum_{j=1}^{q_k} Y_{k, j}( \mathbf{a}_{k, 3j}, \mathbf{a}_{k, 3j+3}) \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} + \sum _{j=1}^{\tau_n} Y_{h_n, j}(\mathbf{a}_{h_n, 3j}, \mathbf{a}_{h_n, 3j+3}). \end{eqnarray}
Define the mean function
\begin{eqnarray*} M_n(\mathbf{ a}) &=& \sum_{k=K_0}^{h_n-1} \sum_{j=1}^{q_k} \bigl[\Lambda_{k, 0} (\mathbf{a}_{k, 3j}) + \Lambda_{k, 2} (\mathbf{a}_{k, 3j+3}) \bigr] \\ & & {}+ \sum_{j=1}^{\tau_n} \bigl[ \Lambda_{h_n, 0} (\mathbf{a}_{h_n, 3j}) + \Lambda_{h_n, 2} ( \mathbf{a}_{h_n, 3j+3})\bigr], \end{eqnarray*}
and the variance of $H_n(\mathbf{ a})$,
\[ Q_n(\mathbf{ a}) = \sum_{k=K_0}^{h_n-1} \sum_{j=1}^{q_k} V_{k} ( \mathbf{a}_{k, 3j}, \mathbf{a}_{k, 3j+3}) + \sum _{j=1}^{\tau_n} V_{h_n} (\mathbf{a}_{h_n, 3j}, \mathbf{a}_{h_n, 3j+3}). \]
Let
\begin{eqnarray} \label{eqA16818} V^\circ_k(\mathbf{a}_{k, 3j}) &=&
\bigl\|\bigl[F_{k, 3j}(\mathbf{a}_{k, 3j}) - \Lambda_{k,0}( \mathbf{a}_{k, 3j})\bigr] \nonumber\\ &&\hspace*{6pt}{} + F_{k, 3j+1} + \bigl[F_{k, 3j+2}(\mathbf{a}_{k, 3j}) - \Lambda_{k,2}(
\mathbf{a}_{k, 3j})\bigr]\bigr\|^2 \nonumber\\
&=& \bigl\|F_{k, 3j}( \mathbf{a}_{k, 3j}) - \Lambda_{k,0}(\mathbf{a}_{k, 3j})
\bigr\|^2 + \| F_{k, 3j+1}\|^2 \nonumber\\ &&{} + 2\mathsf{E}\bigl \{F_{k, 3j+1} \bigl[F_{k, 3j}(\mathbf{a}_{k, 3j}) - \Lambda_{k,0}(\mathbf{a}_{k, 3j})\bigr]\bigr\} \nonumber\\ &&{} +
\bigl\|F_{k, 3j+2}(\mathbf{a}_{k, 3j}) - \Lambda_{k,2}(
\mathbf{a}_{k, 3j})\bigr\|^2 \\ &&{} + 2\mathsf{E}\bigl\{F_{k, 3j+1} \bigl[F_{k, 3j+2}(\mathbf{a}_{k, 3j}) - \Lambda_{k,2}( \mathbf{a}_{k, 3j})\bigr]\bigr\}, \nonumber\\ L_k(
\mathbf{a}_{k, 3j}) &=& \bigl\|F_{k, 3j+1} + \bigl[F_{k, 3j+2}( \mathbf{a}_{k, 3j}) - \Lambda_{k,2}(\mathbf{a}_{k, 3j})
\bigr]\bigr\|^2 \nonumber\\
&=& \bigl\|F_{k, 3j+1}\bigr\|^2 + \bigl\| \bigl[F_{k, 3j+2}(\mathbf{a}_{k, 3j}) - \Lambda_{k,2}(
\mathbf{a}_{k, 3j})\bigr]\bigr\|^2 \nonumber\\ &&{} + 2 \mathsf{E}\bigl\{ F_{k, 3j+1} \bigl[F_{k, 3j+2}(\mathbf{a}_{k, 3j}) - \Lambda_{k,2}(\mathbf{a}_{k, 3j})\bigr] \bigr\}.\nonumber \end{eqnarray}
By the formulas of $V_k(\mathbf{a}_{k, 3j}, \mathbf{a}_{k, 3j+3})$ in (\ref{eqA16752}) and $V^\circ_k(\mathbf{a}_{k, 3j})$ and $L_k(\mathbf{a}_{k, 3j})$ in~(\ref{eqA16818}), we have the following identity:
\begin{equation} \label{eqA16819} L_k(\mathbf{a}_{k, 3}) + \sum _{j=1}^t V_k(\mathbf{a}_{k, 3j}, \mathbf{a}_{k, 3j+3}) = \sum_{j=1}^t V^\circ_k(\mathbf{a}_{k, 3j}) + L_k( \mathbf{a}_{k, 3+3t}) \end{equation}
holds for all $t \ge1$. The above identity motivates us to introduce the auxiliary process
\begin{equation} \label{eqA16835} \Gamma_n(\mathbf{a}) = \sum _{k=K_0}^{h_n-1} L_k(\mathbf{a}_{k, 3})^{1/2} \zeta_k + L_{h_n}(\mathbf{a}_{h_n, 3})^{1/2} \zeta_{h_n}, \end{equation}
where $\zeta_l, l \in{\open\mathbb{Z}}$, are i.i.d. standard normal random variables which are independent of $(\varepsilon_i)_{i \in{\open\mathbb{Z}}}$. Then in view of (\ref{eqA16819}), the variance of $H_n(\mathbf{ a}) + \Gamma_n(\mathbf{a})$ is given by
\begin{eqnarray} \label{eqA16840} Q^\circ_n(\mathbf{ a}) &=& \sum _{k=K_0}^{h_n-1} \Biggl[\sum_{j=1}^{q_k} V^\circ_k(\mathbf{a}_{k, 3j}) + L_k( \mathbf{a}_{k, 3+3 q_k}) \Biggr] \nonumber \\[-8pt] \\[-8pt] \nonumber && {}+ \sum_{j=1}^{\tau_n} \bigl[ V^\circ_{h_n} (\mathbf{a}_{h_n, 3j}) + L_{h_n}(\mathbf{a}_{h_n, 3+3 \tau_n}) \bigr]. \end{eqnarray}
In studying $H_n(\mathbf{ a}) + \Gamma_n(\mathbf{a})$, for notational convenience, for $j = 0$ we let $Y_{k,0}(\mathbf{ a}_{k, 0}, \mathbf{ a}_{k, 3}) = L_k(\mathbf{a}_{k, 3})^{1/2} \zeta_k$. We shall now apply Sakhanenko's (\citeyear{Sak91,Sak06}) Gaussian approximation result. To this end, for $x > 0$, we define
\begin{eqnarray} \label{eqA161125}&& \Psi_h(\mathbf{ a}, x, \alpha) \nonumber\\ &&\qquad= \sum _{k=K_0}^h \sum_{j=0}^{q_k}
\mathsf{E}\min\bigl\{ \bigl|Y_{k,j}(\mathbf{ a}_{k, 3j}, \mathbf{
a}_{k, 3j+3}) / x\bigr|^\alpha, \bigl|Y_{k,j}(\mathbf{
a}_{k, 3j}, \mathbf{ a}_{k, 3j+3}) / x\bigr|^2 \bigr\} \\ &&\qquad
\le \sum_{k=K_0}^h \sum _{j=0}^{q_k} \mathsf{E}\bigl|Y_{k,j}(\mathbf{
a}_{k, 3j}, \mathbf{ a}_{k, 3j+3}) / x\bigr|^\alpha.\nonumber \end{eqnarray}
By Theorem 1 in \citet{Sak06}, there exists a probability space $(\Omega_\mathbf{ a}, {\mathcal A}_\mathbf{ a}, \mathsf{P}_\mathbf{ a})$ on which we can define a standard Brownian motion $\mathbb{B}_\mathbf{ a}$ and random variables $R_{k, j}^\mathbf{ a}$ such that the distributional equality
\begin{equation} \label{eqeqRy} \bigl(R_{k,j}^\mathbf{ a}\bigr)_{0 \le j \le q_k, k \ge K_0} \stackrel{\mathcal D}{=} \bigl(Y_{k,j}(\mathbf{ a}_{k, 3j}, \mathbf{ a}_{k, 3j+3})\bigr)_{0 \le j \le q_k, k \ge K_0} \end{equation}
holds, and, for the partial sum processes
\begin{equation} \label{eqA161218} \qquad\Upsilon_n^\mathbf{ a} = \sum _{k=K_0}^{h-1} \sum_{j=1}^{q_k} R_{k,j}^\mathbf{ a} + \sum_{j=1}^{\tau_n} R_{h_n,j}^\mathbf{ a} \qquad\mbox{and}\qquad \mu_n^\mathbf{ a} = \sum_{k=K_0}^{h-1} R_{k,0}^\mathbf{ a} + R_{h_n,0}^\mathbf{ a}, \end{equation}
we have for all $x > 0$ and $\alpha> p$ that
\begin{equation} \label{eqscha7} \mathsf{P}_\mathbf{ a} \Bigl[ \max_{N_0 \le i \le3^h} \bigl\vert \bigl(\Upsilon_i^\mathbf{ a}+ \mu_i^\mathbf{ a}\bigr) - \mathbb{B}_\mathbf{ a}\bigl( Q^\circ_i(\mathbf{ a}) \bigr) \bigr\vert \ge c_0 \alpha x \Bigr] \le\Psi_h(\mathbf{ a}, x, \alpha). \end{equation}
Here $c_0$ is an absolute constant. By Jensen's inequality, for both $j=0$ and $j > 0$, there exists a constant $c_\alpha$ such that
\begin{equation}
\mathsf{E}\bigl[\bigl|Y_{k,j}(\bolds{\eta}_{k, 3j}, \bolds{\eta}_{k, 3j+3})\bigr|^\alpha
\bigr] \le c_\alpha\mathsf{E}\bigl(|\tilde W_{k, m_k}|^\alpha \bigr). \end{equation}
In (\ref{eqscha7}) we let $x = 3^{h/p}$ and by Lemma \ref{lemasct} in the next chapter [see also (\ref{eqA14951p})],
\begin{eqnarray} \label{eqA15830} \sum_{h=K_0}^\infty\mathsf{E}\bigl[ \Psi_h\bigl(\bolds{\eta}, 3^{h/p}, \alpha\bigr)\bigr] &\le& \sum _{h=K_0}^\infty\sum _{k=K_0}^h { {q_k+1} \over{3^{\alpha h/p}}}
c_\alpha\mathsf{E}\bigl(|\tilde W_{k, m_k}|^\alpha\bigr) \nonumber\\ &
\le& \sum_{k=K_0}^\infty\sum _{h=k}^\infty { {3^k c_\alpha} \over{m_k 3^{\alpha h/p}}} \mathsf{E} \Bigl(\max _{1\le l \le3 m_k} |\tilde W_{k, l}|^\alpha \Bigr) \\ & <& \infty.\nonumber \end{eqnarray}
Hence, by the Borel--Cantelli lemma, we obtain
\begin{equation}
\label{eqA161140} \max_{i \le n}\bigl |\bigl(\Upsilon_i^{\bolds{\eta}}+ \mu_i^{\bolds{\eta}}\bigr) - \mathbb{B}_{\bolds{\eta}}\bigl(Q^\circ_i({
\bolds{\eta}})\bigr)\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr). \end{equation}
The probability space for the above almost sure convergence is
\begin{equation} \label{eqps3} (\Omega_*, {\mathcal A}_*, \mathsf{P}_*) = (\Omega, {\mathcal A}, \mathsf{P}) \times\prod _{\tau\in\Omega} (\Omega_{\bolds{\eta}(\tau)}, {\mathcal A}_{\bolds{\eta}(\tau)}, \mathsf{P}_{\bolds{\eta}(\tau)}), \end{equation}
where $(\Omega, {\mathcal A}, \mathsf{P})$ is the probability space on which the random variables $(\varepsilon_i)_{i \in{\open\mathbb{Z}}}$ are defined and, for a set $A \subset\Omega_*$ with $A \in{\mathcal A}_*$, the probability measure $\mathsf{P}_*$ is defined as
\begin{equation} \label{eq} \mathsf{P}_*(A) = \int_\Omega\mathsf{P}_{\bolds{\eta}(\omega)} (A_\omega) \mathsf{P}(d \omega), \end{equation}
where $A_\omega$ is the $\omega$-section of $A$. Here we recall that, for each $\mathbf{ a}$, $(\Omega_\mathbf{ a}, {\mathcal A}_\mathbf { a}, \mathsf{P}_\mathbf{ a})$ is the probability space carrying $\mathbb{B}_\mathbf{ a}$ and $R_{k, j}^\mathbf{ a}$ given $\bolds{\eta}= \mathbf{ a}$. On the probability space $(\Omega_*, {\mathcal A}_*, \mathsf{P}_*)$, the random variable $R_{k,j}^{\bolds{\eta}}$ is defined as $R_{k, j}^{\bolds{\eta}} (\omega, \theta(\cdot)) = R_{k, j}^{\bolds{\eta}(\omega)}(\theta(\omega))$, where $(\omega,\theta(\cdot)) \in\Omega_*$, $\theta(\cdot)$ is an element in $\prod_{\tau\in\Omega} \Omega_{\bolds{\eta}(\tau)}$ and $\theta (\tau) \in\Omega_{\bolds{\eta}(\tau)}$, $\tau\in\Omega$. The other random processes $\mu_i^{\bolds{\eta}}$ and $\mathbb{B}_{\bolds{\eta}}(Q^\circ_i({\bolds{\eta}}))$ can be similarly defined.
\subsection{Unconditional Gaussian approximation} \label{secUGA} In this subsection we shall work with the processes $\Upsilon_i^{\bolds{\eta}}$, $\mu_i^{\bolds{\eta}}$ and $\mathbb{B}_{\bolds{\eta}}(Q^\circ_i({\bolds{\eta}}))$. Based on (\ref{eqA16840}), we can construct i.i.d. standard normal random variables $Z^{\mathbf{ a}}_{i, l}, i, l \in {\open\mathbb{Z}}$, and standard normal random variables $\mathcal{ G}^{\mathbf{ a}}_{i, l}$, such that
\begin{equation} \label{eqA161157} \mathbb{B}_\mathbf{ a}\bigl( Q^\circ_n( \mathbf{ a})\bigr) = \varpi_n(\mathbf{a}) + \varphi_n( \mathbf{a}), \end{equation}
where
\begin{eqnarray*} \varpi_n(\mathbf{a}) &=& \sum_{k=K_0}^{h_n-1} \sum_{j=1}^{q_k} V^\circ_k( \mathbf{a}_{k, 3j})^{1/2} Z^\mathbf{ a}_{k, j} + \sum_{j=1}^{\tau_n} V^\circ_{h_n} (\mathbf{a}_{h_n, 3j})^{1/2} Z^\mathbf{ a}_{h_n, j}, \\ \varphi_n(\mathbf{a}) &=& \sum _{k=K_0}^{h_n-1} L_k(\mathbf{a}_{k, 3+3 q_k})^{1/2} {\mathcal G}^{\mathbf{a}}_{k, 1+q_k} + L_{h_n}(\mathbf{a}_{h_n, 3+3 \tau_n})^{1/2} {\mathcal G}^\mathbf{ a}_{h_n, 1+\tau_n}. \end{eqnarray*}
In particular, \begin{eqnarray*} {V^\circ_{h_n} (\mathbf{a}_{h_n, 3j})^{1/2}} Z^{\mathbf {a}}_{h_n, j} &=& \mathbb{B}_{\mathbf {a}}\Biggl( Q^\circ_{3^{h_n-1}}(\mathbf {a}) + \sum_{j'=1}^j V^\circ_{h_n} (\mathbf{a}_{h_n, 3j'})\Biggr) \\ &&{}- \mathbb{B}_{\mathbf{ a}}\Biggl( Q^\circ_{3^{h_n-1}}({\mathbf {a}}) + \sum_{j'=1}^{j-1} V^\circ_{h_n} (\mathbf{a}_{h_n, 3j'})\Biggr) \end{eqnarray*} and \[ L_{h_n}(\mathbf{a}_{h_n, 3+3 \tau_n})^{1/2} {\mathcal G}^{\bf a}_{h_n, 1+\tau_n} = \mathbb{B}_{\mathbf a}\bigl( Q^\circ_n({\mathbf a})\bigr) - \mathbb{B}_{\mathbf a}\Biggl( Q^\circ_{3^{h_n-1}}({\bf a}) + \sum_{j=1}^{\tau_n} V^\circ_{h_n} (\mathbf{a}_{h_n, 3j})\Biggr). \] Note that the standard normal random variables ${\mathcal G}^{\mathbf a}_{i, l}, i,l,$ can be possibly dependent and $({\mathcal G}^{\bf a}_{i, l})_{i l}$ and $(Z^{\mathbf a}_{i, l})_{i l}$ can also be possibly dependent.
Let $Z^\star_{i, l}$, $i, l \in{\open\mathbb{Z}}$, be i.i.d. standard normal random variables, independent of $(\varepsilon_j)_{j \in{\open\mathbb{Z}}}$, and define
\[ \Phi_n =\sum_{k=K_0}^{h_n-1} \sum_{j=1}^{q_k} V^\circ_k( \bolds{\eta}_{k, 3j})^{1/2} Z^\star_{k, j} + \sum _{j=1}^{\tau_n} V^\circ_{h_n} (\bolds{\eta}_{h_n, 3j})^{1/2} Z^\star_{h_n, j}. \]
Since the $Z^\mathbf{ a}_{i, l}$ are i.i.d. standard normal, the conditional distribution $[\varpi_n(\bolds{\eta}) | \bolds{\eta}= \mathbf{ a}]$, namely the distribution of $\varpi_n(\mathbf{a})$, is the same as the conditional distribution of $\Phi_n$ given $\bolds{\eta}= \mathbf{ a}$. Hence
\begin{equation} (\Phi_i)_{i \ge N_0}
\stackrel{\mathcal D}{=} \bigl(\varpi_i(\bolds{\eta})\bigr)_{i \ge N_0}.
\end{equation}
By Jensen's inequality, $\mathsf{E}[|L_k(\bolds{\eta}_{k, 3j+3})^{1/2}|^\alpha]
\le3^{\alpha} \mathsf{E}(|\tilde W_{k, m_k}|^\alpha)$. By (\ref{eqA14951p}),
\begin{eqnarray} \label{eqA17228}&& \sum_{k=K_0}^\infty\mathsf{P} \Bigl(
\max_{1\le j \le q_k} \bigl|L_k(\bolds{\eta}_{k, 3j+3})^{1/2}
{\mathcal G}_{k, 1+j}^{\bolds{\eta}} \bigr| \ge3^{k/p} \Bigr)\nonumber\\ &&\qquad \le \sum _{k=K_0}^\infty q_k
{ {\mathsf{E}[|L_k(\bolds{\eta}_{k, 3})^{1/2} {\mathcal G}_{k, 1}^{\bolds{\eta}} |^\alpha]} \over{3^{k\alpha/p}} } \nonumber \\[-8pt] \\[-8pt] \nonumber
&&\qquad\le \sum_{k=K_0}^\infty q_k { {c_\alpha\mathsf{E}(|\tilde W_{k, m_k}|^\alpha)} \over{3^{k\alpha/p}} } \\ &&\qquad < \infty,\nonumber \end{eqnarray}
which by the Borel--Cantelli lemma implies
\begin{equation}
\label{eqA17203} \max_{i\le n}\bigl |\varphi_i(\bolds{\eta})\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr). \end{equation}
The same argument also implies that $\max_{i\le n} |\Gamma_i(\bolds{\eta})| = o_\mathrm{ a.s.}(n^{1/p})$ and consequently
\begin{equation}
\label{eqA161200} \max_{i\le n}\bigl |\mu_i^{\bolds{\eta}}\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr) \end{equation}
in view of (\ref{eqeqRy}) with $j = 0$. Hence by (\ref{eqA161140}) and (\ref{eqA161157}), we have $ \max_{i \le n}
|\Upsilon_i^{\bolds{\eta}}-\varpi_i(\bolds{\eta})| = o_\mathrm{ a.s.}(n^{1/p})$. Observe that, by (\ref{eqeqRy}), (\ref{eqA161218}), (\ref{eqA16744}) and (\ref{eqA161219}), we have the distributional equality
\begin{equation} \label{eqA161220} \bigl(\Upsilon_i^{\bolds{\eta}} + M_i( \bolds{\eta})\bigr)_{i \ge N_0} \stackrel{\mathcal D}{=} \bigl(S_i^\diamond \bigr)_{i \ge N_0}, \end{equation}
where we recall (\ref{eqA141010p}) for the definition of $S_n^\diamond$. Then it remains to establish a strong invariance principle for $\Phi_n + M_n(\bolds{\eta})$. To this end, let
\begin{equation} A_{k, j} = V^\circ_k(\bolds{\eta}_{k, 3j})^{1/2} Z^\star_{k, j} + \Lambda_{k, 0} ( \bolds{\eta}_{k, 3j}) + \Lambda_{k, 2} (\bolds{\eta}_{k, 3j}), \end{equation}
which are independent random variables for $j = 1, \ldots, q_k$ and $k \ge K_0$, and let
\begin{equation} \label{eqA17220p} S_n^\natural= \sum _{k=K_0}^{h_n-1} \sum_{j=1}^{q_k} A_{k,j} + \sum_{j=1}^{\tau_n} A_{h_n, j} \end{equation}
and $R_n^\natural= \Phi_n + M_n(\bolds{\eta}) - S_n^\natural$. Note that
\[ R_n^\natural= \sum_{k=K_0}^{h_n-1} \bigl[\Lambda_{k, 2} (\bolds{\eta}_{k, 3+3q_k}) - \Lambda_{k, 2} ( \bolds{\eta}_{k, 3})\bigr] + \bigl[\Lambda_{h_n, 2} ( \bolds{\eta}_{h_n, 3+3 \tau_n}) - \Lambda_{h_n, 2} (\bolds{\eta}_{h_n, 3})\bigr]. \]
Then using the same argument as in (\ref{eqA17228}), we have
\begin{equation}
\label{eqA17231p} \max_{i \le n}\bigl |R_i^\natural\bigr|
= \max_{i \le n} \bigl|\Phi_i + M_i(\bolds{\eta}) -
S_i^\natural\bigr| = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr). \end{equation}
The variance of $S_n^\natural$ equals
\begin{eqnarray} \label{eqA20212p} \sigma_n^2 &=& \sum _{k=K_0}^{h_n-1} \sum_{j=1}^{q_k}
\|A_{k, j}\|^2 + \sum_{j=1}^{\tau_n}
\|A_{h_n, j}\|^2 \nonumber \\[-8pt] \\[-8pt] \nonumber & =& \sum_{k=K_0}^{h_n-1}
q_k \|A_{k, 1}\|^2 + \tau_n
\|A_{h_n, 1}\|^2. \end{eqnarray}
Again by Theorem 1 in \citet{Sak06} and the argument in (\ref{eqscha7})--(\ref{eqA161140}), on the same probability space that defines $(A_{k, j})_{1\le j \le q_k, k \ge K_0}$ there exists a standard Brownian motion $\mathbb{B}$ such that
\begin{equation}
\label{eqA161249} \max_{i\le n} \bigl|S_i^\natural-
\mathbb{B}\bigl(\sigma_i^2\bigr)\bigr| = o_\mathrm{ a.s.} \bigl(n^{1/p}\bigr). \end{equation}
\subsection{Regularizing the Gaussian approximation} \label{secA18RGA} In this section we shall regularize the Gaussian approximation (\ref{eqA161249}) by replacing the variance function $\sigma_i^2$ by the asymptotically linear form $\phi_i$ or the linear form $i \sigma^2$; the latter is easier to use. By (\ref{eqA16818}), we obtain
\begin{eqnarray} V^\circ_k(\mathbf{a}_{k, 3j}) &=&
\bigl\|F_{k, 3j}(\mathbf{a}_{k, 3j})\bigr\|^2 -
\Lambda_{k,0}(\mathbf{a}_{k, 3j})^2 + \|
F_{k, 3j+1}\|^2 \nonumber\\ &&{} + 2\mathsf{E}\bigl\{F_{k, 3j+1} F_{k, 3j}(\mathbf{a}_{k, 3j}) \bigr\} \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} +
\bigl\|F_{k, 3j+2}(\mathbf{a}_{k, 3j})\bigr\|^2 - \Lambda_{k,2}(\mathbf{a}_{k, 3j})^2 \\ &&{} + 2\mathsf{E} \bigl\{F_{k, 3j+1} F_{k, 3j+2}(\mathbf{a}_{k, 3j}) \bigr\},\nonumber \end{eqnarray}
which, by the expression of $A_{k, j}$, implies that
\begin{eqnarray}
\|A_{k, j}\|^2 &=& \mathsf{E}\bigl[V^\circ_k( \bolds{\eta}_{k, 3j})\bigr] + \mathsf{E}\bigl[\Lambda_{k, 0} ( \bolds{\eta}_{k, 3j}) + \Lambda_{k, 2} (\bolds{\eta}_{k, 3j}) \bigr]^2 \nonumber \\[-8pt] \\[-8pt] \nonumber & =& 3 \mathsf{E}\bigl[\tilde W_{k, m_k}^2 + 2 \tilde W_{k, m_k} (\tilde W_{k, 2 m_k}-\tilde W_{k, m_k}) \bigr]. \end{eqnarray}
Let $\tilde\gamma_{k, i} = \mathsf{E}(\tilde X_{k,0} \tilde X_{k,i} )$. Then $\nu_k := \|A_{k, j}\|^2 / (3 m_k)$ has the expression
\begin{eqnarray} \label{eqA19513p} \nu_k &=& {1\over{m_k}} \mathsf{E}\bigl[\tilde W_{k, m_k}^2 + 2 \tilde W_{k, m_k} (\tilde W_{k, 2 m_k}-\tilde W_{k, m_k})\bigr] \nonumber \\[-8pt] \\[-8pt] \nonumber & = & \sum _{i=-m_k}^{m_k} \tilde\gamma_{k, i} + 2 \sum _{i=1}^{m_k} (1-i/m_k) \tilde \gamma_{k, m_k+i}. \end{eqnarray}
We now prove that
\begin{equation} \label{eqA181047} \nu_k - \sigma^2 = O \Bigl[ \Theta_{m_k, p}+ \min_{l \ge0} \bigl(\Theta_{l, p} + l 3^{k (2/p-1)}\bigr) \Bigr], \end{equation}
which converges to $0$ as $k \to\infty$. Let $\hat X_{k, i} = T_{3^{k/p}} (X_i)$ and $\hat\gamma_{k,i} = \operatorname{cov}(\hat X_{k, 0},\break
\hat X_{k, i}) = \mathsf{E}(\hat X_{k, 0} \hat X_{k, i}) - [\mathsf{E}(\hat X_{k, 0})]^2$. Note that if $|X_i| \le3^{k/p}$, then $X_i = \hat X_{k, i}$. Since $X_i \in{\mathcal L}^p$,
\begin{eqnarray}
\bigl|\mathsf{E}(X_0 X_i) - \mathsf{E}(\hat X_{k, 0} \hat X_{k, i})\bigr| &=& \bigl|\mathsf{E}(X_0 X_i \mathbf{
1}_{|X_0| \le3^{k/p}, |X_i| \le3^{k/p}} ) - \mathsf{E}(\hat X_{k, 0} \hat X_{k, i}) \nonumber\\ &&\hspace*{60pt}{}
+ \mathsf{E}(X_0 X_i \mathbf{ 1}_{\max(|X_0|, |X_i|) > 3^{k/p}} )\bigr| \nonumber\\
&\le& \bigl|
\mathsf{E}(\hat X_{k, 0} \hat X_{k, i} \mathbf{ 1}_{\max(|X_0|, |X_i|) > 3^{k/p}} ) \bigr| \nonumber \\[-8pt] \\[-8pt] \nonumber
&& {}+ \bigl|\mathsf{E}(X_0 X_i \mathbf{ 1}_{\max(|X_0|, |X_i|) > 3^{k/p}} )\bigr| \\
&\le& 2 \mathsf{E}\bigl[ \bigl(|X_0|+|X_i|\bigr)^2 \mathbf{
1}_{ |X_0| + |X_i| > 3^{k/p}} \bigr] \nonumber\\ &=& o\bigl(3^{k(2-p)/p}\bigr).\nonumber \end{eqnarray}
Clearly, we also have $\mathsf{E}(\hat X_{k, 0}) = o(3^{k(2-p)/p})$. Hence
\begin{equation}
\label{eqA18651p} \sup_i |\hat\gamma_{k,i} -
\gamma_i| = o\bigl(3^{k(2-p)/p}\bigr). \end{equation}
For all $j \ge1$, we have $\|W_{k,j} - \tilde W_{k,j} \| \le j^{1/2} \Theta_{m_k, 2} \le j^{1/2} \Theta_{m_k, p}$. Then
\begin{equation}
\label{eqA18646p}\qquad \bigl|\mathsf{E} W_{k,j}^2 - \mathsf{E}\tilde W_{k,j}^2\bigr| \le\|W_{k,j} - \tilde W_{k,j} \| \|W_{k,j} + \tilde W_{k,j} \| \le2 j \Theta_{m_k, p} \Theta_{0, p}. \end{equation}
Since $\lim_{j \to\infty} j^{-1} \mathsf{E}\tilde W_{k,j}^2 = \sum_{i=-m_k}^{m_k} \tilde\gamma_{k, i}$ and $\lim_{j \to\infty} j^{-1} \mathsf{E} W_{k,j}^2 = \sum_{i \in{\open\mathbb{Z}}} \hat\gamma_{k,i}$, (\ref{eqA18646p}) implies that
\begin{equation} \label{eqA18647p} \Biggl\vert \sum_{i=-m_k}^{m_k} \tilde\gamma_{k, i} - \sum_{i \in{\open\mathbb{Z}}} \hat \gamma_{k,i}\Biggr\vert \le2 \Theta_{m_k, p} \Theta_{0, p}. \end{equation}
Let the projection operator ${\mathcal P}_l \cdot= \mathsf{E}( \cdot| {\mathcal F}_l) - \mathsf{E}( \cdot| {\mathcal F}_{l-1})$. Then $\hat X_{k,i} = \sum_{l \in{\open\mathbb{Z}}} {\mathcal P}_l \hat X_{k,i}$. By the orthogonality of ${\mathcal P}_l, l \in{\open\mathbb{Z}}$, and inequality (\ref{eqA181112}),
\begin{eqnarray}
\label{eqA181140} |\hat\gamma_{k, i}| &=& \biggl\vert \sum _{l \in{\open\mathbb{Z}}} \sum_{l' \in{\open\mathbb{Z}}} \mathsf{E}\bigl[({\mathcal P}_l \hat X_{k,0}) ({\mathcal P}_{l'} \hat X_{k,i} )\bigr] \biggr\vert \nonumber \\[-8pt] \\[-8pt] \nonumber
&\le& \sum _{l \in{\open\mathbb{Z}}} \| {\mathcal P}_l \hat X_{k,0}\| \| {
\mathcal P}_{l} \hat X_{k,i} \| \le\sum _{j=0}^\infty\delta_{j, p} \delta_{j+i, p}. \end{eqnarray}
The same inequality also holds for $|\gamma_i|$ and $|\tilde
\gamma_{k, i}|$. For any $0 \le l \le m_k$, we have by~(\ref{eqA181140}) that
\begin{equation}
\sum_{i=l}^{\infty}\bigl (|\hat
\gamma_{k, i}| + |\tilde\gamma_{k, i}| + |\gamma_i|\bigr) \le3 \sum_{i=l}^\infty\sum _{j=0}^\infty \delta_{j, p} \delta_{j+i, p} \le3 \Theta_{0, p} \Theta_{l,p}, \end{equation}
which entails (\ref{eqA181047}) in view of (\ref{eqA18651p}), (\ref{eqA18647p}) and (\ref{eqA19513p}).
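For completeness, the bound above follows by interchanging the order of summation; writing $\Theta_{l, p} = \sum_{j \ge l} \delta_{j, p}$, which is consistent with the inequality stated above, each of the three terms is handled by
\[
\sum_{i=l}^\infty \sum_{j=0}^\infty \delta_{j, p}\, \delta_{j+i, p}
= \sum_{j=0}^\infty \delta_{j, p} \sum_{m=j+l}^\infty \delta_{m, p}
\le \Biggl(\sum_{j=0}^\infty \delta_{j, p}\Biggr) \Theta_{l, p}
= \Theta_{0, p}\, \Theta_{l, p}.
\]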
Recall (\ref{eqA20212p}) and (\ref{eqA161249}) for $\sigma_n^2$. Now we shall compare $\sigma_n^2$ with
\begin{equation} \phi_n = \sum_{k=1}^{h_n-1} \bigl(3^k - 3^{k-1}\bigr) \nu_k + \bigl(n-3^{h_n-1}\bigr) \nu_{h_n}. \end{equation}
Then $\phi_n$ is a piecewise linear function. Observe that, by (\ref{eqmk1}),
\begin{equation}
\max_{i \le n} \bigl|\phi_i-\sigma_i^2\bigr| \le3 \max_{k \le h_n} (m_k \nu_k) = o \bigl(n^{ (\alpha/p-1)/(\alpha/2 -1)}\bigr). \end{equation}
By increment properties of Brownian motions, we obtain
\begin{equation}
\label{eqA20223p} \max_{i \le n} \bigl|\mathbb{B}(\phi_i)-\mathbb{B}\bigl(
\sigma_i^2\bigr)\bigr| = o_\mathrm{ a.s.}\bigl(n^{ (\alpha/p-1)/(\alpha-2)} \log n\bigr) = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr). \end{equation}
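For clarity, we sketch this step. The preceding display bounds the time increments by $r_n = n^{(\alpha/p-1)/(\alpha/2-1)}$ (up to $o(\cdot)$), over a time interval whose length is $O(n)$; the standard almost sure bound on the increments of Brownian motion then gives
\[
\max_{i \le n}\bigl|\mathbb{B}(\phi_i)-\mathbb{B}\bigl(\sigma_i^2\bigr)\bigr| = O_\mathrm{ a.s.}\bigl( \sqrt{r_n \log n}\, \bigr) = O_\mathrm{ a.s.}\bigl( n^{ (\alpha/p-1)/(\alpha-2)} \sqrt{\log n}\, \bigr),
\]
which is within the bound stated in (\ref{eqA20223p}).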
Note that by (\ref{eqA181047}), $\phi_i$ is asymptotically linear with slope $\sigma^2$. Here we emphasize that, under (\ref{eqsrdsip}), (\ref{eqmk1}) and (\ref{eqmap}), a strong invariance principle with the Brownian motion $\mathbb{B}(\phi_i)$ holds in view of (\ref{eqA20226}), (\ref{eqA161220}), (\ref{eqA17231p}), (\ref{eqA161249}), (\ref{eqA20223p}) and Lemma \ref{lemcnstctn} in the next section. However, the approximation $\mathbb{B}(\phi_i)$ is not convenient to use since~$\phi_i$ is not genuinely linear.
Next, under condition (\ref{eqA18705p}), we shall linearize the variance function $\phi_i$, so that one can have the readily applicable form (\ref{eqsipA181103}). Based on the form of $\phi_i$, we write
\begin{equation} \mathbb{B}(\phi_n) = \sum_{k=1}^{h_n-1} \sum_{j=1}^{3^k - 3^{k-1}} \nu_k^{1/2} Z_{k,j} + \sum_{j=1}^{n - 3^{h_n-1}} \nu_{h_n}^{1/2} Z_{h_n,j}, \end{equation}
where $Z_{k, j}$ are i.i.d. standard normal random variables. Define
\begin{equation} \mathbb{B}^\ddag(n) = \sum_{k=1}^{h_n-1} \sum_{j=1}^{3^k - 3^{k-1}} Z_{k,j} + \sum _{j=1}^{n - 3^{h_n-1}} Z_{h_n,j}, \end{equation}
which is a standard Brownian motion for integer values of $n$. Then we can write
\begin{equation} \label{eqA19411p} \mathbb{B}(\phi_n) - \sigma\mathbb{B}^\ddag(n) = \sum _{i=2}^n b_i Z_i, \end{equation}
where $(Z_2, Z_3, Z_4, \ldots) = (Z_{1,1}, Z_{1,2}, Z_{2,1}, Z_{2,2}, \ldots, Z_{2, 6}, \ldots, Z_{k,1}, \ldots,\break Z_{k, 3^k - 3^{k-1}}, \ldots)$ is a lexicographic re-arrangement of $Z_{k,j}$, and the coefficients $b_n = \nu_{h_n}^{1/2} - \sigma$. Then
\begin{eqnarray}
\label{eqA18709p} \varsigma_n^2 &=& \bigl\|\mathbb{B}(
\phi_n) - \sigma\mathbb{B}^\ddag(n)\bigr\|^2 = \sum _{i=2}^n b_i^2 \nonumber \\[-8pt] \\[-8pt] \nonumber &=& \sum _{k=1}^{h_n-1} \bigl(3^k - 3^{k-1}\bigr) \bigl(\nu_k^{1/2} - \sigma \bigr)^2 + \bigl(n - 3^{h_n-1}\bigr) \bigl( \nu_{h_n}^{1/2} - \sigma\bigr)^2 \end{eqnarray}
and $\varsigma_n^2$ is nondecreasing. If $\lim_{n \to\infty} \varsigma_n^2 < \infty$, then trivially we have
\begin{equation} \label{eqA18745} \mathbb{B}(\phi_n) - \sigma\mathbb{B}^\ddag(n) = o_\mathrm{ a.s.}\bigl(n^{1/p}\bigr). \end{equation}
We shall now prove (\ref{eqA18745}) under the assumption that $\lim_{n \to\infty} \varsigma_n^2 = \infty$. Under the latter condition, note that we can represent $\mathbb{B}(\phi_n) - \sigma \mathbb{B}^\ddag(n)$ as another Brownian motion $\mathbb{B}_0(\varsigma_n^2)$, and by the law of the iterated logarithm for Brownian motion, we have
\begin{equation} \label{eqA18749} \mathop{\underline{\overline{\lim}}}_{n \to\infty} { {\mathbb{B}(\phi_n) - \sigma\mathbb{B}^\ddag(n)} \over{\sqrt{2 \varsigma_n^2 \log\log\varsigma_n^2}}} = \pm1 \qquad\mbox{almost surely.} \end{equation}
Then (\ref{eqA18745}) follows if we can show that
\begin{equation} \label{eqA18658p} \varsigma_n^2 \log\log n = o \bigl(n^{2/p}\bigr). \end{equation}
Note that (\ref{eqA181047}) and (\ref{eqA18705p}) imply that $3^k (\nu_k^{1/2} - \sigma)^2 = o(3^{2k/p} / \log k)$, which entails~(\ref{eqA18658p}) in view of (\ref{eqA18709p}).
\section{Some useful lemmas}\label{s4} \label{secuselem} In this section we shall provide some lemmas that are used in Section~\ref{secproof}. Lemma \ref{lemcnstctn} is a ``gluing'' lemma, and it concerns how to combine almost sure convergences in different probability spaces. Lemma \ref{lemasct} relates truncated and original moments, and Lemma \ref{lemmomentA14} gives an inequality for moments of maxima of sums.
\begin{lemma} \label{lemcnstctn} Let $(T_{1,n})_{n \ge1}$ and $(U_{1,n})_{n \ge 1}$ be two sequences of random variables defined on the probability space $(\Omega_1, {\mathcal A}_1, \mathsf{P}_1)$ such that $T_{1,n} - U_{1,n} \to0$ almost surely; let $(T_{2,n})_{n \ge1}$ and $(U_{2,n})_{n \ge1}$ be another two sequences of random variables defined on the probability space $(\Omega_2, {\mathcal A}_2, \mathsf{P}_2)$ such that $T_{2,n} - U_{2,n} \to0$ almost surely. Assume that the distributional equality $(U_{1,n})_{n \ge1} \stackrel{\cal D}{=}(T_{2,n})_{n\ge1}$ holds. Then we can construct a probability space $(\Omega^\dag, {\mathcal A}^\dag, \mathsf{P}^\dag)$ on which we can define $(T_{1,n}')_{n \ge1}$ and $(U'_{2,n})_{n\ge1}$ such that $(T_{1,n}')_{n \ge1} \stackrel{\cal D}{=} (T_{1,n})_{n \ge1}$, $(U'_{2, n})_{n \ge1} \stackrel{\cal D}{=}(U_{2,n})_{n\ge 1}$ and $T_{1,n}' - U'_{2, n} \to0$ almost surely in $(\Omega^\dag, {\mathcal A}^\dag, \mathsf{P}^\dag)$. \end{lemma}
\begin{pf} Let $\mathbf{ T}_1=(T_{1,n})_{n\ge1}$, $\mathbf{ U}_1=(U_{1,n})_{n\ge1}$, $\mathbf{ T}_2=(T_{2, n})_{n\ge1}$, $\mathbf{
U}_2=\break (U_{2,n})_{n\ge1}$; let $\mu_{\mathbf{ T}_1 |\mathbf{ U}_1}$ and $\mu _{\mathbf{
U}_2 | \mathbf{T}_2}$ denote, respectively, the conditional distribution of $ \mathbf{T}_1 $ given $ \mathbf{U}_1$ and the conditional distribution of $ \mathbf{U}_2 $ given $ \mathbf{T}_2$. Let $(\Omega^\dag, {\mathcal F}^\dag, P^\dag)$ be a probability space on which there exists a vector $ \mathbf{U}_1'$ distributed as $ \mathbf{U}_1$. By enlarging $(\Omega^\dag, {\mathcal F}^\dag, P^\dag)$ if necessary, there exist random vectors $ \mathbf{T}_1'$ and
$ \mathbf{U}_2'$ on this probability space such that the conditional distribution of $ \mathbf{T}_1'$ given $ \mathbf{U}_1'$ equals $\mu_{\mathbf{T}_1 | \mathbf{U}_1}$, and the conditional distribution of $ \mathbf{U}_2'$ given $
\mathbf{U}_1'$ equals $\mu_{\mathbf{U}_2 | \mathbf{T}_2}$. Then by $\mathbf{ U}_1 \stackrel{\cal D}{=} \mathbf{ T}_2$ we have $ (\mathbf{T}_1', \mathbf{U}_1') \stackrel{\cal D}{=} (\mathbf{T}_1, \mathbf{U}_1)$ and $(\mathbf{U}_1', \mathbf{U}_2') \stackrel{\cal D}{=} (\mathbf{T}_2, \mathbf{U}_2)$. Hence, for the components, $T_{1,n}'-U_{1,n}'\to0$ a.s. and $U_{1,n}'-U_{2,n}'\to0$ a.s., and consequently $T_{1,n}'-U_{2,n}' \to0$ a.s. \end{pf}
\begin{lemma} \label{lemasct} Let $X \in{\mathcal L}^p$, $2 < p < \alpha$. Then there exists a constant $c = c_{\alpha, p}$ such that
\begin{equation} \label{eqCT43}\qquad \sum_{i=1}^\infty3^i
\mathsf{P}\bigl(|X| \ge3^{i/p}\bigr) + \sum_{i=1}^\infty3^i
\mathsf{E}\min\bigl(\bigl|X/3^{i/p}\bigr|^\alpha, \bigl|X/3^{i/p}\bigr|^2
\bigr) \le c \mathsf{E}\bigl(|X|^p\bigr). \end{equation}
\end{lemma}
\begin{pf} That the first sum is finite follows from
\begin{equation} \label{eqCT431} \sum_{i=1}^\infty3^i
\mathsf{P}\bigl(|X| \ge3^{i/p}\bigr) \le3 \sum_{i=1}^\infty
\int_{3^{i-1}}^{3^i} \mathsf{P}\bigl(|X|^p > u\bigr)
\,d u \le3 \mathsf{E}\bigl(|X|^p\bigr). \end{equation}
For the second one, let $q_i = \mathsf{P}(3^{i-1} \le|X|^p < 3^i)$. Then
\begin{eqnarray} \label{eqCT432} \sum_{i=1}^\infty3^i
\mathsf{E}\bigl(\bigl|X/3^{i/p}\bigr|^2 \mathbf{ 1}_{|X|^p\ge3^i}\bigr) & \le& \sum_{i=1}^\infty3^i \sum _{j=1+i}^\infty 3^{(j-i)2/p} q_j \nonumber\\ &=& \sum_{j=2}^\infty \sum_{i=1}^{j-1} 3^i 3^{(j-i)2/p} q_j \\
&=& c_1 \sum _{j=2}^\infty3^{j} q_j \le c_1 \mathsf{E}\bigl(|X|^p\bigr)\nonumber \end{eqnarray}
for some constant $c_1$ only depending on $p$ and $\alpha$. Similarly, there exists $c_2$ such that
\begin{eqnarray*} \sum_{i=1}^\infty3^i \mathsf{E}
\bigl(\bigl|X/3^{i/p}\bigr|^\alpha\mathbf{ 1}_{|X|^p < 3^i}\bigr) &\le& \sum_{i=1}^\infty3^i \sum _{j=-\infty}^i 3^{(j-i)\alpha/p} q_j \\ &=& \sum_{j=-\infty}^\infty\sum _{i=\max(1,j)}^\infty 3^{i(1-\alpha/p)} 3^{j \alpha/p}
q_j \le c_2 \mathsf{E}\bigl(|X|^p\bigr). \end{eqnarray*}
For the last relation, we consider the two cases $\sum_{j=-\infty}^0$ and $\sum_{j=1}^\infty$ separately. The lemma then follows from (\ref{eqCT431}) and (\ref{eqCT432}). It is easily seen that (\ref{eqCT43}) also holds with the factor $3$ therein replaced by any $\theta> 1$. In this case the constant $c$ depends on $p, \alpha$ and $\theta$. \end{pf}
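A remark on the geometric sums used in the proof above (only $2 < p < \alpha$ is needed, so that $1-2/p>0$ and $1-\alpha/p<0$): in (\ref{eqCT432}),
\[
\sum_{i=1}^{j-1} 3^{i}\, 3^{(j-i)2/p} = 3^{2j/p}\sum_{i=1}^{j-1} 3^{i(1-2/p)} \le c_p\, 3^{2j/p}\, 3^{j(1-2/p)} = c_p\, 3^{j},
\]
while in the display that follows,
\[
\sum_{i=\max(1,j)}^{\infty} 3^{i(1-\alpha/p)}\, 3^{j\alpha/p} \le c_{\alpha, p}\, 3^{\max(1,j)(1-\alpha/p)}\, 3^{j\alpha/p} \le c_{\alpha, p}\, 3^{j},
\]
where the last inequality is immediate for $j \ge 1$ and uses $3^{j\alpha/p} \le 3^j$ for $j \le 0$. In both cases the conclusion follows from $\sum_j 3^j q_j \le 3\, \mathsf{E}(|X|^p)$; here $c_p$ and $c_{\alpha,p}$ denote generic constants.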
\begin{lemma} \label{lemmomentA14} Recall (\ref{eqsrdsip}) and (\ref{eqmk1}) for $\Xi_{\alpha, p}$ and $M_{\alpha, p}$, respectively, and (\ref{eqA14725p}) for $W_{k, l}$. Then there exists a constant $c$, depending only on $\alpha$ and $p$, such that
\begin{equation} \label{eqA14728p} \sum_{k=1}^\infty
{ {3^k}\over{m_k}} { {\mathsf{E}(\max_{1\le l \le m_k} |W_{k, l}|^\alpha)} \over{ 3^{k\alpha/p} }} \le c M_{\alpha, p} \Theta_{0, 2}^\alpha + c \Xi_{\alpha, p}^\alpha + c
\|X_1\|_p^p. \end{equation}
\end{lemma}
\begin{pf}Recall (\ref{eqtruncSA14738}) for the functional dependence measure $\delta_{k, j, \iota}$. Since $T_a$ has Lipschitz constant $1$, we have
\begin{eqnarray} \label{eqtruncSA14748} \delta_{k, j, \iota}^\iota &\le& \mathsf{E}\bigl[\min
\bigl(2 \times3^{k/p}, |X_i-X_{i, \{i-j\}}| \bigr)^\iota\bigr] \nonumber \\[-8pt] \\[-8pt] \nonumber &\le& 2^\iota\mathsf{E}\bigl[\min
\bigl(3^{k/p}, |X_j-X_{j, \{0\}}|\bigr)^\iota \bigr]. \end{eqnarray}
We shall apply the Rosenthal-type inequality in \citet{LiuHanWu}: there exists a constant $c$, only depending on $\alpha$, such that
\begin{eqnarray}
\label{eqA14808} \Bigl\Vert \max_{1\le l \le m_k} |W_{k, l}|
\Bigr\Vert _{\alpha} &\le& c m_k^{1/2} \Biggl[ \sum _{j=1}^{m_k} \delta_{k, j, 2} + \sum _{j=1+m_k}^\infty\delta_{k, j, \alpha} + \bigl\|
T_{3^{k/p}} (X_1)\bigr\|_2 \Biggr] \nonumber\\ & &{}+ c m_k^{1/\alpha} \Biggl[ \sum_{j=1}^{m_k}
j^{1/2-1/\alpha} \delta_{k, j, \alpha} + \bigl\| T_{3^{k/p}} (X_1)
\bigr\|_\alpha \Biggr] \\ &\le& c(I_k + \mathit{I I}_k + \mathit{I I I}_k),\nonumber \end{eqnarray}
where
\begin{eqnarray} \label{eqA14826} I_k &=& m_k^{1/2} \sum _{j=1}^\infty\delta_{j, 2} +
m_k^{1/2} \|X_1\|_2, \nonumber\\ \mathit{I I}_k &=& m_k^{1/\alpha} \sum _{j=1}^\infty j^{1/2-1/\alpha} \delta_{k, j, \alpha}, \\
\mathit{I I I}_k &=& m_k^{1/\alpha} \bigl\| T_{3^{k/p}}
(X_1)\bigr\|_\alpha.\nonumber \end{eqnarray}
Here we have applied the inequality $\delta_{k, j, 2} \le\delta_{j, 2}$, since $T_a$ has Lipschitz constant $1$. Since
$\sum_{j=1}^\infty\delta_{j, 2} + \|X_1\|_2 \le2 \Theta_{0, 2}$, by (\ref{eqmk1}), we obtain the upper bound $c M_{\alpha, p} \Theta_{0, 2}^\alpha$ in (\ref{eqA14728p}), which corresponds to the first term $I_k$ in (\ref{eqA14808}). For the third term
$\mathit{I I I}_k$, we obtain the bound $c \|X_1\|_p^p$ in (\ref{eqA14728p}) in view of Lemma \ref{lemasct} by noting that
$|T_{3^{k/p}} (X_1)| \le\min(3^{k/p}, |X_1|)$ and
$\min(|v|^\alpha, v^2) \ge\min(|v|^\alpha, 1)$.
We shall now deal with $\mathit{I I}_k$. Let $\beta= \alpha/(\alpha-1)$, so that $\beta^{-1} + \alpha^{-1} = 1$; let $\lambda_j = (j^{1/2 - 1/\alpha} \delta_{j, p}^{p/\alpha})^{-1/\beta}$. Recall (\ref{eqsrdsip}) for $\Xi_{\alpha, p}$. By H\"older's inequality,
\begin{equation} \Biggl(\sum_{j=1}^\infty j^{1/2-1/\alpha} \delta_{k, j, \alpha} \Biggr)^\alpha \le\Xi_{\alpha, p}^{\alpha/\beta} \sum_{j=1}^\infty\lambda_j^\alpha \bigl(j^{1/2-1/\alpha} \delta_{k, j, \alpha}\bigr)^\alpha. \end{equation}
Hence, by (\ref{eqtruncSA14748}) and Lemma \ref{lemasct}, we complete the proof of (\ref{eqA14728p}) in view of
\begin{eqnarray} \label{eqA14903} \sum_{k=1}^\infty { {3^k}\over{m_k}} { {\mathit{I I}_k^\alpha} \over{3^{\alpha k/p}}} &\le& \sum _{k=1}^\infty3^{k-k\alpha/p} \Xi_{\alpha, p}^{\alpha/\beta} \sum_{j=1}^\infty\lambda_j^\alpha \bigl(j^{1/2-1/\alpha} \delta_{k, j, \alpha}\bigr)^\alpha \nonumber\\ &=& \Xi_{\alpha, p}^{\alpha/\beta} \sum_{j=1}^\infty \lambda_j^\alpha j^{\alpha/2-1} \sum _{k=1}^\infty3^{k-k\alpha/p} \delta_{k, j, \alpha}^\alpha \\ &\le& \Xi_{\alpha, p}^{\alpha/\beta} \sum_{j=1}^\infty \lambda_j^\alpha j^{\alpha/2-1} c_{\alpha, p} \delta_{j, p}^p = c_{\alpha, p} \Xi_{\alpha, p}^{\alpha}.\nonumber \end{eqnarray}
\upqed\end{pf} \section*{Acknowledgment} We thank the anonymous referee for his/her helpful comments that have improved the paper. We also thank F. Merlev\`{e}de and E. Rio for pointing out an error in an earlier version of the paper.
\printaddresses
\end{document}
On the first day, Barry Sotter used his magic wand to make an object's length increase by $\frac{1}{2}$, meaning that if the length of the object was originally $x,$ then it is now $x + \frac{1}{2} x.$ On the second day he increased the object's longer length by $\frac{1}{3}$; on the third day he increased the object's new length by $\frac{1}{4}$; and so on. On the $n^{\text{th}}$ day of performing this trick, Barry will make the object's length exactly 100 times its original length. What is the value of $n$?
On day $n$, Barry increases the length of the object by a factor of $\frac{n+2}{n+1}$. Thus, the overall increase through day $n$ is by a factor of $\left( \frac32 \right) \left( \frac43\right) \cdots \left( \frac{n+1}{n}\right) \left( \frac{n+2}{n+1}\right)$. Canceling, we see that this expression equals $\frac{n+2}2$. Thus we have $\frac{n+2}2=100$, and so $n=\boxed{198}.$
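As a quick sanity check of the telescoping product, here is a minimal Python sketch using exact rational arithmetic (the variable names are illustrative only):

```python
from fractions import Fraction

# Multiply the daily growth factors (n+2)/(n+1) until the object
# is 100 times its original length.
length = Fraction(1)  # original length taken as 1
day = 0
while length < 100:
    day += 1
    length *= Fraction(day + 2, day + 1)

print(day, length)  # expected: 198 100
```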
Tag Archives: High Energy Astrophysical Phenomena
LAGO: the Latin American Giant Observatory [IMA]
Posted on March 17, 2017 by arxiver
The Latin American Giant Observatory (LAGO) is an extended cosmic ray observatory composed of a network of water-Cherenkov detectors (WCD) spanning over different sites located at significantly different altitudes (from sea level up to more than $5000$\,m a.s.l.) and latitudes across Latin America, covering a wide range of geomagnetic rigidity cut-offs and atmospheric absorption/reaction levels. The LAGO WCD is simple and robust, and incorporates several integrated devices to allow time synchronization, autonomous operation, on board data analysis, as well as remote control and automated data transfer.
This detection network is designed to make detailed measurements of the temporal evolution of the radiation flux coming from outer space at ground level. LAGO is mainly oriented to perform basic research in three areas: high energy phenomena, space weather and atmospheric radiation at ground level. It is an observatory designed, built and operated by the LAGO Collaboration, a non-centralized collaborative union of more than 30 institutions from ten countries.
In this paper we describe the scientific and academic goals of the LAGO project – illustrating its present status with some recent results – and outline its future perspectives.
I. Sidelnik, H. Asorey and the LAGO Collaboration
Fri, 17 Mar 17
Comments: 4 pages, 2 figures, Proceedings of the 9th International Workshop on Ring Imaging Cherenkov Detectors (RICH 2016), Lake Bled, Slovenia
Posted in Instrumentation and Methods for Astrophysics | Tagged High Energy Astrophysical Phenomena, Instrumentation and Methods for Astrophysics
Quasars as standard candles I: The physical relation between disc and coronal emission [HEAP]
A tight non-linear relation exists between the X-ray and UV emission in quasars (i.e. $L_{\rm X}\propto L_{\rm UV}^{\gamma}$), with a dispersion of $\sim$0.2~dex over $\sim$3~orders of magnitude in luminosity. Such observational evidence has two relevant consequences: (1) a ubiquitous physical mechanism must regulate the energy transfer from the accretion disc to the X-ray emitting {\it corona}, and (2) the non-linearity of the relation provides a new, powerful way to estimate the absolute luminosity, turning quasars into a new class of {\it standard candles}.
Here we propose a modified version of this relation which involves the emission line full-width half maximum, $L_{\rm X}\propto L_{\rm UV}^{\hat\gamma}\upsilon_{\rm fwhm}^{\hat\beta}$. We interpret this new relation through a simple, {\it ad-hoc} model of accretion disc corona, derived from the works of Svensson \& Zdziarski (1994) and Merloni \& Fabian (2002), where it is assumed that reconnection and magnetic loops above the accretion disc can account for the production of the primary X-ray radiation.
We find that the monochromatic optical-UV (2500 \AA) and X–ray (2 keV) luminosities depend on the black hole mass and accretion rate as $L_{\rm UV}\propto M_{\rm BH}^{4/3} (\dot{M}/\dot{M}_{\rm Edd})^{2/3}$ and $L_{\rm X}\propto M_{\rm BH}^{19/21} (\dot{M}/\dot{M}_{\rm Edd})^{5/21}$, respectively. Assuming a broad line region size function of the disc luminosity $R_{\rm blr}\propto L_{\rm disc}^{0.5}$ we finally have that $L_{\rm X}\propto L_{\rm UV}^{4/7} \upsilon_{\rm fwhm}^{4/7}$. Such relation is remarkably consistent with the slopes and the normalization obtained from a fit of a sample of 545 optically selected quasars from SDSS DR7 cross matched with the latest XMM–{\it Newton} catalogue 3XMM-DR6.
The homogeneous sample used here has a dispersion of 0.21 dex, which is much lower than previous works in the literature and suggests a tight physical relation between the accretion disc and the X-ray emitting corona. We also obtained a possible physical interpretation of the $L_{\rm X}-L_{\rm UV}$ relation (considering also the effect of $\upsilon_{\rm fwhm}$), which puts the determination of distances based on this relation on a sounder physical grounds. The proposed new relation does not evolve with time, and thus it can be employed as a cosmological indicator to robustly estimate cosmological parameters.
E. Lusso and G. Risaliti
Comments: 15 pages, 9 figures, accepted for publication in Astronomy & Astrophysics
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena
Charged massive scalar field configurations supported by a spherically symmetric charged reflecting shell [CL]
The physical properties of bound-state charged massive scalar field configurations linearly coupled to a spherically symmetric charged reflecting shell are studied {\it analytically}. To that end, we solve the Klein-Gordon wave equation for a static scalar field of proper mass $\mu$, charge coupling constant $q$, and spherical harmonic index $l$ in the background of a charged shell of radius $R$ and electric charge $Q$. It is proved that the dimensionless inequality $\mu R<\sqrt{(qQ)^2-(l+1/2)^2}$ provides an upper bound on the regime of existence of the composed charged-spherical-shell-charged-massive-scalar-field configurations. Interestingly, we explicitly show that the {\it discrete} spectrum of shell radii $\{R_n(\mu,qQ,l)\}_{n=0}^{n=\infty}$ which can support the static bound-state charged massive scalar field configurations can be determined analytically. We confirm our analytical results by numerical computations.
S. Hod
Comments: 8 pages
Posted in Cross-listed | Tagged General Relativity and Quantum Cosmology, High Energy Astrophysical Phenomena, High Energy Physics - Theory
Narrow phase-dependent features in X-ray Dim Isolated Neutron Stars: a new detection and upper limits [HEAP]
We report on the results of a detailed phase-resolved spectroscopy of archival XMM–Newton observations of X-ray Dim Isolated Neutron Stars (XDINSs). Our analysis revealed a narrow and phase-variable absorption feature in the X-ray spectrum of RX J1308.6+2127. The feature has an energy of $\sim$740 eV and an equivalent width of $\sim$15 eV. It is detected only in $\sim$ 1/5 of the phase cycle, and appears to be present for the entire timespan covered by the observations (2001 December – 2007 June). The strong dependence on the pulsar rotation and the narrow width suggest that the feature is likely due to resonant cyclotron absorption/scattering in a confined high-B structure close to the stellar surface. Assuming a proton cyclotron line, the magnetic field strength in the loop is B$_{loop} \sim 1.7 \times 10^{14}$ G, about a factor of $\sim$5 higher than the surface dipolar magnetic field (B$_{surf} \sim 3.4 \times 10^{13}$ G). This feature is similar to that recently detected in another XDINS, RX J0720.4-3125, showing (as expected by theoretical simulations) that small scale magnetic loops close to the surface might be common to many highly magnetic neutron stars (although difficult to detect with current X-ray instruments). Furthermore, we investigated the available XMM–Newton, data of all XDINSs in search for similar narrow phase-dependent features, but could derive only upper limits for all the other sources.
A. Borghese, N. Rea, F. Zelati, et. al.
Comments: 10 pages, 5 figures, 4 tables. Accepted for publication in MNRAS
New constraints on binary evolution enhance the supernova type Ia rate [HEAP]
Even though Type Ia supernovae (SNIa) play an important role in many fields in astronomy, the nature of the progenitors of SNIa remains a mystery. One of the classical evolutionary pathways towards a SNIa explosion is the single degenerate (SD) channel, in which a carbon-oxygen white dwarf accretes matter from its non-degenerate companion until it reaches the Chandrasekhar mass. Constraints on the contribution from the SD channel to the overall SNIa rate come from a variety of methods, e.g. from abundances, from signatures of the companion star in the light curve or near the SNIa remnant, and from synthetic SNIa rates. In these proceedings, I show that when incorporating our newest understanding of binary evolution, the SNIa rate from the single degenerate channel is enhanced. I also discuss the applicability of these constraints to the evolution of SNIa progenitors.
S. Toonen
Comments: 3 figures, 6 pages, Proceedings of the workshop: "The Golden Age of Cataclysmic Variables and Related Objects III", Palermo, Italy, Sep 7-12, 2015
Lectures on the Infrared Structure of Gravity and Gauge Theory [CL]
This is a redacted transcript of a course given by the author at Harvard in spring semester 2016. It contains a pedagogical overview of recent developments connecting the subjects of soft theorems, the memory effect and asymptotic symmetries in four-dimensional QED, nonabelian gauge theory and gravity with applications to black holes. The lectures may be viewed online at https://goo.gl/3DJdOr. Please send typos or corrections to [email protected].
A. Strominger
Posted in Cross-listed | Tagged General Relativity and Quantum Cosmology, High Energy Astrophysical Phenomena, High Energy Physics - Phenomenology, High Energy Physics - Theory, Mathematical Physics
An investigation of pulsar searching techniques with the Fast Folding Algorithm [IMA]
Here we present an in-depth study of the behaviour of the Fast Folding Algorithm, an alternative pulsar searching technique to the Fast Fourier Transform. Weaknesses in the Fast Fourier Transform, including a susceptibility to red noise, leave it insensitive to pulsars with long rotational periods (P > 1 s). This sensitivity gap has the potential to bias our understanding of the period distribution of the pulsar population. The Fast Folding Algorithm, a time-domain based pulsar searching technique, has the potential to overcome some of these biases. Modern distributed-computing frameworks now allow for the application of this algorithm to all-sky blind pulsar surveys for the first time. However, many aspects of the behaviour of this search technique remain poorly understood, including its responsiveness to variations in pulse shape and the presence of red noise. Using a custom CPU-based implementation of the Fast Folding Algorithm, ffancy, we have conducted an in-depth study into the behaviour of the Fast Folding Algorithm in both an ideal, white noise regime as well as a trial on observational data from the HTRU-S Low Latitude pulsar survey, including a comparison to the behaviour of the Fast Fourier Transform. We are able to both confirm and expand upon earlier studies that demonstrate the ability of the Fast Folding Algorithm to outperform the Fast Fourier Transform under ideal white noise conditions, and demonstrate a significant improvement in sensitivity to long-period pulsars in real observational data through the use of the Fast Folding Algorithm.
A. Cameron, E. Barr, D. Champion, et. al.
Comments: 19 pages, 15 figures, 3 tables
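Not from the paper itself, but as a rough illustration of the core idea behind fast-folding-style searches described in the abstract (folding a time series at trial periods and looking for a strong pulse profile), here is a minimal, hypothetical Python sketch; the authors' actual implementation, ffancy, is a dedicated code and is not reproduced here.

```python
import numpy as np

def folded_snr(times, fluxes, period, nbins=32):
    """Fold a time series at a trial period and return a simple
    signal-to-noise-like statistic of the folded profile."""
    phases = (times / period) % 1.0
    bins = (phases * nbins).astype(int)
    profile = np.bincount(bins, weights=fluxes, minlength=nbins)
    counts = np.bincount(bins, minlength=nbins).clip(min=1)
    profile /= counts                      # mean flux per phase bin
    return (profile.max() - profile.mean()) / (profile.std() + 1e-12)

# Toy example: a weak 2.5 s pulsar buried in white noise.
rng = np.random.default_rng(0)
times = np.arange(0.0, 600.0, 0.05)        # 600 s sampled at 20 Hz
signal = 0.3 * (np.sin(2 * np.pi * times / 2.5) > 0.95)
fluxes = rng.normal(0.0, 1.0, times.size) + signal

trial_periods = np.linspace(2.0, 3.0, 2001)
scores = [folded_snr(times, fluxes, p) for p in trial_periods]
best = trial_periods[int(np.argmax(scores))]
print(f"best trial period: {best:.4f} s")   # expected near 2.5 s
```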
The reflection spectrum of the low-mass X-ray binary 4U 1636-53 [HEAP]
We present 3-79 keV NuSTAR observations of the neutron star low-mass X-ray binary 4U 1636-53 in the soft, transitional and hard state. The spectra display a broad emission line at 5-10 keV. We applied several models to fit this line: A GAUSSIAN line, a relativistically broadened emission line model, KYRLINE, and two models including relativistically smeared and ionized reflection off the accretion disc with different coronal heights, RELXILL and RELXILLLP. All models fit the spectra well, however, the KYRLINE and RELXILL models yield an inclination of the accretion disc of $\sim88\degree$ with respect to the line of sight, which is at odds with the fact that this source shows no dips or eclipses. The RELXILLLP model, on the other hand, gives a reasonable inclination of $\sim56\degree$. We discuss our results for these models in this source and the possible primary source of the hard X-rays.
Y. Wang, M. Mendez, A. Sanna, et. al.
Comments: 9 pages, 8 figures
Early UV emission from disk-originated matter (DOM) in type Ia supernovae in the double degenerate scenario [HEAP]
We show that the blue and UV excess emission at the first few days of some type Ia supernovae (SNe Ia) can be accounted for in the double degenerate (DD) scenario by the collision of the SN ejecta with circumstellar matter that was blown by the accretion disk formed during the merger process of the two white dwarfs (WDs). We assume that in cases of excess early light the disk blows the circumstellar matter, that we term disk-originated matter (DOM), hours to days before explosion. To perform our analysis we first provide a model-based definition for early excess light, replacing the definition of excess light relative to a power-law fit to the rising luminosity. We then examine the light curves of the SNe Ia iPTF14atg and SN 2012cg, and find that the collision of the ejecta with a DOM in the frame of the DD scenario can account for their early excess emission. Thus, early excess light does not necessarily imply the presence of a stellar companion in the frame of the single-degenerate scenario. Our findings further increase the variety of phenomena that the DD scenario can account for, and emphasize the need to consider all different SN Ia scenarios when interpreting observations.
N. Levanon and N. Soker
Comments: 7 pages, 5 figures. Will be submitted in two days to allow comments by readers
Secluded and Flipped Dark Matter and Stueckelberg Extensions of the Standard Model [CL]
We consider here three dark matter models with the gauge symmetry of the standard model plus an additional local $U(1)_D$ factor. One model is secluded and two models are flipped. All of these models include one dark fermion and one vector boson that attains mass through the Stueckelberg mechanism. We show that the flipped models provide examples of dark matter composed of "least interacting particles" (LIPs). Such particles are therefore compatible with the constraints obtained from both laboratory measurements and astrophysical observations.
E. Fortes, V. Pleitez and F. Stecker
Comments: 6 pages, no figures
Posted in Cross-listed | Tagged High Energy Astrophysical Phenomena, High Energy Physics - Phenomenology, High Energy Physics - Theory
Accretion Flow Properties of Swift J1753.5-0127 during its 2005 outburst [HEAP]
The Galactic X-ray binary black hole candidate Swift~J1753.5-0127 was discovered on June 30, 2005 by the Swift/BAT instrument. In this paper, we make a detailed analysis of the spectral and timing properties of its 2005 outburst using RXTE/PCA archival data. We study the evolution of the spectral properties of the source from spectral analysis with the additive table model {\it fits} file of the Chakrabarti-Titarchuk two-component advective flow (TCAF) solution. From the spectral fits, we extract physical flow parameters, such as the Keplerian disk accretion rate, the sub-Keplerian halo rate, the shock location and the shock compression ratio. We also study the evolution of the temporal properties, such as the observation of low frequency quasi-periodic oscillations (QPOs) and the variation of the X-ray intensity throughout the outburst. From the nature of the variation of the QPOs and the accretion rate ratios (ARRs, the ratio of halo to disk rates), we classify the entire 2005 outburst into two harder (hard-intermediate and hard) spectral states. No signature of the softer (soft-intermediate and soft) spectral states is seen. This may be because of the significant halo rate throughout the outburst. This behavior is similar to that of a class of other short orbital period sources, such as MAXI~J1836-194, MAXI~J1659-152 and XTE~J1118+480. Based on our spectral analysis, we also estimate the probable mass range of the source to be between $4.75 M_\odot$ and $5.90 M_\odot$.
D. Debnath, A. Jana, S. Chakrabarti, et. al.
Comments: 14 pages, 5 Figures, ApJ (communicated)
Study of statistical properties of hybrid statistic in coherent multi-detector compact binary coalescences Search [CL]
In this article, we revisit the problem of a coherent multi-detector search for gravitational waves from compact binary coalescences involving neutron stars and black holes using advanced interferometers like LIGO-Virgo. Based on the loss of optimal multi-detector signal-to-noise ratio (SNR), we construct a hybrid statistic as the best of the maximum-likelihood-ratio (MLR) statistics tuned for face-on and face-off binaries. The statistical properties of the hybrid statistic are studied. The performance of this hybrid statistic is compared with that of the coherent MLR statistic for generic inclination angles. Owing to the single synthetic data stream, the hybrid statistic gives fewer false alarms than the multi-detector MLR statistic and a small fractional loss in the optimum SNR for a large range of binary inclinations. We have demonstrated that for a LIGO-Virgo network and binary inclinations \epsilon < 70 deg. or \epsilon > 110 deg., the hybrid statistic captures more than 98% of the network optimum matched filter SNR with a low false alarm rate. Monte-Carlo exercises with two distributions of incoming inclination angles, namely U[cos(\epsilon)] and the more realistic distribution proposed by B. F. Schutz, were performed with the hybrid statistic and gave ~5% and ~7% higher detection probability, respectively, compared to the two-stream multi-detector MLR statistic at a fixed false alarm probability of 10^-5.
K. Haris and A. Pai
Comments: Published in Phys. Rev. D
Posted in Cross-listed | Tagged General Relativity and Quantum Cosmology, High Energy Astrophysical Phenomena
Clustering of Gamma-Ray bursts through kernel principal component analysis [CL]
We consider the problem of clustering gamma-ray bursts (from the BATSE catalogue) through kernel principal component analysis, in which our proposed kernel outperforms other competing kernels in terms of clustering accuracy, and we obtain three physically interpretable groups of gamma-ray bursts. The effectiveness of the suggested kernel, in combination with kernel principal component analysis, in revealing natural clusters in noisy and nonlinear data while reducing the dimension of the data is also explored in two simulated data sets.
S. Modak, A. Chattopadhyay and T. Chattopadhyay
Comments: 30 pages, 10 figures
Posted in Cross-listed | Tagged Applications, High Energy Astrophysical Phenomena, Instrumentation and Methods for Astrophysics
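As a rough, hypothetical illustration of the pipeline described in the abstract (kernel PCA followed by clustering), here is a minimal Python sketch on synthetic data; the paper's proposed kernel is not specified in the abstract, so a standard RBF kernel is used as a stand-in, and the feature table is made up.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

# Synthetic stand-in for a GRB feature table (e.g. durations, fluences, hardness ratios).
X, _ = make_blobs(n_samples=600, n_features=6, centers=3, random_state=42)
X = StandardScaler().fit_transform(X)

# Nonlinear dimension reduction with kernel PCA (RBF kernel as a placeholder),
# then clustering in the reduced space.
embedding = KernelPCA(n_components=2, kernel="rbf", gamma=0.2).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} objects")
```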
Yet another introduction to relativistic astrophysics [HEAP]
Late Winter Lecture Notes, Short Course (10 hours) of Relativistic Astrophysics held at the Department of Physics and Astronomy of the University of Padova, March 13-17, 2017.
L. Foschini
Comments: 132 pages
Posted in High Energy Astrophysical Phenomena | Tagged General Relativity and Quantum Cosmology, High Energy Astrophysical Phenomena
Searching For Pulsars Associated With the Fermi GeV Excess [HEAP]
The Fermi Large Area Telescope has detected an extended region of GeV emission toward the Galactic Center that is currently thought to be powered by dark matter annihilation or a population of young and/or millisecond pulsars. In a test of the pulsar hypothesis, we have carried out an initial search of a 20 deg**2 area centered on the peak of the galactic center GeV excess. Candidate pulsars were identified as a compact, steep spectrum continuum radio source on interferometric images and followed with targeted single-dish pulsation searches. We report the discovery of the recycled pulsar PSR 1751-2737 with a spin period of 2.23 ms. PSR 1751-2737 appears to be an isolated recycled pulsar located within the disk of our Galaxy, and it is not part of the putative bulge population of pulsars that are thought to be responsible for the excess GeV emission. However, our initial success in this small pilot survey suggests that this hybrid method (i.e. wide-field interferometric imaging followed up with single dish pulsation searches) may be an efficient alternative strategy for testing whether a putative bulge population of pulsars is responsible for the GeV excess.
D. Bhakta, J. Deneva, D. Frail, et. al.
Thu, 16 Mar 17
Comments: MNRAS, in press
A Comprehensive Library of X-ray Pulsars in the Small Magellanic Cloud: Time Evolution of their Luminosities and Spin Periods [HEAP]
We have collected and analyzed the complete archive of XMM-Newton (116), Chandra (151), and RXTE (952) observations of the Small Magellanic Cloud (SMC), spanning 1997-2014. The resulting observational library provides a comprehensive view of the physical, temporal and statistical properties of the SMC pulsar population across the luminosity range of $L_X= 10^{31.2}$–$10^{38}$~erg~s$^{-1}$. From a sample of 67 pulsars we report $\sim$1654 individual pulsar detections, yielding $\sim$1260 pulse period measurements. Our pipeline generates a suite of products for each pulsar detection: spin period, flux, event list, high time-resolution light-curve, pulse-profile, periodogram, and spectrum. Combining all three satellites, we generated complete histories of the spin periods, pulse amplitudes, pulsed fractions and X-ray luminosities. Some pulsars show variations in pulse period due to the combination of orbital motion and accretion torques. Long-term spin-up/down trends are seen in 12/11 pulsars respectively, pointing to sustained transfer of mass and angular momentum to the neutron star on decadal timescales. Of the sample, 30 pulsars have relatively small spin period derivatives and may be close to equilibrium spin. The distributions of pulse-detection and flux as functions of spin-period provide interesting findings: mapping boundaries of accretion-driven X-ray luminosity, and showing that fast pulsars ($P<$10 s) are rarely detected, even though they are more prone to giant outbursts. Accompanying this paper is an initial public release of the library so that it can be used by other researchers. We intend the library to be useful in driving improved models of neutron star magnetospheres and accretion physics.
J. Yang, S. Laycock, D. Christodoulou, et. al.
Comments: 17 pages, 11 + 58 (appendix) figures. To appear in the Astrophysical Journal Supplement
Relativistic Turbulence with Strong Synchrotron and Synchrotron-Self-Compton Cooling [HEAP]
Many relativistic plasma environments in high-energy astrophysics, including pulsar wind nebulae, hot accretion flows onto black holes, relativistic jets in active galactic nuclei and gamma-ray bursts, and giant radio lobes, are naturally turbulent. The plasma in these environments is often so hot that synchrotron and inverse-Compton (IC) radiative cooling becomes important. In this paper we investigate the general thermodynamic and radiative properties (and hence the observational appearance) of an optically thin relativistically hot plasma stirred by driven magnetohydrodynamic (MHD) turbulence and cooled by radiation. We find that if the system reaches a statistical equilibrium where turbulent heating is balanced by radiative cooling, the effective electron temperature tends to attain a universal value $\theta = kT_e/m_e c^2 \sim 1/\sqrt{\tau_T}$, where $\tau_T=n_e\sigma_T L \ll 1$ is the system's Thomson optical depth, essentially independent of the strength of turbulent driving or magnetic field. This is because both MHD turbulent dissipation and synchrotron cooling are proportional to the magnetic energy density. We also find that synchrotron self-Compton (SSC) cooling and perhaps a few higher-order IC components are automatically comparable to synchrotron in this regime. The overall broadband radiation spectrum then consists of several distinct components (synchrotron, SSC, etc.), well separated in photon energy (by a factor $\sim \tau_T^{-1}$) and roughly equal in power. The number of IC peaks is checked by Klein-Nishina effects and depends logarithmically on $\tau_T$ and the magnetic field. We also examine the limitations due to synchrotron self-absorption, explore applications to Crab PWN and blazar jets, and discuss links to radiative magnetic reconnection.
D. Uzdensky
Comments: 12 pages, 1 figure; submitted for publication. Comments welcome!
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, Plasma Physics
The early B-type star Rho Oph A is an X-ray lighthouse [SSA]
We present the results of a 140 ks XMM-Newton observation of the B2 star $\rho$ Ophiuchi A. The star exhibited strong X-ray variability: a cusp-shaped increase of rate, similar to the one we partially observed in 2013, and a bright flare. These events are separated in time by about 104 ks, which likely corresponds to the rotational period of the star (1.2 days). Time resolved spectroscopy of the X-ray spectra shows that the first event is almost only due to an increase of the plasma emission measure, while the second increase of rate is mainly due to a major flare, with temperatures in excess of 60 MK ($kT\sim5$ keV). From the analysis of its rise we infer a magnetic field of $\ge300$ G and a size of the flaring region of $\sim1.4-1.9\times10^{11}$ cm, which corresponds to $\sim25\%-30\%$ of the stellar radius. We speculate that either an intrinsic magnetism that produces a hot spot on its surface, or an unknown low mass companion, is the source of such X-rays and variability. A hot spot of magnetic origin should be a stable structure over a time span of $\ge$2.5 years, and suggests an overall large scale dipolar magnetic field that produces an extended feature on the stellar surface. In the second scenario, a low mass unknown companion is the emitter of X-rays and it should orbit extremely close to the surface of the primary in a locked spin-orbit configuration, almost on the verge of collapsing onto the primary. As such, the X-ray activity of the secondary star would be enhanced by both its young age and the tight orbit, as in RS CVn systems, and $\rho$ Ophiuchi would constitute an extreme system worthy of further investigation.
I. Pillitteri, S. Wolk, F. Reale, et. al.
Comments: 10 pages, 7 figures, 2 tables, A&A accepted
Posted in Solar and Stellar Astrophysics | Tagged High Energy Astrophysical Phenomena, Solar and Stellar Astrophysics
What can we learn about GRB from the variability timescale related correlations? [HEAP]
Recently, two empirical correlations related to the minimum variability timescale ($\rm MTS$) of the light curves have been discovered in gamma-ray bursts (GRBs). One is the anti-correlation between the $\rm MTS$ and the Lorentz factor $\Gamma$; the other is the anti-correlation between the $\rm MTS$ and the gamma-ray luminosity $L_\gamma$. Both correlations might be used to explore the activity of the central engine of GRBs. In this paper we try to understand these empirical correlations by combining two popular black hole (BH) central engine models (namely, the Blandford \& Znajek mechanism and the neutrino-dominated accretion flow). By taking the $\rm MTS$ as the timescale of the viscous instability of the neutrino-dominated accretion flow (NDAF), we find that these correlations favor the scenario in which the jet is driven by the Blandford-Znajek (BZ) mechanism.
W. Xie, W. Lei and D. Wang
Comments: 6 pages, 3 figures, accepted for publication in ApJ
Star formation, supernovae, iron, and $α$: consistent cosmic and Galactic histories [HEAP]
Recent versions of the observed cosmic star-formation history (SFH) have resolved an inconsistency between the SFH and the observed cosmic stellar mass density history. Here, we show that the same SFH revision scales up by a factor $\sim 2$ the delay-time distribution (DTD) of Type Ia supernovae (SNe Ia), as determined from the observed volumetric SN Ia rate history, and thus brings it into line with other field-galaxy SN Ia DTD measurements. The revised-SFH-based DTD has a $t^{-1.1 \pm 0.1}$ form and a Hubble-time-integrated SN Ia production efficiency of $N/M_\star=1.25\pm 0.10$ SNe Ia per $1000~{\rm M_\odot}$ of formed stellar mass. Using these revised histories and updated, purely empirical, iron yields of the various SN types, we rederive the cosmic iron accumulation history. Core-collapse SNe and SNe Ia have contributed about equally to the total mass of iron in the Universe today, as deduced also for the Sun. We find the track of the average cosmic gas element in the [$\alpha$/Fe] vs. [Fe/H] abundance-ratio plane, as well as the track for gas in galaxy clusters, which have a higher DTD and have had a distinct, burst-like, SFH. Our cosmic $[\alpha$/Fe] vs. [Fe/H] track is broadly similar to the observed main locus of Galactic stars in this plane, indicating a Milky Way (MW) SFH similar in form to the cosmic one, and we find a MW SFH that makes the track closely match the stellar locus. The cluster DTD with a short-burst SFH at $z=3$ produces a track that matches well the observed `high-$\alpha$' locus of MW stars, suggesting the halo/thick-disk population has had a galaxy-cluster-like formation history.
D. Maoz and O. Graur
Comments: Submitted to MNRAS, comments welcome
Posted in High Energy Astrophysical Phenomena | Tagged Astrophysics of Galaxies, Cosmology and Nongalactic Astrophysics, High Energy Astrophysical Phenomena
The quest for blue supergiants: binary merger models for the evolution of the progenitor of SN 1987A [SSA]
We present the results of a detailed, systematic stellar evolution study of binary mergers for blue supergiant (BSG) progenitors of Type II supernovae. In particular, these are the first evolutionary models that can simultaneously reproduce nearly all observational aspects of the progenitor of SN 1987A, $\text{Sk}-69\,^{\circ}202$, such as its position in the HR diagram, the enrichment of helium and nitrogen in the triple-ring nebula, and its lifetime before its explosion. The merger model, based on the one proposed by Podsiadlowski et al. (1992) and Podsiadlowski et al. (2007), consists of a main sequence secondary star that dissolves completely in the common envelope of the primary red supergiant at the end of their merger. We empirically explore a large initial parameter space, such as primary masses ($15\,\text{M}_{\odot}$, $16\,\text{M}_{\odot}$, and $17\,\text{M}_{\odot}$), secondary masses ($2\,\text{M}_{\odot}$, $3\,\text{M}_{\odot}$, …, $8\,\text{M}_{\odot}$) and different depths up to which the secondary penetrates the He core of the primary during the merger. The evolution of the merged star is continued until just before iron-core collapse, and the surface properties of the 84 computed pre-supernova models ($16\,\text{M}_{\odot}-23\,\mathrm{M}_{\odot}$) have been made available in this work. Within the parameter space studied, the majority of the pre-supernova models are compact, hot BSGs with effective temperature $>12\,\text{kK}$ and radii of $30\,\text{R}_{\odot}-70\,\mathrm{R}_{\odot}$, of which six match nearly all the observational properties of $\text{Sk}-69\,^{\circ}202$.
A. Menon and A. Heger
Comments: Submitted to MNRAS. 21 pages, 11 figures, 7 tables
Dirac states of an electron in a circular intense magnetic field [HEAP]
Neutron-star magnetospheres are structured by very intense magnetic fields extending from 100 to $10^5$ km and are traveled by very energetic electrons and positrons with Lorentz factors up to $\sim 10^7$. In this context, particles are forced to travel almost along the magnetic field with very small gyro-motion, potentially reaching the quantized regime. We describe the state of Dirac particles in a locally uniform, constant and curved magnetic field in the approximation that the Larmor radius is very small compared to the radius of curvature of the magnetic field lines. We obtain a result that admits the usual relativistic Landau states as a limit of null curvature. We will describe the radiation of these states, which we call quantum curvature or synchro-curvature radiation, in an upcoming paper.
G. Voisin, S. Bonazzola and F. Mottez
Comments: N/A
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, High Energy Physics - Theory, Quantum Physics
A possible solution of the puzzling variation of the orbital period of MXB 1659-298 [HEAP]
MXB 1659-298 is a transient neutron star low-mass X-ray binary system that shows eclipses in the light curve with a periodicity of 7.1 hr. MXB 1659-298 went into outburst in August 2015 after 14 years of quiescence. We span a baseline of 40 years using the eight eclipse arrival times present in the literature and adding 51 eclipse arrival times collected during the last two outbursts. We find that the companion star mass is $0.76 $ M$_{\odot}$, the inclination angle of the system is $72^{\circ}\!.4$ and the corona surrounding the neutron star has a size of $R_c \simeq 3.5 \times 10^8$ cm. A simple quadratic ephemeris does not fit the delays associated with the eclipse arrival times; the addition of a sinusoidal term is needed. We infer a binary orbital period of $P=7.1161099(3)$ hr and an orbital period derivative of $\dot{P}=-8.5(1.2) \times 10^{-12}$ s s$^{-1}$; the sinusoidal modulation has a period of $2.31 \pm 0.02$ yr. These results are consistent with a conservative mass transfer scenario during the outbursts and with a totally non-conservative mass transfer scenario during X-ray quiescence with the same mass transfer rate. The periodic modulation can be explained either by a gravitational quadrupole coupling due to variations of the oblateness of the companion star or by the presence of a celestial body orbiting the binary system; in the latter case the mass of the third body is M$_3 = 21 \pm 2$ M$_J$.
R. Iaria, A. Gambino, T. Salvo, et. al.
Comments: 9 pages, 6 figures. Submitted to MNRAS on 2016 November 21, revised version after referee report
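Not taken from the paper, but as a schematic illustration of the kind of eclipse-timing fit described above (a quadratic ephemeris residual plus a sinusoidal term), here is a minimal Python sketch on synthetic delays; all numbers, names and starting guesses are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def oc_model(t, a, b, c, amp, period, phase):
    """Quadratic ephemeris residual plus a sinusoidal modulation (t in years)."""
    return a + b * t + c * t**2 + amp * np.sin(2 * np.pi * t / period + phase)

# Synthetic timing delays over a ~40 yr baseline (illustrative values only).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 40.0, 59))
truth = (5.0, -0.8, 0.02, 3.0, 2.3, 0.7)
delays = oc_model(t, *truth) + rng.normal(0.0, 0.5, t.size)

p0 = (0.0, 0.0, 0.0, 1.0, 2.3, 0.0)   # starting guesses; the period guess matters for convergence
popt, pcov = curve_fit(oc_model, t, delays, p0=p0)
print("fitted modulation period: %.2f yr" % popt[4])   # expected near the injected 2.3 yr
```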
MHD simulations of oscillating cusp-filling tori around neutron stars — missing upper kHz QPO [HEAP]
We performed axisymmetric, grid-based, ideal magnetohydrodynamic (MHD) simulations of oscillating cusp-filling tori orbiting a non-rotating neutron star. A pseudo-Newtonian potential was used to construct the constant angular momentum tori in equilibrium. The inner edge of the torus is terminated by a "cusp" in the effective potential. The initial motion of the model tori was perturbed with uniform sub-sonic vertical and diagonal velocity fields. As the configuration evolved in time, we measured the mass accretion rate on the neutron star surface and obtained the power spectrum. The prominent mode of oscillation in the cusp torus is the radial epicyclic mode. From our analysis it follows that the mass accretion rate carries a modulation imprint of the oscillating torus, and hence so does the boundary layer luminosity.
V. Parthasarathy, W. Kluzniak and M. Cemeljic
Comments: Submitted as a Letter to MNRAS. 5 pages, 3 figures, 1 table. Comments are welcome
A multi-observatory database of X-ray pulsars in the Magellanic Clouds [HEAP]
Using hundreds of XMM-Newton and Chandra archival observations and nearly a thousand RXTE observations, we have generated a comprehensive library of the known pulsars in the Small and Large Magellanic Clouds (SMC, LMC). The pulsars are detected multiple times across the full parameter spaces of X-ray luminosity ($L_X= 10^{31-38}$~erg/s) and spin period (from P$<$1 s to P$>$1000 s), and the library enables time-domain studies at a range of energy scales. The high time-resolution and sensitivity of the EPIC cameras are complemented by the angular resolution of Chandra and the regular monitoring of RXTE. Our processing pipeline uses the latest calibration files and software to generate a suite of useful products for each pulsar detection: event lists, high time-resolution light curves, periodograms, spectra, and complete histories of $\dot{P}$, the pulsed fraction, etc., in the broad (0.2-12 keV), soft (0.2-2 keV), and hard (2-12 keV) energy bands. After combining the observations from these telescopes, we found that 28 pulsars show long-term spin up and 25 long-term spin down. We also used the faintest and brightest sources to map out the lower and upper boundaries of accretion-powered X-ray emission: the propeller line and the Eddington line, respectively. We are in the process of comparing the observed pulse profiles to geometric models of X-ray emission in order to constrain the physical parameters of the pulsars. Finally, we are preparing a public release of the library so that it can be used by others in the astronomical community.
J. Yang, S. Laycock, J. Drake, et. al.
Dark Matter Constraints from a Joint Analysis of Dwarf Spheroidal Galaxy Observations with VERITAS [HEAP]
We present constraints on the annihilation cross section of WIMP dark matter based on the joint statistical analysis of four dwarf galaxies with VERITAS. These results are derived from an optimized photon weighting statistical technique that improves on standard imaging atmospheric Cherenkov telescope (IACT) analyses by utilizing the spectral and spatial properties of individual photon events. We report on the results of $\sim$230 hours of observations of five dwarf galaxies and the joint statistical analysis of four of the dwarf galaxies. We find no evidence of gamma-ray emission from any individual dwarf nor in the joint analysis. The derived upper limit on the dark matter annihilation cross section from the joint analysis is $1.35\times 10^{-23} {\mathrm{ cm^3s^{-1}}}$ at 1 TeV for the bottom quark ($b\bar{b}$) final state, $2.85\times 10^{-24}{\mathrm{ cm^3s^{-1}}}$ at 1 TeV for the tau lepton ($\tau^{+}\tau^{-}$) final state and $1.32\times 10^{-25}{\mathrm{ cm^3s^{-1}}}$ at 1 TeV for the gauge boson ($\gamma\gamma$) final state.
VERITAS Collaboration, S. Archambault, A. Archer, et. al.
Comments: 14 pages, 9 figures, accepted for publication in PRD
Gravitational Waves from Core-Collapse Supernovae [HEAP]
Gravitational waves are a potential direct probe for the multi-dimensional flow during the first second of core-collapse supernova explosions. Here we outline the structure of the predicted gravitational wave signal from neutrino-driven supernovae of non-rotating progenitors from recent 2D and 3D simulations. We sketch some quantitative dependencies that govern the amplitudes of this signal and its evolution in the time-frequency domain.
B. Muller
Comments: 4 pages, invited contribution prepared for the minisymposium "Gravitational Waves: Sources and Detection" at the 13th International Conference on Mathematical and Numerical Aspects of Wave Propagation, Minnesota, 2017
Posted in High Energy Astrophysical Phenomena | Tagged General Relativity and Quantum Cosmology, High Energy Astrophysical Phenomena, Solar and Stellar Astrophysics
Modelling Jets, Tori and Flares in Pulsar Wind Nebulae [HEAP]
In this contribution we review the recent progress in the modeling of Pulsar Wind Nebulae (PWN). We start with a brief overview of the relevant physical processes in the magnetosphere, the wind-zone and the inflated nebula bubble. Radiative signatures and particle transport processes obtained from 3D simulations of PWN are discussed in the context of optical and X-ray observations. We then proceed to consider particle acceleration in PWN and elaborate on what can be learned about the particle acceleration from the dynamical structures called "wisps" observed in the Crab nebula. We also discuss recent observational and theoretical results of gamma-ray flares and the inner knot of the Crab nebula, which had been proposed as the emission site of the flares. We extend the discussion to GeV flares from binary systems in which the pulsar wind interacts with the stellar wind from a companion star. The chapter concludes with a discussion of solved and unsolved problems posed by PWN.
O. Porth, R. Buehler, B. Olmi, et. al.
Comments: To appear in "Jets and Winds in Pulsar Wind Nebulae, Gamma-ray Bursts and Blazars: Physics of Extreme Energy Release" of the Space Science Reviews series. The final publication is available at Springer via this http URL
The X-ray properties of Be/X-ray pulsars in quiescence [HEAP]
Observations of accreting neutron stars (NS) with strong magnetic fields can be used not only for studying the accretion flow interaction with NS magnetospheres, but also for understanding the physical processes inside NSs and for estimating their fundamental parameters. Of particular interest are (i) the interaction of a rotating neutron star (magnetosphere) with the in-falling matter at different accretion rates, and (ii) the theory of deep crustal heating and the influence of a strong magnetic field on this process. Here, we present results of the first systematic investigation of 16 X-ray pulsars with Be optical companions during their quiescent states, based on data from the Chandra, XMM-Newton and Swift observatories. The whole sample of sources can be roughly divided into two distinct groups: i) relatively bright objects with a luminosity around ~10^34 erg/s and (hard) power-law spectra, and ii) fainter ones showing thermal spectra. X-ray pulsations were detected from five objects in group i) with quite a large pulse fraction of 50-70 per cent. The obtained results are discussed within the framework of the models describing the interaction of the in-falling matter with the neutron star magnetic field and those describing heating and cooling in accreting NSs.
S. Tsygankov, R. Wijnands, A. Lutovinov, et. al.
Comments: 17 pages, 3 figures, 3 tables, submitted to MNRAS
The NuSTAR view of the true Type 2 Seyfert NGC3147 [GA]
We present the first NuSTAR observation of a 'true' Type 2 Seyfert galaxy. The 3-40 keV X-ray spectrum of NGC3147 is characterised by a simple power-law, with a standard {\Gamma}~1.7 and an iron emission line, with no need for any further component up to ~40 keV. These spectral properties, together with significant variability on time-scales as short as weeks (as shown in a 2014 Swift monitoring campaign), strongly support an unobscured line-of-sight for this source. An alternative scenario in terms of a Compton-thick source is strongly disfavoured, as it would require an exceptional geometrical configuration in which a large fraction of the solid angle to the source is filled by a highly ionised gas, whose reprocessed emission would dominate the observed luminosity. Moreover, in this scenario the implied intrinsic X-ray luminosity of the source would be much larger than the value predicted by other luminosity proxies, like the [OIII]{\lambda}5007 emission line extinction-corrected luminosity. Therefore, we confirm with high confidence that NGC3147 is a true Type 2 Seyfert galaxy, intrinsically characterised by the absence of a BLR.
S. Bianchi, A. Marinucci, G. Matt, et. al.
Comments: 5 pages, 5 figures, accepted for publication in MNRAS
Posted in Galaxy Astrophysics | Tagged Astrophysics of Galaxies, Cosmology and Nongalactic Astrophysics, High Energy Astrophysical Phenomena
Probing the cosmic ray mass composition in the knee region through TeV secondary particle fluxes from solar surroundings [HEAP]
The possibility of estimating the mass composition of primary cosmic rays above the knee of the energy spectrum through the study of high energy gamma rays, muons and neutrinos produced in the interactions of cosmic rays with the solar ambient matter and radiation has been explored. It is found that the theoretical fluxes of TeV gamma rays, muons and neutrinos from a region within $15^{\circ}$ of the Sun are sensitive to the mass composition of cosmic rays in the PeV energy range. The experimental prospects for detection of such TeV gamma rays/neutrinos by future experiments are discussed.
P. Banik, B. Bijay, S. Sarkar, et. al.
Comments: 10 pages, 7 figures, to appear in Physical Review D
Descattering of Giant Pulses in PSR B1957+20 [HEAP]
The interstellar medium scatters radio waves which causes pulsars to scintillate. For intrinsically short bursts of emission, the observed signal should be a direct measurement of the impulse response function. We show that this is indeed the case for giant pulses from PSR B1957+20: from baseband observations at 327 MHz, we demonstrate that the observed voltages of a bright pulse allow one to coherently descatter nearby ones. We find that while the scattering timescale is $12.3\,\mu$s, the power in the descattered pulses is concentrated within a span almost two orders of magnitude shorter, of $\lesssim\!200\,$ns. This sets an upper limit to the intrinsic duration of the giant pulses. We verify that the response inferred from the giant pulses is consistent with the scintillation pattern obtained by folding the regular pulsed emission, and that it decorrelates on the same timescale, of~$84\,$s. In principle, with large sets of giant pulses, it should be possible to constrain the structure of the scattering screen much more directly than with other current techniques, such as holography on the dynamic spectrum and cyclic spectroscopy.
R. Main, M. Kerkwijk, U. Pen, et. al.
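The coherent descattering described above amounts to deconvolving the recorded baseband voltages by the impulse response inferred from a bright giant pulse. The following is a minimal, illustrative Python sketch of such a deconvolution; treating the bright pulse directly as the impulse response and the choice of regularisation constant are simplifying assumptions, not the authors' actual pipeline.

```python
import numpy as np

def descatter(voltage, irf_estimate, eps=1e-2):
    """Coherently remove interstellar scattering from a baseband voltage series.

    voltage      : complex baseband samples of the pulse to be descattered
    irf_estimate : voltage series of a bright giant pulse, used here as a
                   stand-in for the impulse response function of the screen
    eps          : regularisation level (illustrative value, not from the paper)
    """
    n = len(voltage) + len(irf_estimate) - 1
    V = np.fft.fft(voltage, n)
    H = np.fft.fft(irf_estimate, n)
    # Wiener-like deconvolution: divide the spectra, damping noisy channels
    D = V * np.conj(H) / (np.abs(H) ** 2 + eps * np.mean(np.abs(H) ** 2))
    return np.fft.ifft(D)
```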
Powerful Solar Signatures of Long-Lived Dark Mediators [HEAP]
Dark matter capture and annihilation in the Sun can produce detectable high-energy neutrinos, providing a probe of the dark matter-proton scattering cross section. We consider the case when annihilation proceeds via long-lived dark mediators, which allows gamma rays to escape the Sun and reduces the attenuation of neutrinos. For gamma rays, there are exciting new opportunities, due to detailed measurements of GeV solar gamma rays with Fermi, and unprecedented sensitivities in the TeV range with HAWC and LHAASO. For neutrinos, the enhanced flux, particularly at higher energies ($\sim$TeV), allows a more sensitive dark matter search with IceCube and KM3NeT. We show that these search channels can be extremely powerful, potentially improving sensitivity to the dark matter spin-dependent scattering cross section by several orders of magnitude relative to present searches for high-energy solar neutrinos, as well as direct detection experiments.
R. Leane, K. Ng and J. Beacom
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, High Energy Physics - Experiment, High Energy Physics - Phenomenology
Cosmic rays, gas and dust in nearby anticentre clouds : I — CO-to-H2 conversion factors and dust opacities [HEAP]
We aim to explore the capabilities of dust emission and gamma rays for probing the properties of the interstellar medium in the nearby anti-centre region, using gamma-ray observations with the Fermi Large Area Telescope (LAT), and the thermal dust optical depth inferred from Planck and IRAS observations. In particular, we aim at quantifying potential variations in cosmic-ray density and dust properties per gas nucleon across the different gas phases and different clouds, and at measuring the CO-to-H2 conversion factor, X$_{CO}$, in different environments. We have separated six nearby anti-centre clouds that are coherent in velocities and distances, from the Galactic-disc background in HI 21-cm and $^{12}$CO 2.6-mm line emission. We have jointly modelled the gamma-ray intensity recorded between 0.4 and 100 GeV, and the dust optical depth at 353 GHz as a combination of HI-bright, CO-bright, and ionised gas components. The complementary information from dust emission and gamma rays was used to reveal the gas not seen, or poorly traced, by HI, free-free, and $^{12}$CO emissions, namely (i) the opaque HI and diffuse H$_2$ present in the Dark Neutral Medium at the atomic-molecular transition, and (ii) the dense H$_2$ to be added where $^{12}$CO lines saturate. The measured interstellar gamma-ray spectra support a uniform penetration of the cosmic rays with energies above a few GeV through the clouds. We find a gradual increase in grain opacity as the gas becomes more dense. The increase reaches a factor of four to six in the cold molecular regions that are well shielded from stellar radiation. Consequently, the X$_{CO}$ factor derived from dust is systematically larger by 30% to 130% than the gamma-ray estimate. We also evaluate the average gamma-ray X$_{CO}$ factor for each cloud, and find that X$_{CO}$ tends to decrease from diffuse to more compact molecular clouds, as expected from theory.
Q. Remy, I. Grenier, D. Marshall, et. al.
On the Chemistry of the Young Massive Protostellar core NGC 2264 CMM3 [GA]
We present the first gas-grain astrochemical model of the NGC 2264 CMM3 protostellar core. The chemical evolution of the core is affected by changing its physical parameters such as the total density and the amount of gas-depletion onto grain surfaces as well as the cosmic ray ionisation rate, $\zeta$. We estimated $\zeta_{\text {CMM3}}$ = 1.6 $\times$ 10$^{-17}$ s$^{-1}$. This value is 1.3 times higher than the standard CR ionisation rate, $\zeta_{\text {ISM}}$ = 1.3 $\times$ 10$^{-17}$ s$^{-1}$. Species respond differently to changes in the core physical conditions, but they are more sensitive to changes in the depletion percentage and CR ionisation rate than to variations in the core density. Gas-phase models highlighted the importance of surface reactions as factories of large molecules and showed that for sulphur-bearing species depletion is important to reproduce observations.
Comparing the results of the reference model with the most recent millimeter observations of the NGC 2264 CMM3 core showed that our model is capable of reproducing the observed abundances of most of the species during early stages ($\le$ 3$\times$10$^4$ yrs) of their chemical evolution. Models with variations in the core density between 1 – 20 $\times$ 10$^6$ cm$^{-3}$ are also in good agreement with observations during the early time interval 1 $\times$ 10$^4 <$ t (yr) $<$ 5 $\times$ 10$^4$. In addition, models with higher CR ionisation rates (5 – 10) $\times \zeta_{\text {ISM}}$ are often overestimating the fractional abundances of the species. However, models with $\zeta_{\text {CMM3}}$ = 5 $\zeta_{\text {ISM}}$ may best fit observations at times $\sim$ 2 $\times$ 10$^4$ yrs. Our results suggest that CMM3 is (1 – 5) $\times$ 10$^4$ yrs old. Therefore, the core is chemically young and it may host a Class 0 object as suggested by previous studies.
Z. Awad and O. Shalabeia
Comments: 24 pages, 4 figures, 3 Tables. Accepted for publication in Astrophysics and Space Science
Posted in Galaxy Astrophysics | Tagged Astrophysics of Galaxies, High Energy Astrophysical Phenomena, Solar and Stellar Astrophysics
Probing the Interstellar Dust towards the Galactic Centre: Dust Scattering Halo around AX J1745.6-2901 [HEAP]
AX J1745.6-2901 is an X-ray binary located at only 1.45 arcmin from Sgr A*, showcasing a strong X-ray dust scattering halo. We combine Chandra and XMM-Newton observations to study the halo around this X-ray binary. Our study shows two major thick dust layers along the line of sight (LOS) towards AX J1745.6-2901. The LOS position and $N_{H}$ of these two layers depend on the dust grain models with different grain size distribution and abundances. But for all the 19 dust grain models considered, dust Layer-1 is consistently found to be within a fractional distance of 0.11 (mean value: 0.05) to AX J1745.6-2901 and contains only (19-34)% (mean value: 26%) of the total LOS dust. The remaining dust is contained in Layer-2, which is distributed from the Earth up to a mean fractional distance of 0.64. A significant separation between the two layers is found for all the dust grain models, with a mean fractional distance of 0.31. Besides, an extended wing component is discovered in the halo, which implies a higher fraction of dust grains with typical sizes $\lesssim$ 590 \AA\ than considered in current dust grain models. Assuming AX J1745.6-2901 is 8 kpc away, dust Layer-2 would be located in the Galactic disk several kpc away from the Galactic Centre (GC). The dust scattering halo biases the observed spectrum of AX J1745.6-2901 severely in both spectral shape and flux, and also introduces a strong dependence on the size of the instrumental point spread function and the source extraction region. We build Xspec models to account for this spectral bias, which allow us to recover the intrinsic spectrum of AX J1745.6-2901 free from dust scattering opacity. If dust Layer-2 also intervenes along the LOS to Sgr A* and other nearby GC sources, a significant spectral correction for the dust scattering opacity would be necessary for all these GC sources.
C. Jin, G. Ponti, F. Haberl, et. al.
Comments: 20 pages, 18 figures, 5 tables, accepted for publication in MNRAS
Non-cyclic geometric phases and helicity transitions for neutrino oscillations in magnetic field [CL]
We show that neutrino spin and spin-flavor transitions involve non-vanishing geometric phases. Analytical expressions are derived for non-cyclic geometric phases arising due to neutrino oscillations in magnetic fields and matter. Several calculations are performed for different cases of rotating and non-rotating magnetic fields in the context of solar neutrinos and neutrinos produced inside neutron stars. It is shown that the neutrino eigenstates carry non-vanishing geometric phases even at large distances from their original point of production. Also the effects of critical magnetic fields and cross boundary effects in case of neutrinos emanating out of neutron stars are analyzed.
S. Joshi and S. Jain
Posted in Cross-listed | Tagged High Energy Astrophysical Phenomena, High Energy Physics - Phenomenology, Quantum Physics
XMM-Newton and NuSTAR simultaneous X-ray observations of IGR J11215-5952 [HEAP]
We report the results of an XMM-Newton and NuSTAR coordinated observation of the Supergiant Fast X-ray Transient (SFXT) IGRJ11215-5952, performed on February 14, 2016, during the expected peak of its brief outburst, which repeats about every 165 days. Timing and spectral analysis were performed simultaneously in the energy band 0.4-78 keV. A spin period of 187.0 +/- 0.4 s was measured, consistent with previous observations performed in 2007. The X-ray intensity shows a large variability (more than one order of magnitude) on timescales longer than the spin period, with several luminous X-ray flares which repeat every 2-2.5 ks, some of which were simultaneously observed by both satellites. The broad-band (0.4-78 keV) time-averaged spectrum was well deconvolved with a double-component model (a blackbody plus a power-law with a high energy cutoff) together with a weak iron line in emission at 6.4 keV (equivalent width, EW, of 40+/-10 eV). Alternatively, a partial covering model also resulted in an adequate description of the data. The source time-averaged X-ray luminosity was 1E36 erg/s (0.1-100 keV; assuming 7 kpc). We discuss the results of these observations in the framework of the different models proposed to explain SFXTs, supporting a quasi-spherical settling accretion regime, although alternative possibilities (e.g. centrifugal barrier) cannot be ruled out.
L. Sidoli, A. Tiengo, A. Paizis, et. al.
Comments: 13 pages, 11 figures, accepted for publication on The Astrophysical Journal
Indication of a massive circumbinary planet orbiting the Low Mass X-ray Binary MXB 1658-298 [HEAP]
We present an X-ray timing analysis of the transient X-ray binary MXB 1658-298, using data obtained from the RXTE and XMM-Newton observatories. We have made 27 new mid-eclipse time measurements from observations made during the two outbursts of the source. These new measurements have been combined with the previously known values to study long term changes in the orbital period of the binary system. We have found that the mid-eclipse timing record of MXB 1658-298 is quite unusual. The long term evolution of mid-eclipse times indicates an overall orbital period decay with a time scale of -6.5(7) x 10^7 yr. Over and above this orbital period decay, the O-C residual curve also shows a periodic residual on shorter timescales. This sinusoidal variation has an amplitude of ~9 lt-sec and a period of ~760 d. This is indicative of the presence of a third body around the compact X-ray binary. The mass and orbital radius of the third body are estimated to lie in the range 20.5-26.9 Jupiter masses and 750-860 lt-sec, respectively. If true, then it will be the most massive circumbinary planet and also the shortest-period binary known to host a planet.
C. Jain, B. Paul, R. Sharma, et. al.
Tue, 14 Mar 17
Comments: 5 pages, 3 figures, Accepted for publication in Monthly Notices of the Royal Astronomical Society Letters
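As a rough consistency check of the companion mass quoted above, one can invert the binary mass function implied by the reported ~9 lt-s, ~760 d sinusoidal modulation. The sketch below is only an order-of-magnitude estimate: the assumed total binary mass of 1.9 solar masses and the edge-on orbit (sin i = 1) are assumptions not stated in the abstract.

```python
import numpy as np
from scipy.optimize import brentq

G, M_sun, M_jup = 6.674e-11, 1.989e30, 1.898e27   # SI units
a_sini = 9.0 * 2.998e8        # binary reflex orbit, 9 light-seconds in metres
P      = 760.0 * 86400.0      # modulation period in seconds
M_bin  = 1.9 * M_sun          # ASSUMED total mass of the X-ray binary

# Mass function of the third body implied by the light-travel-time orbit
f = 4.0 * np.pi**2 * a_sini**3 / (G * P**2)

# Solve f = (m3 sin i)^3 / (M_bin + m3)^2 for m3, taking sin i = 1 (edge-on)
m3 = brentq(lambda m: m**3 / (M_bin + m)**2 - f, 1e25, 1e30)
print(f"minimum third-body mass ~ {m3 / M_jup:.0f} Jupiter masses")
```

With these assumptions the minimum companion mass comes out near ~18 Jupiter masses, in the same ballpark as the 20.5-26.9 Jupiter-mass range quoted above; the exact value depends on the adopted binary mass and inclination.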
The X-ray continuum time-lags and intrinsic coherence in AGN [HEAP]
We present the results from a systematic analysis of the X-ray continuum (`hard') time-lags and intrinsic coherence between the $2-4\,\mathrm{keV}$ and various energy bands in the $0.3-10\,\mathrm{keV}$ range, for ten X-ray bright and highly variable active galactic nuclei (AGN). We used all available archival \textit{XMM-Newton} data, and estimated the time-lags following Epitropakis \& Papadakis (2016). By performing extensive numerical simulations, we arrived at useful guidelines for computing intrinsic coherence estimates that are minimally biased, have known errors, and are (approximately) Gaussian distributed. Owing to the way we estimated the time-lags and intrinsic coherence, we were able to do a proper model fitting to the data. Regarding the continuum time-lags, we are able to demonstrate that they have a power-law dependence on frequency, with a slope of $-1$, and that their amplitude scales with the logarithm of the light-curve mean-energy ratio. We also find that their amplitude increases with the square root of the X-ray Eddington ratio. Regarding the intrinsic coherence, we found that it is approximately constant at low frequencies. It then decreases exponentially at frequencies higher than a characteristic `break frequency.' Both the low-frequency constant intrinsic-coherence value and the break frequency have a logarithmic dependence on the light-curve mean-energy ratio. Neither the low-frequency constant intrinsic-coherence value nor the break frequency exhibits a universal scaling with either the central black hole mass or the X-ray Eddington ratio. Our results could constrain various theoretical models of AGN X-ray variability.
A. Epitropakis and I. Papadakis
Comments: 36 pages, 6 tables, 42 figures (accepted for publication in MNRAS)
Do FRB Mark Dark Core Collapse? [HEAP]
Are some neutron stars produced without a supernova, without ejecting mass in a remnant? Theoretical calculations of core collapse in massive stars often predict this. The observation of the repeating FRB 121102, whose dispersion measure has not changed over several years, suggests that dark core collapses are not just failures of computer codes, but may be real. The existence of one repeating FRB with unchanging dispersion measure is not conclusive, but within a decade hundreds or thousands of FRB are expected to be discovered, likely including scores of repeaters, permitting useful statistical inferences. A na\"{\i}ve supernova remnant model predicts observable decline in dispersion measure for 100 years after its formation. If an upper limit on the decline of 2 pc/cm$^3$-y is set for five repeating FRB, then the na\"{\i}ve model is rejected at the 95\% level of confidence. This may indicate dark neutron star formation without a supernova or supernova remnant. This hypothesis may also be tested with LSST data that would show, if present, a supernova at an interferometric FRB position if it occurred within the LSST epoch.
J. Katz
Comments: 4 pp
Determination of the Magnetic Fields of Magellanic X-Ray Pulsars [HEAP]
The 80 high-mass X-ray binary (HMXB) pulsars that are known to reside in the Magellanic Clouds (MCs) have been observed by the XMM-Newton and Chandra X-ray telescopes on a regular basis for 15 years, and the XMM-Newton and Chandra archives contain nearly complete information about the duty cycles of the sources with spin periods P_S < 100 s. We have reprocessed the archival data from both observatories and we combined the output products with all the published observations of 31 MC pulsars with P_S < 100 s in an attempt to investigate the faintest X-ray emission states of these objects that occur when accretion to the polar caps proceeds at the smallest possible rates. These states determine the so-called propeller lines of the accreting pulsars and yield information about the magnitudes of their surface magnetic fields. We have found that the faintest states of the pulsars segregate into five discrete groups which obey to a high degree of accuracy the theoretical relation between spin period and X-ray luminosity. So the entire population of these pulsars can be described by just five propeller lines and the five corresponding magnetic moments (0.29, 0.53, 1.2, 2.9, and 7.3, in units of 10^30 G cm^3).
D. Christodoulou, S. Laycock, J. Yang, et. al.
Comments: To appear in Research in Astronomy and Astrophysics
Constructing Gravitational Waves from Generic Spin-Precessing Compact Binary Inspirals [CL]
The coalescence of compact objects is one of the most promising sources of gravitational waves for ground-based interferometric detectors, such as advanced LIGO and Virgo. Generically, compact objects in binaries are expected to be spinning with spin angular momenta misaligned with the orbital angular momentum, causing the orbital plane to precess. This precession adds rich structure to the gravitational waves, introducing such complexity that an analytic closed-form description has been unavailable until now. We here construct the first closed-form frequency-domain gravitational waveforms that are valid for generic spin-precessing quasicircular compact binary inspirals. We first construct time-domain gravitational waves by solving the post-Newtonian precession equations of motion with radiation reaction through multiple scale analysis. We then Fourier transform these time-domain waveforms with the method of shifted uniform asymptotics to obtain closed-form expressions for frequency-domain waveforms. We study the accuracy of these analytic, frequency-domain waveforms relative to waveforms obtained by numerically evolving the post-Newtonian equations of motion and find that they are suitable for unbiased parameter estimation for 99.2%(94.6%) of the binary configurations we studied at a signal-to-noise ratio of 10(25). These new frequency-domain waveforms could be used for detection and parameter estimation studies due to their accuracy and low computational cost.
K. Chatziioannou, A. Klein, N. Yunes, et. al.
Comments: 21 pages, submitted to Phys. Rev. D
Radial modes of levitating atmospheres around Eddington-luminosity neutron stars [HEAP]
We consider an optically thin radiation-supported levitating atmosphere suspended well above the stellar surface, as discussed recently in the Schwarzschild metric for a star of luminosity close to the Eddington value. Assuming the atmosphere to be geometrically thin and polytropic, we investigate the eigenmodes and calculate the frequencies of the oscillations of the atmosphere in Newtonian formalism. The ratio of the two lowest eigenfrequencies is $\sqrt{\gamma+1}$, i.e., it only depends on the adiabatic index.
D. Bollimpalli and W. Kluzniak
Comments: 6 pages, 3 figures, Submitted for publication to MNRAS
First Detection of Mid-Infrared Variability from an Ultraluminous X-Ray Source Holmberg II X-1 [HEAP]
We present mid-infrared (IR) light curves of the Ultraluminous X-ray Source (ULX) Holmberg II X-1 from observations taken between 2014 January 13 and 2017 January 5 with the \textit{Spitzer Space Telescope} at 3.6 and 4.5 $\mu$m in the \textit{Spitzer} Infrared Intensive Transients Survey (SPIRITS). The mid-IR light curves reveal the first detection of mid-IR variability from a ULX, and the emission is determined to arise primarily from dust rather than from a jet or an accretion disk outflow. We derived the evolution of the dust temperature ($T_\mathrm{d}\sim600 – 800$ K), IR luminosity ($L_\mathrm{IR}\sim3\times10^4$ $\mathrm{L}_\odot$), mass ($M_\mathrm{d}\sim1-3\times10^{-6}$ $\mathrm{M}_\odot$), and equilibrium temperature radius ($R_\mathrm{eq}\sim10-20$ AU). A comparison of X-1 with a sample of spectroscopically identified massive stars in the Large Magellanic Cloud on a mid-IR color-magnitude diagram suggests that the mass donor in X-1 is a supergiant (sg) B[e]-star. The sgB[e]-interpretation is consistent with the derived dust properties and the presence of the [Fe II] ($\lambda=1.644$ $\mu$m) emission line revealed from previous near-IR studies of X-1. We attribute the mid-IR variability of X-1 to increased heating of dust located in a circumbinary torus. It is unclear what physical processes are responsible for the increased dust heating; however, it does not appear to be associated with the X-ray flux from the ULX given the constant X-ray luminosities provided by serendipitous, near-contemporaneous X-ray observations around the first mid-IR variability event in 2014. Our results highlight the importance of mid-IR observations of luminous X-ray sources traditionally studied at X-ray and radio wavelengths.
R. Lau, M. Heida, M. Kasliwal, et. al.
Comments: 9 page, 4 figures, 1 table, Accepted to ApJ Letters
Posted in High Energy Astrophysical Phenomena | Tagged Astrophysics of Galaxies, High Energy Astrophysical Phenomena, Solar and Stellar Astrophysics
Double O-Ne-Mg white dwarfs merging as the source of the Powerfull Gravitational Waves for LIGO/VIRGO type interferometers [HEAP]
A new strong, non-spiralling gravitational wave (GW) source for LIGO/VIRGO detectors is proposed. It is noted that double O-Ne-Mg white dwarf mergers can produce strong gravitational waves with frequencies in the 600-1200 Hz range. Such events can be followed by a Type Ia supernova.
V. Lipunov
Comments: 4 pages, submitted to New Astronomy
On the Absence of Non-thermal X-ray emission around Runaway O stars [HEAP]
Theoretical models predict that the compressed interstellar medium around runaway O stars can produce high-energy non-thermal diffuse emission, in particular, non-thermal X-ray and $\gamma$-ray emission. So far, detection of non-thermal X-ray emission was claimed for only one runaway star AE Aur. We present a search for non-thermal diffuse X-ray emission from bow shocks using archived XMM-Newton observations for a clean sample of 6 well-determined runaway O stars. We find that none of these objects present diffuse X-ray emission associated to their bow shocks, similarly to previous X-ray studies toward $\zeta$ Oph and BD$+$43$^{\circ}$3654. We carefully investigated multi-wavelength observations of AE Aur and could not confirm previous findings of non-thermal X-rays. We conclude that so far there is no clear evidence of non-thermal extended emission in bow shocks around runaway O stars.
J. Toala, L. Oskinova and R. Ignace
Comments: 6 pages, 2 tables, 3 figures; Accepted to ApJ Letters
Classical collapse to black holes and white hole quantum bounces: A review [CL]
In the last four decades different programs have been carried out aiming at understanding the final fate of gravitational collapse of massive bodies once some prescriptions for the behaviour of gravity in the strong field regime are provided. The general picture arising from most of these scenarios is that the classical singularity at the end of collapse is replaced by a bounce. The most striking consequence of the bounce is that the black hole horizon may live for only a finite time. The possible implications for astrophysics are important since, if these models capture the essence of the collapse of a massive star, an observable signature of quantum gravity may be hiding in astrophysical phenomena. One intriguing idea that is implied by these models is the possible existence of exotic compact objects, of high density and finite size, that may not be covered by an horizon. The present article outlines the main features of these collapse models and some of the most relevant open problems. The aim is to provide a comprehensive (as much as possible) overview of the current status of the field from the point of view of astrophysics. As a little extra, a new toy model for collapse leading to the formation of a quasi static compact object is presented.
D. Malafarina
Comments: 29 pages, 8 figures, comments are welcome
Synthetic streams in a Gravitational Wave inspiral search with a multi-detector network [CL]
http://arxiv.org/abs/1401.7967
A gravitational wave inspiral search with a global network of interferometers, when carried out in a phase-coherent fashion, would mimic an effective multi-detector network with synthetic streams constructed by the linear combination of the data from different detectors. For the first time, we demonstrate that the two synthetic data streams pertaining to the two polarizations of the gravitational wave can be derived prior to the maximum-likelihood analysis in a most natural way using the technique of singular value decomposition applied to the network signal-to-noise ratio vector. We construct the network matched filters in the combined network-plus-spectral space, which capture both synthetic streams. We further show that the network LLR is then the sum of the LLRs of the individual synthetic streams. The four extrinsic parameters are mapped to two amplitudes and two phases. The maximization over these is straightforward and closely linked to the single-detector approach. Towards the end, we connect all the previous works related to the multi-detector gravitational wave inspiral search and express them in the same notation in order to bring them onto a common footing.
Comments: LIGO laboratory document number: LIGO-P1300229
Probing the Extragalactic Cosmic Rays origin with gamma-ray and neutrino backgrounds [HEAP]
The GeV-TeV gamma-rays and the PeV-EeV neutrino backgrounds provide a unique window on the nature of the ultra-high-energy cosmic-ray (UHECR). We discuss the implications of the recent Fermi-LAT data regarding the extragalactic gamma-ray background (EGB) and related estimates of the contribution of point sources as well as IceCube neutrino data on the origin of the ultra-high-energy cosmic-ray (UHECR). We calculate the diffuse flux of cosmogenic $\gamma$-rays and neutrinos produced by the UHECRs and derive constraints on the possible cosmological evolution of UHECR sources. In particular, we show that the mixed-composition scenario considered in \citet{Globus2015b}, that is in agreement with both (i) Auger measurements of the energy spectrum and composition up to the highest energies and (ii) the ankle-like feature in the light component detected by KASCADE-Grande, is compatible with both the Fermi-LAT measurements and with current IceCube limits. We also discuss the possibility for future experiments to detect associated cosmogenic neutrinos and further constrain the UHECR models, including possible subdominant UHECR proton sources.
N. Globus, D. Allard, E. Parizot, et. al.
Comments: 6 pages, 4 figures, submitted to ApJ Letters
Spatially-Resolved Star Formation Main Sequence of Galaxies in the CALIFA Survey [GA]
ScholarWorks@UMass Amherst
Self-assembled polypeptide-surfactant complexes in organic solvents and in the solid-state: A new class of comb-shaped polypeptides
Ekaterina A Ponomarenko, University of Massachusetts Amherst
We describe herein the preparation and physical characterization of novel water-insoluble complexes formed by synthetic polypeptides, sodium poly($\alpha$,L-glutamate) and poly(L-lysine) hydrobromide, and oppositely charged low molecular weight surfactants, alkyltrimethylammonium bromides and sodium alkyl sulfates with chain lengths from twelve to eighteen methylene groups. The complexes of nearly stoichiometric compositions were prepared by mixing equimolar amounts of the components in water. The goal of the research was to understand the influence of the electrostatically bound 'side chains' on properties of polypeptide chains (solubility and conformation) and the effect of polymer chains on organization of the complexed surfactants. The behavior of the complexes was compared to that of their covalent analogs, alkyl esters of poly($\alpha$,L-glutamic acid) and acyl derivatives of poly(L-lysine). Conformational and structural properties of the complexes in the solid state were studied via circular dichroism, infrared spectroscopy, X-ray diffraction and differential scanning calorimetry. Poly($\alpha$,L-glutamate) chains in the complexes adopt $\alpha$-helical conformations at room temperature and disordered conformations at elevated temperatures. Poly(L-lysine) chains in the complexes adopt either $\beta$-sheet conformation (as isolated after synthesis) or $\alpha$-helical conformation (in the solid films cast from chloroform-trifluoroacetic acid solutions). Organization of surfactants in the complexes depends on the surfactant chain length. Shorter chains (eight to sixteen carbon atoms) are packed with a short-range order, while the longer chains (eighteen carbon atoms) crystallize on a hexagonal lattice. In complexes with mixed octyl and octadecyl sulfates, organization of the surfactants depends on the composition: the minimum octadecyl chain content required for crystallization is about 20 molar per cent. All complexes studied are organized in lamellar structures consisting of alternating layers of polypeptide chains separated by layers of surfactants. Dilute solution properties of poly(L-lysine)-dodecyl sulfate complexes in organic solvents were studied via viscometry, $^1$H NMR and $^1$H NMR relaxation techniques. Poly(L-lysine) chains in the complexes in chloroform-trifluoroacetic acid solution adopt either $\alpha$-helical (1-2 volume per cent trifluoroacetic acid) or disordered (4-10 volume per cent trifluoroacetic acid) conformations.
Polymers|Biochemistry|Chemistry
Ponomarenko, Ekaterina A, "Self-assembled polypeptide-surfactant complexes in organic solvents and in the solid-state: A new class of comb-shaped polypeptides" (1997). Doctoral Dissertations Available from Proquest. AAI9809388.
https://scholarworks.umass.edu/dissertations/AAI9809388
March 2020, 9(1): 39-60. doi: 10.3934/eect.2020016
Uniform exponential stability of a fluid-plate interaction model due to thermal effects
Gilbert Peralta
Department of Mathematics and Computer Science, University of the Philippines Baguio, Governor Pack Road, Baguio, 2600, Philippines
Received June 2018; Revised August 2019; Published March 2020; Early access October 2019
Fund Project: This work was supported in part by the ERC advanced grant 668998 (OCLOC) under the EU's H2020 research program and the Ernst-Mach grant of the Austrian Agency for International Cooperation in Education and Research (OeAD-GmbH).
We consider a coupled fluid-thermoelastic plate interaction model. The fluid velocity is modeled by the linearized 3D Navier-Stokes equation while the plate dynamics is described by a thermoelastic Kirchoff system. By eliminating the pressure term, the system is reformulated as an abstract evolution problem and its well-posedness is proved by semigroup methods. The dissipation in the system is due to the diffusion of the fluid and heat components. Uniform stability of the coupled system is established through multipliers and the energy method. The multipliers used for thermoelastic plate models in the literature are modified in accordance to the applicability of a certain Stokes map.
Keywords: Fluid-plate interaction model, linearized Navier-Stokes equation, thermoelastic Kirchoff plate system, exponential stability, energy method, modified multipliers.
Mathematics Subject Classification: Primary: 35Q30, 74K20, 35K05; Secondary: 93D20.
Citation: Gilbert Peralta. Uniform exponential stability of a fluid-plate interaction model due to thermal effects. Evolution Equations & Control Theory, 2020, 9 (1) : 39-60. doi: 10.3934/eect.2020016
The height of a right cylinder is 2.5 times its radius. If the surface area of the cylinder is $112\pi\text{ cm}^2$, what is the radius of the cylinder in centimeters?
The surface area of a cylinder with radius $r$ and height $h$ is $2\pi r^2+2\pi rh$. Setting this expression equal to $112\pi$ and substituting $h=2.5r$ gives \begin{align*}
2\pi r(r+2.5r) &= 112\pi \\
7\pi r^2 &= 112\pi \\
r^2 &= 16 \\
r&=\boxed{4} \text{ cm}.
\end{align*}
Mutational signature distribution varies with DNA replication timing and strand asymmetry
Marketa Tomkova,
Jakub Tomek,
Skirmantas Kriaucionis &
Benjamin Schuster-Böckler (ORCID: orcid.org/0000-0002-8892-5333)
DNA replication plays an important role in mutagenesis, yet little is known about how it interacts with other mutagenic processes. Here, we use somatic mutation signatures—each representing a mutagenic process—derived from 3056 patients spanning 19 cancer types to quantify the strand asymmetry of mutational signatures around replication origins and between early and late replicating regions.
We observe that most of the detected mutational signatures are significantly correlated with the timing or direction of DNA replication. The properties of these associations are distinct for different signatures and shed new light on several mutagenic processes. For example, our results suggest that oxidative damage to the nucleotide pool substantially contributes to the mutational landscape of esophageal adenocarcinoma.
Together, our results indicate an interaction between DNA replication, the associated damage repair, and most mutagenic processes.
Understanding the mechanisms of mutagenesis in cancer is important for the prevention and treatment of the disease [1, 2]. Mounting evidence suggests replication itself contributes to cancer risk [3]. Copying of DNA is intrinsically asymmetrical, with leading and lagging strands being processed by distinct sets of enzymes [4], and different genomic regions replicating at defined times during S phase [5]. Previous analyses have focused either on the genome-wide distribution of mutation rate or on the strand specificity of individual base changes. These studies revealed that the average mutation frequency is increased in late-replicating regions [6, 7], and that the asymmetric synthesis of DNA during replication leads to strand-specific frequencies of base changes [8,9,10,11]. However, the extent to which DNA replication influences distinct mutational mechanisms, with their manifold possible causes, remains incompletely understood.
Mutational signatures have been established as a powerful approach to quantify the presence of distinct mutational mechanisms in cancer [12]. A mutational signature is a unique combination of the frequencies of all base-pair mutation types (C:G > A:T, T:A > G:C, etc.) and their flanking nucleotides. Since it is usually not known which base in a pair was the source of a mutation, the convention is to annotate mutations from the pyrimidine (C > A, T > A, etc.), leading to 96 possible combinations of mutation types and neighboring bases. Non-negative matrix factorization is used to extract mutational signatures from somatic mutations in cancer samples [12]. This approach has the important advantage of being able to distinguish between processes that have the same major mutation type (such as C > T transitions) but differ in their sequence context. We built upon this feature of mutational signatures and developed a computational framework to identify the replication-strand-specific impact of distinct mutational processes. Using this system, we quantified the replication strand and timing bias of mutational signatures across 19 cancer types. We show that replication affects the distribution of nearly all mutational signatures across the genome, including those that represent chemical mutagens. The unique strand asymmetry and replication timing profile of different signatures reveal novel aspects of the underlying mechanism. For example, we discovered a strong lagging strand bias of T > G mutations in esophageal adenocarcinoma, suggesting an involvement of oxidative damage to the nucleotide pool in the etiology of the disease. Together, our results highlight the critical role of DNA replication and the associated repair in the accumulation of somatic mutations.
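To make the 96-channel convention concrete, the sketch below shows one way to tabulate pyrimidine-centred trinucleotide mutation counts and to factorise the resulting catalogue with non-negative matrix factorization. This is an illustrative Python sketch under stated assumptions, not the authors' pipeline; the channel ordering and the sklearn settings are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import NMF

COMP = str.maketrans("ACGT", "TGCA")

CHANNELS = [f"{left}[{ref}>{alt}]{right}"
            for ref, alts in (("C", "AGT"), ("T", "ACG"))
            for alt in alts
            for left in "ACGT" for right in "ACGT"]

def mutation_channel(ref_trinuc, alt_base):
    """Map a single-base substitution to one of the 96 pyrimidine-centred channels."""
    if ref_trinuc[1] in "AG":                          # purine: flip to the pyrimidine strand
        ref_trinuc = ref_trinuc.translate(COMP)[::-1]  # reverse complement of the context
        alt_base = alt_base.translate(COMP)
    return f"{ref_trinuc[0]}[{ref_trinuc[1]}>{alt_base}]{ref_trinuc[2]}"

def catalogue(mutations):
    """mutations: iterable of (reference trinucleotide, alternate base) pairs."""
    counts = dict.fromkeys(CHANNELS, 0)
    for trinuc, alt in mutations:
        counts[mutation_channel(trinuc, alt)] += 1
    return np.array([counts[c] for c in CHANNELS])

def extract_signatures(catalogue_matrix, n_signatures):
    """Factorise a (samples x 96) count matrix into exposures and signatures."""
    model = NMF(n_components=n_signatures, init="nndsvda", max_iter=2000)
    exposures = model.fit_transform(catalogue_matrix)
    return exposures, model.components_
```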
Replication bias of mutational signatures
DNA replication in eukaryotic cells is initiated around replication origins (ORI), from where it proceeds in both directions, synthesizing the leading strand continuously and the lagging strand discontinuously (Fig. 1a). We used two independent data sets to describe replication direction relative to the reference sequence, one derived from high-resolution replication timing data [11] and the other from direct detection of ORIs by short nascent strand sequencing (SNS-seq) [13], corrected for technical artifacts [14] (see "Methods"). The former provides information for more genomic loci, while the latter is of higher resolution. As a third measure of DNA replication, we compared regions replicating early during S phase to regions replicating late [11]. We calculated strand-specific signatures [15] that add strand information to each mutation type, based on the direction of DNA replication [11] (Fig. 1b). We clustered the strand-specific signatures and further condensed them into directional signatures consisting of 96 mutation types, each assigned either "leading" or "lagging" direction depending on the frequency in the strand-specific signature (Fig. 1c; see "Methods"). These directional signatures can be used to separately compute the presence of the signature on the leading and lagging strands in individual samples, which is analogous to what is called the exposure to the signature in a sample [16], representing the genome-scale normalized contribution of mutations to the signature (Fig. 1d). Depending on whether the strand bias matches the consensus of the directional signature, the exposure can be matching or inverse. The latter can occur if the strand bias of a signature in a subset of samples does not match the bias observed in the samples that most strongly contributed to the definition of the signature. We applied this novel algorithm to somatic mutations detected in whole-genome sequencing of 3056 tumor samples from 19 cancer types (Additional file 1: Table S1). We excluded protein-coding genes from the analysis in order to prevent potential confounding of the results by transcription strand asymmetry [11, 12] or selection. Samples with microsatellite instability (MSI) and POLE mutations were treated as separate groups, since they are associated with specific mutational processes. In total, we detected 25 mutational signatures that each corresponded to one of the COSMIC signatures (http://cancer.sanger.ac.uk/cosmic/signatures) and four novel signatures, which were primarily found in samples that had not been previously used for signature extraction (myeloid blood, skin, MSI, and ovarian cancers; Additional file 2: Figures S1–S5).
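A minimal sketch of how replication direction can be derived from a replication-timing profile, and how a pyrimidine-annotated mutation maps onto the leading or lagging template, is given below. It assumes that higher timing values mean earlier replication and ignores the filtering of flat or noisy regions; it is meant only to illustrate the orientation logic, not to reproduce the exact procedure used here.

```python
import numpy as np

def replication_direction(timing, min_slope=0.0):
    """Label fixed-size genomic bins as right- ('R') or left- ('L') replicating.

    timing : 1-D numpy array of replication-timing values per bin
             (assumed convention: higher value = earlier replication).
    A profile that decreases to the right means the fork moves rightward.
    """
    slope = np.gradient(timing.astype(float))
    return np.where(slope < -min_slope, "R",
           np.where(slope > min_slope, "L", "NA"))

def template_strand(pyrimidine_on_plus_strand, direction):
    """Leading/lagging template for a mutation annotated by its pyrimidine.

    For a rightward-moving fork the plus (reference) strand serves as the
    lagging-strand template; for a leftward-moving fork it is the leading
    template.
    """
    if direction == "R":
        return "lagging" if pyrimidine_on_plus_strand else "leading"
    if direction == "L":
        return "leading" if pyrimidine_on_plus_strand else "lagging"
    return "NA"
```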
Methods overview. a Mutation frequency on the leading and lagging strands is computed using annotated left/right-replicating regions and somatic single-nucleotide mutations oriented according to the strand of the pyrimidine in the base pair. b Leading and lagging strand-specific mutational signatures are extracted using non-negative matrix factorization. c The signatures are clustered and in each cluster a representative signature is selected ("Methods"). In the cluster representatives, each of the 96 mutation types is annotated according to its dominant direction (upwards-facing bars for leading, downwards-facing bars for lagging template preference). d Exposures to the directional signatures are separately quantified for the leading and lagging strands of each patient. The exposure in the matching orientation reflects the extent to which mutations in pyrimidines on the leading (and lagging) strand can be explained by the leading (and lagging) component of the signature, respectively. Conversely, the exposure in the inverse orientation reflects how mutations in pyrimidines on the leading strand can be explained by the lagging component of the signature (or vice versa) ("Methods"). Top part of d shows an example of a sample with completely matching exposure, given the signature in c, with C > T mutations on the leading template and C > A and T > C mutations on the lagging template, whereas the bottom part of d shows an example of a sample with completely inverse exposure. e Example of matching and inverse exposure quantification in individual patients (for a given signature). Significance of the asymmetry of this signature across the cohort is evaluated based on the distribution of difference between the matching and inverse exposures. The histogram shows an example of a signature with significant matching asymmetry. f Signature exposures are next quantified in bins representing four quartiles of replication timing. The graph on the right shows average and standard deviation values in individual quartiles, representing an example of a signature enriched in the late-replicated regions both in the matching and inverse exposures
In total, 21 out of 29 signatures exhibited significant replication strand asymmetry, 23 were significantly correlated with replication timing, and 27 were significant in at least one of these two metrics (signtest p < 0.05, with Benjamini-Hochberg correction; Fig. 2, Additional file 2: Figures S7–S8, S13–S24, Additional file 3: Table S2, Additional file 4: Table S3). Such widespread replication bias across the mutational landscape is surprising, considering that previous reports documented strand bias for only a few mutational processes, such as activity of the APOBEC class of enzymes that selectively edit exposed single-stranded cytosines on the lagging strand [11, 15, 17,18,19]. Including protein coding genes did not qualitatively change the results (Additional file 2: Figure S9a–c), nor did the exclusion of non-coding in addition to protein-coding genes (Additional file 2: Figure S9d–f). Similarly, using SNS-seq data to determine replication strand direction leads to highly similar findings (Additional file 2: Figure S9g–i). Furthermore, we validated that the strand asymmetry and correlation with replication timing is lost when using random genomic loci as replication origins or replication timing domains (Additional file 2: Figures S10 and S11).
Most mutational signatures exhibit a significant replication strand asymmetry and/or correlation with replication timing. a The difference of matching and inverse exposure is computed for each sample and signature. For each signature, the median value of these differences (in samples exposed to this signature) is plotted against -log10 q-value (signtest of strand asymmetry per sample; with Benjamini-Hochberg correction). b Percentage of samples that have higher matching than inverse exposure to the signature denoted above/below each bar. c Correlation of exposures with replication timing. The 20-kbp replication domains were divided into four quartiles by their average replication timing (early-replicated in the first quartile, late-replicated in the last quartile) and exposures to signatures were computed in each quartile. Median slope of correlation with the replication timing is plotted on the x-axis, i.e., values on the right denote more mutations in late-replicated regions, values on the left reflect more mutations in early replicating regions. The y-axis represents significance of the correlation of signature with replication timing in individual samples (signtest of correlation slope per sample; with Benjamini-Hochberg correction). d Percentage of samples with a positive correlation of replication timing with exposure to the signature denoted above/below each bar
Our observations confirm that both APOBEC signatures (2 and 13) exhibit clear strand asymmetry, with signature 13 being the most significantly asymmetric signature (q-value 2e−98). In breast cancer samples, we observed differences in these signatures with respect to replication timing: signature 2, but not signature 13, shows a significant enrichment in late-replicating regions (Fig. 3), which is consistent with previous reports [15]. These results validate that our approach is able to correctly identify strand and timing asymmetries of mutagenic processes. Consequently, we next tried to interpret the replication biases we observed in other mutational signatures.
APOBEC signatures in breast cancers show strong but distinct effects of replication. Column 1: directional signatures for the two APOBEC signatures, showing proportional contributions of individual mutation types (the absolute values sum to one). A maximum absolute value of 0.2 is shown and mutation types exceeding 0.2 are denoted by an asterisk. Column 2: mean signature exposure on the plus (Watson) and minus (Crick) strand around transitions between left- and right-replicating regions. The transition corresponds to a region enriched for replication origins. The bin size is 20 kbp. Column 3: mean signature exposure on the plus and minus strand around directly ascertained replication origins by SNS-seq, with a bin size of 1 kbp. Column 4: distribution of differences between matching and inverse exposure amongst patients with sufficient exposure. Number of outliers is denoted by the small numbers on the sides. Column 5: mean matching and inverse exposure in four quartiles of replication timing (p value is computed as signtest of slopes of correlation with replication timing in individual samples; the median values of the slope in the matching and inverse directions are shown on the right to the bars). The error bars represent standard error of the mean. The leading and lagging strand annotations used in columns 2, 4, and 5 are based on the direction of replication derived from replication timing data. Plots in columns 2–5 are based on samples not defined as outliers (by Tukey fences method with k = 2)
Interestingly, the lack of correlation with replication timing in signature 13 seems to be specific to breast cancers, as other cancer types (such as lung squamous, esophageal adenocarcinoma, and pancreas) show a significant correlation (Additional file 2: Figure S12). Nevertheless, the replication strand asymmetry of both signatures 2 and 13 is significantly present in all these tissues.
Processes directly involving DNA replication or repair
Amongst the better understood mutational mechanisms, several involve replicative processes and DNA repair, such as mismatch-repair deficiency (MMR) [20] or mutations in the proofreading domain of Pol ε ("POLE-MUT samples") [8, 21]. We first analyzed the signatures representing these mechanisms, since they can be directly attributed to a known molecular process. All five signatures previously associated with MMR and the novel MSI-linked signature N4 exhibit a clear trend of replication strand asymmetry (significant in signature (sig.) 6 and N4 in MSI), generally with enrichment of C > T mutations on the leading strand template and C > A and T > C mutations on the lagging strand template (Fig. 4, Additional file 2: Figure S13), in line with the previously suggested role of MMR to balance mutational asymmetries generated by DNA polymerases during replication [9, 11].
Different mutational signatures exhibit characteristic timing and strand asymmetry profiles. Columns show directional signature (column 1), distribution around timing transition regions (column 2), and around replication origins (column 3), per-patient mutation strand asymmetry (column 4; non-significant asymmetry is shown in light-colored histogram), and correlation with replication timing (column 5), as described in Fig. 3. The pie chart shows the proportional contributions by individual tissue types (number of samples weighted by their exposure) and the colors of the tissues are explained in the legend at the bottom. Row 1: signature 6, associated with mismatch-repair deficiency. Rows 2–3: signature 10, associated with POLE errors, shown for patients with known POLE mutations (row 2), and those without (row 3). Row 4: signature 7, representing UV-induced damage. Row 5: signature 17, characteristic of gastric and esophageal cancers. Row 6: signature 5, of unknown etiology, is not discernibly affected by replication
It has previously been proposed that the correlation of overall mutation rate with replication timing (as shown in Fig. 2b) is a direct result of the activity of MMR [22]. In contrast, when splitting the signal by mutational signatures, we observed a more complex relationship. Some MMR signatures in MMR-deficient patients do not significantly correlate with replication timing (sig. 15, 21, 26) or do so only in one direction of replication (such as a negative correlation in the leading direction in sig. 20), whereas others show a clear increase in the late replicated regions (sig. 6 and N4; Additional file 2: Figure S13). We next explored the effect of MMR on correlation with replication timing (irrespective of the replication strand) in all signatures with exposure > 10 in at least four MSI samples. Four signatures exhibited a significant correlation with replication timing in MSI samples (a positive correlation in sig. 6, 18, and N4; negative correlation in sig. 20; Additional file 2: Figure S25). Interestingly, in all these four signatures, the slope of the correlation was significantly steeper in MSI than MSS samples (Additional file 2: Figure S26). Furthermore, some of the other signatures significantly correlated with replication timing in MSS (e.g., sig. 17 and 8) also showed a weak but consistent correlation in MSI. In other signatures (e.g., sig. 1) the correlation is lost. Altogether, these results indicate that MMR is only one of several factors influencing mutagenesis in a timing-dependent manner.
Unexpectedly, two MMR signatures (sig. 6 and N4) showed increased exposures around ORIs (Fig. 4, Additional file 2: Figures S13, S14, S27). Based on experiments in yeast, it has been suggested that MMR is involved in balancing the differences in fidelity of the leading and lagging polymerases [9], in particular repairing errors made by Pol α [9], which primes the leading strand at ORIs and each lagging strand Okazaki fragment [23] and lacks intrinsic proofreading capabilities [24]. It has been recently shown that error-prone Pol α-synthesized DNA is retained in vivo, causing an increase of mutations on the lagging strand [10]. Since regions around ORIs have a higher density of Pol α-synthesized DNA (as discussed, e.g., in [25]), it is possible that increased exposure to signatures 6 and N4 around ORIs is caused by incomplete repair of Pol α-induced errors. The most common Pol α-induced mismatches normally repaired by MMR are G-dT and C-dT, leading to C > T mutations on the leading strand and C > A mutations on the lagging strand [26], matching our observations in the MMR-linked signatures. Notably, we also detected weaker but still significant exposure to MMR signatures in samples with seemingly intact mismatch repair (Additional file 2: Figure S14). Replication strand asymmetry in these samples was substantially smaller, but the higher exposure to signatures 6 and N4 around ORIs remained (Additional file 2: Figure S27). These findings are compatible with a model in which one of the functions of mismatch repair is to balance the effect of mis-incorporation of nucleotides by Pol α. Signatures 6, N4, and possibly 26 appear to reflect this mechanism, while the other MMR signatures might be a result of unrelated functions of MMR, such as its involvement in balancing errors made by other polymerases, e.g., Pol δ.
POLE-MUT samples were previously reported to be "ultra-hypermutated" with excessive C > A and C > T mutations on the leading strand [8, 11, 21]. Mutational signature 10 has been associated with mutations in the proofreading domain of Pol ε, the main leading strand polymerase [23, 27]. We noticed that mutational signature 14 is also strongly associated with POLE mutations in the data sets by Shlien et al. [21], Alexandrov et al. [12], and in The Cancer Genome Atlas (TCGA). Since then, Andrianova et al. confirmed this observation, showing that signature 14 is enriched in POLE-MUT samples with mismatch repair deficiency [28]. As expected, we observed very strong strand asymmetry for these two signatures in all POLE-MUT samples, with an increase of C > A, C > T, and T > G mutations on the leading strand (Fig. 4, Additional file 2: Figure S15). As with MMR signatures, we also found weak but significant evidence of signature 10 and 14 in samples without Pol ε defects (POLE-WT). Strikingly, however, in these samples the strand asymmetry was in the inverse orientation compared to the POLE-MUT samples, i.e., more C > A, C > T, and T > G mutations on the lagging strand (Fig. 4, Additional file 2: Figure S16). Conversely, we detected the presence of two signatures of unknown etiology, signatures 18 and 28, in POLE-MUT samples, but in the inverse orientation compared to POLE-WT samples. We performed two additional analyses in order to validate that this is not an artifact of spurious/wrong associations in the signature exposures decomposition. First, we removed exposures not robust to perturbations (see "Methods") and confirmed that all four signatures (10, 14, 18, and 28) remained significantly strand asymmetric in both POLE-MUT and POLE-WT samples (Additional file 2: Figures S17–S18). Second, we directly compared the frequencies of the most prominent mutation types for each of the four signatures (sig. 10, 14, 18, and 28) in POLE-MUT and POLE-WT samples on the leading and lagging strands. The inverse strand preference observed in the signatures was also detected for individual mutation types. For example, the frequency of mutations in TCT > A, TCG > T, and TTT > G, the three major components of signature 10, is higher on the lagging strand than on the leading strand in POLE-WT samples, whereas it is higher on the leading strand in POLE-MUT (Additional file 2: Figures S28–S31). We therefore hypothesize that POLE-linked signatures are originally caused by a process that affects both strands and, under normal circumstances, is slightly enriched on the lagging strand. This could be caused by certain types of DNA lesions which under normal circumstances are less accurately replicated when on the template of the lagging strand (e.g., due to a lower fidelity of Pol δ or Pol α compared to wild-type Pol ε when replicating these lesions). In POLE-MUT samples the lack of replication-associated proofreading would then lead to a strong relative increase in these mutations on the leading strand, explaining the flipped orientation of signatures.
Apart from replication strand asymmetry, we also observed a significant correlation with replication timing in signatures 10, 14, 18, and 28 in POLE-WT samples (Additional file 2: Figure S16). The correlation was significant only in signature 28 in POLE-MUT samples, but at least 75% of samples showed a positive slope of the correlation also in signatures 10, 14, and 18, and the slope was significantly increased in POLE-MUT compared to MSS POLE-WT samples in signatures 10, 18, and 28 (Additional file 2: Figures S15, S16, S32, S33). Future studies with larger sample size are needed to evaluate the effect of POLE mutations and MMR on the replication timing characteristics on this group of mutational signatures.
Signatures linked to environmental mutagens
We next focused on signatures that have not previously been reported to be connected to replication, or for which the causal mechanism is unknown. Our data show a link between DNA replication and exogenous mutagens such as UV light (signature 7), tobacco smoke (signature 4), or aristolochic acid (AA; signature 22) [29]. In these signatures, we observed marked correlation with replication timing (Fig. 4, Additional file 2: Figures S19 and S20). Higher mutation frequency late in replication has been observed in mouse embryonic fibroblast (MEFs) treated with AA or Benzo[a]pyrene (B[a]P; a mutagen in tobacco smoke) [30]. This increased mutagenicity might be attributed to different DNA damage tolerance pathways being active during early and late replication. Regions replicated early in S-phase are thought to prefer high-fidelity template switching, whereas regions replicated late are more likely to require translesion synthesis (TLS), which has a higher error rate [31,32,33,34,35,36,37]. This is consistent with the observation in yeast that a disruption of TLS leads to decreased mutation frequency in late-replicating regions and therefore a more even distribution of mutation frequency between early and late-replicating regions [32]. In particular, TLS has been observed to increase in activity and mutagenicity later in the cell cycle when replicating DNA damaged by B[a]P [38]. Alternatively, differences in chromatin accessibility could be responsible for the decreased mutagenicity in early-replicated regions. Open chromatin is, on average, replicated earlier and is also more accessible to repair enzymes, which could contribute to the decreased mutation frequency in early-replicating regions [39].
We also observed weak but significant replication strand asymmetry in the mutagen-induced signatures in the tissues associated with the respective mutagen (Additional file 2: Figure S19). Signature 4 has a significant strand asymmetry in lung cancers, similar to signature 7 in skin cancers. Signature 22 in kidney cancers has a small sample size but shows the same trend. This matches a previously observed lower efficiency of bypass of DNA damage on the lagging strand [40] and strong mutational strand asymmetry in cells lacking Pol η, the main TLS polymerase responsible for the replication of UV-induced photolesions [41]. Altogether, our data highlight the importance of replication in converting DNA damage into actual mutations and suggest that bypass of DNA damage occurring on the lagging template results in detectably lower fidelity on this strand.
Signature 17 had the largest median strand asymmetry (sixth largest log2 ratio of the two strands) and also was one of the signatures with the strongest correlations with replication timing (Figs. 2 and 4, Additional file 2: Figure S7). The mutational process causing this signature is unclear. We noted that the timing asymmetry and exposure distribution around ORIs to signature 17 closely resembled that of signatures 4 and 7, suggesting a possible link to DNA damage. Signature 17 is most prominent in gastric cancers and esophageal adenocarcinoma (EAC), where it appears early during disease development [42], and it is also present in Barrett's esophagus (BE), a precursor to EAC [43]. Due to the importance of gastro-esophageal and duodeno-gastric reflux in the development of BE and EAC [44,45,46] and the resulting oxidative stress [47,48,49,50], it has been speculated that oxidative damage could cause the mutation patterns characteristic for signature 17 [51, 52]. Increased oxidative damage to guanine has been reported in the epithelial cells of dysplastic BE as well as after incubation of BE tissue with a cocktail mimicking bile reflux [50]. Oxidative stress affects bases in not only the DNA but also the nucleotide pool, such as the oxidation of dGTP to 8-oxo-dGTP. This oxidized dGTP derivative has been shown to induce T > G transversions [53,54,55] through incorporation by TLS polymerases into DNA opposite A on the template strand [56]. In contrast, oxidation of guanine in the DNA produces 8-oxo-G, which has been shown to result in C > A mutations when paired with adenine during replication [57]. These C > A mutations are normally prevented by DNA glycosylases in the base excision repair pathway, such as MUTYH and OGG1, which repair 8-oxo-G:A pairs to G:C. However, if an 8-oxo-G:A mismatch resulted from incorporation of 8-oxo-dGTP in the de novo synthesized strand, the "repair" to G:C would actually lead to a T > G mutation [57]. Consequently, depletion of MUTYH led to an increase of C > A mutations [57, 58] but a decrease of T > G mutations induced by 8-oxo-dGTP [59]. Importantly, the mismatch of 8-oxo-G and A has been shown in yeast to be more efficiently repaired into G:C when 8-oxo-G is on the lagging strand template [60, 61], resulting in an enrichment of T > G mutations on the lagging strand template if the 8-oxoG:A mismatch originated from incorporation of 8-oxo-dGTP opposite A. Our data show strong lagging-strand bias of T > G mutations and overall higher exposure to signature 17 on the lagging strand, supporting the hypothesis that signature 17 is a by-product of oxidative damage.
DNA methylation-linked mutagenesis
A small but significant strand asymmetry was detected in signature 1 (Additional file 2: Figure S22). This observation is difficult to explain with spontaneous deamination of 5-methylcytosine, the assumed etiology of signature 1. However, the observation would be in line with a previously hypothesized model in which Pol ε has decreased fidelity of replicating 5-methylcytosine, causing an enrichment of C > T mutations in methylated cytosines on the leading strand, especially in samples with deficiency in MMR or Pol ε proofreading [62]. Our analysis shows that signature 1 is slightly enriched on the leading strand, even in samples proficient for post-replicative proofreading and repair. Moreover, we observed that signature 1 is significantly correlated with replication timing in MSS, but not in MSI samples (Additional file 2: Figure S26), in line with the possibility that MMR repairs C > T errors in a CpG context introduced by Pol ε, as MMR is thought to be active primarily in the early-replicated regions [22].
Our findings demonstrate how the relationship between mutational signatures and DNA replication can help to illuminate the mechanisms underlying several currently unexplained mutational processes, as exemplified by signature 17 in esophageal cancer. Crucially, our computational analysis produces testable hypotheses which we anticipate to be experimentally validated in the future; for instance, that bypass of external-mutagen-induced DNA damage (such as UV light, tobacco smoking, and aristolochic acid) is more error prone during synthesis of the lagging strand, or that oxidative damage to the dNTP pool contributes to the etiology of signature 17. Our results also add a new perspective to the recent debate regarding the correlation of tissue-specific cell division rates with cancer risk [3]. It has been argued that this correlation is primarily attributable to "bad luck" in the form of random errors that are introduced during replication by DNA polymerases. Critics of this theory have pointed out that the range of mutational signatures observed in cancer samples makes a purely replication-driven etiology of cancer mutations unlikely [63, 64]. Our analysis at least partially reconciles the two arguments, showing that most mutational signatures are themselves affected by DNA replication, including signatures linked to environmental mutagens. The presence of mutational signatures on the one hand and a strong relationship between replication and the risk of cancer on the other therefore need not be mutually exclusive. In conclusion, our results provide evidence that DNA replication interacts with most processes that introduce mutations in the genome, suggesting that differences amongst DNA polymerases and post-replicative repair enzymes might play a larger part in the accumulation of mutations than previously appreciated.
Somatic mutations
Cancer somatic mutations in 3056 whole-genome sequencing samples (Additional file 1: Table S1, Additional file 5: Table S4) were obtained from the data portal of TCGA, the data portal of the International Cancer Genome Consortium (ICGC), and previously published peer-reviewed studies [12, 21, 51, 65, 66]. For TCGA samples, aligned reads of paired tumor and normal samples were downloaded from the Genomic Data Commons Data Portal website under TCGA access request #10140 and somatic variants were called using Strelka (version 1.0.14) [67] with default parameters. The status of POLE-MUT and MSI samples was obtained from the supplementary data of [11, 21, 66].
Direction of replication
Left- and right-replicating domains were taken from [11] where replication timing profiles were generated in six lymphoblastoid cell lines [68], valleys and peaks (defined as regions with a slope with a magnitude lower than 250 rtu per Mb) were removed, after which left- and right-replicating domains were defined as timing transition regions with a negative and positive slope, respectively [11]. In the left-replicated regions, the reference strand is used as a template for the leading strand, while the opposite strand is used as a template for the lagging strand, and vice versa for the right-replicated regions. Each domain (called territory in the original source code and data) is 20 kbp wide and annotated with the direction of replication and with replication timing.
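As an illustration of this slope-based classification, the step could be implemented along the following lines. This is a Python sketch rather than the original pipeline, and the assumed input layout (one replication timing value per consecutive 20-kbp bin along a chromosome) is a simplification.

```python
# Illustrative sketch (not the original pipeline): classify consecutive 20-kbp
# replication-timing bins as left- or right-replicating from the local slope of
# the timing profile, ignoring near-flat stretches (peaks/valleys).
import numpy as np

def classify_replication_direction(timing, bin_size_bp=20_000, min_slope_per_mb=250):
    """timing: replication timing per consecutive 20-kbp bin along one chromosome."""
    slope_per_mb = np.gradient(timing) / (bin_size_bp / 1e6)  # rtu per Mb
    direction = np.full(len(timing), "flat", dtype=object)
    direction[slope_per_mb >= min_slope_per_mb] = "right"   # positive slope
    direction[slope_per_mb <= -min_slope_per_mb] = "left"   # negative slope
    return direction
```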
Excluded regions
The following regions were excluded: regions with low unique mappability of sequencing reads (positions with mean mappability in 100-bp sliding windows below 0.99 from UCSC mappability track: alignability of 50mers, accession number wgEncodeEH000320), gencode protein coding genes, and blacklisted regions defined by Anshul Kundaje [69] (Anshul_Hg19UltraHighSignalArtifactRegions.bed, Duke_Hg19SignalRepeatArtifactRegions.bed, and wgEncodeHg19ConsensusSignalArtifactRegions.bed from http://mitra.stanford.edu/kundaje/akundaje/release/blacklists/hg19-human/).
Mutation frequency analysis
All variants were classified by the pyrimidine of the mutated Watson-Crick base pair (C or T), the strand of this base pair, and the immediate 5′ and 3′ sequence context into 96 possible mutation types as described by Alexandrov et al. [12]. The frequency of trinucleotides on each strand was computed for each replication domain. Then the mutation frequency of each mutation type in each replication domain on the leading (plus = Watson strand in left-replicating domains; minus = Crick strand in right-replicating domains) and lagging strand (vice versa) was computed for each sample.
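The per-mutation classification could be sketched as follows. This is an illustrative Python reimplementation, not the Matlab code used in the study; the "A[C>T]G"-style string encoding and the 'left'/'right' domain labels are hypothetical choices for this example.

```python
# Illustrative sketch: annotate each SNV by the pyrimidine of the mutated base
# pair and its trinucleotide context, then assign it to the leading or lagging
# strand from the replication direction of its domain.
COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def revcomp(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def classify_mutation(trinuc_ref, alt, domain_direction):
    """trinuc_ref: plus-strand reference trinucleotide centred on the mutated base.
    alt: alternate base on the plus strand.
    domain_direction: 'left' or 'right' replication direction of the 20-kbp domain."""
    ref = trinuc_ref[1]
    if ref in ("C", "T"):                     # pyrimidine already on the plus strand
        mut_type = f"{trinuc_ref[0]}[{ref}>{alt}]{trinuc_ref[2]}"
        pyrimidine_on_plus = True
    else:                                     # re-annotate from the complementary pyrimidine
        rc = revcomp(trinuc_ref)
        mut_type = f"{rc[0]}[{COMPLEMENT[ref]}>{COMPLEMENT[alt]}]{rc[2]}"
        pyrimidine_on_plus = False
    # Leading strand: pyrimidine on the plus (Watson) strand in left-replicating
    # domains, or on the minus (Crick) strand in right-replicating domains.
    if domain_direction == "left":
        strand = "leading" if pyrimidine_on_plus else "lagging"
    else:
        strand = "lagging" if pyrimidine_on_plus else "leading"
    return mut_type, strand

# Example: classify_mutation("AGT", "A", "right") -> ("A[C>T]T", "leading")
```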
Extraction of mutational signatures
Matlab code [12] was used for extraction of strand-specific mutational signatures. The input data were the mutation counts on the leading and lagging strands (summed from all replicating domains together, but without the excluded regions) in each sample. The 192-element-long mutational signatures (example in Fig. 1b) were extracted in each cancer type separately (for the number of signatures K between 2 and 7). The best K with minimal error and maximal stability (minimizing error_K/max(error) + (1 − stability_K) and with stability of at least 0.8) was selected for each cancer type. The stability metric (computed as average silhouette width of clusters of signatures computed by non-negative matrix factorization) represents reproducibility of the model, while the error (computed as the average Frobenius reconstruction error) evaluates the accuracy with which the deciphered mutational signatures and their respective exposures describe the original matrix of mutations. Signatures present in only a small number of samples with very low exposures were excluded ((95th percentile of exposures of this signature)/(Mean total exposure per sample) < 0.2). The remaining signatures were then normalized by the frequency of trinucleotides in the leading and lagging strands and subsequently multiplied by the frequency of trinucleotides in the genome. This made them comparable with the 30 previously identified whole-genome-based COSMIC signatures (http://cancer.sanger.ac.uk/cosmic/signatures). Signatures extracted in each cancer type and COSMIC signatures were all pooled together (with equal values in the leading and lagging parts in the COSMIC signatures) and were clustered using unsupervised hierarchical clustering (with cosine distance and complete linkage). A threshold of 27 signatures was selected to identify clusters of similar signatures. Mis-clustering was avoided by manual examination (and whenever necessary re-assignment) of all signatures in all clusters (Additional file 2: Figure S6). The resulting 29 signatures (representing the detected clusters) contained 25 previously observed (COSMIC) and four new signatures. For the subsequent analysis, the signatures were converted back to 96 values: the 25 COSMIC signatures were used in their original form (i.e., having the same values as on the COSMIC website) and for the four newly identified signatures we used an average of the leading and lagging parts of the 192-element-long signatures.
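The selection rule for the number of signatures K can be expressed compactly. The sketch below assumes that the per-K reconstruction error and stability values have already been computed from repeated NMF runs; it is an illustration, not the authors' code.

```python
# Sketch of the model-size selection rule described above.
import numpy as np

def select_number_of_signatures(ks, errors, stabilities, min_stability=0.8):
    errors = np.asarray(errors, dtype=float)
    stabilities = np.asarray(stabilities, dtype=float)
    score = errors / errors.max() + (1.0 - stabilities)      # lower is better
    score[stabilities < min_stability] = np.inf              # require stability >= 0.8
    if not np.isfinite(score).any():
        raise ValueError("no candidate K reaches the required stability")
    return ks[int(np.argmin(score))]

# Example: select_number_of_signatures([2, 3, 4, 5],
#                                      [10.0, 7.5, 6.9, 6.8],
#                                      [0.95, 0.90, 0.82, 0.60])  # -> 3
```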
Annotation of signatures with leading and lagging direction
Each signature was annotated with the dominant strand direction (leading vs lagging) in each of the 96 mutation types (Fig. 1c). This was based on the dominant strand direction within the signature's cluster. Types with unclear direction and small values were assigned according to the predominant direction of other trinucleotides of the same mutation group, such as C > T.
Calculating strand-specific exposures in individual samples
Exposures to leading and lagging parts of the signatures on the leading and lagging strands in individual samples were quantified using non-negative least squares regression using the Matlab function e = lsqnonneg(S, m), where:
$$ S=\begin{pmatrix} S_{LD} & S_{LG}\\ S_{LG} & S_{LD}\end{pmatrix},\quad m=\begin{pmatrix} m_{LD}\\ m_{LG}\end{pmatrix},\quad e=\begin{pmatrix} e_{\mathrm{matching}}\\ e_{\mathrm{inverse}}\end{pmatrix}. $$
The matrix SLD has 96 rows and 29 columns and represents the leading parts of the signatures, i.e., the elements of the lagging parts contain zeros in this matrix. Similarly, SLG has the same size but contains zeros in the leading parts. The vector mLD of length 96 contains mutations on the leading strand (again normalized by trinucleotides in leading strand/whole genome), and similarly mLG contains mutations from the lagging strand. Finally, lsqnonneg finds a non-negative vector of exposures e such that it minimizes a function |m – S · e|. A similar approach has been used in [70] for finding exposures to a given set of signatures. Our extension includes the strand-specificity of the signatures. The interpretation of the model is that the matching exposure ematching represents exposure of the leading part of the signature on the leading strand and exposure of the lagging part of the signature on the lagging strand, whereas einverse represents the two remaining options. It is important to note that the direction of the mutation is relative to the nucleotide in the base pair chosen as the reference, i.e., mutations of a pyrimidine on the leading strand correspond to mutations of a purine on the lagging strand. In order to minimize the number of spurious signature exposures, the least exposed signature was incrementally removed (in both leading and lagging parts) as long as the resulting error did not exceed the original error by 0.5%. The resulting reported values in each sample and signature were the difference (or fold change) of ematching and einverse. In each signature, the signtest was used to compare matching and inverse exposures across samples with sufficient minimal exposure (at least 10) to the signature. Benjamini-Hochberg correction was applied to correct for multiple testing.
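A minimal sketch of this fit is shown below, using SciPy's non-negative least squares in place of Matlab's lsqnonneg. Variable names follow the equation above; the incremental removal of weakly exposed signatures described in the text is omitted here.

```python
# Minimal sketch of the strand-specific exposure fit.
import numpy as np
from scipy.optimize import nnls

def strand_specific_exposures(S_LD, S_LG, m_LD, m_LG):
    """S_LD, S_LG: 96 x n_sig leading/lagging parts of the directional signatures
    (zeros at mutation types annotated with the opposite direction).
    m_LD, m_LG: length-96 normalized mutation counts on the leading/lagging strand."""
    S = np.block([[S_LD, S_LG],
                  [S_LG, S_LD]])              # 192 x (2 * n_sig)
    m = np.concatenate([m_LD, m_LG])          # length 192
    e, residual = nnls(S, m)                  # minimizes |m - S e| subject to e >= 0
    n_sig = S_LD.shape[1]
    e_matching, e_inverse = e[:n_sig], e[n_sig:]
    return e_matching, e_inverse, residual
```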
Replication origins
The left/right transitions of the replication domains represent regions with, on average, higher density of replication origins. In order to get better resolution of the replication origins, and to validate the results using an independent estimates of left- and right-replicating domains, genome-wide maps of human replication origins from SNS-seq by [13] were used. Eight fastq files (HeLa, iPS, hESC, IMR; each with two replicates) were downloaded and mapped to hg19 using bowtie2 (version 2.1.0). To control for the inefficient digestion of λ-exo step of SNS-seq, reads from non-replicating genomic DNA (LexoG0) were used as a control [14]. Peaks were called using "macs callpeak" with parameters --gsize = hs --bw = 200 --qvalue = 0.05 --mfold 5 50 and LexoG0 mapped reads as a control. Only peaks covered in at least seven of the eight samples were used. We generated 1000 1-kbp bins to the left and right of each origin, as long as they did not reach half the distance to the next origin. We then used these replication direction annotations in the 1-kbp bins to calculate strand-specific exposures in individual samples as above and ascertained that both approaches lead to qualitatively very similar mutational strand asymmetries in individual signatures (Additional file 2: Figure S9).
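The origin-centred binning rule can be sketched as follows. This is an illustrative Python fragment; origin positions are assumed to be given as sorted coordinates on a single chromosome.

```python
# Sketch: 1-kbp bins extend away from each origin only until they would cross
# half the distance to the neighbouring origin.
import numpy as np

def bins_around_origins(origin_positions, n_bins=1000, bin_size=1000):
    origins = np.sort(np.asarray(origin_positions))
    bins = []  # (start, end, signed bin index relative to the origin)
    for i, ori in enumerate(origins):
        left_limit = (ori - origins[i - 1]) / 2 if i > 0 else np.inf
        right_limit = (origins[i + 1] - ori) / 2 if i + 1 < len(origins) else np.inf
        for k in range(1, n_bins + 1):
            if k * bin_size <= right_limit:
                bins.append((ori + (k - 1) * bin_size, ori + k * bin_size, +k))
            if k * bin_size <= left_limit:
                bins.append((ori - k * bin_size, ori - (k - 1) * bin_size, -k))
    return bins
```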
Quantification of exposures with respect to replication timing, left/right transitions, and replication origins
Replication domains were divided into four quartiles by their average replication timing. The entire exposure quantification was computed separately in each quartile, or bin around left/right transition or bin around replication origin. In replication timing plots, a linear regression model (function fitlm in MatLab) was fitted to the mean exposure in each quartile (separately for matching and inverse exposures) and signtest on slopes of the fits in individual samples was used to evaluate significance of correlation with replication timing across the cohort. Benjamini-Hochberg correction was applied to correct for multiple testing.
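A sketch of the per-signature timing analysis is given below, using SciPy's linregress and a binomial sign test in place of Matlab's fitlm and signtest. The assumed input layout (one row per sample, one column per replication-timing quartile) is an illustration; the Benjamini-Hochberg correction across signatures would then be applied to the resulting p values, as in the original analysis.

```python
# Illustrative sketch of the quartile regression and per-sample sign test.
import numpy as np
from scipy.stats import linregress, binomtest

def timing_correlation(exposure_by_quartile):
    """exposure_by_quartile: array of shape (n_samples, 4); columns are the
    early-to-late replication-timing quartiles."""
    quartiles = np.arange(1, 5)
    slopes = np.array([linregress(quartiles, row).slope for row in exposure_by_quartile])
    nonzero = slopes[slopes != 0]
    # Two-sided sign test on the per-sample slopes.
    p = binomtest(int((nonzero > 0).sum()), n=len(nonzero), p=0.5).pvalue
    return float(np.median(slopes)), p
```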
Evaluation of robustness to noise in signature exposures
For each sample, 1000 perturbations of the input mutation frequency vector were performed, adding noise generated from a normal distribution with mean 0 and standard deviation equal to 5% of the original mutation frequency. Exposures to signatures were quantified for each perturbation. Signatures with extremely variable exposures ((qtl75 − x)/x ≥ 0.3 or (x − qtl25)/x ≥ 0.3, where x is the unperturbed exposure and qtl25 and qtl75 are the 25th and 75th percentiles of the perturbed exposures) were removed for this sample, and the exposures were then re-computed using only the signatures that passed the filtering.
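A minimal sketch of this perturbation filter is shown below; compute_exposures is a stand-in for the exposure fit described above and is not defined here.

```python
# Illustrative sketch of the perturbation-based robustness filter.
import numpy as np

def robust_signature_mask(m, signatures, compute_exposures,
                          n_perturb=1000, noise=0.05, tol=0.3, seed=None):
    rng = np.random.default_rng(seed)
    base = np.asarray(compute_exposures(m, signatures), dtype=float)
    perturbed = np.array([
        compute_exposures(m + rng.normal(0.0, noise * np.abs(m)), signatures)
        for _ in range(n_perturb)
    ])
    q25, q75 = np.percentile(perturbed, [25, 75], axis=0)
    denom = np.where(base > 0, base, np.inf)        # zero-exposure signatures pass trivially
    unstable = ((q75 - base) / denom >= tol) | ((base - q25) / denom >= tol)
    return ~unstable   # exposures are then re-fit using only the robust signatures
```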
Secrier M, Li X, de Silva N, Eldridge MD, Contino G, Bornschein J, et al. Mutational signatures in esophageal adenocarcinoma define etiologically distinct subgroups with therapeutic relevance. Nat Genet. 2016;2016:1131–41. Available from: http://www.nature.com/doifinder/10.1038/ng.3659.
Stenzinger A, Pfarr N, Endris V, Penzel R, Jansen L, Wolf T, et al. Mutations in POLE and survival of colorectal cancer patients--link to disease stage and treatment. Cancer Med. 2014;3:1527–38.
Tomasetti C, Vogelstein B. Variation in cancer risk among tissues can be explained by the number of stem cell divisions. Science. 2015;347:78–81. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25554788.
Lujan SA, Williams JS, Kunkel TA. DNA polymerases divide the labor of genome replication. Trends Cell Biol. 2016;26:640–54. Available from: https://doi.org/10.1016/j.tcb.2016.04.012.
Fragkos M, Ganier O, Coulombe P, Méchali M. DNA replication origin activation in space and time. Nat Rev Mol Cell Biol. 2015;16:360–74. Available from: https://doi.org/10.1038/nrm4002%5Cnhttp://www.ncbi.nlm.nih.gov/pubmed/25999062.
Stamatoyannopoulos JA, Adzhubei I, Thurman RE, Kryukov GV, Mirkin SM, Sunyaev SR. Human mutation rate associated with DNA replication timing. Nat Genet. 2009;41:393–5.
Lawrence MS, Stojanov P, Polak P, Kryukov G V, Cibulskis K, Sivachenko A, et al. Mutational heterogeneity in cancer and the search for new cancer-associated genes. Nature. 2013;499:214–218. Available from: https://doi.org/10.1038/nature12213.
Shinbrot E, Henninger EE, Weinhold N, Covington KR, Göksenin AY, Schultz N, et al. Exonuclease mutations in DNA polymerase epsilon reveal replication strand specific mutation patterns and human origins of replication. Genome Res. 2014:1740–50. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25228659.
Lujan SA, Williams JS, Pursell ZF, Abdulovic-Cui AA, Clark AB, Nick McElhinny SA, et al. Mismatch repair balances leading and lagging strand DNA replication Fidelity. PLoS Genet. 2012;8:e1003016.
Reijns MAM, Kemp H, Ding J, de Procé SM, Jackson AP, Taylor MS. Lagging-strand replication shapes the mutational landscape of the genome. Nature. 2015;518:502–6. Available from: http://www.nature.com/nature/journal/v518/n7540/full/nature14183.html.
Haradhvala NJ, Polak P, Stojanov P, Covington KR, Shinbrot E, Hess JM, et al. Mutational strand asymmetries in cancer genomes reveal mechanisms of DNA damage and repair. Cell. 2016;164:538–49. Available from: http://linkinghub.elsevier.com/retrieve/pii/S0092867415017146.
Alexandrov LB, Nik-Zainal S, Wedge DC, Aparicio SAJR, Behjati S, Biankin AV, et al. Signatures of mutational processes in human cancer. Nature. 2013;500:415–21. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3776390&tool=pmcentrez&rendertype=abstract.
Besnard E, Babled A, Lapasset L, Milhavet O, Parrinello H, Dantec C, et al. Unraveling cell type-specific and reprogrammable human replication origin signatures associated with G-quadruplex consensus motifs. Nat Struct Mol Biol. 2012;19:837–44.
Foulk MS, Urban JM, Casella C, Gerbi SA. Characterizing and controlling intrinsic biases of lambda exonuclease in nascent strand sequencing reveals phasing between nucleosomes and G-quadruplex motifs around a subset of human replication origins. Genome Res. 2015;25:725–35.
Morganella S, Alexandrov LB, Glodzik D, Zou X, Davies H, Staaf J, et al. The topography of mutational processes in breast cancer genomes. Nat Commun. 2016;7:11383. Available from: http://www.nature.com/doifinder/10.1038/ncomms11383.
Alexandrov LB, Nik-Zainal S, Wedge DC, Campbell PJ, Stratton MR. Deciphering signatures of mutational processes operative in human cancer. Cell Rep. 2013;3:246–59. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3588146&tool=pmcentrez&rendertype=abstract
Hoopes JI, Cortez LM, Mertz TM, Malc EP, Mieczkowski PA, Roberts SA. APOBEC3A and APOBEC3B preferentially deaminate the lagging strand template during DNA replication. Cell Rep. 2016;14:1273–82. Available from: http://linkinghub.elsevier.com/retrieve/pii/S2211124716000425.
Green AM, Landry S, Budagyan K, Avgousti DC, Shalhout S, Bhagwat AS, et al. APOBEC3A damages the cellular genome during DNA replication. Cell Cycle. 2016;15:998–1008. Available from: https://doi.org/10.1080/15384101.2016.1152426.
Seplyarskiy VB, Soldatov RA, Popadin KY, Antonarakis SE, Bazykin GA, Nikolaev SI. APOBEC-induced mutations in human cancers are strongly enriched on the lagging DNA strand during replication. Genome Res. 2016;26:174–82.
Zhao H, Thienpont B, Yesilyurt BT, Moisse M, Reumers J, Coenegrachts L, et al. Mismatch repair deficiency endows tumors with a unique mutation signature and sensitivity to DNA double-strand breaks. elife. 2014;3:e02725.
Shlien A, Campbell BB, de Borja R, Alexandrov LB, Merico D, Wedge D, et al. Combined hereditary and somatic mutations of replication error repair genes result in rapid onset of ultra-hypermutated cancers. Nat Genet. 2015;47:257–62. Available from: http://www.nature.com/doifinder/10.1038/ng.3202.
Supek F, Lehner B. Differential DNA mismatch repair underlies mutation rate variation across the human genome. Nature. 2015;521:81–4.
Stillman B. DNA polymerases at the replication fork in eukaryotes. Mol Cell. 2008;30:259–60.
McCulloch SD, Kunkel TA. The fidelity of DNA synthesis by eukaryotic replicative and translesion synthesis polymerases. Cell Res. 2008;18:148–61.
Waisertreiger IS-R, Liston VG, Menezes MR, Kim H-M, Lobachev KS, Stepchenkova EI, et al. Modulation of mutagenesis in eukaryotes by DNA replication fork dynamics and quality of nucleotide pools. Environ Mol Mutagen. 2012;53:699–724. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23055184.
Nick McElhinny SA, Kissling GE, Kunkel TA. Differential correction of lagging-strand replication errors made by DNA polymerases α and δ. Proc Natl Acad Sci U S A. 2010;107:21070–5.
Georgescu RE, Schauer GD, Yao NY, Langston LD, Yurieva O, Zhang D, et al. Reconstitution of a eukaryotic replisome reveals suppression mechanisms that define leading/lagging strand operation. elife. 2015;2015:1–20.
Andrianova MA, Bazykin GA, Nikolaev SI, Seplyarskiy VB. Human mismatch repair system balances mutation rates between strands by removing more mismatches from the lagging strand. Genome Res. 2017;27:1336–43.
Helleday T, Eshtad S, Nik-Zainal S. Mechanisms underlying mutational signatures in human cancers. Nat Rev Genet. 2014;15:585–98. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24981601.
Nik-Zainal S, Kucab JE, Morganella S, Glodzik D, Alexandrov LB, Arlt VM, et al. The genome as a record of environmental exposure. Mutagenesis. 2015;30:763–70.
Waters LS, Walker GC. The critical mutagenic translesion DNA polymerase Rev1 is highly expressed during G(2)/M phase rather than S phase. Proc Natl Acad Sci U S A. 2006;103:8971–6.
Lang GI, Murray AW. Mutation rates across budding yeast chromosome VI are correlated with replication timing. Genome Biol Evol. 2011;3:799–811.
Karras GI, Fumasoni M, Sienski G, Vanoli F, Branzei D. Article noncanonical role of the 9-1-1 clamp in the error-free DNA damage tolerance pathway. Mol Cell. 2013;49:536–46. Available from: https://doi.org/10.1016/j.molcel.2012.11.016.
Gonzalez-Huici V, Szakal B, Urulangodi M, Psakhye I, Castellucci F, Menolfi D, et al. DNA bending facilitates the error-free DNA damage tolerance pathway and upholds genome integrity. EMBO J. 2014;33:327–40.
Bi X. Mechanism of DNA damage tolerance. World J Biol Chem. 2015;6:48. Available from: http://www.wjgnet.com/1949-8454/full/v6/i3/48.htm.
Branzei D, Szakal B. DNA damage tolerance by recombination: Molecular pathways and DNA structures. DNA Repair (Amst). 2016;44:68–75. Available from: https://doi.org/10.1016/j.dnarep.2016.05.008.
D'Souza S, Yamanaka K, Walker GC. Non mutagenic and mutagenic DNA damage tolerance. Cell Cycle. 2016;15:314–5. Available from: https://doi.org/10.1080/15384101.2015.1132909.
Diamant N, Hendel A, Vered I, Carell T, Reißner T, De Wind N, et al. DNA damage bypass operates in the S and G2 phases of the cell cycle and exhibits differential mutagenicity. Nucleic Acids Res. 2012;40:170–80.
Adar S, Hu J, Lieb JD, Sancar A. Genome-wide kinetics of DNA excision repair in relation to chromatin state and mutagenesis. Proc Natl Acad Sci U S A. 2016:201603388. Available from: http://www.pnas.org/lookup/doi/10.1073/pnas.1603388113.
Cordeiro-Stone M, Nikolaishvili-Feinberg N. Asymmetry of DNA replication and translesion synthesis of UV-induced thymine dimers. Mutat Res. 2002;510:91–106.
McGregor WG, Wei D, Maher VM, McCormick JJ. Abnormal, error-prone bypass of photoproducts by xeroderma pigmentosum variant cell extracts results in extreme strand bias for the kinds of mutations induced by UV light. Mol Cell Biol. 1999;19:147–54. Available from: http://mcb.asm.org/content/19/1/147.abstract.
Murugaesu N, Wilson GA, Birkbak NJ, Watkins TBK, McGranahan N, Kumar S, et al. Tracking the genomic evolution of esophageal adenocarcinoma through neoadjuvant chemotherapy. Cancer Discov. 2015;5:821–32.
Ross-Innes CS, Becq J, Warren A, Cheetham RK, Northen H, O'Donovan M, et al. Whole-genome sequencing provides new insights into the clonal architecture of Barrett's esophagus and esophageal adenocarcinoma. Nat Genet. 2015;47:1–11. Available from: http://www.nature.com/doifinder/10.1038/ng.3357.
Souza RF. The role of acid and bile reflux in oesophagitis and Barrett's metaplasia. Biochem Soc Trans. 2010;38:348–52. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3072824&tool=pmcentrez&rendertype=abstract.
Erichsen R, Robertson D, Farkas DK, Pedersen L, Pohl H, Baron JA, et al. Erosive reflux disease increases risk for esophageal adenocarcinoma, compared with nonerosive reflux. Clin Gastroenterol Hepatol. 2012;10:475–480.e1. Available from: https://doi.org/10.1016/j.cgh.2011.12.038.
Fein M, Maroske J, Fuchs KH. Importance of duodenogastric reflux in gastro-oesophageal reflux disease. Br J Surg. 2006;93:1475–82.
Kauppi J, Räsänen J, Sihvo E, Nieminen U, Arkkila P, Ahotupa M, et al. Increased oxidative stress in the proximal stomach of patients with Barrett's esophagus and adenocarcinoma of the esophagus and Esophagogastric junction. Transl Oncol. 2016;9:336–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/27567957.
Rasanen JV, Sihvo EIT, Ahotupa MO, Färkkilä MA, Salo JA. The expression of 8-hydroxydeoxyguanosine in oesophageal tissues and tumours. Eur J Surg Oncol. 2007;33:1164–8.
Jimenez P, Piazuelo E, Sanchez MT, Ortego J, Soteras F, Lanas A. Free radicals and antioxidant systems in reflux esophagitis and Barrett's esophagus. World J Gastroenterol. 2005;11:2697–703. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15884106.
Dvorak K, Payne CM, Chavarria M, Ramsey L, Dvorakova B, Bernstein H, et al. Bile acids in combination with low pH induce oxidative stress and oxidative DNA damage: relevance to the pathogenesis of Barrett's oesophagus. Gut. 2007;56:763–71.
Dulak AM, Stojanov P, Peng S, Lawrence MS, Fox C, Stewart C, et al. Exome and whole-genome sequencing of esophageal adenocarcinoma identifies recurrent driver events and mutational complexity. Nat Genet. 2013;45:478–86. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3678719&tool=pmcentrez&rendertype=abstract.
Nones K, Waddell N, Wayte N, Patch A-M, Bailey P, Newell F, et al. Genomic catastrophes frequently arise in esophageal adenocarcinoma and drive tumorigenesis. Nat Commun. 2015;5:1–9. Available from: https://doi.org/10.1038/ncomms6224%5Cnpapers2://publication/doi/10.1038/ncomms6224.
Inoue M, Kamiya H, Fujikawa K, Ootsuyama Y, Murata-Kamiya N, Osaki T, et al. Induction of chromosomal gene mutations in Escherichia coli by direct incorporation of oxidatively damaged nucleotides: new evaluation method for mutagenesis by damaged dna precursors in vivo. J Biol Chem. 1998;273:11069–74.
Satou K, Kawai K, Kasai H, Harashima H, Kamiya H. Mutagenic effects of 8-hydroxy-dGTP in live mammalian cells. Free Radic Biol Med. 2007;42:1552–60. Available from: https://doi.org/10.1016/j.freeradbiomed.2007.02.024.
Satou K, Hori M, Kawai K, Kasai H, Harashima H, Kamiya H. Involvement of specialized DNA polymerases in mutagenesis by 8-hydroxy-dGTP in human cells. DNA Repair (Amst). 2009;8:637–42.
Kamiya H. Mutations induced by oxidized DNA precursors and their prevention by nucleotide pool sanitization enzymes. Genes Environ. 2007;29:133–40.
Suzuki T, Kamiya H. Mutations induced by 8-hydroxyguanine (8-oxo-7,8-dihydroguanine), a representative oxidized base, in mammalian cells. Genes Environ. 2017;39:2. Available from: https://doi.org/10.1186/s41021-016-0051-y.
Rashid M, Fischer A, Wilson CH, Tiffen J, Rust AG, Stevens P, et al. Adenoma development in familial adenomatous polyposis and MUTYH-associated polyposis: somatic landscape and driver genes. J Pathol. 2016;238:98–108.
Suzuki T, Harashima H, Kamiya H. Effects of base excision repair proteins on mutagenesis by 8-oxo-7,8-dihydroguanine (8-hydroxyguanine) paired with cytosine and adenine. DNA Repair (Amst). 2010;9:542–50. Available from: https://doi.org/10.1016/j.dnarep.2010.02.004.
Pavlov YI, Mian IM, Kunkel TA. Evidence for preferential mismatch repair of lagging strand DNA replication errors in yeast. Curr Biol. 2003;13:744–8.
Mudrak SV, Welz-Voegele C, Jinks-Robertson S. The polymerase eta translesion synthesis DNA polymerase acts independently of the mismatch repair system to limit mutagenesis caused by 7,8-dihydro-8-oxoguanine in yeast. Mol Cell Biol. 2009;29:5316–26. Available from: http://www.scopus.com/inward/record.url?eid=2-s2.0-70349329456&partnerID=tZOtx3y1.
Tomkova M, McClellan M, Kriaucionis S, Schuster-Böckler B. DNA replication and associated repair pathways are involved in the mutagenesis of methylated cytosine. DNA Repair (Amst). 2017. Available from: https://www.sciencedirect.com/science/article/pii/S1568786417303464.
Gao Z, Wyman MJ, Sella G, Przeworski M. Interpreting the dependence of mutation rates on age and time. PLoS Biol. 2016;14:e1002355. Available from: http://dx.plos.org/10.1371/journal.pbio.1002355.
Crossan GP, Garaycoechea JI, Patel KJ. Do mutational dynamics in stem cells explain the origin of common cancers? Cell Stem Cell. 2015;16:111–2. Available from: https://doi.org/10.1016/j.stem.2015.01.009.
Bass AJ, Lawrence MS, Brace LE, Ramos AH, Drier Y, Cibulskis K, et al. Genomic sequencing of colorectal adenocarcinomas identifies a recurrent VTI1A-TCF7L2 fusion. Nat Genet. 2011;43:964–8. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3802528&tool=pmcentrez&rendertype=abstract.
Wang K, Yuen ST, Xu J, Lee SP, Yan HHN, Shi ST, et al. Whole-genome sequencing and comprehensive molecular profiling identify new driver mutations in gastric cancer. Nat Genet. 2014;46:573–82. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24816253.
Saunders CT, Wong WSW, Swamy S, Becq J, Murray LJ, Cheetham RK. Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs. Bioinformatics. 2012;28:1811–7.
Koren A, Polak P, Nemesh J, Michaelson JJ, Sebat J, Sunyaev SR, et al. Differential relationship of DNA replication timing to different forms of human mutation and variation. Am J Hum Genet. 2012;91:1033–40. Available from: https://doi.org/10.1016/j.ajhg.2012.10.018.
Encode Consortium. An integrated encyclopedia of DNA elements in the human genome. Nature. 2012;489:57–74. Available from: http://www.nature.com/nature/journal/v489/n7414/full/nature11247.html.
Rosenthal R, McGranahan N, Herrero J, Taylor BS, Swanton C. deconstructSigs: delineating mutational processes in single tumors distinguishes DNA repair deficiencies and patterns of carcinoma evolution. Genome Biol. 2016;17:31. Available from: http://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0893-4.
Tomkova M, Tomek J, Kriaucionis S, Schuster-Bockler B. Mutational signature distribution varies with DNA replication timing and strand asymmetry. Source code. Bitbucket. https://bitbucket.org/bsblabludwig/replicationasymmetry (2018).
Tomkova M, Tomek J, Kriaucionis S, Schuster-Bockler B. Mutational signature distribution varies with DNA replication timing and strand asymmetry. Source code. Figshare. 2018; https://doi.org/10.6084/m9.figshare.6941456.
We thank Dr. Mary Muers for comments on the manuscript.
S.K. and B.S.-B. are funded by Ludwig Cancer Research. S.K. received funding from BBSRC grant BB/M001873/1. M.T. and J.T. are funded by EPSRC (EP/F500394/1).
All data used in this study were downloaded from public repositories. A complete list of whole-genome sequencing data sets and their accession information can be found in Additional file 1: Table S1 and the list of all whole-genome sequencing samples and signature exposures are in Additional file 5: Table S4. The replication direction [11] was taken from http://software.broadinstitute.org/cancer/cga/AsymTools, table per_base_territories_20kb.mat. The replication origins measured by SNS-seq [13] were based on the supplementary files of National Center for Biotechnology Information Gene Expression Omnibus accession GSE37757. The strand-specific signatures are in Additional file 6: Table S5. The code is available at https://bitbucket.org/bsblabludwig/replicationasymmetry [71] and Figshare (https://figshare.com/s/21174d60d594ddb9b83d, DOI https://doi.org/10.6084/m9.figshare.6941456) [72] under GPL3 license.
Ludwig Cancer Research Oxford, University of Oxford, Old Road Campus Research Building, Oxford, OX3 7DQ, UK
Marketa Tomkova, Skirmantas Kriaucionis & Benjamin Schuster-Böckler
Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
Jakub Tomek
BS-B and MT designed the study; MT performed the analysis with contributions from JT; BS-B and MT wrote the manuscript with contributions from SK and JT. All authors read and approved the final manuscript.
Correspondence to Benjamin Schuster-Böckler.
Table S1. Overview of the used whole-genome sequencing samples. (PDF 211 kb)
Figures S1–S33. Supplementary figures. (PDF 6116 kb)
Table S2. Overview of the strand asymmetry and correlation with replication timing in mutational signatures. (PDF 205 kb)
Table S3. Values of data points. (XLSX 12 kb)
Table S4. List of all samples. (XLSX 1426 kb)
Table S5. Values of the strand-specific mutational signatures. (XLSX 51 kb)
Review history. (DOCX 64 kb)
Tomkova, M., Tomek, J., Kriaucionis, S. et al. Mutational signature distribution varies with DNA replication timing and strand asymmetry. Genome Biol 19, 129 (2018). https://doi.org/10.1186/s13059-018-1509-y
Melting of generalized Wigner crystals in transition metal dichalcogenide heterobilayer Moiré systems
Michael Matty & Eun-Ah Kim (ORCID: orcid.org/0000-0002-9554-4443)
Nature Communications volume 13, Article number: 7098 (2022)
Moiré superlattice systems such as transition metal dichalcogenide heterobilayers have garnered significant recent interest due to their promising utility as tunable solid state simulators. Recent experiments on a WSe2/WS2 heterobilayer detected incompressible charge ordered states that one can view as generalized Wigner crystals. The tunability of the transition metal dichalcogenide heterobilayer Moiré system presents an opportunity to study the rich set of possible phases upon melting these charge-ordered states. Here we use Monte Carlo simulations to study these intermediate phases in between incompressible charge-ordered states in the strong coupling limit. We find two distinct stripe solid states to be each preceded by distinct types of nematic states. In particular, we discover microscopic mechanisms that stabilize each of the nematic states, whose order parameter transforms as the two-dimensional E representation of the Moiré lattice point group. Our results provide a testable experimental prediction of where both types of nematic occur, and elucidate the microscopic mechanism driving their formation.
The promise of a highly tunable lattice system that can allow solid-state-based simulation of strong coupling physics1,2,3 has largely driven the explosion of efforts studying Moiré superlattices. The transition metal dichalcogenide (TMD) heterobilayer Moiré systems with zero twist-angle (see Fig. 1a) and localized Wannier orbitals form a uniquely simple platform to explore phases driven by strong interactions4,5,6. Many theoretical efforts went into the study of charge order in TMD moiré systems at commensurate fillings. Specifically, incompressible charge-ordered states at commensurate fillings have been theorized based on Monte Carlo simulations6,7,8 and Hartree-Fock calculations9. Moreover, refs. 10, 11 discuss predictions of specific experimental systems capable of realizing such states based on estimated material parameters. These incompressible charge-ordered states have been detected experimentally in a WS2/WSe2 system as well5,6,7,12. However, an understanding of the phase diagram when one tunes the density away from the incompressible states into the compressible region is still lacking.
Fig. 1: Electronic states in TMD Moiré systems.
a The red and blue dots show the sites of two honeycomb lattices whose lattice constants differ by 5%. These lattices are layered at zero twist-angle, resulting in an emergent triangular Moiré lattice with a unit cell indicated by the black lines. In the case of TMD heterobilayers, the Moiré lattice has point group C3. b Top: optical anisotropy as a function of Moiré lattice filling, reproduced with permission from ref. 16. Bottom: charge order patterns at 1/3-, 2/5-, and 1/2-filling as determined by Monte Carlo, reproduced with permission from ref. 7. c Here we show the Moiré unit cell with occupied lattice sites represented by blue dots, and the bond connecting nearest neighbor pairs colored according to its orientation. On a lattice with C3 symmetry, there are two distinct types of nematic states. Type-I nematics (left) have a nematic director oriented along a single majority bond orientation at θ ∈ {0, π/3, 2π/3} and have \(\langle \cos (6\theta )\rangle=1\). Type-II nematics (right) have a nematic director oriented perpendicular to a single minority bond orientation at θ ∈ {π/6, π/2, 5π/6} and have \(\langle \cos (6\theta )\rangle=-1\). d The critical temperature as a function of Moiré lattice filling as determined by Monte Carlo. At 1/3-, 2/5-, and 1/2-filling we find the same charge-ordered states as in (b). Between 2/5-filling and 1/2-filling we find a type-I nematic state defined by short-range domains of the 1/2-filled charge stripe state. Above 1/3-filling, we find an isotropic state defined by hexagonal domains of the 1/3 generalized Wigner crystal, which eventually gives way to a type-II nematic state defined by fragmented domains of the 2/5-filled columnar dimer crystal. For the two isotropic phases we determine Tc from the integrated peak weight of the structure factor. For the remaining anisotropic phases we use the jump in the nematic correlation function.
The incompressible charge orders can be viewed as generalized Wigner crystalline states that reduce the symmetry of the underlying moiré lattice, as they are driven by the long-range Coulomb interaction. The density-controlled melting of Wigner crystals is expected to result in a rich hierarchy of intermediate phases13,14,15. While a microscopic theoretical study of Wigner crystal melting is challenging due to the continuous spatial symmetry, the melting of generalized Wigner crystals is more amenable to a microscopic study due to the lattice. The observation of intermediate compressible states with optical anisotropy16 (see Fig. 1b) and the tunability beyond density17,18 present a tantalizing possibility to study melting and the possible intermediate phases of the generalized Wigner crystals.
The underlying lattice in the generalized Wigner crystal reduces the continuous rotational symmetry to C3 point group symmetry. Reference 19 studied the melting of a 1/3-filled crystalline state on a triangular lattice in the context of krypton adsorbed on graphene. Based on the free energy costs of the domain walls and domain wall intersections, they reasoned that the generalized WC would first melt into a hexagonal liquid, and then crystallize into a stripe solid. From the modern perspectives of electronic liquid crystals20, one anticipates nematic fluid states in the vicinity of crystalline anisotropic states such as the stripe solid. Moreover, the C3 point group symmetry of the triangular lattice relevant for hetero-TMD Moiré systems further enriches the possibilities of the intermediate fluid phases. The triangular lattice admits two types of nematic states due to the nematic order parameter transforming as a 2-dimensional irreducible representation of the lattice point group21,22. The hetero-TMD Moiré systems present an excellent opportunity to study these intermediate liquid phases.
As the quantum melting of charge order is a notoriously difficult problem23, in this paper we take advantage of the small bandwidth in hetero-TMD systems and use a strong coupling approach. We perform Monte Carlo simulations inspired by hetero-TMD Moiré systems at and between commensurate charge-ordered states. We analyze our results in terms of the structure factor and a nematic order parameter correlation function. In particular, we distinguish between the two possible types of nematic states illustrated in Fig. 1c, one with the director aligned with a single majority bond orientation (which we dub type-I), and the other with the director perpendicular to a single minority bond orientation (type-II). As shown in Fig. 1d, we find that the type-I nematic robustly appears between fillings 2/5 and 1/2, and the type-II nematic between fillings 1/3 and 2/5. We conclude with a discussion.
As the orientation of the nematic director is defined within the angle range θ ∈ [0, π) (Fig. 1c) we define the local nematic field using complex notation N(r) = ∣N(r)∣ei2θ(r). In terms of this nematic order parameter field, the free energy density describing the isotropic-nematic transition in a trigonal system takes the following form22,24,25,26:
$$f[N(\mathbf{r})]=\frac{r}{2}|N(\mathbf{r})|^{2}+\frac{u}{4}|N(\mathbf{r})|^{4}+\frac{\gamma}{3}|N(\mathbf{r})|^{3}\cos(6\theta(\mathbf{r})).$$
As usual, r changing sign signifies a transition into nematic order, with positive definite u for stability. The cubic term is allowed by symmetry in a trigonal system, but not in a tetragonal system. Clearly, the sign of γ determines the expectation value of \(\cos(6\theta)\), and thus the two types of nematic: type-I (\(\gamma < 0\), hence \(\langle\cos(6\theta)\rangle=+1\)) and type-II (\(\gamma > 0\), \(\langle\cos(6\theta)\rangle=-1\)).
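To illustrate the role of the cubic term, the following sketch (our own, with arbitrarily chosen Landau coefficients, not taken from the paper) minimizes the free energy above over the director angle θ at fixed amplitude |N| and confirms which orientations each sign of γ selects.

# Illustration only: minimize f(theta) at fixed |N| for both signs of gamma.
import numpy as np

def free_energy(theta, n_amp=1.0, r=-1.0, u=1.0, gamma=-0.5):
    # f = (r/2)|N|^2 + (u/4)|N|^4 + (gamma/3)|N|^3 cos(6 theta)
    return 0.5 * r * n_amp**2 + 0.25 * u * n_amp**4 \
        + (gamma / 3.0) * n_amp**3 * np.cos(6.0 * theta)

thetas = np.linspace(0.0, np.pi, 720, endpoint=False)   # theta is defined on [0, pi)
for gamma in (-0.5, +0.5):
    f = free_energy(thetas, gamma=gamma)
    minima = thetas[f < f.min() + 1e-9]
    print(f"gamma = {gamma:+.1f}: minima at theta/pi =", np.round(minima / np.pi, 3))
# gamma < 0 selects theta in {0, pi/3, 2pi/3} (type-I, cos 6theta = +1);
# gamma > 0 selects theta in {pi/6, pi/2, 5pi/6} (type-II, cos 6theta = -1).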
We explore the phase diagram using classical Monte Carlo as a function of temperature T and the number of particles per Moiré site ν. To emulate the experimental setup in refs. 5, 7, 16, 27, the Hamiltonian that we simulate describes the Coulomb interaction for electrons halfway between two dielectric gates a distance d apart with dielectric constant ϵ:
$$\mathcal{H}=\frac{1}{2}\sum_{i\ne j}\rho(\mathbf{r}_{i})\rho(\mathbf{r}_{j})\left(\frac{e^{2}}{4\pi\epsilon\epsilon_{0}a}\right)\frac{4}{d}\left[\sum_{n=0}^{\infty}K_{0}\!\left(\frac{\pi(2n+1)|\mathbf{r}_{i}-\mathbf{r}_{j}|}{d}\right)\right].$$
Here, K0 is the modified Bessel function of the second kind, a is the Moiré lattice constant, and ρ(ri,j) ∈ {0, 1} are the occupancies of lattice sites i, j. As in refs. 7, 16, we take a = 8nm and d = 10a, and we take e2/(4πϵϵ0a) as our unit of energy for simulation. For further simulation details see the "Methods" section and SI section 1.
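To make the interaction concrete, here is a small numerical sketch (ours, not the authors' code) that evaluates the gate-screened pair interaction of Eq. (2) for a given site separation, in units of e^2/(4πϵϵ0a) and with d = 10a as quoted above. Because K0 decays exponentially, truncating the sum after a few tens of terms is more than sufficient.

import numpy as np
from scipy.special import k0

def pair_energy(r, d=10.0, n_terms=50):
    """Screened pair interaction (4/d) * sum_{n=0}^{n_terms-1} K0(pi*(2n+1)*r/d),
    with r and d in units of the Moire lattice constant a, and the energy in units
    of e^2/(4*pi*eps*eps0*a)."""
    n = np.arange(n_terms)
    return (4.0 / d) * np.sum(k0(np.pi * (2 * n + 1) * r / d))

for r in (1.0, np.sqrt(3.0), 2.0, 3.0):   # a few triangular-lattice neighbor distances
    print(f"V({r:.3f}) = {pair_energy(r):.6f}")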
At each point in phase space, we calculate the Monte Carlo average of the structure factor
$$\langle S(\mathbf{Q})\rangle=\frac{1}{\ell^{4}}\left\langle\sum_{i,j}\rho(\mathbf{r}_{i})\rho(\mathbf{r}_{j})e^{-i\mathbf{Q}\cdot(\mathbf{r}_{i}-\mathbf{r}_{j})}\right\rangle,$$
to assess crystalline order. To assess the degree of rotational symmetry breaking, we also calculate the average of the nematic order parameter correlation function given by
$$\frac{1}{\ell^{4}}\sum_{\mathbf{r},\mathbf{r}'}\langle C(\mathbf{r},\mathbf{r}')\rangle=\frac{1}{\ell^{4}}\sum_{\mathbf{r},\mathbf{r}'}\langle N(\mathbf{r})N^{*}(\mathbf{r}')\rangle=\frac{1}{\ell^{4}}\langle\tilde{C}(\mathbf{q}=0)\rangle$$
where q denotes Fourier momentum. At high temperatures, when 〈N(r)〉 = 0, \(\langle\tilde{C}(\mathbf{q}=0)\rangle/\ell^{4}\) behaves as \(k_{B}T\) times the nematic susceptibility: \(\chi(\mathbf{q}=0)k_{B}T\). Generically we expect this to have some continuous behavior as a function of temperature. However, when the order parameter develops an expectation value in a nematic state, \(\langle\tilde{C}(\mathbf{q}=0)\rangle/\ell^{4}\) should acquire a constant, non-zero value. To determine the type of nematicity exhibited by nematic states, we also calculate \(\langle\cos(6\theta)\rangle\), where, as in Fig. 1c, type-I (type-II) nematic states have \(\langle\cos(6\theta)\rangle=+1\) (−1). For further details about the calculation of these quantities from our Monte Carlo simulation data, including a formulation of the nematic order parameter in terms of the density operators ρ(r), see SI section 2. All results that we show are obtained from an ℓ = 20 system, except exactly at ν = 1/3, since 20 × 20/3 is not an integer (there we use an ℓ = 12 system, as in Fig. 2a). In all cases, we perform \(10^{5}\) updates per site for equilibration at each temperature, and then \(2\times 10^{5}\) updates per site for data collection.
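As an illustration of how the structure factor above can be evaluated, the following sketch (our own simplification) computes S(Q) for a single occupancy configuration; it assumes triangular-lattice primitive vectors (1, 0) and (1/2, √3/2) in units of the Moiré lattice constant, and in practice the quantity is averaged over Monte Carlo samples.

import numpy as np

def structure_factor(occ, Q):
    """S(Q) = (1/l^4) |sum_k exp(-i Q.r_k)|^2 for one 0/1 occupancy array occ[i, j],
    with site positions r = i*a1 + j*a2, a1 = (1, 0), a2 = (1/2, sqrt(3)/2)."""
    l = occ.shape[0]
    i, j = np.nonzero(occ)
    r = np.outer(i, [1.0, 0.0]) + np.outer(j, [0.5, np.sqrt(3) / 2.0])
    phases = np.exp(-1j * (r @ np.asarray(Q, dtype=float)))
    return np.abs(phases.sum()) ** 2 / l**4

rng = np.random.default_rng(0)
occ = (rng.random((20, 20)) < 0.5).astype(int)        # a random test configuration
print(structure_factor(occ, (0.0, 2 * np.pi / np.sqrt(3.0))))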
At ν = 1/3, we find the isotropic generalized Wigner crystalline phase, shown in Fig. 2a for ℓ = 12. This phase has lattice vectors \({\bf{a}}_{1}^{\rm{wc}}=(0,\sqrt{3})\) and \({\bf{a}}_{2}^{\rm{wc}}=(3/2,\sqrt{3}/2)\) as indicated by the black arrows in Fig. 2a. The structure factor shows well-defined peaks at the reciprocal lattice vectors \({\bf{G}}_{1}^{\rm{wc}}=(-2\pi/3,2\pi/\sqrt{3})\) and \({\bf{G}}_{2}^{\rm{wc}}=(4\pi/3,0)\) associated with the crystalline state (Fig. 2b). Upon increasing the density, this crystalline state starts to melt, but it maintains an isotropic, compressible state to a certain filling. At small fillings away from the 1/3-state, as shown in Fig. 2c, the extra particles form domain walls between the three different registries of the generalized WC state. Three domain walls are marked with black lines in Fig. 2c. The domain walls meet at 2π/3 angles, reminiscent of what was found in ref. 19.
Fig. 2: Isotropic states.
a Generalized Wigner crystal at ν = 1/3 particles per Moiré site for an ℓ = 12 system obtained by Monte Carlo. The system is isotropic and thus the orientation of the nematic director, θ, is undefined. b The Monte Carlo average of the structure factor at ν = 1/3. The structure factor exhibits peaks at the reciprocal lattice vectors of the ν = 1/3 generalized Wigner crystal. c Monte Carlo equilibrated state at ν = 0.36 showing the isotropic hexagonal Wigner crystal domain state. Nearest neighbor bonds are shown and color-coded according to their orientation. Domain walls (marked with black lines) between the three registries of the generalized Wigner crystal state meet at 2π/3 angles, forming hexagonal domains. d The Monte Carlo average of the structure factor at ν = 0.36, showing the short-range correlated nature of the hexagonal Wigner crystal domain state in the broadened peaks as compared to (b). The peak width Γ in (e) is calculated along the line Qy = 0. e The Monte Carlo average of the reciprocal width 1/Γ of the peak at \({{{{{{{{\bf{G}}}}}}}}}_{2}^{wc}\) for ν = 1/3 and ν = 0.36. It plateaus below Tc and illustrates the smaller correlation length at ν = 0.36 compared to at ν = 1/3. Error bars are smaller than the symbol size. Dashed lines correspond to the boundaries of the "jump" in 1/Γ, which we note correspond to the jump boundaries in the integrated peak intensity as well. We assign Tc to be the temperature midway between the dashed lines.
As in ref. 19, this domain wall structure is stable while the density of domain walls is dilute (and hence the domain walls are long) because energetics favor 2π/3 angles. Taking into account interactions up to fifth neighbor, the energy contributed to the Hamiltonian by the six particles in the three dimers composing the 2π/3 vertex is
$$E_{v}=3V_{1}+\frac{21}{2}V_{2}+6V_{3}+\frac{21}{2}V_{4}+\frac{15}{2}V_{5}$$
while that of the particles composing three straight domain wall dimers is
$$E_{DW}=3V_{1}+12V_{2}+6V_{3}+6V_{4}+9V_{5},$$
where Vi denotes the energy of two i'th neighbor particles. Using Eq. (2) it is easy to check that EDW − Ev > 0, and thus (at least to this order of interaction), it is energetically favorable to have 2π/3-vertices, even at low temperatures. However, the densest possible hexagonal domain state consists of a close packing of 2π/3-vertices, which has density ν = 3/8. Thus, this state certainly cannot exist at densities ν > 3/8. In Fig. 2d, the structure factor of this compressible state exhibits broadened superstructure peaks centered at the generalized WC periodicities with the width Γ reflecting finite correlation length limited by the hexagonal domain size. Γ is calculated by a Gaussian fit to the peak along the line Qy = 0.
The temperature evolution of the inverse peak width 1/Γ, which behaves like the correlation length, establishes a clear distinction between the incompressible phase at ν = 1/3 and the compressible hexagonal domain wall state at ν = 0.36. The transition into the incompressible generalized Wigner crystal at ν = 1/3 is evidenced by the development of a momentum resolution limited sharp peak (see Fig. 2e). At this filling, the ordering phenomena belongs to the universality class of the three-state Potts model7 with a second order transition. On the other hand, the correlation length of the hexagonal domain wall state at ν = 0.36 exhibits a discontinuous jump reflecting the saturation of the correlation length at a finite value. This transition appears to be first order due to the discontinuous jump in 1/Γ and the integrated peak width, which jumps at the same temperature.
The state is dramatically different at ν = 1/2. We have the charge stripe state shown in Fig. 3a with lattice vectors \({\bf{a}}_{1}^{\rm{cs}}=(1,0)\) and \({\bf{a}}_{2}^{\rm{cs}}=(0,\sqrt{3})\). There are two degenerate charge stripe states whose lattice vectors are related by π/3 and 2π/3 rotations of \({\bf{a}}_{1,2}^{\rm{cs}}\). The structure factor in Fig. 3b, averaged over configurations with the same orientation as the one shown in Fig. 3a, contains peaks at the reciprocal lattice vectors \({\bf{G}}_{1}^{\rm{cs}}=(2\pi,0)\) and \({\bf{G}}_{2}^{\rm{cs}}=(0,2\pi/\sqrt{3})\). As expected, \({\bf{G}}_{i}^{\rm{cs}}\cdot{\bf{a}}_{j}^{\rm{cs}}=2\pi\delta_{ij}\). Diluting the 1/2-filled state, the stripes become shorter via the introduction of dislocations, as shown in Fig. 3c. The structure factor reflects the finite length of these stripe domains in the splitting of the stripe peak over the span of the stripe domain size scale. The peak at \({\bf{G}}_{2}^{\rm{cs}}\) is split into two peaks separated by 2π/LN where LN ≈ 4.63 is the average stripe domain size. The nematic correlation function reveals that, unlike the isotropic phases in Fig. 2e, \(\langle\tilde{C}({\bf{q}}=0)\rangle\) shows a sharp jump at Tc to a finite value in Fig. 3e. This indicates that these are nematic states. By examining \(\langle\cos(6\theta)\rangle\) in Fig. 3f, we can see that both of these phases are type-I nematic. The discontinuous jump in the correlation function suggests that these transitions are first-order. In Fig. 1d, we assign the critical temperature to be the center of the jump, and the error bars are given by the width of the jump.
Fig. 3: Charge stripe and type-I nematic states.
a The Monte Carlo charge stripe state at ν = 1/2 shown for nematic director orientation θ = 0. We annotate nearest neighbor bonds and color them according to their orientation. The red arrows indicate the charge stripe lattice vectors. b The Monte Carlo average of the structure factor at ν = 1/2, averaged over configurations with director orientation θ = 0. The red arrows indicate peaks at the reciprocal lattice vectors of the charge stripe state. c Monte Carlo equilibrated state at ν = 0.48 showing short-ranged stripe nematic state for nematic director orientation θ = 0. We again annotate nearest-neighbor bonds. Pieces of the charge stripe state are separated by dislocations. d The Monte Carlo average of the structure factor, at ν = 0.48, averaged over configurations with director orientation θ = 0. The peak at \((0,2\pi /\sqrt{3})\) splits into two peaks separated by 2π/LN where LN is the average stripe domain length. e The Monte Carlo average of the nematic order parameter correlation function at ν = 1/2 and ν = 0.48. Error bars are smaller than the symbol size. It jumps to a finite, constant value at Tc. Dashed lines denote the boundaries of the jump and we assign Tc to be the temperature midway between the boundaries, while the boundaries give the error bars in Fig. 1d. f \(\langle \cos (6\theta )\rangle\) at ν = 1/2 and ν = 0.48, which goes to + 1 at Tc in both cases. This suggests type-I nematicity at both of these fillings. Error bars are smaller than the symbol size.
Upon further diluting, the system maintains the same type of anisotropy and forms the columnar dimer crystal state at ν = 2/5, shown in Fig. 4a with director orientation θ = 0. This is the limit of the shortest stripe length, evolving from ν = 1/2. The ν = 2/5 state is a crystalline state with lattice vectors \({\bf{a}}_{1}^{\rm{cdc}}=(0,\sqrt{3})\) and \({\bf{a}}_{2}^{\rm{cdc}}=(5/2,\sqrt{3}/2)\). We mark the reciprocal lattice vectors \({\bf{G}}_{1}^{\rm{cdc}}=(-2\pi/5,2\pi/\sqrt{3})\) and \({\bf{G}}_{2}^{\rm{cdc}}=(4\pi/5,0)\) in the structure factor shown in Fig. 4b. Note that the peak at \(2{\bf{G}}_{2}^{\rm{cdc}}\) is more intense than the one at \({\bf{G}}_{2}^{\rm{cdc}}\). This is due to the form factor from the lattice basis. As we dilute further, the length of the columns gets shorter as dimers get broken up. At lower densities, the columns do not extend over the entire system, so there are finite-length segments of columns that can have different orientations, as illustrated in Fig. 4c. The broken pieces of dimers form short-range correlated domains of generalized WC. This is shown by the broad peaks in the structure factor in Fig. 4d. This compressible state no longer has the mirror symmetries of the columnar state. It is still anisotropic, as we can see from the nematic correlation function in Fig. 4e. Interestingly, the columnar fragments intersect at 2π/3 angles, as well as π/3 angles, one of which is circled in red in Fig. 4c. While the 2π/3 intersections are isotropic, the π/3 intersections consist primarily of only two of the three possible nearest-neighbor bond orientations, and hence this state is a type-II nematic phase. We confirm this by observing that \(\langle\cos(6\theta)\rangle=-1\) at low temperatures in Fig. 4f. (It is worth noting that although \(\langle\cos(6\theta)\rangle\) gets very close to −1.0, it never actually reaches −1.0. We understand this to signify the growth of strong type-I nematic correlations near the phase transition to the type-II state. This could perhaps be due to the proximity to the columnar dimer crystal state.) Thus we predict the microscopic mechanism for the type-II nematic phase. As with the charge stripe and type-I nematic states, these transitions are first-order. In Fig. 1d, we again determine Tc and the error bars by jump center and width, respectively.
Fig. 4: Columnar dimer crystal and type-II nematic states.
a The columnar dimer crystal state obtained from Monte Carlo simulations at ν = 0.4, shown with nematic director orientation θ = 0. We annotate the nearest neighbor bonds and color them according to orientation. The red arrows indicate the columnar dimer crystal lattice vectors. b The Monte Carlo average of the structure factor at ν = 2/5, averaged over configurations with director orientation θ = 0. The red arrows indicate peaks at the reciprocal lattice vectors of the columnar dimer crystal state. c The fragmented dimer column state at ν = 0.38, shown with nematic director orientation θ = π/6. d The Monte Carlo average of the structure factor at ν = 0.38, averaged over configurations with director orientation θ = π/6. Broad peaks at the reciprocal lattice vectors of the generalized Wigner crystal state appear due to the short-range correlated regions of generalized Wigner crystal between the dimer column fragments. e The nematic order parameter correlation function is finite and constant at low temperatures, showing that these are nematic states. Error bars are smaller than the symbol size. We assign the critical temperature as midway between the dashed lines, and the dashed lines correspond to the error bars in Fig. 1d. f \(\langle \cos (6\theta )\rangle\) for ν = 2/5 and ν = 0.38. The columnar dimer crystal has type-I nematicity as \(\langle \cos (6\theta )\rangle=+ 1\) at low-T. The fragmented dimer column state is a type-II nematic as \(\langle \cos (6\theta )\rangle=-\!1\) at low-T. Error bars are smaller than the symbol size.
The nematic-II state in the region of 3/8 < ν < 2/5 is supported energetically. Upon increasing density beyond ν > 3/8, columnar fragments have to either intersect also at π/3 or be parallel to each other. Since the distance between columnar fragments increases away from the π/3 intersection, we expect π/3 intersections to be favored. See SI section 3 for a schematic calculation demonstrating this. Such π/3 intersections involve two nearest-neighbor bond orientations, promoting a nematic-II state.
One could experimentally probe our predicted nematic states by performing optical measurements similar to those done at ν = 1/2 in ref. 16. As one lowers the density from ν = 1/2 to ν = 1/3 we would anticipate a rotation of the nematic director and consequently a shift in the peaks of the measured optical anisotropy axis. In particular, as one decreases the density from between ν = 1/2 and ν = 2/5, we predict that the measured anisotropy axis should have peaks along the nematic-I orientations 0, π/3, 2π/3. Below ν = 2/5, when the director rotates into the nematic-II state, the peaks should be at π/6, π/2, 5π/6. Finally, below ν = 3/8 when the system becomes isotropic, we expect that there should be no preferred anisotropy axis at all. For 1/3 < ν < 3/8, one could also look for signatures of the hexagonal WC domain state using Umklapp spectroscopy experiments like those done in ref. 28. The short-range correlated nature of this state should show up as broadened Umklapp resonances around the ν = 1/3 generalized Wigner crystal lattice vectors. Recently developed techniques using scanning tunneling microscopy (STM)12 also provide a promising avenue for confirming our proposed phase diagram. The authors in ref. 12 have already had success in imaging the charge-ordered states at ν = 1/3 and 1/2. STM measurements in the intermediate density regime could ideally allow direct imaging of the compressible phases and produce results similar to our Monte Carlo configuration snapshots.
In summary, we studied the electronic states of a system of strongly correlated electrons on a triangular lattice in the region 1/3 ≤ ν ≤ 1/2 particles per Moiré site. At ν = 1/2, we find the charge stripe state. Upon dilution, the charge stripe state melts into a nematic-I short-ranged charge stripe state via the introduction of dislocations. Once the stripes become short enough, the columnar dimer crystal state emerges at ν = 2/5. At even lower densities, the remaining columnar fragments space themselves out to lower their energy by intersecting at π/3 and 2π/3 angles, resulting in a nematic-II state. Below ν = 3/8, the system can again lower its energy by using only 2π/3 columnar fragment intersections to form an isotropic, hexagonal network of domain walls between regions of the ν = 1/3 generalized Wigner crystal. Finally, at ν = 1/3, the pure, isotropic generalized Wigner crystal state emerges. We note that while the generalized Wigner crystal states are interaction-driven insulators, the compressibility experiments in ref. 16 suggest that this is not the case for the intermediate states. The system is found to be compressible at the densities at which the intermediate states occur. Our intermediate states were not only promoted by entropy, but we also found them to have lower energy compared to macroscopically phase separated states. Accordingly, we suspect that our proposed states are relevant for finite experimental temperatures where fluctuations due to entropy also play a role. We leave the determination of the classical ground state at T = 0 as a subject of future work.
As another subject of future work, we would like to consider the effects of a finite bandwidth in the TMD Moiré system. Although it is difficult to make a quantitative statement about how our proposed states would fare in the presence of a finite bandwidth from our classical simulations, we can draw some insight as to the relative stability of the phases from one energy scale available to us: kBTc. Estimating the hopping as t ~ 1 meV as in ref. 7 and comparing to the phase diagram in Fig. 1d, we anticipate the charge-ordered, type-I nematic, and hexagonal domain states to survive quantum fluctuations, but to observe the type-II nematic state might need further suppression of bandwidth. A quantum mechanical analysis using a technique such as DMRG is still needed to provide further detail, however.
Further support for our proposed phase diagram could be garnered by studying larger system sizes. Although a full finite-size scaling analysis is currently beyond our reach due to the runtime of the simulations for larger lattices, we are encouraged by the similarity of the results for smaller systems such as the ℓ = 10 system we consider in SI section 4.
Studying the intermediate phases of melted density waves has been of interest since considerations of Krypton adsorbed on graphene19. However, limitations in computational resources and experimental methods caused difficulties in probing the intermediate states. With advances in computing power and the advent of the TMD Moiré platform, however, detailed phase diagrams can now be predicted computationally and probed experimentally. Our work demonstrates this capacity to explore intermediate phases and the richness of the phase diagram one can obtain with a classical model, even without considering quantum effects. We found the striped phase predicted upon increasing density in ref. 19 refines into two distinct stripe crystal states neighboring two distinct types of nematics. In particular, we presented a microscopic mechanism for the formation of the nematic-II state via π/3 intersections between columnar fragments. As a subject of future work, it would be interesting to study the implications of our findings for the melting of WCs without a lattice potential, such as those recently observed in refs. 29, 30.
Because the interaction Eq. (2) is long-ranged, simply simulating a system with periodic boundary conditions would result in ambiguous distance calculations. Thus, we simulate a formally infinite system that is constrained to be periodic in an ℓ × ℓ rhombus. Particles interact both within and between copies of the system. The choice of an ℓ × ℓ rhombus has the full symmetry of the triangular lattice as, when one considers the infinite system, the action of each element of the point group is a bijective map on the set of unique sites contained within the simulation cell. Thus we do not expect our choice of geometry to artificially promote rotational symmetry breaking. Moreover, in each nematic state that we report, our simulations find configurations with each of the three possible director orientations for the relevant nematic type with equal probability.
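A minimal sketch (ours; the authors' exact scheme is described in SI section 1) of how such image sums can be handled: because the gate-screened interaction falls off exponentially, the sum over periodic copies of the ℓ × ℓ rhombus can be truncated after a few shells of images.

import numpy as np
from scipy.special import k0

def screened(r, d=10.0, n_terms=50):
    n = np.arange(n_terms)
    return (4.0 / d) * np.sum(k0(np.pi * (2 * n + 1) * r / d))

def pair_energy_with_images(dr, l=20, shells=3):
    """Interaction of a pair separated by the 2-vector dr (units of a), summed over
    periodic images of the l x l rhombus spanned by L1 = l*(1, 0), L2 = l*(1/2, sqrt(3)/2).
    The exponential decay of K0 makes a small number of image shells sufficient."""
    L1 = l * np.array([1.0, 0.0])
    L2 = l * np.array([0.5, np.sqrt(3) / 2.0])
    total = 0.0
    for m1 in range(-shells, shells + 1):
        for m2 in range(-shells, shells + 1):
            r = np.linalg.norm(np.asarray(dr, dtype=float) + m1 * L1 + m2 * L2)
            if r > 1e-9:                  # exclude the zero-separation self term
                total += screened(r)
    return total

print(pair_energy_with_images([1.0, 0.0]))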
For Monte Carlo updates, we use arbitrary-range, single-particle occupancy exchanges with standard Metropolis acceptance rules. However, the prevalence of short-range correlated structures leading to long autocorrelation times complicates our simulations, especially in the incompressible density region. To get our simulations to converge, we need to perform cluster updates as well. Typical cluster update methods based on the Swendsen-Wang31 and Wolff32 algorithms for spin systems are insufficient for our needs as they do not simulate the correct ensemble. The geometric cluster algorithm33 does allow us to simulate a fixed number of occupied sites (i.e., the fixed magnetization ensemble of a spin model), but until now, has not been generalized to accommodate long-range interactions. Thus we develop our own cluster update method based on the geometric cluster algorithm that can handle arbitrary interactions. It is worth noting that our algorithm also works on an arbitrary lattice. For further details, see SI section 1. We perform a cluster update after every 1000 single particle occupancy exchange updates.
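For orientation, a schematic version of the single-particle occupancy-exchange update with Metropolis acceptance might look as follows (our own simplification, not the authors' code; energy_diff is assumed to return the change in the Hamiltonian of Eq. (2), including interactions with periodic images, if the particle at src is moved to dst).

import numpy as np

def metropolis_sweep(occ, energy_diff, beta, rng):
    """One sweep of arbitrary-range occupancy exchanges at inverse temperature beta.
    occ is a 0/1 array; energy_diff(occ, src, dst) must return the change in total
    energy if the particle at flat index src is moved to the empty flat index dst."""
    flat = occ.ravel()
    for _ in range(flat.size):
        occupied = np.flatnonzero(flat == 1)
        empty = np.flatnonzero(flat == 0)
        if occupied.size == 0 or empty.size == 0:
            break
        src = rng.choice(occupied)
        dst = rng.choice(empty)
        dE = energy_diff(occ, src, dst)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            flat[src], flat[dst] = 0, 1   # accept: particle number is conserved
    return occ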
The raw Monte Carlo data generated and analyzed for the current study have been deposited in a Zenodo repository, available publicly at ref. 34.
The code used to produce and analyze our results is available for download and use in our GitHub repository, ref. 35.
Wu, F., Lovorn, T., Tutuc, E. & MacDonald, A. H. Hubbard model physics in transition metal dichalcogenide Moire bands. Phys. Rev. Lett. 121, 026402 (2018).
Article ADS PubMed CAS Google Scholar
Naik, M. H. & Jain, M. Ultraflatbands and shear solitons in Moire patterns of twisted bilayer transition metal dichalcogenides. Phys. Rev. Lett. 121, 266401 (2018).
Andrei, E. Y. et al. The marvels of moiré materials. Nat. Rev. Mater. 6, 201–206 (2021).
Tang, Y. et al. Simulation of Hubbard model physics in WSe2/WS2 moiré superlattices. Nature 579, 353–358 (2020).
Regan, E. C. et al. Mott and generalized Wigner crystal states in WSe2/WS2 Moiré superlattices. Nature 579, 359–363 (2020).
Huang, X. et al. Correlated insulating states at fractional fillings of the WS2/WSe2 moiré lattice. Nat. Phys. 17, 715–719 (2021).
Xu, Y. et al. Correlated insulating states at fractional fillings of moiré superlattices. Nature 587, 214–218 (2020).
Li, W. et al. Local sensing of correlated electrons in dual-moiré heterostructures using dipolar excitons. Preprint at http://arxiv.org/abs/2111.09440 (2021).
Pan, H., Wu, F. & Das Sarma, S. Quantum phase diagram of a Moiré-Hubbard model. Phys. Rev. B 102, 201104 (2020).
Padhi, B., Setty, C. & Phillips, P. W. Doped twisted bilayer graphene near magic angles: proximity to Wigner crystallization, not Mott insulation. Nano Lett. 18, 6175–6180 (2018).
Padhi, B., Chitra, R. & Phillips, P. W. Generalized Wigner crystallization in Moire materials. Phys. Rev. B 103, 125146 (2021).
Li, H. et al. Imaging two-dimensional generalized Wigner crystals. Nature 597, 650–654 (2021).
Spivak, B. & Kivelson, S. A. Phases intermediate between a two-dimensional electron liquid and Wigner crystal. Phys. Rev. B 70, 155114 (2004).
Spivak, B. & Kivelson, S. A. Transport in two dimensional electronic micro-emulsions. Ann. Phys. 321, 2071–2115 (2006).
Article ADS MATH CAS Google Scholar
Jamei, R., Kivelson, S. & Spivak, B. Universal aspects of coulomb-frustrated phase separation. Phys. Rev. Lett. 94, 056805 (2005).
Article ADS PubMed Google Scholar
Jin, C. et al. Stripe phases in WSe2/WS2 moiré superlattices. Nat. Mater. 20, 940–944 (2021).
Li, T. et al. Continuous Mott transition in semiconductor moiré superlattices. Nature 597, 350–354 (2021).
Ghiotto, A. et al. Quantum criticality in twisted transition metal dichalcogenides. Nature 597, 345–349 (2021).
Coppersmith, S. N., Fisher, D. S., Halperin, B. I., Lee, P. A. & Brinkman, W. F. Dislocations and the commensurate-incommensurate transition in two dimensions. Phys. Rev. B 25, 349–363 (1982).
Kivelson, S. A., Fradkin, E. & Emery, V. J. Electronic liquid-crystal phases of a doped Mott insulator. Nature 393, 550–553 (1998).
Serre, J.-P. Linear Representations of Finite Groups (Springer Science & Business Media, 2012)
Fernandes, R. M. & Venderbos, J. W. F. Nematicity with a twist: rotational symmetry breaking in a moiré superlattice. Sci. Adv. 6 https://doi.org/10.1126/sciadv.aba8834 (2020).
Kivelson, S. A. et al. How to detect fluctuating stripes in the high-temperature superconductors. Rev. Mod. Phys. 75, 1201–1241 (2003).
Venderbos, J. W. F. & Fernandes, R. M. Correlations and electronic order in a two-orbital honeycomb lattice model for twisted bilayer graphene. Phys. Rev. B 98, 245103 (2018).
Hecker, M. & Schmalian, J. Vestigial nematic order and superconductivity in the doped topological insulator Cu x Bi2Se3. npj Quant. Mater 3, 26 (2018).
Little, A. et al. Three-state nematicity in the triangular lattice antiferromagnet Fe1/3NbS2. Nat. Mater. 19, 1062–1067 (2020).
Li, H. et al. Imaging moiré flat bands in three-dimensional reconstructed WSe2/WS2 superlattices. Nat. Mater. 20, 945–950 (2021).
Shimazaki, Y. et al. Optical signatures of periodic charge distribution in a Mott-like correlated insulator state. Phys. Rev. X 11, 021027 (2021).
Zhou, Y. et al. Bilayer Wigner crystals in a transition metal dichalcogenide heterostructure. Nature 595, 48–52 (2021).
Smoleński, T. et al. Signatures of Wigner crystal of electrons in a monolayer semiconductor. Nature 595, 53–57 (2021).
Swendsen, R. H. & Wang, J.-S. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett. 58, 86–88 (1987).
Wolff, U. Collective Monte Carlo updating for spin systems. Phys. Rev. Lett. 62, 361–364 (1989).
Heringa, J. R. & Blöte, H. W. J. Geometric cluster Monte Carlo simulation. Phys. Rev. E 57, 4976–4978 (1998).
Matty, M. Monte Carlo data for "Melting of generalized Wigner crystals in transition metal dichalcogenide heterobilayer Moiré systems". https://doi.org/10.5281/zenodo.7120826 (2022).
Matty, M. KimGroup/tmd_moire_monte_carlo: Manuscript code. https://doi.org/10.5281/zenodo.7120887 (2022).
The authors acknowledge support by the NSF [Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM)] under cooperative agreement no. DMR-U638986. We would also like to thank Steven Kivelson, Kin-Fai Mak, Jie Shan, Sue Coppersmith, Ataç Imamoğlu, and Samuel Lederer for helpful discussions.
Department of Physics, Cornell University, Ithaca, NY, 14853, USA
Michael Matty & Eun-Ah Kim
Michael Matty
Eun-Ah Kim
M.M. developed the cluster algorithm, wrote the code, ran the simulations, and analyzed the data. E.-A.K. initiated the project, and M.M. and E.-A.K. wrote the manuscript.
Correspondence to Eun-Ah Kim.
Matty, M., Kim, EA. Melting of generalized Wigner crystals in transition metal dichalcogenide heterobilayer Moiré systems. Nat Commun 13, 7098 (2022). https://doi.org/10.1038/s41467-022-34683-x
DOI: https://doi.org/10.1038/s41467-022-34683-x
Margaret Millington
Margaret Hilary Millington (22 March 1944 – 5 March 1973) was an English-born mathematician.[1]
Born: 22 March 1944, Halifax, Yorkshire, England
Died: 5 March 1973 (aged 28), Hanover, Germany
Occupation: Mathematician
She was born Margaret Hilary Ashworth in Halifax, Yorkshire, the daughter of the local assistant head postmaster, and was educated there. She continued her studies at St Mary's College, Durham[2] and went on to Oxford University, where she earned a PhD in 1968 with A. O. L. Atkin as her advisor.[3] Also, in 1968, she married Lieutenant A.H. Millington,[2] who was part of the Royal Electrical and Mechanical Engineers.[1] She was awarded a two-year Science Research Council Fellowship which allowed her to pursue research at any university. During her husband's two-year posting in Germany, she taught mathematics at an Army Education Centre there.[1]
She died in Germany due to a brain tumour at the age of 28.[2]
Although her career was cut short, in 1983, the London Mathematical Society organized a symposium on modular forms. During the symposium, the importance of her doctoral thesis and post-doctoral research became clear. The work that she had started during her fellowship was picked up and pursued by other mathematicians, leading to a resurgence in the field.[1] Her research had dealt with modular forms, as well as subgroups of the modular group.[1]
In a tribute to Millington, Atkin said "I have no doubt that, had she lived, she would have made exciting original contributions to a field which has at last come into its own again, after nearly a quarter century in the doldrums, and where there are now at least twenty first rate people of her generation working actively."[1]
References
1. O'Connor, John J.; Robertson, Edmund F. (February 2010). "Margaret Hilary Ashworth Millington". MacTutor History of Mathematics archive. School of Mathematics and Statistics, University of St Andrews, Scotland.
2. Atkin, A. O. L. (September 1985). "Obituary: Margaret Hilary Millington (née Ashworth)". Bulletin of the London Mathematical Society. London Mathematical Society. 17 (5): 484–486. doi:10.1112/blms/17.5.484.
3. Margaret Millington at the Mathematics Genealogy Project
A matrix whose number of rows is not equal to its number of columns is called a rectangular matrix.
A rectangular matrix is a type of matrix whose elements are arranged in some number of rows and a different number of columns, so that the arrangement of elements has the shape of a rectangle; hence the name rectangular matrix.
The rectangular matrix can be expressed in general form as follows. The elements of this matrix are arranged in $m$ rows and $n$ columns. Therefore, the order of the matrix is $m \times n$.
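The general form itself is not reproduced in the source page; a standard way of writing it (our own rendering) is

$$A_{m\times n}=\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\ a_{21}&a_{22}&\cdots &a_{2n}\\ \vdots &\vdots &\ddots &\vdots \\ a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix},\qquad m\ne n.$$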
A rectangular shape is possible only if the number of rows differs from the number of columns, that is, $m \ne n$. Therefore, a rectangular matrix can be formed in two ways: either the number of rows is greater than the number of columns ($m > n$), or the number of rows is less than the number of columns ($m < n$).
The following two cases illustrate how rectangular matrices are formed in matrix algebra.
$A$ is a matrix whose elements are arranged in $3$ rows and $4$ columns.
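The entries of the original example are not shown in the source; any $3 \times 4$ array serves as an illustration, for instance

$$A=\begin{bmatrix}1&2&3&4\\ 5&6&7&8\\ 9&10&11&12\end{bmatrix}.$$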
The order of the matrix $A$ is $3 \times 4$. The number of rows is not equal to the number of columns ($3 \ne 4$), and also the number of rows is less than number of columns ($3 < 4$). Therefore, the matrix $A$ is an example for a rectangular matrix.
$B$ is a matrix whose elements are arranged in $5$ rows and $2$ columns.
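Again, the original entries are not shown; an illustrative $5 \times 2$ matrix is

$$B=\begin{bmatrix}1&2\\ 3&4\\ 5&6\\ 7&8\\ 9&10\end{bmatrix}.$$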
The order of the matrix $B$ is $5 \times 2$. The number of rows is not equal to the number of columns ($5 \ne 2$), and the number of rows is greater than the number of columns ($5 > 2$). So, the matrix $B$ is known as a rectangular matrix.
Results of multicenter double-blind placebo-controlled phase II clinical trial of Panagen preparation to evaluate its leukostimulatory activity and formation of the adaptive immune response in patients with stage II-IV breast cancer
Anastasia S Proskurina1,
Tatiana S Gvozdeva2,
Ekaterina A Alyamkina1,
Evgenia V Dolgova1,
Konstantin E Orishchenko1,
Valeriy P Nikolin1,
Nelly A Popova1,3,
Sergey V Sidorov3,4,
Elena R Chernykh5,
Alexandr A Ostanin5,
Olga Y Leplina5,
Victoria V Dvornichenko6,7,
Dmitriy M Ponomarenko6,7,
Galina S Soldatova3,8,
Nikolay A Varaksin9,
Tatiana G Ryabicheva9,
Stanislav N Zagrebelniy3,
Vladimir A Rogachev1,
Sergey S Bogachev1 &
Mikhail A Shurdov10
An Erratum to this article was published on 17 February 2016
We performed a multicenter, double-blind, placebo-controlled, phase II clinical trial of human dsDNA-based preparation Panagen in a tablet form. In total, 80 female patients with stage II-IV breast cancer were recruited.
Patients received three consecutive FAC (5-fluorouracil, doxorubicin and cyclophosphamide) or AC (doxorubicin and cyclophosphamide) adjuvant chemotherapies (3 weeks per course) and 6 tablets of 5 mg Panagen or placebo daily (one tablet every 2–3 hours, 30 mg/day) for 18 days during each chemotherapy course. Statistical analysis was performed in Statistica 6.0 using non-parametric tests, namely the Wilcoxon–Mann–Whitney and paired Wilcoxon tests. To describe the results, the following parameters were used: number of observations (n), median, interquartile range, and minimum-maximum range.
Panagen displayed pronounced leukostimulatory and leukoprotective effects when combined with chemotherapy. In an ancillary protocol, anticancer effects of a tablet form of Panagen were analyzed. We show that Panagen helps maintain the pre-therapeutic activity level of innate antitumor immunity and induces formation of a peripheral pool of cytotoxic CD8+ perforin + T-cells. Our 3-year follow-up analysis demonstrates that 24% of patients who received Panagen relapsed or died after the therapy, as compared to 45% in the placebo cohort.
The data collected in this trial set Panagen as a multi-faceted "all-in-one" medicine that is capable of simultaneously sustaining hematopoiesis, sparing the innate immune cells from adverse effects of three consecutive rounds of chemotherapy and boosting individual adaptive immunity. Its unique feature is that it is delivered via gastrointestinal tract and acts through the lymphoid system of intestinal mucosa. Taken together, maintenance of the initial levels of innate immunity, development of adaptive cytotoxic immune response and significantly reduced incidence of relapses 3 years after the therapy argue for the anticancer activity of Panagen.
ClinicalTrials.gov NCT02115984 from 04/07/2014.
Programmed chemotherapies involve tightly scheduled and dosed administration of highly toxic substances, whose therapeutic efficacy is invariably accompanied with systemic damage to the body. Liver and hematopoietic cells are the first to suffer from such therapies. Hence, when cancer patients are treated with cytostatic drugs, they routinely receive adjuvant medications alleviating the deleterious effects of cytostatics. Leukostimulatory drugs are among such protective agents [1].
Several classes of drugs are currently used to stimulate leukopoiesis. The first group includes the drugs boosting cellular metabolism – dicarbamin, methyluracil, pentoxyl, leukogen, etc. The second group comprises colony-stimulating growth factor analogs, such as filgrastim (neupogen), sargramostim, lenograstim, molgramostim (leucomax), etc. Chemical leukostimulatory drugs (dicarbamin and the like) are used in patients receiving myelosuppressive chemotherapy. In particular, dicarbamin stimulates maturation of neutrophilic granulocytes, thereby reducing the occurrence of leukopenia and neutropenia. To treat severe leukopenia, analogs of human G-CSF, such as filgrastim and the like, are also widely used. These medications act by inducing mobilization of hematopoietic stem cells and by modulating production and release of neutrophils into peripheral blood. This panel of G-CSF-derived drugs is therefore used to treat various forms of neutropenia in cancer patients receiving myelosuppressive chemotherapy [1-3].
Recently, one more class of drugs which is based on nucleic acids has been introduced into oncology practice (ridostin, derinat, polydan, desoxynatum, etc.), as these drugs were reported to display, among others, leukostimulatory activity. Finally, one must consider a group of drugs that are based on CpG-modified DNA oligonucleotides, − these agents are used to induce adaptive antiviral and anticancer immune response. When tested in mice, these drugs resulted in 50-60% suppression of tumor growth [4-6].
By applying these research observations to the therapeutic activity of nucleic acids, we proceeded to develop Panagen medication, which is based on the fragmented human dsDNA, and is intended for use as an adjuvant leukostimulatory agent in cancer patients receiving multiple lines of chemotherapy. We put forward and test a novel concept of treating stage II-IV breast cancer by combining standard chemotherapy course with Panagen. This strategy allows protecting and activating the proliferation of hematopoietic stem cells along with expansion of the population of CD8 + perforin + cytotoxic T-cells, i.e. it aids in developing adaptive immune response in these patients.
Thus, Panagen is a multi-faceted drug with pronounced leukostimulatory activity and which functions to stimulate adaptive immune response across multiple courses of chemotherapy. In contrast to the above-mentioned classes of drugs (in particular, those G-CSF- and CpG-ODN- based), Panagen is manufactured in a form of tablets with gastro-resistant coating. This drug form is perfectly compatible with long-term therapy including three or more consecutive courses of chemotherapy (up to one year) without running the risks of adverse inflammatory and autoimmune reactions caused by the constant presence of dsDNA in the bloodstream. The drug mode of action is notably distinct from the mobilizing effect of colony-stimulating factors that induce abortive release of hematopoietic progenitors into the bloodstream. It is rather based on the activation of mucosal mononuclear cells. This is accompanied with secretion of stimulatory cytokines which induce proliferation of hematopoietic stem cells [7-10]. Of particular importance is that Panagen uniquely combines several therapeutic features, thereby paving the way to novel clinical applications.
DNA quantification in blood plasma of patients receiving tablet form of Panagen medication
Levels of DNA in blood plasma were determined according to the method described by Spirin [11]. This method relies on the optical density measurements of Panagen hydrolysis products, which translates into quantification of phosphorus content in nucleic acids. One milliliter of the medication (0.015 – 35 mkg of nucleic acids) is mixed with 5 ml of 1 N NaOH solution, and boiled on a waterbath for 5 minutes with stirring. The mixture is brought back to room temperature, transferred on ice and neutralized with concentrated perchloric acid. HClO4 is added to a final concentration of 1-2%, the reaction is vigorously mixed and centrifuged at +4°C 5000 g for 15 minutes. Supernatant containing ribomononucleotides is discarded and the pellet is re-dissolved in 5 ml 0.5 M HClO4. This is followed by heating on a boiling waterbath for 20 minutes. After chilling, the probes are centrifuged for 20 minutes at 5000 g. Supernatants are transferred into 10 mm pathlength cuvettes and optical density at 270 and 290 nm is measured using a spectrophotometer. Sample measurements are blanked against 0.5 M solution of perchloric acid.
Nucleic acid concentration (X, expressed as mkg/ml) is calculated using the formula: X = (D270 – D290) / 0.19 × 10.3, where D270 and D290 are the optical density values, 0.19 is the extinction coefficient, and 10.3 is a phosphorus-to-nucleic-acid conversion factor. Notably, this method performs well when the D260 and D270 values fall within a 15% difference range of each other.
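For illustration, the concentration formula above can be implemented as a small helper (our own sketch; the example optical densities are arbitrary).

def nucleic_acid_concentration(d270, d290):
    """Nucleic acid concentration X (mkg/ml) from optical densities at 270 and 290 nm:
    X = (D270 - D290) / 0.19 * 10.3."""
    return (d270 - d290) / 0.19 * 10.3

# For example, D270 = 0.45 and D290 = 0.12 give roughly 17.9 mkg/ml.
print(round(nucleic_acid_concentration(0.45, 0.12), 1))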
Brief outline of phase II clinical trial protocol
Phase II clinical trial of preparation Panagen was approved by the Ministry of Health and Social Development of the Russian Federation (No. 47 of 03/12/2010) as well as by local ethics committees at the Irkutsk Regional Oncology Dispensary and the Novosibirsk Municipal Hospital No 1, where clinical trials were subsequently performed. The studies were carried out in compliance with the World Medical Association Declaration of Helsinki. Written informed consent to participate in the study was obtained from each of the patients, which specified open publication of the results presented as reports or otherwise. All patients were also insured.
The study recruited 80 female high-risk category patients with stage II breast cancer or patients with stage III-IV breast cancer who were advised to undergo chemotherapy. All patients were sequentially randomized, i.e. the patient was assigned to one of the two groups irrespective of the time she joined the trial. The first group comprising 57 patients received Panagen, of which 6 patients were later excluded from the study for various reasons, the second group received placebo (with 23 patients recruited, of which 6 were excluded during the trial). For ethical reasons, the second group was maximally down-sized so as to reliably meet the statistical significance threshold.
The patients received standard FAC chemotherapy (fluorouracil 500 mg/m2, doxorubicin 50 mg/m2, cyclophosphamide 500 mg/m2 – all i.v. for 1 day) or AC chemotherapy (doxorubicin 50 mg/m2 and cyclophosphamide 500 mg/m2 i.v. for 1 day). The patients completed three courses of chemotherapies (3 weeks per course), and each course began on the day of chemotherapy (day 1).
Patients received 5 mg Panagen or placebo tablets daily (6 tablets every 2–3 hours, 30 mg/day). The tablets were given to the patients 48 hours post-chemotherapy (day 3) and the course continued for 17 more days until day 20 post-chemotherapy. If the next round of chemotherapy was delayed, the patients stayed on Panagen or placebo. The patients stopped taking tablets one day before the next course of chemotherapy. Delay of next course of chemotherapy of up to one week was considered acceptable. All patients from the placebo cohort received standard-of-care therapy as required by the Ministry of Health and Social Development of the Russian Federation.
The clinical trial was conducted in two medical centers, Irkutsk Regional Oncology Dispensary (29 patients on FAC regimen), referred hereafter as "base I" and Novosibirsk Municipal Hospital No 1 (20 patients on FAC and 31 patients on AC regimens), which is referred to as "base II".
Primary endpoints of the study were the degrees of leukopenia and neutropenia (grade 1, 2, 3, 4) which have emerged during the study.
Secondary endpoints of the study were:
Occurrence of grade 1, 2, 3, 4 leukopenia during the cycles 1, 2, 3 of chemotherapy.
Occurrence of grade 1, 2, 3, 4 neutropenia and febrile neutropenia during the cycles 1, 2, 3 of chemotherapy.
Duration of grade 1, 2, 3, 4 leukopenia throughout chemotherapy cycles 1, 2, 3.
Duration of grade 1, 2, 3, 4 neutropenia and febrile neutropenia throughout chemotherapy cycles 1, 2, 3.
The lowest white blood cell count value observed across cycles 1, 2, 3.
The lowest neutrophil count value observed across cycles 1, 2, 3.
Time to restore leukocyte and neutrophil counts in cycles 1, 2, 3.
Time from the start of chemotherapy to the point with lowest neutrophil and white blood cell counts in chemotherapy cycles 1, 2, 3.
If the patient was started on G-CSF medications, she was considered as having discontinued the study.
Statistical analysis was performed in Statistica 6.0 using non-parametric tests, namely the Wilcoxon–Mann–Whitney and paired Wilcoxon tests. To describe the results, the following parameters were used: number of observations (n), median, interquartile range, and minimum-maximum range.
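As an illustration of the named tests only (the trial itself used Statistica 6.0, and the values below are placeholders, not trial data), the same comparisons could be run in SciPy as follows.

from scipy.stats import mannwhitneyu, wilcoxon

panagen = [2.1, 2.8, 3.0, 2.5, 2.9]   # placeholder per-patient endpoint values
placebo = [1.4, 1.9, 2.2, 1.7]
before  = [4.9, 5.4, 5.1, 4.7, 5.6]   # placeholder paired measurements in one group
after   = [3.1, 3.6, 3.0, 2.8, 3.9]

stat_u, p_between = mannwhitneyu(panagen, placebo, alternative="two-sided")
stat_w, p_paired = wilcoxon(before, after)
print(f"Mann-Whitney p = {p_between:.3f}, paired Wilcoxon p = {p_paired:.3f}")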
For more details see Additional file 1.
MTT assay using human peripheral blood mononuclear cells
Cytotoxicity of human peripheral blood mononuclear cells (PBMCs) obtained by fractionation of patient peripheral blood on the density gradient of ficoll-urografin (d = 1.077 g/ml) was assayed against the MCF-7 tumor cell line. Tumor cells were placed in 96-well plates (5 × 10^4 cells/well). Next, 5 × 10^4, 10 × 10^4 or 25 × 10^4 PBMCs per well (1:1, 1:2 or 1:5 ratios) were added. Cell mixtures were incubated in RPMI-1640 supplemented with gentamicin sulphate (100 mkg/ml) in 5% CO2 at 37°C for 21 h. Co-incubation was terminated by adding MTT solution to 0.5 mg/ml and reactions were allowed to stay for 3 more hours. Cells were centrifuged at 4000 rpm for 10 min (Eppendorf Centrifuge 5810 R). Supernatant was decanted and blue-colored formazan crystals were dissolved in 100 mkl DMSO. Optical density was read using «Multiskan RC» set at 570 nm with background subtraction measured at 620 nm. Results of the MTT assay were processed using Microsoft Excel 2002. Cytotoxicity index (CI) was calculated as follows:
$$\mathrm{CI}\ (\%)=\left[1-\left(\mathrm{OD}_{e+t}-\mathrm{OD}_{e}\right)/\mathrm{OD}_{t}\right]\times 100$$
where OD(e+t) is the optical density in experimental wells (co-incubated effector and target cells), OD(e) is the optical density in wells with effector cells only, and OD(t) is the optical density in wells with target cells only.
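A one-line helper (ours, with arbitrary example optical densities) implementing the cytotoxicity index defined above:

def cytotoxicity_index(od_effector_plus_target, od_effector, od_target):
    """CI (%) = [1 - (OD(e+t) - OD(e)) / OD(t)] * 100."""
    return (1.0 - (od_effector_plus_target - od_effector) / od_target) * 100.0

# For example, OD(e+t) = 0.60, OD(e) = 0.20 and OD(t) = 0.80 give CI = 50%.
print(cytotoxicity_index(0.60, 0.20, 0.80))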
Cytokine production by PBMCs isolated from recruited patients
To gain insight into the dynamics of cytokine production upon dsDNA administration, we used samples of peripheral blood from patients recruited into our trial. Blood samples were taken at three control timepoints, namely, at initial point – 1–3 days prior the first round of chemotherapy; at intermediate point – 1 day before the second course of chemotherapy; and at the final point – upon completion of the therapy (i.e. after completion of the third three-week course of Panagen).
To assay spontaneous cytokine production, peripheral venous blood was collected in heparin-containing vacutainers, and fresh 1 ml blood samples were reserved for assaying spontaneous cytokine production. In parallel, we also measured mitogen-induced cytokine production. For this purpose, we used a commercial kit "Cytokine-Stimul-Best" containing a mix of mitogens (PHA, Con A, LPS, − at 4, 4 and 2 mkg/ml each). The samples were incubated at 37°С for 24 hours. Cells were centrifuged at 10000 rpm for 3 min (Eppendorf Centrifuge 5810 R), the supernatants were transferred into new tubes, snap-frozen and stored at −70°С until further processed for quantification of cytokine production. Concentrations of IFN-γ, IFN-α, TNF-α, IL-1β, IL-6, IL-8, IL-10, IL-2, IL-17, VEGF, MCP, IL-18, IL-4, GM-CSF, G-CSF and IL-1 receptor antagonist (IL-1RА) in the samples were measured using solid-phase sandwich ELISA kits manufactured by the JSC «Vector-Best» (Novosibirsk, Russia).
SigmaStat statistical software package (Systat Software Inc., San Jose, CA, USA) was used for statistical data analysis. Two-way ANOVA followed by the Holm-Sidak test was used to analyze the data in a 2 groups × 3 intervals design and to find dependencies between the independent factors. A value of p < 0.05 was considered statistically significant. Results are presented as Mean ± SE in their absolute units of measure (ng/μl). Absolute units of concentration were log10-converted prior to calculations of the stimulation index.
Below we describe the experimental data from both published reports and our own studies, which allowed us to design the strategy of cancer therapy with human dsDNA preparation Panagen as a leukostimulatory and leukoprotective agent, and as an activator of adaptive immune response.
Choice of the drug's active substance
Our choice of human dsDNA as an active substance in Panagen was dictated by both our experimental data and general knowledge of the interplay between dsDNA fragments and the genome of a human cell.
When dsDNA preparations from various sources were compared, human dsDNA consistently displayed superior leukostimulatory and anti-tumor activity [7-9,12-17].
The use of DNA preparations, and the molecular interactions between DNA fragments and the cell genome, remain insufficiently studied, mostly in yeast or in vitro. Recombination of extracellular DNA fragments has been reported as a likely event taking place in the nuclei of immune cells and various stem cells. Notably, the termini of these molecules induce activation of a double-stranded DNA repair and recombination response in the cells [7,10,18-21]. Owing to short stretches of homology at DNA ends, recombination may result in integration of exogenous DNA into the genome [22-32]. It follows that xenogeneic DNA, even at low dosage, poses a threat to the integrity of the genome, whereas allogeneic DNA fragments are more likely to serve as a suitable substrate for the homologous recombination machinery, which in turn should lower the risk of introducing unwanted mutations.
The choice of the specific size of DNA fragments to be used in the medication was based on the well-established fact that extracellular dsDNA is normally present in human blood plasma and interstitial fluid at a concentration of 14–100 ng/ml, ranging from 1 to 20 nucleosomal repeats in size, which corresponds to approximately 200–6000 bp [33-38].
Human placental DNA was selected as the source of the active substance. Our protocol for collecting and isolating human DNA ensures that it is free of steroid hormones, various types of polysaccharides and infectious agents (parasites, protists, bacteria, RNA- and DNA-viruses), and each batch of the drug is rigorously quality-controlled (Registration certificate Medical Drugs of Russia No. 004429/08 of 09.06.2008). Furthermore, we make sure the drug is protein-free, as protein contaminants (for instance HMG proteins) are known to activate various types of immune and stem cells.
Major features of the Panagen active substance in light of its possible therapeutic applications
Fragments of exogenous extracellular dsDNA may interact with and be internalized by various cell types without any transfection procedures. It has been established that double-stranded fragmented DNA molecules (including those used in Panagen) can be delivered into cell compartments without transfection, both in unconnected/loosely connected cells and in the tissue context (such as Peyer's patches and solitary lymphatic nodules) [39-48]. Specifically, this property has been demonstrated for bone marrow cells, including mouse and human CD34+ hematopoietic progenitors tested in vivo, ex vivo cultured mouse and human bone marrow cells, and ascites forms of mouse hepatoma and lung carcinoma. DsDNA fragments were also shown to be incorporated by human pluripotent ES cells ex vivo and by the human breast adenocarcinoma cell line MCF-7, and may interact with human dendritic cells obtained ex vivo [7,10,18-21,49,50].
Leukostimulatory effect of a tablet form of the drug (targeting CD34+ hematopoietic stem cells and their earliest lineage-committed progeny). DsDNA fragments have repeatedly been reported to target hematopoietic progenitors and thereby boost their proliferation [14-17]. The leukostimulatory effect of the tablet form of the human dsDNA preparation was consistently demonstrated in dogs [51] and in a phase I clinical trial on healthy volunteers (unpublished data). This stimulatory effect on proliferation is apparently due to the incorporation of dsDNA by immunocompetent cells of the gut-associated lymphoid tissue, which stimulates their migration to the periphery [41-48] and concomitantly activates them to produce cytokines via the system of cytosolic DNA sensors [10,52,53]. Activated intestinal lymphocytes leave the gut and migrate to distant body regions, including bone marrow, where they are believed to induce proliferation of hematopoietic stem cells or their more committed progeny via direct cell-cell contacts or through secretion of specific cytokines.
Activation of antigen-presenting dendritic cells and expansion of a population of cytotoxic perforin+ CD8+ T cells contribute to the anticancer activity of Panagen. These features are based on the interaction of Panagen dsDNA fragments with dendritic cells, which in turn activates their antigen-presenting properties [7-9,12,13].
So-called "delayed death" phenomenon results from the selective targeting of CD34+ hematopoietic stem cells as they recover from the genotoxic stress caused by a cross-linking agent cyclophosphamide. Fragments of exogenous dsDNA reach the nuclear interior of bone marrow cells, including CD34+ hematopoietic stem cells (HSCs). Importantly, if this happens during a very specific "death window" interval, the introduced DNA fragments overwhelm and interfere with the ongoing dsDNA repair. Thus, the dsDNA breaks waiting for delicate resolution via homology-dependent recombination pathway become instantly and randomly end-joined by an error-prone SOS-repair system. This leads to the failure of CD34+ HSC to differentiate into lymphoid lineage. Within several days, functional depletion of the organism immune system occurs, and animals succumb to opportunistic infections and progressive inflammatory response [18,20].
Synergistic action of Panagen and cytostatic drugs cyclophosphamide and doxorubicin. DNA-based immunomodulators have been shown to display synergistic effects with standard cytostatic drugs used in the clinics [4,54,55]. Consistently, we also reported that human dsDNA-based medication has a pronounced anti-cancer effect when combined with doxorubicin and cyclophosphamide [8,12,13].
Choice of the tablet form of the drug and the strategy of drug administration
The full potential of Panagen activities, which include leukostimulatory activity, activation of dendritic cells and stimulation of adaptive antitumor immunity, can only be exploited upon its long-term and continuous administration, so that it can efficiently act upon immune cells, particularly antigen-presenting cells. It has been reported in the literature and established in our own experiments that long-term presence of large amounts of dsDNA in the bloodstream of humans and experimental animals results in multiple inflammation foci in various organs and in activation of autoimmunity [18,56-61]. This has rendered the systemic route of administration, which is typically used for drugs with similar features (leukostimulation, leukoprotection and activation of protective immunity), quite problematic.
Yet, it was also known that dsDNA fragments administered per os can reach the immune cells residing in mucosal lymphatic system, where such cells become activated to produce a variety of cytokines and migrate elsewhere in the body [41-48]. So, we hypothesized that dsDNA fragments administered as tablets with gastro-resistant coating (Panagen) should activate immune cells in the gut, and this route of delivery could be exploited to ultimately target HSCs and antigen-presenting cells.
Our preclinical study performed in dogs [51] and phase I clinical trial of a tablet form of Panagen on 20 healthy volunteers indicated that this drug form stimulated leukopoiesis to the same extent as did intraperitoneal injections (unpublished data). Based on these data, we proceeded to phase II clinical trial on stage II-IV breast cancer patients.
Earlier studies of human dsDNA preparation Panagen have established it as a leukostimulatory agent. Taking into account its described synergistic activity with cytostatic drugs cyclophosphamide and doxorubicin to potently inhibit tumor growth in experimental animals, we developed a new therapeutic scheme of cytostatic treatment of human malignancies.
As was confirmed in multiple studies, leukostimulatory activity of dsDNA preparation is caused by the stimulation of bone marrow cell proliferation, in particular HSCs. This stimulatory effect may result from either internalization of dsDNA fragments by bone marrow progenitors or production of pro-proliferative cytokines by mononuclear cells activated by dsDNA fragments [10,18,41-48].
In terms of inducing adaptive (anticancer) immune response, the major steps of dsDNA therapeutic activity in combination with cross-linking and anthracycline cytostatics are as follows:
Human dsDNA potently activates dendritic cells [7-9,12,13].
Upon oral administration, dsDNA reaches the immune cells of intestinal mucosa and stimulates their professional properties [41-48].
DsDNA fragments turn on the system of cytosolic sensors, thereby leading to production of specific cytokines by immune cells [10,53,54].
Cyclophosphamide metabolite, phosphoramide mustard, induces formation of interstrand crosslinks in cancer cells, which leads to their death and production of tumor cell debris.
Cyclophosphamide stimulates bone marrow-resident mononuclear cells to secrete type I interferon, whereupon dendritic cells undergo maturation and migrate to the periphery. Cyclophosphamide also drives cancer cells into apoptosis, which results in formation of immunogenic cell debris [62].
Cyclophosphamide interferes with the functions of T-regulatory lymphocytes, resulting in their temporary depletion and functional suppression. In contrast, dendritic cells and cytotoxic T-cells are less sensitive to cyclophosphamide. Tumor cells then lose their cellular and humoral protection, whereas immunocompetent cells stop receiving inhibitory signaling from T-regulatory lymphocytes. This combination of factors makes it possible for the immune system to target the tumor [63-74].
Anthracycline cytostatics, such as doxorubicin, induce membrane translocation of calreticulin in apoptotic cancer cells. This is interpreted as an "eat me" signal by the antigen-presenting cells, dendritic cells in particular. Similarly to cyclophosphamide, this results in formation of immunogenic tumor cell debris [75-78].
Taking into account the above-listed properties, we put forward a scientific basis for the following novel therapeutic strategy. Use of cyclophosphamide leads to the physical disintegration of tumor cells and formation of immunogenic debris. Simultaneously, cyclophosphamide boosts maturation and peripheral migration of antigen-presenting dendritic cells. Doxorubicin also causes tumor cells to undergo apoptosis and induces membrane translocation of calreticulin in dying cells, which serves as an "eat me" signal for dendritic cells. These treatments converge to form immunogenic tumor cell debris. Additionally, cyclophosphamide selectively targets T-regulatory lymphocytes and potently inhibits their functions or directly kills them. This leaves tumor cells unprotected from immune system surveillance. Our studies indicate that, following cyclophosphamide treatment and tumor cell lysis, administration of exogenous DNA will, independently of the cyclophosphamide activity, additionally stimulate the antigen-presenting properties of dendritic cells. This will be accompanied by suppression of T-regulatory lymphocytes, which will no longer restrain the immune system from attacking the tumor. Such combined action of cyclophosphamide and doxorubicin will potentiate antigen uptake by dendritic cells activated by cyclophosphamide and dsDNA, which in turn will launch a specific anticancer immune response.
There are three key points to this strategy, as applied to the clinical practice:
Tablet form of the drug is administered 48 hours post cyclophosphamide treatment. This assures safety of the drug by avoiding the cell-destructive period, known as the "death window".
The drug is administered continuously throughout the courses of chemotherapy. It is prescribed as a leukostimulatory medication used intermittently, continuously and massively, which mediates sustained activation of mucosal immune cells, and so results in increased proliferation of HSCs and their immediate committed progeny. Temporary drug withdrawal throughout the chemotherapy courses may only be required to avoid the "death window".
Uninterrupted administration of the drug across multiple lines of chemotherapy allows combining its leukostimulatory potential with activation of antigen-presenting dendritic cells resident in the human mucosa. The cytostatic background further contributes to the maturation and release of CD4+ CD8+ perforin+ cytotoxic T-cells into peripheral blood, which is generally accepted as a sign of a developing adaptive immune response.
The proposed mode of action of the tablet form of Panagen relies on targeting the gut mucosa-resident lymphoid cells by dsDNA fragments. The active substance is encapsulated and delivered to the small intestine. The coating then disintegrates, and the substance is dissolved in the intestinal lumen. Dissolved dsDNA fragments reach mononuclear cells found in Peyer's patches, in lymphoid follicles of vermiform appendix and in solitary follicles, where they activate the cells via a cascade of dsDNA sensors. Upon activation, various types of immune cells normally resident in gut-associated lymphoid tissue migrate into the bloodstream and reach immunocompetent organs. Immune cells then activate proliferation and mobilization of HSCs and their immediate committed progeny, via cell-cell contacts or secreted cytokines.
Gut-associated dendritic cells also migrate into the bloodstream upon activation. When they eventually reach and become anchored in the lymphoid organs (such as mesenterium), they are faced with cancer antigens in the form of immunogenic tumor cell debris. All these events culminate in the induction of anticancer adaptive immune response.
Here, we report on the results of phase II clinical trials of a human dsDNA-based preparation Panagen.
DNA content in the blood plasma samples of patients receiving tablet form of Panagen
Panagen medication tested in the clinical trial is manufactured in the form of gastro-resistant tablets. This gastro-resistant coating dissolves at neutral pH in the intestine, so that the active substance – fragmented human dsDNA – is liberated into the intestinal lumen, where it reaches the mononuclear cells of Peyer's patches [48]. Analyses of blood plasma samples from healthy donors receiving Panagen tablets daily for three months showed no increase in extracellular DNA concentration. Fasting blood samples were collected in the morning, 8 hours after taking a Panagen tablet. The daily dose was 30 mg, taken as six tablets (5 mg each) throughout the day, approximately 1 tablet every 2–3 hours (Figure 1). Furthermore, we detected no changes in DNA concentration in blood plasma 2 hours after swallowing 2–3 Panagen tablets (data not shown).
DNA concentration in blood plasma of healthy volunteers not receiving Panagen (control, n = 15) and following daily oral administration of 30 mg Panagen for 1 and 3 months (n = 9).
Effects of Panagen on hematopoietic progenitors
Leukostimulatory effects of Panagen drug based on fragmented human dsDNA were analyzed throughout three consecutive courses of FAC or AC chemotherapies in patients with stage II-IV breast cancer (Additional files 2, 3 and 4).
Our primary goal at this step was to understand how the drug modulates different blood lineages under the increasing detrimental pressure of repeated chemotherapies. To do so, we measured specific blood lineage cell counts in peripheral blood at control points after 1, 2 and 3 rounds of chemotherapy in patients on Panagen vs placebo, and determined whether these values were significantly different. We assumed that positive effect of Panagen would be demonstrated if significant differences in blood cell counts are observed in at least one control point. This seemingly liberal definition of a positive effect was dictated by several factors. First, we found no published data describing and substantiating the specific time-points to assay the dynamics of hematopoiesis in response to gastrointestinal tract delivery of a drug – hence we were free to choose the control points. Second, neutrophils are known to quickly migrate from the periphery to their destination points, which makes it rather challenging to reliably measure their stimulated proliferation by analyzing peripheral blood samples. We also monitored the frequencies of stage I-IV neutropenia-related events throughout the chemotherapy courses, as well as the dynamics of CD34+/45+ HSCs, which was essentially a blind search in the absence of the documented time-course data.
The analysis performed thus far summarizes the following therapeutic features of Panagen in the context of three courses of FAC/AC chemotherapies. We demonstrate that absolute cell counts for lymphocytes, neutrophils and monocytes at the control points on day 21 after 1, 2 and 3 chemotherapy courses are significantly different between Panagen- and placebo-treated patient cohorts (Figure 2). In order to mitigate the confounding effects of individual patients on the statistical analysis of Panagen leukostimulatory activity (as assayed by cell counts in peripheral blood, and by timing and magnitude of cell proliferation), the patients were grouped into Panagen-responders and non-responders (Figure 3, see appropriate parts of Additional files 2, 3 and 4). Patients whose cell counts at a given time point were higher than on day 14 or day 21 after the first course of chemotherapy (set as 100%) were classified as responders. Most of the blood parameters in the group of Panagen-responders were significantly higher than in the placebo cohort. Notably, 52% of patients responded positively to Panagen therapy throughout the 3 courses of chemotherapy as measured at the control points. This approach allowed us to accurately delineate the leukostimulatory effect of Panagen with minimal contribution of individual patient-specific effects.
Dynamic changes in blood cell counts (×10⁹ cells/L) measured in the clinical trial at the initial pre-therapy timepoint (0) and on day 21 after each chemotherapy course. Median values in each group are shown. The number of patients per group is indicated for each time point. Significantly higher values are observed for Panagen (dashed orange line) vs placebo (black solid line) groups of patients (Wilcoxon-Mann–Whitney test), as well as within each group relative to the initial level before the therapy (Wilcoxon paired test). For patients who received Panagen, an increased value is marked with an upward-facing arrow; for patients from the placebo group, a decreased value is marked with a downward-facing arrow. A red asterisk (*) denotes significant values with p < 0.05, and a blue hash symbol (#) marks a statistically significant difference with p < 0.11.
Changes in stimulation indices (%) for blood cell types throughout three chemotherapy courses. Median values per group are shown. The number of patients per group is indicated for each time point. Stimulation index is expressed as a ratio of values measured in the second and third control time points (days 14 and 21) to the appropriate value observed in the control point of the first chemotherapy course (set as 100%). Patients were subgrouped into Panagen-responders, Placebo and Panagen-non-responders. Red line denotes the 100% level, i.e. the values reported in control time points (days 14 and 21) after the first chemotherapy. Values that show statistically significant difference between Panagen-responders and Placebo patient groups (Wilcoxon-Mann–Whitney test) with p < 0.01 (**), p < 0.05 (*) and p < 0.09 (#) are highlighted.
Notably, blood test parameters in the placebo group display statistically significant differences when compared to the first control point (Figure 2). If one compares cell count curves for placebo and Panagen patient groups, most of the data points (for leukocytes, neutrophils and lymphocytes) display pronounced decline by the end of the third round of chemotherapy in placebo-, but not in Panagen-treated patients where they remain at initial levels (Figure 2). These data are consistent with protective effect of Panagen on leukocyte progenitors.
In both FAC and AC chemotherapies, we observed progressively fewer neutropenias in Panagen patients facing the increasingly negative effects of chemotherapies, as compared to the placebo cohort, where the frequency of neutropenias increased (Figure 4) (Additional file 2, p. 13–15; Additional file 3, p. 15–16).
Frequency of grade I-IV neutropenia-related events in patients at base II on day 14 of three courses of FAC and AC chemotherapies.
Panagen alters the timing of cyclophosphamide-induced abortive release of CD34+/45+ HSCs into peripheral blood. Notably, Panagen also significantly increases the number of HSCs mobilized into the bloodstream (Additional file 5).
As follows from our analysis, Panagen potently stimulates the erythropoietic lineage in patients on the FAC protocol. Hemoglobin levels generally dropped in patients on chemotherapy, yet this was observed in only 63% of patients receiving Panagen vs 100% of placebo-treated patients. Further, in 23% of patients receiving Panagen, we saw an increase in platelet counts by day 21 (Additional file 2, p. 27–29).
Panagen shows hepatoprotective activity counteracting the activities of chemotherapeutic drugs cyclophosphamide, doxorubicin and fluorouracil (Additional file 2, p. 32–38; Additional file 3, p. 38–50; Additional file 4, p. 16–24). Panagen suppresses the effects of drug-induced immunodeficiency in patients on the AC protocol of breast cancer chemotherapy (Additional file 3, p. 51–52). Panagen activity positively correlates with regeneration of surface epithelium, which is likely due to increased proliferation of basal cells in the skin (Additional file 2, p. 39–42).
Activation of immune response. Effects of Panagen on patient cytokine profiles
When combined with cytostatic drugs, Panagen increases the number of CD8+ perforin+ cytotoxic T cells in peripheral blood, which serves as a major marker of a mounting adaptive immune response (Additional file 6).
In our earlier studies, we established that fragmented genomic DNA actively targets dendritic cells and potently induces their allostimulatory activity and maturation both ex vivo and in vivo [7,9]. We also showed that, when combined with cyclophosphamide, the fragmented dsDNA preparation displays pronounced anti-cancer activity in vivo in mice with tumor grafts [12]. When fragmented dsDNA was injected into tumor-engrafted mice following cyclophosphamide or cyclophosphamide and doxorubicin administration, significant antitumor activity was observed [13]. We speculate that the most likely scenario for the suppression of tumor growth in these in vivo experiments involves activation of key immune system components, namely adaptive immunity, which is primarily characterized by production of CD8+ perforin+ cytotoxic T cells [8]. We cannot formally exclude yet another option, i.e. two-pronged targeting of cancer cells by the immune system and via direct cytotoxic activity of the dsDNA preparation [79].
In order to firmly establish whether an anticancer immune response does unfold upon combined use of fragmented exogenous dsDNA and cytostatics, we surveyed the basic cell types involved in adaptive immunity. Namely, we monitored the dynamics of plasmacytoid and myeloid dendritic cells, T-regulatory lymphocytes and CD8+ perforin+ cytotoxic T-cells. Our analysis failed to uncover pronounced trends when measuring dendritic cell and T-regulatory lymphocyte counts.
We observed a significant increase in CD8+ perforin+ cytotoxic T-cells in the peripheral blood of patients receiving Panagen vs placebo on day 21 following 1 course of chemotherapy; notably, 58% of patients on the FAC protocol (7 out of 12) and 16% of patients on the AC protocol (3 out of 19) responded (Figure 5). This supports the activating role of Panagen in the development of an adaptive immune response when it is combined with standard FAC and AC chemotherapeutics.
Relative content (%) of CD8+ perforin+ cytotoxic T-cells in peripheral blood of patients at base II undergoing FAC or AC chemotherapy on day 21 after the first course, relative to the initial baseline level (100%, red line). The Panagen group is split to demonstrate that two distinct patient subgroups are present – "responders" (those having cell counts significantly different from the placebo group) and "non-responders". Median values, quartile range 25-75% (box) and minimum-maximum range are given for each group; n – the number of patients per group. Significant differences from the Placebo group with p < 0.05 (Wilcoxon-Mann–Whitney) are marked with a red asterisk.
It was not unexpected that our analysis of peripheral blood cell counts would fail to uncover a significant expansion of the dendritic cell population whose maturation and activation was driven by Panagen. In fact, dendritic cells are known not to remain freely circulating in the peripheral blood for long, as they quickly anchor in the lymphoid organs (first and foremost in the mesenterium). Furthermore, we could not tell in advance the exact time when matured dendritic cells would be likely to peak in the peripheral blood. These two points contributed to our failure to detect changes in peripheral dendritic cell counts. Nonetheless, the observed increase in cytotoxic T cells argues for the activation of a substantial proportion of the dendritic cell population capable of efficient antigen presentation, so that a specific adaptive immune response could unfold. We speculate that during the cytostatic therapy the largest source of cancer antigens is the immunogenic debris of cancer cells killed by cyclophosphamide and doxorubicin treatments. This suggests that the adaptive immune response formed is a highly personalized anticancer immune response.
Clearly, Panagen potently protects PBMCs known to mediate innate anticancer immunity and counterbalances the negative effects from three courses of aggressive chemotherapy. We further analyzed Panagen activity to maintain and enhance proliferation of PBMCs in the context of innate anticancer immunity. We chose to analyze non-specific cytotoxic activity of patient-derived PBMCs against human adenocarcinoma cell line MCF-7. Our results indicate that Panagen has protective and stimulatory activity towards PBMCs. Cytotoxic indices of PBMCs in patients receiving Panagen were significantly higher than those observed in the placebo group (Figure 6).
Comparative analysis of cytotoxicity indices of PBMCs of patients (FAC regimen, base II) on day 21 following the third round of chemotherapy. Largely responsible for innate anticancer immunity, PBMCs of patients receiving Panagen throughout three courses of chemotherapy retain their specific functions at the levels observed before the therapy (p < 0.05). Unlike the Panagen cohort, PBMCs from placebo group patients display a three-fold decrease in cytotoxicity indices by the end of the third chemotherapy course relative to the initial level.
Opposing trends in the production of TNF-α, IL-2, IL-1, IL-1RA, IL-18, IL-10 and IFN-γ between Panagen-treated and placebo groups have been observed. The production of IL-1, IL-1RA, IL-18, IL-10 and IFN-γ in the Panagen group of patients decreased, while the production of the same cytokines in the placebo group increased. In contrast, production of TNF-α and IL-2 in Panagen-treated patients progressively declined. Moreover, these changes were statistically significant (Figure 7). With respect to physiological significance, it is essential to remember that an increase in systemic levels of these two cytokines is often associated with the initial stages of a cytokine storm. In general, a decreased ability to secrete cytokines is associated with immune suppression. However, an increase in systemic cytokine production is not necessarily a good sign. Uncontrolled systemic secretion of cytokines is one of the major pathogenic outcomes of septic shock and systemic inflammatory response syndrome. It is usually associated with a rapid and severe increase in circulating levels of IL-6, IL-8, MCP-1, MIP-1β, IFN-γ and GM-CSF (also known as a "cytokine storm") due to polyclonal activation of immunocompetent cells [80-84].
Effects of Panagen on patient cytokine profiles. Comparison of spontaneous and mitogen-stimulated cytokine secretion in Control (n = 4) and Panagen-treated (n = 12) patients under the FAC chemotherapy regimen and in Control (n = 6) and Panagen-treated (n = 19) patients under the AC chemotherapy regimen. T0 – before the therapy, T1 – day 21 after the first chemotherapy, T2 – day 21 after the third chemotherapy. Results were analyzed by One Way Repeated Measures and Two Way ANOVA with post hoc Holm-Sidak and Tukey tests in order to evaluate the significance of interval-dependent changes, as well as differences between groups. Absolute units of measure (pg/ml) were converted to natural logarithms in order to normalize data prior to analysis. Data presented as Mean ± SEM; a,b – statistically significant difference (p < 0.05) vs T0 or T1 interval respectively; c – statistically significant difference (p < 0.05) between Panagen and Control groups.
Additionally, our quantitative analysis demonstrates unusually high cytokine secretion by PBMCs as compared with samples from healthy donors [85], which may argue for non-specific activation of leukocytes by tumor-derived factors or by the therapy performed, which would similarly culminate in a systemic response presenting as a cytokine storm. Lower cytokine secretion upon Panagen co-administration may be indicative of efficient suppression of the inflammatory reaction. Collectively, these results argue for cytoprotective properties of Panagen.
Previously it was established that in mice cyclophosphamide monotherapy stimulates maturation and peripheral migration of dendritic cells, as well as induces bone marrow cells to secrete type I interferon [62]. Concomitantly, cyclophosphamide treatment results in cancer cell apoptosis, thereby producing immunogenic cancer cell debris. Thus, anticancer activity of cyclophosphamide relies on a combination of direct cancer cell destruction and activation of adaptive anticancer immunity.
Our studies [7-10] show that dsDNA works in parallel with cyclophosphamide and independently boosts adaptive immunity. Thus, the combination of dsDNA and cyclophosphamide results in a maximal immune response, which is manifested as a significant increase in cytotoxic T cells in peripheral blood samples of the recruited patients. Taken together, published experimental data and the results of the present clinical trial argue for a highly complex nature of anticancer immune response formation upon synergistic activity of standard chemotherapy (FAC, AC) with the Panagen dsDNA preparation.
Long-term follow-up analysis
We compared the frequency of relapses in patients of both study groups 3 years after the therapy at base II (18 FAC patients and 26 AC patients). In the Panagen vs placebo cohorts, 24% and 45% of patients, respectively, relapsed or died (Figure 8, Table 1). Notably, in the Panagen cohort, 2 out of 8 study participants with cancer relapse initially had stage IV cancer with metastases, and 2 more showed evidence of cancer progression during the first or second rounds of chemotherapy. In other words, these patients had disseminated cancer very early in the therapy, and so formally Panagen therapy was used to treat patients whose cancer stage was beyond the protocol coverage.
3-year follow-up analysis of the Panagen clinical trial. Percentage of relapse events and deaths of patients relative to the total number of patients. Data for patients from base II (FAC and AC regimens) are presented. Each group of bars is labeled to show the percentage of patients having stage II, III or IV cancers.
Table 1 Long-term follow-up analysis
Human dsDNA-based drug Panagen shows leukoprotective and leukostimulatory activity when assessed throughout three consecutive FAC and AC chemotherapies.
Panagen efficiently protects the cells involved in innate anticancer immunity from the detrimental effects of FAC and AC chemotherapies.
Panagen induces an adaptive immune response, as assayed by production of a CD8+ perforin+ cytotoxic T cell population.
Panagen therapy reduces the 3-year relapse frequency from 45% down to 24%.
The data collected in this trial set Panagen as a multi-faceted "all-in-one" medicine that is capable of simultaneously sustaining hematopoiesis, protecting the innate immune cells from toxic effects of three consecutive rounds of chemotherapy and boosting individual adaptive immunity. Its unique feature is that it is delivered via gastrointestinal tract and acts through the lymphoid system of intestinal mucosa. Taken together, maintenance of the initial level of innate immunity, development of adaptive cytotoxic immune response and significantly reduced incidence of relapses 3 years after the therapy argue for the anticancer activity of Panagen.
AC chemotherapy:
Chemotherapy including doxorubicin and cyclophosphamide
dsDNA:
Double-stranded DNA
FAC chemotherapy:
Chemotherapy including 5-fluorouracil, doxorubicin and cyclophosphamide
HSCs:
Hematopoietic stem cells
PBMCs:
Peripheral blood mononuclear cells
Airley R. Cancer chemotherapy: Basic Science to the Clinic. Chichester, UK: Wiley-Blackwell; 2009. p. 342.
Frankfurt O, Tallman MS. Growth factors in leukemia. J Natl Compr Canc Netw. 2007;5(2):203–15.
Hoggatt J, Pelus LM. New G-CSF agonists for neutropenia therapy. Expert Opin Investig Drugs. 2014;23(1):21–35.
Weigel BJ, Rodeberg DA, Krieg AM, Blazar BR. CpG oligodeoxynucleotides potentiate the antitumor effects of chemotherapy or tumor resection in an orthotopic murine model of rhabdomyosarcoma. Clin Cancer Res. 2003;9:3105–14.
Klinman DM. Immunotherapeutic uses of CpG oligodeoxynucleotides. Nat Rev Immunol. 2004;4:249–58.
Silin DS, Lyubomska OV, Ershov FI, Frolov VM, Kutsyna GA. Synthetic and natural immunomodulators acting as interferon inducers. Curr Pharm Des. 2009;15(11):1238–47.
Alyamkina EA, Dolgova EV, Likhacheva AS, Rogachev VA, Sebeleva TE, Nikolin VP, et al. Exogenous allogenic fragmented double-stranded DNA is internalized into human dendritic cells and enhances their allostimulatory activity. Cell Immunol. 2010;262(2):120–6.
Alyamkina EA, Leplina OY, Ostanin AA, Chernykh ER, Nikolin VP, Popova NA, et al. Effects of human exogenous DNA on production of perforin-containing CD8+ cytotoxic lymphocytes in laboratory setting and clinical practice. Cell Immunol. 2012;276:59–66.
Alyamkina EA, Leplina OY, Sakhno LV, Chernykh ER, Ostanin AA, Efremov YR, et al. Effect of double-stranded DNA on maturation of dendritic cells in vitro. Cell Immunol. 2010;266:46–51.
Orishchenko KE, Ryzhikova SL, Druzhinina YG, Ryabicheva TG, Varaksin NA, Alyamkina EA, et al. Effect of human double-stranded DNA preparation on the production of cytokines by dendritic cells and peripheral blood cells from relatively healthy donors. Cancer Therapy. 2013;8:191–205.
Spirin AS. Spectrophotometrical determination of total quantity of nucleic acids. Biochemistry. 1958;23(4):656–62. In Russian.
Alyamkina EA, Dolgova EV, Likhacheva AS, Rogachev VA, Sebeleva TE, Nikolin VP, et al. Combined therapy with cyclophosphamide and DNA preparation inhibits the tumor growth in mice. Genet Vaccines Ther. 2009;7(1):12. http://www.gvt-journal.com/content/7/1/12.
Alyamkina EA, Nikolin VP, Popova NA, Dolgova EV, Proskurina AS, Orishchenko KE, et al. A strategy of tumor treatment in mice with doxorubicin-cyclophosphamide combination based on dendritic cell activation by human double-stranded DNA preparation. Genet Vaccines Ther. 2010;8(1):7. http://www.gvt-journal.com/content/8/1/7.
Dolgova EV, Rogachev VA, Nikolin VP, Popova NA, Likhacheva AS, Aliamkina EA, et al. A leukocyte-stimulating effect of exogenous DNA fragments protected with protamine in the cyclophosphamide-induced myelosuppression in mice. Problems in Oncology. 2009;55:761–4. In Russian.
Likhacheva AS, Nikolin VP, Popova NA, Rogachev VA, Prokhorovich MA, Sebeleva TE, et al. Exogenous DNA can be captured by stem cells and be involved in their rescue from death after lethal-dose γ-radiation. Gene Ther Mol Biol. 2007;11:305–14.
Likhacheva AS, Rogachev VA, Nikolin VP, Popova NA, Shilov AG, Sebeleva TE, et al. Involvement of exogenous DNA in the molecular processes in somatic cell. The Herald of Vavilov Society for Geneticists and Breeding Scientists. 2008;12:426–73. In Russian.
Nikolin VP, Popova NA, Sebeleva TE, Strunkin DN, Rogachev VA, Semenov DV, et al. Effect of exogenous DNA injection on leukopoietic repair and antitumor action of cyclophosphamide. Problems in Oncology. 2006;52:336–40. In Russian.
Dolgova EV, Proskurina AS, Nikolin VP, Popova NA, Alyamkina EA, Orishchenko KE, et al. "Delayed death" phenomenon: A synergistic action of cyclophosphamide and exogenous DNA. Gene. 2012;495:134–45.
Dolgova EV, Nikolin VP, Popova NA, Proskurina AS, Orishenko KE, Alyamkina EA, et al. Internalization of exogenous DNA into internal compartments of murine bone marrow cells. Russian Journal of Genetics: Applied Research. 2012;2(6):440–52.
Dolgova EV, Proskurina AS, Nikolin VP, Popova NA, Alyamkina EA, Orishchenko KE, et al. Time-course analysis of toxic action of exogenous DNA administered upon cyclophosphamide pretreatment. Vavilov Journal of Genetics and Breeding. 2011;15(4):674–89. In Russian.
Likhacheva AS, Nikolin VP, Popova NA, Dubatolova TD, Strunkin DN, Rogachev VA, et al. Integration of human DNA fragments into the cell genomes of certain tissues from adult mice treated with cytostatic cyclophosphamide in combination with human DNA. Gene Ther Mol Biol. 2007;11:185–202.
Ayares D, Chekuri L, Song KY, Kucherlapati R. Sequence homology requirements for intermolecular recombination in mammalian cells. Proc Natl Acad Sci U S A. 1986;83(14):5199–203.
Hastings PJ, McGill C, Shafer B, Strathern JN. Ends-in vs. ends-out recombination in yeast. Genetics. 1993;135(4):973–80.
Helleday T. Pathways for mitotic homologous recombination in mammalian cells. Mutat Res. 2003;532(1/2):103–15.
Kotnis A, Du L, Liu C, Popov SW, Pan-Hammarstrom Q. Non-homologous end joining in class switch recombination: the beginning of the end. Philos Trans R Soc Lond B Biol Sci. 2009;364:653–65.
Langston LD, Symington LS. Gene targeting in yeast is initiated by two independent strand invasions. Proc Natl Acad Sci U S A. 2004;101(43):15392–7.
Leung W, Malkova A, Haber JE. Gene targeting by linear duplex DNA frequently occurs by assimilation of a single strand that is subject to preferential mismatch correction. Proc Natl Acad Sci U S A. 1997;94(13):6851–6.
Li J, Read LR, Baker MD. The mechanism of mammalian gene replacement is consistent with the formation of long regions of heteroduplex DNA associated with two crossing-over events. Mol Cell Biol. 2001;21(2):501–10.
Rubnitz J, Subramani S. The minimum amount of homology required for homologous recombination in mammalian cells. Mol Cell Biol. 1984;4(11):2253–8.
Sharma S, Choudhary B, Raghavan SC. Efficiency of nonhomologous DNA end joining varies among somatic tissues, despite similarity in mechanism. Cell Mol Life Sci. 2011;68:661–76.
Symington LS. Focus on recombinational DNA repair. The EMBO Rep. 2005;6(6):512–7.
Thomas KR, Folger KR, Capecchi MR. High frequency targeting of genes to specific sites in the mammalian genome. Cell. 1986;44(3):419–28.
Anker P. Quantitative aspects of plasma/serum DNA in cancer patients. Ann NY Acad Sci. 2000;906:5–7.
Anker P, Mulcahy H, Chen XQ, Stroun M. Detection of circulating tumour DNA in the blood (plasma/serum) of cancer patients. Cancer Metastasis Rev. 1999;18(1):65–73.
Giacona MB, Ruben GC, Iczkowski KA, Roos TB, Porter DM, Sorenson GD. Cell-free DNA in human blood plasma: length measurements in patients with pancreatic cancer and healthy controls. Pancreas. 1998;17(1):89–97.
Ishii KJ, Suzuki K, Coban C, Takeshita F, Itoh Y, Matoba H, et al. Genomic DNA released by dying cells induces the maturation of APCs. J Immunol. 2001;167(5):2602–7.
Jahr S, Hentze H, Englisch S, Hardt D, Fackelmayer FO, Hesch RD, et al. DNA fragments in the blood plasma of cancer patients: quantitations and evidence for their origin from apoptotic and necrotic cells. Cancer Res. 2001;61(4):1659–65.
Tamkovich SN, Bryzgunova OK, Rykova EY, Kolesnikova EV, Shelestuk PJ, Laktionov PP, et al. Circulating nucleic acids in blood of patients with malignant tumors of gastrointestinal tract. Biomedical Chemistry. 2005;51:321–8. In Russian.
Anker P, Jachertz D, Stroun M, Brögger R, Lederrey C, Henri J, et al. The role of extracellular DNA in the transfer of information from T to B human lymphocytes in the course of an immune response. J Immunogenet. 1980;7(6):475–81.
Bennett RM, Gabor GT, Merritt MM. DNA binding to human leukocytes. Evidence for a receptor-mediated association, internalization, and degradation of DNA. J Clin Invest. 1985;76(6):2182–90.
Doerfler W, Orend G, Schubbert R, Fechteler K, Heller H, Wilgenbus P, et al. On the insertion of foreign DNA into mammalian genomes: mechanism and consequences. Gene. 1995;157(1–2):241–5.
Doerfler W, Remus R, Müller K, Heller H, Hohlweg U, Schubbert R. The fate of foreign DNA in mammalian cells and organisms. Dev Biol (Basel). 2001;106:89–97.
Doerfler W, Schubbert R, Heller H, Hertz J, Remus R, Schröer J, et al. Foreign DNA in mammalian systems. APMIS Suppl. 1998;84:62–8.
Grønsberg IM, Nordgård L, Fenton K, Hegge B, Nielsen KM, Bardocz S, et al. Uptake and organ distribution of feed introduced plasmid DNA in growing or pregnant rats. Food and Nutrition Sciences. 2011;2(4):377–86.
Hefeneider SH, Bennett RM, Pham TQ, Cornell K, McCoy SL, Heinrich MC. Identification of a cell-surface DNA receptor and its association with systemic lupus erythematosus. J Invest Dermatol. 1990;94(6 Suppl):79S–84.
Mazza R, Soave M, Morlacchini M, Piva G, Marocco A. Assessing the transfer of genetically modified DNA from feed to animal tissues. Transgenic Res. 2005;14(5):775–84.
Schubbert R, Lettmann C, Doerfler W. Ingested foreign (phage M13) DNA survives transiently in the gastrointestinal tract and enters the bloodstream of mice. Mol Gen Genet. 1994;242(5):495–504.
Schubbert R, Renz D, Schmitz B, Doerfler W. Foreign (M13) DNA ingested by mice reaches peripheral leukocytes, spleen, and liver via the intestinal wall mucosa and can be covalently linked to mouse DNA. Proc Natl Acad Sci U S A. 1997;94(3):961–6.
Yakubov LA, Rogachev VA, Likhacheva AC, Bogachev SS, Sebeleva TE, Shilov AG, et al. Natural human gene correction by small extracellular genomic DNA fragments. Cell Cycle. 2007;6(18):2293–301.
Rogachev VA, Likhacheva A, Vratskikh O, Mechetina LV, Sebeleva TE, Bogachev SS, et al. Qualitative and quantitative characteristics of the extracellular DNA delivered to the nucleus of a living cell. Cancer Cell Int. 2006;6:23. http://cancerci.com/content/6/1/23.
Arzamastsev EV, Malinovskaya KI, Levitskaya EL, Masycheva VI, Dolzhenko TS, Alyamkina EA, et al. The study of the toxicity of the preparation Panagen in tablet form in conditions of one month chronic experiment on dogs. Toxicological Review. 2012;112(1):57–8. In Russian.
Barber GN. Cytoplasmic DNA, innate immune pathways. Immunol Rev. 2011;243(1):99–108.
Kawai T, Akira S. Toll-like receptors and their crosstalk with other innate receptors in infection and immunity. Immunity. 2011;34(5):637–50.
Krieg AM. Development of TLR9 agonists for cancer therapy. J Clin Invest. 2007;117(5):1184–94.
Krieg AM. Therapeutic potential of Toll-like receptor 9 activation. Nat Rev Drug Discov. 2006;5(6):471–84.
Agrawal A, Sridharan A, Prakash S, Agrawal H. Dendritic cells and aging: consequences for autoimmunity. Expert Rev Clin Immunol. 2012;8(1):73–80.
Bleiblo F, Michael P, Brabant D, Ramana CV, Tai T, Saleh M, et al. The role of immunostimulatory nucleic acids in septic shock. Int J Clin Exp Med. 2012;5(1):1–23.
Choubey D. DNA-responsive inflammasomes and their regulators in autoimmunity. Clin Immunol. 2012;142(3):223–31.
Guiducci C, Tripodo C, Gong M, Sangaletti S, Colombo MP, Coffman RL, et al. Autoimmune skin inflammation is dependent on plasmacytoid dendritic cell activation by nucleic acids via TLR7 and TLR9. J Exp Med. 2010;207(13):2931–42.
Kaczorowski DJ, Scott MJ, Pibris JP, Afrazi A, Nakao A, Edmonds RD, et al. Mammalian DNA is an endogenous danger signal that stimulates local synthesis and release of complement factor B. Mol Med. 2012;18:851–60.
Pisetsky DS, Ullal AJ. The blood nucleome in the pathogenesis of SLE. Autoimmun Rev. 2010;10(1):35–7.
Schiavoni G, Sistigu A, Valentini M, Mattei F, Sestili P, Spadaro F, et al. Cyclophosphamide synergizes with type I interferons through systemic dendritic cell reactivation and induction of immunogenic tumor apoptosis. Cancer Res. 2011;71(3):768–78.
Casares N, Arribillaga L, Sarobe P, Dotor J, Lopez-Diaz de Cerio A, Melero I, et al. CD4+/CD25+ regulatory cells inhibit activation of tumor-primed CD4+ T cells with IFN-gamma-dependent antiangiogenic activity, as well as long-lasting tumor immunity elicited by peptide vaccination. J Immunol. 2003;171(11):5931–9.
Curiel TJ, Coukos G, Zou L, Alvarez X, Cheng P, Mottram P, et al. Specific recruitment of regulatory T cells in ovarian carcinoma fosters immune privilege and predicts reduced survival. Nat Med. 2004;10(9):942–9.
Darrasse-Jeze G, Deroubaix S, Mouquet H, Victora GD, Eisenreich T, Yao KH, et al. Feedback control of regulatory T cell homeostasis by dendritic cells in vivo. J Exp Med. 2009;206(9):1853–62.
Ercolini AM, Ladle BH, Manning EA, Pfannenstiel LW, Armstrong TD, Machiels JP, et al. Recruitment of latent pools of high-avidity CD8(+) T cells to the antitumor immune response. J Exp Med. 2005;201(10):1591–602.
Ikezawa Y, Nakazawa M, Tamura C, Takahashi K, Minami M, Ikezawa Z. Cyclophosphamide decreases the number, percentage and the function of CD25+ CD4+ regulatory T cells, which suppress induction of contact hypersensitivity. J Dermatol Sci. 2005;39(2):105–12.
Lutsiak ME, Semnani RT, De Pascalis R, Kashmiri SV, Schlom J, Sabzevari H. Inhibition of CD4(+)25+ T regulatory cell function implicated in enhanced immune response by low-dose cyclophosphamide. Blood. 2005;105(7):2862–8.
Motoyoshi Y, Kaminoda K, Saitoh O, Hamasaki K, Nakao K, Ishii N, et al. Different mechanisms for anti-tumor effects of low- and high-dose cyclophosphamide. Oncol Rep. 2006;16(1):141–6.
Nakahara T, Uchi H, Lesokhin AM, Avogadri F, Rizzuto GA, Hirschhorn-Cymerman D, et al. Cyclophosphamide enhances immunity by modulating the balance of dendritic cell subsets in lymphoid organs. Blood. 2010;115(22):4384–92.
Onizuka S, Tawara I, Shimizu J, Sakaguchi S, Fujita T, Nakayama E. Tumor rejection by in vivo administration of anti-CD25 (interleukin-2 receptor alpha) monoclonal antibody. Cancer Res. 1999;59(13):3128–33.
Salem ML, El-Naggar SA, Cole DJ. Cyclophosphamide induces bone marrow to yield higher numbers of precursor dendritic cells in vitro capable of functional antigen presentation to T cells in vivo. Cell Immunol. 2010;261(2):134–43.
Shimizu J, Yamazaki S, Sakaguchi S. Induction of tumor immunity by removing CD25+ CD4+ T cells: a common basis between tumor immunity and autoimmunity. J Immunol. 1999;163(10):5211–8.
Uchida S, Suzuki K, Akiyama S, Miyamoto M, Juji T, Fujiwara M. Suppressive effect of cyclophosphamide on the progression of lethal graft-versus-host disease in mice–a therapeutic model of fatal post-transfusion GVHD. Ther Immunol. 1994;1(6):313–8.
Gardai SJ, McPhillips KA, Frasch SC, Janssen WJ, Starefeldt A, Murphy-Ullrich JE, et al. Cell-surface calreticulin initiates clearance of viable or apoptotic cells through trans-activation of LRP on the phagocyte. Cell. 2005;123(2):321–34.
Henson PM, Hume DA. Apoptotic cell removal in development and tissue homeostasis. Trends Immunol. 2006;27(5):244–50.
Machiels JP, Reilly RT, Emens LA, Ercolini AM, Lei RY, Weintraub D, et al. Cyclophosphamide, doxorubicin, and paclitaxel enhance the antitumor immune response of granulocyte/macrophage-colony stimulating factor-secreting whole-cell vaccines in HER-2/neu tolerized mice. Cancer Res. 2001;61(9):3689–97.
Obeid M, Panaretakis T, Tesniere A, Joza N, Tufi R, Apetoh L, et al. Leveraging the immune system during chemotherapy: moving calreticulin to the cell surface converts apoptotic death from "silent" to immunogenic. Cancer Res. 2007;67(17):7941–4.
Dolgova EV, Alyamkina EA, Efremov YR, Nikolin VP, Popova NA, Tyrinova TV, et al. Identification of cancer stem cells and a strategy for their elimination. Cancer Biol Ther. 2014;15(10):1378–94.
Tisoncik JR, Korth MJ, Simmons CP, Farrar J, Martin TR, Katze MG. Into the eye of the cytokine storm. Microbiol Mol Biol Rev. 2012;76(1):16–32.
Yiu HH, Graham AL, Stengel RF. Dynamics of a cytokine storm. PLoS One. 2012;7(10):e45027.
Tamayo E, Fernández A, Almansa R, Carrasco E, Heredia M, Lajo C, et al. Pro- and anti-inflammatory responses are regulated simultaneously from the first moments of septic shock. Eur Cytokine Netw. 2011;22(2):82–7.
Martinet L, Garrido I, Filleron T, Le Guellec S, Bellard E, Fournie JJ, et al. Human solid tumors contain high endothelial venules: association with T- and B-lymphocyte infiltration and favorable prognosis in breast cancer. Cancer Res. 2011;71(17):5678–87.
Martinet L, Garrido I, Girard JP. Tumor high endothelial venules (HEVs) predict lymphocyte infiltration and favorable prognosis in breast cancer. Oncoimmunology. 2012;1(5):789–90.
Rizhikova SL, Varaksin NA, Druzhinina YG, Zhukov VA, Zhukova YV. Cytokines in the acute coronary syndrome. Handbook for the head of clinical diagnostic laboratory. 2014;11:27–36. In Russian.
This study was supported by the LLC Panagen, the State scientific project No. VI.60.1.3, and by the state contracts from the Federal Targeted Program "Scientific and academic specialists for innovations in Russia", 2009–2013 of June 15th, 2009, N 02.740.11.0091, September 1st, 2010, N 14.740.11.0007, and April 29th, 2011, N 14.740.11.0922.
Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences, 10 Lavrentieva ave, Novosibirsk, 630090, Russia
Anastasia S Proskurina, Ekaterina A Alyamkina, Evgenia V Dolgova, Konstantin E Orishchenko, Valeriy P Nikolin, Nelly A Popova, Vladimir A Rogachev & Sergey S Bogachev
Novosibirsk State Medical University, Novosibirsk, 630091, Russia
Tatiana S Gvozdeva
Novosibirsk State University, Novosibirsk, 630090, Russia
Nelly A Popova, Sergey V Sidorov, Galina S Soldatova & Stanislav N Zagrebelniy
Oncology Department of Municipal Hospital No 1, Novosibirsk, 630047, Russia
Sergey V Sidorov
Institute of Clinical Immunology, Siberian Branch of the Russian Academy of Medical Sciences, Novosibirsk, 630099, Russia
Elena R Chernykh, Alexandr A Ostanin & Olga Y Leplina
Irkutsk State Medical Academy of Postgraduate Education, Irkutsk, 664049, Russia
Victoria V Dvornichenko & Dmitriy M Ponomarenko
Regional Oncology Dispensary, Irkutsk, 664035, Russia
Clinic Department of the Central Clinical Hospital, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, 630090, Russia
Galina S Soldatova
CJSC "Vector-best", Koltsovo, Novosibirsk region, 630559, Russia
Nikolay A Varaksin & Tatiana G Ryabicheva
LLC Panagen, Gorno-Altaisk, 649000, Russia
Mikhail A Shurdov
Correspondence to Sergey S Bogachev.
ASP performed the analysis, interpreted the data, and drafted the manuscript. TSG carried out clinical work with patients and drafted the manuscript. EAA carried out the molecular studies. EVD carried out the molecular studies. KEO performed the design of the study and provided the technical conditions for performing the work. VPN performed the analysis and interpreted the data. NAP participated in the design of the study. SVS carried out clinical work with patients and participated in study design. ERC participated in the design of the study. AAO performed the analysis and interpreted the data. OYL carried out experiments to estimate the induction of adaptive immunity. VVD helped in the data interpretation. DMP carried out clinical work with patients. GSS helped in the data interpretation. NAV and TGR carried out the cytokine analysis. SNZ participated in the design of the study. VAR carried out production of the DNA preparation. SSB conceived the study, participated in its design, and coordinated and drafted the manuscript. MAS participated in the study design and coordination. All authors read and approved the final manuscript.
Anastasia S Proskurina and Tatiana S Gvozdeva contributed equally to this work.
Progress of phase II clinical trials of Panagen.
Results of the completed phase II double-blind multicenter placebo-controlled clinical trial to evaluate the safety and leukostimulatory activity of Panagen in breast cancer patients. The study was performed at the Oncology Department of Novosibirsk Municipal Hospital No 1 and included 18 patients receiving FAC therapy (14 patients additionally received Panagen, 4 patients received placebo).
Results of the phase II double-blind multicenter placebo-controlled clinical trial to evaluate the safety and leukostimulatory activity of Panagen in breast cancer patients. The study was performed at the Oncology Department of Novosibirsk Municipal Hospital No 1 and included 26 patients receiving AC therapy (19 patients additionally received Panagen, 7 patients received placebo).
Results of the phase II double-blind multicenter placebo-controlled clinical trial to evaluate the safety and leukostimulatory activity of Panagen in breast cancer patients. The study was performed at the Irkutsk Regional Oncology Dispensary and included 23 patients receiving FAC therapy (18 patients additionally received Panagen, 5 patients received placebo).
Analysis of CD34+/45+ progenitor cell dynamics at control time points throughout the three cycles of chemotherapy.
Summary of experiments characterizing development of adaptive immune response.
Proskurina, A.S., Gvozdeva, T.S., Alyamkina, E.A. et al. Results of multicenter double-blind placebo-controlled phase II clinical trial of Panagen preparation to evaluate its leukostimulatory activity and formation of the adaptive immune response in patients with stage II-IV breast cancer. BMC Cancer 15, 122 (2015). https://doi.org/10.1186/s12885-015-1142-z
Keywords: 5-fluorouracil, Leukostimulation, dsDNA
In recent years, the study of evolution equations featuring a fractional Laplacian has received much attention, due to the fact that they have been successfully applied to the modelling of a wide variety of phenomena, ranging from biology and physics to finance. The stochastic process behind fractional operators is linked, in the whole space, to an $\alpha$-stable process, as opposed to the Laplacian operator, which is linked to a Brownian stochastic process. In addition, evolution equations involving fractional Laplacians offer new, interesting and very challenging mathematical problems. There are several equivalent definitions of the fractional Laplacian in the whole space; in a bounded domain, however, there are several options depending on the stochastic process considered. In this talk we shall present results on the rigorous passage from a velocity-jumping stochastic process in a bounded domain to a macroscopic evolution equation featuring a fractional Laplace operator. More precisely, we shall consider the long-time/small mean-free-path asymptotic behaviour of the solutions of a re-scaled linear kinetic transport equation in a smooth bounded domain.
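As a point of orientation (a sketch only, posed on the whole space rather than a bounded domain), the prototypical macroscopic equation of this type is the fractional heat equation
$$ \partial_t u(t,x) + (-\Delta)^{\alpha/2} u(t,x) = 0, \qquad 0 < \alpha < 2, $$
whose solution semigroup is generated by a symmetric $\alpha$-stable process, in the same way that the classical heat equation $\partial_t u = \Delta u$ is generated by Brownian motion.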
# Numerical Analysis <br> Notes for Math 575A
William G. Faris<br>Program in Applied Mathematics<br>University of Arizona
Fall 1992
## Chapter 1
## Nonlinear equations
## $1.1 \quad$ Introduction
This chapter deals with solving equations of the form $f(x)=0$, where $f$ is a continuous function.
The usual way in which we apply the notion of continuity is through sequences. If $g$ is a continuous function, and $c_{n}$ is a sequence such that $c_{n} \rightarrow c$ as $n \rightarrow \infty$, then $g\left(c_{n}\right) \rightarrow g(c)$ as $n \rightarrow \infty$.
Here is some terminology that we shall use. A number $x$ is said to be positive if $x \geq 0$. It is strictly positive if $x>0$. (Thus we avoid the mind-numbing term "non-negative.") A sequence $a_{n}$ is said to be increasing if $a_{n} \leq a_{n+1}$ for all $n$. It is said to be strictly increasing if $a_{n}<a_{n+1}$ for all $n$. There is a similar definition for what it means for a function to be increasing or strictly increasing. (This avoids the clumsy locution "nondecreasing.")
Assume that a sequence $a_{n}$ is increasing and bounded above by some $c<\infty$, so that $a_{n} \leq c$ for all $n$. Then it is always true that there is an $a \leq c$ such that $a_{n} \rightarrow a$ as $n \rightarrow \infty$.
## 1.2 Bisection
The bisection method is a simple and useful way of solving equations. It is a constructive implementation of the proof of the following theorem. This result is a form of the intermediate value theorem.
Theorem 1.2.1 Let $g$ be a continuous real function on a closed interval $[a, b]$ such that $g(a) \leq 0$ and $g(b) \geq 0$. Then there is a number $r$ in the interval with $g(r)=0$.
Proof: Construct a sequence of intervals by the following procedure. Take an interval $[a, b]$ on which $g$ changes sign, say with $g(a) \leq 0$ and $g(b) \geq 0$. Let $m=(a+b) / 2$ be the midpoint of the interval. If $g(m) \geq 0$, then replace $[a, b]$ by $[a, m]$. Otherwise, replace $[a, b]$ by $[m, b]$. In either case we obtain an interval of half the length on which $g$ changes sign.
The sequence of left-hand points $a_{n}$ of these intervals is an increasing sequence bounded above by the original right-hand end point. Therefore this sequence converges as $n \rightarrow \infty$. Similarly, the sequence of right-hand points $b_{n}$ is a decreasing sequence bounded below by the original left-hand end point. Therefore it also converges. Since the length $b_{n}-a_{n}$ goes to zero as $n \rightarrow \infty$, it follows that the two sequences have the same limit $r$.
Since $g\left(a_{n}\right) \leq 0$ for all $n$, we have that $g(r) \leq 0$. Similarly, since $g\left(b_{n}\right) \geq 0$ for all $n$, we also have that $g(r) \geq 0$. Hence $g(r)=0$.
Note that if $a$ and $b$ are the original endpoints, then after $n$ steps one is guaranteed to have an interval of length $(b-a) / 2^{n}$ that contains a root.
In the computer implementation the inputs to the computation involve giving the endpoints a and b and the function g. One can only do a certain number of steps of the implementation. There are several ways of accomplishing this. One can give the computer a tolerance and stop when the length of the interval does not exceed this value. Alternatively, one can give the computer a fixed number of steps nsteps. The output is the sequence of a and b values describing the bisected intervals.
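A minimal sketch of such a procedure, under the assumption that real is a typedef for double and with printf standing in for the display routine used for output:

```c
#include <stdio.h>

typedef double real;

/* Bisection controlled by a tolerance: halve [a, b] until its length
   is at most tol.  Assumes g(a) <= 0 and g(b) >= 0 on entry. */
void bisect(real (*g)(real), real a, real b, real tol)
{
    real m;
    while (b - a > tol)
    {
        m = (a + b) / 2;
        if (g(m) >= 0)
            b = m;              /* the sign change persists on [a, m] */
        else
            a = m;              /* the sign change persists on [m, b] */
        printf("%f %f\n", a, b);
    }
}
```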
The construction here is the while loop, which is perhaps the most fundamental technique in programming.
Here is an alternative version in which the iterations are controlled by a counter $\mathrm{n}$.
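A sketch of this counter-controlled variant, under the same assumptions (the real typedef and printf output) as the sketch above:

```c
/* Bisection controlled by a counter: perform exactly nsteps bisections. */
void bisect(real (*g)(real), real a, real b, int nsteps)
{
    int n;
    real m;
    n = 0;
    while (n < nsteps)
    {
        m = (a + b) / 2;
        if (g(m) >= 0)
            b = m;
        else
            a = m;
        printf("%f %f\n", a, b);
        n = n + 1;
    }
}
```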
Algorithms such as this are actual computer code in the programming language $\mathrm{C}$. It should be rather easy to read even without a knowledge of $C$. The key word void indicates that the bisect function is a procedure and not a function that returns a meaningful value. The parameter declaration real $(* g)$ (real) indicates that $g$ is a variable that can point to functions that have been defined (real functions of real arguments). When reading the text of a function such as bisect, it is useful to read the sign = as "becomes."
To make a complete program, one must put this in a program that calls this procedure.
This program begins with declarations of a new type real and of four functions bisect, quadratic, fetch, and display. The main program uses these functions to accomplish its purpose; it returns the integer value 0 only to proclaim its satisfaction with its success.
One also needs to define the function quadratic.
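A sketch of a definition consistent with the remark that follows:

```c
real quadratic(real x)
{
    return x * x - 2.0;     /* zero exactly at the square roots of two */
}
```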
This particular function has roots that are square roots of two. We shall not go into the dismal issues of input and output involved with fetch and display.
Another interesting question is that of uniqueness. If $g$ is strictly increasing on $[a, b]$, then there is at most one solution of $g(x)=0$.
The easiest way to check that $g$ is strictly increasing on $[a, b]$ is to check that $g^{\prime}(x)>0$ on $(a, b)$. Then for $a \leq p<q \leq b$ we have by the mean value theorem that $g(q)-g(p)=g^{\prime}(c)(q-p)>0$ for some $c$ with $p<c<q$. Thus $p<q$ implies $g(p)<g(q)$.
One can use a similar idea to find maxima and minima. Let $g$ be a continuous function on $[a, b]$. Then there is always a point $r$ at which $g$ assumes its maximum.
Assume that $g$ is unimodal, that is, that there exists an $r$ such that $g$ is strictly increasing on $[a, r]$ and strictly decreasing on $[r, b]$. The computational problem is to locate the point $r$ at which the maximum is assumed.
The trisection method accomplishes this task. Divide $[a, b]$ into three equal intervals with end points $a<p<q<b$. If $g(p) \leq g(q)$, then $r$ must be in the smaller interval $[p, b]$. Similarly, if $g(p) \geq g(q)$, then $r$ must be in the smaller interval $[a, q]$. The method is to repeat this process until a sufficiently small interval is obtained.
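A minimal sketch of the trisection method, assuming the same real typedef as in the bisection sketches; the name trisect, the tolerance stopping rule, and the choice to return the midpoint of the final interval are assumptions.

```c
/* Locate the maximizer of a unimodal g on [a, b] by repeated trisection. */
real trisect(real (*g)(real), real a, real b, real tol)
{
    real p, q;
    while (b - a > tol)
    {
        p = a + (b - a) / 3;
        q = b - (b - a) / 3;
        if (g(p) <= g(q))
            a = p;              /* the maximizer lies in [p, b] */
        else
            b = q;              /* the maximizer lies in [a, q] */
    }
    return (a + b) / 2;         /* approximate location of the maximum */
}
```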
## Projects

1. Write a bisection program to find the square roots of two. Find them.
2. Use the program to solve $\sin x=x^{2}$ for $x>0$.
3. Use the program to solve $\tan x=x$ with $\pi / 2<x<3 \pi / 2$.
4. Write a trisection program to find maxima. Use it to find the minimum of $h(x)=x^{3} / 3+\cos x$ for $x \geq 0$.
## Problems
1. Show that $\sin x=x^{2}$ has a solution with $x>0$. Be explicit about the theorems that you use.
2. Show that $\sin x=x^{2}$ has at most one solution with $x>0$.
3. Show that $\tan x=x$ has a solution with $\pi / 2<x<3 \pi / 2$.
4. Show that $x^{5}-x+1 / 4=0$ has at least two solutions between 0 and 1.
5. Show that $x^{5}-x+1 / 4=0$ has at most two solutions between 0 and 1.
6. Prove a stronger form of the intermediate value theorem: if $g$ is continuous on $[a, b]$, then $g$ assumes every value in $[g(a), g(b)]$.
7. How many decimal places of accuracy does one gain at each bisection?
8. How many decimal places are obtained at each trisection?
## 1.3 Iteration
#### First order convergence
Recall the intermediate value theorem: If $f$ is a continuous function on the interval $[a, b]$ and $f(a) \leq 0$ and $f(b) \geq 0$, then there is a solution of $f(x)=0$ in this interval.
This has an easy consequence: the fixed point theorem. If $g$ is a continuous function on $[a, b]$ and $g(a) \geq a$ and $g(b) \leq b$, then there is a solution of $g(x)=x$ in the interval.
Another approach to numerical root-finding is iteration. Assume that $g$ is a continuous function. We seek a fixed point $r$ with $g(r)=r$. We can attempt to find it by starting with an $x_{0}$ and forming a sequence of iterates using $x_{n+1}=g\left(x_{n}\right)$.

Theorem 1.3.1 Let $g$ be continuous and let $x_{n}$ be a sequence such that $x_{n+1}=g\left(x_{n}\right)$. If $x_{n} \rightarrow r$ as $n \rightarrow \infty$, then $g(r)=r$.
This theorem shows that we need a way of getting sequences to converge. One such method is to use increasing or decreasing sequences.
Theorem 1.3.2 Let $g$ be a continuous function on $[r, b]$ such that $g(x) \leq x$ for all $x$ in the interval. Let $g(r)=r$ and assume that $g^{\prime}(x) \geq 0$ for $r<x<b$. Start with $x_{0}$ in the interval. Then the iterates defined by $x_{n+1}=g\left(x_{n}\right)$ converge to a fixed point.
Proof: By the mean value theorem, for each $x$ in the interval there is a $c$ with $g(x)-r=g(x)-g(r)=g^{\prime}(c)(x-r)$. It follows that $r \leq g(x) \leq x$ for $r \leq x \leq b$. In other words, the iterations decrease and are bounded below by $r$.
Another approach is to have a bound on the derivative.
Theorem 1.3.3 Assume that $g$ is continuous on $[a, b]$ and that $g(a) \geq a$ and $g(b) \leq b$. Assume also that $\left|g^{\prime}(x)\right| \leq K<1$ for all $x$ in the interval. Let $x_{0}$ be in the interval and iterate using $x_{n+1}=g\left(x_{n}\right)$. Then the iterates converge to a fixed point. Furthermore, this fixed point is unique.
Proof: Let $r$ be a fixed point in the interval. By the mean value theorem, for each $x$ there is a $c$ with $g(x)-r=g(x)-g(r)=g^{\prime}(c)(x-r)$, and so $|g(x)-r|=\left|g^{\prime}(c)\right||x-r| \leq K|x-r|$. In other words each iteration replacing $x$ by $g(x)$ brings us closer to $r$.
We say that $r$ is a stable fixed point if $\left|g^{\prime}(r)\right|<1$. We expect convergence when the iterations are started near a stable fixed point.
If we want to use this to solve $f(x)=0$, we can try to take $g(x)=$ $x-k f(x)$ for some suitable $k$. If $k$ is chosen so that $g^{\prime}(x)=1-k f^{\prime}(x)$ is small for $x$ near $r$, then there should be a good chance of convergence.
It is not difficult to program fixed point iteration. Here is a version that displays all the iterates.
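A sketch of such a version; the name fixpoint, the typedef of real as double, and the use of printf for the display are assumptions.

```c
#include <stdio.h>

typedef double real;

/* Fixed point iteration x <- g(x), displaying every iterate. */
void fixpoint(real (*g)(real), real x, int nsteps)
{
    int n;
    for (n = 0; n < nsteps; n = n + 1)
    {
        x = g(x);
        printf("%f\n", x);
    }
}
```

For Newton's method, described next, one simply supplies $g(x)=x-f(x) / f^{\prime}(x)$ as the iteration function.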
#### Second order convergence
Since the speed of convergence in iteration with $g$ is controlled by $g^{\prime}(r)$, it follows that the situation when $g^{\prime}(r)=0$ is going to have special properties.
It is possible to arrange that this happens! Say that one wants to solve $f(x)=0$. Newton's method is to take $g(x)=x-f(x) / f^{\prime}(x)$. It is easy to check that $f(r)=0$ and $f^{\prime}(r) \neq 0$ imply that $g^{\prime}(r)=0$.
Newton's method is not guaranteed to behave well if one begins far from the root. The damped Newton method is more conservative. One defines $g(x)$ as follows. Let $m=f(x) / f^{\prime}(x)$ and let $y=x-m$. While $|f(y)|>|f(x)|$, replace $m$ by $m / 2$ and let $y=x-m$. Let $g(x)$ be the final value of $y$.
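A sketch of the damped Newton iteration function just described, assuming the derivative is supplied as a separate function fprime (the function names are assumptions):

```c
#include <math.h>

typedef double real;

/* One damped Newton step: halve the correction m until the new
   function value is no larger in absolute value than the old one. */
real damped_newton(real (*f)(real), real (*fprime)(real), real x)
{
    real m = f(x) / fprime(x);
    real y = x - m;
    while (fabs(f(y)) > fabs(f(x)))     /* see the last problem below */
    {
        m = m / 2;
        y = x - m;
    }
    return y;
}
```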
## Projects
1. Implement Newton's method as a special case of the fixed point iterations. Use this to find the largest root of $\sin x-x^{2}=0$. Describe what happens if you start the iteration with 0.46.
2. Implement the damped Newton's method. Use this to find the largest root of $\sin x-x^{2}=0$. Describe what happens if you start the iteration with 0.46.
3. Find all roots of $2 \sin x-x=0$ numerically. Use some version of Newton's method.
## Problems
1. Let $g(x)=(2 / 3) x+(7 / 3)\left(1 / x^{2}\right)$. Show that for every initial point $x_{0}$ above the fixed point the iterations converge to the fixed point. What happens for initial points $x_{0}>0$ below the fixed point?
2. Assume that $x \leq g(x)$ and $g^{\prime}(x) \geq 0$ for $a \leq x \leq r$. Show that it follows that $x \leq g(x) \leq r$ for $a \leq x \leq r$ and that the iterations increase to the root.
3. Prove the fixed point theorem from the intermediate value theorem.
4. In fixed point iteration with a $g$ having continuous derivative and stable fixed point $r$, find the limit of $\left(x_{n+1}-r\right) /\left(x_{n}-r\right)$. Assume the iterations converge.
5. Perhaps one would prefer something that one could compute numerically. Find the limit of $\left(x_{n+1}-x_{n}\right) /\left(x_{n}-x_{n-1}\right)$ as $n \rightarrow \infty$.
6. How many decimal places does one gain at each iteration?
7. In fixed point iteration with a $g$ having derivative $g^{\prime}(r)=0$ and continuous second derivative, find the limit of $\left(x_{n+1}-r\right) /\left(x_{n}-r\right)^{2}$.
8. Describe what this does to the decimal place accuracy at each iteration.
9. Calculate $g^{\prime}(x)$ in Newton's method.
10. Show that in Newton's method $f(r)=0$ with $f^{\prime}(r) \neq 0$ implies $g^{\prime}(r)=0$.
11. Calculate $g^{\prime \prime}(x)$ in Newton's method.
12. Consider Newton's method for $x^{3}-7=0$. Find the basin of attraction of the positive root. Be sure to find the entire basin and prove that your answer is correct. (The basin of attraction of a fixed point of an iteration function $g$ is the set of all initial points such that fixed point iteration starting with that initial point converges to the fixed point.)
13. Consider Newton's method to find the largest root of $\sin x-x^{2}=0$. What is the basin of attraction of this root? Give a mathematical argument that your result is correct.
14. Show that in Newton's method starting near the root one has either increase to the root from the left or decrease to the root from the right. (Assume that $f^{\prime}(x)$ and $f^{\prime \prime}(x)$ are non-zero near the root.) What determines which case holds?
15. Assume that $\left|x_{n+1}-r\right| \leq K\left|x_{n}-r\right|^{2}$ for all $n \geq 0$. Find a condition on $x_{0}$ that guarantees that $x_{n} \rightarrow r$ as $n \rightarrow \infty$.
16. Is the iteration function in the damped Newton's method well-defined? Or could the halving of the steps go on forever?
## 1.4 Some C notations
#### Introduction
This is an exposition of a fragment of $\mathrm{C}$ sufficient to express numerical algorithms involving only scalars. Data in $\mathrm{C}$ comes in various types. Here we consider arithmetic types and function types.
A C program consists of declarations and function definitions. The declarations reserve variables of various types. A function definition describes how to go from input values to an output value, all of specified types. It may also define a procedure by having the side effect of changing the values of variables.
The working part of a function definition is formed of statements, which are commands to perform some action, usually changing the values of variables. The calculations are performed by evaluating expressions written in terms of constants, variables, and functions to obtain values of various types.
#### Types
## Arithmetic
Now we go to the notation used to write a $\mathrm{C}$ program. The basic types include arithmetic types such as:
char
int
float
double
These represent character, integer, floating point, and double precision floating point values.
There is also a void type that has no values.
A variable of a certain type associates to each machine state a value of this type. In a computer implementation a variable is realized by a location in computer memory large enough to hold a value of the appropriate type.
Example: One might declare $\mathrm{n}$ to be an integer variable and $\mathrm{x}$ to be a float variable. In one machine state $\mathrm{n}$ might have the value 77 and $\mathrm{x}$ might have the value 3.41 .
## Function
Another kind of data object is a function. The type of a function depends on the types of the arguments and on the type of the value. The type of the value is written first, followed by a list of types of the arguments enclosed in parentheses.
Example: float (int) is the type of a function of an integer argument returning float. There might be a function convert of this type defined in the program.
Example: float ( float $(*)$ (float), float, int) is the type of a function of three arguments of types float $(*)$ (float), float, and int returning float. The function iterate defined below is of this type.
A function is a constant object given by a function definition. A function is realized by the code in computer memory that defines the function.
## Pointer to function
There are no variable functions, but there can be variables of type pointer to function. The values of such a variable indicate which of the function definitions is to be used.
Example: float $(*)$ (int) is the type of a pointer to a function from int to float. There could be a variable $f$ of this type. In some machine state it could point to the function convert.
The computer implementation of pointer to function values is as addresses of memory locations where the functions are stored.
A function is difficult to manipulate directly. Therefore in a $\mathrm{C}$ expression the value of a function is not the actual function, but the pointer associated with the function. This process is known as pointer conversion.
Example: It is legal to make the assignment $\mathrm{f}=$ convert.
#### Declarations
A declaration is a specification of variables or functions and of their types.
A declaration consists of a value type and a list of declarators and is terminated by a semicolon. These declarators associate identifiers with the corresponding types.
Example: float $\mathrm{x}, \mathrm{y}$; declares the variables $\mathrm{x}$ and $\mathrm{y}$ to be of type float.
Example: float $(* g)$ (float) ; declares $g$ as a pointer to function from float to float.
Example: float iterate (float $(*)$ (float), float, int) ; declares a function iterate of three arguments of types float (*) (float), float, and int returning float.
#### Expressions
## Variables
An expression of a certain type associates to each machine state a value of this type.
Primary expressions are the expressions that have the highest precedence. Constants and variables are primary expressions. An arbitrary expression can be converted to a primary expression by enclosing it in parentheses.
Usually the value of the variable is the data contained in the variable. However the value of a function is the pointer that corresponds to the function. Example: After the declaration float x, y ; and subsequent assignments the variables $x$ and $y$ may have values which are float.
Example: After the declaration float $(* g)$ (float) ; and subsequent assignments the variable $g$ may have a pointer to function on float returning a float value.
Example: After the declaration float iterate( float $(*)$ (float), float, int) ; and a function definition the function iterate is defined. Its value is the pointer value that corresponds to the function. Thus if $h$ is a variable which can point to such a function, then the assignment $\mathrm{h}=$ iterate ; is legal.
## Function calls
A function call is an expression formed from a pointer to function expression and an argument list of expressions. Its value is obtained by finding the pointer value, evaluating the arguments and copying their values, and using the function corresponding to the pointer value to calculate the result.
Example: Assume that $\mathrm{g}$ is a function pointer that has a pointer to some function as its value. Then this function uses the value of $\mathrm{x}$ to obtain a value for the function call $\mathrm{g}(\mathrm{x})$.
Example: The function iterate is defined with the heading iterate( float (*g)(float), float x, int n ). A function call iterate(square, z, 3) uses the value of iterate, which is a function pointer, and the values of square, z, and 3, which are function pointer, float, and integer. The values of the arguments square, z, and 3 are copied to the parameters g, x, and n. The computation described in the function definition is carried out, and a float is returned as the value of the function call.
## Casts
A data type may be changed by a cast operator. This is indicated by enclosing the type name in parentheses.
Example: $7 / 2$ evaluates to 3 while (float) $7 / 2$ evaluates to 3.5.
## Arithmetic and logic
There are a number of ways of forming new expressions from old.
The unary operators + and - and the negation ! can form new expressions.
Multiplicative expressions are formed by the binary operators *, /, and %. The last represents the remainder in integer division.
Additive expressions are formed by the binary operators + and -. Relational expressions are formed by the inequalities $<$ and $<=$ and $>$ and $>=$.
Equality expressions are formed by the equality and negated equality == and !=.
Logical AND expressions are formed by &&.

Logical OR expressions are formed by ||.
## Assignments
Another kind of expression is the assignment expression. This is of the form variable $=$ expression. It takes the expression on the right, evaluates it, and assigns the value to the variable on the left (and to the assignment expression). This changes the machine state.
An assignment is read variable "becomes" expression.
Warning: This should be distinguished from an equality expression of the form expression $==$ expression. This is read expression "equals" expression.
Example: $i=0$
Example: $i=i+1$
Example: $\mathrm{x}=\mathrm{g}(\mathrm{x})$
Example: $\mathrm{h}=$ iterate, where $\mathrm{h}$ is a function pointer variable and iterate is a function constant.
#### Statements
## Expression statements
A statement is a command to perform some action changing the machine state.
Among the most important are statements formed from expressions (such as assignment expressions) of the form expression ;
Example: $i=0$;
Example: $i=i+1$;
Example: $\mathrm{x}=\mathrm{g}(\mathrm{x})$;
Example: $\mathrm{h}=$ iterate ; where $\mathrm{h}$ is a function pointer variable and iterate is a function constant.
In the compound statement part of a function definition the statement return expression ; stops execution of the function and returns the value of the expression.
## Control statements
There are several ways of building up new statements from old ones. The most important are the following. A compound statement is of the form:
{ declaration-list statement-list }
An if-else statement is of the form:
if ( expression) statement else statement
A while statement is of the form:
while ( expression ) statement
The following pattern of statements often occurs:
expr1 ; while ( expr2 ) { statement expr3 ; }
Abbreviation: The above pattern is abbreviated by the for statement of the form:
for ( expr1 ; expr2 ; expr3) statement
#### Function definitions
A function definition begins with a heading that indicates the type of the output, the name of the function, and a parenthesized parameter list. Each element of the parameter list is a specification that identifies a parameter of a certain type. The body of a function is a single compound statement.
Example: The definition of square as the squaring function of type function of float returning float is the following.
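This is a sketch; the original listing should be essentially the same.

```c
float square(float x)
{
    return x * x;
}
```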
The definition of iterate (with parameters g, x, n of types pointer to function of float returning float, float, and integer) returning float is the following.
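This sketch is consistent with the behaviour described in the example below, where iterate(square, z, 3) with z equal to 2.0 returns 256.0.

```c
float iterate(float (*g)(float), float x, int n)
{
    int i;
    for (i = 0; i < n; i = i + 1)
        x = g(x);
    return x;
}
```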
Example: Consider the main program
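The following is a reconstruction consistent with the description that follows; the declarations and the variable names z and w are taken from the text, while the exact layout is an assumption.

```c
float square(float);
float iterate(float (*)(float), float, int);

int main(void)
{
    float z, w;
    z = 2.0;
    w = iterate(square, z, 3);
    return 0;
}
```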
The function call iterate(square, z, 3) has argument expressions which are a function square, a float z, and an integer 3. These arguments are evaluated and the values are copied to the parameters g, x, and n, which are pointer to function, float, and integer objects. In the course of evaluation the parameter x changes its value, but z does not change its value of 2.0. The value returned by iterate(square, z, 3) is 256.0. The ultimate result of the program is to assign 2.0 to z and 256.0 to w.
## Chapter 2
## Linear Systems
This chapter is about solving systems of linear equations. This is an algebraic problem, and it provides a good place in which to explore matrix theory.
## 2.1 Shears
In this section we make a few remarks about the geometric significance of Gaussian elimination.
We begin with some notation. Let $\mathbf{z}$ be a vector and $\mathbf{w}$ be another vector. We think of these as column vectors. The inner product of $\mathbf{w}$ and $\mathbf{z}$ is $\mathbf{w}^{T} \mathbf{z}$ and is a scalar. The outer product of $\mathbf{z}$ and $\mathbf{w}$ is $\mathbf{z} \mathbf{w}^{T}$, and this is a matrix.
Assume that $\mathbf{w}^{T} \mathbf{z}=0$. A shear is a matrix $M$ of the form $I+\mathbf{z w}^{T}$. It is easy to check that the inverse of $M$ is another shear given by $I-\mathbf{z w}^{T}$.
The idea of Gaussian elimination is to bring vectors to a simpler form by using shears. In particular one would like to make the vectors have many zero components. The vectors of concern are the column vectors of a matrix.
Here is the algorithm. We want to solve $A \mathbf{x}=\mathbf{b}$. If we can decompose $A=L U$, where $L$ is lower triangular and $U$ is upper triangular, then we are done. All that is required is to solve $L \mathbf{y}=\mathbf{b}$ and then solve $U \mathbf{x}=\mathbf{y}$.
In order to find the $L U$ decomposition of $A$, one can begin by setting $L$ to be the identity matrix and $U$ to be the original matrix $A$. At each stage of the algorithm one replaces $L$ by $L M^{-1}$ and $U$ by $M U$, where $M$ is a suitably chosen shear matrix.
The choice of $M$ at the $j$ th stage is the following. We take $M=I+\mathbf{z e}_{j}$, where $\mathbf{e}_{j}$ is the $j$ th unit basis vector in the standard basis. We take $\mathbf{z}$ to have non-zero coordinates $z_{i}$ only for index values $i>j$. Then $M$ and $M^{-1}$ are lower triangular matrices.
The goal is to try to choose $M$ so that $U$ will eventually become an upper triangular matrix. Let us apply $M$ to the $j$ th column $\mathbf{u}_{j}$ of the current $U$. Then we want to make $M \mathbf{u}_{j}$ equal to zero for indices larger than $j$. That is, one must make $u_{i j}+z_{i} u_{j j}=0$ for $i>j$. Clearly this can be done, provided that the diagonal element $u_{j j} \neq 0$.
This algorithm with shear transformations only works if all of the diagonal elements turn out to be non-zero. This is somewhat more restrictive than merely requiring that the matrix $A$ be non-singular.
Here is a program that implements the algorithm.
In $C$ the vector and matrix data types may be implemented by pointers. These pointers must be told to point to available storage regions for the vector and matrix entries. That is the purpose of the following functions.
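A sketch of such allocation functions, assuming typedefs vector and matrix for float * and float ** and indexing from 1 as in the procedures below; the names makevector and makematrix are assumptions.

```c
#include <stdlib.h>

typedef float *vector;
typedef float **matrix;

/* Reserve storage for an n-vector, indexed 1 through n. */
vector makevector(int n)
{
    return (vector) calloc(n + 1, sizeof(float));
}

/* Reserve storage for an m by n matrix, indexed from 1 in each direction. */
matrix makematrix(int m, int n)
{
    int i;
    matrix a = (matrix) calloc(m + 1, sizeof(float *));
    for (i = 1; i <= m; i = i + 1)
        a[i] = (vector) calloc(n + 1, sizeof(float));
    return a;
}
```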
The actual work in producing the upper triangular matrix is done by the following procedure. The matrix a is supposed to become upper triangular while the matrix l remains lower triangular.
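A sketch of the driving procedure, reusing the vector and matrix typedefs above; the name factor is an assumption, while column and shear are the procedures described next.

```c
void column(int j, matrix a, matrix l, int n);
void shear(int j, matrix a, matrix l, int n);

/* Reduce a to upper triangular form in place, accumulating the shears
   in the lower triangular matrix l (which should start as the identity). */
void factor(matrix a, matrix l, int n)
{
    int j;
    for (j = 1; j <= n; j = j + 1)
    {
        column(j, a, l, n);     /* compute the shear for column j */
        shear(j, a, l, n);      /* apply it, clearing a below the diagonal */
    }
}
```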
The column procedure computes the proper shear and stores it in the lower triangular matrix l.
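In the sketch below, what gets stored in column $j$ of l is the multiplier $a_{i j} / a_{j j}$, which is exactly the below-diagonal entry of the inverse shear $M^{-1}=I-\mathbf{z} \mathbf{e}_{j}^{T}$.

```c
/* Store the multipliers for column j in the lower triangular matrix l. */
void column(int j, matrix a, matrix l, int n)
{
    int i;
    for (i = j + 1; i <= n; i = i + 1)
        l[i][j] = a[i][j] / a[j][j];    /* requires a[j][j] != 0 */
}
```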
The shear procedure applies the shear to bring a closer to upper triangular form.
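A sketch of the shear procedure:

```c
/* Subtract l[i][j] times row j from row i, for i > j.  This applies the
   shear M and makes a[i][j] = 0 for all i > j. */
void shear(int j, matrix a, matrix l, int n)
{
    int i, k;
    for (i = j + 1; i <= n; i = i + 1)
        for (k = j; k <= n; k = k + 1)
            a[i][k] = a[i][k] - l[i][j] * a[j][k];
}
```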
The actual solving of lower and upper triangular systems is routine.
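For completeness, here is a sketch of the two triangular solves; the names lsolve and usolve are assumptions. Since l has ones on its diagonal, no division is needed in the forward solve.

```c
/* Solve l y = b by forward substitution (l is unit lower triangular). */
void lsolve(matrix l, vector y, vector b, int n)
{
    int i, j;
    for (i = 1; i <= n; i = i + 1)
    {
        y[i] = b[i];
        for (j = 1; j < i; j = j + 1)
            y[i] = y[i] - l[i][j] * y[j];
    }
}

/* Solve u x = y by back substitution. */
void usolve(matrix u, vector x, vector y, int n)
{
    int i, j;
    for (i = n; i >= 1; i = i - 1)
    {
        x[i] = y[i];
        for (j = i + 1; j <= n; j = j + 1)
            x[i] = x[i] - u[i][j] * x[j];
        x[i] = x[i] / u[i][i];
    }
}
```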
It would be nicer to have an algorithm that worked for an arbitrary non-singular matrix. Indeed the problem with zero diagonal elements can be eliminated by complicating the algorithm.
The idea is to decompose $A=P L U$, where $P$ is a permutation matrix (obtained by permuting the rows of the identity matrix). Then to solve $A \mathbf{x}=\mathbf{b}$, one solves $L U \mathbf{x}=P^{-1} \mathbf{b}$ by the same method as before.
One can begin by setting $P$ to be the identity matrix and $L$ to be the identity matrix and $U$ to be the original matrix $A$. The algorithm uses shears as before, but it is also allowed to use permutations when it is useful to get rid of zero or small diagonal elements.
Let $R$ be a permutation that interchanges two rows. Then we replace $P$ by $P R^{-1}, L$ by $R L R^{-1}$, and $U$ by $R U$. Then $P$ remains a permutation matrix, $L$ remains lower triangular, and $U$ is modified to obtain a non-zero diagonal element in the appropriate place.
## Projects
1. Write a program to multiply a matrix $A$ (not necessarily square) times a vector $\mathbf{x}$ to get an output vector $\mathbf{b}=A \mathbf{x}$.
## Problems
1. Check the formula for the inverse of a shear.
2. Show that a shear has determinant one.
3. Describe the geometric action of a shear in two dimensions. Why is it called a shear?
4. Consider a transformation of the form $M=I+\mathbf{z w}^{T}$, but do not assume that $\mathbf{w}^{T} \mathbf{z}=0$. When does this have an inverse? What is the formula for the inverse?
## 2.2 Reflections
Gaussian elimination with LU decomposition is not the only technique for solving equations. The QR method is also worth consideration.
The goal is to write an arbitrary matrix $A=Q R$, where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix. Recall that an orthogonal matrix is a matrix $Q$ with $Q^{T} Q=I$.
Thus to solve $A \mathbf{x}=\mathbf{b}$, one can take $\mathbf{y}=Q^{T} \mathbf{b}$ and solve $R \mathbf{x}=\mathbf{y}$.
We can define an inner product of vectors $\mathbf{x}$ and $\mathbf{y}$ by $\mathbf{x} \cdot \mathbf{y}=\mathbf{x}^{T} \mathbf{y}$. We say that $\mathbf{x}$ and $\mathbf{y}$ are perpendicular or orthogonal if $\mathbf{x} \cdot \mathbf{y}=0$.
The Euclidean length (or norm) of a vector $\mathbf{x}$ is $|\mathbf{x}|=\sqrt{\mathbf{x} \cdot \mathbf{x}}$. A unit vector $\mathbf{u}$ is a vector with length one: $|\mathbf{u}|=1$.
A reflection $P$ is a linear transformation of the form $P=I-2 \mathbf{u u}^{T}$, where $\mathbf{u}$ is a unit vector. The action of a reflection on a vector perpendicular to $\mathbf{u}$ is to leave it alone. However a vector $\mathbf{x}=c \mathbf{u}$ parallel to $\mathbf{u}$ is sent to its negative.
It is easy to check that a reflection is an orthogonal matrix. Furthermore, if $P$ is a reflection, then $P^{2}=I$, so $P$ is its own inverse.
Consider the problem of finding a reflection that sends a given non-zero vector a to a multiple of another given unit vector $\mathbf{b}$. Since a reflection preserves lengths, the other vector must be $\pm|\mathbf{a}| \mathbf{b}$.
Take $\mathbf{u}=c \mathbf{w}$, where $\mathbf{w}=\mathbf{a} \pm|\mathbf{a}| \mathbf{b}$, and where $c$ is chosen to make $\mathbf{u}$ a unit vector. Then $c^{2} \mathbf{w} \cdot \mathbf{w}=1$. It is easy to check that $\mathbf{w} \cdot \mathbf{w}=2 \mathbf{w} \cdot \mathbf{a}$. Furthermore,
$$
P \mathbf{a}=\mathbf{a}-2 c^{2}\left(\mathbf{w}^{T} \mathbf{a}\right) \mathbf{w}=\mathbf{a}-\mathbf{w}=\mp|\mathbf{a}| \mathbf{b} .
$$
Which sign should we choose? We clearly want $\mathbf{w} \cdot \mathbf{w}>0$, and to avoid having to choose a large value of $c$ we should take $\mathbf{w} \cdot \mathbf{w}$ as large as possible. However $\mathbf{w} \cdot \mathbf{w}=2 \mathbf{a} \cdot \mathbf{a} \pm 2|\mathbf{a}| \mathbf{b} \cdot \mathbf{a}$. So we may as well choose the sign so that $\pm \mathbf{b} \cdot \mathbf{a} \geq 0$.
Now the goal is to use successive reflections in such a way that $P_{n} \cdots P_{1} A=$ $R$. This gives the $A=Q R$ decomposition with $Q=P_{1} \cdots P_{n}$.
One simply proceeds through the columns of $A$. Fix the column $j$. Apply the reflection to send the vector of entries $a_{i j}$ for $j \leq i \leq n$ to a vector that is non-zero in the $j j$ place and zero in the $i j$ places for $j<i \leq n$.
We have assumed up to this point that our matrices were square matrices, so that there is some hope that the upper triangular matrix $R$ can be inverted. However we can also get a useful result for systems where there are more equations than unknowns. This corresponds to the case when $A$ is an $m$ by $n$ matrix with $m>n$. Take $\mathbf{b}$ to be an $m$ dimensional vector. The goal is to solve the least-squares problem of minimizing $|A \mathbf{x}-\mathbf{b}|$ as a function of the $n$ dimensional vector $\mathbf{x}$.
In that case we write $Q^{T} A=R$, where $Q$ is $m$ by $m$ and $R$ is $m$ by $n$. We cannot solve $A \mathbf{x}=\mathbf{b}$. However if we look at the difference $A \mathbf{x}-\mathbf{b}$ we see that
$$
|A \mathbf{x}-\mathbf{b}|=|R \mathbf{x}-\mathbf{y}|
$$
where $\mathbf{y}=Q^{T} \mathbf{b}$.
We can try to choose $\mathbf{x}$ to minimize this quantity. This can be done by solving an upper triangular system to make the first $n$ components of $R \mathbf{x}-\mathbf{y}$ equal to zero. (Nothing can be done with the other $m-n$ components, since $R \mathbf{x}$ automatically has these components equal to zero.)
The computer implementation of the QR algorithm is not much more complicated than that for the LU algorithm. The work is done by a triangulation procedure. It goes through the columns of the matrix a and finds the suitable unit vectors, which it stores in another matrix h. There is never any need to actually compute the orthogonal part of the decomposition, since the unit vectors for all of the reflections carry the same information.
```c
void triangle(matrix a, matrix h, int m, int n)
{
    int j;
    for (j = 1; j <= n; j = j + 1)
    {
        select(j, a, h, m);
        reflm(j, a, h, m, n);
    }
}
```
The select procedure does the calculation to determine the unit vector that is appropriate to the given column.
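A sketch of such a procedure, following the recipe $\mathbf{w}=\mathbf{a} \pm|\mathbf{a}| \mathbf{b}$ of the previous section with $\mathbf{b}$ the unit vector in the $j$ th coordinate and the sign matching that of $a_{j j}$; the unit vector $\mathbf{u}=\mathbf{w} /|\mathbf{w}|$ is stored in column $j$ of h. The vector and matrix typedefs are as in the LU sketches.

```c
#include <math.h>

/* Compute the unit vector for the reflection that clears column j of a
   below the diagonal, and store it in rows j through m of column j of h. */
void select(int j, matrix a, matrix h, int m)
{
    int i;
    float len = 0.0, c = 0.0;
    for (i = j; i <= m; i = i + 1)
        len = len + a[i][j] * a[i][j];
    len = sqrt(len);                    /* |a| for the partial column */
    for (i = j; i <= m; i = i + 1)
        h[i][j] = a[i][j];              /* w starts as the column itself */
    if (a[j][j] >= 0.0)
        h[j][j] = h[j][j] + len;        /* sign chosen so that w . w is large */
    else
        h[j][j] = h[j][j] - len;
    for (i = j; i <= m; i = i + 1)
        c = c + h[i][j] * h[i][j];
    c = sqrt(c);                        /* |w| */
    if (c > 0.0)
        for (i = j; i <= m; i = i + 1)
            h[i][j] = h[i][j] / c;      /* u = w / |w| */
}
```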
The reflect matrix procedure applies the reflection to the appropriate columns of the matrix.
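A sketch of the procedure that applies the reflection $P=I-2 \mathbf{u u}^{T}$, with $\mathbf{u}$ taken from column $j$ of h, to columns $j$ through $n$ of a:

```c
void reflm(int j, matrix a, matrix h, int m, int n)
{
    int i, k;
    float dot;
    for (k = j; k <= n; k = k + 1)
    {
        dot = 0.0;
        for (i = j; i <= m; i = i + 1)
            dot = dot + h[i][j] * a[i][k];          /* u . (column k) */
        for (i = j; i <= m; i = i + 1)
            a[i][k] = a[i][k] - 2.0 * dot * h[i][j];
    }
}
```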
In order to use this to solve an equation one must apply the same reflections to the right hand side of the equation. Finally, one must solve the resulting triangular system.
## Projects

1. Implement the QR algorithm for solving systems of equations and for solving least-squares problems.
2. Consider the 6 by 6 matrix with entries $a_{i j}=1 /\left(i+j^{2}\right)$. Use your QR program to find the first column of the inverse matrix.
3. Consider the problem of getting the best least-squares approximation of $1 /(1+x)$ by a linear combination of $1, x$, and $x^{2}$ at the points 1 , $2,3,4,5$, and 6 . Solve this 6 by 3 least squares problem using your program.
## Problems
1. Show that if $A$ and $B$ are invertible matrices, then $(A B)^{-1}=B^{-1} A^{-1}$.
2. Show that if $A$ and $B$ are matrices, then $(A B)^{T}=B^{T} A^{T}$.
3. Show that an orthogonal matrix preserves the inner product in the sense that $Q \mathbf{x} \cdot Q \mathbf{y}=\mathbf{x} \cdot \mathbf{y}$.
4. Show that an orthogonal matrix preserves length: $|Q \mathbf{x}|=|\mathbf{x}|$.
5. Show that the product of orthogonal matrices is an orthogonal matrix. Show that the inverse of an orthogonal matrix is an orthogonal matrix.
6. What are the possible values of the determinant of an orthogonal matrix? Justify your answer.
7. An orthogonal matrix with determinant one is a rotation. Show that the product of two reflections is a rotation.
8. How is the angle of rotation determined by the angle between the unit vectors determining the reflection?
## 2.3 Vectors and matrices in C
#### Pointers in C
## Pointer types
The variables of a certain type $T$ correspond to a linearly ordered set of pointer to $T$ values. In a computer implementation the pointer to $T$ values are realized as addresses of memory locations.
Each pointer value determines a unique variable of type $T$. In other words, pointer to $T$ values correspond to variables of type $T$. Thus there are whole new families of pointer types. Example: float $*$ is the type pointer to float.
Example: float $* *$ is the type pointer to pointer to float.
One can have variables whose values are pointers.
Example: Consider a variable $\mathrm{x}$ of type float. One can have a variable $\mathrm{p}$ of type pointer to float. One possible value of $\mathrm{p}$ would be a pointer to $\mathrm{x}$. In this case the corresponding variable is $\mathrm{x}$.
Example: float x, *p, **m ; declares the variables x, p, and m to be of types float, pointer to float, and pointer to pointer to float.
#### Pointer Expressions
## Indirection
The operator & takes a variable (or function) and returns its corresponding pointer. If the variable or function has type $T$, the result has type pointer to $T$.
The other direction is given by the indirection or dereferencing operator *. Applying * to an pointer value gives the variable (or function) corresponding to this value. This operator can only be applied to pointer types. If the value has type pointer to $T$, then the result has type $T$.
Example: Assume that $\mathrm{p}$ is a variable of type pointer to float and that its value is a pointer to $\mathrm{x}$. Then $* \mathrm{p}$ is the same variable as $\mathrm{x}$.
Example: Assume that $\mathrm{m}$ is a variable of type pointer to pointer to float. The expression $* \mathrm{~m}$ can be a variable whose values are pointer to float. The expression $* * \mathrm{~m}$ can be a variable whose values are float.
Example: If $\mathrm{f}$ is a function pointer variable with some function pointer value, then $* \mathrm{f}$ is a function corresponding to this value. The value of this function is the pointer value, so $(* f)(x)$ is the same as $f(x)$.
## Pointer arithmetic
Let $T$ be a type that is not a function type. For an integer $i$ and a pointer value $p$ we have another pointer value $p+i$. This is the pointer value associated with the ith variable of this type past the variable associated with the pointer value $\mathrm{p}$.
Incrementing the pointer to $T$ value by $i$ corresponds to incrementing the address by $i$ times the size of a $T$ value.
The fact that variables of type $T$ may correspond to a linearly ordered set of pointer to $T$ values makes C useful for models where a linear structure is important.
When a pointer value $p$ is incremented by the integer amount $i$, then $p+i$ is a new pointer value. We use $p[i]$ as a synonym for $*(p+i)$. This is the variable pointed to by $p+i$. Example: Assume that $\mathrm{v}$ has been declared float $* \mathrm{v}$. If we think of $\mathrm{v}$ as pointing to an entry of a vector, then $\mathrm{v}[\mathrm{i}]$ is the entry $i$ units above it.
Example: Assume that $\mathrm{m}$ has been declared float $* * \mathrm{~m}$. Think of $\mathrm{m}$ as pointing to a row pointer of a matrix, which in turn points to an entry of the matrix. Then $\mathrm{m}[\mathrm{i}]$ points to an entry in the row $i$ units above the original row in row index value. Furthermore $m[i][j]$ is the entry $j$ units above this entry in column index value.
## Function calls
In $\mathrm{C}$ function calls it is always a value that is passed. If one wants to give a function access to a variable, one must pass the value of the pointer corresponding to the variable.
Example: A procedure to fetch a number from input is defined with the heading void fetch(float *p). A call fetch(&x) copies the argument, which is the pointer value corresponding to x, onto the parameter, which is the pointer variable p. Then *p and x are the same float variable, so an assignment to *p can change the value of x.
Example: A procedure to multiply a scalar x times a vector given by w and put the result back in the same vector is defined with the heading void mult(float x, float *w). Then a call mult(a, v) copies the values of the arguments a and v onto the parameters x and w. Then v and w are pointers with the same value, and so v[i] and w[i] are the same float variables. Therefore an assignment statement w[i] = x * w[i] in the body of the procedure has the effect of changing the value of v[i].
## Memory allocation
There is a cleared memory allocation function named calloc that is very useful in working with pointers. It returns a pointer value corresponding to the first of a specified number of variables of a specified type.
The calloc function does not work with the actual type, but with the size of the type. In an implementation each data type (other than function) has a size. The size of a data type may be recovered by the sizeof ( ) operator. Thus sizeof ( float ) and sizeof ( float $*$ ) give numbers that represent the amount of memory needed to store a float and the amount of memory need to store a pointer to float.
The function call calloc(n, sizeof(float)) returns a pointer to void corresponding to the first of n possible float variables. The cast operator (float *) converts this to a pointer to float. If a pointer variable v has been declared with float *v ; then v = (float *) calloc(n, sizeof(float)) ; assigns this pointer to v. After this assignment it is legitimate to use the variable v[i] of type float, for i between 0 and n-1.
Example: One can also create space for a matrix in this way. The assignment statement m = (float **) calloc(m, sizeof(float *)) ; creates space for the row pointers and assigns the pointer to the first row pointer to m, while m[i] = (float *) calloc(n, sizeof(float)) ; creates space for a row and assigns the pointer to the first entry in the row to m[i]. After these assignments we have float variables m[i][j] available.
## Chapter 3
## Eigenvalues
## 3.1 Introduction
A square matrix can be analyzed in terms of its eigenvectors and eigenvalues. In this chapter we review this theory and approach the problem of numerically computing eigenvalues.
If $A$ is a square matrix, $\mathbf{x}$ is a vector not equal to the zero vector, and $\lambda$ is a number, then the equation
$$
A \mathbf{x}=\lambda \mathbf{x}
$$
says that $\lambda$ is an eigenvalue with eigenvector $\mathbf{x}$.
We can also identify the eigenvalues as the set of all $\lambda$ such that $\lambda I-A$ is not invertible.
We now begin an abbreviated review of the relevant theory. We begin with the theory of general bases and similarity. We then treat the theory of orthonormal bases and orthogonal similarity.
## 3.2 Similarity
If we have an $n$ by $n$ matrix $A$ and a basis consisting of $n$ linearly independent vectors, then we may form another matrix $S$ whose columns consist of the vectors in the basis. Let $\hat{A}$ be the matrix of $A$ in the new basis. Then $A S=S \hat{A}$. In other words, $\hat{A}=S^{-1} A S$ is similar to $A$.
Similar matrices tend to have similar geometric properties. They always have the same eigenvalues. They also have the same determinant and trace. (Similar matrices are not always identical in their geometrical properties; similarity can distort length and angle.) We would like to pick the basis to display the geometry. The way to do this is to use eigenvectors as basis vectors, whenever possible.
If the dimension $n$ of the space of vectors is odd, then a matrix always has at least one real eigenvalue. If the dimension is even, then there may be no real eigenvalues. (Example: a rotation in the plane.) Thus it is often helpful to allow complex eigenvalues and eigenvectors. In that case the typical matrix will have a basis of eigenvectors.
If we can take the basis vectors to be eigenvectors, then the matrix $\hat{A}$ in this new basis is diagonal.
There are exceptional cases where the eigenvectors do not form a basis. (Example: a shear.) Even in these exceptional cases there will always be a new basis in which the matrix is triangular. The eigenvalues will appear (perhaps repeated) along the diagonal of the triangular matrix, and the determinant and trace will be the product and sum of these eigenvalues.
We now want to look more closely at the situation when a matrix has a basis of eigenvectors.
We say that a collection of vectors is linearly dependent if one of the vectors can be expressed as a linear combination of the others. Otherwise the collection is said to be linearly independent.
Proposition 3.2.1 If $\mathbf{x}_{i}$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda_{i}$, then the $\mathbf{x}_{i}$ are linearly independent.
Proof: The proof is by induction on $k$, the number of vectors. The result is obvious when $k=1$. Assume it is true for $k-1$. Consider the case of $k$ vectors. We must show that it is impossible to express one eigenvector as a linear combination of the others. Otherwise we would have $\mathbf{x}_{j}=\sum_{i \neq j} c_{i} \mathbf{x}_{i}$ for some $j$. If we apply $A-\lambda_{j} I$ to this equation, we obtain $0=\sum_{i \neq j} c_{i}\left(\lambda_{i}-\lambda_{j}\right) \mathbf{x}_{i}$. If $c_{i} \neq 0$ for some $i \neq j$, then we could solve for $\mathbf{x}_{i}$ in terms of the other $k-2$ vectors. This would contradict the result for $k-1$ vectors. Therefore $c_{i}=0$ for all $i \neq j$. Thus $\mathbf{x}_{j}=0$, which is not allowed.
If we have $n$ independent eigenvectors, then we can put the eigenvectors as columns of a matrix $X$. Let $\Lambda$ be the diagonal matrix whose entries are the corresponding eigenvalues. Then we may express the eigenvalue equation as
$$
A X=X \Lambda .
$$
Since $X$ is an invertible matrix, we may write this equation as
$$
X^{-1} A X=\Lambda
$$
This says that $A$ is similar to a diagonal matrix.

Theorem 3.2.1 Consider an $n$ by $n$ matrix with $n$ distinct (possibly complex) eigenvalues $\lambda_{i}$. Then the corresponding (possibly complex) eigenvectors $\mathbf{x}_{i}$ form a basis. The matrix is thus similar to a diagonal matrix.
Let $Y$ be a matrix with column vectors $\mathbf{y}_{i}$ determined in such a way that $Y^{T}=X^{-1}$. Then $Y^{T} A X=\Lambda$ and so $A=X \Lambda Y^{T}$. This leads to the following spectral representation.
Let $\mathbf{y}_{i}$ be the dual basis defined by $\mathbf{y}_{i}^{T} \mathbf{x}_{j}=\delta_{i j}$. Then we may represent
$$
A=\sum_{i} \lambda_{i} \mathbf{x}_{i} \mathbf{y}_{i}^{T}
$$
It is worth thinking a bit more about the meaning of the complex eigenvalues. It is clear that if $A$ is a real matrix, then the eigenvalues that are not real occur in complex conjugate pairs. The reason is simply that the complex conjugate of the equation $A \mathbf{x}=\lambda \mathbf{x}$ is $A \overline{\mathbf{x}}=\bar{\lambda} \overline{\mathbf{x}}$. If $\lambda$ is not real, then we have a pair $\lambda \neq \bar{\lambda}$ of complex conjugate eigenvalues.
We may write $\lambda=a+i b$ and $\mathbf{x}=\mathbf{u}+i \mathbf{v}$. Then the equation $A \mathbf{x}=\lambda \mathbf{x}$ becomes the two real equations $A \mathbf{u}=a \mathbf{u}-b \mathbf{v}$ and $A \mathbf{v}=b \mathbf{u}+a \mathbf{v}$. The vectors $\mathbf{u}$ and $\mathbf{v}$ are no longer eigenvectors, but they can be used as part of a real basis. In this case instead of two complex conjugate diagonal entries one obtains a two by two matrix that is a multiple of a rotation matrix.
Thus geometrically a typical real matrix is constructed from stretches, shrinks, and reversals (from the real eigenvalues) and from stretches, shrinks, and rotations (from the conjugate pair non-real eigenvalues).
## Problems
1. Find the eigenvalues of the 2 by 2 matrix whose first row is $0,-3$ and whose second row is $-1,2$. Find eigenvectors. Find the similarity transformation and show that it takes the matrix to diagonal form.
2. Find the spectral representation for the matrix of the previous problem.
3. Consider a rotation by angle $\theta$ in the plane. Find its eigenvalues and eigenvectors.
4. Give an example of two matrices with the same eigenvalues that are not similar.
5. Show how to express the function $\operatorname{tr}(z I-A)^{-1}$ of complex $z$ in terms of the numbers $\operatorname{tr} A^{n}, n=1,2,3, \ldots$
6. Show how to express the eigenvalues of $A$ in terms of $\operatorname{tr}(z I-A)^{-1}$.
7. Let $\mathbf{z} \mathbf{w}^{T}$ and $\mathbf{z}^{\prime} \mathbf{w}^{\prime T}$ be two one-dimensional projections. When is their product zero? If the product is zero in one order, must it be zero in the other order?
8. Show that for arbitrary square matrices $\operatorname{tr} A B=\operatorname{tr} B A$.
9. Show that $\operatorname{tr}(A B)^{n}=\operatorname{tr}(B A)^{n}$.
10. Show that if $B$ is non-singular, then $A B$ and $B A$ are similar.
11. Show that $A B$ and $B A$ always have the same eigenvalues, even if both of them are singular.
12. Give an example of square matrices $A$ and $B$ such that $A B$ is not similar to $B A$.
## 3.3 Orthogonal similarity
#### Symmetric matrices
If we have an $n$ by $n$ matrix $A$ and an orthonormal basis consisting of $n$ orthogonal unit vectors, then as before we may form another matrix $Q$ whose columns consist of the vectors in the basis. Let $\hat{A}$ be the matrix of $A$ in the new basis. Then again $\hat{A}=Q^{-1} A Q$ is similar to $A$. However in this special situation $Q$ is orthogonal, that is, $Q^{-1}=Q^{T}$. In this case we say that $\hat{A}$ is orthogonal similar to $A$.
The best of worlds is the case of a symmetric real matrix $A$.
Theorem 3.3.1 For a symmetric real matrix A the eigenvalues are all real, and there is always a basis of eigenvectors. Furthermore, these eigenvectors may be taken to form an orthonormal basis. With this choice the matrix $Q$ is orthogonal, and $\hat{A}=Q^{-1} A Q$ is diagonal.
#### Singular values
It will be useful to have the observation that for a real matrix $A$ the matrix $A^{T} A$ is always a symmetric real matrix. It is easy to see that it must have positive eigenvalues $\sigma_{i}^{2} \geq 0$. Consider the positive square roots $\sigma_{i} \geq 0$. These are called the singular values of the original matrix $A$. It is not difficult to see that two matrices that are orthogonally equivalent have the same singular values.
We may define the positive square root $\sqrt{A^{T} A}$ as the matrix with the same eigenvectors as $A^{T} A$ but with eigenvalues $\sigma_{i}$. We may think of $\sqrt{A^{T} A}$ as a matrix that is in some sense the absolute value of $A$. Of course one could also look at $A A^{T}$ and its square root, and this would be different in general. We shall see, however, that these matrices are always orthogonal similar, so in particular the eigenvalues are the same.
To this end, we use the following polar decomposition.
Proposition 3.3.1 Let $A$ be a real square matrix. Then $A=Q \sqrt{A^{T} A}$, where $Q$ is orthogonal.
This amounts to writing the matrix as the product of a part that has absolute value one with a part that represents its absolute value. Of course here the absolute value one part is an orthogonal matrix and the absolute value part is a symmetric matrix.
Here is how this can be done. We can decompose the space into the orthogonal sum of the range of $A^{T}$ and the nullspace of $A$. This is the same as the orthogonal sum of the range of $\sqrt{A^{T} A}$ and the nullspace of $\sqrt{A^{T} A}$. The range of $A^{T}$ is the part where the absolute value is nonzero. On this part the unit size part is determined; we must define $Q$ on $\mathbf{x}=\sqrt{A^{T} A} \mathbf{y}$ in the range in such a way as to have $Q \mathbf{x}=Q \sqrt{A^{T} A} \mathbf{y}=A \mathbf{y}$. Then $|Q \mathbf{x}|=|A \mathbf{y}|=|\mathbf{x}|$, so $Q$ sends the range of $A^{T}$ to the range of $A$ and preserves lengths on this part of the space. However on the nullspace of $A$ the unit size part is arbitrary. But we can also decompose the space into the orthogonal sum of the range of $A$ and the nullspace of $A^{T}$. Since the nullspaces of $A$ and $A^{T}$ have the same dimension, we can define $Q$ on the nullspace of $A$ to be an arbitrary orthogonal transformation that takes it to the nullspace of $A^{T}$.
We see from $A=Q \sqrt{A^{T} A}$ that $A A^{T}=Q A^{T} A Q^{T}$. Thus $A A^{T}$ is similar to $A^{T} A$ by the orthogonal matrix $Q$. The two possible notions of absolute value are geometrically equivalent, and the two possible notions of singular value coincide.
#### The Schur decomposition
We now consider a real matrix $A$ that has only real eigenvalues. Then this matrix is similar to an upper triangular matrix, that is, $A X=X \hat{A}$, where $\hat{A}$ is upper triangular.
In the general situation the vectors $\mathbf{x}_{i}$ that constitute the columns of $X$ may not be orthogonal. However we may produce a family $\mathbf{q}_{i}$ of orthogonal vectors, each of norm one, such that for each $k$ the subspace spanned by $\mathbf{x}_{1}, \ldots, \mathbf{x}_{k}$ is the same as the subspace spanned by $\mathbf{q}_{1}, \ldots, \mathbf{q}_{k}$.
Let $Q$ be the matrix with columns formed by the vectors $\mathbf{q}_{i}$. This condition may be expressed by
$$
X_{i k}=\sum_{j \leq k} R_{j k} Q_{i j} .
$$
In other words, $X=Q R$, where $Q$ is orthogonal and $R$ is upper triangular.
From this equation we may conclude that $R^{-1} Q^{-1} A Q R=\hat{A}$, or
$$
Q^{-1} A Q=U,
$$
where $Q$ is orthogonal and $U=R \hat{A} R^{-1}$ is upper triangular. This is called the Schur decomposition.
Theorem 3.3.2 Let $A$ be a real matrix with only real eigenvalues. Then $A$ is orthogonal similar to an upper triangular matrix $U$.
The geometrical significance of the Schur decomposition may be seen as follows. Let $V_{r}$ be the subspace spanned by column vectors that are nonzero only in their first $r$ components. Then we have $A Q V_{r}=Q U V_{r}$. Since $U V_{r}$ is contained in $V_{r}$, it follows that $Q V_{r}$ is an $r$-dimensional subspace invariant under the matrix $A$ that is spanned by the first $r$ column vectors of $Q$.
## Problems
1. Consider the symmetric 2 by 2 matrix whose first row is 2,1 and whose second row is 1, 2. Find its eigenvalues. Find the orthogonal similarity that makes it diagonal. Check that it works.
2. Find the spectral decomposition in this case.
3. Find the eigenvalues of the symmetric 3 by 3 matrix whose first row is $2,1,0$ and whose second row is $1,3,1$ and whose third row is 0 , 1, 4. (Hint: One eigenvalue is an integer.) Find the eigenvectors and check orthogonality.
4. Find the singular values of the matrix whose first row is $0,-3$ and whose second row is $-1,2$.
5. Find a Schur decomposition of the matrix in the preceding problem.
6. Give an example of two matrices that are similar by an invertible matrix, but cannot be made similar by an orthogonal matrix.
7. Show that an arbitrary $A$ may be written $A=Q_{1} D Q_{2}$, where $D$ is a diagonal matrix with positive entries and $Q_{1}$ and $Q_{2}$ are orthogonal matrices.
## 3.4 Vector and matrix norms
#### Vector norms
We shall use three vector norms. The first is the 1-norm
$$
|\mathbf{x}|_{1}=\sum_{i=1}^{n}\left|x_{i}\right|
$$
The second is the 2-norm
$$
|\mathbf{x}|_{2}=\sqrt{\sum_{i=1}^{n}\left|x_{i}\right|^{2}} .
$$
The final one is the $\infty$-norm
$$
|\mathbf{x}|_{\infty}=\max _{1 \leq i \leq n}\left|x_{i}\right| .
$$
They are related by the inequalities
$$
|\mathbf{x}|_{\infty} \leq|\mathbf{x}|_{2} \leq|\mathbf{x}|_{1} \leq n|\mathbf{x}|_{\infty}
$$
#### Associated matrix norms
There are three matrix norms associated with the three vector norms. These are given for $p=1,2, \infty$ by
$$
\|A\|_{p}=\min \left\{M : |A \mathbf{x}|_{p} \leq M|\mathbf{x}|_{p} \text { for all } \mathbf{x}\right\} .
$$
Here are the explicit forms. The 1-norm is easy to compute.
$$
\|A\|_{1}=\max _{1 \leq j \leq n} \sum_{i=1}^{m}\left|a_{i j}\right| .
$$
The 2-norm is the difficult one.
$$
\|A\|_{2}=\max _{1 \leq i \leq n} \sigma_{i}=\sigma_{\max },
$$
where $\sigma_{i} \geq 0$ are the singular values of $A$.
The $\infty$-norm is just as easy as the 1 -norm.
$$
\|A\|_{\infty}=\max _{1 \leq i \leq m} \sum_{j=1}^{n}\left|a_{i j}\right| .
$$
The $\infty$ and 1 norms are related by $\|A\|_{\infty}=\left\|A^{T}\right\|_{1}$. For the 2 -norm we have the important relation $\|A\|_{2}=\left\|A^{T}\right\|_{2}$.
There is a very useful interpolation bound relating the 2-norm to the other norms.
## Proposition 3.4.1
$$
\|A\|_{2} \leq \sqrt{\|A\|_{1}\|A\|_{\infty}}
$$
#### Singular value norms
It is sometime useful to define other norms in terms of singular values. Here are three such norms defined in terms of the singular values $\sigma_{i} \geq 0$ of $A$ (where the $\sigma_{i}^{2}$ are eigenvalues of $A^{T} A$.) We distinguish these from the previous definitions by the use of a square bracket. (This is not standard notation.)
The first is the trace norm
$$
\|A\|_{[1]}=\operatorname{tr}\left(\sqrt{A^{T} A}\right)=\sum_{i=1}^{n} \sigma_{i}
$$
This is difficult to compute, because of the square root.
The second is the Hilbert-Schmidt norm
$$
\|A\|_{[2]}=\sqrt{\operatorname{tr}\left(A^{T} A\right)}=\sqrt{\sum_{i=1}^{n} \sigma_{i}^{2}} .
$$
This one is easy to compute.
The final one is the uniform norm
$$
\|A\|_{[\infty]}=\max _{1 \leq i \leq n} \sigma_{i}
$$
This again is difficult to compute.
They are related by the inequalities
$$
\|A\|_{[\infty]} \leq\|A\|_{[2]} \leq\|A\|_{[1]} \leq n\|A\|_{[\infty]} .
$$
Why are these norms useful? Maybe the main reason is that $\|A\|_{[\infty]}=\|A\|_{2}$, and so
$$
\|A\|_{2} \leq\|A\|_{[2]} .
$$
This gives a useful upper bound that complements the interpolation bound.
#### Eigenvalues and norms
From now on we deal with one of the norms $\|A\|_{p}$ and denote it by $\|A\|$. The fundamental relation between norms and eigenvalues is that every eigenvalue $\lambda$ of $A$ satisfies $|\lambda| \leq\|A\|$. This is an equality for symmetric matrices. However in general it is not such an accurate result. The following is often a much better bound.
Theorem 3.4.1 Every eigenvalue $\lambda$ of A satisfies the inequality
$$
|\lambda| \leq\left\|A^{n}\right\|^{\frac{1}{n}}
$$
for every $n=1,2,3, \ldots$
#### Condition number
Let $A$ be an invertible square matrix. Consider one of the $p$-norms $\|A\|_{p}$, where $p$ is 1,2 , or $\infty$. In this section we shall abbreviate this as $\|A\|$. We are most interested in the case $p=2$. Unfortunately, this is the case when it is most difficult to compute the norm.
We want to measure how far $A$ is from being invertible. The standard measure is
$$
\operatorname{cond}(A)=\|A\|\left\|A^{-1}\right\| \text {. }
$$
When this is not too much larger than one, then the matrix is well-conditioned, in the sense that calculations with it are not too sensitive to perturbations (small errors). (When the number is very large, then the matrix may be ill-conditioned, that is, extremely sensitive to perturbations.)
In the case of the 2-norm this condition number has a simple interpretation. Let $\sigma_{i}^{2}$ be the eigenvalues of $A^{T} A$. Then
$$
\operatorname{cond}(A)=\frac{\sigma_{\max }}{\sigma_{\min }}
$$
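As a quick illustration (the matrix below is only an example, not part of the text), the 2-norm condition number can be computed from the singular values; it agrees with $\|A\|_{2}\left\|A^{-1}\right\|_{2}$.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])                  # illustrative invertible matrix

sigma = np.linalg.svd(A, compute_uv=False)
cond_2 = sigma.max() / sigma.min()          # sigma_max / sigma_min

cond_direct = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(cond_2, cond_direct)                  # the two values agree up to roundoff
```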
## Problems
1. Evaluate each of the six matrix norms for the two-by-two matrix whose first row is $0,-3$ and whose second row is $-1,2$.
2. In the preceding problem, check the interpolation bound.
3. In the preceding problem, check the Hilbert-Schmidt bound.
4. In the preceding problem, check the bound on the eigenvalues for $n=1,2,3$ and for each of the three $p$ norms.
5. Give an example of a matrix $A$ for which the eigenvalue $\lambda$ of largest absolute value satisfies $|\lambda|<\|A\|$ but $|\lambda|=\left\|A^{n}\right\|^{1 / n}$ for some $n$.
6. Prove the assertions about the concrete forms of the $p$-norms $\|A\|_{p}$, for $p=1,2, \infty$.
7. Prove that the 2-norm of a matrix is the 2-norm of its transpose.
### Stability
#### Inverses
We next look at the stability of the inverse under perturbation. The fundamental result is the following.
Proposition 3.5.1 Assume that the matrix $A$ has an inverse $A^{-1}$. Let $E$ be another matrix. Assume that $E$ is small relative to $A$ in the sense that $\|E\|<1 /\left\|A^{-1}\right\|$. Let $\hat{A}=A-E$. Then $\hat{A}$ has an inverse $\hat{A}^{-1}$, and
$$
A^{-1}-\hat{A}^{-1}=-A^{-1} E \hat{A}^{-1} .
$$
Proof: Assume that $(A-E) \mathbf{x}=0$. Then $\mathbf{x}=A^{-1} A \mathbf{x}=A^{-1} E \mathbf{x}$. Hence $|\mathbf{x}| \leq\left\|A^{-1}\right\|\|E\||\mathbf{x}|$. Since $\left\|A^{-1}\right\|\|E\|<1$, this forces $|\mathbf{x}|=0$, so $\mathbf{x}$ is the zero vector. This proves that $\hat{A}=A-E$ is invertible. The identity relating $A^{-1}$ and $\hat{A}^{-1}$ follows by algebraic manipulation.
We may write the hypothesis of the theorem in terms of the relative size of the perturbation as $\|E\| /\|A\|<1 / \operatorname{cond}(A)$. Thus for an ill-conditioned matrix, one can only take very small relative perturbations.
Furthermore, we may deduce that
$$
\left\|A^{-1}-\hat{A}^{-1}\right\| \leq\left\|A^{-1}\right\|\|E\|\left\|\hat{A}^{-1}\right\|,
$$
which says that
$$
\left\|A^{-1}-\hat{A}^{-1}\right\| /\left\|\hat{A}^{-1}\right\| \leq \operatorname{cond}(A)\|E\| /\|A\| .
$$
Relative changes in matrices are controlled by condition numbers.
#### Iteration
Sometimes one wants to solve the equation $A \mathbf{x}=\mathbf{b}$ by iteration. A natural choice of fixed point function is
$$
\mathbf{g}(\mathbf{x})=\mathbf{x}+C(\mathbf{b}-A \mathbf{x}) .
$$
Here $C$ can be an arbitrary non-singular matrix, and any fixed point will be a solution. However, for convergence we would like $C$ to be a reasonable approximation to $A^{-1}$.
When this is satisfied we may write
$$
\mathbf{g}(\mathbf{x})-\mathbf{g}(\mathbf{y})=(I-C A)(\mathbf{x}-\mathbf{y})=\left(A^{-1}-C\right) A(\mathbf{x}-\mathbf{y}) .
$$
Then if $\left\|A^{-1}-C\right\|\|A\|<1$, the iteration function is guaranteed to shrink the iterates together to a fixed point.
If we write the above condition in terms of relative error, it becomes $\left\|A^{-1}-C\right\| /\left\|A^{-1}\right\|<1 / \operatorname{cond}(A)$. Again we see that for an ill-conditioned matrix one must make a good guess of the inverse.
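Here is a minimal sketch of this iteration in Python. The Jacobi-style choice $C=D^{-1}$ (the inverse of the diagonal part of $A$) is just one convenient way of approximating $A^{-1}$; the particular matrix and right-hand side are only illustrations.

```python
import numpy as np

def iterate_linear(A, b, C, x0, steps=50):
    """Fixed-point iteration x <- x + C(b - Ax); contracts when ||I - CA|| < 1."""
    x = x0.astype(float)
    for _ in range(steps):
        x = x + C @ (b - A @ x)
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
C = np.diag(1.0 / np.diag(A))          # crude approximation to A^{-1}
x = iterate_linear(A, b, C, np.zeros(2))
print(x, np.linalg.solve(A, b))        # the two answers should agree closely
```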
#### Eigenvalue location
Let $A$ be a square matrix, and let $D$ be the diagonal matrix with the same entries as the diagonal entries of $A$. If all these entries are non-zero, then $D$ is invertible. We would like to conclude that $A$ is invertible.
Write $A=D D^{-1} A$. The matrix $D^{-1} A$ has matrix entries $a_{i j} / a_{i i}$ so it has ones on the diagonal. Thus we may treat it as a perturbation of the identity matrix. Thus we may write $D^{-1} A=I-\left(I-D^{-1} A\right)$, where $I-D^{-1} A$ has zeros on the diagonal and entries $-a_{i j} / a_{i i}$ elsewhere. We know from our perturbation lemma that if $I-D^{-1} A$ has norm strictly less than one, then $D^{-1} A$ is invertible, and so $A$ is invertible.
The norm that is most convenient to use is the $\infty$ norm. The condition for $I-D^{-1} A$ to have $\infty$ norm strictly less than one is that $\max _{i} \sum_{j \neq i} \frac{\left|a_{i j}\right|}{\left|a_{i i}\right|}<1$. We have proved the following result on diagonal dominance.
Proposition 3.5.2 If a matrix A satisfies for each $i$
$$
\sum_{j \neq i}\left|a_{i j}\right|<\left|a_{i i}\right|,
$$
then $A$ is invertible.
Let $B$ be an arbitrary matrix and let $\lambda$ be a number. Apply this result to the matrix $\lambda I-B$. Then $\lambda$ is an eigenvalue of $B$ precisely when $\lambda I-B$ is not invertible. This gives the following conclusion.
Corollary 3.5.1 If $\lambda$ is an eigenvalue of $B$, then for some $i$ the eigenvalue $\lambda$ satisfies
$$
\left|\lambda-b_{i i}\right| \leq \sum_{j \neq i}\left|b_{i j}\right| .
$$
The disks $\left|\lambda-b_{i i}\right| \leq \sum_{j \neq i}\left|b_{i j}\right|$ in the complex plane are known as Gershgorin's disks.
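The disks are easy to compute. Here is a minimal NumPy sketch; the helper name and the use of the matrix from problem 3 below are only illustrative.

```python
import numpy as np

def gershgorin_disks(B):
    """Return (center, radius) pairs: center b_ii, radius sum over j != i of |b_ij|."""
    B = np.asarray(B, dtype=float)
    radii = np.abs(B).sum(axis=1) - np.abs(np.diag(B))
    return list(zip(np.diag(B), radii))

B = np.array([[ 1.0, 2.0, -1.0],
              [ 2.0, 7.0,  0.0],
              [-1.0, 0.0, -5.0]])      # the matrix from problem 3 below
for center, radius in gershgorin_disks(B):
    print("disk: center", center, "radius", radius)
print("eigenvalues:", np.linalg.eigvals(B))   # each eigenvalue lies in some disk
```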
## Problems
1. Assume $A \mathbf{x}=\mathbf{b}$. Assume that there is a computed solution $\hat{\mathbf{x}}=\mathbf{x}-\mathbf{e}$, where $\mathbf{e}$ is an error vector. Let $A \hat{\mathbf{x}}=\hat{\mathbf{b}}$, and define the residual vector $\mathbf{r}$ by $\hat{\mathbf{b}}=\mathbf{b}-\mathbf{r}$. Show that $|\mathbf{e}| /|\mathbf{x}| \leq \operatorname{cond}(A)|\mathbf{r}| /|\mathbf{b}|$.
2. Assume $A \mathbf{x}=\mathbf{b}$. Assume that there is an error in the matrix, so that the matrix used for the computation is $\hat{A}=A-E$. Take the computed solution as $\hat{\mathbf{x}}$ defined by $\hat{A} \hat{\mathbf{x}}=\mathbf{b}$, and let $\mathbf{e}=\mathbf{x}-\hat{\mathbf{x}}$. Show that $|\mathbf{e}| /|\hat{\mathbf{x}}| \leq \operatorname{cond}(A)\|E\| /\|A\|$.
3. Find the Gershgorin disks for the three-by-three matrix whose first row is $1,2,-1$, whose second row is $2,7,0$, and whose third row is $-1,0,-5$.
### Power method
We turn to the computational problem of finding eigenvalues of the square matrix $A$. We assume that $A$ has distinct real eigenvalues. The power method is a method of computing the dominant eigenvalue (the eigenvalue with largest absolute value).
The method is to take a more or less arbitrary starting vector $\mathbf{u}$ and compute $A^{k} \mathbf{u}$ for large $k$. The result should be approximately the eigenvector corresponding to the dominant eigenvalue.
Why does this work? Let us assume that there is a dominant eigenvalue and call it $\lambda_{1}$. Let $\mathbf{u}$ be a non-zero vector. Expand $\mathbf{u}=\sum_{i} c_{i} \mathbf{x}_{i}$ in the eigenvectors of $A$. Assume that $c_{1} \neq 0$. Then
$$
A^{k} \mathbf{u}=\sum_{i} c_{i} \lambda_{i}^{k} \mathbf{x}_{i}
$$
When $k$ is large, the term $c_{1} \lambda_{1}^{k} \mathbf{x}_{1}$ is so much larger than the other terms that $A^{k} \mathbf{u}$ is a good approximation to a multiple of $\mathbf{x}_{1}$.
[We can write this another way in terms of the spectral representation. Let $\mathbf{u}$ be a non-zero vector such that $\mathbf{y}_{1}^{T} \mathbf{u} \neq 0$. Then
$$
A^{k} \mathbf{u}=\lambda_{1}^{k} \mathbf{x}_{1} \mathbf{y}_{1}^{T} \mathbf{u}+\sum_{i \neq 1} \lambda_{i}^{k} \mathbf{x}_{i} \mathbf{y}_{i}^{T} \mathbf{u} .
$$
When $k$ is large the first term will be much larger than the other terms. Therefore $A^{k} \mathbf{u}$ will be approximately $\lambda_{1}^{k}$ times a multiple of the eigenvector $\mathbf{x}_{1}$.]
In practice we take $\mathbf{u}$ to be some convenient vector, such as the first coordinate basis vector, and we just hope that the condition is satisfied. We compute $A^{k} \mathbf{u}$ by successive multiplication of the matrix $A$ times the previous vector. In order to extract the eigenvalue we can compute the result for $k+1$ and for $k$ and divide the vectors component by component. Each quotient should be close to $\lambda_{1}$.
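A minimal Python sketch of the power method follows. The iterate is normalized at each step to avoid overflow, and the eigenvalue is estimated with the quotient $\mathbf{u}^{T} A \mathbf{u}$ rather than the componentwise ratio described above; the test matrix is the one from the problems below.

```python
import numpy as np

def power_method(A, u0, steps=50):
    """Repeatedly apply A, normalizing each time; return an approximate
    dominant eigenvalue and eigenvector."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(steps):
        v = A @ u
        lam = u @ v                    # eigenvalue estimate u^T A u (u is a unit vector)
        u = v / np.linalg.norm(v)
    return lam, u

A = np.array([[0.0, -3.0],
              [-1.0, 2.0]])            # matrix from the problems below
lam, x = power_method(A, np.array([1.0, 0.0]))
print(lam, x)                          # the dominant eigenvalue here is 3
```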
## Problems
1. Take the matrix whose rows are $0,-3$ and $-1,2$. Apply the matrix four times to the starting vector. How close is this to an eigenvector?
2. Consider the power method for finding eigenvalues of a real matrix. Describe what happens when the matrix is symmetric and the eigenvalue of largest absolute value has multiplicity two.
3. Also describe what happens when the matrix is not symmetric and the eigenvalues of largest absolute value are a complex conjugate pair.
### Inverse power method
The inverse power method is just the power method applied to the matrix $(A-\mu I)^{-1}$. We choose $\mu$ as an intelligent guess for a number that is near but not equal to an eigenvalue $\lambda_{j}$. The matrix has eigenvalues $\left(\lambda_{i}-\mu\right)^{-1}$. If $\mu$ is closer to $\lambda_{j}$ than to any other $\lambda_{i}$, then the dominant eigenvalue of $(A-\mu I)^{-1}$ will be $\left(\lambda_{j}-\mu\right)^{-1}$. Thus we can calculate $\left(\lambda_{j}-\mu\right)^{-1}$ by the power method. From this we can calculate $\lambda_{j}$.
The inverse power method can be used to search for all the eigenvalues of $A$. At first it might appear that it is computationally expensive, but in fact all that one has to do is to compute an LU or QR decomposition of $A-\mu I$. Then it is easy to do a calculation in which we start with an arbitrary vector $\mathbf{u}$ and at each stage replace the vector $\mathbf{v}$ obtained at that stage with the result of solving $(A-\mu I) \mathbf{x}=\mathbf{v}$ for $\mathbf{x}$ using this decomposition.
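Here is a minimal sketch of the inverse power method; it assumes SciPy is available for the LU factorization, which is computed once and reused at every step. The shift, starting vector, and test matrix are only illustrations.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power(A, mu, u0, steps=50):
    """Power method applied to (A - mu I)^{-1}, reusing one LU factorization."""
    n = A.shape[0]
    lu_piv = lu_factor(A - mu * np.eye(n))
    u = u0 / np.linalg.norm(u0)
    for _ in range(steps):
        v = lu_solve(lu_piv, u)        # solve (A - mu I) v = u
        u = v / np.linalg.norm(v)
    return u @ (A @ u), u              # recover the eigenvalue of A itself

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, x = inverse_power(A, mu=1.0, u0=np.array([1.0, 0.0]))
print(lam, np.linalg.eigvals(A))       # lam is the eigenvalue of A closest to mu
```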
## Projects
1. Write a program to find the dominant eigenvalue of a matrix by the inverse power method.
2. Find the eigenvalues of the symmetric matrix with rows $16,4,1,1$ and $4,9,1,1$ and $1,1,4,1$ and $1,1,1,1$.
3. Change the first 1 in the last row to a 2 , and find the eigenvalues of the resulting non-symmetric matrix.
### Power method for subspaces
The power method for subspaces is very simple. One computes $A^{k}$ for large $k$. Then one performs a decomposition $A^{k}=Q R$. Finally one computes $Q^{-1} A Q$. Miracle: the result is upper triangular with the eigenvalues on the diagonal!
Here is why this works. Take $\mathbf{e}_{1}, \ldots, \mathbf{e}_{r}$ to be the first $r$ unit basis vectors. Then $\mathbf{e}_{i}=\sum_{j} c_{i j} \mathbf{x}_{j}$, where the $\mathbf{x}_{j}$ are the eigenvectors of $A$ corresponding to the eigenvalues ordered in decreasing absolute value. Thus for the powers we have
$$
A^{k} \mathbf{e}_{i}=\sum_{j} c_{i j} \lambda_{j}^{k} \mathbf{x}_{j}
$$
To a good approximation, the first $r$ terms of this sum are much larger than the remaining terms. Thus to a good approximation the $A^{k} \mathbf{e}_{i}$ for $1 \leq i \leq r$ are just linear combinations of the first $r$ eigenvectors.
We may replace the $A^{k} \mathbf{e}_{i}$ by linear combinations that are orthonormal. This is what is accomplished by the QR decomposition. The first $r$ columns of $Q$ are an orthonormal basis consisting of linear combinations of the $A^{k} \mathbf{e}_{i}$ for $1 \leq i \leq r$.
It follows that the first $r$ columns of $Q$ are approximately linear combinations of the first $r$ eigenvectors. If this were exact, then $Q^{-1} A Q$ would be the exact Schur decomposition. However in any case it should be a good approximation.
[This can be considered in terms of subspaces as an attempt to apply the power method to find the subspace spanned by the first $r$ eigenvectors, for each $r$. The idea is the following. Let $V_{r}$ be a subspace of dimension $r$ chosen in some convenient way. Then, in the typical situation, the first $r$ eigenvectors will have components in $V_{r}$. It follows that for large $k$ the matrix $A^{k}$ applied to $V_{r}$ should be approximately the subspace spanned by the first $r$ eigenvectors.
However we may compute the subspace given by $A^{k}$ applied to $V_{r}$ by using the $Q R$ decomposition. Let
$$
A^{k}=\tilde{Q}_{k} \tilde{R}_{k}
$$
be the $Q R$ decomposition of $A^{k}$. Let $V_{r}$ be the subspace of column vectors which are non-zero only in their first $r$ components. Then $\tilde{R}_{k}$ leaves $V_{r}$ invariant. Thus the image of this $V_{r}$ by $\tilde{Q}_{k}$ is the desired subspace.
We expect from this that $\tilde{Q}_{k}$ is fairly close to mapping the space $V_{r}$ into the span of the first $r$ eigenvectors. In other words, if we define $U_{k+1}$ by
$$
U_{k+1}=\tilde{Q}_{k}^{-1} A \tilde{Q}_{k}
$$
then this is an approximation to a Schur decomposition. Thus one should be able to read off all the eigenvalues from the diagonal.]
This method is certainly simple. One simply calculates a large power of $A$ and finds the QR decomposition of the result. The resulting orthogonal matrix gives the Schur decomposition of the original $A$, and hence the eigenvalues.
What is wrong with this? The obvious problem is that $A^{k}$ is an ill-conditioned matrix for large $k$, and so computing the $Q R$ decomposition is numerically unstable. Still, the idea is appealing in its simplicity.
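Despite the warning, the naive method is worth trying on a small example. The following sketch (the symmetric test matrix and the power $k=15$ are arbitrary choices) computes $A^{k}$, takes its QR decomposition, and forms $Q^{-1} A Q$; the diagonal approximates the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])                    # illustrative symmetric matrix

k = 15
Q, _ = np.linalg.qr(np.linalg.matrix_power(A, k))  # QR of a moderate power of A
U = Q.T @ A @ Q                                    # approximately upper triangular
print(np.round(U, 4))
print("eigenvalues:", np.sort(np.linalg.eigvals(A))[::-1])
```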
## Problems
1. Take the matrix whose rows are $0,-3$ and $-1,2$. Take the eigenvector corresponding to the largest eigenvalue. Find an orthogonal vector and form an orthogonal basis with these two vectors. Use the matrix with this basis to perform a similarity transformation of the original matrix. How close is the result to an upper triangular matrix?
2. Take the matrix whose rows are $0,-3$ and $-1,2$. Apply the matrix four times to the starting vector. Find an orthogonal vector and form an orthogonal basis with these two vectors. Use the matrix with this basis to perform a similarity transformation of the original matrix. How close is the result to an upper triangular matrix?
### QR method
The famous QR method is just another variant on the power method for subspaces of the last section. However it eliminates the calculational difficulties.
Here is the algorithm. We want to approximate the Schur decomposition of the matrix $A$.
Start with $U_{1}=A$. Then iterate as follows. Having defined $U_{k}$, write
$$
U_{k}=Q_{k} R_{k},
$$
where $Q_{k}$ is orthogonal and $R_{k}$ is upper triangular. Let
$$
U_{k+1}=R_{k} Q_{k}
$$
(Note the reverse order). Then for large $k$ the matrix $U_{k+1}$ should be a good approximation to the upper triangular matrix in the Schur decomposition. Why does this work?
First note that $U_{k+1}=R_{k} Q_{k}=Q_{k}^{-1} U_{k} Q_{k}$, so $U_{k+1}$ is orthogonally similar to $U_{k}$.
Let $\tilde{Q}_{k}=Q_{1} \cdots Q_{k}$ and $\tilde{R}_{k}=R_{k} \cdots R_{1}$. Then it is easy to see that
$$
U_{k+1}=\tilde{Q}_{k}^{-1} A \tilde{Q}_{k} .
$$
Thus $U_{k+1}$ is similar to the original $A$.
Furthermore, $\tilde{Q}_{k} \tilde{R}_{k}=\tilde{Q}_{k-1} U_{k} \tilde{R}_{k-1}=A \tilde{Q}_{k-1} \tilde{R}_{k-1}$. Thus the $k$ th stage decomposition is produced from the previous stage by multiplying by $A$.
Finally, we deduce from this that
$$
\tilde{Q}_{k} \tilde{R}_{k}=A^{k} .
$$
In other words, the $\tilde{Q}_{k}$ that sets up the similarity of $U_{k+1}$ with $A$ is the same $\tilde{Q}_{k}$ that arises from the $Q R$ decomposition of the power $A^{k}$. But we have seen that this should give an approximation to the Schur decomposition of $A$. Thus the $U_{k+1}$ should be approximately upper triangular.
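A minimal sketch of the basic (unshifted) QR iteration follows; the symmetric test matrix from the projects below is used only as an illustration, and the fixed iteration count is a simplification (a practical code would test for convergence and use shifts).

```python
import numpy as np

def qr_method(A, steps=200):
    """Basic QR iteration: factor U_k = Q_k R_k, then set U_{k+1} = R_k Q_k."""
    U = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(U)
        U = R @ Q                       # orthogonally similar to the previous iterate
    return U                            # approximately upper triangular

A = np.array([[1.0, 1.0, 0.0, 1.0],
              [1.0, 4.0, 1.0, 1.0],
              [0.0, 1.0, 9.0, 5.0],
              [1.0, 1.0, 5.0, 16.0]])   # symmetric matrix from the projects below
U = qr_method(A)
print(np.round(np.diag(U), 6))                          # approximate eigenvalues
print(np.round(np.sort(np.linalg.eigvals(A))[::-1], 6))
```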
## Projects
1. Implement the QR method for finding eigenvalues.
2. Use the program to find the eigenvalues of the symmetric matrix with rows $1,1,0,1$ and $1,4,1,1$ and $0,1,9,5$ and $1,1,5,16$.
3. Change the last 1 in the first row to a 3, and find the eigenvalues of the resulting non-symmetric matrix.
### Finding eigenvalues
The most convenient method of finding all the eigenvalues is the QR method. Once the eigenvalues are found, then the inverse power method gives an easy determination of eigenvectors.
There are some refinements of the QR method that give greater efficiency, especially for very large matrices.
The trick is to work with Hessenberg matrices, which are matrices whose entries are zero below the first subdiagonal (the diagonal just below the main diagonal).
The idea is to do the eigenvalue determination in two stages. The first stage is to transform $A$ to $Q^{-1} A Q=\hat{A}$, where $\hat{A}$ is a Hessenberg matrix. This is an orthogonal similarity transformation, so this gives a matrix $\hat{A}$ with the same eigenvalues.
This turns out to be an easy task. The idea is much the same as the idea for the QR decomposition, except that the reflections must be applied on both sides, to make it an orthogonal similarity transformation. No limiting process is involved.
One builds the matrix $Q$ out of reflection matrices, $Q=P_{n} \cdots P_{1}$. At the $j$ th stage the matrix is $P_{j} \cdots P_{1} A P_{1} \cdots P_{j}$. The unit vector determining the reflection $P_{j}$ is taken to be zero in the first $j$ components. Furthermore it is chosen so that applying $P_{j}$ on the left zeros out the entries of the $j$ th column below the entry just below the diagonal. The entry just below the diagonal does not become zero. The advantage, however, is that applying $P_{j}$ on the right does not change the $j$ th column or any of the preceding columns.
Now that the matrix is in Hessenberg form, we note that the QR algorithm preserves Hessenberg form. We take the first iterate $U_{1}=\hat{A}$, which is in Hessenberg form. Then we may easily compute that
$$
U_{k+1}=R_{k} U_{k} R_{k}^{-1} .
$$
Thus each $U_{k}$ is in Hessenberg form, since Hessenberg form is preserved by multiplication by upper triangular matrices.
This is very advantageous, since at each stage we must decompose $U_{k}=$ $Q_{k} R_{k}$ and then multiply out $R_{k} Q_{k}$. Since $U_{k}$ is in Hessenberg form, the reflection vectors used in the decomposition are each vectors that have only two non-zero components. The arithmetic is much reduced.
## Chapter 4
## Nonlinear systems
### Introduction
This chapter deals with solving equations of the form $\mathbf{f}(\mathbf{x})=0$, where $\mathbf{f}$ is a continuous function from $\mathbf{R}^{n}$ to $\mathbf{R}^{n}$. It also treats questions of roundoff error and its amplification in the course of a numerical calculation.
In much of what we do the derivative $\mathbf{f}^{\prime}$ of such a function $\mathbf{f}$ will play an essential role. This is defined in such a way that
$$
\mathbf{f}(\mathbf{x}+\mathbf{h})-\mathbf{f}(\mathbf{x})=\mathbf{f}^{\prime}(\mathbf{x}) \mathbf{h}+\mathbf{r},
$$
where the remainder is of higher than first order in the vector $\mathbf{h}$. Thus the derivative $\mathbf{f}^{\prime}(\mathbf{x})$ is a matrix. If we write this in variables with $\mathbf{y}=\mathbf{f}(\mathbf{x})$, then the derivative formula is
$$
\Delta \mathbf{y} \approx \mathbf{f}^{\prime}(\mathbf{x}) \Delta \mathbf{x} .
$$
If we write these relations in components, we get
$$
f_{i}(\mathbf{x}+\mathbf{h})=f_{i}(\mathbf{x})+\sum_{j=1}^{n} \frac{\partial f_{i}(\mathbf{x})}{\partial x_{j}} h_{j}+r_{i} .
$$
Thus the derivative matrix is the matrix of partial derivatives. Using variables one writes
$$
\Delta y_{i} \approx \sum_{j=1}^{n} \frac{\partial y_{i}}{\partial x_{j}} \Delta x_{j}
$$
The same idea is often expressed in differential notation
$$
d y_{i}=\sum_{j=1}^{n} \frac{\partial y_{i}}{\partial x_{j}} d x_{j} .
$$
There are several interpretations of such a function. Two of the most important are the interpretation as a transformation and the interpretation as a vector field.
When we think of a function $\mathbf{g}$ as a transformation, we may think of $\mathbf{x}$ as being a point in one space and $\mathbf{y}=\mathbf{g}(\mathbf{x})$ as being a point in some other space. For each $\mathbf{x}$ there is a corresponding $\mathbf{y}$. It is illuminating to look at the consequence of a change of coordinate system. Say that we have $\mathbf{z}$ coordinates that are functions of the $\mathbf{x}$ coordinates, and we have $\mathbf{w}$ coordinates that are functions of the $\mathbf{y}$ coordinates. In that case we have
$$
\frac{\partial w_{i}}{\partial z_{j}}=\sum_{k} \frac{\partial w_{i}}{\partial y_{k}} \sum_{r} \frac{\partial y_{k}}{\partial x_{r}} \frac{\partial x_{r}}{\partial z_{j}}
$$
This says that the new derivative matrix is obtained from the original matrix by multiplying on each side by matrices representing the effect of the coordinate transformations.
A variant is when we think of the function as a transformation from a space to the same space. In that case we may write $\hat{\mathbf{x}}=\mathbf{g}(\mathbf{x})$ and think of $\mathbf{x}$ as the coordinates of the original point and $\hat{\mathbf{x}}$ as the coordinates of the new point.
In this case there is only the change from $\mathbf{x}$ to $\mathbf{z}$ coordinates, so the change of variable formula becomes
$$
\frac{\partial \hat{z}_{i}}{\partial z_{j}}=\sum_{k} \frac{\partial \hat{z}_{i}}{\partial \hat{x}_{k}} \sum_{r} \frac{\partial \hat{x}_{k}}{\partial x_{r}} \frac{\partial x_{r}}{\partial z_{j}}
$$
We shall see in the problems that there is a special situation when this change is a familiar operation of linear algebra.
When we think of a function $\mathbf{f}$ as a vector field, then we think of $\mathbf{x}$ as being a point in some space and $\mathbf{y}=\mathbf{f}(\mathbf{x})$ as being the components of a vector attached to the point $\mathbf{x}$.
Let us look at the effect of a change of coordinates on the vector field itself. We change from $\mathbf{x}$ to $\mathbf{z}$ coordinates. Let us call the new components of the vector field $\overline{\mathbf{y}}$. Then if we look at a curve tangent to the vector field, we see that along the curve
$$
\bar{y}_{i}=\frac{d z_{i}}{d t}=\sum_{k} \frac{\partial z_{i}}{\partial x_{k}} \frac{d x_{k}}{d t}=\sum_{k} \frac{\partial z_{i}}{\partial x_{k}} y_{k} .
$$
So the vector field is changed by multiplication on the left by a matrix:
$$
\bar{y}_{i}=\sum_{k} \frac{\partial z_{i}}{\partial x_{k}} y_{k}
$$
How about the partial derivatives of the vector field? Here the situation is ugly. A routine computation gives
$$
\frac{\partial \bar{y}_{i}}{\partial z_{j}}=\sum_{r} \sum_{k}\left[\frac{\partial^{2} z_{i}}{\partial x_{r} \partial x_{k}} y_{k}+\frac{\partial z_{i}}{\partial x_{k}} \frac{\partial y_{k}}{\partial x_{r}}\right] \frac{\partial x_{r}}{\partial z_{j}} .
$$
This does not even look like matrix multiplication. We shall see in the problems that there is a special situation where this difficulty does not occur and where we get instead some nice linear algebra.
## Problems
1. Consider a transformation $\hat{\mathbf{x}}=\mathbf{g}(\mathbf{x})$ of a space to itself. Show that at a fixed point the effect of a change of coordinates on the derivative matrix is a similarity transformation.
2. Consider a vector field with components $\mathbf{y}=\mathbf{f}(\mathbf{x})$ in the $\mathbf{x}$ coordinate system. Show that at a zero of the vector field the effect of a change of coordinates on the derivative matrix is a similarity transformation.
### Degree
The intermediate value theorem was a fundamental result in solving equations in one dimension. It is natural to ask whether there is an analog of this theorem for systems. There is such an analog; one version of it is the following topological degree theorem.
We say that a vector $\mathbf{y}$ is opposite to another vector $\mathbf{x}$ if there exists $c \geq 0$ with $\mathbf{y}=-c \mathbf{x}$.
Theorem 4.2.1 Let $\mathbf{f}$ be a continuous function on the closed unit ball $B$ in $\mathbf{R}^{n}$ with values in $\mathbf{R}^{n}$. Let $\partial B$ be the sphere that is the boundary of $B$. Assume that for each $\mathbf{x}$ in $\partial B$ the vector $\mathbf{f}(\mathbf{x})$ is not opposite to $\mathbf{x}$. Then there exists a point $\mathbf{r}$ in $B$ such that $\mathbf{f}(\mathbf{r})=0$.
This is called a topological degree theorem because there is an invariant called the degree which under the hypotheses of this theorem has the value one. In general, if the degree is non-zero, then there is a root, that is, a point where $\mathbf{f}$ vanishes.
The proof of this theorem is much more difficult than the proof of the intermediate value theorem, and it will not be attempted here.
When $n=1$ we are essentially in the situation of the intermediate value theorem. The unit ball $B$ is the closed interval $[-1,1]$, and the boundary $\partial B$ consists of the two points -1 and 1 . The computational problem thus consists of checking the value of a function on these two end points. Unfortunately, the theorem does not lead to a particularly efficient computer implementation when $n>1$. (However perhaps something can be done when $n=2$.) The problem is that the boundary $\partial B$ is an infinite set, and one has to do a calculation involving the whole boundary to check the presence of a root.
## Problems
1. Show how to derive the intermediate value theorem as a corollary of the degree theorem.
2. Find an example in two dimensions where the degree theorem applies to guarantee the existence of the root, but where the root cannot be calculated by elementary means.
#### Brouwer fixed point theorem
The degree theorem has implications for the existence of fixed points. The most famous such result is the Brouwer fixed point theorem.
Theorem 4.2.2 Let $\mathbf{g}$ be a continuous function on the closed unit ball $B$ in $\mathbf{R}^{n}$ with values in $\mathbf{R}^{n}$. Let $\partial B$ be the sphere that is the boundary of $B$. Assume $\mathbf{g}$ sends $\partial B$ into $B$. Then $\mathbf{g}$ has a fixed point.
Proof: Let $\mathbf{f}(\mathbf{x})=\mathbf{x}-\mathbf{g}(\mathbf{x})$ defined on the closed unit ball $B$. We want to show that for some $\mathbf{x}$ in $B$ the vector $\mathbf{f}(\mathbf{x})=0$.
Suppose that for all $\mathbf{x}$ in $B$ the vector $\mathbf{f}(\mathbf{x}) \neq 0$, that is, $\mathbf{g}$ has no fixed point. Consider $\mathbf{x}$ in $\partial B$. If $\mathbf{f}(\mathbf{x})$ were opposite to $\mathbf{x}$, then $\mathbf{g}(\mathbf{x})=(1+c) \mathbf{x}$ with $c \geq 0$; the case $c=0$ would give a fixed point, and the case $c>0$ would put $\mathbf{g}(\mathbf{x})$ outside $B$, contradicting the hypothesis. Therefore $\mathbf{f}(\mathbf{x})$ is never opposite to $\mathbf{x}$ on $\partial B$. But then the degree theorem produces a zero of $\mathbf{f}$, which is a fixed point of $\mathbf{g}$, a contradiction.
Unfortunately, this proof only reduces the Brouwer theorem to the degree theorem, and does not provide a self-contained proof. Again the apparatus of algebraic topology is necessary for a satisfactory treatment.
Two subsets of $\mathbf{R}^{n}$ are homeomorphic if there is a one-to-one correspondence between them that is continuous in both directions. The Brouwer fixed point theorem applies to a subset that is homeomorphic to the closed unit ball. Thus it applies to a closed ball of any size, or to a closed cube.
### Iteration
#### First order convergence
Another approach to numerical root-finding is iteration. Assume that $\mathbf{g}$ is a continuous function. We seek a fixed point $\mathbf{r}$ with $\mathbf{g}(\mathbf{r})=\mathbf{r}$. We can attempt to find it by starting with an $\mathbf{x}_{0}$ and forming a sequence of iterates using $\mathbf{x}_{n+1}=\mathbf{g}\left(\mathbf{x}_{n}\right)$.
Theorem 4.3.1 Let $\mathbf{g}$ be continuous and let $\mathbf{x}_{n}$ be a sequence such that $\mathbf{x}_{n+1}=\mathbf{g}\left(\mathbf{x}_{n}\right)$. If $\mathbf{x}_{n} \rightarrow \mathbf{r}$ as $n \rightarrow \infty$, then $\mathbf{g}(\mathbf{r})=\mathbf{r}$.
This theorem shows that we need a way of getting sequences to converge. In higher dimensions the most convenient approach is to have a bound on the derivative.
How do we use such a bound? We need a replacement for the mean value theorem. Here is the version that works.
Lemma 4.3.1 Let $\mathbf{g}$ be a function with continuous derivative $\mathbf{g}^{\prime}$. Then
$$
|\mathbf{g}(\mathbf{y})-\mathbf{g}(\mathbf{x})| \leq \max _{0 \leq t \leq 1}\left\|\mathbf{g}^{\prime}\left(\mathbf{z}_{t}\right)\right\||\mathbf{y}-\mathbf{x}|,
$$
where $\mathbf{z}_{t}=(1-t) \mathbf{x}+t \mathbf{y}$ lies on the segment between $\mathbf{x}$ and $\mathbf{y}$.
The way to prove the lemma is to use the identity
$$
\mathbf{g}(\mathbf{y})-\mathbf{g}(\mathbf{x})=\int_{0}^{1} \mathbf{g}^{\prime}\left(\mathbf{z}_{t}\right)(\mathbf{y}-\mathbf{x}) d t
$$
Theorem 4.3.2 Let $B$ be a set that is homeomorphic to a closed ball. Assume that $\mathbf{g}$ is continuous on $B$ and that $\mathbf{g}$ maps $B$ into itself. Then $\mathbf{g}$ has a fixed point. Assume also that $B$ is convex and that $\left\|\mathbf{g}^{\prime}(\mathbf{z})\right\| \leq K<1$ for all $\mathbf{z}$ in $B$. Let $\mathbf{x}_{0}$ be in $B$ and iterate using $\mathbf{x}_{n+1}=\mathbf{g}\left(\mathbf{x}_{n}\right)$. Then the iterates converge to the fixed point. Furthermore, this fixed point is the unique fixed point in $B$.
Proof: The Brouwer theorem guarantees that there is a fixed point $\mathbf{r}$ in $B$. By the definition of the iteration, $\mathbf{x}_{n+1}-\mathbf{r}=\mathbf{g}\left(\mathbf{x}_{n}\right)-\mathbf{g}(\mathbf{r})$. By the lemma the norm of this is bounded by $K$ times the norm of $\mathbf{x}_{n}-\mathbf{r}$. In other words, each iteration shrinks the distance to $\mathbf{r}$ by at least the factor $K<1$, so the iterates converge to $\mathbf{r}$. The same estimate applied to two fixed points shows that the fixed point is unique.
We see that $\mathbf{r}$ is a stable fixed point if $\left\|\mathbf{g}^{\prime}(\mathbf{r})\right\|<1$. However there is a stronger result. Recall that the power of a matrix has a norm that gives a much better bound on the eigenvalues. We may compute the derivative of the $m$ th iterate $\mathbf{g}^{m}(\mathbf{x})$ and we get $\mathbf{g}^{\prime}\left(\mathbf{g}^{m-1}(\mathbf{x})\right) \mathbf{g}^{\prime}\left(\mathbf{g}^{m-2}(\mathbf{x})\right) \cdots \mathbf{g}^{\prime}(\mathbf{g}(\mathbf{x})) \mathbf{g}^{\prime}(\mathbf{x})$. At the fixed point with $\mathbf{g}(\mathbf{r})=\mathbf{r}$ this is just the power $\mathbf{g}^{\prime}(\mathbf{r})^{m}$. So near the fixed point we expect that
$$
\mathbf{x}_{n+m}-\mathbf{r}=\mathbf{g}^{m}\left(\mathbf{x}_{n}\right)-\mathbf{g}^{m}(\mathbf{r}) \approx \mathbf{g}^{\prime}(\mathbf{r})^{m}\left(\mathbf{x}_{n}-\mathbf{r}\right) .
$$
We see that if $\left\|\mathbf{g}^{\prime}(\mathbf{r})^{m}\right\|<1$ for some $m$, then the fixed point $\mathbf{r}$ is stable.
If we want to use this to solve $\mathbf{f}(\mathbf{x})=0$, we can try to take $\mathbf{g}(\mathbf{x})=\mathbf{x}-C \mathbf{f}(\mathbf{x})$ for some suitable matrix $C$. If $C$ is chosen so that $\mathbf{g}^{\prime}(\mathbf{x})=I-C \mathbf{f}^{\prime}(\mathbf{x})$ is small for $\mathbf{x}$ near $\mathbf{r}$, then there should be a good chance of convergence.
#### Second order convergence
Since the speed of convergence in iteration with $\mathbf{g}$ is controlled by $\mathbf{g}^{\prime}(\mathbf{r})$, it follows that the situation when $\mathbf{g}^{\prime}(\mathbf{r})=0$ is going to have special properties.
It is possible to arrange that this happens. Say that one wants to solve $\mathbf{f}(\mathbf{x})=0$. Newton's method is to take $\mathbf{g}(\mathbf{x})=\mathbf{x}-\mathbf{f}^{\prime}(\mathbf{x})^{-1} \mathbf{f}(\mathbf{x})$. It is easy to check that $\mathbf{f}(\mathbf{r})=0$ and $\mathbf{f}^{\prime}(\mathbf{r})$ non-singular imply that $\mathbf{g}^{\prime}(\mathbf{r})=0$.
Newton's method is not guaranteed to be good if one begins far from the solution. The damped Newton method is more conservative. One defines $\mathbf{g}(\mathbf{x})$ as follows. Let $\mathbf{m}=\mathbf{f}^{\prime}(\mathbf{x})^{-1} \mathbf{f}(\mathbf{x})$ and let $\mathbf{y}=\mathbf{x}-\mathbf{m}$. While $|\mathbf{f}(\mathbf{y})| \geq|\mathbf{f}(\mathbf{x})|$ replace $\mathbf{m}$ by $\mathbf{m} / 2$ and let $\mathbf{y}=\mathbf{x}-\mathbf{m}$. Let $\mathbf{g}(\mathbf{x})$ be the final value of $\mathbf{y}$.
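Here is a minimal Python sketch of one damped Newton step as just described. The cap on the number of halvings and the illustrative two-dimensional system (a circle intersected with a line) are assumptions added for the example.

```python
import numpy as np

def damped_newton_step(f, fprime, x, max_halvings=40):
    """One damped Newton step: halve the correction until the residual decreases."""
    m = np.linalg.solve(fprime(x), f(x))
    y = x - m
    for _ in range(max_halvings):
        if np.linalg.norm(f(y)) < np.linalg.norm(f(x)):
            break
        m = m / 2.0
        y = x - m
    return y

# Illustrative system: f(x, y) = (x^2 + y^2 - 4, x - y), with a root at (sqrt 2, sqrt 2).
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
fprime = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                             [1.0, -1.0]])

x = np.array([3.0, 0.5])
for _ in range(10):
    x = damped_newton_step(f, fprime, x)
print(x)
```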
## Projects
1. Newton's method for systems has the disadvantage that one must compute many partial derivatives. Steffensen's method provides an alternative. The method is to iterate with $\mathbf{g}(\mathbf{x})=\mathbf{x}-\mathbf{w}$, where $\mathbf{w}$ is the solution of $J(\mathbf{x}) \mathbf{w}=\mathbf{f}(\mathbf{x})$. For Newton's method $J(\mathbf{x})=\mathbf{f}^{\prime}(\mathbf{x})$, but for Steffensen's method we approximate the matrix of partial derivatives by a matrix of difference quotients. Thus the $i, j$ entry of $J(\mathbf{x})$ is $\left(f_{i}\left(\mathbf{x}+h_{j} \mathbf{e}_{j}\right)-f_{i}(\mathbf{x})\right) / h_{j}$, where $h_{j}=\alpha_{j}(\mathbf{f}(\mathbf{x}))$. Here $\alpha$ is a function that vanishes at zero. Thus as $\mathbf{f}(\mathbf{x})$ approaches zero, these difference quotients automatically approach the partial derivatives.
There are various possible choices for the function $\alpha$. One popular choice is the identity $\alpha_{j}(\mathbf{z})=z_{j}$, so that $h_{j}=f_{j}(\mathbf{x})$. The disadvantage of this choice is that $h_{j}$ can be zero away from the root.
Another method is to take each component of $\alpha$ to be the length, so that $\alpha_{j}(\mathbf{z})=|\mathbf{z}|$ and $h_{j}=|\mathbf{f}(\mathbf{x})|$. This choice of $\alpha$ is not differentiable at the origin, but in this case this is not a problem.
Perhaps an even better method is to take $\alpha_{j}(\mathbf{z})$ to be the minimum of $|\mathbf{z}|$ and some small number, such as 0.01. This will make the difference matrix somewhat resemble the derivative matrix even far from the solution.
The project is to write a program for solving a system of non-linear equations by Steffensen's method. Try out the program on a simple system for which you know the solution.
2. Use the program to solve the following system.
$$
\begin{aligned}
x^{3}-3 x y^{2}-6 z^{3}+18 z w^{2}-1 & =0 \\
3 x^{2} y-y^{3}-18 z^{2} w+6 w^{3} & =0
\end{aligned}
$$
$$
\begin{aligned}
x z-y w-1 & =0 \\
y z+x w & =0
\end{aligned}
$$
Find a solution near the point where $(x, y, z, w)$ is $(0.6,1.1,0.4,-0.7)$.
## Problems
1. Let $B$ be the ball of radius $r$ centered at $\mathbf{c}$. Assume that $\left\|\mathbf{g}^{\prime}(\mathbf{z})\right\| \leq$ $K<1$ for all $\mathbf{z}$ in $B$. Suppose that $\mathbf{a}$ is in $B$ and that $\mathbf{a}$ and $\mathbf{g}(\mathbf{a})$ satisfy $K r+K|\mathbf{a}-\mathbf{c}|+|\mathbf{g}(\mathbf{a})-\mathbf{c}| \leq r$. Show that $\mathbf{g}$ maps $B$ into $B$.
2. Let $\mathbf{g}(\mathbf{x})=\mathbf{x}-C \mathbf{f}(\mathbf{x})$. Show that if $C$ approximates $\mathbf{f}^{\prime}(\mathbf{x})^{-1}$ in the sense that $\left\|C-\mathbf{f}^{\prime}(\mathbf{x})^{-1}\right\|\left\|\mathbf{f}^{\prime}(\mathbf{x})\right\|$ is bounded below one near the fixed point, then iteration with $\mathbf{g}$ starting near the fixed point converges to the fixed point.
3. Calculate $\mathbf{g}^{\prime}(\mathbf{x})$ in Newton's method.
4. Show that in Newton's method $\mathbf{f}(\mathbf{r})=0$ with $\mathbf{f}^{\prime}(\mathbf{r})$ invertible implies $\mathbf{g}^{\prime}(\mathbf{r})=0$.
5. The following problems deal with the theory of Steffensen's method. For simplicity we deal with the scalar case. Thus the method consists of iteration with $g(x)=x-f(x) / m(x)$, where $m(x)=(f(x+\alpha(f(x)))-f(x)) / \alpha(f(x))$. The function $\alpha$ is chosen so that $\alpha(0)=0$. Let $r$ be a root of $f$, so that $f(r)=0$, and assume that $f^{\prime}(r) \neq 0$. The first problem is to compute $m(r)$ and $m^{\prime}(r)$.
6. Show that $g^{\prime}(r)=0$ and that $g^{\prime \prime}(r)=\left(2 m^{\prime}(r)-f^{\prime \prime}(r)\right) / f^{\prime}(r)$.
7. Let $x_{n+1}=g\left(x_{n}\right)$ and assume $x_{n} \rightarrow r$ as $n$ to $\infty$. Evaluate the limit of $\left(x_{n+1}-r\right) /\left(x_{n}-r\right)^{2}$ as $n \rightarrow \infty$.
8. What if $\alpha(z)=|z|$, so that $\alpha$ is not differentiable at 0? Is it still true that $g^{\prime}(r)=0$?
9. Another interesting problem is to examine the situation when the iterations give increasing or decreasing sequences of vectors. Show that if the matrix $\mathbf{f}^{\prime}(\mathbf{x})^{-1}$ has positive entries and $\mathbf{f}(\mathbf{x})$ has positive entries, then $\mathbf{g}(\mathbf{x}) \leq \mathbf{x}$.
10. This leads to the problem of finding when a matrix $A$ has the property that $A^{-1}$ has positive entries. Show that if $A=D-N$, where $D$ is diagonal and $N$ is off-diagonal and $D \geq 0$ and $N \geq 0$ and $\left\|D^{-1} N\right\|_{\infty}<1$, then $A^{-1}$ has only positive entries.
11. Show that if in this situation $A^{-1}$ exists but we have only $\left\|D^{-1} N\right\|_{\infty} \leq 1$, then the same conclusion holds.
12. Assume that $\mathbf{x} \geq \mathbf{r}$ implies $\mathbf{f}(\mathbf{x}) \geq 0$. We want to see when $\mathbf{x} \geq \mathbf{r}$ implies $\mathbf{g}(\mathbf{x}) \geq \mathbf{r}$. Evaluate $\mathbf{g}^{\prime}(\mathbf{z})$ in terms of $\mathbf{f}^{\prime \prime}(\mathbf{z})$ and show that it is sufficient that all the second partial derivatives in this expression are positive.
### Power series
Let $C$ be a matrix with $\|C\|<1$. Then $(1-C)^{-1}$ exists. We may most easily see this by expanding $(1-C)^{-1}$ in a geometric series in powers of $C$. By summing this series we see that $\left\|(1-C)^{-1}\right\| \leq 1 /(1-\|C\|)$.
Recall that in general $\left\|C^{k}\right\| \leq\|C\|^{k}$. Thus there is an interesting generalization to the case when $\left\|C^{k}\right\|<1$ for some $k$. We observe that
$$
(1-C)^{-1}=\left(1+C+C^{2}+\cdots+C^{k-1}\right)\left(1-C^{k}\right)^{-1} .
$$
It follows that also in this situation $(1-C)^{-1}$ exists.
## Problems
1. Find a bound for $(1+C)^{-1}$ in terms of $\|C\|$ and $\left\|C^{k}\right\|$.
2. Show that if $A^{-1}$ exists and $\left\|\left(E A^{-1}\right)^{k}\right\|<1$ for some $k$, then $(A-$ $E)^{-1}$ exists.
### The spectral radius
The spectral radius $\rho(A)$ of a square matrix $A$ is the maximum absolute value of an eigenvalue. Even if the matrix is real, it is important in this definition that all complex eigenvalues are considered. The reason for this importance is that with this definition we have the following theorem.
Theorem 4.5.1 The norms of powers $A^{n}$ and the spectral radius of $A$ are related by
$$
\lim _{n \rightarrow \infty}\left\|A^{n}\right\|^{\frac{1}{n}}=\rho(A) .
$$
Proof:
It is easy to see that for all $n$ it is true that $\rho(A) \leq\left\|A^{n}\right\|^{\frac{1}{n}}$. The problem is to show that for large $n$ the right hand side is not much larger than the left hand side. The first thing to do is to check that $|z|<1 / \rho(A)$ implies that $(I-z A)^{-1}$ exists. (Otherwise $1 / z$ would be an eigenvalue of $A$ outside of the spectral radius.)
The essential observation is that $(I-z A)^{-1}$ is an analytic function of $z$ for $|z|<1 / \rho(A)$. It follows that the power series expansion converges in this disk. Thus for each $z$ with $|z|<1 / \rho(A)$ there is a constant $c$ with $\left\|(z A)^{n}\right\|=|z|^{n}\left\|A^{n}\right\| \leq c$.
We have shown that for every $r<1 / \rho(A)$ there is a $c$ with $\left\|A^{n}\right\|^{\frac{1}{n}} \leq c^{\frac{1}{n}} / r$. Take $1 / r$ to be larger than but very close to $\rho(A)$. Take $n$ so large that $c^{\frac{1}{n}} / r$ is still close to $\rho(A)$. Then $\left\|A^{n}\right\|^{\frac{1}{n}}$ must be larger than but close to $\rho(A)$.
Let us look at this proof in more detail. The essential point is the convergence of the power series. Why must this happen? It is a miracle of complex variable: the Cauchy integral formula reduces the convergence of an arbitrary power series inside its radius of convergence to the convergence of a geometric series.
Look at the Cauchy integral formula
$$
(I-w A)^{-1}=\frac{1}{2 \pi i} \int(I-z A)^{-1} \frac{1}{z-w} d z,
$$
where $w$ is inside the circle of integration $|z|=r$ and $r<1 / \rho(A)$. We may expand in a geometric series in powers of $w / z$. From this we see that the coefficients of the expansion in powers of $w$ are
$$
A^{n}=\frac{1}{2 \pi i} \int(I-z A)^{-1} \frac{1}{z^{n+1}} d z .
$$
This proves that $\left\|A^{n}\right\| \leq c / r^{n}$, where
$$
c=\frac{1}{2 \pi r} \int\left\|(I-z A)^{-1}\right\| d|z|
$$
over $|z|=r$.
Notice that as $r$ approaches $1 / \rho(A)$ the bound $c$ will become larger, due to the contribution to the integral from the singularity at $z=1 / \lambda$, where $\lambda$ is an eigenvalue with $|\lambda|=\rho(A)$.
## Problems
1. Show that it is false in general that $\rho(A+B) \leq \rho(A)+\rho(B)$. Hint: Find 2 by 2 matrices for which the right hand side is zero.
### Linear algebra review
We briefly review the various similarity representations of matrices. We are interested in the case of a real square matrix $A$. There is always a (possibly complex) matrix $S$ and a (possibly complex) upper triangular matrix $J$ such that $S^{-1} A S=J$. Thus $A$ has the Jordan representation $A=S J S^{-1}$. The eigenvalues of $A$ occur on the diagonal of $J$.
Assume that $A$ has distinct eigenvalues. Then the eigenvectors of $A$ form a basis. Then there is always a (possibly complex) matrix $S$ and a (possibly complex) diagonal matrix $D$ such that $S^{-1} A S=D$. Thus $A$ has the spectral representation $A=S D S^{-1}$. The eigenvalues of $A$ occur on the diagonal of $D$. The eigenvectors form the columns of the matrix $S$.
Assume that $A$ has real eigenvalues (not necessarily distinct). Then there is always an orthogonal matrix $Q$ and an upper triangular matrix $U$ such that $Q^{-1} A Q=U$. Thus $A$ has the Schur representation $A=Q U Q^{-1}$ where $Q^{-1}=Q^{T}$. The eigenvalues of $A$ occur on the diagonal of $U$.
Assume that $A$ is symmetric, so $A=A^{T}$. Then $A$ has real eigenvalues (not necessarily distinct). The eigenvectors of $A$ may be chosen so as to form an orthonormal basis. There is always an orthogonal matrix $Q$ and a diagonal matrix $D$ such that $Q^{-1} A Q=D$. Thus $A$ has the spectral representation $A=Q D Q^{-1}$ where $Q^{-1}=Q^{T}$. The eigenvalues of $A$ occur on the diagonal of $D$. The eigenvectors may be chosen to form the columns of the matrix $\mathrm{Q}$.
### Error analysis
There are several sources of error in numerical analysis. There are some that are obvious and inescapable, such as the input error in data from the outside world, and the inherent representation error in the precision that is available in the output.
#### Approximation error and roundoff error
However there are two main sources of error. One is the approximation error. This is the error that is due to approximating limits by finitely many arithmetical operations. It is sometimes called truncation error and sometimes called discretization error, depending on the context.
The most common tool for treating approximation error is Taylor's formula with remainder. The idea is that a function such as $f(x)=f(a)+f^{\prime}(a)(x-a)+\frac{1}{2} f^{\prime \prime}(a)(x-a)^{2}+r$ is approximated by the polynomial $p(x)=f(a)+f^{\prime}(a)(x-a)+\frac{1}{2} f^{\prime \prime}(a)(x-a)^{2}$. The advantage is that the polynomial can be computed by addition and multiplication, which are operations supported by the computer. The disadvantage is that there is an error $r$. One hopes that the error $r$ is small, and that one can prove that it is small.
There are other forms of approximation error, but in every case the challenge is to find algorithms for which the approximation error is small. This is a general mathematical problem, perhaps the central problem in analysis. In this respect numerical analysis appears as nothing more than a subset of analysis.
Of course some parts of numerical analysis have no approximation error. For instance, the formulas for inverting a matrix using the $L U$ or $Q R$ decomposition are exact.
Examples that we have encountered where there is approximation error are root-finding and eigenvalue computation. In each of these cases the computation is only exact when one performs infinitely many iterations, which is impossible on the computer.
The other main source of error is roundoff error. This is due to the fact that computer does not do exact arithmetic with real numbers, but only floating point arithmetic. This source of error is peculiar to numerical analysis.
#### Amplification of absolute error
We want an algorithm to have the property that it does not magnify roundoff error needlessly. For a start we do an analysis of absolute error. For the purposes of analysis, imagine that we want to compute a function $\mathbf{y}=\mathbf{f}(\mathbf{x})$. Then $d \mathbf{y}=\mathbf{f}^{\prime}(\mathbf{x}) d \mathbf{x}$, so small input error is amplified by the entries in the matrix $\mathbf{f}^{\prime}(\mathbf{x})$. If this matrix has large entries, then the problem is inherently ill-conditioned with respect to absolute error. Nothing can be done to remedy this.
Example: Consider the problem of solving a polynomial equation $p(z)=$ 0 . The roots are functions of the coefficients. Let $a$ be one of the coefficients, say the coefficient of $z^{m}$. Then the root $y$ satisfies an equation $p(y)=$ $P(a, y)=0$. Differentiate this equation with respect to $a$. We obtain $y^{m}+p^{\prime}(y) d y / d a=0$, where $p^{\prime}(z)$ is the derivative of the polynomial with respect to $z$. This shows that the problem of finding the root $y$ as a function of the coefficient $a$ is not well-posed when $p^{\prime}(y)=0$, that is, when $y$ is a multiple root.
However, even if the problem is well-conditioned, a poor choice of algorithm can give trouble. Say that our numerical algorithm is to compute $\mathbf{z}=\mathbf{h}(\mathbf{x})$ and then $\mathbf{y}=\mathbf{g}(\mathbf{z})$. Then we have an intermediate stage at which we can introduce roundoff errors. We have $d \mathbf{y}=\mathbf{g}^{\prime}(\mathbf{z}) d \mathbf{z}$, so if $\mathbf{g}^{\prime}(\mathbf{z})$ is large, which can happen, then we can have these intermediate errors amplified.
We can examine this amplification effect rather crudely in terms of the norm of the matrix $\mathbf{g}^{\prime}(\mathbf{z})$. Or we can write out the entries explicitly as
$$
d y_{i}=\sum_{k} \frac{\partial y_{i}}{\partial z_{k}} d z_{k}
$$
and perform a detailed analysis.
From the chain rule, we always have $\mathbf{f}^{\prime}(\mathbf{x})=\mathbf{g}^{\prime}(\mathbf{z}) \mathbf{h}^{\prime}(\mathbf{x})$. So, roughly speaking, for a fixed problem of computing $\mathbf{f}(\mathbf{x})$, we want to choose an algorithm so as to take $\mathbf{g}^{\prime}(\mathbf{z})$ to have a reasonably small norm, not greatly exceeding the norm of $\mathbf{f}^{\prime}(\mathbf{x})$. This keeps the norm of $\mathbf{h}^{\prime}(\mathbf{x})$ from being small, but this norm does not matter. (The whole point is not to needlessly amplify roundoff errors from intermediate stages.)
We say that an algorithm is numerically stable if the errors that are introduced from intermediate stages are not much larger than the inherent roundoff errors that arise from the input error and the representation error.
A problem can be well-conditioned, but we can make the mistake of choosing a numerically unstable algorithm. This leads to wrong answers!
Example: Here is a classic example. Say that one wants to compute the function $y=f(x)=\sqrt{x+1}-\sqrt{x}$ for large $x$. One does this in stages. The first stage is to compute $z=\sqrt{x+1}$ and $w=\sqrt{x}$. (Note that $z^{2}-w^{2}=$ 1.) The second stage may be performed in two ways. The obvious but undesirable way is to compute $y=z-w$. The better way is to compute $y=1 /(z+w)$. The derivative $d y / d x$ is very small when $x$ is large. Errors in $x$ are damped. However in the undesirable method $\partial y / \partial z=1$ and $\partial y / \partial w=-1$, which are not small. So errors in $z$ and $w$ are not damped.
With the better method, the partial derivatives are $\partial y / \partial z=-1 /(z+w)^{2}$ and $\partial y / \partial w=-1 /(z+w)^{2}$. Errors in $z$ and $w$ are damped. Clearly this method is preferable.
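The effect is easy to see numerically. In the sketch below (the value $x=10^{15}$ is an arbitrary large input) the subtraction loses most of its significant digits, while the reformulated expression is accurate.

```python
import math

x = 1.0e15
z = math.sqrt(x + 1.0)
w = math.sqrt(x)

bad  = z - w            # catastrophic cancellation: most significant digits are lost
good = 1.0 / (z + w)    # algebraically identical, but numerically stable

print(bad, good)        # good agrees with the true value (about 1.58e-8) to full
                        # precision; bad does not
```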
Example: Fix $0<a<1$. Let $b=a+1 / a$. Consider the problem of solving the recurrence relation $x_{n+1}=b x_{n}-x_{n-1}$ with $x_{1}=a$ and $x_{0}=1$. It is easy to see that the solution is $x_{k}=a^{k}$. Clearly $d x_{k} / d a=k a^{k-1}$ is small. The problem is well-posed.
However the recurrence relation is a terrible way to compute the answer. The reason is that this is essentially computing the matrix power in
$$
\left(\begin{array}{c}
x_{k+1} \\
x_{k}
\end{array}\right)=\left(\begin{array}{cc}
b & -1 \\
1 & 0
\end{array}\right)^{k}\left(\begin{array}{l}
a \\
1
\end{array}\right) .
$$
The largest eigenvalue of the matrix is $1 / a>1$, so for large $k$ this has very large norm. Errors (from earlier stages of taking the power) are very much amplified!
#### Amplification of relative error
The arithmetic used in most computers is floating point arithmetic, so the roundoff error is more or less independent of the size of the number. It should be thought of as a relative error. Thus the amplification effect of relative error is what is actually important.
The formula for the dependence of relative error in the output on relative error in an intermediate stage is
$$
\frac{d y_{i}}{y_{i}}=\sum_{k} \frac{z_{k}}{y_{i}} \frac{\partial y_{i}}{\partial z_{k}} \frac{d z_{k}}{z_{k}}
$$
It is of course possible to think of this as the formula for the absolute error amplification for logarithms
$$
d \log \left|y_{i}\right|=\sum_{k} \frac{\partial \log \left|y_{i}\right|}{\partial \log \left|z_{k}\right|} d \log \left|z_{k}\right| .
$$
In this sense it falls under the scope of the analysis of the previous section.
Ultimately, however, the result is that the amplification of relative error is given by a matrix with entries $\left(z_{k} / y_{i}\right)\left(\partial y_{i} / \partial z_{k}\right)$. We want this to be a small matrix. We have no control over the $y_{i}$, since this is simply the output value. But the algorithm controls the size of the intermediate number $z_{k}$. The conclusion is that, unless there is a compensating small derivative, we prefer small sizes of the numbers $z_{k}$ that occur in intermediate steps of the computation.
Example: Say that one wants to compute $y=f(x)=x+2$ at $x=2$. A bad algorithm is to compute $z=1000001 x$ and $w=1000000 x$ and compute $y=z-w+2$. The reason is that a relative error of a millionth in $z$ or $w$ will give a completely wrong answer. Big numbers have been introduced needlessly.
Example: Say that one wants to compute $y=f(x)=x+2$ at $x=2$. One algorithm is to compute $z=1000000 x$ and then $y=1 / 1000000 z+2$. This introduces a big number $z$, but in this case it does no harm. The reason is that the small $d y / d z$ compensates the large $z$.
Example: Let us repeat the example of the function $f(x)=\sqrt{x+1}-\sqrt{x}$, this time making the analysis using relative error. The first stage is to compute $z=\sqrt{x+1}$ and $w=\sqrt{x}$. (Note that $z^{2}-w^{2}=1$.) The second stage may be performed in two ways. The obvious but undesirable way is to compute $y=z-w$. The better way is to compute $y=1 /(z+w)$.
The relative error amplification inherent in the problem is $(x / y) d y / d x$, which evaluates to $-x /(2 z w)$, a quantity bounded by one. The problem is well-conditioned.
However in the undesirable method we have $(z / y) \partial y / \partial z=z(z+w)$ and also $(w / y) \partial y / \partial w=-w(z+w)$, which are very large. So relative errors in $z$ and $w$ are amplified.
With the better method, the partial derivatives are given by the expressions $(z / y) \partial y / \partial z=-z /(z+w)$ and $(w / y) \partial y / \partial w=-w /(z+w)$. This does not amplify relative error, and so this method is numerically stable. Note that in this particular example the advantage of the better method is the same in the absolute error analysis and the relative error analysis. This is because the two algorithms deal with precisely the same numbers in the intermediate stage.
## Problems
1. Say that $f(a, y)=0$ defines $y$ as a function of $a$. When is this problem of determining $y$ from $a$ well-posed?
2. Consider the problem of finding $y=e^{-x}$ for large values of $x$. One strategy is to use the $n$th partial sum of the Taylor series expansion about zero. How large must $n$ be chosen so that the approximation error is smaller than some small $\epsilon>0$ ?
3. Methods using Taylor series can have problems with roundoff error. Consider the problem of finding $y=e^{-x}$ for large $x$. Here are two methods. Neither is particularly good, but one is considerably better than the other. The problem is to give a relative error analysis.
The first method is to compute $y=z+w$, where $z$ is the $n$th partial sum of the Taylor series of $y$ in powers of $x$ about zero. The other term $w$ is the remainder (which we assume can be well-approximated).
The second method is to compute $y=1 / u$. Here $u=z+w$, where $z$ is the $n$th partial sum of the Taylor series of $u$ in powers of $x$ about zero. The other term is the remainder (which we assume again can be well-approximated).
### Numerical differentiation
Let us apply our analysis of error to the example of numerical differentiation. We want to start from $x$ and compute $y^{\prime}=f^{\prime}(x)$. The relative error amplification is $\left(x / y^{\prime}\right)\left(d y^{\prime} / d x\right)$ and so it is controlled by the second derivative $f^{\prime \prime}(x)$. So the problem is well-conditioned in most circumstances.
The method of numerical differentiation is to compute $z=f(x+h)$ and $w=f(x)$ and approximate $y^{\prime}$ by $(z-w) / h$. This is supposed to be a good approximation when $h$ is very small.
The relative amplification factor from $z$ is given by $\left(z / y^{\prime}\right) \partial y^{\prime} / \partial z$. In this approximation this is $\left(z / y^{\prime}\right)(1 / h)$. This is ordinarily huge when $h$ is very small. Therefore numerical differentiation is not a numerically stable way of computing derivatives.
There is one exceptional case when numerical differentiation is not so bad. This is when $f(x)$ is itself small, comparable to $h$. Then factors such as $z / h$ are of order one, which is acceptable.
Newton's method for solving equations is to iterate using $g(x)=x-$ $f(x) / y^{\prime}$, where $y^{\prime}=f^{\prime}(x)$. Steffensen's method is to apply numerical differentiation with $h=f(x)$ to avoid having to compute a formula for the derivative. Why is this justified, if numerical differentiation is not a numerically stable process?
The answer is first that this is precisely the case when numerical differentiation is not so bad. Furthermore, the numerical differentiation is not the final stage. We are actually interested in computing $g(x)$, and the relative error amplification quantity is $\left(y^{\prime} / g(x)\right) \partial g(x) / \partial y^{\prime}=-1 /\left(y^{\prime} g(x)\right) f(x)$. As we approach the solution the factor $f(x)$ gets very small, so in this final stage the relative error is damped by a huge amount. Thus the iteration is very stable with respect to errors in the numerical derivative.
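The trade-off between approximation error and roundoff error in the difference quotient is easy to observe. In the sketch below (the test function $\sin$ and the point $x=1$ are arbitrary choices) the error first decreases with $h$ and then grows again as cancellation sets in.

```python
import math

f, x = math.sin, 1.0
exact = math.cos(x)               # exact derivative for comparison

for h in [1e-2, 1e-5, 1e-8, 1e-11, 1e-14]:
    approx = (f(x + h) - f(x)) / h
    print("h =", h, "  error =", abs(approx - exact))
# The error shrinks while the O(h) approximation error dominates, then grows
# as roundoff in f(x+h) - f(x) is amplified by the division by h.
```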
## Problems
1. Take $f(x)=\sqrt{x}$ and consider the problem of computing the difference quotient $(f(x+h)-f(x)) / h$ for small $h$. Discuss numerical stability of various algorithms.
2. Take $f(x)=1 / x$ and consider the problem of computing the difference quotient $(f(x+h)-f(x)) / h$ for small $h$. Discuss numerical stability of various algorithms.
3. Consider the problem of computing the derivative $f^{\prime}(x)$. One may compute either $(f(x+h)-f(x)) / h$, or one may compute $(f(x+h)-$ $f(x-h)) /(2 h)$. Compare these from the point of view of approximation error.
4. Say that one takes Steffensen's method with $h=f(x)^{2}$ instead of $h=f(x)$. What is the situation with numerical stability?
5. How about with $h=f(x)^{3}$ ?
## Chapter 5
## Ordinary Differential Equations
### Introduction
This chapter is on the numerical solution of ordinary differential equations. There is no attempt to cover a broad spectrum of methods. In fact, we stick with the simplest methods to implement, the Runge-Kutta methods.
Our main purpose is to point out that there are two different problems with the approximation of ordinary differential equations. The first is to get an accurate representation of the solution for moderate time intervals by using a small enough step size and an accurate enough approximation method. The second is to get the right asymptotic behavior of the solution for large time.
### Numerical methods for scalar equations
We consider the equation
$$
\frac{d y}{d t}=g(t, y)
$$
with the initial condition $y=y_{0}$ when $t=t_{0}$.
The simplest numerical method is Euler's method, which is the first-order Runge-Kutta method. Fix a step size $h>0$. Set $t_{n}=t_{0}+n h$. The algorithm is
$$
y_{n+1}=y_{n}+h g\left(t_{n}, y_{n}\right) .
$$
Here is a program in ISETL (Interactive Set Language).
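The ISETL listing itself is not reproduced in this copy; in its place, here is a minimal Python sketch of the same algorithm (the test equation $dy/dt=y$ with $y(0)=1$ is only an illustration).

```python
def euler(g, t0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h * g(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * g(t, y)
        t = t + h
    return t, y

# Illustration: dy/dt = y, y(0) = 1, whose exact solution is e^t.
print(euler(lambda t, y: y, 0.0, 1.0, h=0.01, steps=100))   # y(1) is roughly 2.7048
```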
Another method is the implicit or backward Euler's method. This is given by
$$
y_{n+1}=y_{n}+h g\left(t_{n+1}, y_{n+1}\right) \text {. }
$$
It would seem that this would never be used, because one has to solve an equation for $y_{n+1}$ at each step. However we shall see that this method has important stability properties that can be crucial in some situations.
Euler's method is not very accurate, since the slopes are computed only at the beginning of the time step. A better method would take the average of the slopes at the beginning and at the end. This is the implicit trapezoid method. This is given by
$$
y_{n+1}=y_{n}+h \frac{1}{2}\left[g\left(t_{n}, y_{n}\right)+g\left(t_{n+1}, y_{n+1}\right)\right] .
$$
Again this requires solving an equation for $y_{n+1}$ at each stage.
The trouble is that we don't know the slope at the end. The solution is to use the Euler method to estimate the slope at the end. This gives an explicit trapezoid method which is a second order Runge-Kutta method. The formula is
$$
y_{n+1}=y_{n}+h \frac{1}{2}\left[g\left(t_{n}, y_{n}\right)+g\left(t_{n+1}, y_{n}+h g\left(t_{n}, y_{n}\right)\right)\right] .
$$
Here is a program.
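Similarly, a minimal Python sketch of the explicit trapezoid step above (again with illustrative names and data):

```python
def trapezoid_rk2(g, t0, y0, h, n):
    """Explicit trapezoid (second-order Runge-Kutta) method for y' = g(t, y)."""
    t, y = t0, y0
    for _ in range(n):
        k1 = g(t, y)               # slope at the beginning of the step
        k2 = g(t + h, y + h * k1)  # Euler estimate of the slope at the end
        y = y + h * (k1 + k2) / 2.0
        t = t + h
    return t, y

# Same test problem as above: the error is much smaller than for Euler.
print(trapezoid_rk2(lambda t, y: y, 0.0, 1.0, 0.01, 100))
```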
Problems 1. Find the solution of
$$
\frac{d y}{d t}=y+e^{t} \cos t
$$
with $y=0$ when $t=0$. What is the value of the solution at $t=\pi$?
2. Solve this numerically with Euler's method and compare.
3. Solve this numerically with the trapezoid second-order Runge-Kutta method and compare.
4. Compare the Euler and trapezoid second-order Runge-Kutta methods with the (left endpoint) Riemann sum and trapezoid rule methods for numerical integration.
5. Another method is to use midpoints: $d y=g\left(t+d t / 2, y+d y_{e} / 2\right) d t$ where $d y_{e}=g(t, y) d t$. This is another second-order Runge-Kutta method. Program this and solve the equation numerically. How does this compare in accuracy with the other methods?
### Theory of scalar equations
#### Linear equations
Linear equations are easy to solve. The general homogeneous linear equation is
$$
\frac{d u}{d t}=a(t) u
$$
It may be solved by separation of variables $d u / u=a(t) d t$.
It is then easy to find the solutions of the linear equation
$$
\frac{d y}{d t}=a(t) y+s(t)
$$
The trick is to let $u(t)$ be a solution of the corresponding homogeneous equation and try $y=c(t) u(t)$. Then it is easy to solve for $c(t)$ by integration of $d c(t) / d t=s(t) / u(t)$.
#### Autonomous equations
The general autonomous equation is
$$
\frac{d y}{d t}=f(y)
$$
An equilibrium point is a solution of $f(r)=0$. For each equilibrium point we have a solution $y=r$.
Near an equilibrium point $f(y) \approx f^{\prime}(r)(y-r)$. An equilibrium point $r$ is attractive if $f^{\prime}(r)<0$ and repulsive if $f^{\prime}(r)>0$.
One can attempt to find the general solution of the equation by integrating
$$
\int \frac{1}{f(y)} d y=\int d t
$$
## Problems
1. If a population grows by $d p / d t=.05 p$, how long does it take to double in size?
2. The velocity of a falling body (in the downward direction) is given by $d v / d t=g-k v$, where $g=32$ and $k=1 / 4$. If $v=0$ when $t=0$, what is the limiting velocity as $t \rightarrow \infty$ ?
3. Consider $d y / d t=a y+b$ where $y=y_{0}$ when $t=0$. Solve for the case when $a \neq 0$. Fix $t$ and find the limit of the solution $y$ as $a \rightarrow 0$.
4. A population grows by $d p / d t=a p-b p^{2}$. Here $a>0, b>0$, and $0<p<a / b$. Find the solution with $p=p_{0}$ at $t=0$. Do this by letting $u=1 / p$ and solving the resulting differential equation for $u$.
5. Do the same problem by integrating $1 /\left(a p-b p^{2}\right) d p=d t$. Use partial fractions.
6. In the same problem, find the limiting population as $t \rightarrow \infty$.
## Projects
1. Write a program to solve an ordinary differential equation via the explicit trapezoid method.
2. Use your program to explore the solutions of $d x / d t=x-x^{3}$. Try many different initial conditions. What pattern emerges? Discuss the limit of $x$ as $t \rightarrow \infty$ as a function of the initial condition $x_{0}$.
#### Existence
We want to explore several questions. When do solutions exist? When are they uniquely specified by the initial condition? How does one approximate them numerically? We begin with existence. Consider the equation
$$
\frac{d y}{d t}=g(t, y)
$$
with initial condition $y=y_{0}$ when $t=t_{0}$. Assume that $g$ is continuous. Then the solution always exists, at least for a short time interval near $t_{0}$. One proof of this is based on using Euler's method with step sizes $h>0$ to generate approximate solutions. One then takes the limit $h \rightarrow 0$ and uses a compactness argument to show that these approximate a solution.
In general, however, we have only local existence. An example is given in the problems.
## Problems
1. Consider the differential equation
$$
\frac{d y}{d t}=y^{2}
$$
with initial condition $y=y_{0}$ when $t=0$. Find the solution. For which $t$ does the solution blow up?
2. Sketch the vector field in phase space (with $d x / d t=1$ ). Sketch a solution that blows up.
3. Can this sort of blow up happen for linear equations? Discuss.
#### Uniqueness
Assume in addition that $g$ has continuous derivatives. Then the solution with the given initial condition is unique. This fact is usually proved using a fixed point iteration method.
Uniqueness can fail when $g$ is continuous but when $g(t, y)$ has infinite slope as a function of $y$.
## Problems
1. Plot the function $g(y)=\operatorname{sign}(y) \sqrt{|y|}$. Prove that it is continuous.
2. Plot its derivative and prove that it is not continuous.
3. Solve the differential equation
$$
\frac{d y}{d t}=\operatorname{sign}(y) \sqrt{|y|}
$$
with the initial condition $y=0$ when $t=0$. Find all solutions for $t \geq 0$.
4. Substitute the solutions back into the equation and check that they are in fact solutions.
5. Sketch the vector field in phase space ( with $d x / d t=1$ ).
6. Consider the backward Euler's method for this example. What ambiguity is there in the numerical solution?
#### Forced oscillations
We consider the non-linear equation
$$
\frac{d y}{d t}=g(t, y)
$$
Assume that $g(t, y)$ has period $T$, that is, $g(t+T, y)=g(t, y)$. It will not necessarily be the case that all solutions have period $T$. However there may be a special steady-state solution that has period $T$.
Here is the outline of the argument. Assume that $a<b$ and that $g(t, a) \geq 0$ for all $t$ and $g(t, b) \leq 0$ for all $t$. Then no solution can leave the interval $[a, b]$. Thus if $y=\phi\left(t, y_{0}\right)$ is the solution with $y=y_{0}$ at $t=0$, then $h\left(y_{0}\right)=\phi\left(T, y_{0}\right)$ is a continuous function from $[a, b]$ to itself. It follows (by the intermediate value theorem applied to $h\left(y_{0}\right)-y_{0}$) that $h$ has a fixed point. But then if we take the initial condition to be this fixed point we get a periodic solution.
We can sometimes get to this fixed point by iterations. Let $y^{\prime}$ be $\partial y / \partial y_{0}$. Then
$$
\frac{d y^{\prime}}{d t}=\frac{\partial g(t, y)}{\partial y} y^{\prime}
$$
Also $y^{\prime}=1$ at $t=0$ and $y^{\prime}=h^{\prime}\left(y_{0}\right)$ at $t=T$. It follows that $h^{\prime}\left(y_{0}\right)>0$.
Assume that $\partial g(t, y) / \partial y<0$. Then $h^{\prime}\left(y_{0}\right)<1$ and so we can hope that fixed point iterations of $h$ converge. This would say that every solution in the interval converges to the periodic solution.
## Problems
1. Consider the equation
$$
\frac{d y}{d t}=g(y)+s(t)
$$
with periodic forcing function $s(t)$. Find conditions that guarantee that this has a periodic solution.
2. Apply this to the equation
$$
\frac{d y}{d t}=a y-b y^{2}+c \sin (\omega t)
$$
3. Experiment with numerical solutions. Which solutions converge to the periodic solution?
### Theory of numerical methods
#### Fixed time, small step size
We want to approximate the solution of the differential equation
$$
\frac{d y}{d t}=f(t, y)
$$
with initial condition $y=y_{0}$ at $t=0$. For convenience we denote the solution of the equation at a point where $t=a$ by $y(a)$.
The general method is to find a function $\phi(t, y, h)$ and compute
$$
y_{n+1}=y_{n}+h \phi\left(t_{n}, y_{n}, h\right)
$$
Here $h>0$ is the step size and $t_{n}=n h$.
Example: Let $\phi(t, y, h)=f(t, y)$. This is Euler's method.
Example: Let $\phi(t, y, h)=(1 / 2)[f(t, y)+f(t+h, y+h f(t, y))]$. This is a second-order Runge-Kutta method which we will call the explicit trapezoid method.
Example: Let $\phi(t, y, h)=f(t+h / 2, y+h f(t, y) / 2)$. This is another second-order Runge-Kutta method, the explicit midpoint method.
We will always pick the method so that $\phi(t, y, 0)=f(t, y)$. Such a method is said to be consistent.
Why is one method better than another? One criterion is obtained by looking at the exact solution $y(t)$. If
$$
y\left(t_{n+1}\right)=y\left(t_{n}\right)+h \phi\left(t_{n}, y\left(t_{n}\right), h\right)+T_{n+1},
$$
then the remainder term $T_{n}$ is called the local truncation error.
The local truncation error may be computed as a Taylor series about $t_{n}$ in powers of $h$. If the local truncation error contains only powers of order $p+1$ or more, then the method is said to have order at least $p$.
Example: Euler's method has order one.
Example: The explicit trapezoid method described above has order two.
Remark: A consistent method has order at least one.
Let $\epsilon_{n}=y\left(t_{n}\right)-y_{n}$. This is the error at step $n$. We see that
$$
\epsilon_{n+1}-\epsilon_{n}=h\left[\phi\left(t_{n}, y\left(t_{n}\right), h\right)-\phi\left(t_{n}, y_{n}, h\right)\right]+T_{n+1}
$$
Assume that we have the inequality
$$
\left|\phi\left(t, y_{1}, h\right)-\phi\left(t, y_{2}, h\right)\right| \leq L\left|y_{1}-y_{2}\right|
$$
for some constant $L$. This would follow from a bound on the $y$ partial derivative of $\phi$. We call this the slope bound.
Assume also that we have a bound $T_{n+1} \leq K h^{p+1}$. We call this the local truncation error bound.
Theorem 5.4.1 Assume that the one-step numerical method for solving the ordinary differential equation $d y / d t=f(t, y)$ satisfies the slope bound and the local truncation error bound. Then the error satisfies the global truncation error bound
$$
\left|\epsilon_{n}\right| \leq K h^{p} \frac{e^{L t_{n}}-1}{L}
$$
This bound is a worst case analysis, and the error may not be nearly as large as the bound. But there are cases when it can be this bad. Notice that for fixed time $t_{n}$ the bound gets better and better as $h \rightarrow 0$. In fact, when the order $p$ is large, the improvement is dramatic as $h$ becomes very small.
On the other hand, for fixed $h$, even very small, this bound becomes very large as $t_{n} \rightarrow \infty$.
Obviously, it is often desirable to take $p$ to be large. It is possible to classify the Runge-Kutta methods of order $p$, at least when $p$ is not too large. The usual situation is to use a method of order 2 for rough calculation and a method of order three or four for more accurate calculation.
We begin by classifying the explicit Runge-Kutta methods of order 2. We take
$$
\phi(t, y, h)=(1-c) f(t, y)+c f(t+a h, y+a h f(t, y)) .
$$
We see that every such method is consistent, and hence of order one. The condition that the method be of order 2 works out to be that $a c=1 / 2$.
There is a similar classification of methods of order 3 and 4. In each case there is a two-parameter family of Runge-Kutta methods. By far the most commonly used method is a particular method of order 4. (There are methods of higher order, but they tend to be cumbersome.)
## Problems
1. Prove the bound on the global truncation error.
2. Carry out the classification of methods of order 2 .
#### Fixed step size, long time
In order to study the long time behavior, it is useful to begin with the autonomous equation
$$
\frac{d y}{d t}=f(y)
$$
If $r$ is such that $f(r)=0$, then $r$ is a stationary or equilibrium point. If also $f^{\prime}(r)<0$, then $r$ is stable.
The numerical method is now of the form
$$
y_{n+1}=y_{n}+h \phi\left(y_{n}, h\right)
$$
It is natural to require that the method satisfies the condition that $f(y)=0$ implies $\phi(y, h)=0$.
This is an iteration with the iteration function $g(y)=y+h \phi(y, h)$. Under the above requirement the equilibrium point $r$ is a fixed point with $g(r)=r$. Such a fixed point $r$ is stable if $g^{\prime}(r)=1+h \phi^{\prime}(r, h)$ is strictly less than one in absolute value.
If $f^{\prime}(y)<0$ for $y$ near $r$, then for $h$ small one might expect that $\phi^{\prime}(r, h)<0$ and hence $g^{\prime}(r)<1$. Furthermore, for $h$ small enough we would have $g^{\prime}(r)>-1$. So a stable equilibrium of the equation should imply a stable fixed point of the numerical scheme, at least for small values of $h$.
How small? We want $g^{\prime}(r)=1+h \phi^{\prime}(r, h)>-1$, or $h \phi^{\prime}(r, h)>-2$. Thus a rough criterion would be $h f^{\prime}(r)>-2$.
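This criterion is easy to see numerically. The following sketch uses the illustrative test equation $d y / d t=-a(y-r)$, so that $f^{\prime}(r)=-a$; Euler's method settles to the equilibrium when $h f^{\prime}(r)>-2$ and blows up when $h f^{\prime}(r)<-2$.

```python
def euler_autonomous(f, y0, h, n):
    """n Euler steps for the autonomous equation y' = f(y)."""
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

a, r = 100.0, 1.0
f = lambda y: -a * (y - r)   # equilibrium at r with f'(r) = -a

print(euler_autonomous(f, 0.0, 0.01, 200))  # h*f'(r) = -1 > -2: approaches r
print(euler_autonomous(f, 0.0, 0.03, 200))  # h*f'(r) = -3 < -2: blows up
```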
In some problems, as we shall see, there are two time scales. One time scale suggests taking a comparatively large value of time step $h$ for the integration. The other time scale is determined by the reciprocal of the magnitude of $f^{\prime}(r)$, and this can be very small. If these scales are so different that the criterion is violated, then the problem is said to be stiff. (We shall give a more precise definition later.)
A class of problems where stiffness is involved is for equations of the form
$$
\frac{d y}{d t}=f(t, y)
$$
where there is a function $y=r(t)$ with $f(t, r(t))=0$ and with $\partial f(t, r(t)) / \partial y<$ 0 and very negative. The dependence of $r(t)$ on $t$ is slow compared with the fast decay of the solution $y$ to $y=r(t)$. Thus one might want to take a moderate sized time step $h$ suitable for tracking $y=r(t)$. However this would be too large for tracking the decay in detail, which we might not even care to do. Thus we need a numerical method that gives the decay without worrying about the details of how it happens. What can we do for a stiff problem? One solution is to use implicit methods.
The general implicit method is to find a function $\phi(t, y, z, h)$ and compute
$$
y_{n+1}=y_{n}+h \phi\left(t_{n}, y_{n}, y_{n+1}, h\right)
$$
Here $h>0$ is the step size and $t_{n}=n h$.
Example: Let $\phi(t, y, z, h)=f(t, z)$. This is backward Euler's method.
Example: Let $\phi(t, y, z, h)=(1 / 2)[f(t, y)+f(t+h, z)]$. This is a second-order implicit Runge-Kutta method often called the implicit trapezoid method.
Example: Let $\phi(t, y, z, h)=f(t+h / 2,(y+z) / 2)$. This is another second order implicit Runge-Kutta method, known as the implicit midpoint method.
The difficulty with an implicit method is that at each stage one must solve an equation for $y_{n+1}$. However an implicit method may be more useful for stiff problems.
Consider the special case of an autonomous equation. The implicit method amounts to iteration by an iteration function defined implicitly by
$$
g(y)=y+h \phi(y, g(y), h)
$$
The derivative is given by
$$
g^{\prime}(y)=\frac{1+h \phi_{1}(y, g(y), h)}{1-h \phi_{2}(y, g(y), h)} .
$$
Here $\phi_{1}(y, z, h)$ and $\phi_{2}(y, z, h)$ denote the partial derivatives of $\phi(y, z, h)$ with respect to $y$ and $z$, respectively. At a fixed point $r$ with $g(r)=r$ this is
$$
g^{\prime}(r)=\frac{1+h \phi_{1}(r, r, h)}{1-h \phi_{2}(r, r, h)} .
$$
The condition that $\left|g^{\prime}(r)\right|<1$ translates to
$$
\left|1+h \phi_{1}(r, r, h)\right|<\left|1-h \phi_{2}(r, r, h)\right| .
$$
If the two partial derivatives are both strictly negative, then this condition is guaranteed for all $h>0$, no matter how large, by the inequality
$$
-\phi_{1}(r, r, h) \leq-\phi_{2}(r, r, h)
$$
This says that the implicit method must have at least as much dependence on the future as on the past. Thus a stiff problem requires implicitness. The implicit methods require solving equations at each step. In some cases this may be done by algebraic manipulation. In other cases an iteration method must be used.
The most obvious iteration method is to use the iteration function
$$
s(z)=y_{n}+h \phi\left(t_{n}, y_{n}, z, h\right) .
$$
One could start this iteration with $y_{n}$. The fixed point of this function is the desired $y_{n+1}$. The trouble with this method is that $s^{\prime}(z)=h \phi_{2}\left(t_{n}, y_{n}, z, h\right)$ and for a stiff problem this is very large. So the iteration will presumably not converge.
How about Newton's method? This is iteration with
$$
t(z)=z-\frac{z-y_{n}-h \phi\left(t, y_{n}, z, h\right)}{1-h \phi_{2}\left(t, y_{n}, z, h\right)} .
$$
This should work, but the irritation is that one must compute a partial derivative.
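As a sketch of how this looks in practice, here is one backward Euler step for an autonomous equation carried out with Newton's method; the right-hand side `f`, its derivative `fprime`, and the stiff test problem are illustrative assumptions.

```python
def backward_euler_step(f, fprime, y, h, tol=1e-12, max_iter=50):
    """One backward Euler step for y' = f(y): solve z = y + h*f(z) by Newton."""
    z = y                                   # start the iteration at the old value
    for _ in range(max_iter):
        F = z - y - h * f(z)                # residual of the implicit equation
        if abs(F) < tol:
            break
        z = z - F / (1.0 - h * fprime(z))   # Newton update
    return z

# Stiff test problem y' = -1000*(y - 1): a large step is still stable.
f = lambda y: -1000.0 * (y - 1.0)
fprime = lambda y: -1000.0
print(backward_euler_step(f, fprime, 0.0, 0.1))   # close to the equilibrium 1
```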
## Problems
1. Consider a problem of the form $d y / d t=-a(y-r(t))$ where $a>0$ is large but $r(t)$ is slowly varying. Find the explicit solution of the initial value problem when $y=y_{0}$ at $t=0$. Find the limit of the solution at time $t$ as $a \rightarrow \infty$.
2. Take $a=10000$ and $r(t)=\sin t$. Use $y_{0}=1$. The problem is to experiment with explicit methods with different step sizes. Solve the problem on the interval from $t=0$ to $t=\pi$. Use Euler's method with step sizes $h=.01, .001, .0001, .00001$. Describe your computational experience and relate it to theory.
3. The next problem is to do the same experiment with stable implicit methods. Use the backward Euler's method with step sizes $h=.01, .001$.
4. Consider the general implicit method of the form
$$
\phi(t, y, z, h)=c_{1} f\left(t+a_{1} h, y+a_{1}(z-y)\right)+c_{2} f\left(t+a_{2} h, y+a_{2}(z-y)\right)
$$
Find the condition that ensures that this is first order.
5. Find the condition that ensures that it is second order.
6. Consider an autonomous equation and assume that $f^{\prime}(y)<0$ for all $y$ in some interval in the region of interest. Find the condition for stability of the method at a fixed point for a stiff problem.
7. Are there second order methods for stiff problems that are stable? Discuss.
8. We have required that every zero of $f(y)$ also be a zero of $\phi(y, h)$. When is this satisfied for Runge-Kutta methods?
9. Consider a Taylor method of the form $\phi(y, h)=f(y)+(1 / 2) f^{\prime}(y) f(y) h$. When is the requirement satisfied for such Taylor methods?
### Systems
#### Introduction
We now turn to systems of ordinary differential equations. For simplicity, we concentrate on autonomous systems consisting of two equations. The general form is
$$
\begin{aligned}
& \frac{d x}{d t}=f(x, y) \\
& \frac{d y}{d t}=g(x, y)
\end{aligned}
$$
Notice that this includes as a special case the equation $d y / d t=g(t, y)$. This may be written as a system in the form
$$
\begin{aligned}
& \frac{d x}{d t}=1 \\
& \frac{d y}{d t}=g(x, y)
\end{aligned}
$$
In general, if we have a system in $n$ variables with explicit time dependence, then we may use the same trick to get an autonomous system in $n+1$ variables.
We may think of an autonomous system as being given by a vector field. In the case we are considering this is a vector field in the plane with components $f(x, y)$ and $g(x, y)$. If we change coordinates in the plane, then the components of the vector field change accordingly.
In general, the matrix of partial derivatives of this vector field transforms in a complicated way under change of coordinates in the plane. However at a zero of the vector field the matrix undergoes a similarity transformation. Hence linear algebra is relevant!
In particular, two eigenvalues (with negative real parts) of the linearization at a stable fixed point determine two rates of approach to equilibrium. In the case when these rates are very different we have a stiff problem.
#### Linear constant coefficient equations
The homogeneous linear constant coefficient system is of the form
$$
\begin{aligned}
& \frac{d x}{d t}=a x+b y \\
& \frac{d y}{d t}=c x+d y .
\end{aligned}
$$
Try a solution of the form
$$
\begin{aligned}
& x=v e^{\lambda t} \\
& y=w e^{\lambda t} .
\end{aligned}
$$
We obtain the eigenvalue equation
$$
\begin{aligned}
& a v+b w=\lambda v \\
& c v+d w=\lambda w .
\end{aligned}
$$
This has a non-zero solution only when $\lambda$ satisfies $\lambda^{2}-(a+d) \lambda+a d-b c=0$. We can express the same ideas in matrix notation. The equation is
$$
\frac{d \mathbf{x}}{d t}=A \mathbf{x}
$$
The trial solution is
$$
\mathbf{x}=\mathbf{v} e^{\lambda t}
$$
The eigenvalue equation is
$$
A \mathbf{v}=\lambda \mathbf{v} .
$$
This has a non-zero solution only when $\operatorname{det}(\lambda I-A)=0$.
## Growth and Decay
The first case is real and unequal eigenvalues $\lambda_{1} \neq \lambda_{2}$. This takes place when $(a-d)^{2}+4 b c>0$. There are two solutions corresponding to two independent eigenvectors. The general solution is a linear combination of these two. In matrix notation this is
$$
\mathbf{x}=c_{1} \mathbf{v}_{1} e^{\lambda_{1} t}+c_{2} \mathbf{v}_{2} e^{\lambda_{2} t} .
$$
When the two eigenvalues are both positive or both negative, the equilibrium is called a node. When one eigenvalue is positive and one is negative, it is called a saddle. An attractive node corresponds to an overdamped oscillator.
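As a numerical check of this representation, one can compute the eigenvalues and eigenvectors with NumPy and fit the constants to an initial condition; the matrix and initial condition below are illustrative.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # dx/dt = A x, eigenvalues -1 and -2 (a node)
x0 = np.array([1.0, 0.0])          # initial condition

lam, V = np.linalg.eig(A)          # eigenvalues and eigenvector columns
c = np.linalg.solve(V, x0)         # write x0 = c1*v1 + c2*v2

def x(t):
    # c1*v1*exp(lam1*t) + c2*v2*exp(lam2*t)
    return V @ (c * np.exp(lam * t))

print(x(0.0))   # reproduces x0
print(x(1.0))
```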
## Oscillation
The second case is complex conjugate unequal eigenvalues $\lambda=\alpha+i \omega$ and $\bar{\lambda}=\alpha-i \omega$ with $\alpha=(a+d) / 2$ and $\omega>0$. This takes place when $(a-d)^{2}+4 b c<0$. There are two independent complex conjugate solutions. These are expressed in terms of $e^{\lambda t}=e^{\alpha t} e^{i \omega t}$ and $e^{\bar{\lambda} t}=e^{\alpha t} e^{-i \omega t}$. Their real and imaginary parts are independent real solutions. These are expressed in terms of $e^{\alpha t} \cos (\omega t)$ and $e^{\alpha t} \sin (\omega t)$.
In matrix notation we have complex eigenvectors $\mathbf{u} \pm i \mathbf{v}$ and the solutions are
$$
x=\left(c_{1} \pm i c_{2}\right) e^{\alpha t} e^{ \pm i \omega t}(\mathbf{u} \pm i \mathbf{v}) .
$$
Taking the real part gives
$$
x=c_{1} e^{\alpha t}(\cos (\omega t) \mathbf{u}-\sin (\omega t) \mathbf{v})-c_{2} e^{\alpha t}(\sin (\omega t) \mathbf{u}+\cos (\omega t) \mathbf{v}) .
$$
If we write $c_{1} \pm i c_{2}=c e^{ \pm i \theta}$, these take the alternate forms
$$
x=c e^{\alpha t} e^{ \pm i(\omega t+\theta)}(\mathbf{u} \pm i \mathbf{v}) .
$$
and
$$
x=c e^{\alpha t}(\cos (\omega t+\theta) \mathbf{u}-\sin (\omega t+\theta) \mathbf{v}) .
$$
From this we see that the solution is characterized by an amplitude $c$ and a phase $\theta$. When the two conjugate eigenvalues are pure imaginary, the equilibrium is called a center. When the two conjugate eigenvalues have a non-zero real part, it is called a spiral (or a focus). A center corresponds to an undamped oscillator. An attractive spiral corresponds to an underdamped oscillator.
## Shearing
The remaining case is when there is only one eigenvalue $\lambda=(a+d) / 2$. This takes place when $(a-d)^{2}+4 b c=0$. In this case we need to try a solution of the form
$$
\begin{aligned}
& x=p e^{\lambda t}+v t e^{\lambda t} \\
& y=q e^{\lambda t}+w t e^{\lambda t} .
\end{aligned}
$$
We obtain the same eigenvalue equation together with the equation
$$
\begin{aligned}
a p+b q & =\lambda p+v \\
c p+d q & =\lambda q+w .
\end{aligned}
$$
In practice we do not need to solve for the eigenvector: we merely take $p, q$ determined by the initial conditions and use the last equation to solve for $v, w$.
In matrix notation this becomes
$$
\mathbf{x}=\mathbf{p} e^{\lambda t}+\mathbf{v} t e^{\lambda t}
$$
with
$$
A \mathbf{p}=\lambda \mathbf{p}+\mathbf{v} .
$$
## Inhomogeneous equations
The general linear constant coefficient equation is
$$
\frac{d \mathbf{x}}{d t}=A \mathbf{x}+\mathbf{r}
$$
When $A$ is non-singular we may rewrite this as
$$
\frac{d \mathbf{x}}{d t}=A(\mathbf{x}-\mathbf{s})
$$
where $\mathbf{s}=-A^{-1} \mathbf{r}$ is constant. Thus $\mathbf{x}=\mathbf{s}$ is a particular solution. The general solution is the sum of this particular solution with the general solution of the homogeneous equation.
## Problems
1. Find the general solution of the system
$$
\begin{aligned}
& \frac{d x}{d t}=x+3 y \\
& \frac{d y}{d t}=5 x+3 y .
\end{aligned}
$$
2. Find the solution of this equation with the initial condition $x=1$ and $y=3$ when $t=0$.
3. Sketch the vector field in the above problem. Sketch the given solution in the $x, y$ phase space. Experiment to find a solution that passes very close to the origin, and sketch it.
4. Write the Taylor series of $e^{z}$ about $z=0$. Plug in $z=i \theta$, where $i^{2}=-1$. Show that $e^{i \theta}=\cos \theta+i \sin \theta$.
5. Find the general solution of the system
$$
\begin{aligned}
& \frac{d x}{d t}=x+5 y \\
& \frac{d y}{d t}=-x-3 y .
\end{aligned}
$$
6. Find the solution of this equation with the initial condition $x=5$ and $y=4$ when $t=0$.
7. Sketch the vector field in the above problem. Find the orbit of the given solution in phase space. Also plot $x$ versus $t$ and $y$ versus $t$.
8. A frictionless spring has mass $m>0$ and spring constant $k>0$. Its displacement and velocity $x$ and $y$ satisfy
$$
\begin{aligned}
\frac{d x}{d t} & =y \\
m \frac{d y}{d t} & =-k x .
\end{aligned}
$$
Describe the motion.
9. A spring has mass $m>0$ and spring constant $k>0$ and friction constant $f>0$. Its displacement and velocity $x$ and $y$ satisfy
$$
\begin{aligned}
\frac{d x}{d t} & =y \\
m \frac{d y}{d t} & =-k x-f y .
\end{aligned}
$$
Describe the motion in the case $f^{2}-4 k<0$ (underdamped).
10. Take $m=1$ and $k=1$ and $f=0.1$. Sketch the vector field and the solution in the phase plane. Also sketch $x$ as a function of $t$.
11. In the preceding problem, describe the motion in the case $f^{2}-4 k>$ 0 (overdamped). Is it possible for the oscillator displacement $x$ to overshoot the origin? If so, how many times?
12. An object has mass $m>0$ and its displacement and velocity $x$ and $y$ satisfy
$$
\begin{aligned}
\frac{d x}{d t} & =y \\
m \frac{d y}{d t} & =0
\end{aligned}
$$
Describe the motion.
13. Solve the above equation with many initial conditions with $x=0$ and with varying values of $y$. Run the solution with these initial conditions for a short time interval. Why can this be described as "shear"?
#### Stiff systems
Stiff systems are ones where the eigenvalues near an equilibrium point have real parts describing very different decay rates. This situation may be illustrated by simple homogeneous constant coefficient systems such as an oscillator.
## Problems
1. Consider the system
$$
\begin{aligned}
\frac{d x}{d t} & =v \\
m \frac{d v}{d t} & =-k x-c v
\end{aligned}
$$
where $m>0$ is the mass, $k>0$ is the spring constant, and $c>0$ is the friction constant. We will be interested in the highly damped situations, when $m$ is small relative to $k$ and $c$. Take $k$ and $c$ each 10 times the size of $m$. Find the eigenvalues and find approximate numerical expressions for them. Find approximate numerical expressions for the eigenvectors. Describe the corresponding solutions.
2. In the preceding problem, which eigenvalue describes very rapid motion in phase space, and which eigenvalue describes slow motion in phase space? Describe the solution starting from an arbitrary initial condition. There are two stages to the motion. The first takes place until $t$ is comparable to $m / c$. The second takes place until $t$ is comparable to $c / k$. Describe the two stages in terms of motion in phase space. Which variable or variables (displacement $x$ and velocity $v$ ) are making the main change in each of these two stages?
3. Produce pictures of the solutions in phase space. Do this with enough initial conditions to confirm the analysis in the last problem. Sketch the results. Confirm them by $x$ versus $t$ and $v$ versus $t$ pictures.
#### Autonomous Systems
The general autonomous system is
$$
\begin{aligned}
& \frac{d x}{d t}=f(x, y) \\
& \frac{d y}{d t}=g(x, y)
\end{aligned}
$$
An equilibrium point is a solution of $f(r, s)=0$ and $g(r, s)=0$. For each equilibrium point we have a solution $x=r$ and $y=s$.
Near an equilibrium point
$$
\begin{aligned}
& f(x, y) \approx a(x-r)+b(y-s) \\
& g(x, y) \approx c(x-r)+d(y-s),
\end{aligned}
$$
where $a=\partial f(x, y) / \partial x, b=\partial f(x, y) / \partial y, c=\partial g(x, y) / \partial x$, and $d=$ $\partial g(x, y) / \partial y$, all evaluated at $x=r$ and $y=s$. So near the equilibrium point the equation looks like a linear equation.
Assume that the eigenvalues of the linear equation are real. Then the equilibrium point is attractive if they are both negative. On the other hand, assume that the eigenvalues of the linear equation are complex conjugates. Then the equilibrium point is attractive if the real part is negative. In general the equilibrium point is classified by the behavior of the linearized equation at that point.
A first example is the non-linear pendulum equation. This is
$$
\begin{aligned}
\frac{d x}{d t} & =y \\
m l \frac{d y}{d t} & =-m g \sin (x)-c y .
\end{aligned}
$$
Here $x$ is the angle and $y$ is the angular velocity. The parameters are the mass $m>0$, the length $l>0$, and the gravitational acceleration $g>0$. There may also be a friction coefficient $c \geq 0$. The first equation is the definition of angular velocity. The second equation is Newton's law of motion: mass times acceleration equals force.
There are two interesting equilibrium situations. One is where $x=0$ and $y=0$. In this case we use $\sin (x) \approx x$ to find the linear approximation. The other interesting situation is when $x-\pi=0$ and $y=0$. In this case we use $\sin (x) \approx-(x-\pi)$. The minus sign makes a crucial difference.
A second example is the predator-prey system. This is
$$
\begin{aligned}
& \frac{d x}{d t}=(a-b y-m x) x \\
& \frac{d y}{d t}=(c x-d-n y) y .
\end{aligned}
$$
Here $x$ is the prey and $y$ is the predator. The prey equation says that the prey have a natural growth rate $a$, are eaten by the predators at rate $b y$, and compete with themselves at rate $m x$. The predator equation says that the predators have a growth rate $c x-d$ at food level $x$ and compete with themselves at rate $n y$. The parameters are strictly positive, except that we allow the special case $m=0$ and $n=0$ with no internal competition. We are only interested in the situation $x \geq 0$ and $y \geq 0$.
There are several equilibria. One corresponds to total extinction. Also when $m>0$ one can have a situation where the predator is extinct and where $x=a / m$ is the natural prey carrying capacity. When $m=0$, on the other hand, there is no natural limit to the size of the prey population: we interpret $a / m=+\infty$. The most interesting equilibrium takes place when the natural predator growth rate $c x-d$ at the prey carrying capacity $x=a / m$ is positive. This says that the predator can live off the land.
## Problems
1. For the pendulum problem with no friction, find the linearization at $x=0, y=0$. Discuss the nature of the equilibrium.
2. Consider the pendulum problem. Find oscillatory solutions that are near the zero solution, but not too near. How large can the solutions be before the pendulum can no longer be used as a clock?
3. For the pendulum problem with no friction, find the linearization at $x=\pi, y=0$. Discuss the nature of the equilibrium.
4. Find at least two different kinds of oscillatory solutions that pass near $x=\pi, y=0$. Sketch plots that illustrate these different kinds of solutions.
5. For the pendulum problem, describe the nature of the two equilibria when there is friction.
6. Consider the predator-prey equations with internal competition. Find the nature of the equilibrium corresponding to total extinction.
7. Find the nature of the equilibrium corresponding to extinction of the predators. There are two situations, depending on the sign of the predator natural growth rate.
8. Find the nature of the equilibrium corresponding to coexistence. Discuss its stability.
9. Sketch representative solutions.
10. Find the nature of the equilibrium corresponding to coexistence when there is no internal competition.
11. Sketch representative solutions.
#### Limit cycles
Now we come to an essentially non-linear effect: oscillations that are stabilized by the non-linearity. The classic example is
$$
\begin{aligned}
& \frac{d x}{d t}=v \\
& \frac{d v}{d t}=-k x-g(x) v
\end{aligned}
$$
This is an oscillator in which the friction coefficient $g(x)$ is a function of position. There is a constant $r>0$ such that $g(x)<0$ for $|x|<r$ and $g(x)>0$ for $|x|>r$. Thus when $|x|$ is small the oscillator gets a boost. A standard example is $g(x)=c\left(x^{2}-r^{2}\right)$.
Change variables to $y=v+G(x)$, where $G^{\prime}(x)=g(x)$. Then this same oscillator becomes
$$
\begin{aligned}
\frac{d x}{d t} & =y-G(x) \\
\frac{d y}{d t} & =-k x .
\end{aligned}
$$
The equation is often studied in this form.
## Problems
1. Take the van der Pol oscillator in $x, y$ space with $G(x)=x^{3}-a x$. Investigate the Hopf bifurcation. Sketch your results.
2. Take the non-linear van der Pol oscillator in $x, v$ space with $g(x)=$ $a\left(x^{2}-1\right)$. Take $a>0$ increasingly large. The result is a relaxation oscillator. Make plots in the $x, v$ plane. Also make $x$ versus $t$ and $v$ versus $t$ plots and interpret them.
## Chapter 6
## Fourier transforms
### Groups
We want to consider several variants of the Fourier transform at once. The unifying idea is that the Fourier transform deals with complex functions defined on commutative groups. (Recall that a commutative group is a set with operations of addition and subtraction that satisfy the usual properties.) Here are the groups that we shall consider.
The first is the group of all real numbers.
The second is the group of all integer multiples of $n \Delta x$, where $\Delta x>0$ is a fixed real number. This is a subgroup of the real numbers, since the sum or difference of any two $n \Delta x$ is also of the same form.
The third is the group of all real numbers $\bmod L$, where $L>0$ is fixed. This is the circle group, where the circle has circumference $L$. Two real numbers determine the same element if they differ by an integer multiple of $L$. Thus the circle group is a quotient group of the real numbers.
The final group is the group of all integer multiples $n \Delta x \bmod L=N \Delta x$. This is a subgroup of the circle group. It is also a quotient group of the integer group. It is finite with precisely $N$ elements.
### Integers $\bmod \mathrm{N}$
We first consider the Fourier transform on the group $G$ of integers $\bmod N$. This is a finite group with elements $\{0,1, \ldots, N-1\}$ extended periodically.
It is helpful to think of the integers as being embedded in the reals at a spacing $\Delta x$. Then the integers mod $N$ may be thought of as embedded in the reals $\bmod L=N \Delta x$. We consider a complex function $f$ on the group $G$. This may also be thought of as a function on the integers spaced by $\Delta x$ with period $L=N \Delta x$.
The Fourier transform is another complex function defined on the dual group $\hat{G}$. This is another group of integers mod N, but it is regarded as embedded in the real line with spacing $\Delta k$, where $\Delta k=2 \pi /(N \Delta x)=2 \pi / L$. Thus the Fourier transform is periodic with period $N \Delta k=2 \pi / \Delta x$.
We think of frequencies from 0 to $N \Delta k / 2$ as positive frequencies. We think of frequencies from $N \Delta k / 2$ to $N \Delta k$ as negative frequencies. The frequency of least oscillation is $0$, which is identified with $N \Delta k$. The frequency of maximum oscillation is $N \Delta k / 2$.
The Fourier transform is
$$
\hat{f}(k)=\sum_{x \in G} e^{-i k x} f(x) \Delta x
$$
where the sum is over $N$ consecutive points spaced at interval $\Delta x$.
This may be written more explicitly as
$$
\hat{f}(m \Delta k)=\sum_{n=0}^{N-1} e^{\frac{-i 2 \pi m n}{N}} f(n \Delta x) \Delta x .
$$
The inversion formula is then
$$
f(x)=\sum_{k \in \hat{G}} e^{i k x} \hat{f}(k) \Delta k /(2 \pi),
$$
where the sum is over $N$ consecutive points $k$ with spacing $\Delta k$.
This may be written more explicitly as
$$
f(n \Delta x)=\sum_{m=0}^{N-1} e^{\frac{i 2 \pi m n}{N}} \hat{f}(m \Delta k) \Delta k /(2 \pi) .
$$
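A direct transcription of these two sums in Python (an order $N^{2}$ computation; the sample values and spacing are illustrative):

```python
import cmath

def dft(f_vals, dx):
    """hat f(m*dk) = sum_n exp(-2*pi*i*m*n/N) * f(n*dx) * dx, by the direct sum."""
    N = len(f_vals)
    return [dx * sum(cmath.exp(-2j * cmath.pi * m * n / N) * f_vals[n]
                     for n in range(N))
            for m in range(N)]

def inverse_dft(fhat_vals, dx):
    """Inversion formula; the factor dk/(2*pi) equals 1/(N*dx)."""
    N = len(fhat_vals)
    return [sum(cmath.exp(2j * cmath.pi * m * n / N) * fhat_vals[m]
                for m in range(N)) / (N * dx)
            for n in range(N)]

vals = [1.0, 2.0, 0.0, -1.0]
print(inverse_dft(dft(vals, 0.5), 0.5))   # recovers vals up to rounding
```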
Here is a proof of the inversion formula. We wish to show that
$$
f(x)=\frac{1}{N} \sum_{k} e^{i k x} \sum_{y} e^{-i k y} f(y)=\frac{1}{N} \sum_{y} \sum_{k} e^{i k(x-y)} f(y) .
$$
But it is easy to sum the geometric series
$$
\frac{1}{N} \sum_{k} e^{i k z}
$$
and see that it is zero unless $z=0 \bmod L$, in which case it is one.
### The circle
We now consider the Fourier transform on the circle group $G$, thought of as the reals $\bmod L$.
The dual group in this case is $\hat{G}$, thought of as the integers spaced with interval $\Delta k=2 \pi / L$.
The Fourier transform is
$$
\hat{f}(k)=\int_{0}^{L} e^{-i k x} f(x) d x .
$$
The inversion formula is
$$
f(x)=\frac{1}{L} \sum_{k} e^{i k x} \hat{f}(k) .
$$
These formulas may be obtained from the finite case. Take the sum over $k$ to run from $-N \Delta k / 2=-\pi / \Delta x$ to $N \Delta k / 2=\pi / \Delta x$, counting the end point at most once. Then let $N \rightarrow \infty$ and $\Delta x \rightarrow 0$ keeping $L=N \Delta x$ fixed.
### The integers
We now consider the Fourier transform on the integers, thought of as spaced at interval $\Delta x$.
The dual group in this case is $\hat{G}$, thought of as the circle of circumference $2 \pi / \Delta x$.
The Fourier transform is
$$
\hat{f}(k)=\sum e^{-i k x} f(x) \Delta x .
$$
The inversion formula is
$$
f(x)=\int_{0}^{2 \pi / \Delta x} e^{i k x} \hat{f}(k) d k /(2 \pi) .
$$
These formulas may be obtained from the finite case by taking the sum on $x$ to range from $-N \Delta x / 2$ to $N \Delta x / 2$ (counting end points only once) and then letting $N \rightarrow \infty$ with fixed $\Delta x$.
### The reals
We now consider the Fourier transform on the group $G$ of reals. The dual group in this case is $\hat{G}$, again the reals. The Fourier transform is
$$
\hat{f}(k)=\int_{-\infty}^{\infty} e^{-i k x} f(x) d x
$$
The inversion formula is
$$
f(x)=\int_{-\infty}^{\infty} e^{i k x} \hat{f}(k) d k /(2 \pi) .
$$
These formulas may be obtained from the circle case integrating $x$ from $-L / 2$ to $L / 2$ and letting $L \rightarrow \infty$ or from the integer case by integrating $k$ from $-\pi / \Delta x$ to $\pi / \Delta x$ and letting $\Delta x \rightarrow 0$.
The notation has been chosen to suggest that $x$ is position and $k$ is wave number (spatial frequency). It is also common to find another notation in which $t$ is time and $\omega$ is (temporal) frequency. For the record, here are the formulas in the other notation.
The Fourier transform is
$$
\hat{f}(\omega)=\int_{-\infty}^{\infty} e^{-i \omega t} f(t) d t
$$
The inversion formula is
$$
f(t)=\int_{-\infty}^{\infty} e^{i \omega t} \hat{f}(\omega) d \omega /(2 \pi) .
$$
### Translation Invariant Operators
We now want to explore the uses of the Fourier transform. It is convenient to look at the transform in a unified framework. Thus we write in all cases the Fourier transform as
$$
\hat{f}(k)=\int_{G} e^{-i k x} f(x) d x .
$$
The inversion formula is
$$
f(x)=\int_{\hat{G}} e^{i k x} \hat{f}(k) d k /(2 \pi) .
$$
The fundamental observation is that the Fourier transform does good things with respect to translation. Define $U_{y} f(x)=f(x-y)$. Then we have that the Fourier transform of $U_{y} f$ at $k$ is $e^{-i k y} \hat{f}(k)$. In other words, translation is converted to a multiplicative factor. Here is a generalization. Define the convolution $g * f$ of two functions as the function defined by
$$
(g * f)(x)=\int_{G} g(y) f(x-y) d y .
$$
Thus convolution is weighted translation. The Fourier transform of $g * f$ evaluated at $k$ is then $\hat{g}(k) \hat{f}(k)$. Again this is a multiplicative factor.
Here are some special cases. Let
$$
\delta_{+} f(x)=\frac{1}{\Delta x}(f(x+\Delta x)-f(x)) .
$$
This has Fourier transform $(1 / \Delta x)\left(e^{i k \Delta x}-1\right) \hat{f}(k)$. Let
$$
\delta_{-} f(x)=\frac{1}{\Delta x}(f(x)-f(x-\Delta x)) .
$$
This has Fourier transform $(1 / \Delta x)\left(1-e^{-i k \Delta x}\right) \hat{f}(k)$. Let
$$
\delta_{0} f(x)=\frac{1}{2}\left(\delta_{+}+\delta_{-}\right) f(x)=\frac{1}{2 \Delta x}(f(x+\Delta x)-f(x-\Delta x)) .
$$
This has Fourier transform $(i / \Delta x) \sin (k \Delta x) \hat{f}(k)$. We may take $\Delta x \rightarrow 0$ in these formulas and conclude that the first derivative is represented in the Fourier transform by multiplication by $i k$.
We may also represent the second derivative in the same way. The most useful formula is
$$
\delta^{2} f(x)=\delta_{+} \delta_{-} f(x)=\frac{1}{(\Delta x)^{2}}(f(x+\Delta x)-2 f(x)+f(x-\Delta x)) .
$$
This has Fourier transform $-(2 / \Delta x)^{2} \sin ^{2}(k \Delta x / 2) \hat{f}(k)$. In the limit $\Delta x \rightarrow$ 0 this gives $-k^{2} \hat{f}(k)$.
It is not hard to check that the Fourier transform of the reversed conjugate $\overline{f(-x)}$ is the conjugate $\overline{\hat{f}(k)}$. It follows that the Fourier transform of the "correlation" $\int_{G} \overline{f(y-x)} g(y) d y$ is $\overline{\hat{f}(k)} \hat{g}(k)$. From the inversion formula it follows that
$$
\int_{G} \overline{f(y-x)} g(y) d y=\int_{\hat{G}} e^{i k x} \overline{\hat{f}(k)} \hat{g}(k) d k /(2 \pi) .
$$
A very important special case is obtained by taking $f=g$ and $x=0$. This gives the Plancherel theorem
$$
\int_{G}|f(y)|^{2} d y=\int_{\hat{G}}|\hat{f}(k)|^{2} d k /(2 \pi)
$$
### Subgroups
We now want to consider a more complicated situation. Let $G$ be a group. Let $H$ be a discrete subgroup. Thus for some $\Delta y>0$ the group $H$ consists of the multiples $n \Delta y$ of $\Delta y$. We think of this subgroup as consisting of uniformly spaced sampling points. Let $Q$ be the quotient group, where we identify multiples of $\Delta y$ with zero.
The group $G$ has a dual group $\hat{G}$. The elements of $\hat{G}$ that are multiples of $2 \pi / \Delta y$ form the dual group $\hat{Q}$, which is a subgroup of $\hat{G}$. The quotient group, where we identify multiples of $2 \pi / \Delta y$ with zero, turns out to be $\hat{H}$. These dual groups may all be thought of as consisting of angular frequencies.
We can summarize this situation in diagrams
$$
H \longrightarrow G \longrightarrow Q
$$
and
$$
\hat{Q} \longrightarrow \hat{G} \longrightarrow \hat{H}
$$
The arrow between two groups means that elements of one group uniquely determine elements of the next group. Furthermore, an element of the group $G$ or $\hat{G}$ that is determined by an element of the group on the left itself determines the element 0 of the group on the right.
The first main example is when $G$ is the reals, $H$ is the subgroup of integer multiples of $\Delta y$, and $Q$ is the circle of circumference $\Delta y$. Then $\hat{G}$ is the reals (considered as angular frequencies), $\hat{Q}$ is the subgroup of multiples of $\Delta r=2 \pi / \Delta y$, and $\hat{H}$ is the circle of circumference $\Delta r$.
The other main example is when $G$ is the circle of circumference $L=$ $N \Delta y, H$ is the subgroup of order $N$ consisting of integer multiples of $\Delta y$, and $Q$ is the circle of circumference $\Delta y$. Then $\hat{G}$ is the integers spaced by $\Delta k=2 \pi / L, \hat{Q}$ is the subgroup of multiples of $\Delta r=2 \pi / \Delta y=N \Delta k$, and $\hat{H}$ is the group of order $N$ consisting of multiples of $\Delta k \bmod N$. In this example integrals over $\hat{G}$ and $\hat{H}$ are replaced by sums.
We begin with $f$ defined on $G$. Its Fourier transform is
$$
\int_{G} e^{-i k x} f(x) d x=\hat{f}(k)
$$
defined for $k$ in $\hat{G}$. The inversion formula then gives
$$
f(x)=\int_{\hat{G}} e^{i k x} \hat{f}(k) \frac{d k}{2 \pi} .
$$
We now restrict the inversion formula to the discretely sampled points $y$ in the subgroup $H$ and obtain
$$
f(y)=\int_{\hat{G}} e^{i k y} \hat{f}(k) \frac{d k}{2 \pi}=\int_{\hat{H}} \sum_{r \in \hat{Q}} e^{i(k+r) y} \hat{f}(k+r) \frac{d k}{2 \pi} .
$$
We observe that there is an aliasing effect. The act of discrete sampling implies that all frequencies $k+r$ that differ by a multiple of $\Delta r$ from $k$ show up under the alias of $k$. The reason for this is simply that
$$
e^{i(k+r) y}=e^{i k y} e^{i r y}=e^{i k y}
$$
when $y$ is in $H$ (one of the discretely sampled points). This is because $r y$ is a multiple of $\Delta r \Delta y=2 \pi$.
Thus we obtain that for $y$ in $H$
$$
f(y)=\int_{\hat{H}} e^{i k y} \sum_{r \in \hat{Q}} \hat{f}(k+r) \frac{d k}{2 \pi}
$$
This is of the form of an inversion formula for the group $H$. Therefore we have identified the Fourier transform of $f$ restricted to $H$. This proves the following fundamental result.
The Poisson summation formula says that the restriction of $f$ to $H$ has Fourier transform
$$
\sum_{y \in H} e^{-i k y} f(y) \Delta y=\sum_{r \in \hat{Q}} \hat{f}(k+r)
$$
Here $H$ consists of multiples of $\Delta y$ and $\hat{Q}$ consists of multiples of $2 \pi / \Delta y$.
This formula says that replacing an integral by a Riemann sum has the effect of replacing the Fourier transform by a sum of the transforms over aliased frequencies.
Thus, for instance, when we want to take the Fourier transform of a function on the circle of length $L$, and we approximate the transform by a Riemann sum with length $\Delta y$, then the aliased frequencies $r$ are spaced by $2 \pi / \Delta y$. Thus we want to take the $\Delta y$ sufficiently small so that the $\hat{f}(k+r)$ are close to zero except when $r=0$.
An immediate consequence is the following somewhat more general shifted Poisson summation formula.
$$
\sum_{y \in H} e^{-i k(x+y)} f(x+y) \Delta y=\sum_{r \in \hat{Q}} e^{i r x} \hat{f}(k+r) .
$$
This is obtained by applying the Poisson summation formula to $g(z)=$ $f(x+z)$ and noting that $\hat{g}(k)=e^{i k x} \hat{f}(k)$. An important special case of the Poisson summation formula is obtained if we take $k=0$ :
$$
\sum_{y \in H} f(y) \Delta y=\sum_{r \in \hat{Q}} \hat{f}(r) .
$$
Even this form leads to remarkable identities.
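For example, taking $f(x)=e^{-x^{2}}$, whose Fourier transform is $\hat{f}(k)=\sqrt{\pi} e^{-k^{2} / 4}$, the $k=0$ case gives the classical theta-function identity
$$
\sum_{n=-\infty}^{\infty} e^{-n^{2} \Delta y^{2}} \Delta y=\sqrt{\pi} \sum_{m=-\infty}^{\infty} e^{-\pi^{2} m^{2} / \Delta y^{2}} .
$$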
### The sampling theorem
We may ask to what extent $f$ restricted to the sampling points in $H$ determines $f$ on the other points. The Fourier transform of $f$ restricted to $H$ is $\sum_{r} \hat{f}(k+r)$ restricted to the frequency band $\hat{H}$. (Often $\hat{H}$ is thought of as the frequency band running from $-\Delta r / 2$ to $\Delta r / 2$; however any band of length $\Delta r$ would do.)
We can try to define a function $f_{H}(x)$ on the entire group $G$ from this Fourier transform by the formula
$$
f_{H}(x)=\int_{\hat{H}} e^{i k x} \sum_{r \in \hat{Q}} \hat{f}(k+r) \frac{d k}{2 \pi} .
$$
We can change variable and write
$$
f_{H}(x)=\sum_{r \in \hat{Q}} e^{-i r x} \int_{\hat{H}+r} e^{i u x} \hat{f}(u) \frac{d u}{2 \pi} .
$$
Thus $f_{H}(x)$ has contributions from all frequency bands, but with a confusing exponential factor in front.
However note that when $y$ is in $H$, then $f_{H}(y)=f(y)$. Thus $f_{H}$ interpolates $f$ at the sampling points.
Another way of writing $f_{H}(x)$ is as
$$
f_{H}(x)=\int_{\hat{H}} e^{i k x} \sum_{y \in H} e^{-i k y} f(y) \Delta y \frac{d k}{2 \pi}=\sum_{y \in H} K_{H}(x-y) f(y) \Delta y
$$
where
$$
K_{H}(x)=\int_{\hat{H}} e^{i k x} \frac{d k}{2 \pi} .
$$
This formula expresses $f_{H}(x)$ directly in terms of the values $f(y)$ at the sampling points $y$.
Now assume in addition that the original Fourier transform $\hat{f}$ is bandlimited, that is, it vanishes outside of $\hat{H}$. In that case it is easy to see that $f_{H}(x)=f(x)$ for all $x$ in $G$. This is the sampling theorem: A band-limited function is so smooth that it is determined by its values on the sampling points.
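Taking $\hat{H}$ to be the band from $-\Delta r / 2$ to $\Delta r / 2$, the kernel works out to $K_{H}(x)=\sin (\pi x / \Delta y) /(\pi x)$, so the interpolation is the familiar sinc series. A Python sketch follows, truncating the sum to finitely many samples and using an illustrative band-limited test function.

```python
import math

def sinc_interpolate(samples, dy, x, n0=0):
    """f_H(x) = sum_n f(n*dy) * sinc((x - n*dy)/dy) for the band |k| < pi/dy.
    samples[i] holds f((n0 + i)*dy)."""
    def sinc(u):
        return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)
    return sum(f_i * sinc((x - (n0 + i) * dy) / dy)
               for i, f_i in enumerate(samples))

# cos(x) has frequency 1, well inside the band |k| < pi/dy = 2*pi for dy = 0.5.
dy = 0.5
samples = [math.cos(n * dy) for n in range(-40, 41)]
print(sinc_interpolate(samples, dy, 0.3, n0=-40))  # approximately cos(0.3)
print(math.cos(0.3))
```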
### FFT
It is clear that the obvious implementation of the Fourier transform on the cyclic group of order $N$ amounts to multiplying a matrix times a vector and hence has order $N^{2}$ operations. The Fast Fourier Transform is another way of doing the computation that only requires order $N \log _{2} N$ operations.
Again the setup is a group $G$ and a subgroup $H$. Again we take $Q$ to be the quotient group. We have
$$
\hat{f}(k)=\int_{Q} e^{-i k x}\left[\sum_{y \in H} e^{-i k y} f(x+y) \Delta y\right] \frac{d x}{\Delta y} .
$$
The Fast Fourier Transform is a special case. We take $G$ to be a discrete group given by multiples of $\Delta x$, and we take $H$ to be the subgroup of even elements. Then $Q$ is a two element group consisting of 0 and $\Delta x$. The formula becomes
$$
\hat{f}(k)=\frac{1}{2}\left[\sum_{y \in H} e^{-i k y} f(y) \Delta y+e^{-i k \Delta x} \sum_{y \in H} e^{-i k y} f(y+\Delta x) \Delta y\right],
$$
where the sum is over $y$ which are multiples of $\Delta y=2 \Delta x$. Here $k$ is a multiple of $\Delta k=2 \pi /(N \Delta x)$.
Again we may write this more explicitly as
$\hat{f}(m \Delta k)=\left[\sum_{n=0}^{N / 2-1} e^{-i 2 \pi m n /(N / 2)} f(2 n \Delta x)+e^{-i 2 \pi m / N} \sum_{n=0}^{N / 2-1} e^{-i 2 \pi m n /(N / 2)} f((2 n+1) \Delta x)\right] \Delta x$
If the order of $G$ is $N$, an even number, then this expresses the Fourier transform on $G$ as the sum of two Fourier transforms on $H$, a group of order $N / 2$. This allows a recursive computation of the Fourier transform.
The number of operations required is of order $C_{N}=N \log _{2} N$. One can see this as follows. For a group of order 1, no computation is required, so $C_{1}=0$. For a group of order $N$, one must have already computed two transforms of order $N / 2$, which took $C_{N / 2}$ operations. Then one has to compute the $N$ values, so $C_{N}=2 C_{N / 2}+N$. This determines $C_{N}$.
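A minimal recursive transcription of this splitting in Python (assuming $N$ is a power of two; the routine returns the sums $\sum_{n} e^{-i 2 \pi m n / N} f(n \Delta x)$ without the $\Delta x$ factor, which can be multiplied on afterwards):

```python
import cmath

def fft(f_vals):
    """Recursive FFT: F[m] = sum_n exp(-2*pi*i*m*n/N) * f_vals[n], N a power of 2."""
    N = len(f_vals)
    if N == 1:
        return [f_vals[0]]
    even = fft(f_vals[0::2])    # transform of the values on the subgroup H
    odd = fft(f_vals[1::2])     # transform of the shifted values
    out = [0] * N
    for m in range(N // 2):
        twiddle = cmath.exp(-2j * cmath.pi * m / N) * odd[m]
        out[m] = even[m] + twiddle
        out[m + N // 2] = even[m] - twiddle
    return out

print(fft([1.0, 2.0, 0.0, -1.0]))   # agrees with the direct O(N^2) sum
```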
A typical application of the FFT is to compute a convolution. A convolution of two functions on a group of order $N$ is a straightforward order $N^{2}$ operation. However the indirect computation by Fourier transforms is much more rapid. The Fourier transform of each function is of order $N \log N$. The multiplication of the Fourier transforms is order $N$. The inverse Fourier transformation of the product is order $N \log N$. So the whole computation is order $N \log N$, which is much faster.
\begin{document}
\title{Embeddings into outer models}
\begin{abstract} We explore the possibilities for elementary embeddings $j : M \to N$, where $M$ and $N$ are models of {\rm ZFC}\xspace with the same ordinals, $M \subseteq N$, and $N$ has access to large pieces of $j$. We construct commuting systems of such maps between countable transitive models that are isomorphic to various canonical linear and partial orders, including the real line $\mathbb R$. \end{abstract}
\section{Introduction}
The notion of an elementary embedding between transitive models of set theory is central to the investigation of principles of high consistency strength. A typical situation involves an elementary $j : M \to N$, where $M$ and $N$ are models of {\rm ZFC}\xspace sharing the same ordinals, and $j$ is not the identity map. Such a map cannot be the identity on ordinals, and the least ordinal moved is called the \emph{critical point}. Postulating more agreement between $M$ and $N$, and with the ``real'' universe $V$, usually results in a stronger hypothesis. Kunen established an upper bound to this collection of ideas, showing that there is no nontrivial elementary $j : V \to V$.
The question is how exactly to formalize Kunen's result, which is not on its face equivalent to a first-order statement. Indeed, it cannot be about an \emph{arbitrary} elementary embedding which may exist in some outer universe, as the relatively low-strength assumption of $0^\sharp$ gives a nontrivial elementary $j : L \to L$. Furthermore, requiring $j$ to be definable from parameters in $V$ yields an impossibility using more elementary considerations and not requiring the Axiom of Choice \cite{MR1780073}. One way to express the content of Kunen's Theorem is to require that $V$ satisfies the expansion of {\rm ZFC}\xspace to include Replacement and Comprehension for formulas that use $j$ as a predicate. Another way is to localize its combinatorial content, which shows that there cannot be a cardinal $\lambda$ and a nontrivial elementary $j : V_{\lambda+2} \to V_{\lambda+2}$.
We begin with an elaboration on Kunen's theorem in a slightly different direction. Using the main idea of Woodin's proof of Kunen's theorem (see \cite{MR1994835}, Theorem 23.12), we show that whenever one model of {\rm ZFC}\xspace is embedded into another with the same ordinals, there are some general constraints on what the models can know about the embedding and about each other.
\begin{theorem} \label{kunengen} Suppose $j : M \to N$ is a nontrivial elementary embedding between transitive models of ${\rm ZFC}\xspace$ with the same class of ordinals $\Omega$. Then at least one of the following holds: \begin{enumerate} \item\label{repl} The \emph{critical sequence} $\langle j^n(\crit(j)) : n \in \omega \rangle$ is cofinal in $\Omega$. \item\label{comp} For some $\alpha \in \Omega$, $j[\alpha] \notin N$. \item\label{cof} For some $\alpha \in \Omega$, $\alpha$ is regular in $M$ and singular in $N$. \end{enumerate} \end{theorem}
\begin{proof} We will suppose that all of the alternatives fail and derive a contradiction. Let $\lambda \in \Omega$ be the supremum of the critical sequence. Since $j[\lambda] \in N$, the critical sequence is a member of $N$, and thus $N \models \cf(\lambda) = \omega$. Since $\cf(\lambda)^M$ is regular in $M$, it is also regular in $N$, and so $M \models \cf(\lambda) = \omega$ as well. Thus $j(\lambda) = \lambda$. Since $(\lambda^+)^M$ is regular in $N$, we must have that $(\lambda^+)^M$ is also a fixed point.
Let $\kappa = \crit(j)$, and choose in $M$ a pairwise-disjoint sequence of stationary subsets of $\lambda^+ \cap \cof(\omega)$, $\langle S_\alpha : \alpha < \kappa \rangle$. Let $\langle S'_\alpha : \alpha < j(\kappa) \rangle = j(\langle S_\alpha : \alpha < \kappa \rangle)$. Let $C = \{ \alpha < \lambda^+ : j[\alpha] \subseteq \alpha \}$. $C$ is a member of $N$ and a club in $\lambda^+$, so let $\alpha \in C \cap S'_\kappa$. Since $N \models \cf(\alpha) = \omega$ and alternative (\ref{cof}) fails, $M \models \cf(\alpha) = \omega$ as well, so there is a sequence $\langle \alpha_n : n < \omega \rangle \in M$ cofinal in $\alpha$. Since $\alpha$ is closed under $j$, $j(\alpha) = \sup_{n<\omega} j(\alpha_n) = \alpha$. By elementarity, there is some $\beta < \kappa$ such that $\alpha \in S_\beta$. But then $\alpha \in S'_\beta \cap S'_\kappa = \emptyset$, a contradiction. \end{proof}
Let us list some key examples of embeddings between transitive models of {\rm ZFC}\xspace with the same ordinals, in which \emph{exactly one} of the above alternatives holds: \begin{enumerate} \item Axiom I3 asserts the existence of a nontrivial elementary embedding $j : V_\lambda \to V_\lambda$, where $\lambda$ is a limit ordinal. (\ref{comp}) and (\ref{cof}) must both fail for such $j$, so (\ref{repl}) must hold. \item Suppose $\mathcal U$ is a countably complete ultrafilter over some set. If $j : V \to M \cong \Ult(V,\mathcal U)$ is the ultrapower embedding, then (\ref{repl}) fails by the Replacement axiom, and (\ref{cof}) fails since $M \subseteq V$. Thus (\ref{comp}) holds. \item Situations in which (\ref{repl}) and (\ref{comp}) both fail will be considered in Sections \ref{fixedpts} and \ref{amcat} of the present paper. \end{enumerate}
We will say that an elementary embedding $j : M \to N$ is \emph{(target-)amenable} if $j[x] \in N$ for all $x \in M$, or in other words that alternative (\ref{comp}) fails. For {\rm ZFC}\xspace models, this is equivalent to saying that $j \cap \alpha^2 \in N$ for all ordinals $\alpha \in N$. It is easy to see that $M \subseteq N$ in such a situation. It is an immediate consequence of Theorem \ref{kunengen} that if $M$ and $N$ are transitive models of {\rm ZFC}\xspace with the same ordinals $\Omega$, and $j : M \to N$ is an amenable elementary embedding such that its critical sequence is \emph{not} cofinal in $\Omega$, then $M$ and $N$ do not agree on cofinalities. The proof actually shows that $M$ and $N$ cannot agree on both the class of cardinals and the class $\{ \alpha : \cf(\alpha) = \omega \}$.
When the domain and target of an elementary embedding are the same, alternative (\ref{cof}) cannot hold. If $j : M \to M$ is definable from parameters in some larger universe $V$, then alternatives (\ref{repl}) and (\ref{comp}) show that the closure of $M$, as measured by $V$, must run out at some point. In contrast to the I3 examples that achieve amenability at the cost of countable closure, we show it is possible for such $M$ to be as near to $V$ as desired, and find in this motif a characterization of supercompactness:
\begin{theorem} \label{agreement} Suppose $\kappa \leq \lambda$ are regular. $\kappa$ is $\lambda$-supercompact if and only if there is a $\lambda$-closed transitive class $M$ and a nontrivial elementary $j : M \to M$ with critical point $\kappa$. \end{theorem}
Finally, we consider amenable embeddings for which alternative (\ref{repl}) does not hold. Because of the importance of regular fixed points in the proof of Theorem \ref{kunengen}, we first explore the possible behaviors of cardinal fixed points of amenable embeddings, showing that essentially anything can happen. Then we explore the possible structural configurations of commuting systems of amenable embeddings.
Given an ordinal $\delta$, let $\mathcal E_\delta$ be the category whose objects are all transitive models of {\rm ZFC}\xspace of height $\delta$ and whose arrows are all elementary embeddings between these models. Let $\mathcal A_\delta$ be the subcategory where we take only amenable embeddings as arrows. (It is easy to see that amenable embeddings are closed under composition.)
Partial orders are naturally represented as categories where between any two objects there is at most one arrow, which we take to point from the lesser object to the greater. We would like to know what kinds of partial orders can appear in a reasonable way as subcategories of an $\mathcal A_\delta$. Let us say that a subcategory $\mathcal D$ of a category $\mathcal C$ is \emph{honest} if whenever $x$ and $y$ are objects of $\mathcal D$ and there is an arrow $f : x \to y$ in $\mathcal C$, then there is one in $\mathcal D$ as well.
\begin{theorem} If there is a transitive model of {\rm ZFC}\xspace plus sufficiently many large cardinals, \footnote{For (3), we use a set of measurable cardinals of ordertype $\omega+1$. The others use hypotheses weaker than one measurable cardinal.} then there is a countable ordinal $\delta$ such that $\mathcal A_\delta$ contains honest subcategories isomorphic to: \begin{enumerate} \item The real numbers. \item The complete binary tree of any countable height. \item The reverse-ordered complete binary tree of height $\omega$. \item An Aronszajn tree. \item Every countable pseudotree. \end{enumerate} \end{theorem}
A pseudotree is a partial order that is linear below any given element, which generalizes both linear orders and trees. These have been considered by several authors, for example in \cite{ MR2195726, MR1173142, MR0498152}. In order to show the last item, we develop the model theory of pseudotrees. We show that there is a countable pseudotree that has the same kind of universal property that the rationals have with respect to linear orders: It is characterized up to isomorphism by some first-order axioms, and every other countable pseudotree appears as a substructure. We prove that for suitable $\delta$, the category $\mathcal A_\delta$ contains a copy of this universal countable pseudotree.
We also rule out some kinds of subcategories. For example, if $\delta$ is countable, then $\mathcal A_\delta$ cannot contain a copy of $\omega_1$ or a Suslin tree. There are many natural questions about the possible structure of these categories, and we list some of them at the end.
\section{Self-embeddings of highly closed classes} \label{self}
In this section, we will prove Theorem \ref{agreement}. First let us show that if $\lambda$ is regular, $M$ is a $\lambda$-closed transitive class, and $j : M \to M$ is a nontrivial elementary embedding with critical point $\kappa$, then $\kappa$ is $\lambda$-supercompact. First note that we may assume $j(\kappa) > \lambda$: The proof of Theorem \ref{kunengen} shows that the critical sequence eventually must overtake $\lambda$. For if not, then $\lambda \geq \eta^+$, where $\eta = \sup_{n<\omega} j^n(\kappa)$, and we can derive a contradiction from the assumption that $j[\eta^+] \in M$. So composing $j$ with itself finitely many times yields an embedding that sends $\kappa$ above $\lambda$.
Next we claim that $\lambda^{<\kappa} = \lambda$, using a well-known argument (see \cite{MR2160657}). Let $\vec C = \langle C_\alpha : \alpha < \lambda \rangle$ be such that $C_\alpha$ is a club in $\alpha$ of ordertype $\cf(\alpha)$. Since $j[\lambda] \in M$ and $j(\lambda)$ is regular in $M$, $\gamma := \sup j[\lambda] < j(\lambda)$. Let $C^* = j(\vec C)(\gamma)$. Let $D = j^{-1}[C^*]$. Since $j[\lambda]$ is ${<}\kappa$-closed, $|D| = \lambda$. If $x \in [D]^{<\kappa}$, then $j(x) = j[x]$, and by elementarity, $x \subseteq C_\alpha$ for some $\alpha < \lambda$ such that $\cf(\alpha)<\kappa$. Since $\kappa$ is inaccessible, $|\mathcal{P}(C_\alpha)| < \kappa$ when $|C_\alpha|<\kappa$. Thus $\lambda^{<\kappa} \leq \lambda \cdot \kappa = \lambda$.
Thus, since $|\mathcal{P}_\kappa\lambda| = \lambda^{<\kappa} = \lambda$ and $M$ is $\lambda$-closed, every subset of $\mathcal{P}_\kappa\lambda$ can be coded by a $\lambda$-sequence of ordinals, so all subsets of $\mathcal{P}_\kappa\lambda$ are in $M$. From $j$ we may define a $\lambda$-supercompactness measure in the usual way: $\mathcal U = \{ X \subseteq \mathcal{P}_\kappa\lambda : j[\lambda] \in j(X) \}$.
For the other direction, we use an iterated ultrapower. Let $\mathcal U$ be a normal, fine, $\kappa$-complete ultrafilter on $\mathcal{P}_\kappa\lambda$. Let $V = M_0$ and for $n<\omega$, let $j_{n,n+1} : M_n \to M_{n+1} = \Ult(M_n,j_{0,n}(\mathcal U))$ be the ultrapower embedding, and let $j_{m,n+1} = j_{n,n+1} \circ j_{m,n}$ for $m < n$. Let $M_\omega$ be the direct limit, and for $n<\omega$, let $j_{n,\omega}$ be the direct limit embedding. Note that each $M_n$ is $\lambda$-closed, but the limit $M_\omega$ is not even countably closed. $j_{0,\omega}(\kappa) = \sup_{n<\omega} j_{0,n}(\kappa)$, yet this ordinal is inaccessible in $M_\omega$.
To construct the desired model $M$, we find a generic for a Prikry forcing over $M_\omega$, which will restore $\lambda$-closure when adjoined. The sequences of classes $\langle M_n : n < \omega \rangle$ and embeddings $\langle j_{m,n} : m < n < \omega \rangle$ are definable in $V$ from $\mathcal U$. Applying $j_{\mathcal U}$ to the sequences yields $\langle M_n : 1 \leq n < \omega \rangle$ and $\langle j_{m,n} : 1\leq m < n < \omega \rangle$, which have the same direct limit, $M_\omega$. For any formula $\varphi(v_0,\dots,v_n)$ and parameters $a_0,\dots,a_n \in M_\omega$, $\varphi^{M_\omega}(a_0,\dots,a_n) \Leftrightarrow \varphi^{M_\omega}(j_{\mathcal U}(a_0),\dots,j_{\mathcal U}(a_n))$. Hence, $j_{\mathcal U} \restriction M_\omega$ is an elementary embedding into $M_\omega$.
Let us define the Prikry forcing $\mathbb P_{\mathcal U}$, which is standard. Conditions take the form $\langle x_0,\dots,x_n,A \rangle$, where: \begin{enumerate} \item Each $x_i \in \mathcal{P}_\kappa\lambda$, and $\kappa_i := x_i \cap \kappa$ is inaccessible.
\item $x_i \subseteq x_{i+1}$, and $|x_i| < \kappa_{i+1}$. \item $A \in \mathcal U$. \end{enumerate} Suppose $p = \langle x_0,\dots,x_n,A \rangle$ and $q = \langle x'_0,\dots,x'_m,B \rangle$. We say $q \leq p$ when: \begin{enumerate} \item $m \geq n$, and for $i \leq n$, $x_i = x'_i$. \item For $n < i \leq m$, $x'_i \in A$. \item $B \subseteq A$. \end{enumerate}
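For orientation, here is a simple hypothetical instance of the ordering (not needed for the proofs): if $p = \langle x_0, A \rangle$ is a condition and $y \in A$ satisfies $x_0 \subseteq y$, $|x_0| < y \cap \kappa$, and $y \cap \kappa$ is inaccessible, then $\langle x_0, y, A \rangle \leq p$, and more generally $\langle x_0, y, B \rangle \leq p$ for any $B \in \mathcal U$ with $B \subseteq A$. Extensions that only shrink the measure one set, such as $\langle x_0, B \rangle \leq p$, leave the finite part unchanged.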
Proof of the following can be found in \cite{MR2768695}: \begin{lemma} $\mathbb P_{\mathcal U}$ adds no bounded subsets of $\kappa$, collapses $\lambda$ to $\kappa$, and is $\lambda^+$-c.c. \end{lemma}
We now argue that there exists an $M_\omega$-generic filter $G \subseteq j_{0,\omega}(\mathbb P_{\mathcal U})$ in $V$, and that $M_\omega[G]$ is $\lambda$-closed. The idea, in the case $\kappa = \lambda$, is due to Dehornoy \cite{MR514228}.
For $n < \omega$, let $z_n = j_{n,\omega}[j_{0,n}(\lambda)]$. Since $\crit(j_{n+1,\omega}) = j_{0,n+1}(\kappa) > j_{0,n}(\lambda)$, $z_n = j_{n+1,\omega}(j_{n,n+1}[j_{0,n}(\lambda)])\in \mathcal{P}_{j_{0,\omega}(\kappa)} (j_{0,\omega}(\lambda))^{M_\omega}$. We define $G$ as the collection of $\langle x_0,\dots,x_n,A \rangle \in j_{0,\omega}(\mathbb P_{\mathcal U})$ such that $\langle x_0,\dots,x_n\rangle$ is an initial segment of $\langle z_i : i < \omega \rangle$ and $\{ z_i : n < i < \omega \} \subseteq A$.
\begin{lemma}$G$ is generic over $M_\omega$. \end{lemma}
\begin{proof}
We use an analogue of Rowbottom's Theorem \cite{MR0323572}: If $F : [\mathcal{P}_\kappa\lambda]^{<\omega} \to 2$, then there is a set $A \in \mathcal U$ and a sequence $r \in {^\omega}2$ such that whenever $\langle x_0,\dots,x_{n-1} \rangle$ is a sequence from $A$ such that $x_i \subseteq x_{i+1}$ and $|x_i| < x_{i+1} \cap \kappa \in \kappa$ for all $i < n-1$, then $F(\{x_0,\dots,x_{n-1}\}) = r(n)$. Let $D$ be a dense open subset of $j_{0,\omega}(\mathbb P_{\mathcal U})$ in $M_\omega$. For each $s$ such that $s^\frown \langle j_{0,\omega}(\mathcal{P}_\kappa\lambda)\rangle \in j_{0,\omega}(\mathbb P_{\mathcal U})$, let $F_s : [\mathcal{P}_\kappa\lambda]^{<\omega} \to 2$ be defined by $F_s(t) = 1$ if there is $B_{s,t}$ such that $s^\frown t^\frown \langle B_{s,t} \rangle \in D$, and $F_s(t) = 0$ otherwise. If $F_s(t) = 0$, define $B_{s,t} = j_{0,\omega}(\mathcal{P}_\kappa\lambda)$. For each $s$, let $B_s$ be the diagonal intersection $\Delta_t B_{s,t} \in j_{0,\omega}(\mathcal U)$. For each $s$, let $A_s$ and $r_s$ be given by the analogue of Rowbottom's Theorem (applied in $M_\omega$ to $j_{0,\omega}(\mathcal U)$). Let $A^*$ be the diagonal intersection $\Delta_s A_s \cap B_s$.
We claim that for each $s$, whether a condition $s^\frown t ^\frown \langle B\rangle \leq s^\frown \langle A^* \rangle$ is in $D$ depends only on the length of $t$. Let $s$ be given and let $t,t'$ be of the same length $n$, such that $s^\frown t ^\frown \langle B\rangle$ and $s^\frown t' \!^\frown \langle B' \rangle$ are both $\leq s^\frown \langle A^* \rangle$. Then $t,t' \subseteq A_s$ and $B,B' \subseteq B_s$, so $B \subseteq B_{s,t}$ and $B' \subseteq B_{s,t'}$. Thus $F_s(t) = F_s(t') = r_s(n)$, so either both $s^\frown t ^\frown \langle B\rangle$ and $s^\frown t' \!^\frown \langle B' \rangle$ are in $D$, or both are not in $D$.
Now if $D$ is a dense open subset of $j_{0,\omega}(\mathbb P_{\mathcal U})$ in $M_\omega$, there is some $n <\omega$ and $\bar D \in M_n$ such that $j_{n,\omega}(\bar D) = D$. Let $A^*$ be as above, and let $m \geq n$ be such that $A^* = j_{m,\omega}(\bar A^*)$ for some $\bar A^* \in j_{0,m}(\mathcal U)$. Let $l <\omega$ be such that for all sequences $t$ of length $\geq l$ and all $B$ such that $\langle z_0,\dots,z_{m-1} \rangle ^\frown t ^\frown \langle B \rangle \leq \langle z_0,\dots,z_{m-1} \rangle ^\frown \langle A^* \rangle$, we have $\langle z_0,\dots,z_{m-1} \rangle ^\frown t ^\frown \langle B \rangle \in D$. Since $z_k \in A^*$ for all $k \geq m$, $D \cap G \not= \emptyset$. \end{proof}
\begin{lemma}The map $j_{\mathcal U} \restriction M_\omega$ can be extended to a map $j : M_\omega[G] \to M_\omega[G]$. \end{lemma}
\begin{proof} Note that for each $z_n$, $$j_{\mathcal U}(z_n) = j_{0,1}(j_{n,\omega}[j_{0,n}(\lambda)]) = j_{n+1,\omega}[j_{1,n+1}(j_{0,1}(\lambda))] = j_{n+1,\omega}[j_{0,n+1}(\lambda)] = z_{n+1}.$$ Let $G'$ be the generic filter generated by $\langle z_n : 1 \leq n < \omega \rangle$. Then $j_{\mathcal U}[G] \subseteq G'$. Thus by Silver's criterion, we may extend the map to $j : M_\omega[G] \to M_\omega[G']$ by putting $j(\tau^G) = j_{\mathcal U}(\tau)^{G'}$ for every $j_{0,\omega}(\mathbb P_{\mathcal U})$-name $\tau$. But clearly, $M_\omega[G'] = M_\omega[G]$. \end{proof}
\begin{lemma}$M_\omega[G]$ is $\lambda$-closed. \end{lemma}
\begin{proof} It suffices to show that $M_\omega[G]$ contains all $\lambda$-sequences of ordinals. Suppose $\langle \xi_\alpha : \alpha < \lambda \rangle \subseteq \ord$. For each $\alpha$, there is $n < \omega$ and a function $f_\alpha : (\mathcal{P}_\kappa\lambda)^n \to \ord$ such that $\xi_\alpha = j_{0,\omega}(f_\alpha)(z_0,\dots,z_{n-1})$. The sequence $\langle j_{0,\omega}(f_\alpha) : \alpha < \lambda \rangle$ can be computed from $j_{0,\omega}(\langle f_\alpha : \alpha < \lambda \rangle)$ and $j_{0,\omega}[\lambda]$, both of which are in $M_\omega$. The sequence $\langle \xi_\alpha : \alpha < \lambda \rangle$ can be computed from $\langle j_{0,\omega}(f_\alpha) : \alpha < \lambda \rangle$ and $\langle z_n : n < \omega \rangle$, and is thus in $M_\omega[G]$. \end{proof}
This concludes the proof of Theorem \ref{agreement}.
\section{Fixed point behavior of amenable embeddings} \label{fixedpts}
One way to produce elementary embeddings is with indiscernibles, like the embeddings derived from $0^\sharp$. If we want the embedding to be amenable to the target model, indiscernibles can also be used, but more consistency strength is required. Vickers and Welch \cite{MR1856729} showed a near-equiconsistency between the existence of an elementary $j : M \to V$, where $M$ is a transitive class and $V$ satisfies ZFC for formulas involving $j$, and the existence of a Ramsey cardinal. In this section, we begin with the argument for constructing nontrivial amenable embeddings from a Ramsey cardinal, and then we elaborate on this idea using measurable cardinals to control the behavior of the embedding more precisely.
Recall that a cardinal $\kappa$ is \emph{Ramsey} when for every coloring of its finite subsets in $<\kappa$ colors, $c : [\kappa]^{<\omega} \to \delta < \kappa$, there is $X \in [\kappa]^\kappa$ such that $c \restriction [X]^n$ is constant for all $n<\omega$ ($X$ is \emph{homogeneous}). Rowbottom \cite{MR0323572} showed that if $\kappa$ is measurable and $\mathcal U$ is a normal measure on $\kappa$, then for any coloring $c : [\kappa]^{<\omega} \to \delta < \kappa$, there is a homogeneous $X \in \mathcal U$.
Suppose $\kappa$ is Ramsey. Let $\frak A$ be a structure on $V_\kappa$ in a language of size $\delta<\kappa$, that includes a well-order of $V_\kappa$ so that the structure has definable Skolem functions. For $X \subseteq \frak A$, we write $\hull^{\frak A}(X)$ for the set $\{ f(z) : z \in X^{<\omega}$ and $f$ is a definable Skolem function for $\frak A \}$. For $\alpha_0<\dots<\alpha_n<\kappa$, let $c(\alpha_0,\dots,\alpha_n) = \{ \varphi(v_0,\dots,v_n) : \frak A \models \varphi(\alpha_0,\dots,\alpha_n) \}$. The number of colors is at most $2^\delta$, so let $X \in [\kappa]^\kappa$ be homogeneous for $c$. If $Y \subseteq X$ and $\xi \in X \setminus Y$, then $\xi \notin \hull^{\frak A}(Y)$: For if not, let $f(v_0,\dots,v_n)$ be a definable Skolem function, and let $\{\alpha_0,\dots,\alpha_n\} \subseteq Y$ be such that $\xi = f(\alpha_0,\dots,\alpha_n)$. Let $m\leq n$ be the maximum such that $\alpha_m < \xi$. Suppose first that $m<n$. Let $\alpha_{n+1} > \alpha_n$ be in $X$. By homogeneity, \begin{align*} \frak A \models & \xi = f(\alpha_0,\dots,\alpha_m,\alpha_{m+2},\dots,\alpha_n,\alpha_{n+1}), \text{ and } \\ \frak A \models & \alpha_{m+1} = f(\alpha_0,\dots,\alpha_m,\alpha_{m+2},\dots,\alpha_n,\alpha_{n+1}). \end{align*}
This contradicts that $f$ is a function. If $\xi > \alpha_n$, then similarly, $\frak A \models \xi = \xi' = f(\alpha_0,\dots,\alpha_n)$, for some $\xi'>\xi$ in $X$, again a contradiction. Furthermore, for every infinite $\mu \in [\delta,\kappa)$, $|\mathcal{P}(\mu) \cap \hull^{\frak A}(X)| \leq \mu$ \cite{MR0323572}.
Therefore, if $Y \subseteq X$ has size $\kappa$, and $M$ is the transitive collapse of $\hull^{\frak A}(Y)$, then $M$ is a proper transitive subset of $V_\kappa$ of size $\kappa$, and there is an elementary $j : M \to V_\kappa$. $j$ is amenable simply because $V_\kappa$ has all sets of rank $<\kappa$, so in particular $j[x] \in V_\kappa$ for all $x \in M$. Furthermore, if $Y_0,Y_1 \in [X]^\kappa$, then the order-preserving bijection $f : Y_0 \to Y_1$ induces an isomorphism $\pi : \hull^{\frak A}(Y_0) \to \hull^{\frak A}(Y_1)$, and thus these two hulls have the same transitive collapse $M$. So given the structure $\frak A$, this process produces one proper transitive subset $M \subseteq V_\kappa$ of size $\kappa$, which can be amenably embedded into $V_\kappa$ in many different ways. Let us examine the ways in which these embeddings may differ with regard to fixed points.
\begin{theorem} \label{regfix} Suppose $\kappa$ is measurable. There is a transitive $M \subseteq V_\kappa$ of size $\kappa$ such that for every $\delta \leq \kappa$, there is an elementary embedding $j : M \to V_\kappa$ such that the set of $M$-cardinals fixed by $j$ above $\crit(j)$ has ordertype $\delta$. \end{theorem}
\begin{proof}
Fix a normal ultrafilter $\mathcal U$ on $\kappa$ and some cardinal $\theta > 2^\kappa$. Let $\frak A$ be a structure in a countable language expanding $(H_\theta,\in,\mathcal U)$ with definable Skolem functions. Let $\frak A_0 \prec \frak A$ be such that $|\frak A_0| = |\frak A_0 \cap \kappa| < \kappa$. We show something a little stronger than the claimed result; namely, for every $\delta \leq \kappa$, there is a set of indiscernibles $B \subseteq \kappa$ of size $\kappa$ such that if $\frak B = \hull(\frak A_0 \cup B)$, then $\frak B \cap \sup (\frak A_0 \cap \kappa) = \frak A_0 \cap \kappa$, and the set of cardinals in the interval $[\sup(\frak A_0 \cap \kappa),\kappa)$ that are fixed by the transitive collapse of $\frak B$ has ordertype $\delta$. Thus if the inverse of the transitive collapse map of $\frak A_0$ has no fixed points above its critical point, for example if $\frak A_0$ is countable, then the result follows.
Let $\alpha_0 \in\bigcap (\mathcal U \cap \frak A_0)$ be strictly greater than $\sup(\frak A_0 \cap \kappa)$, and let $\frak A_1 = \hull(\frak A_0 \cup \{ \alpha_0 \})$. We claim that $\frak A_0 \cap \kappa = \frak A_1 \cap \alpha_0$. If $\gamma \in \frak A_1 \cap \kappa$, then there is a function $f : \kappa \to \kappa$ in $\frak A_0$ such that $f(\alpha_0) = \gamma$. If $\gamma < \alpha_0$, then $f$ is regressive on a set in $\mathcal U$, and therefore constant on a set in $\mathcal U$, and thus $\gamma \in \frak A_0$. Continue in this way, producing a continuous increasing sequence of elementary substructures of $\frak A$, $\langle \frak A_i : i < \kappa \rangle$, and an increasing sequence of ordinals $\langle \alpha_i : i < \kappa \rangle$, such that for $0<i<\kappa$, $\alpha_i = \min\bigcap(\mathcal U \cap \frak A_i)$, and $\frak A_{i+1} = \hull (\frak A_i \cup \{ \alpha_i \})$.
\begin{claim} $\{ \alpha_i : i < \kappa \} \in \mathcal U $. \end{claim} \begin{proof} Let $\frak A_\kappa = \bigcup_{i<\kappa} \frak A_i$, and let $\langle X_i : i < \kappa \rangle$ enumerate $\mathcal U \cap \frak A_\kappa$. There is a club $C\subseteq\kappa$ such that for all $\beta \in C$, $\mathcal U \cap \frak A_\beta = \{ X_i : i < \beta \}$, and $\beta = \sup(\frak A_\beta \cap \kappa)$. If $\beta \in \bigcap_{i<\beta} X_i$, then $\alpha_\beta = \beta$. This means that $C \cap \Delta_{i<\kappa} X_i \subseteq \{ \alpha_i : i < \kappa \}$. Since $\mathcal U$ is normal, the claim follows. \end{proof}
Let $A = \{ \alpha_i : i < \kappa \}$. Under a mild assumption on $\frak A_0$, $A$ is a set of order-indiscernibles for $\frak A$. For let $\varphi$ be a formula in $n$ free variables in the language of $\frak A$. Let $c_\varphi : [\kappa]^n \to 2$ be the coloring defined by $c_\varphi(a_1,\dots,a_n) = 1$ if $\frak A \models \varphi(a_1,\dots,a_n)$, and 0 otherwise. Since $c_\varphi \in \frak A$, Rowbottom's Theorem implies that there is a set $X_\varphi \in \mathcal U$ such that $\frak A \models \varphi(a_1,\dots,a_n) \leftrightarrow \varphi(b_1,\dots,b_n)$ whenever $\langle a_1,\dots,a_n \rangle,\langle b_1,\dots,b_n \rangle$ are increasing sequences from $X_\varphi$. By slightly enlarging $\frak A_0$ at the beginning if necessary, we may assume $c_\varphi \in \frak A_0$ for each $\varphi$. By elementarity, we may assume $X_\varphi \in \frak A_0$. Thus $A \subseteq X_\varphi$ for each such $\varphi$.
Let $\langle \beta_i : i < \kappa \rangle$ be the increasing enumeration of the closure of $A \cup \{ \sup(\frak A_0 \cap \kappa) \}$.
\begin{claim} \label{hullcontrol} For every $X \subseteq A$ and $\gamma<\kappa$, if $\beta_\gamma \notin X$, then $\hull(\frak A_0 \cup X)$ is disjoint from the interval $[\beta_\gamma,\beta_{\gamma+1})$. \end{claim} \begin{proof} Suppose $\beta_\gamma \notin X$, $\xi \in [\beta_\gamma,\beta_{\gamma+1})$, but there are elements $c_1,\dots,c_k \in \frak A_0$, ordinals $i_1 < \dots < i_n$, and a Skolem function $f$ such that $\xi = f(c_1,\dots,c_k,\alpha_{i_1},\dots,\alpha_{i_n})$, with $\{ \alpha_{i_1},\dots,\alpha_{i_n} \} \subseteq X$. Let us assume that we have chosen $f$ and $c_1,\dots,c_k$ to output $\xi$ with the least number $n$ of parameters from $X$. Since $\alpha_{i_n+1} > \sup(\frak A_{i_n+1} \cap \kappa)$, we have $\xi < \alpha_{i_n}$. Working in $N = \hull(\frak A_0 \cup \{ \alpha_{i_1},\dots,\alpha_{i_{n-1}} \})$, let $$Y = \{ \eta < \kappa : f(c_1,\dots,c_k, \alpha_{i_1},\dots,\alpha_{i_{n-1}},\eta ) < \eta \}.$$ Since $\alpha_{i_n} \in Y \in \mathcal U$ and $\mathcal U$ is normal, there is $\zeta < \kappa$ and $Z \in \mathcal U \cap N$ such that $f(c_1,\dots,c_k, \alpha_{i_1},\dots,\alpha_{i_{n-1}},\eta ) = \zeta$ for all $\eta \in Z$. Thus, $\zeta = \xi \in N$, and $\xi$ is the output of a Skolem function with only $n-1$ inputs from $X$, contrary to the minimality assumption. \end{proof}
Now let $D \subseteq A \cap \lim A$ have ordertype $\delta$. Let $B = (A \setminus \lim A) \cup D$. Let $\frak B = \hull(\frak A_0 \cup B)$, let $H$ be the transitive collapse of $\frak B$, let $M = H \cap V_\kappa$, and let $j : M \to V_\kappa$ be the inverse of the collapse map.
Let $C = j^{-1}[B]$, and let $\langle B(i) : i < \kappa \rangle$ and $\langle C(i) : i < \kappa \rangle$ denote the respective increasing enumerations of these sets. Let $\lambda \in \lim C$. By Claim \ref{hullcontrol}, there is some $i<\kappa$ such that $j(\lambda) = \beta_i$, and $\beta_i \in B$. Thus $C$ is closed. Since $|C(i)| = |C(i+1)|$ for all $i$, every cardinal of $V_\kappa$ above $C(0)$ is a limit point of $C$.
Let $\kappa_0 = \ot(\frak A_0 \cap \kappa)$. Since $\frak B \cap \alpha_0 = \frak A_0 \cap \kappa$, $j(\kappa_0) = \alpha_0 > \kappa_0$. Define an increasing continuous sequence $\langle \kappa_i : i < \kappa \rangle$ as follows. Given $\kappa_i$, if $j(\kappa_i) > \kappa_i$, let $\kappa_{i+1} = j(\kappa_i)$, and if $j(\kappa_i) = \kappa_i$, let $\kappa_{i+1} = (\kappa_i^+)^M$. For limit $\lambda$, let $\kappa_\lambda = \sup_{i<\lambda}\kappa_i$.
We claim that $D$ is the set of points above $\kappa_0$ that are fixed by $j$. To show each point in $D$ is fixed, note that if $\xi \in D$, then $\xi$ is closed under $j$, because $\ot(B \cap \xi) = \xi$. Thus $\xi = \kappa_\xi$, and $j(\xi) = \xi$. To show no other points are fixed, we argue by induction that if $\xi \notin D$, then $j(\kappa_\xi) = \kappa_{\xi+1}$. Assume that this holds for all $i<\xi$, and $\xi \notin D$. \begin{itemize} \item \underline{Case 1:} $\xi$ is a limit. Then $\kappa_\xi = \sup_{i<\xi} \kappa_i$. By the inductive assumption, $\kappa_i \in \ran(j)$ for unboundedly many $i <\xi$, so $\kappa_\xi$ is a cardinal in $V$, and thus $\kappa_\xi = C(\kappa_\xi)$. Therefore, $j(\kappa_\xi) = B(\kappa_\xi)$. If $\xi < \kappa_\xi$, then $\kappa_\xi \notin A$ since it is singular, so $B(\kappa_\xi) > \kappa_\xi$. If $\xi = \kappa_\xi$, then $B(\xi) > \xi$ since $\xi \notin D$. In either case, the definition of the sequence gives that $\kappa_{\xi+1} = j(\kappa_\xi)$.
\item \underline{Case 2:} $\xi = \eta+1$, and $\eta \in D$. Then $\kappa_\xi = (\kappa_\eta^+)^M$. Since $|\frak A_{\eta+1}| = \eta$, $(\kappa_\eta^+)^M < (\kappa_\eta^+)^V$. Thus $j(\kappa_\xi) = (\kappa_\eta^+)^V = \kappa_{\xi+1}$. \item \underline{Case 3:} $\xi = \eta+1$, and $\eta \notin D$. Then by induction, $\kappa_\eta < j(\kappa_\eta) = \kappa_\xi$. By elementarity, $\kappa_\xi = j(\kappa_\eta) < j(\kappa_\xi)$, so by definition, $\kappa_{\xi+1} = j(\kappa_\xi)$. \end{itemize} To conclude, if $\xi \in D$, then the interval $[\kappa_\xi,\kappa_{\xi+1})$ contains only one $M$-cardinal, and it is fixed. If $\xi \notin D$, then the interval $[\kappa_\xi,\kappa_{\xi+1})$ is moved into the interval $[\kappa_{\xi+1},\kappa_{\xi+2})$. \end{proof}
\begin{theorem} \label{singfix} Suppose $\kappa$ is a measurable limit of measurables, $\delta \leq \kappa$, and $f : \delta \to 2$. Then there is a transitive $M \subseteq V_\kappa$ of size $\kappa$ and an elementary embedding $j : M \to V_\kappa$ such that the set of $M$-cardinals fixed by $j$ above $\crit(j)$ has ordertype $\delta$, and for all $\alpha < \delta$, the $\alpha^{th}$ fixed point is regular iff $f(\alpha) = 1$. \end{theorem}
\begin{proof}
Let $\mathcal U$ be a normal measure on $\kappa$, let $\theta > 2^\kappa$, and let $\frak A$ be a structure in a countable language expanding $(H_\theta,\in,\mathcal U)$ with definable Skolem functions. Let us first show how to get isolated singular fixed points. Suppose $\frak A_0 \prec \frak A$ and there is a cardinal $\mu\in \frak A_0 \cap \kappa$ such that $|\frak A_0|<\mu$. Let $\langle \delta_n : n < \omega \rangle$ enumerate the first $\omega$ measurable cardinals in $\frak A_0 \cap \kappa$ above $\mu$. By the previous construction, we may make a series of elementary extensions $\frak A_0 \prec \frak A_1 \prec \frak A_2 \prec \dots$ such that: \begin{enumerate} \item For all $n$, $\frak A_n \cap \delta_n = \frak A_{n+1} \cap \delta_n$.
\item $\ot(\frak A_1 \cap \delta_0) = | \frak A_1 | = \mu$.
\item For $n \geq 1$, $\ot(\frak A_n \cap \delta_n) = |\frak A_n | = \delta_{n-1}$. \end{enumerate}
Then let $\frak A_\omega = \bigcup_{n<\omega} \frak A_n$. Let $j : H \to \frak A_\omega$ be the inverse of the transitive collapse. It is easy to see that $j(\mu) = \delta_0$, and for all $n<\omega$, $j(\delta_n) = \delta_{n+1}$. Let $\delta_\omega = \sup_n \delta_n$, which is in $\frak A_0$. Since $\ot(\frak A_\omega \cap \delta_\omega) = \delta_\omega$, $j(\delta_\omega) = \delta_\omega$. Since $|\frak A_\omega| = \delta_\omega$, all cardinals of $H$ above $\delta_\omega$ are below $(\delta_\omega^+)^V$, so $j$ has no cardinal fixed points greater than $\delta_\omega$.
To prove the theorem, we build a continuous chain of elementary submodels of $\frak A$, $\langle \frak A_\alpha : \alpha < \kappa \rangle$, along with an increasing sequence of cardinals $\langle \delta_\alpha : \alpha < \delta \rangle \subseteq \kappa$, such that: \begin{enumerate} \item If $\alpha < \beta$, then $\frak A_\beta \cap \sup(\frak A_\alpha \cap \kappa)= \frak A_\alpha \cap \kappa$. \item For each $\alpha$, $\delta_\alpha \in \frak A_{\alpha+1} \setminus \sup(\frak A_\alpha \cap \kappa)$. \item If $\alpha < \delta$, then $\delta_\alpha$ is the unique cardinal fixed point of the transitive collapse of $\frak A_{\alpha+1}$ that is $\geq\sup(\frak A_{\alpha} \cap \kappa)$, and $\delta_\alpha$ is regular iff $f(\alpha) = 1$. \item If $\alpha \geq \delta$, then there are no fixed points in the transitive collapse of $\frak A_{\alpha+1}$ that are $\geq \sup(\frak A_{\alpha} \cap \kappa)$. \end{enumerate} Given $\frak A_\alpha$, first adjoin an ordinal $\gamma_\alpha \in \bigcap(\frak A_\alpha \cap \mathcal U)$ that is strictly above $\sup(\frak A_\alpha \cap \kappa)$, to form a model $\frak A_\alpha'$ with the same ordinals below $\gamma_\alpha$, so that if $\pi$ is the transitive collapse of $\frak A_\alpha'$, then $\pi(\gamma_\alpha)<\gamma_\alpha$. If $\alpha<\delta$ and $f(\alpha) = 0$, let $\delta_\alpha$ be the supremum of the first $\omega$ measurable cardinals in $\frak A_\alpha'$ above $\gamma_\alpha$. If $\alpha < \delta$ and $f(\alpha) = 1$, let $\delta_\alpha$ be the least measurable cardinal in $\frak A_\alpha'$ above $\gamma_\alpha$. Depending on $f(\alpha)$, use either the above construction or that of Theorem \ref{regfix} to build an extension $\frak A_{\alpha+1}$ of $\frak A_\alpha'$ with size $\delta_\alpha$, so that $\delta_\alpha$ is the unique cardinal fixed point of its transitive collapse above $\sup(\frak A_\alpha \cap \kappa)$, and $\frak A_{\alpha+1} \cap \gamma_\alpha = \frak A_\alpha \cap \kappa$.
If $\delta = \kappa$, we continue the same process for $\kappa$-many steps. If $\delta < \kappa$, then once we have $\frak A_\delta$, we can apply Theorem \ref{regfix} to extend to a $\kappa$-sized model $\frak A_\kappa$, such that $\frak A_\kappa \cap \sup(\frak A_\delta \cap \kappa) = \frak A_\delta \cap \kappa$, and the transitive collapse of $\frak A_\kappa$ has no fixed points in the interval $[\sup(\frak A_\delta \cap \kappa),\kappa)$. \end{proof}
\section{Categories of amenable embeddings} \label{amcat}
In this section, we explore the structure of the categories $\mathcal A_\delta$ of models of {\rm ZFC}\xspace of height $\delta$ and amenable embeddings between them. We attempt to determine, as comprehensively as we can, what kinds of linear and partial orders can appear as honest subcategories of $\mathcal A_\delta$, for a countable ordinal $\delta$. We start by applying the methods of the previous section to build patterns of ``amalgamation'' and then use stationary tower forcing to build transfinite chains and patterns of ``splitting.'' Then we pass to ill-founded models that are well-founded up to $\delta$ to find subcategories of $\mathcal A_\delta$ that are isomorphic to certain canonical ill-founded structures.
The following large cardinal notion will be important for this section: \begin{definition*}[\cite{MR2069032}]A cardinal $\kappa$ is called \emph{completely J\'onsson} if it is inaccessible and for every $a \in V_\kappa$ which is stationary in $\bigcup a$, the set
$\{ M \subseteq V_\kappa : M \cap \bigcup a \in a \text{ and } |M \cap \kappa| = \kappa \}$ is stationary in $\mathcal{P}(V_\kappa)$. \end{definition*}
The construction of the previous section shows that measurable cardinals are completely J\'onsson. For let $\kappa$ be measurable and let $a \in V_\kappa$ be stationary. For any function $F : V_\kappa^{<\omega} \to V_\kappa$, we can take a well-order $\lhd$ on $H_\theta$, where $\theta > 2^\kappa$, and find an elementary substructure $\frak A \prec (H_\theta,\in,\lhd,F)$ of size $<\kappa$, and such that $\frak A \cap \bigcup a \in a$. By repeated end-extension, we get $\frak B \prec (H_\theta,\in,\lhd,F)$ such that $|\frak B \cap \kappa| = \kappa$ and $\frak B \cap \bigcup a = \frak A \cap \bigcup a$. Since $\frak B \cap V_\kappa$ is closed under $F$, the claim follows. An easy reflection argument shows that if $\kappa$ is measurable and $\mathcal U$ is a normal measure on $\kappa$, then the set of completely J\'onsson cardinals below $\kappa$ is in $\mathcal U$.
Similar arguments show the following: If $\langle \kappa_i : i < \delta \rangle$ is an increasing sequence of measurable cardinals with supremum $\theta$, and $\langle \mu_i : i < \delta \rangle$ is a nondecreasing sequence of cardinals with $\mu_i \leq \kappa_i$ for all $i$, then the set $\{ X \subseteq \theta : |X \cap \kappa_i| = \mu_i$ for all $i \}$ is stationary.
\subsection{Collapsing inward}
Following \cite{MR1940513}, let us say that a \emph{sequential tree} is a collection of functions $p : n \to \omega$ for $n<\omega$ closed under initial segments, ordered by $p \leq q$ iff $p \supseteq q$. If $T$ is a well-founded sequential tree, then it has a rank function $\rho_T : T \to \omega_1$, defined inductively by putting $\rho_T(p) = 0$ for minimal nodes $p$, and $\rho_T(p) = \sup \{ \rho_T(q) + 1 : q < p \}$. The rank of $T$ is $\rho_T(\emptyset)$.
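As a small illustration (a hypothetical example, not used later): for the sequential tree $T = \{ \emptyset, \langle 0 \rangle, \langle 0,0 \rangle, \langle 1 \rangle \}$, the minimal nodes $\langle 0,0 \rangle$ and $\langle 1 \rangle$ have rank $0$, $\rho_T(\langle 0 \rangle) = 1$, and $\rho_T(\emptyset) = 2$, so the rank of $T$ is $2$.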
\begin{theorem} \label{wftrees} Suppose there is a countable transitive model $N$ of {\rm ZFC}\xspace containing a completely J\'onsson cardinal $\delta$ that has infinitely many measurables below it, $\alpha \in N$ is an ordinal, and $T$ is a well-founded sequential tree of rank $\alpha$. Then there is an honest subcategory of $\mathcal A_\delta$ isomorphic to $T$. \end{theorem}
\begin{proof} For the purposes of this argument, for a natural number $n$ and a model $M$ in the language of set theory, let $\kappa_n^M$ denote the $n^{th}$ measurable cardinal in $M$, if it exists. We show the following claim by induction on $\alpha$. Let $\Phi(T,\alpha,M,\delta,f)$ stand for the assertion that: \begin{enumerate} \item $T$ is a well-founded sequential tree of rank $\alpha$. \item $M \in N$ is a transitive model of ${\rm ZF}\xspace -$ Powerset + ``Every set has a well-ordering'' + ``$\delta$ is a completely J\'onsson cardinal with infinitely many measurables below it'' $+$ ``$V_{\delta+\alpha+1}$ exists''. \item $f : \omega^2 \to \omega$ is an injection in $M$. \end{enumerate} We claim that if $\Phi(T,\alpha,M,\delta,f)$ holds, then there is an assignment $p \mapsto M_p$ for $p \in T$ such that: \begin{enumerate} \item Each $M_p$ is a transitive set of rank $\delta$ in $M$, with $M_\emptyset = V^M_\delta$. \item If $p \leq q$, then there is an amenable $j : M_p \to M_q$ in $M$. \item For all $p \in T$, $\kappa_{f(i,p(i))}^{M_p} < \kappa_{f(i,p(i))}^M$ for $i < \len(p)$, and $\kappa_{f(i,i')}^{M_p} = \kappa_{f(i,i')}^M$ for $(i,i') \notin p$. \end{enumerate} Let us first show why the claim implies that the generated subcategory is honest. Suppose $p(i) \not= q(i)$, and let $n = f(i,q(i))$. If there were an elementary embedding $j : M_p \to M_q$, then $$j(\kappa^{M_p}_n) = \kappa^{M_q}_n < \kappa_n^M = \kappa^{M_p}_n,$$ which is impossible, since an elementary embedding between transitive models satisfies $j(\alpha) \geq \alpha$ for every ordinal $\alpha$.
Suppose $\Phi(T,\alpha,M,\delta,f)$ holds, and the claim holds below $\alpha$.
Without loss of generality, $\langle n \rangle \in T$ for each $n<\omega$, since every sequential tree of rank $\alpha$ can be embedded into one of this form. For each $n$, let $T_n = \{ \langle p(1),\dots,p(\len(p) -1) \rangle : p \in T$ and $p(0) = n \}$. Note that each $T_n$ is a well-founded sequential tree, and $\alpha = \sup_n (\rank(T_n)+1)$. Let $\alpha_n = \rank(T_n)$.
Working in $M$, let $\theta = |V_{\delta+\alpha}|^+$. We can take for each $n<\omega$, an elementary substructure $\frak A_n \prec H_\theta$ such that: \begin{enumerate}
\item $| \frak A_n \cap \kappa_{f(0,n)}^M | < \kappa_{f(0,n)}^M$.
\item $| \frak A_n \cap \kappa_{f(i,i')}^M | = \kappa_{f(i,i')}^M$ for $(i,i') \not= (0,n)$.
\item $\delta \in \frak A_n$ and $| \frak A_n \cap \delta | = \delta$. \end{enumerate} Let $M_n$ be the transitive collapse of $\frak A_n$. Let $M_\emptyset = V_\delta^M$ and $M_{\langle n \rangle} = (V_\delta)^{M_n}$. (Note that $M_{\langle n \rangle}$ is a rank initial segment of $M_n$, and the larger model does not satisfy the powerset axiom.) Each $M_{\langle n \rangle}$ amenably embeds into $M_\emptyset$. For each $n$, $\kappa_{f(0,n)}^{M_n} < \kappa_{f(0,n)}^M$ and $\kappa_{f(i,i')}^{M_n} = \kappa_{f(i,i')}^M$ for $(i,i') \not= (0,n)$.
Let $g : \omega^2 \to \omega$ be defined by $g(i,i') = f(i+1,i')$. Then $\Phi(T_n,\alpha_n,M_n,\delta,g)$ holds for each $n$. By the induction hypothesis, there is an assignment $p \mapsto M_p$, for $p \in T \setminus \{ \emptyset \}$, such that whenever $p(0) = n$, \begin{enumerate} \item $M_p$ is a transitive set of rank $\delta$ in $M_n$. \item If $q \leq p$, then there is an amenable $j : M_q \to M_p$ in $M_n$. \item For $1 \leq i < \len(p)$, $$\kappa_{g(i-1,p(i))}^{M_p} = \kappa_{f(i,p(i))}^{M_p} < \kappa_{f(i,p(i))}^{M_{n}} = \kappa_{f(i,p(i))}^M,$$ and for a pair $(i,i') \notin p$, with $i \geq 1$, $$\kappa_{g(i-1,i')}^{M_p} = \kappa_{f(i,i')}^{M_p} = \kappa_{f(i,i')}^{M_{n}} = \kappa_{f(i,i')}^M.$$ \item For all $m<\omega$, $\kappa_{f(0,m)}^{M_p} = \kappa_{f(0,m)}^{M_n}$. \end{enumerate} Therefore, the claim holds for $\alpha$. \end{proof}
\subsection{Forcing outward}
The background material for this subsection can be found in \cite{MR2069032}.
\begin{definition*}[Woodin] The (proper class) \emph{stationary tower forcing} $\mathbb P_\infty$ is the class of stationary sets ordered by $a \leq b$ if $\bigcup a \supseteq \bigcup b$ and $\{ x \cap \bigcup b : x \in a \} \subseteq b$. If $\kappa$ is a strong limit cardinal, then $\mathbb P_{<\kappa} = \mathbb P_\infty \cap V_\kappa$. \end{definition*}
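Let us note a standard way of extending conditions that is used repeatedly below (a routine observation about the ordering, not part of the definition): if $a$ is a condition and $Y \supseteq \bigcup a$ is any set, then $a' = \{ z \subseteq Y : z \cap \bigcup a \in a \}$ is stationary in $\mathcal{P}(Y)$ and $a' \leq a$; one may then shrink $a'$ further to a stationary set by imposing additional requirements on $z$, as in the proof of Lemma \ref{controltower} below.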
\begin{theorem}[Woodin] Suppose there is a proper class of completely J\'onsson cardinals, and $G \subseteq \mathbb P_\infty$ is generic over $V$. Then there is an amenable elementary embedding $j : V \to V[G]$ with unboundedly many regular fixed points. \end{theorem}
The above theorem immediately implies that if there is a countable transitive model $M_0$ of height $\delta$ with a proper class of completely J\'onsson cardinals, then $\mathcal A_\delta$ contains a subcategory isomorphic to the linear order $\omega$. Given $M_n$, we can take a $\mathbb P_\infty$-generic $G_n$ over $M_n$, which gives us an amenable $j : M_n \to M_{n+1} = M_n[G_n]$. In order to construct more complicated subcategories, we use the following to determine the action of the maps:
\begin{fact} \label{los} Suppose $G \subseteq \mathbb P_\infty$ and $j : V \to V[G]$ are as above. Suppose $\varphi(x_0,\dots,x_n,y)$ is a formula, $a_0,\dots,a_n,b \in V$, and $b$ is transitive. Then $$V[G] \models \varphi(j(a_0),\dots,j(a_n),j[b]) \Leftrightarrow \{ z \subseteq b : V \models \varphi(a_0,\dots,a_n,z) \} \in G.$$ Consequently, $\kappa = \crit(j) \Leftrightarrow \{ z \subseteq \kappa : z \cap \kappa \in \kappa \} \in G$. \end{fact}
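As a sample computation with Fact \ref{los} (spelled out here only for orientation; it is the pattern used in Lemma \ref{controltower} below): take a single parameter $a_0 = \alpha$, $b = V_\alpha$, and $\varphi(x,z)$ the formula ``$\ot(z \cap x) = x$.'' Since $j[V_\alpha] \cap j(\alpha) = j[\alpha]$ and $\ot(j[\alpha]) = \alpha$, the fact gives $$j(\alpha) = \alpha \Leftrightarrow \{ z \subseteq V_\alpha : \ot(z \cap \alpha) = \alpha \} \in G.$$ Conditions of this shape are what force the ordinals $\delta_{f(n)}$ to be fixed in Lemma \ref{controltower}.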
To construct subcategories containing transfinite chains, we use our latitude in determining critical points so that the equivalence classes that make up the ordinals of the direct limit are all eventually ``frozen.''
\begin{lemma} \label{omegalimit} Suppose the following: \begin{enumerate} \item $\langle \delta_n : n < \omega \rangle$ is an increasing sequence of ordinals. \item $\langle M_n : n <\omega \rangle$ is a sequence of transitive ZFC models, each of height $\delta = \sup_{n<\omega} \delta_n$. \item $\langle j_{m,n} : M_m \to M_n : m \leq n < \omega \rangle$ is a commuting system of elementary embeddings. \item For all $m \leq n$, $j_{m,n}(\delta_n)<\crit(j_{n,n+1}) $. \end{enumerate} Then the direct limit is isomorphic to a transitive model $M_\omega \subseteq \bigcup_{n<\omega} M_n$. If each $j_{m,n}$ is amenable, then so is each direct limit map $j_{n,\omega}$, and thus $M_\omega = \bigcup_{n<\omega} M_n$. \end{lemma}
\begin{proof} Recall that the direct limit is defined as the set of equivalence classes $[n,x]$, where $x \in M_n$, and we put $[n,x] \sim [m,y]$ for $n \leq m$ when $j_{n,m}(x) = y$. The membership relation is defined similarly: for $n\leq m$, $[n,x]$ ``$\in$'' $[m,y]$ when $j_{n,m}(x) \in y$. Suppose $\alpha < \delta$ and $n<\omega$. Let $m \geq n$ be such that $\delta_m \geq \alpha$, and let $\beta = j_{n,m}(\alpha)$. Then for all $k > m$, $\crit(j_{m,k})>\beta$. Thus if $[k,\gamma] < [n,\alpha]$ in the direct limit, then there is some $\xi <\beta$ such that $[k,\gamma] \sim [m,\xi]$. Thus the set of predecessors of any ordinal in the direct limit is isomorphic to an ordinal below $\delta$, so the direct limit can be identified with a transitive model $M_\omega$ of height $\delta$. To show $M_\omega \subseteq \bigcup_n M_n$, let $x \in M_\omega$ and let $n,y$ be such that $x = j_{n,\omega}(y)$. Let $m\geq n$ be such that $\rank(y) <\delta_m$. $\crit(j_{m,\omega}) > \rank(j_{n,m}(y))$, so $x = j_{m,\omega} \circ j_{n,m}(y) = j_{n,m}(y) \in M_m$.
Now suppose each $j_{m,n}$ is amenable. Let $x \in M_n$. Let $m\geq n$ be such that $\rank(x) <\delta_m$. The critical point of $j_{m,k}$ is greater than $j_{n,m}(\delta_m)$ for all $k>m$. By amenability, $j_{n,m}[x] \in (V_{j_{n,m}(\delta_m)})^{M_m}$. But $j_{n,\omega}[x] = j_{m,\omega}[j_{n,m}[x]] = j_{m,\omega}(j_{n,m}[x]) \in M_\omega$. \end{proof}
\begin{corollary} \label{anycountable} Suppose $M$ is a countable transitive model of {\rm ZFC}\xspace of height $\delta$ satisfying that there is a proper class of completely J\'onsson cardinals. Let $\xi$ be any countable ordinal. Then there is a subcategory of $\mathcal A_\delta$ isomorphic to $\xi$. \end{corollary}
\begin{proof} We argue by induction on $\xi$, using the following stronger hypothesis: If $M_0$ is a countable transitive model of {\rm ZFC}\xspace + ``There is a proper class of completely J\'onsson cardinals'' of height $\delta$, and $\gamma <\delta$, then there is a linear system of amenable embeddings $\langle j_{\alpha,\beta} : M_\alpha \to M_\beta : \alpha \leq \beta \leq \xi \rangle$ with $\crit(j_{0,\xi}) > \gamma$, each $M_\alpha$ having the same ordinals $\delta$. Suppose this is true for all $\xi' < \xi$. If $\xi =\xi'+1$, then we may continue to $\xi$ by taking the final embedding $j_{\xi',\xi}$ to be a stationary tower embedding with large enough critical point. Suppose $\xi$ is a limit ordinal, and let $\xi = \sup_{n<\omega} \xi_n$ and $\delta = \sup_{n<\omega} \delta_n$, where the $\xi_n$ and $\delta_n$ are increasing. Let $\gamma<\delta$. Inductively choose, for each $n<\omega$, a chain of amenable embeddings $\langle j_{\alpha,\beta} : M_\alpha \to M_\beta : \xi_n \leq \alpha \leq\beta\leq\xi_{n+1} \rangle$, such that for all $m\leq n <\omega$, $\crit(j_{\xi_n,\xi_{n+1}}) > j_{\xi_m,\xi_n}(\delta_n)+\gamma$. Then by Lemma \ref{omegalimit}, the direct limit $M_\xi$ is well-founded of height $\delta$, and each direct limit map $j_{\xi_n,\xi} : M_{\xi_n} \to M_\xi$ is amenable. For $\alpha<\xi$, let $j_{\alpha,\xi} : M_\alpha \to M_\xi$ be $j_{\xi_n,\xi} \circ j_{\alpha,\xi_n}$, where $n$ is least such that $\xi_n \geq \alpha$. For $\alpha<\beta<\xi$ coming from different intervals, let $j_{\alpha,\beta} = j_{\xi_m,\beta} \circ j_{\xi_{m-1},\xi_m} \circ \dots \circ j_{\xi_n,\xi_{n+1}} \circ j_{\alpha,\xi_n}$, where $n$ is least such that $\xi_n \geq \alpha$ and $m$ is greatest such that $\xi_m \leq \beta$. These are amenable since they are compositions of amenable maps. It is easy to check that $\crit(j_{0,\xi}) > \gamma$. \end{proof}
The countability of $\xi$ above is optimal:
\begin{theorem} \label{nolongchains} For any ordinal $\delta$, $\mathcal E_\delta$ contains no subcategory isomorphic to $\delta^+$. Moreover, if $T$ is a tree of height $\delta^+$ such that for every $t \in T$ and every $\alpha<\delta^+$, there is $s \geq t$ of height $\geq \alpha$, and $T$ is isomorphic to a subcategory of $\mathcal E_\delta$, then forcing with $T$ collapses $\delta^+$. \end{theorem} \begin{proof} Suppose $\{ M_\alpha : \alpha < \delta^+ \}$ is a set of objects in $\mathcal E_\delta$, and each pair $\alpha < \beta$ is assigned an embedding $j_{\alpha,\beta} : M_\alpha \to M_\beta$, such that for $\alpha < \beta < \gamma$, $j_{\alpha,\gamma} = j_{\beta,\gamma} \circ j_{\alpha,\beta}$.
We select continuous increasing sequences of ordinals, $\langle \xi_i : i <\delta^+ \rangle$, and subsets of $\delta$, $\langle F_i : i < \delta^+\rangle$. Let $\xi_0 = 0$ and let $F_0$ be the set of ordinals that are fixed by every map $j_{\alpha,\beta}$. Given $\xi_i$, for each $\alpha < \delta$ the sequence $\langle j_{\xi_i,\beta}(\alpha) : \beta < \delta^+ \rangle$ is nondecreasing. Thus, since each such sequence takes values below $\delta$ and $\delta^+$ is regular, there is $\xi_{i+1}<\delta^+$ such that $j_{\xi_i,\xi_{i+1}}(\alpha) = j_{\xi_i,\beta}(\alpha)$ for all $\beta \geq \xi_{i+1}$, and all $\alpha<\delta$. Let $F_{i+1} = j_{\xi_i,\xi_{i+1}}[\delta]$. Take unions at limits. By induction, the sets $F_i$ are increasing, and for all $i<\delta^+$, $j_{\xi_i,\beta}(\alpha) = \alpha$ for $\alpha \in F_i$ and $\beta\geq\xi_i$.
Let $\gamma < \delta^+$ be the point at which the sets $F_i$ stabilize. We claim that for all $\beta > \xi_\gamma$, $j_{\xi_\gamma,\beta}$ is the identity map, and thus $M_\beta = M_{\xi_\gamma}$. Otherwise, let $\alpha$ and $\eta$ be such that $j_{\xi_\gamma,\eta}(\alpha) > \alpha$. By the choice of $\xi_{\gamma+1}$, $\zeta := j_{\xi_\gamma,\xi_{\gamma+1}}(\alpha) > \alpha$, and $\zeta \in F_{\gamma+1} = F_\gamma$, so $\zeta$ is fixed by $j_{\xi_\gamma,\xi_{\gamma+1}}$; since also $j_{\xi_\gamma,\xi_{\gamma+1}}(\alpha) = \zeta$ and $\alpha \not= \zeta$, this contradicts injectivity.
To show the claim about $\delta^+$-trees, suppose $T$ is a tree satisfying the hypothesis. If $b$ is a $T$-generic branch over $V$, then $b$ has length $(\delta^+)^V$. If $\delta^+$ were preserved, then in the extension, $b$ would give a chain of length $\delta^+$ in $\mathcal E_\delta$, contradicting what we have shown. \end{proof}
To further explore the structure of these categories, we use countable transitive models $M \models {\rm ZFC}\xspace$ of some height $\delta$ which is the supremum of an increasing sequence $\langle \delta_n : n<\omega \rangle$ such that: \begin{enumerate} \item Each $\delta_n$ is a completely J\'onsson cardinal in $M$. \item For each $n$, $V_{\delta_n}^M \prec M$. \end{enumerate} Let us call these \emph{good models}, and let us say an ordinal $\delta$ is good if there exists a good model of height $\delta$. The existence of a good model follows from the existence of a measurable cardinal. Note that if $M$ is good, $N$ has the same ordinals, and $j : M \to N$ is elementary, then $\langle j(\delta_n) : n <\omega \rangle$ witnesses that $N$ is good. Over good models, we can build generics for $\mathbb P_\infty$ in a way that allows precise control over what happens to the cardinals $\delta_n$:
\begin{lemma} \label{controltower} Let $f : \omega \to \omega$ be any increasing function, let $M$ be good with witnessing sequence $\langle \delta_n : n < \omega \rangle$, and let $\kappa_0<\kappa_1$ be regular cardinals of $M$ below $\delta_{f(0)}$. There is $G \subseteq \mathbb P_\infty^M$ generic over $M$ such that if $j$ is the associated generic embedding, then \begin{enumerate} \item $\kappa_0 = \crit(j)$ and $j(\kappa_0) \geq \kappa_1$. \item For all $n$, $\delta_{f(n)}$ is a fixed point of $j$. \item For all $n$, if $f(n) < m < f(n+1)$, then $\delta_m$ has cardinality $\delta_{f(n)}$ in $M[G]$.
\end{enumerate}
\end{lemma}
\begin{proof} Let $\langle p_i : i < \omega \rangle$ enumerate $M$, such that for all $n$, $p_n \in V_{\delta_n}$. For each dense class $D \subseteq \mathbb P_\infty^M$ definable in $M$ from a parameter $p$, if $p \in V_{\delta_n}$, then by elementarity, $D \cap V_{\delta_n}$ is dense in $\mathbb P_{<\delta_n}$. Let $n \mapsto \langle n_0,n_1 \rangle$ be the G\"odel pairing function, and let $\langle \varphi_n(x,y) : n < \omega \rangle$ enumerate the formulas in two free variables. Now enumerate the dense subclasses of $\mathbb P_\infty$ as follows. For $n< \omega$, if $\{ x : \varphi_{n_0}(x,p_{n_1}) \}$ is dense, let $D_n$ be this class, and otherwise let $D_n = \mathbb P_\infty$.
Let $b = \{ z \subseteq \kappa_1 : z \cap \kappa_0 \in \kappa_0, \ot(z) \leq \kappa_0 \},$ which by Fact \ref{los} forces that $\kappa_1 = \ot(j[\kappa_1]) \leq j(\kappa_0)$. Then take $b' \leq b$ in $D_0 \cap V_{\delta_{f(0)}}$. Let $a_0 = \{ z \subseteq V_{\delta_{f(0)}} : z \cap \bigcup b' \in b', \ot(z \cap \delta_{f(0)} )= \delta_{f(0)} \}$, which forces $\delta_{f(0)}$ to be fixed.
We choose inductively a sequence $a_0 \geq a_1 \geq a_2 \geq \dots$ such that for all $n$: \begin{enumerate} \item $a_{n}$ is below something in $D_{n}$. \item $a_{n} \subseteq \{ z \subseteq V_{\delta_{f(n)}} : \ot(z) = \delta_{f(n)} \}$. \item $a_{n+1}$ forces $\delta_{f(n+1)-1}< j(\delta_{f(n)}^+)<\delta_{f(n+1)}$. \end{enumerate} The last condition ensures that if $f(n)<m<f(n+1)$, then $\delta_m$ is collapsed to $\delta_{f(n)}$ in $M[G]$.
Given $a_n$, take a cardinal $\kappa$, $\delta_{f(n+1)-1}< \kappa<\delta_{f(n+1)}$, and let $a_n'$ be the stationary set $\{ z \subseteq V_\kappa : z \cap \bigcup a_n \in a_n, |z| = \delta_{f(n)} \}$. By Fact \ref{los}, this forces that $\kappa = \ot(j[\kappa]) < j(\delta_{f(n)}^+)$. Then extend to $a_n'' \in D_{n+1} \cap V_{\delta_{f(n+1)}}$. Then let $a_{n+1} = \{ z \subseteq V_{\delta_{f(n+1)}} : z \cap \bigcup a_n'' \in a_n'', \ot(z \cap \delta_{f(n+1)}) = \delta_{f(n+1)}\}$. \end{proof}
\begin{theorem} \label{aronszajn} If $\delta$ is good, then $\mathcal A_\delta$ contains an honest subcategory isomorphic to an Aronszajn tree. \end{theorem}
\begin{proof} Let us first describe our mechanism for splitting into incompatible nodes. Suppose $M$ is a good model of height $\delta$ with witnessing sequence $\langle \delta_n : n < \omega \rangle$. Let us partition the natural numbers into blocks $B_0,B_1,B_2,\dots$ as follows. Let $B_0 = \emptyset$, and given $B_{n-1}$, let $B_n$ be the next $n$ numbers above $B_{n-1}$. In other words, $B_n = \{ \frac{n(n-1)}{2},\frac{n(n-1)}{2} + 1, \dots,\frac{n(n-1)}{2} + n -1 \}$ for $n>0$. Let $\langle B_n(i) : i < n \rangle$ enumerate block $B_n$ in increasing order. Using Lemma \ref{controltower}, for each $n$, we can take a $\mathbb P_\infty$-generic $G_n$ over $M$ with associated embedding $j_n$ such that: \begin{enumerate} \item $\crit(j_n) \geq \delta_n$. \item For sufficiently large $m$, $\delta_{B_m(n)}$ is fixed by $j_n$. \item For sufficiently large $m$, if $B_{m}(n)<i<B_{m+1}(n)$, then $\delta_i$ has cardinality $\delta_{B_{m}(n)}$ in $M[G_n]$. \end{enumerate}
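To fix the indexing (a quick check, not needed for the argument): $B_1 = \{0\}$, $B_2 = \{1,2\}$, $B_3 = \{3,4,5\}$, and $B_4 = \{6,7,8,9\}$, so for example $B_3(1) = 4$ and $B_4(1) = 7$. Thus, for sufficiently large $m$, the embedding $j_1$ fixes $\delta_{B_m(1)}$, e.g.\ $\delta_4, \delta_7, \delta_{11}, \dots$, while each $\delta_i$ strictly between consecutive such cardinals acquires cardinality $\delta_{B_m(1)}$ in $M[G_1]$.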
This ensures that if $n_0 < n_1$, then there is no model $N$ of ZF with the same class of ordinals $\delta$ containing both $M[G_{n_0}]$ and $M[G_{n_1}]$. For if there were such an $N$, then in $N$, for sufficiently large $m$, $|\delta_{B_m(n_0)}| = |\delta_{B_m(n_1)}| = |\delta_{B_{m+1}(n_0)}|$, and so $N$ would have a largest cardinal.
Let $M_0$ be a good model of height $\delta$, with witnessing sequence $\langle \delta_n : n <\omega \rangle$. We will build an $\omega_1$-tree $T \subseteq \,^{<\omega_1}\omega$ by induction. Each $\sigma \in T$ will be assigned a good model $M_\sigma$ of height $\delta$, and for pairs $\sigma \subseteq \tau$ in $T$, we will assign an amenable embedding $j_{\sigma,\tau} : M_\sigma \to M_\tau$, such that if $\sigma \subseteq \tau \subseteq \upsilon$, then $j_{\sigma,\upsilon} = j_{\tau,\upsilon}\circ j_{\sigma,\tau}$. When $\sigma \perp \tau$, we will arrange that there is no ZF model $N$ of height $\delta$ such that $M_\sigma,M_\tau \subseteq N$, which guarantees that there is no arrow between $M_\sigma$ and $M_\tau$ in the category $\mathcal A_\delta$. This makes the tree of models an honest subcategory. Theorem \ref{nolongchains} will ensure that $T$ is Aronszajn.
We build $T$ as a union of trees $T_\alpha$ for $\alpha < \omega_1$, where $T_\alpha$ has height $\alpha+1$, and $T_\beta$ end-extends $T_\alpha$ for $\beta>\alpha$. Along the way, we build an increasing sequence of injective functors $F_\alpha : T_\alpha \to \mathcal A_\delta$. Put $T_0 = \{\emptyset\}$ and $F_0(\emptyset) = M_0$. Given $T_\alpha$ and $F_\alpha$, form $T_{\alpha+1}$ by adding $\sigma ^\smallfrown \langle n \rangle$ for each top node $\sigma$ and each $n < \omega$. For such top nodes $\sigma$, take an embedding $j_{\sigma,\sigma^\smallfrown\langle n\rangle} : M_\sigma \to M_{\sigma^\smallfrown\langle n\rangle} = M_\sigma[G_n]$ as above, so that no two of them amalgamate. For $\tau \subseteq \sigma$, let $j_{\tau,\sigma^\smallfrown\langle n\rangle} = j_{\sigma,\sigma^\smallfrown\langle n\rangle} \circ j_{\tau,\sigma}$.
Suppose $\lambda$ is a countable limit ordinal and we have constructed $T_\alpha$ and $F_\alpha$ for $\alpha <\lambda$. Let $T_\lambda'$ and $F_\lambda'$ be the unions of the previously constructed trees and functors. We must build the top level. Let us assume the following inductive hypothesis: for each $\alpha <\lambda$, each $\sigma \in T_\alpha$, and each $n<\omega$, there is $\tau \supseteq \sigma$ at the top level of $T_\alpha$ such that $\crit(j_{\sigma,\tau}) \geq \delta_n$. To construct $T_\lambda$, use the induction hypothesis to choose for each node $\sigma\in T_\lambda'$ and each $n<\omega$, an increasing sequence $\langle \tau_i : i < \omega \rangle \subseteq T_\lambda'$ such that $\tau_0 = \sigma$, $\sup_i \len(\tau_i) = \lambda$, and for each $i < \omega$, $$\crit(j_{\tau_i,\tau_{i+1}})> \max \{ \delta_n,\delta_i,j_{\tau_0,\tau_i}(\delta_i),j_{\tau_1,\tau_i}(\delta_i),\dots,j_{\tau_{i-1},\tau_i}(\delta_i) \}.$$ Let $\tau(\sigma,n) = \bigcup_i \tau_i$. By Lemma \ref{omegalimit}, the direct limit of the system $\langle j_{\tau_i,\tau_m} : M_{\tau_i} \to M_{\tau_m} : i \leq m < \omega \rangle$ yields a model $M_{\tau(\sigma,n)}$ of height $\delta$ such that the direct limit maps $j_{\tau_i, \tau(\sigma,n)} : M_{\tau_i} \to M_{\tau(\sigma,n)}$ are amenable. If $\sigma \subseteq \upsilon \subseteq \tau(\sigma,n)$, then the map $j_{\upsilon,\tau(\sigma,n)}$ is given by $j_{\tau_i,\tau(\sigma,n)} \circ j_{\upsilon,\tau_i}$ for any $i$ such that $\tau_i \supseteq \upsilon$. Let the top level of $T_\lambda$ consist of all such $\tau(\sigma,n)$, and let the functor $F_\lambda$ be defined as above. By construction, the inductive hypothesis is preserved. \end{proof}
Similar constructions yield other kinds of uncountable trees as honest subcategories of $\mathcal A_\delta$. For example, we can build a copy of $^{<\omega}\omega$ where we also require that if $\sigma$ has length $n$, then for every initial segment $\tau \subseteq \sigma$ and every $m<\omega$, $\crit(j_{\sigma,\sigma^\smallfrown \langle m \rangle}) > j_{\tau,\sigma}(\delta_n)$. Then \emph{every} branch yields a nice direct limit. An induction as in Corollary \ref{anycountable} allows this to be extended to any countable height. Let us record this as: \begin{prop}If $\delta$ is good, then for every countable ordinal $\alpha$, there is an honest subcategory of $\mathcal A_\delta$ isomorphic to the complete binary tree of height $\alpha$. \end{prop}
\subsection{Ill-founded subcategories}
In order to construct ill-founded subcategories of $\mathcal A_\delta$, we pass to models of enough set theory that are well-founded beyond $\delta$ and contain some chosen member of $\mathcal A_\delta$, but ultimately have ill-founded $\omega_1$. One way of achieving this is via Barwise compactness \cite{MR0406760}. Another is via the following:
\begin{fact}[see \cite{MR2768692}, Section 2.6] Suppose $r$ is a real. Let $G$ be generic for $\mathcal{P}(\omega_1)/\ns$ over $L[r]$. Let $j : L[r] \to N = \Ult(L[r],G)$ be the ultrapower embedding. Then $N$ is well-founded up to $\omega_2^{L[r]}$, but $\omega_1^N$ is ill-founded. \end{fact}
We can decompose each ordinal in such a model into the sum of a purely ill-founded part and a purely well-founded part. Let $N$ be any countable ill-founded model of ${\rm ZF}\xspace -$Powerset that has standard $\omega$. Let $\xi$ be the ordertype of the set of well-founded ordinals of $N$. H. Friedman \cite{MR0347599} showed that the ordertype of the ordinals of $N$ is $\xi + (\mathbb Q \times \xi)$. Thus every $N$-ordinal $\alpha$ can be written as $\alpha = i(\alpha) + w(\alpha)$, where $w(\alpha) <\xi$, and $i(\alpha)$ is either 0 or is represented in the isomorphism as $(q,0)$, where $q \in \mathbb Q$.
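Concretely (this just unwinds the isomorphism, reading $\mathbb Q \times \xi$ lexicographically, so that each rational indexes a copy of $\xi$): if $\alpha$ lies in the well-founded part, then $i(\alpha) = 0$ and $w(\alpha) = \alpha$; if $\alpha$ corresponds to the pair $(q,\beta)$, then $i(\alpha)$ is the ordinal corresponding to $(q,0)$ and $w(\alpha) = \beta$.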
We apply this trick to construct honest subcategories of $\mathcal A_\delta$ isomorphic to three canonical ill-founded partial orders: the real numbers, the universal countable pseudotree (to be defined below), and the reverse-ordered complete binary tree. We present them in increasing order of the consistency strength of the hypothesis employed.
\begin{theorem} \label{R} Suppose there is a countable transitive model of height $\delta$ of {\rm ZFC}\xspace + ``There is a proper class of completely J\'onsson cardinals.'' Then there is a subcategory of $\mathcal A_\delta$ isomorphic to the linear order $\mathbb R$. \end{theorem}
\begin{proof} Suppose $M$ is a countable transitive model of height $\delta$ of {\rm ZFC}\xspace + ``There is a proper class of completely J\'onsson cardinals'' and $r_M$ is a real coding $M$. Let $N$ be a countable model of ${\rm ZFC}\xspace-$Powerset with standard $\omega$ and ill-founded $\omega_1$, such that $r_M \in N$ and the conclusion of Corollary \ref{anycountable} holds in $N$.
Fix some $\zeta$ that $N$ thinks is a countable ordinal but is really ill-founded. There is a subcategory of $\mathcal A_\delta$ isomorphic to the linear order $\{ \alpha : N \models \alpha < \zeta \}$. If we fix an increasing sequence $\langle \delta_n : n < \omega \rangle$ cofinal in $\delta$, then we can modify the construction in Corollary \ref{anycountable} slightly to require that if $\kappa = \crit(j_{\alpha,\alpha+1})$ and $\delta_n \leq \kappa <\delta_{n+1}$, then $j_{\alpha,\alpha+1}(\kappa) \geq \delta_{n+1}$.
Thus there is a subcategory of $\mathcal A_\delta$ isomorphic to the rationals, which has as its objects all models indexed by $i(\alpha)$ for $0<i(\alpha)<\zeta$. Let us write this as $\langle j_{a,b} : M_a \to M_b : a \leq b \in \mathbb Q \rangle$. We have that for rationals $a<b$, if $\kappa = \crit(j_{a,b})$ and $\delta_n \leq \kappa < \delta_{n+1}$, then $j_{a,b}(\kappa) \geq \delta_{n+1}$. Now we argue that we can fill in the Dedekind cuts with direct limits and get a larger subcategory of $\mathcal A_\delta$.
Suppose $r \in \mathbb R \setminus \mathbb Q$. Let $M_r$ be the direct limit and let $j_{a,r}$ be the direct limit map for rationals $a<r$. Let $b > r$ be rational. For $x \in M_r$, there is a rational $a<r$ and $y \in M_a$ such that $x = j_{a,r}(y)$. Let $j_{r,b}(x) = j_{a,b}(y)$ for any such $a,y$. The choice of $a,y$ does not matter, since if $x = j_{a,r}(y) = j_{a',r}(y')$ for $a < a'$, then $y' = j_{a,a'}(y)$, and $j_{a,b}(y) = j_{a',b}(y')$. Furthermore, if $b,b'$ are rational such that $r<b<b'$, then $j_{r,b'} = j_{b,b'} \circ j_{r,b}$. This is because if $x \in M_r$ and $a,y$ are such that $j_{a,r}(y) = x$, then $$j_{r,b'}(x) = j_{a,b'}(y) = j_{b,b'} \circ j_{a,b}(y) = j_{b,b'} \circ j_{r,b}(x).$$
If $M_r \models \varphi(x_0,\dots,x_{n-1})$, then there are a rational $a<r$ and $y_0,\dots,y_{n-1} \in M_a$ such that $j_{a,r}(y_m) = x_m$ for $m<n$, and so $M_b \models \varphi(j_{a,b}(y_0),\dots,j_{a,b}(y_{n-1}))$. Thus $j_{r,b}$ is elementary, and so $M_r$ is well-founded and has height $\delta$. For irrational numbers $r<s$,
let $j_{r,s} = j_{b,s} \circ j_{r,b}$ for any rational $b \in (r,s)$. The choice of $b$ does not matter, since for $b,b'$ rational such that $r < b < b' < s$, we have: $$j_{b,s} \circ j_{r,b} = j_{b',s} \circ j_{b,b'} \circ j_{r,b} = j_{b',s} \circ j_{r,b'}.$$ To verify the commutativity of the system, let $r<s<t$ be reals and let $x \in M_r$. Let $b,c$ be rational such that $r<b<s<c<t$. Then $$j_{r,t}(x) = j_{b,t} \circ j_{r,b}(x) = j_{c,t} \circ j_{b,c} \circ j_{r,b}(x).$$ But by definition, $j_{b,c}(z) = j_{s,c} \circ j_{b,s}(z)$ for any $z \in M_b$. Thus: $$j_{r,t}(x) = j_{c,t} \circ \ j_{s,c} \circ j_{b,s} \circ j_{r,b}(x) = j_{s,t} \circ j_{r,s}(x).$$
To show that the maps are amenable, first suppose $r$ is irrational, $a<r$ is rational, and $\gamma < \delta$. Let $\langle a_n : n < \omega \rangle$ be an increasing sequence of rationals converging to $r$, with $a_0 = a$. For $n<\omega$, let $\kappa_n = \min \{ \crit(j_{a_n,a_m}) : n<m < \omega \}$. By passing to a subsequence if necessary, we may assume that for all $n$, $\kappa_n = \crit(j_{a_n,a_{n+1}})$, and $\kappa_n \leq \kappa_{n+1}$. Now it cannot happen that for infinitely many $n$ and all $m > n$, $j_{a_n,a_m}(\kappa_n) > \kappa_m$, because then the direct limit would be ill-founded. Thus there is some $n^*$ such that for all $n \geq n^*$, there is $m>n$ such that $j_{a_n,a_m}(\kappa_n) \leq \kappa_m$. By our extra requirement on how high the critical points are sent, this means that for large enough $n$, $\kappa_n > \gamma$. Therefore, for a large enough $n$, $j_{a,r}[\gamma] = j_{a_n,r}(j_{a,a_n}[\gamma]) \in M_r$.
Now suppose $r$ is irrational, $b>r$ is rational, and $\gamma<\delta$. By the previous paragraph, there is some rational $a<r$ such that $\crit(j_{a,r}) > \gamma$. Thus $j_{r,b}[\gamma] = j_{a,b}[\gamma] \in M_b$. Finally, if $r<s$ are irrational, the map $j_{r,s}$ is amenable since it is the composition of two amenable maps $j_{b,s} \circ j_{r,b}$, for any rational $b \in (r,s)$. \end{proof}
We note also that $M_r = \bigcup_{a<r} M_a$ for any $r$. This is true for rational $r$ by the construction in the proof of Corollary \ref{anycountable}, since all limit models are direct limits. If $r$ is irrational and $x \in M_r$, then $x \in M_b$ for all rational $b > r$. If $x \notin \bigcup_{a<r} M_a$, then $N$ thinks there is some least ordinal $\gamma$ such that $x \in M_\gamma$, which must be a successor ordinal of $N$. Thus $x \notin M_{i(\gamma)}$, and $i(\gamma)$ corresponds to a rational number greater than $r$. This contradiction shows that $x \in M_a$ for some $a<r$.
Using stationary tower forcing as the basic building block, we have constructed various (well-founded) trees and linear orders as honest subcategories of $\mathcal A_\delta$. A natural generalization of these two kinds of partial orders is the notion of a \emph{pseudotree}, which is simply a partial order which is linear below any given element. By the well-known universality of $\mathbb Q$, all countable linear orders appear in $\mathcal A_\delta$ for appropriate $\delta$. In fact, this generalizes to pseudotrees:
\begin{theorem} \label{pseudo} If $\delta$ is good, then every countable pseudotree is isomorphic to an honest subcategory of $\mathcal A_\delta$. \end{theorem}
To prove this, we first construct a certain countable pseudotree which contains every other countable pseudotree as a substructure. Let $T_{\mathbb Q}$ be the collection of all partial functions $f : \mathbb Q \to \omega$ such that: \begin{enumerate} \item $\dom f$ is a proper initial segment of $\mathbb Q$. \item If $f \not= \emptyset$, then there is a finite sequence $-\infty = q_0 < q_1 < \dots < q_n = \max(\dom f)$ such that for $i <n$, $f \restriction (q_i,q_{i+1}]$ is constant. \end{enumerate} We put $f \leq g$ when $f \subseteq g$. Notice that $T_{\mathbb Q}$ satisfies the following axiom set, which we call DPM for ``Dense Pseudotrees with Meets'': \begin{enumerate} \item It's a pseudotree with a least element 0. \item Every two elements $f,g$ have an infimum $f \wedge g$. \item Infinite Splitting: For all $g,f_0,\dots,f_{n-1}$ such that $f_i \wedge f_j = g$ for $i<j < n$, there is $f_n > g$ such that $f_i \wedge f_n = g$ for $i< n$. \item Density: If $g<f$, there is $h$ such that $g<h<f$. \end{enumerate} To be more precise, we take the language to be $\{ 0, \wedge, \leq \}$, where 0 is a constant, $\wedge$ is a binary function, and $a<b$ means $a \leq b$ and $a \not= b$. The third axiom is actually an infinite scheme. Let PM be the system consisting of only the first two axioms. Note that in the theory PM, $\leq$ is definable in terms of $\wedge$: $p \leq q$ iff $p \wedge q = p$. Thus to verify that a map from one model of PM to another is an embedding, it suffices to show that it is an injection that is a homomorphism in the restricted language $\{0,\wedge\}$.
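For concreteness, here is a small illustration (it is not needed for the argument below). Consider $f, g, h \in T_{\mathbb Q}$ given by $f = \{ (q,0) : q \leq 0 \}$, $g = f \cup \{ (q,1) : 0 < q \leq 1 \}$, and $h = f \cup \{ (q,2) : 0 < q \leq 1 \}$. Each has as its domain a proper initial segment of $\mathbb Q$ and is piecewise constant in the required sense, as witnessed by the sequences $-\infty = q_0 < q_1 = 0$ for $f$ and $-\infty = q_0 < q_1 = 0 < q_2 = 1$ for $g$ and $h$. Since $f \subseteq g$ we have $f \leq g$, while $g$ and $h$ are incomparable; moreover $g \wedge h = f$, since any common lower bound of $g$ and $h$ is contained in $g \cap h = f$.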
\begin{lemma} \label{universal} Suppose $T$ satisfies PM, $Q$ satisfies DPM, $a \in T$, $S$ is a finite substructure of $T$, and $\pi : S \to Q$ is an embedding. Then there is a finite substructure $S'$ of $T$ containing $S \cup \{ a \}$ and an embedding $\pi' : S' \to Q$ extending $\pi$. \end{lemma}
\begin{proof} Let us assume $a \notin S$. Let $b = \max \{ a \wedge s : s \in S \}$. Note that $a \wedge s = b \wedge s$ for all $s \in S$ by the maximality of $b$. First we claim that $S \cup \{ b \}$ and $S \cup \{ a,b \}$ are both closed under the operation $\wedge$. Let $s_0 = \inf \{ s \in S : s \geq b \}$. Since $T$ is linearly ordered below $s_0$, for all $s \in S$, $$b \wedge s = \begin{cases} b &\text{ if } b \leq s_0 \wedge s \\ s_0 \wedge s &\text{ if } s_0 \wedge s \leq b. \\ \end{cases}$$ Thus for all $s \in S$, $b \wedge s \in S \cup \{ b \}$. Further, if $b \leq s \wedge s_0$, then $b \leq a \wedge s$.
Thus for all $s \in S$, either $a \wedge s = b$, or $b > a \wedge s$ and so $b > s \wedge s_0$, which implies $a \wedge s = b \wedge s = s \wedge s_0 \in S$. Therefore, $S \cup \{ a,b \}$ is also closed under $\wedge$.
Let us first extend $\pi$ to an embedding $\pi' : S \cup \{b \} \to Q$. If $b \notin S$, let $s_1 \in S$ be the maximum element below $b$. Use Density to pick some $b^* \in Q$ such that $\pi(s_1)<b^*<\pi(s_0)$, and let $\pi'(b) = b^*$. Suppose $s \in S$. If $s_0 \leq s$, then $\pi'(b \wedge s) = \pi'(b) = b^* = b^* \wedge \pi(s)$, since $\pi(s) \geq \pi(s_0) > b^*$. If $s_0 \nleq s$, then $s_0 \wedge s \leq s_1$ by the minimality of $s_0$, and $\pi'(b \wedge s) = \pi(s_0 \wedge s) = \pi(s_0) \wedge \pi(s) = b^* \wedge \pi(s_0) \wedge \pi(s) = b^* \wedge \pi(s)$, since $\pi(s) \wedge \pi(s_0) \leq \pi(s_1) < b^*$.
Now if $b$ is not maximal in $S \cup \{b \}$, let $\{ c_0,\dots,c_{n-1} \}$ be the set of minimal elements above $b$ in $S$, which implies that $c_i \wedge c_j = b$ for $i < j < n$. Use Infinite Splitting to find $a^* \in Q$ such that $a^* > b^*$ and, if the $c_i$ are defined, $a^* \wedge \pi(c_i) = b^*$ for $i< n$. We claim that for all $s \in S$, $a^* \wedge \pi(s) = b^* \wedge \pi(s)$. For if $s = b$, $a^* \wedge \pi(b) = a^* \wedge b^* = b^* = b^* \wedge b^*$. If $s > b$, then $s \geq c_i$ for some $i < n$, so $a^* \wedge \pi(s) = b^*$. If $s \ngeq b$, then $\pi(s) \wedge b^* < b^* < a^*$, so also $\pi(s) \wedge a^* < b^*$, and thus $a^* \wedge \pi(s) = b^* \wedge \pi(s)$. Thus for all $s \in S$, $\pi'(a \wedge s) = \pi'(b \wedge s) = b^* \wedge \pi(s) = a^* \wedge \pi(s)$. Therefore, we may define the desired extension $\pi''$ of $\pi'$ by putting $\pi''(a) = a^*$. \end{proof}
\begin{corollary} Any countable model of PM can be embedded into any model of DPM, and any two countable models of DPM are isomorphic. \end{corollary} \begin{proof} Suppose $S \models$ PM, $T \models$ DPM, and $S$ is countable. Let $\langle s_i : i < \omega \rangle$ enumerate $S$. Using Lemma \ref{universal}, we can build increasing sequences of finite substructures $S_n \subseteq S$ and embeddings $\pi_n : S_n \to T$ for $n<\omega$, such that $S_n \supseteq \{ s_i : i < n \}$. Then $\pi = \bigcup_{n<\omega} \pi_n$ is an embedding of $S$ into $T$. If we additionally assume that $T = \{ t_i : i < \omega \}$ and $S \models$ DPM, then given $\pi_n : S_n \to T$, we can find a finite substructure $T_n \supseteq \ran \pi_n \cup \{ t_n \}$ and an embedding $\sigma_n : T_n \to S$ extending $\pi_n^{-1}$. We then take $S_{n+1} \supseteq \ran \sigma_n \cup \{ s_n \}$ and $\pi_{n+1}$ extending $\sigma_n^{-1}$. Then $\pi = \bigcup_{n<\omega} \pi_n$ is an isomorphism. \end{proof}
\begin{lemma} Every pseudotree is a substructure of a model of PM. \end{lemma}
\begin{proof} Let $T$ be a pseudotree. $T$ is isomorphic to the collection of its subsets of the form $S_t = \{ x \in T : x \leq t \}$, ordered by inclusion. Extend this to a collection $T^*$ by adding $S_{t_0} \cap S_{t_1}$ for every two $t_0,t_1\in T$, and $\emptyset$ if $T$ does not already have a minimal element. We claim that $T^*$ is closed under finite intersection. Let $t_0,t_1,t_2,t_3 \in T$. The sets $S_{t_0} \cap S_{t_i}$ for $i < 4$ are initial segments of the linearly ordered set $S_{t_0}$, and so are $\subseteq$-comparable. If $j$ is such that $S_{t_0} \cap S_{t_j}$ is smallest, then $S_{t_0} \cap S_{t_j} \subseteq S_{t_i}$ for all $i<4$. To check that $T^*$ is a pseudotree, suppose $S_0,S_1,S_2 \in T^*$ and $S_0,S_1 \subseteq S_2$. Again, $S_0$ and $S_1$ are initial segments of a linear order, so one is contained in the other. \end{proof}
\begin{corollary} Every countable pseudotree can be embedded into $T_{\mathbb Q}$. \end{corollary}
Now we argue that if $\delta$ is a good ordinal, there is a countable honest subcategory of $\mathcal A_\delta$ that is a partial order satisfying DPM. Suppose $M$ is a good model of height $\delta$ and $r_M$ is a real coding $M$. Let $N$ be a countable model of ${\rm ZFC}\xspace -$ Powerset with standard $\omega$ and ill-founded $\omega_1$, such that $r_M \in N$ and $N$ satisfies the conclusion of Theorem \ref{aronszajn}. In the real universe, the object $T$ that $N$ thinks is an Aronszajn tree isomorphic to an honest subcategory of $\mathcal A_\delta^N$ is actually a countable pseudotree isomorphic to an honest subcategory of $\mathcal A_\delta$, since $\delta$ is in the well-founded part of $N$, and because the non-amalgamability of models corresponding to incompatible nodes is witnessed in an absolute way. We will find a substructure of $T$ that satisfies DPM.
As before, each $N$-ordinal $\alpha$ has a decomposition into ill-founded and well-founded parts, $\alpha = i(\alpha)+w(\alpha)$, and the collection $\{ i(\alpha) : \alpha < \omega_1^N \}$ is a dense linear order with a left endpoint. Let $S \subseteq T$ be the collection of nodes at levels $i(\alpha)$ for $\alpha < \omega_1^N$. Let us verify each axiom of DPM. We have already argued that it's a pseudotree, and we keep the root node. For infima, if $s,t \in S$ are incomparable, then $N$ thinks that there is a least ordinal $\alpha$ such that $s(\alpha) \not= t(\alpha)$. Then $s \restriction i(\alpha) = t \restriction i(\alpha)$ is the infimum of $s$ and $t$ in $S$, since $i(\alpha)<i(\beta)$ implies $\alpha<\beta$. To verify Density, just use the fact that the collection of levels of $S$ is a dense linear order. To verify Infinite Splitting, let $g,f_0,\dots,f_{n-1}$ be as in the hypothesis, so that $g = f_k \wedge_S f_m$ for $k<m<n$. Let $\gamma = \dom g$. We may select any $f_n\in S$ such that $f_n \restriction \gamma = g$ and $f_n(\gamma) \not= f_m(\gamma)$ for all $m<n$. This completes the proof of Theorem \ref{pseudo}.
\begin{remark} As pointed out by the referee, Theorem \ref{pseudo} has the following generalization. If we construct the tree of models and embeddings in the proof of Theorem \ref{aronszajn} with the additional requirement that the critical points are always sent sufficiently high, as in the proof of Theorem \ref{R}, then we can take a Dedekind completion of our universal countable pseudotree and obtain an isomorphic honest subcategory of $\mathcal A_\delta$. Such a pseudotree will be universal for the class of pseudotrees possessing a countable dense subset. \end{remark}
\begin{theorem} If there is a set of measurable cardinals of ordertype $\omega+1$, then there is a countable ordinal $\delta$ such that $\mathcal A_\delta$ contains an honest subcategory isomorphic to the reverse-ordered complete binary tree of height $\omega$. \end{theorem}
\begin{proof} Let $\kappa$ be the $(\omega+1)^{\mathrm{st}}$ measurable cardinal, with normal measure $\mathcal U$. Let $M$ be a countable transitive set such that there is an elementary embedding $\sigma : M \to (H_\theta,\in,\mathcal U)$, for $\theta>2^\kappa$ regular. Let $\kappa^*,\mathcal U^* \in M$ be such that $\sigma(\kappa^*) = \kappa$ and $\sigma(\mathcal U^*)=\mathcal U$. Then $(M,\mathcal U^*)$ is iterable. (See, for example, Lemma 19.11 of \cite{MR1994835}.)
Let $r_M$ be a real coding $(M,\mathcal U^*)$, and let $N$ be a countable model of ${\rm ZFC}\xspace - $ Powerset with standard $\omega$ and ill-founded $\omega_1$, such that $r_M \in N$, and satisfying that $(M,\mathcal U^*)$ is iterable. Let $\delta < \kappa^*$ be completely J\'onsson in $M$ and above the first $\omega$ measurables of $M$. Working in $N$, iterate $(M,\mathcal U^*)$ $\omega_1^N$-many times, and let $j : M \to M_{\omega_1^N}$ be the iterated ultrapower map. Then $j$ is the identity below $\kappa^*$, and $j(\kappa^*) = \omega_1^N$. Then take in $N$ an elementary substructure $M^*$ of $(V_{j(\kappa^*)})^{M_{\omega_1^N}}$ that $N$ thinks is countable and transitive, containing some $N$-ordinal $\zeta$ which is really ill-founded.
Now we apply Theorem \ref{wftrees} in $N$. Let $T \in N$ be a sequential tree such that $N$ thinks $T$ is well-founded and of rank $\zeta$. In $N$, there is an honest subcategory of $\mathcal A_\delta$ isomorphic to $T$. The honesty is absolute to our universe, since the incomparability is just witnessed by inequalities among the ordertypes of the first $\omega$ measurable cardinals of the various models corresponding to the nodes of $T$.
To complete the argument, we show that in our universe, there is a subtree of $T$ isomorphic to the complete binary tree of height $\omega$. It suffices to show that if $S \in N$ is a sequential tree such that $N \models \rho_S(\emptyset) = \zeta$, where $\zeta$ is really ill-founded, then there are incomparable nodes $p,q \in S$ such that $\rho_S(p)$ and $\rho_S(q)$ are both ill-founded ordinals, for then the conclusion follows by a simple induction. Now since $N \models \rho_S(t) = \sup_{s < t}(\rho_S(s) + 1)$ for all $t \in S$, there is $p \in S$ of rank $i(\zeta)$. Since $i(\zeta)$ is a limit ordinal, $p$ must have infinitely many nodes immediately below it, call them $\{ q_i : i <\omega \}$, and $N \models \rho_S(p) = \sup_i(\rho_S(q_i) + 1)$. Thus there are $n<m$ such that $\rho_S(q_n)$ and $\rho_S(q_m)$ are both ill-founded, as there is no least ill-founded ordinal. \end{proof}
\section{Questions}
In an earlier version of this manuscript, we asked whether there can be two models of {\rm ZFC}\xspace with the same ordinals such that each is elementarily embeddable into the other. This was answered in \cite{2021arXiv210812355E}.
If $M$ is a self-embeddable inner model, then in light of Theorems \ref{kunengen} and \ref{agreement}, it is interesting to ask how close $M$ can be to $V$. The second author \cite{capture} showed under some mild large cardinal assumptions that $V$ can be a class-generic forcing extension of such an $M$. Some related questions include: \begin{question} Suppose $M$ is a transitive proper class definable from parameters. \begin{enumerate} \item Can $M$ be self-embeddable and correct about cardinals? \item Can $M$ be self-embeddable and correct about cofinalities? \item Can $M$ be embeddable into $V$ and be correct about either cardinals or cofinality $\omega$? \end{enumerate} \end{question}
Our direct-limit constructions in Section \ref{amcat} were particular to countable models. Thus it is natural to ask: \begin{question} Is it consistent that there is an ordinal $\delta$ of uncountable cofinality such that $\mathcal A_\delta$ contains a copy of the linear order $\omega+1$? \end{question}
Although we ruled out the possibility of a Suslin tree being isomorphic to a subcategory of $\mathcal A_\delta$ for a countable $\delta$, we don't know much else about the combinatorial properties of the Aronszajn tree constructed in Theorem \ref{aronszajn}. \begin{question} Characterize the class of Aronszajn trees that can be isomorphic to an honest subcategory of $\mathcal A_\delta$ for a countable $\delta$. Can it contain non-special trees? \end{question}
Our methods for building upward-growing and downward-growing trees as honest subcategories of $\mathcal A_\delta$ are quite different, and we don't know if they can be combined in any interesting way. We would like to know if there is any structural asymmetry in this category, or rather:
\begin{question}Can an $\mathcal A_\delta$ contain an honest subcategory isomorphic to a ``diamond'', i.e. the standard partial order on the four-element boolean algebra? \end{question}
\begin{question} If $\mathbb P$ is a partial order isomorphic to an honest subcategory of $\mathcal A_\delta$, does this hold for the reverse of $\mathbb P$? \end{question}
\end{document} | arXiv |
Quasiregular element
In mathematics, specifically ring theory, the notion of quasiregularity provides a computationally convenient way to work with the Jacobson radical of a ring.[1] Intuitively, quasiregularity captures what it means for an element of a ring to be "bad"; that is, to have undesirable properties. In this article, we primarily concern ourselves with the notion of quasiregularity for unital rings. However, one section is devoted to the theory of quasiregularity in non-unital rings, which constitutes an important aspect of noncommutative ring theory.
This article addresses the notion of quasiregularity in the context of ring theory, a branch of modern algebra. For other notions of quasiregularity in mathematics, see the disambiguation page quasiregular.
Definition
Let R be a ring (with unity) and let r be an element of R. Then r is said to be quasiregular, if 1 − r is a unit in R; that is, invertible under multiplication.[1] The notions of right or left quasiregularity correspond to the situations where 1 − r has a right or left inverse, respectively.[1]
An element x of a non-unital ring is said to be right quasiregular if there is y such that $x+y-xy=0$.[2] The notion of a left quasiregular element is defined in an analogous manner. The element y is sometimes referred to as a right quasi-inverse of x.[3] If the ring is unital, this definition of quasiregularity coincides with that given above.[4] If one writes $x\cdot y=x+y-xy$, then this binary operation $\cdot $ is associative.[5] In fact, the map $(R,\cdot )\to (R,\times );x\mapsto 1-x$ (where × denotes the multiplication of the ring R) is a monoid isomorphism.[4] Therefore, if an element possesses both a left and right quasi-inverse, they are equal.[6]
Note that some authors use different definitions. They call an element x right quasiregular if there exists y such that $x+y+xy=0$,[7] which is equivalent to saying that 1 + x has a right inverse when the ring is unital. If we write $x\circ y=x+y+xy$, then $(-x)\circ (-y)=-(x\cdot y)$, so we can easily go from one set-up to the other by changing signs.[8] For example, x is right quasiregular in one set-up iff −x is right quasiregular in the other set-up.[8]
Examples
• If R is a ring, then the additive identity of R is always quasiregular.
• If $x^{2}$ is right (resp. left) quasiregular, then $x$ is right (resp. left) quasiregular.[9]
• If R is a rng, every nilpotent element of R is quasiregular.[10] This fact is supported by an elementary computation:
If $x^{n+1}=0$, then
$(1-x)(1+x+x^{2}+\dotsb +x^{n})=1$ (or $(1+x)(1-x+x^{2}-\dotsb +(-x)^{n})=1$ if we follow the second convention).
From this we see easily that the quasi-inverse of x is $-x-x^{2}-\dotsb -x^{n}$ (or $-x+x^{2}-\dotsb +(-x)^{n}$).
• In the second convention, a matrix is quasiregular in a matrix ring if it does not possess −1 as an eigenvalue. More generally, a bounded operator is quasiregular if −1 is not in its spectrum.
• In a unital Banach algebra, if $\|x\|<1$, then the geometric series $\sum _{0}^{\infty }x^{n}$ converges. Consequently, every such x is quasiregular.
• If R is a ring and S = R[[X1, ..., Xn]] denotes the ring of formal power series in n indeterminates over R, an element of S is quasiregular if and only if its constant term is quasiregular as an element of R.
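The elementary computation for nilpotent elements above is easy to check numerically. The following sketch is purely illustrative: it assumes NumPy is available, the particular matrix is an arbitrary nilpotent example, and it uses the first convention $x\cdot y=x+y-xy$.

import numpy as np

# A strictly upper-triangular matrix is nilpotent; here x @ x @ x is the zero matrix, so n = 2.
x = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])
I = np.eye(3)

# Quasi-inverse of x in the first convention: y = -x - x^2 - ... - x^n.
y = -(x + x @ x)

# x + y - xy should be the additive identity (the zero matrix) ...
print(np.allclose(x + y - x @ y, np.zeros((3, 3))))   # True

# ... equivalently, 1 - x is a unit with inverse 1 + x + x^2.
print(np.allclose((I - x) @ (I + x + x @ x), I))      # True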
Properties
• Every element of the Jacobson radical of a (not necessarily commutative) ring is quasiregular.[11] In fact, the Jacobson radical of a ring can be characterized as the unique right ideal of the ring, maximal with respect to the property that every element is right quasiregular.[12][13] However, a right quasiregular element need not necessarily be a member of the Jacobson radical.[14] This justifies the remark in the beginning of the article – "bad elements" are quasiregular, although quasiregular elements are not necessarily "bad". Elements of the Jacobson radical of a ring are often deemed to be "bad".
• If an element of a ring is nilpotent and central, then it is a member of the ring's Jacobson radical.[15] This is because the principal right ideal generated by that element consists of quasiregular (in fact, nilpotent) elements only.
• If an element, r, of a ring is idempotent, it cannot be a member of the ring's Jacobson radical.[16] This is because idempotent elements cannot be quasiregular. This property, as well as the one above, justify the remark given at the top of the article that the notion of quasiregularity is computationally convenient when working with the Jacobson radical.[1]
Generalization to semirings
The notion of quasiregular element readily generalizes to semirings. If a is an element of a semiring S, then the affine map $\mu _{a}$ from S to itself is defined by $\mu _{a}(r)=ra+1$. An element a of S is said to be right quasiregular if $\mu _{a}$ has a fixed point, which need not be unique. Each such fixed point is called a left quasi-inverse of a. If b is a left quasi-inverse of a and additionally b = ab + 1, then b is called a quasi-inverse of a; any element of the semiring that has a quasi-inverse is said to be quasiregular. It is possible that some but not all elements of a semiring are quasiregular; for example, in the semiring of nonnegative reals with the usual addition and multiplication of reals, $\mu _{a}$ has the fixed point ${\frac {1}{1-a}}$ for all a < 1, but has no fixed point for a ≥ 1.[17] If every element of a semiring is quasiregular then the semiring is called a quasi-regular semiring, closed semiring,[18] or occasionally a Lehmann semiring[17] (the latter honoring the paper of Daniel J. Lehmann[19]).
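As a small numerical illustration of the nonnegative-reals example (a sketch only; the function names, starting point and step count are arbitrary choices, not part of any standard library), repeatedly applying $\mu _{a}(r)=ra+1$ converges to the fixed point 1/(1 − a) when a < 1 and grows without bound when a ≥ 1:

def mu(a, r):
    # The affine map mu_a(r) = r*a + 1, here on the semiring of nonnegative reals.
    return r * a + 1

def iterate_mu(a, steps=100):
    # Apply mu_a repeatedly starting from 0; a fixed point of mu_a is a left quasi-inverse of a.
    r = 0.0
    for _ in range(steps):
        r = mu(a, r)
    return r

print(iterate_mu(0.5))   # approximately 2.0 = 1 / (1 - 0.5), the fixed point of mu_0.5
print(iterate_mu(1.0))   # 100.0 -- the iterates grow without bound; mu_1 has no fixed point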
Examples of quasi-regular semirings are provided by the Kleene algebras (prominently among them, the algebra of regular expressions), in which the quasi-inverse is lifted to the role of a unary operation (denoted by a*) defined as the least fixed-point solution. Kleene algebras are additively idempotent, but not all quasi-regular semirings are so. We can extend the example of nonnegative reals to include infinity, and it becomes a quasi-regular semiring with the quasi-inverse of any element a ≥ 1 being infinity. This quasi-regular semiring is not additively idempotent, however, so it is not a Kleene algebra.[18] It is however a complete semiring.[20] More generally, all complete semirings are quasiregular.[21] The term closed semiring is actually used by some authors to mean complete semiring rather than just quasiregular.[22][23]
Conway semirings are also quasiregular; the two Conway axioms are actually independent, i.e. there are semirings satisfying only the product-star [Conway] axiom, (ab)* = 1+a(ba)*b, but not the sum-star axiom, (a+b)* = (a*b)*a* and vice versa; it is the product-star [Conway] axiom that implies that a semiring is quasiregular. Additionally, a commutative semiring is quasiregular if and only if it satisfies the product-star Conway axiom.[17]
Quasiregular semirings appear in algebraic path problems, a generalization of the shortest path problem.[18]
See also
• inverse element
Notes
1. Isaacs, p. 180
2. Lam, Ex. 4.2, p. 50
3. Polcino & Sehgal (2002), p. 298.
4. Lam, Ex. 4.2(3), p. 50
5. Lam, Ex. 4.1, p. 50
6. Since 0 is the multiplicative identity, if $x\cdot y=0=y'\cdot x$, then $y=(y'\cdot x)\cdot y=y'\cdot (x\cdot y)=y'$. Quasiregularity does not require the ring to have a multiplicative identity.
7. Kaplansky, p. 85
8. Lam, p. 51
9. Kaplansky, p. 108
10. Lam, Ex. 4.2(2), p. 50
11. Isaacs, Theorem 13.4(a), p. 180
12. Isaacs, Theorem 13.4(b), p. 180
13. Isaacs, Corollary 13.7, p. 181
14. Isaacs, p. 181
15. Isaacs, Corollary 13.5, p. 181
16. Isaacs, Corollary 13.6, p. 181
17. Jonathan S. Golan (30 June 2003). Semirings and Affine Equations over Them. Springer Science & Business Media. pp. 157–159 and 164–165. ISBN 978-1-4020-1358-4.
18. Marc Pouly; Jürg Kohlas (2011). Generic Inference: A Unifying Theory for Automated Reasoning. John Wiley & Sons. pp. 232 and 248–249. ISBN 978-1-118-01086-0.
19. Lehmann, D. J. (1977). "Algebraic structures for transitive closure" (PDF). Theoretical Computer Science. 4: 59–76. doi:10.1016/0304-3975(77)90056-1.
20. Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. doi:10.1007/978-3-642-01492-5_1, pp. 7-10
21. U. Zimmermann (1981). Linear and combinatorial optimization in ordered algebraic structures. Elsevier. p. 141. ISBN 978-0-08-086773-1.
22. Dexter Kozen (1992). The Design and Analysis of Algorithms. Springer Science & Business Media. p. 31. ISBN 978-0-387-97687-7.
23. J.A. Storer (2001). An Introduction to Data Structures and Algorithms. Springer Science & Business Media. p. 336. ISBN 978-0-8176-4253-2.
References
• I. Martin Isaacs (1993). Algebra, a graduate course (1st ed.). Brooks/Cole Publishing Company. ISBN 0-534-19002-2.
• Irving Kaplansky (1969). Fields and Rings. The University of Chicago Press.
• Lam, Tsit-Yuen (2003). Exercises in Classical Ring Theory. Problem Books in Mathematics (2nd ed.). Springer-Verlag. ISBN 978-0387005003.
• Milies, César Polcino; Sehgal, Sudarshan K. (2002). An introduction to group rings. Springer. ISBN 978-1-4020-0238-0.
| Wikipedia |
\begin{definition}[Definition:Vector Sum]
Let $\mathbf u$ and $\mathbf v$ be vector quantities of the same physical property.
\end{definition} | ProofWiki |
\begin{definition}[Definition:Strict Lower Closure/Element]
Let $\left({S, \preccurlyeq}\right)$ be an ordered set.
Let $a \in S$.
The '''strict lower closure of $a$ (in $S$)''' is defined as:
:$a^\prec := \left\{{b \in S: b \preccurlyeq a \land a \ne b}\right\}$
or:
:$a^\prec := \left\{{b \in S: b \prec a}\right\}$
That is, $a^\prec$ is the set of all elements of $S$ that strictly precede $a$.
\end{definition} | ProofWiki |
\begin{document}
\title{Extensible Proof Systems for Infinite-State Systems
\thanks{Research supported by US National Science Foundation grant CNS-1446365 and US Office of Naval Research grant N00014-17-1-2622.} }
\author{Rance Cleaveland\inst{1}\orcidID{0000-0002-4952-5380} \and
Jeroen J.A. Keiren\inst{2}\orcidID{0000-0002-5772-9527}}
\authorrunning{R. Cleaveland \and J.J.A. Keiren}
\institute{Department of Computer Science, University of Maryland, College Park, Maryland, USA\\
\email{[email protected]}
\and
Department of Mathematics and Computer Science, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands\\
\email{[email protected]}
}
\maketitle
\begin{abstract}
This paper revisits soundness and completeness of proof systems for proving that sets of states in infinite-state labeled transition systems satisfy formulas in the modal mu-calculus.
Our results rely on novel results in lattice theory, which give constructive characterizations of both greatest and least fixpoints of monotonic functions over complete lattices.
We show how these results may be used to reconstruct the sound and complete tableau method for this problem due to Bradfield and Stirling.
We also show how the flexibility of our lattice-theoretic basis simplifies reasoning about tableau-based proof strategies for alternative classes of systems.
In particular, we extend the modal mu-calculus with timed modalities, and prove that the resulting tableau method is sound and complete for timed transition systems.
\keywords{mu-calculus \and model checking \and infinite-state systems.} \end{abstract}
\section{Introduction}
Proof systems provide a means for proving sequents in formal logics, and are intended to reduce reasoning about objects in a given theory to syntactically checkable proofs consisting of applications of proof rules to the sequents in question. When a proof system is sound, every provable sequent is indeed semantically valid; when it is in addition complete, it follows that every semantically valid sequent can be proved within the proof system. Because they manipulate syntax, the construction of proofs within a proof system can be automated; proof assistants such as Coq~\cite{bertot2013interactive} and Nuprl~\cite{constable1986implementing} are built around this observation. Within the model-checking community, the fully automatic construction of proofs based on sound and complete proof systems for decidable theories provides a basis for establishing the correctness, or incorrectness, of systems \emph{vis \`a vis} properties they are expected to satisfy. In addition, proof systems can be used as a basis for different approaches to model checking. In classical \emph{global} model-checking techniques~\cite{clarke2018model}, one uses the proof rules to prove properties of states with respect to larger and larger subformulas of the given formula, until one shows that the start state(s) of the system either do, or do not, satisfy the original formula. In \emph{local}~\cite{SW1991}, or \emph{on-the-fly}~\cite{bhat1995efficient} methods, in contrast, one uses the proof rules to conduct backward reasoning from the start states and original formula, applying proof rules ``in reverse'' to generate subgoals that require proving in order for the original sequents to be true. The virtue of on-the-fly model checking is that proofs can often be completed, or be shown not to exist, without having to examine all states and all subformulas.
Driven by applications in model checking, proof systems have been developed for establishing that finite-state systems satisfy formulas captured in a very expressive temporal logic, the \emph{(propositional) modal mu-calculus}~\cite{Koz1983}. These in turn have been used as a basis for efficient model-checking procedures for fragments of this logic~\cite{CS1993}. Work has also shown that for interesting fragments of the mu-calculus, global and on-the-fly techniques exhibit the same worst-case complexity~\cite{CS1993,mateescu2003efficient}, meaning that the early-termination feature of on-the-fly approaches does not incur additional overhead in the worst case.
Researchers have also developed sound and complete proof systems for infinite-state systems and the modal mu-calculus~\cite{BS1992,Bra1991}, a prototype implementation of which was described in~\cite{Bra1993}. In this case, the sequents, instead of involving single states and formulas in the logic as in the finite-state case, refer to potentially infinite sets of states. A variation of these proof systems that localizes validity by annotating fixed points, and that explicitly provides well-founded order on states satisfying least fixed points, was described in~\cite{And1993}. Ultimately, the success conditions in proof systems for infinite state systems are based on the fundamental theorem of the modal mu-calculus due to Streett and Emerson~\cite{SE1989}. For a detailed yet accessible account of the fundamental theorem, the reader is referred to~\cite{BS2001}.
While in general infinite-state model checking in the modal mu-calculus is undecidable, specialized proof systems for modifications of the so-called \emph{alternation-free} fragment of the mu-calculus~\cite{FC2014,DC2005} can lead to efficient on-the-fly model checkers for \emph{timed automata}~\cite{AD1994}, a class of infinite-state systems whose model-checking problem is decidable.
Ideally, one should be able to prove soundness and completeness of these timed-automata proof systems by referring to the general results for infinite-state systems and the modal mu-calculus. One should also be able to develop checkers for larger fragments than the alternation-free fragment so that more types of properties can be processed. However, there are several obstacles to this desirable state of affairs.
\begin{enumerate}
\item The intricacy of the proof system in~\cite{BS1992} means that modifications to it in essence require re-proofs of soundness and completeness from scratch.
\item The proof systems used for on-the-fly model checking of timed automata require several modifications to the modal mu-calculus: modalities for reasoning about time, and computable proof-termination criteria to enable detection of when a proof attempt is complete.
\item For efficiency reasons, construction of proofs in the on-the-fly model checkers must also use different proof-construction strategies, and this prevents applying reasoning from the infinite-state mu-calculus proof system to establishing the correctness of these procedures. \end{enumerate} These issues have limited the practical application of the proof system in~\cite{BS1992}, although its theoretical contribution is rightfully very highly regarded.
In this paper the goal is to revisit the proof system for general infinite-state systems and the modal mu-calculus with a view toward developing new, extensible proofs of soundness and completeness. Concretely, our contributions are the following.
We first introduce \emph{support orderings}, along with general lattice-theoretical results, that formalize the dependencies between states that satisfy given fixpoint formulas. This, in essence, gives a purely semantic, constructive account of the least and greatest fixpoints of monotonic functions over subset lattices. These support orderings are closely related to Streett and Emerson's \emph{regeneration relations}~\cite{SE1989}, as well as Bradfield and Stirling's \emph{(extended) paths}~\cite{Bra1991,BS1992}, although unlike those works our results do not rely on infinitary syntactic manipulations of mu-calculus formulas such as ordinal unfoldings.
We next recall the proof system of~\cite{BS1992}, and show that the soundness of the proof system follows from the lattice-theoretical results. In particular, a syntactic ordering, the \emph{extended dependency ordering}, derived from extended paths is a support ordering, and from this observation soundness of the proof system follows straightforwardly. In a similar way, given a support ordering induced by our lattice-theoretical results, we construct a tableau whose extended dependencies respect the given support ordering. This establishes completeness. To facilitate the completeness proof, we first establish a novel notion of well-founded induction in the context of mutually recursive fixpoints.
Finally, we show that these results also permit extensions to the mu-calculus to be easily incorporated in a soundness- and completeness-preserving manner. They also simplify reasoning about new proof-termination criteria. To show this, we first modify the termination criterion of the proof system, such that sequents for least-fixpoint formulas with a non-empty set of states are always unfolded. This results in a proof system that is (evidently) not complete, but whose soundness follows trivially from our earlier soundness result. Second, we consider a proof system for timed transition systems, and a mu-calculus with two additional, timed modalities~\cite{FC2014}. The proofs of soundness and completeness are, indeed, straightforward extensions of our earlier results. To the best of our knowledge, ours is also the first sound and complete proof system for a timed mu-calculus.
The rest of the paper develops along the following lines. In the next section we review mathematical preliminaries used in the rest of the paper. We then state and prove our lattice-theoretical results in Section~\ref{sec:support-orderings}, while Section~\ref{sec:mu-calculus} introduces the syntax and semantics of the modal mu-calculus and establishes some properties that will prove useful. We present the proof system of~\cite{BS1992} in Section~\ref{sec:base-proof-system}. In Section~\ref{sec:Soundness-via-support-orderings}, we show that the soundness of the proof system follows from the lattice-theoretical results, while completeness is addressed in Section~\ref{sec:Completeness}. In Sections~\ref{sec:proof-search} and~\ref{sec:timed-mu-calculus} we illustrate how our new approach accommodates changes to the proof system that are needed for efficient on-the-fly model checking for timed automata and other decidable formalisms. Conclusions and future work are discussed in Section~\ref{sec:Conclusions}.
\section{Mathematical preliminaries}\label{sec:preliminaries}
This section defines basic terminology used in the sequel for finite sequences, (partial) functions, binary relations, lattices and fixpoints, and finite trees.
\subsection{Sequences}\label{subsec:sequences}
As usual, sequences are ordered collections of elements $x_1 \cdots x_n$, where each $x_i$ is taken from a given set $X$. \begin{notation}[Sequences]
Let $X$ be a set.
\begin{itemize}
\item $X^*$ is the set of finite, possibly empty, sequences of elements from $X$.
\item The \emph{empty sequence} in $X^*$ is denoted $\varepsilon$.
\item We take $X \subseteq X^*$, with each $x \in X$ being a single-element sequence in $X^*$.
\item
Suppose $\vec{w} = x_1 \cdots x_n \in X^*$, where each $x_i \in X$. Then $|\vec{w}| = n$ denotes the \emph{length} of $\vec{w}$. Note that $| \varepsilon | = 0$, and $|x| = 1$ if $x \in X$.
\item
If $\vec{w}_1, \vec{w}_2 \in X^*$ then $\vec{w}_1 \cdot \vec{w}_2 \in X^*$ is the \emph{concatenation} of $\vec{w}_1$ and $\vec{w}_2$. We often omit $\cdot$ and write e.g.\/ $\vec{w}_1\vec{w}_2$ for $\vec{w}_1 \cdot \vec{w}_2$.
\item
Suppose $\vec{w}_1, \vec{w}_2 \in X^*$. Then we write $\vec{w}_1 \preceq \vec{w}_2$ if $\vec{w}_1$ is a (not necessarily strict) \emph{prefix} of $\vec{w}_2$, and $\vec{w}_1 \npreceq \vec{w}_2$ if $\vec{w}_1$ is not a prefix of $\vec{w}_2$.
\item
Let $\vec{w} = x_1 \cdots x_n \in X^*$. Then $\textit{set}(\vec{w}) \subseteq X$, the \emph{set associated with} $\vec{w}$, is defined to be $\{x_1, \ldots, x_n\}$. Note that $\textit{set}(\varepsilon) = \emptyset$, and $|\textit{set}(\vec{w})| \leq |\vec{w}|$.
\item
Sequence $\vec{w} \in X^*$ is \emph{duplicate-free} iff $|\vec{w}| = |\textit{set}(\vec{w})|$.
\item
Sequence $\vec{w} \in X^*$ is a \emph{permutation}, or \emph{ordering}, of $X$ iff $\vec{w}$ is duplicate-free and $\textit{set}(\vec{w}) = X$.
\end{itemize} \end{notation}
\noindent Note that only finite sets can have permutations / orderings in this definition.
\subsection{Partial functions}
In this paper we make significant use of partial as well as total functions. This section introduces notation we use for such functions.
\begin{notation}[Partial functions] Let $X$ and $Y$ be sets. \begin{itemize}
\item
Relation $f \subseteq X \times Y$ is \emph{functional} iff for all $x \in X$ and $y_1, y_2 \in Y$, if $(x,y_1) \in f$ and $(x,y_2) \in f$ then $y_1 = y_2$. We call $f$ a \emph{partial function} from $X$ to $Y$ and use $X \to_{\perp} Y$ to denote the set of all partial functions from $X$ to $Y$.
\item
Suppose $f \in X \to_{\perp} Y$ and $x \in X$. If there is $y \in Y$ such that $(x,y) \in f$ then we write $f(x)$ as usual to denote this $y$ and say $f$ is \emph{defined} for $x$ in this case. We will also write $f(x) \in Y$ to denote that $f$ is defined for $x$. If there exists no $y \in Y$ such that $(x,y) \in f$ then we say that $f$ is \emph{undefined} for $x$ and write $f(x){\perp}$.
\item
If $f \in X \to_{\perp} Y$ then we call $\operatorname{dom}(f) = \{ x \in X \mid f(x) \in Y\}$ the \emph{domain of definition} of $f$.
\item
$f \in X \to_{\perp} Y$ is \emph{total} iff $\operatorname{dom}(f) = X$. We write $X \to Y$ as usual for the set of total functions from $X$ to $Y$. Note that $X \to Y \subseteq X \to_{\perp} Y$.
\item
If $f, g \in X \to_{\perp} Y$ then $f = g$ iff $\operatorname{dom}(f) = \operatorname{dom}(g)$ and for all $x \in \operatorname{dom}(f)$, $f(x) = g(x)$. \end{itemize} \end{notation}
\noindent Note that partial functions are equal exactly when they are defined on the same elements and return the same values when they are defined. We also use the following standard operations on partial functions. \begin{notation}[Function operations] Let $X$ and $Y$ be sets. \begin{itemize}
\item
The \emph{everywhere undefined} function $f_\emptyset \in X \to_{\perp} Y$ is defined as $f_\emptyset = \emptyset \subseteq X \times Y$. Note that $f_\emptyset(x) {\perp}$ for all $x \in X$ and thus $\operatorname{dom}(f_\emptyset) = \emptyset$.
\item
Let $f \in X \to_{\perp} X$ and $i \in \mathbb{N}$. Then $f^i \in X \to_{\perp} X$ is defined as follows.
\[
f^i(x) =
\begin{cases}
x & \text{if $i = 0$} \\
f(f^{i-1}(x)) & \text{otherwise}
\end{cases}
\]
Note that $f^0$ is total and that $f^i(x){\perp}$ iff $f(f^j(x)) {\perp}$ for some $j < i$. Also note that if $f$ is total then so is $f^i$ for all $i \geq 0$.
\item
Suppose $f \in X \to_{\perp} Y$, and let $x \in X$ and $y \in Y$. Then $f[x:=y] \in X \to_{\perp} Y$ is defined as follows.
\[
f[x:=y](x') =
\begin{cases}
y & \text{if $x'=x$} \\
f(x') & \text{if $x' \neq x$ and $f(x') \in Y$}
\end{cases}
\]
Note that even if $f(x){\perp}$, $f[x:=y]$ is nevertheless defined for $x$, and that if $f$ is total then so is $f[x:=y]$.
This notion can be generalized to $f[\vec{x} := \vec{y}]$, where $\vec{x} \in X^*$ is duplicate-free and $\vec{y} \in Y^*$ is such that $|\vec{x}| = |\vec{y}|$, in the obvious fashion.
\item
Let $f \in X \to_{\perp} Y$ and $\vec{w} = x_1 \cdots x_n \in X^*$. Then $f \in X^* \to_{\perp} Y^*$ is defined by $f(\vec{w}) = f(x_1) \cdots f(x_n)$. Note that if $f(x) {\perp}$ for any $x \in \textit{set}(\vec{w})$ then $f(\vec{w}) {\perp}$. \end{itemize} \end{notation}
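Purely as an informal illustration of this notation (it plays no role in the formal development), a partial function with a finite domain of definition can be pictured as a finite map. The following Python sketch mirrors the update operation $f[x:=y]$ and the iterate $f^i$; the representation and the helper names are arbitrary choices of ours.
\begin{verbatim}
# A partial function f from X to X represented as a dict;
# a missing key means that f is undefined there.
f = {1: 2, 2: 3}            # f(1) = 2, f(2) = 3, f undefined elsewhere

def update(f, x, y):
    # f[x := y]: defined at x with value y, and unchanged elsewhere.
    g = dict(f)
    g[x] = y
    return g

def iterate(f, i, x):
    # f^i(x): i-fold application; None signals undefinedness.
    for _ in range(i):
        if x not in f:
            return None
        x = f[x]
    return x

assert iterate(f, 0, 5) == 5      # f^0 is total: f^0(x) = x
assert iterate(f, 2, 1) == 3      # f^2(1) = f(f(1)) = 3
assert iterate(f, 3, 1) is None   # f^3(1) is undefined, since f(3) is undefined
\end{verbatim}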
\subsection{Binary relations}
Later in this paper we refer extensively to the theory of binary relations over a given set $X$. This section summarizes some of the concepts used in what follows.
\begin{definition}[Binary relations] Let $X$ be a set. Then a \emph{binary relation} over $X$ is a subset $R \subseteq X \times X$. \end{definition}
\noindent When $R$ is a binary relation over $X$ we usually write $x_1 \inr{R} x_2$ in lieu of $(x_1, x_2) \in R$ and $x_1 \inr{\centernot R} x_2$ instead of $(x_1, x_2) \not\in R$. We now recall the following terminology.
\begin{definition}[Preorders, partial orders and equivalence relations]~\label{def:relations} Let $R \subseteq X \times X$ be a binary relation over $X$. \begin{enumerate}
\item
$R$ is \emph{reflexive} iff $x \inr{R} x$ for all $x \in X$.
\item
$R$ is \emph{symmetric} iff whenever $x_1 \inr{R} x_2$ then $x_2 \inr{R} x_1$.
\item
$R$ is \emph{anti-symmetric} iff whenever $x_1 \inr{R} x_2$ and $x_2 \inr{R} x_1$ then $x_1 = x_2$.
\item
$R$ is \emph{transitive} iff whenever $x_1 \inr{R} x_2$ and $x_2 \inr{R} x_3$ then $x_1 \inr{R} x_3$.
\item
$R$ is a \emph{preorder} iff $R$ is reflexive and transitive.
\item
$R$ is a \emph{partial order} iff $R$ is reflexive, anti-symmetric and transitive.
\item
$R$ is an \emph{equivalence relation} iff $R$ is reflexive, symmetric and transitive. \end{enumerate} \end{definition}
\noindent We also use the following standard, if less well-known, definitions.
\begin{definition}[Irreflexive and total relations]\label{def:irreflexive-total} Let $R \subseteq X \times X$. \begin{enumerate}
\item
$R$ is \emph{irreflexive} iff for every $x \in X$, $x \inr{\centernot R} x$.
\item
$R$ is a \emph{(strict) total order} iff it is irreflexive and transitive and satisfies: for all $x_1 \neq x_2 \in X$, either $x_1 \inr{R} x_2$ or $x_2 \inr{R} x_1$. \end{enumerate} \end{definition}
\noindent A relation $R$ over $X$ is irreflexive iff no element in $X$ is related to itself. It is total exactly when any distinct $x_1, x_2 \in X$ are \emph{comparable}, one way or another, via $R$. This version of totality is often called \emph{strict totality}, although we drop the qualifier ``strict" in this paper. The following relations are used later.
\begin{definition}[Identity, universal relations]\label{def:identity-universal-relations} Let $X$ be a set. \begin{enumerate}
\item
The \emph{identity relation over $X$} is defined as $\mathit{Id}_X = \{ (x,x) \mid x \in X\}$.
\item
The \emph{universal relation over $X$} is defined as $\mathit{U}_X = X \times X$. \end{enumerate} \end{definition}
\noindent We also use the following operations on binary relations. \begin{definition}[Relational operations]\label{def:relation-operations} Let $R, R'$ be binary relations over $X$. \begin{enumerate}
\item
$R'$ \emph{extends} $R$ iff $R \subseteq R'$.
\item
Let $X' \subseteq X$. Then the \emph{restriction} of $R$ with respect to $X'$ is the binary relation $\restrict{R}{X'}$ over $X'$ defined as follows:
\[
\restrict{R}{X'} = R \cap (X' \times X') = \{(x_1, x_2) \in X' \times X' \mid x_1 \inr{R} x_2\}.
\]
\item
The \emph{relational composition} of $R$ and $R'$ is the binary relation $R \mathbin{;} R'$ over $X$ defined as follows.
\[
R \mathbin{;} R' =
\{(x_1, x_3) \in X \times X \mid \exists x_2 \colon x_2 \in X \colon x_1 \inr{R} x_2 \land x_2 \inr{R'} x_3\}
\]
\item
The \emph{inverse} of $R$ is the binary relation $R^{-1}$ over $X$ defined by:
\[
R^{-1} = \{(x_2, x_1) \in X \times X \mid x_1 \inr{R} x_2\}
\]
\item\label{it:relation-image}
Let $X' \subseteq X$. Then the \emph{image}, $\img{R}{X'}$, of $R$ with respect to $X'$ is defined by
\[
\img{R}{X'} = \{x \in X \mid \exists x' \colon x' \in X' \colon x' \inr{R} x\}.
\]
If $x \in X$ then we write $\img{R}{x}$ in lieu of $\img{R}{\{x\}}$.
\item\label{it:relation-preimage}
Let $X' \subseteq X$. Then the \emph{pre-image} of $R$ with respect to $X'$ is the image $\preimg{R}{X'}$ of $R^{-1}$ with respect to $X'$.
\item\label{item:irreflexive-core}
The \emph{irreflexive core} of $R$ is the binary relation $R^-$ over $X$ defined by
\[
R^- = R \setminus \mathit{Id}_X.
\]
\item\label{item:reflexive-closure}
The \emph{reflexive closure} of $R$ is the binary relation $R^=$ given by
\[
R^= = R \cup \mathit{Id}_X.
\]
\item\label{subdef:transitive-closure}
The transitive closure, $R^+$, of $R$ is the least transitive relation extending $R$.
\item\label{subdef:reflexive-transitive-closure}
The reflexive and transitive closure, $R^*$ of $R$, is defined by
\[
R^* = (R^+)^=.
\] \end{enumerate} \end{definition}
\noindent Note that based on the definition of $R^{-1}$, the following holds.
\[
\preimg{R}{X'} = \{x \in X \mid \exists x' \colon x' \in X' \colon x' \inr{R^{-1}} x \}
= \{x \in X \mid \exists x' \colon x' \in X' \colon x \inr{R} x'\}
\]
Relation $R^+$ is guaranteed to exist for arbitrary set $X$ and relation $R$ over $X$, and from the definition it is immediate that $R$ itself is transitive iff $R^+ = R$. $R^+$ also has the following alternative characterization. Define \begin{align*}
R^1 &= R\\
R^{i+1} &= R \mathbin{;} R^i. \end{align*} Then $R^+ = \bigcup_{i=1}^\infty R^i$. It may also be shown that for any relation $R \subseteq X \times X$, $R^*$ is the unique smallest preorder that extends $R$.
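For a finite relation this characterization is directly computable; the following Python sketch (an informal aside with arbitrarily chosen names and example data, not used in the formal development) closes a relation under relational composition until no new pairs appear.
\begin{verbatim}
def compose(r1, r2):
    # Relational composition r1 ; r2 of finite relations given as sets of pairs.
    return {(x, z) for (x, y1) in r1 for (y2, z) in r2 if y1 == y2}

def transitive_closure(r):
    # Computes R^+ by repeatedly adding compositions until a fixed point is reached.
    closure = set(r)
    while True:
        new = closure | compose(closure, closure)
        if new == closure:
            return closure
        closure = new

r = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(r)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
\end{verbatim}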
The transitive and reflexive closure of a relation also induces an associated equivalence relation and partial order over the resulting equivalence classes.
\begin{definition}[Quotient of a relation]\label{def:relation-quotient} Let $R \subseteq X \times X$ be a relation. \begin{enumerate}
\item
Relation $\sim_R$ is defined as $x_1 \sim_R x_2$ iff $x_1 \inr{R^*} x_2$ and $x_2 \inr{R^*} x_1$.
\item
Let $x \in X$. Then $[x]_R \subseteq X$ is defined as
$$
[x]_R = \{x' \in X \mid x \sim_R x'\}.
$$
We use $Q_R = \{ \,[x]_R \mid x \in X \}$.
\item
$P(R) \subseteq Q_R \times Q_R$, the \emph{ordering induced by $R$ on $Q_R$}, is defined as
$$
P(R) = \left\{ \left( [x]_R, [x']_R \right) \mid x \inr{R^*} x' \right\}.
$$
\item
$(Q_R, P(R))$ is called the \emph{quotient} of $R$. \end{enumerate} \end{definition} It is easy to verify that $\sim_R$ is an equivalence relation for any $R$, and thus that $[x]_R$ is the equivalence class of $R$ and that $P(R)$ is a partial order over $Q_R$. Note that $Q_R$ defines a partition of set $X$: $X = \bigcup_{Q \in Q_R} Q$, and either $Q = Q'$ or $Q \cap Q' = \emptyset$ for any $Q, Q' \in Q_R$.
In this paper we also make extensive use of \emph{well-founded relations} and \emph{well-orderings}. To define these, we first introduce the following.
\begin{definition}[Extremal elements]\label{def:minimal-minimum-elements} Let $R \subseteq X \times X$, and let $X' \subseteq X$. \begin{enumerate}
\item
$x' \in X'$ is \emph{$R$-minimal in $X'$} iff for all $x \neq x' \in X'$, $x \inr{\centernot R} x'$.
\item
$x' \in X'$ is \emph{$R$-minimum in $X'$} iff $x'$ is the only $R$-minimal element in $X'$.
\item
$x' \in X'$ is \emph{$R$-maximal in $X'$} iff for all $x \neq x' \in X'$, $x' \inr{\centernot R} x$.
\item
$x' \in X'$ is \emph{$R$-maximum in $X'$} iff $x'$ is the only $R$-maximal element in $X'$.
\item
$x \in X$ is an \emph{$R$-lower bound} of $X'$ iff for all $x' \neq x \in X'$, $x \inr{R} x'$.
\item
$x \in X$ is the \emph{$R$-greatest lower bound} of $X'$ iff it is the $R$-maximum of the set of $R$-lower bounds of $X'$.
\item
$x \in X$ is an \emph{$R$-upper bound} of $X'$ iff for all $x' \neq x \in X'$, $x' \inr{R} x$.
\item
$x \in X$ is the \emph{$R$-least upper bound} of $X'$ iff it is the $R$-minimum of the set of $R$-upper bounds of $X'$.
\noindent In what follows we often omit $R$ when it is clear from context and instead write minimal rather than $R$-minimal, etc. Note that minimal / minimum / maximal / maximum elements for $X'$ must themselves belong to $X'$; this is not the case for upper and lower bounds. We can now define well-foundedness and well-orderings.
\begin{definition}[Well-founded relations, well-orderings]\label{def: well-founded} Let $R \subseteq X \times X$. \begin{enumerate}
\item
$R$ is \emph{well-founded} iff every non-empty $X' \subseteq X$ has an $R$-minimal element.
\item
$R$ is a \emph{well-ordering} iff it is total and well-founded. \end{enumerate} \end{definition}
We close the section by remarking on some noteworthy properties of well-founded relations and well-orderings.\footnote{These results generally rely on the inclusion of additional axioms beyond the standard ones of Zermelo-Fraenkel (ZF) set theory. The Axiom of Choice~\cite{jech2008axiom} is one such axiom, and in the rest of the paper we assume its inclusion in ZF.} The first result is a well-known alternative characterization of well-foundedness. If $X$ is a set and $R \subseteq X \times X$, call $\ldots, x_2, x_1$ an \emph{infinite descending chain in} $R$ iff for all $i \geq 1$, $x_{i+1} \inr{R} x_i$.
\begin{lemma}[Descending chains and well-foundedness]\label{lem:well-foundedness-chains} Let $R \subseteq X \times X$. Then $R$ is well-founded iff $R$ contains no infinite descending chains. \end{lemma}
\noindent Transitive closures also preserve well-foundedness.
\begin{lemma}[Transitive closures of well-founded relations]\label{lem:transitive-closure-well-founded} Let $R \subseteq X \times X$ be well-founded. Then $R^+$ is also well-founded. \end{lemma}
\noindent Any well-founded relation can be extended to a well-ordering.
\begin{lemma}[Total extensions of well-founded relations]\label{lem:well-ordering-extension} Let $R \subseteq X \times X$ be well-founded. Then there exists a well-ordering $R' \subseteq X \times X$ extending $R$. \end{lemma}
\noindent Note that if $R = \emptyset$ then the above lemma reduces to the Well-Ordering Theorem~\cite{Haz01}, which states that every set can be well-ordered.
The next result is immediate from the definition of well-ordering.
\begin{lemma}[Minimum elements and well-orderings]\label{lem:well-ordering-minimum} Let $R \subseteq X \times X$ be a well-ordering. Then every non-empty $X' \subseteq X$ contains an $R$-minimum element. \end{lemma}
\subsection{Complete lattices, monotonic functions and fixpoints}\label{subsec:lattices}
The results in this paper rely heavily on the basic theory of fixpoints of monotonic functions over complete lattices, as developed by Tarski and Knaster~\cite{Tar1955}. We review the relevant parts of the theory here.
\begin{definition}[Complete lattice]\label{def:complete-lattice} A \emph{complete lattice} is a tuple $(X, \sqsubseteq, \bigsqcup, \bigsqcap)$ satisfying the following. \begin{enumerate}
\item $X$ is a set (the \emph{carrier set}).
\item
Relation ${\sqsubseteq}$ is a partial order over $X$.
\item
Function ${\bigsqcup} \in 2^X \to X$, the \emph{join} operation, is total and satisfies: for all $X' \subseteq X$, $\bigsqcup (X')$ is the least upper bound of $X'$.
\item
Function ${\bigsqcap} \in 2^X \to X$, the \emph{meet} operation, is total and satisfies: for all $X' \subseteq X$, $\bigsqcap (X')$ is the greatest lower bound of $X'$. \end{enumerate} \end{definition}
\noindent In what follows we write $\bigsqcup X'$ and $\bigsqcap X'$ instead of $\bigsqcup(X')$ and $\bigsqcap(X')$.
\begin{definition}[Fixpoint]\label{def:fixpoint} Let $X$ be a set and $f \in X \to X$ be a function. Then $x \in X$ is a \emph{fixpoint} of $f$ iff $f(x) = x$. \end{definition}
As usual, $f \in X \rightarrow X$ is monotonic over complete lattice $(X, \sqsubseteq, \bigsqcup,\bigsqcap)$ iff $f$ is total and whenever $x_1 \sqsubseteq x_2$, $f(x_1) \sqsubseteq f(x_2)$. The next result follows from~\cite{Tar1955}.
\begin{lemma}[Extremal fixpoint characterizations]\label{lem:tarski-knaster} Let $(X, \sqsubseteq, \bigsqcup, \bigsqcap)$ be a complete lattice, and let $f \in X \to X$ be monotonic over it. Then $f$ has least and greatest fixpoints $\mu f, \nu f \in X$, respectively, characterized as follows. \begin{align*}
\mu f &= \bigsqcap \{x \in X \mid f(x) \sqsubseteq x \}\\
\nu f &= \bigsqcup \{x \in X \mid x \sqsubseteq f(x) \} \end{align*} \end{lemma}
\noindent Elements $x \in X$ such that $f(x) \sqsubseteq x$ are sometimes called \emph{pre-fixpoints} of $f$, while those satisfying $x \sqsubseteq f(x)$ are referred to as \emph{post-fixpoints} of $f$.
In this paper we focus on specialized complete lattices called \emph{subset lattices}.
\begin{definition}[Subset lattice]\label{def:subset-lattice} Let $S$ be a set. Then the \emph{subset lattice generated by $S$} is the tuple $(2^S, \subseteq, \bigcup, \bigcap)$. \end{definition}
\noindent It is straightforward to establish that, for any $S$, the subset lattice generated by $S$ is indeed a complete lattice.
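As an informal illustration (not part of the formal development, and relying on finiteness, which we do not assume elsewhere), on a subset lattice generated by a finite set the extremal fixpoints of Lemma~\ref{lem:tarski-knaster} can also be reached by simple iteration from the bottom and top elements; the function $f$ below is an arbitrary example of a monotonic map.
\begin{verbatim}
S = {0, 1, 2, 3}

def f(X):
    # A monotonic function on the subset lattice generated by S:
    # the result always contains 0 and keeps every element of X that is at least 2.
    return {0} | {n for n in X if n >= 2}

def lfp(f, bottom):
    # On a finite lattice the chain bottom <= f(bottom) <= f(f(bottom)) <= ...
    # stabilizes at the least fixpoint mu f.
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

def gfp(f, top):
    # Dually, iterating from the top element stabilizes at the greatest fixpoint nu f.
    x = top
    while f(x) != x:
        x = f(x)
    return x

print(lfp(f, set()))    # {0}
print(gfp(f, set(S)))   # {0, 2, 3}
\end{verbatim}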
\subsection{Finite non-empty trees}\label{subsec:trees}
The proof objects considered later in this paper are finite trees whose nodes are labeled by logical sequents. As we wish to reason about mathematical constructions on these proof objects we need formal accounts of such trees.
\begin{definition}[Finite non-empty unordered tree]~\label{def:unordered-tree}
A \emph{finite non-empty unordered tree} is a triple $\tree{T} = (\node{N}, \node{r}, p)$, where:
\begin{enumerate}
\item $\node{N}$ is a finite, non-empty set of \emph{nodes};
\item $\node{r} \in \node{N}$ is the \emph{root} node; and
\item $p \in \node{N} \to_{\perp} \node{N}$, the (partial) \emph{parent} function, satisfies: $p(\node{n}) {\perp}$ iff $\node{n} = \node{r}$, and for all $\node{n} \in \node{N}$ there exists $i \geq 0$ such that $p^i(\node{n}) = \node{r}$.
\end{enumerate} \end{definition}
\noindent Note that each non-root node $\node{n} \neq \node{r}$ has a parent node $p(\node{n}) \in \node{N}$. If $p(\node{n}') = \node{n}$ then we call $\node{n}'$ a \emph{child} of $\node{n}$; we use $c(\node{n}) = \{ \node{n}' \in \node{N} \mid p(\node{n}') = \node{n} \}$ to denote all the children of $\node{n}$. If $c(\node{n}) = \emptyset$ then $\node{n}$ is a \emph{leaf}; otherwise it is \emph{internal}. We call node $\node{n}$ an \emph{ancestor} of node $\node{n}'$, or equivalently, $\node{n}'$ a \emph{descendant} of $\node{n}$, iff there exists an $i \geq 0$ such that $p^i(\node{n}') = \node{n}$; we also say in this case that there is a \emph{path from $\node{n}$ to $\node{n}'$}.
We write $A(\node{n})$ and $D(\node{n})$ for the ancestors and descendants of $\node{n}$, respectively, and note that $\node{r} \in A(\node{n})$, $\node{n} \in D(\node{r})$, $\node{n} \in A(\node{n})$ and $\node{n} \in D(\node{n})$ for all $\node{n} \in \node{N}$. We use $A_s(\node{n}) = A(\node{n}) \setminus \{\node{n}\}$ and $D_s(\node{n}) = D(\node{n}) \setminus \{\node{n}\}$ for the \emph{strict} ancestors and descendants of $\node{n}$. We also define the notions of \emph{depth}, $d(\node{n})$, and \emph{height}, $h(\node{n})$ of $\node{n}$ as the length of the unique path from the root $\node{r}$ to $\node{n}$, and the length of the longest path starting at $\node{n}$, respectively. Specifically, $d(\node{n}) = i$ if $i$ is (the unique $i \in \mathbb{N}$) such that $p^i(\node{n}) = \node{r}$, while \[ h(\node{n}) = \max \{ i \mid \exists \node{n}' \colon \node{n}' \in \node{N} \colon p^i(\node{n}') = \node{n} \}. \] Note that for any leaf $\node{n}$, $h(\node{n}) = 0$. We define $h(\tree{T})$ to be $h(\node{r})$.
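As an aside (a small illustration only, with arbitrarily chosen node names; it is not used later), the parent-function presentation of finite trees translates directly into a program, and depth and height can be computed exactly as defined above.
\begin{verbatim}
# A finite non-empty unordered tree with nodes {r, a, b, c}:
# parent[n] is p(n); the root r has no parent.
nodes = {'r', 'a', 'b', 'c'}
parent = {'a': 'r', 'b': 'r', 'c': 'a'}

def depth(n):
    # d(n): the unique i with p^i(n) = r.
    return 0 if n not in parent else 1 + depth(parent[n])

def height(n):
    # h(n): the length of the longest path starting at n, i.e. the
    # maximum i such that p^i(n') = n for some node n'.
    children = [m for m in nodes if parent.get(m) == n]
    return 0 if not children else 1 + max(height(m) for m in children)

print([depth(n) for n in ['r', 'a', 'c']])   # [0, 1, 2]
print(height('r'))                           # 2 -- this is also h(T)
\end{verbatim}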
Finite unordered trees also admit the following induction and co-induction principles.
\begin{principle}[Tree induction]
Let $\tree{T} = (\node{N}, \node{r}, p)$ be a finite unordered tree, and let $Q$ be a predicate over $\node{N}$. To prove that $Q(\node{n})$ holds for every $\node{n} \in \node{N}$, it suffices to prove $Q(\node{n})$ under the assumption that $Q(\node{n}')$ holds for every $\node{n}' \in D_s(\node{n})$. The assumption is referred to as the \emph{induction hypothesis}. \end{principle}
\begin{principle}[Tree co-induction\footnote{Treatments of co-induction in theoretical computer science tend to focus on its use in reasoning about co-algebras. The setting of finite trees in this paper is not explicitly co-algebraic, but the principle of co-induction as articulated in e.g.~\cite{jacobs2011introduction} is easily seen to correspond to what is given here.}]
Let $\tree{T} = (\node{N}, \node{r}, p)$ be a finite unordered tree, and let $Q$ be a predicate over $\node{N}$. To prove that $Q(\node{n})$ holds for every $\node{n} \in \node{N}$, it suffices to prove that $Q(\node{n})$ holds under the assumption that $Q(\node{n}')$ holds for every $\node{n}' \in A_s(\node{n})$. The assumption is referred to as the \emph{co-induction hypothesis}. \end{principle}
\noindent Tree induction is an instance of standard strong induction on the height of nodes, while tree co-induction is an instance of standard strong induction on the depth of nodes. Note that the co-induction principle also applies to discrete rooted infinite trees~\cite{jech:1978,kunen:1980} as well as finite ones, although we do not use this capability in this paper. We also note that in the case of co-induction, reasoning about the root node $\node{r}$ is handled differently than the other nodes, owing to the fact that $\node{r}$ is the only node with no strict ancestors. Consequently, in the co-inductive arguments given in the paper, we will often single out a special \emph{root case} for dealing with this node, with reasoning about other nodes covered in a so-called \emph{co-induction step}.
Finite ordered trees can now be defined as follows.
\begin{definition}[Finite non-empty ordered tree]\label{def:ordered-tree}
A \emph{finite non-empty ordered tree} is a tuple $\tree{T} = (\node{N}, \node{r}, p, cs)$, where:
\begin{enumerate}
\item
$(\node{N}, \node{r}, p)$ is a finite non-empty unordered tree; and
\item
$cs \in \node{N} \rightarrow \node{N}^*$, the \emph{child ordering}, satisfies: $cs(\node{n})$ is an ordering of $c(\node{n})$ for all $\node{n} \in \node{N}$.
\end{enumerate} \end{definition}
The definition of ordered tree extends that of unordered tree by incorporating a function, $cs(\node{n})$, that returns the children of $\node{n}$ in left-to-right order. Ordered trees inherit the definitions given for unordered trees (height, children, etc.), as well as the tree-induction and co-induction principles. The notion of \emph{subtree} rooted at a node in a given ordered tree can now be defined.
\begin{definition}[Subtree]\label{def:subtree}
Let $\tree{T} = (\node{N}, \node{r}, p, cs)$ be a finite ordered tree, and let $\node{n} \in \node{N}$. Then $\tree{T}_\node{n}$, the \emph{subtree of $\tree{T}$ rooted at $\node{n}$}, is defined to be $\tree{T}_\node{n} = (D(\node{n}), \node{n}, p_\node{n}, cs)$, where $p_\node{n}$ satisfies: $p_\node{n}(\node{n}){\perp}$, and $p_\node{n}(\node{n}') = p(\node{n}')$ if $\node{n}' \neq \node{n}$. \end{definition}
\noindent It is straightforward to verify that $\tree{T}_\node{n}$ is itself a finite ordered tree. We also use the notion of \emph{tree prefix} later.
\begin{definition}[Tree prefix]\label{def:tree-prefix} Let $\tree{T}_1 = (\node{N}_1, \node{r}_1, p_1, cs_1)$, $\tree{T}_2 = (\node{N}_2, \node{r}_2, p_2, cs_2)$ be finite ordered trees. Then $\tree{T}_1$ is a \emph{tree prefix} of $\tree{T}_2$, notation $\tree{T}_1 \preceq \tree{T}_2$, iff: \begin{enumerate} \item
$\node{N}_1 \subseteq \node{N}_2$; \item
$\node{r}_1 = \node{r}_2$; \item
For all $\node{n} \in \node{N}_1$, $p_1(\node{n}) = p_2(\node{n})$; and \item
For all $\node{n} \in \node{N}_1$, either $cs_1(\node{n}) = cs_2(\node{n})$, or $cs_1(\node{n}) = \varepsilon$. \end{enumerate} \end{definition}
\noindent Intuitively, if $\tree{T}_1 \preceq \tree{T}_2$ then the two trees share the same root and tree structure, except that some internal nodes in $\tree{T}_2$ are leaves in $\tree{T}_1$. Note that if $\tree{T}_1 \preceq \tree{T}_2$ and $\node{n} \in \node{N}_1$ is such that $p_2^i(\node{n}) \in \node{N}_2$, then $p_2^i(\node{n}) = p_1^i(\node{n}) \in \node{N}_1$; that is, if $\node{n}$ is a node in $\tree{T}_1$ then it has the same ancestors in $\tree{T}_1$ as in $\tree{T}_2$. We can also specify a prefix of a given tree by giving the set of nodes in the tree that should be turned into leaves.
\begin{definition}[Tree-prefix generation]\label{def:tree-prefix-generation} Let $\tree{T} = (\node{N}, \node{r}, p, cs)$ be a finite ordered tree, and let $\node{L} \subseteq \node{N}$. Then $\tpre{\tree{T}}{\node{L}}$, the \emph{tree prefix of $\tree{T}$ generated by $\node{L}$}, is the tree $(\node{N}', \node{r}, p', cs')$ given as follows. \begin{itemize}
\item
$\node{N}' = \node{N} \setminus \left( \bigcup_{\node{l} \in \node{L}} D_s(\node{l}) \right)$.
\item
Let $\node{n}' \in \node{N}'$. Then $p'(\node{n}') = p(\node{n}')$.
\item
Let $\node{n}' \in \node{N}'$. Then $cs'(\node{n}') = \varepsilon$ if $D_s(\node{n}') \cap \node{N}' = \emptyset$, and $cs(\node{n}')$ otherwise. \end{itemize} \end{definition}
\noindent The nodes of $\tpre{\tree{T}}{\node{L}}$ are the nodes of $\tree{T}$, with however the strict descendants of nodes in $\node{L}$ removed. It is straightforward to verify that $\tpre{\tree{T}}{\node{L}}$ is a finite ordered tree if $\tree{T}$ is, and that $\tpre{\tree{T}}{\node{L}} \preceq \tree{T}$. While $\node{L}$ can be thought of as specifying nodes in $\tree{T}$ that should be converted into leaves in $\tpre{\tree{T}}{\node{L}}$, this intuition is only partially accurate, since it is not the case that every $\node{l} \in \node{L}$ is a node in $\tpre{\tree{T}}{\node{L}}$. In particular, if $\node{l}$ has a strict ancestor in $\node{L}$ this would cause the removal of $\node{l}$ from $\tpre{\tree{T}}{\node{L}}$. However, if $\node{l} \in \node{L}$ has no strict ancestors in $\node{L}$ then it is indeed a leaf in $\tpre{\tree{T}}{\node{L}}$.
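The following Python sketch (again over a hypothetical example tree) mirrors Definition~\ref{def:tree-prefix-generation}: it prunes the strict descendants of the nodes in $\node{L}$, and its final assertion illustrates the caveat above, namely that a node of $\node{L}$ that has a strict ancestor in $\node{L}$ does not survive in $\tpre{\tree{T}}{\node{L}}$.

\begin{verbatim}
# Illustration: tree-prefix generation T|L for a tree given by a parent
# map.  The example tree and node names are hypothetical.
parent = {"a": "r", "b": "r", "c": "a", "d": "a", "e": "c"}
N = {"r"} | set(parent)

def strict_descendants(n):           # D_s(n)
    kids = {m for m in parent if parent[m] == n}
    return kids | {d for k in kids for d in strict_descendants(k)}

def tree_prefix(L):
    removed = set().union(*(strict_descendants(l) for l in L)) if L else set()
    keep = N - removed
    return keep, {m: p for m, p in parent.items() if m in keep}

keep1, _ = tree_prefix({"a"})
assert keep1 == {"r", "a", "b"}      # c, d, e are strict descendants of a

keep2, _ = tree_prefix({"a", "c"})   # c is in L, but c also lies in D_s(a),
assert keep2 == {"r", "a", "b"}      # so c is not a node of the prefix
\end{verbatim}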
Functions may be defined inductively on ordered trees.
\begin{definition}[Inductive tree functions]~\label{def:node-function}
Let $\tree{T} = (\node{N},\node{r},p,cs)$ be an ordered tree and $V$ a set; call $\node{N} \to V$ the set of \emph{tree functions from $\tree{T}$ to $V$.}
\begin{enumerate}
\item \label{def:node-function-inductively-generated}
Tree function $f$ is \emph{inductively generated from} $g \in \node{N} \times V^* \rightarrow V$ iff for all $\node{n} \in \node{N}$, $f(\node{n}) = g(\node{n}, f(cs(\node{n})))$.
\item \label{def:node-function-inductive-update}
Let $f$ be inductively generated from $g$.
Then the \emph{inductive update}, $$f \iupd{\node{n}'}{v},$$ of $f$ at $\node{n}'$ by $v$ is the tree function inductively generated by $g[(\node{n}', \cdot):= v]$, where $g[(\node{n}', \cdot) := v](\node{n}, \vec{w}) = v$ if $\node{n} = \node{n}'$ and $g(\node{n}, \vec{w})$ otherwise.
\end{enumerate} \end{definition}
\noindent Intuitively, a function $f$ is inductively generated from $g$ if $g$ computes the ``single steps'' in the recursive definition of $f$. That is, $f$ uses $g$ to compute the value associated with any $\node{n}$ based on the results $f$ returns for the children of $\node{n}$. Operation $f \iupd{\node{n}'}{v}$ then specifies a means for altering the values inductively generated for $f$: not only does $f \iupd{\node{n}'}{v}$ change the value returned for $\node{n}'$, but it also potentially changes the values for ancestors of $\node{n}'$ as well. Inductive updating can be generalized to $f \iupd{\node{n}_1 \cdots \node{n}_j}{v_1 \cdots v_j}$ in the obvious manner, where it is assumed that $\node{n}_1 \cdots \node{n}_j$ is duplicate-free.
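The following Python sketch (over a hypothetical ordered tree given by its child ordering $cs$, with a node-counting generator $g$ chosen purely for illustration) shows a function inductively generated from $g$ together with an inductive update; the final assertion anticipates Lemma~\ref{lem:inductive-update-correspondence} below.

\begin{verbatim}
# Illustration: inductive tree functions and inductive updates.  The
# ordered tree (via its child ordering cs) and the generator g are
# arbitrary examples.
cs = {"r": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}

def generate(g):                       # f inductively generated from g
    def f(n):
        return g(n, [f(m) for m in cs[n]])
    return f

g = lambda n, vals: 1 + sum(vals)      # g counts nodes in the subtree at n
f = generate(g)
assert f("r") == 5 and f("a") == 3

def update(g, n_upd, v):               # g[(n_upd, .) := v]
    return lambda n, vals: v if n == n_upd else g(n, vals)

f_upd = generate(update(g, "a", 10))   # the inductive update f[a := 10]
assert f_upd("a") == 10                # value at a is replaced outright
assert f_upd("r") == 12                # ancestors of a see the new value
assert f_upd("b") == 1                 # b has no updated descendant: unchanged
\end{verbatim}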
We close this section with a lemma about inductively updated functions. It in essence asserts that updates involving only ancestors of a node $\node{n}$ do not affect the value associated with $\node{n}$.
\begin{lemma}[Inductive update correspondence]\label{lem:inductive-update-correspondence}
Let $\tree{T} = (\node{N}, \node{r}, p, cs)$ be a finite ordered tree, let $f \in \node{N} \rightarrow V$ be inductively generated from $g$, let $\vec{\node{n}} \in \node{N}^*$ be duplicate-free, and let $\vec{v} \in V^*$ be such that $|\vec{\node{n}}| = |\vec{v}|$. Then for every $\node{n}' \in \node{N}$ such that $D(\node{n}') \cap \textit{set}(\vec{\node{n}}) = \emptyset$, $f\iupd{\vec{\node{n}}}{\vec{v}}(\node{n}') = f(\node{n}')$. \end{lemma}
\begin{proof} Fix $\tree{T}, f, g, \vec{\node{n}} = \node{n}_1 \cdots \node{n}_j$ and $\vec{v} = v_1 \cdots v_j$ as specified in the statement of the lemma. The proof proceeds by tree induction. So let $\node{n}' \in \node{N}$ be such that $D(\node{n}') \cap \{\node{n}_1, \ldots, \node{n}_j\} = \emptyset$. From the definition of $D(\cdot)$ we know that $\node{n}' \in D(\node{n}')$ and that each $\node{n}'' \in c(\node{n}')$ satisfies $D(\node{n}'') \subseteq D(\node{n}')$. Therefore $\node{n}' \not\in \{\node{n}_1, \ldots, \node{n}_j\}$ and $D(\node{n}'') \cap \{\node{n}_1, \ldots, \node{n}_j\} = \emptyset$ for every child $\node{n}''$ of $\node{n}'$. The induction hypothesis guarantees that for every $\node{n}'' \in c(\node{n}')$, $f \iupd{\vec{\node{n}}}{\vec{v}}(\node{n}'') = f(\node{n}'')$, and thus that $f \iupd{\vec{\node{n}}}{\vec{v}}(cs(\node{n}')) = f(cs(\node{n}'))$. We now reason as follows.
\begin{align*}
& f \iupd{\vec{\node{n}}}{\vec{v}}(\node{n}')
&&
\\
&= g[\vec{\node{n}}:=\vec{v}](\node{n}', f\iupd{\vec{\node{n}}}{\vec{v}}(cs(\node{n}')))
&& \text{Definition of $f\iupd{\vec{\node{n}}}{\vec{v}}(\node{n}')$}
\\
&= g[\vec{\node{n}}:=\vec{v}](\node{n}', f(cs(\node{n}')))
&& \text{Induction hypothesis}
\\
&= g(\node{n}', f(cs(\node{n}')))
&& \text{Definition of $g[\vec{\node{n}} := \vec{v}]$}
\\
&= f(\node{n}')
&& \text{$f$ inductively generated}
\end{align*} \qedhere \end{proof}
\section{Support orderings and fixpoints}\label{sec:support-orderings}
A key contribution of this paper is the formulation of a novel characterization of least fixpoints for monotonic functions over subset lattices. This characterization may be seen as constructive in a precise sense, and relies on the notion of \emph{support ordering}.
\begin{definition}[Support ordering]\label{def:support-ordering}
Let $S$ be a set, let $(2^S, \subseteq, \bigcup, \bigcap)$ be the subset lattice generated by $S$, and let $f \in 2^S \rightarrow 2^S$ be a monotonic function over this lattice. Then $(X, \prec)$ is a \emph{support ordering} for $f$ iff the following hold.
\begin{enumerate}
\item
$X \subseteq S$ and ${\prec} \subseteq X \times X $ is a binary relation on $X$.
\item
For all $x \in X$, $x \in f(\preimg{{\prec}}{x})$.\footnote{Recall that $\preimg{{\prec}}{x} = \{x' \in X \mid x' \prec x\}$.}
\end{enumerate} \end{definition}
We call a support ordering $(X, \prec)$ for monotonic $f$ \emph{well-founded} if $\prec$ is well-founded. We also say that $X$ is \emph{\mbox{(well-)}supported} for $f$ iff there is a (well-founded) binary relation ${\prec} \subseteq X \times X$ such that $(X, \prec)$ is a support ordering for $f$.
Using support orderings, we can give \emph{constructive} accounts of both the least and greatest fixpoints of monotonic functions over subset lattices. We call the characterization constructive in the case of $\mu f$ because, to show that $s \in \mu f$, it suffices to construct a well-founded support ordering $(X, \prec)$ for $f$ such that $s \in X$. We also establish that $s \in \nu f$ exactly when there is a (not necessarily well-founded) support ordering $(X, \prec)$ such that $s \in X$.
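By way of illustration, the following Python sketch (with an arbitrary monotonic $f$ over a four-element set; none of these choices are part of the formal development) checks the defining condition of a support ordering, namely that every $x \in X$ satisfies $x \in f(\preimg{{\prec}}{x})$. Since the relation used is well-founded, Theorem~\ref{thm:well-supported} below then yields $X \subseteq \mu f$ for the checked $X$.

\begin{verbatim}
# Illustration: checking that (X, prec) is a support ordering for f,
# i.e. that every x in X satisfies x in f({x' in X | x' prec x}).
# The set S, the function f and the ordering prec are arbitrary examples.
S = {0, 1, 2, 3}

def f(X):
    # monotonic: contains 0 unconditionally, and n+1 whenever n is in X
    return frozenset({0} | {x + 1 for x in X if x + 1 in S})

def is_support_ordering(X, prec):      # prec is a set of pairs (x', x)
    return all(x in f(frozenset({y for (y, z) in prec if z == x}))
               for x in X)

X = frozenset({0, 1, 2})
prec = {(0, 1), (1, 2)}                # 0 prec 1 prec 2: well-founded
assert is_support_ordering(X, prec)    # hence X is a subset of mu(f)
\end{verbatim}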
The support-ordering characterization of the least fixpoint $\mu f$ of monotonic function $f$ over a given subset lattice relies on the following lemma, which asserts that the union of a collection of well-supported subsets of $S$ is also well-supported.
\begin{lemma}[Unions of well-supported sets]\label{lem:well-supported-sets}
Let $S$ be a set, and let $f \in 2^S \rightarrow 2^S$ be monotonic over subset lattice $(2^S, \subseteq, \bigcup, \bigcap)$. Also let $\mathcal{W} \subseteq 2^S$ be a set of well-supported sets for $f$. Then $\bigcup\mathcal{W}$ is well-supported for $f$. \end{lemma}
\remove{ \begin{proofsketch} To prove that $\bigcup \mathcal{W}$ is well-supported for $f$ we note that there is an ordinal $\beta$ that is in bijective correspondence with $\mathcal{W}$. We then define ordinal-indexed sequences, $X_\alpha$ and $\prec_\alpha$ ($\alpha < \beta$), with the property that $(X_\alpha, \prec_\alpha)$ is a support ordering for $f$, $\prec_\alpha$ is well-founded, and $\bigcup \mathcal{W} = \bigcup_{\alpha < \beta} X_\alpha$. We subsequently show that ${\prec} = \bigcup_{\alpha < \beta} {\prec_\alpha}$ is well-founded and that $(\bigcup \mathcal{W}, \prec)$ is a support ordering for $f$, thereby establishing that $\bigcup \mathcal{W}$ is well-supported for $f$. Details may be found in the appendix. \qedhere \end{proofsketch} }
\begin{proof} This proof uses constructions over the ordinals, for which we use the von Neumann definition~\cite{ciesielski1997set}: a well-ordered set $\alpha$ is a von Neumann ordinal iff it contains all ordinals preceding $\alpha$. The well-ordering on von Neumann ordinals is often written $<$ and has the property that $\alpha < \beta$ iff $\alpha \in \beta$, which in turn is true iff $\alpha \subsetneq \beta$. We also recall that in this paper, we assume the Axiom of Choice~\cite{jech2008axiom}.
To prove that $\bigcup \mathcal{W}$ is well-supported for $f$ we note that there is an ordinal $\beta$ that is in bijective correspondence with $\mathcal{W}$. We then define ordinal-indexed sequences, $X_\alpha$ and $\prec_\alpha$ ($\alpha < \beta$), with the property that $(X_\alpha, \prec_\alpha)$ is a support ordering for $f$, $\prec_\alpha$ is well-founded, and $\bigcup \mathcal{W} = \bigcup_{\alpha < \beta} X_\alpha$. We subsequently show that ${\prec} = \bigcup_{\alpha < \beta} {\prec_\alpha}$ is well-founded and that $(\bigcup \mathcal{W}, \prec)$ is a support ordering for $f$, thereby establishing that $\bigcup \mathcal{W}$ is well-supported for $f$.
To this end, let $|\mathcal{W}| = \beta$ be the cardinality of $\mathcal{W}$ (i.e.\ $\beta$ is the least ordinal in bijective correspondence to $\mathcal{W}$), and let $h \in \beta \rightarrow \mathcal{W}$ be a bijection. It follows that $\mathcal{W} = \{h(\alpha) \mid \alpha < \beta\}$. Also let $o \in \beta \rightarrow 2^{S \times S}$ be such that for any $\alpha < \beta$, $o(\alpha) \subseteq h(\alpha) \times h(\alpha)$ is a well-founded binary relation with the property that $(h(\alpha), o(\alpha))$ is a support ordering for $f$.
Using transfinite recursion, we now define ordinal-indexed sequences $X_\alpha$, $X'_\alpha$ and $\prec_\alpha$, $\alpha < \beta$, of subsets of $S$ and binary relations on $X_\alpha$, respectively, as follows. \begin{align*}
X_\alpha
&= \bigl(\bigcup_{\alpha' < \alpha} X_{\alpha'}\bigr) \cup h(\alpha)
\\
X'_{\alpha}
&= h(\alpha) \setminus \bigl( \bigcup_{\alpha' < \alpha} X_{\alpha'} \bigr)
\\
\prec_\alpha
&=
\bigl(\bigcup_{\alpha' < \alpha} \prec_{\alpha'}\bigr) \cup
\bigl\{(x',x) \in o(\alpha) \mid x \in X'_\alpha \bigr\} \end{align*}
Note that $X_\alpha = (\bigcup_{\alpha' < \alpha} X_{\alpha'}) \cup X'_\alpha$ and that $(\bigcup_{\alpha' < \alpha} X_{\alpha'}) \cap X'_\alpha = \emptyset$. Based on these definitions, it is easy to see that if $\alpha' < \alpha$ then $X_{\alpha'} \subseteq X_\alpha$ and ${\prec_{\alpha'}} \subseteq {\prec_{\alpha}}$. We now prove two properties of $X_\alpha$ and $\prec_\alpha$ that will be used in what follows.
\begin{enumerate}[left = \parindent, label = P\arabic*., ref = P\arabic*]
\item\label{item:support-ordering}
For all $\alpha < \beta$, $(X_\alpha, \prec_\alpha)$ is a support ordering over $f$.
\item\label{item:well-founded}
For all $\alpha < \beta$, $\prec_\alpha$ is well-founded. \end{enumerate}
To prove \ref{item:support-ordering} we use transfinite induction. So fix $\alpha < \beta$; the induction hypothesis states that for any $\alpha' < \alpha$, $(X_{\alpha'}, \prec_{\alpha'})$ is a support ordering over $f$. Now consider $x \in X_\alpha$; we must show that $x \in f(\preimg{\prec_\alpha}{x})$. There are two cases. In the first, $x \in \bigcup_{\alpha' < \alpha} X_{\alpha'}$, which means $x \in X_{\alpha'}$ for some $\alpha' < \alpha$. In this case the induction hypothesis guarantees that $(X_{\alpha'}, \prec_{\alpha'})$ is a support ordering, meaning $x \in f(\preimg{\prec_{\alpha'}}{x})$. Since ${\prec_{\alpha'}} \subseteq {\prec_{\alpha}}$ it follows that
\[
{\preimg{{\prec_{\alpha'}}}{x}} \subseteq {\preimg{{\prec_{\alpha}}}{x}},
\]
and since $f$ is monotonic and $x \in f(\preimg{\prec_{\alpha'}}{x})$, $x \in f(\preimg{\prec_{\alpha}}{x})$. In the second case, $x \in X'_\alpha$. Here it is easy to see that
\[
\preimg{{\prec_\alpha}}{x} = \{x' \mid (x',x) \in o(\alpha)\},
\]
and since $(h(\alpha), o(\alpha))$ is a support ordering, it immediately follows that $x \in f(\{x' \mid (x',x) \in o(\alpha)\}) = f(\preimg{\prec_\alpha}{x})$. \ref{item:support-ordering} is thus proved.
To prove \ref{item:well-founded} we again use transfinite induction. So fix $\alpha < \beta$. The induction hypothesis states that for all $\alpha' < \alpha$, $\prec_{\alpha'}$ is well-founded; we must show that $\prec_\alpha$ is as well. So consider a descending chain $C = \cdots \prec_\alpha x_2 \prec_\alpha x_1$; it suffices to show that $C$ must be finite. There are three cases to consider.
\begin{description}
\item[$C$ is a chain in $\bigcup_{\alpha' < \alpha} \prec_{\alpha'}$.]
In this case $x_1 \in \bigcup_{\alpha' < \alpha} X_{\alpha'}$, meaning there is an $\alpha' < \alpha$ such that $x_1 \in X_{\alpha'}$. Since $\alpha_1 < \alpha_2$ implies ${\prec_{\alpha_1}} \subseteq {\prec_{\alpha_2}}$, it follows that each $x_i \in X_{\alpha'}$, each $x_{i+1} \prec_{\alpha'} x_i$, and that $C$ is thus a descending chain in $\prec_{\alpha'}$. Since the induction hypothesis guarantees that $\prec_{\alpha'}$ is well-founded, $C$ must be finite.
\item[$C$ is a chain in $o(\alpha)$.]
In this case, since $o(\alpha)$ is well-founded, $C$ must be finite.
\item[$C$ is a mixture of $\bigcup_{\alpha' < \alpha} \prec_{\alpha'}$ and $o(\alpha)$.]
In this case, from the definition of $C$ and $\prec_\alpha$ it follows that $C$ can be split into two pieces:
\begin{enumerate}
\item
an initial segment $x_{i} \prec_\alpha \cdots \prec_{\alpha} x_1$, where $i \geq 1$, $x_i \in \bigcup_{\alpha' < \alpha} X_{\alpha'}$, and for all $i > j \geq 1$, $x_{j} \in X'_\alpha$ and $(x_{j+1}, x_j) \in o(\alpha)$; and
\item
a segment $\cdots \prec_\alpha x_{i+1} \prec_\alpha x_i$, where for all $j \geq i$, $(x_{j+1}, x_j) \in \bigcup_{\alpha' < \alpha} \prec_{\alpha'}$.
\end{enumerate}
The previous arguments establish that each of these sub-chains must be finite, and thus $C$ is finite as well.
\end{description}
To finish the proof of the lemma, we note that the following hold, using arguments given above.
\begin{itemize}
\item $(\bigcup_{\alpha < \beta} X_\alpha, \bigcup_{\alpha < \beta} \prec_\alpha)$ is a support ordering.
\item $\bigcup_{\alpha < \beta} \prec_\alpha$ is well-founded.
\item $\bigcup\mathcal{W} = \bigcup_{\alpha < \beta} X_\alpha$.
\end{itemize}
From the definitions it therefore follows that $\bigcup \mathcal{W}$ is well-supported.\qedhere \end{proof}
We now have the following.
\begin{theorem}\label{thm:well-supported} Let $S$ be a set, and let $f \in 2^S \rightarrow 2^S$ be a monotonic function over the subset lattice $(2^S, \subseteq, \bigcup, \bigcap)$. \begin{enumerate}
\item\label{subthm:well-supported-1}
For all $X \subseteq S$, if $X$ is well-supported for $f$ then $X \subseteq \mu f$.
\item\label{subthm:well-supported-2}
Let $\mathcal{X} = \{X \subseteq S \mid X \textnormal{ is well-supported for } f\}$. Then $f(\bigcup\mathcal{X}) = \bigcup\mathcal{X}$. \end{enumerate} \end{theorem}
\begin{proof} To prove the statement~\ref{subthm:well-supported-1}, suppose $X \subseteq S$ is well-supported for $f$, and let ${\prec} \subseteq X \times X$ be a well-founded relation such that $(X, {\prec})$ is a support ordering for $f$. Also recall that $\mu f = \bigcap \{Y \subseteq S \mid f(Y) \subseteq Y\}$. To show that $X \subseteq \mu f$ it suffices to show that $X \subseteq Y$ for all $Y$ such that $f(Y) \subseteq Y$. So fix such a $Y$; we prove that for all $x \in X$, $x \in Y$ using well-founded induction on $\prec$. So fix $x \in X$. The induction hypothesis states that for all $x' \prec x$, $x' \in Y$. By definition of support ordering we know that $x \in f(\preimg{{\prec}}{x})$; the induction hypothesis also guarantees that $\preimg{{\prec}}{x} \subseteq Y$. Since $f$ is monotonic, $f(\preimg{{\prec}}{x}) \subseteq f(Y)$, and thus we have $
x \in f(\preimg{{\prec}}{x}) \subseteq f(Y) \subseteq Y. $ Hence $x \in Y$.
As for statement~\ref{subthm:well-supported-2}, Lemma~\ref{lem:well-supported-sets} guarantees that $\bigcup\mathcal{X}$ is well-supported; let $\prec$ be the well-founded relation over $\bigcup \mathcal{X}$ such that $(\bigcup \mathcal{X}, \prec)$ is a support ordering for $f$. It suffices to show that $f(\bigcup\mathcal{X}) \subseteq \bigcup\mathcal{X}$ and $\bigcup\mathcal{X} \subseteq f(\bigcup\mathcal{X})$. For the former, by way of contradiction assume $x$ is such that $x \in f(\bigcup\mathcal{X})$ and $x \not\in \bigcup\mathcal{X}$. Now consider the relation $\prec'$ on $(\bigcup\mathcal{X}) \cup \{x\}$ given by: ${\prec'} = {\prec} \cup \{(x', x) \mid x' \in \bigcup\mathcal{X}\}$. It is easy to see that $\prec'$ is well-founded, and that $((\bigcup\mathcal{X}) \cup \{x\}, \prec')$ is a support ordering for $f$. This implies $(\bigcup\mathcal{X}) \cup \{x\} \subseteq \bigcup \mathcal{X}$, which contradicts the assumption that $x \not\in \bigcup\mathcal{X}$. To see that $\bigcup\mathcal{X} \subseteq f(\bigcup\mathcal{X})$, note that, since $(\bigcup \mathcal{X}, \prec)$ is a support ordering, $x \in f(\preimg{{\prec}}{x})$ and $\preimg{{\prec}}{x} \subseteq \bigcup \mathcal{X}$ for all $x \in \bigcup\mathcal{X}$. Since $f$ is monotonic we have $ x \in f(\preimg{{\prec}}{x}) \subseteq f(\bigcup \mathcal{X}) $ for all $x \in \bigcup \mathcal{X}$.\qedhere \end{proof}
The following corollary is immediate. \begin{corollary}\label{cor:least-fixpoint}
Let $f$ be a monotonic function over the subset lattice generated by $S$. Then
$
\mu f = \bigcup \{X \in 2^S \mid X \textnormal{ is well-supported for $f$} \}.
$ \end{corollary} \begin{proof}
Let $\mathcal{X} = \{X \in 2^S \mid X \textnormal{ is well-supported for } f\}$. Theorem~\ref{thm:well-supported} guarantees that $\bigcup\mathcal{X} \subseteq \mu f$ and that $\bigcup\mathcal{X}$ is a fixpoint of $f$. Since $\mu f$ is the least fixpoint of $f$, it follows that $\mu f \subseteq \bigcup\mathcal{X}$, and hence $\bigcup\mathcal{X} = \mu f$.\qedhere \end{proof}
This corollary may be seen as providing a \emph{constructive} characterization of $\mu f$ in the following sense: to establish that $x \in \mu f$ it suffices to construct a well-supported set $X$ for $f$ such that $x \in X$.
Support orderings can also be used to characterize $\nu f$, the greatest fixpoint of monotonic function $f$. This characterization relies on the following observations. \begin{theorem}\label{thm:supported} Let $S$ be a set, let $f \in 2^S \to 2^S$ be a monotonic function over the subset lattice $(2^S, \subseteq, \bigcup, \bigcap)$, and let $X \subseteq S$. Then $X$ is supported for $f$ if, and only if, $X \subseteq f(X)$. \end{theorem} \begin{proof}
Let $X \subseteq S$ and $f$ be given. We prove both implications separately.
\begin{itemize}
\item[$\Rightarrow$] Assume $X$ is supported for $f$, so there is a support ordering $(X, \prec)$ for $f$. To show that $X \subseteq f(X)$, fix $x \in X$; we must show that $x \in f(X)$. Since $(X, \prec)$ is a support ordering for $f$, it follows that $x \in f(\preimg{{\prec}}{x})$. Since $f$ is monotonic and $\preimg{{\prec}}{x} \subseteq X$, we have $x \in f(X)$.
\item[$\Leftarrow$] Assume $X \subseteq f(X)$. To prove this result we must exhibit ${\prec} \subseteq X \times X$ such that $(X, \prec)$ is a support ordering for $f$. Consider ${\prec} = U_X = X \times X$. Then $\preimg{U_X}{x} = X$ for every $x \in X$, and since $X \subseteq f(X)$, every $x \in X$ satisfies $x \in f(X) = f(\preimg{U_X}{x})$, as required.\qedhere
\end{itemize} \end{proof}
This theorem establishes that any $X \subseteq S$ is a post-fixpoint for monotonic $f$ (i.e.\/ $X \subseteq f(X)$) if and only if $X$ is supported for $f$. We also know from Lemma~\ref{lem:tarski-knaster} that $
\nu f = \bigcup \{X \subseteq S \mid X \subseteq f(X) \}. $ Thus we have the following.
\begin{corollary}\label{cor:greatest-fixpoint}
Let $f$ be a monotonic function over the subset lattice generated by $S$. Then
$
\nu f = \bigcup \{ X \in 2^S \mid X \textnormal{ is supported for $f$}\}.
$ \end{corollary}
This section closes with definitions and results on support orderings that we use later in this paper. In what follows we fix a set $S$ and the associated complete lattice $(2^S, \subseteq, \bigcup, \bigcap)$. The first lemma establishes that any extension of a support ordering is also a support ordering.
\begin{lemma}[Extensions of support orderings]\label{lem:support-ordering-extension} Let $f \in 2^S \rightarrow 2^S$ be monotonic, let $(X, \prec)$ be a support ordering for $f$, and let ${\prec'} \subseteq S \times S$ be an extension of $\prec$. Then $(X, \prec')$ is also a support ordering for $f$. \end{lemma} \begin{proof} Follows from monotonicity of $f$ and the fact that $\preimg{{\prec}}{s} \subseteq \preimg{{\prec'}}{s}$.\qedhere \end{proof}
\noindent The next lemma establishes that unions of support orderings are also support orderings. \begin{lemma}[Unions of support orderings]\label{lem:unions-of-support-orderings} Let $f \in 2^S \rightarrow 2^S$ be monotonic, and let $\mathcal{X}$ be a set of support orderings for $f$. Then $(S_\mathcal{X},{\prec}_\mathcal{X})$ is a support ordering for $f$, where \begin{align*} S_\mathcal{X} &= \bigcup_{(S,{\prec}) \in \mathcal{X}} S \\ {\prec}_\mathcal{X} &= \bigcup_{(S,{\prec}) \in \mathcal{X}} {\prec}. \end{align*} \end{lemma} \begin{proof} It suffices to show that for every $s \in S_\mathcal{X}$, $s \in f(\preimg{{\prec_{\mathcal{X}}}}{s})$. So fix such an $s$. Since $s \in S_\mathcal{X}$ there is $(S,{\prec}) \in \mathcal{X}$ such that $s \in S$, and as $(S,{\prec})$ is a support ordering we know that $s \in f(\preimg{{\prec}}{s})$. That $s \in f(\preimg{{\prec_\mathcal{X}}}{s})$ follows immediately from the fact that $f$ is monotonic and $\preimg{{\prec}}{s} \subseteq \preimg{{\prec_\mathcal{X}}}{s}$. \qedhere \end{proof}
\noindent This result should be contrasted with Lemma~\ref{lem:well-supported-sets}. On the one hand, this lemma asserts a property about all sets of support orderings, whereas that lemma only refers to sets of well-supported sets, i.e.\/ sets of sets having well-founded support orderings. On the other hand, this lemma makes no guarantees about the properties of support ordering $(S_\mathcal{X}, {\prec}_\mathcal{X})$ \emph{vis \`a vis} the orderings $(S,{\prec})$. In particular, if all the orderings ${\prec}$ with $(S,{\prec}) \in \mathcal{X}$ are well-founded, it does not follow that ${\prec}_\mathcal{X}$ is well-founded. Lemma~\ref{lem:well-supported-sets}, on the other hand, does guarantee that a well-founded ordering over $S_{\mathcal{X}}$ does exist if each such $\prec$ is well-founded.
It can also be shown that well-founded support orderings can be extended to well-orderings.
\begin{lemma}[Well-orderings for well-supported sets] Let $f \in 2^S \rightarrow 2^S$ be monotonic, and let $(X,\prec)$ be a well-founded support ordering for $f$. Then there is a well-ordering ${\prec'} \subseteq X \times X$ extending $\prec$ such that $(X, \prec')$ is a support ordering for $f$. \end{lemma} \begin{proof} Follows from Lemmas~\ref{lem:well-ordering-extension} and~\ref{lem:support-ordering-extension}. \qedhere \end{proof}
The next result is a corollary of earlier lemmas.
\begin{corollary}[Support orderings for fixpoints]\label{cor:support-fixpoints} Let $f \in 2^S \rightarrow 2^S$ be monotonic. \begin{enumerate}
\item
$(\nu f, U_{\nu f})$ is a support ordering for $f$.\footnote{Recall that $U_{\nu f} = \nu f \times \nu f$.}
\item
There is a well-ordering ${\prec} \subseteq \mu f \times \mu f$ such that $(\mu f, \prec)$ is a support ordering for $f$. \end{enumerate} \end{corollary}
For technical convenience in what follows we introduce the notions of \emph{$\sigma$-compatible} and \emph{$\sigma$-maximal} support orderings for monotonic $f$ and $\sigma \in \{\mu,\nu\}$.
\begin{definition}[Compatible, maximal support orderings] Let $f \in 2^S \rightarrow 2^S$ be monotonic, let $\sigma \in \{\mu,\nu\}$, and let $(X, \prec)$ be a support ordering for $f$. \begin{enumerate}
\item $(X, \prec)$ is \emph{$\sigma$-compatible for $f$} iff either $\sigma = \nu$, or $\sigma = \mu$ and $\prec$ is well-founded.
\item $(X, \prec)$ is \emph{$\sigma$-maximal for $f$} iff $X = \sigma f$ and one of the following holds.
\begin{enumerate}
\item $\sigma = \nu$ and ${\prec} = U_X$.
\item $\sigma = \mu$ and ${\prec} \subseteq X \times X$ is a well-ordering of $X$.
\end{enumerate} \end{enumerate} \end{definition}
\noindent Corollary~\ref{cor:support-fixpoints} ensures that for any monotonic $f \in 2^S \rightarrow 2^S$ and $\sigma \in \{\mu,\nu\}$ there is a $\sigma$-maximal support ordering for $f$. When $\sigma = \nu$ this $\sigma$-maximal support ordering is unique, although this uniqueness property in general fails to hold for $\sigma = \mu$; while the fixpoint is unique, the associated well-ordering may not be.
\section{The Propositional Modal Mu-Calculus}\label{sec:mu-calculus}
This section defines the syntax and semantics of the modal mu-calculus and also establishes properties of the logic that will be used later in the paper.
\subsection{Labeled transition systems}
Labeled transition systems are intended to model the behavior of discrete systems. Define a \emph{sort} $\Sigma$ to be a set of atomic actions that a system can perform.
\begin{definition}[Labeled transition system]\label{def:lts}
A \emph{labeled transition system} (LTS) of sort $\Sigma$ is a pair $\lts{S}$, where $\states{S}$ is a set of \emph{states} and ${\xrightarrow{}} \subseteq \states{S} \times \Sigma \times \states{S}$ is the \emph{transition relation}. We write $s \xrightarrow{a} s'$ when $(s, a, s') \in {\xrightarrow{}}$ and $s \xrightarrow{a}$ when $s \xrightarrow{a} s'$ for some $s' \in \states{S}$. If $K \subseteq \Sigma$ then we write $s \xrightarrow{K} s'$ iff $s \xrightarrow{a} s'$ for some $a \in K$ and $s \xrightarrow{K}$ if $s \xrightarrow{K} s'$ for some $s'$. If there is no $s'$ such that $s \xrightarrow{a} s'$ / $s \xrightarrow{K} s'$ then we denote this as $s \centernot{\xrightarrow{a}}$ / $s \centernot{\xrightarrow{K}}$. \end{definition}
\noindent An LTS $(\states{S}, \xrightarrow{})$ of sort $\Sigma$ represents a system whose state space is $\states{S}$; the presence of transition $s \xrightarrow{a} s'$ indicates that when the system is in state $s$, it can engage in an atomic execution step labeled by $a$ and evolve to state $s'$. We now introduce two notions of \emph{predecessors} of sets of states in an LTS.
\begin{definition}[Predecessor Sets]\label{def:pre}
Let $\lts{S}$ be an LTS of sort $\Sigma$, with $S \subseteq \Bstates{S}$ and $K \subseteq \Sigma$. Then we define the following.
\begin{enumerate}
\item $\mathit{pred}_{\dia{K}}(S) = \{s \in \Bstates{S} \mid \exists s' \in S \colon s \xrightarrow{K} s' \}$
\item $\mathit{pred}_{[K]}(S) = \{s \in \Bstates{S} \mid \forall s' \in \Bstates{S} \colon s \xrightarrow{K} s' \implies s' \in S \}$
\end{enumerate} \end{definition}
\noindent If state $s \in \mathit{pred}_{\dia{K}}(S)$ then one of its outgoing transitions is labeled by $K$ and leads to a state in $S$, while $s \in \mathit{pred}_{[K]}(S)$ holds iff every outgoing transition from $s$ labeled by $K$ leads to $S$. Note that if $s \centernot{\xrightarrow{K}}$ then $s \in \mathit{pred}_{[K]}(S)$ but $s \not\in \mathit{pred}_{\dia{K}}(S)$. It immediately follows from the definitions that the operators satisfy the following properties.
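For a small, hypothetical LTS the two predecessor operators of Definition~\ref{def:pre} can be rendered directly as the following Python sketch; the final assertion reflects the observation above that a state with no outgoing $K$-transitions belongs to $\mathit{pred}_{[K]}(S)$ but not to $\mathit{pred}_{\dia{K}}(S)$.

\begin{verbatim}
# Illustration: predecessor sets for a small LTS given as a set of
# labeled transitions.  The LTS itself is a hypothetical example.
states = {"s0", "s1", "s2"}
trans = {("s0", "a", "s1"), ("s0", "b", "s2"), ("s1", "a", "s1")}

def pred_dia(K, S):    # <K>: some K-labeled transition leads into S
    return {s for s in states
            if any(a in K and t in S for (u, a, t) in trans if u == s)}

def pred_box(K, S):    # [K]: every K-labeled transition leads into S
    return {s for s in states
            if all(t in S for (u, a, t) in trans if u == s and a in K)}

assert pred_dia({"a"}, {"s1"}) == {"s0", "s1"}
assert pred_box({"a"}, {"s1"}) == {"s0", "s1", "s2"}  # s2 has no a-transition
\end{verbatim}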
\begin{lemma}\label{lem:pre}
Let $\lts{S}$ be an LTS of sort $\Sigma$, with $K \subseteq \Sigma$ and $S_1, S_2 \subseteq \states{S}$.
If $S_1 \subseteq S_2$ then $\mathit{pred}_{[K]}(S_1) \subseteq \mathit{pred}_{[K]}(S_2)$ and $\mathit{pred}_{\langle K \rangle}(S_1) \subseteq \mathit{pred}_{\langle K \rangle}(S_2)$. \end{lemma}
\subsection{Propositional modal mu-calculus}\label{subsec:propositional-modal-mu-calculus}
The propositional modal mu-calculus, which we often just call the mu-calculus, is a logic for describing properties of states in labeled transition systems. The version of the logic considered here matches the one in~\cite{BS1992}, which slightly extends~\cite{Koz1983} by allowing sets of labels in the modalities. We first define the set of formulas of the mu-calculus, then the \emph{well-formed} formulas. The latter will be the object of study in this paper.
\begin{definition}[Mu-calculus formulas]
Let $\Sigma$ be a sort and $\textnormal{Var}\xspace$ a countably infinite set of \emph{propositional variables}.
Then
formulas of the propositional modal mu-calculus over $\Sigma$ and $\textnormal{Var}\xspace$ are given by the following grammar
$$
\Phi ::= Z
\mid \lnot\Phi'
\mid \Phi_1 \land \Phi_2
\mid [K] \Phi'
\mid \nu Z.\Phi'
$$
where $K \subseteq \Sigma$ and $Z \in \textnormal{Var}\xspace$. \end{definition}
We assume the usual definitions of subformula, etc. To define the well-formed mu-calculus formulas, we first review the notions of free, bound and positive variables. Occurrences of $Z$ in $\nu Z.\Phi'$ are said to be \emph{bound}; an occurrence of a variable in a formula that is not bound within the formula is called \emph{free}. A variable $Z$ is free within a formula if it has at least one free occurrence in the formula, and is called \emph{non-free} otherwise. (So a variable may be non-free in a formula if either all its occurrences are bound, or if the variable has no occurrences at all in the formula.) A variable $Z$ is \emph{positive} in $\Phi$ if every free occurrence of $Z$ in $\Phi$ occurs inside the scope of an even number of negations. We can now define the well-formed mu-calculus formulas as follows.
\begin{definition}[Well-formed mu-calculus formulas]~\label{def:mu-calculus-syntax} A mu-calculus formula over $\Sigma$ and $\textnormal{Var}\xspace$ is \emph{well-formed} if each of its subformulas of form $\nu Z.\Phi$ satisfies: $Z$ is positive in $\Phi$.
We use $\muforms^{\Sigma}_{\textnormal{Var}\xspace}$ for the set of well-formed mu-calculus formulas over $\Sigma$ and $\textnormal{Var}\xspace$, or just $\mathbb{F}$ when $\Sigma$ and $\textnormal{Var}\xspace$ are obvious from context. \end{definition}
We denote substitution for free variables in the usual fashion: if $Z_1 \cdots Z_n \in \textnormal{Var}\xspace^*$ is duplicate-free and $\Phi_1 \cdots \Phi_n \in \mathbb{F}^*$ then we write $\Phi[Z_1 \cdots Z_n := \Phi_1 \cdots \Phi_n]$ for the simultaneous capture-free substitution of each $Z_i$ by $\Phi_i$ in $\Phi$. We also use the following standard derived operators. \begin{align*} \Phi_1 \lor \Phi_2 &= \lnot (\lnot \Phi_1 \land \lnot \Phi_2) \\ \dia{K} \Phi &= \lnot [K] \lnot\Phi\\ \mu Z. \Phi &= \lnot \nu Z . \lnot \Phi[Z := \lnot Z]\\ \texttt{t\!t} &= \nu Z . Z\\ \texttt{f\!f} &= \lnot \texttt{t\!t} \end{align*} In the definition of $\mu Z.\Phi$, note that if $Z$ is positive in $\Phi$ then it is also positive in $\lnot \Phi[Z := \lnot Z]$. Following standard convention, we refer to $\land$ and $\lor$, $[K]$ and $\dia{K}$, and $\nu$ and $\mu$ as \emph{duals}. Formulas extended with these dual operators are in \emph{positive normal form} iff all negation symbols directly apply to variable symbols. It is well-known that every formula can be rewritten to positive normal form when the duals are included in the logic. We refer to formulas of form $\nu Z.\Phi$ or $\mu Z.\Phi$ as \emph{fixpoint formulas} and write $\sigma Z.\Phi$ for a generic such formula (so $\sigma$ may be either $\nu$ or $\mu$).
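As a small worked instance of the derived least-fixpoint operator, take $\Phi = [K]Z$, in which $Z$ is positive. Unwinding the definition of $\mu Z.\Phi$ gives
\[
\mu Z. [K] Z
\;=\;
\lnot \nu Z. \lnot \bigl( ([K]Z)[Z := \lnot Z] \bigr)
\;=\;
\lnot \nu Z. \lnot [K] \lnot Z
\;=\;
\lnot \nu Z. \dia{K} Z,
\]
so, as expected, the least-fixpoint formula is the negation of a greatest-fixpoint formula over the dual modality.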
To define the semantics of mu-calculus formulas we use \emph{valuations} to handle free propositional variables. \begin{definition}[Valuations]~\label{def:valuation} Let $\mathcal{T} = \lts{S}$ be an LTS and $\textnormal{Var}\xspace$ a countably infinite set of variables. Then a \emph{valuation for $\textnormal{Var}\xspace$ over $\mathcal{T}$} is a function $\val{V} \in \textnormal{Var}\xspace \rightarrow 2^{\states{S}}$. \end{definition}
\noindent Since a valuation $\val{V}$ is a function, standard operations on functions such as $\val{V}[Z_1 \cdots Z_n := S_1 \cdots S_n]$, where $Z_1 \cdots Z_n \in \textnormal{Var}\xspace^*$ is duplicate-free and $S_1 \cdots S_n \in (2^\states{S})^*$, are applicable.
The semantics of the mu-calculus is now defined as follows. \begin{definition}[Mu-calculus semantics]~\label{def:mu-calculus-semantics}
Let $\mathcal{T} = \lts{S}$ be an LTS of sort $\Sigma$
and $\val{V} \in \textnormal{Var}\xspace \rightarrow 2^\states{S}$ a valuation.
Then
the semantic function $ \semTV{\Phi} \subseteq \mathcal{S}$, where $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$, is defined as follows.
\begin{align*}
\semTV{Z} & = \val{V}(Z) \\
\semTV{\lnot \Phi} & = \mathcal{S} \setminus \semTV{\Phi} \\
\semTV{\Phi_1 \land \Phi_2} & = \semTV{\Phi_1} \cap \semTV{\Phi_2} \\
\semTV{[K]\Phi} & = \mathit{pred}_{[K]}\left(\semTV{\Phi}\right) \\
\semTV{\nu Z . \Phi} & = \bigcup \{ S \subseteq \states{S} \mid S \subseteq \semT{\Phi}{\val{V}[Z := S]} \}
\end{align*}
If $s \in \semTV{\Phi}$ then we say that $s$ \emph{satisfies} $\Phi$ in the context of $\mathcal{T}$ and $\val{V}$. \end{definition}
\noindent For the dual operators one may derive the following semantic equivalences. \begin{align*}
\semTV{\Phi_1 \lor \Phi_2} & = \semTV{\Phi_1} \cup \semTV{\Phi_2} \\
\semTV{\dia{K} \Phi} & = \mathit{pred}_{\dia{K}}\left( \semTV{\Phi} \right) \\
\semTV{\mu Z . \Phi} & = \bigcap \{ S \subseteq \states{S} \mid \semT{\Phi}{\val{V}[Z := S]} \subseteq S \} \end{align*}
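To make the semantics concrete, the following Python sketch (an illustration only, not part of the formal development; the LTS and formula are arbitrary examples) evaluates closed formulas in positive normal form over a finite LTS, computing fixpoints by iteration from $\emptyset$ (for $\mu$) or from $\states{S}$ (for $\nu$); for the monotonic functions involved, on a finite state space this iteration yields exactly the extremal fixpoints used above.

\begin{verbatim}
# Illustration: evaluating formulas in positive normal form over a small
# LTS.  Formulas are nested tuples; fixpoints are computed by iteration.
states = {"s0", "s1", "s2"}
trans = {("s0", "a", "s1"), ("s1", "a", "s2"), ("s2", "a", "s2")}

def pred_dia(K, S):
    return {s for s in states
            if any(a in K and t in S for (u, a, t) in trans if u == s)}

def pred_box(K, S):
    return {s for s in states
            if all(t in S for (u, a, t) in trans if u == s and a in K)}

def sem(phi, val):
    op = phi[0]
    if op == "var": return val[phi[1]]
    if op == "and": return sem(phi[1], val) & sem(phi[2], val)
    if op == "or":  return sem(phi[1], val) | sem(phi[2], val)
    if op == "box": return pred_box(phi[1], sem(phi[2], val))
    if op == "dia": return pred_dia(phi[1], sem(phi[2], val))
    if op in ("mu", "nu"):                 # fixpoint by iteration
        Z, body = phi[1], phi[2]
        S = set() if op == "mu" else set(states)
        while True:
            S2 = sem(body, {**val, Z: S})
            if S2 == S: return S
            S = S2

# nu Z. <a> Z : "some infinite sequence of a-transitions is possible"
phi = ("nu", "Z", ("dia", {"a"}, ("var", "Z")))
assert sem(phi, {}) == {"s0", "s1", "s2"}
\end{verbatim}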
We frequently wish to view formulas as functions of their free variables. The next definition introduces this concept at both the syntactic and semantic level.
\begin{definition}[Formula functions]\label{def:formula-functions} Let $Z \in \textnormal{Var}\xspace$ be a variable and $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$ be a formula. \begin{enumerate}
\item \label{subdef:synactic-formula-function}
The \emph{syntactic function}, $\synf{Z}{\Phi} \in \muforms^{\Sigma}_{\textnormal{Var}\xspace} \to \muforms^{\Sigma}_{\textnormal{Var}\xspace}$, for $Z$ and $\Phi$ is defined as:
$$
(\synf{Z}{\Phi})(\Phi') = \Phi[Z:=\Phi'].
$$
\item \label{subdef:semantic-formula-function}
Let $\mathcal{T} = \lts{S}$ be an LTS of sort $\Sigma$, and $\val{V} \in \textnormal{Var}\xspace \to 2^{\states{S}}$ a valuation over $\mathcal{T}$.
Then the \emph{semantic function}, $\semfTV{Z}{\Phi} \in 2^{\states{S}} \to 2^{\states{S}}$, for $Z$ and $\Phi$ is defined as:
$$
\semfTV{Z}{\Phi}(S) = \semT{\Phi}{\val{V}[Z:=S]}.
$$ \end{enumerate} \end{definition}
We now state a well-known monotonicity result for formulas in which $Z$ is positive.
\begin{lemma}[Mu-calculus monotonicity]~\label{lem:mu-calculus-semantic-monotonicity}
Fix $\mathcal{T} = \lts{S}$ and $\val{V}$, and let $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$ be such that $Z \in \textnormal{Var}\xspace$ is positive in $\Phi$. Then $\semfZTV{\Phi} \in 2^\states{S} \to 2^\states{S}$ is monotonic over the subset lattice for $\states{S}$. \end{lemma} \begin{proof} We must prove that for all $S_1 \subseteq S_2 \subseteq \states{S}$, \[ \semfZTV{\Phi}(S_1) = \semT{\Phi}{\val{V}[Z := S_1]} \subseteq \semT{\Phi}{\val{V}[Z := S_2]} = \semfZTV{\Phi}(S_2). \] The proof proceeds by induction on $\Phi$. \qedhere \end{proof}
It turns out that the semantics of well-formed formulas $\nu Z.\Phi$ and $\mu Z.\Phi$ with respect to $\mathcal{T} = \lts{S}$ can be characterized as the greatest and least fixpoints, respectively, of $\semfT{Z}{\Phi}{\val{V}}$ over the subset lattice generated by $\states{S}$. In particular, Lemma~\ref{lem:mu-calculus-semantic-monotonicity} guarantees the monotonicity of $\semfT{Z}{\Phi}{\val{V}}$ over this lattice; Lemma~\ref{lem:tarski-knaster} then implies the characterization. The next lemma formalizes this insight.
\begin{lemma}[Fixpoint characterizations of formula functions]\label{lem:fixpoint-characterizations}
Fix $\mathcal{T}$ and $\val{V}$, let $Z \in \textnormal{Var}\xspace$, and let $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$ be such that $Z \in \textnormal{Var}\xspace$ is positive in $\Phi$. Then $\nu \left( \semfZTV{\Phi} \right) = \semTV{\nu Z.\Phi}$ and $\mu \left(\semfZTV{\Phi} \right) = \semTV{\mu Z.\Phi}$. \end{lemma} \begin{proof} Fix $\mathcal{T}, \val{V}, Z$ and $\Phi$ so that $Z$ is positive in $\Phi$. We prove the $\nu$ case; the $\mu$ case is left to the reader. Since $Z$ is positive in $\Phi$ Lemma~\ref{lem:mu-calculus-semantic-monotonicity} guarantees that $\semfTV{Z}{\Phi} \in 2^{\states{S}} \to 2^{\states{S}}$ is monotonic over the subset lattice generated by $\states{S}$, and thus $\nu\left(\semfTV{Z}{\Phi}\right) \subseteq \states{S}$ exists. We reason as follows. \begin{align*} \nu\left(\semfTV{Z}{\Phi}\right)
&= \bigcup \{S \subseteq \states{S} \mid S \subseteq \left(\semfTV{Z}{\Phi}\right)(S)\}
&& \text{Lemma~\ref{lem:tarski-knaster}} \\
&= \bigcup \{S \subseteq \states{S} \mid S \subseteq \semT{\Phi}{\val{V}[Z := S]}\}
&& \text{Definition of $\semfTV{Z}{\Phi}$} \\
&= \semTV{\nu Z.\Phi}
&& \text{Definition of $\semTV{\nu Z.\Phi}$} \end{align*} \qedhere \end{proof}
In the remainder of this section, we establish some identities on mu-calculus formulas that we will use later in this paper. The first result establishes a correspondence between substitution and valuation updates.
\begin{lemma}[Substitution and valuations]\label{lem:substitution}
Fix $\mathcal{T}$ and $\val{V}$,
let $\Phi, \Phi_1, \ldots, \Phi_n \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$ and let $Z_1 \cdots Z_n \in \textnormal{Var}\xspace^*$ be duplicate-free. Then
\[
\semTV{\Phi[Z_1 \cdots Z_n := \Phi_1 \cdots \Phi_n]}
=
\semT{\Phi}{\val{V}[Z_1 \cdots Z_n := \semTV{\Phi_1} \cdots \semTV{\Phi_n}]}
\] \end{lemma} \begin{proof} For notational conciseness we write $\vec{Z}$ for $Z_1 \cdots Z_n$, $\vec{\Phi}$ for $\Phi_1 \cdots \Phi_n$, and $\semTV{\vec{\Phi}}$ for $\semTV{\Phi_1} \cdots \semTV{\Phi_n}$. The proof proceeds by induction on the structure of $\Phi$. Most cases follow straightforwardly from the induction hypothesis; we consider the cases for formulas that involve variables at the top level. \begin{itemize}
\item $\Phi = Z$.
If $Z = Z_i$ for some $i$, the following reasoning applies.
\begin{flalign*}
& \semTV{Z[\vec{Z} := \vec{\Phi}]}
&& \\
&= \semTV{Z_i[\vec{Z} := \vec{\Phi}]}
&& \text{$Z = Z_i$}
\\
& = \semTV{\Phi_i}
&& \text{Definition of substitution}
\\
& = \left(\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,]\right)(Z_i)
&& \text{Definition of $\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,]$}
\\
& = \semT{Z}{\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}]}
&& \text{Definition of $\semT{Z_i}{\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,]}$, $Z = Z_i$}
\end{flalign*}
If $Z \neq Z_i$ for all $i$, we reason as follows.
\begin{flalign*}
& \semTV{Z[\vec{Z} := \vec{\Phi}]}
&&
\\
& = \semTV{Z}
&& \text{$Z \neq Z_i$ for all $i$}
\\
& = \val{V}(Z)
&& \text{Definition of $\semTV{Z}$}
\\
& = \left(\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,]\right)(Z)
&& \text{Definition of $\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,]$, $Z \neq Z_i$ for all $i$}
\\
& = \semT{Z}{\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}]}
&& \text{Definition of $\semT{Z}{\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}]}$}
\end{flalign*}
\item $\Phi = \nu Z . \Phi'$.
In this case, and without loss of generality due to the fact that substitution is capture-free, we may assume that $Z$ is not free in any of the $\Phi_i$.
If $Z \neq Z_i$ for all $i$, the following reasoning applies.
\begin{flalign*}
& \semTV{(\nu Z . \Phi')[\vec{Z} := \vec{\Phi}]}
&&
\\
& = \semTV{\nu Z . \left(\Phi'[\vec{Z} := \vec{\Phi}]\right)}
&& \text{$Z \neq Z_i$, $Z$ not free in $\vec{\Phi}$}
\\
& = \bigcup\{ S \subseteq \states{S} \mid S \subseteq \semT{\Phi'[\vec{Z} := \vec{\Phi}]}{\val{V}[Z := S]} \}
&& \text{Definition of $\semTV{\nu Z . \cdots}$}
\\
& = \bigcup\{ S \subseteq \states{S} \mid S \subseteq \semT{\Phi'}{(\val{V}[Z := S])[\vec{Z} := \semTV{{\vec{\Phi}}}]} \}
&& \text{Induction hypothesis}
\\
& = \bigcup\{ S \subseteq \states{S} \mid S \subseteq \semT{\Phi'}{(\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,])[Z := S]} \}
&& \text{$Z \neq Z_i$, for all $i$}
\\
& = \semT{\nu Z . \Phi'}{\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,]}
&& \text{Definition of $\semT{\nu Z . \Phi'}{\val{V}[\vec{Z} := \semTV{{\vec{\Phi}}}\,]}$}
\end{flalign*}
If $Z = Z_i$ for some $i$ then from definition of substitution it follows that
\[
(\nu Z.\Phi')[\vec{Z} := \vec{\Phi}]
=
(\nu Z.\Phi')[\vec{Z}_{\neq i} := \vec{\Phi}_{\neq i}]
\]
where $\vec{Z}_{\neq i} = Z_1 \cdots Z_{i-1} Z_{i+1} \cdots Z_n$, and similarly for $\vec{\Phi}_{\neq i}$.
Using an inductive argument on $|\vec{Z}| = n$ we can assume that
\[
\semTV{(\nu Z.\Phi')[\vec{Z}_{\neq i} := \vec{\Phi}_{\neq i}]}
=
\semT{\nu Z.\Phi'}{\val{V}[\vec{Z}_{\neq i} := \semTV{\vec{\Phi}_{\neq i}}]}.
\]
From this fact, and the observation that
\[
\left( \val{V}[\vec{Z} := \semTV{\vec{\Phi}}] \right)[Z_i := S]
=
\left( \val{V}[\vec{Z}_{\neq i} := \semTV{\vec{\Phi}_{\neq i}}] \right)[Z_i := S]
\]
it is easy to establish the desired result. The details are left to the reader.
\qedhere \end{itemize} \end{proof}
\noindent The next result is immediate from this lemma and Lemma~\ref{lem:mu-calculus-semantic-monotonicity}.
\begin{corollary}[Monotonicity of substitution]\label{cor:monotonicity-of-substitution}
Fix $\mathcal{T}, \val{V}$ and $Z \in \textnormal{Var}\xspace$, and let $\Phi$, $\Phi_1$ and $\Phi_2$ be such that $Z$ is positive in $\Phi$ and $\semTV{\Phi_1} \subseteq \semTV{\Phi_2}$. Then
\[
\semTV{\Phi[ Z:= \Phi_1 ]}
\subseteq
\semTV{\Phi[ Z:= \Phi_2 ]}.
\] \end{corollary}
The following lemma states that the semantics of a formula is not affected by the values that a valuation assigns to variables that are not free in the formula.
\begin{lemma}[Semantics and non-free variables]\label{lem:substitution-of-bound-variables}
Let $\Phi$ be a formula in which $Z \in \textnormal{Var}\xspace$ is not free. Then for all $\mathcal{T} = \lts{S}, \val{V}$ and $S \subseteq \states{S}$,
$
\semTV{\Phi} = \semfZTV{\Phi}(S).
$ \end{lemma} \begin{proof} Observe that $\semfZTV{\Phi}(S) = \semT{\Phi}{\val{V}[Z := S]}$. It follows from a routine induction on the structure of $\Phi$ that $\semTV{\Phi} = \semT{\Phi}{\val{V}[Z := S]}$ if $Z$ does not appear free in $\Phi$. \qedhere \end{proof}
\noindent Lemmas~\ref{lem:fixpoint-characterizations} and~\ref{lem:substitution} guarantee that formulas can be \emph{unfolded}.
\begin{lemma}[Fixpoint unfolding]\label{lem:unfolding}
Fix $\mathcal{T}$ and $\val{V}$, and let $\sigma Z.\Phi$ be a fixpoint formula. Then $\semTV{\sigma Z.\Phi} = \semTV{\Phi[Z := \sigma Z.\Phi]}$. \end{lemma} \begin{proof} The result is established using the following reasoning. \begin{flalign*}
\semTV{\sigma Z.\Phi}
& = \sigma \semfTV{Z}{\Phi}
&& \text{Lemma~\ref{lem:fixpoint-characterizations}}
\\
& = \semfTV{Z}{\Phi}(\sigma \semfTV{Z}{\Phi})
&& \text{Definition of fixpoint}
\\
& = \semT{\Phi}{\val{V}[Z := \sigma \semfTV{Z}{\Phi}]}
&& \text{Definition~\ref{def:formula-functions}}
\\
& = \semT{\Phi}{\val{V}[Z := \semTV{\sigma Z.\Phi}]}
&& \text{Lemma~\ref{lem:fixpoint-characterizations}}
\\
& = \semTV{\Phi[Z := \sigma Z . \Phi]}
&& \text{Lemma~\ref{lem:substitution}} \end{flalign*} \qedhere \end{proof}
\section{Base proof system}\label{sec:base-proof-system}
This section defines the base proof system for the mu-calculus considered in this paper. It mirrors the ones given in~\cite{BS1992,Bra1991} and is intended to prove that sets of states in a transition system satisfy mu-calculus formulas. Later in the paper we will extend this proof system in various ways. In what follows, fix sort $\Sigma$ and countably infinite propositional variable set $\textnormal{Var}\xspace$.
\subsection{Definition lists and sequents} The proof system reasons about \emph{sequents}, which make statements about sets of states satisfying mu-formulas. Our sequents also involve \emph{definition lists}, which are used in the construction of proofs to control the unfolding of fixpoint formulas. We define definition lists and sequents below.
\paragraph{Definition lists.} Definition lists bind fresh variables in $\textnormal{Var}\xspace$ to fixpoint formulas. In a proof setting, such a list records the fixpoint formulas that have been unfolded previously, so that decisions about whether or not to unfold again later in the proof can be made. Here we define definition lists formally and establish basic results about them.
\begin{definition}[Definition lists]\label{def:definition-list}
A \emph{definition list} $\Delta$ is a finite sequence $(U_1 = \Phi_1) \cdots (U_n = \Phi_n)$, with each $U_i \in \textnormal{Var}\xspace$ and $\Phi_i \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$, satisfying the following.
\begin{enumerate}
\item
If $i \neq j$ then $U_i \neq U_j$.
\item
For all $1 \leq i,j \leq n$, $U_i$ cannot appear bound anywhere in $\Phi_j$.
\item
If $i \leq j$ then $U_j$ cannot appear free in $\Phi_i$.
\end{enumerate}
The individual $(U_i = \Phi_i)$ in a definition list are sometimes called \emph{definitions}, with each $U_i$ referred to as a \emph{definitional constant}. We also define $\Delta(U_i) = \Phi_i$ to be the formula associated with $U_i$ in $\Delta$ and $\operatorname{dom}(\Delta) = \{U_1, \ldots, U_n\}$ to be the set of definitional constants in $\Delta$. \end{definition}
\noindent A definition list consists of a sequence of bindings, or definitions, of form $(U_i = \Phi_i)$. The constraints ensure that every $U_i$ is unique and does not appear bound by any $\sigma$ operator inside any $\Phi_j$. $U_i$ may also appear free in definitions to the right of $(U_i = \Phi_i)$, but not in $\Phi_i$ or in definitions to the left.
Since definition lists are sequences the sequence notations defined in Section~\ref{subsec:sequences}, including $\varepsilon$, $\cdot$ and $\preceq$, are applicable. We now introduce additional definition-list terms and notation.
\begin{definition}[Prefixes / suffixes of definition lists]\label{def:definition-list-prefix} Let $\Delta = \Delta_1 \cdot (U = \Phi) \cdot \Delta_2$ be a definition list. \begin{enumerate}
\item
$\Delta_{\prec U} = \Delta_1$ is the longest prefix of $\Delta$ that omits $U$.
\item
$\Delta_{\preceq U} = \Delta_1 \cdot (U = \Phi)$ is the shortest prefix of $\Delta$ that includes $U$.
\item
$\Delta_{\succ U} = \Delta_2$ is the longest suffix of $\Delta$ that omits $U$.
\item
$\Delta_{\succeq U} = (U = \Phi) \cdot \Delta_2$ is the shortest suffix of $\Delta$ that includes $U$. \end{enumerate} \end{definition}
\begin{definition}[Compatibility of definition lists]\label{def:definition-list-compatibility} Let $\Delta_1$ and $\Delta_2$ be definition lists. Then $\Delta_2$ is \emph{compatible with} $\Delta_1$ iff $\operatorname{dom}(\Delta_1) \cap \operatorname{dom}(\Delta_2) = \emptyset$ and no $U_2 \in \operatorname{dom}(\Delta_2)$ appears in $\Delta_1(U_1)$ for any $U_1 \in \operatorname{dom}(\Delta_1)$. \end{definition}
\noindent We have the following.
\begin{lemma}[Definition-list concatenation]\label{lem:definition-list-concatenation}
\begin{enumerate}
\item
Let $\Delta$ be a definition list such that $\Delta = \Delta_1 \cdot \Delta_2$. Then $\Delta_1$ and $\Delta_2$ are definition lists, and $\Delta_2$ is compatible with $\Delta_1$.
\item
Suppose that $\Delta_1$ and $\Delta_2$ are definition lists such that $\Delta_2$ is compatible with $\Delta_1$. Then $\Delta_1 \cdot \Delta_2$ is a definition list.
\end{enumerate} \end{lemma} \begin{proof}
Immediate from Definitions~\ref{def:definition-list} and~\ref{def:definition-list-compatibility}.\qedhere \end{proof}
\begin{corollary}\label{cor:prefix-suffix-inheritance}
Let $\Delta$ be a definition list and $U \in \operatorname{dom}(\Delta)$ be a definitional constant. Then $\Delta_{\prec U}, \Delta_{\preceq U}$, $\Delta_{\succ U}$ and $\Delta_{\succeq U}$ are also definition lists. \end{corollary} \begin{proof}
Immediate from Lemma~\ref{lem:definition-list-concatenation} and Definition~\ref{def:definition-list-prefix}.\qedhere \end{proof}
Syntactically, definition lists may be seen as \emph{iterated substitutions}. This intuition is formalized below.
\begin{definition}[Formula expansion by a definition list]\label{def:formula-expansion}
Let $\Delta$ be a definition list and $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$. Then $\Phi[\Delta]$, the \emph{expansion} of $\Phi$ with respect to $\Delta$, is defined inductively on $\Delta$ as follows.
\begin{itemize}
\item
$\Phi[\varepsilon] = \Phi$
\item
$\Phi[\Delta \cdot (U = \Psi)] = \left( \Phi [U := \Psi] \right)[\Delta]$
\end{itemize} \end{definition}
\noindent Note that $\Phi[\Delta]$ contains no occurrences of any elements of $\operatorname{dom}(\Delta)$.
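As a small worked example of the recursion, consider a two-element definition list $\Delta = (U_1 = \Phi_1) \cdot (U_2 = \Phi_2)$; the definition unwinds as follows.
\begin{align*}
\Phi[(U_1 = \Phi_1) \cdot (U_2 = \Phi_2)]
&= \bigl( \Phi[U_2 := \Phi_2] \bigr) [(U_1 = \Phi_1)]\\
&= \bigl( \bigl( \Phi[U_2 := \Phi_2] \bigr) [U_1 := \Phi_1] \bigr) [\varepsilon]\\
&= \bigl( \Phi[U_2 := \Phi_2] \bigr) [U_1 := \Phi_1]
\end{align*}
Since $U_1$ may occur free in $\Phi_2$ (but not vice versa), the outer substitution $[U_1 := \Phi_1]$ also resolves any occurrences of $U_1$ introduced by $\Phi_2$, which is why the expansion contains no definitional constants.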
The definition of $\Phi[\Delta]$ recurses from the back of list $\Delta$ to the front. The following property, which characterizes $\Phi[\Delta]$ for non-empty $\Delta$ in terms of the first definition in $\Delta$, is useful in proofs by induction on $\Delta$. \begin{lemma}\label{lem:nonempty-formula-expansion}
Let $\Delta = (U_1 = \Phi_1) \cdot \Delta'$ be a non-empty definition list. Then for any $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$,
$
\Phi[\Delta] = \left( \Phi[\Delta'] \right) \,[U_1 := \Phi_1].
$ \end{lemma} \remove{ \begin{proofsketch}
By induction on $\Delta'$. Details may be found in the appendix.\qedhere \end{proofsketch} } \begin{proof}
Proceeds by induction on $\Delta'$. There are two cases to consider.
\begin{itemize}
\item $\Delta' = \varepsilon$.
Fix $\Phi$. We have the following.
\begin{align*}
\Phi[\Delta]
&= \Phi[(U_1 = \Phi_1)]
&& \text{$\Delta = (U_1 = \Phi_1) \cdot \Delta', \Delta' = \varepsilon$}
\\
&= \Phi[\varepsilon \cdot (U_1 = \Phi_1)]
&& \text{$\vec{w} = \varepsilon \cdot \vec{w}$ for any sequence $\vec{w}$}
\\
&= \left( \Phi[\varepsilon] \right) [U_1 := \Phi_1]
&& \text{Definition of $\Phi[\varepsilon \cdot (U_1 = \Phi_1)]$}
\\
&= \left( \Phi[\Delta'] \right) [U_1 := \Phi_1]
&& \text{$\Delta' = \varepsilon$}
\end{align*}
\item $\Delta' = \Delta'' \cdot (U' = \Phi')$.
The induction hypothesis guarantees that for any $\Phi$, $\Phi[(U_1 = \Phi_1) \cdot \Delta''] = \left( \Phi[\Delta''] \right)[U_1 := \Phi_1]$. Now fix $\Phi$. We reason as follows.
\begin{align*}
\Phi[\Delta]
&= \Phi[(U_1 = \Phi_1) \cdot \Delta']
&& \text{$\Delta = (U_1 = \Phi_1) \cdot \Delta'$}
\\
&= \Phi[(U_1 = \Phi_1) \cdot (\Delta'' \cdot (U' = \Phi'))]
&& \text{$\Delta' = \Delta'' \cdot (U' = \Phi')$}
\\
&= \Phi[((U_1 = \Phi_1) \cdot \Delta'') \cdot (U' = \Phi')]
&& \text{Associativity of $\cdot$}
\\
&= \left( \Phi[U' := \Phi'] \right) [(U_1 = \Phi_1) \cdot \Delta'']
&& \text{Definition of $\Phi[\cdots (U' = \Phi')]$}
\\
&= \left( \left( \Phi[U' := \Phi'] \right)[\Delta''] \right) \,[U_1 := \Phi_1]
&& \text{Induction hypothesis}
\\
&= \left( \Phi[\Delta'' \cdot (U' = \Phi')] \right) \,[U_1 := \Phi_1]
&& \text{Definition of $\Phi[\Delta'' \cdot (U' = \Phi')]$}
\\
&= \left( \Phi[\Delta'] \right) \,[U_1 := \Phi_1]
&& \text{$\Delta' = \Delta'' \cdot (U' = \Phi')$}
\end{align*}
\qedhere
\end{itemize} \end{proof}
In a similar vein we define the semantic extension, $\val{V}[\Delta]$, of valuation $\val{V}$ by definition list $\Delta$ as follows. Essentially, $\val{V}[\Delta]$ updates $\val{V}$ with the semantic interpretation of the equations appearing in $\Delta$. We use this notion later to assign semantics to the sequents labeling the nodes in the proof tree.
\begin{definition}[Valuation extension by a definition list]\label{def:valuation-extension}
Let $\mathcal{T} = \lts{S}$ be an LTS over $\Sigma$, $\val{V} \in \textnormal{Var}\xspace \to 2^\states{S}$ a valuation,
and $\Delta$ a definition list. Then $\val{V}[\Delta]$, the \emph{extension} of $\val{V}$ by $\Delta$, is defined inductively on $\Delta$ as follows.
\begin{enumerate}
\item
$\val{V}[\varepsilon] = \val{V}$
\item
$\val{V}[(U = \Phi) \cdot \Delta] = \bigl(\val{V} \,[\, U := \semTV{\Phi} \;]\,\bigr) \,[\Delta]$
\end{enumerate} \end{definition}
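To illustrate (again with $\Phi_1$ and $\Phi_2$ arbitrary formulas), let $\Delta = (U_1 = \Phi_1) \cdot (U_2 = \Phi_2)$ and write $\val{V}' = \val{V}[U_1 := \semTV{\Phi_1}]$. Then
\[
\val{V}[\Delta]
= \val{V}'[(U_2 = \Phi_2)]
= \left( \val{V}'[U_2 := \semT{\Phi_2}{\val{V}'}] \right) [\varepsilon]
= \val{V}'[U_2 := \semT{\Phi_2}{\val{V}'}],
\]
so $\Phi_1$ is interpreted with respect to the original valuation $\val{V}$, while $\Phi_2$ is interpreted with respect to the already-updated valuation $\val{V}'$.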
In contrast with $\Phi[\Delta]$, the definition of $\val{V}[\Delta]$ recurses from the front of $\Delta$ to the back. The next lemma gives a characterization of $\val{V}[\Delta]$ for non-empty $\Delta$ in terms of the last binding contained in $\Delta$.
\begin{lemma}\label{lem:nonempty-valuation-extension}
Let $\mathcal{T}$ be an LTS over $\Sigma$ and $\Delta = \Delta' \cdot (U = \Phi)$ a non-empty definition list.
Then for any valuation $\val{V} \in \textnormal{Var}\xspace \to 2^\states{S}$, $\val{V}[\Delta] = \left( \val{V}[\Delta'] \right) [U := \semT{\Phi}{\val{V}[\Delta']}]$. \end{lemma} \remove{ \begin{proofsketch}
Fix an arbitrary LTS $\mathcal{T}$ and $\Delta = \Delta' \cdot (U = \Phi)$.
The proof is by induction on $\Delta'$. Details can be found in the appendix. \end{proofsketch} } \begin{proof}
Fix an arbitrary LTS $\mathcal{T}$ and $\Delta = \Delta' \cdot (U = \Phi)$.
We proceed by induction on $\Delta'$. There are two cases to consider.
\begin{itemize}
\item
$\Delta' = \varepsilon$.
Fix $\val{V}$. The definition of $\val{V}[\Delta]$ then guarantees the desired result.
\item
$\Delta' = (U_1 = \Phi_1) \cdot \Delta''$.
The induction hypothesis guarantees that for all $\val{V}$,
$\val{V}[\Delta'' \cdot (U = \Phi)] = \left( \val{V}[\Delta''] \right) [U := \semT{\Phi}{\val{V}[\Delta'']}].$
We reason as follows.
\begin{align*}
\val{V}[\Delta]
& = \val{V}[\Delta' \cdot (U = \Phi)]
& & \text{$\Delta = \Delta' \cdot (U = \Phi)$}
\\
& = \val{V}[((U_1 = \Phi_1) \cdot \Delta'') \cdot (U = \Phi)]
& & \text{$\Delta' = (U_1 = \Phi_1) \cdot \Delta''$}
\\
& = \val{V}[(U_1 = \Phi_1) \cdot (\Delta'' \cdot (U = \Phi))]
& & \text{Associativity of $\cdot$}
\\
& = \left( \val{V}[ U_1 := \semTV{\Phi_1}] \right) [\Delta'' \cdot (U = \Phi)]
& & \text{Definition of $\val{V}[(U_1 = \Phi_1) \cdots]$}
\\
& = \val{V}'[U := \semT{\Phi}{\val{V}'}]
& & \text{Induction hypothesis;}
\\
& & & \text{$\val{V}' = \left( \val{V}[\, U_1 := \semTV{\Phi_1}\,] \right) [\Delta'']$}
\\
& = \val{V}'' [U := \semT{\Phi}{\val{V}''}]
& & \text{Definition of $\val{V}[(U_1 = \Phi_1) \cdot \Delta'']$;}
\\
& & & \text{$\val{V}'' = \val{V}[\,(U_1 = \Phi_1) \cdot \Delta''\,]$}
\\
& = \left( \val{V}[\Delta'] \right)[U := \semT{\Phi}{\val{V}[\Delta']}]
& & \text{$\Delta' = (U_1 = \Phi_1) \cdot \Delta''$}
\end{align*}
\qedhere
\end{itemize} \end{proof}
\noindent In Lemma~\ref{lem:definition-list-correspondence} we establish a correspondence between $\semTV{\Phi[\Delta]}$ and $\semT{\Phi}{\val{V}[\Delta]}$.
\begin{lemma}[Definition-list correspondence]\label{lem:definition-list-correspondence}
Let $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$. Then for every LTS $\mathcal{T}$ over $\Sigma$, definition list $\Delta$, and valuation $\val{V}$,
$
\semTV{\Phi[\Delta]}
=
\semT{\Phi}{\val{V}[\Delta]}.
$ \end{lemma}
\begin{proof}
Fix $\Phi$ and LTS $\mathcal{T}$ over $\Sigma$.
The proof proceeds by induction on $\Delta$. In the base case $\Delta = \varepsilon$, and the result is immediate, as $\Phi[\varepsilon] = \Phi$ and $\val{V}[\varepsilon] = \val{V}$. Now assume that $\Delta = (U_1 = \Phi_1) \cdot \Delta'$. The induction hypothesis guarantees that for any valuation $\val{V}$,
$\semTV{\Phi[\Delta']}
=
\semT{\Phi}{\val{V}[\Delta']}.$
We must prove that for any valuation $\val{V}$,
$\semTV{\Phi[\Delta]}
=
\semT{\Phi}{\val{V}[\Delta]}.$
So fix $\val{V}$. We reason as follows.
\[
\begin{array}{rcl@{\;\;\;}p{5cm}}
\semTV{\Phi[\Delta]}
& =
& \semTV{\Phi[(U_1 = \Phi_1) \cdot \Delta']}
& $\Delta = (U_1 = \Phi_1) \cdot \Delta'$
\\[6pt]
& =
& \semTV{(\Phi[\Delta'])[U_1 := \Phi_1]}
& Lemma~\ref{lem:nonempty-formula-expansion}
\\[6pt]
& =
& \semT{\Phi[\Delta']}{\val{V}[U_1 := \semTV{\Phi_1}]}
& Lemma~\ref{lem:substitution}
\\[6pt]
& =
& \semT{\Phi}{(\val{V}[U_1 := \semTV{\Phi_1}])\,[\Delta']}
& Induction hypothesis
\\[6pt]
& =
& \semT{\Phi}{\val{V}[(U_1 = \Phi_1) \cdot \Delta']}
& Definition of $\val{V}[(U_1 = \Phi_1) \cdot \Delta']$
\\[6pt]
& =
& \semT{\Phi}{\val{V}[\Delta]}
& $\Delta = (U_1 = \Phi_1) \cdot \Delta'$
\end{array}
\]\qedhere \end{proof}
The next lemma asserts that the semantics of definitional constant $U$ in the definition list $\Delta$ only depends on the prefix of $\Delta$ up to and including the definition of $U$; the subsequent definitions in $\Delta$ have no effect.
\begin{lemma}[Definitional-constant semantics]\label{lem:invariance-definitional-constant}
Let $\mathcal{T}$ be an LTS over $\Sigma$ and $\Delta$ a definition list with $U \in \operatorname{dom}(\Delta)$.
Then for any valuation $\val{V} \in \textnormal{Var}\xspace \to 2^\states{S}$, $\val{V}[\Delta] \left( U \right) = \val{V}[\Delta_{\preceq U}] \left( U \right)$. \end{lemma} \remove{ \begin{proofsketch}
From the definitions of $\Delta_{\preceq U}$ and $\Delta_{\succ U}$ it follows that $\Delta = (\Delta_{\preceq U}) \cdot (\Delta_{\succ U})$. The proof then proceeds by induction on $\Delta_{\succ U}$, using Lemma~\ref{lem:nonempty-valuation-extension}. The detailed proof can be found in the appendix. \end{proofsketch} } \begin{proof}
From the definitions of $\Delta_{\preceq U}$ and $\Delta_{\succ U}$ it follows that $\Delta = (\Delta_{\preceq U}) \cdot (\Delta_{\succ U})$. The proof proceeds by induction on $\Delta_{\succ U}$. There are two cases to consider.
\begin{itemize}
\item
$\Delta_{\succ U} = \varepsilon$.
In this case $\Delta = \Delta_{\preceq U} = \Delta_{\prec U} \cdot (U = \Phi)$, and the result follows from Lemma~\ref{lem:nonempty-valuation-extension}.
\item
$\Delta_{\succ U} = \Delta' \cdot (U' = \Phi')$ for some $U' \not\in \operatorname{dom}(\Delta_{\preceq U} \cdot \Delta')$; in particular, $U' \neq U$.
Note that since $\Delta_{\succ U}$ is compatible with $\Delta_{\preceq U}$, so is $\Delta'$, and thus $\Delta_{\preceq U} \cdot \Delta'$ is a definition list.
The induction hypothesis thus guarantees that
$\left( \val{V}[\Delta_{\preceq U}] \right) (U) = \left( \val{V}[\Delta_{\preceq U} \cdot \Delta'] \right) (U).$
We reason as follows.
\begin{flalign*}
& \left( \val{V}[\Delta] \right) (U)
&&
\\
&= \val{V}[ \,\left( \Delta_{\preceq U} \right) \cdot \left( \Delta_{\succ U} \right)\, ] \,(U)
&& \text{$\Delta = \left( \Delta_{\preceq U} \right) \cdot \left( \Delta_{\succ U} \right)$}
\\
&= \left( \val{V}[ \,\left( \Delta_{\preceq U} \right) \cdot \Delta' \cdot (U' = \Phi')\, ] \right) \,(U)
&& \text{$\Delta_{\succ U} = \Delta' \cdot (U' = \Phi')$}
\\
&= \left( \left(
\val{V} [ \,\left( \Delta_{\preceq U} \right) \cdot \Delta'\, ] \right)
[U' := \semT{\Phi'}{\val{V}[ \,\left( \Delta_{\preceq U} \right) \cdot \Delta'\, ]}]\right) \,(U)
&& \text{Lemma~\ref{lem:nonempty-valuation-extension}}
\\
&= \left( \val{V} [ \,\left( \Delta_{\preceq U} \right) \cdot \Delta'\, ] \right) \,(U)
&& \text{$U \neq U'$}
\\
&= \val{V}[\Delta_{\preceq U}] \,(U)
&& \text{Induction hypothesis}
\end{flalign*}
\qedhere
\end{itemize} \end{proof}
\noindent The next corollaries follow from this lemma.
\begin{corollary}\label{cor:shared-prefix}
Let $\mathcal{T}$ be an LTS over $\Sigma$, $\Delta$, $\Delta'$ be definition lists, $U \in \operatorname{dom}(\Delta)$ such that $\Delta_{\preceq U} = \Delta'_{\preceq U}$, and $\val{V}$ be a valuation. Then:
\begin{enumerate}
\item\label{cor:shared-prefix-constant-semantics}
$\left( \val{V}[\Delta] \right) (U) = \left( \val{V}[\Delta'] \right) (U)$.
\item\label{cor:shared-prefix-constant-formula-semantics}
$\semT{U}{\val{V}[\Delta]} = \semT{U}{\val{V}[\Delta']}$.
\end{enumerate} \end{corollary}
\begin{corollary}\label{cor:prefix-formula-semantics}
Let $\mathcal{T}$ be an LTS over $\Sigma$ and $\Delta, \Delta_1$ and $\Delta_2$ be definition lists such that $\Delta = \Delta_1 \cdot \Delta_2$, and let $\Phi$ be such that no $U' \in \operatorname{dom}(\Delta_2)$ is free in $\Phi$. Then for any $\val{V}$, $\semT{\Phi}{\val{V}[\Delta]} = \semT{\Phi}{\val{V}[\Delta_1]}$. \end{corollary} \begin{proof}
By induction on $\Phi$. \end{proof}
We now show a correspondence between the semantics of a constant $U$ and $\Delta(U)$, where $\Delta$ is a definition list with $U \in \operatorname{dom}(\Delta)$.
\begin{lemma}[$U$ semantic correspondence]\label{lem:U-semantic-correspondence}
Let $\mathcal{T}$ be an LTS over $\Sigma$, $\Delta$ be a definition list with $U \in \operatorname{dom}(\Delta)$, and $\val{V}$ be a valuation. Then $\semT{U}{\val{V}[\Delta]} = \semT{\Delta(U)}{\val{V}[\Delta_{\prec U}]} = \semT{\Delta(U)}{\val{V}[\Delta]}$. \end{lemma} \begin{proof}
Fix LTS $\mathcal{T}$, definition list $\Delta$ and $U \in \operatorname{dom}(\Delta)$, and assume $\Delta(U) = \Phi$.
Let $\val{V}$ be an arbitrary valuation.
We reason as follows.
\begin{align*}
\semT{U}{\val{V}[\Delta]}
& = \semT{U}{\val{V}[\Delta_{\preceq U}]}
& & \text{Corollary~\ref{cor:shared-prefix}(\ref{cor:shared-prefix-constant-formula-semantics})}
\\
& = \semTV{U[\Delta_{\preceq U}]}
& & \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
& = \semTV{U[ \,\Delta_{\prec U} \cdot (U = \Phi) \, ]}
& & \Delta_{\preceq U} = \Delta_{\prec U} \cdot (U = \Phi)
\\
& = \semTV{\left(U [U := \Phi] \right) \, [ \,\Delta_{\prec U} \, ]}
& & \text{Definition of $U[ \,\Delta_{\prec U} \cdot (U = \Phi) \, ]$}
\\
& = \semTV{\Phi [ \,\Delta_{\prec U} \, ]}
& & \text{Definition of substitution}
\\
& = \semT{\Phi}{\val{V}[\Delta_{\prec U}]}
& & \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
& = \semT{\Delta(U)}{\val{V}[\Delta_{\prec U}]}
& & \Delta(U) = \Phi
\\
& = \semT{\Delta(U)}{\val{V}[\Delta]}
& & \text{Corollary~\ref{cor:prefix-formula-semantics}, $\Delta = (\Delta_{\prec U}) \cdot (\Delta_{\succeq U})$,}
\\
&
& & \text{no $U' \in \operatorname{dom}(\Delta_{\succeq U})$ free in $\Phi = \Delta(U)$}
\end{align*}
\qedhere \end{proof}
We close our treatment of definition lists by showing the following useful fact about unfolding when $\Delta(U) = \sigma Z.\Phi$ for some $\Phi$.
\begin{lemma}[Definitional-constant unfolding]\label{lem:constant-unfolding}
Let $\mathcal{T}$ be an LTS over $\Sigma$, $\Delta$ be a definition list with $\Delta(U) = \sigma Z. \Phi$, and $\val{V}$ be a valuation. Then $\semT{U}{\val{V}[\Delta]} = \semT{\Phi[Z := U]}{\val{V}[\Delta]}$. \end{lemma} \begin{proof}
Fix $\mathcal{T}$, $\Delta$ and $\val{V}$, and let $U$ be such that $\Delta(U) = \sigma Z.\Phi$. We reason as follows.
\begin{align*}
\semT{U}{\val{V}[\Delta]}
&= \semT{\sigma Z. \Phi}{\val{V}[\Delta_{\prec U}]}
&& \text{Lemma~\ref{lem:U-semantic-correspondence}, $\Delta(U) = \sigma Z. \Phi$}
\\
&= \semT{\Phi [Z := \sigma Z.\Phi]}{\val{V}[\Delta_{\prec U}]}
&& \text{Lemma~\ref{lem:unfolding}}
\\
&= \semT{\left( \Phi [Z := U] \right) [U := \sigma Z. \Phi]}{\val{V}[\Delta_{\prec U}]}
\\
\multispan4{\hfil Property of substitution; $U$ not free in $\Phi$}
\\
&= \semT{\Phi [Z := U]}{\left( \val{V}[\Delta_{\prec U}] \right)[U := \semT{\sigma Z. \Phi}{\val{V}[\Delta_{\prec U}]}]}
&& \text{Lemma~\ref{lem:substitution}}
\\
&= \semT{\Phi [Z := U]}{\val{V}[\Delta_{\prec U} \cdot (U = \sigma Z. \Phi)]}
&& \text{Lemma~\ref{lem:nonempty-valuation-extension}}
\\
&= \semT{\Phi [Z := U]}{\val{V}[\Delta_{\preceq U}]}
&& \text{$\Delta_{\preceq U} = \Delta_{\prec U} \cdot (U = \sigma Z.\Phi)$}
\\
&= \semT{\Phi [Z:= U]}{\val{V}[\Delta]}
&& \text{Corollary~\ref{cor:prefix-formula-semantics},}
\\
\multispan4{\hfil
$\Delta = \left( \Delta_{\preceq U} \right) \cdot \left( \Delta_{\succ U} \right)$,
no $U' \in \operatorname{dom}(\Delta_{\succ U})$ free in $\Phi [Z:= U]$}
\end{align*}
\qedhere \end{proof}
\paragraph{Sequents.}
We can now define the sequents used in the base proof system.
\begin{definition}[Sequents]\label{def:sequent} Let $\mathcal{T} = \lts{S}$ be an LTS over $\Sigma$ and $\val{V} \in \textnormal{Var}\xspace \to 2^\states{S}$ a valuation over $\textnormal{Var}\xspace$. Then a \emph{sequent over $\mathcal{T}$ and $\val{V}$} has form $$S \tnxTVD \Phi,$$ where $S \subseteq \states{S}$, $\Delta$ is a definition list, and $\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$ has the following properties. \begin{enumerate}
\item $\Phi$ is in positive normal form.
\item Every $U \in \operatorname{dom}(\Delta)$ is positive in $\Phi$.
\item No $U \in \operatorname{dom}(\Delta)$ is bound in $\Phi$. \end{enumerate} We use $\SeqTV$ for the set of sequents over $\mathcal{T}$ and $\val{V}$. If $\seq{s} = S \tnxTV{\Delta} \Phi$ is in $\SeqTV$, then we access the components of $\seq{s}$ as follows:
$\textit{st}(\seq{s}) = S$,
$\textit{dl}(\seq{s}) = \Delta$, and
$\textit{fm}(\seq{s}) = \Phi$. \end{definition}
The intended interpretation of sequent $S \tnxTVD \Phi$ is that it is valid iff every $s \in S$ satisfies $\Phi$, where $\val{V}$ is used to interpret free propositional variables $Z \not\in \operatorname{dom}(\Delta)$ that appear in $\Phi$ and $\Delta$ is used for definitional constants $U \in \operatorname{dom}(\Delta)$ that occur in $\Phi$. This notion is formalized as follows.
\begin{definition}[Sequent semantics and validity]\label{def:sequent-semantics}
Let $\seq{s} = S \tnxTVD \Phi$ be a sequent in $\SeqTV$.
\begin{enumerate}
\item
The \emph{semantics} of $\seq{s}$ is defined as $\semop{\seq{s}} = \semT{\Phi}{\val{V}[\Delta]}$.
\item
We define $\seq{s}$ to be \emph{valid} iff $S \subseteq \semop{\seq{s}}$.
\end{enumerate} \end{definition}
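For instance, when the definition list is empty we have $\semop{S \tnxTV{\varepsilon} \Phi} = \semT{\Phi}{\val{V}[\varepsilon]} = \semTV{\Phi}$, so $S \tnxTV{\varepsilon} \Phi$ is valid exactly when every state in $S$ satisfies $\Phi$ under $\val{V}$.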
Sequents of form $S \tnxTVD U$, where $U \in \operatorname{dom}(\Delta)$, play a prominent role in the rest of the paper. The following corollary about such sequents follows directly from our earlier results on definition lists and the semantics of sequents.
\begin{corollary}\label{cor:shared-prefix-sequent-semantics}
Let $\mathcal{T}$ be an LTS over $\Sigma$, $\Delta$, $\Delta'$ be definition lists, $U \in \operatorname{dom}(\Delta)$ such that $\Delta_{\preceq U} = \Delta'_{\preceq U}$, $\val{V}$ be a valuation, and $\seq{s} = S \tnxTVD U$ and $\seq{s}' = S' \tnxTV{\Delta'} U$ both be sequents. Then $\semop{\seq{s}} = \semop{\seq{s}'}$. \end{corollary}
\subsection{Proof rules and tableaux}
We now present the proof rules used in this paper for the mu-calculus formula sequents described in the previous section. These proof rules come from~\cite{BS1992,Bra1991} and resemble traditional natural-deduction-style proof rules. Following~\cite{BS1992,Bra1991}, however, we write the conclusion of the proof rule above the premises to emphasize that the conclusion is a ``goal'' and the premises are ``subgoals'' in a goal-directed proof-search strategy. In what follows we fix sort $\Sigma$, transition system $\mathcal{T} = \lts{S}$ over $\Sigma$, variable set $\textnormal{Var}\xspace$ and valuation $\val{V} \in \textnormal{Var}\xspace \rightarrow 2^{\states{S}}$.
\begin{definition}[Proof rule]\label{def:proof-rule}
A \emph{proof rule} has form
\[
\proofrule[ \mathit{name}\;\; ]
{\seq{s}}
{\seq{s}_1 \;\;\cdots\;\; \seq{s}_n}
\;\mathit{side\; condition}
\]
where $\mathit{name}$ is the name of the rule, $\seq{s}_1 \cdots \seq{s}_n \in (\SeqTV)^*$
is a sequence of sequents called the \emph{premises} of the rule,
$\seq{s} \in \SeqTV$ is the \emph{conclusion} of the rule,
and the optional $\mathit{side\; condition}$ is a property determining when the rule may be applied. \end{definition}
Figure~\ref{fig:proof-rules} gives the proof rules from~\cite{Bra1991}, lightly adapted to conform to our notational conventions. The rules are named for the top-level operators appearing in the formulas of the sequents to which they apply.
Rule $\sigma Z$ is actually short-hand for two rules -- one for each possible value, $\mu$ and $\nu$, of $\sigma$ -- and asserts that a definition involving fresh constant $U$ is added to the end of the definition list of the subgoal. The Thin rule is needed to ensure completeness of the proof system, and is not required for soundness.
\begin{figure}
\caption{Proof rules for mu-formula sequents.}
\label{fig:proof-rules}
\end{figure}
The side condition of Rule $\dia{K}$ refers to a function $f$ that is responsible for selecting, for each state $s$ in the goal sequent, a \emph{witness} state $s'$ such that $s \xrightarrow{K} s'$ and $s'$ is in the subgoal sequent. To apply this rule, the side condition requires the identification of a specific function $f$ that computes these witness states. We call the function associated in this manner with an application of $\dia{K}$ the \emph{witness function} of the application of the rule. We further define the set of \emph{rule applications} for a given LTS $\lts{S}$ over $\Sigma$ as follows. \[ \textnormal{RAppl} = \{\land, \lor, [K], \mu Z, \nu Z, \textnormal{Un}, \textnormal{Thin}\} \cup \{(\dia{K},f) \mid \text{$f$ is a witness function}\} \] Note that rule applications are either rule names, if the rule being applied is not $\dia{K}$, or pairs of form $(\dia{K}, f)$ where $f$ is the witness function used in applying the rule. If $a$ is a rule application then we write $\textit{rn}(a)$ for the rule name used in $a$. Formally, $\textit{rn}(a)$ is defined as follows. \[ \textit{rn}(a) = \begin{cases} \dia{K} & \text{if $a = (\dia{K},f)$}\\ a & \text{otherwise} \end{cases} \]
Intuitively, proofs are constructed as follows.
\footnotetext{Here $U$ is fresh if it has not been previously used anywhere in a proof currently being constructed using these proof rules.}
Suppose $S \tnxTV{\varepsilon} \Phi$ is a sequent we wish to prove. Based on the form of $\Phi$ we select a rule whose conclusion matches this sequent and apply it, generating the corresponding premises and witness function as required by the rule. We then recursively build proofs for these premises. The proof construction process terminates when the validity (or lack thereof) of the current sequent can be immediately established. The resulting proofs can be viewed as trees, which are called \emph{tableaux} in~\cite{BS1992,Bra1991}, and which we formalize as follows.
\begin{definition}[(Partial) tableaux]\label{def:tableau}
\begin{enumerate}
\item\label{subdef:partial-tableau}
A \emph{partial tableau} has form $\mathbb{T} = \tableauTrl$, where:
\begin{itemize}
\item
$\tree{T} = (\node{N}, \node{r}, p, cs) $ is a finite non-empty ordered tree (cf.\ Definition~\ref{def:ordered-tree}).
\item
Partial function $\rho \in \node{N} \to_{\perp} \textnormal{RAppl}$ satisfies: $\rho(\node{n}) = {\perp}$ iff $\node{n}$ is a leaf of $\tree{T}$.
\item
$\mathcal{T} = \lts{S}$ is a labeled transition system.
\item
$\val{V} \in \textnormal{Var}\xspace \to 2^{\states{S}}$ is a valuation.
\item
Function $\lambda \in \node{N} \rightarrow \SeqTV$, the \emph{sequent labeling}, satisfies: if $\rho(\node{n}) \in \textnormal{RAppl}$ then
$\lambda(\node{n})$ and $\lambda(cs(\node{n}))$\footnote{Recall that $cs(\node{n})$ is the sequence of the children of node $\node{n}$ in left-to-right order.} satisfy the form and side condition associated with $\rho(\node{n})$.
\end{itemize}
Elements of $\node{N}$ are sometimes called the \emph{proof nodes} of $\mathbb{T}$.
\item\label{subdef:complete-tableau}
Partial tableau $\tableauTrl$ is a \emph{complete tableau}, or simply a \emph{tableau},
iff
$\textit{dl}(\lambda(\node{r})) = \varepsilon$ (i.e.\/ the definition list in the root is empty), and
all leaves $\node{n}$ in $\tree{T}$ are \emph{terminal}, i.e.\/ satisfy one of the following.
\begin{enumerate}[ref=\theenumi(\alph*)]
\item \label{subdef:free-leaf}
$\textit{fm}(\lambda(\node{n})) = Z$ or $\textit{fm}(\lambda(\node{n})) = \lnot Z$ for some $Z \in \textnormal{Var}\xspace \setminus \operatorname{dom}(\textit{dl}(\lambda(\node{n})))$
(in this case $\node{n}$ is called a \emph{free leaf}); or
\item \label{subdef:diamond-leaf}
$\textit{fm}(\lambda(\node{n})) = \dia{K}\ldots$ and there is $s \in \textit{st}(\lambda(\node{n}))$ such that $s \centernot{\xrightarrow{K}}$
(in this case $\node{n}$ is called a \emph{diamond leaf}); or
\item \label{subdef:companion-leaf}
$\textit{fm}(\lambda(\node{n})) = U$ for some $U \in \operatorname{dom}(\textit{dl}(\lambda(\node{n})))$, and there is $\node{m} \in A_s(\node{n})$\footnote{Recall that $A_s(\node{n})$ is the set of strict ancestors of $\node{n}$.} such that
$\textit{fm}(\lambda(\node{m})) = U$ and $\textit{st}(\lambda(\node{n})) \subseteq \textit{st}(\lambda(\node{m}))$
(in this case $\node{n}$ is called a \emph{$\sigma$-leaf}).
\end{enumerate}
We may also refer to a $\sigma$-leaf as a $\mu$- / $\nu$-leaf if $\Delta(U) = \mu\ldots$ / $\Delta(U) = \nu\ldots$.
The deepest node $\node{m}$ making $\node{n}$ a $\sigma$-leaf is the \emph{companion node} of $\node{n}$, with $\node{n}$ then being a \emph{companion leaf} of $\node{m}$.
\end{enumerate} \end{definition}
A $\sigma$-leaf in a tableau has a unique companion node by Definition~\ref{def:tableau}(\ref{subdef:complete-tableau}), but a node may be a companion node for multiple (or no) companion leaves; all such companion leaves must be in the subtree rooted at the node, however. The definition of terminal leaf in~\cite{BS1992,Bra1991} also includes an extra case, $\textit{st}(\lambda(\node{n})) = \emptyset$, in addition to Conditions \ref{subdef:free-leaf}--\ref{subdef:companion-leaf} in our definition above. It turns out that this case is unnecessary, as the completeness results in Section~\ref{sec:Completeness} show.
In what follows we adopt the following notational shorthands for proof nodes in partial tableaux. \begin{notation}[Proof nodes] Let $\node{n}$ be a proof node in partial tableau $\tableauTrl$. \begin{itemize}
\item $\node{n} = S \tnxTVD \Phi$ means $\lambda(\node{n}) = S \tnxTVD \Phi$.
\item $\semop{\node{n}} = \semop{\lambda(\node{n})} = \semT{\textit{fm}(\lambda(\node{n}))}{\val{V}[\textit{dl}(\lambda(\node{n}))]}$.
\item $\node{n}$ is valid iff $\lambda(\node{n})$ is valid.
\item $\textit{st}(\node{n}) = \textit{st}(\lambda(\node{n}))$.
\item $\textit{dl}(\node{n}) = \textit{dl}(\lambda(\node{n}))$.
\item $\textit{fm}(\node{n}) = \textit{fm}(\lambda(\node{n}))$. \end{itemize} \end{notation}
Companion nodes and leaves feature prominently later, so we introduce the following notation and remark on a semantic property that they satisfy.
\begin{notation}[Companion nodes and leaves]
Let $\mathbb{T} = \tableauTrl$ be a partial tableau, with $\tree{T} = (\node{N}, \node{r}, p, cs)$.
\begin{enumerate}
\item
The set $\cnodes{\mathbb{T}} \subseteq \node{N}$ of companion nodes of $\mathbb{T}$ is given by:
$$
\cnodes{\mathbb{T}} = \{ \node{n} \in \node{N} \mid \rho(\node{n}) = \textnormal{Un}\}.
$$
\item
Let $\node{n} \in \cnodes{\mathbb{T}}$. Then the set $\cleaves{\mathbb{T}}(\node{n}) \subseteq D_s(\node{n})$ of companion leaves of $\node{n}$ is given as follows, where $\cnodes{\mathbb{T},\node{n}} = \cnodes{\mathbb{T}} \cap D_s(\node{n})$ are the companion nodes of $\mathbb{T}$ that are strict descendants of $\node{n}$.
\begin{align*}
&\cleaves{\mathbb{T}}(\node{n}) =
\\
&\{ \node{n}' \in D_s (\node{n})
\mid
c(\node{n}') = \emptyset
\land
\textit{fm}(\node{n}') = \textit{fm}(\node{n})
\land
\textit{st}(\node{n}') \subseteq \textit{st}(\node{n})
\}
\setminus
\bigcup_{\node{n}' \in \cnodes{\mathbb{T},\node{n}}} \cleaves{\mathbb{T}}(\node{n}')
\end{align*}
\end{enumerate} \end{notation}
Note that if $\node{n}$ is a companion node then it must be the case that $\textit{fm}(\node{n}) = U$ for some definitional constant $U$ defined in the definition list $\textit{dl}(\node{n})$ of $\node{n}$. Given a companion node $\node{n}$ in $\mathbb{T}$ its associated companion leaves, $\cleaves{\mathbb{T}}(\node{n})$, consist of nodes $\node{n}'$ that: \begin{enumerate*}[label=(\roman*)]
\item are leaves ($c(\node{n}') = \emptyset$);
\item have the same definitional constant as $\node{n}$ for their formula
($\textit{fm}(\node{n}') = \textit{fm}(\node{n})$);
\item have their state sets included in the state set of $\node{n}$
($\textit{st}(\node{n}') \subseteq \textit{st}(\node{n})$); and
\item are not companion leaves of any companion node that is a strict descendant of $\node{n}$
($\bigcup_{\node{n}' \in \cnodes{\mathbb{T},\node{n}}} \cleaves{\mathbb{T}}(\node{n}')$). \end{enumerate*}
\begin{lemma}[Semantics of companion nodes, leaves]\label{lem:semantic-invariance}
Let $\mathbb{T} = \tableauTrl$ be a partial tableau. Then for any proof nodes $\node{n}$ and $\node{n}'$ in $\mathbb{T}$ and definitional constant $U$ such that $\textit{fm}(\node{n}) = \textit{fm}(\node{n}') = U$, $\semop{\node{n}} = \semop{\node{n}'}$. \end{lemma} \begin{proof}
Follows from the fact that the side condition of Rule $\sigma Z$, which is the only rule that modifies definition lists, introduces a given definitional constant at most once in a partial tableau, and that the associated definition is added at the end of the definition list in the premise of the rule. Based on these observations, one can show that $\textit{dl}(\node{n})_{\preceq U} = \textit{dl}(\node{n}')_{\preceq U}$. Lemma~\ref{lem:invariance-definitional-constant} and the definition of $\semop{\node{n}}$ then give the desired result.
\qedhere \end{proof}
A tableau is a candidate proof; while the structure of a tableau ensures the correct application of proof rules, this alone does not guarantee that a tableau is a valid proof. The notion of \emph{successful tableau} fills this gap.
\begin{definition}[Successful leaf / tableau]\label{def:successful-tableau}
Let $\mathbb{T} = \tableauTrl$ be a tableau. Then leaf node $\node{n} = S \tnxTVD \Phi$ in $\mathbb{T}$ is \emph{successful} iff one of the following holds.
\begin{enumerate}
\item\label{def:successful-leaf-Z}
$\textit{fm}(\node{n}) = Z$, where $Z \in \textnormal{Var}\xspace \setminus \operatorname{dom}(\Delta)$, and $S \subseteq \val{V}(Z)$; or
\item\label{def:successful-leaf-notZ}
$\textit{fm}(\node{n}) = \lnot Z$, where $Z \in \textnormal{Var}\xspace \setminus \operatorname{dom}(\Delta)$, and $S \cap \val{V}(Z) = \emptyset$; or
\item\label{def:successful-leaf-nu}
$\node{n}$ is a $\nu$-leaf; or
\item \label{def:successful-tableau-mu}
$\node{n}$ is a $\mu$-leaf satisfying the condition given below in Definition~\ref{def:successful-mu-leaf}.
\end{enumerate}
A tableau is successful iff all its leaves are successful.
A tableau is \emph{partially successful} iff all its non-$\mu$-leaves are successful. \end{definition} If a tableau is partially successful, then establishing that it is (fully) successful only requires showing that each of its $\mu$-leaves is successful, as defined below. Similarly, a tableau that is not partially successful cannot be successful, regardless of the status of its $\mu$-leaves. In particular, no diamond leaf is successful, so any tableau containing a diamond leaf can only be unsuccessful.
The rest of this section is devoted to defining the success of $\mu$-leaves, which is more complicated than the other cases and spans several definitions. Intuitively, the success of a $\mu$-leaf depends on the well-foundedness of an extended \emph{dependency ordering} involving the companion node of the leaf. We define the dependency ordering in three stages. The definition of this (extended) dependency ordering essentially corresponds to Bradfield and Stirling's definition of (extended) paths from~\cite{BS1992,Bra1991}; our presentation is intended to simplify proofs. Details of the correspondence can be found in the appendix.
In what follows, fix partial tableau $\mathbb{T}= \tableauTrl$, with $\tree{T} = (\node{N}, r, p, cs)$. The \emph{local dependency ordering} captures the one-step dependencies between a state in a node in $\mathbb{T}$ and states in the node's children.
\begin{definition}[Local dependency ordering]\label{def:local_dependency_ordering}
Let $\node{n}, \node{n}' \in \node{N}$ be proof nodes in $\mathbb{T}$, with $\node{n}' \in c(\node{n})$ a child of $\node{n}$. Then $s' <_{\node{n}',\node{n}} s$ iff $s' \in \textit{st}(\node{n}')$, $s \in \textit{st}(\node{n})$, and one of the following hold:
\begin{enumerate}
\item
$\rho(\node{n}) = [K]$ and $s \xrightarrow{K} s'$; or
\item
$\rho(\node{n}) = (\dia{K},f)$ and $s' = f(s)$; or
\item
$\textit{rn}(\rho(\node{n})) \not\in \{ [K], \dia{K} \}$ and $s = s'$.
\end{enumerate}
Note that since $\node{n}' \in c(\node{n})$, $\node{n}$ is internal and $\rho(\node{n})$ is defined. \end{definition}
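To illustrate the role of the rule applications in this ordering: if $\rho(\node{n}) = [K]$ and $s \in \textit{st}(\node{n})$ has $K$-successors $s_1', s_2' \in \textit{st}(\node{n}')$, then both $s_1' <_{\node{n}',\node{n}} s$ and $s_2' <_{\node{n}',\node{n}} s$; if instead $\rho(\node{n}) = (\dia{K},f)$, then the only state below $s$ is the chosen witness, i.e.\ $s' <_{\node{n}',\node{n}} s$ iff $s' = f(s)$; and for all other rules, each state appearing in both $\node{n}$ and $\node{n}'$ is related to itself.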
In the second part of the definition of our dependency ordering we extend the local dependency ordering to capture dependencies between states in a node and states in descendants of the node.
\begin{definition}[Dependency ordering]\label{def:dependency_ordering}
Let $\node{n}, \node{n}'$ be proof nodes in $\mathbb{T}$. Then $s' \lessdot_{\node{n}',\node{n}} s$ iff $s' \in \textit{st}(\node{n}')$, $s \in \textit{st}(\node{n})$, and one of the following holds.
\begin{enumerate}
\item
$\node{n} = \node{n}'$ and $s = s'$, or
\item
There exist a proof node $\node{m}$ and a state $s'' \in \textit{st}(\node{m})$ with $s' \lessdot_{\node{n}',\node{m}} s''$ and $s'' <_{\node{m},\node{n}} s$.
\end{enumerate} \end{definition}
\noindent The definition of $\lessdot_{\node{n}',\node{n}}$ is inductive, and may be seen as analogous to the transitive and reflexive closure of $<_{\node{n}',\node{n}}$, modulo the node indices $\node{n}'$ and $\node{n}$ decorating $\lessdot_{\node{n}', \node{n}}$. It is easy to see that if $s' <_{\node{n}',\node{n}} s$ then $s' \lessdot_{\node{n}',\node{n}} s$, and that if $s' \lessdot_{\node{n}', \node{n}} s$ then $\node{n}' \in D(\node{n})$.\footnote{Recall that $D(\node{n})$ are descendants of $\node{n}$; see Section~\ref{subsec:trees}.}
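For example, if $\node{n}''$ is a child of $\node{n}$ and $\node{n}'$ is a child of $\node{n}''$, with $s' <_{\node{n}',\node{n}''} s''$ and $s'' <_{\node{n}'',\node{n}} s$, then starting from $s' \lessdot_{\node{n}',\node{n}'} s'$ and applying the second clause of Definition~\ref{def:dependency_ordering} twice yields first $s' \lessdot_{\node{n}',\node{n}''} s''$ and then $s' \lessdot_{\node{n}',\node{n}} s$; one-step dependencies thus compose along paths in the proof tree.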
In the third and final of our definitions, we allow \emph{cycling} through states in companion nodes that are descendants of a given node.
\begin{definition}[Extended dependency ordering]\label{def:extended_path_ordering}
The \emph{extended dependency ordering}, $<:_{\node{n}',\node{n}}$, and the \emph{companion-node ordering} $<:_{\node{m}}$, are defined mutually recursively as follows.
\begin{enumerate}
\item
Let $\node{m} \in \cnodes{\mathbb{T}}$ be a companion node, with $s, s' \in \textit{st}(\node{m})$. Then $s' <:_{\node{m}} s$ iff there is a companion leaf $\node{m}' \in \cleaves{\mathbb{T}}(\node{m})$ with $s' \in \textit{st}(\node{m}')$ and $s' <:_{\node{m}',\node{m}} s$.
\item
Let $\node{n}, \node{n}'$ be proof nodes in $\mathbb{T}$. Then $s' <:_{\node{n}',\node{n}} s$ iff $s' \in \textit{st}(\node{n}')$, $s \in \textit{st}(\node{n})$ and one of the following holds.
\begin{enumerate}
\item \label{def:extended_path_ordering-base}
$s' \lessdot_{\node{n}',\node{n}} s$; or
\item \label{def:extended_path_ordering-step}
there exists $\node{m} \in \cnodes{\mathbb{T}}$,
with $\node{m} \neq \node{n}$ and $\node{m} \neq \node{n}'$,
and $t, t' \in \textit{st}(\node{m})$, such that:
$s' <:_{\node{n}',\node{m}} t'$,
$t' <:_{\node{m}}^{+} t$\footnote{Recall that if $R$ is a binary relation then $R^+$ is the transitive closure of $R$.}, and
$t \lessdot_{\node{m},\node{n}} s$.
\end{enumerate}
\end{enumerate} \end{definition}
Intuitively, $s' <:_{\node{n}',\node{n}} s$ is intended to reflect a semantic dependency between state $s'$ in $\node{n}'$ and state $s$ in $\node{n}$. The first case indicates that this relation holds if applying a sequence of proof rules starting at $\node{n}$ leads to $\node{n}'$, with the rules $[K]$ and $\dia{K}$ inducing state changes in the dependency relation as the rules are applied. (Of course, the intuition involves soundness assumptions about the proof rules; these are proved later.) In the second case, the dependency chain can cycle through a companion node if there is a dependency chain from the companion node to one of its companion leaves (this is captured in the relation $<:_{\node{m}}$). Recall that the states in a companion leaf are also in the leaf's companion node; consequently, if there is a dependency involving a state in the companion node and another state in one of the node's companion leaves, the dependency can be extended with a dependency involving the second state, but starting from the companion node. This explains the appearance of the transitive-closure operation in this case.
The success criterion for $\mu$-leaves (which is really a condition on the companion nodes for these leaves) in a complete tableau can now be given as follows.
\begin{definition}[Successful $\mu$-leaf]\label{def:successful-mu-leaf}
Let $\node{n}'$ be a $\mu$-leaf in tableau $\mathbb{T}$ and let $\node{n}$ be the companion node of $\node{n}'$. Then $\node{n}'$ is successful if and only if $<:_{\node{n}}$ is well-founded. \end{definition}
Note that this definition implies that either all $\mu$-leaves having the same companion node are successful, or none are.
The remainder of this section is devoted to proving \emph{pseudo-transitivity} properties of the dependency relations $\lessdot_{\node{n}', \node{n}}$ and $<:_{\node{n}', \node{n}}$. Generally speaking, we would not expect these to be transitive, because e.g.\/ when $s' \lessdot_{\node{n}',\node{n}} s$ holds, $s'$ and $s$ may belong to different sets (namely, the sets of states in their respective proof nodes). However, if we allow the node labels on the relations to align properly, we do have a property that resembles transitivity.
\begin{lemma}[Pseudo-transitivity of $\lessdot_{\node{n}',\node{n}}$]\label{lem:pseudo-transitivity-of-dependency-ordering}
Let $\node{n}_1, \node{n}_2$ and $\node{n}_3$ be proof nodes in partial tableau $\mathbb{T}$, and assume $s_1, s_2$ and $s_3$ are such that $s_3 \lessdot_{\node{n}_3, \node{n}_2} s_2$ and $s_2 \lessdot_{\node{n}_2, \node{n}_1} s_1$.
Then $s_3 \lessdot_{\node{n}_3,\node{n}_1} s_1$. \end{lemma}
\remove{ \begin{proofsketch}
Assume $s_3 \lessdot_{\node{n}_3, \node{n}_2} s_2$. The result then follows by induction on the definition of $s_2 \lessdot_{\node{n}_2,\node{n}_1} s_1$. The detailed proof is included in the appendix. \qedhere \end{proofsketch} } \begin{proof}
Assume that $s_3 \lessdot_{\node{n}_3, \node{n}_2} s_2$.
The proof proceeds by induction on the definition of $s_2 \lessdot_{\node{n}_2,\node{n}_1} s_1$. There are two cases to consider.
\begin{itemize}
\item
$\node{n}_1 = \node{n}_2$ and $s_1 = s_2$.
In this case it immediately follows that $s_3 \lessdot_{\node{n}_3, \node{n}_1} s_1$.
\item
There exists $\node{m}$ and $s' \in \textit{st}(\node{m})$ such that $s_2 \lessdot_{\node{n}_2, \node{m}} s'$ and $s' <_{\node{m},\node{n}_1} s_1$.
In this case the induction hypothesis guarantees that $s_3 \lessdot_{\node{n}_3,\node{m}} s'$. Since $s' <_{\node{m},\node{n}_1} s_1$, Definition~\ref{def:dependency_ordering} guarantees that $s_3 \lessdot_{\node{n}_3, \node{n}_1} s_1$.
\qedhere
\end{itemize} \end{proof}
We establish a similar pseudo-transitivity property for $<:_{\node{n}', \node{n}}$. First, the following lemma considers a restricted case of the pseudo-transitivity property for $<:_{\node{n}', \node{n}}$, in which $s_2$ and $s_1$ are related by $\lessdot_{\node{n}_2, \node{n}_1}$ rather than $<:_{\node{n}_2, \node{n}_1}$.
\begin{lemma}\label{lem:front_extend_extended_path_ordering}
Let $\node{n}_1, \node{n}_2$ and $\node{n}_3$ be proof nodes in partial tableau $\mathbb{T}$, such that $s_3 <:_{\node{n}_3, \node{n}_2} s_2$ and $s_2 \lessdot_{\node{n}_2, \node{n}_1} s_1$.
Then $s_3 <:_{\node{n}_3,\node{n}_1} s_1$. \end{lemma} \begin{proof}
Assume that $s_2 \lessdot_{\node{n}_2, \node{n}_1} s_1$. The proof proceeds by induction on the definition of $s_3 <:_{\node{n}_3, \node{n}_2} s_2$. There are two cases to consider.
\begin{itemize}
\item
$s_3 \lessdot_{\node{n}_3,\node{n}_2} s_2$.
From the pseudo-transitivity of $\lessdot_{\node{n}',\node{n}}$ (Lemma~\ref{lem:pseudo-transitivity-of-dependency-ordering}) it follows that $s_3 \lessdot_{\node{n}_3, \node{n}_1} s_1$,
and thus
$s_3 <:_{\node{n}_3,\node{n}_1} s_1$.
\item
There exists companion node $\node{m}$,
with
$\node{m} \neq \node{n}_2$ and $\node{m} \neq \node{n}_3$,
and $t, t' \in \textit{st}(\node{m})$
such that
$s_3 <:_{\node{n}_3, \node{m}}t'$,
$t' <:^+_{\node{m}} t$ and
$t \lessdot_{\node{m}, \node{n}_2} s_2$.
Lemma~\ref{lem:pseudo-transitivity-of-dependency-ordering} ensures that $t \lessdot_{\node{m}, \node{n}_1} s_1$,
and the definition of $<:_{\node{n}_3, \node{n}_1}$ confirms $s_3 <:_{\node{n}_3, \node{n}_1} s_1$.
\qedhere
\end{itemize} \end{proof} \noindent We can now prove the pseudo-transitivity of $<:_{\node{n}', \node{n}}$.
\begin{lemma}[Pseudo-transitivity of $<:_{\node{n}', \node{n}}$]\label{lem:join_extended_path_ordering} Let $\node{n}_1, \node{n}_2$ and $\node{n}_3$ be proof nodes in partial tableau $\mathbb{T}$, and assume $s_3 <:_{\node{n}_3, \node{n}_2} s_2$ and $s_2 <:_{\node{n}_2, \node{n}_1} s_1$. Then $s_3 <:_{\node{n}_3,\node{n}_1} s_1$. \end{lemma} \remove{ \begin{proofsketch}
Assume that $s_3 <:_{\node{n}_3,\node{n}_2} s_2$. The result then follows by induction on the definition of $s_2 <:_{\node{n}_2,\node{n}_1} s_1$. The full proof is included in the appendix.\qedhere \end{proofsketch} } \begin{proof}
Assume that $s_3 <:_{\node{n}_3,\node{n}_2} s_2$. The proof proceeds by induction on the definition of $s_2 <:_{\node{n}_2,\node{n}_1} s_1$. There are two cases to consider.
\begin{itemize}
\item
$s_2 \lessdot_{\node{n}_2, \node{n}_1} s_1$.
Since $s_3 <:_{\node{n}_3, \node{n}_2} s_2$, the result follows immediately from Lemma~\ref{lem:front_extend_extended_path_ordering}.
\item
There exists companion node $\node{m}$ in $\mathbb{T}$,
with
$\node{m} \neq \node{n}_1$ and $\node{m} \neq \node{n}_2$,
and $t, t' \in \textit{st}(\node{m})$,
such that
$s_2 <:_{\node{n}_2, \node{m}} t'$,
$t' <:^+_{\node{m}} t$ and
$t \lessdot_{\node{m}, \node{n}_1} s_1$.
Since $s_2 <:_{\node{n}_2, \node{m}} t'$, the induction hypothesis guarantees that $s_3 <:_{\node{n}_3, \node{m}} t'$, and
the definition of $<:_{\node{n}_3, \node{n}_1}$
then establishes that $s_3 <:_{\node{n}_3, \node{n}_1} s_1$.
\qedhere
\end{itemize} \end{proof}
We now formalize a semantic property, which we call \emph{semantic sufficiency}, enjoyed by the local dependency relation $<_{\node{n}', \node{n}}$ for internal node $\node{n}$. This property asserts that, for a given $s \in \textit{st}(\node{n})$, if $s'$ belongs to the semantics of $\node{n}'$ for every $s'$ and $\node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, then $s$ belongs to the semantics of $\node{n}$.
\begin{lemma}[Semantic sufficiency of $<_{\node{n}', \node{n}}$] \label{lem:semantic-sufficiency-of-<}
Let $\node{n}$ be an internal proof node in partial tableau $\mathbb{T}$, and let $s \in \textit{st}(\node{n})$ be such that for all $s'$ and $\node{n}'$ with $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$. Then $s \in \semop{\node{n}}$. \end{lemma}
\remove{ \begin{proofsketch} Let $\node{n}$ be an internal node in $\mathbb{T} = \tableauTrl$. Since $\node{n}$ is internal $\rho(\node{n})$ is defined. Now assume $s \in \textit{st}(\node{n})$; the proof proceeds by a case analysis on $\rho(\node{n})$. A full proof can be found in the appendix. \qedhere \end{proofsketch} } \begin{proof}
Let $\node{n} = S \tnxTVD \Phi$ be an internal node in $\mathbb{T} = \tableauTrl$. Since $\node{n}$ is internal, $\rho(\node{n})$ is defined. Now fix $s \in S$; we proceed by a case analysis on $\rho(\node{n})$.
\begin{description}
\item[$\rho(\node{n}) = \land$.]
In this case $\Phi = \Phi_1 \land \Phi_2$, and $cs(\node{n}) = \node{n}_1\node{n}_2$, where $\node{n}_1 = S \tnxTVD \Phi_1$ and $\node{n}_2 = S \tnxTVD \Phi_2$. By definition $s' <_{\node{n}', \node{n}} s$ iff $s' = s$ and $\node{n}' = \node{n}_1$ or $\node{n}' = \node{n}_2$. We reason as follows.
\begin{flalign*}
&\text{For all $s', \node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
&\text{iff}\;\;\; s \in \semop{\node{n}_1} \;\text{and}\; s \in \semop{\node{n}_2}
&& \text{Definition of $<_{\node{n}', \node{n}}$ when $\rho(\node{n}) = \land$}
\\
&\text{iff}\;\;\; s \in \semT{\Phi_1}{\val{V}[\Delta]} \;\text{and}\; s \in \semT{\Phi_2}{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}_i}$, $i=1,2$}
\\
& \text{iff}\;\;\; s \in \semT{\Phi_1 \land \Phi_2}{\val{V}[\Delta]}
&& \text{Definition of $\semT{\Phi_1 \land \Phi_2}{\val{V}[\Delta]}$}
\\
& \text{iff}\;\;\; s \in \semT{\Phi}{\val{V}[\Delta]}
&& \text{$\Phi = \Phi_1 \land \Phi_2$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\item[$\rho(\node{n}) = \lor$.]
In this case $\Phi = \Phi_1 \lor \Phi_2$, and $cs(\node{n}) = \node{n}_1\node{n}_2$, where $\node{n}_1 = S_1 \tnxTVD \Phi_1$, $\node{n}_2 = S_2 \tnxTVD \Phi_2$ and $S = S_1 \cup S_2$. By definition $s' <_{\node{n}', \node{n}} s$ iff $s' = s$ and either $\node{n}' = \node{n}_1$, provided $s \in S_1$, or $\node{n}' = \node{n}_2$, provided $s \in S_2$. Since either $s \in S_1$ or $s \in S_2$, it follows that $s <_{\node{n}_1, \node{n}} s$ or $s <_{\node{n}_2, \node{n}} s$ (or both, if $s \in S_1 \cap S_2$). We reason as follows.
\begin{flalign*}
&\text{For all $s', \node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
&\text{implies}\;\;\; s \in \semop{\node{n}_1} \;\text{or}\; s \in \semop{\node{n}_2}
&& \text{Definition of $<_{\node{n}', \node{n}}$ when $\rho(\node{n}) = \lor$}
\\
&\text{iff}\;\;\; s \in \semT{\Phi_1}{\val{V}[\Delta]} \;\text{or}\; s \in \semT{\Phi_2}{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}_i}$, $i=1,2$}
\\
& \text{iff}\;\;\; s \in \semT{\Phi_1 \lor \Phi_2}{\val{V}[\Delta]}
&& \text{Definition of $\semT{\Phi_1 \lor \Phi_2}{\val{V}[\Delta]}$}
\\
& \text{iff}\;\;\; s \in \semT{\Phi}{\val{V}[\Delta]}
&& \text{$\Phi = \Phi_1 \lor \Phi_2$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\item[$\rho(\node{n}) = {[K]}$.]
In this case $\Phi = [K] \Phi'$, and $cs(\node{n}) = \node{n}''$, where $\node{n}'' = S'' \tnxTVD \Phi'$ and $S'' = \{s'' \in \states{S} \mid \exists s \in S. s \xrightarrow{K} s'' \}$. By definition $s' <_{\node{n}', \node{n}} s$ iff $\node{n}' = \node{n}''$ and $s \xrightarrow{K} s'$. We reason as follows.
\begin{flalign*}
&\text{For all $s', \node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
&\text{iff}\;\;\; \forall s'. s \xrightarrow{K} s' \implies s' \in \semop{\node{n}''}
&& \text{Definition of $<_{\node{n}', \node{n}}$ when $\rho(\node{n}) = [K]$}
\\
& \text{iff}\;\;\; \forall s'. s \xrightarrow{K} s' \implies s' \in \semT{\Phi'}{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}''}$}
\\
& \text{iff}\;\;\; s \in \semT{[K] \Phi'}{\val{V}[\Delta]}
&& \text{Definition of $\semT{[K] \Phi'}{\val{V}[\Delta]}$}
\\
& \text{iff}\;\;\; s \in \semT{\Phi}{\val{V}[\Delta]}
&& \text{$\Phi = [K]\Phi'$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\item[$\rho(\node{n}) = (\dia{K},f)$.]
In this case $\Phi = \dia{K} \Phi'$, and $cs(\node{n}) = \node{n}''$, where $\node{n}'' = S'' \tnxTVD \Phi'$ and witness function $f \in S \rightarrow \states{S}$ is such that $S'' = f(S)$ and for all $s \in S, s \xrightarrow{K} f(s)$. By definition $s' <_{\node{n}', \node{n}} s$ iff $\node{n}' = \node{n}''$ and $s' = f(s)$. We reason as follows.
\begin{flalign*}
&\text{For all $s', \node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
&\text{iff}\;\;\; f(s) \in \semop{\node{n}''}
&& \text{Definition of $<_{\node{n}', \node{n}}$ when $\rho(\node{n}) = \dia{K}$}
\\
& \text{iff}\;\;\; s \xrightarrow{K} f(s) \;\text{and}\; f(s) \in \semT{\Phi'}{\val{V}[\Delta]}
&& \text{Property of $f$, definition of $\semop{\node{n}''}$}
\\
& \text{implies}\;\;\; s \in \semT{\dia{K}\Phi'}{\val{V}[\Delta]}
&& \text{Definition of $\semT{\dia{K}\Phi'}{\val{V}[\Delta]}$}
\\
& \text{iff}\;\;\; s \in \semT{\Phi}{\val{V}[\Delta]}
&& \text{$\Phi = \dia{K}\Phi'$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\item[$\rho(\node{n}) = \sigma Z$.]
In this case $\Phi = \sigma Z. \Phi'$, and $cs(\node{n}) = \node{n}''$, where $\node{n}'' = S \tnxTV{\Delta'} U$, $U$ is fresh, and $\Delta' = \Delta \cdot (U = \sigma Z. \Phi')$. By definition $s' <_{\node{n}', \node{n}} s$ iff $\node{n}' = \node{n}''$ and $s = s'$. We reason as follows.
\begin{flalign*}
&\text{For all $s', \node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
& \text{iff}\;\;\; s \in \semop{\node{n}''}
&& \text{Definition of $<_{\node{n}', \node{n}}$ when $\rho(\node{n}) = \sigma Z$}
\\
& \text{iff}\;\;\; s \in \semT{U}{\val{V}[\Delta']}
&& \text{Definition of $\semop{\node{n}''}$}
\\
& \text{iff}\;\;\; s \in \semTV{ U [ \Delta'] }
&& \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
& \text{iff}\;\;\; s \in \semTV{ U [ \Delta \cdot (U = \sigma Z . \Phi') ] }
&& \text{$\Delta' = \Delta \cdot (U = \sigma Z . \Phi') $}
\\
& \text{iff}\;\;\; s \in \semTV{ \left(U [U := \sigma Z . \Phi'] \right) [ \Delta ] }
&& \text{Definition of $U [ \Delta \cdot (U = \sigma Z . \Phi') ]$}
\\
& \text{iff}\;\;\; s \in \semTV{\left( \sigma Z . \Phi'\right) [\Delta]}
\\
\multispan4{\hfil Definition of substitution, application of $\rho(\node{n}) = \sigma Z$ ensures $U$ fresh}
\\
& \text{iff}\;\;\; s \in \semT{\sigma Z . \Phi'}{\val{V}[\Delta]}
&& \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
& \text{iff}\;\;\; s \in \semTV{\Phi}
&& \text{$\Phi =\sigma Z . \Phi'$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\item[$\rho(\node{n}) = \textnormal{Un}$.]
In this case $\Phi = U$, with $\Delta(U) = \sigma Z. \Phi'$, and $cs(\node{n}) = \node{n}''$, where $\node{n}'' = S \tnxTVD \Phi'[ Z:=U ]$. By definition $s' <_{\node{n}', \node{n}} s$ iff $\node{n}' = \node{n}''$ and $s = s'$. We reason as follows.
\begin{flalign*}
&\text{For all $s', \node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
& \text{iff}\;\;\; s \in \semop{\node{n}''}
&& \text{Definition of $<_{\node{n}', \node{n}}$ when $\rho(\node{n}) = \text{Un}$}
\\
& \text{iff}\;\;\; s \in \semT{ \Phi' [Z:=U]}{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}''}$}
\\
& \text{iff}\;\;\; s \in \semT{U}{\val{V}[\Delta]}
&& \text{Lemma~\ref{lem:constant-unfolding}}
\\
& \text{iff}\;\;\; s \in \semT{\Phi}{\val{V}[\Delta]}
&& \text{$\Phi =U$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\item[$\rho(\node{n}) = \textnormal{Thin}$.]
In this case $cs(\node{n}) = \node{n}''$, where $\node{n}'' = S' \tnxTVD \Phi$ and $S \subseteq S'$. By definition $s' <_{\node{n}', \node{n}} s$ iff $\node{n}' = \node{n}''$ and $s = s'$. We prove the following bi-implication, which implies the desired result.
\begin{flalign*}
&\text{For all $s', \node{n}'$ such that $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
& \text{iff}\;\;\; s \in \semop{\node{n}''}
&& \text{Definition of $<_{\node{n}', \node{n}}$ when $\rho(\node{n}) = \text{Thin}$}
\\
& \text{iff}\;\;\; s \in \semT{ \Phi }{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}''}$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\qedhere
\end{description} \end{proof}
\section{Soundness using support orderings}\label{sec:Soundness-via-support-orderings}
In this section we prove soundness of the proof system in the previous section by showing that for any successful tableau whose root is labeled by sequent $\seq{s} = S \tnxTV{\varepsilon} \Phi$, $\seq{s}$ must be valid. Our proof relies on establishing that the transitive closure, $<:_{\node{m}}^+$, of the companion-node dependency relation (Definition~\ref{def:extended_path_ordering}) used to define the success of $\mu$-leaves, is a $\sigma$-compatible support ordering for the semantic functions associated with fixpoint nodes $\node{m}$. Our reliance on support orderings for soundness stands in contrast to Bradfield's and Stirling's soundness proof for essentially this proof system~\cite{BS1992,Bra1991}, which relies on infinitary logic and, in particular, the introduction of (infinite) ordinal unfoldings of fixpoint formulas. Since our ultimate goal in this paper is to reason about timed extensions of the modal mu-calculus, we have opted for a different proof strategy that is based on semantic rather than syntactic reasoning. We also note that our use of support orderings is likely to enable the study of other success criteria besides $<:_{\node{m}}$, which is especially interesting for adaptations of this proof system to other settings, such as ones in which formulas are defined equationally or in which less aggressive use is made of formula unfolding than is the case here.
Our proof of soundness proceeds in four steps. \begin{enumerate}
\item
We show (Section~\ref{subsec:local-soundness}) that tableau rules are \emph{locally sound}, i.e., that when the child nodes of a proof node are valid, then the node itself is also valid.
\item
We wish to be able to reason using tree induction about the meanings of proof nodes and the formulas in those nodes. This reasoning is sometimes impeded because, due to unfolding, many nodes have the same formulas in them. To address this problem, we show how syntactically distinct \emph{node formulas} can be constructed in a semantics-preserving fashion for nodes in a proof tree based on the structure of the proof tree. This material is in Section~\ref{subsec:node-formulas}.
\item
We then prove that for companion nodes $\node{m}$ in a tableau, the ordering $<:_{\node{m}}^+$ is a support ordering for the semantic function associated with its node formula. This is done in Section~\ref{subsec:support-ordering-proof}.
\item
Finally, in Section~\ref{subsec:soundness} we combine the previous three results to obtain soundness for the proof system: if there is a successful tableau whose root $\seq{s}$ is such that $\textit{dl}(\seq{s}) = \varepsilon$ then $\seq{s}$ is valid. \end{enumerate}
\subsection{Local soundness}\label{subsec:local-soundness}
We call a proof system like ours \emph{locally sound} if for every internal node $\node{n}$ in any partial tableau, the validity of all the children of $\node{n}$ implies the validity of $\node{n}$. This may be proven as follows.
\begin{lemma}[Local soundness]\label{lem:local-soundness}
Let $\node{n}$ be an internal proof node in partial tableau $\mathbb{T}$. Then $\node{n}$ is valid if all its children are valid. \end{lemma}
\begin{proof}
Let $\mathbb{T} = \tableauTrl$, with $\tree{T} = (\node{N}, \node{r}, p, cs)$, be a partial tableau with internal node $\node{n}$, and assume that for each $\node{n}' \in c(\node{n})$, node $\node{n}'$ is valid.
To establish that $\node{n}$ is valid, we must show that $\textit{st}(\node{n}) \subseteq \semop{\node{n}}$. To do so, we fix $s \in \textit{st}(\node{n})$ and show that $s \in \semop{\node{n}}$. In support of this, consider arbitrary $s', \node{n}'$ such that $s' <_{\node{n}',\node{n}} s$ in $\mathbb{T}$. By definition of $<_{\node{n}',\node{n}}$ it follows that $\node{n}' \in c(\node{n})$ and that $s' \in \textit{st}(\node{n}')$. Moreover, since $\node{n}'$ is valid it follows that $s' \in \semop{\node{n}'}$; since this holds for all such $s'$ and $\node{n}'$, Lemma~\ref{lem:semantic-sufficiency-of-<} ensures that $s \in \semop{\node{n}}$.
\qedhere \end{proof}
\subsection{Node formulas}\label{subsec:node-formulas}
We now show how to associate a formula $P(\node{n})$ with every node $\node{n}$ in a tableau so that the structure of $P(\node{n})$ is based on the structure of the subtableau rooted at $\node{n}$ and $P(\node{n})$ is disentangled from the definition list of $\node{n}$. We then show that these formulas are semantically equivalent to the formulas embedded in the nodes' sequents in a precise sense. Since our definition is inductive on the structure of the tableau rooted at $\node{n}$, this facilitates proofs over the semantics of formulas using tree induction.
In the remainder of this section we fix sort $\Sigma$, labeled transition system $\mathcal{T} = \lts{S}$ over $\Sigma$, valuation $\val{V} \in \textnormal{Var}\xspace \to 2^{\states{S}}$, and tableau $\mathbb{T} = \tableauTrl$, with $\tree{T} = (\node{N}, \node{r}, p, cs)$. We also recall the definition of $\cnodes{\mathbb{T}}$ --- the companion nodes of $\mathbb{T}$ --- and fix the definitions of the following sets.
\begin{align*} \mathbb{U} &= \bigcup_{\node{n} \in \node{N}} \operatorname{dom}(\textit{dl}(\node{n})) \\ \cnodes{\tree{T}'} &= \node{N}' \cap \cnodes{\mathbb{T}} \text{ for subtree $\tree{T}' = (\node{N}', \ldots)$ of $\tree{T}$} \\ \cnodes{\tree{T}'}(U) &= \{ \node{n} \in \cnodes{\tree{T}'} \mid \textit{fm}(\node{n}) = U \} \text{ for subtree $\tree{T}' = (\node{N}', \ldots)$ of $\tree{T}$ and $U \in \mathbb{U}$} \end{align*}
Set $\mathbb{U}$ contains all the definitional constants appearing in $\mathbb{T}$, while $\cnodes{\tree{T}'}$ contains the companion nodes of $\mathbb{T}$ in subtree $\tree{T}'$ of $\tree{T}$. Set $\cnodes{\tree{T}'}(U)$ consists of the companion nodes of $\mathbb{T}$ in subtree $\tree{T}'$ whose formula is $U \in \mathbb{U}$. For subtree $\tree{T}_{\node{n}}$ rooted at node $\node{n}$, note that $\companions{\tree{T}_{\node{n}}} \subseteq \cnodes{\tree{T}} = \cnodes{\mathbb{T}}$. Also, if $\rho(\node{n}) = \text{Un}$ with $cs(\node{n}) = \node{n}'$, then $\companions{\tree{T}_{\node{n}}} = \companions{\tree{T}_{\node{n}'}} \cup \{ \node{n} \}$. Note that Corollary~\ref{cor:shared-prefix-sequent-semantics} and the definition of $\semop{\node{n}}$ guarantee that for all $U \in \mathbb{U}$ and $\node{n}, \node{n}' \in \cnodes{\mathbb{T}}(U)$, $\semop{\node{n}} = \semop{\node{n}'}$; we write $\sem{U}{}{\mathbb{T}}$ for this common value associated with $U$. We now define $P(-)$ as follows.
\begin{definition}[Node formulas]~\label{def:node-formulas} For each companion node $\node{m} \in \cnodes{\mathbb{T}}$ let $Z_\node{m}$ be a unique fresh variable, with $\textnormal{Var}\xspace_{\mathbb{T}} = \{ Z_{\node{m}} \mid \node{m} \in \cnodes{\mathbb{T}} \}$ the set of all such variables. Then for $\node{n} \in \node{N}$ formula $P(\node{n})$ is defined inductively as follows. \begin{enumerate} \item\label{subdef:node-formulas-free-leaf}
If $\node{n}$ is a free leaf (cf.\/ Definition~\ref{def:tableau}(\ref{subdef:free-leaf}))
then $P(\node{n}) = \textit{fm}(\node{n})$. \item
If $\node{n}$ is a $\dia{K}$-leaf
then $P(\node{n}) = (\textit{fm}(\node{n}))[\Delta]$. \item
If $\node{n}$ is a $\sigma$-leaf with companion node $\node{m}$
then $P(\node{n}) = Z_\node{m}$. \item
If $\rho(\node{n}) = \land$ and $cs(\node{n}) = \node{n}_1\node{n}_2$
then $P(\node{n}) = P(\node{n}_1) \land P(\node{n}_2)$. \item
If $\rho(\node{n}) = \lor$ and $cs(\node{n}) = \node{n}_1\node{n}_2$
then $P(\node{n}) = P(\node{n}_1) \lor P(\node{n}_2)$. \item
If $\rho(\node{n}) = [K]$ and $cs(\node{n}) = \node{n}'$
then $P(\node{n}) = [K] (P(\node{n}'))$. \item
If $\rho(\node{n}) = (\dia{K},f)$ and $cs(\node{n}) = \node{n}'$
then $P(\node{n}) = \dia{K} (P(\node{n}'))$. \item
If $\rho(\node{n}) = \sigma Z$ and $cs(\node{n}) = \node{n}'$
then $P(\node{n}) = P(\node{n}')$. \item
If $\rho(\node{n}) = \textnormal{Thin}$ and $cs(\node{n}) = \node{n}'$
then $P(\node{n}) = P(\node{n}')$. \item\label{subdef:node-formulas-un}
If $\rho(\node{n}) = \textnormal{Un}$, $\node{n} = S \tnxTVD U$, $\Delta(U) = \sigma Z.\Phi$ and $cs(\node{n}) = \node{n}'$
then $P(\node{n}) = \sigma Z_\node{n}. \left( P(\node{n}') \right)$.
\end{enumerate} \end{definition} When $\node{n}$ is a free leaf (case~\ref{subdef:node-formulas-free-leaf}) $\textit{fm}(\node{n})$ contains no definitional constants, and thus $(\textit{fm}(\node{n}))[\Delta] = \textit{fm}(\node{n}) = P(\node{n})$. Also, when $\rho(\node{n}) = \text{Un}$ (case~\ref{subdef:node-formulas-un}), $\node{n} \in \cnodes{\mathbb{T}}$ is a companion node in $\mathbb{T}$. Thus $P$ associates a syntactically distinct formula to each companion node in $\mathbb{T}$.
Intuitively, $P(\node{n})$ can be seen as the formula whose ``parse tree'' is the sub-tableau of $\mathbb{T}$ rooted at $\node{n}$. The construction works bottom-up from the leaves that are descendants of $\node{n}$, using the proof rule labeling each internal node to recursively construct formulas from those associated with the node's children. Each companion node is converted into a $\sigma$-formula, with a freshly generated bound variable that $P(-)$ ensures is assigned to each companion leaf of the companion node. It is easy to see that $P(-)$ contains no instances of any $U \in \mathbb{U}$. Moreover, $P(-)$ is an \emph{inductively generated} node function over $\tree{T}$, in the sense of Definition~\ref{def:node-function}(\ref{def:node-function-inductively-generated}). In particular, the function $g \in \node{N} \times (\muforms^{\Sigma}_{\textnormal{Var}\xspace})^* \to \muforms^{\Sigma}_{\textnormal{Var}\xspace}$ used to generate $P(-)$ is defined based on the inductive case associated with $\node{n}$ in Definition~\ref{def:node-formulas}. For example, $g(\node{n}, \Phi_1\Phi_2) = \Phi_1 \lor \Phi_2$ if $\rho(\node{n}) = \lor$.
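To make the construction concrete, suppose (purely for illustration) that $\node{n} = S \tnxTVD U$ is a companion node with $\rho(\node{n}) = \textnormal{Un}$ and $\Delta(U) = \nu Z. [K] Z$, that its child is $\node{n}' = S \tnxTVD [K] U$ with $\rho(\node{n}') = [K]$, and that the child $\node{n}''$ of $\node{n}'$ is a $\sigma$-leaf whose companion node is $\node{n}$ (so in particular $\textit{st}(\node{n}'') \subseteq S$). Then $P(\node{n}'') = Z_{\node{n}}$, $P(\node{n}') = [K] Z_{\node{n}}$, and $P(\node{n}) = \nu Z_{\node{n}}. [K] Z_{\node{n}}$; that is, $P(\node{n})$ is just $\Delta(U)$ with its bound variable renamed to the fresh variable $Z_{\node{n}}$.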
We now turn to establishing a semantic equivalence between $\semT{P(\node{n})}{\val{V}'}$ for certain $\val{V}'$ and $\semop{\node{n}}$ in $\mathbb{T}$ by first defining the following notion of \emph{valuation consistency with $\mathbb{T}$}.
\begin{definition}[Valuation consistency]\label{def:consistency}
Let $\textnormal{Var}\xspace_{\mathbb{T}}$ be as given in Definition~\ref{def:node-formulas}, and let $\val{V}$ be the valuation in tableau $\mathbb{T}$.
Then
valuation $\val{V}'$ is \emph{consistent with tableau $\mathbb{T}$} iff
\begin{itemize}
\item
for every $U \in \mathbb{U}$ and $\node{m} \in \companions{\tree{T}}(U)$, $\val{V}'(Z_{\node{m}}) = \sem{U}{}{\mathbb{T}}$, and
\item
for every variable $X \in \textnormal{Var}\xspace \setminus \textnormal{Var}\xspace_{\mathbb{T}}$, $\val{V}'(X) = \val{V}(X)$.
\end{itemize} \end{definition}
\noindent Intuitively, $\val{V}'$ is consistent with $\mathbb{T}$ iff it assigns the semantics of the associated definitional constant to every fresh variable used in the definition of $P(-)$, and to all other variables it assigns the same value as valuation $\val{V}$ in $\mathbb{T}$. The following result is immediate from the definitions.
\begin{lemma}\label{lem:consistency-property} Let $\val{V}'$ be a valuation consistent with $\mathbb{T}$. Then for all $\node{n} \in \node{N}$ such that $\textit{fm}(\node{n}) = \Phi$ and $\textit{dl}(\node{n}) = \Delta$, $ \semop{\node{n}} = \semT{\Phi[\Delta]}{\val{V}'}. $ \end{lemma} \begin{proof} Suppose $\node{n} = S \tnxTVD \Phi$; we must show that $\semop{\node{n}} = \semT{\Phi[\Delta]}{\val{V}'}$. By definition, $\semop{\node{n}} = \semT{\Phi}{\val{V}[\Delta]}$. Lemma~\ref{lem:definition-list-correspondence} then guarantees that $\semT{\Phi}{\val{V}[\Delta]} = \semTV{\Phi[\Delta]}$, and as $\val{V}$ and $\val{V}'$ can only disagree on the variables in $\textnormal{Var}\xspace_{\mathbb{T}}$, none of which occurs free in $\Phi[\Delta]$, we have that $\semop{\node{n}} = \semT{\Phi[\Delta]}{\val{V}'}$. \qedhere \end{proof}
We now prove that $\semop{\node{n}} = \semT{P(\node{n})}{\val{V}'}$ for proof node $\node{n}$ in $\mathbb{T}$ and $\val{V}'$ consistent with $\mathbb{T}$. This fact establishes that $P(\node{n})$ summarizes all relevant information about the semantics of $\node{n}$, modulo the connection made by $\val{V}'$ between definitional constants in $\node{n}$ and the associated free variables introduced by $P(-)$. The proof is split across two lemmas; we first consider the special case when $\node{n} \in \cnodes{\mathbb{T}}$ is a companion node, and then use this result to prove the general case.
\begin{lemma}[Companion-node formulas and semantics]\label{lem:companion-node-formulas-and-semantics} Let $\val{V}'$ be a consistent valuation for tableau $\mathbb{T}$. Then for every $\node{m} \in \cnodes{\mathbb{T}}$, $\semT{P(\node{m})}{\val{V}'} = \semop{\node{m}}$. \end{lemma} \remove{ \begin{proofsketch} For any valuation $\val{V}'$ we call a syntactic transformation of $\Phi$ to $\Gamma$ \emph{semantics-preserving} for $\val{V}'$ iff $\semT{\Phi}{\val{V}'} = \semT{\Gamma}{\val{V}'}$. Now let $\val{V}'$ be consistent with $\mathbb{T}$. We actually prove the following stronger result:
\begin{quote}
for any $\node{m} \in \cnodes{\mathbb{T}}$ with $\node{m} = S \tnxTVD U$ and $\Delta(U) = \sigma Z.\Phi$, there is a semantics-preserving transformation of $P(\node{m})$ to $(\sigma Z.\Phi)[\Delta]$ for $\val{V}'$. \end{quote}
The following reasoning then gives the desired result. \begin{flalign*} &\semT{P(\node{m})}{\val{V}'} \\ &{=}\; \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}'} && \text{Semantics-preserving transformation for $\val{V}'$} \\ &{=}\; \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}} && \text{$\val{V}'$ consistent for $\mathbb{T}$, no $Z' \in \textnormal{Var}\xspace_{\mathbb{T}}$ free in $(\sigma Z.\Phi)[\Delta]$} \\ &{=}\; \semT{\sigma Z.\Phi}{\val{V}[\Delta]} && \text{Lemma~\ref{lem:definition-list-correspondence}} \\ &{=}\; \semT{U}{\val{V}[\Delta]} && \text{$\Delta(U) = \sigma Z.\Phi$, Lemma~\ref{lem:U-semantic-correspondence}} \\ &{=}\; \semop{\node{m}} && \text{Definition of $\semop{\node{m}}$} \end{flalign*} The detailed proof, containing the precise semantics-preserving transformation, is included in the appendix. \qedhere \end{proofsketch} }
\begin{proof} For any valuation $\val{V}'$ we call a syntactic transformation of $\Phi$ to $\Gamma$ \emph{semantics-preserving} for $\val{V}'$ iff $\semT{\Phi}{\val{V}'} = \semT{\Gamma}{\val{V}'}$. Now let $\val{V}'$ be consistent with $\mathbb{T}$. We actually prove the following stronger result:
\begin{quote}
for any $\node{m} \in \cnodes{\mathbb{T}}$ with $\node{m} = S \tnxTVD U$ and $\Delta(U) = \sigma Z.\Phi$, there is a semantics-preserving transformation of $P(\node{m})$ to $(\sigma Z.\Phi)[\Delta]$ for $\val{V}'$. \end{quote}
The following reasoning then gives the desired result. \begin{flalign*} &\semT{P(\node{m})}{\val{V}'} \\ &{=}\; \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}'} && \text{Semantics-preserving transformation for $\val{V}'$} \\ &{=}\; \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}} && \text{$\val{V}'$ consistent for $\mathbb{T}$, no $Z' \in \textnormal{Var}\xspace_{\mathbb{T}}$ free in $(\sigma Z.\Phi)[\Delta]$} \\ &{=}\; \semT{\sigma Z.\Phi}{\val{V}[\Delta]} && \text{Lemma~\ref{lem:definition-list-correspondence}} \\ &{=}\; \semT{U}{\val{V}[\Delta]} && \text{$\Delta(U) = \sigma Z.\Phi$, Lemma~\ref{lem:U-semantic-correspondence}} \\ &{=}\; \semop{\node{m}} && \text{Definition of $\semop{\node{m}}$} \end{flalign*}
\noindent The proof therefore reduces to showing how to transform $P(\node{m})$, where companion node $\node{m} \in \cnodes{\mathbb{T}}$ is such that $\node{m} = S \tnxTVD U$ and $\Delta(U) = \sigma Z.\Phi$, to $(\sigma Z.\Phi)[\Delta]$ in a way that is semantics-preserving for $\val{V}'$. So fix $\node{m} \in \cnodes{\mathbb{T}}$. The proof proceeds by strong induction on $|\companions{\tree{T}_{\node{m}}}| \geq 1$. The induction hypothesis guarantees that for any companion node $\node{m}' = S' \tnxTV{\Delta'} U' \in \companions{\tree{T}_{\node{m}}} \setminus \{ \node{m} \}$, there is a semantics-preserving transformation of $P(\node{m}')$ to $(\Delta'(U'))[\Delta']$ for $\val{V}'$, since in this case $|\companions{\tree{T}_{\node{m}'}}| < |\companions{\tree{T}_{\node{m}}}|$. We also remark on the following. \begin{enumerate}
\item
As $\node{m} \in \cnodes{\mathbb{T}}$, $cs(\node{m}) = \node{n}$ is such that $\node{n} = S \tnxTVD \Phi[Z := U]$.
\item
By definition, $P(\node{m}) = \sigma Z_{\node{m}} . P(\node{n})$. \end{enumerate}
Our transformation from $P(\node{m})$ to $(\sigma Z.\Phi)[\Delta]$ uses \emph{inductive updates} (cf.\/ Definition~\ref{def:node-function-inductive-update}) of $P$. That is, we will transform $P(\node{m})$ by using inductive updates to change the values returned by $P$ for some of the descendants of $\node{m}$. This will have the effect of rewriting $P(\node{m})$ to $(\sigma Z.\Phi)[\Delta]$. To define these inductive updates we will use a tree prefix, $\tree{T}'$, of the subtree $\tree{T}_{\node{n}}$ of $\tree{T}$ rooted at $\node{n}$, the sole child of $\node{m}$ (cf.\/ Definition~\ref{def:subtree}). Let $\node{F} = \{ \node{f} \in D(\node{n}) \mid \rho(\node{f}) \in \{ \sigma Z, \text{Un} \}\}$ be the companion and fixpoint nodes in $\tree{T}_{\node{n}}$. Then \[ \tree{T}' = \tpre{\tree{T}_{\node{n}}}{\node{F}} = (\node{N}', \node{n}, p', cs') \] (cf.\/ Definition~\ref{def:tree-prefix-generation}) is the tree prefix of the subtree $\tree{T}_{\node{n}}$ obtained by converting the nearest descendants of $\node{n}$ that are companion or fixpoint nodes of $\mathbb{T}$ into leaves. We note that $\tree{T}'$ contains no internal nodes that are companion nodes or fixpoint nodes of $\mathbb{T}$. It is also straightforward to show that for each $\node{n}' \in \node{N}'$, $\textit{fm}(\node{n}')$ is a subformula of $\Phi[Z := U]$ and $\textit{dl}(\node{n}') = \Delta$. Finally, we remark on a property involving inductive updates of $P$ on nodes in $\node{N}'$; this property allows us to replace $P(\node{n}')$, where $\node{n}' \in \node{N}'$, by a semantically equivalent formula $\Gamma$ in an inductive update of $P$ without changing the semantics of any formulas generated by the updated function.
\begin{quote}
\textbf{(IU)}
Let $\node{N}'' \subseteq \node{N}'$, with $\vec{\node{n}}'' = \node{n}''_1 \cdots \node{n}''_j$ an ordering of $\node{N}''$. Also assume $\vec{\Gamma} = \Gamma_1 \cdots \Gamma_j \in (\muforms^{\Sigma}_{\textnormal{Var}\xspace})^*$ satisfies $\semT{P(\node{n}''_i)}{\val{V}'} = \semT{\Gamma_i}{\val{V}'}$ for all $i$. Then for all $\node{n}' \in \node{N}'$,
\[
\semT{P(\node{n}')}{\val{V}'}
= \semT{P \iupd{\vec{\node{n}}''}{\vec{\Gamma}}(\node{n}')}{\val{V}'}.
\] \end{quote}
Property (IU) follows from the definition of $P \iupd{\vec{\node{n}}''}{\vec{\Gamma}}$ via a simple tree induction on $\tree{T}'$.
We now show how to transform $P(\node{m})$ to $(\sigma Z.\Phi)[\Delta]$ in a semantics-preserving fashion for $\val{V}'$. The key transformation relies on inductively updating $P$ via the leaves, $\node{L} \subseteq \node{N}'$, of $\tree{T}'$. Each such leaf falls into one of three disjoint sets. \begin{align*} \node{L}_{\perp} &= \{\node{n}' \in \node{N}' \mid \rho(\node{n}'){\perp} \} \\ \node{L}_{\textnormal{Un}} &= \{\node{n}' \in \node{N}' \mid \rho(\node{n}') = \text{Un} \} \\ \node{L}_\sigma &= \{\node{n}' \in \node{N}' \mid \rho(\node{n}') = \sigma Z \} \end{align*} $\node{L}_{\perp}$ consists of leaves in $\tree{T}'$ that are also leaves in $\tree{T}_{\node{n}}$, and hence in $\tree{T}$, while $\node{L}_{\textnormal{Un}}$ contains the leaves of $\tree{T}'$ that are companion nodes in $\tree{T}_{\node{n}}$. $\node{L}_\sigma$ is the set of leaves in $\tree{T}'$ whose formulas are fixpoint formulas. Note that $\node{L} = \node{L}_{\perp} \cup \node{L}_{\textnormal{Un}} \cup \node{L}_\sigma$.
Now let $\vec{\node{n}}' = \node{n}'_1 \cdots \node{n}'_j$ be an ordering of $\node{L}$, and define $\vec{\Phi}' = \Phi'_1 \cdots \Phi'_j$ by:
\[ \Phi'_i = \begin{cases} Z_{\node{m}}
& \text{if $\textit{fm}(\node{n}'_i) = U$}
\\ (\textit{fm}(\node{n}'_i))[\Delta]
& \text{otherwise.} \end{cases} \]
That is, $\Phi'_i$ is defined to be $Z_{\node{m}}$, the fresh variable associated with $\node{m}$, if the formula in $\node{n}'_i$ is $U$, in which case $\node{n}'_i$ is either a companion leaf of $\node{m}$ in $\mathbb{T}$ (and thus in $\node{L}_{\perp}$) or a companion node for $U$ that is a strict descendant of $\node{m}$ (and thus in $\node{L}_{\textnormal{Un}}$). Otherwise, $\Phi'_i$ is the formula in $\node{n}'_i$, instantiated by $\Delta$. Now define \[ P' = P\iupd{\vec{\node{n}}'}{\vec{\Phi}'}. \] We will show that the transformations of $P(\node{m})$ to $P'(\node{m})$, and of $P(\node{n})$ to $P'(\node{n})$, are semantics-preserving for $\val{V}'$ by showing that for each $\node{n}'_i \in \node{L}$, $\semT{P(\node{n}'_i)}{\val{V}'} = \semT{\Phi'_i}{\val{V}'}$; Property (IU) then gives the desired result, namely, $\semT{P(\node{n})}{\val{V}'} = \semT{P'(\node{n})}{\val{V}'}$ and thus $\semT{P(\node{m})}{\val{V}'} = \semT{P'(\node{m})}{\val{V}'}$. The argument proceeds via a case analysis on $\node{n}'_i$. \begin{description} \item[$\textit{fm}(\node{n}'_i) = U$.]
In this case $\Phi'_i = Z_\node{m}$, and either $\node{n}'_i \in \node{L}_{\perp}$ or $\node{n}'_i \in \node{L}_{\textnormal{Un}}$. If $\node{n}'_i \in \node{L}_{\perp}$ then $\node{n}'_i$ is a companion leaf of $\node{m}$ in $\mathbb{T}$, and by definition $P(\node{n}'_i) = Z_\node{m} = \Phi'_i$. Now assume $\node{n}'_i \in \node{L}_{\textnormal{Un}}$ is a companion node in $\mathbb{T}$, meaning $\node{n}'_i \in \companions{\tree{T}_{\node{m}}} \setminus \{ \node{m} \}$. As $\textit{dl}(\node{n}'_i) = \Delta$ the induction hypothesis guarantees that $\semT{P(\node{n}'_i)}{\val{V}'} = \semT{(\Delta(U))[\Delta]}{\val{V}'} = \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}'}$. We now reason as follows.
\begin{flalign*}
&\semT{P(\node{n}'_i)}{\val{V}'}
\\
&{=}\; \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}'}
&& \text{Induction hypothesis}
\\
&{=}\; \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}}
&& \text{$\val{V}'$ consistent with $\mathbb{T}$, no $Z' \in \textnormal{Var}\xspace_{\mathbb{T}}$ free in $(\sigma Z.\Phi)[\Delta]$}
\\
&{=}\; \semT{\sigma Z.\Phi}{\val{V}[\Delta]}
&& \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
&{=}\; \semT{U}{\val{V}[\Delta]}
&& \text{$\Delta(U) = \sigma Z.\Phi$, Lemma~\ref{lem:U-semantic-correspondence}}
\\
&{=}\; \sem{U}{}{\mathbb{T}}
&& \text{Definition of $\sem{U}{}{\mathbb{T}}$}
\\
&{=}\; \val{V}'(Z_{\node{m}})
&& \text{$\val{V}'$ consistent with $\mathbb{T}$}
\\
&{=}\; \semT{\Phi'_i}{\val{V}'}
&& \text{$\Phi'_i = Z_{\node{m}}$, so $\semT{\Phi'_i}{\val{V}'} = \semT{Z_{\node{m}}}{\val{V}'} = \val{V}'(Z_{\node{m}})$}
\end{flalign*} \item[$\node{n}'_i \in \node{L}_{\perp}, \textit{fm}(\node{n}'_i) \neq U$.]
There are two subcases to consider. In the first $\textit{fm}(\node{n}'_i) \not\in \mathbb{U}$, meaning either $\node{n}'_i$ is a free leaf, in which case we have argued above that $P(\node{n}'_i) = (\textit{fm}(\node{n}'_i))[\Delta]$,
or $\node{n}'_i$ is a $\dia{K}$-leaf in $\tree{T}$, in which case by definition $P(\node{n}'_i) = (\textit{fm}(\node{n}'_i))[\Delta]$. Regardless, $P(\node{n}'_{i}) = (\textit{fm}(\node{n}'_{i}))[\Delta] = \Phi'_i$, and the result is immediate.
In the second subcase $\textit{fm}(\node{n}'_{i}) = U' \in \mathbb{U}$ for some $U' \neq U$. In this case $\node{n}'_{i}$ is a companion leaf of some ancestor companion node $\node{m}'$ of $\node{m}$, and
$P(\node{n}'_i) = Z_{\node{m}'}$. We reason as follows.
\begin{align*}
\semT{P(\node{n}'_{i})}{\val{V}'}
&= \val{V}'( Z_{\node{m}'} )
&& \text{$\semT{P(\node{n}'_{i})}{\val{V}'} = \semT{Z_{\node{m}'}}{\val{V}'} = \val{V}'(Z_{\node{m}'})$}
\\
&= \sem{U'}{}{\mathbb{T}}
&& \text{Consistency of $\val{V}'$ for $\mathbb{T}$}
\\
&= \semop{\node{n}'_{i}}
&& \text{Definition of $\sem{U'}{}{\mathbb{T}}$}
\\
&= \semT{U'[\Delta]}{\val{V}'}
&& \text{Lemma~\ref{lem:consistency-property}, $\val{V}'$ consistent with $\mathbb{T}$}
\\
&= \semT{\Phi'_{i}}{\val{V}'}
&& \text{$\Phi'_{i} = (\textit{fm}(\node{n}'_{i}))[\Delta] = U'[\Delta]$}
\end{align*}
\item[$\node{n}'_i \in \node{L}_{\textnormal{Un}}, \textit{fm}(\node{n}'_i) \neq U$.]
In this case $\textit{fm}(\node{n}'_i) = U' \in \mathbb{U}$
for some $U' \neq U$,
and $\node{n}'_i \in \companions{\tree{T}_{\node{m}}} \setminus \{ \node{m} \}$.
Hence $\semT{P(\node{n}'_i)}{\val{V}'} = \semT{(\Delta(U'))[\Delta]}{\val{V}'}$ according to the induction hypothesis.
Since $\semT{(\Delta(U'))[\Delta]}{\val{V}'} = \semT{U'[\Delta]}{\val{V}'} = \semT{\Phi'_i}{\val{V}'}$, $\semT{P(\node{n}'_i)}{\val{V}'} = \semT{\Phi'_i}{\val{V}'}$. \item[$\node{n}'_i \in \node{L}_\sigma$.]
In this case $\textit{fm}(\node{n}'_{i}) = \sigma' Z'.\Phi'$ for some $\sigma', Z'$ and $\Phi'$. Also,
$cs(\node{n}'_{i}) = \node{n}''_{i}$ in $\tree{T}$ is such that $\node{n}''_{i} \in \companions{\tree{T}}$, with
\[
\textit{dl}(\node{n}''_{i})
= \Delta'
= \Delta \cdot (U' = \sigma'Z'.\Phi')
\]
for some $U' \not\in \operatorname{dom}(\Delta)$ and $\textit{fm}(\node{n}''_{i}) = U'$.
Since $\node{n}''_{i} \in \companions{\tree{T}_{\node{m}}} \setminus \{ \node{m} \}$, the induction hypothesis implies
$\semT{P(\node{n}''_{i})}{\val{V}'}
= \semT{(\Delta'(U'))[\Delta']}{\val{V}'}
= \semT{(\sigma'Z'.\Phi')[\Delta']}{\val{V}'}$.
We prove that $\semT{P(\node{n}'_i)}{\val{V}'} = \semT{\Phi'_i}{\val{V}'}$ as follows.
\begin{align*}
&\semT{P(\node{n}'_{i})}{\val{V}'}
\\
&= \semT{P(\node{n}''_{i})}{\val{V}'}
&& \text{Definition of $P(\node{n}'_{i})$ when $\textit{fm}(\node{n'_i}) = \sigma' Z'.\Phi'$}
\\
&= \semT{(\sigma'Z'.\Phi')[\Delta']}{\val{V}'}
&& \text{Induction hypothesis}
\\
&= \semT{(\sigma'Z'.\Phi')[\Delta]}{\val{V}'}
&& \text{$U'$ not free in $\sigma'Z'.\Phi'$}
\\
&= \semT{(\textit{fm}(\node{n}'_i))[\Delta]}{\val{V}'}
&& \text{$\textit{fm}(\node{n}'_i) = \sigma'Z'.\Phi'$}
\\
&= \semT{\Phi'_i}{\val{V}'}
&& \text{$\Phi'_i = (\textit{fm}(\node{n}'_i))[\Delta]$}
\end{align*} \end{description}
\noindent The final part of the semantics-preserving transformation of $P(\node{m})$ to $(\sigma Z.\Phi)[\Delta]$ for $\val{V}'$ involves the following steps. \begin{itemize} \item
We show that $\Gamma = \sigma Z_{\node{m}}.(\Phi[Z := Z_{\node{m}}])$ is such that $P'(\node{m}) = \Gamma[\Delta]$,
and thus $\semT{P'(\node{m})}{\val{V}'} = \semT{\Gamma[\Delta]}{\val{V}'}$. \item
Since $Z_{\node{m}}$ is not free in $\Phi$, we also know that for any valuation $\val{V}''$, $\semT{\Gamma}{\val{V}''} = \semT{\sigma Z.\Phi}{\val{V}''}$.
In particular, $\semT{\Gamma}{\val{V}'[\Delta]} = \semT{(\sigma Z.\Phi)}{\val{V}'[\Delta]}$, whence $\semT{\Gamma[\Delta]}{\val{V}'} = \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}'}$ and we have the following,
\[
\semT{P(\node{m})}{\val{V}'}
= \semT{P'(\node{m})}{\val{V}'}
= \semT{\Gamma[\Delta]}{\val{V}'}
= \semT{(\sigma Z.\Phi)[\Delta]}{\val{V}'},
\]
which is what is to be proved. \end{itemize}
To finish the proof, we first show that $P'(\node{n}) = \Gamma'[\Delta]$, where $\Gamma' = \Phi[Z := Z_{\node{m}}]$. In support of this, consider the following. \[
\Phi''_i =
\begin{cases}
Z_{\node{m}}
& \text{if $\textit{fm}(\node{n}'_i) = U$}
\\
\textit{fm}(\node{n}'_i)
& \text{otherwise.}
\end{cases} \] It is immediate that $\Phi'_i = \Phi''_i[\Delta]$ for all $1 \leq i \leq j$. Now consider \[ P'' = P\iupd{\vec{\node{n}}'}{\vec{\Phi}''}. \]
If $\node{n}' \in \node{L}$ then either $\textit{fm}(\node{n}') = U$ and $P''(\node{n}') = Z_{\node{m}}$, or $P''(\node{n}') = \textit{fm}(\node{n}')$. Based on this observation and the definition of $P''$, it follows that $P''(\node{n}') = (\textit{fm}(\node{n}'))[U := Z_{\node{m}}]$ for any $\node{n}' \in \node{N}'$. In particular, $P''(\node{n}) = (\textit{fm}(\node{n}))[U := Z_{\node{m}}] = (\Phi[Z:=U])[U := Z_{\node{m}}] = \Phi[Z := Z_{\node{m}}] = \Gamma'$. It also follows from the properties of $P''$ and $\Phi''_i$ that for any $\node{n}' \in \node{N}'$, $P'(\node{n}') = (P''(\node{n}'))[\Delta]$. Thus $\Gamma'[\Delta] = (P''(\node{n}))[\Delta] = P'(\node{n})$.
The final step of the proof is the observation that $\Gamma = P''(\node{m})$ and that $\Gamma[\Delta] = (P''(\node{m}))[\Delta] = P'(\node{m})$. \qedhere
\end{proof}
\noindent The next lemma extends the previous one, which focused only on companion nodes, to all nodes.
\begin{lemma}[Node formulas and node semantics]\label{lem:node-formulas-and-node-semantics} Let $\val{V}'$ be a consistent valuation for tableau $\mathbb{T}$. Then for every $\node{n} \in \node{N}$, $\semT{P(\node{n})}{\val{V}'} = \semop{\node{n}}$. \end{lemma}
\remove{ \begin{proofsketch}
The proof proceeds by induction on $\tree{T}$, the tree embedded in $\mathbb{T}$. The case where $\rho(\node{n}) = \text{Un}$ follows from Lemma~\ref{lem:companion-node-formulas-and-semantics}.
The detailed proof is included in the appendix. \end{proofsketch} } \begin{proof} Let valuation $\val{V}'$ be consistent with $\mathbb{T}$. The proof is by induction on $\tree{T}$. So fix node $\node{n}$ in $\tree{T}$. The induction hypothesis asserts that for all $\node{n}' \in c(\node{n})$, $\semop{\node{n}'} = \semT{P(\node{n}')}{\val{V}'}$. The proof now proceeds by an analysis of $\rho(\node{n})$.
\begin{description} \item[$\rho(\node{n}) {\perp}$.]
In this case $\node{n}$ is a leaf. We proceed by an analysis on the form of $\node{n}$.
\begin{itemize}
\item
$\node{n}$ is a free leaf or $\dia{K}$-leaf. Let $\Phi = \textit{fm}(\node{n})$; we reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{\Phi[\Delta]}{\val{V}'}
&& \text{Definition of $P(\node{n})$}
\\
&= \semT{\Phi[\Delta]}{\val{V}}
&& \text{Consistency of $\val{V}'$, no $Z_\node{m}$ can appear in $\Phi[\Delta]$}
\\
&= \semT{\Phi}{\val{V}[\Delta]}
&& \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
&= \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{align*}
\item
$\node{n}$ is a $\sigma$-leaf. Let $\textit{fm}(\node{n}) = U$, and let $\node{m}$ be the companion node of $\node{n}$. Then $P(\node{n}) = Z_\node{m}$, where $Z_\node{m} \in \textnormal{Var}\xspace_{\mathbb{T}}$ is the fresh variable associated with $\node{m}$. We reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{Z_\node{m}}{\val{V}'}
&& \text{Definition of $P(\node{n})$}
\\
&= \val{V}'( Z_\node{m})
&& \text{Definition of $\semTV{Z_\node{m}}$}
\\
&= \sem{U}{}{\mathbb{T}}
&& \text{Consistency of $\val{V}'$ for $\mathbb{T}$}
\\
&= \semop{\node{n}}
&& \text{Definition of $\sem{U}{}{\mathbb{T}}$}
\end{align*}
\end{itemize}
\item[$\rho(\node{n}) = \land$.]
In this case we know that $\node{n} = S \tnxTVD \Phi_1 \land \Phi_2$ and that $cs(\node{n}) = \node{n}_1\node{n}_2$, where each $\node{n}_i = S \tnxTVD \Phi_i$. The induction hypothesis guarantees that $\semT{P(\node{n}_i)}{\val{V}'} = \semop{\node{n}_i} = \semT{\Phi_i}{\val{V}[\Delta]}$. We reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{P(\node{n}_1) \land P(\node{n}_2)}{\val{V}'}
&& \text{Definition of $P(\node{n})$}
\\
&= \semT{P(\node{n}_1)}{\val{V}'} \cap \semT{P(\node{n}_2)}{\val{V}'}
&& \text{Semantics of $\land$}
\\
&= \semop{\node{n}_1} \cap \semop{\node{n}_2}
&& \text{Induction hypothesis (twice)}
\\
&= \semT{\Phi_1}{\val{V}[\Delta]} \cap \semT{\Phi_2}{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}_i}$}
\\
&= \semT{\Phi_1 \land \Phi_2}{\val{V}[\Delta]}
&& \text{Semantics of $\land$}
\\
&= \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{align*}
\item[$\rho(\node{n}) = \lor$.]
In this case we know that $\node{n} = S \tnxTVD \Phi_1 \lor \Phi_2$ and that $cs(\node{n}) = \node{n}_1\node{n}_2$, where each $\node{n}_i = S_i \tnxTVD \Phi_i$ and $S = S_1 \cup S_2$. The induction hypothesis guarantees that $\semT{P(\node{n}_i)}{\val{V}'} = \semop{\node{n}_i} = \semT{\Phi_i}{\val{V}[\Delta]}$. We reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{P(\node{n}_1) \lor P(\node{n}_2)}{\val{V}'}
&& \text{Definition of $P(\node{n})$}
\\
&= \semT{P(\node{n}_1)}{\val{V}'} \cup \semT{P(\node{n}_2)}{\val{V}'}
&& \text{Semantics of $\lor$}
\\
&= \semop{\node{n}_1} \cup \semop{\node{n}_2}
&& \text{Induction hypothesis (twice)}
\\
&= \semT{\Phi_1}{\val{V}[\Delta]} \cup \semT{\Phi_2}{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}_i}$}
\\
&= \semT{\Phi_1 \lor \Phi_2}{\val{V}[\Delta]}
&& \text{Semantics of $\lor$}
\\
&= \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{align*}
\item[$\rho(\node{n}) = {[K]}$.]
In this case we know that $\node{n} = S \tnxTVD [K] \Phi'$ and that $cs(\node{n}) = \node{n}'$, where $\node{n}' = S' \tnxTVD \Phi'$ and $S' = \{ s' \mid \exists s \in S \colon s \xrightarrow{K} s' \}$. The induction hypothesis guarantees that $\semT{P(\node{n}')}{\val{V}'} = \semop{\node{n}'} = \semT{\Phi'}{\val{V}[\Delta]}$. We reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{[K] (P(\node{n}'))}{\val{V}'}
&& \text{Definition of $P(\node{n})$}
\\
&= \mathit{pred}_{[K]} (\semT{P(\node{n}')}{\val{V}'})
&& \text{Semantics of $[K]$}
\\
&= \mathit{pred}_{[K]} (\semop{\node{n}'})
&& \text{Induction hypothesis}
\\
&= \mathit{pred}_{[K]} (\semT{\Phi'}{\val{V}[\Delta]})
&& \text{Definition of $\semop{\node{n}'}$}
\\
&= \semT{[K] \Phi'}{\val{V}[\Delta]}
&& \text{Semantics of $[K]$}
\\
&= \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{align*}
\item[$\rho(\node{n}) = (\dia{K},f)$.]
In this case we know that $\node{n} = S \tnxTVD \dia{K} \Phi'$ and that $cs(\node{n}) = \node{n}'$, where $\node{n}' = f(S) \tnxTVD \Phi'$. The induction hypothesis guarantees that $\semT{P(\node{n}')}{\val{V}'} = \semop{\node{n}'} = \semT{\Phi'}{\val{V}[\Delta]}$. We reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{\dia{K} P(\node{n}')}{\val{V}'}
&& \text{Definition of $P(-)$}
\\
&= \mathit{pred}_{\dia{K}} (\semT{P(\node{n}')}{\val{V}'})
&& \text{Semantics of $\dia{K}$}
\\
&= \mathit{pred}_{\dia{K}} (\semop{\node{n}'})
&& \text{Induction hypothesis}
\\
&= \mathit{pred}_{\dia{K}} (\semT{\Phi'}{\val{V}[\Delta]})
&& \text{Definition of $\semop{\node{n}'}$}
\\
&= \semT{\dia{K} \Phi'}{\val{V}[\Delta]}
&& \text{Semantics of $\dia{K}$}
\\
&= \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{align*}
\item[$\rho(\node{n}) = \sigma Z$.]
In this case we know that $\node{n} = S \tnxTVD \sigma Z.\Phi'$ and that $cs(\node{n}) = \node{n}'$, where $\node{n}' = S \tnxTV{\Delta'} U$ for some fresh definitional constant $U$ and $\Delta' = \Delta \cdot (U = \sigma Z.\Phi')$. The induction hypothesis guarantees that $\semT{P(\node{n}')}{\val{V}'} = \semop{\node{n}'} = \semT{U}{\val{V}[\Delta']}$. We reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{P(\node{n}')}{\val{V}'}
&& \text{Definition of $P(\node{n})$}
\\
&= \semop{\node{n}'}
&& \text{Induction hypothesis}
\\
&= \semT{U}{\val{V}[\Delta']}
&& \text{Definition of $\semop{\node{n}'}$}
\\
&= \semTV{U[\Delta']}
&& \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
&= \semTV{(\sigma Z.\Phi')[\Delta]}
&& \text{Definition of $U[\Delta']$, $\Delta' = \Delta \cdot (U = \sigma Z.\Phi')$}
\\
&= \semT{\sigma Z. \Phi'}{\val{V}[\Delta]}
&& \text{Lemma~\ref{lem:definition-list-correspondence}}
\\
&= \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{align*}
\item[$\rho(\node{n}) = \textnormal{Un}$.]
Follows immediately from Lemma~\ref{lem:companion-node-formulas-and-semantics}.
\item[$\rho(\node{n}) = \textnormal{Thin}$.]
In this case we know that $\node{n} = S \tnxTVD \Phi$ and that $cs(\node{n}) = \node{n}' $, where $\node{n}' = S' \tnxTVD \Phi$ for some $S \subseteq S'$. The induction hypothesis guarantees that $\semT{P(\node{n}')}{\val{V}'} = \semop{\node{n}'} = \semT{\Phi}{\val{V}[\Delta]}$. We reason as follows.
\begin{align*}
\semT{P(\node{n})}{\val{V}'}
&= \semT{P(\node{n}')}{\val{V}'}
&& \text{Definition of $P(-)$}
\\
&= \semop{\node{n}'}
&& \text{Induction hypothesis}
\\
&= \semT{\Phi}{\val{V}[\Delta]}
&& \text{Definition of $\semop{\node{n}'}$}
\\
&= \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{align*}
\end{description}\qedhere \end{proof}
\noindent The final corollary asserts that when the definition list in a proof node is empty, there is no need to make special provision for consistent valuations.
\begin{corollary}\label{cor:node-formulas-vs-node-semantics} Let $\node{n} \in \node{N}$ be such that $\textit{dl}(\node{n}) = \varepsilon$. Then $\semop{\node{n}} = \semTV{P(\node{n})}$. \end{corollary} \begin{proof} Fix $\node{n} \in \node{N}$ such that $\textit{dl}(\node{n}) = \varepsilon$. Based on Lemma~\ref{lem:node-formulas-and-node-semantics} we know that for any valuation $\val{V}'$ that is consistent with $\mathbb{T}$, $\semT{P(\node{n})}{\val{V}'} = \semop{\node{n}}$. It may also be seen that no $Z' \in \textnormal{Var}\xspace_{\mathbb{T}}$ can be free in $P(\node{n})$, and since $\val{V}'$ is consistent with $\mathbb{T}$ we have that $\semT{P(\node{n})}{\val{V}'} = \semTV{P(\node{n})}$. Consequently, $\semop{\node{n}} = \semTV{P(\node{n})}$. \qedhere \end{proof}
\subsection{Support Orderings for Companion Nodes}\label{subsec:support-ordering-proof}
As the next step in our soundness proof, we establish that for all companion nodes $\node{n}$ in the tableau, $(\textit{st}(\node{n}),<:_{\node{n}}^+)$, where $<:_{\node{n}}^+$ is the transitive closure of the extended dependency ordering on $\node{n}$, is a support ordering for a semantic function derived from $P(\node{n})$. This fact is central in the proof of soundness, as it establishes a key linkage between the tableau-based ordering $<:_{\node{n}}^+$ and the semantic notion of support ordering.
In order to prove this result about $<:_\node{n}^+$ we first introduce a derived dependency relation, which we call the \emph{support dependency ordering} (notation $\leq:_{\node{m},\node{n}}$). This ordering is based on the extended dependency ordering $<:_{\node{m},\node{n}}$, but it also allows dependencies that first cycle through node $\node{n}$, in case $\node{n}$ is a companion node. Specifically, the support dependency ordering captures exactly the dependencies guaranteeing that $s$ is in the semantics of $\node{n}$ provided that, for every $s'$ and $\node{m}$ with $s' \leq:_{\node{m},\node{n}} s$, state $s'$ is in the semantics of $\node{m}$. If $\node{n}$ is a node to which the unfolding rule has been applied, then to show that $s$ is in the semantics of $\node{n}$ we may first need to show that some other state $s'$ is in the semantics of $\node{n}$ itself. This is not captured by the relation $<:_{\node{m},\node{n}}$, which does not take the dependencies within node $\node{n}$ into account.
\begin{definition}[Support dependency ordering]\label{def:support-dependency-ordering} Let $\node{m},\node{n} \in \node{N}$ be proof nodes in $\mathbb{T}$. The \emph{support dependency ordering}, $\leq :_{\node{m},\node{n}}$ is defined as follows: \[ \leq :_{\node{m},\node{n}} = \begin{cases}
<:_{\node{m},\node{n}} \mathbin{;} <:_\node{n}^* & \text{if $\rho(\node{n}) = \textnormal{Un}$} \\
<:_{\node{m},\node{n}} & \text{otherwise} \end{cases} \] \end{definition}
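As a small, hypothetical illustration of this definition (the nodes and states named here are not taken from any particular tableau in this paper), suppose $\node{n}$ is a companion node with $s_1 <:_{\node{n}} s_0$, and suppose $s_2 <:_{\node{m},\node{n}} s_1$ while $s_2 \centernot{<:}_{\node{m},\node{n}} s_0$. Then
\[
s_2 \leq:_{\node{m},\node{n}} s_0,
\]
since $s_2 <:_{\node{m},\node{n}} s_1$ and $s_1 <:_{\node{n}}^* s_0$. That is, the support dependency ordering records that establishing that $s_0$ is in the semantics of $\node{n}$ may require first cycling through $s_1$ at $\node{n}$ and then appealing to the dependency of $s_1$ on $s_2$ at $\node{m}$.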
We now remark on some properties of $\leq:_{\node{m},\node{n}}$ that will be used below. We first note that ${\leq:_{\node{m},\node{n}}}$ extends ${<:_{\node{m},\node{n}}}$ (as well as $<_{\node{m},\node{n}}$ and $\lessdot_{\node{m},\node{n}}$, since $<:_{\node{m},\node{n}}$ extends both of these relations): for all $s, s'$, if $s' <:_{\node{m},\node{n}} s$ then $s' \leq:_{\node{m},\node{n}} s$. Also, if $\node{n}$ is a companion node (i.e.\/ $\rho(\node{n}) = \text{Un}$) then the transitivity of $<:_{\node{n}}^*$ guarantees that ${\leq:_{\node{m},\node{n}}} = (\leq:_{\node{m},\node{n}} \mathbin{;} <:_{\node{n}}^*)$, as in this case $$ {\leq:_{\node{m},\node{n}}} = (<:_{\node{m},\node{n}} \mathbin{;} <:_\node{n}^*) = (<:_{\node{m},\node{n}} \mathbin{;} <:_\node{n}^* \mathbin{;} <:_\node{n}^*) = (\leq:_{\node{m},\node{n}} \mathbin{;} <:_\node{n}^*). $$ From the definition of $<:_{\node{m},\node{n}}$ (cf.\/ Definition~\ref{def:extended_path_ordering}) we have that if $\node{m} = \node{n}$ then $<:_{\node{m},\node{n}} = I_{\textit{st}(\node{n})}$ is the identity relation over $\textit{st}(\node{n})$. From this fact we can make the following observations. First, if $\node{m} = \node{n}$ and $\node{n}$ is not a companion node, then ${\leq:_{\node{m},\node{n}}} = {<:_{\node{m},\node{n}}}$ is the identity relation over $\textit{st}(\node{n})$. Second, if $\node{m} = \node{n}$ and $\node{n}$ is a companion node then $\leq:_{\node{m},\node{n}}$ is ${<:_{\node{n}}^*}$, the reflexive and transitive closure of the companion node ordering for $\node{n}$. The next lemma establishes a technical property, derived from the definition of $<:_{\node{m},\node{n}}$, that is satisfied by $\leq:_{\node{m},\node{n}}$. It is used later to show that $\leq:_{\node{m},\node{n}}$ obeys a pseudo-transitivity law.
\begin{lemma}[Characterization of $\leq:_{\node{m},\node{n}}$] \label{lem:support-dependency-ordering-characterization} Let $\node{n}_1, \node{n}_2$ and $\node{n}_3$ be proof nodes in $\mathbb{T}$, with $\node{n}_2$ a companion node and $\node{n}_3 \neq \node{n}_2$, and let $s_1, s_2, s_2'$ and $s_3$ satisfy the following. \begin{enumerate}
\item $s_2 \leq:_{\node{n}_2,\node{n}_1} s_1$
\item $s_2' <:_{\node{n}_2}^+ s_2$
\item $s_3 \leq:_{\node{n}_3,\node{n}_2} s_2'$ \end{enumerate} Then $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$. \end{lemma} \remove{ \begin{proofsketch} Follows from the definitions of $\leq:_{\node{m},\node{n}}$ and $<:_{\node{m},\node{n}}$ and the pseudo-transitivity (Lemma~\ref{lem:join_extended_path_ordering}) of $<:_{\node{m},\node{n}}$. The detailed proof is included in the appendix \qedhere \end{proofsketch} } \begin{proof} Fix nodes $\node{n}_1, \node{n}_2$ and $\node{n}_3$ and states $s_1, s_2, s_2'$ and $s_3$ satisfying the conditions in the statement of the lemma. We must show that $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$. There are two cases to consider. \begin{description} \item[$\node{n}_2 = \node{n}_1$.]
In this case $\node{n}_1 = \node{n}_2$ is a companion node, and from the observations above we know that ${\leq:_{\node{n}_2,\node{n}_1}} = {<:_{\node{n}_1}^*} = {<:_{\node{n}_2}^*}$. Thus
$s_2' <:_{\node{n}_2}^+ s_2 <:_{\node{n}_2}^* s_1$, meaning $s_2' <:_{\node{n}_2}^* s_1$. As $s_3 \leq:_{\node{n}_3,\node{n}_2} s_2' <:_{\node{n}_2}^* s_1$ and ${\leq:_{\node{n}_3,\node{n}_2}} = {\leq:_{\node{n}_3,\node{n}_2}} \mathbin{;} {<:_{\node{n}_2}^*}$, we can conclude that $s_3 \leq:_{\node{n}_3,\node{n}_2} s_1$ and thus, since $\node{n}_2 = \node{n}_1$, $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$. \item[$\node{n}_2 \neq \node{n}_1$.]
In this case there exists $s_1'$ such that $s_2 <:_{\node{n}_2,\node{n}_1} s_1'$ and $s_1' <:_{\node{n}_1}^* s_1$. It suffices to establish that $s_3 <:_{\node{n}_3,\node{n}_1} s_1'$, as the definition of $\leq:_{\node{n}_3,\node{n}_1}$ then guarantees that $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$.
The proof proceeds by induction on the definition of $<:_{\node{n}_2,\node{n}_1}$.
In the base case, $s_2 \lessdot_{\node{n}_2,\node{n}_1} s_1'$. Since $s_3 \leq:_{\node{n}_3,\node{n}_2} s_2'$ there must exist $s_2''$ such that $s_3 <:_{\node{n}_3,\node{n}_2} s_2''$ and $s_2'' <:_{\node{n}_2}^* s_2'$. Since $s_2' <:_{\node{n}_2}^+ s_2$ we have that $s_2'' <:_{\node{n}_2}^+ s_2$, and the definition of $<:_{\node{n}_3,\node{n}_1}$ gives $s_3 <:_{\node{n}_3,\node{n}_1} s_1'$.
In the induction step, there exist a node $\node{n}_1'$ with $\node{n}_1' \neq \node{n}_1$ and $\node{n}_1' \neq \node{n}_2$, and states $t_1, t_1'$, such that $t_1 \lessdot_{\node{n}_1', \node{n}_1} s_1'$, $t_1' <:_{\node{n}_1'} t_1$, and $s_2 <:_{\node{n}_2,\node{n}_1'} t_1'$. The induction hypothesis guarantees that $s_3 <:_{\node{n}_3,\node{n}_1'} t_1'$. The definition of $<:_{\node{n}_3,\node{n}_1}$ now guarantees that $s_3 <:_{\node{n}_3,\node{n}_1} s_1'$.
\qedhere \end{description} \end{proof}
\noindent Relation $\leq:_{\node{m},\node{n}}$ also enjoys a pseudo-transitivity property. \begin{lemma}[Pseudo-transitivity of $\leq:_{\node{m},\node{n}}$] \label{lem:pseudo-transitivity-of-support-dependency-ordering}
Let $\node{n}_1, \node{n}_2$ and $\node{n}_3$ be proof nodes in partial tableau $\mathbb{T}$, and assume $s_1, s_2$ and $s_3$ are such that $s_3 \leq:_{\node{n}_3, \node{n}_2} s_2$ and $s_2 \leq:_{\node{n}_2, \node{n}_1} s_1$.
Then $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$. \end{lemma} \remove{ \begin{proofsketch} Follows from the pseudo-transitivity of $<:_{\node{m},\node{n}}$ (Lemma~\ref{lem:join_extended_path_ordering}) and the preceding observations. The detailed proof is included in the appendix. \qedhere \end{proofsketch} } \begin{proof} Suppose that $\node{n}_1, \node{n}_2$ and $\node{n}_3$, and $s_1, s_2$ and $s_3$, are such that $s_3 \leq:_{\node{n}_3,\node{n}_2} s_2$ and $s_2 \leq:_{\node{n}_2,\node{n}_1} s_1$. We must show that $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$. There are two cases to consider. \begin{description} \item[$s_2 <:_{\node{n}_2,\node{n}_1} s_1$.]
We consider two sub-cases.
In the first, $s_3 <:_{\node{n}_3,\node{n}_2} s_2$; the pseudo-transitivity of $<:_{\node{m},\node{n}}$ immediately implies that $s_3 <:_{\node{n}_3,\node{n}_1} s_1$, so $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$.
In the second sub-case, $s_3 \centernot{<:}_{\node{n}_3,\node{n}_2} s_2$.
As $s_3 \leq:_{\node{n}_3,\node{n}_2} s_2$ it therefore must be the case that $\node{n}_2$ is a companion node and that
$$s_3 \,(<:_{\node{n}_3, \node{n}_2} \mathbin{;} <:_{\node{n}_2}^*))\, s_2,$$
meaning
that there exists $s_2'$ such that
\begin{align*}
s_3 &<:_{\node{n}_3, \node{n}_2} s_2' \\
s_2' &<:_{\node{n}_2}^* s_2.
\end{align*}
If $\node{n}_2 = \node{n}_1$ then $<:_{\node{n}_2,\node{n}_1}$ is the identity relation, and thus $s_2 = s_1$. This fact and the fact that $\node{n}_2 = \node{n}_1$ ensure that $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$.
If $\node{n}_2 \neq \node{n}_1$ and $\node{n}_2 = \node{n}_3$, it also follows that ${<:_{\node{n}_3,\node{n}_2}}$ is the identity relation, and thus $s_3 = s_2'$. We again have that $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$.
Finally, if $\node{n}_2 \neq \node{n}_1$ and $\node{n}_2 \neq \node{n}_3$ then Lemma~\ref{lem:support-dependency-ordering-characterization} gives the desired result. \item[$s_2 \centernot{<:}_{\node{n}_2,\node{n}_1} s_1$.]
In this case, it must be that $\node{n}_1$ is a companion node and that
there exists $s_1'$ such that $s_2 <:_{\node{n}_2,\node{n}_1} s_1'$ and $s_1' <:_{\node{n}_1}^* s_1$. From the previous case we know that $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1'$; the fact that $\leq:_{\node{n}_3,\node{n}_1} = (\leq:_{\node{n}_3,\node{n}_1} \mathbin{;} <:_{\node{n}_1}^*)$ immediately guarantees that $s_3 \leq:_{\node{n}_3,\node{n}_1} s_1$.
\qedhere \end{description} \end{proof}
In the remainder of this section we wish to establish that for any companion node $\node{n}$ in successful tableau $\mathbb{T}$, $<:_{\node{n}}^+$ is a support ordering for a function derived from $P(\node{n})$. In order to define this function we must deal with the free variables embedded in $P(\node{n})$. In particular, if $\node{m}$ is a companion node that is a strict ancestor of $\node{n}$ then variable $Z_{\node{m}}$ may appear free in $P(\node{n})$; this would be the case if any of the companion leaves of $\node{m}$ are also descendants of $\node{n}$. To accommodate these free variables in $P(\node{n})$ we will define a modification of valuation $\val{V}$ that assigns sets of states to these variables based on the $\leq:_{\node{m}',\node{n}}$ relation, where $\node{m}'$ is a companion leaf of $\node{m}$ that is also a descendant of $\node{n}$.
\begin{definition}[Influence extensions of valuations]\label{def:support-extension-of-valuation} Let $\mathbb{T} = \tableauTrl$ be a tableau, with $\node{m}_1 \cdots \node{m}_k$ an ordering on the companion nodes $\cnodes{\mathbb{T}}$ of $\mathbb{T}$. Also let $\node{n}$ be a node in $\mathbb{T}$, with $S = \textit{st}(\node{n})$ the states in $\node{n}$. We define the following. \begin{enumerate} \item
$\cleaves{\node{m}_i,\node{n}} = \cleaves{\mathbb{T}}(\node{m}_i) \cap D(\node{n})$ is the set of companion leaves of $\node{m}_i$ that are also descendants of $\node{n}$. \item
The set of states in companion leaves of $\node{m}_i$ that influence state $s$ in $\node{n}$ is given as follows.
\[
S_{\node{n},s,\node{m}_i} = \bigcup_{\node{m}' \in \cleaves{\node{m}_i, \node{n}}} \preimg{(\leq:_{\node{m}',\node{n}})}{s}
\]
We also define $$S_{\node{n},\node{m}_i} = \bigcup_{s \in S} S_{\node{n},s,\node{m}_i}$$ to be the set of states in companion leaves of $\node{m}_i$ that influence $\node{n}$. \item
The \emph{influence extension} of $\val{V}$ for state $s$ in node $\node{n}$ is defined as
\[
\val{V}_{\node{n},s} = \val{V}[Z_{\node{m}_1} \cdots Z_{\node{m}_k} = S_{\node{n},s,\node{m}_1} \cdots S_{\node{n},s,\node{m}_k}].
\]
Similarly
\[
\val{V}_\node{n} = \val{V}[Z_{\node{m}_1} \cdots Z_{\node{m}_k} = S_{\node{n},\node{m}_1} \cdots S_{\node{n},\node{m}_k}]
\]
is the influence extension of $\val{V}$ for node $\node{n}$. \end{enumerate}
\end{definition}
Intuitively, $S_{\node{n},s,\node{m}_i}$ contains all the states in the companion leaves of $\node{m}_i$ at or below node $\node{n}$ that influence the determination that state $s$ belongs in node $\node{n}$. Note these definitions also set $\val{V}_{\node{n},s}(Z_{\node{m}_i}) = \emptyset$ in case node $\node{m}_i$ has no companion leaves that are descendants of $\node{n}$; when this happens $Z_{\node{m}_i}$ cannot appear free in $P(\node{n})$. Also note that $Z_{\node{m}_i}$ does not appear free in $P(\node{n})$ if $\node{m}_i$ is a descendant of $\node{n}$, as $P(\node{m}_i) = \sigma Z_{\node{m}_i}.\Phi'$ for some $\Phi'$ is a subformula of $P(\node{n})$ and contains all occurrences of $Z_{\node{m}_i}$ in $P(\node{n})$. In both cases the value assigned to $Z_{\node{m}_i}$ by $\val{V}_{\node{n},s}$ does not affect the semantics of $P(\node{n})$.
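For a small, hypothetical instance of these definitions (again not drawn from a particular tableau), suppose $\node{m}_1$ is the only companion node of $\mathbb{T}$, that it has exactly one companion leaf $\node{m}'$ that is a descendant of $\node{n}$, and that $\preimg{(\leq:_{\node{m}',\node{n}})}{s} = \{t_1, t_2\}$ for some $s \in S$. Then $S_{\node{n},s,\node{m}_1} = \{t_1, t_2\}$, and the influence extension for $s$ in $\node{n}$ is
\[
\val{V}_{\node{n},s} = \val{V}[Z_{\node{m}_1} = \{t_1, t_2\}],
\]
so evaluating $P(\node{n})$ under $\val{V}_{\node{n},s}$ interprets $Z_{\node{m}_1}$ as exactly those states in the companion leaf $\node{m}'$ that influence $s$ at $\node{n}$.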
We now state a technical but useful lemma about dependency extensions. \begin{lemma}[Monotonicity of extensions] \label{lem:monotonicity-of-dependency-extensions} Let $\mathbb{T} = \tableauTrl$ be a tableau with nodes $\node{n}$ and $\node{n}'$ and states $s$ and $s'$ such that $s' <_{\node{n}',\node{n}} s$. Then: \begin{enumerate} \item
for all $Z \in \textnormal{Var}\xspace_{\mathbb{T}}, \val{V}_{\node{n}',s'}(Z) \subseteq \val{V}_{\node{n},s}(Z)$, and \item
for all $Z \in \textnormal{Var}\xspace \setminus \textnormal{Var}\xspace_{\mathbb{T}}, \val{V}_{\node{n}',s'}(Z) = \val{V}_{\node{n},s}(Z)$. \end{enumerate} \end{lemma} \begin{proof} Follows from the definition of $\val{V}_{\node{n},s}$ and the fact that the definition of $<_{\node{n}',\node{n}}$ ensures that $\node{n}' \in c(\node{n})$ and thus $D(\node{n}') \subseteq D(\node{n})$. Consequently $\cleaves{\node{m}_i,\node{n}'} \subseteq \cleaves{\node{m}_i,\node{n}}$ for all $\node{m}_i \in \cnodes{\mathbb{T}}$, and $s' <_{\node{n}',\node{n}} s$ guarantees that $S_{\node{n}',s',\node{m}_i} \subseteq S_{\node{n},s,\node{m}_i}$. \end{proof}
The next corollary is an immediate consequence of this lemma. \begin{corollary} \label{cor:monotonicity-of-dependency-extensions} Let $\mathbb{T} = \tableauTrl$ be a tableau, with nodes $\node{n}$ and $\node{n}'$ and states $s$ and $s'$ such that $s' <_{\node{n}',\node{n}} s$. Then $\semT{P(\node{n}')}{\val{V}_{\node{n}',s'}} \subseteq \semT{P(\node{n}')}{\val{V}_{\node{n},s}}$. \end{corollary} \begin{proof} Follows from Lemma~\ref{lem:monotonicity-of-dependency-extensions} and the fact that every occurrence of any $Z_{\node{m}} \in \textnormal{Var}\xspace_{\mathbb{T}}$ in $P(\node{n}')$ must be positive. \end{proof}
We now state and prove the main lemma of this section, which is that for any companion node $\node{n}$ in a successful tableau, $<:_{\node{n}}^+$ is a support ordering for a semantic function derived from $P(\node{n})$.
\begin{lemma}[$<:_{\node{n}}^+$ is a support ordering] \label{lem:support-ordering-for-companion-nodes} Let $\mathbb{T} = \tableauTrl$ be a successful tableau, with $\node{n} \in \cnodes{\mathbb{T}}$ a companion node of $\mathbb{T}$ and $\node{n}'$ the child of $\node{n}$ in $\tree{T}$. Also let $S = \textit{st}(\node{n})$. Then $(S, <:_{\node{n}}^+)$ is a support ordering for $\semfT{Z_\node{n}}{P(\node{n}')}{\val{V}_{\node{n}}}$. \end{lemma} \remove{ \begin{proofsketch} We sketch the proof. Details can be found in the appendix.
Fix successful tableau $\mathbb{T} = \tableauTrl$, with $\tree{T} = (\node{N},\node{r},p,cs)$, and let $\node{n} \in \cnodes{\mathbb{T}}$ be a companion node of $\mathbb{T}$ with $S = \textit{st}(\node{n})$. We prove the following stronger result. \begin{quote} For every $\node{m} \in D(\node{n})$ and $s \in S$ statements \ref{stmt:necessity-sketch} and \ref{stmt:support-sketch} hold. \begin{enumerate}[left=\parindent, label=S\arabic*., ref=S\arabic*] \item\label{stmt:necessity-sketch}
For all $x$ such that $x \leq:_{\node{m},\node{n}} s$,
$x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. \item\label{stmt:support-sketch}
If $\node{m} \in \cnodes{\mathbb{T}}$,
$\node{m}' = cs(\node{m})$
and
$x$ satisfies $x \leq:_{\node{m},\node{n}} s$
then
$(S_x, <:_{\node{m},x})$ is a support ordering for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$, where
$S_x = \preimg{(<:_{\node{m}}^*)}{x}$
and
${<:_{\node{m},x}} = \restrict{(<:_{\node{m}}^+)}{S_x}$. \end{enumerate} \end{quote} From this stronger result, the lemma follows.
We prove the stronger result by tree induction on $\tree{T}_{\node{n}}$, the subtree rooted at $\node{n}$ in $\tree{T}$. We must show that for all $s \in S$, \ref{stmt:necessity-sketch} and~\ref{stmt:support-sketch} hold for $\node{m}$. The proof proceeds by a case analysis on $\rho(\node{m})$. If $\rho(\node{m}) \neq \text{Un}$, $\node{m} \not\in \cnodes{\mathbb{T}}$, and \ref{stmt:support-sketch} vacuously holds for all $s \in S$, so all that needs to be proved is \ref{stmt:necessity-sketch} for all $s \in S$. So fix $s \in S$; we show the case for $\rho(\node{n}) = \dia{K}$, the other cases where $\rho(\node{n}) \neq \text{Un}$ follow a similar line of reasoning.
If $\rho(\node{n}) = \dia{K}$, $\node{m} = S' \tnxTVD \dia{K} \Phi$ for some $\Phi$, $cs(\node{m}) = \node{m}'$, and $\node{m}' = f(S') \tnxTVD \Phi$, where $f \in S' \to \states{S}$ has the property that $s'' \xrightarrow{K} f(s'')$ for all $s'' \in S'$. The induction hypothesis ensures that for all $s' \in S$, \ref{stmt:necessity} holds for $\node{m}'$; we must show that \ref{stmt:necessity} holds for $\node{m}$ and $s$. To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Note that $f(x) <_{\node{m}', \node{m}} x$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $f(x)$ satisfies $f(x) \leq:_{\node{m}',\node{n}} s$, and the induction hypothesis then ensures that $f(x) \in \semT{P(\node{m}')}{\val{V}_{\node{m}', f(x)}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $f(x) \in \semT{P(\node{m}')}{\val{V}_{\node{m},f(x)}}$, and the semantics of $\dia{K}$ then ensures that $x \in \semT{\dia{K} P(\node{m}')}{\val{V}_{\node{m},x}} = \semT{P(\node{m})}{\val{V}_{\node{m},x}}$, thereby establishing \ref{stmt:necessity}.
In case $\rho(\node{m}) = \text{Un}$, $\node{m} \in \cnodes{\mathbb{T}}$, meaning $\node{m} = X \tnxTVD U$ where $U \in \operatorname{dom}(\Delta)$, $\Delta(U) = \sigma Z.\Phi$, $cs(\node{m}) = \node{m}'$ and $\node{m}' = X \tnxTVD \Phi[Z := U]$. We must show \ref{stmt:necessity} and \ref{stmt:support} hold for all $s \in S$ for $\node{m}$. So fix $s \in S$. We consider \ref{stmt:support} first. Let $x \leq:_{\node{m},\node{n}} s$, and define $f_{\node{m},x} = \semfT{Z_\node{m}}{P(\node{m}')}{\val{V}_{\node{m},x}}$. We must show that $(S_x, <:_{\node{m},x})$ is a support ordering for $f_{\node{m},x}$. Following Definition~\ref{def:support-ordering} it suffices to prove that for every $x' \in S_x, x' \in f_{\node{m},x}(\preimg{(<:_{\node{m},x})}{x'})$. So fix $x' \in S_x$. By definition of $S_x$ this means that $x' <:_{\node{m}}^* x$. Since $x' <_{\node{m}',\node{m}} x'$, it follows that $x' \leq:_{\node{m}',\node{m}} x'$ and, due to the pseudo-transitivity Lemma~\ref{lem:pseudo-transitivity-of-support-dependency-ordering}, that $x' \leq:_{\node{m}',\node{n}} s$. From the induction hypothesis, we know that \ref{stmt:necessity} holds for $\node{m}'$ and $x'$, meaning $x' \in \semT{P(\node{m}')}{\val{V}_{\node{m}',x'}}$. To complete this part of the proof it suffices to establish that $\semT{P(\node{m}')}{\val{V}_{\node{m}',x'}} \subseteq f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'})$. We begin by noting that since $x' <_{\node{m}',\node{m}} x'$ and all occurrences of any $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$ in $P(\node{m}')$ are positive, Lemma~\ref{lem:monotonicity-of-dependency-extensions} ensures that $\semT{P(\node{m}')}{\val{V}_{\node{m}',x'}} \subseteq \semT{P(\node{m}')}{\val{V}_{\node{m},x'}}$. It therefore suffices to show that $\semT{P(\node{m}')}{\val{V}_{\node{m},x'}} \subseteq f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'})$. From the definition of $f_{\node{m},x}$ we have that \[ f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'}) = \semT{P(\node{m}')}{\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}]}. \] Because every $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$ appearing in $P(\node{m}')$ appears only positively, the fact that $\semT{P(\node{m}')}{\val{V}_{\node{m},x'}} \subseteq f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'})$ follows from the following two observations. \begin{enumerate} \item
For all $Z \in \textnormal{Var}\xspace \setminus \textnormal{Var}\xspace_{\mathbb{T}}$,
$\val{V}_{\node{m},x'}(Z) = \left(\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}] \right) (Z)$. \item
For all $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$,
$\val{V}_{\node{m},x'}(Z) \subseteq \left(\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}]\right) (Z)$. \end{enumerate}
To finish the proof we now need to show that statement~\ref{stmt:necessity} holds for companion node $\node{m}$ and $s \in S$. So fix $x$ such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. We know from the definition of $P$ that $P(\node{m}) = \sigma Z_\node{m}.P(\node{m}')$. If $\sigma = \nu$ then as statement~\ref{stmt:support} holds we know that $(S_x, <:_{\node{m},x})$ is a support ordering for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$. Therefore, $S_x$ is supported for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$, and Corollary~\ref{cor:greatest-fixpoint} ensures that $S_x \subseteq \nu \left(\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}\right)$. As $x \in S_x$ and \[\nu \left(\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}\right) = \semT{P(\node{m})}{\val{V}_{\node{m},x}}, \] the result follows. The case where $\sigma = \mu$ follows a similar line of reasoning, but uses the fact that $<:_{\node{m},x}$ is well-founded.
\qedhere \end{proofsketch} } \begin{proof} Fix successful tableau $\mathbb{T} = \tableauTrl$, with $\tree{T} = (\node{N},\node{r},p,cs)$, and let $\node{n} \in \cnodes{\mathbb{T}}$ be a companion node of $\mathbb{T}$ with $S = \textit{st}(\node{n})$. We prove the following stronger result. \begin{quote} For every $\node{m} \in D(\node{n})$ and $s \in S$ statements \ref{stmt:necessity} and \ref{stmt:support} hold. \begin{enumerate}[left=\parindent, label=S\arabic*., ref=S\arabic*] \item\label{stmt:necessity}
For all $x$ such that $x \leq:_{\node{m},\node{n}} s$,
$x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. \item\label{stmt:support}
If $\node{m} \in \cnodes{\mathbb{T}}$,
$\node{m}' = cs(\node{m})$
and
$x$ satisfies $x \leq:_{\node{m},\node{n}} s$
then
$(S_x, <:_{\node{m},x})$ is a support ordering for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$, where
$S_x = \preimg{(<:_{\node{m}}^*)}{x}$
and
${<:_{\node{m},x}} = \restrict{(<:_{\node{m}}^+)}{S_x}$. \end{enumerate} \end{quote} To see that proving \ref{stmt:necessity} and~\ref{stmt:support} for all $\node{m} \in D(\node{n})$ and $s \in S$ establishes the lemma, first note that as $\node{n} \in \cnodes{\mathbb{T}}$ and $\node{n} \in D(\node{n})$, such a proof would imply that for all $x \in S$, $(S_x, <:_{\node{n},x})$ is a support ordering for $\semfT{Z_{\node{n}}}{P(\node{n}')}{\val{V}_{\node{n},x}}$. We also note that for all $x \in S$, $\val{V}_{\node{n},x}(Z) \subseteq \val{V}_{\node{n}}(Z)$ if $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$ and $\val{V}_{\node{n},x}(Z) = \val{V}_{\node{n}}(Z)$ otherwise; these imply that for all $x \in S$ and $S' \subseteq \states{S}$, $\semfT{Z_{\node{n}}}{P(\node{n}')}{\val{V}_{\node{n},x}}(S') \subseteq \semfT{Z_{\node{n}}}{P(\node{n}')}{\val{V}_{\node{n}}}(S')$. Therefore, for all $x \in S$, $(S_x, <:_{\node{n},x})$ is also a support ordering for $\semfT{Z_{\node{n}}}{P(\node{n}')}{\val{V}_{\node{n}}}$. It is easy to see that $\bigcup_{x \in S} S_x = S$ and also that $\bigcup_{x \in S} {<:_{\node{n},x}} = {<:_{\node{n}}^+}$; Lemma~\ref{lem:unions-of-support-orderings} then guarantees that $(S, <:_{\node{n}}^+)$ is a support ordering for $\semfT{Z_{\node{n}}}{P(\node{n}')}{\val{V}_{\node{n}}}$, which is what the lemma asserts.
We prove the stronger result by tree induction on $\tree{T}_{\node{n}}$, the subtree rooted at $\node{n}$ in $\tree{T}$. (Recall that, per Definition~\ref{def:subtree}, the set of nodes in $\tree{T}_{\node{n}}$ is $D(\node{n})$.) So pick $\node{m} \in D(\node{n})$; the induction hypothesis states that for all $s \in S$, \ref{stmt:necessity} and \ref{stmt:support} hold for all $\node{m}' \in c(\node{m})$. We must show that for all $s \in S$, \ref{stmt:necessity} and~\ref{stmt:support} hold for $\node{m}$. The proof proceeds by a case analysis on $\rho(\node{m})$. We first consider the cases in which $\rho(\node{m}) \neq \text{Un}$, meaning $\node{m} \not\in \cnodes{\mathbb{T}}$. In each of these cases \ref{stmt:support} vacuously holds for all $s \in S$, so all that needs to be proved is \ref{stmt:necessity} for all $s \in S$. So fix $s \in S$; the case analysis for $\rho(\node{m}) \neq \text{Un}$ is as follows. \begin{description} \item[$\rho(\node{m}) {\perp}$.]
In this case $\rho(\node{m})$ is undefined, and $\node{m}$ must be a leaf. To prove \ref{stmt:necessity} holds for $s$ we first note that since $\mathbb{T}$ is successful it follows that $\node{m}$ is a successful leaf, meaning that either $\node{m}$ is a free leaf or a $\sigma$-leaf (successful tableaux cannot contain any diamond leaves, so this leaf type need not be considered).
There are two subcases to consider.
In the first, $\node{m}$ is a free leaf.
In this case, by definition $P(\node{m}) = \textit{fm}(\node{m})$, where $\textit{fm}(\node{m})$ is either $Z$ or $\lnot Z$ for some $Z \in \textnormal{Var}\xspace \setminus \left(\textnormal{Var}\xspace_{\mathbb{T}} \cup \operatorname{dom}(\textit{dl}(\node{m}))\right)$.
For any such $Z$ and $x \in \textit{st}(\node{m})$, Definition~\ref{def:support-extension-of-valuation} ensures that $\val{V}_{\node{m},x}(Z) = \val{V}(Z)$, whence
\[
\semT{P(\node{m})}{\val{V}_{\node{m},x}}
= \semTV{\textit{fm}(\node{m})}
= \sem{\node{m}}{}{},
\]
and since $\node{m}$ is successful, we have that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$ for all $x \in \textit{st}(\node{m})$, and thus for all $x$ such that $x \leq:_{\node{m},\node{n}} s$.
In the second subcase, $\node{m}$ is a $\sigma$-leaf, meaning it is a companion leaf for some $\node{m}_i \in \cnodes{\mathbb{T}}$. Now fix $x$ such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$.
In this case $P(\node{m}) = Z_{\node{m}_i}$, and $\semT{P(\node{m})}{\val{V}_{\node{m},x}} = \val{V}_{\node{m},x}(Z_{\node{m}_i}) = S_{\node{m},x,\node{m}_i}$. Since $\cleaves{\node{m}_i,\node{m}} = \{\node{m}\}$ in this case, $S_{\node{m},x,\node{m}_i} = \preimg{(\leq:_{\node{m},\node{m}})}{x} = \{x\}$. As $x \in \{x\}$ the desired result holds. \item[$\rho(\node{m}) = \land$.]
In this case $\node{m} = S' \tnxTVD \Phi_1 \land \Phi_2$ for some $\Phi_1$ and $\Phi_2$, $cs(\node{m}) = \node{m}'_1 \node{m}'_2$, and $\node{m}'_i = S' \tnxTVD \Phi_i$ for $i = 1,2$. The induction hypothesis ensures that for all $s' \in S$, \ref{stmt:necessity} holds for each $\node{m}'_i$; we must show that \ref{stmt:necessity} holds for $\node{m}$ and $s$.
To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Note that $x <_{\node{m}'_i, \node{m}} x$ for $i = 1,2$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $x \leq:_{\node{m}'_i,\node{n}} s$, and the induction hypothesis then ensures that $x \in \semT{P(\node{m}'_i)}{\val{V}_{\node{m}'_i, x}}$ for $i = 1,2$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $x \in \semT{P(\node{m}'_i)}{\val{V}_{\node{m},x}}$ for $i = 1,2$, and the semantics of $\land$ then ensures that $x \in \semT{P(\node{m}_1) \land P(\node{m}_2)}{\val{V}_{\node{m},x}} = \semT{P(\node{m})}{\val{V}_{\node{m},x}}$, thereby establishing \ref{stmt:necessity}. \item[$\rho(\node{m}) = \lor$.]
In this case $\node{m} = S' \tnxTVD \Phi_1 \lor \Phi_2$ for some $\Phi_1$ and $\Phi_2$, $cs(\node{m}) = \node{m}'_1 \node{m}'_2$, and $\node{m}'_i = S_i' \tnxTVD \Phi_i$ for $i = 1,2$, with $S' = S'_1 \cup S'_2$. The induction hypothesis ensures that for all $s' \in S$, \ref{stmt:necessity} holds for each $\node{m}'_i$; we must show that \ref{stmt:necessity} holds for $\node{m}$ and $s$.
To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Note that $x <_{\node{m}'_i, \node{m}} x$ for at least one of $i = 1$ or $i = 2$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $x \leq:_{\node{m}'_i,\node{n}} s$, and the induction hypothesis then ensures that $x \in \semT{P(\node{m}'_i)}{\val{V}_{\node{m}'_i, x}}$ for at least one of $i = 1$ or $i = 2$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $x \in \semT{P(\node{m}'_i)}{\val{V}_{\node{m},x}}$ for at least one of $i = 1$ or $i = 2$, and the semantics of $\lor$ then ensures that $x \in \semT{P(\node{m}_1) \lor P(\node{m}_2)}{\val{V}_{\node{m},x}} = \semT{P(\node{m})}{\val{V}_{\node{m},x}}$, thereby establishing \ref{stmt:necessity}. \item[$\rho(\node{m}) = [K{]}$.]
In this case $\node{m} = S' \tnxTVD [K] \Phi$ for some $\Phi$, $cs(\node{m}) = \node{m}'$, and $\node{m}' = S'' \tnxTVD \Phi$, where $S'' = \{s'' \in \states{S} \mid \exists s' \in S'. s' \xrightarrow{K} s'' \}$. The induction hypothesis ensures that for all $s' \in S$, \ref{stmt:necessity} holds for $\node{m}'$; we must show that \ref{stmt:necessity} holds for $\node{m}$ and $s$.
To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Note that $x' <_{\node{m}', \node{m}} x$ for each $x'$ such that $x \xrightarrow{K} x'$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that each such $x'$ satisfies $x' \leq:_{\node{m}',\node{n}} s$, and the induction hypothesis then ensures that each such $x' \in \semT{P(\node{m}')}{\val{V}_{\node{m}', x'}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that each such $x' \in \semT{P(\node{m}')}{\val{V}_{\node{m},x}}$, and the semantics of $[K]$ then ensures that $x \in \semT{[K] P(\node{m}')}{\val{V}_{\node{m},x}} = \semT{P(\node{m})}{\val{V}_{\node{m},x}}$, thereby establishing \ref{stmt:necessity}. \item[$\rho(\node{m}) = (\dia{K},f)$.]
In this case $\node{m} = S' \tnxTVD \dia{K} \Phi$ for some $\Phi$, $cs(\node{m}) = \node{m}'$, $\node{m}' = f(S') \tnxTVD \Phi$, and $f \in S' \to \states{S}$ has the property that $s'' \xrightarrow{K} f(s'')$ for all $s'' \in S'$.
The induction hypothesis ensures that for all $s' \in S$, \ref{stmt:necessity} holds for $\node{m}'$; we must show that \ref{stmt:necessity} holds for $\node{m}$ and $s$.
To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Note that $f(x) <_{\node{m}', \node{m}} x$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $f(x)$ satisfies $f(x) \leq:_{\node{m}',\node{n}} s$, and the induction hypothesis then ensures that $f(x) \in \semT{P(\node{m}')}{\val{V}_{\node{m}', f(x)}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $f(x) \in \semT{P(\node{m}')}{\val{V}_{\node{m},f(x)}}$, and the semantics of $\dia{K}$ then ensures that $x \in \semT{\dia{K} P(\node{m}')}{\val{V}_{\node{m},x}} = \semT{P(\node{m})}{\val{V}_{\node{m},x}}$, thereby establishing \ref{stmt:necessity}. \item[$\rho(\node{m}) = \sigma Z$.]
In this case $\node{m} = S' \tnxTVD \sigma Z.\Phi$ for some $\Phi$, $cs(\node{m}) = \node{m}'$, and $\node{m}' = S' \tnxTV{\Delta'} U$, where $\Delta' = \Delta \cdot (U = \sigma Z.\Phi)$.
The induction hypothesis ensures that for all $s' \in S$, \ref{stmt:necessity} holds for $\node{m}'$; we must show that \ref{stmt:necessity} holds for $\node{m}$ and $s$.
To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Note that $x <_{\node{m}', \node{m}} x$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $x$ satisfies $x \leq:_{\node{m}',\node{n}} s$, and the induction hypothesis then ensures that $x \in \semT{P(\node{m}')}{\val{V}_{\node{m}', x}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $x \in \semT{P(\node{m}')}{\val{V}_{\node{m},x}}$, and the semantics of $U$ and $\sigma Z.\Phi$, and Lemma~\ref{lem:companion-node-formulas-and-semantics}, then ensure that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$, thereby establishing \ref{stmt:necessity}. \item[$\rho(\node{m}) = \text{Thin}$.]
In this case $\node{m} = S' \tnxTVD \Phi$ for some $\Phi$, $cs(\node{m}) = \node{m}'$, and $\node{m}' = S'' \tnxTVD \Phi$, where $S' \subseteq S''$.
The induction hypothesis ensures that for all $s' \in S$, \ref{stmt:necessity} holds for $\node{m}'$; we must show that \ref{stmt:necessity} holds for $\node{m}$ and $s$.
To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Note that $x <_{\node{m}', \node{m}} x$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $x$ satisfies $x \leq:_{\node{m}',\node{n}} s$, and the induction hypothesis then ensures that $x \in \semT{P(\node{m}')}{\val{V}_{\node{m}', x}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $x \in \semT{P(\node{m}')}{\val{V}_{\node{m},x}}$, and as $P(\node{m}) = P(\node{m}')$ in this case, \ref{stmt:necessity} holds. \end{description}
The final case to be considered is $\rho(\node{m}) = \text{Un}$; in this case, $\node{m} \in \cnodes{\mathbb{T}}$, meaning $\node{m} = X \tnxTVD U$ where $U \in \operatorname{dom}(\Delta)$, $\Delta(U) = \sigma Z.\Phi$, $cs(\node{m}) = \node{m}'$ and $\node{m}' = X \tnxTVD \Phi[Z := U]$. The induction hypothesis guarantees that \ref{stmt:necessity} and \ref{stmt:support} hold for all $s \in S$ for $\node{m}'$; we must show \ref{stmt:necessity} and \ref{stmt:support} hold for all $s \in S$ for $\node{m}$. So fix $s \in S$. We consider \ref{stmt:support} first. Let $x \leq:_{\node{m},\node{n}} s$, and define $f_{\node{m},x} = \semfT{Z_\node{m}}{P(\node{m}')}{\val{V}_{\node{m},x}}$. We must show that $(S_x, <:_{\node{m},x})$ is a support ordering for $f_{\node{m},x}$. Following Definition~\ref{def:support-ordering} it suffices to prove that for every $x' \in S_x, x' \in f_{\node{m},x}(\preimg{(<:_{\node{m},x})}{x'})$. So fix $x' \in S_x$. By definition of $S_x$ this means that $x' <:_{\node{m}}^* x$. Since $x' <_{\node{m}',\node{m}} x'$, it follows that $x' \leq:_{\node{m}',\node{m}} x'$ and, due to the pseudo-transitivity Lemma~\ref{lem:pseudo-transitivity-of-support-dependency-ordering}, that $x' \leq:_{\node{m}',\node{n}} s$. From the induction hypothesis, we know that \ref{stmt:necessity} holds for $\node{m}'$ and $x'$, meaning $x' \in \semT{P(\node{m}')}{\val{V}_{\node{m}',x'}}$. To complete this part of the proof it suffices to establish that $\semT{P(\node{m}')}{\val{V}_{\node{m}',x'}} \subseteq f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'})$. We begin by noting that since $x' <_{\node{m}',\node{m}} x'$ and all occurrences of any $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$ in $P(\node{m}')$ are positive, Lemma~\ref{lem:monotonicity-of-dependency-extensions} ensures that $\semT{P(\node{m}')}{\val{V}_{\node{m}',x'}} \subseteq \semT{P(\node{m}')}{\val{V}_{\node{m},x'}}$. It therefore suffices to show that $\semT{P(\node{m}')}{\val{V}_{\node{m},x'}} \subseteq f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'})$. From the definition of $f_{\node{m},x}$ we have that \[ f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'}) = \semT{P(\node{m}')}{\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}]}. \] Because every $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$ appearing in $P(\node{m}')$ appears only positively, to establish that $\semT{P(\node{m}')}{\val{V}_{\node{m},x'}} \subseteq f_{\node{m},x}(\preimg{({<:_{\node{m},x}})}{x'})$ it suffices to show the following. \begin{enumerate} \item\label{prop:eq}
For all $Z \in \textnormal{Var}\xspace \setminus \textnormal{Var}\xspace_{\mathbb{T}}$,
$\val{V}_{\node{m},x'}(Z) = \left(\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}] \right) (Z)$. \item\label{prop:subset}
For all $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$,
$\val{V}_{\node{m},x'}(Z) \subseteq \left(\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}]\right) (Z)$. \end{enumerate} Property~\ref{prop:eq} follows immediately from the fact that for all $Z \in \textnormal{Var}\xspace \setminus \textnormal{Var}\xspace_{\mathbb{T}}$, \[ \val{V}(Z) = \val{V}_{\node{m},x'}(Z) = \val{V}_{\node{m},x}(Z) = \left(\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}]\right) (Z). \] To establish Property~\ref{prop:subset}, fix $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$. There are two sub-cases to consider. In the first, $Z = Z_{\node{m}}$. In this case \[ \left(\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}]\right) (Z) = \preimg{(<:_{\node{m},x})}{x'} = \preimg{(<:_\node{m}^+)}{x'}; \] we must show that $\val{V}_{\node{m},x'} (Z) \subseteq \preimg{(<:_\node{m}^+)}{x'}$, i.e.\/ that $x'' <:_\node{m}^+ x'$ for all $x'' \in \val{V}_{\node{m},x'} (Z)$. So fix $x'' \in \val{V}_{\node{m},x'} (Z)$. From Definition~\ref{def:support-extension-of-valuation}, \[ \val{V}_{\node{m},x'} (Z) = \bigcup_{\node{m}'' \in \cleaves{\mathbb{T}}(\node{m})} \preimg{(\leq:_{\node{m}'',\node{m}})}{x'}. \] Thus, there must exist leaf $\node{m}'' \in \cleaves{\mathbb{T}}(\node{m})$ such that $x'' \leq:_{\node{m}'',\node{m}} x'$. From the definition of $\leq:_{\node{m}'',\node{m}}$ it follows that there is $y$ such that $x'' <:_{\node{m}'',\node{m}} y$ and $y <:_{\node{m}}^* x'$; since $\node{m}''$ is a companion leaf of $\node{m}$ we also have that $x'' <:_{\node{m}} y$, whence $x'' <:_{\node{m}}^+ x'$.
In the second subcase, $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$ but $Z \neq Z_{\node{m}}$. In this case, we know that $\left(\val{V}_{\node{m},x}[Z_\node{m} := \preimg{(<:_{\node{m},x})}{x'}]\right) (Z) = \val{V}_{\node{m},x} (Z)$; we must therefore show that $\val{V}_{\node{m},x'}(Z) \subseteq \val{V}_{\node{m},x}(Z)$. Since $Z \in \textnormal{Var}\xspace_{\mathbb{T}}$ it follows that there is a companion node $\node{k} \in \cnodes{\mathbb{T}}$ such that $Z = Z_{\node{k}}$ and the following hold. \begin{align*} \val{V}_{\node{m},x'} (Z) &= \bigcup_{\node{k}' \in \cleaves{\node{k},\node{m}}} \preimg{(\leq:_{\node{k}',\node{m}})}{x'} \\ \val{V}_{\node{m},x} (Z) &= \bigcup_{\node{k}' \in \cleaves{\node{k},\node{m}}} \preimg{(\leq:_{\node{k}',\node{m}})}{x} \end{align*} Now assume $x'' \in \val{V}_{\node{m},x'} (Z)$; we must show $x'' \in \val{V}_{\node{m},x} (Z)$. Since $x'' \in \val{V}_{\node{m},x'} (Z)$ there must exist leaf node $\node{k}' \in \cleaves{\node{k},\node{m}}$ such that $x'' \leq:_{\node{k}', \node{m}} x'$; if we can show that $x'' \leq:_{\node{k}',\node{m}} x$ then we will have established that $x'' \in \val{V}_{\node{m},x} (Z)$. Recall that $x' <:_{\node{m}}^* x$; this implies that $x' \leq:_{\node{m},\node{m}} x$, and the pseudo-transitivity of $\leq:_{\node{k}',\node{m}}$ then ensures that $x'' \leq:_{\node{k}',\node{m}} x$.
To finish the proof we now need to show that statement~\ref{stmt:necessity} holds for companion node $\node{m}$ and $s \in S$. So fix $x$ such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. We know from the definition of $P$ that $P(\node{m}) = \sigma Z_\node{m}.P(\node{m}')$. If $\sigma = \nu$ then as statement~\ref{stmt:support} holds we know that $(S_x, <:_{\node{m},x})$ is a support ordering for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$. Therefore, $S_x$ is supported for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$, and Corollary~\ref{cor:greatest-fixpoint} ensures that $S_x \subseteq \nu \left(\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}\right)$. As $x \in S_x$ and \[\nu \left(\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}\right) = \semT{P(\node{m})}{\val{V}_{\node{m},x}}, \] the result follows. Now assume that $\sigma = \mu$. In this case, since $<:_{\node{m}}$ is well-founded it follows from Lemma~\ref{lem:transitive-closure-well-founded} that $<:_{\node{m}}^+$ is as well, as is $<:_{\node{m},x}$. Therefore $S_x$ is well-supported for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$, and Corollary~\ref{cor:least-fixpoint} guarantees that $S_x \subseteq \mu \left(\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}\right)$. As $x \in S_x$ and $\mu \left(\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}\right) = \semT{P(\node{m})}{\val{V}_{\node{m},x}}$, the result follows. \qedhere \end{proof}
\noindent The next corollary specializes the previous lemma to the case of \emph{top-level companion nodes} in successful tableau $\mathbb{T}$. A companion node $\node{n} \in \cnodes{\mathbb{T}}$ is top-level iff $A_s(\node{n}) \cap \cnodes{\mathbb{T}} = \emptyset$; recalling that $A_s(\node{n})$ is the set of strict ancestors of $\node{n}$ in $\mathbb{T}$, a companion node is top-level iff it has no strict ancestors that are companion nodes. It is straightforward to see that if $\node{n}$ is a top-level companion node, then $\node{n} = S \tnxTV{(U = \sigma Z.\Phi)} U$ for some $\sigma$, $Z$, $\Phi$ and $U$; note that the definition list of $\node{n}$ contains only the single element $(U = \sigma Z.\Phi)$. We have the following.
\begin{corollary}[Support orderings for top-level companion nodes] \label{cor:support-orderings-for-top-level-companion-nodes} Let $\mathbb{T} = \tableauTrl$ be a successful tableau, with $\node{n} \in \cnodes{\mathbb{T}}$ a top-level companion of $\mathbb{T}$ and $\node{n}'$ the child of $\node{n}$ in $\tree{T}$. Also let $S = \textit{st}(\node{n})$. Then $(S, <:_{\node{n}}^+)$ is a support ordering for $\semfTV{Z_{\node{n}}}{P(\node{n}')}$. \end{corollary}
\begin{proof} Follows from the fact that since $\node{n}$ is top-level, the only variable in $\textnormal{Var}\xspace_{\mathbb{T}}$ that can appear free in $P(\node{n}')$ is $Z_\node{n}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} thus guarantees that for any $S'$, $\semfT{Z_\node{n}}{P(\node{n}')}{\val{V}_{\node{n}}}(S') = \semfTV{Z_\node{n}}{P(\node{n}')}(S')$. Lemma~\ref{lem:support-ordering-for-companion-nodes} then establishes the corollary. \end{proof}
\subsection{Soundness}\label{subsec:soundness}
We now prove that our proof system is sound by establishing that the root sequent of every successful tableau is valid.
\begin{theorem}[Soundness of mu-calculus proof system]\label{thm:soundness} Fix LTS $(\states{S},\to)$ of sort $\Sigma$ and valuation $\val{V}$, and let $\mathbb{T} = \tableauTrl$ be a successful tableau for sequent $\seq{s} \in \Seq{\mathcal{T}}{\textnormal{Var}\xspace}$, where $\textit{dl}(\seq{s}) = \varepsilon$. Then $\seq{s}$ is valid. \end{theorem} \begin{proof} Let $\tree{T} = (\node{N}, \node{r}, p, cs)$ be the tree component of $\mathbb{T}$, and define $\node{L} \subseteq \node{N}$ as follows. \[ \node{L} = \{ \node{n} \in \node{N} \mid \textit{dl}(\node{n}) = \varepsilon \land \rho(\node{n}) = \sigma Z\} \] Now consider the tree prefix $\tpre{\tree{T}}{\node{L}}$ of $\tree{T}$ (cf.\/ Definition~\ref{def:tree-prefix}). It can be seen that $\tpre{\tree{T}}{\node{L}} = (\node{N'},\node{r},p',cs')$ is such that $\node{N}'$ contains precisely the nodes of $\tree{T}$ for which $\textit{dl}(\node{n}) = \varepsilon$. Moreover, each leaf $\node{n}$ of $\tpre{\tree{T}}{\node{L}}$ is either a leaf of $\tree{T}$ or has the property that $\node{n} = S \tnxTV{\varepsilon} \sigma Z.\Phi$ for some $S$ and $\Phi$ and that the child $\node{n}'$ of $\node{n}$ in $\tree{T}$ is such that $\node{n}' = S \tnxTV{(U = \sigma Z.\Phi)} U$ is a top-level companion node. We will show that each leaf $\node{n}$ of $\tpre{\tree{T}}{\node{L}}$ is valid; this fact, and Lemma~\ref{lem:local-soundness}, can be used as the basis for a simple inductive argument on $\tpre{\tree{T}}{\node{L}}$ to establish that every node in $\tpre{\tree{T}}{\node{L}}$ is valid, including root node $\node{r}$, whose sequent label is $\seq{s}$. The result follows from the definition of node validity, which says that a node in a tableau is valid iff its sequent label is valid.
So fix leaf $\node{n}$ in $\tpre{\tree{T}}{\node{L}}$. There are two cases to consider. In the first, $\node{n}$ is also a leaf in $\tree{T}$. In this case, since $\mathbb{T}$ is successful, $\node{n}$ is successful, and therefore valid. In the second case, $\node{n} = S \tnxTV{\varepsilon} \sigma Z.\Phi$ and has a single child $\node{n}' = S \tnxTV{(U = \sigma Z.\Phi)} U$ that is a top-level companion node. Let $\node{n}''$ be the child of $\node{n}'$. Corollary~\ref{cor:support-orderings-for-top-level-companion-nodes} guarantees that $(S, <:_{\node{n}'}^+)$ is a support ordering for $\semfTV{Z_{\node{n}'}}{P(\node{n}'')}$. We will now show that $S \subseteq \semTV{P(\node{n}')}$. There are two sub-cases to consider. In the first, $\sigma = \nu$. It follows from the definitions that $S$ is supported for $\semfTV{Z_{\node{n}'}}{P(\node{n}'')}$ and thus by Corollary~\ref{cor:greatest-fixpoint}, $S \subseteq \nu (\semfTV{Z_{\node{n}'}}{P(\node{n}'')}) = \semTV{P(\node{n}')}$. In the second, $\sigma = \mu$. Since $\mathbb{T}$ is successful $<:_{\node{n}'}$ is well-founded, meaning that $<:_{\node{n}'}^+$ is also well-founded. Thus $S$ is well-supported for $\semfTV{Z_{\node{n}'}}{P(\node{n}'')}$ and thus by Corollary~\ref{cor:least-fixpoint}, $S \subseteq \mu (\semfTV{Z_{\node{n}'}}{P(\node{n}'')}) = \semTV{P(\node{n}')}$. Since $P(\node{n}) = P(\node{n}')$, $S \subseteq \semTV{P(\node{n})}$. Also note that since $\node{n}'$ is a top-level companion node, $P(\node{n})$ can contain no free occurrences of any $Z' \in \textnormal{Var}\xspace_{\mathbb{T}}$, meaning that for any $\val{V}'$ consistent with $\mathbb{T}$, $\semTV{P(\node{n})} = \semT{P(\node{n})}{\val{V}'}$. Consequently, Lemma~\ref{lem:companion-node-formulas-and-semantics} implies that $\semTV{P(\node{n})} = \sem{\node{n}}{}{}$, and thus $S \subseteq \sem{\node{n}}{}{}$, whence $\node{n}$ is valid. \qedhere \end{proof}
\section{Completeness}\label{sec:Completeness}
This section now establishes the completeness of our proof system. Call a tableau $\tableauTrl$, where $\tree{T} = (\node{N}, \node{r}, p, cs)$, \emph{successful for} sequent $S \tnxTVD \Phi$ iff it is successful and $\node{r} = S \tnxTVD \Phi$. We show that for any $\mathcal{T}$, $\val{V}$, $S$ and $\Phi$, if $S \tnxTV{\varepsilon} \Phi$ is valid then there is a successful tableau for $S \tnxTV{\varepsilon} \Phi$.
The completeness results in this section rely heavily on tableau manipulations; in particular, several proofs define constructions for merging multiple successful tableaux into a single successful tableau. These constructions in turn rely on variations of well-founded induction over support orderings for the semantic functions used to give meaning to fixpoint formulas, and become subtle in the setting of mutually recursive fixpoints. To clarify and simplify these arguments, the first subsection below introduces relevant notions from general fixpoint theory in the setting of mutual recursion. These results are then used later in this section to define the tableau constructions we need to establish completeness.
\subsection{Mutual recursion and fixpoints} \label{subsec:mutual-recursion}
Mu-calculus formulas of form $\sigma Z.(\cdots \sigma' Z'.( \cdots Z \cdots ) \cdots)$ are said to be \emph{mutually recursive}, because the semantics of the outer fixpoint formula, $\sigma Z. \cdots$, depends on the semantics of the inner fixpoint formula, $\sigma' Z'. \cdots$, which in turn (because $Z$ is free in its body) depends on the semantics of the outer fixpoint. If $\sigma \neq \sigma'$ then these mutually recursive fixpoints are also said to be \emph{alternating}. Alternating fixpoints present challenges when reasoning about completeness; in support of the constructions to come, in this section we develop a theory of mutually recursive fixpoints in the general setting of recursive functions over complete lattices. In particular, we show how to define mutually recursive fixpoints in terms of binary functions, and we define a property of binary relations, which we call \emph{quotient well-foundedness}, that can be applied to support orderings for mutually recursive fixpoints in order to support a form of well-founded induction.
\paragraph{Mutual recursion.}
Let $S$ be a set, with $(2^S, \subseteq, \bigcup, \bigcap)$ the subset lattice over $S$ (cf.\/ Definition~\ref{def:subset-lattice}). To define mutually recursive fixpoints in this setting we use \emph{binary} monotonic functions over $2^S$ defined as follows.
\begin{definition}[Monotonic binary functions] Binary function $f \in 2^S \times 2^S \rightarrow 2^S$ is \emph{monotonic} iff for all $X_1, X_2, Y_1, Y_2 \in 2^S$, if $X_1 \subseteq X_2$ and $Y_1 \subseteq Y_2$ then $f(X_1, Y_1) \subseteq f(X_2, Y_2)$. \end{definition}
\noindent Binary functions are monotonic when they are monotonic in each argument individually.
We now define the following operations on binary functions towards our goal of defining mutually recursive fixpoints.
\begin{definition}[Binary-function operations]\label{def:binary-function-operations} Let $f \in 2^S \times 2^S \rightarrow 2^S$ be monotonic. \begin{enumerate}
\item\label{it:fix-parm}
Let $X, Y \subseteq S$. Then functions $f_{(X, \cdot)}, f_{(\cdot, Y)} \in 2^S \rightarrow 2^S$ are defined by:
\[
f_{(X, \cdot)}(Y) = f_{(\cdot, Y)}(X) = f(X,Y)
\]
Since $f_{(X, \cdot)}$ and $f_{(\cdot, Y)}$ are monotonic when $f$ is, we may further define $f_{(\cdot, \sigma)}, f_{(\sigma, \cdot)} \in 2^S \rightarrow 2^S$, where $\sigma \in \{\mu, \nu\}$, as follows.
\begin{align*}
f_{(\cdot,\sigma)}(X) &= \sigma f_{(X, \cdot)}\\
f_{(\sigma,\cdot)}(Y) &= \sigma f_{(\cdot, Y)}
\end{align*}
\item\label{it:sigma-composition}
Suppose further that $g \in 2^S \times 2^S \rightarrow 2^S$ is binary and monotonic over $2^S$ and that $\sigma \in \{\mu,\nu\}$. Then function $(f [\sigma] g) \in 2^S \rightarrow 2^S$ is defined as follows.
\[
(f [\sigma] g)(X) = f(X, g_{(\cdot, \sigma)}(X))
\] \end{enumerate} \end{definition}
To understand the above definitions, first note that if $f$ is binary and monotonic then $f_{(X,\cdot)} \in 2^S \to 2^S$ is the unary function obtained by holding the first argument of $f$ fixed at $X$, leaving only the second argument to vary. Similarly, $f_{(\cdot,Y)} \in 2^S \to 2^S$ is the unary function obtained by holding the second argument of $f$ fixed at $Y$. The monotonicity of $f$ guarantees the monotonicity of $f_{(X, \cdot)}$ and $f_{(\cdot, Y)}$, and thus fixpoints $\sigma f_{(X,\cdot)}$ and $\sigma f_{(\cdot,Y)}$ are well defined for all $X$ and $Y$ and $\sigma \in \{\mu,\nu\}$. This fact ensures that unary functions $f_{(\cdot, \sigma)}$ and $f_{(\sigma, \cdot)}$ are well-defined; in the first case, given argument $X$, $f_{(\cdot, \sigma)}(X)$ returns the result of computing the $\sigma$-fixpoint of $f$ when the first argument of $f$ is held at $X$. The second case is similar. It is straightforward to establish that $f_{(\cdot,\sigma)}$ and $f_{(\sigma, \cdot)}$ are also monotonic for all $\sigma \in \{\mu,\nu\}$. Finally, for each $\sigma \in \{\mu,\nu\}$ the operation $[\sigma]$ defines a composition operation that converts binary monotonic functions $f$ and $g$ into a unary monotonic function with the following behavior. Given input $X \subseteq S$ the composition function applies $f$ to $X$ and the result of computing the $\sigma$-fixpoint of $g$ with its first argument held at $X$. To understand the importance of this function, consider the following notional pair of mutually recursive equations, where $f$ and $g$ are binary monotonic functions. \begin{align*} X &\stackrel{\sigma}{=} f(X,Y)\\ Y &\stackrel{\sigma'}{=} g(X,Y) \end{align*} Here $\sigma, \sigma' \in \{\mu,\nu\}$; the intention of these equations is to define $X$ and $Y$ as the mutually recursive $\sigma$ and $\sigma'$ fixpoints of $f$ and $g$, with the first equation dominating the second one. In the usual definitions of such equation systems, $X$ is defined to be $\sigma (f [\sigma'] g)$, i.e. the $\sigma$-fixpoint of the function $f [\sigma'] g$, while $Y$ is taken to be $g_{(\cdot, \sigma')}(X)$; see, e.g.,~\cite{Mad1997}.
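As a small illustration (a purely illustrative example of ours, not part of the formal development), take any non-empty $S$ and consider the monotonic binary functions $f(X,Y) = Y$ and $g(X,Y) = X \cup Y$. The fixpoints of $g_{(X,\cdot)}$, i.e.\/ of the map $Y \mapsto X \cup Y$, are exactly the sets $Y$ with $X \subseteq Y$, so $g_{(\cdot,\mu)}(X) = X$ and $g_{(\cdot,\nu)}(X) = S$, whence
\[
(f [\mu] g)(X) = f(X,X) = X
\quad\text{and}\quad
(f [\nu] g)(X) = f(X,S) = S.
\]
Consequently $\mu (f [\mu] g) = \emptyset$ while $\mu (f [\nu] g) = S$: the type of the inner fixpoint can change the outer fixpoint, which is precisely why alternating fixpoints require care.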
The next lemma highlights the role that the $f [\sigma] g$ construct plays in the semantics of the mu-calculus formulas that involve nested fixpoints. The statement relies on the notion of a \emph{maximal fixpoint subformula}. Briefly, if $\Phi$ is a formula then $\sigma Z.\Gamma$ is a maximal fixpoint subformula of $\Phi$ iff it is a subformula of $\Phi$ and is not a proper subformula of another fixpoint subformula of $\Phi$.
\begin{lemma}[Nested fixpoint semantics]\label{lem:nested-fixpoint-semantics} Let $\sigma Z.\Phi \in \muforms^{\Sigma}_{\textnormal{Var}\xspace}$ be a formula, let $\sigma' Z'.\Gamma$ be a maximal fixpoint subformula of $\Phi$, and let $\Phi'$ and $W \in \textnormal{Var}\xspace$ be such that $W$ is fresh and $\Phi = \Phi'[W:=\sigma'Z'.\Gamma]$. Then $\semfTV{Z}{\Phi} = f [\sigma'] g$, where $f(X,Y) = \semT{\Phi'}{\val{V}[Z, W := X,Y]}$ and $g(X,Y) = \semT{\Gamma}{\val{V}[Z, Z' := X, Y]}$. \end{lemma} \remove{ \begin{proofsketch} Follows from Lemma~\ref{lem:substitution} and the definition of $f [\sigma'] g$. The detailed proof is included in the appendix. \qedhere \end{proofsketch} } \begin{proof} Fix $\sigma Z.\Phi$, $\sigma'Z'.\Gamma$, $W$, $f$ and $g$ as stated. We must prove that for all $X$, $\semfTV{Z}{\Phi}(X) = (f [\sigma'] g)(X)$. We reason as follows. \begin{align*} \semfTV{Z}{\Phi}(X)
&= \semT{\Phi}{\val{V}[Z := X]}
&& \text{Definition of $\semfTV{Z}{\Phi}$} \\
&= \semT{(\Phi'[W := \sigma' Z'.\Gamma])}{\val{V}[Z := X]}
&& \text{$\Phi = \Phi'[W:=\sigma' Z'.\Gamma]$} \\
&= \semT{\Phi'}{\val{V}[Z, W := X, \semT{\sigma'Z'.\Gamma}{\val{V}[Z := X]}]}
&& \text{Lemma~\ref{lem:substitution}, $W$ fresh} \\
&= \semT{\Phi'}{\val{V}[Z,W := X, g_{(\cdot, \sigma')}(X)]}
&& \text{$g_{(\cdot,\sigma')}(X) = \semT{\sigma'Z'.\Gamma}{\val{V}[Z := X]}$} \\
&= f(X, g_{(\cdot, \sigma')}(X))
&& \text{Definition of $f$} \\
&= (f [\sigma'] g)(X)
&& \text{Definition of $f [\sigma'] g$} \end{align*} \qedhere \end{proof}
\noindent Note this result implies that $\semTV{\sigma Z.\Phi} = \sigma (f [\sigma'] g)$, where $\sigma Z . \Phi$, $f$ and $g$ are defined as in the lemma.
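As a concrete instance (included only as an illustration), consider $\sigma Z.\Phi$ with $\Phi = \dia{K} Z \lor \nu Z'.(Z \land [K] Z')$. Here $\sigma' Z'.\Gamma = \nu Z'.(Z \land [K] Z')$ is a maximal fixpoint subformula of $\Phi$, and choosing a fresh $W$ with $\Phi' = \dia{K} Z \lor W$ gives $\Phi = \Phi'[W := \sigma' Z'.\Gamma]$. Lemma~\ref{lem:nested-fixpoint-semantics} then yields $\semfTV{Z}{\Phi} = f [\nu] g$, where $f(X,Y) = \semT{\dia{K} Z \lor W}{\val{V}[Z,W := X,Y]}$ and $g(X,Y) = \semT{Z \land [K] Z'}{\val{V}[Z,Z' := X,Y]}$, and hence $\semTV{\sigma Z.\Phi} = \sigma (f [\nu] g)$.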
\paragraph{Quotient well-foundedness and well-orderings.} Our goal in this part of the paper is to characterize a support ordering for $g$ in terms of a given support ordering for $f [\sigma] g$. For unary functions, support orderings may be either well-founded or not, and this property is sufficient to characterize both the greatest and least fixpoints of these functions. For function $f [\sigma] g$, where $f$ and $g$ are binary and have mutually recursive fixpoints, an intermediate notion, which we call \emph{quotient well-foundedness}, is important, especially when the mutually recursive fixpoints are of different types (i.e.\/ one is least while the other is greatest).
\label{subsec:qwf-and-wo} \begin{definition}[Quotient well-founded (qwf) / well-ordering (qwo)]\label{def:qwf} Let $S$ be a set and $R \subseteq S \times S$ a binary relation over $S$, and let $(Q_R, \sqsubseteq)$ be the quotient of $R$ (cf.\/ Definition~\ref{def:relation-quotient}), with ${\sqsubset} = {\sqsubseteq^-}$ the irreflexive core of $\sqsubseteq$. \begin{enumerate} \item\label{subdef:qwf} $R$ is \emph{quotient well-founded} (qwf) iff $\sqsubset$ is well-founded over $Q_R$. \item\label{subdef:qwo} $R$ is a \emph{quotient well-ordering} (qwo) iff $\sqsubset$ is a well-ordering over $Q_R$. \end{enumerate} \end{definition}
That is, $R$ is quotient well-founded iff the irreflexive core of the partial order induced by $R$ over its equivalence classes is well-founded. Note that $R$ can be qwf without being well-founded; when this is the case the non-well-foundedness of $R$ can be seen as due solely to non-well-foundedness within its equivalence classes. It is also easy to see that if $R$ is well-founded then it is qwf as well; in this case each $Q \in Q_R$ has form $\{s\}$ for some $s \in S$. Also note that the universal relation $U_S = S \times S$ over $S$ is trivially qwf, as its quotient has one equivalence class, namely, $S$.
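For a small example (ours, relying only on the informal reading of the quotient just given, namely that elements reachable from each other via $R^*$ are identified), let $S = \{a,b,c\}$ and $R = \{(a,b),(b,a),(a,c)\}$. Then $R$ is not well-founded, since $a \inr{R} b$ and $b \inr{R} a$ yield an infinite descending chain, yet its quotient consists of just the two classes $\{a,b\}$ and $\{c\}$, ordered by $\{a,b\} \sqsubset \{c\}$; this ordering is trivially well-founded, and so $R$ is qwf.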
It turns out that if a relation is total and quotient well-founded, then it is also a quotient well-ordering.
\begin{lemma}[Total qwf relations are qwos]\label{lem:total-qwo} Suppose that $R \subseteq S \times S$ is total and qwf. Then $R$ is a qwo. \end{lemma} \begin{proof} Let $R \subseteq S \times S$ be total and qwf, with $(Q_R, \sqsubseteq)$ the quotient of $R$. We must show that ${\sqsubset} = {\sqsubseteq^-}$ is a well-ordering, i.e.\/ is well-founded and total, over $Q_R$. Well-foundedness of $\sqsubset$ is immediate from the fact that $R$ is qwf. We now must show that $\sqsubset$ is total, i.e.\/ is irreflexive and transitive and has the property that for any $Q_1, Q_2 \in Q_R$ such that $Q_1 \neq Q_2$, either $Q_1 \sqsubset Q_2$ or $Q_2 \sqsubset Q_1$. Irreflexivity and transitivity are immediate from the fact that $\sqsubset = \sqsubseteq^-$ is the irreflexive core of partial order $\sqsubseteq$. Now suppose $Q_1, Q_2 \in Q_R$ are such that $Q_1 \neq Q_2$; we must show that either $Q_1 \sqsubset Q_2$ or $Q_2 \sqsubset Q_1$. From the definition of $Q_R$ it follows that there are $s_1, s_2 \in S$ such that $Q_1 = [s_1]_R$ and $Q_2 = [s_2]_R$. Moreover, since $Q_1 \neq Q_2$ it must be that $s_1 \not\sim_R s_2$, and since $R$ is total we have that either $s_1 \inr{R} s_2$ and $s_2 \not\inr{R} s_1$, whence $Q_1 = [s_1]_R \sqsubset [s_2]_R = Q_2$, or $s_2 \inr{R} s_1$ and $s_1 \not\inr{R} s_2$, whence $Q_2 \sqsubset Q_1$. As $\sqsubset$ is a well-ordering on $Q_R$, $R$ is by definition a qwo. \qedhere \end{proof}
The next result establishes the existence of so-called \emph{pseudo-minimum} elements in subsets drawn from qwos.
\begin{definition}[Pseudo-minimum elements]\label{def:minimal} Let $R \subseteq S \times S$ be a binary relation over $S$, and let $X \subseteq S$. Then $x \in X$ is \emph{$R$-pseudo-minimum for $X$} iff $x$ is an $R$-lower bound for $X$. \end{definition}
\noindent An element $x \in X$ is an $R$-pseudo-minimum for $X$ iff it is an $R$-lower bound for $X$ that itself belongs to $X$. This does not imply that $x$ is an $R$-minimum, or even $R$-minimal: even though $x$ is an $R$-pseudo-minimum, there may exist $x' \in X$ such that $x' \inr{R} x$. In this case it must hold that $x \sim_R x'$, however.
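To illustrate (again with a purely illustrative example of ours), let $S = \{a,b,c\}$, $X = S$, and $R = \{(a,a),(a,b),(b,a),(a,c)\}$. Then $a \inr{R} x'$ for every $x' \in X$, so $a$ is an $R$-lower bound for $X$ and hence an $R$-pseudo-minimum for $X$; it is not $R$-minimal, however, since $b \inr{R} a$, and in accordance with the remark above we indeed have $a \sim_R b$.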
The next lemma states a pseudo-minimum result for qwo relations that are total. (It should be noted that a relation can be a qwo and still not itself be total.)
\begin{lemma}[Pseudo-minimum elements and quotient well-orderings]\label{lem:qwo-pseudo-minimum} Suppose qwo $R \subseteq S \times S$ is total. Then every non-empty subset $X \subseteq S$ contains an $R$-pseudo-minimum element. \end{lemma} \begin{proof} Fix total qwo $R \subseteq S \times S$, and let $X \subseteq S$ be non-empty. We must exhibit an $R$-pseudo-minimum element $x \in X$. To this end, consider the quotient $(Q_R, \sqsubseteq)$ of $R$, and let ${\sqsubset} = {\sqsubseteq^-}$ be the irreflexive core of $\sqsubseteq$. Note that $\sqsubset$ is a well-ordering over $Q_R$. Now consider $Q_X \subseteq Q_R$ defined by $Q_X = \{\,[x]_R \mid x \in X\}$. It follows that there is a $Q \in Q_X$ that is a $\sqsubset$-minimum for $Q_X$, and that $Q = [x]_R$ for some $x \in X$. Since $R$ is total it is transitive, meaning $R^* = R^=$. It follows that $x \inr{R} x'$ for all $x' \in [x]_R$ such that $x \neq x'$; it is also the case that $x \inr{R} x'$ for all $x'$ such that $[x]_R \sqsubset [x']_R$. These facts imply that $x \inr{R} x'$ for all $x' \in X$. \qedhere \end{proof}
We now establish a relationship between support orderings for $f [\sigma] g$ and $g$. We first define a notion of compatibility for such support orderings.
\begin{definition}[Local consistency of support orderings] \label{def:consistent-support-orderings} Let $f, g \in 2^S \times 2^S \rightarrow 2^S$ be monotonic, and let $\sigma_1, \sigma_2 \in \{\mu,\nu\}$. Further let $(X, \prec)$ be a $\sigma_1$-compatible support ordering for $f [\sigma_2] g$, with $Y_x = \sigma_2 g_{(\preimg{{\prec}}{x}, \cdot)}$ for $x \in X$ and $Y = \sigma_2\, g_{(\preimg{{\prec}}{X}, \cdot)}$. Then $\sigma_2$-compatible support ordering $(Y,\prec')$ for $g_{(\preimg{{\prec}}{X}, \cdot)}$ is \emph{locally consistent} with $(X, \prec)$ iff for all $x \in X$, $(Y_x, \restrict{(\prec')}{Y_x})$ is a $\sigma_2$-compatible support ordering for $g_{(\preimg{\prec}{x}, \cdot)}$ and for all $y \in Y_x, \preimg{{\prec'}}{y} \subseteq Y_x$. \end{definition}
Intuitively, $(Y, \prec')$ is locally consistent with $(X, \prec)$ if $\prec'$ not only supports the fact that $Y$ is the $\sigma_2$-fixpoint for $g_{(\preimg{{\prec}}{X}, \cdot)}$, but via the restriction of $\prec'$ to $Y_x$, it also provides localized support for the fact that $Y_x$ is the $\sigma_2$-fixpoint for $g_{(\preimg{{\prec}}{x}, \cdot)}$, for each $x \in X$. In addition, for any $x \in X$ and $y \in Y_x$ every element in the support set $\preimg{{\prec'}}{y}$ with respect to $\prec'$ must also be an element of $Y_x$. Note that this last aspect of the definition ensures that for any $x$ and $y \in Y_x$, $$ \preimg{{\prec'}}{y} = \preimg{(\restrict{(\prec')}{Y_x})}{y}. $$
The next lemma establishes that, for given support orderings of a specific type for $f [\sigma_2] g$, consistent support orderings exist for $g$.
\begin{lemma}[From composite to local support orderings]\label{lem:fg-support} Let $f, g \in 2^S \times 2^S \rightarrow 2^S$ be monotonic and $\sigma_1, \sigma_2 \in \{\mu,\nu\}$, with $X = \sigma_1 (f [\sigma_2] g)$. Also let $(X,\prec)$ be a $\sigma_1$-compatible, total qwf{\,}
support ordering for $f[\sigma_2]g$ and $Y = \sigma_2 g_{(\preimg{{\prec}}{X}, \cdot)}$. Then there is a $\sigma_2$-compatible, total qwf support ordering $(Y,\prec')$ for $g_{(\preimg{{\prec}}{X}, \cdot)}$ that is locally consistent with $(X,\prec)$. \end{lemma}
\begin{proof} Fix monotonic $f, g \in 2^S \times 2^S \rightarrow 2^S$ and $\sigma_1, \sigma_2 \in \{\mu,\nu\}$, let $X = \sigma_1 (f [\sigma_2] g)$, and let ${\prec} \subseteq X \times X$ be such that $(X, \prec)$ is a $\sigma_1$-compatible, total qwf support ordering for $f [\sigma_2] g$. Also fix $Y = \sigma_2 g_{(\preimg{{\prec}}{X}, \cdot)}$. We must construct a $\sigma_2$-compatible, total qwf ${\prec'} \subseteq Y \times Y$ such that $(Y, \prec')$ is a support ordering for $g_{(\preimg{{\prec}}{X}, \cdot)}$ that is locally consistent with $(X, \prec)$.
Let $(Q_\prec, \sqsubseteq)$ be the quotient of $(X,\prec)$ as given in Definition~\ref{def:relation-quotient}, and let ${\sqsubset} = {\sqsubseteq^-}$ be the irreflexive core of $\sqsubseteq$. For notational convenience, if $Z \subseteq S$ then we define $$ g_Z = g_{(\preimg{{\prec}}{Z}, \cdot)}. $$ Since $\prec$ is total and qwf it follows that $\sqsubset$ is a well-ordering on $Q_\prec$. For any $Q \in Q_\prec$ define $$ Y_Q = \sigma_2 \,g_Q, $$ and let $(Y_Q, \prec'_Q)$ be a $\sigma_2$-maximal support ordering for $g_Q$. Since ${\prec'_Q} \subseteq Y_Q \times Y_Q$ is $\sigma_2$-maximal, we have that $\prec'_Q$ is a well-ordering if $\sigma_2 = \mu$, and $Y_Q \times Y_Q$ if $\sigma_2 = \nu$. In either case it is easy to see that $\prec'_Q$ is total and qwf. Moreover, since $Q \subseteq X$ it follows that $\preimg{{\prec}}{Q} \subseteq \preimg{{\prec}}{X}$, and this means that for all $Z \subseteq S$, $g_Q(Z) \subseteq g_X(Z)$. Consequently $(Y_Q, \prec'_Q)$ is also a $\sigma_2$-compatible support ordering for $g_X$, as for all $y \in Y_Q$, $y \in g_Q(\preimg{{\prec'_Q}}{y}) \subseteq g_X(\preimg{{\prec'_Q}}{y})$. We now define the following using well-founded induction on $\sqsubset$. \begin{align*} Y'_{\sqsubset Q} &= \bigcup_{Q' \sqsubset Q} Y'_{Q'} \\ Y'_{Q} &= Y_Q \cup Y'_{\sqsubset Q} \\ Y''_{Q} &= Y_Q \setminus Y'_{\sqsubset Q} \\ \prec''_{Q} &= \left( \bigcup_{Q' \sqsubset Q} \prec''_{Q'} \right) \cup \left( Y'_{\sqsubset Q} \times Y''_Q \right) \cup \left(\restrict{\prec'_Q}{Y''_Q}\right) \end{align*}
\noindent An inductive argument further establishes that for each $Q$, $(Y'_Q, \prec''_Q)$ is a $\sigma_2$-compatible support ordering for $g_Q$ and that $\prec''_Q$ is total and qwf.
Now consider $Y' = \bigcup_{Q \in Q_\prec} Y'_Q$ and ${\prec''} = \bigcup_{Q \in Q_\prec} \prec''_Q$. It is straightforward to show that $(Y', \prec'')$ is a $\sigma_2$-compatible, total, qwf support ordering for $g_X$ since each $(Y_Q, \prec'_Q)$ is. Also note that $Y' \subseteq Y$. To finish the construction of $(Y, \prec')$, take $Y'' = Y \setminus Y'$, and let $(Y, \prec''')$ be a maximal $\sigma_2$-compatible support ordering for $g_X$. Note that $\prec'''$ is qwf, and well-founded if $\sigma_2 = \mu$ and $Y \times Y$ if $\sigma_2 = \nu$. Now define the following. \begin{align*} \prec' &= {\prec''} \cup \left(Y' \times Y''\right) \cup \left(\restrict{\prec'''}{Y''}\right) \end{align*} From the reasoning above it follows that $(Y, \prec')$ is a $\sigma_2$-compatible support ordering for $g_X$, and that $\prec'$ is total and qwf.
To complete the proof we must show that $(Y, \prec')$ is locally consistent with $(X, \prec)$. To this end, fix $x \in X$ and define ${\prec'_x} = \restrict{(\prec')}{Y_x}$. We must show that $(Y_x, \prec'_x)$ is a $\sigma_2$-compatible support ordering for $g_x = g_{\{x\}}$. Recall that $[x] \in Q_\prec$ is the equivalence class containing $x$. We begin by noting that since $\prec$ is total and qwf, \[ \preimg{{\prec}}{x} = [x] \cup \left(\bigcup_{Q \sqsubset [x]} Q\right). \] Also, $\sigma_2 (g_x) = \sigma_2 (g_{[x]}) = Y_{[x]}$. The definition of $\prec'$ further guarantees that $\prec'_x = \prec''_{Q_x}$. As we know that $(Y_Q, \prec''_Q)$ is a $\sigma_2$-compatible support ordering for $g_Q$ for all $Q \in Q_\prec$, the desired result holds. \qedhere \end{proof}
\subsection{Tableau normal form}
Later in this section we use constructions on tableaux to prove completeness of our proof system. The tableaux we work with have a restricted form, which we call \emph{tableau normal form} (TNF). TNF is defined as follows.
\begin{definition}[Tableau normal form (TNF)]\label{def:tableau-normal-forms} Let $\mathbb{T} = \tableauTrl$ be a tableau, with $\tree{T} = (\node{N}, \node{r}, p, cs)$. \begin{enumerate}
\item \label{def:thinning-restricted}
$\mathbb{T}$ is \emph{thinning-restricted} iff $\rho(\node{r}) \neq \textnormal{Thin}$ and for all $\node{n} \neq \node{r}$, $\rho(\node{n}) = \sigma Z.$ iff $\rho(p(\node{n})) = \textnormal{Thin}$.
\item \label{def:unfolding-limited}
$\mathbb{T}$ is \emph{unfolding-limited} iff for each definitional constant $U$ appearing in $\mathbb{T}$ there is exactly one node $\node{n}_U$ such that $\textit{fm}(\node{n}_U) = U$ and $\rho(\node{n}_U) = \textnormal{Un}$.
\item \label{def:irredundant}
$\mathbb{T}$ is \emph{irredundant} iff for each node $\node{n}$ such that $\rho(\node{n}) = \lor$ and $cs(\node{n}) = \node{n}_1\node{n}_2$, $\textit{st}(\node{n}_1) \cap \textit{st}(\node{n}_2) = \emptyset$.
\item \label{def:TNF}
$\mathbb{T}$ is in \emph{tableau normal form} (TNF) iff it is thinning-restricted, unfolding-limited and irredundant.
\end{enumerate} \end{definition}
Intuitively, $\mathbb{T}$ is
thinning-restricted if Thin is not applied to the root node, every application of the $\sigma Z.$ rule to a non-root node is immediately preceded by a single instance of Thin, and there are otherwise no other applications of Thin. It is unfolding-limited if each definitional constant is unfolded exactly once, and irredundant if for each $\lor$-node, the state sets of the node's children are disjoint (i.e. no state can appear in both children, meaning there can be no redundant reasoning about states in the $\lor$-node). The tableau is in TNF if it is thinning-restricted, unfolding-limited and irredundant.
In what follows we will on occasion build new (successful) TNF tableaux out of existing (successful) TNF tableaux that are \emph{structurally equivalent}.
\begin{definition}[Structural tableau equivalence]\label{def:structurally-equivalent-tableaux} Let $\tree{T}_1 = (\node{N}_1, \node{r}_1, p_1, cs_1)$ and $\tree{T}_2 = (\node{N}_2, \node{r}_2, p_2, cs_2)$ be finite ordered trees. \begin{enumerate}
\item \label{subdef:tree-isomorphism}
Bijection $\iota \in \node{N}_1 \to \node{N_2}$ is a \emph{tree isomorphism from $\tree{T}_1$ to $\tree{T}_2$} iff it satisfies:
\begin{enumerate}
\item
$\iota(\node{r}_1) = \node{r}_2$;
\item
for all $\node{n}_1 \in \node{N}_1$ $\iota(p_1(\node{n}_1)) = p_2(\iota(\node{n}_1))$; and
\item
for all $\node{n}_1 \in \node{N}_1$ $\iota(cs_1(\node{n}_1)) = cs_2(\iota(\node{n}_1))$.
\end{enumerate}
\item \label{subdef:isomorphic-trees}
$\tree{T}_1$ and $\tree{T}_2$ are \emph{isomorphic} iff there exists a tree isomorphism $\iota$ from $\tree{T}_1$ to $\tree{T}_2$. In this case we call $\iota$ a \emph{witnessing tree isomorphism from $\tree{T}_1$ to $\tree{T}_2$}.
\item\label{subdef:structurally-equivalent-tableaus}
Fix LTS $\mathcal{T}$, and let
\begin{align*}
\mathbb{T}_1 &= (\tree{T}_1, \rho_1, \mathcal{T}, \val{V}_1, \lambda_1)\\
\mathbb{T}_2 &= (\tree{T}_2, \rho_2, \mathcal{T}, \val{V}_2, \lambda_2)
\end{align*}
be tableaux. Then $\mathbb{T}_1$ and $\mathbb{T}_2$ are \emph{structurally equivalent} iff $\tree{T}_1$ and $\tree{T}_2$ are isomorphic, with witnessing tree isomorphism $\iota$ from $\tree{T}_1$ to $\tree{T}_2$, and the following hold for all $\node{n}_1 \in \node{N}_1$.
\begin{enumerate}
\item
$\textit{rn}(\rho_1(\node{n}_1)) = \textit{rn}(\rho_2(\iota(\node{n}_1)))$.
\item
$\textit{fm}(\lambda_1(\node{n}_1)) = \textit{fm}(\lambda_2(\iota(\node{n}_1)))$
\item
$\textit{dl}(\lambda_1(\node{n}_1)) = \textit{dl}(\lambda_2(\iota(\node{n}_1)))$
\end{enumerate}
In this case we refer to $\iota$ as a \emph{structural tableau morphism} from $\mathbb{T}_1$ to $\mathbb{T}_2$. \end{enumerate} \end{definition}
\noindent The definitions of tree isomorphism and isomorphic trees are standard. Two tableaux are structurally equivalent if they are ``almost isomorphic'', in the standard sense. Specifically, their trees must be isomorphic, and isomorphic nodes in the two trees must have the same proof rule applied to them, although they may have different witness functions if the rule involved is $\dia{K}$. Sequents labeling isomorphic nodes may differ in their valuations and in the sets of states they mention, although the formulas and definition lists must be the same. Intuitively, structurally equivalent tableaux may be seen as employing the same reasoning, but on slightly different, albeit similar, sequents.
Call a tableau \emph{diamond-leaf-free} if it contains no diamond leaves; recall that any successful tableau must be diamond-leaf-free. The next lemma establishes that if two diamond-leaf-free TNF tableaux have root sequents $\seq{s}_1$ and $\seq{s}_2$ such that $\textit{fm}(\seq{s}_1) = \textit{fm}(\seq{s}_2)$ and $\textit{dl}(\seq{s}_1) = \textit{dl}(\seq{s}_2) = \varepsilon$, then the tableaux must be structurally equivalent. It relies on an assumption that we make throughout this section: that definitional constants as introduced in the $\sigma Z.$ rule are generated uniformly. That is, if the sequent labeling a node has form $S \tnxTV{\Delta} \sigma Z.\Phi$ and rule $\sigma Z.$ is applied, then a given definitional constant $U$ depending only on $\sigma Z.\Phi$ and $\Delta$ is introduced, with the child sequent $S \tnxTV{\Delta \cdot (U = \sigma Z.\Phi)} U$ being generated.
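For instance, writing (purely for illustration; the actual naming scheme is immaterial) $U_{\sigma Z.\Phi,\Delta}$ for the constant introduced when rule $\sigma Z.$ is applied to a sequent with formula $\sigma Z.\Phi$ and definition list $\Delta$, uniform generation means that any two such applications, whether within one tableau or across different tableaux, introduce the same constant $U_{\sigma Z.\Phi,\Delta}$; this is what allows the definition lists of corresponding nodes in the two tableaux of the next lemma to coincide.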
\begin{lemma}[Structural equivalence of TNF tableaux]\label{lem:structural-equivalence-of-TNF-tableaux} Let $\seq{s}_1 = S_1 \tnxT{\val{V}_1}{\varepsilon} \Phi$ and $\seq{s}_2 = S_2 \tnxT{\val{V}_2}{\varepsilon} \Phi$ be sequents, with $\mathbb{T}_1$ and $\mathbb{T}_2$ diamond-leaf-free TNF tableaux for $\seq{s}_1$ and $\seq{s}_2$, respectively. Then $\mathbb{T}_1$ and $\mathbb{T}_2$ are structurally equivalent. \end{lemma} \begin{proof} Fix sequents $\seq{s}_1 = S_1 \tnxT{\val{V}_1}{\varepsilon} \Phi$ and $\seq{s}_2 = S_2 \tnxT{\val{V}_2}{\varepsilon} \Phi$, and let $\mathbb{T}_1$ and $\mathbb{T}_2$ be diamond-leaf-free TNF tableaux for $\seq{s}_1$ and $\seq{s}_2$, respectively, such that $\mathbb{T}_i = (\tree{T}_i,\rho_i, \mathcal{T}, \val{V}_i, \lambda_i)$, and $\tree{T}_i = (\node{N}_i,\node{r}_i,p_i,cs_i)$, for $i = 1,2$.
It suffices to construct a structural tableau morphism $\iota$ from $\mathbb{T}_1$ to $\mathbb{T}_2$, i.e.\/ a bijection such that $\iota(\node{r}_1) = \node{r}_2$ and the following properties are satisfied for all $\node{n}_1 \in \node{N}_1$. \begin{enumerate}[left=\parindent, label=S\arabic*., ref=S\arabic*]
\item \label{stm:related-parents} $\iota(p_1(\node{n}_1)) = p_2(\iota(\node{n}_1))$;
\item \label{stm:related-children} $\iota(cs_1(\node{n}_1)) = cs_2(\iota(\node{n}_1))$;
\item \label{stm:equal-rule} $\textit{rn}(\rho_1(\node{n}_1)) = \textit{rn}(\rho_2(\iota(\node{n}_1)))$;
\item \label{stm:equal-formula} $\textit{fm}(\lambda_1(\node{n}_1)) = \textit{fm}(\lambda_2(\iota(\node{n}_1)))$; and
\item \label{stm:equal-definition-list} $\textit{dl}(\lambda_1(\node{n}_1)) = \textit{dl}(\lambda_2(\iota(\node{n}_1)))$. \end{enumerate}
The definition of $\iota$ is given in a co-inductive fashion (i.e., ``from the root down''). Effectively, the definition is such that when two nodes are in bijective correspondence, they have the same formula and definition list, i.e., they satisfy statements~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list}. When two nodes are in bijective correspondence and have the same formula and definition list, we show that necessarily the same proof rule is applied to both, i.e., the nodes satisfy statement~\ref{stm:equal-rule}, and the bijective correspondence can be extended to their children in a way that statement~\ref{stm:related-children} is satisfied, and furthermore, the children satisfy statements~\ref{stm:related-parents},~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list}.
The construction of $\iota$ is as follows. We begin by taking $\iota(\node{r}_1) = \node{r}_2$. We observe that $\lambda_1(\node{r}_1) = \seq{s}_1 = S_1 \tnxT{\val{V}_1}{\varepsilon} \Phi$ and $\lambda_2(\node{r}_2) = \seq{s}_2 = S_2 \tnxT{\val{V}_2}{\varepsilon} \Phi$, hence $\textit{fm}(\lambda_1(\node{r}_1)) = \Phi = \textit{fm}(\lambda_2(\node{r}_2)) = \textit{fm}(\lambda_2(\iota(\node{r}_1)))$ and $\textit{dl}(\lambda_1(\node{r}_1)) = \varepsilon = \textit{dl}(\lambda_2(\node{r}_2)) = \textit{dl}(\lambda_2(\iota(\node{r}_1)))$, so statements~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list} clearly hold of $\node{r}_1$. Furthermore, $p_1(\node{r}_1)$ and $p_2(\node{r}_2)$ are both undefined, so statement~\ref{stm:related-parents} trivially holds.
Now, fix $\node{n}_1$ and $\node{n}_2$ such that $\iota(\node{n}_1) = \node{n}_2$, and assume they satisfy statements~\ref{stm:related-parents},~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list}. We distinguish cases based on $\textit{rn}(\rho_1(\node{n}_1))$, the name of the rule applied to $\node{n}_1$. \begin{itemize}
\item $\textit{rn}(\rho_1(\node{n}_1)){\perp}$. So, $\node{n}_1$ is a leaf. As $\mathbb{T}_1$ is diamond-leaf-free, $\textit{fm}(\lambda_1(\node{n}_1)) \in \{ Z, \neg Z, U \}$ for $Z \in \textnormal{Var}\xspace \setminus \operatorname{dom}(\textit{dl}(\lambda_1(\node{n}_1)))$, $U \in \operatorname{dom}(\textit{dl}(\lambda_1(\node{n}_1)))$; as $\textit{fm}(\lambda_1(\node{n}_1)) = \textit{fm}(\lambda_2(\node{n}_2))$ and $\textit{dl}(\lambda_1(\node{n}_1)) = \textit{dl}(\lambda_2(\node{n}_2))$, also $\textit{rn}(\rho_2(\node{n}_2)){\perp}$, hence statement~\ref{stm:equal-rule} is satisfied.
Since $\node{n}_1$ and $\node{n}_2$ do not have a rule applied, neither node has children, and statement~\ref{stm:related-children} is satisfied trivially.
\item $\textit{rn}(\rho_1(\node{n}_1)) \in \{ \wedge, \vee, [K], \dia{K} \}$. Then,
it follows directly from the fact that $\mathbb{T}_1$ and $\mathbb{T}_2$ are in TNF, and Definition~\ref{def:tableau-normal-forms}(\ref{def:thinning-restricted}), that $\textit{fm}(\lambda_1(\node{n}_1))$ and $\textit{fm}(\lambda_2(\node{n}_2))$ are not of the form $\sigma Z . \Phi$, and therefore, $\textit{rn}(\rho_2(\node{n}_2)) \neq \textnormal{Thin}$. Therefore, $\textit{rn}(\rho_2(\node{n}_2))$ must be dictated by the shape of $\textit{fm}(\lambda_2(\node{n}_2))$, and as $\textit{fm}(\lambda_1(\node{n}_1)) = \textit{fm}(\lambda_2(\node{n}_2))$ it immediately follows that $\textit{rn}(\rho_2(\node{n}_2)) = \textit{rn}(\rho_1(\node{n}_1))$, and statement~\ref{stm:equal-rule} is satisfied.
Since $\textit{rn}(\rho_2(\node{n}_2)) = \textit{rn}(\rho_1(\node{n}_1))$, they have the same number of children, and we define $\iota$ such that $\iota(cs(\node{n}_1)) = cs(\node{n}_2) = cs(\iota(\node{n}_1))$. So statement~\ref{stm:related-children} is satisfied immediately. It is furthermore easy to see that these children satisfy statements~\ref{stm:related-parents},~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list}.
\item $\textit{rn}(\rho_1(\node{n}_1)) = \textnormal{Thin}$. Then, as $\mathbb{T}_1$ is in TNF, it follows directly from Definition~\ref{def:tableau-normal-forms}(\ref{def:thinning-restricted}) that $\textit{fm}(\lambda_1(\node{n}_1)) = \textit{fm}(\lambda_2(\node{n}_2)) = \sigma Z . \Phi$, for some $Z$ and $\Phi$, and that $\textit{rn}(\rho_1(p_1(\node{n}_1))) \neq \textnormal{Thin}$.
According to our coinductive hypothesis, $\iota(p_1(\node{n}_1)) = p_2(\node{n}_2)$, hence $\textit{rn}(\rho_2(p_2(\node{n}_2))) = \textit{rn}(\rho_1(p_1(\node{n}_1))) \neq \textnormal{Thin}$; as $\textit{fm}(\lambda_2(\node{n}_2)) = \sigma Z . \Phi$ it must therefore be the case that $\textit{rn}(\rho_2(\node{n}_2)) = \textit{rn}(\rho_1(\node{n}_1)) = \textnormal{Thin}$, as rule $\sigma Z$, the other rule that could potentially be applied, is not allowed due to the restriction to TNF.
Since $\textit{rn}(\rho_2(\node{n}_2)) = \textit{rn}(\rho_1(\node{n}_1))$, they have the same number of children, and we define $\iota$ such that $\iota(cs(\node{n}_1)) = cs(\node{n}_2) = cs(\iota(\node{n}_1))$. So statement~\ref{stm:related-children} is satisfied immediately. It is furthermore easy to see that these children satisfy statements~\ref{stm:related-parents},~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list}.
\item $\textit{rn}(\rho_1(\node{n}_1)) = \sigma Z$. Then, as $\mathbb{T}_1$ is in TNF, it follows directly from Definition~\ref{def:tableau-normal-forms}(\ref{def:thinning-restricted}) that $\textit{fm}(\lambda_1(\node{n}_1)) = \textit{fm}(\lambda_2(\node{n}_2)) = \sigma Z . \Phi$, for some $Z$ and $\Phi$, and that $\textit{rn}(\rho_1(p_1(\node{n}_1))) = \textnormal{Thin}$. According to our coinductive hypothesis, $\iota(p_1(\node{n}_1)) = p_2(\node{n}_2)$, hence $\textit{rn}(\rho_2(p_2(\node{n}_2))) = \textit{rn}(\rho_1(p_1(\node{n}_1))) = \textnormal{Thin}$; as $\textit{fm}(\lambda_2(\node{n}_2)) = \sigma Z . \Phi$ it must therefore be the case that $\textit{rn}(\rho_2(\node{n}_2)) = \textit{rn}(\rho_1(\node{n}_1)) = \sigma Z$.
Since $\textit{rn}(\rho_2(\node{n}_2)) = \textit{rn}(\rho_1(\node{n}_1))$, they have the same number of children, and we define $\iota$ such that $\iota(cs(\node{n}_1)) = cs(\node{n}_2) = cs(\iota(\node{n}_1))$. So statement~\ref{stm:related-children} is satisfied immediately. It is furthermore easy to see that these children satisfy statements~\ref{stm:related-parents},~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list}. Note that in particular statements~\ref{stm:equal-formula} and~\ref{stm:equal-definition-list} require that the definitional constants are generated uniformly, so the same definitional constant is introduced in the application of the $\sigma Z$ rule in both tableaux.\qedhere \end{itemize} \end{proof}
\remove{ \begin{proofsketch} It suffices to give a structural tableau morphism from $\mathbb{T}_1$ to $\mathbb{T}_2$. This can be done co-inductively using $\tree{T}_1$ and $\tree{T}_2$, the trees embedded in $\mathbb{T}_1$ and $\mathbb{T}_2$. The limitations imposed by TNF on the use of the Thin and Un rules ensure the desired similarities in sequents labeling isomorphic tree nodes, while the diamond-leaf-free property ensures that all leaves must be free leaves, i.e.\/ of form $Z$ or $\lnot Z$ for some $Z$ free in $\Phi$, or $\sigma$-leaves, i.e.\/ of form $U$ for some $U$ defined in the definition list of the leaf. \qedhere \end{proofsketch} }
\subsection{Completeness via tableau constructions}
We now turn to proving lemmas about the existence of successful TNF tableaux for different classes of sequents. The first establishes the existence of such tableaux for valid sequents whose formulas contain no fixpoint subformulas.
\begin{lemma}[Fixpoint-free completeness]\label{lem:fixpoint-free-completeness} Let $\mathcal{T}, \val{V}, \Phi$ and $S$ be such that $\Phi$ is fixpoint-free and $S \subseteq \semTV{\Phi}$. Then there is a successful TNF tableau for $S \tnxTV{\varepsilon} \Phi$. \end{lemma} \begin{proof} Let $\mathcal{T} = \lts{S}$ be an LTS of sort $\Sigma$ and $\val{V}$ be a valuation. The proof proceeds by structural induction on $\Phi$; the induction hypothesis states that for any subformula $\Phi'$ of $\Phi$ and $S'$ such that $S' \subseteq \semTV{\Phi'}$, $S' \tnxTV{\varepsilon} \Phi'$ has a successful TNF tableau. The argument involves a case analysis on the form of $\Phi$. Most cases are routine and left to the reader. We consider here the cases for $\lor$ and $\dia{K}$.
Assume $\Phi = \Phi_1 \lor \Phi_2$; let $S_1 = S \cap \semTV{\Phi_1}$ and $S_2 = S \setminus S_1$. It is easy to see that $S_1 \subseteq \semTV{\Phi_1}$ and $S_2 \subseteq \semTV{\Phi_2}$; hence the induction hypothesis guarantees that both $S_1 \tnxTV{\varepsilon} \Phi_1$ and $S_2 \tnxTV{\varepsilon} \Phi_2$ have successful TNF tableaux. Without loss of generality, assume that these tableaux have disjoint sets of proof nodes. We obtain a successful TNF tableau for $S \tnxTV{\varepsilon} \Phi$ by creating a fresh tree node labeled by this sequent and having as its left child the root of the successful TNF tableau for $S_1 \tnxTV{\varepsilon} \Phi_1$ and as its right child the root of the successful TNF tableau for $S_2 \tnxTV{\varepsilon} \Phi_2$. The proof rule applied to the new node is $\lor$. It is easy to establish that this tableau is successful and TNF.
Now assume $\Phi = \dia{K} \Phi'$, and let $f \in S \to \states{S}$ be a function such that for every $s \in S$, $s \xrightarrow{K} f(s)$ and $f(s) \in \semTV{\Phi'}$. Such an $f$ must exist, as $S \subseteq \semTV{\dia{K}\Phi'}$ and thus for every $s \in S$ there is an $s' \in \states{S}$ such that $s \xrightarrow{K} s'$ and $s' \in \semTV{\Phi'}$. Since $f(S) \subseteq \semTV{\Phi'}$, the induction hypothesis guarantees the existence of a successful TNF tableau for $f(S) \tnxTV{\varepsilon} \Phi'$. We now construct a successful TNF tableau for $S \tnxTV{\varepsilon} \Phi$ as follows. Create a fresh tree node labeled by $S \tnxTV{\varepsilon} \Phi$, and let its only child be the root node for the successful TNF tableau for $f(S) \tnxTV{\varepsilon} \Phi'$. The rule application associated with the new node is $(\dia{K},f)$. The new tableau is clearly successful and TNF. \qedhere \end{proof}
We now prove the existence of successful TNF tableaux for different classes of sequents involving fixpoint formulas. Before doing so, we first define the notion of \emph{compliance} between a tableau and a support ordering.
\begin{definition}[Tableau compliance with $\prec$]\label{def:tableau-compliance} Let $\mathcal{T}, \val{V}, Z, \Phi, \sigma$ and $S$ be such that $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible support ordering for $\semfTV{Z}{\Phi}$. Then TNF tableau $\tableauTrl$ with root node $\node{r} = S \tnxTV{\varepsilon} \sigma Z.\Phi$ and $\node{r'} = cs(\node{r}) = S \tnxTV{(U = \sigma Z.\Phi)} U$ is \emph{compliant} with $\prec$ iff whenever $s' <:_{\node{r'}} s$, $s' \prec s$. \end{definition}
Intuitively, tableau $\mathbb{T}$ for sequent $S \tnxTV{\varepsilon} \sigma Z.\Phi$ is compliant with support ordering $\prec$ iff every extended dependency from $s \in S$ to a state in a companion leaf of $\node{r}'$ is also a semantic dependency as reflected in $\prec$. Note that since the root node of $\mathbb{T}$ is a fixpoint node and $\mathbb{T}$ is in TNF, $\rho(\node{r}) = \sigma Z.$ and thus $cs(\node{r}) = \node{r'} = S \tnxTV{(U = \sigma Z.\Phi)} U$ for some definitional constant $U$. Also, since $\mathbb{T}$ is in TNF, $\rho(\node{r}') = \text{Un}$, and $\node{r'}$ is the only node in which unfolding is applied to definitional constant $U$.
The next lemma continues the sequence of results in this section on the existence of successful TNF tableaux for valid sequents. In this case the formulas have form $\sigma Z.\Phi$, where $\Phi$ contains no fixpoint subformulas, and come equipped with specific $\sigma$-compatible support orderings; the result shows how to construct successful TNF tableaux that are compliant with the given support ordering.
\begin{lemma}[Single-fixpoint completeness]\label{lem:single-fixpoint-completeness} Fix $\mathcal{T}$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $\Phi$ is fixpoint-free and $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $\semfTV{Z}{\Phi}$. Then $S \tnxTV{\varepsilon} \sigma Z.\Phi$ has a successful TNF tableau compliant with $(S, \prec)$. \end{lemma} \remove{ \begin{proofsketch} Fix $\mathcal{T}= \lts{\states{S}}$ of sort $\Sigma$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $\Phi$ is fixpoint-free and $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $f = \semfTV{Z}{\Phi}$. Since $s \in f(\preimg{{\prec}}{s})$ for all $s \in S$ it follows from Lemma~\ref{lem:fixpoint-free-completeness} that for each such $s$ there is a successful TNF tableau $\mathbb{T}_s$ for sequent $\{ s \} \tnxT{\val{V}_s}{\varepsilon} \Phi$, where $\val{V}_s = \val{V} [Z := \preimg{{\prec}}{s}]$. The proof then shows how these tableaux may be merged into a single successful TNF tableau for $S \tnxT{\val{V}_S}{\varepsilon} \Phi$, where $\val{V}_S = \val{V}[Z := \preimg{{\prec}}{S}]$, with the property that extended dependencies from the root to leaves labeled by $Z$ are also semantic-support relationships according to $\prec$. It is then shown how to convert this tableau into a successful TNF tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $(S, \prec)$. The detailed proof is included in the appendix. \qedhere \end{proofsketch} } \begin{proof} Fix $\mathcal{T} = \lts{\states{S}}$ of sort $\Sigma$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $\Phi$ is fixpoint-free and $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $f = \semfTV{Z}{\Phi}$. We must construct a successful TNF tableau for sequent $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $(S,\prec)$. The proof consists of the following steps. \begin{enumerate}
\item\label{it:step-single-state-tableau}
For each $s \in S$ we use Lemma~\ref{lem:fixpoint-free-completeness} to establish the existence of a successful TNF tableau for sequent $\{s\} \tnxT{\val{V}_s}{\varepsilon} \Phi$, where $\val{V}_s = \val{V}[Z := \preimg{{\prec}}{s}]$.
\item\label{it:step-full-set-tableau}
We then construct a successful TNF tableau for sequent $S \tnxT{\val{V}_S}{\varepsilon} \Phi$, where $\val{V}_S = \val{V}[Z := \preimg{{\prec}}{S}]$, from the individual tableaux for the $s \in S$.
\item\label{it:step-fixpoint-tableau}
We convert the tableau for $S \tnxT{\val{V}_S}{\varepsilon} \Phi$ into a successful TNF tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $\prec$. \end{enumerate}
We begin by noting that $S = \sigma f$, and that since $(S,\prec)$ is a support ordering for $f$, it is the case that $s \in f(\preimg{{\prec}}{s})$ for every $s \in S$. From the definition of $f$ and $\val{V}_s$ it therefore follows that for each $s \in S$, $s \in \semT{\Phi}{\val{V}_s}$. Now, let $(Q_{\prec}, \sqsubseteq)$ be the quotient of $(S, {\prec})$ as given in Definition~\ref{def:relation-quotient}, with ${\sqsubset} = {\sqsubseteq^-}$ the irreflexive core of $\sqsubseteq$. (Recall that each equivalence class $Q \in Q_\prec$ is such that $Q \subseteq S$.) Since $\prec$ is total and qwf, Lemma~\ref{lem:total-qwo} guarantees that it is a qwo, which means that $\sqsubset$ is a well-ordering (i.e.\/ is total and well-founded) over $Q_\prec$. If $s \in S$, then we write $[s]$ for the unique $Q \in Q_\prec$ such that $s \in Q$ (i.e.\/ $[s]$ is the equivalence class of $s$ with respect to the equivalence on $S$ induced by $\prec$). It is easy to see the following. \[ \preimg{{\prec}}{s} = \bigcup_{\{s' \mid s' \prec s\}} [s'] \] Note that if $s \prec s$ then $[s] \subseteq {\preimg{\prec}{s}}$.
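
As a small, purely illustrative example of these notions (hypothetical, and not tied to any particular LTS), suppose $S = \{s_0, s_1, s_2\}$ and $s_i \prec s_j$ iff $i \leq j$. Every pair of states is then related in at least one direction, there are no infinite strictly descending chains, and the quotient consists of the singleton classes $[s_i] = \{s_i\}$, ordered by $[s_0] \sqsubset [s_1] \sqsubset [s_2]$. The identity just displayed specializes to
\[
\preimg{{\prec}}{s_2} = [s_0] \cup [s_1] \cup [s_2] = S,
\]
and since $s_2 \prec s_2$ we indeed have $[s_2] \subseteq \preimg{{\prec}}{s_2}$.
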
\paragraph{Step~\ref{it:step-single-state-tableau} of proof outline: construct tableau for $\{s\} \tnxT{\val{V}_s}{\varepsilon} \Phi$, where $s \in S$.} For any $s \in S$ we have that $s \in \semT{\Phi}{\val{V}_s}$, meaning that sequent $\{s\} \tnxT{\val{V}_s}{\varepsilon} \Phi$ is valid. Since $\Phi$ is fixpoint-free, Lemma~\ref{lem:fixpoint-free-completeness} guarantees the existence of a successful TNF tableau \begin{align*} \mathbb{T}_s &= \tableau{\tree{T}_s}{\rho_s}{\mathcal{T}}{\val{V}_s}{\lambda_s}, \text{\rm where} \\ \tree{T}_s &= (\node{N}_s,\node{r}_s,p_s,cs_s) \end{align*} for $\{s\} \tnxT{\val{V}_s}{\varepsilon} \Phi$. We now remark on some properties of $\mathbb{T}_s$. \begin{enumerate}
\item\label{it:obs-dependencies}
Suppose $s'$ and $\node{n}' \in \node{N}_s$ are such that $\textit{fm}(\node{n}') = Z$ (so $\node{n}'$ is a leaf whose formula is $Z$, the variable bound by $\sigma$ in $\sigma Z.\Phi$) and $s' <:_{\node{n}',\node{r}_s} s$. Then $s' \prec s$, since $s' \in \val{V}_s(Z) = \preimg{{\prec}}{s}$.
\item\label{it:obs-isomorphism}
Let $s' \in S$, and let $\mathbb{T}_{s'}$ be the successful TNF tableau for $\{s'\} \tnxT{\val{V}_{s'}}{\varepsilon} \Phi$. Lemma~\ref{lem:structural-equivalence-of-TNF-tableaux} and the fact that successful tableaux must be diamond-leaf-free guarantee that $\mathbb{T}_s$ and $\mathbb{T}_{s'}$ are structurally equivalent. \end{enumerate} Observation \ref{it:obs-dependencies} establishes a relationship between dependencies involving the single state in the root node of $\mathbb{T}_s$ and states in the leaves involving $Z$. Observation \ref{it:obs-isomorphism} guarantees that the successful TNF tableaux due to Lemma~\ref{lem:fixpoint-free-completeness} are structurally equivalent, and hence isomorphic as trees, and satisfy the property that isomorphic nodes in these trees share the same formulas, definition lists ($\varepsilon$ in this case) and rule names. In what follows we use $\tree{T} = (\node{N}, \node{r}, p, cs)$, $\textit{fm}(\node{n})$ and $\textit{rn}(\node{n})$ for these common structures and write $\mathbb{T}_s = (\tree{T}, \rho_s, \mathcal{T}, \val{V}_s, \lambda_s)$ for $s \in S$, noting that for all $s, s' \in S$, $\textit{rn}(\rho_s(\node{n})) = \textit{rn}(\rho_{s'}(\node{n})) = \textit{rn}(\node{n})$.
\paragraph{Step~\ref{it:step-full-set-tableau} of proof outline: construct tableau for $S \tnxT{\val{V}_S}{\varepsilon} \Phi$.} We now construct a successful TNF tableau for $S \tnxT{\val{V}_S}{\varepsilon} \Phi$ satisfying the following: if $s, s'$ and $\node{n}'$ are such that $\textit{fm}(\node{n}') = Z$ and $s' <:_{\node{n}',\node{r}} s$, then $s' \prec s$. There are two cases to consider. In the first case, $S = \emptyset$. In this case, ${\prec} = \preimg{{\prec}}{S} = \emptyset$, and $\emptyset \tnxT{\val{V}_S}{\varepsilon} \Phi$ is valid and therefore, by Lemma~\ref{lem:fixpoint-free-completeness}, has a successful TNF tableau. Define $\mathbb{T}_S$ to be this tableau. Note that since $S = \emptyset$, $\mathbb{T}_S$ vacuously satisfies the property involving $<:$.
In the second case, $S \neq \emptyset$; we will construct $\mathbb{T}_S = (\tree{T}, \rho_S, \mathcal{T}, \val{V}_S, \lambda_S)$ that is structurally equivalent to each $\mathbb{T}_s$ for $s \in S$. The intuition behind the construction is to ``merge" the individual tableaux $\mathbb{T}_s$ for the $s \in S$ by assigning to each node $\node{n}$ in $\mathbb{T}_S$ the set of states obtained by appropriately combining all the sets of states each individual tableau $\mathbb{T}_s$ assigns to the node. Care must be taken with nodes involving the $\lor$ and $\dia{K}$ proof rules.
Since $\tree{T}$, $\mathcal{T}$ and $\val{V}_S$ are already given, completing the construction of $\mathbb{T}_S$ only requires that we define $\rho_S$ and $\lambda_S$, which we do so that the following invariants hold for each $\node{n} \in \node{N}$.
\begin{invariants}
\item\label{inv:rule}
If $\rho_S(\node{n})$ is defined, then $\textit{rn}(\rho_S(\node{n})) = \textit{rn}(\node{n})$ and the sequents assigned by $\lambda_S$ to $\node{n}$ and its children are consistent with $\rho_S(\node{n})$.
\item\label{inv:formula}
$\textit{fm}(\lambda_S(\node{n})) = \textit{fm}(\node{n})$
\item\label{inv:state-set}
$\textit{st}(\lambda_S(\node{n})) \subseteq \bigcup_{s \in S} \textit{st}(\lambda_s(\node{n}))$ \end{invariants}
The definitions of $\rho_S$ and $\lambda_S$ are given in a co-inductive fashion (i.e.\/ ``from the root down"). We begin by taking $\lambda_S(\node{r}) = S \tnxT{\val{V}_S}{\varepsilon} \Phi$ for root node $\node{r}$; invariants~\ref{inv:formula} and \ref{inv:state-set} clearly hold of $\lambda_S(\node{r})$. For the co-inductive step we assume that $\node{n}$ satisfies \ref{inv:formula} and \ref{inv:state-set} and define $\rho_S(\node{n})$ and $\lambda_S(\node{n}')$ for each child $\node{n}'$ of $\node{n}$ so that \ref{inv:rule} holds of $\node{n}$ and \ref{inv:formula} and \ref{inv:state-set} hold of each of the $\node{n}'$. This is done below based on $\textit{rn}(\node{n})$, the name of the rule applied to $\node{n}$. Note that because each $\mathbb{T}_s$ is in TNF and $\Phi$ is fixpoint-free there can be no $\node{n} \in \node{N}$ such that $\textit{rn}(\node{n}) \in \{\textnormal{Thin}, \sigma Z, \text{Un}\}$. Thus the case analysis below need not consider these possibilities. In what follows we let $S_{\node{n}} = \textit{st}(\lambda_S(\node{n}))$ be the set of states in the sequent labeling $\node{n}$. \begin{description}
\item[$\textit{rn}(\node{n}) = {\perp}$.]
In this case \ref{inv:rule} holds vacuously. Note also that $\node{n}$ must be a leaf and therefore has no children.
\item[$\textit{rn}(\node{n}) = \land$.]
In this case $cs(\node{n}) = \node{n}_1\node{n}_2$ and $\textit{fm}(\node{n}) = \textit{fm}(\node{n}_1) \land \textit{fm}(\node{n}_2)$.
Define
$\rho_S(\node{n}) = \land$
and
$\lambda_S(\node{n}_1) = S_\node{n} \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_1)$ and $\lambda_S(\node{n}_2) = S_\node{n} \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_2)$.
Invariant \ref{inv:rule} clearly holds for $\node{n}$, while \ref{inv:formula} and \ref{inv:state-set} each hold for $\node{n}_1$ and $\node{n}_2$ since they do for $\node{n}$ and $\rho_S(\node{n}) = \land$.
\item[$\textit{rn}(\node{n}) = \lor$.]
In this case $cs(\node{n}) = \node{n}_1\node{n}_2$ and $\textit{fm}(\node{n}) = \textit{fm}(\node{n}_1) \lor \textit{fm}(\node{n}_2)$.
Take $\rho_S(\node{n}) = \lor$.
We now construct $S_1$ and $S_2$, the sets of states in $\node{n}_1$ and $\node{n}_2$, as follows.
For $s \in S_\node{n}$ define
\[
I_s = \{s' \in S \mid s \in \textit{st}(\lambda_{s'}(\node{n}))\}.
\]
Intuitively, $I_s$ consists of all states $s' \in S$ whose tableaux $\mathbb{T}_{s'}$ contain state $s$ in node $\node{n}$. This set must be non-empty since $s \in S_\node{n}$ and Property~\ref{inv:state-set} holds of $\node{n}$. Since $\prec$ is a qwo, Lemma~\ref{lem:qwo-pseudo-minimum} guarantees that $I_s$ has at least one pseudo-minimum element $s'$: $s' \in I_s$ has the property that $s' \prec s''$ for all $s'' \in I_s$.
Select $s'$ to be one of these pseudo-minimum elements.
Since $\mathbb{T}_{s'}$ is successful and TNF it must be the case that either $s \in \textit{st}(\lambda_{s'}(\node{n}_1))$ or $s \in \textit{st}(\lambda_{s'}(\node{n}_2))$, but not both. Now define the following.
\begin{align*}
S_{1,s}
&=
\begin{cases}
\{s\}
& \text{if $s \in \textit{st}(\lambda_{s'}(\node{n}_1))$}
\\
\emptyset
& \text{otherwise}
\end{cases}
\\
S_{2,s}
&= \{s\} \setminus S_{1,s}
\\
S_1
&= \bigcup_{s \in S_\node{n}} S_{1,s}
\\
S_2
&= \bigcup_{s \in S_\node{n}} S_{2,s}
\end{align*}
For any $s \in S_{\node{n}}$, since either $s \in S_{1,s}$ or $s \in S_{2,s}$, but not both, it follows that either $s \in S_1$ or $s \in S_2$, but not both.
Therefore, $S_1 \cup S_2 = S_{\node{n}}$ and $S_1 \cap S_2 = \emptyset$.
It can also be seen that if $s \in S_{i,s}$ then
$s \in \textit{st}(\lambda_{s'}(\node{n}_i))$
and thus $s \in \bigcup_{s'' \in S} \textit{st}(\lambda_{s''}(\node{n}_i))$.
Now define
\begin{align*}
\lambda_S(\node{n}_1) &= S_1 \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_1)
\\
\lambda_S(\node{n}_2) &= S_2 \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_2).
\end{align*}
Based on the definitions of $S_1$ and $S_2$ invariant~\ref{inv:rule} certainly holds for $\node{n}$, as do \ref{inv:formula} and \ref{inv:state-set} for each of $\node{n}_1$ and $\node{n}_2$.
\item[$\textit{rn}(\node{n}) = [K{]}$.]
In this case $cs(\node{n}) = \node{n}'$ and $\textit{fm}(\node{n}) = [K] \textit{fm}(\node{n}')$.
Set $\rho_S(\node{n}) = [K]$.
Now let
\[
S' = \{ s' \mid \exists s \in S_{\node{n}} \colon s \xrightarrow{K} s'\},
\]
and define $\lambda_S(\node{n}') = S' \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}')$. Invariant~\ref{inv:rule} holds for $\node{n}$, while \ref{inv:formula} and \ref{inv:state-set} hold for $\node{n}'$ based on the fact that these hold by assumption for $\node{n}$.
\item[$\textit{rn}(\node{n}) = \dia{K}$.]
In this case $cs(\node{n}) = \node{n}'$ and $\textit{fm}(\node{n}) = \dia{K}\textit{fm}(\node{n}')$.
To define $\rho_S(\node{n})$ we first
construct a witness function $f_\node{n} \in S_{\node{n}} \rightarrow \states{S}$ such that $s \xrightarrow{K} f_{\node{n}}(s)$ for all $s \in S_\node{n}$ and such that $f_\node{n}(S_\node{n}) \subseteq \bigcup_{s \in S}\textit{st}(\lambda_s(\node{n}'))$.
This function will then be used to define the sequent labeling $\node{n}'$.
So fix $s \in S_\node{n}$; we construct $f_{\node{n}}(s)$ based on the tableaux $\mathbb{T}_{s'}$ whose sequent for $\node{n}$ contains $s$.
To this end, define
\[
I_s = \{s' \in S \mid s \in \textit{st}(\lambda_{s'}(\node{n})) \}.
\]
Intuitively, $I_s \subseteq S$ contains all states $s'$ whose tableau $\mathbb{T}_{s'}$ contains state $s$ in $\node{n}$.
Clearly $I_s$ is non-empty and thus contains a pseudo-minimum element $s'$ (Lemma~\ref{lem:qwo-pseudo-minimum}).
Now consider $\rho_{s'}(\node{n})$; it has form $(\dia{K},f_{\node{n},s'})$, where $f_{\node{n},s'}$ is the witness function for $\node{n}$ in tableau $\mathbb{T}_{s'}$.
This means that $f_{\node{n}, s'} \in \textit{st}(\lambda_{s'}(\node{n})) \rightarrow \states{S}$ is such that $\textit{st}(\lambda_{s'}(\node{n}')) = f_{\node{n},s'}(\textit{st}(\lambda_{s'}(\node{n})))$.
We now define $f_{\node{n}}(s) = f_{\node{n},s'}(s)$, $\rho_S(\node{n}) = (\dia{K}, f_{\node{n}})$, and $\lambda_S(\node{n}') = f_\node{n}(S_\node{n}) \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}')$. It can be seen that invariant~\ref{inv:rule} holds of $\node{n}$ and that \ref{inv:formula} and \ref{inv:state-set} hold of $\node{n}'$. \end{description} This construction ensures that Properties~\ref{inv:rule}--\ref{inv:state-set} hold for all $\node{n}$.
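
To illustrate the role played by pseudo-minimum states in this construction, consider a small hypothetical situation arising at an $\lor$-node $\node{n}$ of the merged tableau: $S_\node{n} = \{t\}$ and $I_t = \{s', s''\}$, with $s'$ a pseudo-minimum of $I_t$ (so in particular $s' \prec s''$). If tableau $\mathbb{T}_{s'}$ places $t$ in the left child while $\mathbb{T}_{s''}$ places it in the right child, the construction follows $\mathbb{T}_{s'}$:
\[
t \in \textit{st}(\lambda_{s'}(\node{n}_1))
\quad\text{implies}\quad
S_{1,t} = \{t\}, \; S_{2,t} = \emptyset .
\]
Because $\prec$ is total, and hence transitive, the $Z$-leaf states that $t$ can reach by following $\mathbb{T}_{s'}$ all lie in $\val{V}_{s'}(Z) = \preimg{{\prec}}{s'} \subseteq \preimg{{\prec}}{s''}$, which is exactly what the dependency argument given below requires.
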
To establish that $\mathbb{T}_S$ is successful we must show that every leaf in $\mathbb{T}_S$ is successful (cf.\/ Definition~\ref{def:successful-tableau}), which amounts to showing that for each such leaf $\node{n}$, $\textit{st}(\lambda_S(\node{n})) \subseteq \semT{\textit{fm}(\node{n})}{\val{V}_S}$. Since $\Phi$ contains no fixpoint subformulas there are two cases to consider. \begin{description}
\item[$\textit{fm}(\node{n}) = Z$.]
In this case the formula labeling $\node{n}$ is the bound variable $Z$ in $\sigma Z.\Phi$. Since for each $s \in S$, $\mathbb{T}_{s}$ is successful and $\preimg{{\prec}}{s} \subseteq \preimg{{\prec}}{S}$, we have
$$
\textit{st}(\lambda_{s}(\node{n}))
\subseteq \semT{\textit{fm}(\node{n})}{\val{V}_s}
= \val{V}_s(Z)
= \preimg{{\prec}}{s}.
$$
As invariant~\ref{inv:state-set} ensures that $\textit{st}(\lambda_S(\node{n})) \subseteq \bigcup_{s \in S} \textit{st}(\lambda_{s}(\node{n}))$, it follows that
$$
\textit{st}(\lambda_S(\node{n}))
\subseteq \bigcup_{s \in S} \textit{st}(\lambda_{s}(\node{n}))
\subseteq \bigcup_{s \in S} \preimg{{\prec}}{s}
= \preimg{{\prec}}{S}
= \val{V}_S(Z)
= \semT{\textit{fm}(\node{n})}{\val{V}_S}.
$$
Leaf $\node{n}$ is therefore successful.
\item[$\textit{fm}(\node{n}) \in \{Y, \lnot Y\}$ for some $Y \neq Z$ free in $\sigma Z.\Phi$.]
The argument is very similar to the previous case, the only difference being that for any $s \in S$, $\val{V}_s(Y) = \val{V}_S(Y)$ and thus $\semT{\textit{fm}(\node{n})}{\val{V}_s} = \semT{\textit{fm}(\node{n})}{\val{V}_S}$. \end{description}
That $\mathbb{T}_S$ is in TNF follows from the fact that it is successful (and thus diamond-leaf-free) and that each $\mathbb{T}_s$ is successful and TNF, and from the definitions of $\rho_S$ and $\lambda_S$.
We now establish that for all leaves $\node{n}$ in $\mathbb{T}_S$ such that $\textit{fm}(\node{n}) = Z$, and $s_\node{n}, s_\node{r} \in S$ such that $s_\node{n} <:_{\node{n}, \node{r}} s_\node{r}$ in $\mathbb{T}_S$, $s_\node{n} \prec s_\node{r}$. We begin by noting that if $\node{n} = \node{r}$ then $\Phi = Z$ and $s_\node{n} <:_{\node{n}, \node{r}} s_\node{r}$ iff $s_\node{n} = s_\node{r}$. In this case, if $\sigma = \nu$ then $S = \semTV{\nu Z.Z} = \states{S}$ and the result holds because $(S, \prec)$ is a support ordering for $f$, and thus must be reflexive since $f$ is the identity function. If instead $\sigma = \mu$ then $S = \semTV{\mu Z.Z} = \emptyset$ and the result is vacuously true.
Now assume that $\node{n} \neq \node{r}$. We start by remarking on a property that holds of all $\node{n}_1, \node{n}_2, s_1$ and $s_2$ such that $s_2 <_{\node{n}_2, \node{n}_1} s_1$ in $\mathbb{T}_S$: for every $s \in S$ such that $s_1 \in \textit{st}(\lambda_s(\node{n}_1))$, either $s_2 \in \textit{st}(\lambda_s(\node{n}_2))$, or there exists $s' \prec s$ such that $s_2 \in \textit{st}(\lambda_{s'}(\node{n}_2))$. That is, when $s_2 <_{\node{n}_2, \node{n}_1} s_1$ in $\mathbb{T}_S$, meaning $\node{n}_2$ is a child of $\node{n}_1$ and $s_2$ is a direct dependent of $s_1$ in $\mathbb{T}_S$, and $\mathbb{T}_s$ is a tableau containing $s_1$ in $\node{n}_1$, then $s_2$ is also contained in $\node{n}_2$ of either $\mathbb{T}_s$ or $\mathbb{T}_{s'}$ for some $s' \prec s$. This is easily observed based on the definition of $<_{\node{n}_1, \node{n}_2}$, as well as the construction of $\mathbb{T}_S$ above and its use of pseudo-minimal states in the $\lor$ and $\dia{K}$ cases. A simple inductive argument lifts this result to the case when $\node{n}_1 \neq \node{n}_2$ and $s_1, s_2$ are such that $s_2 <:_{\node{n}_2, \node{n}_1} s_1$ in $\mathbb{T}_S$: for all $s \in S$ such that $s_1 \in \textit{st}(\lambda_s(\node{n}_1))$, either $s_2 \in \textit{st}(\lambda_s(\node{n}_2))$ or there exists $s' \prec s$ such that $s_2 \in \textit{st}(\lambda_{s'}(\node{n}_2))$. Now consider $\node{n}$ and $\node{r}$ as given above, and assume $s_\node{n} <:_{\node{n},\node{r}} s_\node{r}$. It follows from the definition of $\mathbb{T}_s$ that if $s_{\node{r}} \in \textit{st}(\lambda_s(\node{r}))$ then $s = s_\node{r}$, since $\textit{st}(\lambda_s(\node{r})) = \{s\}$. There are now two cases to consider. In the first, $s_\node{n} \in \textit{st}(\lambda_{s}(\node{n}))$, whence $s_\node{n} \in \val{V}_{s}(Z)$ and $s_\node{n} \prec s = s_\node{r}$. In the second, there is an $s' \in S$ such that $s' \prec s_\node{r}$ and $s_\node{n} \in \textit{st}(\lambda_{s'}(\node{n}))$. In this case $s_\node{n} \in \val{V}_{s'}(Z) = \preimg{{\prec}}{s'}$, meaning that $s_\node{n} \prec s'$. Since $\prec$ is total, and hence transitive, we have $s_\node{n} \prec s' \prec s_\node{r}$ and thus $s_\node{n} \prec s_\node{r}$.
\paragraph{Step~\ref{it:step-fixpoint-tableau} of proof outline: construct tableau for $S \tnxT{\val{V}}{\varepsilon} \sigma Z.\Phi$.} To complete the proof, we convert $\mathbb{T}_S$ into a tableau $\mathbb{T}_\sigma = (\tree{T}_\sigma, \rho_\sigma, \mathcal{T}, \val{V}, \lambda_\sigma)$ for sequent $S \tnxTV{\varepsilon} \sigma Z.\Phi$ as follows. We create two fresh tree nodes $\node{r}_1, \node{r}_2 \not\in \node{N}$ and add these into $\tree{T}_\sigma$ along with all the nodes of $\tree{T}$. The root of $\tree{T}_\sigma$ is taken to be $\node{r}_1$; the parent of $\node{r}_2$ is then $\node{r}_1$, while the parent of $\node{r}$, the original root of $\tree{T}$, is $\node{r}_2$. The other nodes of $\tree{T}$ retain their parents and sibling structure from $\tree{T}$. We also define $\rho_\sigma(\node{r}_1) = \sigma Z$ and $\rho_\sigma(\node{r}_2) = \text{Un}$; for all nodes $\node{n}$ in $\tree{T}$, $\rho_\sigma(\node{n}) = \rho_S(\node{n})$. If $\seq{s} = S \tnxTVD \Phi'$ and $Z \in \textnormal{Var}\xspace$ then take \[
\seq{s}[Z := \Gamma] = S \tnxTV{\Delta} \Phi'[Z := \Gamma] \] We now define $\lambda_\sigma$ as follows. \[ \lambda_\sigma(\node{n}) = \begin{cases}
S \tnxTV{\varepsilon} \sigma Z.\Phi & \text{if $\node{n} = \node{r}_1$} \\[6pt]
S \tnxTV{(U = \sigma Z.\Phi)} U & \text{if $\node{n} = \node{r}_2, U$ fresh} \\[6pt]
\lambda_S(\node{n})[Z:=U]
& \text{otherwise} \end{cases} \]
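
Schematically, and reading directly off the definitions just given, the top of $\mathbb{T}_\sigma$ thus consists of the two new nodes followed by the relabeled root of $\mathbb{T}_S$:
\[
\node{r}_1 : S \tnxTV{\varepsilon} \sigma Z.\Phi
\;\xrightarrow{\;\sigma Z\;}\;
\node{r}_2 : S \tnxTV{(U = \sigma Z.\Phi)} U
\;\xrightarrow{\;\textnormal{Un}\;}\;
\node{r} : S \tnxTV{(U = \sigma Z.\Phi)} \Phi[Z := U],
\]
with the rest of $\mathbb{T}_\sigma$ below $\node{r}$ inherited, node by node, from $\mathbb{T}_S$.
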
To finish the proof we must establish that $\mathbb{T}_\sigma$ is a successful TNF tableau compliant with $(S,\prec)$. That $\mathbb{T}_\sigma$ is a tableau follows from the fact that $\mathbb{T}_S$ is a successful tableau. In particular, sequents $\lambda_\sigma(\node{r}_1), \lambda_\sigma(\node{r}_2)$ and $\lambda_\sigma(\node{r})$ represent valid applications of rules $\rho_\sigma(\node{r}_1)$ and $\rho_\sigma(\node{r}_2)$. Moreover, since $\lambda_\sigma(\node{n}) = S_\node{n} \tnxTV{(U = \sigma Z.\Phi)} \Phi_n[Z:=U]$, where $\lambda_S(\node{n}) = S_\node{n} \tnxT{\val{V}_S}{\varepsilon} \Phi_n$, it can be seen that for each non-leaf $\node{n}$, $\lambda_\sigma(\node{n})$ and $\lambda_\sigma(cs(\node{n}))$ are consistent with rule application $\rho_\sigma(\node{n})$, based on the structure of $\mathbb{T}_S$. Finally, consider $\sigma$-leaf $\node{n}$ in $\mathbb{T}_\sigma$; that is, $\lambda_\sigma(\node{n}) = S_\node{n} \tnxTV{(U = \sigma Z.\Phi)} U$. Since $\mathbb{T}_S$ is successful we have that $\node{n}$ is successful and that $\lambda_S(\node{n}) = S_\node{n} \tnxT{\val{V}_S}{\varepsilon} Z$, meaning that $S_\node{n} \subseteq \val{V}_S(Z) = \preimg{{\prec}}{S} \subseteq S$. Since $\node{r}_2$ is the only node with an application of rule Un, it must be the companion node of $\node{n}$. Since $\textit{st}(\lambda_\sigma(\node{r}_2)) = S$, $\node{n}$ is terminal, and $\mathbb{T}_\sigma$ is indeed a tableau.
The fact that $\mathbb{T}_S$ is TNF and that definitional constant $U$ is unfolded only once in $\mathbb{T}_\sigma$ guarantees that $\mathbb{T}_\sigma$ is TNF.
We now argue that $\mathbb{T}_\sigma$ is compliant with $(S, \prec)$. To this end, suppose that $s_1, s_2 \in S$ are such that $s_2 <:_{\node{r}_2} s_1$ in $\mathbb{T}_\sigma$; we must show that $s_2 \prec s_1$. This follows immediately from the fact that $s_2 <:_{\node{r}_2} s_1$ in $\mathbb{T}_\sigma$ iff there is a leaf $\node{n}$ such that $\textit{fm}(\lambda_S(\node{n})) = Z$, $s_2 \in \textit{st}(\lambda_S(\node{n}))$, and $s_2 <:_{\node{n},\node{r}} s_1$ in $\mathbb{T}_S$. Previous arguments then establish that $s_2 \prec s_1$.
To establish that $\mathbb{T}_\sigma$ is successful we must show that each of its leaves is successful. Suppose $\node{n}$ is a leaf that is not a $\sigma$-leaf; that is, $\textit{fm}(\lambda_\sigma(\node{n})) \neq U$. In this case $\lambda_\sigma(\node{n}) = \lambda_S(\node{n})$, and the success of $\mathbb{T}_S$ and all its leaves guarantees the success of this leaf. Now suppose that $\node{n}$ is a $\sigma$-leaf, meaning that $\textit{fm}(\lambda_\sigma(\node{n})) = U$. If $\sigma = \nu$ then this leaf is successful. If $\sigma = \mu$ then we note that, since $(S, \prec)$ is a $\sigma$-compatible support ordering and $\sigma = \mu$, $\prec$ must be well-founded. Compliance of $\mathbb{T}_\sigma$ with $\prec$ guarantees that $<:_{\node{r}_2}$ is also well-founded, and thus $\node{n}$ is successful in this case also. This completes the proof. \qedhere \end{proof}
\noindent As an immediate corollary, we have the following.
\begin{corollary}\label{cor:single-fixpoint-completeness} Fix $\mathcal{T}$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $\Phi$ is fixpoint-free and $S = \semTV{\sigma Z.\Phi}$. Then $S \tnxTV{\varepsilon} \sigma Z.\Phi$ has a successful tableau. \end{corollary} \begin{proof} Follows from Lemma~\ref{lem:single-fixpoint-completeness} and the fact that every $\sigma$-maximal support ordering $(S, \prec)$ for $\semfTV{Z}{\Phi}$ is total and qwf. \qedhere \end{proof}
We now state and prove a generalization of Lemma~\ref{lem:single-fixpoint-completeness} in which the body of the fixpoint formula is allowed also to have fixpoint subformulas.
\begin{lemma}[Fixpoint completeness]\label{lem:fixpoint-completeness} Fix $\mathcal{T}$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $\semfTV{Z}{\Phi}$. Then $S \tnxTV{\varepsilon} \sigma Z.\Phi$ has a successful TNF tableau compliant with $(S, \prec)$.
\end{lemma} \remove{ \begin{proofsketch} Fix $\mathcal{T} = \lts{\states{S}}$ of sort $\Sigma$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $\semfTV{Z}{\Phi}$. The proof proceeds by strong induction on the number of fixpoint subformulas in $\Phi$. If $\Phi$ contains no fixpoint subformulas then the result follows from Lemma~\ref{lem:single-fixpoint-completeness}. If $\Phi$ does contain fixpoint subformulas, then we select a maximal such formula of form $\sigma' Z'.\Gamma$ and use Lemma~\ref{lem:nested-fixpoint-semantics} to decompose $\sigma Z.\Phi$ into $\sigma Z.\Phi'$ and $\sigma'Z'.\Gamma$, with associated semantic functions $f$ and $g$, such that $\semTV{\sigma Z.\Phi} = \sigma (f [\sigma'] g)$. From this we use Lemma~\ref{lem:fg-support} to obtain appropriate $\sigma'$-compatible, total, qwf support orderings for $g_{(\cdot, \sigma')}(Q) $ from $(S,\prec)$ for equivalence classes $Q \subseteq S$ in the quotient of $(S, \prec)$. The induction hypothesis then guarantees successful TNF tableaux involving $\sigma Z.\Phi'$ and $\sigma'Z'.\Gamma$; we then give constructions for combining these tableaux into a successful TNF tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $(S,\prec)$. The detailed proof is included in the appendix. \qedhere \end{proofsketch} } \begin{proof} Fix $\mathcal{T} = \lts{\states{S}}$ of sort $\Sigma$. We prove the following: for all $\Phi$, and $Z, \val{V}, \sigma$ and $S$ with $S = \semTV{\sigma Z.\Phi}$, and $\sigma$-compatible, total qwf support ordering $(S,\prec)$ for $\semfTV{Z}{\Phi}$, $S \tnxTV{\varepsilon} \sigma Z.\Phi$ has a successful TNF tableau $\mathbb{T}_\Phi$ that is compliant with $(S,\prec)$. To simplify notation we use the following abbreviations. \begin{align*} f_\Phi &= \semfTV{Z}{\Phi} \\ \val{V}_X &= \val{V}[Z := \preimg{{\prec}}{X}] \end{align*} Note that $\val{V}_S = \val{V}[Z := \preimg{{\prec}}{S}]$. When $s \in S$ we also write $\val{V}_s$ in lieu of $V_{\{ s\}}$.
The proof proceeds by strong induction on the number of fixpoint subformulas of $\Phi$. There are two cases to consider. In the first case, $\Phi$ contains no fixpoint formulas. Lemma~\ref{lem:single-fixpoint-completeness} immediately gives the desired result.
In the second case, $\Phi$ contains at least one fixpoint subformula. The outline of the proof in this case is as follows. \begin{enumerate}
\item\label{it:step-decompose}
We decompose $\Phi$ into $\Phi'$, which uses a new free variable $W$, and $\sigma' Z'.\Gamma$ in such a way that $\Phi = \Phi'[W:=\sigma' Z'.
\Gamma]$.
\item\label{it:step-outer-tableau}
We inductively construct a successful TNF tableau $\mathbb{T}_{\Phi'}$ for $S \tnxT{\val{V}'}{\varepsilon} \sigma Z.\Phi'$ that is compliant with $(S,\prec)$, where:
\begin{align*}
S' &= \semT{\sigma'Z'.\Gamma}{\val{V}_S} \\
\val{V}' &= \val{V}[W:=S'].
\end{align*}
($S'$ may be seen as the semantic content of $\sigma'Z'.\Gamma$ relevant for $\semTV{\sigma Z.\Phi}$.)
\item\label{it:step-inner-tableau}
We construct a successful TNF tableau $\mathbb{T}_\Gamma$ satisfying a compliance-related property for $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$ by merging inductively constructed tableaux involving subsets of $S'$.
\item\label{it:step-tableau-composition}
We show how to compose $\mathbb{T}_{\Phi'}$ and $\mathbb{T}_\Gamma$ to yield a successful TNF tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $(S,\prec)$. \end{enumerate}
We now work through each of these proof steps.
\paragraph{Step~\ref{it:step-decompose} of proof outline: decompose $\Phi$.}
Let $\sigma' Z'.\Gamma$ be a maximal fixpoint subformula in $\Phi$ as defined previously in this section. Also let $W \in \textnormal{Var}\xspace$ be a fresh propositional variable, and define $\Phi'$ so that it contains exactly one instance of $W$ and so that \[ \Phi = \Phi'[W := \sigma' Z'.\Gamma] \] ($\Phi'$ is obtained by replacing one maximal instance of $\sigma' Z'.\Gamma$ in $\Phi$ by $W$.) Note that $\Phi'$ and $\Gamma$ contain strictly fewer fixpoint subformulas than $\Phi$. Lemma~\ref{lem:nested-fixpoint-semantics} may now be applied to conclude that $f_\Phi = f [\sigma'] g$, where $f, g \in 2^{\states{S}} \times 2^{\states{S}} \rightarrow 2^{\states{S}}$ are defined as follows. \begin{align*} f(X,Y) &= \semT{\Phi'}{\val{V}[Z, W := X, Y]}\\ g(X,Y) &= \semT{\Gamma}{\val{V}[Z, Z' := X, Y]} \end{align*}
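
As a purely illustrative, hypothetical instance of this decomposition, suppose
\[
\Phi = \dia{a}Z \lor \nu Z'.([b]Z' \land Z),
\]
whose only (and hence maximal) fixpoint subformula is $\sigma'Z'.\Gamma = \nu Z'.([b]Z' \land Z)$. Taking $W$ fresh we obtain $\Phi' = \dia{a}Z \lor W$ and $\Gamma = [b]Z' \land Z$, so that $\Phi = \Phi'[W := \nu Z'.\Gamma]$, and the two semantic functions specialize to
\begin{align*}
f(X,Y) &= \semT{\dia{a}Z \lor W}{\val{V}[Z, W := X, Y]}, \\
g(X,Y) &= \semT{[b]Z' \land Z}{\val{V}[Z, Z' := X, Y]}.
\end{align*}
Both $\Phi'$ and $\Gamma$ are fixpoint-free in this instance, so the induction hypothesis applies directly to each.
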
It is the case that $(S,\prec)$ is a $\sigma$-compatible, total qwf support ordering for $f [\sigma'] g$ since it is for $f_\Phi$ and $f_\Phi = f[\sigma'] g$.
\paragraph{Step~\ref{it:step-outer-tableau} of proof outline: construct tableau for $S \tnxT{\val{V}'}{\varepsilon} \sigma Z.\Phi'$.} Since $\Phi'$ has strictly fewer fixpoint subformulas than $\Phi$, we wish to apply the induction hypothesis to infer the existence of successful TNF tableau $\mathbb{T}_{\Phi'}$ for $S \tnxT{\val{V}'}{\varepsilon} \sigma Z.\Phi'$ that is compliant with $(S,\prec)$. To do this it suffices to confirm that $S = \semT{\sigma Z.\Phi'}{\val{V}'} = \sigma f_{\Phi'}$, where $f_{\Phi'} = \semfT{Z}{\Phi'}{\val{V}'}$, and that $(S,\prec)$ is a support ordering for $f_{\Phi'}$ (it is already $\sigma$-compatible, total and qwf).
We begin by showing that $(S,\prec)$ is a support ordering for $f_{\Phi'}$; to do so we must establish that for every $s \in S$, $s \in f_{\Phi'}(\preimg{{\prec}}{s})$. It suffices to show that for every $s \in S$, $f_{\Phi}(\preimg{{\prec}}{s}) \subseteq f_{\Phi'}(\preimg{{\prec}}{s})$, as the fact that $(S,\prec)$ is a support ordering for $f_{\Phi}$ guarantees that $s \in f_{\Phi}(\preimg{{\prec}}{s}) \subseteq f_{\Phi'}(\preimg{{\prec}}{s})$. So fix $s \in S$; we reason as follows.
\begin{align*} f_{\Phi} (\preimg{{\prec}}{s}) &= \semT{\Phi}{\val{V}_s} && \text{Definition of $f_{\Phi}$} \\ &= \semT{\Phi'[W:=\sigma'Z'.\Gamma]}{\val{V}_s} && \Phi = \Phi'[W:=\sigma'Z'.\Gamma] \\ &= \semT{\Phi'}{\val{V}_s[W := \semT{\sigma'Z'.\Gamma}{\val{V}_s}]} && \text{Lemma~\ref{lem:substitution}} \\ &\subseteq \semT{\Phi'}{\val{V}_s[W := \semT{\sigma'Z'.\Gamma}{\val{V}_S}]} && \text{$\val{V}_s(Y) \subseteq \val{V}_S(Y)$ for all $Y$; see below} \\ &= \semT{\Phi'}{\val{V}_s[W := S']} && \text{Definition of $S'$} \\ &= \semT{\Phi'}{\val{V}'_s} && \text{where $\val{V}'_s = \val{V}'[Z := \preimg{{\prec}}{s}] = \val{V}_s[W := S']$} \\ &= f_{\Phi'}(\preimg{{\prec}}{s}) && \text{Definition of $f_{\Phi'}$} \end{align*}
To see that $\val{V}_s(Y) \subseteq \val{V}_S(Y)$ holds for all $Y$, note that $\val{V}_s(Y) = \val{V}_S(Y)$ when $Y \neq Z$ and \[ \val{V}_s(Z) = \preimg{{\prec}}{s} \subseteq \preimg{{\prec}}{S} = \val{V}_S(Z). \] Monotonicity then guarantees that $ \semT{\sigma'Z'.\Gamma}{\val{V}_s} \subseteq \semT{\sigma'Z'.\Gamma}{\val{V}_S}, $ justifying this step of the argument.
To prove that $S = \semT{\sigma Z.\Phi'}{\val{V}'}$ note that since we just established that $(S,\prec)$ is a $\sigma$-compatible support ordering for $f_{\Phi'}$, Theorem~\ref{thm:well-supported} and Corollary~\ref{cor:support-fixpoints} guarantee that $S \subseteq S''$, where $S'' = \sigma f_{\Phi'} = \semT{\sigma Z.\Phi'}{\val{V}'}$. It remains to show that $S = S''$. Suppose to the contrary that $S \neq S''$. Since we already know that $S \subseteq S''$ it must be the case that $S \subsetneq S''$. Based on Theorem~\ref{thm:well-supported} and Corollary~\ref{cor:support-fixpoints} it follows that there must exist a relation ${\prec'} \subseteq S'' \times S''$ such that $(S'',\prec')$ is a $\sigma$-compatible support ordering for $f_{\Phi'}$. However, this yields a contradiction, because we can then construct a relation ${\prec''} \subseteq S'' \times S''$ such that $(S'', \prec'')$ is a $\sigma$-compatible support ordering for $f_{\Phi}$. As $S = \sigma f_\Phi$ this would imply that $S'' \subseteq S$, and thus that $S = S''$. The construction of $\prec''$ is as follows. \[ {\prec''} = {\prec} \;\cup\; \left((\preimg{{\prec}}{S}) \times (S'' \setminus S)\right)
\;\cup\; \{ (s_1, s_2) \mid s_2 \in S'' \setminus S \land s_1 \prec' s_2 \} \] Intuitively, $s_1 \prec'' s_2$ when one of the following hold. \begin{itemize}
\item $s_1 \prec s_2$ (in this case, $s_1, s_2 \in S$); or
\item $s_1 \in \preimg{{\prec}}{S}$ (so $s_1 \in S$ also) and $s_2 \in S''$ but $s_2 \not\in S$; or
\item $s_1 \prec' s_2$ and $s_1, s_2 \not\in S$ \end{itemize} Based on this definition the following can easily be established. \[ \preimg{{{\prec}''}}{s} = \begin{cases}
\preimg{{\prec}}{s} & \text{if $s \in S$} \\
\preimg{{\prec}}{S} \cup \preimg{{\prec'}}{s} & \text{if $s \in S''\setminus S$} \end{cases} \] Note that if $\prec$ and $\prec'$ are well-founded then so is $\prec''$, so $(S'', \prec'')$ must be $\sigma$-compatible. We now show that $(S'', \prec'')$ is a support ordering for $f_{\Phi}$ by establishing that for every $s \in S''$, $s \in f_{\Phi}(\preimg{{\prec''}}{s})$. There are two cases to consider. When $s \in S$ this is immediate from the fact that $\preimg{{\prec''}}{s} = \preimg{{\prec}}{s}$; since $(S,\prec)$ is a support ordering for $f_{\Phi}$ this means $s \in f_\Phi(\preimg{{\prec}}{s}) = f_\Phi(\preimg{{\prec''}}{s})$. Now suppose $s \in S'' \setminus S$; then $\preimg{{\prec''}}{s} = \preimg{{\prec}}{S} \cup \preimg{{\prec'}}{s}$. This implies that $\preimg{{\prec}}{S} \subseteq \preimg{{\prec''}}{s}$. Now recall that $S' = \semT{\sigma' Z'.\Gamma}{\val{V}_S}$, and that $\val{V}_S = \val{V}[Z := \preimg{{\prec}}{S}]$. It follows that $S' \subseteq \semT{\sigma' Z'.\Gamma}{\val{V}''}$, where $\val{V}'' = \val{V}[Z := \preimg{{\prec''}}{s}]$, and from this we may reason as follows. \begin{align*} f_{\Phi'} (\preimg{{\prec''}}{s}) &= \semT{\Phi'}{\val{V}'[Z := \preimg{{\prec''}}{s}]} \\ &= \semT{\Phi'}{\val{V}[Z, W := \preimg{{\prec''}}{s}, S']} \\ &\subseteq \semT{\Phi'}{\val{V}[Z, W := \preimg{{\prec''}}{s}, \semT{\sigma' Z'.\Gamma}{\val{V}[Z := \preimg{{\prec''}}{s}]}]} \\ &= \semT{\Phi'[W := \sigma' Z'.\Gamma]}{\val{V}[Z := \preimg{{\prec''}}{s}]} \\ &= \semT{\Phi}{\val{V}[Z := \preimg{{\prec''}}{s}]} \\ &= f_\Phi(\preimg{{\prec''}}{s}). \end{align*} Based on this and the fact that $s \in f_{\Phi'} (\preimg{{\prec''}}{s})$ (which holds because $s \in f_{\Phi'}(\preimg{{\prec'}}{s})$, $\preimg{{\prec'}}{s} \subseteq \preimg{{\prec''}}{s}$, and $f_{\Phi'}$ is monotonic), we have that $s \in f_{\Phi} (\preimg{{\prec''}}{s})$ as well, and $(S'',\prec'')$ is a support ordering for $f_\Phi$ in addition to $f_{\Phi'}$. We have arrived at the contradiction we set out to establish, and it must be the case that $S = S''$. Thus $S = \semT{\sigma Z.\Phi'}{\val{V}'}$ and $(S, \prec)$ is a $\sigma$-compatible support ordering for $f_{\Phi'} = \semfT{Z}{\Phi'}{\val{V}'}$. We may now apply the induction hypothesis to infer the existence of successful TNF tableau \[ \mathbb{T}_{\Phi'} = (\tree{T}_{\Phi'}, \rho_{\Phi'}, \mathcal{T}, \val{V}', \lambda_{\Phi'}), \] where $\tree{T}_{\Phi'} = (\node{N}_{\Phi'}, \node{r}_{\Phi'}, p_{\Phi'}, cs_{\Phi'})$, such that $\lambda_{\Phi'}(\node{r}_{\Phi'}) = S \tnxT{\val{V}'}{\varepsilon} \sigma Z.\Phi'$ and $\mathbb{T}_{\Phi'}$ is compliant with $(S, \prec)$. It is also easy to see that $\mathbb{T}_{\Phi'}$ contains exactly one successful leaf $\node{n}_W$ such that $\textit{fm}(\lambda_{\Phi'}(\node{n}_W)) = W$.
\paragraph{Step~\ref{it:step-inner-tableau} of proof outline: construct tableau for $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$.}
Since $\Gamma$ contains strictly fewer fixpoint subformulas than $\Phi$, the induction hypothesis also guarantees the existence of certain successful TNF tableaux involving $\sigma' Z'.\Gamma$. We remark on these tableaux and show how to use them to construct a successful TNF tableau $\mathbb{T}_\Gamma$ for sequent $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$ that also ensures a key property, formalized below as Property~\ref{goal:dependencies}, involving extended dependencies between $s \in S$ and states in $\semT{\sigma'Z'.\Gamma}{\val{V}_s}$. In what follows, for any $X \subseteq S$ define \begin{align*} g_X &= g_{(\preimg{{\prec}}{X},\cdot)} \\ S'_X &= \sigma' g_X \end{align*} That is, $g_X \in 2^{\states{S}} \to 2^{\states{S}}$ computes the semantics of $\Gamma$, which has both $Z$ and $Z'$ free, with the semantics of $Z$ fixed to be $\preimg{{\prec}}{X}$ and $Z'$ interpreted as the input provided to $g_X$. $S'_X$ is then the $\sigma'$ fixpoint of $g_X$. It can easily be seen that for any $X \subseteq S$, $\preimg{{\prec}}{X} \subseteq \preimg{{\prec}}{S}$ and thus \[ S'_X = \sigma' g_X \subseteq \sigma' g_S = \semT{\sigma'Z'.\Gamma}{\val{V}_S} = S'. \]
In order to apply the induction hypothesis to generate successful TNF compliant tableaux for sequents involving $\sigma'Z'.\Gamma$ we also need $\sigma'$-compatible, total qwf support orderings for the values of $g_X$ we wish to consider. We obtain these from Lemma~\ref{lem:fg-support} as follows. Recall that $(S,\prec)$ is a $\sigma$-compatible total qwf support ordering for $f_\Phi$, and that $f_\Phi = f [\sigma'] g$. The lemma guarantees the existence of a $\sigma'$-compatible, total qwf support ordering $(S', \prec')$ for $g_S$ that is locally consistent\footnote{See Definition~\ref{def:consistent-support-orderings} for the meaning of local consistency.} with $(S,\prec)$.
Now let partial order $(Q_\prec, \sqsubseteq)$ be the quotient of $(S,\prec)$ (cf.\/ Definition~\ref{def:relation-quotient}), with ${\sqsubset} = {\sqsubseteq^-}$ the irreflexive core of $\sqsubseteq$. Totality of $\prec$ guarantees that if $Q_1 \sqsubseteq Q_2$ then $\preimg{{\prec}}{Q_1} \subseteq \preimg{{\prec}}{Q_2}$. If $x \in S$ then we write $[x] \in Q_\prec$ for the equivalence class of $x$. Since $\prec$ is total it follows that for all $x, x'$, if $x' \in [x]$ then also $x \in [x']$ and $\preimg{{\prec}}{x} = \preimg{{\prec}}{x'}$.
The construction we present below for $\mathbb{T}_\Gamma$ proceeds in three steps. \begin{itemize}
\item
For each $Q \in Q_\prec$ we inductively construct a successful TNF tableau $\mathbb{T}_{\Gamma,Q}$ for sequent $S'_Q \tnxT{\val{V}_Q}{\varepsilon} \sigma' Z'.\Gamma$ that is compliant with a subrelation of $\prec'$.
\item
We then merge the individual $\mathbb{T}_{\Gamma,Q}$ to form a successful TNF tableau $\mathbb{T}'_\Gamma$ compliant with $\prec'$ whose root sequent contains as its state set the union of all the individual root-sequent state sets of the $\mathbb{T}_{\Gamma,Q}$.
\item
We perform a final operation to obtain $\mathbb{T}_\Gamma$. \end{itemize}
\textit{Constructing $\mathbb{T}_{\Gamma,Q}$.}
We begin by noting that since $(S',\prec')$ is locally consistent with $(S,\prec)$ and $\preimg{{\prec}}{x} = \preimg{{\prec}}{x'}$ if $x \in [x']$ it follows that for any $Q \in Q_\prec$, $(S'_Q, \prec'_Q)$, where ${\prec'_Q} = \restrict{{\prec'}}{S'_Q}$, is a $\sigma'$-compatible, total qwf support ordering for $g_Q$. Thus, based on the induction hypothesis there exists, for each $Q \in Q_\prec$, a successful TNF tableau \[ \mathbb{T}_{\Gamma,Q} = (\tree{T}_{\Gamma,Q}, \rho_{\Gamma,Q}, \mathcal{T}, \val{V}_Q, \lambda_{\Gamma,Q}) \] for $S'_Q \tnxT{\val{V}_Q}{\varepsilon} \sigma'Z'.\Gamma$, where $\tree{T}_{\Gamma,Q} = (\node{N}_{\Gamma,Q}, \node{r}_{\Gamma,Q}, p_{\Gamma,Q}, cs_{\Gamma,Q})$, that is compliant with $\prec'_Q$.
We now note that Lemma~\ref{lem:structural-equivalence-of-TNF-tableaux} guarantees that for all $Q, Q' \in Q_\prec$, $\mathbb{T}_{\Gamma,Q}$ and $\mathbb{T}_{\Gamma,Q'}$ are structurally equivalent. This means: \begin{enumerate}
\item $\tree{T}_{\Gamma,Q}$ and $\tree{T}_{\Gamma,Q'}$ are isomorphic; and
\item For isomorphic nodes $\node{n}$ and $\node{n}'$ in
$\tree{T}_{\Gamma,Q}$ and $\tree{T}_{\Gamma,Q'}$, respectively, $\textit{rn}(\rho_{\Gamma,Q}(\node{n})) = \textit{rn}(\rho_{\Gamma,Q'}(\node{n}'))$,
$\textit{fm}(\lambda_{\Gamma,Q}(\node{n})) = \textit{fm}(\lambda_{\Gamma,Q'}(\node{n}'))$, and
$\textit{dl}(\lambda_{\Gamma,Q}(\node{n})) = \textit{dl}(\lambda_{\Gamma,Q'}(\node{n}'))$. \end{enumerate} In other words, the only differences between these tableaux are the state sets appearing in the sequents at each tree node and the witness functions used in rule applications involving $\dia{K}$. We consequently assume in what follows that there is a single common tree $\tree{T}_\Gamma = (\node{N}_\Gamma, \node{r}_\Gamma, p_\Gamma, cs_\Gamma)$. We also introduce the following functions with respect to $\node{N}_\Gamma$ that return the common elements in the sequents and rule applications labeling the tree nodes. \begin{align*}
\textit{fm}_\Gamma(\node{n}) &- \text{the formula labeling $\node{n}$ in all the $\mathbb{T}_{\Gamma,Q}$}
\\
\textit{dl}_\Gamma(\node{n}) &- \text{the definition list labeling $\node{n}$ in all the $\mathbb{T}_{\Gamma,Q}$}
\\
\textit{rn}_\Gamma(\node{n}) &- \text{the rule name in the rule application for $\node{n}$ in all the $\mathbb{T}_{\Gamma,Q}$} \end{align*}
\textit{Constructing $\mathbb{T}'_\Gamma$.}
When $Q_\prec \neq \emptyset$ we can merge the non-empty set of tableaux $\{ \mathbb{T}_{\Gamma,Q} \mid Q \in Q_\prec\}$ into a single successful tableau $\mathbb{T}'_\Gamma = (\tree{T}_\Gamma, \rho'_\Gamma, \mathcal{T}, \val{V}_{S}, \lambda'_\Gamma)$, sharing the same tree $\tree{T}_\Gamma$ and functions $\textit{rn}_\Gamma$, $\textit{fm}_\Gamma$ and $\textit{dl}_\Gamma$ as the $\mathbb{T}_{\Gamma,Q}$, with the following properties. \begin{enumerate}[left = \parindent, label = G\arabic*., ref = G\arabic*]
\item\label{goal:root}
$\lambda'_\Gamma(\node{r}_\Gamma) = \left(\bigcup_{Q \in Q_{\prec}} S'_Q \right) \tnxT{\val{V}_S}{\varepsilon} \sigma' Z'.\Gamma$.
\item\label{goal:dependencies}
For all $x \in S$, $y \in S'_{[x]}$, and $x' <:_{\node{n}',\node{r}_\Gamma} y$ in $\mathbb{T}'_\Gamma$ such that $\textit{fm}_\Gamma(\node{n}') = Z$, $x' \prec x$. \end{enumerate}
Property~\ref{goal:dependencies} is important later in the proof, and we comment on it briefly here. The property asserts that if $y$ is a state in the root of $\mathbb{T}_{\Gamma,[x]}$ for some $x \in S$, and if there is an extended dependency in $\mathbb{T}'_\Gamma$ between $y$ and some state $x'$ in a $Z$-leaf ($Z$ being the bound variable in $\sigma Z.\Phi$), then $x' \prec x$. In other words, $y$ can only depend on states in $Z$-leaves that (semantically) support $x$.
As $\tree{T}_\Gamma, \mathcal{T}$ and $\val{V}_S$ are already defined, completing the construction of $\mathbb{T}'_\Gamma$ only requires that we define $\rho'_\Gamma$ and $\lambda'_\Gamma$, which we do so that the following invariants hold for each $\node{n} \in \node{N}$. \begin{invariants}
\item
If $\rho'_\Gamma(\node{n})$ is defined then the sequents assigned by $\lambda'_\Gamma$ to $\node{n}$ and its children are consistent with rule application $\rho'_\Gamma(\node{n})$.
\item
$\textit{fm}(\lambda'_\Gamma(\node{n})) = \textit{fm}(\node{n})$
\item
$\textit{st}(\lambda'_\Gamma(\node{n})) \subseteq \bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma,Q}(\node{n}))$
\item\label{inv:definition-list}
$\textit{dl}(\lambda'_\Gamma(\node{n})) = \textit{dl}_\Gamma(\node{n})$ \end{invariants} These invariants are suitably updated versions of the invariants appearing in the proof of Lemma~\ref{lem:single-fixpoint-completeness}.
The definitions of $\rho'_\Gamma$ and $\lambda'_\Gamma$ are given in a co-inductive fashion (i.e.\/ ``from the root down" rather than ``the leaves up"). More specifically, the construction first assigns a sequent to $\node{r}_\Gamma$, the root of $\tree{T}_\Gamma$ that ensures that Property~\ref{goal:root} holds. It immediately follows that Properties~\ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} also hold for the root. Then for any non-leaf node $\node{n}$ whose sequent satisfies \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list}, the co-induction step defines sequents for each child of $\node{n}$ so that these sequents each satisfy \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} and so that Property~\ref{inv:rule} holds for $\node{n}$. Property~\ref{goal:dependencies} will be proved later.
We begin by defining $\lambda'_\Gamma(\node{r}_\Gamma) = \left(\bigcup_{Q \in Q_{\prec}} S'_Q \right) \tnxT{\val{V}_S}{\varepsilon} \sigma' Z'.\Gamma$. Property~\ref{goal:root} is immediate, as is the fact that \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} hold for $\node{r_\Gamma}$.
For the co-inductive step, suppose $\lambda'_\Gamma(\node{n}) = S_\node{n} \tnxT{\val{V}_S}{\Delta_{\node{n}}} \Gamma_{\node{n}}$ and that invariants \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} all hold for $\lambda'_\Gamma(\node{n})$. We define $\lambda'_\Gamma(\node{n}')$ for each child $\node{n}'$ of $\node{n}$, and $\rho'_{\Gamma}(\node{n})$, the rule application for $\node{n}$, so as to ensure that invariant \ref{inv:rule} holds for $\node{n}$ and that \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} are established for each child $\node{n}'$. The constructions in many cases closely match those found in Lemma~\ref{lem:single-fixpoint-completeness}. \begin{description}
\item[$\textit{rn}_\Gamma(\node{n}) = {\perp}$, or $\textit{rn}_\Gamma(\node{n}) \in \{\land,\lor, {[K]},\dia{K}\}$.]
The constructions in this case mirror those in Lemma~\ref{lem:single-fixpoint-completeness} for $\lambda_S$; the only difference is that the definition lists in the child sequents are inherited from the parent, rather than always being $\varepsilon$. It is straightforward to see that invariant~\ref{inv:rule} holds for $\node{n}$ while \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} hold for the children of $\node{n}$.
\item[$\textit{rn}_\Gamma(\node{n}) = \sigma Z''$.]
In this case $\textit{fm}_\Gamma(\node{n}) = \sigma''Z''.\Gamma'$ for some $\sigma''$, $Z''$ and $\Gamma'$; $cs(\node{n}) = \node{n}'$;
$\textit{dl}_\Gamma(\node{n}') = \Delta_{\node{n}'} = \Delta_{\node{n}} \cdot (U' = \textit{fm}_\Gamma(\node{n}))$ for some $U' \not\in \operatorname{dom}(\Delta_{\node{n}})$;
and $\textit{fm}_\Gamma(\node{n}') = U'$. Define $\rho'_{\Gamma}(\node{n}) = \sigma Z''$ and $\lambda'_\Gamma(\node{n}') = S_{\node{n}} \tnxT{\val{V}_S}{\Delta_{\node{n}'}} U'$.
It is easy to establish that invariant~\ref{inv:rule} holds for $\node{n}$ and that \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} hold for $\node{n}'$.
\item[$\textit{rn}_\Gamma(\node{n}) = \textnormal{Un}$.]
In this case $cs(\node{n}) = \node{n}'$,
$\textit{dl}_\Gamma(\node{n}') = \Delta_\node{n}$
and $\textit{fm}_\Gamma(\node{n}) = U'$ for some $U' \in \operatorname{dom}(\Delta_{\node{n}})$. Let $\Delta_{\node{n}}(U') = \sigma''Z''.\Gamma'$; then $\textit{fm}_\Gamma(\node{n}') = \Gamma'[Z'' := U']$.
Define
$\rho'_\Gamma(\node{n}) = \text{Un}$ and
$\lambda'_\Gamma(\node{n}') = S_{\node{n}} \tnxT{\val{V}_S}{\Delta_{\node{n}}}
\textit{fm}_\Gamma(\node{n}')$.
It is easy to establish that invariant~\ref{inv:rule} holds for $\node{n}$ and that \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} hold for $\node{n}'$.
\item[$\textit{rn}_\Gamma(\node{n}) = \textnormal{Thin}.$]
In this case $cs(\node{n}) = \node{n}'$,
$\textit{dl}_\Gamma(\node{n}') = \Delta_{\node{n}}$ and
$\textit{fm}_\Gamma(\node{n}) = \textit{fm}_\Gamma(\node{n}')$.
Take $S_{\node{n}'} = \bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma,Q}(\node{n}'))$, and
define
$\rho'_\Gamma(\node{n}) = \text{Thin}$ and
$\lambda'_\Gamma(\node{n}') = S_{\node{n}'} \tnxT{\val{V}_S}{\Delta_{\node{n}}} \textit{fm}_\Gamma(\node{n}')$.
Invariant~\ref{inv:rule} can be shown to hold for $\node{n}$, while \ref{inv:formula}, \ref{inv:state-set} and \ref{inv:definition-list} hold for $\node{n}'$.
(Indeed, a stronger version of \ref{inv:state-set} holds in this case, as $\textit{st}(\lambda'_\Gamma(\node{n}')) = \bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma,Q}(\node{n}'))$.) \end{description}
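
For a small hypothetical illustration of the Thin case, suppose $Q_\prec = \{Q_1, Q_2\}$ and the Thin child $\node{n}'$ carries $\textit{st}(\lambda_{\Gamma,Q_1}(\node{n}')) = \{t_1\}$ and $\textit{st}(\lambda_{\Gamma,Q_2}(\node{n}')) = \{t_1, t_2\}$; the construction then assigns
\[
\textit{st}(\lambda'_\Gamma(\node{n}')) = \{t_1\} \cup \{t_1, t_2\} = \{t_1, t_2\}.
\]
Unlike the $\lor$ and $\dia{K}$ cases, no pseudo-minimum selection is involved here: the merged sequent simply takes the union of the component state sets.
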
We now argue that $\mathbb{T}'_\Gamma$ is a successful TNF tableau that is compliant with $\prec'$ and that Property~\ref{goal:dependencies} holds (Property~\ref{goal:root} has already been established). We first must establish that $\mathbb{T}'_\Gamma$ is indeed a tableau. Because of invariant~\ref{inv:rule} it suffices to show that the sequent labeling any leaf node is terminal (cf.\/ Definition~\ref{def:tableau}(\ref{subdef:complete-tableau})). Let $\node{n}$ be a leaf in $\tree{T}_\Gamma$; we note that $\textit{fm}(\node{n})$ has form either $Z''$ or $\lnot Z''$, where $Z''$ is free in $\sigma'Z'.\Gamma$, or $U'$ for some definitional constant $U' \in \operatorname{dom}(\textit{dl}(\node{n}))$. In the first two cases $\node{n}$ is clearly terminal. In the latter case we must argue that $\textit{st}(\lambda'_\Gamma(\node{n})) \subseteq \textit{st}(\lambda'_\Gamma(\node{m}))$, where $\node{m}$ is the intended companion node of $\node{n}$ (i.e.\/ the strict ancestor of $\node{n}$ such that $\textit{fm}_\Gamma(\node{m}) = \textit{fm}_\Gamma(\node{n}) = U'$). From the definition of $\mathbb{T}'_\Gamma$ and the fact that each $\mathbb{T}_{\Gamma,Q}$ is a TNF tableau we observe the following. \begin{enumerate}
\item
$\rho'_\Gamma(\node{m}) = \text{Un}$, and $\node{m}$ is the only internal node whose formula is $U'$.
\item
Since $p_\Gamma(\node{m})$ is the parent of $\node{m}$, $\rho'_\Gamma(p_\Gamma(\node{m})) = \sigma Z''.$ for some $Z''$.
\item
Either $p_\Gamma(\node{m})$ is the root of $\tree{T}_\Gamma$ (i.e.\/ $p(\node{m}) = \node{r}_\Gamma$), or $p(p(\node{m}))$, the grandparent of $\node{m}$, is defined, and $\rho'_\Gamma(p(p(\node{m}))) = \text{Thin}$. In either case, from the definition of the construction it can be shown that $\textit{st}(\lambda'_\Gamma(\node{m})) = \bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma,Q}(\node{m}))$. \end{enumerate} Because each $\mathbb{T}_{\Gamma,Q}$ is a tableau it follows that for each $Q \in Q_\prec$, $\node{n}$ is terminal in $\mathbb{T}_{\Gamma,Q}$ and thus $\textit{st}(\lambda_{\Gamma,Q}(\node{n})) \subseteq \textit{st}(\lambda_{\Gamma,Q}(\node{m}))$. Also, since invariant~\ref{inv:state-set} holds of $\node{n}$ in $\mathbb{T}'_\Gamma$ we have that $\textit{st}(\lambda'_\Gamma(\node{n})) \subseteq \bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma,Q}(\node{n}))$. We can now reason as follows \[ \textit{st}(\lambda'_\Gamma(\node{n})) \subseteq \bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma, Q}(\node{n})) \subseteq \bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma,Q}(\node{m})) = \textit{st}(\lambda'_\Gamma(\node{m})) \] to see that $\node{n}$ is terminal in $\mathbb{T}'_\Gamma$ and thus $\mathbb{T}'_\Gamma$ is indeed a tableau.
We now need to show that $\mathbb{T}'_\Gamma$ is successful and compliant with $\prec'$, and that Property~\ref{goal:dependencies} holds of the tableau. Before doing that, however, we remark on properties of dependency relations in $\mathbb{T}'_\Gamma$ that will be used in the arguments to follow. To begin with, the following holds of all $\node{n}_1, \node{n}_2, s_1$ and $s_2$ such that $s_2 <_{\node{n}_2,\node{n}_1} s_1$ in $\mathbb{T}'_\Gamma$: \begin{quote} for every $Q \in Q_\prec$ such that $s_1 \in \textit{st}(\lambda_{\Gamma, Q}(\node{n}_1))$, either $s_2 \in \textit{st}(\lambda_{\Gamma,Q}(\node{n}_2))$, and thus $s_2 <_{\node{n}_2,\node{n}_1} s_1$ in $\mathbb{T}_{\Gamma,Q}$, or there exists $Q' \sqsubset Q$ such that $s_2 \in \textit{st}(\lambda_{\Gamma,Q'}(\node{n}_2))$. \end{quote} This is immediate from the definition of $<_{\node{n}_2, \node{n}_1}$ (Definition~\ref{def:local_dependency_ordering}) and $\mathbb{T}'_\Gamma$: the need for the $Q'$ case comes from the construction used for rules $\lor$ and $\dia{K}$.\footnote{An analogous property is used in the proof of Lemma~\ref{lem:single-fixpoint-completeness}.} An inductive argument based on the definition of $<:$ lifts this result to $\node{n}_1, \node{n}_2, s_1$ and $s_2$ such that $s_2 <:_{\node{n}_2,\node{n}_1} s_1$ in $\mathbb{T}'_\Gamma$: for all $Q \in Q_\prec$ such that $s_1 \in \textit{st}(\lambda_{\Gamma,Q}(\node{n}_1))$, either $s_2 \in \textit{st}(\lambda_{\Gamma,Q}(\node{n}_2))$ and $s_2 <:_{\node{n}_2,\node{n}_1} s_1$ in $\mathbb{T}_{\Gamma,Q}$, or there exists $Q' \sqsubset Q$ such that $s_2 \in \textit{st}(\lambda_{\Gamma,Q'}(\node{n}_2))$.
We now establish that $\mathbb{T}'_\Gamma$ is successful by showing that every leaf in $\mathbb{T}'_\Gamma$ is successful (cf.\/ Definition~\ref{def:successful-tableau}), which amounts to showing that for each leaf $\node{n}$, $\textit{st}(\lambda'_\Gamma(\node{n})) \subseteq \semT{\textit{fm}_\Gamma(\node{n})}{\val{V}_S}$. There are four cases to consider. \begin{description}
\item[$\textit{fm}_\Gamma(\node{n}) = Z$.]
Analogous to the same case in the proof of Lemma~\ref{lem:single-fixpoint-completeness}.
\item[$\textit{fm}_\Gamma(\node{n}) \in \{Z'', \lnot Z''\}$ for some $Z'' \neq Z$ free in $\sigma'Z'.\Gamma$.]
Analogous to the same case in the proof of Lemma~\ref{lem:single-fixpoint-completeness}.
\item[$\node{n}$ is a $\nu$-leaf.]
In this case $\textit{fm}_\Gamma(\node{n})$ is successful by definition.
\item[$\node{n}$ is a $\mu$-leaf.]
Let $\node{m}$ be the companion node of $\node{n}$; we must show that $<:_\node{m}$ is well-founded in $\mathbb{T}'_\Gamma$.
Suppose to the contrary that this is not the case, i.e. that there is an infinite descending chain
$\cdots <:_\node{m} s_2 <:_\node{m} s_1$
with each $s_i \in \textit{st}(\lambda'_\Gamma(\node{m}))$.
From the definition of $<:$ (cf.\/ Definition~\ref{def:extended_path_ordering}) it follows that for all $j > 1$, $s_j <:_{\node{n_j},\node{m}} s_{j-1}$ in $\mathbb{T}'_\Gamma$ for some companion leaf $\node{n}_j$ of $\node{m}$ (note that $\node{n}$ is one of these $\node{n}_j$).
Since each $\mathbb{T}_{\Gamma,Q}$ is successful we know that $<:_{\node{m}}$ in $\mathbb{T}_{\Gamma,Q}$ is well-founded for any $Q \in Q_\prec$.
Recall also that $\sqsubset$ is a well-ordering on $Q_\prec$.
Now consider $\cdots <:_{\node{m}} s_2 <:_{\node{m}} s_1$. Since $s_1 \in \textit{st}(\lambda'_\Gamma(\node{m}))$, invariant~\ref{inv:state-set} guarantees that $s_1 \in \textit{st}(\lambda_{\Gamma,Q}(\node{m}))$ for some $Q \in Q_\prec$.
Now consider the $s_j$, $j > 1$.
Since each $s_j <:_{\node{n}_j,\node{m}} s_1$ in $\mathbb{T}'_\Gamma$ the preceding argument ensures that
either $s_j <:_{\node{n}_j,\node{m}} s_1$ in $\mathbb{T}_{\Gamma,Q}$, and thus $s_j <:_{\node{m}} s_1$ in $\mathbb{T}_{\Gamma,Q}$,
or there exists $s' \in S$ such that $[s'] \sqsubset Q$ and $s_j \in \textit{st}(\lambda_{\Gamma,[s']}(\node{n}_j))$.
However, $<:_{\node{m}}$ in $\mathbb{T}_{\Gamma,Q}$ is well-founded, so only finitely many of the $s_j$ can satisfy $s_j <:_{\node{m}} s_1$ in $\mathbb{T}_{\Gamma,Q}$; there must be some $j > 1$ such that $s_j \in \textit{st}(\lambda_{\Gamma,[s']}(\node{n}_j))$ for some $[s'] \sqsubset Q$. But then, for $\cdots <:_{\node{m}} s_2 <:_{\node{m}} s_1$ to be an infinite descending chain there must be an infinite descending chain in $\sqsubset$. As $\sqsubset$ is well-founded, this is a contradiction, and $<:_\node{m}$ must be well-founded in $\mathbb{T}'_\Gamma$, meaning $\node{n}$ is a successful leaf. \end{description}
To prove compliance of $\mathbb{T}'_\Gamma$ with $\prec'$, assume that $s_2 <:_{\node{r}_\Gamma} s_1$ in $\mathbb{T}'_\Gamma$; we must show that $s_2 \prec' s_1$. Let $\node{n}$ be a companion leaf of $\node{r}_\Gamma$ such that $s_2 <:_{\node{n},\node{r}_\Gamma} s_1$. From the arguments above we know that $s_1 \in \textit{st}(\lambda_{\Gamma,Q}(\node{r}_\Gamma))$ for some $Q \in Q_\prec$; assume further that $Q$ is the minimum such element in $Q_\prec$ with respect to $\sqsubset$ (which is guaranteed to exist because $\sqsubset$ is a well-ordering on $Q_\prec$). Also, either $s_2 <:_{\node{n},\node{r}_\Gamma} s_1$ in $\mathbb{T}_{\Gamma,Q}$ or there is $Q' \sqsubset Q$ such that $s_2 \in \textit{st}(\lambda_{\Gamma,Q'}(\node{n}))$. In the former case $s_2 <:_{\node{r}_\Gamma} s_1$ in $\mathbb{T}_{\Gamma,Q}$, and the fact that $\mathbb{T}_{\Gamma,Q}$ is successful and compliant with ${\prec'_Q} \subseteq {\prec'}$ guarantees that $s_2 \prec' s_1$. In the latter case, since $\node{n}$ is a companion leaf of $\node{r}_\Gamma$ we know that $s_2 \in \textit{st}(\lambda_{\Gamma,Q}(\node{r}_\Gamma))$ and $s_2 \in \textit{st}(\lambda_{\Gamma,Q'}(\node{r}_\Gamma))$. Moreover, the fact that $Q$ is minimum ensures that $s_1 \not\in \textit{st}(\lambda_{\Gamma,Q'}(\node{r}_\Gamma)) = S'_{Q'} = \semT{\sigma'Z'.\Gamma}{\val{V}_{Q'}}$. However, the fact that $\prec'$ is locally consistent with $\prec$ then means that $s_1 \not\prec' s_2$, and as $\prec'$ is total it must be that $s_2 \prec' s_1$.
We now prove Property~\ref{goal:dependencies} for $\mathbb{T}'_\Gamma$. So fix $x \in S$, $y \in S'_{[x]}$ and $x' <:_{\node{n}',\node{r}_\Gamma} y$ where $\textit{fm}_\Gamma(\node{n}') = Z$. Note that $x' \in \preimg{{\prec}}{S}$. We must show that $x' \prec x$.
From facts established above we know that either $x' <:_{\node{n}',\node{r}_\Gamma} y$ in $\mathbb{T}_{\Gamma,[x]}$, or there exists a $Q \sqsubset [x]$ such that $x' \in \textit{st}(\lambda_{\Gamma,Q}(\node{n}'))$. In the former case the success of $\mathbb{T}_{\Gamma,[x]}$ guarantees that $x' \in \val{V}_{[x]}(Z) = \preimg{{\prec}}{[x]}$, meaning $x' \prec x$. In the latter case $x' \in \val{V}_{Q}(Z) = \preimg{{\prec}}{Q}$; since $\prec$ is total and $Q \sqsubset [x]$ it follows that $\preimg{{\prec}}{Q} \subseteq \preimg{{\prec}}{[x]}$, so $x' \in \preimg{{\prec}}{[x]}$ and $x' \prec x$.
\textit{Construction of $\mathbb{T}_\Gamma$.}
We now show how to construct a successful TNF tableau $\mathbb{T}_\Gamma$ for sequent $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$ that is compliant with $\prec'$ and satisfies Property~\ref{goal:dependencies}.
We begin by noting that since $(S',\prec')$ is a $\sigma'$-compatible, total qwf support ordering for $g_S$, the induction hypothesis and Lemma~\ref{lem:structural-equivalence-of-TNF-tableaux} guarantee the existence of a successful TNF tableau $$ \mathbb{T}_{\Gamma,S} = (\tree{T}, \rho_{\Gamma,S}, \mathcal{T}, \val{V}_S, \lambda_{\Gamma,S}) $$ for sequent $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$ that is compliant with $\prec'$ and structurally equivalent to $\mathbb{T}_{\Gamma,Q}$ for any $Q \in Q_\prec$. There are two cases to consider. In the first case, $S = \emptyset$. In this case, $\mathbb{T}_{\Gamma,S}$ vacuously satisfies Property~\ref{goal:dependencies}, and we take $\mathbb{T}_\Gamma$ to be $\mathbb{T}_{\Gamma,S}$.
In the second case, $S \neq \emptyset$, and thus $Q_\prec \neq \emptyset$. In this case it is not guaranteed that $\mathbb{T}_{\Gamma,S}$ satisfies \ref{goal:dependencies}, so we cannot take $\mathbb{T}_\Gamma$ to be $\mathbb{T}_{\Gamma,S}$. Instead, we build $\mathbb{T}_\Gamma$ using a coinductive definition of $\lambda_\Gamma$ that merges $\mathbb{T}'_\Gamma$ and $\mathbb{T}_{\Gamma,S}$ in a node-by-node fashion, starting with $\node{r}_\Gamma$, the root, so that certain invariants are satisfied. The invariants in this case are the same as \ref{inv:rule}--\ref{inv:definition-list} from the construction of $\mathbb{T}'_\Gamma$, but adapted as follows. \begin{invariants}
\item
If $\rho_\Gamma(\node{n})$ is defined then the sequents assigned by $\lambda_\Gamma$ to $\node{n}$ and its children are consistent with the rule application $\rho_\Gamma(\node{n})$.
\item
$\textit{fm}(\lambda_\Gamma(\node{n})) = \textit{fm}_\Gamma(\node{n})$
\item
$\textit{st}(\lambda_\Gamma(\node{n})) \subseteq \textit{st}(\lambda'_\Gamma(\node{n})) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}))$
\item
$\textit{dl}(\lambda_\Gamma(\node{n})) = \textit{dl}_\Gamma(\node{n})$ \end{invariants} (That is, $\lambda'_\Gamma$ is replaced by $\lambda_\Gamma$, and $\bigcup_{Q \in Q_\prec} \textit{st}(\lambda_{\Gamma,Q}(\node{n}))$ is replaced by $\textit{st}(\lambda'_\Gamma(\node{n})) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}))$.) As before, the definition of $\lambda_\Gamma$ begins by assigning a value to $\lambda_\Gamma(\node{r}_\Gamma)$ so that invariants~\ref{inv:formula}--\ref{inv:definition-list} are satisfied. The coinductive step then assumes that $\node{n}$ satisfies these invariants and defines $\lambda_\Gamma$ for the children of $\node{n}$ and $\rho_\Gamma(\node{n})$ so that \ref{inv:rule} holds for $\node{n}$ and \ref{inv:formula}--\ref{inv:definition-list} hold for each child.
To start the construction, define $\lambda_\Gamma(\node{r}_\Gamma) = S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$. Invariants~\ref{inv:formula}--\ref{inv:definition-list} clearly hold of $\node{r}_\Gamma$. In particular, it should be noted that $$ \textit{st}(\lambda_\Gamma(\node{r}_\Gamma)) = S' = \textit{st}(\lambda'_\Gamma(\node{r}_\Gamma)) \cup \textit{st}(\lambda_{\Gamma,S}(\node{r}_\Gamma)), $$ since $\textit{st}(\lambda'_\Gamma(\node{r}_\Gamma)) = \left(\bigcup_{Q \in Q_{\prec}} S'_Q \right) \subseteq S' = \textit{st}(\lambda_{\Gamma,S}(\node{r}_\Gamma))$.
For the coinductive step, assume that $\node{n}$ is such that $\lambda_\Gamma(\node{n})$ satisfies \ref{inv:formula}--\ref{inv:definition-list}; we must define $\rho_\Gamma(\node{n})$ and $\lambda_\Gamma$ for the children of $\node{n}$ so that \ref{inv:rule} holds for $\lambda_\Gamma(\node{n})$ and \ref{inv:formula}--\ref{inv:definition-list} holds for each child. As in the definition of $\mathbb{T}'_\Gamma$ the construction proceeds via a case analysis on $\textit{rn}_\Gamma(\node{n})$. The only non-routine cases involve rules $\lor$, $\dia{K}$ and Thin. We give these below; the other cases are left to the reader. \begin{description}
\item[$\textit{rn}_\Gamma(\node{n}) = \lor$.]
In this case $cs(\node{n}) = \node{n}_1\node{n}_2$,
$\textit{fm}_\Gamma(\node{n}) = \textit{fm}_\Gamma(\node{n}_1) \lor \textit{fm}_\Gamma(\node{n}_2)$; let $\Delta = \textit{dl}_\Gamma(\node{n}) = \textit{dl}_\Gamma(\node{n}_1) = \textit{dl}_\Gamma(\node{n}_2)$.
For each $s \in \textit{st}(\lambda_\Gamma(\node{n}))$ define the following.
\begin{align*}
S_{1,s} &=
\begin{cases}
\{s\} & \text{if $s \in \textit{st}(\lambda'_\Gamma(\node{n}))$ and
$s \in \textit{st}(\lambda'_\Gamma(\node{n}_1))$}\\
\{s\} & \text{if $s \not\in \textit{st}(\lambda'_\Gamma(\node{n}))$,
$s \in \textit{st}(\lambda_{\Gamma,S}(\node{n}))$ and
$s \in \textit{st}(\lambda_{\Gamma,S}(\node{n}_1))$}\\
\emptyset & \text{otherwise}
\end{cases}
\\
S_{2,s} &= \{s\} \setminus S_{1,s}
\end{align*}
Define $\rho_\Gamma(\node{n}) = \lor$.
Taking $S_1 = \bigcup_{s \in \textit{st}(\lambda_\Gamma(\node{n}))} S_{1,s}$ and $S_2 = \bigcup_{s \in \textit{st}(\lambda_\Gamma(\node{n}))} S_{2,s}$, we set
\begin{align*}
\lambda_\Gamma(\node{n}_1) &= S_1 \tnxT{\val{V}_S}{\Delta} \textit{fm}_\Gamma(\node{n}_1)\\
\lambda_\Gamma(\node{n}_2) &= S_2 \tnxT{\val{V}_S}{\Delta} \textit{fm}_\Gamma(\node{n}_2).
\end{align*}
It is easy to see that invariant~\ref{inv:rule} holds for $\node{n}$, while \ref{inv:formula}--\ref{inv:definition-list} hold for $\node{n}_1$ and $\node{n}_2$.
\item[$\textit{rn}_\Gamma(\node{n}) = \dia{K}$.]
In this case $cs(\node{n}) = \node{n}'$,
$$
\lambda_\Gamma(\node{n}) = S_\node{n} \tnxT{\val{V}_S}{\Delta} \dia{K}\textit{fm}_\Gamma(\node{n}')
$$
for some $S_\node{n}$ and $\Delta$, and
$\textit{dl}_\Gamma(\node{n}') = \Delta$.
We must construct a witness
function $f_{\Gamma,\node{n}} \in S_\node{n} \to \states{S}$ such that $s \xrightarrow{K} f_{\Gamma,\node{n}}(s)$ for each $s \in S_\node{n}$ and such that $f_{\Gamma,\node{n}}(S_\node{n}) \subseteq \textit{st}(\lambda'_\Gamma(\node{n}')) \cup \textit{st}(\lambda_{\Gamma, S}(\node{n}'))$.
Since $\mathbb{T}'_\Gamma$ and $\mathbb{T}_{\Gamma,S}$ are successful, it follows that there are functions $f'_{\Gamma, \node{n}} \in \textit{st}(\lambda'_\Gamma(\node{n})) \to \states{S}$ and
$f_{\Gamma,S,\node{n}} \in \textit{st}(\lambda_{\Gamma,S}(\node{n})) \to \states{S}$
such that:
\begin{itemize}
\item
For all $s \in \textit{st}(\lambda'_\Gamma(\node{n}))$, $s \xrightarrow{K} f'_{\Gamma,\node{n}}(s)$;
\item
$\textit{st}(\lambda'_\Gamma(\node{n}')) = f'_{\Gamma,\node{n}}(\textit{st}(\lambda'_\Gamma(\node{n})))$;
\item
For all $s \in \textit{st}(\lambda_{\Gamma, S}(\node{n}))$, $s \xrightarrow{K} f_{\Gamma,S,\node{n}}(s)$; and
\item
$\textit{st}(\lambda_{\Gamma,S}(\node{n}')) = f_{\Gamma,S,\node{n}}(\textit{st}(\lambda_{\Gamma,S}(\node{n})))$.
\end{itemize}
We define $f_{\Gamma,\node{n}}$ as follows.
\[
f_{\Gamma,\node{n}}(s)
=
\begin{cases}
f'_{\Gamma,\node{n}}(s)
& \text{if $s \in \textit{st}(\lambda'_\Gamma(\node{n}))$}
\\
f_{\Gamma,S, \node{n}}(s)
& \text{if $s \in \textit{st}(\lambda_{\Gamma,S}(\node{n})) \setminus \textit{st}(\lambda'_\Gamma(\node{n}))$}
\end{cases}
\]
Since $\node{n}$ satisfies \ref{inv:state-set} it follows that $S_\node{n} \subseteq \textit{st}(\lambda'_\Gamma(\node{n})) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}))$, and thus $f_{\Gamma,\node{n}}$ is well-defined.
We now take $\rho_\Gamma(\node{n}) = (\dia{K},f_{\Gamma,\node{n}})$ and
$$
\lambda_\Gamma(\node{n}') = f_{\Gamma,\node{n}}(S_\node{n}) \tnxT{\val{V}_S}{\Delta} \textit{fm}_\Gamma(\node{n}');
$$
It is clear that \ref{inv:rule} holds for $\node{n}$ and that \ref{inv:formula}--\ref{inv:definition-list} hold for $\node{n}'$.
\item[$\textit{rn}_\Gamma(\node{n}) = \text{Thin}$.]
In this case, since both $\mathbb{T}'_\Gamma$ and $\mathbb{T}_{\Gamma,S}$ are TNF it follows that
$$
\lambda_\Gamma(\node{n}) = S_\node{n} \tnxT{\val{V}_S}{\Delta} \sigma'' Z''.\Gamma'
$$
for some $S_\node{n}$, $\sigma''Z''.\Gamma'$ and $\Delta$, that $\textit{fm}_\Gamma(\node{n}') = \sigma''Z''.\Gamma'$, and that $\textit{dl}_\Gamma(\node{n}') = \Delta$.
Now define $\rho_\Gamma(\node{n}) = \text{Thin}$
and $\lambda_\Gamma(\node{n}')$ as follows.
$$
\lambda_\Gamma(\node{n}') =
\textit{st}(\lambda'_\Gamma(\node{n}')) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}'))
\tnxT{\val{V}_S}{\Delta}
\sigma'' Z''.\Gamma'
$$
It is the case that \ref{inv:rule} holds of $\node{n}$ and that \ref{inv:formula}--\ref{inv:definition-list} hold of $\node{n}'$. \end{description} It can be shown that $\mathbb{T}_\Gamma$ is successful, TNF and compliant with $\prec'$ by adapting the arguments given above for $\mathbb{T}'_\Gamma$. We now establish that \ref{goal:dependencies} also holds of $\mathbb{T}_\Gamma$. To this end, fix $x \in S$, $y \in S'_{[x]}$, and $x' <:_{\node{n}',\node{r}_\Gamma} y$, where $\textit{fm}(\lambda_\Gamma(\node{n}')) = Z$. From the construction of $\mathbb{T}_\Gamma$ it is easy to see in this case that $y \in \textit{st}(\lambda'_\Gamma(\node{n}'))$ and that $x' <:_{\node{n}',\node{r}_\Gamma} y$ in $\mathbb{T}_\Gamma$ iff $x' <:_{\node{n}',\node{r}_\Gamma} y$ in $\mathbb{T}'_\Gamma$. Since we have established that \ref{goal:dependencies} holds for $\mathbb{T}'_\Gamma$, the desired result follows.
\paragraph{Step~\ref{it:step-tableau-composition} of proof outline: construct tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$.}
To complete the proof we construct tableau $\mathbb{T}_\Phi = (\tree{T}_\Phi, \rho_\Phi, \mathcal{T}, \val{V}, \lambda_\Phi)$, where $\tree{T}_\Phi = (\node{N}_\Phi, \node{r}_\Phi, p_\Phi, cs_\Phi)$, from $\mathbb{T}_{\Phi'}$ and $\mathbb{T}_\Gamma$ and establish that it is successful and compliant with $(S,\prec)$. Without loss of generality we assume that $\node{N}_{\Phi'} \cap \node{N}_\Gamma = \emptyset$. We begin by defining $\tree{T}_\Phi$ as follows, where $\node{n}_W \in \node{N}_{\Phi'}$ is the unique (leaf) node in $\tree{T}_{\Phi'}$ such that $\textit{fm}(\lambda_{\Phi'}(\node{n}_W)) = W$. \begin{itemize}
\item $\node{N}_\Phi = \node{N}_{\Phi'} \cup \node{N}_\Gamma$
\item $\node{r}_\Phi = \node{r}_{\Phi'}$
\item (Partial) parent function $p_\Phi \in \node{N}_\Phi \rightarrow \node{N}_\Phi$ is given as follows.
\[
p_\Phi(\node{n}) =
\begin{cases}
p_{\Phi'}(\node{n}) & \text{if $\node{n} \in \node{N}_{\Phi'}$}\\
\node{n}_W & \text{if $\node{n} = \node{r}_\Gamma$}\\
p_{\Gamma}(\node{n}) & \text{otherwise}
\end{cases}
\]
\item Child-ordering function $cs_\Phi \in \node{N}_\Phi \to (\node{N}_\Phi)^*$ is given as follows.
\[
cs_\Phi(\node{n}) =
\begin{cases}
\node{r}_\Gamma & \text{if $\node{n} = \node{n}_W$} \\
cs_{\Phi'}(\node{n}) & \text{if $\node{n} \in \node{N}_{\Phi'} \setminus \{\node{n}_W\}$}\\
cs_\Gamma (\node{n}) & \text{otherwise}
\end{cases}
\] \end{itemize} This construction creates $\tree{T}_\Phi$ by in effect inserting $\tree{T}_\Gamma$ into $\tree{T}_{\Phi'}$ as the single subtree underneath $\node{n}_W$.
To finish the construction of $\mathbb{T}_\Phi$ we need to specify $\rho_\Phi$ and $\lambda_\Phi$. The first of these is given as follows. \[ \rho_\Phi(\node{n}) = \begin{cases}
\rho_{\Phi'}(\node{n})
& \text{if $\node{n} \in \node{N}_{\Phi'} \setminus \{\node{n}_W\}$}
\\
\text{Thin}
& \text{if $\node{n} = \node{n}_W$}
\\
\rho_\Gamma(\node{n})
& \text{otherwise} \end{cases} \] In this definition all nodes inherit their rule applications from $\mathbb{T}_{\Phi'}$ and $\mathbb{T}_\Gamma$, with the exception of $\node{n}_W$, which, as a leaf, has no rule in $\mathbb{T}_{\Phi'}$. In $\mathbb{T}_\Phi$, $\node{n}_W$ is assigned the rule Thin.
To define $\lambda_\Phi$, we first introduce some notation. Let $\seq{s} = S'' \tnxTVD \Phi''$ be a sequent. Then $\seq{s}[Z := \Gamma']$ is the sequent obtained by substituting all free instances of $Z$ in both $\Phi''$ and $\Delta$ with $\Gamma'$. Now suppose that $U \not\in \operatorname{dom}(\Delta)$; the sequent $(U = \Phi''')\cdot \seq{s}$ is defined to be $S'' \tnxTV{(U = \Phi''') \cdot \Delta} \Phi''$. We may now define $\lambda_\Phi$ as follows. Assume without loss of generality that $U$ is the definitional constant introduced by the application of Rule $\sigma Z$ to $\lambda_{\Phi'}(\node{r}_{\Phi'})$, and that $U$ does not appear in $\mathbb{T}_\Gamma$. \[ \lambda_\Phi (\node{n}) = \begin{cases}
S'' \tnxTV{\varepsilon} \Phi''
&
\\
\multicolumn{2}{l}{
\qquad\text{
if $\node{n} \in \node{N}_{\Phi'}$ and
$S'' \tnxT{\val{V}'}{\varepsilon} \Phi'' = \lambda_{\Phi'}(\node{n})[W := \sigma' Z'.\Gamma]$
}
}
\\
S'' \tnxTV{\Delta'} \Gamma'
&
\\
\multicolumn{2}{l}{
\qquad\text{
if $\node{n} \in \node{N}_\Gamma$ and
$S'' \tnxT{\val{V}_S}{\Delta'} \Gamma'
= (U = \sigma Z.\Phi) \cdot (\lambda_\Gamma(\node{n})[Z := U])$
}
} \end{cases} \] Intuitively, if $\node{n}$ is a node in $\mathbb{T}_{\Phi'}$ then $\lambda_\Phi(\node{n})$ modifies the sequent $\lambda_{\Phi'}(\node{n})$ by replacing all free occurrences of $W$ in the definition list and formula components of the sequent by $\sigma' Z'.\Gamma$, and the occurrence of $\val{V}' = \val{V}[W := S']$ on the turnstile by $\val{V}$. Likewise, if $\node{n}$ is a node in $\node{N}_\Gamma$ then $\lambda_\Phi(\node{n})$ modifies the sequent $\lambda_\Gamma(\node{n})$ by replacing all occurrences of $Z$ by $U$, prepending the definition $(U = \sigma Z.\Phi)$ to the front of the definition list, and replacing the occurrence of $\val{V}_S$ on the turnstile by $\val{V}$. To complete the proof of the lemma we must show that $\mathbb{T}_\Phi$ is a successful TNF tableau that is compliant with $(S,\prec)$. That $\mathbb{T}_\Phi$ is indeed a tableau is immediate from its construction and the fact that $\mathbb{T}_{\Phi'}$ and $\mathbb{T}_\Gamma$ are tableaux: in particular, every leaf in $\mathbb{T}_\Phi$ corresponds either to a leaf in $\mathbb{T}_{\Phi'}$ or to a leaf in $\mathbb{T}_\Gamma$ and is therefore guaranteed to be terminal. The TNF property follows similarly. We now argue that $\mathbb{T}_\Phi$ is successful and compliant with $(S, \prec)$. We begin by noting that since $\mathbb{T}_{\Phi'}$ and $\mathbb{T}_\Gamma$ are successful, every leaf $\node{n}$ such that $\textit{fm}(\lambda_\Phi(\node{n})) \neq U$, where $U$ is the definitional constant associated with $\sigma Z.\Phi$, is also successful in $\mathbb{T}_\Phi$. Proving that $\mathbb{T}_\Phi$ is successful therefore reduces to proving the success of each $U$-leaf. If we can show compliance of $\mathbb{T}_\Phi$ with $(S,\prec)$, then the success of each $U$-leaf also follows, for in the particular case when $\sigma = \mu$, the well-foundedness of $<:_{\node{r}'}$, where $\node{r}' = cs(\node{r}_\Phi)$ is the unique (due to TNF) $U$-companion node in $\mathbb{T}_\Phi$, follows immediately from the well-foundedness of $\prec$. To this end, suppose that $s, s' \in S$ are such that $s' <:_{\node{r}'} s$; we must show that $s' \prec s$. Since $s' <:_{\node{r}'} s$ holds, there must exist a $U$-leaf $\node{n}$ such that $s' \in \textit{st}(\lambda_\Phi(\node{n}))$ and $s' <:_{\node{n},\node{r}'} s$ in $\mathbb{T}_\Phi$. There are two cases to consider. \begin{enumerate}
\item
$\node{n}$ is a leaf in $\mathbb{T}_{\Phi'}$. In this case the compliance of $\mathbb{T}_{\Phi'}$ with respect to $(S, \prec)$ guarantees that $s' \prec s$.
\item
$\node{n}$ is a leaf in $\mathbb{T}_\Gamma$. In this case there must exist $y \in \semT{\sigma'Z'.\Gamma}{\val{V}_{[s]}}$ such that $s' <:_{\node{n},\node{r}_\Gamma} y$ in $\mathbb{T}_\Phi$, and thus in $\mathbb{T}_\Gamma$, and $y <:_{\node{r}_\Gamma, \node{r}'} s$. Since $\mathbb{T}_\Gamma$ satisfies \ref{goal:dependencies} it must follow in this case that $s' \prec s$. \end{enumerate} This completes the proof. \qedhere \end{proof}
With these lemmas in hand we may now state and prove the completeness theorem.
\begin{theorem}[Completeness]\label{thm:completeness} Let $\mathcal{T} = \lts{S}$ be an LTS and $\val{V}$ a valuation, and let $S$ and $\Phi$ be such that $S \subseteq \semTV{\Phi}$. Then $S \tnxTV{\varepsilon} \Phi$ has a successful tableau. \end{theorem}
\begin{proof} The proof proceeds as follows. Let $\sigma_1 Z_1 . \Phi_1, \ldots, \sigma_n Z_n . \Phi_n$ be the top-level fixpoint subformulas of $\Phi$, and let $W_1, \ldots, W_n$ be fresh variables. Define $\Phi'$ to be the fixpoint-free formula containing exactly one occurrence of each $W_i$ and such that \[ \Phi = \Phi'[W_1, \ldots, W_n := \sigma_1 Z_1 . \Phi_1, \ldots, \sigma_n Z_n . \Phi_n]. \] Also define $S_i = \semTV{\sigma_i Z_i . \Phi_i}$ for each $i = 1, \ldots, n$, and let $\val{V}'$ be defined by \[ \val{V}' = \val{V}[W_1, \ldots, W_n := S_1, \ldots, S_n] \] Lemma~\ref{lem:fixpoint-completeness} guarantees a successful tableau $\mathbb{T}'$ for $S \tnxT{\val{V}'}{\varepsilon} \Phi'$.
Also, for each $i = 1, \ldots, n$ define $(S_i, \prec_i)$ to be a $\sigma_i$-maximal support ordering for $\semfTV{Z_i}{\Phi_i}$. Lemma~\ref{lem:fixpoint-completeness} guarantees a successful tableau $\mathbb{T}_i$ for each sequent $S_i \tnxTV{\varepsilon} \sigma_i Z_i . \Phi_i$. We may now construct a successful tableau $\mathbb{T}$ by making each leaf in $\mathbb{T}'$ whose formula is $W_i$ the parent of the root of tableau $\mathbb{T}_i$, setting the rule for this former leaf node to be Thin and replacing all occurrences of $W_i$ by $\sigma_i Z_i.\Phi_i$. The resulting tableau is guaranteed to be successful. \qedhere \end{proof}
\section{Proof search}\label{sec:proof-search}
From a reasoning perspective, the most difficult aspect of the proof system in this paper is establishing the well-foundedness of the support orderings for nodes labeled with a least fixed-point formula. In the abstract setting of infinite-state labeled transition systems considered in this paper, there is unfortunately no real alternative. In many situations, however, this well-foundedness can be inferred from the additional semantic information present in the definition of the transition relations.
One interesting such case involves modifying the termination criterion for $\mu$-leaves, so that they are only terminal if their set of states is empty. Intuitively, this means we simply keep unfolding nodes labeled with definitional constants that are associated with a least fixpoint formula until we reach a node with an empty state set. Of course, with this modification, the unfolding procedure does not necessarily terminate, and hence the proof system is not complete for infinite-state labeled transition systems in general. However, for the class of infinite-state transition systems induced by timed automata, for instance, the resulting tableau method is complete~\cite{DC2005}. Furthermore, even when the modified method is not complete, the result is a sound semi-decision procedure in the sense that, if a (finite) proof tree is obtained, that proof tree is successful.
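As a minimal illustration of the modified criterion (the three-state system is hypothetical and not part of the formal development), consider an LTS with states $s_0, s_1, s_2$, transitions $s_0 \xrightarrow{a} s_1 \xrightarrow{a} s_2$, and no other transitions, together with the formula $\mu Z.[a]Z$, which asserts the absence of infinite $a$-paths. Starting from a root sequent with state set $\{s_0\}$ and formula $\mu Z.[a]Z$, repeatedly unfolding the associated definitional constant and applying the $[a]$ rule yields constant-labeled nodes whose state sets shrink from $\{s_0\}$ to $\{s_1\}$, then $\{s_2\}$, and finally $\emptyset$, at which point the resulting $\mu$-leaf is $\nu$-terminal.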
In this section we show how this modified termination condition can also be proven sound using our results. We first modify Definition~\ref{def:tableau} to change the termination condition. \begin{definition}[$\nu$-complete tableau]\label{def:weak-tableau}
Partial tableau $\tableauTrl$ is a \emph{$\nu$-complete tableau},
if $\textit{dl}(\node{r}) = \varepsilon$, and
if all leaves $\node{n}$ in $\tree{T}$ are \emph{$\nu$-terminal}, i.e.\/ satisfy at least one of the following.
\begin{enumerate}[label=(\alph*)]
\item \label{def:weak-tableau-terminal-proposition}
$\textit{fm}(\node{n}) = Z$ or $\textit{fm}(\node{n}) = \lnot Z$ for $Z \in \textnormal{Var}\xspace \setminus \operatorname{dom}(\textit{dl}(\node{n}))$; or
\item $\textit{fm}(\node{n}) = \dia{K} \ldots$ and there is $s \in \textit{st}(\node{n})$ such that $s \centernot{\xrightarrow{K}}$; or
\item $\textit{fm}(\node{n}) = U$ for some $U \in \operatorname{dom}(\textit{dl}(\node{n}))$, and there is $\node{m} \in A_s(\node{n})$ such that $\textit{fm}(\node{m}) = U$, and either
\begin{enumerate}[label=\roman*.]
\item \label{def:mu-companion-leaf}
$\textit{dl}(\node{n})(U) = \mu Z. \Phi$, and $\textit{st}(\node{n}) = \emptyset$; or
\item \label{def:nu-companion-leaf}
$\textit{dl}(\node{n})(U) = \nu Z. \Phi$ and $\textit{st}(\node{n}) \subseteq \textit{st}(\node{m})$.
\end{enumerate}
\end{enumerate} \end{definition} Note that this definition only splits clause~\ref{subdef:companion-leaf} of Definition~\ref{def:tableau}, so that the original condition now applies only to definitional constants whose right-hand sides are greatest fixed point formulas. Definitional constants whose right-hand sides are least fixed points are only $\nu$-terminal when their set of states is empty. Note also that, since in a $\nu$-complete tableau the set of states in a $\mu$-leaf is empty and such a leaf has a corresponding companion node in the tableau, every $\nu$-complete tableau is also a complete tableau, as stated in the following lemma.
\begin{lemma}\label{lem:weak-tableau-is-tableau}
Let $\tableauTrl$ be a $\nu$-complete tableau. Then $\tableauTrl$ is a complete tableau. \end{lemma} \begin{proof}
Immediate from the definitions. \qedhere \end{proof}
This immediately means that $\nu$-complete tableaux are sound.
\begin{theorem}\label{thm:soundness-weak-tableau} Fix LTS $(\states{S},\to)$ of sort $\Sigma$ and valuation $\val{V}$. Let $\mathbb{T} = \tableauTrl$ be a successful $\nu$-complete tableau for sequent $\seq{s} \in \Seq{\mathcal{T}}{\textnormal{Var}\xspace}$, where $\textit{dl}(\seq{s}) = \varepsilon$. Then $\seq{s}$ is valid. \end{theorem} \begin{proof}
Follows immediately from Lemma~\ref{lem:weak-tableau-is-tableau} and Theorem~\ref{thm:soundness}.\qedhere \end{proof}
Hence the proof system remains sound if we modify the termination criterion in such a way that we always keep unfolding $\mu$-nodes until the set of states in the node is empty.
We would like to point out here that the preceding theorem holds in part due to the definition of complete tableaux. Even if a node in the tableau labeled by a definitional constant satisfies the conditions of a terminal node, the definition allows further unfolding of that node. Bradfield defines a tableau as ``a proof-tree that is built from applications of proof rules, starting at the root of the tableau, until all leaves of the tree are terminal''~\cite[Definition 3.4]{Bra1991}. This definition does not allow the continued unfolding that we use in our $\nu$-complete tableaux; as a consequence, the original soundness proof does not carry over immediately to this case.
\section{Timed modal mu-calculus}\label{sec:timed-mu-calculus}
Timed transition systems are infinite-state transition systems used to give semantics to, for example, timed and hybrid automata. Besides introducing transitions labeled by time, timed transition systems also capture continuous as well as discrete system behavior.
In this section, we extend the proof system described in Section~\ref{sec:base-proof-system} to timed transition systems and properties expressed in a timed extension of the mu-calculus, which enriches the mu-calculus with two modalities for expressing properties of continuous timed behavior. These operators correspond to timed versions of the well-known until and release operators from Linear-Time Temporal Logic (LTL). This section represents another illustration of the extensibility of the proof system in this paper: the proofs of soundness and completeness given earlier only need to be extended with cases for the new modal operators to cover the timed setting.
We begin the section by defining timed transition systems and the extension of the mu-calculus to be considered.
\begin{definition}[Timed sort] Sort $\Sigma$ is a \emph{timed sort} iff $\mathbb{R}_{\geq 0} \subseteq \Sigma$, i.e., every non-negative real number is an element of $\Sigma$. If $\Sigma$ is a timed sort we write $\textit{act}(\Sigma) = \Sigma \setminus \mathbb{R}_{\geq 0}$ for the set of non-numeric elements in $\Sigma$. We sometimes refer to elements of $\textit{act}(\Sigma)$ as \emph{actions} and $\delta \in \mathbb{R}_{\geq 0}$ as \emph{time delays}. \end{definition}
\begin{definition}[Timed transition system~\cite{BCL2011}]\label{def:timed-transition-system}
LTS $(\states{S}, \to)$ of timed sort $\Sigma$ is a \emph{timed transition system (TTS)} if it satisfies the following conditions.
\begin{enumerate}
\item For all states $s \in \states{S}$, $s \xrightarrow{0} s$, i.e., the transition system is \emph{time-reflexive}.
\item For all states $s,s',s'' \in \states{S}$ and $\delta \in \mathbb{R}_{\geq 0}$ if $s \xrightarrow{\delta} s'$ and $s \xrightarrow{\delta} s''$ then $s' = s''$, i.e., the transition system is \emph{time-deterministic}.
\item For all states $s, s', s'' \in \states{S}$ and $\delta, \delta' \in \mathbb{R}_{\geq 0}$, if $s \xrightarrow{\delta} s'$ and $s' \xrightarrow{\delta'} s''$ then $s \xrightarrow{\delta + \delta'} s''$, i.e., the transition system is \emph{time-additive}.
\item For all states $s, s' \in \states{S}$ and $\delta \in \mathbb{R}_{\geq 0}$, if $s \xrightarrow{\delta} s'$ then for all $\delta', \delta'' \in \mathbb{R}_{\geq 0}$ with $\delta = \delta' + \delta''$, there exists $s'' \in \states{S}$ such that $s \xrightarrow{\delta'} s''$ and $s'' \xrightarrow{\delta''} s'$, i.e., the transition system is \emph{time-continuous}.
\end{enumerate} \end{definition}
In a TTS a transition can either be labeled by an action, i.e., those labels in $\textit{act}(\Sigma)$, or a time delay $\delta \geq 0$. Intuitively, if a system is in state $s$ and $s \xrightarrow{\delta} s'$ then the system is in state $s'$ after $\delta$ units of time have elapsed, assuming no action has occurred in the interim.
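As a simple illustration (hypothetical, and not used in the formal development), a single-clock system can be viewed as a TTS: take the timed sort $\Sigma = \{\mathit{reset}\} \cup \mathbb{R}_{\geq 0}$ and $\states{S} = \mathbb{R}_{\geq 0}$, with each state recording the current clock value, let $x \xrightarrow{\delta} x + \delta$ for all $x \in \states{S}$ and $\delta \in \mathbb{R}_{\geq 0}$, and let $x \xrightarrow{\mathit{reset}} 0$ whenever $x \geq 1$. Time-reflexivity, time-determinism, time-additivity and time-continuity all follow directly from the arithmetic of addition on $\mathbb{R}_{\geq 0}$.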
The timed mu-calculus of~\cite{FC2014} extends the mu-calculus from Definition~\ref{def:mu-calculus-syntax} with a timed modal operator $\tR{\Phi_1}{\Phi_2}$. This operator is analogous to the release operator of LTL, in the sense that for $\tR{\Phi_1}{\Phi_2}$ to hold of a state $s$, either $\Phi_2$ must hold at every time instant along the time trajectory emanating from $s$, or there is a point along the trajectory at which $\Phi_1$ holds, which releases the system from having to maintain $\Phi_2$ beyond that point. These intuitions are formalized below.
\begin{definition}[Timed mu-calculus syntax~\cite{FC2014}]\label{def:timed-mu-calculus-syntax} Let $\Sigma$ be a timed sort and $\textnormal{Var}\xspace$ a countably infinite set of propositional variables. Then formulas of the timed modal mu-calculus over $\Sigma$ and $\textnormal{Var}\xspace$ are given by the following grammar
$$
\Phi ::= Z
\mid \lnot\Phi'
\mid \Phi_1 \land \Phi_2
\mid [K] \Phi'
\mid \tR{\Phi_1}{\Phi_2}
\mid \nu Z. \Phi'
$$
where $K \subseteq \Sigma$, $Z \in \textnormal{Var}\xspace$, and, in formulas of the form $\nu Z . \Phi'$, $Z$ must be positive in $\Phi'$. \end{definition} Besides the standard dualities introduced in Section~\ref{subsec:propositional-modal-mu-calculus}, we also have the following dual for $\tR{\Phi_1}{\Phi_2}$: \[
\tU{\Phi_1}{\Phi_2} = \lnot \tR{\lnot \Phi_1}{\lnot \Phi_2}. \] Also in analogy with LTL, $\tU{\Phi_1}{\Phi_2}$ can be seen as an until operator: specifically, a state $s$ satisfies $\tU{\Phi_1}{\Phi_2}$ if $\Phi_2$ is true after some time delay from $s$, and until that time instant $\Phi_1$ must be true.
Before we introduce the semantics of the timed mu-calculus, we fix some notation to deal with time delays and time intervals. \begin{notation}
Let $\mathcal{T} = (\states{S}, \to)$ be a TTS of timed sort $\Sigma$, and let $\delta \in \mathbb{R}_{\geq 0}$.
\begin{itemize}
\item
We write
\[
\mathit{succ}(s,\delta) = s' \text{ iff } s \xrightarrow{\delta} s'
\]
This definition is well-formed because of the time-determinism of $\xrightarrow{}$. Note that $\mathit{succ}(s,\delta) {\perp}$ iff $s \centernot{\xrightarrow{\delta}}$, and that if $\delta' \geq \delta$ and $\mathit{succ}(s,\delta){\perp}$ then $\mathit{succ}(s,\delta') {\perp}$. Furthermore, because of time-continuity, if $\mathit{succ}(s,\delta) \in \states{S}$ then $\mathit{succ}(s,\delta') \in \states{S}$ for all $\delta' \leq \delta$.
\item
If $s \in \states{S}$ then we write
\begin{align*}
\mathit{del}(s)
&= \{ \delta \in \mathbb{R}_{\geq 0} \mid s \xrightarrow{\delta} \}
\\
\mathit{succ}(s)
&= \{ s' \in \states{S} \mid \exists \delta \in \mathbb{R}_{\geq 0} \colon s \xrightarrow{\delta} s' \}
\\
\mathit{succ}_{<}(s, \delta)
&= \{s' \in \states{S} \mid \exists \delta' \in \mathbb{R}_{\geq 0} \colon \delta' < \delta \land s \xrightarrow{\delta'} s' \}
\\
\mathit{succ}_{\leq}(s, \delta)
&= \{s' \in \states{S} \mid \exists \delta' \in \mathbb{R}_{\geq 0} \colon \delta' \leq \delta \land s \xrightarrow{\delta'} s' \}
\end{align*}
These denote, respectively, the set of all possible delays from $s$, the states reachable from $s$ by any delay, the states reachable from $s$ by delays strictly less than $\delta$, and the states reachable from $s$ by delays of at most $\delta$.
\end{itemize} \end{notation}
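To give this notation a concrete reading, consider again the hypothetical single-clock TTS sketched earlier, in which $x \xrightarrow{\delta} x + \delta$ for all $x, \delta \in \mathbb{R}_{\geq 0}$. There $\mathit{succ}(x,\delta) = x + \delta$, $\mathit{del}(x) = \mathbb{R}_{\geq 0}$, $\mathit{succ}(x) = [x,\infty)$, $\mathit{succ}_{<}(x,\delta) = [x, x+\delta)$ and $\mathit{succ}_{\leq}(x,\delta) = [x, x+\delta]$.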
\noindent For the semantics of the timed mu-calculus, we follow the definition in~\cite{Fon2014}. \begin{definition}[Timed mu-calculus semantics]\label{def:timed-mu-calculus-semantics}
Let $\mathcal{T} = (\states{S}, \to)$ be a TTS of sort $\Sigma$ and $\val{V} \in \textnormal{Var}\xspace \to 2^{\states{S}}$ a valuation. Then the semantic function $\semTV{\Phi} \subseteq \states{S}$, where $\Phi$ is a timed mu-formula, is defined as in Definition~\ref{def:mu-calculus-semantics}, extended with the following clause.
\begin{align*}
& \semTV{\tR{\Phi_1}{\Phi_2}} \\
&{=}\; \{ s \in \states{S} \mid \forall \delta \in \mathit{del}(s) \colon \mathit{succ}_<(s,\delta) \cap \semTV{\Phi_1} = \emptyset
\implies \mathit{succ}(s,\delta) \in \semTV{\Phi_2} \}
\end{align*} \end{definition}
\noindent Intuitively, $s$ satisfies $\tR{\Phi_1}{\Phi_2}$ if for every possible delay transition from $s$, either the target state satisfies $\Phi_2$, or there is a delay transition of smaller duration whose target state satisfies $\Phi_1$, thereby releasing $s$ of the responsibility of keeping $\Phi_2$ true beyond that delay. For the dual operator one may derive the following semantic equivalence: \begin{align*}
& \semTV{\tU{\Phi_1}{\Phi_2}} \\
& {=}\; \{ s \in \states{S} \mid \exists \delta \in \mathit{del}(s) \colon
\mathit{succ}_<(s,\delta) \subseteq \semTV{\Phi_1} \land \mathit{succ}(s,\delta) \in \semTV{\Phi_2} \}. \end{align*}
\noindent Based on this characterization, one can see that $\tU{\Phi_1}{\Phi_2}$ captures a notion of until. Specifically, $s$ satisfies $\tU{\Phi_1}{\Phi_2}$ if there is a delay transition from $s$ leading to a state satisfying $\Phi_2$, and all delay transitions of strictly shorter duration from $s$ lead to states satisfying $\Phi_1$.
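As a small worked instance (the valuation is hypothetical and chosen purely for illustration), consider the single-clock TTS above with $\val{V}(Z_1) = [0,3)$ and $\val{V}(Z_2) = \{3\}$. Then $0 \in \semTV{\tU{Z_1}{Z_2}}$: the delay $\delta = 3$ is a witness, since $\mathit{succ}(0,3) = 3 \in \semTV{Z_2}$ while $\mathit{succ}_{<}(0,3) = [0,3) \subseteq \semTV{Z_1}$.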
The notions of \emph{timed definition list} and \emph{timed sequent} are the obvious generalizations of the same notions given in Definitions~\ref{def:definition-list} and~\ref{def:sequent}, as is the notion of the semantics $\sem{\seq{s}}{}{}$ of timed sequent $\seq{s}$, which generalizes Definition~\ref{def:sequent-semantics}. The sequent extractor functions $\textit{st}$, $\textit{dl}$ and $\textit{fm}$ also carry over to the timed setting.
To obtain proof rules for the timed mu-calculus, we extend those from Figure~\ref{fig:proof-rules} with the two rules named $\forall$ and $\exists$ as shown in Figure~\ref{fig:proof-rules-timed}.
\begin{figure}
\caption{Proof rules for timed modal operators (extends Figure~\ref{fig:proof-rules}).}
\label{fig:proof-rules-timed}
\end{figure}
Rule $\exists$ is similar to the rule $\dia{K}$ in that it uses a witness function $f$. In this case $f$ is intended to identify, for every $s \in S$, a \emph{witness delay} $f(s) \in \mathit{del}(s)$ allowed from $s$ such that the state $\mathit{succ}(s,f(s))$ reached by delaying $f(s)$ time units from $s$ satisfies $\Phi_2$ and also such that all states reached from $s$ using smaller delays, i.e., those states in $\mathit{succ}_{<}(s,f(s))$, must satisfy $\Phi_1$.
Proof rule $\forall$ also uses a witness function $g$ in its side condition, but its functionality is a bit more complicated than that of the witness function used in rule $\exists$. In particular, $g$ takes two arguments, a state and a delay, and while $g$ may be partial, it is required that for any $s \in S$, $g(s,\delta)$ is defined for every $\delta \in \mathit{del}(s)$. If $\delta \in \mathit{del}(s)$ and $g(s,\delta) < \delta$, then $g(s,\delta)$ is intended to be a delay shorter than $\delta$ such that the state reached from $s$ via that delay satisfies the ``release'' formula $\Phi_1$. If no such shorter delay exists, then $g(s,\delta) = \delta$, reflecting the fact that the state reached from $s$ after $\delta$ has not been released from the obligation to keep $\Phi_2$ true.
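To make the two kinds of witness functions concrete, we sketch a small hypothetical instance in the single-clock TTS used above, with $\val{V}(Z_1) = [0,3)$, $\val{V}(Z_2) = \{3\}$ and $\val{V}(Z_3) = [2,4)$. For the $\exists$ rule applied to $\{0\} \tnxTV{\varepsilon} \tU{Z_1}{Z_2}$, the function with $f(0) = 3$ is a suitable $\exists$-function: $\mathit{succ}(0,f(0)) = 3$ satisfies $Z_2$, and every state in $\mathit{succ}_{<}(0,f(0)) = [0,3)$ satisfies $Z_1$. For the $\forall$ rule applied to $\{0\} \tnxTV{\varepsilon} \tR{Z_3}{Z_1}$, a suitable $\forall$-function is given by $g(0,\delta) = \delta$ for $\delta < 3$ (the state $\delta$ reached after the full delay satisfies $Z_1$, so no release is needed) and $g(0,\delta) = 2$ for $\delta \geq 3$ (the strictly shorter delay $2$ reaches the state $2$, which satisfies the release formula $Z_3$).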
We sometimes refer to the function $f$ mentioned in the side condition of the $\exists$ rule as an \emph{$\exists$-function} and the function $g$ referred to in the side condition of the $\forall$ rule as a \emph{$\forall$-function}. In what follows we use \[ \RuleAppl_T = \textnormal{RAppl} \cup \{(\exists, f) \mid f \ \text{is an $\exists$-function}\} \cup \{(\forall, g) \mid g \ \text{is a $\forall$-function}\} \] for the set of rule applications for the timed mu-calculus. The definitions of partial and complete tableaux (Definition~\ref{def:tableau}) carry over immediately to partial and complete \emph{timed} tableaux, with $\RuleAppl_T$ replacing $\textnormal{RAppl}$. Observe that, in particular, no new terminal nodes need to be added: due to time reflexivity, $s \in \mathit{succ}(s)$ for any state $s$, and thus type-correct $\exists$- and $\forall$-functions can always be given when applying the $\exists$ and $\forall$ rules. (Of course the chosen functions may not necessarily lead to \emph{successful} tableaux.) Likewise, the definitions of successful terminals and successful timed tableaux are the obvious adaptations of the notions given in Definition~\ref{def:successful-tableau}. Also, if $\node{n}$ is a node in a timed partial tableau then the semantics $\sem{\node{n}}{}{}$ of $\node{n}$ carries over in a straightforward manner.
We now extend the local dependency ordering (Definition~\ref{def:local_dependency_ordering}) to timed tableaux as follows. \begin{definition}[Timed local dependency ordering]\label{def:local-dependency-ordering-timed} Let $\node{n}, \node{n}'$ be proof nodes in timed tableau $\mathbb{T}$, with $\node{n}' \in c(\node{n})$ a child of $\node{n}$. Then $s' <_{\node{n}',\node{n}} s$ iff $s' \in \textit{st}(\node{n}')$, $s \in \textit{st}(\node{n})$, and one of the following hold:
\begin{enumerate}
\item $\rho(\node{n}) = [K]$ and $s \xrightarrow{K} s'$; or
\item $\rho(\node{n}) = (\dia{K},f)$ and $s' = f(s)$; or
\item $\textit{rn}(\rho(\node{n})) \not \in \{ [K], \langle K \rangle, \forall, \exists \}$ and $s = s'$; or
\item $\rho(\node{n}) = (\exists, f)$, $cs(\node{n}) = \node{n}_1\node{n}_2$, and either
\begin{itemize}
\item $\node{n}' = \node{n}_1$ and $s' \in \mathit{succ}_{<}(s,f(s))$, or
\item $\node{n}' = \node{n}_2$ and $s' = \mathit{succ}(s,f(s))$; or
\end{itemize}
\item $\rho(\node{n}) = (\forall,g)$, $cs(\node{n}) = \node{n}_1\node{n}_2$, and either
\begin{itemize}
\item $\node{n}' = \node{n}_1$ and $s' = \mathit{succ}(s, g(s,\delta))$ for some $\delta \in \mathit{del}(s)$ with $g(s,\delta) < \delta$, or
\item $\node{n}' = \node{n}_2$ and $s' = \mathit{succ}(s, g(s,\delta))$ for some $\delta \in \mathit{del}(s)$ with $g(s,\delta) = \delta$
\end{itemize}
\end{enumerate} \end{definition} The dependency ordering $\lessdot_{\node{n}',\node{n}}$ and extended dependency ordering $<:_{\node{n}',\node{n}}$ are adapted to timed tableaux in the obvious way, using the timed local dependency ordering as a basis.
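For a concrete (hypothetical) reading of clause 4, suppose $\node{n}$ is a node with $\textit{st}(\node{n}) = \{0\}$ in the single-clock TTS used earlier, $\rho(\node{n}) = (\exists,f)$ with $f(0) = 3$, and $cs(\node{n}) = \node{n}_1\node{n}_2$ with $\textit{st}(\node{n}_1) = \mathit{succ}_{<}(0,3) = [0,3)$ and $\textit{st}(\node{n}_2) = \{\mathit{succ}(0,3)\} = \{3\}$. Then $s' <_{\node{n}_1,\node{n}} 0$ for every $s' \in [0,3)$, and $3 <_{\node{n}_2,\node{n}} 0$.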
We next establish that the timed local dependency ordering also satisfies the semantic sufficiency property.
\begin{lemma}[Semantic sufficiency of timed $<_{\node{n}', \node{n}}$] \label{lem:semantic-sufficiency-of-timed-<}
Let $\node{n}$ be an internal proof node in partial timed tableau $\mathbb{T}$, and let $s \in \textit{st}(\node{n})$ be such that for all $s'$ and $\node{n}'$ with $s' <_{\node{n}', \node{n}} s$, $s' \in \semop{\node{n}'}$. Then $s \in \semop{\node{n}}$. \end{lemma} \remove{ \begin{proofsketch}
The proof of the cases where $\rho(\node{n}) \in \{\exists, \forall\}$ follows the exact same line of reasoning as the cases in the proof of Lemma~\ref{lem:semantic-sufficiency-of-<}. The detailed proof is included in the appendix. \end{proofsketch} } \begin{proof}
Let $\node{n} = S \tnxTVD \Phi$ be an internal node in $\mathbb{T} = \tableauTrl$. Since $\node{n}$ is internal, $\rho(\node{n})$ is defined. Now fix $s \in S$. The proof proceeds by case analysis on $\rho(\node{n})$.
All cases other than $(\exists,f)$ and $(\forall,g)$ are identical to the proof of Lemma~\ref{lem:semantic-sufficiency-of-<}. We therefore only show the remaining two cases.
\begin{itemize}
\item $\rho(\node{n}) = (\exists,f)$.
In this case $\Phi = \tU{\Phi_1}{\Phi_2}$, and $cs(\node{n}) = \node{n}_1\node{n}_2$ where $\node{n}_1 = f_<(S) \tnxTVD \Phi_1$ and $\node{n}_2 = f_=(S) \tnxTVD \Phi_2$.
By definition, $s' <_{\node{n}',\node{n}} s$ iff $\node{n}' \in \{ \node{n}_1, \node{n}_2 \}$, and $s' = \mathit{succ}(s,f(s))$ if $\node{n}' = \node{n}_2$, and $s' \in \mathit{succ}_{<}(s,f(s))$ otherwise. We reason as follows.
\begin{flalign*}
& \text{For all $s'$, $\node{n}'$ such that $s' <_{\node{n}',\node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
& \text{iff}\;\;\; \mathit{succ}(s,f(s)) \in \semop{\node{n}_2}\ \text{and}\ \mathit{succ}_{<}(s,f(s)) \subseteq \semop{\node{n}_1}\span\span
\\
&
&& \text{Definitions of $\node{n}_1, \node{n}_2, <_{\node{n}', \node{n}}$ when $\rho(\node{n}) = (\exists,f)$}
\\
& \text{iff}\;\;\; f(s) \in \mathit{del}(s), \mathit{succ}(s,f(s)) \in \semT{\Phi_2}{\val{V}[\Delta]},\ \text{and}\ \mathit{succ}_{<}(s,f(s)) \subseteq \semT{\Phi_1}{\val{V}[\Delta]} \span\span
\\
&
&& \text{Property of $f$, definition of $\semop{\node{n}_1}$, $\semop{\node{n}_2}$}
\\
& \text{implies}\;\;\; s \in \semT{\tU{\Phi_1}{\Phi_2}}{\val{V}[\Delta]}
&& \text{Definition of $\semT{\tU{\Phi_1}{\Phi_2}}{\val{V}[\Delta]}$} \\
& \text{iff}\;\;\; s \in \semT{\Phi}{\val{V}[\Delta]}
&& \text{$\Phi = \tU{\Phi_1}{\Phi_2}$} \\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\item $\rho(\node{n}) = (\forall, g)$.
In this case $\Phi = \tR{\Phi_1}{\Phi_2}$, and $cs(\node{n}) = \node{n}_1\node{n}_2$ where $\node{n}_1 = g_<(S) \tnxTVD \Phi_1$ and $\node{n}_2 = g_=(S) \tnxTVD \Phi_2$.
By definition, $s' <_{\node{n}',\node{n}} s$ iff either $\node{n}' = \node{n}_1$ and $s' = \mathit{succ}(s, g(s,\delta))$ for some $\delta \in \mathit{del}(s)$ with $g(s,\delta) < \delta$, or $\node{n}' = \node{n}_2$ and $s' = \mathit{succ}(s, g(s,\delta))$ for some $\delta \in \mathit{del}(s)$ with $g(s,\delta) = \delta$.
We reason as follows.
\begin{flalign*}
& \text{For all $s'$, $\node{n}'$ such that $s' <_{\node{n}',\node{n}} s$, $s' \in \semop{\node{n}'}$}\span\span
\\
& \text{iff}\;\;\; \text{for all $\delta \in \mathit{del}(s)$, $g(s,\delta) = \delta$ and $\mathit{succ}(s,g(s,\delta)) \in \semop{\node{n}_2}$}\span\span
\\
& \qquad \text{or $g(s,\delta) < \delta$ and $\mathit{succ}(s,g(s,\delta)) \in \semop{\node{n}_1}$}\span
\\
\multispan4{\hfil \text{Definitions of $\node{n}_1, \node{n}_2, <_{\node{n}',\node{n}}$ when $\rho(\node{n}) = (\forall,g)$}}
\\
& \text{iff}\;\;\; \text{for all $\delta \in \mathit{del}(s)$, $g(s,\delta) = \delta$ and $\mathit{succ}(s,g(s,\delta)) \in \semT{\Phi_2}{\val{V}[\Delta]}$}\span\span \\
& \qquad \text{or $g(s,\delta) < \delta$ and $\mathit{succ}(s,g(s,\delta)) \in \semT{\Phi_1}{\val{V}[\Delta]}$}\span\span
\\
&
&& \text{Definition of $\sem{\node{n}_1}{}{}, \sem{\node{n}_2}{}{}$}
\\
& \text{implies } \text{for all $\delta \in \mathit{del}(s)$, $\mathit{succ}_{<}(s,\delta) \cap \semT{\Phi_1}{\val{V}[\Delta]} \neq \emptyset$ or $\mathit{succ}(s, \delta) \in \semT{\Phi_2}{\val{V}[\Delta]}$} \span\span
\\
\multispan4{\hfil \text{Definition of $g$: if $g(s,\delta) < \delta$, then $\mathit{succ}(s,g(s,\delta)) \in \mathit{succ}_{<}(s,\delta)$}}
\\
& \text{iff}\;\;\; s \in \semT{\tR{\Phi_1}{\Phi_2}}{\val{V}[\Delta]}
&& \text{Definition of $\semT{\tR{\Phi_1}{\Phi_2}}{\val{V}[\Delta]}$}
\\
& \text{iff}\;\;\; s \in \semT{\Phi}{\val{V}[\Delta]}
&& \text{$\Phi = \tR{\Phi_1}{\Phi_2}$}
\\
& \text{iff}\;\;\; s \in \semop{\node{n}}
&& \text{Definition of $\semop{\node{n}}$}
\end{flalign*}
\qedhere
\end{itemize}
\end{proof}
\subsection{Soundness} We now use the semantic sufficiency result from the previous section to generalize the soundness results from Section~\ref{sec:Soundness-via-support-orderings} to timed tableaux.
First, observe that the local soundness result from Lemma~\ref{lem:local-soundness} carries over to the timed setting immediately, since it solely relies on the semantic sufficiency result for $<_{\node{n},\node{n}'}$.
We next show how the node formulas from Definition~\ref{def:node-formulas} generalize to nodes in a timed tableau.
\begin{definition}[Timed node formulas]~\label{def:node-formulas-timed}
For each companion node $\node{m} \in \companions{\mathbb{T}}$ let $Z_\node{m}$ be a unique fresh variable, with $\textnormal{Var}\xspace_{\mathbb{T}} = \{ Z_\node{m} \mid\node{m} \in \companions{\mathbb{T}} \}$ the set of all such variables. Then for node $\node{n} \in \node{N}$ formula $P(\node{n})$ is defined inductively as follows.
Cases 1-10 are as in Definition~\ref{def:node-formulas}. The cases for the relativized modal operators are as follows.
\begin{enumerate}
\setcounter{enumi}{10}
\item If $\rho(\node{n}) = (\exists,f)$ and $cs(\node{n}) = \node{n}_1\node{n}_2$ then $P(\node{n}) = \exists_{P(\node{n}_1)}(P(\node{n}_2))$.
\item If $\rho(\node{n}) = (\forall,g)$ and $cs(\node{n}) = \node{n}_1\node{n}_2$ then $P(\node{n}) = \forall_{P(\node{n}_1)}(P(\node{n}_2))$.
\end{enumerate} \end{definition} Valuation consistency (Definition~\ref{def:consistency}) and the associated results in Lemmas~\ref{lem:consistency-property} and~\ref{lem:companion-node-formulas-and-semantics} carry over immediately to the timed setting. We next use these results to link the node formulas and the semantics in timed tableaux.
\begin{lemma}[Timed node formulas and semantics]\label{lem:node-formulas-and-node-semantics-timed}
Let $\val{V}'$ be a consistent valuation for timed tableau $\mathbb{T}$. Then for every $\node{n} \in \node{N}$, $\semT{P(\node{n})}{\val{V}'} = \semop{\node{n}}$. \end{lemma} \remove{ \begin{proofsketch}
Analogous to the proof of Lemma~\ref{lem:node-formulas-and-node-semantics}. \end{proofsketch} } \begin{proof}
Let valuation $\val{V}'$ be consistent with $\mathbb{T}$.
The proof is by induction on $\tree{T}$.
So fix node $\node{n}$ in $\tree{T}$.
The induction hypothesis asserts that for all $\node{n}' \in c(\node{n})$, $\semT{P(\node{n}')}{\val{V}'} = \semop{\node{n}'} $.
The proof now proceeds by an analysis of $\rho(\node{n})$.
All cases except $\textit{rn}(\node{n}) \in \{ \forall, \exists \}$ are identical to those in Lemma~\ref{lem:node-formulas-and-node-semantics}. For the remaining cases, the proofs are as follows.
\begin{itemize}
\item $\rho(\node{n}) = (\exists,f)$.
In this case we know that $\node{n} = S \tnxTVD \tU{\Phi_1}{\Phi_2}$ and that $cs(\node{n}) = \node{n}_1\node{n}_2$, where each $\node{n}_i = S_i \tnxTVD \Phi_i$,
$S_1 = f_<(S)$, and $S_2 = f_=(S)$.
The induction hypothesis guarantees that $\semT{P(\node{n}_i)}{\val{V}'} = \semop{\node{n}_i} = \semT{\Phi_i}{\val{V}[\Delta]}$.
We reason as follows.
\begin{align*}
& \semT{P(\node{n})}{\val{V}'}
\\
& = \semT{\exists_{P(\node{n}_1)}(P(\node{n}_2))}{\val{V}'}
& & \text{Definition of $P(-)$}
\\
& = \{ s \in \states{S} \mid \exists \delta \in \mathit{del}(s) \colon \mathit{succ}_{<}(s,\delta) \subseteq \semT{P(\node{n}_1)}{\val{V}'} \land \mathit{succ}(s,\delta) \in \semT{P(\node{n}_2)}{\val{V}'} \} \span\span
\\
& & & \text{Semantics of $\exists$}
\\
& = \{ s \in \states{S} \mid \exists \delta \in \mathit{del}(s) \colon \mathit{succ}_{<}(s,\delta) \subseteq \semop{\node{n}_1} \land \mathit{succ}(s,\delta) \in \semop{\node{n}_2} \} \span\span
\\
& & & \text{Induction hypothesis (twice)}
\\
& = \{ s \in \states{S} \mid \exists \delta \in \mathit{del}(s) \colon \mathit{succ}_{<}(s,\delta) \subseteq \semT{\Phi_1}{\val{V}[\Delta]} \land \mathit{succ}(s,\delta) \in \semT{\Phi_2}{\val{V}[\Delta]} \} \span\span
\\
& & & \text{Definition of $\semop{\node{n}_i}$}
\\
& = \semT{\tU{\Phi_1}{\Phi_2}}{\val{V}[\Delta]}
& & \text{Semantics of $\exists$}
\\
& = \semop{\node{n}}
& & \text{Definition of $\semop{\node{n}}$}
\end{align*}
\item $\rho(\node{n}) = (\forall,g)$.
In this case we know that $\node{n} = S \tnxTVD \tR{\Phi_1}{\Phi_2}$ and that $cs(\node{n}) = \node{n}_1\node{n}_2$, where each $\node{n}_i = S_i \tnxTVD \Phi_i$, $S_1 = g_<(S)$ and $S_2 = g_=(S)$.
The induction hypothesis guarantees that $\semT{P(\node{n}_i)}{\val{V}'} = \semop{\node{n}_i} = \semT{\Phi_i}{\val{V}[\Delta]}$.
We reason as follows.
\begin{align*}
& \semT{P(\node{n})}{\val{V}'} \\
&{=}\; \semT{\tR{P(\node{n}_1)}{P(\node{n}_2)}}{\val{V}'}
& & \text{Definition of $P(-)$}
\\
&{=}\; \{ s \in \states{S} \mid \forall \delta \in \mathit{del}(s) \colon \mathit{succ}_{<}(s,\delta) \cap \semT{P(\node{n}_1)}{\val{V}'} = \emptyset \span\span\\
& \hspace{120pt} \implies \mathit{succ}(s,\delta) \in \semT{P(\node{n}_2)}{\val{V}'} \} \span\span
\\
& & & \text{Semantics of $\forall$}
\\
&{=}\; \{ s \in \states{S} \mid \forall \delta \in \mathit{del}(s) \colon \mathit{succ}_{<}(s,\delta) \cap \semop{\node{n}_1} = \emptyset \implies \mathit{succ}(s,\delta) \in \semop{\node{n}_2} \} \span\span
\\
& & & \text{Induction hypothesis (twice)}
\\
&{=}\; \{ s \in \states{S} \mid \forall \delta \in \mathit{del}(s) \colon \mathit{succ}_{<}(s,\delta) \cap \semT{\Phi_1}{\val{V}[\Delta]} = \emptyset \span\span \\
& \hspace{120pt}\implies \mathit{succ}(s,\delta) \in \semT{\Phi_2}{\val{V}[\Delta]} \} \span\span
\\
& & & \text{Definition of $\semop{\node{n}_i}$}
\\
&{=}\; \semT{\tR{\Phi_1}{\Phi_2}}{\val{V}[\Delta]}
& & \text{Semantics of $\forall$}
\\
&{=}\; \semop{\node{n}}
& & \text{Definition of $\semop{\node{n}}$}
\end{align*}
\qedhere
\end{itemize} \end{proof} Corollary~\ref{cor:node-formulas-vs-node-semantics}, as well as the definitions of support dependency ordering and influence extensions of valuations (Definitions~\ref{def:support-dependency-ordering} and~\ref{def:support-extension-of-valuation}) and the associated Lemmas~\ref{lem:support-dependency-ordering-characterization}, \ref{lem:pseudo-transitivity-of-support-dependency-ordering} and~\ref{lem:monotonicity-of-dependency-extensions} and Corollary~\ref{cor:monotonicity-of-dependency-extensions} generalize to timed tableaux in the obvious way.
These results now allow us to prove that $<:_\node{n}^+$ is also a support ordering for companion nodes in timed tableaux, generalizing Lemma~\ref{lem:support-ordering-for-companion-nodes}. \begin{lemma}\label{lem:support-ordering-for-companion-nodes-timed}
Let $\mathbb{T} = \tableauTrl$ be a successful timed tableau with $\node{n} \in \companions{\mathbb{T}}$ a companion node of $\mathbb{T}$ and $\node{n}'$ the child of $\node{n}$ in $\tree{T}$. Also let $S = \textit{st}(\node{n})$. Then $(S, <:_{\node{n}}^+)$ is a support ordering for $\semfT{Z_{\node{n}}}{P(\node{n}')}{\val{V}_{\node{n}}}$. \end{lemma} \begin{proof} The proof is analogous to the proof of Lemma~\ref{lem:support-ordering-for-companion-nodes}, inductively proving the following stronger result. Fix successful timed tableau $\mathbb{T} = \tableauTrl$ with $\tree{T} = (\node{N},\node{r},p,cs)$ and let $\node{n} \in \companions{\mathbb{T}}$ be a companion node of $\mathbb{T}$ with $S = \textit{st}(\node{n})$. We prove that for every $\node{m} \in D(\node{n})$ and $s \in S$ statements
\ref{stmt:necessity-timed} and \ref{stmt:support-timed} hold. \begin{enumerate}[left=\parindent, label=S\arabic*., ref=S\arabic*] \item\label{stmt:necessity-timed}
For all $x$ such that $x \leq:_{\node{m},\node{n}} s$,
$x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. \item\label{stmt:support-timed}
If $\node{m} \in \cnodes{\mathbb{T}}$,
$\node{m}' = cs(\node{m})$
and
$x$ satisfies $x \leq:_{\node{m},\node{n}} s$
then
$(S_x, <:_{\node{m},x})$ is a support ordering for $\semfT{Z_{\node{m}}}{P(\node{m}')}{\val{V}_{\node{m},x}}$, where
$S_x = \preimg{(<:_{\node{m}}^*)}{x}$
and
${<:_{\node{m},x}} = \restrict{(<:_{\node{m}}^+)}{S_x}$. \end{enumerate} The proof proceeds by case analysis on the form of $\rho(\node{m})$; all cases are completely analogous to those in the proof of Lemma~\ref{lem:support-ordering-for-companion-nodes}, except the proofs of statement~\ref{stmt:necessity-timed} in case $\rho(\node{m}) \in \{ (\forall, g), (\exists,f) \}$. The proof for these two cases is as follows.
If $\rho(\node{m}) = (\exists,f)$, $\node{m} = S' \tnxTVD \tU{\Phi_1}{\Phi_2}$ for some $\Phi_1$ and $\Phi_2$, $cs(\node{m}) = \node{m}'_1\node{m}'_2$, $\node{m}'_i = S'_i \tnxTVD \Phi_i$ for $i = 1,2$, $S'_1 = f_<(S')$ and $S'_2 = f_=(S')$. The induction hypothesis ensures that \ref{stmt:necessity-timed} holds for each $\node{m}'_i$; we must show that \ref{stmt:necessity-timed} holds for $\node{m}$ and $s$. To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. Let $x'' = \mathit{succ}(x,f(x))$, and note that $x'' <_{\node{m}_2',\node{m}} x$; the pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $x''$ satisfies $x'' \leq:_{\node{m}'_2,\node{n}} s$, and the induction hypothesis then ensures that $x'' \in \semT{P(\node{m}'_2)}{\val{V}_{\node{m}'_2,x''}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $x'' \in \semT{P(\node{m}'_2)}{\val{V}_{\node{m},x}}$. Next, fix arbitrary $x' \in \mathit{succ}_{<}(x, f(x))$, and note that $x' <_{\node{m}'_1,\node{m}} x$. Using the same line of argument as before, we find that $x' \in \semT{P(\node{m}'_1)}{\val{V}_{\node{m},x}}$. So, $\mathit{succ}_{<}(x, f(x)) \subseteq \semT{P(\node{m}'_1)}{\val{V}_{\node{m},x}}$. Now, the semantics of $\exists$ ensures that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$.
If $\rho(\node{m}) = (\forall,g)$, $\node{m} = S' \tnxTVD \tR{\Phi_1}{\Phi_2}$ for some $\Phi_1$ and $\Phi_2$, $cs(\node{m}) = \node{m}'_1\node{m}'_2$, $\node{m}'_i = S'_i \tnxTVD \Phi_i$ for $i = 1,2$, $S'_1 = g_<(S')$ and $S'_2 = g_=(S')$. The induction hypothesis ensures that \ref{stmt:necessity-timed} holds for each $\node{m}'_i$; we must show that \ref{stmt:necessity-timed} holds for $\node{m}$ and $s$. To this end, let $x$ be such that $x \leq:_{\node{m},\node{n}} s$; we must show that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$. The result follows from the semantics of $\forall$ if we prove for all $\delta \in \mathit{del}(x)$ that either $\mathit{succ}_{<}(x,\delta) \cap \semT{P(\node{m}'_1)}{\val{V}_{\node{m},x}} \neq \emptyset$ or $\mathit{succ}(x,\delta) \in \semT{P(\node{m}'_2)}{\val{V}_{\node{m},x}}$. So, fix arbitrary $\delta \in \mathit{del}(x)$. Suppose $g(x,\delta) = \delta$, and let $x' = \mathit{succ}(x,g(x,\delta))$; then $x' <_{\node{m}'_2,\node{m}} x$. The pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $x' \leq:_{\node{m}'_2,\node{n}} s$, and the induction hypothesis then ensures that $x' \in \semT{P(\node{m}'_2)}{\val{V}_{\node{m}'_2,x'}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $x' \in \semT{P(\node{m}'_2)}{\val{V}_{\node{m},x}}$. Next, suppose $g(x,\delta) = \delta' < \delta$, and let $x'' = \mathit{succ}(x,g(x,\delta)) = \mathit{succ}(x,\delta')$. Observe that $x'' \in \mathit{succ}_{<}(x,\delta)$ and $x'' <_{\node{m}'_1,\node{m}} x$. The pseudo-transitivity of $\leq:_{\node{m},\node{n}}$ guarantees that $x'' \leq:_{\node{m}'_1,\node{n}} s$, and the induction hypothesis then ensures that $x'' \in \semT{P(\node{m}'_1)}{\val{V}_{\node{m}'_1,x''}}$. Corollary~\ref{cor:monotonicity-of-dependency-extensions} guarantees that $x'' \in \semT{P(\node{m}'_1)}{\val{V}_{\node{m},x}}$. As $x'' \in \mathit{succ}_{<}(x,\delta)$, $\mathit{succ}_{<}(x,\delta) \cap \semT{P(\node{m}'_1)}{\val{V}_{\node{m},x}} \neq \emptyset$. Hence it follows from the semantics of $\forall$ that $x \in \semT{P(\node{m})}{\val{V}_{\node{m},x}}$.\qedhere \end{proof}
Corollary~\ref{cor:support-orderings-for-top-level-companion-nodes} again immediately generalizes to timed tableaux based on the previous lemma. We now conclude our soundness proof.
\begin{theorem}[Soundness of timed mu-calculus proof system\label{thm:soundness-timed-tableau}]
Fix TTS $(\states{S},\to)$ of timed sort $\Sigma$ and valuation $\val{V}$, and let $\mathbb{T} = \tableauTrl$ be a successful timed tableau for sequent $\seq{s}$, where $\textit{dl}(\seq{s}) = \varepsilon$. Then $\seq{s}$ is valid. \end{theorem} \begin{proof}
Follows the exact same line of reasoning as the proof of Theorem~\ref{thm:soundness}.\qedhere \end{proof}
Observe that the results in this section only ever involve adding cases for the new operators to definitions, as well as to the proofs that use case distinction on the operators. In all of the cases we considered, the results that need to be added for these new operators are straightforward, and follow the same line of reasoning as the other operators in the mu-calculus. This illustrates the extensibility of the proof methodology we developed, at least for proving soundness of tableaux when adding new operators to the mu-calculus.
\subsection{Completeness}\label{sec:timed-completeness} We finally turn our attention to proving completeness of the tableaux construction for the timed mu-calculus. In particular, we show that the construction used to establish completeness in Section~\ref{sec:Completeness} can be straightforwardly adapted to account for the new modalities introduced by the timed mu-calculus. This, again, illustrates the extensibility of the proofs given in this paper.
We first note that the notion of tableau normal form (TNF) introduced in Definition~\ref{def:tableau-normal-forms} carries over to timed tableaux as well; the proof of the corresponding Lemma~\ref{lem:structural-equivalence-of-TNF-tableaux} only needs to be adapted by including $\forall$ and $\exists$ in case that $\textit{rn}(\rho(\node{n})) \in \{\land,\lor,[K],\dia{K}\}$. The details of that adaptation are routine and left to the reader. We now establish completeness for timed mu-calculus formulas without fixed points, extending the result in Lemma~\ref{lem:fixpoint-free-completeness}.
\begin{lemma}[Timed fixpoint-free completeness]\label{lem:timed-fixpoint-free-completeness} Let $\mathcal{T}, \val{V}, \Phi$ and $S$ be such that $\Phi$ is a fixpoint-free timed mu-calculus formula and $S \subseteq \semTV{\Phi}$. Then there is a successful TNF timed tableau for $S \tnxTV{\varepsilon} \Phi$. \end{lemma} \begin{proof} Let $\mathcal{T} = (\states{S}, \to)$ be a TTS of timed sort $\Sigma$, and $\val{V}$ be a valuation. The proof proceeds by structural induction on $\Phi$; the induction hypothesis states that for any subformula $\Phi'$ of $\Phi$ and $S'$ such that $S' \subseteq \semTV{\Phi'}$, $S' \tnxTV{\varepsilon} \Phi'$ has a successful TNF timed tableau. The proof is completely analogous to that of Lemma~\ref{lem:fixpoint-free-completeness}, and involves a case analysis on the form of $\Phi$. We here only show the cases for the new operators, $\forall$ and $\exists$.
Assume $\Phi = \exists_{\Phi_1} \Phi_2$; we first establish that there exists a function $f \in S \to \mathbb{R}_{\geq 0}$ such that for all $s \in S$, $f(s) \in \mathit{del}(s)$, $\mathit{succ}_{<}(s,f(s)) \subseteq \semTV{\Phi_1}$, and $\mathit{succ}(s,f(s)) \in \semTV{\Phi_2}$. Fix arbitrary $s \in S$. Then $s \in \semTV{\exists_{\Phi_1} \Phi_2}$, and the definition of $\semTV{\exists_{\Phi_1} \Phi_2}$ guarantees the existence of $\delta \in \mathit{del}(s)$ such that $\mathit{succ}_{<}(s,\delta) \subseteq \semTV{\Phi_1}$ and $\mathit{succ}(s,\delta) \in \semTV{\Phi_2}$. Let $\delta$ be such a delay and set $f(s) = \delta$; then $f$ satisfies the required conditions at $s$.
It follows from this definition of $f$ that $f_<(S) \subseteq \semTV{\Phi_1}$. Furthermore, as for every $s \in S$, $\mathit{succ}(s,f(s)) \in \semTV{\Phi_2}$, $f_=(S) \subseteq \semTV{\Phi_2}$. Hence, the induction hypothesis guarantees that successful TNF timed tableaux exist for $f_<(S) \tnxTV{\varepsilon} \Phi_1$ and $f_=(S) \tnxTV{\varepsilon} \Phi_2$. Without loss of generality, assume that these tableaux have disjoint sets of proof nodes. We now construct a successful TNF timed tableau for $S \tnxTV{\varepsilon} \exists_{\Phi_1} \Phi_2$ as follows. Create a fresh tree node labeled by $S \tnxTV{\varepsilon} \exists_{\Phi_1} \Phi_2$, having as its left child the root node of the successful TNF timed tableau for $f_<(S) \tnxTV{\varepsilon} \Phi_1$ and as its right child the root of the successful TNF timed tableau for $f_=(S) \tnxTV{\varepsilon} \Phi_2$. The rule application associated with the new node is $(\exists,f)$. The new tableau is clearly successful and TNF.
Now assume $\Phi = \forall_{\Phi_1} \Phi_2$; we first establish that there exists a function $g \in S \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ such that, for all $s \in S$, $\delta \in \mathit{del}(s)$, $g(s,\delta) \leq \delta$, and such that if $g(s,\delta) = \delta$, $\mathit{succ}(s,g(s,\delta)) \in \semTV{\Phi_2}$, and $\mathit{succ}(s,g(s,\delta)) \in \semTV{\Phi_1}$ otherwise. Fix arbitrary $s \in S$. Since $s \in \semTV{\forall_{\Phi_1} \Phi_2}$, the definition of $\semTV{\forall_{\Phi_1} \Phi_2}$ guarantees that, for all $\delta \in \mathit{del}(s)$, either $\mathit{succ}_{<}(s,\delta) \cap \semTV{\Phi_1} \neq \emptyset$ or $\mathit{succ}(s,\delta) \in \semTV{\Phi_2}$. So, if $\mathit{succ}_{<}(s,\delta) \cap \semTV{\Phi_1} \neq \emptyset$ there is some $\delta' < \delta$ such that $\mathit{succ}(s,\delta') \in \semTV{\Phi_1}$, and we choose $g(s,\delta) = \delta'$. Otherwise, $\mathit{succ}(s,\delta) \in \semTV{\Phi_2}$ and we can choose $g(s,\delta) = \delta$. Given such a $g$,
it is easy to see that $g_<(S) \subseteq \semTV{\Phi_1}$ and $g_=(S) \subseteq \semTV{\Phi_2}$; hence the induction hypothesis guarantees that successful TNF timed tableaux exist for $g_<(S) \tnxTV{\varepsilon} \Phi_1$ and $g_=(S) \tnxTV{\varepsilon} \Phi_2$. Without loss of generality, assume that these tableaux have disjoint sets of proof nodes. We now construct a successful TNF timed tableau for $S \tnxTV{\varepsilon} \forall_{\Phi_1} \Phi_2$ as follows. Create a fresh tree node labeled by $S \tnxTV{\varepsilon} \forall_{\Phi_1} \Phi_2$, having as its left child the root node of the successful TNF timed tableau for $g_<(S) \tnxTV{\varepsilon} \Phi_1$ and as its right child the root of the successful TNF timed tableau for $g_=(S) \tnxTV{\varepsilon} \Phi_2$. The rule application associated with the new node is $(\forall,g)$. The new tableau is clearly successful and TNF.\qedhere \end{proof}
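\noindent Schematically, and in the same informal goal-above-subgoals rendering as for the $\exists$ case, the rule application at the root constructed in the $\forall$ case above is:
\[
\frac{S \tnxTV{\varepsilon} \forall_{\Phi_1} \Phi_2}
     {g_<(S) \tnxTV{\varepsilon} \Phi_1 \qquad\qquad g_=(S) \tnxTV{\varepsilon} \Phi_2}\;(\forall,g)
\]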
The notion of tableau compliance (Definition~\ref{def:tableau-compliance}) generalizes to the timed setting in the obvious way.
We next establish the existence of successful TNF timed tableaux for valid sequents in the case where formulas have the form $\sigma Z . \Phi$, where $\Phi$ does not contain fixpoint subformulas, and have specific $\sigma$-compatible fixpoint orderings, analogous to Lemma~\ref{lem:single-fixpoint-completeness}.
\begin{lemma}[Timed single-fixpoint completeness]\label{lem:timed-single-fixpoint-completeness} Let $\mathcal{T}$ be a TTS, and let $\Phi$, $Z$, $\val{V}$, $\sigma$ and $S$ be such that $\Phi$ is a fixpoint-free timed mu-calculus formula and $S = \semTV{\sigma Z . \Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $\semfZTV{\Phi}$. Then $S \tnxTV{\varepsilon} \sigma Z . \Phi$ has a successful TNF timed tableau compliant with $(S, \prec)$. \end{lemma} \begin{proof} Fix TTS $\mathcal{T}= \lts{\states{S}}$ of sort $\Sigma$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $\Phi$ is a fixpoint-free timed mu-calculus formula, and $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $f_\Phi = \semfZTV{\Phi}$. We must construct a successful TNF timed tableau for sequent $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $(S,\prec)$. The proof mirrors the one given for Lemma~\ref{lem:single-fixpoint-completeness} and consists of the following steps. \begin{enumerate}
\item\label{it:step-single-state-timed-tableau}
For each $s \in S$ we use Lemma~\ref{lem:timed-fixpoint-free-completeness} to establish the existence of a successful TNF timed tableau for sequent $\{s\} \tnxT{\val{V}_s}{\varepsilon} \Phi$, where $\val{V}_s = \val{V}[Z := \preimg{{\prec}}{s}]$.
\item\label{it:step-full-set-timed-tableau}
We then construct a successful TNF timed tableau for sequent $S \tnxT{\val{V}_S}{\varepsilon} \Phi$, where $\val{V}_S = \val{V}[Z := \preimg{{\prec}}{S}]$, from the individual tableaux for the $s \in S$.
\item\label{it:step-fixpoint-timed-tableau}
We convert the tableau for $S \tnxT{\val{V}_s}{\varepsilon} \Phi$ into a successful TNF timed tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $\prec$. \end{enumerate}
\paragraph{Step~\ref{it:step-single-state-timed-tableau} of proof outline: construct tableau for $\{s\} \tnxT{\val{V}_s}{\varepsilon} \Phi$, where $s \in S$.} This construction is completely analogous to the one in the proof of Lemma~\ref{lem:single-fixpoint-completeness}, using Lemma~\ref{lem:timed-fixpoint-free-completeness} instead of Lemma~\ref{lem:fixpoint-free-completeness}. Let $\mathbb{T}_s = (\mathcal{T},\val{V}_s,\tree{T},\rho_s,\lambda_s)$ be the successful TNF timed tableau whose root is $\{s\} \tnxT{\val{V}_s}{\varepsilon} \Phi$, with $\tree{T} = (\node{N},\node{r},p,cs)$ the common tree shared by all these structurally equivalent tableaux, with $\textit{fm}(\node{n})$ and $\textit{rn}(\node{n})$ for $\node{n} \in \node{N}$ the common formulas and rule names each tableau includes in $\node{n}$.
\paragraph{Step~\ref{it:step-full-set-timed-tableau} of proof outline: construct tableau for $S \tnxT{\val{V}_S}{\varepsilon} \Phi$.} We now adapt the construction in the proof of Lemma~\ref{lem:single-fixpoint-completeness} to build a successful TNF timed tableau for $S \tnxT{\val{V}_S}{\varepsilon} \Phi$ satisfying the following: if $s, s'$ and $\node{n}'$ are such that $\textit{fm}(\node{n}') = Z$ and $s' <:_{\node{n}',\node{r}} s$, then $s' \prec s$. There are two cases to consider. In the first case, $S = \emptyset$. In this case, ${\prec} = \preimg{{\prec}}{S} = \emptyset$, and $\emptyset \tnxT{\val{V}_S}{\varepsilon} \Phi$ is valid and therefore, by Lemma~\ref{lem:timed-fixpoint-free-completeness}, has a successful TNF timed tableau. Define $\mathbb{T}_S$ to be this tableau. Note that since $S = \emptyset$, $\mathbb{T}_S$ vacuously satisfies the property involving $<:$.
In the second case, $S \neq \emptyset$; we will construct $\mathbb{T}_S = (\mathcal{T}, \val{V}_S, \tree{T}, \rho_S, \lambda_S)$ that is structurally equivalent to each $\mathbb{T}_s$ for $s \in S$. As was the case in the proof of Lemma~\ref{lem:single-fixpoint-completeness} the idea is to appropriately ``merge" the individual tableaux $\mathbb{T}_s$ for the $s \in S$.
The construction uses a co-inductive strategy to define $\rho_S$ and $\lambda_S$ so that invariants \ref{inv:rule}--\ref{inv:state-set} in Lemma~\ref{lem:single-fixpoint-completeness} hold for $\node{n} \in \node{N}$. Specifically, $\lambda_S(\node{r})$ is set to be sequent $S \tnxT{\val{V}_S}{\varepsilon} \Phi$; this ensures that invariants $\ref{inv:formula}$ and $\ref{inv:state-set}$ hold of $\node{r}$. Then, for every internal node $\node{n}$ for which $\lambda_S(\node{n})$ has been defined and for which invariants~\ref{inv:formula} and~\ref{inv:state-set} hold, $\rho_S(\node{n})$ and $\lambda_S(\node{n}')$ for each child $\node{n}'$ of $\node{n}$ are defined so that invariant~\ref{inv:rule} holds of $\node{n}$ and invariants~\ref{inv:formula} and~\ref{inv:state-set} hold of each $\node{n}'$. This processing of internal nodes is done using a case analysis on $\textit{rn}(\node{n})$. The details are as follows, where we let $S_{\node{n}} = \textit{st}(\lambda_S(\node{n}))$ be the set of states in the sequent labeling $\node{n}$.
\begin{description}
\item[$\textit{rn}(\node{n}) = {\perp}$, or $\textit{rn}(\node{n}) \in \{ \land, \lor, [K{]}, \dia{K} \}$.] The constructions are the same as the ones in the proof of Lemma~\ref{lem:single-fixpoint-completeness}.
\item[$\textit{rn}(\node{n}) = \exists$.]
In this case, $cs(\node{n}) = \node{n}_1\node{n}_2$ and $\textit{fm}(\node{n}) = \exists_{\textit{fm}(\node{n}_1)}\textit{fm}(\node{n}_2)$.
We begin by constructing a function $f_{\node{n}} \in S_{\node{n}} \to \mathbb{R}_{\geq 0}$ such that for all $s \in S_{\node{n}}$, $f_{\node{n}}(s) \in \mathit{del}(s)$, and also such that $(f_\node{n})_<(S_\node{n}) \subseteq \bigcup_{t \in S} \textit{st}(\lambda_t(\node{n}_1))$ and $(f_\node{n})_=(S_\node{n}) \subseteq \bigcup_{t \in S} \textit{st}(\lambda_t(\node{n}_2))$.
This function will then be used to define $\rho_S(\node{n})$, $\lambda_S(\node{n}_1)$ and $\lambda_S(\node{n}_2)$.
So fix $s \in S_{\node{n}}$; we construct $f_{\node{n}}(s)$ based on the tableaux $\mathbb{T}_t$ whose sequent for $\node{n}$ contains $s$. To this end, define
\[
I_s = \{t \in S \mid s \in \textit{st}(\lambda_{t}(\node{n})) \}.
\]
Intuitively, $I_s \subseteq S$ contains all states $t$ whose tableau $\mathbb{T}_{t}$ contains state $s$ in $\node{n}$.
Clearly $I_s$ is non-empty and thus contains a pseudo-minimum element $t$ (Lemma~\ref{lem:qwo-pseudo-minimum}).
Let $f_{\node{n},t} \in \textit{st}(\lambda_t(\node{n})) \to \mathbb{R}_{\geq 0}$ be such that $\rho_t(\node{n}) = (\exists, f_{\node{n},t})$.
We know that
for all $s' \in \textit{st}(\lambda_t(\node{n}))$, $f_{\node{n},t}(s') \in \mathit{del}(s')$, $\mathit{succ}_{<}(s',f_{\node{n},t}(s')) \subseteq \textit{st}(\lambda_t(\node{n}_1))$, and $\mathit{succ}(s', f_{\node{n},t}(s')) \in \textit{st}(\lambda_t(\node{n}_2))$.
We now define $f_{\node{n}}(s) = f_{\node{n},t}(s)$.
Finally, we define $\rho_S(\node{n}) = (\exists, f_{\node{n}})$,
$\lambda_S(\node{n}_1) = (f_\node{n})_<(S_\node{n}) \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_1)$, and $\lambda_S(\node{n}_2) = (f_\node{n})_=(S_\node{n}) \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_2)$.
It can be seen that invariant~\ref{inv:rule} holds of $\node{n}$ and that \ref{inv:formula} and \ref{inv:state-set} hold of $\node{n}_1$ and $\node{n}_2$.
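\noindent For reference, and writing $t_s$ for the pseudo-minimum of $I_s$ chosen above (this notation is used only in this remark), the merged function just defined can be written compactly as
\[
f_{\node{n}}(s) \;=\; f_{\node{n},t_s}(s) \qquad \text{for each } s \in S_{\node{n}}.
\]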
\item[$\textit{rn}(\node{n}) = \forall$.]
In this case, $cs(\node{n}) = \node{n}_1\node{n}_2$ and $\textit{fm}(\node{n}) = \forall_{\textit{fm}(\node{n}_1)}\textit{fm}(\node{n}_2)$.
We begin by constructing a function $g_\node{n} \in S_{\node{n}} \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ such that $g_\node{n}(s,\delta) \leq \delta$ for all $s \in S_{\node{n}}$ and $\delta \in \mathit{del}(s)$, and such that $(g_\node{n})_<(S_\node{n}) \subseteq \bigcup_{t \in S} \textit{st}(\lambda_t(\node{n}_1))$ and $(g_\node{n})_=(S_\node{n}) \subseteq \bigcup_{t \in S} \textit{st}(\lambda_t(\node{n}_2))$.
This function will then be used to define $\rho_S(\node{n})$, $\lambda_S(\node{n}_1)$ and $\lambda_S(\node{n}_2)$ so that the desired invariants hold.
So fix $s \in S_\node{n}$ and $\delta \in \mathit{del}(s)$; we construct $g_{\node{n}}(s,\delta)$ based on the tableaux $\mathbb{T}_{t}$ whose sequent for $\node{n}$ contains $s$. To this end, define
\[
I_s = \{t \in S \mid s \in \textit{st}(\lambda_{t}(\node{n})) \}.
\]
Intuitively, $I_s \subseteq S$ contains all states $t$ whose tableau $\mathbb{T}_{t}$ contains state $s$ in $\node{n}$.
Clearly $I_s$ is non-empty and thus contains a pseudo-minimum element $t$ with respect to $\prec$ (Lemma~\ref{lem:qwo-pseudo-minimum}).
Let $g_{\node{n}, t} \in \textit{st}(\lambda_{t}(\node{n})) \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ be such that $\rho_t(\node{n}) = (\forall,g_{\node{n},t})$. We know that for all $\delta \in \mathit{del}(s)$, $g_{\node{n},t}(s,\delta) \leq \delta$; moreover, if $g_{\node{n},t}(s,\delta) < \delta$ then $\mathit{succ}(s,g_{\node{n},t}(s,\delta)) \in \textit{st}(\lambda_t(\node{n}_1))$, and if $g_{\node{n},t}(s,\delta) = \delta$ then $\mathit{succ}(s,g_{\node{n},t}(s,\delta)) \in \textit{st}(\lambda_t(\node{n}_2))$.
We take $g_{\node{n}}(s,\delta) = g_{\node{n},t}(s,\delta)$.
Finally, we define $\rho_S(\node{n}) = (\forall, g_{\node{n}})$, $\lambda_S(\node{n}_1) = (g_\node{n})_<(S_\node{n}) \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_1)$ and $\lambda_S(\node{n}_2) = (g_\node{n})_=(S_\node{n}) \tnxT{\val{V}_S}{\varepsilon} \textit{fm}(\node{n}_2)$.
It can be seen that invariant~\ref{inv:rule} holds of $\node{n}$ and that \ref{inv:formula} and \ref{inv:state-set} hold of $\node{n}_1$ and $\node{n}_2$.
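\noindent Analogously to the $\exists$ case, with $t_s$ again denoting the chosen pseudo-minimum of $I_s$, the merged function can be written compactly as
\[
g_{\node{n}}(s,\delta) \;=\; g_{\node{n},t_s}(s,\delta) \qquad \text{for each } s \in S_{\node{n}} \text{ and } \delta \in \mathit{del}(s).
\]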
\end{description} This construction ensures that Properties~\ref{inv:rule}--\ref{inv:state-set} hold for all $\node{n}$.
To establish that $\mathbb{T}_S$ is successful we must show that every leaf in $\mathbb{T}_S$ is successful. The argument is identical to the proof in Lemma~\ref{lem:single-fixpoint-completeness}, Step~\ref{it:step-full-set-tableau}.
\paragraph{Step~\ref{it:step-fixpoint-timed-tableau} of proof outline: construct tableau for $S \tnxT{\val{V}}{\varepsilon} \sigma Z.\Phi$.} The construction is identical to the one in Lemma~\ref{lem:single-fixpoint-completeness}, Step~\ref{it:step-fixpoint-tableau}. \qedhere \end{proof}
\noindent As an immediate corollary, we have the following.
\begin{corollary}\label{cor:timed-single-fixpoint-completeness} Fix $\mathcal{T}$, and let $\Phi, Z, \val{V}, \sigma$ and $S$ be such that $\Phi$ is a timed fixpoint-free formula and $S = \semTV{\sigma Z.\Phi}$. Then $S \tnxTV{\varepsilon} \sigma Z.\Phi$ has a successful tableau. \end{corollary} \begin{proof} Follows from Lemma~\ref{lem:timed-single-fixpoint-completeness} and the fact that every $\sigma$-maximal support ordering $(S, \prec)$ for $\semfTV{Z}{\Phi}$ is total and qwf. \qedhere \end{proof}
We now generalize Lemma~\ref{lem:timed-single-fixpoint-completeness} to the multiple fixpoint case. This lemma mirrors the analogous result (Lemma~\ref{lem:fixpoint-completeness}) proved earlier for the non-timed mu-calculus, and the proof is a straightforward extension of the earlier proof.
\begin{lemma}[Timed fixpoint completeness]\label{lem:timed-fixpoint-completeness} Fix TTS $\mathcal{T}$, let $\Phi$ be a timed mu-calculus formula, and let $Z, \val{V}, \sigma$ and $S$ be such that $S = \semTV{\sigma Z.\Phi}$. Also let $(S, \prec)$ be a $\sigma$-compatible, total, qwf support ordering for $\semfTV{Z}{\Phi}$. Then $S \tnxTV{\varepsilon} \sigma Z.\Phi$ has a successful TNF timed tableau compliant with $(S, \prec)$.
\end{lemma}
\begin{proof}
Fix $\mathcal{T} = \lts{\states{S}}$ of timed sort $\Sigma$. We prove the following: for all timed $\Phi$, and $Z, \val{V}, \sigma$ and $S$ with $S = \semTV{\sigma Z.\Phi}$, and $\sigma$-compatible, total qwf support ordering $(S,\prec)$ for $\semfTV{Z}{\Phi}$, $S \tnxTV{\varepsilon} \sigma Z.\Phi$ has a successful TNF timed tableau $\mathbb{T}_\Phi$ that is compliant with $(S,\prec)$. To simplify notation we use the following abbreviations. \begin{align*} f_\Phi &= \semfTV{Z}{\Phi} \\ \val{V}_X &= \val{V}[Z := \preimg{{\prec}}{X}] \end{align*} Note that $\val{V}_S = \val{V}[Z := \preimg{{\prec}}{S}]$. When $s \in S$ we also write $\val{V}_s$ in lieu of $V_{\{ s\}}$.
The proof proceeds by strong induction on the number of fixpoint subformulas of $\Phi$. There are two cases to consider. In the first case, $\Phi$ contains no fixpoint formulas. Lemma~\ref{lem:timed-single-fixpoint-completeness} immediately gives the desired result.
In the second case, $\Phi$ contains at least one fixpoint subformula. The outline of the proof follows the same lines as the one for the same case in Lemma~\ref{lem:fixpoint-completeness}. \begin{enumerate}
\item\label{it:step-decompose-timed}
We decompose $\Phi$ into $\Phi'$, which uses a new free variable $W$, and $\sigma' Z'.\Gamma$ in such a way that $\Phi = \Phi'[W:=\sigma' Z'.\Gamma]$.
\item\label{it:step-outer-timed-tableau}
We inductively construct a successful TNF timed tableau $\mathbb{T}_{\Phi'}$ for $S \tnxT{\val{V}'}{\varepsilon} \sigma Z.\Phi'$ that is compliant with $(S,\prec)$ where:
\begin{align*}
S' &= \semT{\sigma'Z'.\Gamma}{\val{V}_S} \\
\val{V}' &= \val{V}[W:=S'].
\end{align*}
($S'$ may be seen as the semantic content of $\sigma'Z'.\Gamma$ relevant for $\semTV{\sigma Z.\Phi}$.)
\item\label{it:step-inner-timed-tableau}
We construct a successful TNF timed tableau $\mathbb{T}_\Gamma$ satisfying a compliance-related property for $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$ by merging inductively constructed tableaux involving subsets of $S'$.
\item\label{it:step-timed-tableau-composition}
We show how to compose $\mathbb{T}_{\Phi'}$ and $\mathbb{T}_\Gamma$ to yield a successful TNF timed tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$ that is compliant with $(S,\prec)$. \end{enumerate}
We now work through each of these proof steps.
\paragraph{Step~\ref{it:step-decompose-timed} of proof outline: decompose $\Phi$.} Completely analogous to Step~\ref{it:step-decompose} in Lemma~\ref{lem:fixpoint-completeness}. We recall the following function definitions from the proof of that lemma, \begin{align*} f(X,Y) &= \semT{\Phi'}{\val{V}[Z, W := X, Y]}\\ g(X,Y) &= \semT{\Gamma}{\val{V}[Z, Z' := X, Y]}, \end{align*} and also note that $f_\Phi = f[\sigma']g$.
\paragraph{Step~\ref{it:step-outer-timed-tableau} of proof outline: construct tableau for $S \tnxT{\val{V}'}{\varepsilon} \sigma Z.\Phi'$ that is compliant with $(S,\prec)$.} Completely analogous to Step~\ref{it:step-outer-tableau} in Lemma~\ref{lem:fixpoint-completeness}. Recall that $\val{V}' = \val{V}[W := S']$, where $S' = \semT{\sigma' Z'.\Gamma}{\val{V}_S}$.
\paragraph{Step~\ref{it:step-inner-timed-tableau} of proof outline: construct tableau for $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$.}
Since $\Gamma$ contains strictly fewer fixpoint subformulas than $\Phi$, the induction hypothesis guarantees the existence of certain successful TNF timed tableaux involving $\sigma' Z'.\Gamma$. Following the proof of Lemma~\ref{lem:fixpoint-completeness} we use this fact to construct a successful tableau, $\mathbb{T}_\Gamma$ for $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$ satisfying Property~\ref{goal:dependencies} defined in that earlier proof.
We begin by recalling the following definitions from that earlier proof, where $X \subseteq S$. \begin{align*} g_X &= g_{(\preimg{{\prec}}{X},\cdot)} \\ S'_X &= \sigma' g_X \end{align*} It was also noted there that, for any $X \subseteq S$, $ S'_X \subseteq S'. $ We again take $(Q_\prec, \sqsubseteq)$ to be the quotient of $(S,\prec)$, with $[x] \in Q_\prec$ the equivalence class of $x \in S$. We also let $(S',\prec')$ be the $\sigma'$-compatible, total qwf support ordering $g_S$ that is locally consistent with $(S,\prec)$, as guaranteed by Lemma~\ref{lem:fg-support}. The construction we present below for $\mathbb{T}_\Gamma$ follows the same approach as the one in the proof of Lemma~\ref{lem:fixpoint-completeness}. \begin{itemize}
\item
For each $Q \in Q_\prec$ we inductively construct a successful TNF timed tableau $\mathbb{T}_{\Gamma,Q}$ for sequent $S'_Q \tnxT{\val{V}_Q}{\varepsilon} \sigma' Z'.\Gamma$ that is compliant with a subrelation of $\prec'$.
\item
We then merge the individual $\mathbb{T}_{\Gamma,Q}$ to form a successful TNF timed tableau $\mathbb{T}'_\Gamma$ compliant with $\prec'$ whose root sequent contains as its state set the union of all the individual root-sequent state sets of the $\mathbb{T}_{\Gamma,Q}$.
\item
We perform a final operation to obtain $\mathbb{T}_\Gamma$. \end{itemize}
The constructions of $\mathbb{T}_{\Gamma,Q}$ and $\mathbb{T}'_\Gamma = (\mathcal{T},\val{V}_S,\tree{T}_\Gamma,\rho'_\Gamma,\lambda'_\Gamma)$ are completely analogous to the corresponding constructions in the proof of Lemma~\ref{lem:fixpoint-completeness}. In what follows we focus on the final step: constructing $\mathbb{T}_\Gamma$ that is compliant with $\prec'$ and satisfies Property~\ref{goal:dependencies}. We begin by noting that since $(S',\prec')$ is a $\sigma'$-compatible, total qwf support ordering for $g_S$, the induction hypothesis and Lemma~\ref{lem:structural-equivalence-of-TNF-tableaux} guarantee the existence of a successful TNF timed tableau $$ \mathbb{T}_{\Gamma,S} = (\mathcal{T}, \val{V}_S, \tree{T}_\Gamma, \rho_{\Gamma,S}, \lambda_{\Gamma,S}) $$ for sequent $S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$ that is compliant with $\prec'$ and structurally equivalent to $\mathbb{T}_{\Gamma,Q}$ for any $Q \in Q_\prec$. There are two cases to consider. In the first case, $S = \emptyset$. In this case, $\mathbb{T}_{\Gamma,S}$ vacuously satisfies Property~\ref{goal:dependencies}, and we take $\mathbb{T}_\Gamma$ to be $\mathbb{T}_{\Gamma,S}$.
In the second case, $S \neq \emptyset$, and thus $Q_\prec \neq \emptyset$. In this case, since it is not guaranteed that $\mathbb{T}_{\Gamma,S}$ satisfies \ref{goal:dependencies}, we follow the approach in the proof of Lemma~\ref{lem:fixpoint-completeness} by building $\mathbb{T}_\Gamma$ using a coinductive definition of $\rho_\Gamma$ and $\lambda_\Gamma$ that merges $\mathbb{T}'_\Gamma$ and $\mathbb{T}_{\Gamma,S}$ so that the following invariants are satisfied. \begin{invariants}
\item
If $\rho_\Gamma(\node{n})$ is defined then the sequents assigned by $\lambda_\Gamma$ to $\node{n}$ and its children constitute a valid application of the rule $\rho_\Gamma(\node{n})$.
\item
$\textit{fm}(\lambda_\Gamma(\node{n})) = \textit{fm}(\node{n})$
\item
$\textit{st}(\lambda_\Gamma(\node{n})) \subseteq \textit{st}(\lambda'_\Gamma(\node{n})) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}))$
\item
$\textit{dl}(\lambda_\Gamma(\node{n})) = \textit{dl}_\Gamma(\node{n})$ \end{invariants} The definitions begin by assigning a value to $\lambda_\Gamma(\node{r}_\Gamma)$ so that invariants~\ref{inv:formula}--\ref{inv:definition-list} are satisfied. The coinductive step then assumes that $\node{n}$ satisfies these invariants and defines $\rho_\Gamma(\node{n})$ and $\lambda_\Gamma$ for the children of $\node{n}$ so that \ref{inv:rule} holds for $\node{n}$ and \ref{inv:formula}--\ref{inv:definition-list} hold for each child.
To start the construction, define $\lambda_\Gamma(\node{r}_\Gamma) = S' \tnxT{\val{V}_S}{\varepsilon} \sigma'Z'.\Gamma$. Invariants~\ref{inv:formula}--\ref{inv:definition-list} clearly hold of $\node{r}_\Gamma$, since $\textit{st}(\lambda'_\Gamma(\node{r}_\Gamma)) \subseteq S' = \textit{st}(\lambda_\Gamma(\node{r}_\Gamma))$.
For the coinductive step, assume that $\node{n}$ is such that $\lambda_\Gamma(\node{n})$ satisfies \ref{inv:formula}--\ref{inv:definition-list}; we must define $\rho_\Gamma(\node{n})$, and $\lambda_\Gamma$ for the children of $\node{n}$, so that \ref{inv:rule} holds for $\lambda_\Gamma(\node{n})$ and \ref{inv:formula}--\ref{inv:definition-list} holds for each child. All cases are analogous to the cases in the proof of Lemma~\ref{lem:fixpoint-completeness}, except those that involve rules $\forall, \exists$, which are given below. \begin{description}
\item[$\textit{rn}_\Gamma(\node{n}) = \exists$.] In this case $cs(\node{n}) = \node{n}_1\node{n}_2$, $\lambda_{\Gamma}(\node{n}) = S_{\node{n}} \tnxT{\val{V}_S}{\Delta} \exists_{\textit{fm}_\Gamma(\node{n}_1)} \textit{fm}_\Gamma(\node{n}_2)$, with $\Delta = \textit{dl}_\Gamma(\node{n}) = \textit{dl}_\Gamma(\node{n}_1) = \textit{dl}_\Gamma(\node{n}_2)$.
We construct a function $f_{\Gamma,\node{n}} \in S_{\node{n}} \to \mathbb{R}_{\geq 0}$ such that for all $s \in S_{\node{n}}$, $f_{\Gamma,\node{n}}(s) \in \mathit{del}(s)$ and such that
$(f_{\Gamma,\node{n}})_<(S_\node{n}) \subseteq \textit{st}(\lambda'_\Gamma(\node{n}_1)) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}_1))$ and
$(f_{\Gamma,\node{n}})_=(S_\node{n}) \subseteq \textit{st}(\lambda'_\Gamma(\node{n}_2)) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}_2))$.
Since $\mathbb{T}'_{\Gamma}$ and $\mathbb{T}_{\Gamma,S}$ are successful we know that $\exists$-functions $f'_{\Gamma,\node{n}} \in \textit{st}(\lambda'_{\Gamma}(\node{n})) \to \mathbb{R}_{\geq 0}$ and $f_{\Gamma,S,\node{n}} \in \textit{st}(\lambda_{\Gamma,S}(\node{n})) \to \mathbb{R}_{\geq 0}$, where $\rho'_\Gamma(\node{n}) = (\exists, f'_{\Gamma,\node{n}})$ and $\rho_{\Gamma,S}(\node{n}) = (\exists, f_{\Gamma,S,\node{n}})$, satisfy the following.
\begin{itemize}
\item
$(f'_{\Gamma,\node{n}})_<(\textit{st}(\lambda'_\Gamma(\node{n}))) = \textit{st}(\lambda'_{\Gamma}(\node{n}_1))$
\item
$(f'_{\Gamma,\node{n}})_=(\textit{st}(\lambda'_\Gamma(\node{n}))) = \textit{st}(\lambda'_{\Gamma}(\node{n}_2))$
\item
$(f_{\Gamma,S,\node{n}})_<(\textit{st}(\lambda_{\Gamma,S}(\node{n}))) \subseteq \textit{st}(\lambda_{\Gamma,S}(\node{n}_1))$
\item
$(f_{\Gamma,S,\node{n}})_=(\textit{st}(\lambda_{\Gamma,S}(\node{n}))) \subseteq \textit{st}(\lambda_{\Gamma,S}(\node{n}_2))$
\end{itemize}
We now define $f_{\Gamma,\node{n}}$ as follows.
\[
f_{\Gamma,\node{n}}(s) = \begin{cases}
f'_{\Gamma,\node{n}}(s) & \text{if $s \in \textit{st}(\lambda'_{\Gamma}(\node{n}))$} \\
f_{\Gamma,S,\node{n}}(s) & \text{if $s \in \textit{st}(\lambda_{\Gamma,S}(\node{n})) \setminus \textit{st}(\lambda'_{\Gamma}(\node{n}))$}
\end{cases}
\]
Since $\node{n}$ satisfies \ref{inv:state-set} it follows that $S_\node{n} \subseteq \textit{st}(\lambda'_\Gamma(\node{n})) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}))$ and thus $f_{\Gamma,\node{n}}$ is well-defined.
Finally, we set $\rho_\Gamma(\node{n}) = (\exists, f_{\Gamma,\node{n}})$ and
\begin{align*}
\lambda_\Gamma(\node{n}_1) &= (f_{\Gamma,\node{n}})_<(S_\node{n})
\tnxT{\val{V}_{S}}{\Delta} \textit{fm}_{\Gamma}(\node{n}_1) \\
\lambda_\Gamma(\node{n}_2) &= (f_{\Gamma,\node{n}})_=(S_\node{n}) \tnxT{\val{V}_{S}}{\Delta} \textit{fm}_{\Gamma}(\node{n}_2)
\end{align*}
It is clear that invariant~\ref{inv:rule} holds for $\node{n}$, while \ref{inv:formula}--\ref{inv:definition-list} hold for $\node{n}_1$ and $\node{n}_2$.
\item[$\textit{rn}_\Gamma(\node{n}) = \forall$.]
In this case $cs(\node{n}) = \node{n}_1\node{n}_2$, $\lambda_{\Gamma}(\node{n}) = S_{\node{n}} \tnxT{\val{V}_S}{\Delta} \forall_{\textit{fm}_\Gamma(\node{n}_1)} \textit{fm}_\Gamma(\node{n}_2)$, with $\Delta = \textit{dl}_\Gamma(\node{n}) = \textit{dl}_\Gamma(\node{n}_1) = \textit{dl}_\Gamma(\node{n}_2)$.
We construct a function $g_{\Gamma,\node{n}} \in S_{\node{n}} \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ such that $g_{\Gamma,\node{n}}(s,\delta) \leq \delta$ for all $s \in S_\node{n}$ and $\delta \in \mathit{del}(s)$, and such that $(g_{\Gamma,\node{n}})_<(S_\node{n}) \subseteq \textit{st}(\lambda'_\Gamma(\node{n}_1)) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}_1))$ and $(g_{\Gamma,\node{n}})_=(S_\node{n}) \subseteq \textit{st}(\lambda'_\Gamma(\node{n}_2)) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}_2))$.
Since $\mathbb{T}'_\Gamma$ and $\mathbb{T}_{\Gamma, S}$ are successful we know that $\forall$-functions $g'_{\Gamma, \node{n}} \in \textit{st}(\lambda'_{\Gamma}(\node{n})) \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ and $g_{\Gamma, S, \node{n}} \in \textit{st}(\lambda_{\Gamma,S}(\node{n})) \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$, where $\rho'_\Gamma(\node{n}) = (\forall, g'_{\Gamma,\node{n}})$ and
$\rho_{\Gamma,S}(\node{n}) = (\forall, g_{\Gamma, S, \node{n}})$, satisfy the following.
\begin{itemize}
\item
$(g'_{\Gamma,\node{n}})_<(\textit{st}(\lambda'_\Gamma(\node{n}))) = \textit{st}(\lambda'_\Gamma(\node{n}_1))$
\item
$(g'_{\Gamma,\node{n}})_=(\textit{st}(\lambda'_\Gamma(\node{n}))) = \textit{st}(\lambda'_\Gamma(\node{n}_2))$
\item
$(g_{\Gamma,S,\node{n}})_<(\textit{st}(\lambda_{\Gamma,S}(\node{n}))) =
\textit{st}(\lambda_{\Gamma,S}(\node{n}_1))$
\item
$(g_{\Gamma,S,\node{n}})_=(\textit{st}(\lambda_{\Gamma,S}(\node{n}))) =
\textit{st}(\lambda_{\Gamma,S}(\node{n}_2))$
\end{itemize}
We define $g_{\Gamma,\node{n}}$ as follows.
\[
g_{\Gamma,\node{n}}(s,\delta) = \begin{cases}
g'_{\Gamma,\node{n}}(s,\delta) & \text{if $s \in \textit{st}(\lambda'_{\Gamma}(\node{n}))$}\\
g_{\Gamma,S,\node{n}}(s,\delta) & \text{if $s \in \textit{st}(\lambda_{\Gamma,S}(\node{n})) \setminus \textit{st}(\lambda'_{\Gamma}(\node{n}))$}
\end{cases}
\]
Since $\node{n}$ satisfies \ref{inv:state-set} it follows that $S_\node{n} \subseteq \textit{st}(\lambda'_\Gamma(\node{n})) \cup \textit{st}(\lambda_{\Gamma,S}(\node{n}))$ and thus $g_{\Gamma,\node{n}}$ is well-defined.
Finally, we set $\rho_\Gamma(\node{n}) = (\forall, g_{\Gamma,\node{n}})$, and
\begin{align*}
\lambda_\Gamma(\node{n}_1) &= (g_{\Gamma,\node{n}})_<(S_\node{n}) \tnxT{\val{V}_{S}}{\Delta} \textit{fm}_{\Gamma}(\node{n}_1) \\
\lambda_\Gamma(\node{n}_2) &= (g_{\Gamma,\node{n}})_=(S_\node{n}) \tnxT{\val{V}_{S}}{\Delta} \textit{fm}_{\Gamma}(\node{n}_2)
\end{align*}
It is clear that invariant~\ref{inv:rule} holds for $\node{n}$, while \ref{inv:formula}--\ref{inv:definition-list} hold for $\node{n}_1$ and $\node{n}_2$.
\end{description} The same arguments in the proof of Lemma~\ref{lem:fixpoint-completeness} can be used to show that $\mathbb{T}_\Gamma$ is successful, TNF and compliant with $\prec'$ and that it satisfies Property~\ref{goal:dependencies}.
\paragraph{Step~\ref{it:step-timed-tableau-composition} of proof outline: construct tableau for $S \tnxTV{\varepsilon} \sigma Z.\Phi$.} Analogous to Step~\ref{it:step-tableau-composition} in the proof of Lemma~\ref{lem:fixpoint-completeness}. \qedhere \end{proof}
With these lemmas in hand we may now state and prove the completeness theorem.
\begin{theorem}[Completeness]\label{thm:timed-completeness} Let $\mathcal{T} = \lts{S}$ be a TTS and $\val{V}$ a valuation, and let $S$ be a set of states and $\Phi$ a timed mu-calculus formula such that $S \subseteq \semTV{\Phi}$. Then $S \tnxTV{\varepsilon} \Phi$ has a successful tableau. \end{theorem} \begin{proof} Analogous to the proof of Theorem~\ref{thm:completeness}, but using Lemma~\ref{lem:timed-fixpoint-completeness} instead of Lemma~\ref{lem:fixpoint-completeness}. \qedhere \end{proof}
\section{Conclusions and Future Work}\label{sec:Conclusions}
The work in this paper was motivated by a desire to give a sound and complete proof system for the timed mu-calculus for infinite-state systems. We intended to do so by modifying an existing proof system for infinite-state systems and the untimed mu-calculus due to Bradfield and Stirling~\cite{Bra1991,BS1992}, but this proved difficult because of the delicacy of their soundness and completeness arguments. Instead, we gave an alternative approach, based on explicit tableau constructions, for the untimed mu-calculus. A hallmark of these constructions is their extensibility and their lack of dependence on infinitary logic. We then showed how these constructions admit modifications to the core proof system, including new proof-search strategies and new logical modalities, such as those in the timed mu-calculus.
Our proof techniques are based on a fundamental, lattice-theoretic result giving a new characterization, in terms of a notion we call \emph{support orderings}, of least and greatest fixpoints of monotonic functions over complete lattices whose carrier sets have the form $2^S$ for some set $S$. Using this approach, we are able to present a proof that, in contrast to Bradfield and Stirling's original proof, does not require reasoning about ordinal unfoldings directly, and that is extensible to other termination conditions and modalities. Our completeness results also rely on direct constructions of proof tableaux for valid sequents; this too facilitates extensibility.
We have illustrated the extensibility of our approach by showing that the soundness proof straightforwardly carries over to the proof system in which $\mu$-nodes with non-empty sets of states are always unfolded. Additionally, we have presented a proof system for an extension of the mu-calculus with two timed modalities, and shown that our soundness and completeness proofs extend straightforwardly to this setting.
\paragraph{Future work.} The proof approach presented in this paper enables us to prove soundness of extensions and modifications of the proof system for infinite-state systems. In particular, soundness of the proof system for the timed mu-calculus opens the way to model checking such systems. We plan to implement this proof system, dealing with sets of states symbolically, and thus to extend model checkers for alternation-free timed mu-calculi \cite{FC2014} to the full mu-calculus.
Furthermore, the proofs of soundness and completeness for the timed mu-calculus only involve adding cases for the newly introduced operators to the proofs of results about the base proof system. Each of these cases follows the same line of reasoning as the cases for the other operators. We therefore expect that the soundness and completeness results can be generalized so that they refer only to properties of the local dependencies of the operators in the mu-calculus, together with our support-ordering results. It would be interesting to investigate this direction, and to further simplify the proof obligations for soundness and completeness when adding operators to the mu-calculus.
Another direction to be explored is how the soundness and completeness results in this paper can be adapted to the equational mu-calculus~\cite{CS1993}, and other equational theories such as Boolean equation systems~\cite{Mad1997} and parameterized Boolean equation systems~\cite{GM1999,GW2005}. In particular, the definition lists are, in essence, already part of the formalism, so the mechanism of unfolding in the proof system needs to be adapted to deal with this difference.
Finally, another common extension of the mu-calculus used in the timed setting is to add freeze quantification; see, e.g.,~\cite{BCL2011}. The extension of our proof rules to this setting should be as straightforward as the extension presented in this paper, although the definitions of the underlying transition systems would need to be extended to accommodate explicit clock variables.
\appendix \section{Relation between (extended) dependency orderings and Bradfield and Stirling's (extended) paths}
We recall Bradfield and Stirling's definition of (extended) paths~\cite{BS1992}, in the version as presented by Bradfield~\cite{Bra1991}, and show the correspondence with our (extended) dependency ordering.
Bradfield and Stirling define the notion of \emph{path} from one state in a node in a proof tree to another state in a descendant node in the proof tree. This is done on the basis of the proof rules used to generate the child nodes. The formal definition is as follows. \begin{definition}[Path~{\cite[Definition 3.8]{Bra1991}}]\label{def:path}
There is a \emph{path} from state $s$ at node $\node{n}$ to state $s'$ at node $\node{n}'$ in a tableau iff there is a finite sequence $(s, \node{n}) = (s_0, \node{n}_0), \ldots, (s_k,\node{n}_k) = (s', \node{n}')$ such that the following hold.
\begin{enumerate}
\item $\node{n}_{i+1}$ is a child of $\node{n}_i$ for all $0 \leq i < k$.
\item If $\node{n}_i = S_i \tnxTV{\Delta_i} \Phi_i$ then $s_i \in S_i$ for all $0 \leq i \leq k$.
\item If the rule applied to $\node{n}_i$ is $[K]$ then $s_i \xrightarrow{K} s_{i+1}$; if the rule applied to $\node{n}_i$ is $\langle K \rangle$ then $s_{i+1} = f(s_i)$; in all other cases $s_{i+1} = s_i$.
\end{enumerate} \end{definition} \noindent Bradfield writes $s @ \node{n} \xrightarrow[\dot{}]{} s' @ \node{n}'$ if there is a path from $s$ at $\node{n}$ to $s'$ at $\node{n}'$. Note that for any $s$ and $\node{n} = S \tnxTVD \Phi$ such that $s \in S$, $s @ \node{n} \xrightarrow[\dot{}]{} s @ \node{n}$.
In the same definition Bradfield also defines the notion of an \emph{extended} path from $s$ at $\node{n}$ to $s'$ at $\node{n}'$. \begin{definition}[Extended path~{\cite[Definition 3.8]{Bra1991}}]\label{def:epath}
There is an \emph{extended path} from state $s$ at node $\node{n}$ to state $s'$ at node $\node{n}'$ in a tableau, denoted $s @ \node{n} \xrightarrow[\dot{}]{.} s' @ \node{n}'$, as follows.
\begin{enumerate}
\item \label{def:epath-base}
Either $s @ \node{n} \xrightarrow[\dot{}]{} s' @ \node{n}'$; or
\item \label{def:epath-step}
there is a node $\node{n}'' = S'' \tnxTV{\Delta''} U$ with $\node{n}'' \neq \node{n}$ and $\node{n}''\neq \node{n}'$,\footnote{Bradfield's original definition does not require $\node{n}'' \neq \node{n}$, however, he does assert that the extended paths for $\node{n}$ are defined in terms of extended paths from strict descendants, and this assertion does not hold when the requirement is omitted.}
and a finite sequence of states $s_0, \ldots, s_k$, and a finite sequence of nodes $\node{n}_1, \ldots \node{n}_k$ such that the following hold.
\begin{enumerate}
\item $s @ \node{n} \xrightarrow[\dot{}]{} s_0 @ \node{n}''$.
\item Each $\node{n}_i$ is a $\sigma$-leaf with companion node $\node{n}''$.
\item For $0 \leq i < k$, $s_i @ \node{n}'' \xrightarrow[\dot{}]{.} s_{i+1} @ \node{n}_{i+1}$.
\item $s_k @ \node{n}'' \xrightarrow[\dot{}]{.} s' @ \node{n}'$
\end{enumerate}
\end{enumerate} \end{definition}
This definition extends the notion of path by allowing ``looping'' through companion nodes and their associated leaves within the subtree rooted at $\node{n}$.
Bradfield's original definition~\cite[Definition 3.9]{Bra1991} of the ordering of states in a node, $\sqsupset_{\node{n}}$ is the following. \begin{definition}[{\cite[Definition 3.9]{Bra1991}}]\label{def:bs-order}
Let $\node{n}$ be a node in a tableau of the form $S \tnxTVD U$ where $\Delta(U) = \mu Z. \Phi$; let $\{ \node{n}_0, \ldots \node{n}_k\}$ be the leaves for which $\node{n}$ is the companion. Define ordering $\sqsupset_{\node{n}} \;\subseteq S \times S$ as follows: $s \sqsupset_{\node{n}} s'$ iff there exists $\node{n}_i$ such that $s @ \node{n} \xrightarrow[\dot{}]{.} s' @ \node{n}_i$. \end{definition}
We prove that our (extended) dependency orderings correspond to Bradfield's (extended) paths. Hence, our dependency orderings are simply an alternative characterization of the dependencies in a tableau.
It follows immediately from the definitions that our dependency ordering (Definition~\ref{def:dependency_ordering}) coincides with Bradfield's paths (Definition~\ref{def:path}). \begin{lemma}\label{lem:child_order_path}
Let $\node{n} = S\tnxTVD \Phi$, $\node{n}' = S' \tnxTV{\Delta'} \Phi'$ be nodes in a tableau with $s \in S$ and $s' \in S'$. We have $s@\node{n} \xrightarrow[\dot{}]{} s'@\node{n}'$ iff there is a sequence $(s, \node{n}) = (s_0, \node{n}_0), \ldots, (s_k,\node{n}_k) = (s', \node{n}')$ such that for all $0 \leq i < k$, $s_{i+1} <_{\node{n}_{i+1},\node{n}_{i}} s_i$. \end{lemma} \begin{proof}
Immediate since the definition of $<_{\node{n}',\node{n}}$ corresponds directly with the three clauses in Definition~\ref{def:path}.\qedhere \end{proof}
\begin{lemma}\label{lem:dependency_ordering_is_path}
For nodes $\node{n}' = S_{\node{n}'} \tnxTV{\Delta_{\node{n}'}} \Phi_{\node{n}'}$ and $\node{n} = S_{\node{n}} \tnxTV{\Delta_{\node{n}}} \Phi_{\node{n}}$ in a tableau, with $s_{\node{n}'} \in S_{\node{n}'}$ and $s_{\node{n}} \in S_{\node{n}}$, we have that $s_{\node{n}'} \lessdot_{\node{n}',\node{n}} s_{\node{n}}$ if and only if $s_{\node{n}}@\node{n} \xrightarrow[\dot{}]{} s_{\node{n}'}@\node{n}'$. \end{lemma} \begin{proof}
We prove both directions separately.
\begin{itemize}
\item[$\Rightarrow$]
We prove that if $s_{\node{n}'} \lessdot_{\node{n}',\node{n}} s_{\node{n}}$, then $s_{\node{n}}@\node{n} \xrightarrow[\dot{}]{} s_{\node{n}'}@\node{n}'$ by induction on the definition of $\lessdot_{\node{n}',\node{n}}$.
If $\node{n} = \node{n}'$ and $s_{\node{n}} = s_{\node{n}'}$ (first case of Definition~\ref{def:dependency_ordering}), the one-element sequence $(s_{\node{n}},\node{n}) = (s_{\node{n}'},\node{n}')$ witnesses $s_{\node{n}}@\node{n} \xrightarrow[\dot{}]{} s_{\node{n}'}@\node{n}'$.
Now, suppose there exists $\node{n}''$ and $s_{\node{n}''}$ such that $s_{\node{n}''} <_{\node{n}'',\node{n}} s_{\node{n}}$ and $s_{\node{n}'} \lessdot_{\node{n}',\node{n}''} s_{\node{n}''}$.
By the induction hypothesis, we have $s_{\node{n}''}@\node{n}'' \xrightarrow[\dot{}]{} s_{\node{n}'}@\node{n}'$, so there is a sequence $(s_{\node{n}''},\node{n}'') = (s_0, \node{n}_0), \ldots, (s_k, \node{n}_k) = (s_{\node{n}'}, \node{n}')$ with $s_{i+1} <_{\node{n}_{i+1},\node{n}_i} s_i$ for all $i < k$. The sequence $(s_{\node{n}},\node{n}), (s_0, \node{n}_0), \ldots, (s_k,\node{n}_k)$ now witnesses $s_{\node{n}}@\node{n} \xrightarrow[\dot{}]{} s_{\node{n}'}@\node{n}'$.
\item[$\Leftarrow$] Suppose $s_{\node{n}}@\node{n} \xrightarrow[\dot{}]{} s_{\node{n}'}@\node{n}'$.
We show that $s_{\node{n}'} \lessdot_{\node{n}',\node{n}} s_{\node{n}}$.
Since $s_{\node{n}}@\node{n} \xrightarrow[\dot{}]{} s_{\node{n}'}@\node{n}'$, there must be a sequence
$(s_{\node{n}},\node{n}) = (s_0,\node{n}_0), \ldots, (s_k,\node{n}_k) = (s_{\node{n}'},\node{n}')$
such that for all $0 \leq i < k$, $s_{i+1} <_{\node{n}_{i+1},\node{n}_i} s_i$.
By definition, then also $s_{i+1} \lessdot_{\node{n}_{i+1},\node{n}_i} s_i$,
so it follows immediately from repeated application of Lemma~\ref{lem:pseudo-transitivity-of-dependency-ordering} that $s_{k} \lessdot_{\node{n}_{k},\node{n}_0} s_0$, thus $s_{\node{n}'} \lessdot_{\node{n}',\node{n}} s_{\node{n}}$. \qedhere
\end{itemize} \end{proof}
Furthermore, our extended dependency ordering (Definition~\ref{def:extended_path_ordering}) corresponds to Bradfield's extended paths (Definition~\ref{def:epath}).
\begin{lemma}\label{lem:extended_path_implies_extended_order}\label{lem:extended_order_implies_extended_path}
Given two nodes $\node{n}' = S' \tnxTV{\Delta'} \Phi'$ and $\node{n} = S \tnxTVD \Phi$ in a tableau, and states $s \in S$, $s' \in S'$. Then $s' <:_{\node{n}',\node{n}} s$ if and only if $s@\node{n} \xrightarrow[\dot{}]{.} s'@\node{n}'$. \end{lemma} \begin{proof} We prove both directions separately.
\begin{itemize} \item[$\Rightarrow$] We proceed by induction on the definition of $<:_{\node{n}',\node{n}}$.
\begin{itemize}
\item If $s' \lessdot_{\node{n}',\node{n}} s$, according to Lemma~\ref{lem:dependency_ordering_is_path}, we have $s@\node{n} \xrightarrow[\dot{}]{} s'@\node{n}'$, and according to Definition~\ref{def:epath}~(\ref{def:epath-base}) we have $s@\node{n} \xrightarrow[\dot{}]{.} s'@\node{n}'$.
\item Otherwise, there is a node $\node{n}'' = S'' \tnxTV{\Delta''} U$ with $\node{n}'' \neq \node{n}$, $\node{n}'' \neq \node{n}'$, and $s'' \in S''$ such that for some $\overline{s} \in S''$, we have (a) $s'' \lessdot_{\node{n}'',\node{n}} s$, (b) $\overline{s} <:_{\node{n}''}^{+} s''$, and (c) $s' <:_{\node{n}',\node{n}''} \overline{s}$.
From (a) and Lemma~\ref{lem:dependency_ordering_is_path}, we get that $s@\node{n} \xrightarrow[\dot{}]{} s''@\node{n}''$.
From (b) we find that $\overline{s} <:_{\node{n}''}^{+} s''$. Therefore, for some $m \geq 1$, $\overline{s} <:^m_{\node{n}''} s''$. Consider the smallest such $m$. Then there exist states $s_0, \ldots, s_m$ such that $\overline{s} = s_0$, $s_i <:_{\node{n}''} s_{i+1}$ for $0 \leq i < m$, and $s_m = s''$.
By definition of $<:_{\node{n}''}$, there exist nodes $\node{n}_0, \ldots, \node{n}_{m-1}$ such that $s_i <:_{\node{n}_{i}, \node{n}''} s_{i+1}$ for $0 \leq i < m$. From the induction hypothesis, it follows that $s_{i+1}@\node{n}'' \xrightarrow[\dot{}]{.} s_i@\node{n}_i$ for $0 \leq i < m$.
From (c) and the induction hypothesis we find that $\overline{s}@\node{n}'' \xrightarrow[\dot{}]{.} s'@\node{n}'$.
Hence, according to Definition~\ref{def:epath}~(\ref{def:epath-step}), we have $s@\node{n} \xrightarrow[\dot{}]{.} s'@\node{n}'$.
\end{itemize}
\item[$\Leftarrow$] We proceed by induction on the definition of $\xrightarrow[\dot{}]{.}$. \begin{itemize}
\item If $s@\node{n} \xrightarrow[\dot{}]{} s'@\node{n}'$, according to Lemma~\ref{lem:dependency_ordering_is_path}, we have $s' \lessdot_{\node{n}',\node{n}} s$, and according to Definition~\ref{def:extended_path_ordering}~(\ref{def:extended_path_ordering-base}), we have $s' <:_{\node{n}',\node{n}} s$.
\item Otherwise, there is a node $\node{n}'' = S'' \tnxTV{\Delta''} U$ with $\node{n}'' \neq \node{n}$, $\node{n}'' \neq \node{n}'$, and a finite sequence of states $s_0, \ldots, s_k$, and nodes $\node{n}_1, \ldots, \node{n}_k$ such that (a) $s@\node{n} \xrightarrow[\dot{}]{} s_0@\node{n}''$, (b) all $\node{n}_i$ are companion leaves of $\node{n}''$, (c) for $0 \leq i < k$, $s_i@\node{n}'' \xrightarrow[\dot{}]{.} s_{i+1}@\node{n}_{i+1}$, and (d) $s_k@\node{n}'' \xrightarrow[\dot{}]{.} s'@\node{n}'$.
From (a) and Lemma~\ref{lem:dependency_ordering_is_path} we get that $s_0 \lessdot_{\node{n}'',\node{n}} s$.
Using (b), (c) and the induction hypothesis, we get for $0 \leq i < k$, $s_{i+1} <:_{\node{n}_{i+1},\node{n}''} s_i$. Thus, $s_{i+1} <:_{\node{n}''} s_i$ for each such $i$, and by definition, $s_k <:^{+}_{\node{n}''} s_0$.
Using (d) and the induction hypothesis, we get $s' <:_{\node{n}',\node{n}''} s_k$.
Now, according to Definition~\ref{def:extended_path_ordering}~(\ref{def:extended_path_ordering-step}), we have $s' <:_{\node{n}',\node{n}} s$. \qedhere
\end{itemize} \end{itemize} \end{proof}
The following now follows immediately from Lemma~\ref{lem:extended_path_implies_extended_order}, Definition~\ref{def:bs-order}, and Definition~\ref{def:extended_path_ordering}. \begin{corollary}\label{lem:bradfield_order_vs_extended_ordering}
Let $\node{n} = S \tnxTVD U$ be a companion node in a tableau, and let $\node{n}_0, \ldots, \node{n}_k$ be the companion leaves of $\node{n}$. Then, for $s, s' \in S$, we have $s' <:_{\node{n}} s$ if and only if $s \sqsupset_{\node{n}} s'$. \end{corollary}
\end{document}
An invariant characterization of the quasi-spherical Szekeres dust models
A. A. Coley, N. Layden & D. D. McNutt
General Relativity and Gravitation, volume 51, Article number: 164 (2019)
The quasi-spherical Szekeres dust solutions are a generalization of the spherically symmetric Lemaitre–Tolman–Bondi dust models where the spherical shells of constant mass are non-concentric. The quasi-spherical Szekeres dust solutions can be considered as cosmological models and are potentially models for the formation of primordial black holes in the early universe. Any collapsing quasi-spherical Szekeres dust solution where an apparent horizon covers all shell-crossings that will occur can be considered as a model for the formation of a black hole. In this paper we will show that the apparent horizon can be detected by a Cartan invariant. We will show that particular Cartan invariants characterize properties of these solutions which have a physical interpretation such as: the expansion or contraction of spacetime itself, the relative movement of matter shells, shell-crossings and the appearance of necks and bellies.
If a Szekeres dust-model admits a symmetry, there will only be three functionally independent invariants [31], since \(\epsilon \ne 0\) in these solutions.
If there is a symmetry, then \(dim(H_2) = dim(H_1)=0\) and \(t_2 = t_1 = 3\), and so the algorithm still stops.
Since \(u_a = dt\), the projection operator \(h_{ab} = g_{ab} + u_a u_b = g_{ab} + 2(\ell _a + n_a) (\ell _b + n_b)\) was used to compute the Ricci scalar, \({^3} \mathcal {R}\), of the hypersurfaces \(t=const\). To recover the form in [17] we notice that \(\dot{Y} = 0\) and so \(\tilde{M} = \frac{1}{2} \tilde{K} Y\).
Since the discriminant SPIs built from the Weyl and Ricci tensors, along with the covariant derivatives of these tensors do not vanish anywhere, these tensors cannot be of alignment type II or more special.
Zakharov, V.: Gravitational Waves in Einstein's Theory. Israel Program for Scientific Translations. Halsted Press, New York (1973)
Wainwright, J., Ellis, G.F.R.: Dynamical Systems in Cosmology. Cambridge University Press, Cambridge (2005)
Bolejko, K., Krasiński, A., Hellaby, C., Célérier, M.N.: Structures in the Universe by Exact Methods: Formation, Evolution, Interactions. Cambridge University Press, Cambridge (2010)
Musco, I., Miller, J.C., Rezzolla, L.: Computations of primordial black-hole formation. Class. Quantum Gravity 22(7), 1405 (2005). arXiv:gr-qc/0412063
Ashtekar, A., Krishnan, B.: Isolated and dynamical horizons and their applications. Living Rev. Relativ. 7, 10 (2004). arXiv:gr-qc/0407042
Penrose, R.: Gravitational collapse and space-time singularities. Phys. Rev. Lett. 14, 57–59 (1965)
Booth, I.: Black-hole boundaries. Can. J. Phys. 83, 1073–1099 (2005). arXiv:gr-qc/0508107
Abbott, B.P., et al.: Observation of gravitational waves from a binary black hole merger. Phys. Rev. Lett. 116(6), 061102 (2016). arXiv:1602.03837 [gr-qc]
Coley, A.A., McNutt, D.D., Shoom, A.A.: Geometric horizons. Phys. Lett. B 771, 131–135 (2017). arXiv:1710.08457 [gr-qc]
Coley, A., McNutt, D.: Identification of black hole horizons using scalar curvature invariants. Class. Quantum Gravity 35(2), 025013 (2018). arXiv:1710.08773 [gr-qc]
McNutt, D., Coley, A.: Geometric horizons in the Kastor–Traschen multi-black-hole solutions. Phys. Rev. D 98(6), 064043 (2018). arXiv:1811.02931 [gr-qc]
Harada, T., Yoo, C.M., Kohri, K., Nakao, K., Jhingan, S.: Primordial black hole formation in the matter-dominated phase of the universe. Astrophys. J. 833(1), 61 (2016). arXiv:1609.01588 [astro-ph.CO]
Harada, T., Jhingan, S.: Spherical and nonspherical models of primordial black hole formation: exact solutions. Progr. Theor. Exp. Phys. 2016(9), 093E04 (2016). arXiv:1512.08639 [gr-qc]
Hellaby, C., Krasiński, A.: You cannot get through Szekeres wormholes: regularity, topology, and causality in quasispherical Szekeres models. Phys. Rev. D 66(8), 084011 (2002). arXiv:gr-qc/0206052
Hellaby, C., Krasiński, A.: Physical and geometrical interpretation of the \(\epsilon \le 0\) szekeres models. Phys. Rev. D 77(2), 023529 (2008). arXiv:0710.2171 [gr-qc]
Krasinski, A., Bolejko, K.: Apparent horizons in the quasispherical Szekeres models. Phys. Rev. D 85(12), 124016 (2012). arXiv:1202.5970 [gr-qc]
Sussman, R.A., Bolejko, K.: A novel approach to the dynamics of Szekeres dust models. Class. Quantum Gravity 29(6), 065018 (2012). arXiv:1109.1178 [gr-qc]
Gaspar, I.D., Hidalgo, J.C., Sussman, R.A., Quiros, I.: Black hole formation from the gravitational collapse of a nonspherical network of structures. Phys. Rev. D 97(10), 104029 (2018). arXiv:1802.09123 [gr-qc]
Szekeres, P.: Quasispherical gravitational collapse. Phys. Rev. D 12(10), 2941 (1975)
Collins, J.M., d'Inverno, R.A., Vickers, J.A.: The Karlhede classification of type D vacuum spacetimes. Class. Quantum Gravity 7, 2005–2015 (1990)
Collins, J.M., d'Inverno, R.A.: The Karlhede classification of type-D nonvacuum spacetimes. Class. Quantum Gravity 10, 343–351 (1993)
Brooks, D., Chavy-Waddy, P.C., Coley, A.A., Forget, A., Gregoris, D., MacCallum, M.A.H., McNutt, D.D.: Cartan invariants and event horizon detection. Gen. Relativ. Gravit. 50(4), 37 (2018). arXiv:1709.03362 [gr-qc]
van Elst, H., Uggla, C.: General relativistic orthonormal frame approach. Class. Quantum Gravity 14(9), 2673 (1997)
Szafron, D.A.: Inhomogeneous cosmologies: new exact solutions and their evolution. J. Math. Phys. 18(8), 1673–1677 (1977)
Szafron, D.A., Collins, C.B.: A new approach to inhomogeneous cosmologies: intrinsic symmetries. II. Conformally flat slices and an invariant classification. J. Math. Phys. 20(11), 2354–2361 (1979)
Barnes, A., Rowlingson, R.R.: Irrotational perfect fluids with a purely electric weyl tensor. Class. Quantum Gravity 6(7), 949 (1989)
Wainwright, J.: Characterization of the szekeres inhomogeneous cosmologies as algebraically special spacetimes. J. Math. Phys. 18(4), 672–675 (1977)
Coll, B., Ferrando, J.J., Sáez, J.A.: Thermodynamic class II Szekeres–Szafron solutions. Singular models. Class. Quantum Gravity 36, 175004 (2019). arXiv:1812.09054 [gr-qc]
Hellaby, C.: The null and KS limits of the Szekeres model. Class. Quantum Gravity 13(9), 2537 (1996)
Nolan, B.C., Debnath, U.: Is the shell-focusing singularity of Szekeres space-time visible? Phys. Rev. D 76, 104046 (2007). arXiv:0709.3152 [gr-qc]
Georg, I., Hellaby, C.: Symmetry and equivalence in szekeres models. Phys. Rev. D 95(12), 124016 (2017). arXiv:1702.05347 [gr-qc]
Buckley, R.G., Schlegel, E.M.: Physical geometry of the quasispherical Szekeres models (2019). arXiv:1908.02697 [gr-qc]
Coley, A., Milson, R., Pravda, V., Pravdová, A.: Classification of the Weyl tensor in higher dimensions. Class. Quantum Gravity 21, L35–L41 (2004). arXiv:gr-qc/0401008
Milson, R., Coley, A., Pravda, V., Pravdova, A.: Alignment and algebraically special tensors in Lorentzian geometry. Int. J. Geom. Methods Modern Phys. 02(01), 41–61 (2005). arXiv:gr-qc/0401010
Coley, A.: Classification of the Weyl tensor in higher dimensions and applications. Class. Quantum Gravity 25(3), 033001 (2008). arXiv:0710.1598 [gr-qc]
Stewart, J.: Advanced General Relativity. Cambridge University Press, Cambridge (1993)
Ellis, G.F.R., Bruni, M.: Covariant and gauge-invariant approach to cosmological density fluctuations. Phys. Rev. D 40(6), 1804 (1989)
Polášková, E., Svitek, O.: Quasilocal horizons in inhomogeneous cosmological models. Class. Quantum Gravity 36(2), 025005 (2018). arXiv:1803.11005 [gr-qc]
Page, D.N., Shoom, A.A.: Local invariants vanishing on stationary horizons: a diagnostic for locating black holes. Phys. Rev. Lett. 114(14), 141102 (2015). arXiv:1501.03510 [gr-qc]
Faraoni, V., Ellis, G.F.R., Firouzjaee, J.T., Helou, A., Musco, I.: Foliation dependence of black hole apparent horizons in spherical symmetry. Phys. Rev. D 95(2), 024008 (2017). arXiv:1610.05822 [gr-qc]
Krasiński, A., Hellaby, C.: Formation of a galaxy with a central black hole in the Lemaitre–Tolman model. Phys. Rev. D 69(4), 043502 (2004). arXiv:gr-qc/0309119
We would like to thank Ismael Delgado Gaspar and Daniele Gregoris for useful discussions at the beginning of this project. The work was supported by NSERC of Canada (A.C.), and through the Research Council of Norway, Toppforsk grant no. 250367: Pseudo-Riemannian Geometry and Polynomial Curvature Invariants: Classification, Characterisation and Applications (D.M.).
Department of Mathematics and Statistics, Dalhousie University, Halifax, NS, B3H 3J5, Canada: A. A. Coley & N. Layden
Faculty of Science and Technology, University of Stavanger, 4036, Stavanger, Norway: D. D. McNutt
Correspondence to D. D. McNutt.
Appendix: Frame independent curvature invariants
As in the case of the spherically symmetric metrics, the components in (38) vanish on the apparent horizon \(R = 2M\), while the components in (39) do not. This relationship is reflected in the vanishing of the Cartan invariant \(\rho \) relative to the invariant coframe chosen by the Cartan–Karlhede algorithm. Taking the zeroth order and first order SPIs:
$$\begin{aligned} I_1 = C_{abcd} C^{abcd} = \Psi _2,~~R = R^a_{~a} = 8 \Phi _{11}, \end{aligned}$$
along with the quadratic first order SPIs:
$$\begin{aligned}&I_3 = C_{abcd;e} C^{abcd;e},~~I_{3a} = C_{abcd;e} C^{ebcd;a},~~I_5 = I_{1;a}I_1^{~;a}, \\&J_1 = R_{ab;c} R^{ab;c},~~ J_2 = R_{ab;c} R^{ac;b},~~J_3 = R_{;a}R^{;a}, \end{aligned}$$
we can produce the following algebraically independent SPIs:
$$\begin{aligned}&(\mu - \rho )(\mu - \rho + 8 \epsilon ), \\&\epsilon ( \mu - \rho - \epsilon ), \\&\rho \mu - 2 |\tau |^2, \\&\mu \Delta \ln (\Phi _{11}) + 4 \rho \Delta \ln (\Phi _{11}) + 8 \rho \mu + 16 \rho \epsilon - 8 \rho ^2 - 9 \rho \mu \frac{\Psi _2 }{ \Phi _{11}}, \\&2^2 5 \rho \Delta \ln (\Phi _{11}) - 2^5 \epsilon \Delta \ln (\Phi _{11}) + 2^5 \rho \mu + 2^6 \rho \epsilon + 2^5 \rho ^2 - 6^2 \rho \mu \frac{\Psi _2}{\Phi _{11}} + 3^2 \rho \mu \frac{\Psi _2^2}{\Phi _{11}^2}, \\&2^7 (\Delta \ln (\Phi _{11}))^2 - 2^8 \rho \Delta \ln (\Phi _{11}) + 2^8 \mu \Delta \ln (\Phi _{11}) + 2^9 \epsilon \Delta \ln (\Phi _{11}) + 6^2 2^3 |\tau |^2 \frac{\Psi _2^2}{\Phi _{11}^2}. \end{aligned}$$
The six SPIs in (59) and (60) are polynomials in terms of six Cartan invariants:
$$\begin{aligned}\Delta \ln (\Phi _{11}), \rho , \mu , \epsilon , |\tau |^2,~~and~~ \frac{\Psi _2}{\Phi _{11}}.\end{aligned}$$
Locally, it is possible to express \(\rho \) (or \(\mu \)) as a function of these SPIs in order to detect the horizon when the Jacobian of these polynomials in terms of the six Cartan invariants is non-zero. However, this will introduce additional regions where the SPIs will vanish, giving rise to the possibility of the incorrect detection of the apparent horizon.
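Stated slightly more explicitly (this is just the standard implicit-function-theorem reading of the remark above, with generic names not used in the paper): writing $P_1,\ldots,P_6$ for the six SPIs and $c_1,\ldots,c_6$ for the six Cartan invariants \(\Delta \ln (\Phi _{11}), \rho , \mu , \epsilon , |\tau |^2, \Psi _2/\Phi _{11}\),
$$\det \left( \frac{\partial P_i}{\partial c_j}\right) \ne 0 \ \text{at a point} \quad \Longrightarrow \quad \rho = F(P_1,\ldots ,P_6) \ \text{locally, for some smooth } F.$$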
Reproductive biology of common carp (Cyprinus carpio Linnaeus, 1758) in Lake Hayq, Ethiopia
Assefa Tessema, Abebe Getahun, Seyoum Mengistou, Tadesse Fetahi & Eshete Dejen
Fisheries and Aquatic Sciences, volume 23, Article number: 16 (2020)
This study was conducted in Lake Hayq between January and December 2018. The objectives of this study were to determine the growth, condition, sex ratio, fecundity, length at first sexual maturity (L50), and spawning seasons of common carp (Cyprinus carpio). Monthly fish samples of C. carpio were collected using gillnets of stretched mesh sizes of 4, 6, 7, 8, 10, and 13 cm and beach seines of mesh size of 6 cm. Immediately after the fish were captured, total length (TL) and total weight (TW) of each individual were measured in centimeters and grams, respectively, and their relationship was determined using a power function. Length at first maturity (L50) was determined for both males and females using the logistic regression model. The spawning season was determined from the frequency of mature gonads and the variation of gonadosomatic index (GSI) values of both males and females. Fecundity was analyzed from 67 mature female specimens. The length–weight relationship of C. carpio was TW = 0.015TL^2.93 for females and TW = 0.018TL^2.87 for males, which indicates negative allometric growth in both cases. The mean Fulton condition factor (CF) was 1.23 ± 0.013 for females and 1.21 ± 0.011 for males. The value of CF in both cases was > 1, which shows that both sexes are in good condition. Among the total of 1055 C. carpio collected from Lake Hayq, 459 (43.5%) were females and 596 (56.5%) were males. The chi-square test showed a significant deviation of the male-to-female ratio from 1:1 (χ² = 22, df = 11, P > 0.05) within sampling months. The length at first sexual maturity (L50) was 21.5 cm for females and 17.5 cm for males; males mature at smaller sizes than females. The spawning season of C. carpio extended from February to April, and the peak spawning season for both sexes was in April. The average absolute fecundity was 28,100 ± 17,462. C. carpio is currently the main commercially important fish in Lake Hayq, while the Nile tilapia fishery has declined. Therefore, these baseline data on the growth, condition, and reproductive biology of common carp will be essential for understanding the status of the carp population and for designing appropriate management systems for the fish stock of Lake Hayq, Ethiopia, and adjacent countries.
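The following Python sketch illustrates, in outline, how the length–weight parameters, Fulton's condition factor, and L50 described above could be estimated from data of this kind. It is not the authors' analysis script; the inputs (arrays of total length in cm, total weight in g, and a 0/1 maturity flag) are assumed purely for illustration.

```python
# Illustrative sketch only; not the study's actual analysis code.
# Assumed inputs: tl (total length, cm), tw (total weight, g),
# mature (0/1 maturity flag), all as NumPy arrays of equal length.
import numpy as np
from scipy.optimize import curve_fit

def fit_length_weight(tl, tw):
    """Fit TW = a * TL**b by linear regression on log-transformed data."""
    b, log_a = np.polyfit(np.log(tl), np.log(tw), 1)
    return np.exp(log_a), b  # b < 3 indicates negative allometric growth

def fulton_condition_factor(tl, tw):
    """Fulton's condition factor K = 100 * TW / TL**3 (TW in g, TL in cm)."""
    return 100.0 * tw / tl ** 3

def fit_l50(tl, mature):
    """Fit a logistic maturity curve P = 1 / (1 + exp(-r * (TL - L50)))."""
    logistic = lambda x, l50, r: 1.0 / (1.0 + np.exp(-r * (x - l50)))
    (l50, r), _ = curve_fit(logistic, tl, mature, p0=[np.median(tl), 0.5])
    return l50, r
```

A fitted exponent b of roughly 2.9, a mean K above 1, and L50 estimates near the reported 21.5 cm (females) and 17.5 cm (males) would be consistent with the results summarized in the abstract.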
Common carp (Cyprinus carpio) is one of the widely cultured commercially important freshwater fish species in the world (FAO 2013). C. carpio is native to Eastern Europe and Central Asia. It can tolerate a wide range of water quality parameters. In natural water bodies, this species can survive in very low water temperature and it can tolerate low concentrations and supersaturation of dissolved oxygen (Banarescu and Coad 1991).
Common carp is an omnivorous fish species that consumes food of both animal origin (aquatic insects, macroinvertebrates, and zooplankton) and plant origin (phytoplankton, macrophytes) (Rahman et al. 2008, 2009; Weber and Brown 2009). C. carpio grows rapidly, achieves sexual maturation in the second year of life, and is highly fertile (about 2 million eggs per female) (Balon 1975; Hossain et al. 2016). The combination of these features gives it a high invasive potential (Troca and Vieira 2012).
Knowledge of fish reproductive biology is very important for the rational utilization of fish stocks and their sustainable production (Cochrane 2002; Temesgen 2017). Understanding the reproductive aspects of fish is also very important for providing sound scientific advice in fishery management (Hossain et al. 2017; Khatun et al. 2019).
Common carp have been introduced into many water bodies throughout the world, including Europe, Australia, North America, Africa, and Asia. The wide distribution and successful introductions of common carp are mostly due to their tolerance to variable environmental conditions (Forester and Lawrence 1978), as well as to their capability for early sexual maturity and rapid growth (Koehn 2004).
Cyprinus carpio was first introduced to Aba Samuel Dam (Awash River basin) in 1940 from Italy (Getahun 2017). Later, C. carpio was introduced in Lake Ziway in the late 1980s (FAO 1997; Abera et al. 2015) and in highland lakes such as Ashengie, Ardibo, and Maybar (Golubtsov and Darkov 2008) for food security purposes, and the introduction was successful. Common carp were introduced to Lake Hayq accidentally from Lake Ardibo in 2008 (Wolde Mariam, Personal Communication 2018) through the Ankerkeha River that connects the two lakes during the rainy season. Though common carp has only recently become established in Lake Hayq, it is dominating the other commercially important fish species, Nile tilapia and catfish. Fishermen of Lake Hayq believe that the current stunted growth of Nile tilapia (Oreochromis niloticus) is due to the recent invasion of common carp in the lake.
Although some studies have been conducted on common carp reproductive biology in different water bodies of Ethiopia, such as Hailu (2013) in Amerti Reservoir, Abera (2015) in Lake Ziway, and Asnake (2010) in Lake Ardibo, there is no information on the reproductive biology of common carp in Lake Hayq. Therefore, the purpose of this study was to establish baseline data on the growth and condition, sex ratio, fecundity, length at first sexual maturity, and spawning seasons of common carp, and to design a management strategy for the population of common carp in Lake Hayq.
Study area and sampling techniques
The study was conducted in Lake Hayq, which is located in the North Central highlands of Ethiopia. It is a typical example of an Ethiopian highland lake of volcanic origin. Geographically, it lies between 11° 3′ N and 11° 18′ N latitude and 39° 41′ E and 39° 68′ E longitude, with an average elevation of 1911 meters above sea level. The lake has a closed drainage system, and the total watershed area is about 77 km2, of which 22.8 km2 is occupied by Lake Hayq. According to Demlie et al. (2007), the average depth of the lake is 37 m, and the maximum depth is 81 m. The only stream entering the lake is the Ankerkeha River, which flows into its southeastern corner. According to Fetahi et al. (2011), Lake Hayq is classified as a small highland freshwater lake (Fig. 1).
Location map of Lake Hayq with respect to Ethiopia and Amhara Regional State
Among the climate variables, only the maximum and minimum temperature and rainfall of Lake Hayq were available at the Kombolcha Meteorological Agency. In 2018, the average monthly maximum and minimum temperatures around Lake Hayq were 25.9 and 9.9 °C, respectively (Fig. 2). The annual rainfall was 1200 mm (Fig. 3). The rainfall and temperature variability around Lake Hayq over the last 10 years (2009–2018) was very low. The average monthly minimum and maximum temperatures and annual rainfall were 9.8 °C, 26.6 °C, and 1205.6 mm, respectively (Kombolcha Meteorological Agency, 2019).
Monthly maximum and minimum temperature variation of Lake Hayq in 2018
Monthly rainfall variation of Lake Hayq in 2018
Fishery data
Three sampling sites were selected based on the impact of human and livestock activities: a littoral site with intensive human activities related to recreation in lodges; a pelagic site with less impact from humans and livestock; and the river mouth (Ankerkeha River), which carries a huge silt load every year (Table 1). The sampling sites were fixed with GPS, and a map was generated (Fig. 1). Fish specimens were collected each month for 1 year using gillnets of 4, 6, 8, 10, and 13 cm stretched mesh sizes, set overnight in the lake, and beach seines of 6 cm mesh size. Data such as length, weight, sex, and maturity stages were collected in the field immediately after the fish were caught.
Table 1 Sampling site description
Some biological aspects of common carp
Length-weight relationship
The relationship between total length (TL) and total weight (TW) of C. carpio was calculated using power function as in Bagenal and Tesch (1978).
$$ \mathrm{TW}={\mathrm{aTL}}^{\mathrm{b}} $$
TW Total weight (g)
TL Total length (cm)
a Intercept of the regression line
b Slope of the regression line
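For illustration, the parameters a and b can be estimated by ordinary least squares on the log-transformed data. The following minimal Python sketch is not the analysis code used in this study (which relied on SPSS and R), and the length and weight values in it are hypothetical:

import numpy as np

# Hypothetical total length (cm) and total weight (g) measurements
TL = np.array([11.0, 15.2, 21.5, 28.3, 34.7, 41.0, 50.0])
TW = np.array([19.0, 52.0, 140.0, 310.0, 560.0, 900.0, 1650.0])

# log(TW) = log(a) + b*log(TL): fit a straight line to the log-transformed data
b, log_a = np.polyfit(np.log(TL), np.log(TW), 1)
a = np.exp(log_a)
print("TW = %.3f * TL^%.2f" % (a, b))  # b < 3 indicates negative allometric growth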
Condition factor (Fulton factor)
The wellbeing of common carp was determined by using the Fulton condition factor as indicated in Bagenal and Tesch (1978).
Fulton condition factor was calculated as:
$$ \mathrm{FCF}=\frac{\mathrm{TW}}{{\mathrm{TL}}^3}\times 100 $$
where TW is the total weight in grams and TL is the total length in centimeters
Sex ratio was determined using the formula:
$$ \mathrm{Sex}\ \mathrm{ratio}=\frac{\mathrm{Number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{females}}{\mathrm{Number}\kern0.5em \mathrm{of}\kern0.5em \mathrm{males}} $$
The absolute fecundity (AF) of individual females was determined gravimetrically (Bagenal and Braum 1987), with the number of ripe oocytes counted from triplicates of 1-g sub-sample of the ovary. The relationship between absolute fecundity with total length, total weight, and gonad weight was determined using least squares regression.
The spawning season was determined from the percentages of fish with ripe gonads taken each month (Hossain and Ohtomi 2008) and from monthly GSI variations (Hossain et al. 2017). The spawning seasons of C. carpio were determined based on monthly variations of the gonadosomatic index (GSI):
$$ GSI=\frac{W_g}{W-{W}_g}\times 100 $$
where Wg is the gonad weight (g) and W is the total weight (g) of the fish (Ricker 1975).
Maturity estimation
Total length (cm) and total weight (g) of each specimen of common carp were measured at the sampling sites using measuring board and sensitive balance, respectively. After dissection, the gonad maturity of each specimen was identified using a 5-point maturity scale (Wudneh, 1998). The length at which 50% of both sexes reached maturity (L50) was determined from the percentages of mature fish selected from peak breeding seasons (March–April) and fitted to the logistic equation described by Echeverria (1987).
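As an illustration of the L50 estimation, the following minimal Python sketch fits a logistic curve to binary maturity data and solves for the length at which 50% of the fish are mature. The data are hypothetical and the study itself used R 3.3.1 for the logistic regression; this is only a sketch of the idea:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical total lengths (cm) and maturity flags (1 = mature, 0 = immature)
length = np.array([12, 14, 16, 18, 20, 22, 24, 26, 28, 30], dtype=float)
mature = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(length.reshape(-1, 1), mature)
# P(mature) = 1/(1 + exp(-(b0 + b1*TL))); L50 solves b0 + b1*L50 = 0
L50 = -model.intercept_[0] / model.coef_[0][0]
print("Estimated L50 = %.1f cm" % L50)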
Descriptive statistics (frequency, percentages, and graphs) and inferential statistics (chi-square, independent t test, linear, and logistic regression) were used to summarize the collected data. SPSS Software Package version 16 and R 3.3.1 were used to summarize the collected data.
The total length of female and male C. carpio ranged from 11 to 50 cm and 10.5 to 52 cm, respectively, and the total weight of females and males ranged from 19 to 1697 g and 18 to 1378 g, respectively. The length-weight relationship of both female and male C. carpio in Lake Hayq was curvilinear, and as a result, the line fitted to the data was described by the regression equation (Table 2). In this study, the "b" values of both female and male C. carpio were significantly different from 3 (both below 3), showing negative allometric growth (Figs. 4 and 5).
Table 2 Length-weight relationship of C. carpio in Lake Hayq
Length-weight relationship of female Cyprinus carpio in Lake Hayq (N = 459)
Length-weight relationship of male Cyprinus carpio in Lake Hayq (N = 596)
Fulton's condition factor
The Fulton condition factor values of female and male C. carpio ranged from 1 to 1.98 and 1 to 1.83, respectively. The mean and SE values of FCF of females and males were 1.23 ± 0.013 and 1.21 ± 0.011, respectively. The independent t test analysis showed that there was no significant difference (P > 0.05) in mean FCF between male and female C. carpio in Lake Hayq.
From 1055 specimens of C. carpio collected from Lake Hayq, 459 (43.5%) were females and 596 (56.5%) were males. The chi-square test showed that there was a significant deviation of the male-to-female ratio from 1:1 (χ² = 22, df = 11, P < 0.05) within sampling months.
Reproductive aspects of common carp
Length at first sexual maturity
Size at first maturity (L50) is the size at which 50% of the fish get matured for the first time. From the logistic regression model analyzed, male C. carpio matured at smaller size (17.5 cm) than female (21.5 cm) in Lake Hayq as shown in Fig. 6.
Length at first sexual maturity (L50) of female (a) and male (b) C. carpio in Lake Hayq
The occurrence of mature males and females
The number of mature males (stage 4) of C. carpio was higher than that of females during sampling months. The number of mature female and male specimens was higher from January to April. The highest number of mature females and males was observed from February to April (Fig. 7). The 10-year (2009–2018) meteorological data analysis showed that the average atmospheric temperature around Lake Hayq from February to April was 27.2 °C. Rainfall distribution around Lake Hayq is bimodal, and rainfall was also recorded around the lake in these months. This warm weather and the availability of rainfall might have triggered the spawning of common carp in the lake.
Monthly frequency of mature specimens of C. carpio in Lake Hayq
Sixty-seven fully mature C. carpio with TL of 21–49 cm and TW of 104–1230 g were selected for the fecundity study. The average absolute fecundity (AF) was 28,100 ± 17,462. The relationship of AF with TL, TW, and GW was linear (Figs. 8, 9, and 10), and absolute fecundity was significantly related to TL, TW, and GW (P < 0.05).
Relationship between absolute fecundity (AF) and total length (TL) in C. carpio
Relation between absolute fecundity (AF) and total weight (TW) in C. carpio
Relation between absolute fecundity (AF) and gonad weight (GW) in C. carpio
Gonadosomatic index
Cyprinus carpio in Lake Hayq has more than one peak spawning season starting from February to April. However, the highest peak spawning season for both sexes was in April (Fig. 11).
Mean monthly gonadosomatic index (GSI) of C. carpio in Lake Hayq
This reproductive biological study of C. carpio in Lake Hayq is the first report which will be used as basic and baseline information. The result of the study helps to know the population status of the fish and design the possible strategies for sustainable utilization of fisheries of the lake.
Length-weight relationships in fishes are an important tool in fish stock assessment to know the growth status and management of the fishes (Ujjania et al. 2012). The length-weight relationship of C. carpio in Lake Hayq was negative allometric growth with a "b" value of 2.93 for females and 2.87 for males. These values were similar to 2.82 for C. carpio in Lake Ardibo for both sexes (Asnake 2010), 2.87 and 2.77 for female and male C. carpio in Foum El-Khanga Dam in Algeria (Sahtout et al.2017), but different from 1.9 and 2.3 for female and male C. carpio in Lake Naivasha in Kenya (Aera et al. 2014) and 2.92 for C. carpio in Lake Amerti (Hailu 2013). These situations may be caused by several factors including the seasonal effect, habitat type, degree of stomach fullness, gonad maturity, sex, health, preservation techniques, food availability, differences in the observed length ranges, and fatness of the species as well as physical factors such as temperature and salinity (Wootton, 1998; Rahman et al. 2012; Hossain et al. 2016). The variations in "b" values between males and females may depend on various factors such as the number of specimens examined, and the sampling season.
The FCF of females and males of C. carpio were 1.23 ± 0.013 and 1.21 ± 0.011, respectively. These values were similar to 1.22 ± 0.14 for C. carpio in Amerti Reservoir (Hailu, 2013), but different from 1.58 and 1.57 for female and male C. carpio in Damsa Dam Lake in Turkey (Mert and Bulut 2014), 1.57 for both sexes of C. carpio in Foum El-Khanga Dam in Algeria (Sahtout et al. 2017), and 1.39 and 1.27 for female and male C. carpio in Almus Dam Lake in Turkey (Karataş et al., 2007). These variations in FCF of C. carpio in different water bodies could be based on the difference in age, sex, season, stage of maturity, the fullness of gut, the type of food consumed, the amount of fat reserve, and the degree of muscular development (Pauker and Coot, 2004; Hossain et al. 2013).
The sex ratio (F:M) in this study was 1.3:1, and there was a significant deviation from hypothetical female to male ratio (1:1). The result of this study disagrees with Hailu (2013) that has reported non-significant variation (1.15:1) female to male ratio in Amerti Reservoir. However, this result agrees with the report (1.53:1) female to male ratio in Damsa Dam Lake in Turkey (Mert and Bulut 2014).
In the present study, the size at first sexual maturity of C. carpio was 17.5 cm for males and 21.5 cm for females. These values were similar to 15.8 and 22.5 for male and female C. carpio in Sidi Saad Reservoir in Tunisia (Hajlaoui et al. 2016). But these values were different from 27 cm and 28.3 cm for male and female C. carpio in Amerti Reservoir (Hailu 2013), 27 cm and 28.7 cm for male and female C. carpio in Lake Ziway (Abera et al. 2015), and 34 and 42 cm for male and female C. carpio in Lake Naivasha in Kenya (Oyugi 2012 ).
Knowledge of the fecundity of fish is important to examine the potential of its stocks, life history, practical culture, and actual management of the fishery (Islam et al. 2012). The range and the mean fecundity of C. carpio in Lake Hayq were 10,316–122,600 and 28,100 ± 17,462, respectively. These values were greater than the absolute fecundity range of 1,610–99,737 for C. carpio in Lake Ardibo (Asnake 2010). However, the fecundity of C. carpio in Lake Hayq was lower than in most water bodies of Ethiopia; it is less than the range of 36,955–318,584 and mean of 170,937 ± 1,308 recorded for C. carpio in Amerti Reservoir (Hailu 2013) and the range of 75,645–356,745 and mean of 210,538 for C. carpio in Lake Ziway (Lemma et al. 2015). Fecundity of C. carpio depends on body size, and individuals produce between 500,000 and 3 million eggs per spawning (Smith 2004). Thus, the reproductive potential of C. carpio is exceptional as they mature early, are highly fecund, increase reproductive effort with age over their life span, and reproduce at least once each year when conditions are appropriate for survival of larvae. The lower absolute fecundity in Lake Hayq could be due to the smaller size of fish compared to the C. carpio in Amerti Reservoir and Lake Ziway.
Appropriate identification of the maturity status of fishes is fundamental for the appropriate management of exploited stocks in a fishery and is a commonly used tool by fisheries biologists and managers (Rahman et al. 2018). The monthly average GSI values of males and females were higher from February to April and were highest in April (Fig. 11). The lowest and highest GSI values were 1.1 and 4 for males and 3.5 and 10 for female C. carpio. The GSI values of females were higher than those of males due to the higher gonad weight of females. The observed higher GSI values of both males and females between February and April, and the highest values in April, might be associated with higher atmospheric and water temperature values of 26 and 23 °C, respectively. Rainfall availability, together with temperature, might also contribute to more food (plankton, macrophytes, and detritus) and trigger the spawning of C. carpio in Lake Hayq. The mean monthly average water temperature of Lake Hayq was 23 °C, and better rainfall was recorded during the spawning months. In agreement with the current study, peak breeding seasons were recorded in Amerti Reservoir (Hailu 2013) and Lake Ziway (Abera et al. 2015) when water temperature becomes higher and rainfall is available. C. carpio in Lake Hayq has more than one spawning season, similar to Amerti Reservoir (Hailu 2013), Lake Ziway (Abera et al. 2015), and Lake Naivasha in Kenya (Oyugi, 2012). This might be related to the thermally stable warm environment and unlimited food resources (Muchiri et al. 1995). The mean monthly surface water temperature, which ranged from 21.1 to 25.1 °C during the study period, appears to favor year-round spawning of common carp in Lake Hayq.
Conclusion and recommendation
The growth and condition of common carp in Lake Hayq were good. The absolute fecundity of common carp in Lake Hayq was lower compared to other Ethiopian and African water bodies, which could be due to the smaller size of the fishes used for fecundity analysis. The L50 values of common carp were small (17.5 cm for males and 21.5 cm for females), which might be associated with illegal fishing activities and narrow-meshed gillnets of 4–6 cm mesh size. Hence, the mesh size of the gillnets should be regulated to at least 8 cm, which is the national standard. Furthermore, common carp have an extended spawning season in Lake Hayq (February–April), with peak spawning in April. Therefore, these intense spawning months should be used as closed seasons (no fishing activities). Such restricted gillnet use and closed-season practices could bring better recruitment and better fish size. Long-term monitoring of the reproductive potential, spawning season, and population status of common carp should be done for sustainable fishery utilization of Lake Hayq.
Data sharing is not applicable.
Abera L, Getahun A, Lemma B. Some aspects of reproductive biology of the common carp (Cyprinus Carpio Linnaeus, 1758) in Lake Ziway, Ethiopia. Global J Agric Res Rev. 2015;3:151–7.
Aera NC, Migiro EK, Yasindi A, Outa N. Length-weight relationship and condition factor of common carp, (Cyprinus carpio) in Lake Naivasha, Kenya. Int J Curr Res. 2014;6:8286–96.
Asnake W. Fish resource potential and some biological aspect of Oreochromis niloticus and Cyprinus carpio in Lake Ardibo, Northern Ethiopia. MSc Thesis, College of Agricultural and Environmental Science. Bahir Dar: Bahir Dar University; 2010.
Bagenal TB, Braum E. Methods for assessment of fish production in freshwaters. London: Blackwell Scientific Publications; 1987.
Bagenal TB, Tesch FW. Age and growth. In: Bagenal TB, editor. Methods for assessment of fish production in freshwaters. Handbook no.3, England. Oxford: Blackwell; 1978. p. 101–136.
Balon EK. Reproductive guilds of fishes: a proposal and definition. J Fish Res Board Can. 1975;32:821–64.
Banarescu P, Coad B W. Cyprinids of Eurasia. In: Winfield, IJ, Nelson JS, editors. Cyprinid fishes: systematics, biology, and exploitation., Chapman and Hall, London 1991. p127–155.
Cochrane KL. A fishery manager's guidebook: management measures and their application. In: FAO fisheries technical paper. No. 424. Rome: FAO; 2002. p. 231.
Demlie M, Ayenew T, Stefan W. Comprehensive hydrological and hydrogeological study of topographically closed lakes in highland Ethiopia: the case of Hayq and Ardibo Lakes. J Hydrol. 2007;339:145–58.
Echeverria TW. Thirty-four species of California rockfishes: maturity and seasonality of reproduction. US Fish Bull. 1987;85:229–50.
FAO. Fish state plus: Universal software for fishery statistical time series (available at www.fao.org/fi/statist/fisoft/fishplus.asp). 2013.
FAO (Food and Agricultural Organization). Aquaculture production statistics 1986-1996. FAO fish. Circ, 815, Rev. 9. 1997.
Fetahi T, Michael S, Mengistou S, Simone L. Food web structure and trophic interactions of the tropical highland Lake Hayq. Ethiopia Ecol Model. 2011;222:804–13.
Forester TS, Lawrence JM. Effects of grass carp and carp on populations of bluegill and largemouth bass in ponds. Trans Am Fish Soc. 1978;107:172–5.
Getahun A. The freshwater fishes of Ethiopia, diversity, and utilization. Addis Ababa: View Graphics and Printing Plc; 2017.
Golubtsov AS, Darkov AA. A review of fish diversity in the main drainage systems of Ethiopia based on the data obtained by 2008. In: Pavlov DS, Dgebudaze, YuYu, Darkov AA, Golubtsov AS, Mina MV, editors. Ecological and faunistic studies in Ethiopia, "Proceedings of Jubilee Meeting Joint Ethio-Russian Biological Expedition: 20 years of scientific cooperation". Moscow: KMK Scientific Press Ltd; 2008. p. 69–102.
Hailu M. Reproductive aspects of common carp (Cyprinus Carpio L, 1758) in Amerti reservoir, Ethiopia. J Ecol Nat Environ. 2013;5:260–4.
Hajlaoui W, Missaoui S. Reproductive biology of the common carp, Cyprinus carpio communis, in Sidi Saad reservoir (Central Tunisia). Bull Soc Zool Fr. 2016;141:25–39.
Hossain MY, Hossen MA, Islam MM, Pramanik MNU, Nawer F, Paul AK, et al. Biometric indices and size at first sexual maturity of eight alien fish species from Bangladesh. Egypt J Aquatic Res. 2016;42:331–9.
Hossain MY, Hossen MA, Islam MS, Jasmine S, Nawer F, Rahman MM. Reproductive biology of Pethia ticto (Cyprinidae) from the Gorai River (SW Bangladesh). J Appl Ichthyol. 2017;33:1007–14.
Hossain MY, Ohtomi J. Reproductive biology of the southern rough shrimp Trachysalambria curvirostris (Penaeidae) in Kagoshima Bay, southern Japan. J Crustac Biol. 2008;28:607–12.
Hossain MY, Rahman MM, Abdallah EM, Ohtomi J. Biometric relationships of the pool barb Puntius sophore (Hamilton 1822) (Cyprinidae) from three major rivers of Bangladesh. Sains Malaysiana. 2013;22:1571–80.
Islam MR, Sultana N, Hossain MB, Mondal S. Estimation of fecundity and gonadosomatic index (GSI) of gangetic whiting, Sillaginopsis panijus (Hamilton, 1822) from the Meghna River Estuary, Bangladesh. World Appl Sci J. 2012;17:1253–60.
Karataş M, Çiçek E, Başusta A, Başusta N. Age, growth, and mortality of common carp (Cyprinus Carpio Linneaus, 1758) population in Almus dam Lake (Tokat- Turkey). J Appl Biol Sci. 2007;1:81–5.
Khatun D, Hossain MY, Nawer F, Mostafa AA, Al-Askar AA. Reproduction of Eutropiichthys vacha (Schilbeidae) in the Ganges River (NW Bangladesh) with special reference to the potential influence of climate variability. Environ Sci Pollut Res. 2019;26:10800–15.
Koehn JD. Carp (Cyprinus carpio) as a powerful invader in Australian waterways. Freshw Biol. 2004;49:882–94.
Lemma A, Abebe G, Brook L. Some Aspects of Reproductive Biology of the common carp (Cyprinus Carpio Linnaeus, 1758) in Lake Ziway, Ethiopia. Global Journal of Agricultural Research and Reviews. 2015;3:151–157.
Mert R, Bulut S. Some biological properties of carp (Cyprinus carpio L., 1758) introduced into Damsa Dam Lake, Cappadocia Region, Turkey. Pakistan J Zool. 2014;46:337–46.
Muchiri SM, Hart BJ, Harper MD. The persistence of two introduced tilapia species in Lake Naivasha, Kenya in the face of environmental variability and fishing pressure. In: Pitcher TJ, Hart PJB, editors. The impact of species changes in African lakes: Chapman & Hall; 1995. p. 299–320.
Oyugi OD. Ecological impacts of common carp (Cyprinus Carpio L. 1758) (Pisces: Cyprinidae) on naturalised fish species in Lake Naivasha, Kenya. PhD dissertation. Kenya: University of Nairobi; 2012.
Pauker C, Coot RSR. Factors affecting the condition of Flanmelmout suckers in Colorado River, grand canyon, Arizona. North Am J Fish Manag. 2004;24:648–53.
Rahman MM, Hossain MY, Jewel MAS, Rahman MM, Jasmine S, Abdallah EM, Ohtomi J. Population structure, length-weight, and length-length relationships, and condition-and form-factors of the Pool barb Puntius sophore (Hamilton, 1822) (Cyprinidae) from the Chalan Beel, North-Central Bangladesh. Sains Malaysiana. 2012;41:795–802.
Rahman MM, Hossain MY, Jo Q, Kim SK, Ohtomi J, Meyer C. Ontogenetic shift in dietary preference and low dietary overlap in rohu (Labeo rohita) and common carp (Cyprinus carpio) in semi-intensive polyculture ponds. Ichthyol Res. 2009;56.
Rahman MM, Hossain MY, Tumpa AS, Hossain MI, Billah MM, Ohtomi J. Size at sexual maturity and fecundity of the mola carplet, Amblypharyngodon mola (Hamilton 1822) (Cyprinidae) in the Ganges River, Bangladesh. Zool Ecol. 2018;28:429–36.
Rahman MM, Jo Q, Gong YG, Miller SA, Hossain MY. A comparative study of common carp (Cyprinus carpio L.) and calbasu (Labeo calbasu Hamilton) on bottom soil resuspension, water quality, nutrient accumulations, food intake, and growth of fish in simulated rohu (Labeo rohita Hamilton) ponds. Aquaculture. 2008;285:78–83.
Ricker WE. Computational and interpretation of biological statistics of fish populations Bulletin of the Fisheries Research Board of Canada; 1975.
Sahtout F, Boualleg C, Khelifi N, Kaouachi N, Boufekane B, Brahmia S, et al. Study of some biological parameters of Cyprinus carpio from Foum El-Khanga dam, souk-Ahras. Algeria. AACL Bioflux. 2017;10:663–74.
Smith BB. Common carp (Cyprinus carpio L. 1758): Spawning dynamics and early growth in the lower River Murray. PhD dissertation, School of Earth and Environmental Sciences, University of Adelaide, Australia. 2004.
Temesgen M. Status and trends of fish and fisheries in a tropical rift valley lake, Lake Langeno, Ethiopia. PhD dissertation, Department of Zoological Sciences. Addis Ababa: Addis Ababa University; 2017.
Troca DFA, Vieira JP. Potencial invasor dos Peixes Não Nativos Cultivados Na Região Costeira do Rio Grande Do Sul, Brasil. Bol Inst Pesca. 2012;38:109–20.
Ujjania NC, Kohli MPS, Sharma LL. Length-weight relationship and condition factors of Indian major carps (Catla catla, Labeo rohita, and Cirrhinus mrigala) in Mahi Bajaj Sagar, India. Res J Biol. 2012;2:30–6.
Weber M, Brown M. Effects of common carp on aquatic ecosystems 80 years after "carp as a dominant": ecological insights for fisheries management. Rev Fish Sci. 2009;17:524–37.
Wootton RJ. Ecology of teleost fishes. 2nd ed. London: Kluwer Academic Publishers; 1998.
Wudneh T. Biology and management of fish stocks in Bahir Dar Gulf, Lake Tana, Ethiopia. PhD dissertation. Wageningen: Wageningen Agricultural University; 1998.
The authors would like to acknowledge Addis Ababa University, Ministry of Water, Irrigation, and Electricity, Haik Agricultural Research Sub-Center, for their financial and logistic support. We would also like to extend our gratitude to fishermen of Lake Hayq, specially Fiseha Woldemariam and Seid Abebe, for their unreserved support during fish sample collection and Kidane Aragaw for his support in the data analysis especially logistic regression analysis using R software.
Addis Ababa University and Ministry of Water, Irrigation, and Electricity have granted us 2000 USD for data collection for this research. The funding organizations have commented on the work for the betterment of the paper.
Department of Zoological Sciences, Addis Ababa University, Addis Ababa, Ethiopia
Assefa Tessema, Abebe Getahun, Seyoum Mengistou & Tadesse Fetahi
Intergovernmental Authorities on Development (IGAD), Djibouti City, Djibouti
Eshete Dejen
Assefa Tessema
Abebe Getahun
Seyoum Mengistou
Tadesse Fetahi
TA, the corresponding author, has prepared the report from the collected data, whereas GA, MS, FT, and DE are co-authors who have edited the paper. The authors read and approved the final manuscript.
Authors' information
TA is currently a PhD student at Addis Ababa University.
The co-authors GA and MS are professors, and FT is an associate professor, working in the Department of Zoological Sciences at Addis Ababa University. They have published many articles in fisheries and aquatic sciences and have advised MSc and PhD students. The last co-author, DE, is a senior and energetic researcher in the fishery and aquaculture areas. He is a PhD holder and works at IGAD as a senior fishery expert.
Correspondence to Assefa Tessema.
Not applicable, since there is no ethical approval process for fishery data in Ethiopia.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Tessema, A., Getahun, A., Mengistou, S. et al. Reproductive biology of common carp (Cyprinus carpio Linnaeus, 1758) in Lake Hayq, Ethiopia. Fish Aquatic Sci 23, 16 (2020). https://doi.org/10.1186/s41240-020-00162-x
Fulton condition factor
Length at first sexual maturity spawning seasons
A deep learning framework for predicting cyber attacks rates
Xing Fang ORCID: orcid.org/0000-0001-8574-91491,
Maochao Xu2,
Shouhuai Xu3 &
Peng Zhao4
Just as weather forecasting is invaluable, the capability of forecasting or predicting cyber threats can hardly be overestimated. Previous investigations show that cyber attack data exhibits interesting phenomena, such as long-range dependence and high nonlinearity, which impose a particular challenge on modeling and predicting cyber attack rates. Deviating from the statistical approach that is utilized in the literature, in this paper we develop a deep learning framework based on bi-directional recurrent neural networks with long short-term memory, dubbed BRNN-LSTM. An empirical study shows that BRNN-LSTM achieves a significantly higher prediction accuracy when compared with the statistical approach.
Cyber attacks have become a prevalent and severe threat against the society, including its infrastructures, economy, and citizens' privacy. According to a 2017 report by SymantecFootnote 1, cyber attacks in year 2016 include multi-million dollar virtual bank heists as well as overt attempts to disrupt the U.S. election process; according to another 2017 report by NetDiligenceFootnote 2, the average cyber breach cost is $394K and companies with revenues greater than $2B suffer an average breach cost of $3.2M.
Given the severe consequence of cyber attacks, cyber defense capability needs to be substantially improved. One approach to improving cyber defense is to forecast or predict cyber attacks, similar to how weather forecasting has benefited the society in mitigating natural hazards. The prediction capability can guide defenders to achieve cost-effective, if not optimally, allocation of defense resources [1–4]. For example, the defender may need to allocate more resources for deep packet inspection [5] to accommodate the predicted high cyber attack rate. Moreover, researchers have studied how to use a Bayesian method to predict the increase or decrease of cyber attacks [6], how to use a hidden Markov model to predict the increase or decrease of Bot agents [7], how to use a seasonal ARIMA model to predict cyber attacks [8], how to use a FARIMA model to predict cyber attack rates when the time series data exhibits long-range dependence [1], how to use a FARIMA+GARCH model to achieve even more accurate predictions by further accommodating the extreme values exhibited by the time series data [9], how to use a marked point process to model extreme cyber attack rates while considering both magnitudes and inter-arrival times of time series [10], how to use a vine copula model to quantify the effectiveness of cyber defense early-warning mechanisms [11], and how to use a vine copula model to predict multivariate time series of cybersecurity attacks while accommodating the high-dimensional dependence between the time series [12]. We refer to two recent surveys on the use of statistical methods in cyber incident and attack detection and prediction [13, 14].
A particular kind of cyber threat data is the time series of cyber attacks observed by a cyber defense instrument known as a honeypot, which passively monitors incoming Internet connections. Such datasets exhibit rich phenomena, including long-range dependence (LRD) and high nonlinearity [1, 9].
It is worth mentioning that the usefulness of prediction capabilities in the context of cyber defense ultimately depends on the degree of prediction accuracy, a situation similar to that of weather forecasting. Cyber defense practitioners should be made fully aware of this factor. Although prediction accuracy could be assured by leveraging large amounts of data, which is indeed the case for weather forecasting, the collection of large amounts of cyber attack data may be challenging. Nevertheless, understanding the usefulness of prediction capabilities in the context of cyber security is a problem of high importance that has yet to be thoroughly investigated.
Our contributions
The contribution of the present paper is in two-fold. First, we propose a novel bi-directional recurrent neural networks with long short-term memory framework, or BRNN-LSTM for short, to accommodate the statistical properties exhibited by cyber attack rate time series data. The framework gives users the flexibility in choosing the number of LSTM layers that are incorporated into the BRNN structure. Second, we use real-world cyber attack rate datasets to show that BRNN-LSTM can achieve a substantially higher prediction accuracy than statistical prediction models, including the one proposed in literature [9] and the ones that are studied in the present paper for comparison purposes.
Statistical methods have been widely used in the context of data-driven cyber security research, such as intrusion detection [15–18]. However, deep learning has not received the due amount of attention in the context of cyber security [13, 14]. This is true despite the fact that deep learning has been tremendously successful in other application domains [19–21] and has started to be employed in the cyber security domain, including adversarial malware detection [22, 23] and vulnerability detection [24, 25].
In the context of vulnerability detection, supervised machine learning methods, including logistic regression, neural networks, and random forests, have been proposed for this purpose [26, 27]. These models are trained using large-scale vulnerability data. However, unlike deep learning models that can directly work on raw data, those models require the data to be preprocessed to extract features. There are also other approaches to detecting vulnerabilities. For example, an architectural approach to pinpointing memory-based vulnerabilities has been proposed in [28], which consists of an online attack detector and an offline vulnerability locator that are linked by a record and replay mechanism. Specifically, it records the execution history of a program and simultaneously monitors its execution for attacks. If an attack is detected by the online detector, the execution history is replayed by the offline locator to locate the vulnerability that is being exploited. For more discussion on vulnerability detection, please refer to [24, 25, 27, 28] and the references therein.
In the context of time series analytics, various statistical approaches have been developed. For example, ARIMA, Holt-Winters, and GARCH models are among the most popular statistical approaches for analyzing time series data [1, 8, 9, 29]. Other statistical models, such as Gaussian mixture models, hidden Markov models, and state space models have been developed to analyze time series data with uncertainties and/or some unobservable factors [17, 30]. Recently, it was discovered that deep learning is very efficient in time series prediction. For example, deep learning has been employed to predict financial data, which contains some noise and volatility [21]. In the context of transportation application, deep learning has been used to predict passenger demands for on-demand ride service [31]. In particular, it is discovered that deep learning can achieve a higher accuracy than statistical time series models (e.g., ARMA and Holt-Winters models) in predicting transportation traffic [32–34]. It is further argued in [32] that a particular class of deep learning models, known as feed-forward neural networks, are the best predictors when taking into account both prediction precision and model complexity. In [34], the prediction performances of the deep learning approach and of the statistical ARIMA approach are compared against each other. It is shown that the deep learning approach can significantly (more than 80%) reduce the error rate when compared with the ARIMA models.
The rest of the paper is organized as follows. In the "Preliminaries" section, we review some concepts of deep learning that are related to the deep learning framework we will propose in this paper. In the "Framework" section, we present the framework we propose for predicting cyber attack rates. In the "Empirical study" section, we present our experiments on applying the framework to a dataset of cyber attack rates and compare the resulting prediction accuracy with the accuracy of the statistical approach reported in the literature. In the "Conclusion" section, we conclude the present paper with future research directions.
In order to improve the readability of the paper, we summarize the main notations that are used in the present paper in Table 1:
Table 1 Summary of notations
In this section, we review three deep learning concepts that are related to the present work: recurrent neural network (RNN), bi-directional RNN, and long short-term memory (LSTM).
Figure 1 highlights the standard RNN structure, which updates its hidden layers according to the information received from the input layer and the activation from the previous forward propagation. When compared with feed-forward neural networks, RNN can accommodate the temporal information embedded into the sequence of input data (see, e.g., [35, 36]). Intuitively, this explains why RNN is suitable for natural language processing and time series analysis (see, e.g., [36–39]). This observation motivates us to leverage RNN as a starting point in designing our framework that will be presented later.
A standard unfolded RNN structure at time t
As highlighted in Fig. 1, the computing process at each time step of RNN is
$$h_{t}=\sigma(W_{x} \cdot x_{t}+W_{h} \cdot h_{t-1}+b_{h}),$$
where Wx∈Rm×n is the weight matrix connecting the input layer and the hidden layer with m being the size of the input and n being the size of the hidden layer, Wh∈Rn×n is the weight matrix between two consecutive hidden states ht−1 and ht,bh is the bias vector of the hidden layer, and σ is the activation function to generate the hidden state. As a result, the network output can be described by
$$y_{t}=\sigma(W_{y} \cdot h_{t}+b_{y}),$$
where Wy∈Rn is the weight connecting the hidden layer and the output layer, by is the bias vector of the output layer, and σ is the activation function of the output layer.
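To make the recurrence concrete, the following minimal NumPy sketch implements one forward step of the standard RNN described above. The weights are random placeholders and the input sequence is a toy example, purely for illustration; weight shapes are chosen so that the matrix-vector products line up.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

m, n = 1, 8                                # input size and hidden-layer size
rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.1, size=(n, m))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(n, n))   # hidden-to-hidden weights
W_y = rng.normal(scale=0.1, size=(1, n))   # hidden-to-output weights
b_h, b_y = np.zeros(n), np.zeros(1)

def rnn_step(x_t, h_prev):
    h_t = sigmoid(W_x @ x_t + W_h @ h_prev + b_h)   # hidden-state update
    y_t = sigmoid(W_y @ h_t + b_y)                  # output at step t
    return h_t, y_t

h = np.zeros(n)
for x in [0.21, 0.35, 0.18, 0.40]:         # toy sequence of normalized attack rates
    h, y = rnn_step(np.array([x]), h)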
Bi-directional RNN
A uni-directional RNN is a RNN that only takes one sequence as the input. A uni-directional RNN cannot take full advantage of the input data in the sense that it only learns information from the "past." In order to overcome this issue, the concept of bi-directional RNN is introduced to make a RNN learn from both the past and the future [40]. Technically speaking, a bi-directional RNN is essentially two uni-directional RNNs that are combined together, where one learns from the past and the other learns from the "future"; the results of the two uni-directional RNNs are merged together to compute a final output.
The training process of RNNs can suffer from the gradient vanishing/exploding problem [41], which can be alleviated by another RNN structure known as LSTM [42]. LSTM is composed of units called memory blocks, each of which contains some memory cells with self-connections, which store (or remember) the temporal state of the network, and some special multiplicative units called gates. Each memory block contains an input gate, which controls the flow of input activations into the memory cell; an output gate, which controls the output flow of cell activations into the rest of the network; and a forget gate, which controls what information is discarded from the memory cell.
As highlighted in Fig. 2, the activation at step t, namely, ht, is computed based on four pieces of gate input, namely, the information gate it, the forget gate ft, the output gate ot, and the cell gate ct [43]. Specifically, the information gate input at step t is
$$i_{t} = \sigma\left(U_{i}\cdot h_{t-1}+W_{i}\cdot \mathbf{x}_{t}+b_{i}\right), $$
where σ(·) is a sigmoid activation function, bi is the bias, xt is the input vector at step t, and Wi and Ui are weight matrices. The forget gate input and the output gate input are respectively computed as
$$\begin{array}{@{}rcl@{}} f_{t} &=& \sigma\left(U_{f}\cdot h_{t-1}+W_{f}\cdot \mathbf{x}_{t}+b_{f}\right), \\ o_{t} &=& \sigma\left(U_{o}\cdot h_{t-1}+W_{o}\cdot \mathbf{x}_{t}+b_{o}\right), \end{array} $$
LSTM block at step t with information gate it, forget gate ft, output gate ot, and cell gate ct
where Uf,Uo,Wf, and Wo are weight matrices, and bf and bo are biases. The cell gate input is computed as
$$c_{t} = f_{t}\cdot c_{t-1} + i_{t}\cdot k_{t} \quad\text{with}\quad k_{t} = \tanh\left(U_{k}\cdot h_{t-1}+W_{k}\cdot \mathbf{x}_{t}+b_{k}\right), $$
where tanh is the hyperbolic tangent function, Uk and Wk are weights, and bk is bias. The activation at step t is computed as
$$ h_{t} = o_{t} \cdot \tanh(c_{t}). $$
Intuitively, the key component of LSTM is the cell state, which flows throughout the network. Given inputs ht−1 and xt, the forget gate ft decides what information to throw away from the previous cell state ct−1: it takes ht−1 and xt as input and uses the sigmoid activation function σ(·) to generate a number between 0 and 1 for each value in cell state ct−1. The information gate it determines what new information is to be stored in the current cell state ct, in two steps: a set of candidate values kt is computed based on the current input, and the information gate it then uses σ(·) to decide which candidate values will be stored in ct. The cell gate then computes ct. Finally, ht is computed based on ct and ot, where the latter is the information from the output gate.
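The gate equations above translate directly into code. The following minimal NumPy sketch of a single LSTM step is for illustration only, with randomly initialized weights; it is not the implementation used for the experiments below.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, m = 8, 1                                   # hidden size and input size
rng = np.random.default_rng(0)
U = {g: rng.normal(scale=0.1, size=(n, n)) for g in "ifok"}   # recurrent weights U_i, U_f, U_o, U_k
W = {g: rng.normal(scale=0.1, size=(n, m)) for g in "ifok"}   # input weights W_i, W_f, W_o, W_k
b = {g: np.zeros(n) for g in "ifok"}                          # biases b_i, b_f, b_o, b_k

def lstm_step(x_t, h_prev, c_prev):
    i_t = sigmoid(U["i"] @ h_prev + W["i"] @ x_t + b["i"])    # information gate
    f_t = sigmoid(U["f"] @ h_prev + W["f"] @ x_t + b["f"])    # forget gate
    o_t = sigmoid(U["o"] @ h_prev + W["o"] @ x_t + b["o"])    # output gate
    k_t = np.tanh(U["k"] @ h_prev + W["k"] @ x_t + b["k"])    # candidate values
    c_t = f_t * c_prev + i_t * k_t                            # cell-state update
    h_t = o_t * np.tanh(c_t)                                  # activation at step t
    return h_t, c_t

h, c = np.zeros(n), np.zeros(n)
for x in [0.21, 0.35, 0.18, 0.40]:                            # toy normalized attack rates
    h, c = lstm_step(np.array([x]), h, c)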
The bi-directional RNN with LSTM framework
The framework we propose for predicting cyber attack rates is called bi-directional RNN with LSTM or BRNN-LSTM for short, which incorporates some LSTM layers into a bi-directional RNN. BRNN-LSTM has three components: an input layer, a number of hidden layers, and an output layer, where each hidden layer is replaced with a LSTM cell. The same sequential input, denoted by xt={x0,...,xt}, is passed to the two states of the LSTM layers, the forward state, and the backward state. There is no connection in between the two states. The outputs from the two states are then combined together to predict a target value at each step. Figure 3 highlights the structure of BRNN-LSTM with three LSTM layers.
BRNN-LSTM with three LSTM layers
For training a BRNN-LSTM model, we propose using the following objective function:
$$ J = \frac{1}{2m} \cdot \sum\limits^{m}_{i = 1}(\hat{y}_{i}-y_{i})^{2}+\frac{\lambda}{2} \left(||\mathbf{W}||_{2}^{2}+||\mathbf{U}||_{2}^{2}\right), $$
where m is the size of the input, \(\hat {y}_{i}\) and yi are respectively the output of network and the observed values at step i, W and U are weight matrices, \(\mathbf {W} = \{W_{f},W_{i},W_{k},W_{o}\}, \mathbf {U} = \{U_{f},U_{i},U_{k},U_{o}\}, ||\cdot ||_{2}^{2}\) represents the squared L2 norm of weight matrices, and λ is a user-defined penalty parameter. Note that the second term in Eq. (1) is the penalty term for avoiding overfitting. The optimization is defined as
$$\Theta^{*}=\arg\min_{\boldsymbol{\Theta}} J,$$
where Θ=(W,U) are model parameters and can be solved by using the gradient descent method [42, 44].
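For concreteness, the penalized objective in Eq. (1) can be written as a few lines of NumPy. This is only a sketch: the weight matrices in W and U are assumed to be available as arrays, and λ = 0.001 follows the setting used later in the paper.

import numpy as np

def objective(y_hat, y, W_list, U_list, lam=0.001):
    m = len(y)
    mse_term = np.sum((y_hat - y) ** 2) / (2.0 * m)           # data-fit term of Eq. (1)
    penalty = (lam / 2.0) * (sum(np.sum(W ** 2) for W in W_list) +
                             sum(np.sum(U ** 2) for U in U_list))  # squared L2 penalty
    return mse_term + penalty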
Empirical study
Accuracy metrics
Let (y1,…,yN) be observed values and \(\left (\hat y_{1},\ldots,\hat y_{N}\right)\) be the predicted values. In order to evaluate the accuracy of the BRNN-LSTM framework, we propose using the following widely used metrics [1, 9, 45].
Mean square error (MSE): \(\text {MSE}={\sum \nolimits }_{i=1}^{N} \left (y_{i}-\hat y_{i}\right)^{2}/N\).
Mean absolute deviation (MAD): \(\text {MAD}={\sum \nolimits }_{i=1}^{N} \left |y_{i}-\hat y_{i}\right |/N\).
Percent mean absolute deviation (PMAD): \(\text {PMAD}={\sum \nolimits }_{i=1}^{N} \left |y_{i}-\hat y_{i}\right |/{\sum \nolimits }_{i=1}^{N} |y_{i}|\).
Mean absolute percentage error (MAPE): \(\text {MAPE}={\sum \nolimits }_{i=1}^{N} \left |(y_{i}-\hat y_{i})/y_{i}\right |/N\).
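A minimal Python sketch computing these four metrics from arrays of observed and predicted values (for illustration only) is:

import numpy as np

def accuracy_metrics(y, y_hat):
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    n = len(y)
    return {
        "MSE":  np.sum((y - y_hat) ** 2) / n,
        "MAD":  np.sum(np.abs(y - y_hat)) / n,
        "PMAD": np.sum(np.abs(y - y_hat)) / np.sum(np.abs(y)),
        "MAPE": np.sum(np.abs((y - y_hat) / y)) / n,
    }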
The dataset we analyze is the same as the dataset analyzed in [1]. The dataset was collected by a low-interaction honeypot consisting of 166 consecutive IP addresses during five periods of time in the interval between year 2010 and year 2011. These five periods of time are respectively 1,123, 421, 1,375, 528, and 1920 h, each of which is represented by a separate dataset. The honeypot runs the following four honeypot programs: DionaeaFootnote 3, MwcollectorFootnote 4, AmunFootnote 5, and Nepenthes [46], which run some vulnerable services such as SMB (with Microsoft Windows Server Service Buffer Overflow vulnerability MS06040 and Workstation Service Vulnerability MS06070), NetBIOS, HTTP, MySQL and SSH. A honeypot computer runs multiple honeypot programs, each of which monitors (i.e., is associated to) one IP address. A dedicated computer collects the raw network traffic coming to the honeypot as pcap files. Honeypot-captured data are treated as cyber attacks because no legitimate services are associated to the honeypot computers. We refer to [1] for more details about the honeypot instrument.
As in [1] and many analyses, we treat flows (rather than packets) as attacks, while noting that flows can be based on the TCP or UDP protocol. A TCP flow is uniquely identified by an attacker's IP address, the port used by the attacker to wage the attack, a victim IP address (belonging to the honeypot), and the port of the victim IP address under attack. An unfinished TCP handshake is also treated as a flow or attack because the failure may be attributed to the connection being dropped when the port in question is busy. Also as in [1], the preprocessing contains the following steps. First, we disregard the cyber attacks that are waged against the non-production (i.e., unassigned) ports (i.e., any ports that are not associated with the honeypot programs) because these TCP connections are often dropped. Since low-interaction honeypot programs do not collect adequate traffic information that would allow us to determine specific attacks, we only consider the attack rate or the number of attacks (rather than specific types of attacks). Second, the following two widely used parameters [47] are also used to preprocess network traffic flows not ending with the FIN flag (meaning that these flows are terminated unsafely) or the RST flag (meaning that these flows are terminated unnaturally): 60 s for the flow timeout time (meaning that an attack or flow expires after being idle for 60 s) and 300 s for the flow lifetime (meaning that an attack or flow does not span over 5 min or 300 s).
For each period or dataset, the data is represented by {(t,xt)} for t=0,1,2,…, where xt is the number of attacks (i.e., attack rate) that are observed by the honeypot at time t. Unlike [1], we further preprocess the derived attack rate time series by normalizing attack rates into interval (0,1]. Then, small data batches (periods) are selected based on a pre-defined mini-batch size. For prediction purposes, we split each time series into an in-sample part (for model training) and an out-of-sample part (for prediction). As in [1], we set the last 120 h of each period as the out-of-sample part for evaluating prediction accuracy.
Model training and selection
In the training process, we use the mini-batch gradient descent method to compute the minimum of the objective function, which is described in Eq. (1). We use 10,000 iterations to train a network and set the penalty parameter λ = 0.001 because other values do not lead to any significantly better results. For each dataset, we use Algorithm 1 to compute the fitted values with varying model parameters. We select the model that achieves the minimum MSE.
Table 2 describes the selected model and MSE for each dataset. We observe that the selected models for different datasets may use a different batch size r and a different number l of LSTM layers. For datasets I, IV, and V, the selected batch size is 20; for datasets II and III, the selected batch size is 30 and 40, respectively. For the number of LSTM layers, datasets I and IV prefer 4 layers; datasets II and V prefer 2 layers; and dataset III prefers 3 layers.
Table 2 Parameters (r,l) of selected model and MSE for each dataset
Figure 4 plots the fitting of the selected model corresponding to each dataset. We observe that the selected models have satisfactory fitting accuracy. In particular, the extreme values are fitted well in every dataset.
BRNN-LSTM fitting results of cyber attack rates in the five datasets (black line: observed values; red circles: fitted values)
Prediction accuracy
We use Algorithm 2 to predict cyber attack rates corresponding to the out-of-samples, which allow us to calculate the prediction accuracy.
Table 3 describes the prediction results in terms of the accuracy metrics mentioned above. Based on metrics PMAD and MAPE, BRNN-LSTM achieves a remarkable prediction accuracy for datasets I, II, III, and V because prediction errors are less than 5%. However, for dataset IV, the prediction accuracy in metric PMAD is around 17% and in metric MAPE is around 27%. Fortunately, BRNN-LSTM can be easily calibrated to improve its prediction accuracy via a rolling approach as follows. For period IV, we re-estimate model parameters in Θ via Algorithm 1 after observing 20 more data points; the corresponding prediction accuracy, indicated by "IV*" in Table 3, is much better than the original prediction accuracy. For example, the rolling approach reduces the PMAD metric to 10% and reduces the MAPE metric to 13%.
Table 3 Parameters of selected models and prediction accuracy metrics of these selected models, where IV* indicates the rolling approach for dataset IV
Figure 5 plots the prediction results. We observe that the predicted values match the observed values well, but some observed values are still missed by BRNN-LSTM. For example, for dataset III, the extreme value is missed and some observed values are over-predicted. Nevertheless, we conclude that the prediction accuracy is satisfactory.
Prediction accuracy of BRNN-LSTM (black line: observed values; red circles: predicted values)
Model comparisons
In order to further evaluate the prediction accuracy of the proposed framework, we now compare it with other popular models.
The first model we consider (as a benchmark) is the AutoRegressive Integrated Moving Average or ARIMA (p,d,q), which is perhaps the most well-known model in time series analysis [29, 30]. The ARIMA model is described as
$$\begin{array}{@{}rcl@{}} \phi(B)(1-B)^{d} Y_{t}=\theta(B) e_{t}, \end{array} $$
where B is the backshift operator, and ϕ(B) and θ(B) are respectively the AR and MA characteristic polynomials evaluated at B. In order to select the ARIMA model for prediction purpose, we use the AIC criterion while allowing the orders of p and q to vary from 0 to 5 and d to vary from 0 to 2.
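A minimal sketch of this AIC-based order search, assuming the statsmodels package is available, is given below; a placeholder series stands in for the attack-rate data, and the grid matches the stated ranges of p, d, and q.

import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.random.rand(500)          # placeholder for the in-sample attack-rate series
best = None
for p, d, q in itertools.product(range(6), range(3), range(6)):
    try:
        fit = ARIMA(y, order=(p, d, q)).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, (p, d, q))
    except Exception:
        continue                 # skip orders for which estimation fails
print("selected ARIMA order:", best[1])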
ARMA+GARCH
The second model we consider further incorporates the Generalized AutoRegressive Conditional Heteroscedastic or GARCH model, which is widely used in financial time series applications. We use GARCH(1,1) to model the conditional variance and the ARMA model to accommodate the conditional mean. This leads to the following ARMA+GARCH model:
$$Y_{t}=\mathrm{E}(Y_{t}|\mathfrak{F}_{t-1})+\epsilon_{t}, $$
where E(·|·) is the conditional expectation function, \(\mathfrak {F}_{t-1}\) is the historic information up to time t−1, and εt is the innovation of the time series. Since the mean part is modeled as ARMA (p,q), the model can be rewritten as
$$ Y_{t}= \mu+\sum\limits_{k=1}^{p} \phi_{k} Y_{t-k} +\sum\limits_{l=1}^{q} \theta_{l} \epsilon_{t-l} +\epsilon_{t}, $$
where εt=σtZt with Zt being i.i.d. innovations. For the standard GARCH(1,1) model, we have
$$ \sigma_{t}^{2}=w+ \alpha_{1} \epsilon^{2}_{t-1}+ \beta_{1} \sigma^{2}_{t-1}, $$
where \(\sigma ^{2}_{t}\) is the conditional variance and w is the intercept. After some preliminary analysis, we set the order of ARMA to (1,1), as a higher order does not provide significantly better predictions.
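As an illustration of this specification, the following two-step sketch first fits the ARMA(1,1) mean with statsmodels and then fits a GARCH(1,1) to its residuals with the arch package. It is an approximation of joint estimation, assumes both packages are available, uses a placeholder series, and is not the authors' estimation code.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

y = np.random.rand(500)                          # placeholder attack-rate series

arma_fit = ARIMA(y, order=(1, 0, 1)).fit()       # ARMA(1,1) conditional mean
resid = arma_fit.resid

garch_fit = arch_model(resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
print(garch_fit.params)                          # omega, alpha[1], beta[1]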
The third model we consider is based on the recently developed hybrid approach, which is a two-step procedure [48, 49]. The hybrid model first extracts the linear relationship using an ARIMA model, and then uses a nonlinear approach to determine the nonlinear relationship. The nonlinear step can be considered as a prediction on the error term. The resulting hybrid model is written as
$$\begin{array}{@{}rcl@{}} Y_{t}=L_{t}+N_{t}, \end{array} $$
where Lt is the linear part and Nt is the nonlinear part. Since Lt is modeled by an ARIMA model, the residuals at time t are
$$e_{t}=Y_{t}-\hat Y_{t},$$
where \(\hat Y_{t}\) is the fitted value. The residuals are modeled by a nonlinear model, which utilizes the lag information. We consider the following three types of hybrid models:
$$\begin{aligned} \text{H1}: \quad N_{t}&=f(e_{t-1},e_{t-2},\ldots,e_{t-n})+\epsilon_{t}, \\ \text{H2}: \quad N_{t}&=f(e_{t-1},e_{t-2},\ldots,e_{t-n},y_{t-1},y_{t-2},\ldots,y_{t-m})+\epsilon_{t},\\ \text{H3}: \quad N_{t}&=f(y_{t-1},y_{t-2},\ldots,y_{t-n})+\epsilon_{t}, \end{aligned}$$
where εt is the random error at time t and f is a nonlinear function. For the nonlinear function f, we consider the following three popular machine learning approaches [50]: random forest or RF [49], support vector machine or SVM [51], and artificial neural network or ANN [48, 52].
In order to achieve the best prediction accuracy, we examine a number of models. For the linear part of ARIMA (p,d,q), we use the AIC criterion to select models in the training process, where p and q vary from 0 to 5 and d varies from 0 to 1. For the nonlinear model, we vary the lag parameter from 1 to 12. All of the models are trained by using 10-fold cross-validation. For RF, we set the number of trees to 1000; for SVM, we consider the following kernel functions: linear, polynomial, radial basis, and sigmoid; for ANN, we set the number of hidden layers to one while varying the number of hidden nodes from 1 to 10.
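To make the hybrid construction concrete, the following sketch illustrates an H1-style hybrid: an ARIMA model captures the linear part and a random forest (1000 trees, as above) is trained on lagged residuals to approximate N_t. It is an illustration with a placeholder series and an arbitrary ARIMA order, not the exact models selected below.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor

y = np.random.rand(500)                          # placeholder attack-rate series
n_lags = 4

arima_fit = ARIMA(y, order=(0, 1, 2)).fit()
e = np.asarray(arima_fit.resid)                  # residuals e_t

# Lag matrix [e_{t-1}, ..., e_{t-n}] with target e_t
X = np.column_stack([e[n_lags - k - 1:len(e) - k - 1] for k in range(n_lags)])
target = e[n_lags:]

rf = RandomForestRegressor(n_estimators=1000, random_state=0).fit(X, target)

# One-step-ahead hybrid prediction: linear forecast plus predicted residual correction
linear_part = float(arima_fit.forecast(steps=1)[0])
nonlinear_part = float(rf.predict(e[-n_lags:][::-1].reshape(1, -1))[0])
y_next = linear_part + nonlinear_part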
We select the highest prediction accuracy in terms of the MSE metric derived from the predicted values and the out-of-sample data. For dataset I, the best prediction model is ARIMA(2,1,1)+ANN+H3 with the number of lags being 5 and 8 hidden nodes. For dataset II, the best prediction model is ARIMA(3,1,1)+"linear SVM"+H2 with the number of lags being 6. For dataset III, the best prediction model is ARIMA(3,0,1)+"radial SVM"+H3 with the number of lags being 8. For dataset IV, the best prediction model is ARIMA(0,1,2)+"radial SVM"+H1 with the number of lags being 4. For dataset V, the best prediction model is ARIMA+"radial SVM"+H3 with the number of lags being 7.
Table 4 summarizes the one-step ahead rolling prediction accuracy. Considering the MSE metric, we observe that the ARIMA model has the worst prediction accuracy for datasets I–IV, and the hybrid model outperforms the ARMA+GARCH model for every dataset; we also observe that the ARIMA model has the smallest MSE for dataset V. Considering the MAD metric, we observe that the hybrid model outperforms the other two models for datasets I, III, and IV, but the ARMA+GARCH model outperforms the other two models for dataset II; we also observe that the ARIMA model has the smallest MAD for dataset V. Considering metrics PMAD and MAPE, we observe that the hybrid model outperforms the other two models for datasets I, III, IV, and V, and the ARMA+GARCH model is slightly better than the hybrid model for dataset II; we also observe that all of the models have the worst prediction accuracy for datasets IV and V, which coincides with the conclusion drawn in [9], namely, that the PMADs of one-step ahead rolling prediction of the FARIMA+GARCH model are respectively 0.138, 0.121, 0.140, 0.339, and 0.378 for the five datasets. By comparing Tables 3 and 4, we draw:
Table 4 Prediction accuracy of the selected model with respect to each dataset
The BRNN-LSTM framework achieves a higher prediction accuracy than the FARIMA+GARCH model proposed in [9] and the ARIMA, ARMA+GARCH, and hybrid models considered above.
We proposed a BRNN-LSTM framework for predicting cyber attack rates. The framework can accommodate complex phenomena exhibited by the datasets, including long-range dependence and high nonlinearity. Using five real-world datasets, we showed that the framework significantly outperforms the other prediction approaches in terms of prediction accuracy, which confirms that LSTM cells can indeed accommodate the long-memory behavior of cyber attack rates. Among these five datasets, we found that only dataset IV requires re-training the model in order to achieve a better prediction accuracy. We compared the prediction accuracy of BRNN-LSTM with that of the other prediction approaches, which use rolling predictions (i.e., re-building the prediction model after each new observation). We hope the present work will inspire more research in applying deep learning to prediction tasks in the cybersecurity domain.
https://www.symantec.com/security-center/threat-report
https://netdiligence.com/portfolio/cyber-claims-study/
http://dionaea.carnivore.it/
https://alliance.mwcollect.org/
http://amunhoney.sourceforge.net/
ARIMA: Autoregressive integrated moving average
BRNN: Bi-directional recurrent neural network
GARCH: Generalized autoregressive conditional heteroskedasticity
LSTM: Long short-term memory
RNN: Recurrent neural network
Z. Zhan, M. Xu, S. Xu, Characterizing honeypot-captured cyber attacks: Statistical framework and case study. IEEE Trans. Inf. Forensic Secur.8(11), 1775–1789 (2013).
E. Gandotra, D. Bansal, S. Sofat, Computational techniques for predicting cyber threats. Intell. Comput. Commun. Devices Proc ICCD 2014. 1:, 247 (2014).
S. Xu, in Proc. Symposium on the Science of Security (HotSoS'14). Cybersecurity dynamics (ACM, Raleigh, 2014), pp. 14–1142.
S. Xu, in Proactive and Dynamic Network Defense, ed. by Z. Lu, C. Wang. Cybersecurity dynamics: A foundation for the science of cybersecurity (Springer International Publishing, New York City, 2018).
L. D. Carli, R. Sommer, S. Jha, in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014. Beyond pattern matching: A concurrency model for stateful deep packet inspection (ACM, Scottsdale, 2014), pp. 1378–1390.
C. Ishida, Y. Arakawa, I. Sasase, K. Takemori, in Proceedings of PACRIM. 2005 IEEE Pacific Rim Conference on Communications, Computers and signal Processing, August 24-26. Forecast techniques for predicting increase or decrease of attacks using Bayesian inference (IEEE, Victoria, 2005), pp. 450–453.
D. H. Kim, T. Lee, S. -O. D. Jung, H. P. In, H. J. Lee, in Information Assurance and Security, 2007. IAS 2007. Third International Symposium On. Cyber threat trend analysis model using HMM (IEEE, Manchester, 2007), pp. 177–182.
Z. Yong, T. Xiaobin, X. Hongsheng, in Computational Intelligence and Security, 2007 International Conference On. A novel approach to network security situation awareness based on multi-perspective analysis (IEEE, Harbin, 2007), pp. 768–772.
Z. Zhan, M. Xu, S. Xu, Predicting cyber attack rates with extreme values. IEEE Trans. Inf. Forensic Secur.10(8), 1666–1677 (2015).
C. Peng, M. Xu, S. Xu, T. Hu, Modeling and predicting extreme cyber attack rates via marked point processes. J. Appl. Stat.44(14), 2534–2563 (2017).
M. Xu, L. Hua, S. Xu, A vine copula model for predicting the effectiveness of cyber defense early-warning. Technometrics. 59(4), 508–520 (2017).
C. Peng, M. Xu, S. Xu, T. Hu, Modeling multivariate cybersecurity risks. J. Appl. Stat.45(15), 2718–2740 (2018).
N. Sun, J. Zhang, P. Rimba, S. Gao, Y. Xiang, L. Y. Zhang, Data-driven cybersecurity incident prediction: A survey. IEEE Commun. Surv. Tutor., 1–1 (2018). https://doi.org/10.1109/COMST.2018.2885561.
M. Husák, J. Komárková, E. Bou-Harb, P. Čeleda, Survey of attack projection, prediction, and forecasting in cyber security. IEEE Commun. Surv. Tutor.21(1), 640–660 (2019).
D. E. Denning, An intrusion-detection model. IEEE Trans. Softw. Eng.SE-13(2), 222–232 (1987).
M. Markou, S. Singh, Novelty detection: a review part 1: statistical approaches. Sig. Process. 83(12), 2481–2497 (2003).
V. Chandola, A. Banerjee, V. Kumar, Anomaly detection: a survey. ACM Comput. Surv. (CSUR). 41(3), 15 (2009).
J. Neil, C. Hash, A. Brugh, M. Fisk, C. B. Storlie, Scan statistics for the online detection of locally anomalous subgraphs. Technometrics. 55(4), 403–414 (2013).
L. Deng, D. Yu, et al., Deep learning: methods and applications. Found. Trends® Sig. Process. 7(3–4), 197–387 (2014).
M. Längkvist, L. Karlsson, A. Loutfi, A review of unsupervised feature learning and deep learning for time-series modeling. Pattern Recogn. Lett.42:, 11–24 (2014).
R. C. Cavalcante, R. C. Brasileiro, V. L. Souza, J. P. Nobrega, A. L. Oliveira, Computational intelligence and financial markets: A survey and future directions. Expert Syst. Appl.55:, 194–211 (2016).
D. Li, Q. Li, Y. Ye, S. Xu, Enhancing robustness of deep neural networks against adversarial malware samples: Principles, framework, and aics'2019 challenge. CoRR. abs/1812.08108: (2018). http://arxiv.org/abs/1812.08108.
D. Li, R. Baral, T. Li, H. Wang, Q. Li, S. Xu, Hashtran-dnn: a framework for enhancing robustness of deep neural networks against adversarial malware samples. CoRR. abs/1809.06498: (2018). http://arxiv.org/abs/1809.06498.
Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, Y. Zhong, in 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018. Vuldeepecker: A deep learning-based system for vulnerability detection (Internet Society, San Diego, 2018).
Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, Z. Chen, S. Wang, J. Wang, Sysevr: A framework for using deep learning to detect software vulnerabilities. CoRR. abs/1807.06756: (2018). http://arxiv.org/abs/1807.06756.
G. Grieco, G. L. Grinblat, L. Uzal, S. Rawat, J. Feist, L. Mounier, in Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. CODASPY '16. Toward large-scale vulnerability discovery using machine learning (ACM, New York, 2016), pp. 85–96.
Z. Li, D. Zou, S. Xu, H. Jin, H. Qi, J. Hu, in Proceedings of the 32nd Annual Conference on Computer Security Applications, ACSAC 2016, Los Angeles, CA, USA, December 5-9, 2016. Vulpecker: an automated vulnerability detection system based on code similarity analysis (ACM, Los Angeles, 2016), pp. 201–213.
Y. Chen, M. Khandaker, Z. Wang, in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. ASIA CCS '17. Pinpointing vulnerabilities (ACM, New York, 2017), pp. 334–345.
J. D. Cryer, K. -S. Chan, Time Series Analysis With Applications in R (Springer, New York, 2008).
P. J. Brockwell, R. A. Davis, Introduction to Time Series and Forecasting (Springer, Switzerland, 2016).
J. Ke, H. Zheng, H. Yang, X. M. Chen, Short-term forecasting of passenger demand under on-demand ride services: A spatio-temporal deep learning approach. Transp. Res. C Emerg. Technol.85:, 591–608 (2017).
M. Barabas, G. Boanea, A. B. Rus, V. Dobrota, J. Domingo-Pascual, in Intelligent Computer Communication and Processing (ICCP), 2011 IEEE International Conference On. Evaluation of network traffic prediction based on neural networks with multi-task learning and multiresolution decomposition (IEEE, Cluj-Napoca, 2011), pp. 95–102.
A. Azzouni, G. Pujolle, A Long Short-Term Memory Recurrent Neural Network Framework for Network Traffic Matrix Prediction. CoRR. abs/1705.05690: (2017). http://arxiv.org/abs/1705.05690.
S. Siami-Namini, A. S. Namin, Forecasting Economics and Financial Time Series: ARIMA vs. LSTM. CoRR. abs/1803.06386: (2018). http://arxiv.org/abs/1803.06386.
C. -M. Kuan, T. Liu, Forecasting exchange rates using feedforward and recurrent neural networks. J. Appl. Econ.10(4), 347–364 (1995).
T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, S. Khudanpur, in Proceedings of the 11th Annual Conference of the International Speech Communication Association. Recurrent neural network based language model (International Speech Communication Association (ISCA), Makuhari, Chiba, 2010), pp. 1045–1048.
M. Sundermeyer, I. Oparin, J. L. Gauvain, B. Freiberg, R. Schlüter, H. Ney, in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Comparison of feedforward and recurrent neural network language models (IEEE, Vancouver, 2013), pp. 8430–8434.
Z. Huang, G. Zweig, B. Dumoulin, in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Cache based recurrent neural network language model inference for first pass speech recognition (IEEE, Florence, 2014), pp. 6354–6358.
X. Liu, Y. Wang, X. Chen, M. J. Gales, P. C. Woodland, in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference On. Efficient lattice rescoring using recurrent neural network language models (IEEE, Florence, 2014), pp. 4908–4912.
M. Schuster, K. K. Paliwal, Bidirectional recurrent neural networks. IEEE Trans. Sig. Process. 45(11), 2673–2681 (1997).
Y. Bengio, P. Simard, P. Frasconi, Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw.5(2), 157–166 (1994).
S. Hochreiter, J. Schmidhuber, Long short-term memory. Neural Comput.9(8), 1735–1780 (1997).
I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, MA, 2016).
D. P. Kingma, J. Ba, Adam: A method for stochastic optimization. CoRR. abs/1412.6980: (2014).
R. J. Hyndman, A. B. Koehler, Another look at measures of forecast accuracy. Int. J. Forecast.22(4), 679–688 (2006).
P. Baecher, M. Koetter, T. Holz, M. Dornseif, F. Freiling, in International Workshop on Recent Advances in Intrusion Detection. The nepenthes platform: An efficient approach to collect malware (Springer, Berlin, Heidelberg, 2006), pp. 165–184.
S. Almotairi, A. Clark, G. Mohay, J. Zimmermann, in 2008 IFIP International Conference on Network and Parallel Computing. Characterization of attackers' activities in honeypot traffic using principal component analysis (IEEE, Shanghai, 2008), pp. 147–154.
G. P. Zhang, Time series forecasting using a hybrid arima and neural network model. Neurocomputing. 50:, 159–175 (2003).
M. Kumar, M. Thenmozhi, Forecasting stock index returns using arima-svm, arima-ann, and arima-random forest hybrid models. Int. J. Bank. Account. Financ.5(3), 284–308 (2014).
J. Friedman, T. Hastie, R. Tibshirani, The Elements of Statistical Learning, vol. 1 (Springer, New York, 2001).
P. -F. Pai, C. -S. Lin, A hybrid arima and support vector machines model in stock price forecasting. Omega. 33(6), 497–505 (2005).
Y. Chen, B. Yang, J. Dong, A. Abraham, Time-series forecasting using flexible neural tree model. Inf. Sci.174(3-4), 219–235 (2005).
Data used in this work is not suitable for public use. The source code used in the present paper is available at https://github.com/xingfang912/time-series-analysis
School of Information Technology, Illinois State University, Normal, 61761, IL, USA
Xing Fang
Department of Mathematics, Illinois State University, Normal, 61761, IL, USA
Maochao Xu
Department of Computer Science, University of Texas at San Antonio, San Antonio, 78249, TX, USA
Shouhuai Xu
Department of Computer Science, Jiangsu Normal University, Xuzhou, 221110, China
Peng Zhao
XF constructed the deep learning framework and performed the deep learning experiments. MX and PZ performed the experiments on the statistical models. SX drafted the manuscript. All authors reviewed the draft. All authors read and approved the final manuscript.
Correspondence to Xing Fang.
Fang, X., Xu, M., Xu, S. et al. A deep learning framework for predicting cyber attacks rates. EURASIP J. on Info. Security 2019, 5 (2019). https://doi.org/10.1186/s13635-019-0090-6
GARCH
Hybrid models
BRNN-LSTM | CommonCrawl |
\begin{document}
\def\newop#1{\expandafter\def\csname #1\endcsname{\mathop{\rm #1}\nolimits}}
\def\spic{\psdots*[dotstyle=*](0,0) \rput(0.55,0){$\ldots$} \psdots*[dotstyle=o](0.25,0)(0.9,0) \psline(-0.1,-0.05)(-0.1,-0.1)(1.05,-0.1)(1.05,-0.05) \rput(0.5,-0.25){$s$}}
\newcommand{\val}[2]{#1\begin{pspicture}(12pt,9pt)\psline[unit=4pt,fillcolor=black](0,2)(1,0)(2,0)(3,2)\end{pspicture}#2}
\newcommand{\arc}[2]{#1\begin{pspicture}(12pt,9pt)\pscurve[unit=4pt,fillcolor=black](0.2,0)(1.5,1.5)(2.8,0)\end{pspicture}#2}
\newcommand{\fl}[1]{\left\lfloor #1\right\rfloor} \newcommand{\ceil}[1]{\left\lceil #1\right\rceil}
\newrgbcolor{purple}{0.7 0.2 0.7} \newrgbcolor{orange}{1.0 0.5 0.0} \newrgbcolor{mygreen}{0.1 0.5 0.2}
\makeatletter \def\imod#1{\allowbreak\mkern10mu({\operator@font mod}\,\,#1)} \makeatother
\pagenumbering{arabic} \pagestyle{headings}
\date{\today} \title{Circular Nim games} \maketitle
\begin{center} {\bf Matthieu Dufour}\\ {\it Dept. of Mathematics, Universit\'e du Qu\'ebec \`a Montr\'eal\\ Montr\'eal, Qu\'ebec H3C 3P8, Canada}\\ {\tt [email protected]}\\ \vskip 10pt {\bf Silvia Heubach}\\ {\it Dept. of Mathematics, California State University Los Angeles\\ Los Angeles, CA 90032, USA}\\ {\tt [email protected]}\\
\end{center}
\section*{Abstract} A circular Nim game is a two player impartial combinatorial game consisting of $n$ stacks of tokens placed in a circle. A move consists of choosing $k$ consecutive stacks, and taking at least one token from one or more of the $k$ stacks. The last player able to make a move wins. We prove results on the structure of the losing positions for small $n$ and $k$ and pose some open questions for further investigations.
\noindent{\bf Keywords}: Combinatorial games, Nim, winning strategy
\noindent{\bf 2010 Mathematics Subject Classification}: 91A46, 91A05 \thispagestyle{empty}
\section{Introduction}\label{Introduction}
We consider circular Nim games, one of the many variations of the game of Nim. The game of Nim consists of several stacks of tokens. Two players alternate taking one or more tokens from one of the stacks, and the player who cannot make a move loses. Nim is an example of an {\em impartial combinatorial game}, that is, all possible moves and positions in the game are known (there is no randomness), and both players have the same moves available from a given position (unlike in Chess). Nim plays a central role among impartial games as any such game is equivalent to a Nim heap (see for example~\cite[Corollary 7.8]{AlbNowWol2007}). Nim has been completely analyzed and a winning strategy consists of removing tokens from one stack such that the {\em digital sum} of the heights of all stacks becomes zero. The digital sum of two or more integers in base $10$ is computed by first converting the integers into base $2$, then adding the base $2$ values without carry over, and then translating back into base $10$. We denote the digital addition operator by $\oplus$. For example, $3 \oplus 6 \oplus 14 = 11$. Note that the digital sum $a \oplus a = 0$ for all values of $a$.
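In machine arithmetic the digital sum is simply the bitwise exclusive or of the stack heights; the following small Python check (added here purely as an illustration) verifies the example above.
\begin{verbatim}
>>> 3 ^ 6 ^ 14                        # digital sum of 3, 6, and 14
11
>>> any(a ^ a for a in range(100))    # a (+) a = 0 for every a
False
\end{verbatim}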
The variation of Nim that we will consider is to arrange the $n$ stacks of tokens of a Nim game in a circle. In addition, we allow the players to take at least one token from one or more of $k$ {\underline {consecutive}} stacks (as order now matters, unlike in the game of Nim). More specifically, if $p_j$ is the number of tokens in stack $j$, and $a_j$ is the number of tokens the player selects from stack $j$, then a {\em legal move} consists of picking stacks $i, i+1,\ldots,i+k-1$ (modulo $n$) for some $i=1,\ldots, n$, and then selecting $0 \le a_j \le p_j$ tokens from stack $j=i, i+1,\ldots,i+k-1$ with $\sum_{j=i}^{i+k-1}a_j\ge 1$. We denote this game by ${\rm{CN}}(n,k)$. A {\em position} in a circular Nim game can be represented by a vector ${\mathbf p}=(p_1,p_2,\ldots,p_n)$ of non-negative entries indicating the heights of the stacks in order around the circle or any of its {\em symmetries}, namely the set of vectors $$\{(p_{\ell},p_{\ell+1},\ldots,p_n,p_1,\ldots,p_{\ell-1})\mid 1\le \ell\le n\}\cup \{(p_{\ell-1},p_{\ell-2},\ldots,p_1,p_n,\ldots,p_{\ell})\mid 1\le \ell\le n\},$$ where the indices are modulo $n$. The {\em final position} of ${\rm{CN}}(n,k)$ is given by $(0,0,\ldots,0)$. Figure~\ref{8-3} visualizes a position in a ${\rm{CN}}(8,3)$ game together with a possible choice of three stacks to play on.
\begin{figure}\label{8-3}
\end{figure}
We usually denote the current position in a game by ${\mathbf p}=(p_1,p_2,\ldots,p_n)$, and any position that can be reached by a legal move from ${\mathbf p}$ by ${\mathbf p}'=(p'_1,p_2',\ldots,p'_n)$. Such a position is called an {\em option} of ${\mathbf p}$, and we use the notation ${\mathbf p} \rightarrow {\mathbf p}'$. We will also find it convenient in the proofs to use lowercase letters for the stack sizes to avoid the need for subscripts. In the same spirit of easy readability, we will refer to a stack by the number of its tokens, for example as ``stack $a$'' or ``the $a$ stack'' instead of ``the third stack.'' If we need to make reference to a specific stack, we envision the first stack to be the one positioned at $12$ o'clock, and assume that the stacks are labeled in clockwise order. In Figure~\ref{8-3}, the third stack is a $5$ stack. In addition, we refer to the minimal value of ${\mathbf p}$ as $\min({\mathbf p})$, and the maximal value as $\max({\mathbf p})$, and to the vector $(1,1,\ldots,1)$ as ${\mathbf 1}$.
Usually, combinatorial games are studied from the standpoint of which player will win when playing from a given position. In this scenario, a position is either of type $\mathcal{N}$ or $\mathcal{P}$, where $\mathcal{N}$ indicates that the {\bf N}ext player to play from the current position has a winning strategy. The label $\mathcal{P}$ refers to the fact that the {\bf P}revious player, the one who made the move to the current position, is the one to win (which means the player to play from the current position will lose no matter how s/he plays). We will take a slightly different (but equivalent) viewpoint, namely characterizing the position as either a winning or losing position for the player who goes next. Therefore, an $\mathcal{N}$ position is a winning position (as the next player wins), while a $\mathcal{P}$ position is a losing position. We will denote the set of winning and losing positions, respectively, as $\mathcal{W}$ and $\mathcal{L}$\footnote{In partizan games, $\mathcal{L}$ refers to the Left player.}, and characterize the set $\mathcal{L}$. For impartial games, the situation is remarkably simple.
\begin{theorem} (see for example \cite[Theorem 2.11]{AlbNowWol2007}) If $G$ is an impartial finite game, then for any position ${\mathbf p}$ of $G$, ${\mathbf p} \in \mathcal{L}$ or ${\mathbf p} \in \mathcal{W}$. \end{theorem}
With this result, determining either the set of winning or losing positions completely answers the question of whether the first or the second player has a winning strategy. If we are discussing several games at the same time, then we will indicate the relevant game as a subscript for the set of losing positions, for example $\mathcal{L}_G$. Another well-known theorem will be crucial for the determination of the set of losing positions.
\begin{theorem} \label{lose}(see for example~\cite[Theorem 2.12]{AlbNowWol2007}) Suppose the positions of a finite impartial game can be partitioned into mutually exclusive sets $A$ and $B$ with the properties: \begin{itemize} \item[ {\rm (I)}] every option of a position in $A$ is in $B$; \item[{\rm (II)}] every position in $B$ has at least one option in $A$; and \item[ {\rm (III)}] the final positions are in $A$. \end{itemize} Then $A=\mathcal{L}$ and $B=\mathcal{W}$.
\end{theorem}
Theorem~\ref{lose} tells us how to determine the set of losing positions. First we need to obtain a candidate set $S$ for the set of losing positions $\mathcal{L}$. Such a set $S$ may suggest itself when we examine patterns in the output of a computer program that determines the losing positions by recursively computing the Grundy function for each position. Once we have a candidate set, then we need to show that any move from a position ${\mathbf p} \in S$ leads to a position ${\mathbf p}' \notin S$ (condition {\rm (I)}), and that for every position ${\mathbf p} \notin S$, there is a move that leads to a position ${\mathbf p}' \in S$ (condition {\rm (II)}). Since $(0,0,\ldots,0)$ is the only final position, it is easy to see that condition {\rm (III)} is satisfied in all the proofs we give. Thus, showing that conditions {\rm (I)} and {\rm (II)} are satisfied yields $S = \mathcal{L}$. Generally, it is relatively easy to show condition {\rm (I)}, while it may be quite difficult to show condition {\rm (II)}.
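For readers who wish to experiment, the following short Python program (our illustration of such a computation, not the authors' code) determines the losing positions of ${\rm{CN}}(n,k)$ by brute force for stacks of bounded height, using the fact that a position is losing exactly when all of its options are winning.
\begin{verbatim}
from functools import lru_cache
from itertools import product

def options(p, k):
    # All positions reachable from p in CN(n,k): play on k consecutive
    # stacks (cyclically) and remove at least one token in total.
    n, opts = len(p), set()
    for i in range(n):
        idx = [(i + j) % n for j in range(k)]
        for new_vals in product(*[range(p[j] + 1) for j in idx]):
            if sum(new_vals) < sum(p[j] for j in idx):
                q = list(p)
                for j, v in zip(idx, new_vals):
                    q[j] = v
                opts.add(tuple(q))
    return opts

@lru_cache(maxsize=None)
def is_losing(p, k):
    return all(not is_losing(q, k) for q in options(p, k))

# Losing positions of CN(4,2) with stack heights at most 3:
print([p for p in product(range(4), repeat=4) if is_losing(p, 2)])
\end{verbatim}
For ${\rm{CN}}(4,2)$ and stack heights at most $3$, the output consists exactly of the positions of the form $(a,b,a,b)$, in agreement with Theorem~\ref{L4-2} below.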
\section{The easy cases}\label{S:easy}
We first state a few easy general results. \begin{theorem}\label{T:easy}
\begin{itemize} \item[{\rm(1)}] The game ${\rm{CN}}(n,1)$ reduces to Nim, for which the set of losing positions is given by $\mathcal{L}=\{(p_1,p_2,\ldots,p_n)\mid p_1\oplus p_2 \oplus \cdots \oplus p_n=0\}$. \item[{\rm(2)}] The game ${\rm{CN}}(n,n)$ has a single losing position, namely $\mathcal{L} = \{(0,0,\ldots,0)\}$. \item[{\rm(3)}] The game ${\rm{CN}}(n,n-1)$ has losing positions $\mathcal{L} = \{(a,a,\ldots,a)\mid a \ge 0\}$. \end{itemize} \end{theorem}
\begin{proof} {\rm(1)} This result can be found for example in the original analysis of Nim by Bouton~\cite{Bou1901}, in~\cite[Theorem 7.12]{AlbNowWol2007}, or the bible for combinatorial games~\cite{BerConGuy1}.\\ {\rm(2)} In this game, the player playing from a position ${\mathbf p} \ne (0,0,\ldots,0)$ can always take all tokens from all stacks.\\ {\rm(3)} Let $S= \{(a,a,\ldots,a)\mid a \ge 0\}$. Starting from a position ${\mathbf p}=(a,a,\ldots,a) \in S$, at least one token has to be removed, so w.l.o.g., removal occurs at stack $1$, and the play is on stacks $1, \ldots, n-1$. Thus, if the position after the play is ${\mathbf p}'=(p_1',p_2',\ldots, p_n')$, we have that $p_1'<a=p_n'$, and therefore, ${\mathbf p}' \notin S$, satisfying condition {\rm (I)}. On the other hand, from any position ${\mathbf p} \notin S$, we can reach a position in $S$ by finding the stack with the least number of tokens, and reducing the number of tokens in the $n-1$ other stacks to that minimal number of tokens, resulting in a position ${\mathbf p}'$ where all stacks have the same height, that is, ${\mathbf p}'\in S$. Thus, $S$ satisfies condition {\rm (II)}, which completes the proof. \end{proof}
Note that Theorem~\ref{T:easy} completely covers the games ${\rm{CN}}(n,k)$ for $n = 1, 2, 3$. For $n = 4$, the only game not covered is ${\rm{CN}}(4,2)$.
\begin{theorem}\label{L4-2} For the game ${\rm{CN}}(4,2)$, the set of losing positions is $\mathcal{L}=\{(a,b,a,b)\mid a, b \ge 0\}$. \end{theorem}
\begin{proof} Again we follow the directions of Theorem~\ref{lose} to determine the set $\mathcal{L}$. Let $S=\{(a,b,a,b)\mid a, b \ge 0\}$ and imagine the four stacks to be located at the corners of a square. For any position ${\mathbf p}=(p_1,p_2,p_3,p_4)=(a,b,a,b) \in S$, diagonally opposite stacks of the square have the same number of tokens. Any play on either one or two adjacent stacks affects at most one stack of each diagonally opposite pair. Assuming w.l.o.g. that the play is on stacks $1$ and $2$, we have that $p'_1<p_1=p_3=p'_3$, and $p'_2\le p_2=p_4=p'_4$. Thus, ${\mathbf p}' \notin S$, and condition {\rm (I)} holds. On the other hand, starting from any position ${\mathbf p} \notin S$, we determine the minimal value of each diagonal pair of stacks and reduce the stack with the larger number of tokens to the smaller value. This is always possible as any one stack is adjacent to both stacks of the other diagonal pair, so condition {\rm (II)} is satisfied. For example, for ${\mathbf p}=(3, 5, 4, 2) \notin \mathcal{L}$, reduce the second stack by three tokens and the third stack by one token to arrive at position ${\mathbf p}'=(3, 2, 3, 2)\in \mathcal{L}$. \end{proof}
\section{Harder results }\label{main}
For $n=5$, the cases not covered by Theorem~\ref{T:easy} are ${\rm{CN}}(5,2)$ and ${\rm{CN}}(5,3)$. The result for ${\rm{CN}}(5,2)$ was obtained by Dufour in his thesis~\cite{Duf}, and independently, by Ehrenborg and Steingr{\'{\i}}msson~\cite{EhrSte1996} as a special case of Nim played on a simplicial complex. {The results by Ehrenborg and Steingr{\'{\i}}msson depend on the ability to explicitly obtain the circuits (see Definition~\ref{simp comp}) of the {\em cycle complex $C_{n,k}$}, which is possible only for small values of $n$ and $k$.} We will give elementary proofs of these results that do not rely on the framework of simplicial complexes.
\begin{theorem} \label{L5} (see~\cite[Propositions 8.3 and 8.4]{EhrSte1996} and~\cite[Theorem 6.2.1]{Duf}) \begin{itemize} \item[{\rm(1)}] The game ${\rm{CN}}(5,2)$ has losing positions $\mathcal{L}=\{(a^*,b,c,d,b)\mid a^*+b=c+d \mbox{ and } a^*=\max({\mathbf p})\}$. \item[{\rm(2)}] The game ${\rm{CN}}(5,3)$ has losing positions $\mathcal{L}=\{(0,b,c,d,b)\mid b=c+d\}$. \end{itemize} \end{theorem}
Note that the conditions for ${\rm{CN}}(5,2)$ force $b$ to be the minimal value, while the conditions for ${\rm{CN}}(5,3)$ force $b$ to be maximal. Figure~\ref{n5} gives a visualization of the two sets of losing positions.
\begin{figure}
\caption{Losing positions for $n=5$.}
\label{fig:(5,2)}
\label{fig:(5,3)}
\label{n5}
\end{figure}
\begin{proof} {\rm(1)} Let ${\mathbf p}=(a^*,b,c,d,b)$ where $a^*+b=c+d$. Play cannot be on a single stack, as it would destroy either the equality of the $b$ stacks, or the condition on the equality of the sums. Play on two stacks cannot include any of the $b$ stacks (as they cannot both be played), so the only choice is to play on $c$ and $d$, which results in $c'+d' < c+d=a^*+b$, violating the equality of sums. Thus, any move from ${\mathbf p}\in S$ will lead to a position ${\mathbf p}' \notin S$, and therefore, (I) holds.
To show that we can move from any position ${\mathbf p} \notin S$ to a position ${\mathbf p}' \in S$, first note that ${\mathbf p} \in S\Leftrightarrow {\mathbf p}+m\cdot {\mathbf 1} \in S$ since the equality of the two sums and the equality of the $b$ stacks are not affected when a fixed amount is added or subtracted from each stack. Thus we may assume that $\min({\mathbf p})=0$. We consider two cases: \begin{itemize} \item[{\rm(i)}] maximal and minimal value are adjacent; w.l.o.g., ${\mathbf p}=(0,w,x,y,z)$ and $w\ge x, y, z$. If $w \ge z+y$, then ${\mathbf p} \rightarrow (0,z+y,0,y,z) \in S$ is a legal move. For $w<z+y$, ${\mathbf p} \rightarrow (0,w,0,w-z,z) \in S$ is a legal move. For example, $(0,6,4,3,2) \rightarrow (0,5,0,3,2)$ and $(0,6,4,3,5) \rightarrow (0,6,0,1,5)$; \item[{\rm(ii)}] maximal and minimal values are separated by one stack; w.l.o.g., ${\mathbf p}=(0,x+y,w,z,y)$, and $\max({\mathbf p})\in\{w,z\}$. If $z \ge x$, then ${\mathbf p}\rightarrow (0,x+y,0,x,y) \in S$ is a legal move. Otherwise ${\mathbf p}\rightarrow (0,z+y,0,z,y) \in S$ is a legal move. For example, $(0,5,6,3,4)\rightarrow(0,5,0,1,4)$, and $(0,5,6,1,3)\rightarrow(0,4,0,1,3)$. \end{itemize} This completes the proof that $S=\mathcal{L}$ for ${\rm{CN}}(5,2)$.\\ {\rm(2)} Now we look at the case ${\rm{CN}}(5,3)$ and rewrite the structure of the losing positions, letting $S=\{(0,a+b,a,b,a+b)\}$. Now we are allowed to play on three stacks. If play involves either the $a$ or $b$ stack, then both $a+b$ stacks have to change, which would mean play on four stacks, which is not allowed. If the play is on the other three stacks, then we have to reduce both $a+b$ stacks by the same amount, but their heights are then no longer equal to the sum of the heights of the $a$ and $b$ stacks, so condition (I) holds. To show the validity of condition (II), we let $\min({\mathbf p})=m$ and $\max({\mathbf p})=M$, and again consider the two cases where $\min({\mathbf p})$ and $\max({\mathbf p})$ are either adjacent or one stack apart. \begin{itemize} \item[{\rm(i)}] max(${\mathbf p}$) and min(${\mathbf p}$) are adjacent, w.l.o.g., ${\mathbf p}=(m,M,x,y,z)$. We display the different cases and examples of moves in a table, with stacks that remain fixed underlined: \begin{center}
\begin{tabular}{c|c|c}\hline Case & ${\mathbf p}'$ & Example\\ \hline $y-z \ge m$ & $(\underline{m},m+z,0,m+z,\underline{z})$ & $(3,9,5,7,4) \rightarrow (3,7,0,7,4)$ \\ \hline $0 \le y-z<m$ & $(y-z,y,0,\underline{y},\underline{z})$ & $(3,9,5,6,4) \rightarrow (2,6,0,6,4)$ \\ \hline $y-z <0 \wedge x > z-y$ &$(0,z,z-y,\underline{y},\underline{z})$ & $(3,6,4,3,5) \rightarrow (0,5,2,3,5)$ \\ \hline $y-z <0 \wedge x \le z-y$ &$(0,x+y,\underline{x},\underline{y},x+y)$ & $(3,6,1,3,5) \rightarrow (0,4,1,3,4)$ \\ \hline \end{tabular} \end{center}
\item[{\rm(ii)}] max(${\mathbf p}$) and min(${\mathbf p}$) are separated by one stack; w.l.o.g., ${\mathbf p}=(m,x,M,y,z)$. If $x \ge z-m$, then ${\mathbf p}\rightarrow (m,z-m,z,0,z) \in S$ is a legal move. Otherwise ${\mathbf p}\rightarrow (m,x,x+m,0,x+m) \in S$ is a legal move. For example, $(2,5,8,7,3)\rightarrow(2,1,3,0,3)$ and $(2,3,8,7,3)\rightarrow(2,3,5,0,5)$. \end{itemize} This completes the proof that $S=\mathcal{L}$ for ${\rm{CN}}(5,3)$. \end{proof}
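As a sanity check, the brute-force search sketched in the introduction can be used to confirm the characterization of $\mathcal{L}$ for ${\rm{CN}}(5,2)$ on stacks of small height (again only an illustration; it reuses the functions {\tt is\_losing} and {\tt product} from the earlier snippet).
\begin{verbatim}
def in_L52(p):
    # Membership in the claimed losing set of CN(5,2), up to rotation and reflection.
    rots = [tuple(p[(i + j) % 5] for j in range(5)) for i in range(5)]
    syms = rots + [tuple(reversed(r)) for r in rots]
    return any(q[0] == max(q) and q[1] == q[4] and q[0] + q[1] == q[2] + q[3]
               for q in syms)

assert all(is_losing(p, 2) == in_L52(p) for p in product(range(4), repeat=5))
\end{verbatim}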
Now we turn to results for $n=6$. Figure~\ref{n6-3} visualizes the set of losing configurations.
\begin{theorem} \label{L6-3} For the game ${\rm{CN}}(6,3)$, the set of losing positions is given by $\mathcal{L}=\{(a,b,c,d,e,f)\mid a+b = d+e \text{ and } b+c = e+f\}$.\footnote{This result has also recently been discovered independently and appeared in {\cite[Example 23]{Hor2010}}. Once more, we provide an elementary proof that does not rely on the framework of simplicial complexes.} \end{theorem}
\begin{figure}\label{n6-3}
\end{figure}
\begin{remark} Note that for positions in the losing set given in Theorem~\ref{L6-3}, two pairs of opposite stacks have equal sums. However, having two sets of opposite pairs with the same sum also forces the third set of opposite pairs to have equal sums. Thus when proving results about the losing set, we are done as soon as we have shown that any two sets of opposite pairs have the same sum. This will come in handy in the proof that follows. Alternatively, the symmetries indicate that it does not matter which two sets of opposite pairs have the same sum. \end{remark}
\begin{proof} Let $S=\oplus{(a,b,c,d,e,f)\mid a+b = d+e \text{ and } b+c = e+f\oplus}$. Suppose that ${\mathbf p} \in S$, and w.o.l.g, the move is made on the three consecutive stacks $a$, $b$ and $c$, so ${\mathbf p}=(a,b,c,d,e,f)\rightarrow {\mathbf p}'=(a', b', c', d,e, f)$. At least one token is removed, so w.l.o.g. suppose $a'<a$. Then $a'+b'<a+b =d+e$, so ${\mathbf p}'\notin S$ and (I) holds. To show condition (II), assume that ${\mathbf p} \notin S$ and observe that if there is a legal move ${\mathbf p} \rightarrow {\mathbf p}'$, then there is a legal move from ${\mathbf p}+\ell\cdot {\mathbf 1} \rightarrow {\mathbf p}'+\ell\cdot {\mathbf 1}$, for any positive integer value of $\ell$. Therefore, we can assume w.l.o.g. that $f=0$. Also, due to the circular symmetries, one can assume that $a+b \ge d+e$ (*). Three cases need to be considered: \begin{itemize} \item[\rm{(i)}] $b > e$: Play is on stacks $b$ and $c$ and on either stack $a$ or $d$, depending on which value is bigger; ${\mathbf p} \rightarrow (\min(a,d),e,0,\min(a,d),e,0) \in S$ is a legal move. For example, $(5, 10, 8, 6, 9, 0) \rightarrow (5, 9, 0, 5, 9, 0)$;
\item[\rm{(ii)}] $b \le e \wedge c \ge e-b$: We play on stacks $a$, $b$, and $c$. Condition (*) guarantees that $a\ge d+e-b$, and thus ${\mathbf p} \rightarrow (d+e-b,b,e-b,d,e,0)\in S$ is a legal move. For example, $(10, 8, 8, 4, 9, 0) \rightarrow (5, 8, 1, 4, 9, 0)$. (Note that if $a+b=d+e$ and $c=e-b$, then ${\mathbf p} \in S$, a contradiction.)
\item[\rm{(iii)}] $b \le e \wedge c < e-b$:
In this case, we play on stacks $e$, $f$, and $a$. Since $a \ge d + e - b > d + c$, ${\mathbf p} \rightarrow (c+d,b,c,d,b+c,0)\in S$ is a legal move. For example, $(10, 8, 5, 2, 14, 0) \rightarrow(7, 8, 5, 2, 13, 0)$.
\end{itemize} In all cases, we can move from any ${\mathbf p} \notin S$ to ${\mathbf p}' \in S$, thus condition (II) holds and therefore $S=\mathcal{L}$. \end{proof}
We will discuss in Section~\ref{GenRes} why the proof given in \cite{Hor2010} does not extend to other cases.
\begin{remark} The proof of Theorem~\ref{L6-3} illustrates just one way of making a move from a position not in $\mathcal{L}$ to a position in $\mathcal{L}$. In general, this move is not unique. For example, for ${\mathbf p}=(a, b, c ,d, e, f) \notin \mathcal{L}$, ${\mathbf p}'=(a'+ \ell, b' - \ell, c' + \ell, d, e, f)\in \mathcal{L}$ for all the values of $\ell$ that preserve the legality of the move, that is, $a' + \ell \le a$, $ c' + \ell \le c$, and $b' - \ell\ge 0$. As an illustration, from the position ${\mathbf p}=(10, 9, 5, 8, 4, 3)$, one can move to the positions ${\mathbf p}'= (5 + \ell, 7 - \ell, 0 + \ell, 8, 4, 3)\in \mathcal{L}$ for $\ell = 0 , 1, 2, \ldots, 5$. \end{remark}
We next present the result for the game ${\rm{CN}}(6,4)$. Figure~\ref{n6-4} visualizes the set of losing positions, which are very similar to those in the game ${\rm{CN}}(6,3)$, with additional properties involving a digital sum.
\begin{theorem} \label{L6-4} For the game ${\rm{CN}}(6,4)$, the set of losing positions is given by $$\mathcal{L}=\{(a,b,c,d,e,f)\mid a+b = d+e, b+c = e+f, a\oplus c\oplus e=0, \mbox{ and } a=\min({\mathbf p})\}.$$\end{theorem}
\begin{figure}\label{n6-4}
\end{figure}
As before, the third set of opposite pairs also has to have equal sums. In addition, a losing position in which the minimum occurs simultaneously in each of the triples $(a, c, e)$ and $(b,d,f)$ reduces to a special case.
\begin{lemma}\label{two minima} If the position ${\mathbf p}=(a,b,c,d,e,f) \in \mathcal{L}_{{\rm{CN}}(6,4)}$ has its minimal value in each of the two triples $(a, c, e)$ and $(b,d,f)$, then ${\mathbf p}=(a,b,c,a,b,c)$. \end{lemma}
\begin{proof} There are two cases to be considered: the minima are adjacent, or they are not adjacent. Assume w.l.o.g. that the two adjacent minima occur at $a$ and $b$. Since $a=b$, we have $d=e=a$ (because of the minimality of $a$ and $b$), and consequently, due to the equality of the paired sums, $c=f$. For the second case, assume the minima occur at $a$ and $d$. Then $e=b$ and $f=c$ because of the equality of the paired sums. \end{proof}
In addition, we make use of a well-known result about digital sums.
\begin{lemma}\label{digsumzero} For any set of positive integers $x_1$, $x_2, \ldots, x_n$ whose digital sum is not equal to zero, there exists an index $i$ and a value $x_i'$ such that $0 \le x_i' < x_i$ and $x_1\oplus+\cdots \oplus+x_{i-1}\oplus+x_i'\oplus+x_{i+1}\oplus+\cdots\oplus+x_n=0.$ \end{lemma}
We are ready to prove Theorem~\ref{L6-4}.
\begin{proof} Let $S=\{(a,b,c,d,e,f)\mid a+b = d+e, b+c = e+f, a\oplus c\oplus e=0 \}$ and let ${\mathbf p} \in S$. Note that we have not yet indicated where the minimum occurs, but we assume that it occurs at either $a$, $c$ or $e$. If play is on one, two, or three consecutive stacks, then any move from ${\mathbf p} \in S$ is to ${\mathbf p}' \notin S$ as in the game ${\rm{CN}}(6,3)$. Therefore, play has to occur on four consecutive stacks, w.l.o.g., on stacks $a$ through $d$. We now attempt to make a move to another position in $S$. Since stacks $e$ and $f$ do not change, we cannot have a reduction in stacks $b$ and $c$ as the sums have to remain equal. Therefore, play is only on stacks $a$ and $d$, and these two stacks have to be reduced by the same amount, that is ${\mathbf p}'=(a-x,b,c,d-x,e,f)$ for some $x>0$. Let us refer to a triple whose stack heights have digital sum zero as a {\em digital triangle}. Since the digital triangle of ${\mathbf p}$ is $(a,c,e)$ and only stack $a$ is changed, the triangle $(a-x, c, e)$ is no longer digital, so $(b, d-x,f)$ has to be the digital triangle of ${\mathbf p}'$. If the minimal value in the digital triangle of ${\mathbf p}$ is $a$, then $a-x$ is the only minimum in ${\mathbf p}'$ and it is not part of the digital triangle, so ${\mathbf p}'\notin S$. On the other hand, if the minimum of ${\mathbf p}$ occurs at either $c$ or $e$, then for ${\mathbf p}'$ to be in $S$, the minimum for ${\mathbf p}'$ has to occur in the digital triangle $(b, d-x, f)$. Since only the value of $d$ has changed in those stacks, then $d-x$ has to be the minimal value of ${\mathbf p}'$. We need to consider two subcases: the minimum occurs in both triangles of ${\mathbf p}'$, or the minimum of ${\mathbf p}'$ is unique. In the first subcase, Lemma~\ref{two minima} tells us that ${\mathbf p}'$ is of the form $(a,b,c,a,b,c)$; therefore, ${\mathbf p}'=(a-x,b,c,a-x,b,c)\notin S$ as ${\mathbf p}'$ does not have a digital triangle. In the second case, we may assume w.l.o.g. that $\min({\mathbf p})=e$, and therefore, $e < b$. Since the minimum of ${\mathbf p}'$ is unique, $d-x<a-x$, which implies that $d < a$. Combining the inequalities leads to $d+e<a+b$, so ${\mathbf p}' \notin S$. As there is no legal move from a position in $S$ to another position in $S$, condition (I) is satisfied.
Now we turn to the harder part, namely showing that from any position ${\mathbf p} \notin S$, we can make a legal move to a position ${\mathbf p}' \in S$. Note that the condition to have equal sums for diagonally opposite pairs of stacks is equivalent to the condition $a-d=e-b=c-f$, that is, the differences between diagonally opposite stacks is the same for all such pairs. To create a position ${\mathbf p}' \in S$ from a position ${\mathbf p} \notin S$ we proceed in two steps - first we create the digital triangle, and then we adjust the pairwise differences of diagonally opposite pairs. To better visualize the relative sizes of stacks, we will label pairwise diagonally opposite values with the same letter, using lowercase for the smaller of the two and uppercase for the larger one. There are two different cases: \begin{enumerate} \item[1.] the pairwise minima are alternating with pairwise maxima (and thus form a triangle); or \item[2.] the pairwise minima are all consecutive. \end{enumerate} To show that there is no third case, consider what happens when two of the pairwise minima are next to each other. The values of the third pair are adjacent to the two pairwise minima, and one of the two values has to be the pairwise minimum, making all the pairwise minima adjacent to each other.
We now look at the two cases separately. Even though they have much in common, to combine them would create cumbersome notation.\\ Case 1: Let ${\mathbf p}=(A,b,C,a,B,c)$. By Lemma~\ref{digsumzero}, we can adjust one of the three pairwise minima to create a digital sum of zero (if not already digital). Assume that the value adjusted is $a$, and the new value is $\tilde{a} \le a \le A$. Compute the minimal pair difference $m =\min(A-\tilde{a}, B-b,C-c)$. In order to make all the pairwise differences equal to $m$, we need only adjust two of the pairwise maxima. If $m=A-\tilde{a}$, we adjust $B$ and $C$, which are adjacent to $\tilde{a}$, and ${\mathbf p} \rightarrow (A,b,c+m,\tilde{a},b+m,c)$. If $m=B-b$, then we need to adjust $A$ and $C$, and the two consecutive stacks $B$ and $c$ are not changed; in this case ${\mathbf p} \rightarrow (\tilde{a}+m,b, c+m,\tilde{a}, B, c)$. The case $m=C-c$ follows by symmetry.\\ Case 2: Let ${\mathbf p}=(A,B,C,a,b,c)$. Again using Lemma~\ref{digsumzero}, we identify the value that needs to be adjusted to create a digital triangle. If $a$ is the value to be reduced, then we reduce both $A$ and $a$ to $\tilde{a}$, and reduce the other two pairwise maxima to their respective minima, that is ${\mathbf p} \rightarrow (\tilde{a},b,c,\tilde{a},b,c)$. (The case where $c$ needs to be reduced follows by symmetry.) If $b$ is the value that needs to be reduced to create a zero digital sum, then we reduce $B$ to $\tilde{b} \le b \le B$, and are basically in the situation of Case 1. The two possible legal moves are ${\mathbf p} \rightarrow (a+m, \tilde{b}, c+m, a,b,c)$ with $m=b-\tilde{b}$ or ${\mathbf p} \rightarrow (a+m, \tilde{b}, C, a,\tilde{b}+m,c)$ with $m=C-c$. \end{proof}
The last case for $n=6$ is ${\rm{CN}}(6,2)$, which remains an open question. So far, we have not been able to find a conjectured structure for $\mathcal{L}_{{\rm{CN}}(6,2)}$ that has not been undone by a counterexample. However, we know that the set of losing positions cannot be closed under addition, as ${\rm{CN}}(6,2)$ reduces to Nim on three stacks when every other stack has been reduced to zero tokens. In fact, this is the case for all games ${\rm{CN}}(n,2)$ for $n \ge 6$.
\section{Larger values of $n$}
As $n$ gets larger, the structure of the losing set becomes more complicated. We show one example for $n=8$, where we have a new feature, namely a minimum involving the sum of stack heights, in the structure of the losing set.
\begin{theorem} \label{8-6} The set of losing positions for the game ${\rm{CN}}(8,6)$ is given by $$\mathcal{L}=\{ (0,x,a_1,b_1,e,b_2,a_2,x) \mid a_1+b_1=a_2+b_2=x \text{ and } e=\min(x,a_1+a_2)\}.$$ \end{theorem}
\begin{figure}\label{n8,6}
\end{figure}
\begin{remark} \label{zeros} Before proving Theorem~\ref{8-6} we will discuss the role of the zeros in a losing position. Specifically, we will see that if a losing position has more than one zero then the position will have a reflection symmetry (dotted lines in Figure~\ref{sym}) that clearly shows that any of the zeros can be deemed the $``0"$ of the typical losing position ${\mathbf p}=(0,x,a_1,b_1,e,b_2,a_2,x) \in \mathcal{L}$. Note that a zero stack is always between two maximal stacks $x$. \begin{enumerate} \item If $x=0$, then ${\mathbf p}=(0,0,0,0,0,0,0,0)$. \item If $a_1=0$, then $b_1=x$ and $e=\min(x,a_1+a_2)=\min(x,a_2)=a_2$, and therefore, ${\mathbf p}=(0,x,0,x,a_2,b_2,a_2,x)$ and the conditions of $\mathcal{L}$ hold for either of the two zeros as only the sum of $a_2$ and $b_2$ matters, and their positions can be interchanged due to rotational symmetry (see Figure~\ref{sym(2)}). \item If $ b_1=0$, then $a_1=x$ and $ e=\min(a_1+a_2,x)=\min(x+a_2,x)=x$, and therefore ${\mathbf p}= (0,x,x,0,x,b_2,a_2,x)$, and once more, the conditions can be verified for the second zero as well (see Figure~\ref{sym(3)}). \item If $e=0$, then either $x=0$, a case considered before, or both $a_1$ and $a_2$ are zero, which results in ${\mathbf p}=(0,x,0,x,0,x,0,x)$, a special case of (2) above. \end{enumerate} \end{remark}
\begin{figure}
\caption{Symmetric positions.}
\label{sym(2)}
\label{sym(3)}
\label{sym}
\end{figure}
\begin{proof} Let $S=\{ (0,x,a_1,b_1,e,b_2,a_2,x) \mid a_1+b_1=a_2+b_2=x \text{ and } e=\min(x,a_1+a_2)\}$. As before, we will show that $S$ is the set of losing positions by showing that $S$ satisfies conditions (I) and (II) of Theorem~\ref{lose}. We start by proving condition (I).
Suppose ${\mathbf p}=(0,x,a_1,b_1,e,b_2,a_2,x)\in S$. Consider first the case where one of the two $x$ stacks is not reduced by the move. As it is beside a $0$ and remains the maximal element, neither the other $x$ nor any of the $a_i$ and $ b_i$ with $i=1,2$ can be reduced (otherwise $ a_i+b_i < x$) if the resulting position is to be in $S$. Furthermore, since $e=\min(x,a_1+a_2)$, it too must remain fixed, which implies that there is no legal move in this case. Now assume both $x$ stacks are reduced to $x'<x$. To create a position in $S$, one must play on at least one of $a_1$ and $b_1$ to make $a_1'+b_1'=x'$ and on at least one of $b_2$ and $a_2$ to make $a_2'+b_2'=x'$. Since it is not possible to play on both $b_1$ and $b_2$ (as no set of six consecutive stacks contains both of them and the two $x$ stacks), at least one of the $a_1$ and $a_2$ stacks must be reduced. W.l.o.g, assume play is on $a_1$, and thus, since $a_1'+a_2<a_1+a_2=x$ and $x'<x$, we have that $e' < e$, but no set of six consecutive stacks contains both of the $x$ stacks, $a_1$, $e$, and at least one of $a_2$ and $b_2$, and thus there is no legal move. This completes the proof for condition (I).
To show condition (II), we will prove that for different classes of positions ${\mathbf p}$ there is an option ${\mathbf p}'$ of ${\mathbf p}$ that belongs to $S$, and then show that every possible position ${\mathbf p} \notin S$ belongs to at least one of these classes. We will say that ${\mathbf p}$ {\em is solved} if there is a legal move from ${\mathbf p}$ to ${\mathbf p}' \in S$. Furthermore, in ${\rm{CN}}(8,6)$ we have at least two stacks whose height remains the same in a legal move, and we will refer to those stacks as {\em fixed}.
We now prove a sequence of lemmas, each showing that a different class of positions is solved. Lemmas~\ref{valley} and~\ref{max-min} will also be used in the proofs of the subsequent lemmas.
\begin{definition} If a position ${\mathbf p}$ contains four consecutive stacks $a,b,c$, and $d$ such that $b+c\le \min(a,d)$, then these four stacks are called a {\em valley} of the position, and we will refer to the four stacks satisfying this condition as $\val{a}{d}$. The {\em size of the valley} is defined as $|\val{a}{d}|=b+c$. \end{definition}
\begin{lemma} \label{valley} (Valley lemma) A position ${\mathbf p}$ that contains a valley is solved. \end{lemma}
\begin{proof} Consider the position ${\mathbf p}=(a,b,c,d,e,f,g,h)$, and suppose it contains $\val{a}{ d}$. If there is more than one valley, suppose without loss of generality that $\val{a}{ d}$ has minimal size. We make the $h$ stack the zero of ${\mathbf p}'$. When reducing the $a$ and $g$ stacks to equal heights we need to consider two cases, namely $\max(f,g) \ge b+c$ and $\max(f,g) < b+c$.
Let $\max(f,g)\ge b+c$. W.l.o.g., $g\ge f$, that is, $g\ge b+c$ (otherwise make $e$ stack the zero of ${\mathbf p}'$) . We fix stacks $b$ and $c$. Let $a'=b+c$ (possible because $\val{a}{d}$), $h'=0$, and $g'=b+c$. Now, if $e+f\ge b+c$, let $e'$ and $f'$ be such that $e'+f'=b+c$, and $d'=\min(b+c,b+f')$ (possible because $\val{a}{d}$ ensures $d \ge b+c$). The resulting position ${\mathbf p}'=(b+c,b,c,\min(b+c,b+f'),e',f',b+c,0)$ with $e'+f'=b+c$ is in $S$. Note that we always have $e+f\ge b+c$, as otherwise $ e+f<b+c<d$ and $e+f<b+c\le g$, that is we would have $\val{d}{g}$ with a smaller size than $\val{a}{d}$, a contradiction to the minimality of $\val{a}{d}$.
Now consider the second case, $\max(f,g)<b+c$. Then $f<b+c$, $g<b+c$ and w.l.o.g, $f\le g$. We fix $f$ and $g$ and let $h'=0$, $a'=g$ (possible because $a\ge b+c>g$), $e'=g-f$ (possible because otherwise $|\val{d}{g}|<|\val{a}{d}|$, which contradicts the minimality of $\val{a}{d}$). Also, $b$ and $c$ are reduced so that $b'+c'=g$, (possible because $b+c>g$), and finally $d'=\min(g,b'+f)$ (possible because $d\ge a+b>g$). The resulting position ${\mathbf p}'=(g, b', c', \min(g,b'+f),g-f, f, g, 0)$ with $b'+c'=g$ is in $S$, that is, ${\mathbf p}$ is solved.\end{proof}
\begin{lemma} (Trapezoid lemma) \label{trapezoid} If the position ${\mathbf p} =(a,b,c,d,e,f,g,h)$ satisfies that $\max(a,h) \le \min(f,c)$, then ${\mathbf p}$ is solved. \end{lemma}
\begin{proof} W.l.o.g. assume that $ a\ge h$. Now if $d+e\le \min(f,c)$, then $\val{c}{f}$ and ${\mathbf p}$ is solved, so we can assume that $d+e>\min(f,c)$. Similarly, if $g+h \le \min(a,f)=a$, then $\val{f}{a}$, so we can assume that $g+h>a$. With these two inequalities in hand we can proceed: fix $a$ and $h$ and let $b'=0$, $c'=a$, $d'+e'=a$ (possible because $d+e>c\ge a$), $g'=a-h\ge 0$ (possible because $h\le a$ and $g+h>a$), and $f'=\min(a,h+d')$ (possible because $f\ge a$). We then get ${\mathbf p}'=(a,0,a,d',e',\min(a,h+d'),a-h,h) \in S$. \end{proof}
The next two lemmas consider cases in which adjacent stacks $a$ and $b$ are each smaller than the minimum of a specified pair of stacks.
\begin{lemma} (First double min lemma) \label{dmin1} A position ${\mathbf p}=(a,b,c,d,e,f,g,h)$ for which $a\le \min(d,f)$ and $b\le \min(a,g)$ is solved. \end{lemma}
\begin{proof} By Lemma~\ref{valley}, we only need consider positions ${\mathbf p}$ that do not contain a valley. We fix
$a$ and $b$ and let $e'=0$, $d'=f'=a$ (possible because $a\le \min(d,f)$), $c'=a-b$ (possible because $b\le a$ and $c< a-b$ would imply $\val{a}{d}$). For the resulting position to be in $S$, one must have $g'+h'=a$ and $a'=a=\min(d',c'+g')=\min(a,a-b+g')$. We have that $g+h\ge a$, otherwise $\val{f}{a}$, and therefore it is possible to obtain $g'+h'=a$. Likewise, since $g \ge b$, we can achieve $g' \ge b$. Moreover, both conditions can be satisfied at the same time as follows: if $g\ge a$, let $g'=a$ and $h'=0$, so ${\mathbf p}'=(a,b, a-b,a,0,a,a,0)$; if $g<a$, let $g'=g$, $h'=a-g$ to yield ${\mathbf p}'=(a,b, a-b,a,0,a,g, a-g)$. \end{proof}
\begin{lemma} (Second double min lemma) \label{dmin2} A position ${\mathbf p}=(a,b,c,d,e,f,g,h)$ for which $a\le \min(e,g)$ and $b\le \min(a,d)$ is solved. \end{lemma}
\begin{proof} There are two cases to be considered, each of which results in a position of the form (2) of Remark~\ref{zeros}. If $b+c\ge a$, we fix $a$ and $b$, and let $c'=a-b$, $d'=b$, $e'=a$, $f'=0$, $g'=a$, and $h'=0$ to obtain ${\mathbf p}'=(a,b, a-b,b,a,0,a,0) \in S$. Otherwise, we fix $b$ and $c$ and let $a'=b+c$, $d'=b$, $e'=b+c$, $f'=0$, $g'=b+c$, and $h'=0$, which yields ${\mathbf p}'=(b+c,b,c,b,b+c,0,b+c,0) \in S$. \end{proof}
\begin{lemma}(MaxMin Lemma)\label{max-min} If for a position ${\mathbf p} = (a,b,c,d,e,f,g,h)$, either \begin{equation}\label{maxmin1eq} \max(b,c,b+c-e) \le \min(f,h, a+b,b+c, (b+c+d)/2) \end{equation} or \begin{equation}\label{maxmin2eq}\max(c,d, c+d-a) \le \min(f,h, c+d, d+e, (b+c+d)/2) \end{equation} holds, then ${\mathbf p}$ is solved. \end{lemma}
\begin{proof} Let $g'=0$, and keep either $b$ and $c$ or $c$ and $d$ fixed. We first consider the case where $b$ and $c$ are fixed. In order for a legal move to exist, the following conditions have to be satisfied, where $m$ (the maximum adjacent to the zero of the new position) is a quantity to be determined: $$ f'=h'=m; \quad a'+b =m; \quad d'+e'=m; \quad \text{ and } c=\min(m,a'+e')=a'+e'.$$
Note that the last equality is an additional assumption used to determine all the values for the new position ${\mathbf p}'$. If these inequalities can be solved for $m$, then there is a legal move to ${\mathbf p}'=(m-b,b,c, 2m-b-c, b+c-m,m,0,m)$ for each value of $m$ that satisfies the conditions. (Note that we used the assumption that $c=a'+e'$ to compute $e'$.) All of these entries have to be non-negative, and smaller than the corresponding entries in ${\mathbf p}$. Thus we get two conditions for each of the stacks of ${\mathbf p}'$, which translate into conditions for $m$ as follows: \begin{eqnarray*} \begin{tabular}{lll} $0 \le m \le f$ & $\Rightarrow$ & $0 \le m \le f$ \oplus $0\le m \le h$ & $\Rightarrow$ & $0\le m \le h$ \oplus $0\le m-b \le a$ & $\Rightarrow$ & $b \le m \le a+b$ \oplus $c \le m$ & $\Rightarrow$ & $c \le m$ \oplus $0 \le 2m-b-c \le d$ & $\Rightarrow$ & $(b+c)/2 \le m \le (b+c+d)/2$ \oplus $0 \le b+c-m \le e$ & $\Rightarrow$ & $b+c-e \le m \le b+c$ \oplus \end{tabular} \end{eqnarray*}
Combining these conditions yields $$\max(b,c,(b+c)/2, b+c-e) \le m \le \min(f,h, a+b, b+c, (b+c+d)/2).$$ We can further simplify this condition by recognizing that the average $(b+c)/2$ is never larger than $\max(b,c)$, and thus the average $(b+c)/2$ can be taken out of the maximum requirement, yielding \eqref{maxmin1eq}. The second case, fixing $c$ and $d$, results in Equation \eqref{maxmin2eq} by symmetry across the line through $g$ and $c$. In this case, ${\mathbf p}'= (c+d-m, 2m-c-d,c,d, m-d,m,0,m)$ is a legal move. \end{proof}
\begin{example} Suppose that ${\mathbf p}=(4,12,11,9,10,16,1,17)$. Then \eqref{maxmin1eq} is satisfied as $\max(b,c,b+c-e)=\max(12,11,13) =13 \le 16= \min(16,17,16,23,16)=\min(f,h, a+b,b+c, (b+c+d)/2)$. Then for $m = 13, 14, 15, 16$, ${\mathbf p}'=(m-12,12,11, 2m-23, 23-m,m,0,m)$ is a legal move. \end{example}
We now provide a final lemma which deals with the remaining cases which are small in number, but unfortunately do not fall into a neat unifying structure.
\begin{lemma} (Clean-up Lemma) \label{cleanup} A position ${\mathbf p}=(a,b,c,d,e,f,g,h)$ for which $ f\ge \min(b,h) \ge \max(d,e)$ and $f\ge c\ge e\ge g$, $d\ge g$ and $a \le \min(c,d)$ is solved. \end{lemma}
\begin{proof} Of the given inequalities, the only one that does not unambiguously fix the relative order of stack heights is $f\ge \min(b,h)$. Several subcases arise which are summarized in Table~\ref{cleanupsum}. In most cases, we will just provide a position ${\mathbf p}'\in S$, and the reader can check that the given position ${\mathbf p}'$ is a legal move, that is $p_i\ge p'_i\ge 0$, using the inequalities of Lemma~\ref{cleanup} and the inequalities of the given subcase. We will provide some details of the proof and remark on the underlying structure for the case $ \min(b,h)=b< d+e-g$. In the first case, the condition $a+e \le c$ assures that the choice $c'=\min(m,a+e')\le a+e$ is legal. In each case, $g'=0$ and $f'=h'=m=\min(h,f,a+b,d+e)$. The other values of ${\mathbf p}'$ are adjusted depending on the value of $m$ so that the resulting position ${\mathbf p}'\in S$.
In the second case, when $a+e > c$, we apply Lemma~\ref{max-min}. Using~\eqref{maxmin1eq}, a position ${\mathbf p}$ is solved if $\max(b,c,b+c-e) \le \min(f,h, a+b,b+c, (b+c+d)/2)$. Since $b \ge \max(d,e)\ge e$ and $c\ge e$, we have that $b+c-e\ge b$ and $b+c-e\ge c$, so $\max(b,c,b+c-e)=b+c-e$. Thus we have to show that $b+c-e\le \min(f,h, a+b,b+c, (b+c+d)/2)$. But $b+c-e\le (b+c+d)/2$ is logically equivalent to $ b+c-e \le d+e $, so the condition becomes $m=b+c-e\le m^*=\min(f,h, a+b,b+c, d+e)$. If $m^*=a+b$ or $m^*=b+c$, then $m^*>m$ and ~\eqref{maxmin1eq} is satisfied. If $m^*= f$ or $m^*=d+e$, then for the case $m \le m^*$ ~\eqref{maxmin1eq} is true; if on the other hand $m > m^*$, then we cannot apply Lemma~\ref{max-min}, but give a position ${\mathbf p}'$ using other methods. Finally, if $m^*=h$, we apply ~\eqref{maxmin2eq} of Lemma~\ref{max-min} which asserts that a position is solved if $\hat{m}=\max(c,d, c+d-a) \le \tilde{m}=\min(f,h, c+d, d+e, (b+c+d)/2)$. Since $a \le \min(c,d)$, we obtain that $\hat{m} =c+d-a$. Also, $e \le c$ implies $d+e \le d+c$, and together with $m^*=h$, we have that $h \le \min(f,d+e,d+c)$, which implies that $\tilde{m}=\min(h, (b+c+d)/2)$. Furthermore, $c+d-a\le (b+c+d)/2$ is logically equivalent to $c+d-a\le a+b$. Using $m^*=h$ once more, we obtain that $\tilde{m}=\min(h, a+b)=h$. Thus, \eqref{maxmin2eq} holds if $\hat{m}=c+d-a\le m^*=h$, and ${\mathbf p}'= (c+d-\hat{m}, 2\hat{m}-c-d,c,d, \hat{m}-d,\hat{m},0,\hat{m})=(a,c+d-2a,c,d,c-a,\hat{m},0,\hat{m})$ with $\hat{m}=c+d-a$ is a legal move. Otherwise, if $\hat{m} > m^*$, we can move to ${\mathbf p}'=(a,h-a,a+h-d,d,h-d,h,0,h) \in S$. \end{proof}
Now that we have all the intermediate results, we can complete the proof of Theorem~\ref{8-6}. Lemmas~\ref{trapezoid}, ~\ref{dmin1},~\ref{dmin2}, and~\ref{cleanup} are all lemmas of the form: ``If a given set of conditions on the relative sizes of individual stacks holds, then the position is solved". By contrast, Lemmas~\ref{valley} and~\ref{max-min} were mere tools to prove the other lemmas. Note that there are a total of $7!$ ways to arrange the relative sizes of stacks around the circle, and we need to divide by $2$ to account for the reflection symmetry. Thus there are $7!/2=2520$ different size configurations to be considered. Each
of these arrangements is, under a suitable rotation or reflection, covered by at least one of the four main lemmas, as checked by a Visual Basic program that can run on any Excel workbook. (The code can be obtained from \url{http://www.calstatela.edu/faculty/sheubac/}.) It is quite interesting to see how the cases distribute among the four lemmas that settle Theorem~\ref{8-6}. Clearly, Lemma~\ref{trapezoid} is the most powerful, as it covers $2248$ out of the $2520$ cases, roughly $89\%$. Lemma~\ref{cleanup} on the other hand covers only 62 cases, and was specifically designed to cover the 42 cases not already covered by the other lemmas, resulting in the very tedious conditions of Lemma~\ref{cleanup}. Figure~\ref{dist} shows the contributions of the four lemmas to the proof of Theorem~\ref{8-6}. \end{proof}
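The count of $7!/2=2520$ size configurations is easy to reproduce independently of the spreadsheet code. The following minimal Python sketch (an illustration only; the function name \texttt{circular\_arrangements} is a hypothetical helper, not part of the Visual Basic program) enumerates the circular orderings of the relative ranks $1,\ldots,8$, fixing the largest rank to remove rotations and discarding mirror images to account for reflections.
\begin{verbatim}
from itertools import permutations

def circular_arrangements(n=8):
    # Fix the largest rank in position 0 to quotient out rotations,
    # and keep only one representative of each reflection pair.
    seen = set()
    for perm in permutations(range(1, n)):
        arrangement = (n,) + perm
        mirrored = (n,) + perm[::-1]
        if mirrored not in seen:
            seen.add(arrangement)
    return seen

print(len(circular_arrangements()))  # 2520 = 7!/2
\end{verbatim}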
\begin{figure}
\caption{The contributions of the various lemmas to the proof of Theorem~\ref{8-6}}
\label{dist}
\end{figure}
\begin{landscape} \begin{table}[htb] \begin{center} \begin{tabular}{llll}
\multicolumn{3}{c|}{Conditions} &\multicolumn{1}{c}{${\mathbf p}'$}\\ \hline
\multicolumn{1}{l|}{$\min(h,b)\ge d+e-g$} & & & \multicolumn{1}{|l}{ $(0,m,e-g,d,e,m-g,g,m)$}\\
\multicolumn{1}{l|}{} & & & \multicolumn{1}{|l}{$m=d+e-g$}\\ \hline
\multicolumn{1}{l|}{$\min(h,b)=h < d+e-g$} & & & \multicolumn{1}{|l}{ $(0,h,e-g,h+g-e,e,h-g,g,h)$}\\ \hline
$\min(h,b)=b < d+e-g$ & \multicolumn{1}{|l}{ $a+e \le c $} & \multicolumn{1}{|l}{ $m=a+b$} & \multicolumn{1}{|l}{ $(a,b,c',d',e',m,0,m)$}\\
 & \multicolumn{1}{|l|}{$m=\min(h,f,a+b,d+e)$} &\multicolumn{1}{l|}{} & $d'+e'=m$, $c'=\min(m,a+e')$ \\ \cline{3-4}
 & \multicolumn{1}{|c|}{} & \multicolumn{1}{l} {$m=d+e$}& \multicolumn{1}{|l}{ $(a',b',c',d,e,m,0,m)$}\\
 & \multicolumn{1}{|c|}{} & \multicolumn{1}{l|} {}& $a'+b'=m$, $c'=\min(m,a+e')$\\ \cline{3-4}
 & \multicolumn{1}{|c|}{} & \multicolumn{1}{l} {$m=f$}& \multicolumn{1}{|l}{ $(a',b',c',m-e,e,m,0,m)$}\\
 & \multicolumn{1}{|c|}{} & \multicolumn{1}{l|} {}& $a'+b'=m$, $c'=\min(m,a+e')$\\ \cline{3-4}
 & \multicolumn{1}{|c|}{} & \multicolumn{1}{l} {$m=h$}& \multicolumn{1}{|l}{ $(a,m-a,c',d',e',m,0,m)$}\\
 & \multicolumn{1}{|c|}{} & \multicolumn{1}{l|} {}& $d'+e'=m$, $c'=\min(m,a+e')$\\ \cline{2-4}
 & \multicolumn{1}{|l}{ $a+e > c $} & \multicolumn{1}{|l}{ $m^*=a+b$ or } & \multicolumn{1}{|l}{ $(c-e,b,c, m-e, e,m,0,m)$}\\
 & \multicolumn{1}{|l}{ $m^*=\min(h,f,a+b,b+c,d+e)$} & \multicolumn{1}{|l}{ $m^*=b+c$ or} & \multicolumn{1}{|l}{ }\\
 & \multicolumn{1}{|l|} {$m=b+c-e$}& \multicolumn{1}{l}{$m\le m^*=f $ or} & \multicolumn{1}{|l}{ }\\
 & \multicolumn{1}{|l|} {}& \multicolumn{1}{l}{$m\le m^*=d+e $} & \multicolumn{1}{|l}{ }\\ \cline{3-4}
 & \multicolumn{1}{|l|} {}& \multicolumn{1}{l}{$m>m^*=f $} & \multicolumn{1}{|l}{$(f-b,b,e+f-b,f-e,e,f,0,f)$ }\\ \cline{3-4}
 & \multicolumn{1}{|l|} {}& \multicolumn{1}{l}{$m>m^*=d+e$} & \multicolumn{1}{|l}{$(a',b',c',d,e,m^*,0,m^*)$ }\\
 & \multicolumn{1}{|l|} {}& \multicolumn{1}{l}{} & \multicolumn{1}{|l}{$b'=\min(b,m^*),a'+b'=m^*$ }\\
 & \multicolumn{1}{|l|} {}& \multicolumn{1}{l}{} & \multicolumn{1}{|l}{$c'=\min(m^*,e+a')$ }\\ \cline{3-4}
 & \multicolumn{1}{|l|} {$\hat{m}=c+d-a$}& \multicolumn{1}{l}{$\hat{m}\le m^*=h$} & \multicolumn{1}{|l}{$(a,c+d-2a,c,d,c-a,\hat{m},0,\hat{m})$ }\\ \cline{3-4}
 & \multicolumn{1}{|l|} {}& \multicolumn{1}{l}{$\hat{m}>m^*=h$} & \multicolumn{1}{|l}{$(a,h-a,a+h-d,d,h-d,h,0,h)$ }\\ \hline \multicolumn{4}{c}{}\\ \end{tabular} \end{center} \caption{Legal moves for the subcases of Lemma~\ref{cleanup} \label{cleanupsum}} \end{table}
\end{landscape}
\section{Generalizations}\label{GenRes}
Even though we have given results for a number of values of $n$, it would clearly be more satisfying to obtain general results that go beyond the ``extreme cases'' ${\rm{CN}}(n,1)$, ${\rm{CN}}(n,n)$, and ${\rm{CN}}(n,n-1)$. Ehrenborg and Steingr{\'{\i}}msson~\cite{EhrSte1996} and Horrocks~\cite{Hor2010} investigated a more general set of games, namely playing Nim on a simplicial complex, and obtained structural results for the set of losing positions. In particular, the results in~\cite{EhrSte1996} contain ${\rm{CN}}(5,2)$ and ${\rm{CN}}(5,3)$, while~\cite{Hor2010} contains ${\rm{CN}}(6,3)$ as a special case. The question then becomes whether these results solve ${\rm{CN}}(n,k)$ for other values of $n$. We consider this to be unlikely, as the structural results in~\cite{EhrSte1996} and~\cite{Hor2010} are linear in nature, while our results for ${\rm{CN}}(6,2)$, ${\rm{CN}}(6,4)$ and ${\rm{CN}}(8,6)$ contain non-linear elements such as the digital sum and the minimum. Nevertheless, an investigation of the structure of the circuits of ${\rm{CN}}(n,k)$, which are at the heart of the results of Ehrenborg, Steingr{\'{\i}}msson, and Horrocks, might yield additional insights. We start by defining the necessary terminology, adapting the definitions given in~\cite{EhrSte1996} to the special case of circular Nim.
\begin{definition} \label{simp comp} A {\em simplicial complex} $\Delta$ on a finite set of nodes $V=\{1,2,\ldots,n\}$ is a collection of subsets of $V$ such that $\{v\} \in \Delta$ for every $v \in V$, and $B \in \Delta$ whenever $A \in \Delta$ and $B \subseteq A$. The elements of $\Delta$ are called {\em faces} and represent the choices for the stacks from which a player can take tokens. A face that is maximal with respect to inclusion is called a {\em facet}. A minimal (with respect to inclusion) non-face of $\Delta$ is called a {\em circuit}. The {\em size of a circuit} is the number of nodes in the circuit. \end{definition}
For circular Nim ${\rm{CN}}(n,k)$, the simplicial complex is given by $$\Delta =\bigcup_{i=1}^n \bigcup_{j=0}^{k-1}\{i,(i+1), \ldots, (i+j)\}\imod{n}.$$ The facets are the sets consisting of $k$ consecutive vertices, while the structure of circuits is harder to describe in general. (They are not the sets consisting of $k+1$ consecutive vertices.) However, for $k=2$ we can explicitly describe the circuits and can enumerate them as well since the structure of the circuits is very simple in this case.
\begin{lemma} The circuits of ${\rm{CN}}(n,2)$ are of the form $\{i,j\}$ with $i=1,2, \ldots, n$, $j=i+2,\ldots, i-2 \imod{n}$. The number of circuits of ${\rm{CN}}(n,2)$ is given by $n(n-3)/2$. \end{lemma}
\begin{proof} By definition, a circuit is a set of stacks on which play is not allowed, while every proper subset of the circuit is an allowed choice of stacks. Thus, for any stack $i$, each pair $\{i,j\}$ in which $j$ is not an immediate neighbor of $i$ (that is, $j\neq i\pm 1$) forms a circuit. There are $n-3$ such choices of $j$ for each of the $n$ stacks; division by $2$ takes into account that each pair is counted twice. \end{proof}
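For small $n$, this count is easily confirmed by brute force. The following Python sketch (illustrative only; the function name \texttt{cn2\_circuits} is a hypothetical helper) lists the pairs of non-adjacent stacks and checks the formula $n(n-3)/2$.
\begin{verbatim}
def cn2_circuits(n):
    # Faces of CN(n,2): the empty set, singletons, and adjacent pairs.
    # A circuit is a minimal non-face, i.e. a pair of non-adjacent stacks.
    circuits = []
    for i in range(n):
        for j in range(i + 1, n):
            adjacent = (j - i) % n == 1 or (i - j) % n == 1
            if not adjacent:
                circuits.append((i, j))
    return circuits

for n in range(4, 12):
    assert len(cn2_circuits(n)) == n * (n - 3) // 2
\end{verbatim}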
We now prove a result on the size of the circuits of ${\rm{CN}}(n,k)$ for any $n$ and $k$. Before we can do so, we need a few definitions. In what follows we always assume that vertices are listed in increasing (clockwise) order and that indices are given modulo $n$.
\begin{definition} An {\em arc of length $m$} with {\em end vertices} $i$ and ${i+m}$ is a set of $m+1$ consecutive vertices $\{i,{i+1},\ldots,{i+m}\} \subseteq \{1,2,\ldots,n\}$. We denote the arc from $i$ to ${i+m}$ by $\arc{i}{(i+m)}$. With each set $V$ of vertices, we associate two measurements: the {\em size (= number of elements)} of the set $V$, denoted by $|V|$, and the {\em span size} $sp(V)$, which is the length of the smallest arc containing $V$. \end{definition}
Note that $sp(V) \ge |V|-1$, with equality exactly when the elements of $V$ are consecutive vertices.
\begin{remark} \label{arc} \begin{enumerate} \item The smallest arc containing a set is not necessarily unique, but its length is. For example, if $n$ is even, and the set $V$ consists of two diagonally opposite vertices, then we have two arcs of length $n/2$. \item If an arc that covers a given set $V$ is minimal, then the arc's two end vertices belong to $V$; for if not, one could obtain a smaller arc that covers $V$ by simply removing the end vertex that does not belong to $V$, thus reducing the length of the arc. \item In the ${\rm{CN}}(n,k)$ game, every face is contained in an arc of length at most $k-1$, and every set with span size at most $k-1$ is a face. \end{enumerate} \end{remark}
\begin{definition} Given a set $V=\{v_1, v_2, \ldots, v_m\}$ (where the $v_i$ appear in clockwise order), the {\em distance set} $D_V=\{d_1,d_2, \ldots, d_m\}$ is the set of lengths of the arcs $\arc{v_i}{v_{i+1}}$, that is, $ d_i = v_{i+1}- v_i$ for $i=1,\ldots,m-1$ and $d_m=v_1+n-v_m$. \end{definition}
Note that the sum of the distances of any distance set of $V\subseteq \{1,2,\ldots,n\}$ equals $n$, the total number of vertices of a ${\rm{CN}}(n,k)$ game, and that each set of $m$ distances $d_1, d_2, \ldots, d_{m}$ with $0 < d_i\le n$ and $\sum_{i=1}^m d_i =n$ uniquely describes an $m$-subset of $\{1,2,\dots,n\}$ (up to rotation).
\begin{example} Let $V=\{2,5,6\}$ and $n=8$. Then $d_1=3$, $d_2=1$, and $d_3=4$, so $D_V=\{3, 1, 4\}$. Adding the distances gives $3+1+4 = 8$. \end{example}
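The distance set is straightforward to compute; the following Python sketch (the helper name \texttt{distance\_set} is used here only for illustration) reproduces the example above.
\begin{verbatim}
def distance_set(V, n):
    # V is listed in increasing (clockwise) order; indices are taken mod n.
    V = sorted(V)
    dists = [V[i + 1] - V[i] for i in range(len(V) - 1)]
    dists.append(V[0] + n - V[-1])  # wrap-around distance d_m
    return dists

print(distance_set([2, 5, 6], 8))   # [3, 1, 4]; the distances sum to n = 8
\end{verbatim}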
\begin{figure}
\caption{Distances between vertices.}
\label{distances}
\end{figure}
\begin{theorem} \label{circuit cond} A set of vertices $V=\{v_{1}, v_{2}, \ldots, v_{\ell}\}$ is a circuit of ${\rm{CN}}(n,k)$ if and only if the following conditions hold on its distance set $D_V$: \begin{enumerate} \item $d_i+d_{i+1} > n-k$ and \item $d_i \le n-k$ \end{enumerate} for $ i=1,2,\ldots,\ell$, where $d_{\ell+1}= d_1$. \end{theorem}
\begin{proof} $``\Rightarrow"$ Suppose $V$ is a circuit. Then by Remark~\ref{arc} (since $V$ is not a face), $$ k \le sp(V)=\min_i\{n-d_i\}\le n-d_i \quad \forall i=1,\ldots,\ell,$$ so (2) is satisfied. Also, $V\backslash\{v_i\}$ is a face for all $i$, which implies that $sp(V\backslash\{v_i\})\le k-1$. Since $V$ is not a face, the minimal arc covering $V\backslash\{v_i\}$ has to be the arc $\arc{v_{i+1}}{v_{i-1}}$ (otherwise, the minimal arc would also include $v_i$, and thus $V$ would be a face). Therefore,
$$k-1 \ge sp(V\backslash\{v_i\})=|\arc{v_{i+1}}{v_{i-1}}|=n-d_i-d_{i+1},$$ so (1) holds.
$``\Leftarrow"$ Assume conditions (1) and (2) hold. Since $d_i \le n-k$, $$sp(V)=\min_i|\arc{v_{i+1}}{v_{i}}|=\min_i\{n-d_i\}\ge k,$$ and so $V$ is not a face. Also, \begin{eqnarray*}
sp(V\backslash\{v_i\})&=&\min_{j \ne i}\bigl\{|\arc{v_{j+1}}{v_j}|,|\arc{v_{i+1}}{v_{i-1}}|\bigr\}\\ &=&\min_{j \ne i}\bigl\{\underbrace{n-d_j}_{\ge k},\underbrace{n-d_i-d_{i+1}}_{<k}\bigr\}=n-d_i-d_{i+1}\le k-1, \end{eqnarray*} so $V\backslash\{v_i\}$ is a face for every $i$, and thus $V$ is a circuit. This completes the proof. \end{proof}
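The equivalence in Theorem~\ref{circuit cond} can also be verified computationally for small games. The Python sketch below (names such as \texttt{is\_circuit\_by\_distances} are hypothetical helpers) compares the distance-set criterion with the brute-force definition of a circuit as a minimal non-face, using the fact that a set is a face exactly when its span size is at most $k-1$.
\begin{verbatim}
from itertools import combinations

def gaps(V, n):
    V = sorted(V)
    return [V[i + 1] - V[i] for i in range(len(V) - 1)] + [V[0] + n - V[-1]]

def is_face(V, n, k):
    # A set is a face iff its span size (n minus the largest gap) is <= k-1.
    return len(V) == 0 or n - max(gaps(V, n)) <= k - 1

def is_circuit_bruteforce(V, n, k):
    # Faces are closed under taking subsets, so it suffices to check the
    # subsets obtained by deleting one vertex at a time.
    V = sorted(V)
    return (not is_face(V, n, k)
            and all(is_face(V[:i] + V[i + 1:], n, k) for i in range(len(V))))

def is_circuit_by_distances(V, n, k):
    d, s, m = gaps(V, n), n - k, len(V)
    return (all(d[i] + d[(i + 1) % m] > s for i in range(m))
            and all(x <= s for x in d))

for n in range(4, 10):
    for k in range(2, n):
        for m in range(2, n + 1):
            for V in combinations(range(1, n + 1), m):
                assert is_circuit_bruteforce(list(V), n, k) == \
                       is_circuit_by_distances(list(V), n, k)
\end{verbatim}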
\begin{theorem} \label{circuit length} For ${\rm{CN}}(n,k)$ with $n>1$ and $1<k<n$, a circuit of length $\ell$ exists if and only if \begin{equation}\label{circlen} \frac{n}{s} \le \ell \le \frac{2n}{s+1} \end{equation} where $s=n-k$. \end{theorem}
Table~\ref{cirlen} shows the size of circuits for given $n$ and $k$.
\begin{table}[htdp] \begin{center}
{\small
$\begin{array}{c|ccccccccc}
 & k=2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline
n=3 & 3 & & & & & & & & \\
4 & 2 & 4 & & & & & & & \\
5 & 2 & 3 & 5 & & & & & & \\
6 & 2 & \{2,3\} & \{3,4\} & 6 & & & & & \\
7 & 2 & 2 & 3 & 4 & 7 & & & & \\
8 & 2 & 2 & \{2,3\} & \{3,4\} & \{4,5\} & 8 & & & \\
9 & 2 & 2 & \{2,3\} & 3 & \{3,4\} & \{5,6\} & 9 & & \\
10 & 2 & 2 & 2 & \{2,3\} & \{3,4\} & \{4,5\} & \{5,6\} & 10 & \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
15 & 2 & 2 & 2 & 2 & \{2,3\} & \{2,3\} & 3 & \{3,4\} & \{3,4,5\} \\
\multicolumn{10}{c}{}
\end{array}$} \end{center} \caption{Possible lengths of circuits for given $n$ and $k$\label{cirlen} }
\end{table}
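The entries of Table~\ref{cirlen} can be regenerated directly from the bounds in~\eqref{circlen}; a short Python sketch (the function name \texttt{circuit\_sizes} is a hypothetical helper) follows.
\begin{verbatim}
import math

def circuit_sizes(n, k):
    # Circuit sizes allowed by the theorem: n/s <= l <= 2n/(s+1), s = n - k.
    s = n - k
    return list(range(math.ceil(n / s), math.floor(2 * n / (s + 1)) + 1))

print(circuit_sizes(8, 6))    # [4, 5], the n = 8, k = 6 entry of the table
print(circuit_sizes(15, 10))  # [3, 4, 5]
\end{verbatim}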
Before giving a proof of Theorem~\ref{circuit length} we point out that Table~\ref{cirlen} shows that the conditions of Horrocks~\cite{Hor2010} are unlikely to be satisfied except in the very special case of ${\rm{CN}}(6,3)$. Theorem 21 of~\cite{Hor2010} requires that the circuits split into two sets, each of which is a partition of $\{1, 2, \ldots, n\}$ with specific conditions for each vertex. However, in most cases, the sizes of the circuits do not allow for such partitions, without even considering whether the vertex condition is satisfied. For example, for ${\rm{CN}}(6,4)$, the circuit sizes are $3$ and $4$, and therefore, the circuits cannot create two partitions of $\{1, 2, \ldots, 6\}$. Obviously, this is not a rigorous proof that there is no instance in which the conditions of Theorem 21 of~\cite{Hor2010} are satisfied, but it corroborates what we have seen for ${\rm{CN}}(6,4)$ and ${\rm{CN}}(8,6)$, namely, that the set of losing positions is no longer a linear combination of some basis elements.
\begin{proof} $``\Rightarrow"$ Let $V=\{v_1,v_2,\ldots, v_{\ell}\}$ be a circuit. Then $d_i+d_{i+1} > s$, and hence $d_i+d_{i+1}\ge s+1$ since the distances are integers, and $d_i \le s$ for all $i$. Summing over $i=1,\ldots, \ell$, we obtain $$ \sum_{i=1}^{\ell}(d_i+d_{i+1})\ge\ell(s+1) \quad \text{and}\quad \sum_{i=1}^{\ell} d_i \le \ell\cdot s.$$ Since $d_{\ell+1}=d_1$ and $\sum_{i=1}^{\ell}d_i=n$, we have that $$ 2n\ge\ell (s+1)\quad \text{and}\quad n \le \ell\cdot s,$$ and solving for $\ell$ gives the desired inequalities.
$``\Leftarrow"$ To show that there are circuits of the given lengths we will exhibit a circuit for the lower and the upper bounds, and then provide an algorithm to create a circuit of any intermediate length from the circuit for the upper bound. Note that for $s=1$, the only circuit consists of all vertices (and thus has size $n$) as each subset of size $n-1$ or smaller is a face. For $s \ge 2$ and $n=m \cdot s +r$ with $0 \le r<s$, we will show that the set $C=\{1, s+1, 2s+1, \ldots, m\cdot s+1\}\imod{n}$ is a circuit of size $\ell_1=\lceil n/s \rceil$. Note that $\ell_1 = m$ if $r=0$, and $\ell_1 = m+1$ if $r > 0$. Figure~\ref{circll} shows the construction for the circuit of size $\ell_1$, where the black dots indicate the vertices that make up the circuit, and the left and right end of the string of $n$ vertices are connected in the circular arrangement.
\begin{figure}
\caption{Circuit construction for lower limit of $\ell$.}
\label{circll}
\end{figure}
We need to show that $d_i+d_{i+1}>s$ and that $d_i\le s$. By construction of $C$, $d_i=s$ for $i=1,\ldots, m$, and if $r >0$, then $d_{m+1}=r<s$. In addition, $d_i+d_{i+1}$ is either $2s$ or $s+r$ with $r \ge 1$, so $C$ is a circuit of size $\ell_1$.
To create a circuit of size $\ell_2=\lfloor 2n/(s+1) \rfloor$, we spread out roughly twice as many vertices as evenly as possible, except potentially for the last one. Let $n=m' (s+1) + r'$, $\underline{h}=\lfloor (s+1)/2\rfloor$ and $\overline{h}=\lceil (s+1)/2 \rceil$. Then $\underline{h}+\overline{h}=s+1$. Let $$C'=\{k(s+1)+1, k(s+1)+1+\overline{h} \mid k=0,\ldots, m'-1\}, $$ that is, we start at vertex $1$ and then alternate distances $\overline{h}$ and $\underline{h}$ (or have an exact even spread if $\underline{h}=\overline{h}$). As in the case of the lower bound, we have to define what happens in the case when $r' >0$. In fact, we need to distinguish between $r'<(s+1)/2$ and $r'\ge (s+1)/2$, since those two cases distinguish between $\ell_2=2m'$ and $\ell_2=2m'+1$. We claim that a circuit of size $\ell_2$ is given by $$\left\{\begin{array}{ll}C' & \text{if } r'<(s+1)/2, \\ C''=C' \cup \{m'(s+1)+1\}& \text{if } r'\ge (s+1)/2.\end{array}\right.$$
For example, if $n=31$ and $s=4$, then the circuit vertices are given by $C'=\{1,4,6,9,11,14,16,19,21,24,26,29\}$, while for $n=34$ and $s=4$, the circuit vertices consist of $C' \cup \{31\}$. By construction of $C'$ and $C''$, the first $2m'-1$ distances in $D_{C'}$ and $D_{C''}$ are alternating between $\overline{h}$ and $\underline{h}$, and therefore, $d_i \le s$ (for $s \ge 1$ or $k<n$) and $d_i+d_{i+1}=s+1>s$ satisfying the circuit conditions independent of the value of $r'$. We now consider the two cases $r'<(s+1)/2$ and $r'\ge (s+1)/2$ separately to show the required inequalities for the remaining vertices.
If $r'<(s+1)/2$, then $D_{C'}=\{\overline{h},\underline{h},\ldots,\underline{h},\overline{h},\underline{h}+r'\}$ as $d_{2m'}=n+v_1-v_{2m'}$. Considering the two possibilities $(s+1)/2\in \mathbb{N} $ and $(s+1)/2\notin \mathbb{N} $ separately, it can be shown that $d_{2m'}=\underline{h}+r'\le s$. Also, $d_{2m'-1}+d_{2m'}=d_1+d_{2m'}=\overline{h}+(\underline{h}+r')\ge s+1$. If on the other hand $r'\ge (s+1)/2$, then $D_{C''}=\{\overline{h},\underline{h},\ldots,\underline{h},\overline{h},\underline{h},r'\}$ with $r' \le s$, $d_{2m'}+d_{2m'+1}=\underline{h}+r'\ge\underline{h}+\overline{h}=s+1$, and $d_{2m'+1}+d_1=r'+\overline{h}\ge\overline{h}+\overline{h}\ge s+1$. Thus, $C'$ and $C''$ are circuits of size $\ell_2$.
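The construction of $C'$ and $C''$ is also easy to check numerically. The Python sketch below (the function names are hypothetical helpers) builds the upper-bound circuit for $n=31$, $s=4$ and verifies the two circuit conditions from Theorem~\ref{circuit cond}.
\begin{verbatim}
def build_upper_circuit(n, s):
    # Vertices spaced alternately ceil((s+1)/2) and floor((s+1)/2) apart,
    # following the construction of C' (and C'' when r' >= (s+1)/2).
    hbar = -(-(s + 1) // 2)               # ceil((s+1)/2)
    mprime, rprime = divmod(n, s + 1)
    C = []
    for k in range(mprime):
        C += [k * (s + 1) + 1, k * (s + 1) + 1 + hbar]
    if rprime >= (s + 1) / 2:
        C.append(mprime * (s + 1) + 1)
    return C

def is_circuit(C, n, s):
    C = sorted(C)
    d = [C[i + 1] - C[i] for i in range(len(C) - 1)] + [C[0] + n - C[-1]]
    m = len(d)
    return (all(x <= s for x in d)
            and all(d[i] + d[(i + 1) % m] > s for i in range(m)))

C = build_upper_circuit(31, 4)
print(C)                     # [1, 4, 6, 9, 11, 14, 16, 19, 21, 24, 26, 29]
print(is_circuit(C, 31, 4))  # True
\end{verbatim}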
To show the existence of circuits of intermediate length, we transform the circuit for the upper limit step by step into the circuit for the lower limit, reducing the number of vertices by one in each step. We now describe the algorithm, and will use the term {\em $s$-barrier (for the $i^{\text{th}}$ segment)} to denote the space between vertices $(i-1)\cdot s$ and $(i-1)\cdot s+1$ for $i\ge 1$.
\begin{itemize} \item[Step 0:] Starting with the circuit of size $\ell_2$ described above, divide the vertices into segments of $s$ vertices. If the rightmost (partial) segment does not contain a circuit vertex, then move the rightmost circuit vertex of the next-to-last segment into the last segment next to that segment's $s$-barrier. If this process leaves the next-to-last segment without a circuit vertex, then move the rightmost circuit vertex of the adjacent segment (on the left) into the next-to-last segment adjacent to its $s$-barrier. (Note that the two rightmost full segments of length $s$ have to contain at least three circuit vertices.) \item[Step 1:] Search for the leftmost segment that does not consist of a single circuit vertex next to its $s$-barrier. \begin{itemize} \item[$\bullet$] If the segment contains two vertices, move (if needed) the left one to its $s$-barrier, and delete the second one. Move the leftmost circuit vertex in the next segment to its $s$-barrier and move all circuit vertices in the adjacent segments by the same amount to the left until there is a segment where the circuit vertex would have to cross an $s$-barrier. In that segment, move the left circuit vertex to its $s$-barrier and move the other circuit vertex (if any) to the left by the same amount as the leftmost vertex in that segment. \item[$\bullet$] If the segment contains only one circuit vertex, move it to its $s$-barrier and repeat Step 1. \end{itemize} \item[Step 2:] Repeat Step 1 until all segments consist of a single circuit vertex next to their respective $s$-barrier. \end{itemize}
We visualize this algorithm in Figure~\ref{alg} for the case of $n=31$ and $s=4$, that is, the game ${\rm{CN}}(31,26)$. In this case, $\overline{h}=3$ and $\underline{h}=2$, and circuits of sizes $\ell = 8,9,10,11$ and $12$ need to be displayed. We start with the configuration of circuit vertices at positions $5k+1$ and $5k+4$ for $k=0,\ldots, 5$. Positions with circuit vertices are displayed as black dots, other positions are displayed as open circles, and $s$-barriers are displayed as dotted lines. For each reduction step, only the segments where circuit vertices change positions are displayed, followed by the resulting circuit.
\begin{figure}
\caption{Circuit construction for intermediate values of $\ell$.}
\label{alg}
\end{figure}
Each complete application of Step 1 reduces the number of circuit vertices by one, and hence $\ell$ by one. Furthermore, the distance conditions for circuits remain intact. The initial rearrangement of vertices (if needed) at the right end creates distances that are at most $s$. In addition, the distances do not decrease, so condition (1) remains intact. Now let's look at the distances in the segments where vertices are moved or deleted. We proceed from left to right.
In the segment in which the vertex is deleted, the distance between the vertices adjacent to the deleted vertex is exactly $s$. The vertices to the right of the deleted vertex that moved by the same amount to the left retain their relative distances. The rightmost of these vertices and the circuit vertex to its right that moved a smaller distance due to the non-crossing of barriers have a distance that is at most $s$. That circuit vertex and its neighbor to the right either maintain their distance (if in the same segment) or their distance increases to at most $s$ (if in different segments). Likewise, the sums of consecutive distances continue to be at least $s+1$.
This construction shows that circuits of all required sizes exist (they are obviously not unique), therefore completing the proof.
\end{proof}
We will also provide a second proof for Theorem~\ref{circuit length} which uses a purely algebraic approach to show that the circuit conditions are satisfied when the stacks are spread out as equally as possible. The proof will involve many floor and ceiling functions as the stacks are integer distances apart. We provide three useful lemmas that will aid in the algebraic proof of Theorem~\ref{circuit length}.
\begin{lemma}[Reciprocities of ceilings]\label{rec} If $x$, $y$ and $n$ are any three positive integers, then $$\ceil{\frac{n}{x}} \le y \iff \ceil{\frac{n}{y}} \le x. $$ \end{lemma}
In the proof, we will repeatedly use the fact that $\lceil x \rceil$ is a non-decreasing function and thus, for a fixed $n$, $x \le y$ implies $ \lceil\frac{n}{y}\rceil \le \lceil\frac{n}{x}\rceil$.
\begin{proof} If $n=k\cdot x$, then $\ceil{\frac{n}{x}}=k$ and $\ceil{\frac{n}{x}}=k \le y$ implies that $\lceil\frac{n}{y}\rceil \le \ceil{\frac{n}{k}} =x$. Suppose now that $n$ is not a multiple of $x$, that is, $n=k\cdot x+m$ for a positive $m < x$, and $\lceil \frac{n}{x}\rceil =k+1$. Note that $n=(k+1)x-(x-m)$, which together with $\lceil \frac{n}{x}\rceil =k+1\le y$ implies that $\lceil \frac{n}{y}\rceil \le \lceil \frac{n}{k+1}\rceil \le x$, because $x-m$ is positive. The reverse implication follows by symmetry. \end{proof}
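Although the lemma is elementary, a brute-force check over small positive integers is reassuring; a minimal Python sketch follows.
\begin{verbatim}
import math

# Brute-force check of the reciprocity of ceilings for small positive integers.
for n in range(1, 60):
    for x in range(1, 30):
        for y in range(1, 30):
            assert (math.ceil(n / x) <= y) == (math.ceil(n / y) <= x)
\end{verbatim}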
\begin{lemma}[First double floor lemma]\label{fdf} Let $s$ and $ n$ be two natural numbers with $0 < s \le n$ and \begin{equation}\label{fdfl} n=a \cdot s + b \text{ with } 0 \le b < s.\end{equation} Then $\lfloor n/{\lfloor\frac{n}{s}\rfloor}\rfloor = s$ iff $b<a$; otherwise, $\lfloor n/{\lfloor\frac{n}{s}\rfloor}\rfloor > s$. \end{lemma}
\begin{proof} From~\eqref{fdfl} we have that $\fl{\frac{n}{s}}=a$. Therefore, $$\fl{ n/{\fl{\frac{n}{s}}}} = \fl{\frac{n}{a}}= \fl{ s+\frac{b}{a}}= s+\fl{\frac{b}{a}}.$$The latter is equal to $s$ iff $b<a$, and otherwise, is bigger than $s$.
\end{proof}
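Again, the statement is easy to test numerically; the following Python sketch checks it for small parameters.
\begin{verbatim}
# Check: floor(n / floor(n/s)) equals s exactly when b < a, where n = a*s + b.
for s in range(1, 40):
    for n in range(s, 400):
        a, b = divmod(n, s)
        value = n // (n // s)
        assert (value == s) == (b < a)
        assert value >= s
\end{verbatim}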
The next lemma deals with the case where $n$ is expressed as a multiple of a non-integer.
\begin{lemma}[Second double floor lemma]\label{sdf} Let $0 \le m< n$ be two natural numbers and $f$ be a real number such that $0 \le f<1$. Define $\ell = \fl{\frac{n}{m+f}}$ and let $a$ and $b$ be the unique integers such that \begin{equation}\label{sdfl} n=a \cdot \ell + b \text{ with } 0 \le b < \ell.\end{equation} If $m=\fl{\frac{n}{\ell}} $ then $\frac{b}{\ell}\ge f$. \end{lemma}
\begin{proof} By the definition of $\ell$, we have that $n=\ell \cdot (m+f) +c$, with $0 \le c<m+f$, where $c$ is not necessarily an integer. Then, $m=\fl{\frac{n}{\ell}}=\fl{m+f+\frac{c}{\ell}}=\fl{m+\frac{\ell \cdot f+c}{\ell}}=m+\fl{\frac{\ell \cdot f+c}{\ell}}$, and therefore, $\ell\cdot f+c<\ell$. Note that we can express $n$ as a multiple of $\ell$ as $n=m\cdot\ell+(\ell \cdot f+c)$, where $(\ell \cdot f+c)$ is an integer. Because $\ell\cdot f+c<\ell$, $\ell\cdot f+c$ is the residue modulo $\ell$ of $n$, that is, $\ell \cdot f + c=b$ with $b$ as defined in \eqref{sdfl}. Thus, $\frac{b}{\ell}\ge f$.
\end{proof}
We are now ready for the alternative proof of Theorem~\ref{circuit length}.
\begin{proof} We will make precise the intuitive idea that if we want to distribute $\ell$ vertices as evenly as possible among the $n$ vertices, then the distances should be roughly $n/\ell$. Since we need integer values, the distances should be $\fl{n/\ell}$ and $\ceil{n/\ell}$. Let $a$ and $b$ be integers such that \begin{equation}\label{floorcond} n=a \cdot \ell + b \text{ with } a=\fl{\frac{n}{\ell}} \text{ and } b=n\imod{\ell}.\end{equation} By Theorem~\ref{circuit cond}, the conclusion will follow if we can distribute the $\ell$ distances (and therefore determine the $\ell$ vertices) in such a way that
\begin{enumerate} \item $d_i+d_{i+1} > s$ and \item $d_i \le s$ \end{enumerate} for $ i=1,2,\ldots,\ell$, where $d_{\ell+1}= d_1$. For a value of $\ell$ that satisfies~\eqref{circlen}, we will construct an $\ell$-subset as follows: Of the $\ell$ distances $d_1, d_2, \ldots, d_{\ell}$, we define $b$ distances to have value $a+1$ and the remaining $\ell-b$ distances to have value $a$. This assignment satisfies the condition that the sum of the distances be $n$. By assumption, $n/s \le \ell$ and $\ell$ is an integer, so $\ceil{n/s}\le \ell$ and Lemma~\ref{rec} implies that $\max(d_i)=\ceil{n/\ell}\le s$, so the second circuit condition holds for all values of $\ell$ satisfying~\eqref{circlen}.
To show the first circuit condition on the sums of consecutive distances, we consider the two cases $b=0$ and $b>0$ separately. If $b=0$, then $a=\frac{n}{\ell}$ and all vertices are distance $a$ apart. Since by assumption $\ell \le \frac{2n}{s+1}$, $\frac{s+1}{2} \le \frac{n}{\ell} = a$, and therefore, $s+1 \le 2a = d_i+d_{i+1}$, so Theorem~\ref{circuit length} follows in this case.
If $b>0$, we show that the first circuit condition holds when $\ell$ equals the upper bound $\fl{\frac{2n}{s+1}}$, and then show that the implication is also true for smaller values of $\ell$. Note that if there are at least as many distances of value $a+1$ as there are of value $a$ (that is, if $b \ge \ell - b$, or equivalently, $\ell/2 \le b$), then it is possible to order the distances in such a way that no two consecutive distances have value $a$, and $d_i + d_{i+1} \ge 2a+1$ for all $i$. Otherwise, there will be a pair of consecutive distances whose sum $d_i + d_{i+1} = 2a$ is minimal.
Now let $\ell =\fl{\frac{2n}{s+1}}$ and assume that $s$ is odd, say $s=2m+1$. Then $\ell =\fl{\frac{n}{m+1}}$ and $\min(d_i)=\fl{n/\ell}=\fl{n/\fl{\frac{n}{m+1}}}\ge m+1$ (by Lemma~\ref{fdf}), so $d_i+d_{i+1}\ge 2(m+1)=s+1>s$. In the second case when $s=2m$, then $\ell=\fl{\frac{n}{m+(1/2)}}\le \fl{\frac{n}{m}}$ and therefore, $\fl{\frac{n}{\ell}}\ge \fl{n/ \fl{\frac{n}{m}}}\ge m$, where the second inequality follows once more from Lemma~\ref{fdf}. If $\fl{\frac{n}{\ell}}\ge m+1$, then $d_i+d_{i+1}>s$ as before. If $\fl{\frac{n}{\ell}}= m$, then Lemma~\ref{sdf} (with $f=1/2$) implies that $b/\ell \ge \frac{1}{2}$, that is, $b \ge \ell/2$. But this is precisely the case where in our construction $d_i+ d_{i+1} \ge 2a+1=2\fl{\frac{n}{\ell}}+1=2m+1=s+1>s$, and therefore the first circuit condition holds when $\ell$ equals the upper bound of~\eqref{circlen}.
It remains to be shown that if the first circuit condition holds for $\ell$, then it also holds for a value $\ell'\le \ell$. Let $d'_1, d'_2, \ldots, d'_{\ell'}$ be the distances that we obtain with $\ell'$ in our construction, and let $n=a' \cdot \ell'+b'$. Then $a'=\fl{\frac{n}{\ell'}}\ge\fl{\frac{n}{\ell}}=a$, and therefore the only case we need to consider is the case for $s=2m$ and $a'=a$. Since $$n=a' \cdot \ell'+b'=a\cdot(\ell+(\ell'-\ell))+b'=a\cdot \ell+(b'-a(\ell-\ell'))=a\cdot\ell+b,$$ we have that $b'\ge b\ge \ell/2 \ge \ell'/2$, and therefore we can distribute the distances $a'$ and $a'+1$ such that $d'_i+d'_{i+1}=2a'+1=2a+1\ge 2m+1>s$, and the proof is complete.
\end{proof}
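The algebraic proof is constructive, and the construction can be checked by machine: distribute $b$ distances of value $a+1$ and $\ell-b$ of value $a$, interleaving them so that two distances of value $a$ are adjacent only when unavoidable, and test the two circuit conditions. A Python sketch follows (hypothetical helper names; a numerical check, not a replacement for the proof).
\begin{verbatim}
import math

def equal_spread_distances(n, ell):
    # b distances of value a+1 and ell-b of value a, interleaved so that two
    # distances of value a are never adjacent whenever b >= ell/2.
    a, b = divmod(n, ell)
    big, small = [a + 1] * b, [a] * (ell - b)
    dist = []
    while big or small:
        if big:
            dist.append(big.pop())
        if small:
            dist.append(small.pop())
    return dist

def satisfies_circuit_conditions(dist, s):
    m = len(dist)
    return (all(d <= s for d in dist)
            and all(dist[i] + dist[(i + 1) % m] > s for i in range(m)))

for n in range(3, 60):
    for k in range(2, n):
        s = n - k
        for ell in range(math.ceil(n / s), math.floor(2 * n / (s + 1)) + 1):
            assert satisfies_circuit_conditions(equal_spread_distances(n, ell), s)
\end{verbatim}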
\section{Open questions} We suggest a number of open questions for the interested reader to try his or her hand at. \begin{enumerate} \item General results for $\mathcal{L}_{{\rm{CN}}(n,k)}$ for specific values of $k$: We have derived general results for ${\rm{CN}}(n,1)$, ${\rm{CN}}(n,n-1)$, and ${\rm{CN}}(n,n)$, the cases where $k$ is either small or large. We also investigated general results for intermediate values of $k$, specifically ${\rm{CN}}(2m,m)$. Recall that $$\mathcal{L}_{{\rm{CN}}(4,2)}=\{(a,b,c,d)\mid a+b=c+d \wedge b+c=a+d\}\hspace{0.1in}\text{and}$$
$$\mathcal{L}_{{\rm{CN}}(6,3)}=\{(a,b,c,d,e,f)\mid a+b = d+e \wedge b+c = e+f\}.$$ A natural conjecture for ${\rm{CN}}(2m,m)$ based on $m=2,3$ could be that sums of pairs that are diagonally across from each other are the same, as indicated in Figure~\ref{diag conj}. Unfortunately, we found a counterexample for this conjecture.
\begin{figure}\label{diag conj}
\end{figure}
\item Results for $n=7$: We know next to nothing about ${\rm{CN}}(7,k)$. Recursively computed losing positions for $k=3, 4, 5$ seem to always have an empty stack. \item Use of subgame structure: It is easy to see that ${\rm{CN}}(n,k)$ contains ${\rm{CN}}(3,1)$ for $n \ge 3k$ (when $k-1$ empty stacks are followed by a non-empty stack). Can one make use of this fact (and similar subgames)? At minimum this fact indicates that the losing sets for larger values of $n$ will not be closed under addition. \item Finally, there are numerous variations on this game. Here are a few: \begin{itemize} \item Select a fixed number $a$ from at least one of the stacks; \item Select a fixed number $a$ from each of the heaps; \item Select at least $a$ tokens from each of the $k$ heaps; \item Select a total of at least $a$ tokens from the $k$ stacks; \item Select a total of exactly $a$ tokens from the $k$ stacks. \end{itemize} \end{enumerate}
\pagestyle{headings}
\end{document}
Estimates of the modular-type operator norm of the general geometric mean operator
Chang-Pao Chen and Jin-Wen Lan

Journal of Inequalities and Applications, volume 2015, Article number 347 (2015)
In this paper, the modular-type operator norm of the general geometric mean operator over spherical cones is investigated. We give two applications of a new limit process, introduced by the present authors, to the establishment of Pólya-Knopp-type inequalities. We not only partially generalize the sufficient parts of Persson-Stepanov's and Wedestig's results, but we also provide new proofs to these results.
Let E be a spherical cone in $\Bbb{R}^{n}$. By this, we mean that $E=\bigcup_{s>0} sA $ for some Borel measurable subset A of the unit sphere $\Sigma^{n-1}$. Let $\|\Bbb{K}\|_{D_{\Bbb{K}}\cap L_{\Phi}^{p}(v\,dx)\mapsto L_{\Phi}^{q}(u\,dx)}$ (in brief, $\|\Bbb{K}\|_{*}$) denote the smallest constant C in (1.1):
$$ \biggl\{ \int_{E} \bigl(\Phi\circ\Bbb{K}f(x) \bigr)^{q} u(x)\,dx \biggr\} ^{1/q} \le C \biggl\{ \int _{E} \bigl(\Phi\circ f(x) \bigr)^{p} v(x)\,dx \biggr\} ^{1/p} $$
for all $f\in D_{\Bbb{K}}\cap L_{\Phi}^{p}(v\,dx)$, where $p, q>0$, $u(x)\ge0$, $v(x)>0$, $\Phi\in CV^{+}(I)$, $\Phi\circ f(x)=\Phi(f(x))$, and $\Bbb{K}f(x)$ is of the form
$$ \Bbb{K}f(x):=\int_{\tilde{S}_{x}} k(x,t)f(t)\,dt\quad (x\in E). $$
Here $CV^{+}(I)$ denotes the set of all nonnegative convex functions defined on an open interval I in $\Bbb{R}$, $D_{\Bbb{K}}$ is the space of those f such that $\Bbb{K}f(x)$ is well defined for almost all $x\in E$, and $L_{\Phi}^{p}(v\,dx)$ is the set of all real-valued Borel measurable f with
$$\|f\|_{\Phi, p,v}:= \biggl\{ \int_{E} \bigl(\Phi\circ f(x) \bigr)^{p}v(x)\,dx \biggr\} ^{1/p}< \infty. $$
Moreover, $\tilde{S}_{x}=\bigcup_{0< s\le\|x\|} sA$, $S_{x}=\tilde{S}_{x}\setminus\| x\|A$, and $k(x,t)\ge0$ is locally integrable over $\Bbb{E}\times\Bbb{E}$.
We write $L^{p}(v\,dx)$ and $\|f\|_{p,v}$ instead of $L^{p}_{\Phi}(v\,dx)$ and $\| f\|_{\Phi,p,v}$, respectively, for the case $\Phi(s)=|s|$. We also write $L^{p}(E,v\,dx)$ for $L^{p}(v\,dx)$, whenever the integral region E is emphasized.
$$\|\Bbb{K}\|_{*}=\sup_{f} \frac{\|\Phi\circ{\Bbb{K}}f\|_{q,u}}{\|\Phi\circ f\|_{p,v}}, $$
where the supremum is taken over all $f\in D_{\Bbb{K}}\cap L_{\Phi}^{p}(v\,dx)$ with $\|\Phi\circ f\|_{p,v}\neq0$. This number reduces to the operator norm of $\Bbb{K}$ for the case $\Phi(s)=|s|$. The investigation of the value $\|\Bbb{K}\|_{*}$ has a long history in the literature. In [1], the present authors introduced a generalized Muckenhoupt constant $A_{M}(p,q)$ and established the following Muckenhoupt-type estimate for $\|\Bbb{K}\|_{*}$:
$$ \|\Bbb{K}\|_{*}\le \biggl(\frac{q}{p^{*}}+\frac{q}{\eta} \biggr)^{1/q} \biggl(1+\frac{p^{*}}{\eta} \biggr)^{\eta^{*}/(p^{*}q^{*})}A_{M}(p,q), $$
where $1\le p, q\le\infty$, $\eta=\max(p,q)$, and $(\cdot)^{*}$ is the conjugate exponent of $(\cdot)$ in the sense that $1/(\cdot)+1/(\cdot)^{*}=1$. For the particular case that
$$ \Phi(s)=|s|,\qquad k(x,t)=1, $$
there are two other types of estimates. They are
$$ \|\Bbb{K}\|_{*}\le p^{*}A_{PS}(p,q) $$
$$ \|\Bbb{K}\|_{*}\le A_{W}(p,q):=\inf_{1< s< p} A_{W}(s,p,q) \biggl(\frac {p-1}{p-s} \biggr)^{1/p^{*}}. $$
(1.5a)
These two inequalities were proved in [2] and [3], Theorem 3.1 and Lemma 7.4, for the case $1< p\le q<\infty$ (see also [4], Theorem 2.1). We refer the readers to Section 2 for details.
In this paper, we focus on the evaluation of $\|\Bbb{K}\|_{*}$ for the following case of (1.1):
$$\Phi(s)=e^{s},\qquad k(x,t)=g(t)/G(x),\qquad f(t)\longrightarrow \log f(t), $$
where $f(t)>0$, $g(t)>0$, and
$$ G(x)=\int_{\tilde{S}_{x}} g(t)\,dt\quad (x\in E). $$
The corresponding inequality to (1.1) takes the form
$$ \biggl(\int_{E} \biggl\{ \exp \biggl(\frac{1}{G(x)}\int _{\tilde{S}_{x}} g(t)\log f(t)\,dt \biggr) \biggr\} ^{q} u(x)\,dx \biggr)^{1/q}\le C \biggl\{ \int_{E} \bigl(f(x) \bigr)^{p} v(x)\,dx \biggr\} ^{1/p}, $$
which is known as the Pólya-Knopp-type inequality.
In [4], Theorem 3.1, [2, 5], and [3], Theorem 7.3, the particular case $g(t)=1$ of (1.7) was considered. They obtained the following estimates by means of the formula $(G_{\Bbb{K}}f)(x)=\lim_{\epsilon\to0^{+}} [\Bbb{K}(f^{\epsilon})]^{1/\epsilon}(x)$:
$$ \|\Bbb{K}\|_{*}\le e^{1/p}D^{*}_{PS} \quad\mbox{and}\quad \|\Bbb{K}\|_{*}\le \inf_{s>1} e^{(s-1)/p} D^{*}_{OG}(s), $$
where $0< p\le q<\infty$. The definitions of $D^{*}_{PS}$ and $D^{*}_{OG}(s)$ are given in Section 3.
The purpose of this paper is two-fold. We not only extend the aforementioned sufficient parts of [2, 4, 5], and [3] from $u(x)>0$ and $g(t)=1$ to $u(x)\ge0$ and
$$ \min \biggl(\sup_{x\in E} \bigl|g(x)\bigr|, \sup_{x\in E} \biggl| \frac{g(x)}{v(x)} \biggr| \biggr)< \infty, $$
but we also provide a new proof of (1.8) from the viewpoint of (1.10):
$$ \|\Bbb{K}\|_{*} \le\inf_{\epsilon\in\frak{F}_{\Phi}^{+}} (A_{p/\epsilon, q/\epsilon})^{1/\epsilon} \le\liminf_{\epsilon\to0^{+}} \bigl\{ (A_{p/\epsilon, q/\epsilon })^{1/\epsilon} \bigr\} , $$
where $0< p,q<\infty$, $\frak{F}_{\Phi}^{+}=\{\epsilon>0: \Phi^{\epsilon}\in CV^{+}(I)\}$, and $A_{p,q}$ are absolute constants subject to the condition
$$ \biggl(\int_{E} \bigl|\Bbb{K}f(x) \bigr|^{q} u(x)\,dx \biggr)^{1/q} \le A_{p,q} \biggl(\int_{E} \bigl|f(x)\bigr|^{p} v(x)\,dx \biggr)^{1/p} \quad(f\ge0). $$
It is clear that (1.10) is applicable to the case $\Phi(s)=e^{s}$. In this case, $\frak{F}_{\Phi}^{+}=\{\epsilon>0\}$ and the second inequality in (1.10) holds. We remark that it may not be an equality (cf. [6]). On the other hand, we have $p/\epsilon\to\infty$ and $q/\epsilon\to\infty$ as $\epsilon\to0^{+}$. This indicates that the infimum in (1.10) can be estimated by evaluating those $A_{p,q}$ with p, q large enough.
The limit process (1.10) differs from the scheme by means of the formula $(G_{\Bbb{K}}f)(x)=\lim_{\epsilon\to0^{+}} [\Bbb{K}(f^{\epsilon})]^{1/\epsilon}(x)$. It was introduced in [6] to get different types of Pólya-Knopp inequalities, including the n-dimensional extensions of the Levin-Cochran-Lee-type inequalities and Carleson's result. We showed that the infimum in (1.10) can easily be evaluated by applying the following choice of $A_{p,q}$ for $1< p,q<\infty$:
$$A_{p,q}\le \biggl(\frac{q}{p^{*}}+\frac{q}{\eta} \biggr)^{1/q} \biggl(1+\frac{p^{*}}{\eta} \biggr)^{\eta^{*}/(p^{*}q^{*})}A_{M}(p,q). $$
This choice is due to (1.3). We also pointed out that for some cases, the values of $\|\Bbb{K}\|_{*}$ obtained from (1.10) are better than the known constants in the literature. In this paper, we consider two other choices of $A_{p,q}$ with $1< p\le q<\infty$, that is, $A_{p,q}\le p^{*}\tilde{A}_{PS}(p,q)$ and $A_{p,q}\le\tilde{A}_{W}(p,q)$, which are general forms of (1.5) and (1.5a). We shall derive them from (1.5) and (1.5a) and relax the conditions on $u(x)$ and $g(t)$ from $u(x)>0$ and $g(t)=1$ to $u(x)\ge0$ and $g(t)>0$ (cf. Section 2). Based on such choices, we prove that (1.8) follows from (1.10). Moreover, (1.8) can be extended from $u(x)>0$ and $g(t)=1$ to $u(x)\ge0$ and $g(t)$ of the form (1.9). This extension gives Persson-Stepanov-type and Opic-Gurka-type estimates of the modular-type operator norm of the general geometric mean operator corresponding to $g(t)$. We remark that the particular case $g(t)=|{\tilde{S}}_{t}|^{s-1}$ can lead us to the Levin-Cochran-Lee-type inequality (see Section 3 for details).
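As a side remark, the elementary limit behind the factor $e^{1/p}$ in (1.8) is the following: since $(p/\epsilon)^{*}=p/(p-\epsilon)$ for $0<\epsilon<p$,
$$\lim_{\epsilon\to0^{+}} \bigl((p/\epsilon)^{*}\bigr)^{1/\epsilon} =\lim_{\epsilon\to0^{+}} \biggl(\frac{p}{p-\epsilon}\biggr)^{1/\epsilon} =\exp\biggl(\lim_{\epsilon\to0^{+}}\frac{-\log(1-\epsilon/p)}{\epsilon}\biggr) =e^{1/p}.$$
This computation reappears in the proofs of Section 3.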
General forms of (1.5) and (1.5a)
Let $1< p\le q<\infty$, $g(t)>0$, $u(x)\ge0$, and $v(x)>0$. Consider the inequality:
$$ \biggl(\int_{E} \biggl\{ \frac{1}{G(x)}\int _{\tilde{S}_{x}} g(t)f(t)\,dt \biggr\} ^{q} u(x)\,dx \biggr)^{1/q}\le C \biggl(\int_{E} \bigl(f(x) \bigr)^{p} v(x)\,dx \biggr)^{1/p} \quad(f\ge0), $$
where $G(x)$ is defined by (1.6). This corresponds to the case $\Phi(s)=|s|$ and $k(x,t)=g(t)/G(x)$ of (1.1). Inequality (2.1) reduces to the form (2.2) for the case $g(t)=1$:
$$ \biggl(\int_{E} \biggl\{ \int_{\tilde{S}_{x}} f(t)\,dt \biggr\} ^{q} \tilde{u}(x)\,dx \biggr)^{1/q}\le C \biggl( \int_{E} \bigl(f(x)\bigr)^{p} v(x)\,dx \biggr)^{1/p}\quad (f\ge0), $$
where $\tilde{u}(x)=u(x)/G(x)^{q}$. In [4], Theorem 2.1, [2] and [3], Lemma 7.4(a), it was proved that under the conditions $u(x)>0$ and $A_{PS}(p,q)<\infty$, (1.5) holds, in other words, (2.2) with $\tilde{u}(x)$ replaced by $u(x)$ is true for $C=p^{*}A_{PS}(p,q)$, where
$$A_{PS}(p,q):=\sup_{x\in E} \biggl(\int _{\tilde{S}_{x}} v(t)^{1-p^{*}}\,dt \biggr)^{-1/p} \biggl(\int _{\tilde{S}_{x}} \biggl\{ \int_{\tilde{S}_{t}} v(y)^{1-p^{*}}\,dy \biggr\} ^{q} u(t)\,dt \biggr)^{1/q}. $$
This result will be extended below from $g(t)=1$ and $u(x)>0$ to $g(t)>0$ and $u(x)\ge0$. We shall see its application in the proof of Theorem 3.2.
Theorem 2.1
Let $1< p\le q<\infty$, $u(x)\ge0$, $v(x)>0$, $g(t)>0$, and $0< G(x)<\infty$, where $G(x)$ is defined by (1.6). If $\tilde{A}_{PS}(p,q)<\infty$, then (2.1) holds for $C\le p^{*}\tilde{A}_{PS}(p,q)$, where
$$\tilde{A}_{PS}(p,q)=\sup_{x\in E} \biggl(\int _{\tilde{S}_{x}} \biggl(\frac{g(t)}{v(t)} \biggr)^{p^{*}}v(t)\,dt \biggr)^{\frac{-1}{p}} \biggl(\int_{\tilde{S}_{x}} \biggl\{ \frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{p^{*}}v(y)\,dy \biggr\} ^{q} u(t)\,dt \biggr)^{\frac{1}{q}}. $$
The case $u(x)>0$ follows from [4], Theorem 2.1, or [3], Lemma 7.4(a), under the following substitutions:
$$ f(t)\longrightarrow g(t)f(t),\qquad u(x)\longrightarrow\frac {u(x)}{(G(x))^{q}},\qquad v(x) \longrightarrow\frac {v(x)}{(g(x))^{p}}. $$
As for $u(x)\ge0$, let $u_{\tau}(x)=u(x)+\rho_{\tau}(x)$, where $0<\tau<1$ and $\rho_{\tau}(x)>0$ is subject to the condition
$$ \int_{\tilde{S}_{x}} \biggl\{ \frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac {g(y)}{v(y)} \biggr)^{p^{*}}v(y)\,dy \biggr\} ^{q} \rho_{\tau}(t)\,dt\le \tau \biggl\{ \int_{\tilde{S}_{x}} \biggl( \frac{g(t)}{v(t)} \biggr)^{p^{*}}v(t)\,dt \biggr\} ^{q/p}. $$
Such $\rho_{\tau}(x)$ exists. We have $u_{\tau}(x)>0$ on E. Moreover, the condition $1/q<1$ implies that $(a+b)^{1/q}\le a^{1/q}+b^{1/q}$ for all $a,b\ge0$. Putting this together with (2.4) yields
$$\begin{aligned} & \biggl(\int_{\tilde{S}_{x}} \biggl\{ \frac{1}{G(t)}\int _{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{p^{*}}v(y)\,dy \biggr\} ^{q} u_{\tau}(t)\,dt \biggr)^{1/q} \\ &\quad\le \biggl(\int_{\tilde{S}_{x}} \biggl\{ \frac{1}{G(t)}\int _{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{p^{*}} v(y)\,dy \biggr\} ^{q} u(t)\,dt \biggr)^{\frac{1}{q}}+ \tau^{\frac{1}{q}} \biggl\{ \int_{\tilde{S}_{x}} \biggl(\frac {g(t)}{v(t)} \biggr)^{p^{*}}v(t)\,dt \biggr\} ^{\frac{1}{p}}. \end{aligned}$$
This leads us to
$$ \tilde{A}_{PS}(p,q,\tau)\le\tilde{A}_{PS}(p,q)+ \tau^{1/q}< \infty, $$
where $\tilde{A}_{PS}(p,q,\tau)$ is the number obtained from $\tilde{A}_{PS}(p,q)$ by replacing $u(t)$ by $u_{\tau}(t)$. We have $u_{\tau}(x)>u(x)$ on $E$. By the result of the case $u(x)>0$, the following inequality holds for $f\ge0$:
$$\begin{aligned} & \biggl(\int_{E} \biggl\{ \frac{1}{G(x)} \int_{\tilde{S}_{x}} g(t)f(t)\,dt \biggr\} ^{q} u(x)\,dx \biggr)^{\frac{1}{q}} \\ &\quad\le \biggl(\int_{E} \biggl\{ \frac{1}{G(x)}\int _{\tilde{S}_{x}} g(t)f(t)\,dt \biggr\} ^{q} u_{\tau}(x)\,dx \biggr)^{\frac{1}{q}} \\ &\quad\le p^{*}\tilde{A}_{PS}(p,q,\tau) \biggl(\int_{E} \bigl(f(x)\bigr)^{p} v(x)\,dx \biggr)^{\frac{1}{p}}. \end{aligned}$$
It follows from (2.5) that $\liminf_{\tau\to0^{+}} \tilde{A}_{PS}(p,q,\tau)\le\tilde{A}_{PS}(p,q)$. Putting this together with (2.6) yields the desired inequality. The proof is complete. □
Next, consider (1.5a). The number $A_{W}(s,p,q)$ in (1.5a) is defined by the formula:
$$\begin{aligned} A_{W}(s,p,q)=\sup_{x\in E} \biggl(\int _{\tilde{S}_{x}} v(t)^{1-p^{*}}\,dt \biggr)^{\frac{s-1}{p}} \biggl(\int _{E\setminus S_{x}} \biggl\{ \int_{\tilde{S}_{t}} v(y)^{1-p^{*}}\,dy \biggr\} ^{\frac{q(p-s)}{p}} u(t)\,dt \biggr)^{\frac{1}{q}}. \end{aligned}$$
In [3], Lemma 7.4(b), $A_{W}(s,p,q)$ is replaced by another notation $A^{*}_{W}(s)$. Like (1.5), (1.5a) can be generalized in the following way, in which $g(t)=1$ and $u(x)>0$ are relaxed to $g(t)>0$ and $u(x)\ge0$. We shall see its application in the proof of Theorem 3.3.
Let $1< p\le q<\infty$, $u(x)\ge0$, $v(x)>0$, $g(t)>0$, and $0< G(x)<\infty$, where $G(x)$ is defined by (1.6). If $\tilde{A}_{W}(s,p,q)<\infty$ for some $1< s< p$, then (2.1) holds for $C\le\tilde{A}_{W}(p,q)$, where
$$ \tilde{A}_{W}(p,q):=\inf_{1< s< p} \tilde{A}_{W}(s,p,q) \biggl(\frac {p-1}{p-s} \biggr)^{1/p^{*}} $$
$$\begin{aligned} \tilde{A}_{W}(s,p,q)={}&\sup_{x\in E} \biggl(\int_{\tilde{S}_{x}} \biggl(\frac{g(t)}{v(t)} \biggr)^{p^{*}}v(t)\,dt \biggr)^{\frac {s-1}{p}} \\ &{}\times \biggl(\int_{E\setminus S_{x}} \biggl\{ \int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{p^{*}}v(y)\,dy \biggr\} ^{\frac{q(p-s)}{p}} \frac{u(t)\,dt}{(G(t))^{q}} \biggr)^{\frac{1}{q}}. \end{aligned}$$
The case $u(x)>0$ follows from [3], Lemma 7.4(b), under the substitutions (2.3). For the case $u(x)\ge0$, we modify the proof of Theorem 2.1 in the following way. Let $1< s< p$ and $0<\tau<1$. Set $u_{\tau}(x,s)=u(x)+\rho_{\tau}(x,s)$, where $\rho_{\tau}(x,s)>0$ and satisfies the condition
$$\begin{aligned} &\int_{E\setminus S_{x}} \biggl\{ \int_{\tilde{S}_{t}} \biggl( \frac {g(y)}{v(y)} \biggr)^{p^{*}}v(y)\,dy \biggr\} ^{\frac{q(p-s)}{p}} \frac{\rho_{\tau}(t,s)}{(G(t))^{q}}\,dt \le \tau \biggl(\frac{p-1}{p-s} \biggr)^{\frac{-q}{p^{*}}} \biggl\{ \int _{\tilde{S}_{x}} \biggl(\frac{g(t)}{v(t)} \biggr)^{p^{*}}v(t)\,dt \biggr\} ^{\frac{q(1-s)}{p}}. \end{aligned}$$
Such $\rho_{\tau}(x,s)$ exists. We have $u_{\tau}(x,s)>0$ on $x\in E$. Moreover,
$$ \tilde{A}^{\tau}_{W}(s,p,q)\le\tilde{A}_{W}(s,p,q)+ \tau^{1/q} \biggl(\frac {p-1}{p-s} \biggr)^{-1/p^{*}}, $$
where $\tilde{A}^{\tau}_{W}(s,p,q)$ is obtained from $\tilde{A}_{W}(s,p,q)$ by making the change in (2.8): $u(t)\longrightarrow u_{\tau}(t,s)$. Obviously, $u_{\tau}(x,s)>u(x)$. Applying the preceding result of the case $u(x)>0$ to $u_{\tau}(x,s)$, we get
$$\begin{aligned} & \biggl(\int_{E} \biggl\{ \frac{1}{G(x)} \int_{\tilde{S}_{x}} g(t)f(t)\,dt \biggr\} ^{q} u(x)\,dx \biggr)^{1/q} \\ &\quad\le \biggl(\int_{E} \biggl\{ \frac{1}{G(x)}\int _{\tilde{S}_{x}} g(t)f(t)\,dt \biggr\} ^{q} u_{\tau}(x,s)\,dx \biggr)^{1/q} \\ &\quad\le \biggl\{ \inf_{1< s'< p} \tilde{A}^{\tau}_{W} \bigl(s',p,q\bigr) \biggl(\frac{p-1}{p-s'} \biggr)^{1/p^{*}} \biggr\} \biggl(\int_{E} \bigl(f(x)\bigr)^{p} v(x)\,dx \biggr)^{1/p} \\ &\quad\le\tilde{A}^{\tau}_{W}(s,p,q) \biggl(\frac{p-1}{p-s} \biggr)^{1/p^{*}} \biggl(\int_{E} \bigl(f(x) \bigr)^{p} v(x)\,dx \biggr)^{1/p}. \end{aligned}$$
Taking '$\inf_{1< s< p}$' for both sides of (2.10), we get
$$ \biggl(\int_{E} \biggl\{ \frac{1}{G(x)}\int _{\tilde{S}_{x}} g(t)f(t)\,dt \biggr\} ^{q} u(x)\,dx \biggr)^{1/q}\le\tilde{A}^{\tau}_{W}(p,q) \biggl(\int _{E} \bigl(f(x)\bigr)^{p} v(x)\,dx \biggr)^{1/p}. $$
$$\tilde{A}^{\tau}_{W}(p,q)=\inf_{1< s< p} \tilde{A}^{\tau}_{W}(s,p,q) \biggl(\frac {p-1}{p-s} \biggr)^{1/p^{*}}. $$
From (2.9), we obtain $\tilde{A}^{\tau}_{W}(p,q)\le \tilde{A}_{W}(p,q)+\tau^{1/q}$. Taking $\tau\to0^{+}$ for both sides of (2.11), we get the desired inequality. This completes the proof. □
Extensions and new proofs of (1.8)
To derive the extensions of (1.8), we need the following lemma.
Lemma 3.1
Let $0< p<\infty$, $v(x)>0$, $g(t)>0$, and $0< G(x)<\infty$, where $G(x)$ is defined by (1.6). If $\sup_{x\in E} \{g(x)/v(x)\}<\infty$, then, for all $t\in E$,
$$ \lim_{\epsilon\to0^{+}} \biggl(\frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon}{p-\epsilon}}g(y)\,dy \biggr)^{\frac{1}{\epsilon}} = \biggl\{ \exp \biggl(\frac{1}{G(t)}\int_{\tilde{S}_{t}}g(y) \biggl( \log\frac{g(y)}{v(y)} \biggr)\,dy \biggr) \biggr\} ^{\frac{1}{p}}. $$
Let $\alpha\ge\sup_{x\in E} \{g(x)/v(x)\}$. Without loss of generality, we may assume $\alpha>1$. We first consider the case that $\int_{\tilde{S}_{t}}g(y) |\log (\frac{g(y)}{v(y)} ) |\,dy<\infty$. Let
$$h(\epsilon)=\frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl( \frac {g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy \quad(0\le \epsilon< p/2). $$
$$\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy \le \alpha^{\epsilon/(p-\epsilon)}G(t)< \infty, $$
so $h(\epsilon)$ is well defined and has a finite value. For $\epsilon \in[0,p/2)$ and $0<\tau<\min(p/2-\epsilon,\epsilon)$, it follows from the mean value theorem that
$$\begin{aligned} \frac{h(\epsilon+\tau)-h(\epsilon)}{\tau}&=\frac{1}{G(t)}\int_{\tilde{S}_{t}} \frac{1}{\tau}\biggl\{ \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac {\epsilon+\tau}{p-\epsilon-\tau}}- \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon}{p-\epsilon}} \biggr\} g(y)\,dy \\ &=\frac{p}{G(t)}\int_{\tilde{S}_{t}} \frac{1}{(p-\epsilon _{0})^{2}} \biggl( \frac{g(y)}{v(y)} \biggr)^{\epsilon_{0}/(p-\epsilon _{0})}g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy, \end{aligned}$$
where $\epsilon_{0}:=\epsilon_{0}(y)$ lies between ϵ and $\epsilon+\tau$. We know that
$$\frac{\chi_{\tilde{S}_{t}}(y)}{(p-\epsilon_{0})^{2}} \biggl(\frac {g(y)}{v(y)} \biggr)^{\epsilon_{0}/(p-\epsilon_{0})}g(y) \biggl|\log \biggl(\frac{g(y)}{v(y)} \biggr) \biggr|\le\frac{\alpha\chi_{\tilde{S}_{t}}(y)g(y)}{(p-\epsilon)^{2}} \biggl|\log \biggl( \frac {g(y)}{v(y)} \biggr) \biggr| \in L^{1}(E,dy). $$
By (3.2) and the Lebesgue dominated convergence theorem, h is differentiable on $[0, p/2)$. In addition,
$$h'(\epsilon)=\lim_{\tau\to0^{+}}\frac{h(\epsilon+\tau)-h(\epsilon)}{\tau}= \frac{p}{(p-\epsilon)^{2}G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy. $$
Thus,
$$\begin{aligned} &\lim_{\epsilon\to0^{+}}\log \biggl(\frac{1}{G(t)}\int _{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy \biggr)^{1/\epsilon} \\ &\quad=\lim_{\epsilon\to0^{+}}\frac{\log h(\epsilon)-\log h(0)}{\epsilon}\\ &\quad=\frac{d}{d\epsilon} \bigl(\log h(\epsilon) \bigr) \Big|_{\epsilon=0} = \frac{h'(0)}{h(0)}=\frac{1}{pG(t)}\int_{\tilde{S}_{t}}g(y) \biggl( \log\frac{g(y)}{v(y)} \biggr)\,dy. \end{aligned}$$
We get the desired result for the case $\int_{\tilde{S}_{t}} g(y) |\log (\frac{g(y)}{v(y)} ) |\,dy<\infty$. Next, consider the case $\int_{\tilde{S}_{t}} g(y) |\log (\frac {g(y)}{v(y)} ) |\,dy=\infty$. This implies
$$ \infty=\int_{\Omega_{1}} g(y) \biggl|\log \biggl(\frac {g(y)}{v(y)} \biggr) \biggr|\,dy +\int_{\Omega_{2}} g(y) \biggl|\log \biggl(\frac{g(y)}{v(y)} \biggr) \biggr|\,dy, $$
where $\Omega_{1}=\{y\in\tilde{S}_{t}: g(y)/v(y)\le1\}$ and $\Omega_{2}=\{y\in \tilde{S}_{t}: g(y)/v(y)> 1\}$. We have
$$\int_{\Omega_{2}}g(y) \biggl|\log \biggl(\frac{g(y)}{v(y)} \biggr) \biggr|\,dy \le(\log\alpha) G(t)< \infty. $$
Combining this with (3.3), we find that $\int_{\Omega_{1}}g(y) |\log (\frac{g(y)}{v(y)} ) |\,dy=\infty$. This leads us to
$$\int_{\tilde{S}_{t}}g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy= - \int_{\Omega_{1}}g(y) \biggl|\log \biggl(\frac{g(y)}{v(y)} \biggr) \biggr|\,dy+ \int_{\Omega_{2}} g(y) \biggl|\log \biggl(\frac{g(y)}{v(y)} \biggr) \biggr|\,dy=- \infty. $$
We shall show
$$\lim_{\epsilon\to0^{+}} \biggl(\frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy \biggr)^{1/\epsilon} =0. $$
If so, the desired equality follows. Let $0<\epsilon<p/2$ and $y\in \tilde{S}_{t}$. By the mean value theorem, we get
$$\biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}-1=\frac {\epsilon p}{(p-\epsilon_{0})^{2}} \biggl( \frac{g(y)}{v(y)} \biggr)^{\epsilon _{0}/(p-\epsilon_{0})} \biggl(\log\frac{g(y)}{v(y)} \biggr) $$
for some $\epsilon_{0}\in(0, \epsilon)$. This implies
$$\begin{aligned} &\frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl( \frac {g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy \\ &\quad=1+ \biggl(\frac {\epsilon p}{G(t)}\int_{\tilde{S}_{t}}\frac{1}{(p-\epsilon_{0})^{2}} \biggl(\frac {g(y)}{v(y)} \biggr)^{\epsilon_{0}/(p-\epsilon_{0})}g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy \biggr). \end{aligned}$$
By Fatou's lemma, we get
$$\begin{aligned} &\liminf_{\epsilon\to0^{+}}\frac{p}{G(t)}\int_{\tilde{S}_{t}} \frac{1}{(p-\epsilon_{0})^{2}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon_{0}/(p-\epsilon_{0})}g(y)\biggl\vert \log \biggl(\frac {g(y)}{v(y)} \biggr)\biggr\vert \,dy \\ &\quad\ge\frac{1}{pG(t)}\int_{\tilde{S}_{t}} \biggl\{ \liminf _{\epsilon\to0^{+}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon_{0}/(p-\epsilon_{0})} \biggr\} g(y)\biggl\vert \log \biggl(\frac{g(y)}{v(y)} \biggr)\biggr\vert \,dy \\ &\quad=\frac{1}{pG(t)}\int_{\tilde{S}_{t}}g(y)\biggl\vert \log \biggl( \frac {g(y)}{v(y)} \biggr)\biggr\vert \,dy=\infty. \end{aligned}$$
Like (3.3), decompose the integral $\int_{\tilde{S}_{t}} (\cdots)$ as the sum $\int_{\Omega_{1}} (\cdots) +\int_{\Omega_{2}} (\cdots)$. For the $\Omega_{2}$ term, we have
$$\begin{aligned} &\frac{p}{G(t)}\int_{\Omega_{2}}\frac{1}{(p-\epsilon_{0})^{2}} \biggl( \frac {g(y)}{v(y)} \biggr)^{\epsilon_{0}/(p-\epsilon_{0})}g(y)\biggl\vert \log \biggl( \frac{g(y)}{v(y)} \biggr)\biggr\vert \,dy \\ &\quad\le\frac{4\alpha\log\alpha}{pG(t)}\int_{\Omega_{2}} g(y)\,dy\le \frac{4\alpha\log\alpha}{p}< \infty, \end{aligned}$$
which implies
$$\lim_{\epsilon\to0^{+}}\frac{p}{G(t)}\int_{\tilde{S}_{t}} \frac{1}{(p-\epsilon_{0})^{2}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon_{0}/(p-\epsilon_{0})}g(y) \biggl( \log\frac{g(y)}{v(y)} \biggr)\,dy=-\infty. $$
From (3.4) and the fact that $\lim_{\epsilon\to 0}(1+\epsilon\theta)^{1/\epsilon}=e^{\theta}$ for any $\theta\in\Bbb{R}$, we get
$$\limsup_{\epsilon\to0^{+}} \biggl(\frac{1}{G(t)}\int _{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy \biggr)^{1/\epsilon} \le\limsup_{\epsilon\to0^{+}} (1+\epsilon \theta )^{1/\epsilon}=e^{\theta} $$
for any $\theta<0$. Letting $\theta\to-\infty$, we get the desired result. □
Lemma 3.1 may be false for the case that $\sup_{x\in E} g(x)/v(x)=\infty $. A counterexample is given as follows. Consider $n=1$, $t=1$, $g(t)=1$, and $v(x)=\sum_{m=2}^{\infty}e^{-m}\chi_{(\frac{1}{m}-\frac{1}{m^{3}},\frac{1}{m}]}(x)+\chi_{\Bbb{R}\setminus\bigcup_{m\ge2} (\frac{1}{m}-\frac{1}{m^{3}}, \frac{1}{m}]}(x)$. We have
$$\int_{0}^{1} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy= \int_{0}^{1} v(y)^{\epsilon/(\epsilon-p)}\,dy\ge\sum_{m=2}^{\infty}\frac{1}{m^{3}}e^{\frac{m\epsilon}{p-\epsilon}}=\infty \quad(0< \epsilon< p/2) $$
$$\int_{0}^{1} g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy=\int_{0}^{1}\log\frac{1}{v(y)}\,dy=\sum _{m=2}^{\infty}\frac{1}{m^{2}}< \infty. $$
From these, we know that (3.1) is false for this example.
Now, we go back to the investigation of the first part of (1.8). Set
$$\tilde{D}_{PS}:=\sup_{x\in E}\frac{1}{G(x)^{\frac{1}{p}}} \biggl( \int_{\tilde{S}_{x}} \biggl\{ \exp \biggl(\frac{1}{G(t)}\int _{\tilde{S}_{t}}g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy \biggr) \biggr\} ^{\frac{q}{p}}u(t)\,dt \biggr)^{\frac{1}{q}}, $$
where $G(x)$ is defined by (1.6). The case $g(t)=1$ of $\tilde{D}_{PS}$ reduces to $D^{*}_{PS}$ mentioned in (1.8). We shall establish the following result, which extends the first inequality in (1.8) from $u(x)>0$ and $g(t)=1$ to $u(x)\ge0$ and those $g(t)$ subject to the condition (1.9). This extension gives the Persson-Stepanov-type estimate of the modular-type operator norm of the general geometric mean operator corresponding to $g(t)$. In particular, $g(t)$ can be of the form $g(t)=|{\tilde{S}}_{t}|^{s-1}$. An elementary calculation of this case will lead us to the Levin-Cochran-Lee-type inequality. We leave such a calculation to the readers. Our result partially generalizes the sufficient parts of [4], Theorem 3.1, [2], and [3], Theorem 7.3(a).
Let $0< p\le q<\infty$, $u(x)\ge0$, $v(x)>0$, $g(t)>0$, and $0< G(x)<\infty$, where $G(x)$ is defined by (1.6). If (1.9) is true and $\tilde{D}_{PS}<\infty$, then (1.7) holds for $C\le e^{1/p}\tilde{D}_{PS}$.
Let $\Phi(s)=e^{s}$, $k(x,t)=g(t)/G(x)$, and $f(t)\longrightarrow\log f(t)$. With these choices, proving (1.7) with $C\le e^{1/p}\tilde{D}_{PS}$ amounts to proving that $\|\mathbb{K}\|_{*}\le e^{1/p}\tilde{D}_{PS}$. We first assume that $\sup_{x\in E} \{g(x)/v(x)\}<\infty$. Consider the case that $u$ is bounded on $\tilde{\Omega}_{r}$ and $u(x)=0$ on $E\setminus\tilde{\Omega}_{r}$, where $r\ge1$ and $\tilde{\Omega}_{r}=\{x\in E: 1/r\le\|x\|\le r\}$. By (1.10)-(1.11) and Theorem 2.1, we know that
$$ \|\mathbb{K}\|_{*}\le\liminf_{\epsilon\to0^{+}} \bigl((p/\epsilon)^{*}\tilde{A}_{PS}(p/\epsilon,q/\epsilon) \bigr)^{1/\epsilon}, $$
provided that the term $(\cdots)^{1/\epsilon}$ in (3.5) is finite for all sufficiently small $\epsilon>0$. By an elementary calculation, we obtain $\lim_{\epsilon\to0^{+}} ((p/\epsilon)^{*} )^{1/\epsilon}=\lim_{\epsilon\to0^{+}} (\frac{p}{p-\epsilon } )^{1/\epsilon}=e^{1/p}$. On the other hand, let $0<\epsilon<p$. Then $p/\epsilon>1$ and $q/\epsilon>1$. Moreover, we have $(p/\epsilon)^{*}=p/(p-\epsilon)$, so
$$\biggl(\frac{g(t)}{v(t)} \biggr)^{(p/\epsilon)^{*}}v(t)= \biggl(\frac {g(t)}{v(t)} \biggr)^{p/(p-\epsilon)}v(t)= \biggl(\frac {g(t)}{v(t)} \biggr)^{\epsilon/(p-\epsilon)}g(t). $$
It follows from the definition of $\tilde{A}_{PS}(p/\epsilon,q/\epsilon)$ that
$$ \begin{aligned}[b] \bigl(\tilde{A}_{PS}(p/\epsilon,q/\epsilon) \bigr)^{1/\epsilon} ={}&\sup_{x\in E} \biggl(\int _{\tilde{S}_{x}} \biggl(\frac{g(t)}{v(t)} \biggr)^{\epsilon/(p-\epsilon)}g(t)\,dt \biggr)^{-1/p}\\ &{}\times \biggl(\int_{\tilde{S}_{x}} \biggl\{ \frac{1}{G(t)}\int _{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy \biggr\} ^{q/\epsilon}u(t)\,dt \biggr)^{1/q}. \end{aligned} $$
We have assumed that $u(x)=0$ on $E\setminus\tilde{\Omega}_{r}$. Moreover, for $t\in\tilde{S}_{x}$, we have
$$\begin{aligned} \frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon/(p-\epsilon)}g(y)\,dy&\le \biggl\{ \sup_{y\in\tilde{S}_{x}} \biggl( \frac{g(y)}{v(y)} \biggr) \biggr\} ^{\epsilon/(p-\epsilon)} \biggl\{ \frac{1}{G(t)} \int _{\tilde{S}_{t}} g(y)\,dy \biggr\} \\ &= \biggl\{ \sup_{y\in\tilde{S}_{x}} \biggl(\frac{g(y)}{v(y)} \biggr) \biggr\} ^{\epsilon/(p-\epsilon)}. \end{aligned}$$
These imply
$$\begin{aligned} \bigl(\tilde{A}_{PS}(p/\epsilon,q/\epsilon) \bigr)^{1/\epsilon} \le{}&\biggl(\int_{\tilde{B}_{1/r}} \biggl( \frac{g(t)}{v(t)} \biggr)^{\epsilon/(p-\epsilon)}g(t)\,dt \biggr)^{-1/p} \\ &{}\times \biggl\{ \sup_{y\in E} \biggl(\frac {g(y)}{v(y)} \biggr) \biggr\} ^{1/(p-\epsilon)} \biggl(\int_{\tilde{\Omega}_{r}} u(t)\,dt \biggr)^{1/q}< \infty, \end{aligned}$$
where $\tilde{B}_{\rho}=\{x\in E: \|x\|\le\rho\}$. The above argument guarantees the validity of (3.5). We now estimate the limit inferior in (3.5). It suffices to show that
$$ \liminf_{\epsilon\to0^{+}} \bigl(\tilde{A}_{PS}(p/\epsilon,q/ \epsilon) \bigr)^{1/\epsilon}\le\tilde{D}_{PS}. $$
Clearly, the term $(\int_{\tilde{S}_{x}} (\cdots) )^{-1/p}$ in (3.6) becomes larger whenever $x$ with $\|x\|>r$ is replaced by $rx/\|x\|$. Moreover, the term $(\int_{\tilde{S}_{x}} \{\cdots\} ^{q/\epsilon}u(t)\,dt )^{1/q}$ in (3.6) is zero for $\|x\|<1/r$ and remains unchanged when $x$ with $\|x\|>r$ is replaced by $rx/\|x\|$. Hence, the term '$\sup_{x\in E}$' in (3.6) can be replaced by '$\sup_{x\in\tilde{\Omega}_{r}}$'. By the Heine-Borel theorem, we can choose $0<\epsilon_{m}<p/2$, $\alpha_{m}>0$, and $x_{0}, x_{m}\in\tilde{\Omega}_{r}$ such that $\epsilon_{m}\to0$, $\alpha_{m}\to0$, $x_{m}\to x_{0}$, and the following inequality holds for all $m$:
$$\begin{aligned} &\bigl(\tilde{A}_{PS}(p/\epsilon_{m},q/ \epsilon_{m}) \bigr)^{1/\epsilon_{m}} \\ &\quad\le \biggl(\int _{\tilde{S}_{x_{m}}} \biggl(\frac{g(t)}{v(t)} \biggr)^{\epsilon_{m} /(p-\epsilon _{m})}g(t)\,dt \biggr)^{-1/p} \\ &\qquad{}\times \biggl(\int_{\tilde{S}_{x_{m}}} \biggl\{ \frac{1}{G(t)}\int _{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon_{m}/(p-\epsilon _{m})}g(y)\,dy \biggr\} ^{q/\epsilon_{m}}u(t)\,dt \biggr)^{1/q}+\alpha_{m}. \end{aligned}$$
Moreover, for $0<\epsilon_{m}<p/2$ and $x_{m}\in\tilde{\Omega}_{r}$, we have
$$\begin{aligned} \biggl|\chi_{\tilde{S}_{x_{m}}}(t) \biggl(\frac{g(t)}{v(t)} \biggr)^{\epsilon_{m}/(p-\epsilon_{m})}g(t) \biggr| \le \chi_{\tilde{B}_{r}}(t) \biggl\{ \sup_{y\in E} \biggl( \frac{g(y)}{v(y)} \biggr)+1 \biggr\} g(t) \in L^{1}(E,dt) \quad(m=1,2,\ldots). \end{aligned}$$
By the Lebesgue dominated convergence theorem, we infer that
$$\begin{aligned} &\lim_{m\to\infty} \biggl(\int_{\tilde{S}_{x_{m}}} \biggl(\frac{g(t)}{v(t)} \biggr)^{\epsilon_{m}/(p-\epsilon_{m})}g(t)\,dt \biggr)^{-1/p} \\ &\quad= \biggl(\int_{\tilde{S}_{x_{0}}} \lim_{m\to\infty} \biggl\{ \biggl(\frac{g(t)}{v(t)} \biggr)^{\epsilon_{m}/(p-\epsilon_{m})} \biggr\} g(t)\,dt \biggr)^{-1/p}=\bigl(G(x_{0})\bigr)^{-1/p}. \end{aligned}$$
Similarly, the hypotheses on $u(t)$ and $g(t)/v(t)$ imply
$$\begin{aligned} & \biggl|\chi_{\tilde{S}_{x_{m}}}(t) \biggl\{ \frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon_{m}/(p-\epsilon_{m})}g(y)\,dy \biggr\} ^{q/\epsilon_{m}}u(t) \biggr| \\ &\quad\le\chi_{\tilde{B}_{r}}(t) \biggl\{ \sup_{y\in E} \biggl( \frac {g(y)}{v(y)} \biggr) \biggr\} ^{q/(p-\epsilon_{m})} \biggl\{ \frac{1}{G(t)}\int _{\tilde{S}_{t}}g(y)\,dy \biggr\} ^{q/\epsilon_{m}}u(t) \\ &\quad\le\chi_{\tilde{B}_{r}}(t) \biggl\{ \sup_{y\in E} \biggl( \frac {g(y)}{v(y)} \biggr)+1 \biggr\} ^{2q/p}u(t)\in L^{1}(E,dt). \end{aligned}$$
Applying the Lebesgue dominated convergence theorem again, it follows from Lemma 3.1 that
$$\begin{aligned} &\lim_{m\to\infty} \biggl(\int_{\tilde{S}_{x_{m}}} \biggl\{ \frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon_{m}/(p-\epsilon_{m})}g(y)\,dy \biggr\} ^{q/\epsilon_{m}}u(t)\,dt \biggr)^{1/q} \\ &\quad= \biggl(\int_{\tilde{S}_{x_{0}}} \lim_{m\to\infty} \biggl\{ \frac{1}{G(t)}\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\epsilon_{m}/(p-\epsilon_{m})}g(y)\,dy \biggr\} ^{q/\epsilon_{m}}u(t)\,dt \biggr)^{1/q} \\ &\quad= \biggl(\int_{\tilde{S}_{x_{0}}} \biggl\{ \exp \biggl(\frac{1}{G(t)} \int_{\tilde{S}_{t}} g(y) \biggl(\log \frac{g(y)}{v(y)} \biggr)\,dy \biggr) \biggr\} ^{q/p}u(t)\,dt \biggr)^{1/q}. \end{aligned}$$
Putting (3.9)-(3.11) together yields (3.8). This finishes the proof for those $u$ and $v$ satisfying the restrictions stated above. We now return to the case $u\ge0$ and $\sup_{x\in E} \{ g(x)/v(x)\}<\infty$. Let $u_{r}(x)=\min\{u(x),r\}\chi_{\tilde{\Omega}_{r}}(x)$, where $r=1,2,\ldots $ . By the preceding result,
$$\begin{aligned} & \biggl(\int_{E} \biggl\{ \exp \biggl( \frac{1}{G(x)}\int_{\tilde{S}_{x}} g(t)\log f(t)\,dt \biggr) \biggr\} ^{q}u_{r}(x)\,dx \biggr)^{1/q} \\ &\quad\le e^{1/p}\tilde{D}_{PS}(r) \biggl(\int_{E} \bigl(f(x)\bigr)^{p} v(x)\,dx \biggr)^{1/p}\quad (f>0), \end{aligned}$$
where
$$\tilde{D}_{PS}(r)=\sup_{x\in E}\bigl(G(x) \bigr)^{-\frac{1}{p}} \biggl(\int_{\tilde{S}_{x}} \biggl\{ \exp \biggl( \frac{1}{ G(t)}\int_{\tilde{S}_{t}}g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy \biggr) \biggr\} ^{\frac{q}{p}}u_{r}(t)\,dt \biggr)^{\frac{1}{q}}. $$
We have $u_{r}(t)\le u(t)$, so $\tilde{D}_{PS}(r)\le\tilde{D}_{PS}$. Replacing $\tilde{D}_{PS}(r)$ in (3.12) by $\tilde{D}_{PS}$ first and then applying the monotone convergence theorem to (3.12), we get the desired inequality for this case.
Next, we deal with the case $\sup_{x\in E} g(x)<\infty$. Let $v_{\ell}(x)=v(x)+1/\ell$, where $\ell=1,2,\ldots$ . Then $\sup_{x\in E} \{g(x)/v_{\ell}(x)\}<\infty$ for each $\ell$. By the preceding result,
$$\begin{aligned} & \biggl(\int_{E} \biggl\{ \exp \biggl( \frac{1}{G(x)}\int_{\tilde{S}_{x}} g(t)\log f(t)\,dt \biggr) \biggr\} ^{q}u(x)\,dx \biggr)^{1/q} \\ &\quad\le e^{1/p}\tilde{D}^{\ell}_{PS} \biggl(\int _{E} \bigl(f(x)\bigr)^{p} v_{\ell}(x)\,dx \biggr)^{1/p}\quad (f>0), \end{aligned}$$
where
$$\tilde{D}^{\ell}_{PS}=\sup_{x\in E} \frac{1}{(G(x))^{\frac{1}{p}}} \biggl(\int_{\tilde{S}_{x}} \biggl\{ \exp \biggl( \frac{1}{ G(t)}\int_{\tilde{S}_{t}}g(y)\log \biggl(\frac {g(y)}{v_{\ell}(y)} \biggr)\,dy \biggr) \biggr\} ^{\frac{q}{p}}u(t)\,dt \biggr)^{\frac{1}{q}}. $$
We have $v_{\ell}(x)\ge v(x)$, so $\tilde{D}^{\ell}_{PS}\le\tilde{D}_{PS}$. This says that (3.13) can be replaced by (3.14):
$$\begin{aligned} & \biggl(\int_{E} \biggl\{ \exp \biggl( \frac{1}{G(x)}\int_{\tilde{S}_{x}} g(t)\log f(t)\,dt \biggr) \biggr\} ^{q}u(x)\,dx \biggr)^{1/q} \\ &\quad\le e^{1/p}\tilde{D}_{PS} \biggl(\int_{E} \bigl(f(x)\bigr)^{p} v_{\ell}(x)\,dx \biggr)^{1/p}\quad (f>0). \end{aligned}$$
We now claim that $v_{\ell}(x)$ in (3.14) can be replaced by $v(x)$. Without loss of generality, we may assume $\int_{E} (f(x))^{p}v(x)\,dx <\infty$. Set
$$f_{r}(x)=\chi_{\tilde{B}_{r}}(x)\min\bigl(f(x),r\bigr)+ \chi_{E\setminus\tilde{B}_{r}}(x)h(x)\quad (r=1,2,\ldots), $$
where $\tilde{B}_{\rho}$ is defined as before and $h:E\to(0,\infty)$ is chosen so that
$$h(x)\le\min\bigl(f(x),1\bigr) \quad\mbox{and} \quad\int_{E} \bigl(h(x)\bigr)^{p}v_{1}(x)\,dx< \infty. $$
Replacing $f$ in (3.14) by $f_{r}$, we get
$$\begin{aligned} & \biggl(\int_{E} \biggl\{ \exp \biggl( \frac{1}{G(x)}\int_{\tilde{S}_{x}}g(t)\log f_{r}(t)\,dt \biggr) \biggr\} ^{q}u(x)\,dx \biggr)^{1/q} \\ &\quad\le e^{1/p}\tilde{D}_{PS} \biggl(\int_{E} \bigl(f_{r}(x)\bigr)^{p}v_{\ell}(x)\,dx \biggr)^{1/p}. \end{aligned}$$
For each r, we have
$$\begin{aligned} \int_{E} \bigl(f_{r}(x)\bigr)^{p}v_{1}(x)\,dx&= \int_{\tilde{B}_{r}} \bigl(\min\bigl(f(x),r\bigr) \bigr)^{p}v_{1}(x)\,dx+ \int_{E\setminus\tilde{B}_{r}} \bigl(h(x)\bigr)^{p}v_{1}(x)\,dx \\ &\le\int_{E} \bigl(f(x)\bigr)^{p}v(x)\,dx+\int _{\tilde{B}_{r}} r^{p}\,dx+\int_{E} \bigl(h(x)\bigr)^{p}v_{1}(x)\,dx< \infty \end{aligned}$$
and $|f_{r}(x)|^{p}v_{\ell}(x)\le(f_{r}(x))^{p}v_{1}(x)$ for $\ell=1, 2,\ldots$ . Applying the Lebesgue dominated convergence theorem to the right hand side of (3.15), we get
$$ \begin{aligned}[b] &\biggl(\int_{E} \biggl\{ \exp \biggl( \frac{1}{G(x)}\int_{\tilde{S}_{x}}g(t)\log f_{r}(t)\,dt \biggr) \biggr\} ^{q}u(x)\,dx \biggr)^{1/q}\\ &\quad\le e^{1/p}\tilde{D}_{PS} \biggl(\int_{E} \bigl(f_{r}(x)\bigr)^{p}v(x)\,dx \biggr)^{1/p}. \end{aligned} $$
By definition, $f_{r}(x)\uparrow f(x)$ as $r\to\infty$. Applying the monotone convergence theorem to both sides of (3.16), the right hand side tends to
$$e^{1/p}\tilde{D}_{PS} \biggl(\int_{E} \bigl(f(x)\bigr)^{p}v(x)\,dx \biggr)^{1/p} \quad(\mbox{as } r\to\infty) $$
and the left hand side has the limit
$$ \biggl(\int_{E} \biggl\{ \exp \biggl(\frac{1}{G(x)}\lim _{r\to\infty} \int_{\tilde{S}_{x}}g(t)\log f_{r}(t)\,dt \biggr) \biggr\} ^{q} u(x)\,dx \biggr)^{1/q}. $$
Let $x\in E$. Since $\int_{\tilde{S}_{x}}g(t)\log f(t)\,dt $ is well defined, the following equality makes sense:
$$\int_{\tilde{S}_{x}}g(t)\log f(t)\,dt=\int_{\tilde{S}_{x}}g(t) \bigl(\log f(t) \bigr)^{+}\,dt-\int_{\tilde{S}_{x}}g(t) \bigl(\log f(t) \bigr)^{-}\,dt, $$
where $\xi^{+} =\max(\xi,0)$ and $\xi^{-}=\max(-\xi,0)$. Consider $r\ge \max(\|x\|,1)$. By the monotone convergence theorem,
$$\begin{aligned} \int_{\tilde{S}_{x}}g(t)\log f_{r}(t)\,dt&=\int _{\tilde{S}_{x}}g(t)\log \bigl\{ \min\bigl(f(t),r\bigr) \bigr\} \,dt \\ &=\int_{\tilde{S}_{x}}g(t)\min \bigl(\bigl(\log f(t)\bigr)^{+},\log r \bigr)\,dt- \int_{\tilde{S}_{x}}g(t) \bigl(\log f(t) \bigr)^{-}\,dt \\ &\longrightarrow\int_{\tilde{S}_{x}}g(t) \bigl(\log f(t) \bigr)^{+}\,dt- \int_{\tilde{S}_{x}}g(t) \bigl(\log f(t) \bigr)^{-}\,dt=\int _{\tilde{S}_{x}}g(t)\log f(t)\,dt. \end{aligned}$$
Inserting this limit in (3.17) yields the desired inequality. This finishes the proof. □
Theorem 3.2 gives a new proof of [3], Theorem 7.3(a). In the following, we display another example to show how (1.10) can be used to obtain an estimate of Opic-Gurka type. Set
$$\begin{aligned} \tilde{D}_{OG}(s):={}&\sup_{x\in E}\bigl(G(x) \bigr)^{\frac{s-1}{p}} \\ &{}\times \biggl(\int_{E\setminus S_{x}}\bigl(G(t)\bigr)^{\frac {-sq}{p}} \biggl\{ \exp \biggl(\frac{1}{G(t)}\int_{\tilde{S}_{t}}g(y) \biggl(\log \frac{g(y)}{v(y)} \biggr)\,dy \biggr) \biggr\} ^{\frac{q}{p}}u(t)\,dt \biggr)^{\frac{1}{q}}, \end{aligned}$$
where $G(x)$ is defined by (1.6). The number $D^{*}_{OG}(s)$ in (1.8) is just the case $g(t)=1$ of $\tilde{D}_{OG}(s)$. In the following, we shall extend the second inequality in (1.8) from $u(x)>0$ and $g(t)=1$ to $u(x)\ge0$ and those $g(t)$ subject to the condition (1.9). This extension gives the Opic-Gurka-type estimate of the modular-type operator norm of the general geometric mean operator corresponding to $g(t)$. In particular, $g(t)$ can be of the form $g(t)=|{\tilde{S}}_{t}|^{s-1}$, which leads us to the Levin-Cochran-Lee-type inequality. Our result partially generalizes the sufficient parts of [5] and [3], Theorem 7.3(b).
Theorem 3.3 Let $0< p\le q<\infty$, $u(x)\ge0$, $v(x)>0$, $g(t)>0$, and $0< G(x)<\infty$, where $G(x)$ is defined by (1.6). If (1.9) is true and $\tilde{D}_{OG}(s)<\infty$ for some $s>1$, then (1.7) holds for $C\le\inf_{s>1}e^{(s-1)/p}\tilde{D}_{OG}(s)$.
Proof Let $\Phi(s)=e^{s}$, $k(x,t)=g(t)/G(x)$, and $f(t)\longrightarrow\log f(t)$. The proof is similar to that of Theorem 3.2. We shall show that $\|\mathbb{K}\|_{*}\le \inf_{s>1}e^{(s-1)/p}\tilde{D}_{OG}(s)$. Inspecting the proof of Theorem 3.2, we see that it suffices to prove this inequality in the case where $u$ is bounded on $\tilde{\Omega}_{r}$, $u(x)=0$ on $E\setminus\tilde{\Omega}_{r}$, and $\sup_{x\in E}\{g(x)/v(x)\}<\infty$, where $\tilde{\Omega}_{r}$ is defined in the proof of Theorem 3.2. It follows from (1.10)-(1.11) and Theorem 2.2 that
$$\begin{aligned} \|\mathbb{K}\|_{*} &\le\inf_{0< \epsilon< p} \bigl(\tilde{A}_{W}(p/\epsilon,q/\epsilon) \bigr)^{1/\epsilon} \\ &= \inf_{0< \epsilon< p} \biggl\{ \inf_{1< s< p/\epsilon} \biggl( \frac{p-\epsilon}{p-\epsilon s} \biggr)^{1/\epsilon-1/p} \bigl(\tilde{A}_{W}(s,p/ \epsilon,q/\epsilon ) \bigr)^{1/\epsilon} \biggr\} \\ &\le\inf_{s>1} \biggl\{ \liminf_{\epsilon\to 0^{+}} \biggl(\frac{p-\epsilon}{p-\epsilon s} \biggr)^{1/\epsilon-1/p} \bigl(\tilde{A}_{W}(s,p/ \epsilon,q/\epsilon) \bigr)^{1/\epsilon} \biggr\} . \end{aligned}$$
For $s>1$, we have $\lim_{\epsilon\to0^{+}} (\frac{p-\epsilon}{p-\epsilon s} )^{1/\epsilon-1/p}=e^{(s-1)/p}$. We shall prove
$$\liminf_{\epsilon\to0^{+}} \bigl(\tilde{A}_{W}(s,p/\epsilon,q/ \epsilon ) \bigr)^{1/\epsilon}\le \tilde{D}_{OG}(s). $$
If so, the desired inequality follows from (3.18). Let $0<\epsilon<p/s$. We have
$$\begin{aligned} \bigl(\tilde{A}_{W}(s,p/\epsilon,q/\epsilon) \bigr)^{1/\epsilon}={}&\sup_{x\in E} \biggl(\int _{\tilde{S}_{x}} \biggl(\frac{g(t)}{v(t)} \biggr)^{\frac {\epsilon}{p-\epsilon}}g(t)\,dt \biggr)^{\frac{s-1}{p}} \\ &{}\times \biggl(\int_{E\setminus S_{x}} \biggl\{ \int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon}{p-\epsilon}}g(y)\,dy \biggr\} ^{\frac{q(p-\epsilon s)}{\epsilon p}} \frac {u(t)\,dt}{(G(t))^{q/\epsilon}} \biggr)^{1/q}. \end{aligned}$$
The term '$(\int_{\tilde{S}_{x}} (\cdots) )^{\frac{s-1}{p}}$' in (3.19) increases in $\|x\|$. On the other hand, the term '$(\int_{E\setminus S_{x}} \{\cdots \}^{q(p-\epsilon s)/(\epsilon p)}\frac{u(t)\,dt}{(G(t))^{q/\epsilon}} )^{1/q}$' in (3.19) is zero for $\|x\|>r$ and remains unchanged when $x$ with $\|x\|<1/r$ is replaced by $(1/r)x/\|x\|$. These imply that the term '$\sup_{x\in E}$' in (3.19) can be replaced by '$\sup_{x\in\tilde{\Omega}_{r}}$'. By the Heine-Borel theorem, we can choose $0<\epsilon_{m}<p/s$, $\alpha_{m}>0$, and $x_{0}, x_{m}\in\tilde{\Omega}_{r}$ such that $\epsilon_{m}\to0$, $\alpha_{m}\to0$, $x_{m}\to x_{0}$, and the following inequality holds for all $m$:
$$\begin{aligned} &\bigl(\tilde{A}_{W}(s,p/\epsilon_{m},q/ \epsilon_{m}) \bigr)^{1/\epsilon_{m}} \\ &\quad\le \biggl(\int_{\tilde{S}_{x_{m}}} \biggl(\frac{g(t)}{v(t)} \biggr)^{\frac{\epsilon _{m}}{p-\epsilon_{m}}}g(t)\,dt \biggr)^{\frac{s-1}{p}} \\ &\qquad{}\times \biggl(\int_{E\setminus S_{x_{m}}} \biggl\{ \int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}}g(y)\,dy \biggr\} ^{\frac{q(p-\epsilon _{m} s)}{\epsilon_{m} p}} \frac {u(t)\,dt}{(G(t))^{q/\epsilon_{m}}} \biggr)^{1/q}+\alpha_{m}. \end{aligned}$$
For the first integral in (3.20), we have
$$\begin{aligned} &\lim_{m\to\infty} \biggl(\int_{\tilde{S}_{x_{m}}} \biggl(\frac {g(t)}{v(t)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}}g(t)\,dt \biggr)^{\frac{s-1}{p}} \\ &\quad= \biggl(\int_{\tilde{S}_{x_{0}}} \lim_{m\to\infty} \biggl\{ \biggl(\frac{g(t)}{v(t)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}} \biggr\} g(t)\,dt \biggr)^{\frac{s-1}{p}}=\bigl(G(x_{0})\bigr)^{\frac{s-1}{p}}. \end{aligned}$$
As for the second integral, it follows from Lemma 3.1 that
$$\begin{aligned} & \biggl(\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}}g(y)\,dy \biggr)^{\frac{q(p-\epsilon_{m} s)}{\epsilon_{m} p}}\frac{1}{(G(t))^{q/\epsilon_{m}}} \\ &\quad= \biggl(\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}}g(y)\,dy \biggr)^{-qs/p} \biggl(\frac{1}{G(t)}\int _{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon _{m}}}g(y)\,dy \biggr)^{q/\epsilon_{m}} \\ &\quad\longrightarrow\bigl(G(t)\bigr)^{\frac{-qs}{p}} \biggl\{ \exp \biggl( \frac{1}{G(t)}\int_{\tilde{S}_{t}} g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy \biggr) \biggr\} ^{\frac{q}{p}} \quad\mbox{as } m\to\infty. \end{aligned}$$
Moreover, for $m$ large enough,
$$\begin{aligned} & \biggl|\chi_{E\setminus S_{x_{m}}}(t) \biggl(\int_{\tilde{S}_{t}} \biggl( \frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}}g(y)\,dy \biggr)^{\frac{q(p-\epsilon_{m} s)}{\epsilon_{m} p}} \frac{u(t)}{(G(t))^{q/\epsilon_{m}}} \biggr| \\ &\quad\le \biggl\{ \sup_{y\in E} \biggl(\frac{g(y)}{v(y)} \biggr)+1 \biggr\} ^{q/p}\chi_{\tilde{\Omega}_{r}}(t)G(t)^{-qs/p}u(t)\in L^{1}(E, dt). \end{aligned}$$
Integrating the left hand side of (3.22) with respect to $u(t)\,dt$ first and then applying the Lebesgue dominated convergence theorem, we obtain
$$\begin{aligned} &\lim_{m\to\infty} \biggl(\int_{E\setminus S_{x_{m}}} \biggl(\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}}g(y)\,dy \biggr)^{\frac{q(p-\epsilon_{m} s)}{\epsilon_{m} p}}\frac{u(t)\,dt}{(G(t))^{q/\epsilon_{m}}} \biggr)^{1/q} \\ &\quad= \biggl(\int_{E\setminus S_{x_{0}}}\lim_{m\to\infty} \biggl\{ \frac{1}{(G(t))^{q/\epsilon_{m}}} \biggl(\int_{\tilde{S}_{t}} \biggl(\frac{g(y)}{v(y)} \biggr)^{\frac{\epsilon_{m}}{p-\epsilon_{m}}}g(y)\,dy \biggr)^{\frac{q(p-\epsilon_{m} s)}{\epsilon_{m} p}} \biggr\} u(t)\,dt \biggr)^{1/q} \\ &\quad= \biggl(\int_{E\setminus S_{x_{0}}} \frac{1}{(G(t))^{qs/p}} \biggl\{ \exp \biggl( \frac{1}{G(t)}\int_{\tilde{S}_{t}} g(y) \biggl(\log\frac{g(y)}{v(y)} \biggr)\,dy \biggr) \biggr\} ^{\frac{q}{p}}u(t)\,dt \biggr)^{\frac{1}{q}}. \end{aligned}$$
Putting (3.20), (3.21), and (3.23) together yields the desired inequality. This finishes the proof. □
For other estimates of Hardy-type inequalities, we may use a similar limit process to Theorems 3.2 and 3.3 to get the corresponding Pólya-Knopp inequalities.
1. Chen, C-P, Lan, J-W, Luor, D-C: The best constants for multidimensional modular inequalities over spherical cones. Linear Multilinear Algebra 62(5), 683-713 (2014). doi:10.1080/03081087.2013.777438
2. Persson, L-E, Stepanov, VD: Weighted integral inequalities with the geometric mean operator. J. Inequal. Appl. 7(5), 727-746 (2002)
3. Wedestig, A: Weighted Inequalities of Hardy-Type and Their Limiting Inequalities. Dissertation, Luleå University of Technology, Luleå (2003)
4. Gupta, B, Jain, P, Persson, L-E, Wedestig, A: Weighted geometric mean inequalities over cones in $\Bbb{R}^{N}$. J. Inequal. Pure Appl. Math. 4(4), 68 (2003)
5. Opic, B, Gurka, P: Weighted inequalities for geometric means. Proc. Am. Math. Soc. 120(3), 771-779 (1994)
6. Chen, C-P, Lan, J-W, Luor, D-C: Multidimensional extensions of Pólya-Knopp-type inequalities over spherical cones (to appear in MIA)
The first author was supported in part by the Ministry of Science and Technology, Taipei, ROC, under Grants Most103-2115-M-364-001 and Most104-2115-M-364-001. We express our gratitude to Professor Lars-Erik Persson and the reviewers for their valuable comments on the final version of the article.
Center for General Education, Hsuan Chuang University, Hsinchu, 30092, Taiwan, Republic of China
Chang-Pao Chen
Municipal Jianguo High School, Taipei, 10066, Taiwan, Republic of China
Jin-Wen Lan
Correspondence to Chang-Pao Chen.
All authors contributed equally in drafting this manuscript and giving the main proofs. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords: operator norm; integral operator; Hardy-Knopp-type inequalities; Pólya-Knopp-type inequalities
\begin{document}
\mainmatter
\title{Extended Formulations in Mixed-integer Convex Programming}
\titlerunning{Extended Formulations in Mixed-integer Convex Programming}
\author{Miles Lubin\inst{1} \and Emre Yamangil\inst{2} \and Russell Bent\inst{2} \and Juan Pablo Vielma\inst{1}}
\authorrunning{Lubin, Yamangil, Bent, Vielma}
\institute{Massachusetts Institute of Technology, Cambridge, MA, USA\\ \and Los Alamos National Laboratory, Los Alamos, NM, USA }
\maketitle
\begin{abstract} We present a unifying framework for generating extended formulations for the polyhedral outer approximations used in algorithms for mixed-integer convex programming (MICP). Extended formulations lead to fewer iterations of outer approximation algorithms and generally faster solution times. First, we observe that all MICP instances from the MINLPLIB2 benchmark library are conic representable with standard symmetric and nonsymmetric cones. Conic reformulations are shown to be effective extended formulations themselves because they encode separability structure. For mixed-integer conic-representable problems, we provide the first outer approximation algorithm with finite-time convergence guarantees, opening a path for the use of conic solvers for continuous relaxations. We then connect the popular modeling framework of disciplined convex programming (DCP) to the existence of extended formulations independent of conic representability. We present evidence that our approach can yield significant gains in practice, with the solution of a number of open instances from the MINLPLIB2 benchmark library.
\end{abstract}
\section{Introduction}
Mixed-integer convex programming (MICP) is the class of problems where one seeks to minimize a convex objective function subject to convex constraints and integrality restrictions on the variables. MICP is less general than mixed-integer nonlinear programming (MINLP), where the objective and constraints may be nonconvex, but unlike the latter, one can often develop finite-time algorithms to find a global solution. These finite-time algorithms depend on convex nonlinear programming (NLP) solvers to solve continuous subproblems. MICP, also called \textit{convex MINLP}, has broad applications and is supported in various forms by both academic solvers like Bonmin~\cite{Bonmin} and SCIP~\cite{scip} and commercial solvers like KNITRO~\cite{knitro}; see Bonami et al.~\cite{BonamiReview,MahajanReview} for a review.
The most straightforward approach for MICP is NLP-based branch and bound, an extension of the branch and bound algorithm for mixed-integer linear programming (MILP) where a convex NLP relaxation is solved at each node of the branch and bound tree~\cite{Gupta85}. However, driven by the availability of effective solvers for linear programming (LP) and MILP, it was observed in the early 1990s by Leyffer and others~\cite{SvenThesis} that it is often more effective to avoid solving NLP relaxations when possible in favor of solving polyhedral relaxations using MILP. Polyhedral relaxations form the basis of the majority of the existing solvers recently reviewed and benchmarked by Bonami et al.~\cite{BonamiReview}.
While traditional MICP approaches construct polyhedral approximations in the original space of variables, a number of authors have considered introducing auxiliary variables and forming a polyhedral approximation in a higher dimensional space~\cite{Baron,Hijazi,VielmaExtendedFormulations,MustafaThesis}. Such constructions are called \textit{extended formulations} or \textit{lifted formulations}, the motivation for which is the fact that the projection of these polyhedra onto the original space can provide a higher quality approximation than one built from scratch in the original space. Tawarmalani and Sahinidis~\cite{Baron} propose, in the context of nonconvex MINLP, extended formulations for compositions of functions. For MICP, Hijazi et al.~\cite{Hijazi} demonstrate the effectiveness of extended formulations in the special case where all nonlinear functions can be written as a sum of \textit{univariate} convex functions. Their method obtains promising speed-ups over Bonmin on the instances which exhibit this structure. Hijazi et al. generated these extended formulations by hand, and no subsequent work has proposed techniques for off-the-shelf MICP solvers to detect and exploit separability. Building on these results, Vielma et al.~\cite{VielmaExtendedFormulations} propose extended formulations for second-order cones. These extended formulations improved solution times for mixed-integer second-order cone programming (MISOCP) over state of the art commercial solvers CPLEX and Gurobi quite significantly; both solvers adopted the technique within a few months after its publication.
A major contribution of this work is to propose a new, unifying framework for generating extended formulations for the polyhedral outer approximations used in MICP algorithms. This framework generalizes the work of Hijazi et al.~\cite{Hijazi} which was specialized for separable problems to include all MICPs whose objective and constraints can be expressed in closed algebraic form. We begin in Section~\ref{sec:conicextended} by considering conic representability. While many MICP instances are representable by using MISOCP, reformulation to MISOCP has not been widely adopted, and MICP is still considered a significantly more general form. We demonstrate that with the introduction of the nonsymmetric exponential and power cones, surprisingly, all convex instances in the MINLPLIB2 benchmark library~\cite{MINLPLIB} are representable by a combination of these nonsymmetric cones and the second-order cone. We discuss how the conic-form representation of a problem is itself a strong extended formulation. Hence, the guideline to ``just solve the conic form problem'' is surprisingly effective.
We note that conic-form problems have modeling strength beyond that of smooth MICP, in particular for handling of nonsmooth perspective functions useful in disjunctive convex programming~\cite{perspective}. With the recent development of conic solvers supporting nonsymmetric cones~\cite{akle,SCS}, it may be advantageous to use these solvers over derivative-based NLP solvers, in which case the standard convergence theory for outer approximation algorithms no longer applies. In Section~\ref{sec:conicoa}, we present the first finite-time outer approximation algorithm applicable to mixed-integer conic programming with any closed, convex cones (symmetric and nonsymmetric), so long as conic duality holds in strong form. This algorithm extends the work of Drewes and Ulbrich~\cite{MISOCPOA} for MISOCP with a much simpler and more general proof.
In Section~\ref{sec:dcp}, we generalize the idea of extended formulations through conic representability by considering the modeling framework of \textit{disciplined convex programming} (DCP)~\cite{DCP}, a popular modeling paradigm for convex optimization which has so far received little notice in the MICP realm. In DCP, convex expressions are specified in an algebraic form such that convexity can be verified by simple composition rules. We establish a 1-1 connection between these rules for verifying convexity and the existence of extended formulations. Hence, all MICPs expressed in mixed-integer disciplined convex programming (MI\textbf{D}CP) form have natural extended formulations regardless of conic representability. This view has connections with techniques for nonconvex MINLP, where it is already common practice to construct extended outer approximations based on the algebraic representation of the problem~\cite{MahajanReview}.
In our computational experiments, we translate MICP problems from the MINLPLIB2 benchmark library into MIDCP form and demonstrate significant gains from the use of extended formulations, including the solution of a number of open instances. Our open-source solver, \textit{Pajarito}, is the first solver specialized for MIDCP and is accessible through existing DCP modeling languages.
\section{Extended formulations and conic representability}\label{sec:conicextended}
We state a generic mixed-integer convex programming problem as \begin{align}\label{eq:micvx} \operatorname{minimize}_{x,y}\,\, & f(x,y) \notag \\ \text{subject to } & g_j(x,y) \le 0 \quad \forall j \in J,\tag{MICONV}\\ &L \le x \le U,\quad x \in \mathbb{Z}^n, y \in \mathbb{R}^{p}_+, \notag \end{align} \noindent where the set $J$ indexes the nonlinear constraints, the functions $f, g_j : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R} \cup \{\infty \}$ are convex, and the vectors $L$ and $U$ are finite bounds on $x$. Without loss of generality, when convenient, we may assume that the objective function $f$ is linear (via epigraph reformulation~\cite{BonamiReview}).
Vielma et al.~\cite{VielmaExtendedFormulations} discuss the motivation for extended formulations in MICP: many successful MICP algorithms use polyhedral outer approximations of nonlinear constraints, and polyhedral outer approximations in a higher dimensional space can often be much stronger than approximations in the original space. Hijazi et al.~\cite{Hijazi} give an example of an approximation of an $\ell_2$ ball in $\mathbb{R}^n$ which requires $2^n$ tangent hyperplanes in the original space to prove that the intersection of the ball with the integer lattice is in fact empty. By exploiting the summation structure in the definition of the $\ell_2$ ball, \cite{Hijazi} demonstrate that an extended formulation requires only $2n$ hyperplanes to prove an empty intersection. More generally, \cite{Hijazi,Baron} propose to reformulate constraints with separable structure $\sum_{k=1}^q g_k(x_k) \le 0,$ where $g_k : \mathbb{R} \to \mathbb{R}$ are \textit{univariate} convex functions by introducing auxiliary variables $t_k$ and imposing the constraints \begin{gather}\label{eq:hijazi} \sum\nolimits_{k=1}^q t_k \le 0, g_k(x_k) \le t_k \forall\, k. \end{gather}
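To make this concrete, consider the ball example above; the following is only an illustration of~\eqref{eq:hijazi}, not a new formulation. A constraint $\sum_{k=1}^n x_k^2 \le r^2$ can be rewritten with auxiliary variables as
\begin{equation*}
\sum\nolimits_{k=1}^n t_k \le r^2, \qquad x_k^2 \le t_k \quad \forall\, k,
\end{equation*}
so that a polyhedral outer approximation only needs gradient cuts of the form $t_k \ge 2\bar{x}_k x_k - \bar{x}_k^2$ on the univariate epigraphs $x_k^2 \le t_k$, rather than tangent hyperplanes to the $n$-dimensional ball.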
A consistent theme in this paper is the \textit{representation} of the convex functions $f$ and $g_j,\; \forall j\in J$ in~\eqref{eq:micvx}. Current MICP solvers require continuous differentiability of the nonlinear functions and access to black-box oracles for querying the values and derivatives of each at any given point $(x,y)$. The difficulty in the reformulation~\eqref{eq:hijazi} is that the standard representation~\eqref{eq:micvx} does not encode the necessary information, since separability is an algebraic property which is not detectable given only oracles to evaluate function values and derivatives. As such, we are not aware of any off-the-shelf MICP solver which exploits this special-case structure, despite the promising experimental results of~\cite{Hijazi}.
In this section, we consider the equally general, yet potentially more useful, representation of~\eqref{eq:micvx} as a mixed-integer conic programming problem: \begin{align}\label{eq:miconic} \min_{x,z} \quad& c^Tz\notag\\ \text{s.t. } & A_xx + A_zz = b\tag{MICONE}\\ & L \le x \le U, x \in \mathbb{Z}^n, z \in \mathcal{K},\notag \end{align} where $\mathcal{K}\subseteq \mathbb{R}^k$ is a closed convex cone. Without loss of generality, we assume integer variables are not restricted to cones, since we may introduce corresponding continuous variables by equality constraints. The representation of~\eqref{eq:micvx} as~\eqref{eq:miconic} is equally as general in the sense that given a convex function $f$, we can define a closed convex cone $\mathcal{K}_f = \operatorname{cl}\{ (x,y,\gamma,t) : \gamma f(x/\gamma,y/\gamma) \le t, \gamma > 0 \}$ where $\operatorname{cl} S$ is defined as the closure of a set $S$. Using this, we can reformulate \eqref{eq:micvx} to the equivalent optimization problem \begin{align}\label{eq:miconic_all} \min \quad& t_f \notag\\ \text{s.t. } & t_j + s_j = 0 \quad \forall j \in J,\\ &\gamma_f = 1, x = x_f, y = y_f,\notag\\ &\gamma_j = 1, x = x_j, y = y_j, \forall j \in J,\notag\\ & L \le x \le U, x \in \mathbb{Z}^n, y \in \mathbb{R}^p_+,\notag\\ &(x_f,y_f,\gamma_f,t_f) \in \mathcal{K}_f\notag,\\ &(x_j,y_j,\gamma_j,t_j) \in \mathcal{K}_{g_j}, s_j \in \mathbb{R}_+ \quad\forall j \in J.\notag \end{align}
The problem~\eqref{eq:miconic_all} is in the form of~\eqref{eq:miconic} with $\mathcal{K} = \mathbb{R}^{n+|J|}_+ \times \mathcal{K}_f \times \mathcal{K}_{g_1} \times \cdots \times \mathcal{K}_{g_{|J|}}$. Such a tautological reformulation is not particularly useful, however. What \textit{is} useful is a reformulation of~\eqref{eq:micvx} into~\eqref{eq:miconic} where the cone $\mathcal{K}$ is a product $\mathcal{K}_1 \times \mathcal{K}_2
\times \cdots \times \mathcal{K}_r$, where each $\mathcal{K}_i$ is one of a small number of recognized cones, such as the positive orthant $\mathbb{R}^n_+$, the second-order cone $SOC_n = \{ (t,x) \in \mathbb{R}^n : ||x|| \le t \}$,
the exponential cone, $EXP = \operatorname{cl}\{ (x,y,z) \in \mathbb{R}^3: y\exp(x/y) \le z, y > 0 \}$, and the power cone (given $0 < \alpha < 1$), $POW_\alpha = \{ (x,y,z) \in \mathbb{R}^3 : |z| \le x^\alpha y^{1-\alpha}, x \ge 0, y \ge 0\}$.
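As a simple illustration of these definitions (standard identities in conic modeling, recorded here only for convenience), smooth constraints such as $x^2 \le t$ and $e^x \le t$ admit the conic representations
\begin{equation*}
x^2 \le t \;\Longleftrightarrow\; (t+1,\, 2x,\, t-1) \in SOC_3, \qquad
e^x \le t \;\Longleftrightarrow\; (x,\, 1,\, t) \in EXP,
\end{equation*}
where the first equivalence follows by expanding $\|(2x,t-1)\| \le t+1$ and the second by taking $y=1$ in the definition of $EXP$.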
The question of which functions can be represented by second-order cones has been well studied~\cite{SOCPApplications,lectures}. More recently, a number of authors have considered nonsymmetric cones, in particular the exponential cone, which can be used to model logarithms, entropy, logistic regression, and geometric programming~\cite{akle}, and the power cone, which can be used to model $p$-norms and powers~\cite{powercone}.
The folklore within the conic optimization community is that almost all convex constraints which arise in practice are representable by using these cones\footnote{\url{http://erlingdandersen.blogspot.com/2010/11/which-cones-are-needed-to-represent.html}}, in addition to the positive semidefinite cone which we do not consider here. To substantiate this claim, we classified the 333 MICP instances in MINLPLIB2 according to their conic representability and found that \textit{all} of the instances are conic representable; see Table~\ref{tab:conic}.
\begin{table}[t]
\centering
\begin{tabular}{c|c|c|c|c|c}
SOC only & EXP only & SOC and EXP & POW only & Not representable & Total \\
\hline
217 & 107 & 7 & 2 & 0 & 333
\end{tabular}
\caption{A categorization of the 333 MICP instances in the MINLPLIB2 library according to conic representability. Over two thirds are pure MISOCP problems and nearly one third is representable by using the exponential (EXP) cone alone. All instances are representable by using standard cones. }
\label{tab:conic}
\end{table}
While solvers for SOC-constrained problems (SOCPs) are mature and commercially supported, the development of effective and reliable algorithms for handling exponential cones and power cones is an emerging, active research area~\cite{akle,SCS}. Nevertheless, we claim that the conic view of~\eqref{eq:micvx} is useful \textit{even lacking} reliable solvers for continuous conic relaxations.
As a motivating example, we consider the trimloss~\cite{Harjunkoski} (\texttt{tls}) instances from MINLPLIB2, a convex formulation of the cutting stock problem. These instances are notable as some of the few unsolved instances in the benchmark library and also because they exhibit a separability structure more general than what can be handled by Hijazi et al.~\cite{Hijazi}.
The trimloss instances have constraints of the form \begin{equation}\label{eq:tls} \sum\nolimits_{k=1}^q -\sqrt{x_k y_k} \le c^Tz + b, \end{equation} where $x,y,z$ are arbitrary variables unrelated to the previous notation in this section. Harjunkoski et al.~\cite{Harjunkoski} obtain these constraints from a clever reformulation of nonconvex bilinear terms. The function $-\sqrt{xy}$ is the negative of the geometric mean of $x$ and $y$. It is convex for nonnegative $x$ and $y$ and its epigraph $E = \{(t,x,y) : -\sqrt{xy} \le t, x \ge 0, y \ge 0 \}$ is representable as an affine transformation of the three-dimensional second-order cone $SOC_3$~\cite{lectures}. A conic formulation for~\eqref{eq:tls} is constructed by introducing an auxiliary variable for each term in the sum plus a slack variable, resulting in the following constraints: \begin{equation}\label{eq:tls1} \sum\nolimits_{k=1}^q t_k +s = c^Tz + b, (t_k,x_k,y_k) \in E\,\, \forall k, \text{ and } s \in \mathbb{R}_+. \end{equation} Equation~\eqref{eq:tls1} provides an \textit{extended} formulation of the constraint~\eqref{eq:tls}, that is, an equivalent formulation using additional variables.
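One concrete way (among several equivalent ones) to pass from~\eqref{eq:tls1} to an explicit MISOCP constraint is to introduce $w_k \ge 0$ and impose
\begin{equation*}
c^Tz + b + \sum\nolimits_{k=1}^q w_k \ge 0, \qquad (x_k + y_k,\; x_k - y_k,\; 2w_k) \in SOC_3 \quad \forall\, k,
\end{equation*}
using the identity $(x+y,\,x-y,\,2w) \in SOC_3 \Leftrightarrow w^2 \le xy,\ x \ge 0,\ y \ge 0$; projecting out the $w_k$ (take $w_k = \sqrt{x_k y_k}$) recovers~\eqref{eq:tls}.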
If we take the MINLPLIB2 library to be representative, then conic structure using standard cones exists in the overwhelming majority of MICP problems in practice. This observation calls for considering~\eqref{eq:miconic} as a standard form of MICP, one which is perhaps more useful for computation than~\eqref{eq:micvx} precisely because it is an extended formulation which encodes separability structure in a natural and general way. There is a large body of work and computational infrastructure for automatically generating the conic-form representation given an algebraic representation, a discussion we defer to Section~\ref{sec:dcp}.
The benefits of reformulation from~\eqref{eq:micvx} to~\eqref{eq:miconic} are quite tangible in practice. By direct reformulation from MICP to MISOCP, we were able to solve to global optimality the trimloss \texttt{tls5} and \texttt{tls6} instances from MINLPLIB2 by using Gurobi 6.0\footnote{Solutions reported to Stefan Vigerske, October 5, 2015}. These instances from this public benchmark library had been unsolved since 2001, perhaps indicating that the value of conic formulations is not widely known.
\section{An outer-approximation algorithm for mixed-integer conic programming}\label{sec:conicoa}
Although the conic representation~\eqref{eq:miconic} does not preclude the use of derivative-based solvers for continuous relaxations, derivative-based nonlinear solvers are typically not appropriate for conic problems because the nonlinear constraints which define the standard cones have points of nondifferentiability~\cite{Noam}. Sometimes the nondifferentiability is an artifact of the conic reformulation (e.g., of smooth functions $x^2$ and $\exp(x)$), but in a number of important cases the nondifferentiability is intrinsic to the model and provides additional modeling power. Nonsmooth perspective functions, for example, which are used in disjunctive convex programming, have been particularly challenging for derivative-based MICP solvers and have motivated smooth approximations~\cite{perspective}. On the other hand, conic form can handle these nonsmooth functions in a natural way, so long as there is a solver capable of solving the continuous conic relaxations.
There is a growing body of work as well as some (so far) experimental solvers supporting mixed second-order and exponential cone problems~\cite{akle,SCS}, which opens the door for considering conic solvers in place of derivative-based solvers. To the best of our knowledge, however, no outer-approximation algorithm or finite-time convergence theory has been proposed for general mixed-integer conic programming problems of the form~\eqref{eq:miconic}.
In this section, we present the first such algorithm for~\eqref{eq:miconic} with arbitrary closed, convex cones. This algorithm generalizes the work of Drewes and Ulbrich~\cite{MISOCPOA} for MISOCP with a much simpler proof based on conic duality. In stating this algorithm, we hope to motivate further development of conic solvers for cones beyond the second-order and positive semidefinite cones.
We begin with the definition of dual cones.
\begin{definition} Given a cone $\mathcal{K}$, we define $\mathcal{K}^* := \{ \beta \in \mathbb{R}^k : \beta^Tz \ge 0 \,\, \forall z \in \mathcal{K}\}$ as the dual cone of $\mathcal{K}$. \end{definition}
Dual cones provide an equivalent outer description of any closed, convex cone, as the following lemma states. We refer readers to~\cite{lectures} for the proof.
\begin{lemma} Let $\mathcal{K}$ be a closed, convex cone. Then $z \in \mathcal{K}$ iff $z^T\beta \ge 0\,\, \forall \beta \in \mathcal{K}^*$. \end{lemma} Based on the above lemma, we will consider an outer approximation of~\eqref{eq:miconic}: \begin{align}\label{eq:mioa} \min_{x,z} \quad& c^Tz\notag\\ \text{s.t. } & A_xx + A_zz = b\tag{MIOA(T)}\\ & L \le x \le U, x \in \mathbb{Z}^n,\notag\\ &\beta^Tz \ge 0\,\, \forall \beta \in T.\notag \end{align}
Note that if $T = \mathcal{K}^*$, \ref{eq:mioa} is an equivalent semi-infinite representation of~\eqref{eq:miconic}. If $T \subset \mathcal{K}^*$ and $|T| < \infty$ then~\ref{eq:mioa} is an MILP outer approximation of~\eqref{eq:miconic} whose objective value is a lower bound on the optimal value of~\eqref{eq:miconic}.
The outer approximation (OA) algorithm is based on iteratively building up $T$ until convergence in a finite number of steps to the optimal solution. First, we define the continuous subproblem for fixed integer value $\hat x$ which plays a key role in the OA algorithm: \begin{align}\label{eq:conic_cont} v_{\hat x} = \min_{z} \quad& c^Tz\notag\\ \text{s.t. } & A_zz = b - A_x \hat x\tag{$CP(\hat x)$},\\ &z \in \mathcal{K}\notag. \end{align} The dual of~\eqref{eq:conic_cont} is \begin{align}\label{eq:conic_cont_dual} \max_{\beta,\lambda} \quad& \lambda^T(b-A_x \hat x)\notag\\ \text{s.t. } & \beta = c - A_z^T\lambda\\ &\beta \in \mathcal{K}^*\notag. \end{align}
The following lemmas demonstrate, essentially, that the dual solutions to~\eqref{eq:conic_cont} provide the only elements of $\mathcal{K}^*$ that we need to consider.
\begin{lemma}\label{lem:conic} Given $\hat x$, assume~\ref{eq:conic_cont} is feasible and strong duality holds at the optimal primal-dual solution $(z_{\hat x},\beta_{\hat x},\lambda_{\hat x})$. Then for any $z$ with $A_z z = b - A_x \hat x$ and $\beta_{\hat x}^Tz \ge 0$, we have $c^Tz \ge v_{\hat x}$. \begin{proof} \begin{equation} \beta_{\hat x}^Tz = (c - A_z^T\lambda_{\hat x})^Tz = c^Tz - \lambda_{\hat x}^T(b - A_x\hat x) = c^Tz - v_{\hat x} \ge 0. \end{equation} \end{proof} \end{lemma}
\begin{lemma}\label{lem:ray} Given $\hat x$, assume~\ref{eq:conic_cont} is infeasible and~\eqref{eq:conic_cont_dual} is unbounded, such that we have a ray $(\beta_{\hat x},\lambda_{\hat x})$ satisfying $\beta_{\hat x} \in \mathcal{K}^*$, $\beta_{\hat x} = -A_z^T\lambda_{\hat x}$, and $\lambda_{\hat x}^T(b - A_x\hat x) > 0$. Then for any $z$ satisfying $A_z z = b - A_x \hat x$ we have $\beta_{\hat x}^Tz < 0$. \begin{proof} \begin{equation} \beta_{\hat x}^Tz = -\lambda_{\hat x}^TA_zz = -\lambda_{\hat x}^T(b - A_x\hat x) < 0. \end{equation}
\end{proof}
\end{lemma}
\begin{algorithm}[ht]\small \caption{The conic outer approximation (OA) algorithm}\label{alg:oa} \begin{algorithmic} \State \textbf{Initialize:} $z_U \leftarrow \infty, z_L \leftarrow -\infty$, $T \leftarrow \emptyset$. Fix convergence tolerance $\epsilon$. \While{$z_U - z_L \ge \epsilon$} \State Solve \ref{eq:mioa}. \If{\ref{eq:mioa} is infeasible} \State \eqref{eq:miconic} is infeasible, so terminate. \EndIf \State Let $(\hat x,\hat z)$ be the optimal solution of \ref{eq:mioa} with objective value $w_T$. \State Update lower bound $z_L \leftarrow w_T$. \State Solve~\ref{eq:conic_cont}. \If{\ref{eq:conic_cont} is feasible} \State Let $(z_{\hat x},\beta_{\hat x},\lambda_{\hat x})$ be an optimal primal-dual solution with objective value $v_{\hat x}$. \If{$v_{\hat x} < z_U$} \State $z_U \leftarrow v_{\hat x}$ \State Record $(\hat x, z_{\hat x})$ as the best known solution. \EndIf \ElsIf{\ref{eq:conic_cont} is infeasible} \State Let $(\beta_{\hat x},\lambda_{\hat x})$ be a ray of~\eqref{eq:conic_cont_dual}. \EndIf \State $T \leftarrow T \cup \{\beta_{\hat x}\}$ \EndWhile \end{algorithmic} \end{algorithm}
Finite termination of the algorithm is guaranteed because integer solutions $\hat x$ cannot repeat, and only a finite number of integer solutions is possible.
This algorithm is arguably incomplete because the assumptions of Lemmas \ref{lem:conic} and \ref{lem:ray} need not always hold. The assumption of strong duality at the solution is analogous to the constraint qualification assumption of the NLP OA algorithm~\cite{SvenOA}. Drewes and Ulbrich~\cite{MISOCPOA} describe a procedure in the case of MISOCP to ensure finite termination if this assumption does not hold. The assumption that a ray of the dual exists if the primal problem is infeasible is also not always true in the conic case, though \cite{lectures} provide a characterization of when this can occur. These cases will receive full treatment in future work.
A notable difference between the conic OA algorithm and the standard NLP OA algorithm is that there is no need to solve a second subproblem in the case of infeasibility, although some specialized NLP solvers may also obviate this need~\cite{Filmint}. In contrast, Drewes and Ulbrich~\cite{MISOCPOA} propose a second subproblem in the case of MISOCP even when dual rays would suffice.
Finally, the algorithm is presented in terms of a single cone $\mathcal{K}$ for simplicity. When $\mathcal{K}$ is a product of cones, our implementation disaggregates the elements of $\mathcal{K}^*$ per individual cone, adding one OA cut per cone per iteration.
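For readers who prefer code to pseudocode, the main loop of Algorithm~\ref{alg:oa} can be sketched as follows. This is only a schematic rendering under stated assumptions: the callbacks \texttt{solve\_mioa} and \texttt{solve\_subproblem} are hypothetical placeholders for an MILP solve of \ref{eq:mioa} and a conic solve of \ref{eq:conic_cont} together with its dual~\eqref{eq:conic_cont_dual}; they are not part of any particular solver's API.
\begin{verbatim}
import math

def conic_outer_approximation(solve_mioa, solve_subproblem,
                              eps=1e-5, max_iters=1000):
    # solve_mioa(T) -> (status, x_hat, w_T): solves MIOA(T) given cuts T
    # solve_subproblem(x_hat) -> (status, v_x, beta): solves CP(x_hat),
    #   returning an optimal dual vector beta, or a dual ray if infeasible
    z_upper, z_lower, best_x, T = math.inf, -math.inf, None, []
    for _ in range(max_iters):
        status, x_hat, w_T = solve_mioa(T)
        if status == "infeasible":
            return "infeasible", None, None     # MICONE itself is infeasible
        z_lower = w_T                           # MILP value is a lower bound
        status, v_x, beta = solve_subproblem(x_hat)
        if status == "optimal" and v_x < z_upper:
            z_upper, best_x = v_x, x_hat        # new incumbent found
        T.append(beta)                          # OA cut: beta^T z >= 0
        if z_upper - z_lower < eps:
            return "optimal", best_x, z_upper
    return "iteration limit", best_x, z_upper
\end{verbatim}
When $\mathcal{K}$ is a product of cones, \texttt{solve\_subproblem} would instead return one dual vector per cone, as described above.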
\section{Extended formulations and disciplined convex programming}\label{sec:dcp}
While many problems are representable in conic form, the transformation from the user's algebraic representation of the problem often requires expert knowledge. Disciplined convex programming (DCP) is an algebraic modeling concept proposed by Grant et al.~\cite{DCP}, one of whose original motivations was to provide a means to make these transformations automatic and transparent to users. DCP is not intrinsically tied to conic representations, however. In this section, we present the basic concepts of DCP from the viewpoint of extended formulations. This perspective both provides insight into how conic formulations are generated and enables further generalization of the technique to problems which are not conic representable using standard cones.
Detection of convexity of arbitrary nonlinear expressions is NP-Hard~\cite{ConvexityNPHard}, and since a conic-form representation is a proof of convexity, it is unreasonable to expect a modeling system to be able to reliably generate these representations from arbitrary input. Instead, DCP requires users to construct expressions whose convexity can be proven by simple composition rules. A DCP implementation (e.g., the MATLAB package CVX) provides a library of basic operations like addition, subtraction, norms, square root, square, geometric mean, logarithms, exponential, entropy $x\log(x)$, powers, absolute value, $\max\{x,y\}$, $\min\{x,y\}$, etc. whose curvature (convex, concave, or affine) and monotonicity properties are known. These basic operations are called \textit{atoms}.
All expressions representing the objective function and constraints are built up via compositions of these atoms in such a way that guarantees convexity. For example, the expression $\exp(x^2+y^2)$ is convex and \textit{DCP compliant} because $\exp(\cdot)$ is convex and monotone increasing and $x^2+y^2$ is convex because it is a convex composition (through addition) of two convex atoms. The expression $\sqrt{xy}$ is concave when $x,y\ge 0$ as we noted previously, but not \textit{DCP compliant} because the inner term $xy$ has indefinite curvature. In this case, users must reformulate their expression using a different atom like $geomean(x,y)$. We refer readers to~\cite{DCP,dcpweb} for further introduction to DCP.
An important yet not well-known aspect of DCP is that the composition rules for DCP have a 1-1 correspondence with the existence of extended formulations of epigraphs. For example, suppose $g$ is convex and monotone increasing and $f$ is convex. Then the function $h(x) := g(f(x))$ is convex and recognized as such by DCP. If $E_h := \{ (x,t) : h(x) \le t \}$ is the epigraph of $h$, then we can represent $E_h$ through an extended formulation using the epigraphs $E_g$ and $E_f$ of $g$ and $f$, respectively. That is, $(x,t) \in E_h$ iff $\exists\, s$ such that $(x,s) \in E_f \text{ and } (s,t) \in E_g$. The validity of this extended formulation follows directly from monotonicity of $g$. Furthermore, if $E_f$ and $E_g$ are conic representable, then so is $E_h$, which is precisely how DCP automatically generates conic formulations. The conic form representation is not necessary, however; one may instead represent $E_f$ and $E_g$ using smooth nonlinear constraints if $f$ and $g$ are smooth.
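Returning to the example above (a routine check), the DCP-compliant expression $\exp(x^2+y^2)$ makes this correspondence concrete: since $\exp(\cdot)$ is monotone increasing,
\begin{equation*}
\exp(x^2+y^2) \le t \;\Longleftrightarrow\; \exists\, s:\; x^2+y^2 \le s \;\text{ and }\; \exp(s) \le t,
\end{equation*}
and the two epigraphs on the right are exactly the conic constraints $(s+1,\,2x,\,2y,\,s-1) \in SOC_4$ and $(s,\,1,\,t) \in EXP$.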
This correspondence between composition of functions and extended formulations was considered by Tawarmalani and Sahinidis~\cite{Baron}, although in the context of nonconvex MINLP. Composition generalizes the notion of separability far beyond summations of univariate functions as proposed by Hijazi et al.~\cite{Hijazi}.
DCP, based on the philosophy that users should be ``disciplined'' in their modeling of convex functions, describes a simple set of rules for verifying convexity and rejects any expressions not satisfying them; it is not based on ad-hoc detection of convexity which is common among nonconvex MINLP solvers. DCP is well established within the convex optimization community as a practical modeling technique, and many would agree that it is reasonable to ask users to formulate convex optimization problems in DCP form. By doing so they unknowingly provide all of the information needed to generate powerful extended formulations.
\section{Computational results}
In this section we present preliminary computational results implementing the extended formulations proposed in this work. We have implemented a solver, \textit{Pajarito}, which currently accepts input as mixed-integer conic programming problems with a mix of second-order and exponential cones. We have translated 194 convex problems from MINLPLIB2 representable using these cones into Convex.jl~\cite{convexjl}, a DCP algebraic modeling language in Julia which performs automatic transformation into conic form. We exclude instances without integer constraints, some of which are pure quadratic, and some of which Bonmin is unable to solve within time limits. Pajarito currently implements traditional OA using derivative-based NLP solvers~\cite{Bonmin} applied to the conic extended formulation, as the conic solvers we tested were not sufficiently reliable. Pajarito itself relies on JuMP~\cite{LubinDunningIJOC}, and the implementation of the core algorithm spans less than 1000 lines of code. Pajarito will be released as open source in the upcoming months.
\begin{figure}
\caption{Comparison performance profiles for the entire data set. Higher is better. Here, $p$ is the proportion of instances for which the given solver is within a factor of $\theta$ of the best solution time or iteration count.}
\label{fig:all}
\end{figure}
Our main comparison is with Bonmin's OA algorithm, which in 2014 benchmarks by H. Mittelmann was found to be the overall fastest MICP solver when using CPLEX as the inner MILP solver~\cite{hans}. For the MISOCP instances, we also compare with CPLEX.
Performance profiles~\cite{perf} of all instances solved by Bonmin in greater than 30 seconds are provided in Figure \ref{fig:all}. Tables~\ref{tab:results:1} and~\ref{tab:results:2} in the Appendix list the complete set of results. Their highlights are: \begin{enumerate}
\item We observe that the extended formulation helps significantly reduce the number of OA iterations (Figure~\ref{fig:all}). This can be seen as a sign of scalability provided by the extended formulation.
\item Pajarito is much faster on many of the challenging problems (\texttt{slay},\texttt{netmod},\\ \texttt{portfol\_classical}), although these problems are MISOCPs where CPLEX dominates (note that CPLEX 12.6.2 already applies extended formulations for MISOCPs~\cite{VielmaExtendedFormulations}). Pajarito has not been optimized for performance, leading Bonmin to be faster on the relatively easy instances. \item Perhaps the strongest demonstration of Pajarito's strength is the \texttt{gams01} instance, which was previously unsolved and whose conic representation requires a mix of SOC and EXP cones. The best known bound was 1735.06 and the best known solution was 21516.83. Pajarito solved the instance to optimality with an objective value of 21380.20 in 6 iterations. Unfortunately, the origin of the instance is unknown and confidential. \end{enumerate}
{}
\section*{Appendix}
All computations were performed on a high-performance cluster at Los Alamos with Intel$^\circledR$ Xeon$^\circledR$ E5-2687W v3 @3.10GHz 25.6MB L3 cache processors and 251GB DDR3 memory installed on every node. CPLEX v12.6.2 is used as a MILP and MISOCP solver. We use KNITRO v9.1.0 as an NLP solver for Pajarito. Bonmin v1.8.3 is compiled with CPLEX v12.6.2 and Ipopt 3.12.3 using the HSL linear algebra library MA97. All solvers are set to a relative optimality gap of $10^{-5}$, are run on a single thread (both CPLEX and KNITRO), and are given a wall-time limit of 10 hours (with the exception of \texttt{gams01} where we give 32 threads to CPLEX for the MILP relaxations).
\begin{figure}
\caption{Comparison performance profiles for SOC representable instances}
\label{fig:soc}
\end{figure}
Performance profiles with respect to SOC representable instances are provided in Figure \ref{fig:soc}. After reformulating these instances, we are able to solve them using CPLEX as MISOCPs. Although CPLEX dominates, Pajarito is able to solve more instances than Bonmin in this case.
\begin{table}[t]
\tiny
\centering
\begin{tabular}{l|c|r|r|r|r|r}
Instance & Conic rep. & Bonmin Iter & Bonmin Time & Pajarito Iter & Pajarito Time & CPLEX Time \\
\hline batch & Exp & 2 & 0.60 & 1 & 4.95 & -- \\ batchdes & Exp & 1 & 0.07 & 1 & 4.76 & -- \\ batchs101006m & Exp & 10 & 1.88 & 3 & 7.67 & -- \\ batchs121208m & Exp & 4 & 3.14 & 3 & 11.01 & -- \\ batchs151208m & Exp & 6 & 7.97 & 3 & 14.63 & -- \\ batchs201210m & Exp & 8 & 14.92 & 2 & 29.09 & -- \\ clay0203h & SOC & 9 & 0.90 & 5 & 6.53 & 0.35 \\ clay0203m & SOC & 10 & 0.40 & 6 & 6.74 & 0.37 \\ clay0204h & SOC & 3 & 3.60 & 1 & 6.27 & 1.61 \\ clay0204m & SOC & 3 & 0.33 & 1 & 5.14 & 1.02 \\ clay0205h & SOC & 4 & 20.89 & 3 & 28.76 & 8.93 \\ clay0205m & SOC & 6 & 5.50 & 3 & 12.67 & 1.77 \\ clay0303h & SOC & 9 & 0.97 & 5 & 7.29 & 0.54 \\ clay0303m & SOC & 10 & 0.58 & 7 & 7.36 & 0.68 \\ clay0304h & SOC & 11 & 5.27 & 9 & 17.96 & 1.42 \\ clay0304m & SOC & 16 & 2.84 & 13 & 22.99 & 2.13 \\ clay0305h & SOC & 4 & 23.81 & 3 & 56.93 & 23.32 \\ clay0305m & SOC & 7 & 6.16 & 3 & 16.14 & 2.51 \\ du-opt & SOC & 61 & 0.76 & 7 & 8.69 & 1.54 \\ du-opt5 & SOC & 22 & 0.22 & 4 & 6.66 & 1.97 \\ enpro48pb & Exp & 2 & 0.22 & 1 & 5.04 & -- \\ enpro56pb & Exp & 1 & 0.22 & 1 & 5.11 & -- \\ ex1223 & ExpSOC & 3 & 0.07 & 1 & 5.47 & -- \\ ex1223a & SOC & 1 & 0.03 & 0 & 4.57 & 0.01 \\ ex1223b & ExpSOC & 3 & 0.07 & 1 & 5.52 & -- \\ ex4 & SOC & 2 & 0.13 & 2 & 5.80 & 0.86 \\ fac3 & SOC & 6 & 0.15 & 2 & 5.22 & 0.07 \\ netmod\_dol2 & SOC & 33 & 167.49 & 7 & 53.04 & 12.58 \\ netmod\_kar1 & SOC & 102 & 56.45 & 12 & 13.75 & 7.68 \\ netmod\_kar2 & SOC & 102 & 56.35 & 12 & 13.68 & 7.66 \\ no7\_ar25\_1 & SOC & 2 & 25.19 & 3 & 69.55 & 54.34 \\ no7\_ar3\_1 & SOC & 4 & 71.04 & 3 & 95.84 & 126.09 \\ no7\_ar4\_1 & SOC & 5 & 85.87 & 4 & 110.70 & 48.97 \\ no7\_ar5\_1 & SOC & 7 & 69.23 & 5 & 117.65 & 32.60 \\ nvs03 & SOC & 1 & 0.06 & 1 & 4.89 & 0.00 \\ slay04h & SOC & 5 & 0.19 & 2 & 5.22 & 0.14 \\ slay04m & SOC & 5 & 0.11 & 2 & 5.20 & 0.18 \\ slay05h & SOC & 9 & 0.60 & 3 & 5.73 & 0.37 \\ slay05m & SOC & 7 & 0.18 & 3 & 5.51 & 0.16 \\ slay06h & SOC & 12 & 1.94 & 2 & 5.56 & 0.69 \\ slay06m & SOC & 9 & 0.29 & 2 & 5.27 & 0.42 \\ slay07h & SOC & 15 & 5.04 & 3 & 6.61 & 0.98 \\ slay07m & SOC & 12 & 0.66 & 3 & 5.67 & 0.67 \\ slay08h & SOC & 22 & 27.27 & 3 & 7.41 & 1.50 \\ slay08m & SOC & 21 & 2.89 & 2 & 5.43 & 0.96 \\ slay09h & SOC & 36 & 163.31 & 3 & 9.01 & 1.93 \\ slay09m & SOC & 28 & 17.22 & 3 & 6.12 & 1.54 \\ slay10h & SOC & 80 & 8155.02 & 4 & 26.13 & 7.55 \\ slay10m & SOC & 77 & 1410.08 & 4 & 9.08 & 1.80 \\ syn05h & Exp & 2 & 0.09 & 1 & 4.75 & -- \\ syn05m & Exp & 2 & 0.07 & 1 & 4.73 & -- \\ syn05m02h & Exp & 1 & 0.06 & 1 & 4.79 & -- \\ syn05m02m & Exp & 1 & 0.07 & 1 & 4.80 & -- \\ syn05m03h & Exp & 1 & 0.07 & 1 & 4.86 & -- \\ syn05m03m & Exp & 1 & 0.07 & 1 & 4.83 & -- \\ syn05m04h & Exp & 1 & 0.07 & 1 & 4.85 & -- \\ syn05m04m & Exp & 1 & 0.08 & 1 & 4.85 & -- \\ syn10h & Exp & 1 & 0.04 & 0 & 4.46 & -- \\ syn10m & Exp & 2 & 0.04 & 1 & 4.79 & -- \\ syn10m02h & Exp & 1 & 0.09 & 1 & 4.92 & -- \\ syn10m02m & Exp & 2 & 0.09 & 1 & 4.85 & -- \\ syn10m03h & Exp & 1 & 0.08 & 1 & 4.91 & -- \\ syn10m03m & Exp & 1 & 0.08 & 1 & 4.85 & -- \\ syn10m04h & Exp & 1 & 0.11 & 1 & 5.03 & -- \\ syn10m04m & Exp & 1 & 0.11 & 1 & 5.01 & -- \\ syn15h & Exp & 1 & 0.06 & 1 & 4.86 & -- \\ syn15m & Exp & 2 & 0.07 & 1 & 4.79 & -- \\ syn15m02h & Exp & 1 & 0.09 & 1 & 5.00 & -- \\ syn15m02m & Exp & 1 & 0.09 & 1 & 4.92 & -- \\ syn15m03h & Exp & 1 & 0.13 & 1 & 48.61 & -- \\ syn15m03m & Exp & 2 & 0.11 & 1 & 4.94 & -- \\ syn15m04h & Exp & 1 & 0.14 & 1 & 5.77 & -- \\ syn15m04m & Exp & 2 & 0.14 & 1 & 5.10 & -- \\ syn20h & Exp & 2 & 0.10 & 2 & 5.14 & -- \\ syn20m & Exp & 2 & 0.06 & 1 & 4.81 
& -- \\ syn20m02h & Exp & 2 & 0.15 & 2 & 5.70 & -- \\ syn20m02m & Exp & 2 & 0.10 & 2 & 5.24 & -- \\ syn20m03h & Exp & 1 & 0.13 & 1 & 5.55 & -- \\ syn20m03m & Exp & 2 & 0.15 & 2 & 5.34 & -- \\ syn20m04h & Exp & 1 & 0.19 & 1 & 6.03 & -- \\ syn20m04m & Exp & 2 & 0.27 & 2 & 5.60 & -- \\ syn30h & Exp & 3 & 0.12 & 3 & 5.61 & -- \\ syn30m & Exp & 3 & 0.09 & 3 & 5.40 & -- \\ syn30m02h & Exp & 3 & 0.21 & 3 & 6.34 & -- \\ syn30m02m & Exp & 4 & 0.19 & 3 & 5.69 & -- \\ syn30m03h & Exp & 3 & 0.40 & 3 & 6.86 & -- \\ syn30m03m & Exp & 3 & 0.27 & 3 & 6.16 & -- \\ syn30m04h & Exp & 3 & 0.49 & 3 & 7.91 & -- \\ syn30m04m & Exp & 4 & 0.42 & 3 & 6.60 & -- \\ syn40h & Exp & 4 & 0.19 & 3 & 5.74 & -- \\ syn40m & Exp & 4 & 0.97 & 2 & 5.20 & -- \\ syn40m02h & Exp & 3 & 0.31 & 3 & 6.74 & -- \\ syn40m02m & Exp & 3 & 0.24 & 3 & 5.97 & -- \\ syn40m03h & Exp & 4 & 0.59 & 4 & 8.66 & -- \\ syn40m03m & Exp & 5 & 0.52 & 4 & 7.17 & -- \\ syn40m04h & Exp & 4 & 1.02 & 4 & 10.42 & -- \\ syn40m04m & Exp & 5 & 0.87 & 5 & 9.25 & -- \\
\end{tabular}
\caption{MINLPLIB2 instances. ``Conic rep'' column indicates which cones are used in the conic representation of the instance (second-order cone and/or exponential). CPLEX is capable of solving only second-order cone instances. Times in seconds.}
\label{tab:results:1}
\end{table}
\begin{table}[t]
\tiny
\centering
\begin{tabular}{l|c|r|r|r|r|r}
Instance & Conic rep. & Bonmin Iter & Bonmin Time & Pajarito Iter & Pajarito Time & CPLEX Time \\
\hline synthes1 & Exp & 3 & 0.04 & 2 & 5.04 & -- \\ synthes2 & Exp & 3 & 0.05 & 2 & 4.98 & -- \\ synthes3 & Exp & 6 & 0.10 & 2 & 5.00 & -- \\ rsyn0805h & Exp & 1 & 0.14 & 1 & 4.92 & -- \\ rsyn0805m & Exp & 2 & 0.25 & 2 & 5.22 & -- \\ rsyn0805m02h & Exp & 5 & 0.71 & 5 & 7.31 & -- \\ rsyn0805m02m & Exp & 4 & 2.16 & 4 & 7.24 & -- \\ rsyn0805m03m & Exp & 3 & 4.08 & 3 & 7.76 & -- \\ rsyn0805m04m & Exp & 2 & 2.31 & 2 & 6.78 & -- \\ rsyn0810m & Exp & 2 & 0.24 & 1 & 4.94 & -- \\ rsyn0810m02h & Exp & 3 & 0.58 & 3 & 6.45 & -- \\ rsyn0810m02m & Exp & 4 & 5.78 & 3 & 6.83 & -- \\ rsyn0810m03h & Exp & 3 & 1.36 & 3 & 7.62 & -- \\ rsyn0810m03m & Exp & 3 & 6.04 & 3 & 8.66 & -- \\ rsyn0810m04h & Exp & 3 & 1.31 & 2 & 7.71 & -- \\ rsyn0810m04m & Exp & 4 & 3.77 & 3 & 8.14 & -- \\ rsyn0815h & Exp & 1 & 0.27 & 1 & 23.50 & -- \\ rsyn0815m & Exp & 2 & 0.23 & 2 & 5.25 & -- \\ rsyn0815m02m & Exp & 5 & 1.94 & 4 & 7.14 & -- \\ rsyn0815m03h & Exp & 5 & 5.21 & 5 & 16.04 & -- \\ rsyn0815m03m & Exp & 4 & 4.59 & 5 & 10.16 & -- \\ rsyn0815m04h & Exp & 3 & 2.03 & 3 & 10.43 & -- \\ rsyn0815m04m & Exp & 4 & 7.78 & 3 & 10.68 & -- \\ rsyn0820h & Exp & 3 & 0.42 & 2 & 5.59 & -- \\ rsyn0820m & Exp & 2 & 0.24 & 2 & 5.29 & -- \\ rsyn0820m02h & Exp & 3 & 0.59 & 2 & 6.72 & -- \\ rsyn0820m02m & Exp & 3 & 1.90 & 3 & 6.90 & -- \\ rsyn0820m03h & Exp & 2 & 1.37 & 2 & 7.76 & -- \\ rsyn0820m03m & Exp & 3 & 5.14 & 3 & 8.83 & -- \\ rsyn0820m04h & Exp & 4 & 2.66 & 4 & 11.59 & -- \\ rsyn0820m04m & Exp & 3 & 8.65 & 3 & 11.52 & -- \\ rsyn0830h & Exp & 3 & 0.41 & 3 & 5.95 & -- \\ rsyn0830m & Exp & 4 & 0.37 & 4 & 6.19 & -- \\ rsyn0830m02m & Exp & 5 & 1.83 & 5 & 15.68 & -- \\ rsyn0830m03h & Exp & 2 & 1.45 & 2 & 9.04 & -- \\ rsyn0830m03m & Exp & 4 & 3.45 & 4 & 10.15 & -- \\ rsyn0830m04h & Exp & 3 & 2.35 & 3 & 12.59 & -- \\ rsyn0830m04m & Exp & 4 & 11.47 & 4 & 15.82 & -- \\ rsyn0840h & Exp & 2 & 0.30 & 2 & 5.69 & -- \\ rsyn0840m & Exp & 2 & 0.26 & 3 & 5.72 & -- \\ rsyn0840m02h & Exp & 3 & 0.72 & 2 & 7.34 & -- \\ rsyn0840m02m & Exp & 4 & 1.53 & 3 & 7.73 & -- \\ rsyn0840m03h & Exp & 3 & 1.85 & 3 & 11.07 & -- \\ rsyn0840m03m & Exp & 5 & 2.47 & 5 & 12.41 & -- \\ rsyn0840m04h & Exp & 2 & 2.40 & 2 & 44.19 & -- \\ rsyn0840m04m & Exp & 4 & 7.62 & 4 & 22.33 & -- \\ sambal & SOC & 0 & 0.03 & 0 & 4.52 & 0.00 \\ gbd & SOC & 1 & 0.04 & 0 & 4.55 & 0.00 \\ ravempb & Exp & 4 & 0.33 & 1 & 5.22 & -- \\ portfol\_classical050\_1 & SOC & \textgreater 989 & \textgreater 36000 & 12 & 37.77 & 3.31 \\ m3 & SOC & 1 & 0.68 & 0 & 4.58 & 0.07 \\ m6 & SOC & 1 & 0.16 & 1 & 5.18 & 0.17 \\ m7 & SOC & 1 & 0.59 & 0 & 4.84 & 0.69 \\ m7\_ar25\_1 & SOC & 1 & 0.37 & 1 & 5.19 & 0.16 \\ m7\_ar2\_1 & SOC & 1 & 2.19 & 1 & 7.01 & 1.58 \\ m7\_ar3\_1 & SOC & 1 & 1.88 & 1 & 6.79 & 0.82 \\ m7\_ar4\_1 & SOC & 1 & 0.35 & 0 & 4.77 & 0.84 \\ m7\_ar5\_1 & SOC & 1 & 0.34 & 0 & 5.71 & 0.98 \\ fo7 & SOC & 3 & 27.68 & 4 & 42.88 & 23.67 \\ fo7\_2 & SOC & 2 & 12.52 & 2 & 16.70 & 4.88 \\ fo7\_ar25\_1 & SOC & 4 & 9.87 & 4 & 27.18 & 9.92 \\ fo7\_ar2\_1 & SOC & 2 & 8.68 & 3 & 19.63 & 11.04 \\ fo7\_ar3\_1 & SOC & 3 & 11.61 & 3 & 31.28 & 22.16 \\ fo7\_ar4\_1 & SOC & 2 & 9.61 & 2 & 15.68 & 10.27 \\ fo7\_ar5\_1 & SOC & 1 & 5.66 & 1 & 8.95 & 12.67 \\ fo8 & SOC & 2 & 79.50 & 3 & 82.41 & 52.92 \\ fo8\_ar25\_1 & SOC & 3 & 45.80 & 4 & 144.43 & 63.09 \\ fo8\_ar2\_1 & SOC & 3 & 59.24 & 4 & 161.68 & 60.09 \\ fo8\_ar3\_1 & SOC & 1 & 14.65 & 1 & 14.78 & 37.85 \\ fo8\_ar4\_1 & SOC & 1 & 10.53 & 1 & 16.48 & 62.60 \\ fo8\_ar5\_1 & SOC & 2 & 23.26 & 1 & 34.09 & 59.75 \\ fo9 & SOC & 3 & 534.56 & 4 & 209.68 & 227.52 \\ fo9\_ar25\_1 & 
SOC & 6 & 1430.17 & 6 & 6221.39 & 1240.89 \\ fo9\_ar3\_1 & SOC & 1 & 16.77 & 1 & 22.69 & 103.84 \\ fo9\_ar4\_1 & SOC & 2 & 40.77 & 1 & 60.73 & 785.75 \\ fo9\_ar5\_1 & SOC & 2 & 39.47 & 3 & 134.95 & 725.60 \\ flay02h & SOC & 2 & 0.09 & 2 & 5.18 & 0.02 \\ flay02m & SOC & 2 & 0.05 & 2 & 5.12 & 0.04 \\ flay03h & SOC & 8 & 0.40 & 8 & 7.08 & 0.20 \\ flay03m & SOC & 8 & 0.17 & 8 & 6.74 & 0.24 \\ flay04h & SOC & 24 & 19.92 & 24 & 30.22 & 1.14 \\ flay04m & SOC & 22 & 4.43 & 22 & 16.03 & 1.00 \\ flay05h & SOC & 181 & 6583.08 & 164 & 6593.05 & 96.62 \\ flay05m & SOC & 180 & 3258.45 & 171 & 4938.36 & 68.91 \\ flay06h & SOC & \textgreater 30 & \textgreater 36000 & \textgreater 32 & \textgreater 36000 & 6958.36 \\ flay06m & SOC & \textgreater 68 & \textgreater 36000 & \textgreater 55 & \textgreater 36000 & 4752.04 \\ o7 & SOC & 9 & 1623.33 & 8 & 3060.63 & 526.94 \\ o7\_2 & SOC & 5 & 435.47 & 5 & 663.47 & 128.95 \\ o7\_ar25\_1 & SOC & 4 & 259.10 & 3 & 510.12 & 455.29 \\ o7\_ar2\_1 & SOC & 1 & 41.51 & 1 & 137.82 & 68.66 \\ o7\_ar3\_1 & SOC & 4 & 338.68 & 3 & 642.90 & 875.63 \\ o7\_ar4\_1 & SOC & 7 & 1486.87 & 7 & 2239.11 & 535.17 \\ o7\_ar5\_1 & SOC & 4 & 309.86 & 4 & 777.35 & 216.84 \\ o8\_ar4\_1 & SOC & 4 & 2736.05 & 3 & 10438.68 & 8447.35 \\ tls2 & SOC & 7 & 0.19 & 4 & 5.27 & 0.10 \\ tls4 & SOC & 88 & 260.67 & 7 & 18.58 & 6.15 \\ {\bf gams01} & {\bf ExpSOC} & {\bf \textgreater 19} & {\bf \textgreater 36000} & {\bf 6} & {\bf 23414.37} & -- \\
\end{tabular}
\caption{MINLPLIB2 instances, continued.}
\label{tab:results:2}
\end{table}
\end{document}
Octahedral-dodecahedral honeycomb
In the geometry of hyperbolic 3-space, the octahedron-dodecahedron honeycomb is a compact uniform honeycomb, constructed from dodecahedron, octahedron, and icosidodecahedron cells, in a rhombicuboctahedron vertex figure. It has a single-ring Coxeter diagram and is named after its two regular cells.
Octahedron-dodecahedron honeycomb
Type: Compact uniform honeycomb
Schläfli symbol: {(5,3,4,3)} or {(3,4,3,5)}
Coxeter diagram: (diagrams omitted)
Cells: {3,4}, {5,3}, r{5,3}
Faces: triangular {3}, pentagon {5}
Vertex figure: rhombicuboctahedron
Coxeter group: [(5,3,4,3)]
Properties: Vertex-transitive, edge-transitive
A geometric honeycomb is a space-filling of polyhedral or higher-dimensional cells, so that there are no gaps. It is an example of the more general mathematical tiling or tessellation in any number of dimensions.
Honeycombs are usually constructed in ordinary Euclidean ("flat") space, like the convex uniform honeycombs. They may also be constructed in non-Euclidean spaces, such as hyperbolic uniform honeycombs. Any finite uniform polytope can be projected to its circumsphere to form a uniform honeycomb in spherical space.
Images
Wide-angle perspective view
Centered on dodecahedron
See also
• Convex uniform honeycombs in hyperbolic space
• List of regular polytopes
\begin{document}
\FXRegisterAuthor{ja}{aja}{JA} \FXRegisterAuthor{cf}{acf}{CF} \FXRegisterAuthor{sm}{asm}{SM} \FXRegisterAuthor{ls}{als}{LS}
\title{Finitary Monads on the Category of Posets}
\author{Ji\v{r}\'{i} Ad\'{a}mek
\thanks{Supported by the Grant Agency of the Czech
Republic under the grant 19-0092S.}
} \affil{\small{Department of Mathematics, Technical University of Prague,
Czech Republic, and \\
Institute of Theoretical Computer Science, Technical University Braunschweig, Germany}} \author{Chase Ford
\thanks{Supported by Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) as part of the Research
and Training Group 2475 ``Cybercrime and Forensic Computing" (grant
number 393541319/GRK2475/1-2019).} } \author{Stefan Milius
\thanks{Supported by Deutsche Forschungsgemeinschaft (DFG) under
projects MI~717/5-2 and MI~717/7-1.} }
\author{Lutz Schr\"{o}der} \affil{\small{Department of Computer Science,
Friedrich-Alexander-Universit\"{a}t Erlangen-N\"{u}rnberg (FAU),
Germany} } \maketitle
\centerline{Dedicated to John Power on the occasion of his 60$^\text{th}$ birthday.}
\begin{abstract}
  Finitary monads on $\mathsf{Pos}$ are characterized as precisely the
  free-algebra monads of varieties of algebras. These are classes of
  ordered algebras specified by inequations in context. Analogously,
  finitary enriched monads on $\mathsf{Pos}$ are characterized: here we work
  with varieties of coherent algebras, i.e.~classes of ordered algebras whose
  operations are monotone. \end{abstract}
\section{Introduction}\label{S:intro}
Equational specification usually applies classes of (often many-sorted) finitary algebras specified by equations. That is, varieties of algebras over the category $\mathsf{Set}^S$ of $S$-sorted sets. This is well known to be equivalent to applying finitary monads over $\mathsf{Set}^S$, i.e.~monads preserving filtered colimits: every variety $\mathcal V$ yields a free-algebra monad $\mathbb T_{\mathcal V}$ on $\mathsf{Set}^S$ which is finitary and whose Eilenberg-Moore category is isomorphic to $\mathcal V$. Conversely, every finitary monad $\mathbb T$ on $\mathsf{Set}^S$ defines a canonical $S$-sorted variety $\mathcal V$ whose free-algebra monad is isomorphic to $\mathbb T$.
There are cases in which algebraic specifications use partially ordered sets rather than sets without a structure. The goal of our paper is to present for the category $\mathsf{Pos}$ of partially ordered sets\smnote{Don't delete; otherwise $\mathsf{Pos}$ is not defined.} an analogous characterization of finitary monads: we define varieties of ordered algebras which allow us to represent (a)~all finitary monads on $\mathsf{Pos}$ and (b)~all enriched finitary monads on $\mathsf{Pos}$ as the free-algebra monads of varieties. `Enriched' refers to $\mathsf{Pos}$ as a cartesian closed category: a monad is enriched if its underlying functor $T$ is \emph{locally monotone} ($f\leq g$ in $\mathsf{Pos}(A, B)$ implies $Tf\leq Tg$ in $\mathsf{Pos}(TA, TB))$. Case (b)~works with algebras on posets such that the operations are monotone (and as morphisms we take monotone homomorphisms). Whereas for (a)~we have to work with algebras on posets whose operations are not necessarily monotone (but whose morphisms are). To distinguish these cases, we shall call an algebra \emph{coherent} if its operations are all monotone.
A basic step, in which we follow the excellent presentation of finitary monads on enriched categories due to Kelly and Power~\cite{KP93}, is to work with operation symbols whose arity is a finite poset rather than a natural number; we briefly recall the approach of \emph{op. cit.} in \autoref{S:eqpres}. Just as natural numbers $n=\{0,1,\dots, n-1\}$ represent all finite sets up to isomorphism, we choose a representative set \[
\Pos_\mathsf{f} \] of finite posets up to isomorphism. Members of $\Pos_\mathsf{f}$ are called \emph{contexts}\lsnote{I suggest calling them \emph{arities}; in fact that term is used in the next sentence.}.
A \emph{signature} is then a set $\Sigma$ of operation symbols of arities from $\Pos_\mathsf{f}$.
More precisely, $\Sigma$ is a collection of sets $(\Sigma_\Gamma)_{\Gamma \in
\Pos_\mathsf{f}}$. Thus, a $\Sigma$-algebra is a poset $A$ together with an operation $\sigma_A$, for every $\sigma \in \Sigma_\Gamma$, which assigns to every monotone map $u\colon \Gamma \to A$ an element $\sigma_A(u)$ of $A$. For example, let $\mathbbm{2}$ be the two-chain in $\Pos_\mathsf{f}$ given by $x<y$. Then an operation symbol $\sigma$ of arity $\mathbbm{2}$ is interpreted in an algebra $A$ as a partial function $\sigma_A\colon A\times A\rightarrow A$ whose definition domain consists of all comparable pairs in $A$.
Given a signature $\Sigma$ we form, for every context $\Gamma\in\Pos_\mathsf{f}$, the set $\Term(\Gamma)$ of \emph{terms in context} $\Gamma$. It is defined as usual in universal algebra by ignoring the order structure of contexts. Then, for every $\Sigma$-algebra $A$, whenever a monotone function $f\colon\Gamma\rightarrow A$ is given (i.e.~whenever the variables of context $\Gamma$ are interpreted in $A$) we define an evaluation of terms in context $\Gamma$. This is a partial map $f^\#$ assigning a value to a term $t$ provided that the values of the subterms of $t$ are defined and respect the orders of the arities of the operation symbols involved. This leads to the concept of \emph{inequation in context} $\Gamma$: it is a pair $(s, t)$ of terms in that context. An algebra $A$ \emph{satisfies} this inequation if for every monotone interpretation $f\colon\Gamma\rightarrow A$ we have that both $f^\#(t)$ and $f^\#(s)$ are defined and $f^{\#}(s)\leq f^{\#}(t)$ holds in $A$. We use the following notation for inequations in context: \[
\Gamma\vdash s\leq t. \]
By a \emph{variety} we understand a category $\mathcal V$ of $\Sigma$-algebras presented by a set $\mathcal E$ of $\Sigma$-inequations in context. Thus the objects of $\mathcal V$ are all algebras satisfying each $\Gamma\vdash s\leq t$ in $\mathcal E$, and morphisms are monotone homomorphisms. We prove that every variety $\mathcal V$ is monadic over $\mathsf{Pos}$, that is, for the monad $\mathbb T_{\mathcal V}$ of free $\mathcal V$-algebras $\mathcal V$ is isomorphic to the category $\mathsf{Pos}^{\mathbb T_{\mathcal V}}$ of algebras for $\mathbb T_{\mathcal V}$. Moreover, $\mathbb T_{\mathcal V}$ is a finitary monad and, in case $\mathcal V$ consists of coherent algebras, $\mathbb T_{\mathcal V}$ is enriched.
Conversely, with every finitary monad $\mathbb T$ on $\mathsf{Pos}$ we associate a canonical variety whose free-algebra monad is isomorphic to $\mathbb T$. This process from monads to varieties is inverse to the above assignment $\mathcal V\mapsto\mathbb T_{\mathcal V}$. Moreover, if $\mathbb T$ is enriched, the canonical variety consists of coherent algebras. This leads to a bijection between finitary enriched monads and varieties of coherent algebras.
Is it really necessary to work with signatures of operations with partially ordered arities and terms in context? There is a `natural' concept of a variety of ordered (coherent) algebras for classical signatures $\Sigma=(\Sigma_n)_{n\in\mathds{N}}$. Here terms are elements of free $\Sigma$-algebras on finite sets (of variables) and a variety is given by a set of inequations $s\leq t$ where $s$ and $t$ are terms. Such varieties were studied e.g.~by Bloom and Wright~\cite{Bloom76, BW83}. Kurz and Velebil~\cite{KV17} characterized these classical varieties as precisely the exact categories (in an enriched sense) with a `suitable' generator. In a recent paper, the first author, Dost\'al, and Velebil~\cite{ADV20} proved that for every such variety $\mathcal V$ the free-algebra monad $\mathbb T_{\mathcal V}$ is enriched and \emph{strongly finitary} in the sense of Kelly and Lack~\cite{KL93}. This means that the functor $T_{\mathcal V}$ is the left Kan extension\smnote{Please don't delete; I find `reflexive coinserter' rather unclear, whereas everyone with knowledge of basic category theory knows Kan extensions.} of its restriction along the full embedding $E\colon\mathsf{Pos}_{\mathsf{fd}}\hookrightarrow\mathsf{Pos}$ of finite discrete posets: \[
T_{\mathcal V}= \Lan_E (T_{\mathcal V}\cdot E). \] Conversely, every strongly finitary monad on $\mathsf{Pos}$ is isomorphic to the free-algebra monad of a variety in this classical sense. This answers our question above affirmatively: contexts are necessary if \emph{all} (possibly enriched) finitary monads are to be characterized via inequations.
\begin{example}
We have mentioned above a binary operation $\sigma(x, y)$ in context
$x<y$.\smnote{I think $<$ is correct here.}\cfnote{I agree, $<$ is
correct.}\lsnote{We should write $\le$ in the contexts, as that is
what these constraints mean (they are not meant to enforce that
$x$ and $y$ are different)} For the corresponding variety
$\Alg \Sigma$ (with no specified inequations) the free-algebra monad
is described in \autoref{E:TX}. This monad is not strongly
finitary~\cite[Ex.~3.15]{ADV20}\lsnote{Shouldn't we prove this claim?}, thus no variety
with a classical signature has this monad as the free-algebra
monad.\lsnote{A more natural, if slightly more complicated example
are `bounded-join-semilattices', i.e.~partial orders in which
every bounded pair of elements has a binary join.} \end{example}
\paragraph{Related work} As we have already mentioned, the idea of using signatures in context stems from the work of Kelly and Power~\cite{KP93}. They presented enriched monads by operations and equations. A signature in their sense is more general than what we use: it is a collection of \emph{posets} $(\Sigma_\Gamma)_{\Gamma \in \Pos_\mathsf{f}}$, and a $\Sigma$-algebra $A$ is then a poset together with a monotone function from $\Sigma_\Gamma$ to the poset of monotone functions from $\mathsf{Pos}(\Gamma, A)$ to $A$, for every context $\Gamma$.
Whereas we deal with the monadic view on varieties of ordered algebras in the present paper, the view using algebraic theories has been investigated by Power with coauthors, e.g.~\cite{Pow99, PP01, PP02,
NP09}, see \autoref{S:enriched}. In particular, the paper~\cite{NP09} works with enriched categories over a monoidal closed category $\mathscr V$\smnote{This should
be the Vcat macro; the V-macro is for varieties.} for which a $\mathscr V$-enriched base category $\mathscr C$ has been chosen. Then enriched algebraic $\mathscr C$-theories are shown to correspond to $\mathscr V$-enriched monads on $\mathscr C$. This is particularly relevant for the current paper: by choosing $\mathscr V=\mathsf{Set}$ and $\mathscr C=\mathsf{Pos}$ we treat non-enriched finitary monads on $\mathsf{Pos}$, whereas the choice $\mathscr V=\mathscr C=\mathsf{Pos}$ covers the enriched case.
\paragraph{Acknowledgement} The authors are grateful to Ji\v{r}\'i\ Rosick\'y for fruitful discussion.
\section{Equational Presentations of Monads}\label{S:eqpres}
We now recall the approach to equational presentations of finitary monads introduced by Kelly and Power~\cite{KP93}; our aim here is to bring the rest of the paper into this perspective. However, we note that the signatures used here are more general than those of the subsequent sections, and (unlike later) some enriched category theory is used. The reader may skip this section without losing the thread of the paper.
For a locally finitely presentable category $\mathscr C$ enriched over a symmetric monoidal closed category $\mathscr V$ Kelly and Power consider (enriched) monads on $\mathscr C$ that are finitary, i.e.~the ordinary underlying endofunctor preserves filtered colimits. Below we specialize their approach to $\mathscr C = \mathsf{Pos}$ considered as an ordinary category ($\mathscr V = \mathsf{Set}$) or as a category enriched over itself ($\mathscr V = \mathsf{Pos}$) as a cartesian closed category. In the first case, the hom-object $\mathsf{Pos}(A,B)$\smnote{Please do not introduce any bracket notation for hom-objects here; it is not needed!} is the \emph{set} of all monotone functions from $A$ to $B$; in the latter case, this is the \emph{poset} of those functions, ordered pointwise. As in \autoref{S:intro}, a representative set $\Pos_\mathsf{f}$ of finite posets (called \emph{contexts}) is chosen which is to be viewed as a full subcategory of $\mathsf{Pos}$. We denote by \[ {\mid}\Pos_\mathsf{f}{\mid} \] the corresponding discrete category.
\begin{definition}\label{D:ndsig}
A \emph{signature} is a functor from ${\mid}\Pos_\mathsf{f}{\mid}$ to $\mathsf{Pos}.$
In other words, a signature $\Sigma$ is a collection of posets
$\Sigma_{\Gamma}$ of \emph{operation symbols in context} $\Gamma$
indexed by $\Gamma\in\Pos_\mathsf{f}$. A morphism $s\colon \Sigma \to \Sigma'$
of signatures, being a natural transformation, is thus just a family
of monotone maps $s_\Gamma\colon \Sigma_\Gamma \to \Sigma_\Gamma'$
indexed by contexts.
We denote by
\[
\mathsf{Sig} = [|\Pos_\mathsf{f}|, \mathsf{Pos}]
\]
the category of signatures and their morphisms. \end{definition}
\noindent In the introduction we considered the special case of signatures where each poset $\Sigma_{\Gamma}$ is discrete, i.e.~we just have a \emph{set} of operation symbols in context $\Gamma$; for emphasis, we will call such signatures \emph{discrete}.
\begin{remark}\label{R:tensor}
Recall~\cite[Def.~6.5.1]{Borceux94-2} the concept of a \emph{tensor} for objects $V \in \mathscr V$ and
  $C \in \mathscr C$: it is an object $V\otimes C$ of $\mathscr C$ together with an
  isomorphism
  \[
  \mathscr C(V \otimes C, X) \cong \mathscr V(V, \mathscr C(C,X))
  \]
  in $\mathscr V$ which is $\mathscr V$-natural in $X$. Here $\mathscr V(-,-)$ denotes
the internal hom-functor of $\mathscr V$.
In the case where $\mathscr C = \mathsf{Pos}$ and $\mathscr V = \mathsf{Set}$ we get the
copower
\[\textstyle
V\otimes C= \coprod_{V} C,
\]
and for $\mathscr C=\mathscr V=\mathsf{Pos}$ we just get the product in $\mathsf{Pos}$:
\[
V\otimes C= V\times C.
\] \end{remark}
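\noindent For instance (a small computation included only for illustration), take $C = \mathbbm 2$, the two-chain $\{0<1\}$, and let $V$ be the two-chain as well (in the case $\mathscr V = \mathsf{Set}$, its underlying two-element set). Then
\[
V\otimes C = \mathbbm 2 + \mathbbm 2
\qquad\text{for $\mathscr V=\mathsf{Set}$,}
\]
the disjoint union of two $2$-chains, whereas
\[
V\otimes C = \mathbbm 2\times\mathbbm 2
\qquad\text{for $\mathscr V=\mathsf{Pos}$,}
\]
the four-element `square' in which $(0,1)$ and $(1,0)$ are incomparable. Both posets have four elements, but their orders differ.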
\begin{notation}
\begin{enumerate}
\item We denote by $\mathsf{Fin}(\Pos)$\smnote{I propose yet another notation
for the categories of finitary endofunctors and monads.}
the enriched category of finitary enriched endofunctors on
$\mathsf{Pos}$. In the case where $\mathscr V = \mathsf{Set}$, these are all
endofunctors preserving filtered colimits. For $\mathscr V = \mathsf{Pos},$
these are all locally monotone endofunctors preserving filtered
colimits.
\item The category of finitary enriched monads on $\mathsf{Pos}$ is denoted
by $\mathop{\mathsf{FinMnd}}(\mathsf{Pos})$. We have a forgetful functor $U\colon \mathop{\mathsf{FinMnd}}(\mathsf{Pos})
\to \mathsf{Fin}(\Pos)$.
\end{enumerate} \end{notation}
\noindent By precomposing endofunctors with the non-full embedding $J\colon{\mid}\mathsf{Pos}_f{\mid}\rightarrow\mathsf{Pos}$ we obtain a forgetful functor from $\mathsf{Fin}(\Pos)$ to $\mathsf{Sig}$. It has a left adjoint\smnote{Kelly
and Power did not notice that because they never work with
endofunctors. I think they go right from signatures to monads.} assigning to every signature~$\Sigma$ the \emph{polynomial functor} $P_\Sigma$\smnote{I proposed and Jirka agreed to change $H_\Sigma$ to
$P_\Sigma$ in this section.} given on objects by \begin{equation}\label{eq:KPpoly}\textstyle
  P_\Sigma X = \coprod_{\Gamma \in \Pos_\mathsf{f}} \mathsf{Pos}(\Gamma, X) \otimes \Sigma_\Gamma, \end{equation} and similarly on morphisms. As previously explained, the hom-object $\mathsf{Pos}(\Gamma,X)$ can have one of two meanings: for $\mathscr V = \mathsf{Set}$ it is regarded as a set and for $\mathscr V = \mathsf{Pos}$ as a poset. Henceforth, we will use that notation for hom-objects only in the latter case and write \[
\Pos_{0}(\Gamma,X) \] for the set of monotone maps.
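As a quick illustration of \eqref{eq:KPpoly} (for a one-operation signature chosen only for concreteness), suppose that $\Sigma_{\mathbbm 2}=\{\sigma\}$ for the two-chain $\mathbbm 2$ and that all other $\Sigma_\Gamma$ are empty. Then
\[
P_\Sigma X \cong \Pos_{0}(\mathbbm 2, X)
\qquad\text{resp.}\qquad
P_\Sigma X \cong \mathsf{Pos}(\mathbbm 2, X),
\]
i.e.~$P_\Sigma X$ consists of the comparable pairs of $X$, ordered discretely in the case $\mathscr V = \mathsf{Set}$ and pointwise in the case $\mathscr V = \mathsf{Pos}$.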
\begin{observation}\label{O:twoenrichments}
The usual category of algebras for the functor $P_{\Sigma}$, whose
objects are posets $A$ with a monotone map
$\alpha\colon P_{\Sigma}A\to A$, has the following form for our two
  enrichments:
\begin{enumerate}
\item Let $\mathscr V = \mathsf{Set}$. Then $\alpha$ as above is a monotone map
\[\textstyle
\coprod_{\Gamma\in \Pos_\mathsf{f}}\coprod_{u\in\Pos_{0}(\Gamma, A)}\Sigma_{\Gamma}\to A,
\]
and as such has components assigning to every monotone function
$u\colon\Gamma\rightarrow A$ (that is, a monotone interpretation of the
variables in~$\Gamma$) a monotone function
$\Sigma_{\Gamma}\to A$. We denote this function by
$\sigma\mapsto\sigma_A(u)$.
In other words, the poset $A$ is equipped with operations
$\sigma_A\colon \Pos_{0}(\Gamma, A)\to A$ (which need not be monotone
    since $\Pos_{0}(\Gamma, A)$ is just a set) satisfying
    $\sigma_A(u) \leq \tau_A(u)$ for all pairs $\sigma \leq \tau$ in
    $\Sigma_\Gamma$ and $u$ in $\Pos_{0}(\Gamma, A)$. If $\Sigma$ is
discrete, this is precisely a
$\Sigma$-algebra (see the \hyperref[S:intro]{introduction}).
\item Now let $\mathscr V=\mathsf{Pos}$. Then $\alpha \colon P_{\Sigma}A\to A$ is a monotone map
\[\textstyle
\coprod_{\Gamma \in \Pos_\mathsf{f}}\mathsf{Pos}(\Gamma, A)\times\Sigma_{\Gamma}\to A,
\]
and thus has as components monotone functions
$(u, \sigma)\mapsto\sigma_A(u).$ That is, in addition to the
condition that $\sigma_A(u) \leq \tau_A(u)$ for all pairs
$\sigma \leq \tau$ in $\Sigma_\Gamma$ and $u$ in $\mathsf{Pos}(\Gamma, A)$
as above, we also see that each $\sigma_A$ is monotone. Thus, if
$\Sigma$ is discrete, this is
precisely a coherent algebra (again, see the \hyperref[S:intro]{introduction}).
\end{enumerate} \end{observation}
\noindent Observe also that `homomorphism' has the usual meaning: a monotone function preserving the given operations. In fact, given algebras $\alpha\colon P_\Sigma A\to A$ and $\beta\colon P_\Sigma B\to B$ a homomorphism is a monotone function $f\colon A\to B$ such that $f\cdot\alpha=\beta\cdot P_\Sigma f$.\smnote{Let
us please use cdot for composition everywhere; not juxtaposition,
which makes things hard to read.} This is equivalent to $f(\sigma_A(u))=\sigma_B(f\cdot u)$ for all $u\in\mathsf{Pos}(\Gamma, A)$ and all $\sigma\in\Sigma_{\Gamma}$.
\begin{remark}
\begin{enumerate}
\item As shown by Trnkov\'{a} et al.~\cite{TrnkovaEA75} (see also
Kelly~\cite{Kelly80})\smnote{I think it's better to keep also the
citation of Kelly because our paper is for John} every
ordinary finitary endofunctor $H$ on $\mathsf{Pos}$ generates a free monad
whose underlying functor $\widehat{H}$ is a colimit of the
$\omega$-chain
\[
\widehat{H}=\mathsf{colim}_{n<\omega} W_n
\]
of functors, where
\[
W_0=\mathsf{Id}\qquad\text{and}\qquad W_{n+1}=HW_n+\mathsf{Id}
\]
Connecting morphisms are
$w_0\colon\mathsf{Id}\to H+\mathsf{Id},$ the coproduct
injection, and $w_{n+1}=Hw_n+\mathsf{Id}$. The colimit
injections $c_n\colon W_n X \to \widehat{H}X$ in $\mathsf{Pos}$ have the
property that if a parallel pair $u,v\colon \widehat{H}X \to A$
    satisfies $u \cdot c_n \leq v \cdot c_n$ for all $n < \omega$,
    then we have $u \leq v$. It follows that $\widehat{H}$ is enriched
if $H$ is.
\item The category of $H$-algebras is isomorphic to the Eilenberg-Moore category
$\mathsf{Pos}^{\widehat{H}}$~\cite{Bar70}.
\item Lack~\cite{Lac99} shows that the forgetful functor
\[
\mathop{\mathsf{FinMnd}}(\mathsf{Pos})
\xra{U} \mathsf{Fin}(\Pos)
\xra{J} \mathsf{Sig}
\]
is monadic. The corresponding monad $\mathbb M$ on $\mathsf{Sig}$ assigns to
every signature $\Sigma$ the signature $\widehat{P_\Sigma} \cdot
J\colon |\Pos_\mathsf{f}| \to \mathsf{Pos}$.
\item It follows that every enriched finitary monad $\mathbb T$ on $\mathsf{Pos}$
can be regarded as an algebra for the monad $\mathbb M$. Therefore,
$\mathbb T$ is a coequalizer in $\mathop{\mathsf{FinMnd}}(\mathsf{Pos})$ of a parallel pair of
monad morphisms between free $\mathbb M$-algebras on
signatures $\Delta,\Sigma$:
\[
\begin{tikzcd}
\widehat{P_\Delta}
\arrow[shift left]{r}{\ell}
\arrow[shift right]{r}[swap]{r}
&
\widehat{P_\Sigma}
\arrow{r}{c}
&
\mathbb T.
\end{tikzcd}
\]
    This is the equational presentation of $\mathbb T$ considered by Kelly
    and Power~\cite{KP93}.
\end{enumerate} \end{remark}
\begin{example}
\begin{enumerate}
\item In the case where $\mathscr V = \mathsf{Set}$ and $\mathscr C = \mathsf{Pos}$,
$\mathop{\mathsf{FinMnd}}(\mathsf{Pos})$ is the category of (non-enriched) finitary monads on
$\mathsf{Pos}$. Consider the above coequalizer in the special case that
$\Delta$ consists of a single operation $\delta$ of context
$\Gamma$. That is, $\Delta_{\Gamma}=\{\delta\}$ and all
$\Delta_{\bar{\Gamma}}$ for $\bar{\Gamma}\neq\Gamma$ are empty. By
the Yoneda lemma, $l$ and $r$ simply choose two elements of
$\widehat{H}_{\Sigma}\Gamma$, say $t_l$ and $t_r$. The above
coequalizer means that $\mathbb T$ is presented by the signature $\Sigma$
and the equation $t_l=t_r$\lsnote{replaced $x$ with~$t$}.
\newline\indent For $\Delta$ arbitrary, we do not get one
equation, but a set of equations (one for every operation symbol
in $\Delta$) and $\mathbb T$ is presented by $\Sigma$ and the
corresponding set of equations, grouped by their respective
contexts.
\item
The case $\mathscr V=\mathscr C=\mathsf{Pos}$ yields as $\mathop{\mathsf{FinMnd}}(\mathsf{Pos})$ the category of enriched finitary
monads on $\mathsf{Pos}$. That is, the underlying endofunctor $T$ is locally monotone.
\end{enumerate} \end{example}
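\noindent For instance (one possible choice of the data above, given only for illustration of the first item): let $\Sigma$ consist of a single operation symbol $*$ whose arity is the two-element discrete poset $\{x,y\}$, let $\Delta$ consist of a single operation symbol of the same arity, and let $\ell$ and $r$ choose the terms
\[
t_\ell = x*y
\qquad\text{and}\qquad
t_r = y*x
\]
in $\widehat{P_\Sigma}\{x,y\}$. The resulting coequalizer then presents the free-algebra monad of posets equipped with a commutative (not necessarily monotone) binary operation.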
\begin{remark}
The fact that every finitary (possibly enriched) monad on $\mathsf{Pos}$ has
an \emph{equational} presentation depends heavily on the fact that
signatures are not reduced to the discrete ones. In contrast, we make
do with discrete signatures in the rest of the paper, and then
obtain a characterization of finitary (possibly enriched) monads
using \emph{inequational} presentations. While it is clear that the
two specification formats are mutually convertible, inequational
presentations seem natural for varieties of algebras
on~$\mathsf{Pos}$.\lsnote{Reworded this to make the distinction clearer
(also, everyone please lose the phrase `in our paper'); is this
still diplomatic enough? JA: Yes.}
  Of course, it is possible to present $\Sigma$-algebras for
  non-discrete signatures $\Sigma$ as varieties of algebras for
  discrete ones (see \autoref{ex:var}\ref{ex:var:7}). Using the result
  of Kelly and Power, such a translation would lead to a
  correspondence between finitary monads and varieties. This paper can
  be viewed as a detailed realization of this idea. \end{remark}
\section{Varieties of Ordered Algebras}
Recall that $\Pos_\mathsf{f}$ is a fixed set of finite posets that represent all finite posets up to isomorphism. If $\Gamma\in\Pos_\mathsf{f}$ has the underlying set $\{x_0,\dots, x_{n-1}\}$, then we call the~$x_i$ the \emph{variables} of~$\Gamma$. Recall that all monotone functions from $A$ to $B$ form a set $\Pos_{0}(A,B)$ and a poset $\mathsf{Pos}(A,B)$ with the pointwise order.
\begin{notation}
The category $\mathsf{Pos}$ is cartesian closed, with hom-objects
$\mathsf{Pos}(X, Y)$ given by all monotone functions $X\rightarrow Y$,
ordered pointwise. That is, given monotone functions
$f,g\colon X\rightarrow Y$, by $f\leq g$ we mean that $f(x)\leq g(x)$
for all $x\in X$.
We denote by $|X|$ the underlying set of a poset $X$. We also often
consider $|X|$ to be the discrete poset on that set. \end{notation}
\begin{definition}\label{D:sig}
A \emph{signature in context}\lsnote{This term is not ideal} is a
set $\Sigma$ of operation symbols each with a prescribed context,
its \emph{arity}. That is, $\Sigma$ is a collection
$(\Sigma_{\Gamma})_{\Gamma\in\Pos_\mathsf{f}}$ of sets $\Sigma_{\Gamma}$. A
$\Sigma$-\emph{algebra} is a poset $A$ together with, for every
$\sigma\in\Sigma_{\Gamma}$, a function
\[
\sigma_A\colon \Pos_{0}(\Gamma, A)\rightarrow A.
\]
That is,~$\sigma_A$ assigns to every monotone valuation
$f\colon \Gamma\rightarrow A$ of the variables in $\Gamma$ an
element $\sigma_A(f)$ of $A$. The algebra $A$ is called
\emph{coherent} if each $\sigma_A$ is monotone, i.e.\ whenever
$f\leq g$ in $\mathsf{Pos}(\Gamma, A)$, then
$\sigma_A(f)\leq \sigma_A(g)$. \end{definition}
\begin{notation}
We denote by $\Alg \Sigma$ the category of $\Sigma$-algebras. Its
morphisms $A\to B$ are the \emph{homomorphisms} in the expected
sense; i.e.\ they are monotone functions $h\colon A\rightarrow B$
such that for every context $\Gamma$ and every operation symbol
$\sigma\in\Sigma_{\Gamma}$, the square
\[
\begin{tikzcd}
{\Pos_{0}(\Gamma, A)}
\arrow[d, "h\cdot (-)"']
\arrow[r, "\sigma_A"]
&
A \arrow[d, "h"]
\\
{\Pos_{0}(\Gamma, B)}
\arrow[r, "\sigma_B"]
&
B
\end{tikzcd}
\]
commutes. Similarly, we have the category $\Alg_\mathsf{c} \Sigma$ of all
coherent $\Sigma$-algebras. For their homomorphisms we have the
commutative squares
\[
\begin{tikzcd}
{\mathsf{Pos}(\Gamma, A)} \
\arrow{d}[swap]{h\cdot (-)}
\arrow[r, "\sigma_A"]
&
A \arrow[d, "h"]
\\
{\mathsf{Pos}(\Gamma, B)}
\arrow[r, "\sigma_B"]
&
B
\end{tikzcd}
\] \end{notation}
\begin{example}\label{E:lin}
Let $\Sigma$ be the signature given by
\[
\Sigma_{\mathbbm{2}}=\{+\}\quad\text{and}\quad \Sigma_{\mathbbm{1}}=\{@\},
\]
where $\mathbbm{2}$ is a $2$-chain and $\mathbbm{1}$ is a
singleton. A $\Sigma$-algebra consists of a poset $A$ with a (not
necessarily monotone) unary operation $@_A$ and a partial binary
operation $+_A$ whose definition domain is formed by all comparable
pairs. Moreover, $A$ is coherent iff both $@_A$ and $+_A$ are
monotone, the latter in the sense that $a+a'\leq b+b'$ whenever
$a\leq a', b\leq b', a\leq b$, and $a'\leq b'$. \end{example}
\takeout{ \smnote{Since this is a paper devoted to John, I strongly propose
\emph{not} to put the following discussion in numbered remarks or even
supress them but rather bring them out nicely.} Our notion of signature in context is inspired by Kelly and Power's notion of a signature in an enriched locally finitely presentable category~\cite{KP93}. In fact, our notion is a special case of their notion instantiated in $\mathsf{Pos}$. That instance defines a signature in
$\mathsf{Pos}$ as a functor $\Sigma\colon |\Pos_\mathsf{f}| \to \mathsf{Pos}$, where $|\Pos_\mathsf{f}|$
denotes the discrete category with finite posets as objects.\smnote{You might think that the notation $|\Pos_\mathsf{f}|$ clashes
with $|X|$. However, it's the same! Just recall that a poset $X$ is
a special category, and the underlying set $|X|$ is the discrete
category with objects the objects of $X$. So I feel free to overload
this notation.} More concretely, $\Sigma$ assigns to each finite poset a \emph{poset} $\Sigma_\Gamma$ in lieu of a just a set as in \autoref{D:sig}. Hence, our notion is the special case where each $\Sigma_\Gamma$ is a discrete poset.
Following the lead of Kelly and Power further, we now explain how the notions of $\Sigma$-algebras and coherent ones, respectively, are instances of the same concept in the enriched setting. This requires that we recall the notion of a copower (see e.g.~Kelly's book~\cite[Sec.~3.7]{Kelly82}).
\begin{remark}\label{R:copower}
\begin{enumerate}
\item Let $\mathscr V$ be a monoidal closed category, and suppose that
$\mathscr C$ is a category enriched over $\mathscr V$. The \emph{copower} of an
object $X$ of $\mathscr C$ by an object $V$ of $\mathscr V$ is an object $V
\bullet C$ with natural isomorphisms
\[
\mathscr C(V \bullet X, Y) \cong \mathscr V(V,\mathscr C(X,Y))
\qquad\text{for every object $Y$ of $\mathscr C$},
\]
where $\mathscr V(-,-)$ denotes the internal hom of $\mathscr V$.
Copowers are frequently called \emph{tensors} and a
$\mathscr V$-category having all copowers is called
\emph{tensored}.
\item The notion of copower obviously depends on the
enrichement. The category $\mathscr C = \mathsf{Pos}$ can be regarded as an
enriched category over $\mathscr V = \mathsf{Set}$ or over itself: $\mathscr V =
\mathsf{Pos}$. Therefore, we obtain two possible instances of the notion of a
copower:
\begin{enumerate}
\item For $\mathscr V = \mathsf{Set}$, the copower of $X$ by $V$ has a natural
isomorphism $\mathsf{Pos}(V \bullet X, Y) \cong \mathsf{Set}(V,
\mathsf{Pos}(X,Y))$. This implies that the copower is the coproduct
\[
V \bullet X = \coprod\nolimits_{v \in V} X,
\]
whence it is the usual copower.
\item For $\mathscr V = \mathsf{Pos}$, the copower of $X$ by $V$ has the
natural isomorphisms $\mathsf{Pos}(V \bullet X, Y) \cong \mathsf{Pos}(V,
\mathsf{Pos}(X,Y))$. This implies that the copower is the product
\[
V \bullet X = V \times X.
\]
\end{enumerate}
\end{enumerate} \end{remark}
Every signature $\Sigma\colon |\Pos_\mathsf{f}| \to \mathsf{Pos}$ (in the sense of Kelly and Power) gives rise to a polynomial functor on $\mathsf{Pos}$ canonically obtained by forming the left Kan-extension of $\Sigma$ along the embedding $J\colon |\Pos_\mathsf{f}| \hookrightarrow \mathsf{Pos}$: \[
\Lan_J \Sigma. \] In this case, the usual coend formula for computing left Kan-extensions~\cite[Thm.~X.4.1]{MacLane98} simplifies to a coproduct. Hence, the polynomial functor associated to $\Sigma$ is defined on objects by \[
X \mapsto \coprod\nolimits_{\Gamma \in \Pos_\mathsf{f}} \mathsf{Pos}(\Gamma, X) \bullet \Sigma_\Gamma \] and similarly on morphisms. In the above formula $\mathsf{Pos}(\Gamma, X)$ can mean a set or a poset and $\bullet$ can be one of the two copowers we mentioned in \autoref{R:copower} depending on which enrichment on $\mathsf{Pos}$ one would like to consider. Moreover, we shall now explain that by instantiating the copower with those two variants we obtain that the algebras for the above functor are precisely all $\Sigma$-algebras or all coherent ones, respectively. }
\noindent Similarly to the more general signatures discussed in \autoref{S:eqpres}, signatures~$\Sigma$ in our present sense can be represented as polynomial functors~$H_\Sigma$ (for $\Sigma$-algebras) and $K_\Sigma$ (for coherent $\Sigma$-algebras), respectively, introduced next. These functors arise by specializing the corresponding instances of the polynomial functor $P_\Sigma$ according to \autoref{O:twoenrichments} to discrete signatures.
\begin{notation}
The \emph{polynomial} and \emph{coherent polynomial} functors for a
signature $\Sigma$ are the endofunctors
$H_{\Sigma}\colon \mathsf{Pos} \to \mathsf{Pos}$ and $K_\Sigma\colon \mathsf{Pos} \to \mathsf{Pos}$ given by
\[
H_\Sigma X = \coprod\nolimits_{\Gamma \in \Pos_\mathsf{f}} \Sigma_\Gamma \times \Pos_{0}(\Gamma, X)
\qquad\text{and}\qquad
K_\Sigma X = \coprod\nolimits_{\Gamma \in \Pos_\mathsf{f}} \Sigma_\Gamma \times \mathsf{Pos}(\Gamma, X),
\]
respectively, where we regard the sets $\Sigma_\Gamma$ and
$\Pos_{0}(\Gamma,X)$ as discrete posets. Thus, the elements of both
$H_\Sigma X$ and $K_\Sigma X$ are pairs $(\sigma, f)$ where $\sigma$
is an operation symbol of arity~$\Gamma$ and
$f\colon \Gamma\rightarrow X$ is monotone. The action on
monotone maps $h\colon X\rightarrow Y$ is then the same for both
functors:
\[
H_{\Sigma}h(\sigma, f) = (\sigma, h\cdot f) = K_\Sigma h(\sigma,f).
\] \end{notation}
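\noindent To compare the two functors on a small example (the signature of \autoref{E:lin} and $X=\mathbbm 2$, written $\{0<1\}$, chosen only for illustration): both $H_\Sigma X$ and $K_\Sigma X$ have the five elements
\[
(+,(0,0)),\quad (+,(0,1)),\quad (+,(1,1)),\quad (@,0),\quad (@,1),
\]
where monotone maps $\mathbbm 2\to X$ are written as comparable pairs and maps $\mathbbm 1\to X$ as elements of $X$. In $H_\Sigma X$ these five elements are pairwise incomparable, whereas in $K_\Sigma X$ we additionally have $(+,(0,0))\leq(+,(0,1))\leq(+,(1,1))$ and $(@,0)\leq(@,1)$.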
\begin{remark}
\begin{enumerate}
\item \lsnote{Quite honestly I believe these considerations are
better discharged by a two-line statement accompanied by the
admittedly dangerous word `clearly' ;) }
\cfnote{I almost agree with the above: that these categories are isomorphic \emph{is}
clear; I don't mind leaving the details on concreteness of the iso though. }
Every $\Sigma$-algebra
$A$ induces an $H_\Sigma$-algebra
$\alpha\colon H_{\Sigma}A\rightarrow A$ given by
\[
\alpha(\sigma, f)=\sigma_A(f)
\qquad
          \text{for $\sigma \in \Sigma_\Gamma$ and $f \in \Pos_{0}(\Gamma,A)$.}
\]
Conversely, every $H_{\Sigma}$-algebra
$\alpha\colon H_{\Sigma}A\rightarrow A$ can be viewed as a
$\Sigma$-algebra, putting $\sigma_A(f)=\alpha(\sigma, f)$. More
conceptually, we have bijective correspondences between the
following (families of) maps:
\[
\begin{array}{r@{\,}l@{\quad}l}
\alpha\colon & H_\Sigma A \to A
\rule[-6pt]{0pt}{0pt}
\\
\hline
\alpha_\Gamma\colon &
\Sigma_\Gamma \times \Pos_{0}(\Gamma, A) \to A & (\Gamma \in \Pos_\mathsf{f})
\rule[-8pt]{0pt}{22pt}
\\
\hline
\sigma_A\colon & \Pos_{0}(\Gamma, A) \to A
& (\Gamma\in \Pos_\mathsf{f}, \sigma \in \Sigma_\Gamma)
\rule{0pt}{14pt}
\end{array}
\]
Thus, $\Alg \Sigma$ is isomorphic to the category $\Alg H_\Sigma$ of
algebras for $H_{\Sigma}$ whose morphisms from $(A, \alpha)$ to
$(B, \beta)$ are those monotone maps $h\colon A\rightarrow B$ for
which the square below commutes:
\[
\begin{tikzcd}
H_{\Sigma}A
\arrow[d, "H_{\Sigma}h"']
\arrow[r, "\alpha"]
&
A
\arrow[d, "h"]
\\
H_{\Sigma}B
\arrow[r, "\beta"]
&
B
\end{tikzcd}
\]
Indeed, this is equivalent to $h$ being a homomorphism of
    $\Sigma$-algebras. In short,
\[
\Alg \Sigma\cong\Alg H_\Sigma.
\]
Moreover, this isomorphism is concrete, i.e.~it preserves the
underlying posets (and monotone maps). That is, if
$U\colon\Alg \Sigma\rightarrow\mathsf{Pos}$ and
$\bar{U}\colon\Alg H_\Sigma\rightarrow\mathsf{Pos}$ denote the forgetful
functors, the above isomorphism
$I\colon \Alg \Sigma\rightarrow\Alg H_\Sigma$ makes the following
triangle commutative:
\[
\begin{tikzcd}[column sep = 10]
\Alg \Sigma
\arrow[rr, "I"]
\arrow[rd, "U"']
&&
\Alg H_\Sigma
\arrow[ld, "\bar{U}"]
\\
&
\mathsf{Pos}
\end{tikzcd}
\]
\item Similarly, every coherent $\Sigma$-algebra defines an algebra
for $K_\Sigma$, and conversely. Indeed, giving an algebra structure
$\alpha\colon K_\Sigma A \to A$ is to give a context-indexed
family of monotone maps
\[
\alpha_\Gamma\colon \Sigma_\Gamma \times \mathsf{Pos}(\Gamma, A) \to A.
\]
Equivalently, we have for every $\sigma$ of arity $\Gamma$ a
monotone map $\sigma_A\colon \mathsf{Pos}(\Gamma, A) \to A$.
This leads to an isomorphism $\Alg_\mathsf{c} \Sigma \cong \Alg K_\Sigma$,
which is concrete:
\[
\begin{tikzcd}[column sep = 10]
\Alg_\mathsf{c} \Sigma
\arrow[rr, "I_c"]
\arrow[rd, "U_c"']
&&
\Alg K_\Sigma
\arrow[ld, "\bar{U}_c"]
\\
&
\mathsf{Pos}
\end{tikzcd}
\]
where $I_c$, $U_c$ and $\bar U_c$ denote the isomorphism and the
forgetful functors, respectively.
\end{enumerate} \end{remark}
\begin{remark}\label{R:fact}
\lsnote{Do we still need this given the new proof of reflexivity?}
Recall that epimorphisms in $\mathsf{Pos}$ are precisely the surjective
monotone maps. $\mathsf{Pos}$ has the factorization system\lsnote{We should
decide at some point how much category theory we want to assume.}
\[
(\text{epimorphism}, \text{embedding})
\]
where \emph{embeddings} are maps $m\colon A\rightarrow B$ such that
for all $a, a'\in A$ we have $a\leq a'$ iff $m(a)\leq m(a')$. That
is, embeddings are order-reflecting monotone
functions.\lsnote{Should we add a categorical description of what
embeddings are, such as regular monos?}
\cfnote{It certainly couldn't hurt to add this in a sentence in my opinion}
Given an $\omega$-chain of embeddings in $\mathsf{Pos}$, its colimit is simply their union (with inclusion maps as the colimit cocone). \end{remark}
\begin{proposition}\label{P:free}
Every poset $X$ generates a free $\Sigma$-algebra $T_\Sigma X$. Its
underlying poset is the union of the following $\omega$-chain of
embeddings in $\mathsf{Pos}$:
\begin{equation}\label{eq:chain}
W_0= X
\xra{w_0}
W_1 = H_{\Sigma}X + X
\xra{w_1}
W_2 = H_{\Sigma}W_1 + X
    \xra{w_2}
\cdots
\end{equation}
where $w_0$ is the right-hand coproduct injection
$X \to H_\Sigma X + X$ and
  $w_{n+1} = H_\Sigma w_n + \mathsf{id}_X \colon W_{n+1} = H_\Sigma W_n + X \to
  H_\Sigma W_{n+1} + X = W_{n+2}$ for every $n$. The universal map
$\eta_X\colon X \to T_\Sigma X$ is the inclusion of~$W_0$ into the
union. \end{proposition}
\begin{proof}
Observe first that the polynomial functor $H_\Sigma$ can be
rewritten, up to natural isomorphism, as
\[
H_\Sigma X \cong \coprod\nolimits_{\Gamma \in \Pos_\mathsf{f}}
\coprod\nolimits_{\Sigma_\Gamma} \Pos_{0}(\Gamma, X),
\]
because every $\Sigma_\Gamma$ is discrete. It follows that
$H_\Sigma$ is finitary, being a coproduct of functors
$\Pos_{0}(\Gamma, -)$ (each $\Pos_{0}(\Gamma, -)$ is finitary because
$\Gamma$ is finite).
It follows that the free $H_{\Sigma}$-algebra over~$X$ is the
colimit of the $\omega$-chain $(W_n)$ from~\eqref{eq:chain} in
$\mathsf{Pos}$, where $W_0= X$ and $W_{n+1}=H_{\Sigma}W_n + X$ with connecting
  maps~$w_n$ as described~\cite{Ada74}.\smnote{The citation should be
deleted here because it indicates where this fact is proved, not
where the chain is defined, which is above} The desired result
thus follows from the concrete isomorphism
$\Alg \Sigma\cong \Alg H_{\Sigma}$. \end{proof}
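\noindent To see the chain at work, consider (only as an illustration) the signature of \autoref{E:lin} and $X=\{a<b\}$. Then $W_1 = H_\Sigma X + X$ contains, besides the chain $a<b$, the pairwise incomparable elements
\[
@a,\quad @b,\quad a+a,\quad a+b,\quad b+b,
\]
one for each monotone map into $X$. The next step $W_2$ adds, among others, $@(a+b)$ and $(a+a)+(a+a)$, but no element $(a+a)+(b+b)$, since $a+a$ and $b+b$ are incomparable in $W_1$. In the union $T_\Sigma X$, a term $s+t$ with $s\neq t$ therefore occurs only when $s$ and $t$ are variables with $s\leq t$ in $X$.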
\noindent A similar result can be proved for coherent $\Sigma$-algebras and the associated functor $K_\Sigma$, using the fact that like $\Pos_{0}(\Gamma,-)$, also the internal hom-functor $\mathsf{Pos}(\Gamma,-)$ is finitary:
\begin{proposition}\label{P:freec}
Every poset $X$ generates a free coherent $\Sigma$-algebra $T^\mathsf{c}_\Sigma X$. Its
underlying poset is the union of the following $\omega$-chain of
embeddings in $\mathsf{Pos}$:
\[
W_0= X
\xra{w_0}
W_1 = K_{\Sigma}X + X
\xra{w_1}
W_2 = K_{\Sigma}W_1 + X
    \xra{w_2}
\cdots
\]
The universal morphism $\eta_X\colon X \to T^\mathsf{c}_\Sigma X$ is the
inclusion of~$W_0$ into the union. \end{proposition}
\takeout{ \begin{definition}
A \emph{term} in context $\Gamma$ is an element of the poset
$T_\Sigma(\Gamma)$. We denote the ordering on terms by $\sqsubseteq$. \end{definition}
\begin{remark}\label{E:term}
Explicitly, for a poset~$X$, terms in $T_\Sigma X$ and their ordering
$\sqsubseteq$ are generated inductively by the following rules:
\begin{itemize}
\item Every variable $x\in X$ is a term.
\item If $x\le y$ for variables $x,y\in X$, then $x\sqsubseteq y$
for the corresponding terms.
\item If $\sigma\in\Sigma$ has arity~$\Gamma$ and
$f\colon \Gamma\to T_\Sigma X$ is monotone, then $\sigma(f)$ is a
term.
\item Any term $\sigma(f)$ as per the previous item is comparable to
itself.
\end{itemize}
Of course, the operation $\sigma_{T_\Sigma X}$ of $T_\Sigma X$ for
$\sigma\in\Sigma_{\Gamma}$ assigns to
$f\colon \Gamma\rightarrowT_\Sigma X$ the value $\sigma(f)$.
The above description implies that there are rather fewer terms in
$T_\Sigma X$ than one would maybe expect: By the above clauses, we
have $t\sqsubseteq s$ for terms $t,s\inT_\Sigma X$ iff either $t=s$
or $t=x,s=y$ for variables $x,y\in X$ such that $x\le
y$. Consequently, whenever an operation $\sigma\in\Sigma_\Gamma$ has
nondiscrete arity, say $x\le y$ for distinct $x,y\in\Gamma$, then
terms of the form $\sigma(f)$ with~$\sigma,f$ as above exist in
$T_\Sigma X$ only if either $f(x)=f(y)$, or both $f(x)$ and $f(y)$
are variables in~$X$, and $f(x)\le f(y)$ in~$X$. For instance,
for~$\Sigma$ being the signature of \autoref{E:lin}, all terms in
context $\Gamma$ have have one of the forms
\begin{equation*}
x,\qquad x+y,\qquad @t,\qquad \text{or}\qquad t+t,
\end{equation*}
where $t$ is a term and $x,y$ are variables from $\Gamma$ such that
$x\leq y$.
In $T^\mathsf{c}_\Sigma X$, both the ordering and, consequently, the set of
terms are larger than in $T_\Sigma X$, as we have the following
additional rule for $\sqsubseteq$:
\begin{itemize}
\item Given $\sigma\in\Sigma_\Gamma$ and
$f,g\colon\Gamma\to T_\Sigma(X)$, if $f\sqsubseteq g$ (pointwise)
then $\sigma(f)\sqsubseteq\sigma(g)$.
\end{itemize}
E.g.\ in the signature of \autoref{E:lin}, given $x,y\in X$ such
that $x\le y$, $T^\mathsf{c}_\Sigma X$ contains the term
\begin{equation*}
@x+@y
\end{equation*}
since $@x\sqsubseteq @y$ by the above rule; this term is not
contained in $T_\Sigma X$.
\end{remark}
\begin{notation}\label{N:free}
Let $A$ be a $\Sigma$-algebra. For every monotone function
$f\colon \Gamma\rightarrow A$ (valuation of variables of $\Gamma$ in
$A$) we denote by
\[
f^{\#}\colon T_\Sigma(\Gamma)\rightarrow A
\]
the corresponding homomorphism (interpretation of terms). For
example, given $\sigma\in\Sigma_{\Gamma}$ we have
$f^{\#}(\sigma(\eta_\Gamma))= \sigma_A(f)$. \end{notation} }
\begin{definition}\label{def:terms}
We define \emph{terms} as usual in universal algebra, ignoring the
order structure of arities; we write $\Term(\Gamma)$ for the set of
$\Sigma$-terms in variables from $\Gamma$. Explicitly, the set
$\Term(\Gamma)$ of terms is the least set containing~$|\Gamma|$ such
that given an operation~$\sigma$ with arity $\Delta$ and a function
$f\colon|\Delta|\to\Term(\Gamma)$, we obtain a term
$\sigma(f)\in\Term(\Gamma)$. \end{definition}
We denote by $u_\Gamma\colon \Gamma \to \Term(\Gamma)$ the inclusion map.\smnote{This should \emph{not} be called $\eta$ because this is
used as the unit of the monad $T$, which will be very confusing for
the reader later.} We will often silently assume that the elements of $|\Delta|$ are listed in some fixed sequence $x_1,\dots,x_n$, and then write $\sigma(t_1, \ldots, t_n)$ in lieu of $\sigma(f)$ where
$f(x_i)=t_i$ for $i=1,\dots,n$. In particular, in examples we will normally use arities $\Delta$ with $|\Delta|=\{1,\dots,k\}$ for some~$k$, and then assume the elements of $\Delta$ to be listed in the sequence $1,\dots,k$. We will often abbreviate $(t_1, \ldots, t_n)$ as $(t_i)$, in particular writing $\sigma(t_i)$ in lieu of $\sigma(t_1,\dots,t_n)$. Every $\sigma \in\Sigma_\Gamma$ yields the term $\sigma(u_\Gamma)\in\Term(\Gamma)$, which by abuse of notation we will occasionally write as just~$\sigma$.
\begin{example}
  Let $\Sigma$ be a signature with a single operation symbol $\sigma$
  whose arity is a $2$-chain. Then $\Term(\Gamma)$ is the set of usual terms
  for a binary operation on the variables from $\Gamma$, whereas
  $T_\Sigma\Gamma$ contains only those terms that are variables or have the
form $\sigma(t,t)$ for terms $t$ or $\sigma(x,y)$ for $x \leq y$ in
$\Gamma$. The order of $T_\Sigma\Gamma$ is such that the only
comparable distinct terms are the variables. \end{example}
\begin{definition}\label{D:sharp}
Let $A$ be a $\Sigma$-algebra. Given a context $\Gamma$ (of
variables) and a monotone interpretation $f\colon \Gamma \to A$, the
\emph{evaluation map} is the partial map
\[
f^\#\colon \Term(\Gamma) \to |A|
\]
defined recursively by
\begin{enumerate}
\item $f^\#(x) = f(x)$ for every $x \in |\Gamma|$, and
\item $f^\#(\sigma(g))$ is defined for $\sigma \in \Sigma_\Delta$
    and $g\colon|\Delta|\to \Term(\Gamma)$ iff $f^\#(g(i))$ is
    defined for all $i \in |\Delta|$ and $i \leq j$ in $\Delta$ implies
$f^\#(g(i)) \leq f^\#(g(j))$ in $A$; then
$f^\#(\sigma(g)) = \sigma_A(f^\#\cdot g)$.
\end{enumerate} \end{definition}
\begin{example}\label{E:term}
\begin{enumerate}
\item For the signature in \autoref{E:lin}, we have terms in
    $\Term(\{x,y\})$ such as $@x$, $y + @x$, etc. Given a
$\Sigma$-algebra $A$ and an interpretation $f\colon\{x,y\}\to A$
(say, with~$\{x,y\}$ ordered discretely), we see that $@x$ is
always interpreted as $f^\#(@x)=@_A(f(x))$, whereas $f^\#(y + @x)$
is defined if and only if $f(y) \leq @_A(f(x))$, and then
$f^\#(y + @x)=f(y) +_A @_A(f(x))$.
\item\label{E:term:2}
\takeout{
Recall from \autoref{def:terms} that every
operation symbol $\sigma \in \Sigma_\Gamma$ defines a term
$\sigma(x_i)$, where $|\Gamma| = \{x_1, \ldots, x_n\}$. Given any
interpretation $f\colon \Gamma \to A$, since $f$ is monotone,
$f^\#(\sigma(x_i))$ is defined, and we have}
Every operation symbol $\sigma \in \Sigma_\Gamma$ considered as a
term (see \autoref{def:terms}) satisfies
\[
f^\#(\sigma) = \sigma_A(f(x_i)).
\]
\end{enumerate} \end{example}
\begin{definition}\label{def:ineqs}
An \emph{inequation in context} $\Gamma$ is a pair $(s,t)$ of terms
in $\Term(\Gamma)$, written in the form
\[
\Gamma\vdash s\leq t.
\]
Furthermore, we denote by
\[
\Gamma\vdash s = t
\]
the conjunction of the inequations $\Gamma\vdash s\leq t$ and
$\Gamma\vdash t\leq s$.
A $\Sigma$-algebra \emph{satisfies} $\Gamma \vdash s \leq t$ if for
every monotone function $f\colon \Gamma \to A$, both $f^\#(s)$ and
$f^\#(t)$ are defined and $f^\#(s) \leq f^\#(t)$. \end{definition}
\begin{example}\label{E:lin2}
For the signature of \autoref{E:lin}, consider the singleton context
$\{x\}$ and the inequation
\begin{equation}\label{Eqn:2.1}
\{x\}\vdash x\leq @x.
\end{equation}
An algebra $A$ satisfies this inequation iff $a\leq @_A(a)$ holds
for every $a\in A$. In such algebras, the interpretation of the term
$x+@x$ is defined everywhere. As a slightly more advanced example,
  consider the inequation (in the same signature)
\begin{equation*}
\{x\le y\}\vdash x + @x \le x.
\end{equation*}
  By the reading of inequations given in \autoref{def:ineqs},
  this inequation implies that $x + @x$ is always defined, which
amounts precisely to \eqref{Eqn:2.1}. \end{example}
\begin{definition}
A \emph{variety of $\Sigma$-algebras} is a full subcategory of
$\Alg \Sigma$ specified by a set~$\mathcal E$ of inequations in context. We
denote it by $\Alg(\Sigma, \mathcal E)$.\smnote{Please do not
delete; it is used all the time.}\lsnote{Calling a set of
\emph{in}equations `$\mathcal E$' is maybe not very suggestive, or
rather suggestive of the wrong thing. How about $\mathcal I$ like
in Chase's thesis?} Analogously, a \emph{variety of coherent
$\Sigma$-algebras} is a full subcategory of $\Alg_\mathsf{c} \Sigma$ specified
by a set of inequations in context. \end{definition}
\begin{example}\label{ex:var} We present some varieties of algebras. \begin{enumerate} \item We have seen a variety $\mathcal V$ specified by \eqref{Eqn:2.1} in \autoref{E:lin2}.
\item The subvariety of all coherent algebras in $\mathcal V$ can be specified
as follows. Consider the
contexts $\Gamma_1$ and $\Gamma_2$ given by \[
\Gamma_1 =
\begin{tikzcd}[sep = 30,baseline = -2]
y
\arrow[no head]{d}
\\
x
\end{tikzcd}
\qquad\text{and}\qquad
\Gamma_2 =
\begin{tikzcd}[column sep = 10, row sep = 10, baseline=(B.base)]
& y' \arrow[rd, no head] & \\
|[alias=B]|x' \arrow[rd, no head] \arrow[ru, no head] & & y \arrow[ld, no head] \\
& x & \end{tikzcd} \] and the inequations \begin{equation}\label{Eqn:2.3}
\Gamma_1\vdash @x\leq @y
\qquad\text{and}\qquad
\Gamma_2\vdash x+y\leq x'+y'. \end{equation}
It is clear that $\Sigma$-algebras satisfying \eqref{Eqn:2.1} and \eqref{Eqn:2.3} form precisely the full subcategory of $\mathcal V$ consisting of coherent algebras.
\item\label{ex:var:3} In general, all coherent $\Sigma$-algebras form a variety of $\Sigma$-algebras. For every context $\Gamma$, form the context $\bar{\Gamma}$ with variables $x$ and $x'$ for every variable $x$ of $\Gamma$, where the order is the least one such that the functions $e, e'\colon\Gamma\rightarrow\bar{\Gamma}$ given by $e(x)=x$ and $e'(x)=x'$ are embeddings such that $e\leq e'$. For every $\Gamma$ and every $\sigma\in\Sigma_{\Gamma}$ consider the following inequation in context $\bar{\Gamma}$: \[\bar{\Gamma}\vdash \sigma(e)\leq\sigma(e').\] It is satisfied by precisely those $\Sigma$-algebras $A$ for which $\sigma_A$ is monotone.
\item\label{ex:var:4}
Recall that an \emph{internal semilattice} in a category with finite products
is an object $A$ together with morphisms $+\colon A\times A\to A$ and
$0\colon 1\to A$ such that
\begin{enumerate}
\item $0$ is a unit for $+$, i.e.~the following triangles
commute\smnote{The notation was wrong here; given
$f\colon X \to A$ and $g\colon X \to B$ the unique induced
morphism is denoted by
$\langle f, g\rangle\colon X \to A \times B$.}
\[
\begin{tikzcd}
A \cong 1 \times A
\arrow{r}{0 \times \mathsf{id}}
\ar[equals]{rd}
&
A\times A
\arrow{d}{+}
&
A \times 1 \cong A
\ar{l}[swap]{\mathsf{id} \times 0}
\ar[equals]{ld}
\\
&
A
\end{tikzcd}
\]
\item $+$ is associative, commutative, and idempotent:
\[
\begin{tikzcd}
A \times A \times A
\arrow{r}{+ \times \mathsf{id}}
\arrow{d}[swap]{\mathsf{id} \times +}
&
A \times A
\ar{d}{+}
\\
A\times A
\ar{r}{+}
&
A
\end{tikzcd}
\qquad
\begin{tikzcd}
A \times A
\ar{r}{\mathsf{swap}}
\ar{rd}[swap]{+}
&
A \times A
\ar{d}{+}
\\
& A
\end{tikzcd}
\qquad
\begin{tikzcd}
A
\ar{r}{\Delta}
\ar[equals]{rd}
&
A \times A
\ar{d}{+}
\\
&
A
\end{tikzcd}
\]
Here $\mathsf{swap} = \fpair{\pi_r, \pi_\ell}\colon A \times A \to A \times
A$ is the canonical isomorphism commuting product components, and
$\Delta = \fpair{\mathsf{id},\mathsf{id}}\colon A \to A \times A$ is the diagonal.
\end{enumerate}
\noindent
Internal semilattices in $\mathsf{Pos}$ form a
variety of coherent $\Sigma$-algebras. To see this, consider the
signature $\Sigma$ with $\Sigma_2 = \{+\}$ and
$\Sigma_\mathbbm{\emptyset} = \{0\}$, where $2$ denotes the
two-element discrete poset. The set $\mathcal E$ is formed by (in)equations
specifying that $+$ is monotone, associative, commutative, and
idempotent with unit $0$. Note that this does \emph{not} imply that
$x + y$ is the join of $x, y$ in $X$ w.r.t.~its given order
(cf.~\autoref{ex:freealgs}).
\item\label{ex:var:5} A related variety is that of classical
join-semilattices (with $0$). To specify those, we take the signature $\Sigma$
    from the previous item; but now we need only a few inequations
    in context specifying that $0$ is the least element and that $+$ is the
    join operation:
    \[
    \{x\} \vdash 0 \leq x, \qquad \{x, y\} \vdash x \leq x+y, \qquad \{x, y\} \vdash y \leq x+y, \qquad \{ x \leq z, y \leq z\} \vdash x+y \leq z.
\]
It then follows that $+$ is monotone, associative, commutative and
idempotent, whence these equations need not be contained in $\mathcal E$. \item\label{item:bounded} \emph{Bounded joins:} Take the signature
$\Sigma$ consisting of a unary operation~$\bot$ and an operation $j$
(\emph{bounded join}) of arity $\{0,1,2\}$ where $0\le 2$ and
$1\le 2$ (but $0\not\le 1$). We then define a variety~$\mathcal V$ by
inequations in context
\begin{gather*}
x,y\vdash \bot(x)\le y\\
x\le z, y\le z\vdash x\le j(x,y,z)\\
x\le z, y\le z\vdash y\le j(x,y,z)\\
    x\le z, y\le z, x\le w, y\le w\vdash j(x,y,z)\le w.
\end{gather*}
That is, $j(x,y,z)$ is the join of elements~$x,y$ having a joint
upper bound~$z$. It follows that the value of $j(x,y,z)$, when it is
defined, does not actually depend on~$z$, which instead just serves
as a witness for boundedness of $\{x,y\}$. The operation~$\bot$ and
its inequality specify that algebras are either empty or have a
least element, i.e.\ the empty set has a join provided that it is
bounded. Thus, $\mathcal V$ consists of the partial orders having all
bounded finite joins, which we will refer to as \emph{bounded-join
semilattices}, and morphisms in~$\mathcal V$ are monotone maps that
preserve all existing finite joins.
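    For instance, the three-element poset $\{\bot, a, b\}$ with
    $\bot\le a$, $\bot\le b$ and $a, b$ incomparable lies in $\mathcal V$:
    every bounded finite subset has a join, while $\{a,b\}$ has no upper
    bound, so no join is required for it; the poset is, however, not a
    join-semilattice.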
\item\label{ex:var:7} Let a collection of posets $\Sigma_\Gamma$
($\Gamma \in \Pos_\mathsf{f}$) be given. We obtain the corresponding signature
$\Sigma^d = (|\Sigma_\Gamma|)_{\Gamma \in \Pos_\mathsf{f}}$ by disregarding
the order of $\Sigma_\Gamma$. Now consider the following set $\mathcal E$ of
inequations in context:
\[
\Gamma \vdash \sigma(x_i) \leq \tau(x_i)
\]
where $|\Gamma| = \{x_1, \ldots, x_n\}$ and $\sigma, \tau \in
\Sigma_\Gamma$ fulfil $\sigma \leq \tau$. Then the variety
  $\Alg(\Sigma^d, \mathcal E)$ is precisely the category of algebras for the
non-discrete signature $\Sigma$ (see~\autoref{D:ndsig}). \end{enumerate} \end{example}
\begin{remark}\label{R:create}
We will now discuss limits and directed colimits in $\Alg \Sigma$.
\begin{enumerate}
\item It is easy to see that for every endofunctor $H$ on $\mathsf{Pos}$ the
category $\Alg H$ of algebras for~$H$ is complete. Indeed, the
forgetful functor $V\colon \Alg H\rightarrow\mathsf{Pos}$ creates
limits. This means that for every diagram
$D\colon\mathscr D\rightarrow \Alg H$ with $VD$ having a limit cone
$(\ell_d\colon L\rightarrow VDd)_{d\in\text{obj}(\mathscr D)}$, there
exists a unique algebra structure $\alpha\colon HL\rightarrow L$
making each $\ell_d$ a homomorphism in $\Alg H$. Moreover, the
cone $(\ell_d)$ is a limit of $D$.
\item Analogously, it is easy to see that for every finitary
endofunctor $H$ of $\mathsf{Pos}$ the category $\Alg H$ has filtered
colimits created by $V$.
\item We conclude from $\Alg \Sigma\cong \Alg H_\Sigma$ that limits
and filtered colimits of $\Sigma$-algebras exist and are created
by the forgetful functor into $\mathsf{Pos}$, and similarly for $\Alg_\mathsf{c} \Sigma$.
\item\label{R:create:4} Moreover, we note that $\Alg H_\Sigma$ is a
locally finitely presentable category; this was shown by
Bird~\cite[Prop.~2.14]{Bird84}, see also the remark given by the
first author and Rosick\'y~\cite[2.78]{AdamekR}.
\end{enumerate} \end{remark}
\begin{lemma}\label{L:compint}
Let $A$ and $B$ be $\Sigma$-algebras, let $h\colon A \to B$ be a
homomorphism, and let $f\colon\Gamma\to A$ be a monotone
interpretation. Then for every term $t \in \Term(\Gamma)$
we have that
\begin{enumerate}
  \item\label{L:compint:1} if $f^\#(t)$ is defined, then $(h\cdot f)^\#(t)$ is also defined and
    $(h\cdot f)^\#(t) = h(f^\#(t))$;
  \item if $(h\cdot f)^\#(t)$ is defined and $h$ is an embedding, then
    $f^\#(t)$ is defined, too.
\end{enumerate} \end{lemma}
\begin{proof}
\begin{enumerate}
\item We proceed by induction on the structure of $t$. If~$t$ is a
variable, then the claim is immediate from the definition of
$(-)^\#$. For the inductive step, let~$t \in \Term(\Gamma)$ be a
    term of the form $t=\sigma(t_1,\dots, t_n)$ such that $f^\#(t)$ is defined,
where $\sigma\in\Sigma_{\Delta}$ and $|\Delta| = n$. Then, by
definition of $(-)^\#$, it follows that $f^\#(t_i)$ is defined for
all $i\leq n$ and $f^\#(t_i)\leq f^\#(t_j)$ for all $i\leq j$ in
$\Delta$ (i.e.~the map $i\mapsto f^\#(t_i)$ is monotone).
Combining this with our assumption that $h\colon A\to B$ is a
homomorphism, we obtain that
\[
h\cdot f^{\#}(\sigma(t_1,\dots, t_n))= \sigma_B(h\cdot f^{\#}(t_1),\dots, h\cdot f^\#(t_n)).
\]
Moreover, since $f^\#(t_i)$ is defined for all $i\leq n,$ the inductive hypothesis implies that
$h\cdot f^{\#}(t_i) = (h\cdot f)^\#(t_i)$ for all $i\leq n$, hence also
\[
(h\cdot f)^\#(t_i)= h\cdot f^{\#}(t_i)\leq h\cdot f^\#(t_j)= (h\cdot f)^\#(t_j)
\]
for all $i\leq j$ in $\Delta.$ Thus
$\sigma_B((h\cdot f)^\#(t_1),\dots, (h\cdot f)^\#(t_n))$ is
defined and equal to $h\cdot f^{\#}(\sigma(t_1,\dots, t_n)),$ as
desired.
\item Suppose now that $h$ is an embedding. We use a similar inductive
proof.
In the inductive step
suppose that $(h \cdot f)^\#(t)$ is
defined. Then by the definition of $(-)^\#$, it follows that
$(h\cdot f)^\#(t_i)$ is defined for all $i \leq n$ and
$(h \cdot f)^\#(t_i) \leq (h \cdot f)^\#(t_j)$ holds for all
$i \leq j$ in~$\Delta$. By induction we know that all $f^\#(t_i)$
are defined and by item~\ref{L:compint:1} that
\[
h \cdot f^\#(t_i)
=
(h \cdot f)^\#(t_i)
\leq
(h \cdot f)^\#(t_j)
=
      h \cdot f^\#(t_j)
\]
    holds for all $i \leq j$ in $\Delta$. Since $h$ is an embedding,
    we therefore obtain $f^\#(t_i) \leq f^\#(t_j)$ for all $i \leq j$
    in $\Delta$, whence $f^\#(t)$ is defined.
\qedhere
\end{enumerate} \end{proof}
\begin{proposition}\label{P:colim}
Every variety is closed under filtered colimits in $\Alg \Sigma$. \end{proposition}
\noindent In other words, the full embedding $E\colon\mathcal V\hookrightarrow\Alg \Sigma$ creates filtered colimits.
\begin{proof}
Let $\mathcal V$ be a variety of $\Sigma$-algebras. Let
$D\colon\mathscr D\rightarrow\Alg \Sigma$ be a filtered diagram having colimit
$c_d\colon Dd\rightarrow A$ $(d\in\mathop{\mathsf{obj}} \mathscr D)$. It suffices to show
that every inequation in context $\Gamma \vdash s \leq t$ satisfied
by every algebra $Dd$ is also satisfied by $A$. Let
$f\colon\Gamma\rightarrow A$ be a monotone interpretation. Since
$\Gamma$ is finite, $f$ factorizes, for some $d\in\mathop{\mathsf{obj}} \mathscr D$, through
$c_d$ via a monotone map $\bar{f}\colon \Gamma \to Dd$: in symbols,
$c_d \cdot \bar f = f$. Since $Dd$ satisfies the given inequation in
context, we know that $\bar f^\#(s)$ and $\bar f^\#(t)$ are defined
and that $\bar f^\#(s) \leq \bar f^\#(t)$ in $Dd$. By
\autoref{L:compint} we conclude that
\[
f^\#(s) = (c_d \cdot \bar f)^\#(s) = c_d \cdot \bar f^\#(s)
\qquad
\text{and}
\qquad
f^\#(t) = (c_d \cdot \bar f)^\#(t) = c_d \cdot \bar f^\#(t)
\]
are defined. Using the monotonicity of $c_d$ we obtain
\[
f^\#(s) = c_d \cdot \bar f^\#(s) \leq c_d \cdot \bar f^\#(t) =
f^\#(t)
\]
as desired.
\takeout{
\begin{enumerate}
\item\label{P:colim:1} We first prove the following property of
filtered colimits. Let $D\colon \mathscr D \to \mathcal V$ be a filtered diagram
with a colimit cocone $c_d\colon Dd \to A$ in $\Alg \Sigma$, where
$d$ ranges over the objects of~$\mathscr D$. Given an object~$d$ in~$\mathscr D$
and a monotone interpretation $f\colon \Gamma \to Dd$, for every
term $s \in \Term(\Gamma)$ such that $(c_d \cdot f)^\#(s)$ is defined it
follows that $(Dh \cdot f)^\#(s)$ is also defined for some
morphism $h\colon d \to d'$ of $\mathscr D$.
We prove this fact by structural induction. If $s$ is a variable
in $\Gamma$, then put $h = \mathsf{id}_d$. For the inductive step, suppose
that $s = \sigma(t_i)$ for $\sigma \in \Sigma_\Delta$ and
$t_i \in \Term(\Gamma)$, for $1 \leq i \leq \card{\Delta}$. By
definition of $(-)^\#$, definedness of
$(c_d\cdot f)^\#(\sigma(t_i))$ implies that
$(c_d \cdot f)^\#(t_i)$ is defined for all~$i$, and $i \leq j$ in
$\Delta$ implies
$(c_d \cdot f)^\#(t_i) \leq (c_d \cdot f)^\#(t_j)$. By
\autoref{L:compint}, the latter is equivalent to
$c_d (f^\#(t_i)) \leq c_d (f^\#(t_j))$. Since
$\Delta \times \Delta$ is finite and $D$ is a filtered diagram, it
follows that for some morphism $h$ of $\mathscr D$ we have
\[
(Dh \cdot f)^\#(t_i)
=
Dh(f^\#(t_i))
\leq
Dh(f^\#(t_j)
=
(Dh \cdot f)^\#(t_j),
\]
whenever $i\le j$ in~$\Delta$, where the two equations follow by
another application of \autoref{L:compint}. We therefore conclude
that $(Dh \cdot f)^\#(s)$ is defined, as desired.
\item We now prove that~$\mathcal V$ is closed under filtered colimits in
$\Alg \Sigma$, as claimed. So let $D\colon\mathscr D\rightarrow\Alg \Sigma$ be
a filtered diagram having colimit $c_d\colon Dd\rightarrow A$
$(d\in\mathop{\mathsf{obj}} \mathscr D)$. It suffices to show that every inequation in
context $\Gamma\vdash s\leq t$ satisfied by every algebra $Dd$ is
also satisfied by $A$. Let $f\colon\Gamma\rightarrow A$ be a
monotone interpretation. Since $\Gamma$ is finite, $f$ factorizes,
for some $d\in\mathop{\mathsf{obj}} \mathscr D$, through $c_d$ via a monotone map
$\bar{f}$. Furthermore, since $f^\#(s)$ and $f^\#(t)$ are defined,
it follows from item~\ref{P:colim:1} that there exists a morphism
$h\colon d \to d'$ in $\mathscr D$ such that $(Dh \cdot \bar f)^\#(s)$ and
$(Dh \cdot \bar f)^\#(t)$ are defined:
\[
\begin{tikzcd}
&
Dd
\arrow[d, "c_d"]
\ar{r}{Dh}
&
Dd'
\ar{ld}{c_{d'}}
\\
\Gamma \arrow[ru, "\bar{f}"]
\arrow[r, "f"]
&
A
\end{tikzcd}
\]
The given inequation is satisfied by $Dd'$, hence using
\autoref{L:compint} we obtain
\[
Dh(\bar f^\#(s))
=
(Dh \cdot \bar f)^{\#}(s)
\leq
(Dh \cdot \bar f)^{\#}(t)
=
Dh(\bar f^{\#}(t)).
\]
Since $c_{d'}$ is monotone and $c_d = c_{d'} \cdot Dh$, we
conclude, again using \autoref{L:compint}, that
\begin{align*}
f^{\#}(s)
& =
(c_d \cdot \bar f)^\#(s)
=
c_d(\bar f^\#(s))
=
c_{d'} (Dh(\bar f^\#(s)))
\\
& \leq
c_{d'} (Dh(\bar f^\#(t)))
=
c_d(\bar f^\#(t))
=
(c_d \cdot \bar f)^\#(t)
=
f^\#(t),
\end{align*}
as desired. \qedhere
\end{enumerate}} \end{proof}
\begin{corollary}
The forgetful functor of a variety into $\mathsf{Pos}$ creates filtered
colimits. \end{corollary}
\noindent Indeed, the forgetful functor of a variety $\mathcal V$ is a composite of the inclusion $\mathcal V \hookrightarrow \Alg \Sigma$ and the forgetful functor of $\Alg \Sigma$, which both create filtered colimits.
\begin{proposition}\label{P:refl}
Every variety of $\Sigma$-algebras is a reflective subcategory of
$\Alg \Sigma$ closed under subalgebras. \end{proposition}
\begin{proof}
\takeout{
We are going to verify below that the factorization system (epi,
embedding) of \autoref{R:fact} lifts from $\mathsf{Pos}$ to
$\Alg \Sigma$. Then $\Alg \Sigma$ is complete (by \autoref{R:create})
and cowellpowered. By~\cite[Thm.~16.8]{AHS90} every subcategory
closed under $\mathcal{M}$-subalgebras (where $\mathcal{M}$ are the
homomorphisms carried by embeddings) is reflective. Let
$V\colon\Alg \Sigma\rightarrow\mathsf{Pos}$ denote the forgetful functor.
\begin{enumerate}
\item The factorization system (epi, embedding) lifts to
$\Alg \Sigma$. Indeed, given a homomorphism
$h\colon A\rightarrow B$, factorize $Vh$ as an epimorphism
$e\colon V\!A\twoheadrightarrow C$ followed by an embedding
$m\colon C\rightarrowtail VB$ in $\mathsf{Pos}$. Then, for every $\Gamma$ and
every $\sigma\in\Sigma_{\Gamma},$ there exists a unique operation
$\sigma_C\colon\Pos_{0}(\Gamma, C)\rightarrow C$ for
$\sigma\in\Sigma_{\Gamma}$ making $e$ and $m$ homomorphisms. Then
the diagram below commutes:
\[
\begin{tikzcd}
{\Pos_{0}(\Gamma, A)} \arrow[->>,d, "e\cdot (-)"'] \arrow[rr, "\sigma_A"] & & A \arrow[d, "e"] \\
{\Pos_{0}(\Gamma, C)} \arrow[d, "m\cdot (-)"'] \arrow[rr, "\sigma_C", dashed] & & C \arrow[>->,d, "m"] \\
{\Pos_{0}(\Gamma, B)} \arrow[rr, "\sigma_B"] & & B
\end{tikzcd}
\]
Indeed, this follows from $\Pos_{0}(\Gamma, e)= e \cdot (-)$ being an
epimorphism.\smerror[inline]{This is false; $e\cdot (-)$ is not
surjective.} It is easy to verify that the diagonal lift of this
the diagram above provides the desired factorization.}
We are going to prove below that every variety $\mathcal V = \Alg(\Sigma,\mathcal E)$ is closed in
$\Alg \Sigma$ under products and subalgebras, whence it is closed
under all limits. We also know from \autoref{P:colim} that $\mathcal V$ is
closed under filtered colimits in $\Alg \Sigma$. Being a full
subcategory of the locally finitely presentable category $\Alg \Sigma$
(\autoref{R:create}\ref{R:create:4}), $\mathcal V$ is reflective by the
reflection theorem for locally presentable
categories~\cite[Cor.~2.48]{AdamekR}.
\begin{enumerate}
\item $\Alg(\Sigma, \mathcal E)$ is closed under products in
$\Alg\Sigma$. Indeed, given $A=\prod_{i\in I}A_i$ with projections
$\pi_i\colon A \to A_i$ and a monotone interpretation
$f\colon \Gamma \to A$, we prove for every term
$s \in \Term(\Gamma)$ that $f^\#(s)$ is defined if and only if so is
$(\pi_i\cdot f)^\#(s)$ for all $i \in I$. This is done by
structural induction: for $s \in |\Gamma|$ there is nothing to
prove. Suppose that $s = \sigma(t_j)$ for some
$\sigma \in \Sigma_\Delta$ and $t_j \in \Term(\Gamma)$,
$j \in\Delta$. Then $f^\#(s)$ is defined iff
$j \leq k$ in $\Delta$ implies $f^\#(t_j) \leq f^\#(t_k)$ in
$A$. Equivalently (since the~$\pi_i$ are monotone and jointly
order-reflecting, i.e.\ for every $x, y \in A$ we have $x \leq y$
iff $\pi_i(x) \leq \pi_i(y)$ for all $i \in I$), $j \leq k$ in
$\Delta$ implies $\pi_i\cdot f^\#(t_j) \leq \pi_i \cdot f^\#(t_k)$
in $A_i$ for all $i \in I$. Since every $\pi_i$ is a homomorphism,
this is equivalent to
$(\pi_i \cdot f)^\#(t_j) \leq (\pi_i\cdot f)^\#(t_k)$ by
\autoref{L:compint}.
We now prove that $A$ satisfies every inequation
$\Gamma \vdash s \leq t$ in $\mathcal E$, as claimed. Let
$f\colon \Gamma \to A$ be a monotone interpretation. We have that
    $(\pi_i \cdot f)^\#(s)$ and $(\pi_i \cdot f)^\#(t)$ are defined
and $\pi_i \cdot f^\#(s) \leq \pi_i\cdot f^\#(t)$ for all $i \in I$,
using \autoref{L:compint} and since all $A_i$ satisfy the given
inequation in context. Using again that the~$\pi_i$ are jointly
order-reflecting, we obtain $f^{\#}(s)\leq f^{\#}(t)$, as required.
\takeout{
for every inequation $ \Gamma\vdash s\leq t$ in $\mathcal E$ we
prove that $A$ satisfies this inequation. For a valuation
$f\colon \Gamma\rightarrow A$ we know that the interpretations
$\pi_i\cdot f\colon\Gamma\rightarrow A_i$ are such that
$ (\pi_i\cdot f)^{\#}(s)\leq(\pi_i\cdot f)^{\#}(t)$ for every
$i\in I$. Since $\pi_i$ is a homomorphism, we have
$(\pi_i\cdot f)^{\#} =\pi_i\cdot f^{\#}$. From
$\pi_i\cdot f^{\#}(s)\leq\pi_i\cdot f^{\#}(t)$, for every
$i\in I$, we conclude $f^{\#}(s)\leq f^{\#}(t)$ since $\pi_i$ is
monotone. Thus $A$ lies in $\Alg(\Sigma, \mathcal E)$.}
\item\label{P:refl:3} $\Alg(\Sigma, \mathcal E)$ is closed under subalgebras
in $\Alg \Sigma$. Indeed, let $m\colon B\hookrightarrow A$ be a
$\Sigma$-homomorphism carried by an embedding. For every
inequation $\Gamma\vdash s\leq t$ in $\mathcal E$ we prove that $B$
satisfies it. For a monotone interpretation
$f\colon \Gamma\rightarrow B$, we see that
$(m \cdot f)^\#(s)$ and $(m \cdot f)^\#(t)$ are defined and $(m
\cdot f)^\#(s) \leq (m \cdot f)^\#(t)$ since $A$ satisfies the
given inequation in context. By \autoref{L:compint} we obtain that
$f^\#(s)$ and $f^\#(t)$ are defined and
\[
m \cdot f^\#(s)
=
(m \cdot f)^\#(s)
\leq
(m \cdot f)^\#(t)
=
      m \cdot f^\#(t).
\]
Since $m$ is an embedding, it follows that
$f^{\#}(s)\leq f^{\#}(t)$.\qedhere
\end{enumerate} \end{proof}
\begin{corollary}
The category $\Alg_\mathsf{c} \Sigma$ of all coherent $\Sigma$-algebras is a
reflective subcategory of $\Alg \Sigma$. \end{corollary}
\noindent Indeed, this follows using \autoref{ex:var}\ref{ex:var:3}.
\begin{theorem}\label{T:varmon} For every variety, the forgetful functor to $\mathsf{Pos}$ is monadic. \end{theorem} \begin{proof}
Let $\mathcal V$ be a variety of $\Sigma$-algebras. We use Beck's Monadicity
Theorem~\cite[Thm.~VI.7.1]{MacLane98} and prove that the forgetful
functor $U\colon\mathcal V\to\mathsf{Pos}$ has a left adjoint and creates coequalizers of
$U$-split pairs.
\begin{enumerate}
\item The functor $U$ has a left adjoint because it is the composite
of the embedding $E\colon\mathcal V\rightarrow\Alg \Sigma$ and
the forgetful functor $V\colon\Alg \Sigma\rightarrow\mathsf{Pos}$: the
functor $E$ has a left adjoint by \autoref{P:refl} and $V$ has one
by \autoref{P:free}.
\item Let $f,g\colon A\rightarrow B$ be a $U$-split pair of
homomorphisms in $\mathcal V$. That is, there are monotone
maps $c,i,j$ as in the following diagram
\[
\begin{tikzcd}
UA \arrow[r, "Uf", shift left] \arrow[r, "Ug"', shift right] & UB \arrow[r, "c"] \arrow[l, "j", bend left=60] & C \arrow[l, "i", bend left=60]
\end{tikzcd}
\]
satisfying
$c\cdot Uf=c\cdot Ug$, $c\cdot i=\mathsf{id}_C$, $Uf\cdot j=\mathsf{id}_{UB}$, and $Ug\cdot j=i\cdot c$.
For every $\sigma\in\Sigma_{\Gamma}$, there exists a unique
    operation $\sigma_C\colon\Pos_{0}(\Gamma, C)\rightarrow C$ making $c$
a homomorphism:
\[
\begin{tikzcd}
{\Pos_{0}(\Gamma, B)}
\arrow[r, "\sigma_B"]
\arrow[d, "c\cdot(-)"']
&
B \arrow[d, "c"]
\\
{\Pos_{0}(\Gamma, C)} \arrow[r, "\sigma_C"']
&
C
\end{tikzcd}
\]
Indeed, let us define $\sigma_C$ by
\[
\sigma_C(h)=c\cdot \sigma_B(i\cdot h)\qquad\text{for all $h\colon\Gamma\rightarrow C$}.
\]
Then $c$ is a homomorphism since
$\sigma_C(c\cdot k)=c\cdot \sigma_B(k)$ for every
$k\colon\Gamma\rightarrow B$:
\begin{align*}
c\cdot\sigma_B(k)
&=c\cdot \sigma_B(f\cdot j\cdot k) &
\text{since $f\cdot j=\mathsf{id}$} \\
&=c\cdot f\cdot \sigma_A(j\cdot k) & \text{$f$ a homomorphism} \\
&=c\cdot g\cdot\sigma_A(j\cdot k) & \text{since $c\cdot f=c\cdot g$} \\
&=c\cdot \sigma_B(g\cdot j\cdot k) & \text{$g$ a homomorphism} \\
&=c\cdot \sigma_B(i\cdot c\cdot k) & \text{since $g\cdot j=i\cdot c$} \\
&=\sigma_C(c\cdot k).
\end{align*}
Conversely, if $C$ has an algebra structure making $c$ a
homomorphism, then the above formula holds since $c\cdot i= \mathsf{id}$:
\[
\sigma_C(h)=\sigma_C(c\cdot i\cdot h)=c\cdot \sigma_B(i \cdot h).
\]
Furthermore, $C$ lies in $\mathcal V$. To verify this, we just
prove that whenever an inequation $\Gamma\vdash s\leq t$ is satisfied by
$B$, then the same holds for the algebra $C$. Given a monotone
interpretation $h\colon\Gamma\rightarrow C$ such that $h^\#(s)$ and
$h^\#(t)$ are defined, we prove $h^{\#}(s)\leq h^{\#}(t)$.
For the monotone interpretation $i\cdot h\colon\Gamma\rightarrow B$ we have that
$(i\cdot h)^\#(s)$ and $(i \cdot h)^\#(t)$ are defined and that
$(i\cdot h)^{\#}(s)\leq(i\cdot h)^{\#}(t)$ since $B$ lies in $\mathcal V$. Since $c$ is a homomorphism, we
conclude using \autoref{L:compint} and that $c \cdot i = \mathsf{id}_C$ that
\[
h^\#(s) = (c \cdot i \cdot h)^\# (s) = c \cdot (i\cdot h)^\# (s)
\]
is defined and similarly for $h^\#(t)$. Then we have
\[
      h^{\#}(s)= c\cdot (i\cdot h)^{\#}(s)\leq c\cdot
      (i\cdot h)^{\#}(t) = h^{\#}(t),
    \]
    as desired, using the monotonicity of $c$.
Finally, we prove that $c$ is a coequalizer of $f$ and $g$ in $\mathcal V$. Let
$d\colon B \to D$ be a homomorphism such that $d\cdot f = d \cdot
g$. Then $d' = d \cdot i$ fulfils $d = d'\cdot c$:
\begin{align*}
d' \cdot c &= d \cdot i \cdot c\\
&= d\cdot g \cdot j & \text{since $i\cdot c = g \cdot j$} \\
&= d \cdot f \cdot j & \text{since $d \cdot f = d \cdot g$} \\
&= d & \text{since $f \cdot j = \mathsf{id}_B$.}
\end{align*}
Moreover, $d'\colon C \to D$ is a homomorphism since $c$ is a
surjective homomorphism such that $d'\cdot c = d$ is also a
homomorphism. This clearly is the unique homomorphic factorization
of $d$ through $c$.
\qedhere
\end{enumerate} \end{proof}
\begin{definition}
Given a variety $\mathcal V$, the left adjoint of $U\colon\mathcal V\to\mathsf{Pos}$
assigns to every poset $X$ the free algebra of $\mathcal V$ on $X$. The
ensuing monad is called the \emph{free-algebra monad} of the variety
and is denoted by $\mathbb T_\mathcal V$. \end{definition}
\begin{corollary}\label{C:varmon}
Every variety $\mathcal V$ is
isomorphic, as a concrete category over $\mathsf{Pos}$, to the
Eilenberg-Moore category $\mathsf{Pos}^{\mathbb T_\mathcal V}$. \end{corollary}
\begin{example}\label{ex:freealgs}
\begin{enumerate}
\item Recall the variety of internal semilattices considered in
\autoref{ex:var}\ref{ex:var:4}. It is well known (and easy to
show) that the free internal semilattice on a poset $X$ is formed
by the poset ${C_\omega} X$ of its finitely generated convex
subsets. Here, a subset $S \subseteq X$ is \emph{convex} if
$x, y \in S$ implies that every $z$ such that $x \leq z \leq y$
lies in $S$, too, and \emph{finitely generated} means that $S$ is
the convex hull of a finite subset of $X$. The order on
$C_\omega X$ is the Egli-Milner order, which means that for
$S,T \in {C_\omega} X$ we have
\[
S \leq T \quad\text{iff}\quad
      \forall s \in S.\,\exists t \in T.\, s \leq t \wedge
\forall t \in T.\,\exists s \in S.\, s \leq t.
\]
The constant $0$ is the empty set, and the operation $+$ is the
    join w.r.t.~inclusion; explicitly, $S + T$ is the convex hull of
$S \cup T$ for all $S, T \in {C_\omega} X$. One readily shows that $+$
is monotone w.r.t.~the Egli-Milner order and that ${C_\omega} X$ with
the universal monotone map $x \mapsto \{x\}$ is a free internal
semilattice on $X$. Thus we see that ${C_\omega}$ is a monad on $\mathsf{Pos}$
and $\mathsf{Pos}^{C_\omega}$ is (isomorphic to) the category of internal
semilattices in $\mathsf{Pos}$.
\item
Denote by $D_{\omega}$ the monad of free join semilattices. It assigns to
every poset $X$ the set of finitely generated, downwards closed subsets of
$X$ ordered by inclusion. Here a downwards closed subset $S\subseteq X$ is
\emph{finitely generated} if there are $x_1, \ldots, x_n \in S$, $n \in \mathds{N}$, such that
$S = \bigcup_{i = 1}^n x_i\mathord{\downarrow}$. The category $\mathsf{Pos}^{D_{\omega}}$
is equivalent to that of join-semilattices, see \autoref{ex:var}\ref{ex:var:5}. \item Similarly, the monad $D^b_\omega$ generated by the variety of
bounded-join semilattices (\autoref{ex:var}\ref{item:bounded})
assigns to a poset~$X$ the set of finitely generated downwards
closed \emph{bounded} subsets of~$X$, ordered by inclusion.
\end{enumerate} \end{example}
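\noindent
The free internal semilattice ${C_\omega} X$ described in the example above is easy to experiment with on finite posets. The following Python sketch is purely illustrative (the function names and the representation of a poset as a carrier with an order relation \texttt{leq} are ours, not part of the formal development); it computes convex hulls and checks the Egli-Milner relation on the three-element chain:
\begin{verbatim}
def convex_hull(X, leq, S):
    # smallest convex subset of the finite poset (X, leq) containing S
    return {z for z in X
              if any(leq(x, z) and leq(z, y) for x in S for y in S)}

def egli_milner(leq, S, T):
    # S <= T in the Egli-Milner order
    return (all(any(leq(s, t) for t in T) for s in S) and
            all(any(leq(s, t) for s in S) for t in T))

X, leq = {0, 1, 2}, (lambda a, b: a <= b)   # the chain 0 <= 1 <= 2
S, T = {0}, {2}
join = convex_hull(X, leq, S | T)           # S + T = {0, 1, 2}
print(join, egli_milner(leq, S, join), egli_milner(leq, join, T))
\end{verbatim}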
\begin{corollary}
The forgetful functors $U\colon \Alg \Sigma \to \mathsf{Pos}$ and $U_c\colon
\Alg_\mathsf{c} \Sigma\to \mathsf{Pos}$ are monadic. \end{corollary}
\noindent Note that the corresponding monads are the free-(coherent-)$\Sigma$-algebra monads given by $T_\Sigma X$ and $T^\mathsf{c}_\Sigma X$, respectively (cf.~\autoref{P:free} and~\ref{P:freec}).
\section{Finitary Monads}
Let $\mathbb T$ be a finitary monad on $\mathsf{Pos}$. We present a variety $\mathcal V_{\mathbb T}$ such that the mapping $\mathbb T\mapsto\mathcal V_{\mathbb T}$ is inverse to the assignment $\mathcal V \mapsto \mathbb T_\mathcal V$ of a variety to its free-algebra monad. Moreover, we prove that there is a completely analogous bijection between enriched finitary monads and varieties of coherent algebras.
\begin{remark}\label{R:KT} Let us recall the equivalence between the category of monads on $\mathsf{Pos}$ and Kleisli triples established by Manes~\cite[Thm 3.18]{Manes76}. \begin{enumerate}
\item\label{R:KT:1} A \emph{Kleisli triple} consists of (a)~a self map
$X\mapsto TX$ on the class of all posets, (b)~an assignment of a
monotone map $\eta_X\colon X\to TX$ to every poset, and (c)~an
assignment of a monotone map $f^*\colon TX\to TY$ to every monotone
map $f\colon X\to TY$, which satisfies
\begin{align}
      \eta^*_X &= \mathsf{id}_{TX} \label{KT1} \\
f^*\cdot \eta_X &= f \label{KT2} \\
g^*\cdot f^* &= (g^* \cdot f)^* \label{KT3}
\end{align}
for all posets $X$ and all monotone functions $f\colon X\to TY$ and
$g\colon Y\to TZ$.
\item\label{R:KT:2} A morphism into another Kleisli triple
$(T', \eta', (-)^+)$ is a collection $\varphi_X\colon TX\to T'X$ of
monotone functions such that
the diagrams below commute for all posets $X$
and all monotone functions $f\colon X\to TY$: \[
\begin{tikzcd}
&
X \arrow[rd, "\eta_X'"]
\arrow[ld, "\eta_X"'] & & &
TX \arrow[r, "\varphi_X"] \arrow[d, "f^*"']
&
T'X \arrow[d, "(\varphi_Y\cdot f)^+"]
\\
TX \arrow[rr, "\varphi_X"]
&
& T'X & & TY \arrow[r, "\varphi_Y"] & T'Y \end{tikzcd} \]
\item\label{R:KT:3}
Every monad $\mathbb T$ defines a Kleisli triple $(T, \eta, (-)^*)$ by
\[
f^*= TX\xra{Tf} TTY\xra{\mu_Y} TY.
\]
Every monad morphism $\varphi\colon\mathbb T\to\mathbb T'$ defines a morphism
$\varphi_X\colon TX\to T'X$ of Kleisli triples. The resulting
functor from the category of monads to the category of Kleisli
triples is an equivalence functor. \end{enumerate} \end{remark}
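\noindent
As a concrete instance of such a triple (purely illustrative; the helper names are ours), consider the monad $D_\omega$ of \autoref{ex:freealgs} restricted to a finite poset, where every downset is finitely generated: the unit sends an element to its down-closure, and the extension of $f\colon X\to TY$ takes unions of downsets. The following Python sketch checks law~\eqref{KT2} on a two-element chain:
\begin{verbatim}
def down(X, leq, S):
    # down-closure of the subset S in the finite poset (X, leq)
    return frozenset(y for y in X if any(leq(y, s) for s in S))

def eta(X, leq):
    return lambda x: down(X, leq, {x})          # unit: x |-> down-set of x

def ext(f):
    return lambda S: frozenset().union(*[f(x) for x in S])   # f |-> f^*

X, leq = {0, 1}, (lambda a, b: a <= b)          # the chain 0 <= 1
f = lambda x: down(X, leq, {x})                 # a monotone map X -> T X
assert all(ext(f)(eta(X, leq)(x)) == f(x) for x in X)   # law (KT2)
\end{verbatim}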
\noindent We shall now define the variety $\mathcal V_{\mathbb T}$ mentioned above.\smnote{Numbered remark is necessary since it is refered to (as
one can see from the fact that it carries a label! LS: That label is
not used anywhere, so I removed the remark again}
\begin{definition}\label{D:var}
The \emph{variety $\mathcal V_{\mathbb T}$ associated} to a finitary monad $\mathbb T$ on
$\mathsf{Pos}$ has the signature
\[
\Sigma_{\Gamma}= |T\Gamma|\qquad\text{for every $\Gamma\in\Pos_\mathsf{f}$}.
\]
That is, operations of arity $\Gamma$ are elements of the poset
$T\Gamma$. For each $\Gamma$, we impose inequations of the
following two types:
\begin{enumerate}
\item\label{D:var:1} $\Gamma\vdash \sigma\leq\tau$ for all
$\sigma\leq\tau$ in $T\Gamma$ (with operations used as terms as
per \autoref{def:terms}), and
\item\label{D:var:2} $\Gamma \vdash k^*(\sigma) = \sigma(k)$ for all
$\Delta\in\Pos_\mathsf{f}$, monotone $k\colon\Delta\rightarrow T\Gamma$ and
$\sigma \in T\Delta$.
\takeout{
monotone we form the tuple $\hat k\colon |\Delta| \to
\Term(\Gamma)$ of terms by putting $\hat k(x) = k(x)$, where the
operation symbol $k(x)$ in $|T\Gamma|$ is considered as a term
(cf.~\autoref{R:opterm}). For every $\sigma$ in $T\Delta$ we pose
the equation
\[
\Gamma \vdash k^*(\sigma) = \sigma(\hat k)
\]
where $k^*:=\mu_{T\Gamma}\cdot Tk$.}
\end{enumerate} \end{definition}
\begin{example}\label{E:TX}
For every poset $X$, the poset $TX$ carries the following structure
of an algebra of $\V_\T$. Given $\sigma\in T\Gamma$, we define the
operations $\sigma_{TX}\colon\Pos_{0}(\Gamma, TX)\rightarrow TX$ by
\[
\sigma_{TX}(f)=f^*(\sigma)\qquad\text{for $f\colon\Gamma\rightarrow TX$}.
\]
It then follows that the evaluation map
$f^{\#}\colon\Term(\Gamma)\rightarrow |TX|$ coincides with $f^*$ on
operation symbols (converted to terms as per \autoref{def:terms}):
\begin{equation}\label{eq:fsigma}
f^{\#}(\sigma)=f^*(\sigma)
\end{equation}
for all $\sigma\in T\Gamma$.
\takeout{
Indeed, $f^{\#}$ is a homomorphism\smnote[inline]{We do not know
this anymore!}:
\begin{equation}\label{eq:sharphom}
\begin{tikzcd}
{\Pos_{0}(\Gamma, T_\Sigma(\Gamma))}
\arrow[d, "f^{\#}\cdot (-)"']
\arrow[r, "\sigma_{T_\Sigma(\Gamma)}"]
&
T_\Sigma(\Gamma)
\arrow[d, "f^{\#}"]
\\
{\Pos_{0}(\Gamma, TX)} \arrow[r, "\sigma_{TX}"']
&
TX
\end{tikzcd}
\end{equation}
Recall that $\sigma$ is the term $\sigma(\eta)$ for
$\eta\colon\Gamma\rightarrowT_\Sigma(\Gamma)$, thus applied to $\eta$
the above square yields
\[
f^{\#}(\sigma)=\sigma_{TX}(f^{\#}\cdot\eta)=\sigma_{TX}(f)=f^*(\sigma).
\]}
Indeed, for $|\Gamma| =\{x_1, \ldots, x_n\}$ we have
\begin{align*}
f^\#(\sigma)
&= f^\#(\sigma(x_1, \ldots, x_n))
& \text{\autoref{def:terms}}
\\
&= \sigma_{TX} (f^\#(x_1), \ldots, f^\#(x_n))
& \text{def.~of $f^\#$}
\\
& = \sigma_{TX} (f(x_1), \ldots, f(x_n))
& \text{def.~of $f^\#$}
\\
&= \sigma_{TX}(f) \\
&= f^*(\sigma)
& \text{def.~of $\sigma_{TX}$.}
\end{align*}
It now follows that the $\Sigma$-algebra $TX$ lies in $\V_\T$. It
satisfies the inequations of type~\ref{D:var:1} because $f^*$ is
monotone: given $\sigma\leq\tau$ in $T\Gamma$, we have
$f^{\#}(\sigma)=f^*(\sigma)\leq f^*(\tau)=f^{\#}(\tau)$. Moreover,
it satisfies the inequations of type~\ref{D:var:2} since for every monotone
map $k\colon\Delta\rightarrow T\Gamma$ we know that
$f^\#(k^*(\sigma))$ is defined by \autoref{E:term}\ref{E:term:2}, and
we have
\begin{align*}
f^{\#}(k^*(\sigma))
&= f^*\cdot k^*(\sigma) & \text{by~\eqref{eq:fsigma}} \\
&=(f^*\cdot k)^*(\sigma) & \text{by~\eqref{KT3}} \\
&= \sigma_{TX}(f^*\cdot k) & \text{def.~of $\sigma_{TX}$} \\
&= \sigma_{TX}(f^{\#}\cdot k) & \text{by~\eqref{eq:fsigma}} \\
&=f^{\#}(\sigma(k)) & \text{def.~of $f^{\#}$}
\end{align*}
So, indeed, $TX$ lies in $\V_\T$.
\end{example}
\begin{theorem}\label{T:mon-var}
Every finitary monad $\mathbb T$ on $\mathsf{Pos}$ is the free-algebra monad of its
associated variety $\V_\T$. \end{theorem}
\begin{proof}
\begin{enumerate}
\item\label{T:mon-var:1} We first prove that the algebra $TX$ of
\autoref{E:TX} is a free algebra of $\V_\T$ w.r.t.~the monad unit
$\eta_{X}\colon X\rightarrow TX$.
\begin{enumerate}[label=(1\alph*)]
\item First, suppose that $X=\Gamma$ is a context. Given an
algebra $A$ of $\V_\T$ and a monotone map
$f\colon\Gamma\rightarrow A$, we are to prove that there exists
a unique homomorphism $\bar{f}\colon T\Gamma\rightarrow A$ such that
$f=\bar{f}\cdot\eta$.
Indeed, given $\sigma\in T\Gamma$, define $\bar{f}$ by
\[
\bar{f}(\sigma)=\sigma_A(f).
\]
This is a monotone function: if $\sigma\leq\tau$ in $T\Gamma$,
then use the fact that $A$ satisfies the
inequations~$\Gamma \vdash \sigma \leq \tau$ to obtain
\[
\sigma_A(f)=f^{\#}(\sigma)\leq f^{\#}(\tau)=\tau_A(f).
\]
We now verify that $\bar{f}$ is a homomorphism: given
$\tau\in\Sigma_{\Delta}$, we will prove that the following
square commutes:
\[
\begin{tikzcd}
{\Pos_{0}(\Delta, T\Gamma)}
\arrow[d, "\bar{f}\cdot (-)"']
\arrow[r, "\tau_{T\Gamma}"]
&
T\Gamma \arrow[d, "\bar{f}"]
\\
{\Pos_{0}(\Delta, A)} \arrow[r, "\tau_A"']
&
A
\end{tikzcd}
\]
Indeed, for every monotone map $k\colon\Delta\rightarrow
          T\Gamma$ we have that $f^\#(k^*(\tau))$ is defined
          by \autoref{E:term}\ref{E:term:2}, and we therefore
obtain (letting $|\Delta| = \{x_1, \ldots, x_n\}$):
\begin{align*}
\bar f(\tau_{T\Gamma}(k))
&= \bar{f}(k^{*}(\tau))
& \text{def.~of $\tau_{T\Gamma}$}
\\
&= (k^*(\tau))_A(f)
&\text{def.~of $\bar f$}
\\
&= f^\#(k^*(\tau))
& \text{by \autoref{D:sharp}}
\\
              &= f^\#(\tau(k))
              &
              \text{$A$ satisfies $\Gamma \vdash k^*(\tau) = \tau(k)$}
\\
&= \tau_A(f^\#(k))
& \text{def.~of $f^\#$}
\\
&= \tau_A(\bar f \cdot k).
\end{align*}
            For the last
            step we use again the definition of $f^\#$ to obtain that for
            every $x \in |\Delta|$ the operation symbol $\sigma = k(x)$,
            considered as the term $\sigma(y_1, \ldots, y_m)$ where
            $|\Gamma| = \{y_1, \ldots, y_m\}$ (\autoref{def:terms}), satisfies
            \begin{align*}
              f^\# (\sigma(y_1, \ldots,y_m)) &= \sigma_A (f^\#(y_1),
              \ldots, f^\#(y_m)) = \sigma_A (f(y_1), \ldots, f(y_m)) \\
              & = \sigma_A (f) = \bar f(\sigma).
            \end{align*}
            Since $\sigma = k(x)$, this shows $f^\#(k(x)) = \bar f(k(x))$
            for every $x \in |\Delta|$, i.e.~$f^\# \cdot k = \bar f \cdot k$,
            which is the equality used in the last step above.
\takeout{
The left-hand side is $\tau_A(g)$ where
$g\colon\Delta\rightarrow A$ is given by $g(x)=(k(x))_A(f)$. For
the evaluation map $f^{\#}\colon\Term(\Gamma)\to |A|$,
this is precisely $f^{\#}(\tau_{T\Gamma}(k))$ (see
\autoref{E:term}\ref{E:term:2}). For the right-hand side we
have, following \autoref{D:sharp}:
\[
\bar{f}(k^*(\tau))=(k^*(\tau))_A(f)=f^{\#}(k^*(\tau)).
\]
This is the same result because $A$ satisfies
$\Gamma\vdash k^*(\tau)=\tau(k)$ and $f^\#$ is defined in
$k^*(\tau)$ by \autoref{E:term}\ref{E:term:2}.}
As for uniqueness, suppose that
$\bar{f}\colon T\Gamma\rightarrow A$ is a homomorphism such that
$f=\bar{f}\cdot\eta_{\Gamma}$. The above square commutes for
$\Delta=\Gamma$ which applied to
$\eta_{\Gamma}\in\mathsf{Pos}(\Gamma, T\Gamma)$ yields for every $\sigma
\in |T\Gamma|$:
\begin{align*}
\bar{f}(\sigma)
&= \bar f(\eta_\Gamma^*(\sigma))
& \text{by~\eqref{KT1}}
\\
&= \bar f (\eta_\Gamma^\#(\sigma))
&\text{by~\eqref{eq:fsigma}}
\\
&= \bar{f}(\sigma_{T\Gamma}(\eta_{\Gamma}))
& \text{def.~of $\eta_\Gamma^\#$}
\\
& =\sigma_A(\bar{f}\cdot\eta_{\Gamma})
& \text{$\bar f$ homomorphism}
\\
&= \sigma_A(f)
& \text{since $\bar f\cdot\eta_\Gamma = f$},
\end{align*}
as required.
\item Now, let $X$ be an arbitrary poset. Express it as a filtered
colimit $X=\mathop{\mathrm{colim}}_{i\in I}\Gamma_i$ of contexts. The free
algebra on $X$ is then a filtered colimit of the corresponding
diagram of the $\Sigma$-algebras $T\Gamma_i$ ($i\in I$). Indeed,
that $TX=\mathop{\mathrm{colim}} T\Gamma_i$ in $\mathsf{Pos}$ follows from $T$ preserving
filtered colimits. That this colimit lifts to $\mathcal V$ follows
from the forgetful functor of $\mathcal V$ creating filtered colimits,
see \autoref{P:colim}. \end{enumerate}
\item To conclude the proof, we apply \autoref{R:KT}. Our given monad
and the monad $\mathbb T_{\mathcal V}$ of the associated variety share the same
object assignment $X\mapsto TX= T_{\mathcal V}X$ for an arbitrary poset $X$,
and the same universal map $\eta_X$, as shown in
part~\ref{T:mon-var:1}. It remains to prove that for every morphism
    $h\colon X\rightarrow TY$ in $\mathsf{Pos}$ the homomorphism
$h^*=\mu_{Y}\cdot Th$ extending $h$ in $\mathsf{Pos}^{\mathbb T}$ is a
$\Sigma$-homomorphism $h^*\colon TX\rightarrow TY$ of the
corresponding $\Sigma$-algebras of \autoref{E:TX}. Then $\mathbb T$ and
$\T_\V$ also share the operator $h\mapsto h^*$. Thus given
$\sigma\in\Sigma_{\Gamma}$ we are to prove that the following square
commutes: \[ \begin{tikzcd} {\Pos_{0}(\Gamma, TX)} \arrow[d, "h^*\cdot (-)"'] \arrow[r, "\sigma_{TX}"] & TX \arrow[d, "h^*"] \\ {\Pos_{0}(\Gamma, TY)} \arrow[r, "\sigma_{TY}"'] & TY \end{tikzcd} \] Indeed, given $f\colon\Gamma\rightarrow TX$ we have \begin{align*}
h^*\cdot \sigma_{TX}(f)
&=h^*\cdot f^*(\sigma) & \text{definition of $\sigma_A$} \\
&=(h^*\cdot f)^*(\sigma) & \text{equation~\eqref{KT3}} \\
&=\sigma_{TY}(h^*\cdot f) & \text{definition of $\sigma_{TY}$} \end{align*} This completes the proof. \qedhere \end{enumerate} \end{proof}
\begin{corollary}\label{C:nonenriched}
Finitary monads on $\mathsf{Pos}$ correspond bijectively, up to monad isomorphism,
to finitary varieties of ordered algebras. \end{corollary}
\noindent Indeed, the assignment of the associated variety $\V_\T$ to every finitary monad $\mathbb T$ is essentially inverse to the assignment of the free-algebra monad $\mathbb T_{\mathcal V}$ to every variety $\mathcal V$. To see this, recall that every variety $\mathcal V$ is isomorphic (as a concrete category over $\mathsf{Pos}$) to the category $\mathsf{Pos}^{\mathbb T_{\mathcal V}}$ (\autoref{C:varmon}). Conversely, every finitary monad $\mathbb T$ is isomorphic to $\mathbb T_{\mathcal V}$ for the associated variety (\autoref{T:mon-var}).
\begin{proposition}\label{P:cohvar}
If $\mathbb T$ is an enriched finitary monad on $\mathsf{Pos}$, then the algebras
of its associated variety $\mathcal V_\mathbb T$ are coherent. Conversely, for
every variety $\mathcal V$ of coherent algebras, the free-algebra monad
$\mathbb T_{\mathcal V}$ is enriched. \end{proposition}
\begin{proof}
For the first claim, let $\mathbb T$ be enriched.
Then the $\Sigma$-algebra $TX$ of \autoref{E:TX} is
coherent: Given an operation symbol $\sigma\in\Sigma_{\Gamma}$ and
monotone interpretations $f\leq g$ in $\mathsf{Pos}(\Gamma, TX)$, we have
  $Tf\leq Tg$ because~$\mathbb T$ is enriched, and hence
  $f^*=\mu_{X}\cdot Tf\leq\mu_{X}\cdot Tg=g^*$.
  Therefore, $f^*(\sigma)\leq g^*(\sigma)$. That is,
\[
\sigma_{TX}(f)\leq\sigma_{TX}(g).
\]
For every algebra $A$ of the variety $\V_\T$ we have the unique
$\Sigma$-homomorphism $k\colon TA\rightarrow A$ such that
$k\cdot\eta_A=\mathsf{id}_A$ (since $TA$ is a free $\Sigma$-algebra
in $\V_\T$; see \autoref{T:mon-var}\ref{T:mon-var:1}). The coherence
of $TA$ implies the coherence of $A$: given $f_1\leq f_2$ in
$\mathsf{Pos}(\Gamma, A)$, we verify $\sigma_A(f_1)\leq\sigma_A(f_2)$ by
applying the commutative square
\[
\begin{tikzcd}
{\mathsf{Pos}(\Gamma, TA)} \arrow[r, "\sigma_{TA}"] \arrow[d,
"k\cdot (-)"']
&
TA \arrow[d, "k"]
\\
{\mathsf{Pos}(\Gamma, A)} \arrow[r, "\sigma_A"']
&
A
\end{tikzcd}
\]
to $\eta_A\cdot f_i$, obtaining
$ \sigma_A(f_i) = \sigma_A(k\cdot \eta_A\cdot f_i) = k\cdot
\sigma_{TA}(\eta_A\cdot f_i)$; by monotonicity of composition in
$\mathsf{Pos}$ and of $\sigma_{TA}$ as established above, this implies
$\sigma_A(f_1)\leq\sigma_A(f_2)$ as desired.
Conversely, let $\mathcal V$ be a variety of coherent
$\Sigma$-algebras. Given $f_1\leq f_2$ in $\mathsf{Pos}(X, Y)$, we prove
that the free-algebra monad $\T_\V$ fulfils $T_{\mathcal V}f_1\leq
T_{\mathcal V}f_2$. Let $e\colon E\hookrightarrow T_\mathcal V X$ be the subposet of
all elements $t\in|T_{\mathcal V}X|$ such that
$T_{\mathcal V}f_1(t)\leq T_{\mathcal V}f_2(t)$. Since for $x\in X$ we know that
$f_1(x)\leq f_2(x)$, the poset $E$ contains all elements
$\eta_X(x)$. Moreover, $E$ is closed under the operations of
$T_{\mathcal V}X$: Suppose that $\sigma\in\Sigma_{\Gamma}$ and that
$h\colon\Gamma\rightarrow T_{\mathcal V}X$ is a monotone map such that
$h[\Gamma]\subseteq E$; we have to show that
$\sigma_{T_{\mathcal V}X}(h)\in E$. Applying the commutative
square
\[
\begin{tikzcd}
{\mathsf{Pos}(\Gamma, T_{\mathcal V}X)} \arrow[r, "\sigma_{T_{\mathcal V}X}"] \arrow[d,
"T_{\mathcal V}f_i\cdot (-)"']
&
T_{\mathcal V}X \arrow[d, "T_{\mathcal V}f_i"]
\\
{\mathsf{Pos}(\Gamma, T_{\mathcal V}Y)} \arrow[r, "\sigma_{T_{\mathcal V}Y}"']
&
T_{\mathcal V}Y
\end{tikzcd}
\]
to $h$, we obtain
\begin{align*}
T_{\mathcal V}f_1(\sigma_{T_{\mathcal V}X}(h))
&= \sigma_{T_{\mathcal V}Y}(T_{\mathcal V}f_1\cdot h) \\
&\leq \sigma_{T_{\mathcal V}Y}(T_{\mathcal V}f_2\cdot h) \\
&= T_{\mathcal V}f_2(\sigma_{T_{\mathcal V}X}(h))
\end{align*}
  using in the inequality that $\sigma_{T_{\mathcal V}Y}$ is monotone and
  that $T_{\mathcal V}f_1\cdot h\leq T_{\mathcal V}f_2\cdot h$, which holds since
  $h[\Gamma]\subseteq E$; that is,
  $\sigma_{T_{\mathcal V}X}(h)\in E$, as desired.
We thus see that $E$ is a $\Sigma$-subalgebra of $T_{\mathcal V}X$. Since
$T_{\mathcal V}X$ is the free algebra of $\mathcal V$ w.r.t.~$\eta_X$ and the
subalgebra $E$ contains $\eta_X[X]$, it follows that
  $E=T_{\mathcal V}X$. This proves that $T_{\mathcal V}f_1\leq T_{\mathcal V}f_2$, as desired. \end{proof}
\begin{corollary}\label{C:enriched}
Enriched finitary monads on $\mathsf{Pos}$ correspond bijectively, up to
monad isomorphism, to finitary varieties of coherent ordered algebras. \end{corollary}
\section{Enriched Lawvere Theories}\label{S:enriched}
Power~\cite{Pow99} proves that enriched finitary monads on $\mathsf{Pos}$ bijectively correspond to Lawvere $\mathsf{Pos}$-theories. This is another way of proving \autoref{C:enriched}. However, we believe that a precise verification of all details would not be simpler than our proof. Here we indicate this alternative proof.
Dual to \autoref{R:tensor}, \emph{cotensors} $P \pitchfork X$ in an enriched category $\mathscr T$ (over $\mathsf{Pos}$) are characterized by an enriched natural isomorphism $\mathscr T(-,P\pitchfork X) \cong \mathsf{Pos}(P, \mathscr T(-,X))$. If we restrict ourselves to finite posets $P$ we speak about \emph{finite cotensors}.
\removeThmBraces \begin{definition}[{\cite{Pow99}}]
A \emph{Lawvere $\mathsf{Pos}$-theory} is a small enriched category
$\mathscr T$ with finite cotensors together with an enriched
identity-on-objects functor $\iota\colon \Pos_\mathsf{f}^{\mathsf{op}} \to \mathscr T$
which preserves finite cotensors. \end{definition} \resetCurThmBraces
\begin{example}\label{E:TV}
Let $\mathcal V$ be a variety, and denote by $\mathbb T_\mathcal V$ its free-algebra monad
on $\mathsf{Pos}$. The following theory $\mathscr T_\mathcal V$ is the restriction of the
Kleisli category of $\mathbb T_\mathcal V$ to $\Pos_\mathsf{f}$: objects are all contexts,
and morphisms from $\Gamma$ to $\Gamma'$ form the poset $\mathsf{Pos}(\Gamma',
T_\mathcal V \Gamma)$. A composite of $f\colon \Gamma' \to T_\mathcal V\Gamma$ and
$g\colon \Gamma'' \to T_\mathcal V\Gamma'$ is $f^*\cdot g\colon \Gamma'' \to
T_\mathcal V\Gamma$ where $(-)^*$ is the Kleisli extension (see
\autoref{R:KT}\ref{R:KT:3}). \end{example}
\removeThmBraces \begin{theorem}[{\cite[Thm.~4.3]{Pow99}}]\label{T:Pow}
There is a bijective correspondence between enriched finitary
monads on $\mathsf{Pos}$ and Lawvere $\mathsf{Pos}$-theories. \end{theorem} \resetCurThmBraces
\begin{example}\label{E:Powproof}
By inspecting Power's proof, we see that for the theory $\mathscr T_\mathcal V$
of \autoref{E:TV}, the corresponding monad is precisely the
free-algebra monad $\mathbb T_\mathcal V$. \end{example}
\begin{remark}
With every Lawvere $\mathsf{Pos}$-theory $\mathscr T$, Power associates the category
$\Mod \mathscr T$ of \emph{models}, which are enriched functors $\bar A\colon \mathscr T \to
\mathsf{Pos}$ preserving finite cotensors. Morphisms are all enriched
natural transformations between models.
In \autoref{E:TV}, every algebra $A$ of $\mathcal V$ yields a model $\bar A$
of $\mathscr T_\mathcal V$ by putting $\bar A(\Gamma) = \mathcal V(T_\mathcal V\Gamma, A)$ and
for $f\colon \Gamma' \to T_\mathcal V\Gamma$ we have
\[
\bar A(f) = f^*\cdot (-)\colon \mathcal V(T_\mathcal V\Gamma, A) \to \mathcal V(T_\mathcal V\Gamma', A).
\]
The proof of \autoref{T:Pow} implies that these are, up to
isomorphism, all models of $\mathscr T_\mathcal V$ and this yields an equivalence
between $\mathcal V$ and $\Mod\mathscr T_\mathcal V$. \end{remark}
Thus, \autoref{C:enriched} can be proved by verifying that every Lawvere $\mathsf{Pos}$-theory $\mathscr T$ is naturally isomorphic to $\mathscr T_\mathcal V$ for a variety of algebras, and the passage from $\mathbb T$ to $\mathcal V$ is inverse to the passage $\mathcal V \mapsto \mathscr T_\mathcal V$ of \autoref{E:Powproof}.
In addition, Nishizawa and Power~\cite{NP09} generalize the concept of Lawvere theory to a setting in which one may obtain an alternative proof of the non-coherent case (\autoref{C:nonenriched}); we briefly indicate how. Again we believe that that proof would not be simpler than ours. The setting of op.\ cit.\ includes a symmetric monoidal closed category $\mathcal V$ that is locally finitely presentable in the enriched sense and a locally finitely presentable $\mathcal V$-category $\mathscr{A}$. For our purposes, $\mathcal V = \mathsf{Set}$ and $\mathscr{A} = \mathsf{Pos}$.
\removeThmBraces \begin{definition}[{\cite[Def.~2.1]{NP09}}]\label{D:Lawth}
A \emph{Lawvere $\mathsf{Pos}$-theory} for $\mathcal V = \mathsf{Set}$ is a small ordinary
category $\mathscr T$ together with an ordinary identity-on-objects functor $\iota\colon
\Pos_\mathsf{f}^\mathsf{op} \to \mathscr T$ preserving finite limits. \end{definition} \resetCurThmBraces
\begin{example}
Every variety of (not necessarily coherent) algebras yields a theory
$\mathscr T$ analogous to \autoref{E:TV}: the hom-set
$\mathscr T(\Gamma,\Gamma')$ is $\Pos_{0}(\Gamma', T_\mathcal V \Gamma)$. \end{example}
\begin{remark}
Here, a model of a theory $\mathscr T$ is an ordinary functor $A\colon
\mathscr T \to \mathsf{Set}$ such that $A \cdot \iota\colon \Pos_\mathsf{f}^\mathsf{op} \to \mathsf{Set}$
is naturally isomorphic to $\mathsf{Pos}(-,X)/\Pos_\mathsf{f}^\mathsf{op}$ for some poset
$X$. The category $\Mod \mathscr T$ of models has ordinary natural
transformations as morphisms. \end{remark}
\removeThmBraces \begin{theorem}[{\cite[Cor.~5.2]{NP09}}]
There is a bijective correspondence between ordinary finitary monads
  on $\mathsf{Pos}$ and Lawvere $\mathsf{Pos}$-theories in the sense of \autoref{D:Lawth}. \end{theorem} \resetCurThmBraces
\section{Conclusion and Future Work}
Classical varieties of algebras are well known to correspond to finitary monads on $\mathsf{Set}$. We have investigated the analogous situation for the category of posets. It turns out that there are two reasonable variants: one considers either all (ordinary) finitary monads, or just the enriched ones, whose underlying endofunctor is locally monotone. (An orthogonal restriction, not considered here, is to require the monad to be strongly finitary, which corresponds to requiring the arities of operations to be discrete~\cite{ADV20}.) We have defined the concept of a variety of ordered algebras using signatures where arities of operation symbols are finite posets. We have proved that these varieties bijectively correspond to \begin{enumerate} \item all finitary monads on $\mathsf{Pos}$, provided that algebras are not
required to have monotone operations, and \item all enriched finitary monads on $\mathsf{Pos}$ for varieties of coherent
algebras, i.e.~those with monotone operations. \end{enumerate} \noindent In both cases, `term' has the usual meaning in universal algebra, and varieties are classes presented by inequations in context.
Although we have concentrated entirely on posets, many features of our paper can clearly be generalized to enriched locally $\lambda$-presentable categories and the question of a semantic presentation of (ordinary or enriched) $\lambda$-accessible monads. For example, what type of varieties corresponds to countably accessible monads on the category of metric spaces with distances at most one (and nonexpanding maps)? Such varieties will be related to Mardare et al.'s quantitative varieties~\cite{MardareEA16} (aka.~$c$-varieties~\cite{MardareEA17,MU19}), probably extended by allowing non-discrete arities of operation symbols.
Ji\v{r}\'i Rosick\'y (private communication) has suggested another possibility of presenting finitary monads on $\mathsf{Pos}$: by applying the functorial semantics of Linton~\cite{Linton69} to functors into $\mathsf{Pos}$ and taking the appropriate finitary variation in the case where those functors are finitary. We intend to pursue this idea in future work.
\end{document}
Finite algebra
In abstract algebra, an $R$-algebra $A$ is finite if it is finitely generated as an $R$-module. An $R$-algebra can be thought of as a homomorphism of rings $f\colon R\to A$; in this case $f$ is called a finite morphism if $A$ is a finite $R$-algebra.[1]
The definition of finite algebra is related to that of algebras of finite type.
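For example, $\mathbb{C}$ is a finite $\mathbb{R}$-algebra, since it is generated as an $\mathbb{R}$-module by $1$ and $i$. By contrast, the polynomial ring $\Bbbk[x]$ is a $\Bbbk$-algebra of finite type (it is generated as an algebra by the single element $x$) but not a finite $\Bbbk$-algebra: the powers $1,x,x^{2},\ldots$ are linearly independent over $\Bbbk$, so $\Bbbk[x]$ is not finitely generated as a $\Bbbk$-module.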
Finite morphisms in algebraic geometry
This concept is closely related to that of finite morphism in algebraic geometry; in the simplest case of affine varieties, given two affine varieties $V\subset \mathbb{A}^{n}$, $W\subset \mathbb{A}^{m}$ and a dominant regular map $\phi\colon V\to W$, the induced homomorphism of $\Bbbk$-algebras $\phi^{*}\colon \Gamma(W)\to \Gamma(V)$ defined by $\phi^{*}f=f\circ \phi$ turns $\Gamma(V)$ into a $\Gamma(W)$-algebra:
$\phi$ is a finite morphism of affine varieties if $\phi^{*}\colon \Gamma(W)\to \Gamma(V)$ is a finite morphism of $\Bbbk$-algebras.[2]
The generalisation to schemes can be found in the article on finite morphisms.
References
1. Atiyah, Michael Francis; MacDonald, Ian Grant (1994). Introduction to commutative algebra. CRC Press. p. 30. ISBN 9780201407518.
2. Perrin, Daniel (2008). Algebraic Geometry An Introduction. Springer. p. 82. ISBN 978-1-84800-056-8.
See also
• Finite morphism
• Finitely generated algebra
• Finitely generated module
\begin{document}
\title{
Dynamic Optimality Refuted~-- \\\protect For Tournament Heaps
}
\begin{abstract} We prove a separation between offline and online algorithms for finger-based tournament heaps undergoing key modifications. These heaps are implemented by binary trees with keys stored on leaves, and intermediate nodes tracking the min of their respective subtrees. They represent a natural starting point for studying self-adjusting heaps due to the need to access the root-to-leaf path upon modifications. We combine previous studies on the competitive ratios of unordered binary search trees by [Fredman WADS2011] and on order-by-next request by [Mart\'inez-Roura TCS2000] and [Munro ESA2000] to show that for any number of fingers, tournament heaps cannot handle a sequence of modify-key operations with competitive ratio in $o(\sqrt{\log{n}})$.
Critical to this analysis is the characterization of the modifications that a heap can undergo upon an access. There are $\exp(\Theta(n \log{n}))$ valid heaps on $n$ keys, but only $\exp(\Theta(n))$ binary search trees. We parameterize the modification power through the well-studied concept of fingers: additional pointers the data structure can manipulate arbitrarily. Here we demonstrate that fingers can be significantly more powerful than servers moving on a static tree by showing that access to $k$ fingers allows an offline algorithm to handle any access sequence with amortized cost $O(\log_{k}(n) + 2^{\lg^{*}n})$. \end{abstract}
\section{Introduction} \label{sec:intro}
One of the most intriguing open questions in data structures is the \textit{dynamic-optimality conjecture}. The conjecture states that splay trees can serve any sequence of operations with at most a constant times the cost of the best (adaptive) binary-search-tree (BST) based method, even if we allow the latter to know the sequence of accesses in advance (\ie, to work in ``offline'' mode). Despite decades of active research and deep results~\cite{Wilber1989,
Munro2000,DemaineHarmonIaconoPatrascu2007,WangDS06,Iacono2005,
Harmon06:thesis,DemaineHarmonIaconoKanePatrascu2009,ChalermsookGKMS15,Iacono2016a,KozmaSaranurak2018,LevyTarjan2019} the main conjecture remains wide open, and so is the more general question: \begin{center}\slshape
Is there an online binary-search-tree algorithm that~-- on any access sequence~--\mskip1mu
performs within a constant factor of the offline optimal for that sequence? \end{center}
In this paper, we ask the same question for heaps \footnote{
In the context of this paper, by a ``heap'' we mean any tree-based priority-queue data structure. }. Dynamic optimality of heaps has attracted a lot of interest recently due to the work of Kozma and Saranurak~\cite{KozmaSaranurak2018} who formalized a correspondence between self-adjusting BSTs and self-adjusting heaps like pairing heaps. They show that every heap algorithm in their model, ``stable heaps in sorting mode'' (discussed in \wref{sec:stable-heaps} in more detail), implies a corresponding BST algorithm with the same cost (up to constant factors and on the time-space inverted input). If the converse holds, too, is unclear, and they had to leave the question of dynamic optimality for heaps open. We note that \emph{refuting} dynamic optimality for stable heaps would hence not have immediate consequences for the existence (or nonexistence) of dynamically optimal BSTs.
While the dynamic-optimality conjecture has spurred the much wider study of online algorithms, \eg, \cite{ManasseMS90,BorodinE05:book,Hazan16:book,BansalBMN15,BubeckCLLM18,Lee2018}, competitiveness results are notoriously sensitive to details of the model of computation.
In fact, the historical starting point of competitive analysis, searching on linear lists~\cite{SleatorTarjan1985}, is taught in graduate courses, but~-- as is often overlooked~-- a more realistic model allowing arbitrary rearrangements (which can be simplified to linked-list operations) on the visited prefix allows any sequence known in advance to be served with $O(n \log{n})$ operations, significantly less than the $\Omega(n^2)$ lower bound for even processing random online access sequences~\cite{MartinezRoura2000,Munro2000}.
For the binary-search-tree problem itself, such a possibility, while strange, could also be consistent with observed gaps between performances of splay algorithms and more tuned dictionary data structures \footnote{
The performance
    gaps are often quoted as between $1.5 \times$ and $3 \times$,
while the value of $\lg\lg{n}$ for most values of $n$
in practice is at most $5$.
To our knowledge, the $O(\log\log{n})$-competitive search tree
algorithms~\cite{DemaineHarmonIaconoPatrascu2007,WangDS06}
have not been evaluated in practice. }. And as we will show in this paper, the separation of online and offline algorithms is a fact for heaps: we refute dynamic optimality for heaps based on tournament trees with decrease-key operations.
Our model of computation is a natural restriction of general pointer machines; analogous to how computation on linked lists~\cite{MartinezRoura2000,Munro2000} and binary search trees~\cite{Harmon06:thesis,DemaineHarmonIaconoKanePatrascu2009} have been defined.
Since nodes in a pointer machine have constant size, arbitrary-degree heap-ordered forests (as used in stable heaps) are not a convenient choice as primitive objects of manipulation. Instead, we use tournament trees~\cite[\S5.2.3]{Knuth98a:book}: Here each node has two children, and all the original keys are stored in the leaves. Moreover, each internal node stores the minimum of its children.
The priority-queue operations can be implemented as follows: The global minimum is always found at the root. To extract the minimum, we follow the path of copies of the minimum to the leaf that stores it, remove it (and its parent internal node) from the tree, and update the labels on the path. Insertion of a new element can be achieved by adding the new leaf and the old root as the two children of a new root node; merging two queues is similar. Assuming a pointer is provided to a leaf, we can also change its key to a different value and update the labels on the path from this leaf to the root.
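Before we fix the formal model, the following Python sketch illustrates this label propagation; it is illustrative only: the class and function names are ours, and the model below is instead phrased in terms of fingers on a pointer machine.
\begin{verbatim}
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right, self.parent = key, left, right, None
        for child in (left, right):
            if child is not None:
                child.parent = self

def tournament(left, right):          # internal node storing the subtree min
    return Node(min(left.key, right.key), left, right)

def change_key(leaf, new_key):
    leaf.key = new_key
    node = leaf.parent
    while node is not None:           # cost = length of the leaf-to-root path
        node.key = min(node.left.key, node.right.key)
        node = node.parent

leaves = [Node(k) for k in (3, 1, 4, 1)]
root = tournament(tournament(leaves[0], leaves[1]),
                  tournament(leaves[2], leaves[3]))
change_key(leaves[2], 0)              # decrease-key producing a new minimum
assert root.key == 0
\end{verbatim}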
In this work, we focus on the propagation part of the operations, in particular after changing the key of a leaf, and we will assume the worst case, namely that propagation of label changes always continues all the way up to the root. This corresponds to a sequence of decrease-key operations where the new key is the new minimum.\footnote{
Sequences of only change-key operations naturally occur, \eg, in merging $n$ runs;
updates that \emph{all} produce a new minimum are much less natural, but
are sufficient for our negative result. A dynamically optimal tournament heap would
in particular have to handle such sequences optimally. } Our formal model encodes this implicitly by requiring any access to touch the root-to-leaf path: \begin{definition}[Tournament trees] \label{def:model}
In the \textit{tournament-tree-with-$k$-fingers model of computation,}
one maintains
a collection of $n$ elements in the \textit{leaves} of a binary tree.
To serve an access to $x$, we start with all fingers $F_1,\ldots,F_k$ pointing at the root.
We can then use the following operations for each of the fingers $F_i$, $i=1,\ldots,k$:
\begin{enumerate}[itemsep=0ex]
\item Move $F_i$ to the parent, the left child or right child
of its current location (provided the followed pointer is not null).
\item Copy the location of $F_i$ into $F_0$ (the temporary finger).
\item Move $F_i$ to the location $F_0$.
\item Swap the subtree with root $F_0$ with the left or right child
of the node at $F_i$'s current location.
\item Detach the subtree with root $F_0$ from its parent and make it the left or right child
of the node at $F_i$'s current location (provided the replaced pointer is null).
\item Serve request from $F_i$ (provided $x$ is stored at its current location).
\end{enumerate}
Each sequence of operations must eventually serve the access (via the last operation).
The cost of this access is taken to be the number of operations (total of all fingers). \end{definition}
Our result for this model is the following separation of offline and online performance. \begin{theorem}[Online/Offline Separation] \label{thm:Main}
For any value $k = k(n)$,
the competitive ratio of tournament heaps with $k$-fingers is
  $\Omega(\max\{\log_k n, \log k\}) = \Omega(\sqrt{\log n})$. \end{theorem}
Moreover, while any online algorithm incurs amortized cost $\Omega(\log n)$ per access for most inputs even when $k$ is as large as $\sqrt{n}$, we show that we can do much better even with a subpolynomial number of fingers in the offline case by presenting a simple, efficiently computable offline algorithm.
Any sequence of operations on a tournament heaps can be served using $k$ fingers
with amortized $O(\log_{k}(n) + 2^{\lg^*(n)})$ cost per access. \end{theorem}
\wref[Theorems]{thm:Main} and~\ref{thm:offline-algo} show the fundamental importance that the underlying rearrangement primitives play, in stark contrast to the BST model where any subtree replacement can be simulated via rotations in linear time. The reason for this difference comes mainly from the fact that there are only exponentially ($2^{\Theta(n)}$) many BSTs on $n$ keys, but factorially ($2^{\Theta(n \log n)}$) many heaps \footnote{
  This statement is true for both standard heap-ordered trees and tournament trees,
but \emph{not} for Kozma and Saranurak's stable heaps! }, so tree rearrangements are much more powerful in heaps, and standard local primitives like rotations are no longer sufficient. Recall how weak primitive operations were also the critique of the model for self-adjusting linear lists~\cite{MartinezRoura2000,Munro2000} mentioned earlier. Adding the number of fingers as a parameter to the model allows us to precisely quantify the effect of more powerful rearrangement operations in tournament heaps.
Note that tournament trees place no restrictions on the order of keys in the leaves; they are thus essentially equivalent to leaf-oriented \emph{unordered} binary trees.
Fredman~\cite{Fredman2011,Fredman2012} previously studied a similar model, namely unordered binary trees, where keys are stored in all nodes, but no ordering constraint is placed on them. He only considers a single finger, and purely local restructuring (rotations and subtree swaps). He proved that in his model, for any given online method, one can construct (adaptively and in exponential time) an adversarial input on which this online method incurs cost $\Omega(n \log n)$, whereas the same input can be also served with linear costs (by an offline method tailored to this purpose).
Our result strengthens Fredman's work by explicitly studying leaf-oriented trees and (more importantly) by taking more realistic rearrangement primitives into account. The result is a much stronger separation between offline and online algorithms as soon as a non-constant number of fingers is available: The worst-case cost of our offline algorithm on sufficiently long sequences is asymptotically smaller than the average cost any online algorithm can achieve on random sequences. With $k=\omega(1)$, we refute dynamic optimality for tournament heaps even for the average competitive ratio.
\paragraph{Outline} In \wref{sec:background}, we discuss related work, and introduce notations for describing our models of trees. In \wref{sec:models}, we prove the (worst-case) separation between online and offline methods. \wref{sec:loglog} presents our efficient offline algorithm for many fingers. \wref{sec:conclusion} summarizes our findings and lists open problems.
\section{Background} \label{sec:background}
In this section, we introduce some notation for the search-tree model and summarize related work; in particular, we discuss differences between related models for trees studied in the context of dynamic optimality.
Our analysis will use the standard big-O notation, and we write $f\sim g$ to denote $f = g(1\pm o(1))$. We will use $\lg$ to denote the binary logarithm, and $\lg^{(k)}$ to denote the $k$-times iterated logarithm, \ie, $\lg^{(1)}(n) = \lg(n)$ and $\lg^{(k+1)}(n) = \lg(\lg^{(k)}(n))$. By $\lg^*(n)$ we denote the smallest $k$ so that $\lg^{(k)}(n) \le 1$. Intervals over integers ranging from $a$ to $b$ will be denoted using $[a \ldots b]$, and $[a]$ will be the shorthand for $[1 \ldots a]$.
Dynamic optimality asks whether a data structure that sees a sequence of operations online, that is, one at a time, can perform as well as a data structure that sees the entire access sequence ahead of time. The critical definition for studying dynamic optimality is the definition of an access sequence. We will denote the $n$ keys as $1 \ldots n$, and denote an access sequence of length $m$ as \[ A = a_1,\ldots, a_m. \] Usually, we are interested in the case $m \ge n$.
For such an access sequence, the cost of an online algorithm is the cost of it accessing $a_1$, and then $a_2$ and so on, while the offline optimal cost of accessing $A$ is the minimum total cost of accessing the entire sequence. The competitive ratio of an online algorithm $\textsc{Alg}$ on inputs of size $n$ is then \[ \lim_{m \rightarrow \infty} \max_{\text{$A$ of length $m$}} \frac{\text{cost}\left( \textsc{Alg}\left( A \right) \right)} {\textsc{OfflineOptimum}\left( A \right)}. \] We say that $\textsc{Alg}$ is $f(n)$-competitive if its competitive ratio on any input of size $n$ is at most~$f(n)$ for large enough $n$. For the converse, to show that $\textsc{Alg}$ is \emph{not} $f(n)$-competitive, it suffices to find a specific family of (arbitrarily large) inputs where the competitive ratio is worse than $f(n)$.
Small differences in the models of computation have profound consequences for the performances of heaps. We therefore start by formally defining and comparing these models.
\subsection{Trees in the Pointer Machine Model, and Fingers}
All of our data structures will be modeled using pointer machines. Here nodes of trees are represented as a collection of $O(1)$ pointers, each pointing to some other node. In particular, a binary tree is a collection of nodes each pointing to a parent, and a left/right child; some of these pointers can be ``null''.
Accesses and modifications of pointer-based data structures are done by manipulating the pointers. For this purpose, it is useful to consider ``fingers'', which are special global pointers kept by the data structure at the topmost level. These objects can be viewed as generalizations of the ``root'' vertex, which in this terminology is a static finger from which all subsequent accesses start. We keep the number of fingers as a parameter, $k$, which is allowed to depend on the size $n$ of the data structure (similar to how the word size $w$ of a word-RAM may depend on $n$). As mentioned in \wref{def:model}, the cost of performing a sequence of accesses is entirely the number of operations performed in moving/duplicating the fingers, and rearranging the pointers incident to them.
Our notion of fingers is in principle the same as defined earlier for BSTs, see, \eg, the lazy fingers~\cite{DemaineILO13}, but we prefer to explicitly distinguish between \emph{transient} and \emph{persistent} fingers: \begin{definition}
A data structure has access to $k$ \textit{persistent fingers} if it is
able to track, as global variables, $k$ special pointers
that it is able to retain across accesses.
The algorithm is allowed to manipulate these fingers arbitrarily
during the accesses.
In contrast, we use \textit{transient finger} to denote fingers that
only exist during a single operation, and are forgotten / reset
before the next operation. \end{definition} Our definition of transient fingers is motivated by the observation that a large number of persistent pointers trivializes most data structure questions~-- formalized in Appendix~\ref{app:elementary-offline-array-algorithm}, specifically Lemma~\ref{lem:very-very-lazy}~-- but the same is not true for transient fingers. We can use transient fingers for tree rearrangement, but not for shortening the access path. In particular, in the standard BST model, transient fingers do not add any power to the algorithm, as local rearrangements (rotations) are equally powerful there.
\subsection{Dynamic Optimality in Binary Search Trees}
The binary-search-tree model is another restricted pointer-machine model, in which each node of a binary tree stores a key, and the keys have to fulfill the search-tree property. An execution in the model can move a finger around the tree or rotate an edge of the tree. A variant of the BST model instead asks for specifying a \emph{replacement tree} for the subset of nodes visited while serving a request. Here, costs are measured by the number of visited nodes. For BSTs, both models are equivalent (up to constant factors)~\cite{Kozma2016}, and so is the addition of further transient fingers. For a general overview on dynamic optimality, we refer the reader to Iacono's 2013 survey~\cite{Iacono2013} and the comprehensive introduction in Kozma's dissertation~\cite{Kozma2016}.
They discuss (instance-specific) upper bounds (\eg, the working-set bound), (instance-specific) lower bounds, and the state of knowledge on concrete algorithms, in particular Splay~\cite{SleatorTarjan1985} and Greedy~\cite{Lucas1988,Munro2000}, as well as the geometric view of BST algorithms based on satisfied point sets~\cite{DemaineHarmonIaconoKanePatrascu2009}.
It is easy to see that when we do \emph{not} know the accesses in advance, most access sequences will require costs in $\Omega(n\log n)$: at any point in time all but $2^{\lfloor \lg n\rfloor/2} \le \sqrt n$ nodes are at depths $\ge \lfloor \lg n\rfloor / 2$ (and hence incur logarithmic cost to access). In the offline model, when we do know all accesses in advance, this is not at all obvious, but it has been shown that there are ``universally hard'' access sequences that require $\Omega(n\log n)$ access cost in \textit{any} binary-search-tree algorithm, online or not~\cite{Wilber1989,DemaineHarmonIaconoKanePatrascu2009}.
An intriguing feature of the binary-search-tree model is that not only does the question about constant-competitive online algorithms remain wide open, but also the existence of (reasonably efficient) instance-optimal \emph{offline} algorithms is unsolved. Indeed, designing good offline algorithms seems no simpler, and this fact is seen as one reason why a proof of (or counterexample for) dynamic optimality for splay trees has remained elusive~\cite{
ChalermsookGKMS15,LevyTarjan2019}.
To add insult to injury, a breakthrough result in the field was that Greedy, the most promising candidate of an instance-optimal offline algorithm, can indeed be turned into an \emph{online} algorithm (paying only a constant-factor increase in access costs, but a hefty fee in terms of conceptual complexity of the algorithm)~\cite{DemaineHarmonIaconoKanePatrascu2009}.
The current best upper bounds are $O(\log\log{n})$ competitive search trees~\cite{DemaineHarmonIaconoPatrascu2007,WangDS06}.
\paragraph{Standard vs Leaf-Oriented Trees}
The standard BST model stores a key in every node of a binary tree. An alternative are leaf-oriented BSTs, where only leaves carry a key, and internal nodes contain copies of keys serving only as ``routers'' for guiding searches. Even though they are much less prominent than their cousins with keys in all nodes, leaf-oriented BSTs have been studied, \eg, in the context of concurrent data structures~\cite{EllenFatourouRuppertBreugel2010}, where their conceptual simplicity and the locality of pointer changes are valuable.
From the perspective of adaptive BSTs, standard BSTs and leaf-oriented BSTs turn out to be equivalent: there are constant-factor-overhead simulations for both directions. The details are given in \wref{app:leaf-oriented-BSTs}.
\subsection{Unordered Binary Trees}
The work closest to ours are the articles by Fredman~\cite{Fredman2011,Fredman2012} mentioned above; his motivation, too, was to study self-adjusting heaps. Clearly, the search-tree property is a useless restriction for priority-queue implementations; but so seems insisting on \emph{binary} trees. Indeed, both pairing heaps (an analog of splay trees in the priority-queue world) and Fibonacci heaps are heap-ordered trees with arbitrary node degrees. However, Fredman earlier showed that such forest-based heaps can~-- in some generality~-- be encoded as binary tournament trees: In~\cite{Fredman1999}, he discusses how tournament trees can be simulated by forest-based heaps, and this mapping can also be used in reverse.
In tournament trees, all accesses are to leaves, so if one were to consider the question whether pairing heaps or other self-adjusting heap variants~-- recast as tournament-tree rearrangement heuristics~-- are (constant-)competitive algorithms, one should only demand competitiveness against accesses to leaves. In Fredman's model of unordered binary trees, all nodes carry a key; we extend his arguments to leaf-oriented tournament trees in this paper.
\subsection{Stable heaps} \label{sec:stable-heaps}
In a recent work, Kozma and Saranurak~\cite{KozmaSaranurak2018} set out to establish a theory of instance-optimality for forest-oriented heaps (and in particular pairing heap variants). They restrict access sequences on heaps to ``sorting mode''~-- $n$ inserts followed by $n$ extract-mins~-- and modify the primitive ``link'' operation to be \emph{``stable'',} \ie, to always keep the left-to-right order of subtrees intact.
More specifically, after an initial sequence of $n$ inserts, the heap consists of a list of $n$ top-level singleton roots. Each of the following $n$ extract-min operations is served by stably linking adjacent pairs of top-level roots, reducing their number by one each time, until a single root is left (which contains the minimum). The minimum is then removed, and its children form the new list of top-level roots.
The main result of Kozma and Saranurak is that every heap algorithm in this ``stable-heap'' model translates to a binary-search-tree algorithm; critically, the converse is not shown, \ie, that binary-search-tree algorithms imply heap ones. Therefore, the connections exhibited in~\cite{KozmaSaranurak2018} do not rule out the possibility that dynamic optimality holds for binary search trees, but not for (stable) heaps.
An important observation about stable heaps is that the stability condition for links implies that there are at most $C_n \le 4^n$ stable heap structures for a fixed insertion order of $n$ elements. It is unclear what consequences this restriction of the freedom in rearrangements has for algorithms.
\subsection{Further Related Work}
There are few other works that modify the computational model to gain insight into the nature of the dynamic-optimality conjecture. Iacono~\cite{Iacono2005} introduced the notion of ``key-independent optimality'', in which costs are averaged over all possible orders of keys (key ranks chosen randomly). He shows that any algorithm satisfying the working-set bound is optimal in the key-independent sense, and hence so are Splay and Greedy. The setup is different from our unordered trees since the maintained tree does have to conform to the search-tree property once the order of keys has been chosen.
Bose et al.~\cite{Bose2008} studied dynamic optimality on skip lists and variants of B-trees. They show that when insisting on certain balancing criteria, the working set bound is actually a lower bound for serving an access sequence with these data structures.
To our knowledge, the systematic separation of online and offline algorithms, or lower bounds for competitive ratios, is relatively understudied. Lower bounds for competitive data structures that we are aware of only include deterministic paging algorithms~\cite{ManasseMS90}, and linear searches on lists under arbitrary rearrangements of visited portions~\cite{MartinezRoura2000,Munro2000}.
Our modeling of heaps with the goal of providing an offline/online separation is motivated by analogous results on lists~\cite{MartinezRoura2000,Munro2000}, which gave a rearrangement model where online algorithms must take $\Omega(n^2)$, while offline algorithms take $O(n \log{n})$. However, we spend significantly more, if not most, of our effort addressing limitations on how the visited portion at each access can be rearranged. This is because of the much lower worst-case runtime upper bound in the static case ($O(n \log{n})$ as opposed to $O(n^2)$ for move-to-front on lists): the $O(\log{n})$ overhead associated with an arbitrary shuffle, or the $O(n \log{n})$ upper-bound obtained from implementing a merge-sort like scheme~\cite{Munro2000} is too high for heaps.
Our treatment of fingers follows the study of multi-finger binary search trees by Demaine et al.~\cite{DemaineILO13} and Chalermsook et al.~\cite{ChalermsookGKMS15}. To our knowledge, aside from the restriction to rotations made by Fredman~\cite{Fredman2011,Fredman2012}, which implicitly assumes a constant number of fingers, the role of fingers in heaps has not been explicitly studied previously.
\section{Online and Offline Separations} \label{sec:models}
In this section, we prove our first result, \wref{thm:Main}. As a warmup, we present a (simplified) counting argument for Fredman's ``Wilber-style'' lower bound. We then provide two bounds for the competitive ratio of any online tournament-tree algorithm, which are interesting for small resp.\ large numbers of fingers.
\subsection{Information-Theoretic Wilber Bound}
Fredman proved for his model of unordered binary trees, that some access sequences require cost $\Omega(n \log n)$ to serve (Theorem~3 in~\cite{Fredman2012}). We extend his result to tournament trees with $k$ fingers.
\begin{theorem}[Wilber-style lower bound] \label{thm:wilber0}
For any $n$ and $m$ there is an access sequence $A \in [n]^m$
that requires total cost at least
\[
m \cdot \log_{10k}(n)
\wwrel=
m \cdot \log_k(n) \left(1\pm\Oh\left(\frac1{\log k}\right)\right)
\]
in any tournament tree with $k$ fingers, even offline and with persistent fingers. \end{theorem}
\begin{proof} The proof is a counting argument. We can encode any sequence of $t$ operations (of the allowed operations as defined in \wref{def:model}) in a tournament tree with $k$ fingers by specifying for each time step, which of the $k$ fingers we used and which of the 10 possible operations we executed. Given the sequence of operations and the initial tree, we can uniquely reconstruct the access sequence $A$ that was served by it (by virtue of the ``serve'' operations).
In total, there are $(10k)^t$ sequences of operations with cost $t$, from which we can reconstruct at most $(10k)^t$ different access sequences that can be served with cost $t$; (some encodings represent an invalid execution and do not correspond to a served access sequence). Note that we can always add dummy operations to an operations sequence with cost $<t$ to turn it into one of length exactly $t$ that serves the same access sequence $A$, so it suffices to count the latter ones.
Since there are $n^m$ different access sequences of length $m$ on $n$ keys, we can only serve all correctly when $(10k)^t \ge n^m$, or when $t \ge m \log_{10k}(n)$. \end{proof}
This means that the best amortized cost per access one can hope for is $\Theta(\log_k(n))$ (in the worst case). Our offline algorithm in \wref{sec:loglog} will essentially achieve that. The above proof also shows that half of all access sequences require cost $\ge m \log_{10k}(n/2)$ etc., so $\log_k(n)$ is indeed a lower bound for the \emph{average} amortized cost, as well.
\subsection{Few Fingers}
We extend the rotation-based argument by Fredman~\cite{Fredman2011} to account for all possible operations involving the fingers as defined in \wref{def:model}. We first generalize the key lemma from Fredman's lower bound~\cite[Lem.\,2]{Fredman2011}.
\begin{lemma}[Adversarial Permutations] \label{lem:magic-permutation}
For any $n$, any sufficiently large $b \leq n$ and $k = o(b)$ (as $b\to\infty$),
we can find a (fixed) ``adversarial'' permutation
$\pi$ on $[1 \ldots b]$ such that for any initial configuration $(T,I,B)$ of a
tree $T$ on $[1 \ldots n]$,
locations $I$ of $k$ (persistent) fingers,
and access sequence $B = a_1, \ldots, a_{b}$ of $b$ distinct accesses
in $T$,
either the sequence $B$ itself
or the permuted sequence $B_\pi = a_{\pi(1)}, \ldots, a_{\pi(b)}$ requires cost
at least $0.3 b \log_{10k}(b)$,
even offline and using $k$ persistent fingers. \end{lemma}
The proof is an extension of Fredman's argument. We first separate out the observation that candidates for adversarial permutations can always be refuted with small trees (if at all).
\begin{lemma}[Small counterexample trees]
\label{lem:small-trees}
For any value $b$ and any value $t$,
if $\pi$ is a permutation such that
there exists a tree $T$ on $[n]$ with $n \geq b$
and initial positions $I \in [n]^k$ for $k$ (persistent) fingers in $T$,
as well as an access sequence $B= a_1, \ldots, a_b$
such that both $a_1, \ldots, a_b$ and
$B_\pi = a_{\pi(1)}, \ldots, a_{\pi(b)}$
can be served starting from $(T,I)$ with cost at most $t$,
then there exists a tree $T'$ over $\mathcal N \subseteq [n]$
containing $1,\ldots,b$
on at most $t' = |\mathcal N| \le 2t$ vertices
so that both $a_1, \ldots, a_b$
and $a_{\pi(1)}, \ldots, a_{\pi(b)}$ can be served starting from $(T',I)$
with total cost at most $t$. \end{lemma}
\begin{proof} The proof is analogous to the second part of Fredman's proof, but with the role of the root replaced by the fingers: In short, an operation sequence of cost $t$ can touch at most $t$ vertices on top of the accessed nodes $a_1,\ldots,a_b$, so we cannot see more than a limited neighborhood of these nodes.
More specifically, both the operations sequence $S_1$, serving $B$, and $S_2$, serving $B_\pi$, can each visit a portion of the tree of size at most $t$, and that portion must contain the initial positions of all fingers and $a_1,\ldots,a_b$. With persistent fingers, we allow a search to start at any finger (instead of the root), so the visited region is potentially disconnected, but for each execution sequence, it consists of the union of $k$ subtrees, since the region explored by one finger (before potentially jumping to the location of another finger) is a connected region.
We now consider the (induced subtree of the) union $\mathcal N$ of the nodes in these $2k$ regions. If the result is not a connected graph, we arbitrarily connect the components (attaching one component as the child of any leaf of another), forming a single connected binary tree $T'$ over $t' \le 2t$ nodes. $S_1$ and $S_2$ are still valid executions when starting with $(T',I)$ instead of $(T,I)$, proving the claim.~ \end{proof}
We now perform a counting argument similar to the first part of Fredman's proof~\cite{Fredman2011}, but taking into account the locations of fingers as well.
\begin{proof}[\wref{lem:magic-permutation}] Our goal will be to enumerate and count all possible witnesses $(T,I,B)$ to the ``tameness'' of some permutation $\pi$ over $[b]$, \ie, initial configurations such that when starting with tree $T$ and fingers at $I$, both $B$ and $B_\pi$ require at most $t$ operations to serve. Since each witness can eliminate at most one candidate for the adversarial permutation, having fewer than $b!$ witnesses implies the claimed existence of $\pi$; we will show that for $t$ bounded as in the lemma, this is indeed true.
By definition, for any witness $(T,I,B)$ to the tameness of $\pi$, there are two operation sequences $S_1$ and $S_2$, both of length at most $t$, so that $S_1$ serves $B$ and $S_2$ serves $B_\pi$. Moreover, we can recover both $B$ and $B_\pi$, and hence $\pi$ itself, from $(T,I,S_1,S_2)$. We therefore obtain a (crude) over-approximation of the set of witnesses by counting all such quadruples. Now, by \wref{lem:small-trees} we can restrict our attention to trees $T$ over $2t$ nodes, and there are no more than $4^{2t}$ such trees; there are $(2t)^k$ choices for the initial positions of $k$ pointers, and $(10k)^t$ options for each of $S_1$ and $S_2$, for a total of \begin{align*}
W(t)
&\wwrel=
2^{4t} \cdot (2t)^k \cdot (10k)^{2t} \end{align*} witness candidates. (The actual number of witnesses is lower because not all of these quadruples encode a valid witness.) We will now show that for $t = (1-\epsilon) c b\log_{10k}(b)$ with $\epsilon>0$ fixed and $c = (2+4/\lg10)^{-1} \approx 0.3121$, we have that $\lg W(t) < (1-\epsilon) b \lg b$ for large enough $b$, and hence eventually also $W(t) < b!$:
\begin{align*}
\lg W(t)
&\wwrel=
4t + k (\lg(t)+1) + 2t \lg (10k) \\
&\wwrel{\relwithtext[r]{$[k = o(b)]$}=}
2t \lg (10k) + 4t + o(b \log(t)) \\
&\wwrel{\relwithtext[r]{[insert t]}\le}
(2 + \tfrac 4{\lg 10} ) c \cdot (1-\epsilon) b\lg(b) + o(b \log(b)) \\
&\wwrel=
(1-\epsilon) b\lg(b) + o(b \log(b)). \end{align*} Thus, for $t$ bounded as in the statement and sufficiently large $b$, we have $W(t) < b!$, so there are fewer witnesses to tameness than there are permutations, and hence an adversarial permutation must exist. \end{proof}
Fredman~\cite{Fredman2011} used such a permutation to adaptively construct an adversarial sequence for any online algorithm. We can readily check that the construction also applies to tournament heaps with $k$ fingers, and therefore obtain a lower bound for the few-finger case.
\begin{corollary}[Few Fingers]
\label{cor:small-k}
The competitiveness ratio of tournament heaps with
$k$ fingers is at least $\Omega(\log_k{n}) = \Omega(\frac{\log{n}}{\log{k}})$. \end{corollary}
This result even holds when the online algorithm has access to $k$ persistent fingers, whereas the offline algorithm is restricted to a single transient finger.
\begin{proof}
We follow Fredman's proof~\cite{Fredman2011}.
Specifically, we assume $n = b^2$ is a square,
and let $\pi$ be an ``adversarial'' permutation as given
by Lemma~\ref{lem:magic-permutation} with length $b$.
\Withoutlossofgenerality let the keys in the initial tree
be $1,\ldots,n$ in inorder.
Then for any online algorithm $\textsc{Alg}$, we describe an adversary
that (adaptively) generates a permutation $A$ on which $\textsc{Alg}$ incurs
total cost $\Omega(n \log_{k}(n))$. For that, the adversary iteratively considers
the $b$ elements in $B_i = [(i - 1)b + 1 \dots ib] = (i-1)b + [1\ldots b]$ for $i=1,\ldots,b$,
and selects as the next block of requests
either $B_i$ itself (\ie, requests in sorted order) or
its permuted copy
\[
B_i^\pi
\wwrel=
\left(i - 1\right)b + \pi\left( 1\right),\;
\left(i - 1\right)b + \pi\left( 2\right),\;
\ldots,\;
\left(i - 1\right)b + \pi\left( b \right).
\]
Call the $i$th block of requests $C_i$.
By \wref{lem:magic-permutation}, there is always a choice for the adversary that makes
$\textsc{Alg}$ pay $\Omega(b \log_k (b)) = \Omega(\sqrt n \log_k (n))$ on $C_i$,
for a total cost of $\Omega(n\log_k (n))$ after all $\sqrt n$ blocks in $A$.
It remains to check that the resulting permutation $A$
can be processed in $O(n)$ operations when known offline, even with a single
transient finger.
This is to be contrasted with the (potentially) superconstant number $k$ of
(persistent) fingers that the online algorithm was allowed to use.
We again follow Fredman's proof, but greatly simplify the presentation
based on the more recent understandings of multi-finger
search trees~\cite{DemaineILO13,ChalermsookGKMS15}:
We describe an offline algorithm for \emph{two} (transient) fingers
instead of a single finger;
since we can simulate a fixed, constant number of fingers
with a single one with a constant-factor overhead~\cite{DemaineILO13,ChalermsookGKMS15},
this yields the desired result.
Our strategy to serve $A$ will be to spend $O(n)$ overhead upon
the first access and thereby transform $T$ into a path with keys sorted
by next access. All future accesses can then be served by
simply rotating one edge at the root each.
Recall that $A$ is the concatenation of $C_1,C_2,\ldots,C_b$,
where each $C_i$ is either $B_i$ or $B_i^\pi$.
So we start with the tree that is $1, \ldots, n = b^2$ on a path;
we can rearrange any initial tree with $O(n)$ rotations into such a path.
It suffices to apply $\pi$ to the permuted blocks (in the tree)
to obtain a tree from which $A$ can be served in $O(n)$ steps.
For this, we can use two fingers to arrange the trees into
an analog of operations on 2-D arrays:
\begin{enumerate}
\item We first transform the single chain into a ``row major'' matrix,
that is, each set $[(i - 1)b + 1, ib]$ forms a path,
and the root has a path containing the roots $1, b + 1, 2b + 1, \ldots$ of these block paths.
\wref{fig:step1} illustrates this step.
Using two fingers, one for ``reading'' through the path and the other for appending to the current
row, this transformation is easily accomplished with $O(n)$ operations.
\begin{figure}
\caption{
The first step of the transformation, from path to row-major ordered matrix.
The shaded nodes each consist of one or two internal nodes and a leaf with the stored key.
}
\label{fig:step1}
\end{figure}
\item Next, extract out all the $i$s for which
we have to apply $\pi$ to obtain $C_i$.
We refer to this subset as $\widehat{I}$. In the tree, we rearrange
the path containing the heads of the $B_i$s so that all
$\widehat{I}$-blocks appear as a prefix; see \wref{fig:step2}.
This task is very similar to a quicksort-style partition on linked lists.
Using one ``read finger'' and one ``write finger'' to traverse the list of heads
in parallel, it is achieved with $O(b)$ operations.
\begin{figure}\label{fig:step2}
\end{figure}
\item Now we ``transpose''
this prefix into a ``column-major'' ordering:
there is a path starting from the root containing all values
of $1 \leq j \leq b$, and all elements of the form
$ib + j$ for $i \in \widehat{I}$ are attached to $j$
in a path; see \wref{fig:step3} for an example.
\begin{figure}
\caption{
Step 3 of the transformation: transposing the $\widehat I$ rows.
The transposed part is highlighted; the other blocks remain unchanged.
}
\label{fig:step3}
\end{figure}
\item Now we apply $\pi$ to the first $b$ path heads, thereby applying $\pi$ in parallel
to all blocks in $\widehat I$.
The important observation is that it is the same permutation $\pi$ that has to be applied
to all paths, so we can do it in one shot after the above preparation.
An arbitrary permutation of $b$ nodes can be applied with
$O(b \log{b}) = O(n)$ operations (see, \eg, \wref{lem:larger-k-perm} below);
here it would actually be sufficient to simulate, say, bubble sort using two fingers.
\wref{fig:step4} shows the result.
\begin{figure}
\caption{
Step 4 of the transformation: Applying $\pi$ in parallel to all $\widehat I$-blocks.
This step only affects the first $b$ paths; the other blocks remain unchanged.
}
\label{fig:step4}
\end{figure}
\item Reversing the first two operations then ``stitches''
this transformed sub-permutation back in the original,
and has the overall effect of replacing $ib + j$
by $ib + \pi(j)$ for all $i \in \widehat{I}$.
The reversal of the transformations can be achieved at the same cost as argued above.
\end{enumerate}
In total, we have shown how to serve $A$ in $O(n)$ time offline, even with a single transient finger.
Hence, the competitive ratio of $\textsc{Alg}$ is at least
\[
\Omega\left( \frac{n \log_{k}n}{n} \right)
\wwrel\geq
\Omega\left( \log_{k}(n) \right).
\] \end{proof}
One can obviously repeat this process to obtain arbitrarily long access sequences with the same competitive ratio.
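For illustration, the adaptive adversary used in the proof above can be phrased as a simple loop; the following Python sketch is our own rendering, where \texttt{cost\_of\_block} and \texttt{commit} are hypothetical callbacks that simulate the online algorithm on a copy of its state resp.\ feed it the chosen block.
\begin{verbatim}
# Our own sketch of the adaptive adversary from the proof above.
# cost_of_block(requests): simulate ALG on a copy of its state, return its cost.
# commit(requests): actually feed the chosen block to ALG.
def adversarial_sequence(b, pi, cost_of_block, commit):
    A = []
    for i in range(b):                               # blocks i = 1, ..., b (0-indexed here)
        block_sorted   = [i * b + j for j in range(1, b + 1)]
        block_permuted = [i * b + pi[j] for j in range(b)]   # pi[0..b-1] has values 1..b
        # pick whichever block is more expensive for ALG; by the lemma, one of
        # them costs Omega(b log_k b) from ALG's current configuration
        chosen = max(block_sorted, block_permuted, key=cost_of_block)
        commit(chosen)
        A.extend(chosen)
    return A                                         # the served permutation of [1 .. b^2]
\end{verbatim}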
\subsection{Many Fingers}
A large number of fingers allows us to efficiently implement arbitrary permutations, and hence an \emph{order-by-next request} approach. We consider here the simple case that the input is a permutation and show that we can handle it at far lower cost than the average cost of accessing a node in a tree with $n$ nodes. This subroutine also forms the basis of our efficient offline algorithm for arbitrary access sequences in \wref{sec:loglog}.
\begin{lemma}[Permute]
\label{lem:larger-k-perm}
Using the operations given in Definition~\ref{def:model} on
$k$ fingers, any tournament tree on $n$ keys can be rearranged
into a path, with keys ordered by the next-request times,
at a cost of $O(n \log_{k}(n))$. \end{lemma} \begin{proof} We use the $k$ fingers to simulate $k$-way (external / linked-list based) mergesort~\cite[\S5.4.1]{Knuth98a:book}, sorting elements by their target position in the requested permutation.
More specifically, we proceed as follows. Because the tree is binary, we can find edge separators that break it into pieces (which are also trees) of size $O(n / k )$. Thus, with an overhead of $O(n)$ to move the fingers to these subtrees, plus recursively calling this arrangement procedure $k$ times on trees of size $O(n / k)$, we can arrange the tree into $k$ paths attached to the root, each sorted by next request times.
Then we merge these $k$ paths by advancing $k$ fingers, one along each path: at each step we simply take the path whose head contains the lowest next request time. This once again incurs an overhead of $O(n)$, which means the overall cost is bounded by the recurrence \[ C\left( n \right) \wwrel\leq k C\left( \frac{n}{k} + 1 \right) + O\left( n \right) \] which solves to $O(n \log_{k}n)$. \end{proof}
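As a non-pointer-machine illustration of this recurrence (our own code; the heap below merely selects the smallest head, which in the finger model only costs the pointer moves), the same split-and-merge scheme reads:
\begin{verbatim}
# Our own sketch of the k-way mergesort behind the lemma: split into k pieces
# of size O(n/k), sort each recursively, then merge the k sorted runs.
# In the finger model one merge pass costs O(n) pointer moves, giving
# C(n) <= k C(n/k + 1) + O(n), i.e. O(n log_k n) in total.
import heapq

def sort_by_next_request(keys, next_request, k):
    if len(keys) <= 1:
        return list(keys)
    chunk = -(-len(keys) // k)                       # ceil(n / k)
    runs = [sort_by_next_request(keys[i:i + chunk], next_request, k)
            for i in range(0, len(keys), chunk)]
    # repeatedly take the run head with the smallest next-request time
    return list(heapq.merge(*runs, key=next_request))
\end{verbatim}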
This says that any access sequence that is a permutation is easy to handle for tournament heaps, and in turn implies a lower bound on the competitive ratio in the many-fingers case. \begin{corollary}[Many fingers]
\label{cor:large-k}
The competitive ratio of tournament heaps with
$k$ transient fingers is at least $\Omega(\log k)$. \end{corollary}
\begin{proof}
Since we always start at the root, in any tree with $n$
nodes there is a node whose access cost is at least $\Omega(\log{n})$;
by Lemma~\ref{lem:k-lower}, this remains true even with $k$ persistent fingers.
Together with the offline upper bound of $O(\log_{k}(n))$ per access from \wref{lem:larger-k-perm},
the competitive ratio is therefore at least
\[
\frac{\Omega\left( \log{n} \right)}{O\left( \log_{k}(n) \right)}
\wwrel=
\Omega(\log {k}).
\] \end{proof}
\subsection{Putting Things Together}
We can now put together the bounds shown above to prove our main result.
\begin{proof}[\wref{thm:Main}]
Combining \wref[Corollaries]{cor:small-k}
and~\ref{cor:large-k} gives that the overall
competitive ratio is at least
\[
\Omega\left(\max\left\{ \frac{\log{n}}{\log{k}}, \log {k} \right\}\right),
\]
which is minimized at $\Theta(\sqrt{\log{n}})$
for $\lg{k} = \sqrt{\lg{n}}$, or
$k = 2^{\sqrt{\lg{n}}}$. \end{proof}
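For concreteness (our own spelled-out step), the balancing can be checked in one line: setting $\lg k = \sqrt{\lg n}$ gives
\[
\frac{\lg n}{\lg k} = \sqrt{\lg n} = \lg k,
\]
so both terms of the maximum coincide, and any other choice of $k$ only increases one of them.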
As a side remark, we note that this bound is different from what one would obtain from considering the $k$ fingers as $k$ servers on a (static) tree. Specifically, even if we have up to $k = \Theta(\sqrt{n})$ persistent fingers (which remain where they are after each access), the online cost of accesses is still $\Omega( \log{n})$. \begin{lemma}[Online lower bound with persistent fingers]
\label{lem:k-lower}
In any binary tree with $k$ persistent fingers, there are at least $n/2$ nodes whose
shortest path to a finger has length at least $\lg{n} - \lg{k} - \lg{3}$. \end{lemma}
\begin{proof}
Because the tournament tree is binary,
for any distance $d$, each finger can reach at most
$3\cdot 2^{d}$ vertices.
So the shortest distance to a finger must satisfy
\[
3k \cdot 2^{d} \geq n,
\]
or
\[
d \wwrel\geq \lg{n} - \lg{3} - \lg{k}.
\]
\end{proof} Thus, the much lower bound of $O(\log_{k}(n))$ comes from the ability to rearrange the tree: this is a key distinction between finger-based searches on (dynamic) trees and the study of server problems on a static tree (metric)~\cite{ManasseMS90,BansalBMN15,BubeckCLLM18,Lee2018}.
\section{Bucketed Order by Next Request} \label{sec:loglog}
By \wref{thm:wilber0} and \wref{lem:larger-k-perm} we can serve any access sequence without repetitions, \ie, any permutation, offline in optimal $\Theta(n \log_k (n))$ time. For general access sequences with repetitions of keys, such a solution seems not at all obvious, but we can achieve almost the same result by an algorithm that buckets elements by the time interval until their next request.
\begin{theorem}[Efficient Offline Algorithm]
\label{thm:large-k-general}
Given $k$ transient fingers,
we can perform any sequence of $m$ operations on a
tournament tree of size $n$ at cost $O(m (\log_{k}n + 2^{\lg^*(n)}))$. \end{theorem}
This is just \wref{thm:offline-algo} restated, but emphasizing that transient fingers suffice. This cost is optimal for $k = O(n^{1/(2^{\lg^*(n)})})$, \ie, for sub-polynomially many fingers.
\begin{proof}[\wref{thm:large-k-general}] On a high level, our algorithm keeps elements in buckets of exponentially increasing sizes and stores those buckets in a balanced binary tree. Accessing any bucket is then possible in $\Oh(\log \log n)$ time. An obvious candidate for defining buckets is the time of the next access to an element. While this approach is sufficient for the simple array-based data structure of \wref{app:elementary-offline-array-algorithm} and~-- in slightly disguised form~-- also constitutes the mechanism behind the offline list-update algorithm from~\cite{Munro2000}, it seems hard to (efficiently) maintain buckets based on next-access times within a binary tree.
Our solution instead uses the concept of \emph{recurrence time}, the time \emph{between} two successive accesses to the same element, to define buckets. To allow the maintenance of buckets in amortized constant time per access, we have to play a second trick: elements inserted into a bucket are kept separately in two buffers that distinguish next accesses in the ``near future'' from accesses in the ``distant future''. That allows to sort buffers once, without having to deal with insertions into sorted sequences.
To present the details, we fix some notation. We call $t$ the (global) \emph{time(stamp)} of the access~$a_t$.
We define the \emph{recurrence time} $r(t)$ of the access $a_t$ at time $t$ to be the number of time steps before that same element is requested the \emph{next} time in the future: \[
r(t)
\wwrel=
\min\bigl(\{t' - t : a_{t'} = a_t,\, t' > t \}\cup \{\infty\}\bigr).
\] ($r(t) = \infty$ if $a_t$ is the last occurrence of that element in the access sequence.)
Similarly, define the \emph{next access time} $n(x,t)$ of an access to key $x$ to be the earliest time $t' > t$ when $x$ is requested: $n(x,t) = \min\bigl(\{t' : a_{t'} = x,\, t' > t\} \cup \{\infty\}\bigr)$. We abbreviate $n(t) = n(a_t,t) = t + r(t)$.
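For illustration only (our own helper, not part of the algorithm), the recurrence times of a whole access sequence can be computed in a single pass:
\begin{verbatim}
# Our own helper: recurrence times r(t) of an access sequence A (0-indexed).
import math

def recurrence_times(A):
    """r[t] = steps until the element a_t is requested again (inf if never)."""
    last_seen, r = {}, [math.inf] * len(A)
    for t, x in enumerate(A):
        if x in last_seen:
            r[last_seen[x]] = t - last_seen[x]
        last_seen[x] = t
    return r

# Example: recurrence_times([1, 2, 1, 3, 2, 1]) == [2, 3, 3, inf, inf, inf]
\end{verbatim}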
\paragraph{Segments of accesses} We assume the access sequence has length $m\ge n$. We divide the accesses into \emph{``segments''} of length $n$ each (allowing a potentially incomplete last segment). At the beginning of each segment, we rearrange the entire tree $T$, so that elements that are not requested during this segment are below any elements that will be requested in the segment. (We put unused elements ``out of the way''.) This preprocessing step (a permutation) costs $\Theta(n\log_k (n))$ per segment, which is an amortized $\Theta(\log_k(n))$ contribution to the costs for any single access. We can thus focus on the first segment for the rest of this section. Moreover, we will understand the recurrence times of accesses to be relative to the segment, \ie, $r(t) = \infty$ if this is the last occurrence of $a_t$ in the first segment.
\paragraph{Buckets for Recurrence Time} We will use $b=\lceil \lg n\rceil$ \emph{``buckets''} $B_1,\ldots,B_b$ to hold elements, grouped by their current recurrence time: $B_1$ holds elements with recurrence time $1$, $B_2$ gets recurrence times $2$ and $3$, and in general, $B_j$ holds all elements with recurrence times $r$ in $[2^{j-1}{}\mskip1mu..\,2^{j}-1]$. Each bucket $B_j$ is a subtree, conceptually divided into three parts: two input buffers, called \emph{``near-future buffer''} and \emph{``far-future buffer'',} and a \emph{``sorted queue''}.
In a tournament tree, these can be represented by a convention like the one sketched in \wref{fig:buckets}. Note that in terms of its interface to other buckets, each bucket looks like one big binary node: it has pointers for a left and a right child. We can therefore form a binary tree of buckets; indeed, we keep the buckets in a balanced binary tree of height $\lceil\lg \lceil\lg n\rceil\rceil \le 1 + \lg \lg n$. Navigating to (the first node of) a bucket thus costs $\Oh(\log \log n)$.
\begin{figure}
\caption{
Sketch of representation of our (conceptual) buckets in the tournament tree,
showing the queue and the two buffers.
To the outside, the buckets look like a binary-tree node and can hence be arranged as a binary tree themselves.
The shaded nodes each consist of one or two internal nodes and a leaf with the stored key. }
\label{fig:buckets}
\end{figure}
We think of the buffers and the sorted queue of bucket $B_j$ as having a maximal capacity of $2^{j-1}$ elements each. The input buffers are linear lists into which a new node $v$ is inserted by making the current buffer $v$'s child and using $v$ as the new root of the buffer. The queue is similar, but here elements are only consumed by removing the root of the queue.
Moreover, for each bucket $B_j$, we store an \emph{``expiration time''} $e_j$ (for its sorted queue); this is the time when the sorted queue will resp.\ would become ``invalid''. The significance of expiration times will become more clear when we describe insertions into buffers below. The sorted queue can run empty earlier than its expiration time, but we will prove (\wref{lem:invariant}) that it always does so at the latest at time $e_j$: we do not consume elements past their best-before date. Initially, the sorted queue is empty and we set $e_j = 2^{j-1} - 1$.
\paragraph{Sorting elements into buckets} Suppose we serve the access at time $t$ to key $x=a_t$. $x$ is currently stored in some bucket $B_j$ and will be at the front of $B_j$'s sorted queue.
We remove $x$ from the sorted queue of $B_j$ and insert it into a new bucket $B_\ell$, depending on its recurrence time $r(t)$, namely so that $r(t) \in [2^{\ell-1} \,..\, 2^{\ell}-1]$ (\ie, $2^\ell$ is the smallest power of two greater than~$r(t)$).
Now, the buffer inside $B_\ell$ into which $x$ is inserted is selected based on the \emph{absolute time} of the next access to $x$, $t' = n(x,t) = t + r(t)$, and the expiration time $e_\ell$ of bucket $B_\ell$: If $t' > e_\ell + 2^{\ell-1}$, then we insert $x$ into the far-future buffer, otherwise into the near-future buffer. The overall procedure is given in \wref{alg:loglog}.
\begin{algorithm}
\plaincenter{\fbox{~\parbox{.95\linewidth}{
\sffamily\small\raggedright\strut
Repeat for all segments of the access sequence:
\begin{enumerate}
\item
Sort all elements by first access for this segment
and store as sorted queue of $B_0$.
\item
Initialize empty buckets $B_1,\ldots,B_b$.
\item
For each access $a_t$ in the segment:
\begin{enumerate}[topsep=0ex]
\item Let $\rho_1$ be the recurrence time after which $a_t$ occurred,
\ie, the unique number with $r(a_{t-\rho_1}) = \rho_1$,
or $\rho_1 \gets \infty$ if element $a_t$ was never accessed before.
\item
\(j \gets \begin{cases*}\lfloor \lg \rho_1\rfloor & if $\rho_1 < \infty$ \\
0 & otherwise\end{cases*}\)\quad and \quad
\(\ell \gets \begin{cases*}\bigl\lfloor \lg \bigl( r(a_t)\bigr) \bigr\rfloor & if $r(a_t) < \infty$ \\
0 & otherwise\end{cases*}\).
\item
If $t > e_\ell$, call Refresh($B_\ell$).\\
If $t > e_j$, call Refresh($B_j$).
\item
Access the first element, $a_t$, in the sorted queue of bucket $B_j$.
\item
If $n(a_t) \le e_\ell+2^{\ell-1}$,
insert $a_t$ into the near-future buffer of $B_\ell$;\\
otherwise, insert $a_t$ into the far-future buffer of $B_\ell$.
\end{enumerate}
\end{enumerate}
\noindent
The procedure Refresh($B_j$) refills the sorted queue:
\begin{enumerate}
\item While $t > e_j$ repeat:
\begin{enumerate}[topsep=0ex]
\item
Make the former near-future buffer of $B_j$ the new queue.\\
(The old sorted queue is guaranteed to be empty at this stage.)
\item
Make the former far-future buffer of $B_j$ the new near-future buffer.
\item
Initialize the far-future buffer of $B_j$ as empty.
\item
Set $e_j \gets e_j+2^{j-1}$.
\end{enumerate}
\item
Sort all elements (if any) in the queue of $B_j$ (forming the new \emph{sorted} queue).
\end{enumerate}
}}}
\caption{
Our doubly-logarithmic offline algorithm for unordered binary trees.
}
\label{alg:loglog} \end{algorithm}
\paragraph{Refreshing Buckets} When a bucket $B_j$ is about to expire, it is time to ``refresh'' it. We will opt for a lazy refreshing scheme that allows buckets to remain in an expired state, as long as they are not ``touched''. Here, by touching a bucket, we mean visiting it to access (and extract) an element or to insert an element into it. Upon touching a bucket, we check if it has expired, and if so, we refresh it before continuing.
Refreshing is comprised of the following steps: We use the current near-future buffer as the new queue, and sort its elements by their next access time. We also make the current far-future buffer the new near-future buffer, and create a new, empty far-future buffer. Finally, we advance bucket $B_j$'s expiration time $e_j$ by $2^{j-1}$, the capacity of $B_j$.
The above steps describe the usual refreshing procedure, but we have to slightly extend it in general: A bucket might not have been touched for an arbitrarily long time frame when its buffers remained empty. Then, we may have to ``fast-forward'' over several periods of $2^{j-1}$ steps before catching up with the current time. Note that in this case, there cannot be any elements in the intermediate queue(s), so only the last step actually sorts a nonempty queue. This is reflected in the code in \wref{alg:loglog}.
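The following Python sketch (our own simplification: plain lists instead of tree paths, and navigation costs are not modeled) mirrors a single bucket $B_j$ with its two buffers, sorted queue, expiration time, and lazy refresh as just described.
\begin{verbatim}
# Simplified sketch of one bucket B_j (our own code; lists instead of tree paths).
class Bucket:
    def __init__(self, j):
        self.cap = 2 ** (j - 1)       # capacity of the queue and of each buffer
        self.queue = []               # sorted by next access time, consumed at the front
        self.near = []                # next access in (e_j, e_j + cap]
        self.far = []                 # next access in (e_j + cap, e_j + 2*cap]
        self.e = self.cap - 1         # expiration time e_j

    def refresh(self, t, next_access):
        # promote the buffers until the bucket is no longer expired; intermediate
        # queues are provably empty, so sorting once at the end suffices
        while t > self.e:
            self.queue, self.near, self.far = self.near, self.far, []
            self.e += self.cap
        self.queue.sort(key=next_access)

    def extract_front(self, t, next_access):   # serve an access from this bucket
        if t > self.e:
            self.refresh(t, next_access)
        return self.queue.pop(0)

    def insert(self, x, t, next_access):       # (re-)insert an element after its access
        if t > self.e:
            self.refresh(t, next_access)
        if next_access(x) <= self.e + self.cap:
            self.near.append(x)
        else:
            self.far.append(x)
\end{verbatim}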
We initially keep elements in a global sorted queue, sorted by first access (for this segment); we formally call this the zeroth bucket $B_0$. We build the queue of $B_0$ in the preprocessing step at the beginning of a segment.
We thereby maintain the following invariant: \begin{lemma}[Bucket invariant] \label{lem:invariant}
At any time $t\in[m]$, the bucket $B_j$, $j=1,\ldots,b$, contains exactly the elements
whose next access (after $t$) happens after a recurrence time in $[2^{j-1},2^j)$.
Among those, the sorted queue contains elements with next access at an
(absolute) time in $(e_j-2^{j-1}, e_j]$,
sorted by next access time,
the near-future buffer contains elements with next access at a time in $(e_j, e_j+2^{j-1}]$, and
the far-future buffer those with time in $(e_j+2^{j-1}, e_j+2^j]$. \end{lemma} \begin{proof} The proof is by induction over time $t$. Initially, buckets are empty and there is no recurrence time before the first access, so the claim holds. Let us now assume the claim holds up to time $t-1$. At time $t$ there will be a new access, $a_t$, to be served. If $a_t$ occurs after a recurrence time in $[2^{j-1},2^j)$, we find it in $B_j$'s sorted queue. (It is vital here that recurrence times do \emph{not} change when we advance $t$, whereas the time \emph{until} the next access certainly does.) Moreover, if $a_t$ is accessed again in this segment, it has a new recurrence time $r(t) \in [2^{\ell-1},2^\ell)$, for some $\ell$. It then has to be (re-)inserted into~$B_\ell$.
Either of these two buckets might have expired, in which case we refresh it. Assume $B_j$ has expired, \ie, $t>e_j$. Refresh($B_j$) executes $p \ge 1$ ``promotion rounds'', \ie, $p$ iterations of the while loop, where $p$ is determined by requiring $t \le e_j+p \cdot 2^{j-1}<t+2^{j-1}$.
\begin{figure}
\caption{
Illustration of a refresh operation with $p=2$ steps.
The picture shows the ranges of valid next-access times for elements
in the sorted queue and buffers of $B_j$ before and after the refresh;
dots indicate times at which $B_j$ is touched.
Note that $B_j$ cannot have been touched during the gray period for
it would have been refreshed earlier then.
}
\label{fig:refresh}
\end{figure}
The code in \wref{alg:loglog} only sorts the queue after the last promotion round. This is sufficient since all temporary queues created in earlier promotion rounds must be \emph{empty:} $B_j$ cannot have been touched after its original expiration time $e_j$ since it would have been refreshed then, so in particular, it is not \emph{accessed} between time $e_j$ and $t$, and hence $B_j$ never contains an element with access time $t' \in (e_j,t)$. Since temporary queues represent time periods before $t$, they are all empty.
Moreover, in each promotion round, the update of $e_j$ exactly undoes the effect of turning the near-future buffer into the new queue and the far-future buffer into the new near-future buffer: the queue always contains elements with next access time $(e_j-2^{j-1}, e_j]$, the near-future buffer contains elements with next access time in $(e_j, e_j+2^{j-1}]$, and the far-future buffer those in $(e_j+2^{j-1}, e_j+2^j]$, maintaining the invariant (see \wref{fig:refresh}).
To serve the access $a_t$, we remove it from the sorted queue of $B_j$. Since that element's next access is no longer at time $t$ (after $a_t$ has been served), this reestablishes the invariant. Moreover, we note that the sorted queue is explicitly sorted upon a refresh and removing the first element is the only type of update it ever sees, so it remains in sorted order.
Finally, the element $a_t$ is (re-)inserted into $B_\ell$. If $n(t) > e_\ell + 2^{\ell-1}$, the element is inserted into the far-future buffer. Since we just refreshed the touched buckets if they were expired, we have $t \le e_\ell$. With $r(t) < 2^\ell$, this implies $n(t) = t + r(t) < e_\ell + 2^\ell$, so $a_t$ fulfills the conditions of the far-future buffer. If $n(t) \le e_\ell + 2^{\ell-1}$, we insert $a_t$ into the near-future buffer. It is vital to show that $n(t) > e_\ell$, otherwise it would belong into the sorted queue. But since $B_\ell$ expires after $2^{\ell-1}$ time steps, we have $t > e_\ell - 2^{\ell-1}$. With $r(t) \ge 2^{\ell-1}$, the claimed inequality follows: $n(t) = t + r(t) > e_\ell - 2^{\ell-1} + 2^{\ell-1} = e_\ell$. So in all cases, we have reestablished the invariant for time $t$. \end{proof}
From \wref{lem:invariant}, it immediately follows that our procedure is well-defined: The element $a_t$ to be accessed will always be at the top of the sorted queue, readily waiting to be picked up.
\paragraph{Cost analysis} We divide costs into intra-bucket maintenance and the access costs to reach the buckets in the first place. To serve one access in a segment, we touch two buckets: $B_j$ for retrieving the element from the sorted queue, and $B_\ell$ for inserting the element; (possibly $B_j=B_\ell$). Inside $B_j$, the requested element is found at the root of the sorted queue, so we only pay constant extra cost inside $B_j$. In order to insert the new element into $B_\ell$, we need to locally modify a constant number of nodes inside $B_\ell$. Thus, both operations cause constant additional cost inside the buckets.
If we need to ``refresh'' a bucket, we pay constant overhead to promote the buffers. Sorting the near-future buffer costs at most $\Oh(\log_k(n))$ per element (\wref{lem:larger-k-perm}). We charge these costs to the next access of each element. (Every element is sorted only once before it is accessed again, so we charge each access at most once for sorting.)
Navigating to a bucket requires walking down a path of $\Oh(\log \log n)$ nodes. We access 2 buckets per access. The amortized cost of serving one access then consists of $\Oh(1)$ ``intra-bucket maintenance'' costs, $\Oh(\log \log n)$ to navigate to 2 buckets, and $\Oh(\log_k(n))$ sorting costs.
\paragraph{Hyper Buckets}
Serving an access consists of two conceptually independent steps: retrieving buckets (source and target) and modifying those buckets appropriately. The intra-bucket part already has optimal amortized cost, but finding the buckets incurred an $\Oh(\log\log n)$ penalty for navigating in a binary tree of $b = \lceil \lg n\rceil$ objects, which is significant for large $k$. We can improve this by observing that the \textsl{accesses to the buckets are themselves an instance of our original problem!}
We (conceptually) contract the buckets into single nodes (cf.\mskip1mu\wref{fig:buckets}) and assign them ids from $[b]$. Then, executing our doubly-logarithmic algorithm to serve an access sequence $a_1,\ldots,a_m$ generates a \emph{bucket access sequence} $a'_1,\ldots,a'_{m'} \in \{B_1,\ldots,B_b\}$ of length $m' \le 2m$, but over a universe of only $n' = b=\lceil \lg n\rceil$ different objects. We recursively apply our offline algorithm on $a'_1,\ldots,a'_{m'}$, breaking it into segments of $n'$ accesses each, and placing objects into one of $b' = \lceil \lg b \rceil \le \lg \lg n +1$ buckets.
Iterating this $d\ge 1$ times results in a ``hyper-bucket'' access sequence of length $m^{(d)} \le 2^d m$ over $n^{(d)} \le \lg^{(d)} (n) + 1$ different objects, where $\lg^{(d)}$ denotes the $d$-times iterated logarithm. Serving this last sequence by keeping the hyper buckets in a static, balanced tree yields total cost $\Oh( 2^d m \cdot \lg^{(d+1)} (n) )$. Moreover, we accumulate constant amortized cost over the $d$ levels of recursion (for maintaining buckets there), giving a total cost of $\Oh( m\log_k(n) + dm + 2^d m \lg^{(d+1)} (n) )$. To roughly balance the two factors involving $d$ in the third summand, we set $d = \lg^* n$, yielding total costs in $\Oh(m\log_k(n) + m \cdot 2^{\lg^* (n)})$. Note that sorting costs beyond the topmost level of recursion are entirely dominated by the sorting costs for topmost buckets and for the preprocessing at the beginning of a segment. This completes the proof of \wref{thm:large-k-general}. \end{proof}
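As a purely numeric illustration of this last balancing step (our own code; it uses $\lfloor\lg\cdot\rfloor$ as an approximation, and the concrete numbers carry no meaning beyond illustration), one can tabulate $\lg^*(n)$ and the per-access term $d + 2^{d}\lg^{(d+1)}(n)$:
\begin{verbatim}
# Our own numeric illustration of the recursion-depth trade-off.
def lg_floor(n):                  # floor(lg n) for positive integers
    return n.bit_length() - 1

def lg_star(n):                   # iterations of lg until the value drops to <= 1
    d = 0
    while n > 1:
        n, d = lg_floor(n), d + 1
    return d

def per_access_term(n, d):        # roughly d + 2^d * lg^{(d+1)}(n)
    v = n
    for _ in range(d + 1):
        v = max(lg_floor(v), 1)
    return d + (2 ** d) * v

# e.g. for n = 2**65536: lg_star(n) == 5 and per_access_term(n, 5) == 37
\end{verbatim}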
\section{Conclusion} \label{sec:conclusion}
In this paper, we investigate models of self-adjusting heaps, specifically, we study tournament trees. In the spirit of earlier work on the list-update problem~\cite{MartinezRoura2000,Munro2000}, we point out the importance of the allowed primitive operations for rearranging the data structure. We claim that our model of $k$ (transient) fingers, where $k$ is a parameter, is a natural choice that allows us to study the continuum between purely local rearrangement operations and overly powerful global rearrangement. The influence of different rearrangement primitives is a new facet for the study of dynamic optimality that is not present for binary search trees, but enters the game for any type of general heap data structure (not only for tournament trees). Does the additional freedom in rearrangements give offline algorithms an insurmountable advantage? Or is there a way to make better use of it also in an online setting (in another model of heaps)?
We show that tournament-tree-based heaps that use a decrease-key operation cannot be dynamically optimal, totally irrespective of the number of fingers we choose to allow. Our result invites two modifications for getting around the strong separation. First, one might be willing to sacrifice the ability to modify keys, and allow only insert and extract-min operations. \ifarxiv{
However, this seems unlikely to defy arguments along the lines given above if suitably adapted. }{
However, similar arguments as above show that dynamic optimality is out of reach even
for only extract-min operations. }
A second, more promising route is to abandon tournament trees altogether. We believe that investigating other models of heaps, especially ones with a less stringent requirement for rearranging paths, is a highly interesting question; heap-ordered binary trees and unbounded degree forests immediately come to mind. We leave their study for future work.
A natural open problem posed by our algorithm is whether the optimal $\Oh(\log_k(n))$ amortized access cost is attainable in the tournament-tree model with $k$ fingers for any input, or if there is an intrinsic cost of insisting on a (binary)-tree-based data structure (as opposed to random-access memory).
More generally, we also believe that separation between online and offline algorithms is a far more widespread phenomenon. The current best running times for offline and online algorithms differ in a multitude of problems related to dynamic graph data structures~\cite{Eppstein94,PengSS17:arxiv,HolmDT01,KelnerOSZ13,CohenKMPPRX14}. Formulating and investigating this separation is an intriguing task that is significantly beyond the scope of this paper.
\appendix
\section{Offline Algorithms Are Boring With Random Access} \label{app:elementary-offline-array-algorithm}
As a side comment, we will consider the power of offline algorithms when they are not restricted by the way in which they store the objects. We show that we can serve any access sequence offline in linear time (and thus, optimally) using an array.
\begin{lemma}
\label{lem:very-very-lazy}
Any access sequence on $n$ elements can be served by a
data structure with $n$ persistent pointers in $O(1)$ time per access. \end{lemma} \begin{proof}
We divide $A$ into \emph{segments} $A_0,A_1,A_2\ldots$ of $n$ accesses each, \ie, $A_k = a_{k\cdot n + 1},\ldots,a_{k\cdot n+ n}$ for all $k$ (except possibly an incomplete last round).
At the beginning of each segment, we create an empty array $R[1..n]$ holding (pointers to) the stored objects. Iterating over all objects $x$, we insert $x$ into $R[n(x,k \cdot n) - k\cdot n]$ if $n(x,k\cdot n) - k\cdot n \le n$, otherwise $x$ remains inactive in this round. We are now ready to start serving accesses. The $t\textsuperscript{th}$ request of this segment, $a_{k\cdot n + t} = x$ is found in $R[t]$, so we can return $R[t]$ to serve this access. Moreover, with $j = n(k\cdot n + t)$, the next access time to $x$, we update the array: if $j - k\cdot n \le n$, we insert a pointer to $x$ into $R[j-k\cdot n]$, otherwise $x$ becomes inactive for the rest of this round. We now continue in the same way with the remaining accesses of $A_k$. Since we reinsert all elements that are accessed in this round again, any access can be served by the reference from $R$, which takes constant time each. The preprocessing at the beginning of a round takes $\Theta(n)$ time, and can thus be amortized over the next $n$ accesses of the round. We can thus serve any sequence of accesses in optimal constant amortized time. \end{proof}
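A direct Python transcription (our own code) of this scheme looks as follows; the $\Theta(n)$ preprocessing per segment computes next-occurrence pointers, after which each access touches only the slot $R[t]$ and the slot of its next occurrence.
\begin{verbatim}
# Our own sketch of the array-based offline algorithm from the lemma above.
def serve_offline(A, n):
    total_cost = 0
    for s in range(0, len(A), n):             # one segment of (up to) n accesses
        seg = A[s:s + n]
        # offline preprocessing, Theta(n): next occurrence of each access
        nxt, last = [None] * len(seg), {}
        for t in range(len(seg) - 1, -1, -1):
            nxt[t], last[seg[t]] = last.get(seg[t]), t
        # place every key at the slot of its first access in this segment
        R, seen = [None] * len(seg), set()
        for t, x in enumerate(seg):
            if x not in seen:
                R[t] = x
                seen.add(x)
        # serving: request t is found at R[t]; reinsert it at its next occurrence
        for t, x in enumerate(seg):
            assert R[t] == x                  # constant-time lookup
            total_cost += 1
            if nxt[t] is not None:
                R[nxt[t]] = x
    return total_cost
\end{verbatim}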
\section{From Keys in Internal Nodes to Keys in Leaves -- And Back} \label{app:leaf-oriented-BSTs}
In this appendix, we show that from the perspective of dynamic optimality, standard BSTs and leaf-oriented BSTs are essentially equivalent.
\begin{figure}
\caption{
Transformation from standard BSTs to leaf-oriented trees (for a node with nonempty subtrees).
When $L$ and/or $R$ are empty, special rules apply: If $x$ is a leaf (both $L$ and $R$ are empty),
it is mapped to a leaf with key $x$.
If $x$ is a unary node, it is mapped to a single internal node with $x$ and its nonempty subtree
attached (in correct order).
}
\label{fig:standard-to-leaf-oriented}
\end{figure}
First of all, we can associate a leaf-oriented tree $T' = \ell(T)$ to any standard BST $T$ by applying the replacement rule shown in \wref{fig:standard-to-leaf-oriented} individually to all nodes in $T$. Since the transformation is entirely local (it keeps the structure of $T$ intact within $T'$), and replaces each node in $T$ by at most 3 nodes in $T'$, we immediately obtain the following lemma.
\begin{lemma} \label{lem:top-subtrees-bst-to-leaf-oriented}
Let $V_j$ be an arbitrary top-subtree of $T$. Then there is a top-subtree $V_j'$ of $T' = \ell(T)$
containing (the leaves with) all the keys in $V_j$ that satisfies $|V_j'| \le 3 |V_j|$.
Moreover, let $U$ result from $T$ by replacing $V_j$ by another binary tree over the same nodes.
Then $U' = \ell(U)$ can be obtained from $T' = \ell(T)$ by only modifying $V_j'$ in $T'$. \end{lemma}
We can thus simulate a sequence of BST restructuring operations for $T$ in the leaf-oriented model by replacing $V_j'$ with the leaf-oriented tree corresponding to the new top-subtree in $T$.
By \wref{lem:top-subtrees-bst-to-leaf-oriented}, a leaf-oriented BST can thus simulate any BST algorithm with a constant-factor overhead.
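To make the replacement rule concrete, here is a small Python sketch of one way to realize $\ell(T)$ (for illustration only; the arrangement for nodes with two nonempty subtrees respects the bound of at most 3 nodes per original node, but \wref{fig:standard-to-leaf-oriented} may attach the key-leaf to the mirror side). Trees are encoded as nested tuples: a standard BST is \texttt{None} or \texttt{(key, left, right)}, a leaf-oriented tree is \texttt{('leaf', key)} or \texttt{('node', left, right)}.
\begin{verbatim}
def ell(t):
    # ell(T): local replacement of every BST node by at most 3 nodes
    if t is None:
        return None
    key, left, right = t
    if left is None and right is None:        # leaf -> leaf with the same key
        return ('leaf', key)
    if left is None:                          # unary node, right subtree only
        return ('node', ('leaf', key), ell(right))
    if right is None:                         # unary node, left subtree only
        return ('node', ell(left), ('leaf', key))
    # two nonempty subtrees: two internal nodes plus one leaf carrying key
    return ('node', ('node', ell(left), ('leaf', key)), ell(right))

print(ell((2, (1, None, None), (3, None, None))))
# ('node', ('node', ('leaf', 1), ('leaf', 2)), ('leaf', 3))
\end{verbatim}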
The inverse direction is a bit trickier since not all leaf-oriented BSTs can be translated back by a purely local transformation: In a standard BST, we always need a key in the root, whereas there are leaf-oriented trees with all leaves at depths $\Theta(\log n)$. However, the following top-down procedure is sufficient for our purposes. Given a leaf-oriented tree $T'$, we define the standard BST $T=s(T')$ recursively (see also \wref{fig:leaf-oriented-to-standard}): If $T'$ is a single leaf, create a single node with that key. Otherwise, the root of $T'$ is an internal node (without a key). Let $x$ be the key in the leftmost leaf of the right subtree of the root of $T'$. $x$ will become the root of $T$, and the subtrees are translated recursively, with the leftmost leaf in the right subtree removed.
\begin{figure}
\caption{
Transformation from leaf-oriented trees to standard BSTs.
$x$ is the leftmost leaf in the right subtree of the root,
which is removed from the recursive call $s(R)$.
}
\label{fig:leaf-oriented-to-standard}
\end{figure}
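Again for illustration only, the top-down translation $s$ just described can be sketched with the same encoding as in the previous sketch (a leaf-oriented tree is \texttt{('leaf', key)} or \texttt{('node', left, right)}; a standard BST is \texttt{None} or \texttt{(key, left, right)}).
\begin{verbatim}
def s(tp):
    # s(T'): translate a leaf-oriented tree into a standard BST
    if tp[0] == 'leaf':                       # a single leaf -> a single node
        return (tp[1], None, None)
    _, left, right = tp
    x, right_rest = pop_leftmost_leaf(right)  # x becomes the root key
    return (x, s(left), s(right_rest) if right_rest is not None else None)

def pop_leftmost_leaf(tp):
    # return (key of the leftmost leaf, the tree with that leaf removed or None)
    if tp[0] == 'leaf':
        return tp[1], None
    _, left, right = tp
    k, rest = pop_leftmost_leaf(left)
    return k, (right if rest is None else ('node', rest, right))

print(s(('node', ('node', ('leaf', 1), ('leaf', 2)), ('leaf', 3))))
# (3, (2, (1, None, None), None), None)
\end{verbatim}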
\begin{lemma} \label{lem:top-subtrees-leaf-oriented-to-bst}
Let $V_j'$ be any top-subtree in $T'$, containing the set of (leaves with) keys $K_j$.
Then, there is a top-subtree $V_j$ in $T=s(T')$ that contains all keys $K_j$ and satisfies
$|V_j| \le |V_j'|$.
Moreover, let $U'$ result from $T'$ by replacing $V_j'$ by another binary tree over the same nodes.
Then $U = s(U')$ can be obtained from $T = s(T')$ by only modifying $V_j$ in $T$. \end{lemma} \begin{proof}
Consider applying $s$ to $T'$, but stopping the recursion whenever we reach a subtree that does not contain any node from $V_j'$. The resulting tree $V_j$ will have at most $|V_j'|$ nodes since each recursive step removes at least one node of $V_j'$ from further consideration, and adds one node to $V_j$. Moreover, all keys in $K_j$ are mapped to nodes with these keys, so they are contained in $V_j$. This proves the first part of the claim.
For the second part, we first observe that removing $V_j'$ from $T'$ disconnects the tree, leaving a sequence of subtrees $S'_1,\ldots,S'_k$ behind. Similarly, removing $V_j$ (as defined above) from $T$, leaves the subtrees $S_1,\ldots,S_k$ behind. The $S_i$ are obtained as follows. Unless all keys in $S_i'$ are smaller than all keys in $K_j$, remove the leftmost leaf from $S_i'$. Then apply $s$ to the resulting tree. Note that this leaves some $S_i$ empty when $S_i'$ consisted of a single leaf. Since we only change $V_j'$ when going from $T'$ to $U'$, removing the transformed top-subtree in $U'$ yields the \emph{same} subtrees $S'_1,\ldots,S'_k$. It follows that $s(U')$ can be obtained by starting at $T$ and only changing $V_j$, as claimed. \end{proof}
We can hence also simulate any leaf-oriented tree algorithm in a standard BST, with at most the same cost. This constant-overhead bi-simulation result means that (a) any constant-competitive online algorithm for (standard) BSTs also yields such an algorithm for leaf-oriented BSTs, and vice versa, and (b) any lower bounds for one model imply the same lower bound up to constant factors for the other model.
\end{document}
Josephine M. Mitchell
Josephine Margaret Mitchell (June 30, 1912 – December 28, 2000) was a Canadian-American mathematician specializing in the mathematical analysis of functions of several complex variables.[1][2] She was the victim of a notorious case of discrimination against women in academia when, after she married another more junior faculty member at the University of Illinois Urbana-Champaign, the university used its anti-nepotism rules to revoke her tenured position while allowing her husband to keep his untenured one.[3][4]
Early life and education
Mitchell was born on June 30, 1912, in Edmonton, Alberta, and graduated in 1934 from the University of Alberta. She went to Bryn Mawr College for graduate study, earning a master's degree in 1941 and completing her Ph.D. in 1942.[1] Her dissertation, On Double Sturm-Liouville Series, was supervised by Hilda Geiringer.[5]
Career and later life
In 1948, Mitchell became an assistant professor at the Oklahoma Agricultural and Mechanical College,[6] and later the same year moved to the University of Illinois Urbana-Champaign.[7] She was granted tenure as an associate professor in 1953, after which she married (as his second wife) the younger mathematician Lowell Schoenfeld, then an assistant professor at the same university. Under the anti-nepotism rules in place at the time, their marriage caused the university to remove her from her tenured and more senior position, while allowing Schoenfeld to keep his more junior position.[2][3][4][7] Appeals to the American Association of University Professors and American Association of University Women were unsuccessful, and both Mitchell and Schoenfeld resigned from the university in protest.[2][3][4][7] Less seriously, the American Mathematical Society (AMS) refused to allow newlyweds Mitchell and Schoenfeld to share accommodations at a conference organized by the society, because they used different surnames.[7]
Mitchell held various teaching and research positions[3][8] while trying to solve her and her husband's two-body problem,[2] including a year at the Institute for Advanced Study as a Marion Talbot Fellow of the American Association of University Women,[3][9] working as a researcher at General Electric Company and the Westinghouse Research Laboratory in Pittsburgh, and becoming an associate professor at the University of Pittsburgh.[10][11] She and her husband obtained positions at Pennsylvania State University in 1958;[8] unusually among American universities, Penn State took advantage of the post-war availability of women in the academic job market to strengthen its reputation by hiring many strong women faculty members.[3][4] Mitchell and her husband were both promoted to full professor in 1961.[12] She moved with her husband to the University at Buffalo in 1968, and retired in 1982.[8]
Beyond her research, Mitchell's interests included wildflower photography and mathematical libraries. When a flood destroyed the mathematics library of Charles University in Prague, their books and journals—which were given to the AMS as a part of a bequest—were sent to help replace it.[2]
She died on December 28, 2000, in Amherst, New York.[8]
Recognition
Mitchell was named a Fellow of the American Association for the Advancement of Science in 1953.[13]
References
1. Murray, Margaret A. M. (August 26, 2016), "Josephine Margaret Mitchell, Bryn Mawr 1942", Women Becoming Mathematicians, retrieved 2021-04-19
2. Ewing, John (May 2003), Mitchell–Schoenfeld dedication (PDF), American Mathematical Society, retrieved 2021-04-19
3. Rossiter, Margaret W. (1995), Women Scientists in America: Before Affirmative Action, 1940–1972, Johns Hopkins University Press, pp. 125–126, ISBN 9780801857119
4. Eisenmann, Linda (Winter 1996), "Women, higher education, and professionalization: Clarifying the view", Harvard Educational Review, 66 (4)
5. Josephine M. Mitchell at the Mathematics Genealogy Project
6. "News and notices", The American Mathematical Monthly, 55 (2): 113–117, February 1948, doi:10.1080/00029890.1948.11991919, JSTOR 2305763
7. Kenschaft, Patricia C. (2005), Change Is Possible: Stories of Women and Minorities in Mathematics, American Mathematical Society, pp. 74–76, ISBN 9780821837481
8. "Josephine M. Mitchell, UB Math Professor", Buffalo News, December 31, 2000
9. "News and notices", The American Mathematical Monthly, 62 (1): 58–65, January 1955, doi:10.1080/00029890.1955.11988581
10. "News and notices", The American Mathematical Monthly, 64 (3): 207–211, March 1957, doi:10.1080/00029890.1957.11988962, JSTOR 2310575
11. "News and notices", The American Mathematical Monthly, 65 (1): 57–65, January 1958, doi:10.1080/00029890.1958.11989137, JSTOR 2310326
12. "News and notices", The American Mathematical Monthly, 69 (2): 181–185, February 1962, doi:10.1080/00029890.1962.11989853, JSTOR 2312575
13. Historic Fellows, American Association for the Advancement of Science, retrieved 2021-04-19
\begin{document}
\begin{center} \Large{\textbf{On the nontrivial zeros of the Dirichlet eta function}}\\ ~\\
\large{Vladimir Garc\'{\i}a-Morales}\\
\normalsize{} ~\\
Departament de F\'{\i}sica de la Terra i Termodin\`amica\\ Universitat de Val\`encia, \\ E-46100 Burjassot, Spain \\ [email protected] \end{center} \small{We construct a two-parameter complex function $\eta_{\kappa \nu}:\mathbb{C}\to \mathbb{C}$, $\kappa \in (0, \infty)$, $\nu\in (0,\infty)$ that we call a holomorphic nonlinear embedding and that is given by a double series which is absolutely and uniformly convergent on compact sets in the entire complex plane. The function $\eta_{\kappa \nu}$ converges to the Dirichlet eta function $\eta(s)$ as $\kappa \to \infty$. We prove the crucial property that, for sufficiently large $\kappa$, the function $\eta_{\kappa \nu}(s)$ can be expressed as a linear combination $\eta_{\kappa \nu}(s)=\sum_{n=0}^{\infty}a_n(\kappa) \eta(s+2\nu n)$ of horizontal shifts of the eta function (where $a_{n}(\kappa) \in \mathbb{R}$ and $a_{0}=1$) and that, indeed, we have the inverse formula $\eta(s)=\sum_{n=0}^{\infty}b_n(\kappa) \eta_{\kappa \nu}(s+2\nu n)$ as well (where the coefficients $b_{n}(\kappa) \in \mathbb{R}$ are obtained from the $a_{n}$'s recursively). By using these results and the functional relationship of the eta function, $\eta(s)=\lambda(s)\eta(1-s)$, we sketch a proof of the Riemann hypothesis which, in our setting, is equivalent to the fact that the nontrivial zeros $s^{*}=\sigma^{*}+it^{*}$ of $\eta(s)$ (i.e. those points for which $\eta(s^{*})=\eta(1-s^{*})=0)$ are all located on the critical line $\sigma^{*}=\frac{1}{2}$. } \noindent ~\\
\pagebreak
\section{Introduction}
Let $s:=\sigma+it$ be a complex number. The Dirichlet eta function $\eta(s)$, also called alternating zeta function, is given in the half plane $\sigma >0$ by the conditionally convergent series \cite{T} \begin{equation} \eta(s)=\sum_{m=1}^{\infty}\frac{(-1)^{m-1}}{m^s} \label{eta1} \end{equation} which is absolutely convergent for $\sigma>1$. Hardy gave a simple proof of the fact that the eta function satisfies the functional equation \cite{T} \begin{equation} \eta(s) = \lambda(s)\eta(1-s) \label{func1} \end{equation} where \begin{equation} \lambda(s)=\frac{1-2^{1-s}}{1-2^{s}}2^{s}\pi^{s-1}\sin\left(\frac{\pi s}{2} \right)\Gamma(1-s) \label{chi1} \end{equation} From this, one immediately obtains the means to extend the definition of the eta function to the whole complex plane. Indeed, Euler's acceleration of the conditionally convergent series in Eq. (\ref{eta1}) yields a double series that is absolutely and uniformly convergent on compact sets everywhere \cite{Sondow,Hasse,Blagouchine} \begin{equation} \eta (s)=\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^s} \label{Ser} \end{equation}
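As a purely numerical aside (not used in what follows), the double series (\ref{Ser}) can be evaluated directly by truncation. The short Python sketch below, with the arbitrary truncation order $K=60$, reproduces $\eta(1)=\ln 2$ and $\eta(2)=\pi^{2}/12$ essentially to machine precision.
\begin{verbatim}
from math import comb

def eta(s, K=60):
    # truncation of the globally convergent double series for eta(s)
    total = 0j
    for k in range(K + 1):
        inner = sum(comb(k, m) * (-1) ** m / (m + 1) ** s for m in range(k + 1))
        total += inner / 2 ** (k + 1)
    return total

print(eta(1))                  # ~ 0.6931... = ln 2
print(eta(2))                  # ~ 0.8224... = pi^2 / 12
print(eta(0.5 + 14.134725j))   # small in modulus: close to the first nontrivial zero
\end{verbatim}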
The Dirichlet eta function is closely related to the Riemann zeta function by \begin{equation} \eta(s)=\left(1-2^{1-s}\right)\zeta(s) \label{etazeta} \end{equation} However, while the zeta function is meromorphic, with a pole at $s=1$, the eta function is an entire function. We note, from Eq. (\ref{eta1}), that at $s=1$, the eta function becomes the alternating harmonic series and, therefore, \begin{equation} \eta(1)=\sum_{m=1}^{\infty}\frac{(-1)^{m-1}}{m}=1-\frac{1}{2}+\frac{1}{3}-\ldots=\ln 2 \end{equation}
Let $s^*$ denote a zero of the eta function, $\eta(s^{*})=0$. There are two kinds of zeros: the trivial zeros for which, from the functional equation, we have $\lambda(s^{*})=0$; and the nontrivial zeros, for which $\eta(1-s^{*})= 0$. From Eq. (\ref{chi1}), the trivial zeros are the negative even integers and the zeros of the form $s^{*}=1+i\frac{2n\pi}{\ln 2}$ where $n$ is a nonzero integer (see \cite{Sondow2} for a derivation that does not make use of the functional relation).
Since there are no zeros in the half-plane $\sigma >1$, the functional equation implies that nontrivial zeros of $\eta$ are to be found in the critical strip $0\le \sigma \le 1$. By the prime number theorem of Hadamard and de la Vall\'ee Poussin \cite{T} it is known that for $\sigma=1$ (and, therefore, $\sigma=0$) there are no nontrivial zeros of the Riemann zeta function and, therefore, from Eq. (\ref{etazeta}), no nontrivial zeros of the Dirichlet eta function either. Thus, the nontrivial zeros are found in the strip $0< \sigma < 1$. Furthermore, since, $\forall s \in \mathbb{C}$ \begin{equation} \eta(\overline{s})=\overline{\eta(s)} \end{equation} where the bar denotes complex conjugation, we have that $\overline{s^{*}}$ and $1-\overline{s^{*}}$ are also nontrivial zeros of $\eta$. In brief, nontrivial zeros come in quartets, $s^{*}$, $1-s^{*}$, $\overline{s^{*}}$ and $1-\overline{s^{*}}$ forming the vertices of a rectangle within the critical strip. The statement that, for the $\eta$ function, the nontrivial zeros all have real part $\sigma^{*}=1/2$, so that $s^{*}=1-\overline{s^{*}}$ and $\overline{s^{*}}=1-s^{*}$ (the rectangle degenerating into a line segment) is equivalent to the Riemann hypothesis for the Riemann zeta function \cite{Broughan}.
In this article we investigate the position of the nontrivial zeros of the eta function with the help of nonlinear embeddings, a novel kind of mathematical structure introduced in our previous works \cite{JCOMPLEX,nembed}. We construct here a nonlinear embedding with the form of a double series that is absolutely and uniformly convergent on compact sets in the whole complex plane. We call this embedding a \emph{holomorphic nonlinear embedding}. It depends on a scale parameter $\kappa \in (0, \infty)$ and a horizontal shift parameter $\nu \in (0,\infty)$, both in $\mathbb{R}$, and converges asymptotically to $\eta(s)$ everywhere as $\kappa$ tends to infinity. With the help of M\"obius inversion, and taking advantage of the absolute convergence of the series concerned, we then show the crucial property that $\eta(s)$ itself can be expressed as a linear combination of horizontal shifts of the holomorphic nonlinear embedding $\eta_{\kappa \nu}(s)$, and we study the implications of this linear combination on the position of the nontrivial zeros of the eta function, giving a proof of the Riemann hypothesis.
The outline of this article is as follows. In Section \ref{holocons} we construct the holomorphic nonlinear embedding $\eta_{\kappa \nu}(s)$ for the Dirichlet eta function $\eta(s)$. We prove the global absolute and uniform convergence on compact sets of the series defining $\eta_{\kappa \nu}(s)$ and establish the asymptotic limits of the embedding. In Section \ref{crucial} we take advantage of these properties (specifically, we make heavy use of the absolute convergence of this series) to derive two crucial properties: 1) the holomorphic nonlinear embedding can be expressed as a linear combination of horizontally shifted eta functions; 2) the eta function itself can be expressed as a linear combination of horizontally shifted holomorphic nonlinear embeddings. These results are then exploited in Section \ref{RiemannH} to derive a functional relationship for the embedding and to prove the Riemann hypothesis, a result that emerges from the shift independence of the construction in the limit $\kappa \to \infty$.
\section{Holomorphic nonlinear embedding for the Dirichlet eta function} \label{holocons}
We first introduce some notations and the basic functions on which our approach is based.
\begin{defi} Let $x\in \mathbb{R}$. We define the $\mathcal{B}_{\kappa}$-function \cite{JPHYSA} as \begin{equation} \mathcal{B}_{\kappa}(x):= \frac{1}{2}\left[\tanh\left(\frac{x+\frac{1}{2}}{\kappa}\right)-\tanh\left(\frac{x-\frac{1}{2}}{\kappa}\right)\right] \label{generbox} \end{equation} where $\kappa \in (0,\infty)$ is a real parameter. \end{defi}
By noting that \begin{eqnarray} \mathcal{B}_{\kappa}(x)&=&\frac{e^{1/\kappa}-e^{-1/\kappa}}{e^{1/\kappa}+e^{2x/\kappa}+e^{-2x/\kappa}+e^{-1/\kappa}} \\ \frac{\mathcal{B}_{\kappa}\left(x\right)}{\mathcal{B}_{\kappa}\left(0\right)}&=&\frac{e^{1/\kappa}+2+e^{-1/\kappa}}{e^{1/\kappa}+e^{2x/\kappa}+e^{-2x/\kappa}+e^{-1/\kappa}} \label{easier} \end{eqnarray} the following properties are easily verified: \begin{eqnarray} 0\le \mathcal{B}_{\kappa}\left(x \right) &\le & 1 \qquad \forall{\kappa \in(0,\infty)} \qquad \label{lim00} \\ \mathcal{B}_{\kappa}\left(-x \right) &= & \mathcal{B}_{\kappa}\left(x \right) \label{even} \\ \lim_{\kappa \to \infty}\mathcal{B}_{\kappa}\left(x \right)&=& 0 \label{lim0} \\ 0\le \frac{\mathcal{B}_{\kappa}\left(x\right)}{\mathcal{B}_{\kappa}\left(0\right)} &\le & 1 \qquad \forall{\kappa \in(0,\infty)} \qquad \label{lim2} \\ \lim_{\kappa \to \infty}\frac{\mathcal{B}_{\kappa}\left(x\right)}{\mathcal{B}_{\kappa}\left(0\right)}&=&1 \label{lim1} \end{eqnarray}
\begin{defi} \emph{\textbf{(Holomorphic nonlinear embedding.)}} Let $\eta(s)$ be the Dirichlet eta function, given by Eq. (\ref{Ser}). Then, we define the holomorphic nonlinear embedding $\eta_{\kappa\nu}(s)$ of $\eta(s)$ as the series \begin{equation} \eta_{\kappa\nu}(s):=\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)} \label{RK} \end{equation} with the real parameters $\kappa \in (0,\infty)$ and $\nu\in (0,\infty)$. \end{defi}
\begin{theor} The double series in Eq. (\ref{RK}) converges absolutely and uniformly on compact sets to the entire function $\eta_{\kappa \nu}(s)$. \end{theor}
\begin{proof} We build on Sondow \cite{Sondow}, who proved that the double series defining the eta function in Eq. (\ref{Ser}) converge absolutely and uniformly on compact sets in the entire complex plane. In particular, if we define, \begin{equation} f_{k}(s):=\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^s} \end{equation} so that $\eta(s)=\sum_{k=0}^{\infty}f_{k}(s)$, Sondow proved that there is a sequence of positive real numbers $\{M_{k}\}$ satisfying \begin{equation}
\left|f_{k}(s)\right| \le \frac{1}{2^{k+1}}\sum_{m=0}^{k}\left|{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\right| \le M_{k} \label{keyt2d1} \end{equation} and which \begin{equation} \sum_{k=0}^{\infty} M_{k} <\infty \end{equation} so that Weierstrass M-test is satisfied. We now note that we can write $\eta_{\kappa\nu}(s)$ as \begin{equation} \eta_{\kappa\nu}(s)=\sum_{k=0}^{\infty}f_{k,\kappa\nu}(s) \end{equation} where \begin{equation} f_{k,\kappa\nu}(s):=\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^s} \frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)} \end{equation} Now, by the triangle inequality, \begin{eqnarray}
|f_{k,\kappa\nu}(s)|&=&\left|\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}\right| \nonumber \\
&&\le \frac{1}{2^{k+1}}\sum_{m=0}^{k}\left|{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}\right| \nonumber \\
&&=\frac{1}{2^{k+1}}\sum_{m=0}^{k}\left|{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\right| \left|\frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}\right| \nonumber \\
&&\le \frac{1}{2^{k+1}}\sum_{m=0}^{k}\left|{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\right| \end{eqnarray} where Eq. (\ref{lim2}) has been used. Therefore, from this last expression and Eq. (\ref{keyt2d1}) \begin{equation}
\left|f_{k,\kappa\nu}(s)\right| \le \frac{1}{2^{k+1}}\sum_{m=0}^{k}\left|{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\right| \le M_{k} \end{equation}
and, thus, the sequence of positive real numbers $\{M_{k}\}$ found by Sondow majorizes the sequence $\{\left|f_{k,\kappa\nu}(s)\right|\}$ as well, and the result follows. \end{proof}
\begin{theor} We have, $\forall s\in \mathbb{C}$ \begin{eqnarray} \lim_{\kappa\to \infty}\eta_{\kappa \nu}(s)&=&\eta(s) \qquad \qquad \qquad \qquad \forall \nu \in (0,\infty) \label{t221} \\ \lim_{\kappa\to 0}\eta_{\kappa \nu}(s)&=&\eta(s)-1 \ \qquad \qquad \qquad \forall \nu \in (0,\infty) \label{t222} \\ \lim_{\nu\to \infty}\eta_{\kappa \nu}(s)&=&\eta(s)+\frac{\mathcal{B}_{\kappa}(1)}{\mathcal{B}_{\kappa}(0)}-1 \qquad \ \ \forall \kappa \in (0,\infty) \label{t223} \\ \lim_{\nu\to 0}\eta_{\kappa \nu}(s)&=&\frac{\mathcal{B}_{\kappa}(1)}{\mathcal{B}_{\kappa}(0)}\eta(s) \qquad \qquad \quad \ \ \forall \kappa \in (0,\infty) \label{t224} \end{eqnarray} where $\mathcal{B}_{\kappa}(1)/\mathcal{B}_{\kappa}(0)=(\tanh \frac{3}{2\kappa}-\tanh \frac{1}{2\kappa})/(2\tanh \frac{1}{2\kappa})$, as given by Eq. (\ref{generbox}). \end{theor}
\begin{proof} We first observe that, from Eq. (\ref{easier}), \begin{eqnarray} &&\lim_{\kappa \to \infty} \frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}=1 \nonumber \\ &&\lim_{\kappa \to 0} \frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}= \left\{ \begin{array}{cc} 1 & \text{if } m\ge 1 \\ 0 & \text{if } m= 0 \end{array}\right. \nonumber \\ &&\lim_{\nu \to \infty} \frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}= \left\{ \begin{array}{cc} 1 & \text{if } m\ge 1 \\ \frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)} & \text{if } m= 0 \end{array} \right. \nonumber \\ &&\lim_{\nu \to 0} \frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}=\frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)} \nonumber \end{eqnarray}
By using Eq. (\ref{RK}) and the above expressions, we get \begin{eqnarray} \lim_{\kappa \to \infty}\eta_{\kappa\nu}(s)&=& \lim_{\kappa \to \infty}\left[ \sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}\frac{\mathcal{B}_{\kappa}\left(1/(m+1)^{\nu}\right)}{\mathcal{B}_{\kappa}\left(0\right)}\right] \nonumber \\ &=&\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}=\eta(s) \nonumber \\ \lim_{\kappa \to 0}\eta_{\kappa\nu}(s)&=& \sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=1}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}=\eta(s)-1 \nonumber \\ \lim_{\nu \to \infty}\eta_{\kappa\nu}(s)&=& \frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)}+\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=1}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}=\eta(s)+\frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)}-1 \nonumber \\ \lim_{\nu \to 0}\eta_{\kappa\nu}(s)&=& \frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)}\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s}}=\frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)}\eta(s). \qedhere \nonumber \end{eqnarray} \end{proof}
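For illustration only (none of the following is used in the proofs), one can evaluate a truncation of the series (\ref{RK}) numerically and observe the limit (\ref{t221}). The Python sketch below, with the arbitrary truncation order $K=60$ and the arbitrary choices $\nu=1$ and $s=\tfrac{1}{2}+14.1347\,i$, shows $|\eta_{\kappa\nu}(s)-\eta(s)|$ decaying as $\kappa$ grows.
\begin{verbatim}
from math import comb, tanh

def B(kappa, x):
    return 0.5 * (tanh((x + 0.5) / kappa) - tanh((x - 0.5) / kappa))

def eta_series(s, weight=lambda m: 1.0, K=60):
    # double series defining eta(s), with an optional weight on each term
    total = 0j
    for k in range(K + 1):
        inner = sum(comb(k, m) * (-1) ** m * weight(m) / (m + 1) ** s
                    for m in range(k + 1))
        total += inner / 2 ** (k + 1)
    return total

def eta_embed(s, kappa, nu, K=60):
    # truncation of the embedding eta_{kappa nu}(s)
    B0 = B(kappa, 0.0)
    return eta_series(s, weight=lambda m: B(kappa, (m + 1) ** (-nu)) / B0, K=K)

s, nu = 0.5 + 14.1347j, 1.0
eta_s = eta_series(s)
for kappa in (2.0, 10.0, 100.0):
    print(kappa, abs(eta_embed(s, kappa, nu) - eta_s))   # decays roughly like 1/kappa**2
\end{verbatim}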
\section{Functional expansions of $\eta_{\kappa \nu}(s)$ and $\eta(s)$} \label{crucial}
We now derive an equivalent expression for $\eta_{\kappa\nu}(s)$ valid for any $\kappa$ sufficiently large (specifically, $\forall \kappa > 3/\pi$) and prove that this expression can be inverted to express $\eta(s)$ as a function of horizontal shifts of $\eta_{\kappa\nu}(s)$.
\begin{theor} \label{1} If $\kappa>3/\pi$, $\forall \nu \in (0, \infty)$ the holomorphic embedding $\eta_{\kappa\nu}(s)$ has the absolutely convergent series expansion \begin{eqnarray} \eta_{\kappa\nu}(s)&=&\eta(s)+\sum_{n=1}^{\infty}a_n(\kappa) \eta(s+2\nu n)
\label{asinto} \end{eqnarray} where, for $n$ a non-negative integer \begin{eqnarray} a_{n}(\kappa)&=&\frac{1}{\tanh \left(1/2\kappa \right)}\sum_{j=n+1}^{\infty}\frac{2^{2n}(2^{2j}-1)B_{2j}}{j(2j-2n-1)!(2n)!\kappa^{2j-1}} \label{wjs} \end{eqnarray} and $B_{2m}$ denote the even Bernoulli numbers: $B_{0}=1$, $B_{2}=\frac{1}{6}$, $B_{4}=-\frac{1}{30}$, etc. \end{theor}
\begin{proof} For $\kappa > \frac{3}{\pi}$, the hyperbolic tangents in the definition of the $\mathcal{B}$-function, Eq. (\ref{generbox}) with $x=1/(m+1)^{\nu}$, can be expanded in their absolutely convergent MacLaurin series for all $\forall m \ge 0$ (note that $\forall \nu > 0$, $1/(m+1)^{\nu} \le 1$) \begin{eqnarray} &&\mathcal{B}_{\kappa}\left(\frac{1}{(m+1)^{\nu}}\right)=\nonumber \\ &=&\frac{1}{2}\sum_{j=1}^{\infty}\frac{2^{2j}(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}}\left[\left(\frac{1}{(m+1)^{\nu}}+\frac{1}{2} \right)^{2j-1}-\left(\frac{1}{(m+1)^{\nu}}-\frac{1}{2} \right)^{2j-1}\right] \nonumber \\ &=& \frac{1}{2}\sum_{j=1}^{\infty}\frac{2^{2j}(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}} \sum_{h=0}^{2j-1}{2j-1 \choose h}\frac{1}{2^{h}(m+1)^{\nu(2j-1-h)}}(1-(-1)^{h}) \nonumber \\ &=& \sum_{j=1}^{\infty}\frac{2(2^{2j}-1)B_{2j}}{(2j)! \kappa^{2j-1}} \sum_{h=1}^{j}{2j-1 \choose 2h-1}\frac{2^{2(j-h)}}{(m+1)^{2(j-h)\nu}} \nonumber \\ &=& \sum_{j=1}^{\infty}\frac{2(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}} \sum_{n=0}^{j-1}{2j-1 \choose 2j-2n-1}\frac{2^{2n}}{(m+1)^{2n\nu}} \nonumber \\ &=& \sum_{n=0}^{\infty}\frac{1}{(m+1)^{2n\nu}} \sum_{j=n+1}^{\infty}{2j-1 \choose 2n} \frac{2^{2n+1}(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}}
\label{Bernoultheo1} \end{eqnarray} where we have used the absolute convergence of the series to change the order of the sums. We also have \begin{eqnarray} \mathcal{B}_{\kappa}\left(0\right)=\tanh \frac{1}{2\kappa}=\sum_{j=1}^{\infty}\frac{2(2^{2j}-1)B_{2j}}{(2j)! \kappa^{2j-1}} \label{Bernoultheo2} \end{eqnarray}
If we then replace these expansions in the definition of the embedding, Eq. (\ref{RK}), we find, by exploiting the absolute convergence of the series \begin{eqnarray} \eta_{\kappa\nu}(s)&=&\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\frac{1}{2^{k+1}}\sum_{m=0}^{k}{k \choose m}\frac{(-1)^{m}}{(m+1)^{s+2n\nu}} \sum_{j=n+1}^{\infty}{2j-1 \choose 2n} \frac{2^{2n+1}(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}\mathcal{B}_{\kappa}\left(0\right)} \nonumber \\ &=&\sum_{n=0}^{\infty} \eta(s+2\nu n) \sum_{j=n+1}^{\infty}{2j-1 \choose 2n} \frac{2^{2n+1}(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}\mathcal{B}_{\kappa}\left(0\right)} \nonumber \\ &=&\sum_{n=0}^{\infty}a_n(\kappa) \eta(s+2\nu n) = \eta(s)+\sum_{n=1}^{\infty}a_n(\kappa) \eta(s+2\nu n) \label{RKp1} \end{eqnarray} where, for all non-negative integer $n$, we have defined \begin{eqnarray} a_n(\kappa) &:=& \sum_{j=n+1}^{\infty}\frac{2(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}\mathcal{B}_{\kappa}\left(0\right)} 2^{2n}{2j-1 \choose 2n} \label{acoef1} \\ &=&\frac{1}{\tanh \left(1/2\kappa \right)}\sum_{j=n+1}^{\infty}\frac{2^{2n}(2^{2j}-1)B_{2j}}{j(2j-2n-1)!(2n)!\kappa^{2j-1}} \nonumber \end{eqnarray} and we have also used that, from Eq. (\ref{Bernoultheo2}) \begin{eqnarray} a_0&=&\sum_{j=1}^{\infty}\frac{2(2^{2j}-1)B_{2j}}{(2j)!\kappa^{2j-1}\mathcal{B}_{\kappa}\left(0\right)} {2j-1 \choose 2j-1}=1. \nonumber \qedhere \end{eqnarray}
\end{proof}
\begin{corol} Asymptotically, for $\kappa$ large, we have, \begin{equation} \eta_{\kappa\nu}(s)=\eta(s)-\frac{\eta(s+2\nu)}{\kappa^2}+O\left(\frac{1}{\kappa^4} \right)
\label{asintostrong} \end{equation} \end{corol}
\begin{proof} From the theorem, we have $ \forall n\in \mathbb{Z}^{+}\cup \{0\}$ \begin{eqnarray} a_{n}(\kappa)&=&\frac{1}{\tanh \left(1/2\kappa \right)}\sum_{j=n+1}^{\infty}\frac{2^{2n}(2^{2j}-1)B_{2j}}{j(2j-2n-1)!(2n)!\kappa^{2j-1}} \nonumber \\ &=& \frac{1}{\frac{1}{2\kappa}-O\left(\frac{1}{\kappa^3}\right)}\sum_{j=n+1}^{\infty}\frac{2^{2n}(2^{2j}-1)B_{2j}}{j(2j-2n-1)!(2n)!\kappa^{2j-1}} \nonumber \\ &=& \left(1+O\left(\frac{1}{\kappa^2}\right)\right)\sum_{j=n+1}^{\infty}\frac{2^{2n+1}(2^{2j}-1)B_{2j}}{j(2j-2n-1)!(2n)!\kappa^{2j-2}} \nonumber \\ &=&\frac{2^{2n+1}(2^{2n+2}-1)B_{2n+2}}{(n+1)(2n)!\kappa^{2n}}+ O\left(\frac{1}{\kappa^{2n+2}}\right)=O\left(\frac{1}{\kappa^{2n}}\right) \label{asina} \end{eqnarray} Therefore, \begin{equation} \eta_{\kappa\nu}(s)=\eta(s)+30B_4\frac{\eta(s+2\nu)}{\kappa^2}+O\left(\frac{1}{\kappa^4} \right) \end{equation} and the result follows from noting that $B_4=-1/30$. \end{proof}
\begin{theor} \emph{\textbf{(M\"obius inversion formula.)}} \label{Moe} For any $\kappa> 3/\pi$ and $\eta_{\kappa\nu}(s)$ given by Eq.(\ref{asinto}), \begin{equation} \eta_{\kappa\nu}(s)=\eta(s)+\sum_{n=1}^{\infty}a_n(\kappa) \eta(s+2\nu n) \label{Gdir} \end{equation} we have, \begin{equation} \eta(s)=\eta_{\kappa\nu}(s)+\sum_{n=1}^{\infty}b_{n}(\kappa) \eta_{\kappa\nu}(s+2\nu n) \label{Ginv} \end{equation} where the coefficients $b_n$ are recursively obtained from \begin{equation} \sum_{n=0}^{k}b_{k}(\kappa)a_{n-k}(\kappa)=\delta_{k0} \label{bcoefs} \end{equation} with $\delta_{k0}$ being the Kronecker delta ($\delta_{k0}=1$ if $k=0$ and $\delta_{k0}=0$ otherwise). \end{theor}
\begin{proof} We have \begin{eqnarray} \sum_{n=0}^{\infty}b_{n}\eta_{\kappa\nu}(s+2\nu n) &=&\sum_{n=0}^{\infty}b_{n}\sum_{m=0}^{\infty}a_m(\kappa) \eta(s+2\nu n+2\nu m) \nonumber \\ &=&\sum_{k=0}^{\infty}\left[\sum_{n=0}^{k} b_{n}(\kappa)a_{k-n}(\kappa) \right] \eta(s+2\nu k) \nonumber \\ &=&\sum_{k=0}^{\infty}\delta_{k0} \eta(s+2\nu k) \nonumber\\ &=&\eta(s) \nonumber \end{eqnarray} The coefficients $b_n$ can be recursively obtained from the known $a_n$ by solving the equations \begin{eqnarray} a_0b_0&=&1 \\ a_0b_1+a_1b_0&=&0\\ a_0b_2+a_1b_1+a_2b_0&=&0\\ \ldots &\qquad & \nonumber \end{eqnarray} Since $a_0=1$, these give $b_0=1$ and, therefore, $b_1=-a_1$, $b_2=a_1^2-a_2$, etc. \end{proof}
\begin{rem} Eq. (\ref{Ginv}), with $\eta_{\kappa \nu}$ given by Eq. (\ref{RK}) is the main result of this work, since it expresses the Dirichlet eta function in terms of a globally convergent series (absolutely and uniformly on compact sets) of horizontal shifts of $\eta_{\kappa \nu}$. These shifts are weighted by powers of $1/\kappa$. \end{rem}
\begin{rem} Theorem \ref{Moe} is, indeed, a specific case of Theorem 3.3 on p. 82 in \cite{Nanxian} particularized to the functions $\eta$ and $\eta_{\kappa \nu}$ considered here. \end{rem}
\begin{prop} For $\kappa >3/\pi$ we have \begin{eqnarray} \sum_{n=1}^{\infty}a_{n}(\kappa)&=&\frac{\tanh \frac{3}{2\kappa}-\tanh \frac{1}{2\kappa}}{2\tanh \frac{1}{2\kappa}}-1 \label{propo1} \\ \sum_{n=1}^{\infty}b_{n}(\kappa)&=&\frac{2\tanh \frac{1}{2\kappa}}{\tanh \frac{3}{2\kappa}-\tanh \frac{1}{2\kappa}}-1 \label{propo2} \end{eqnarray} \end{prop}
\begin{proof} From Eq. (\ref{t223}) \begin{eqnarray} \lim_{\nu \to \infty}\eta_{\kappa\nu}(s)=\eta(s)+\frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)}-1 \label{coro21} \end{eqnarray} and, for $\kappa>3/\pi$, from Eqs. (\ref{Gdir}) and (\ref{Ginv}) \begin{eqnarray} \lim_{\nu \to \infty}\eta_{\kappa\nu}(s)&=&\eta(s)+\sum_{n=1}^{\infty}a_n(\kappa) = \eta(s)-\frac{\mathcal{B}_{\kappa}\left(1\right)}{\mathcal{B}_{\kappa}\left(0\right)}\sum_{n=1}^{\infty}b_n(\kappa) \label{coro22} \end{eqnarray} because, from Eqs. (\ref{Ser}) and (\ref{RK}), $\lim_{\nu \to \infty} \eta(s+2\nu n)=1$ and $\lim_{\nu \to \infty} \eta_{\kappa \nu}(s+2\nu n)=\mathcal{B}_{\kappa}\left(1\right)/\mathcal{B}_{\kappa}\left(0\right)$ for every integer $n\ge 1$. By equating Eqs. (\ref{coro21}) and (\ref{coro22}) and by using Eq. (\ref{generbox}), the result follows. \end{proof}
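For a numerical sanity check (illustration only), the coefficients $a_{n}(\kappa)$ of Eq. (\ref{wjs}) and the coefficients $b_{n}(\kappa)$ of Eq. (\ref{bcoefs}) can be computed by truncating the series over $j$ and running the triangular recursion. The Python sketch below, with the arbitrary truncation orders $J_{\max}=40$ and $N=12$ and the arbitrary choice $\kappa=2$, recovers $a_{0}=1$ and the value of $\sum_{n\ge 1}b_{n}(\kappa)$ given by Eq. (\ref{propo2}).
\begin{verbatim}
from fractions import Fraction
from math import comb, factorial, tanh

def bernoulli(N):
    # B_0, ..., B_N by the usual recursion (only even indices are used below)
    B = [Fraction(0)] * (N + 1)
    B[0] = Fraction(1)
    for n in range(1, N + 1):
        B[n] = -sum(Fraction(comb(n + 1, k)) * B[k] for k in range(n)) / (n + 1)
    return B

JMAX = 40                      # truncation of the series over j defining a_n
BER = bernoulli(2 * JMAX)

def a(n, kappa):
    s = 0.0
    for j in range(n + 1, JMAX + 1):
        s += (2 ** (2 * n) * (2 ** (2 * j) - 1) * float(BER[2 * j])
              / (j * factorial(2 * j - 2 * n - 1) * factorial(2 * n)
                 * kappa ** (2 * j - 1)))
    return s / tanh(1.0 / (2.0 * kappa))

def b_list(kappa, N):
    # invert the triangular system: sum_{k=0}^{n} b_k a_{n-k} = delta_{n0}
    A = [a(n, kappa) for n in range(N + 1)]
    b = [1.0]                  # a_0 = 1 forces b_0 = 1
    for n in range(1, N + 1):
        b.append(-sum(b[k] * A[n - k] for k in range(n)))
    return b

kappa = 2.0
print(a(0, kappa))             # = 1 up to the truncation error
bs = b_list(kappa, 12)
lhs = sum(bs[1:])              # truncated sum of the b_n, n >= 1
rhs = 2 * tanh(1 / (2 * kappa)) / (tanh(3 / (2 * kappa)) - tanh(1 / (2 * kappa))) - 1
print(lhs, rhs)                # the two values agree up to the truncations
\end{verbatim}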
\section{On the nontrivial zeros of $\eta$} \label{RiemannH}
In this section we prove the Riemann hypothesis. We first introduce three lemmas, that establish a functional equation for the embedding and its asymptotic properties.
\begin{lemma} \label{lem1}\emph{\textbf{(Functional relationship for the embedding.)}} We have $\forall \nu \in (0,\infty)$ and $\forall \kappa > 3/\pi$ \begin{equation} \eta_{\kappa\nu}(s)-\lambda(s)\eta_{\kappa\nu}(1-s) =\sum_{n=1}^{\infty}b_{n}(\kappa) \left[\lambda(s)\eta_{\kappa\nu}(1-s+2\nu n)-\eta_{\kappa\nu}(s+2\nu n) \right] \label{funcembed} \end{equation} where \begin{equation} \lambda(s)=\frac{1-2^{1-s}}{1-2^{s}}2^{s}\pi^{s-1}\sin\left(\frac{\pi s}{2} \right)\Gamma(1-s) \label{chi1b} \end{equation} \end{lemma}
\begin{proof} From Eq. (\ref{Ginv}) we have \begin{eqnarray} \eta(s)&=&\eta_{\kappa\nu}(s)+\sum_{n=1}^{\infty}b_{n}(\kappa) \eta_{\kappa\nu}(s+2\nu n) \nonumber \\ \lambda(s)\eta(1-s)&=&\lambda(s)\eta_{\kappa\nu}(1-s)+\lambda(s)\sum_{n=1}^{\infty}b_{n}(\kappa) \eta_{\kappa\nu}(1-s+2\nu n) \nonumber \end{eqnarray} whence, by subtracting both equations and applying Eq. (\ref{func1}) the result follows. \end{proof}
\begin{lemma} \label{lem2} If $s^{*}=\sigma^{*}+it^{*}$ is a non-trivial zero of $\eta(s)$ then $\eta_{\kappa \nu}(s^{*})\ne 0$ and $\eta_{\kappa \nu}(1-s^{*})\ne 0$ for finite asymptotically large $\kappa$ and $\nu > 1/2$. Furthermore, we have \begin{equation}
\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s^{*})}{\eta_{\kappa \nu}(1-s^{*})}=\frac{\eta(s^{*}+2\nu)}{\eta(1-s^{*}+2\nu)} \label{limit1} \end{equation} \end{lemma}
\begin{proof} The real part $\sigma^{*}$ of the nontrivial zero $s^{*}$ satisfies $0< \sigma^{*} < 1$. Now, since $\eta(s^{*})=\eta(1-s^{*})=0$, we have, from Eq. (\ref{asintostrong}) \begin{eqnarray} \eta_{\kappa\nu}(s^{*})&=&-\frac{\eta(s^{*}+2\nu)}{\kappa^2}+O\left(\frac{1}{\kappa^4} \right) \label{fund1} \\ \eta_{\kappa\nu}(1-s^{*})&=&-\frac{\eta(1-s^{*}+2\nu)}{\kappa^2}+O\left(\frac{1}{\kappa^4} \right) \label{fund2} \end{eqnarray} We have that $\forall \nu >1/2$, $\eta(s^{*}+2\nu)\ne 0$ and $\eta(1-s^{*}+2\nu)\ne 0$ because the values $s^{*}+2\nu$ and $1-s^{*}+2\nu$ both lie in the half-plane $\sigma >1$ and $\eta(s)$ has no zeros there. Therefore $\eta_{\kappa\nu}(s^{*})$ and $\eta_{\kappa\nu}(1-s^{*})$ are both nonzero for sufficiently large $\kappa$ ($\kappa >3/\pi$ being a lower bound). Eq. (\ref{limit1}) then follows as a direct consequence of Eqs. (\ref{fund1}) and (\ref{fund2}) and the absolute convergence of all the series involved. \end{proof}
\begin{lemma} Let $s_{\gamma}\in \mathbb{C}$, $s_{\gamma}\in \gamma$ be such that $\eta_{\kappa\nu}(1-s_{\gamma})\ne 0$ along a path $\gamma$ in the complex plane and let $s'$ be an endpoint of $\gamma$. Then, \begin{equation} \lim_{\kappa \to \infty}\lim_{s_{\gamma}\xrightarrow[\gamma]{} s'}\frac{\eta_{\kappa \nu}(s_{\gamma})}{\eta_{\kappa \nu}(1-s_{\gamma})}=\lim_{s_{\gamma}\xrightarrow[\gamma]{} s'}\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s_{\gamma})}{\eta_{\kappa \nu}(1-s_{\gamma})}=\lambda(s') \label{commulim} \end{equation} Furthermore, if $s'=s^{*}$ is a nontrivial zero of the Dirichlet eta function, $\forall \nu > 1/2$ \begin{equation} \frac{\eta(s^{*}+2\nu)}{\eta(1-s^{*}+2\nu)}=\lambda(s^{*}) \label{theotherhand} \end{equation} \end{lemma}
\begin{proof} From Eq. (\ref{funcembed}) we have, by dividing by $\eta_{\kappa\nu}(1-s_{\gamma})$ at any point $s_{\gamma}$ of $\gamma$ (since $\eta_{\kappa\nu}(1-s_{\gamma})\ne 0$) \begin{equation} \frac{\eta_{\kappa\nu}(s_{\gamma})}{\eta_{\kappa\nu}(1-s_{\gamma})}=\lambda(s_{\gamma}) +\sum_{n=1}^{\infty}b_{n}(\kappa) \frac{\lambda(s_{\gamma})\eta_{\kappa\nu}(1-s_{\gamma}+2\nu n)-\eta_{\kappa\nu}(s_{\gamma}+2\nu n)}{\eta_{\kappa\nu}(1-s_{\gamma})} \end{equation} where $b_{1}(\kappa)=O(\kappa^{-2})$. We now have, on one hand, \begin{eqnarray} &&\lim_{\kappa \to \infty}\lim_{s_{\gamma}\xrightarrow[\gamma]{} s'}\frac{\eta_{\kappa \nu}(s_{\gamma})}{\eta_{\kappa \nu}(1-s_{\gamma})}=\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s')}{\eta_{\kappa \nu}(1-s')} \label{side1} \\ &&=\lim_{\kappa \to \infty}\left[\lambda(s')+\sum_{n=1}^{\infty}b_{n}(\kappa) \frac{\lambda(s')\eta_{\kappa\nu}(1-s'+2\nu n)-\eta_{\kappa\nu}(s'+2\nu n)}{\eta_{\kappa\nu}(1-s')}\right] \nonumber \\ &&=\lim_{\kappa \to \infty}\left[\lambda(s')+\sum_{n=1}^{\infty}b_{n}(\kappa) \frac{\lambda(s')\eta(1-s'+2\nu n)-\eta(s'+2\nu n)}{\eta(1-s')}\right] \nonumber \\ &&=\lambda(s') \nonumber \end{eqnarray} and, on the other hand, \begin{eqnarray} \lim_{s_{\gamma}\xrightarrow[\gamma]{} s'}\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s_{\gamma})}{\eta_{\kappa \nu}(1-s_{\gamma})}&=&\lim_{s_{\gamma}\to s'}\frac{\eta(s_{\gamma})}{\eta(1-s_{\gamma})} \label{side2} \\ &=&\lim_{s_{\gamma}\to s'} \lambda(s_{\gamma})=\lambda(s') \nonumber \end{eqnarray} whence the result follows.
Let us now assume that $s'=s^{*}$ is a nontrivial zero of the Dirichlet eta function. Then, we have that $\lim_{\kappa\to \infty}\eta_{\kappa \nu}(1-s^{*})=\eta(1-s^{*})=0=\eta(s^{*})=\lim_{\kappa\to \infty}\eta_{\kappa \nu}(s^{*})$ and the function in Eq. (\ref{side1}) \begin{equation} \Phi(s'):=\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s')}{\eta_{\kappa \nu}(1-s')}=\frac{\eta(s')}{\eta(1-s')} \end{equation} is undefined at $s'=s^{*}$. However, $s'=s^{*}$ is a removable singularity and we can take $\Phi(s^{*})=\lambda(s^{*})$. To see this, note that $\Phi(s)=\lambda(s)$ for all $s$ in the critical strip $0<\sigma<1$ save at the nontrivial zeros $s^{*}$. But the function $\lambda(s)$ is holomorphic for all $s$ in the critical strip including the nontrivial zeros $s^{*}$ of $\eta$. Therefore, by Riemann's theorem on extendable singularities, $\Phi(s)$ is holomorphically extendable over $s^*$ and we can have, in consistency with Eq. (\ref{side2}) \begin{equation} \Phi(s^{*})=\lambda(s^{*}) \qquad \left(=\lim_{s\to s^{*}}\frac{\eta(s)}{\eta(1-s)} \right) \end{equation} This proves Eq. (\ref{commulim}). We then note that, on one hand \begin{equation} \Phi(s^{*})=\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s^{*})}{\eta_{\kappa \nu}(1-s^{*})}=\lambda(s^{*}) \label{the1} \end{equation} and, on the other, from Eq. (\ref{limit1}) \begin{equation} \Phi(s^{*})= \lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s^{*})}{\eta_{\kappa \nu}(1-s^{*})}=\frac{\eta(s^{*}+2\nu)}{\eta(1-s^{*}+2\nu)} \label{the2} \end{equation} Both expressions must be equal at $s^{*}$ because: 1) Eq. (\ref{the1}) is a consequence of $\Phi(s)$ being equal to the holomorphic $\lambda(s)$ in the punctured critical strip (save, exactly at the zeros $s^{*}$) and, therefore, holomorphically extendable over $s^{*}$ and 2) Eq. (\ref{the2}) is a consequence of the asymptotic behavior of the embedding close to a nontrivial zero of the Dirichlet eta function. Thus, Eq. (\ref{theotherhand}) follows. \end{proof}
\noindent \emph{Alternative proof of Eq. (\ref{theotherhand})}. An equivalent way of obtaining Eq. (\ref{theotherhand}) is, directly, from Eq. (\ref{side1}), applying it to $s'=s^{*}$. We have, \begin{eqnarray} &&\lim_{\kappa \to \infty}\lim_{s_{\gamma}\xrightarrow[\gamma]{} s^{*}}\frac{\eta_{\kappa \nu}(s_{\gamma})}{\eta_{\kappa \nu}(1-s_{\gamma})}=\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s^{*})}{\eta_{\kappa \nu}(1-s^{*})} \\ &&=\lim_{\kappa \to \infty}\left[\lambda(s^{*})+\sum_{n=1}^{\infty}b_{n}(\kappa) \frac{\lambda(s^{*})\eta_{\kappa\nu}(1-s^{*}+2\nu n)-\eta_{\kappa\nu}(s^{*}+2\nu n)}{\eta_{\kappa\nu}(1-s^{*})}\right] \nonumber \\ &&=\lambda(s^{*})+\lim_{\kappa\to \infty}\frac{\lambda(s^{*})\eta_{\kappa\nu}(1-s^{*}+2\nu)-\eta_{\kappa \nu}(s^{*}+2\nu)}{\kappa^{2}\eta_{\kappa\nu}(1-s^{*})} \nonumber \\ &&=\lambda(s^{*})-\frac{\lambda(s^{*})\eta(1-s^{*}+2\nu)-\eta(s^{*}+2\nu)}{\eta(1-s^{*}+2\nu)} \label{onealt} \end{eqnarray} and since $\eta(1-s^{*}+2\nu) \ne 0$ $\forall \nu \in (1/2,\infty)$ and we have for any $\varepsilon \in \mathbb{C}$ in a disk of sufficiently small radius \begin{equation} \Phi(s^{*}+\varepsilon)=\lim_{\kappa \to \infty}\frac{\eta_{\kappa \nu}(s^{*}+\varepsilon)}{\eta_{\kappa \nu}(1-s^{*}+\varepsilon)}=\lambda(s^{*}+\varepsilon) \label{twoalt} \end{equation} by taking the limit $\varepsilon \to 0$ and observing that $\Phi(s)$ is holomorphically extendable to $s^{*}$ where it has then the value $\lambda(s^{*})$ we obtain, from Eq. (\ref{onealt}) \begin{equation} \lambda(s^{*})\eta(1-s^{*}+2\nu)-\eta(s^{*}+2\nu)=0 \end{equation} which is Eq. (\ref{theotherhand}). $\square$
\begin{theor} \emph{\textbf{(Riemann hypothesis.)}} \label{RHtheor} All nontrivial zeros $s^{*}=\sigma^{*}+it^{*}$ of the Dirichlet eta function $\eta(s)$ have real part $\sigma^{*}=1/2$. \end{theor}
\begin{proof} Let $\sigma^{*}=\frac{1}{2}+\epsilon$ be the real part of a nontrivial zero $s^{*}=\sigma^{*}+it^{*}$ of $\eta$ in the critical strip. From Eq. (\ref{theotherhand}), \begin{equation}
\left|\frac{\eta\left(\frac{1}{2}+2\nu+it^{*}+\epsilon\right)}{\eta\left(\frac{1}{2}+2\nu-it^{*}-\epsilon\right)}\right|=\left|\lambda\left(\frac{1}{2}+it^{*}+\epsilon\right)\right|, \label{evenga} \end{equation}
and we note that the right hand side of this expression is shift-invariant (it does not depend on $\nu$) but the left hand side is not: the horizontal shift parameter $\nu$ can be arbitrarily varied in the interval $(0,\infty)$ and, in particular, it can be selected so that the point $s^{*}+2\nu$ lies anywhere on the half-plane $\sigma\ge 1$ at height $t^{*}$. The modulus of $\eta$ varies on horizontal lines \cite{Matiyasevich0}. \emph{The only possibility for equation Eq. (\ref{evenga}) to have solution for a nontrivial zero $s^{*}$ forces $\epsilon=0$}. To see this, put $x=\frac{1}{2}+2\nu-\epsilon >>1$ in Eq. (\ref{evenga}). Now, since $\left|\eta\left(x-it^{*}\right)\right|=\left|\overline{\eta\left(x+it^{*}\right)}\right|=\left|\eta\left(x+it^{*}\right)\right|$, we have \begin{eqnarray}
\left|\frac{\eta\left(\frac{1}{2}+2\nu+it^{*}+\epsilon\right)}{\eta\left(\frac{1}{2}+2\nu-it^{*}-\epsilon\right)}\right|&=&\left|\frac{\eta\left(x+it^{*}+2\epsilon\right)}{\eta\left(x+it^{*}\right)}\right|. \end{eqnarray} Since $x$ can be increased arbitrarily by increasing $\nu$, we can take $x$ so large that, asymptotically \begin{equation}
\eta\left(x+it^{*}+2\epsilon\right)\sim \eta\left(x+it^{*}\right)+2\epsilon \left.\frac{\partial \eta}{\partial x}\right|_{x+it^{*}}. \end{equation} In this asymptotic regime, we can truncate Eq. (\ref{eta1}) to the first two terms and its derivative becomes \begin{equation}
\left.\frac{\partial \eta}{\partial x}\right|_{x+it^{*}} \sim \frac{\ln 2}{2^{x+it^{*}}}. \end{equation} Furthermore, $\eta\left(x+it^{*}\right)\sim 1-2^{-x-it^{*}}$ and thus \begin{equation}
\left|\frac{\eta\left(x+it^{*}+2\epsilon\right)}{\eta\left(x+it^{*}\right)}\right| \sim \left | 1+2\epsilon \frac{\ln 2}{2^{x+it^{*}}-1}\right|. \end{equation} Therefore, for $\nu$ large, \begin{equation}
\left|\frac{\eta\left(\frac{1}{2}+2\nu+it^{*}+\epsilon\right)}{\eta\left(\frac{1}{2}+2\nu-it^{*}-\epsilon\right)}\right| \sim \left | 1+2\epsilon \frac{\ln 2}{2^{\frac{1}{2}+2\nu-\epsilon+it^{*}}-1}\right|, \end{equation} and the l.h.s. of Eq. (\ref{evenga}) depends explicitly on the free parameter $\nu$. Thus, the $\nu$-independent r.h.s. of Eq. (\ref{evenga}) would have to take an infinite number of different values for its modulus, which is absurd. The only possibility of cancelling the $\nu$ dependence, forced by the consistency of the equation, is to have $\epsilon=0$. In this way, we obtain, \begin{equation}
\left|\frac{\eta\left(\frac{1}{2}+2\nu+it^{*}\right)}{\eta\left(\frac{1}{2}+2\nu-it^{*}\right)}\right|=1=\left|\lambda\left(\frac{1}{2}+it^{*}\right)\right|, \end{equation} an equation that is known to have infinitely many solutions for $t^{*}$. Therefore, $\epsilon=0$, $s^{*}=\frac{1}{2}+it^{*}$ and the result follows. \end{proof}
\section{Conclusions}
In this article a complex entire function called a holomorphic nonlinear embedding $\eta_{\kappa \nu}$ has been constructed and some of its properties (mainly asymptotic ones) have been investigated. It has been shown that the function $\eta_{\kappa \nu}$ can be expressed as a series expansion in terms of horizontal shifts of the Dirichlet eta function. The holomorphic character of $\eta_{\kappa \nu}$ has been established by proving the global absolute and uniform convergence on compact sets of its defining double series. The coefficients of the expansion and their sum have been explicitly calculated. It has also been shown that this expansion can be inverted to yield the eta function as a linear expansion of horizontal shifts of $\eta_{\kappa \nu}$. This is the central result of this article, since it allows the Dirichlet eta function to be understood as a linear superposition of different layers governed by shifts of $\eta_{\kappa \nu}$ and weighted by powers of $1/\kappa$. This result shows that, although one can envisage, in principle, infinitely many ways of smoothly embedding the Dirichlet eta function in a more general structure, the one presented here ($\eta_{\kappa \nu}$) is not gratuitous because it is itself embedded within the structure of the eta function, revealing some of its secrets thanks to its scale and horizontal shift parameters $\kappa$ and $\nu$. In particular, the truth of the Riemann hypothesis emerges naturally as a consequence of the functional relationship of the Dirichlet eta function and its uniform attainment everywhere by a hierarchy of functional equations of the holomorphic nonlinear embedding in the limit $\kappa \to \infty$.
Operators yielding vertical shifts of the Riemann zeta function $\zeta(s)$ arise naturally in approaches to the Riemann zeros using ideas from supersymmetry \cite{DasKalauni}. These vertical shifts can, indeed, be expressed more compactly in terms of the Dirichlet eta function $\eta$ (see e.g. Eq.(18) in \cite{DasKalauni}). Whether there exists any relationship of these vertical shifts induced by lowering and raising operators in \cite{DasKalauni} with the shifts obtained here as a result of an explicit series construction is an interesting open question. We point out that $\nu$ in this article can be made a complex number with positive real part and arbitrary imaginary part and all main results of this article apply without any modification: Eqs. (\ref{Gdir}) and (\ref{Ginv}) are indeed valid, for $\nu=\nu_{r}+i\nu_{i}$, with $\nu_{r}>0$ and any $\nu_{i} \in (-\infty, \infty)$ and this does not affect the proof of the Riemann hypothesis here presented in any way. The condition $\nu_{r}>0$ is, however, necessary, for Eqs. (\ref{Gdir}) and (\ref{Ginv}) to be valid.
The methods used here may be adapted to other Dirichlet series for which the Riemann hypothesis is conjectured to hold \cite{Karatsuba}. All steps may be retraced for these latter series if: 1) they can be closely related to entire functions with series expansions that are absolutely convergent in the whole complex plane; 2) the latter series depend on the complex variable $s$ only through summands of the form $1/m^{s}$; 3) there exists a functional relation like Eq. (\ref{func1}).
We believe that the methods described in this paper might be useful to gain insight into recent intriguing experimental phenomena connecting the coefficients of truncated Dirichlet series to the Eratosthenes sieve \cite{Matiyasevich}.
\end{document}
\begin{definition}[Definition:Ellipse/Directrix]
Let $K$ be an ellipse specified in terms of:
:a given straight line $D$
:a given point $F$
:a given constant $\epsilon$ such that $0 < \epsilon < 1$
where $K$ is the locus of points $P$ such that the distance $p$ from $P$ to $D$ and the distance $q$ from $P$ to $F$ are related by the condition:
:$q = \epsilon \, p$
The line $D$ is known as the '''directrix''' of the ellipse.
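As an illustration, consider the ellipse $K$ embedded in a Cartesian plane in reduced form:
:$\dfrac {x^2} {a^2} + \dfrac {y^2} {b^2} = 1$
where $a > b > 0$, with eccentricity $\epsilon = \sqrt {1 - \dfrac {b^2} {a^2} }$.
Then the focus $F = (a \epsilon, 0)$ can be paired with the '''directrix''' $D$ given by $x = \dfrac a \epsilon$:
for a point $P = (x, y)$ on $K$, the distances are $q = a - \epsilon x$ and $p = \dfrac a \epsilon - x$, so that indeed $q = \epsilon \, p$.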
Category:Definitions/Ellipses
\end{definition}
Products similar to Password Inspiration.
3886 products found. Showing 1-14 in 10 groups sorted by relevance.
1. Password Inspiration
Every day you enter a password; sometimes you need a new password or have to change an existing one. Password Inspiration uses lists of words, including inspirational and motivational lists, to come up with unique passwords that are secure, yet easy to remember.
Details Buy now ($9.95) Find similar
Platform: Windows 95, Windows 98, Windows ME, Windows NT, Windows 2000, Windows XP, Windows Server 2003
Vendor: tinyEdit
2. Exe Password 2004 (Single Licence)
This program allows you to protect any EXE-file with its own password. In doing so, this password is stored directly in the EXE-file.
Vendor: Salfeld Computer GmbH
Related link: http://www.salfeld.com
3. Exe Password 2004 Multi Licence (20 users)
4. Exe Passwort (Einzelplatz)
5. Password Keeper
Surely you know the problem of having 1000's of passwords. Our Password Keeper remembers all your passwords and more: a highly encrypted secure database, import and export functions, a password generator for secure passwords, skins, drag and drop for your passwords, multi-language support, and lots more...
Vendor: AG Software
Related link: http://www.ag-software.de
6. Akala Password Revealer
Vendor: Zero2000 Software
Related link: http://www.zero2000software.com
7. Dynamical Passwords
With Dynamical Passwords you may generate up to 50 passwords (with lengths up to 128 symbols) on any date (even each day). There is no need to remember or save all these generated passwords as long as you can remember your personal password (key) and the date.
Details Download trial Buy now ($9.95) Find similar
Vendor: AdvMathAppl
Related link: http://www.advmathappl.biz
8. Password Depot
Password Depot is a powerful and yet very easy-to-use tool with which you can manage all your passwords. Password Depot is an easy-to-use and yet powerful password manager. Create practically uncrackable passwords with the integrated password generator: instead of passwords like sweetheart or John, which can both be cracked in a few minutes, you now use passwords like g\/:1bmV5T$x_sb}8T4@CN?\A:y:Cwe-k)mUpHiJu:0md7p@
Language: English, German, French
Vendor: AceBIT GmbH
Related link: http://www.password-depot.com
9. PDF-Security
Protect PDF files against unauthorized access and encrypt them! If they are protected by a password, you have to know the password in order to decrypt the file. You can decide if the user has to enter a password in order to view the file.
Details Download trial (Full version: 383.7 KB) Buy now ($19.00) Find similar
Platform: Windows 95, Windows 2000, Windows XP, Windows 98, Windows ME
Vendor: CAD-KAS GbR
Related link: http://www.cadkas.de
10. Password 2000
Password 2000 can help you organize and safely store your passwords. Password 2000 is designed to do just that. Having trouble thinking of a password? This program can generate passwords for you. Besides passwords you can store URLs, e-mail addresses and other notes.
Details Download trial (Demo: 842 KB) Buy now ($19.95) Find similar
Vendor: Mightsoft
Related link: http://www.mightsoft.com/downloads/p2k280.exe
11. Zip Password
Password recovery tool for PKZIP/WinZip archives.
Vendor: LastBit Software
Related link: http://lastbit.com
12. Password Director
13. PwlTool
14. EldoS KeyLord PocketPC Edition
Are you tired of remembering all your passwords, IDs, and account information, and the passwords used for data encryption? Do you use sticky notes with passwords on your monitor? EldoS KeyLord holds all your information concerning accounts, passwords and logins in secure encrypted files.
Platform: Win CE / Pocket PC
Vendor: EldoS Corporation
Related link: http://www.eldos.org
\begin{document}
\begin{abstract} We consider a 1D linear Schrödinger equation, on a bounded interval, with Dirichlet boundary conditions and bilinear control. We study its controllability around the ground state when the linearized system is not controllable. More precisely, we study to what extent the nonlinear terms of the expansion can recover the directions lost at the first order.
In the works \cite{BM14, B21bis}, for any positive integer $n$, assumptions have been formulated under which the quadratic term induces a drift in the nonlinear dynamics, quantified by the $H^{-n}$-norm of the control. This drift is an obstruction to the small-time local controllability (STLC) under a smallness assumption on the controls in regular spaces.
In this paper, we prove that for controls small in less regular spaces, the cubic term can recover the controllability lost at the linear level, despite the quadratic drift.
The proof is inspired by Sussmann's method to prove the sufficiency of the $\mathcal{S}(\theta)$ condition for STLC of ODEs. However, it uses a different global strategy, relying on a new concept of tangent vector better adapted to the infinite-dimensional setting of PDEs.
Given a target, we first realize the expected motion along the lost direction by using control variations for which the cubic term dominates the quadratic one. Then, we correct the other components exactly, by using the STLC in projection result of \cite{B21}, with simultaneous estimates of weak norms of the control. These estimates ensure that the new error along the lost direction is negligible, and we conclude with the Brouwer fixed point theorem. \end{abstract}
\maketitle
\section{Introduction}
\subsection{Description of the control system} Let $T>0$. In this paper, we consider the 1D linear Schrödinger equation given by, \begin{equation} \label{Schrodinger} \left\{
\begin{array}{ll}
i \partial_t \psi(t,x) = - \partial^2_x \psi(t,x) -u(t)\mu(x)\psi(t,x), \quad &(t,x) \in (0,T) \times (0,1),\\
\psi(t,0) = \psi(t,1)=0, \quad &t \in (0,T).
\end{array} \right. \end{equation}
This equation is used in quantum physics to describe a quantum particle stuck in an infinite potential well and subjected to a uniform electric field whose amplitude is given by $u(t)$. The function $\mu : (0,1) \rightarrow \mathbb{R}$ models the dipolar moment of the particle. This equation is a bilinear control system in which the state is the wave function $\psi$, satisfying $\| \psi(t) \|_{L^2(0,1)} = 1$ for all time, and $u : (0,T) \rightarrow \mathbb{R}$ is a scalar control.
\subsection{Functional settings} Unless otherwise specified, in space, we will work with complex valued functions. The Lebesgue space $L^2(0,1)$ is equipped with the hermitian scalar product given by $$\langle f,g\rangle := \int_0^1 f(x) \overline{g(x)}dx, \quad \forall f, g \in L^2(0,1).$$ Let $\mathcal{S}$ be the unit-sphere of $L^2(0,1)$.
The operator $A$ is defined by \begin{equation*} \dom(A):=H^2(0,1) \cap H^1_0(0,1) \quad \text{ and } \quad A\varphi:=-\frac{d^2 \varphi}{dx^2}. \end{equation*} Its eigenvalues and eigenvectors are respectively given by \begin{equation*} \forall j \in \mathbb{N}^*, \quad \lambda_j:= (j \pi)^2 \quad \text{ and } \quad \varphi_j:=\sqrt{2} \sin(j \pi \cdot). \end{equation*} The family of eigenvectors $(\varphi_j)_{j \in \mathbb{N}^*}$ is an orthonormal basis of $L^2(0,1)$. We denote by, $$\forall j \in \mathbb{N}^*, \quad \psi_j(t,x):= \varphi_j(x) e^{-i \lambda_j t}, \quad \forall (t,x) \in \mathbb{R} \times [0,1],$$ the solutions of the Schrödinger equation \eqref{Schrodinger} with $u \equiv 0$ and initial data $\varphi_j$ at time $t=0$. When $j=1$, $\psi_1$ is called the ground state. We also introduce the normed spaces linked to the operator $A$, given by, for all $s \geqslant 0$, \begin{multline*} H^s_{(0)}(0,1):=\dom(A^{\frac{s}{2}}) \\ \text{ endowed with the norm } \quad
\left\| \varphi
\right\|_{H^s_{(0)}(0,1)} :=
\| ( \langle \varphi, \varphi_j \rangle)\|_{h^s(\mathbb{N}^*)} := \left( \sum \limits_{j=1}^{+\infty}
\left| j^s \langle \varphi, \varphi_j\rangle
\right|^2 \right)^{\frac{1}{2}}. \end{multline*}
Let $T>0$. For $u \in L^1(0,T)$, the family $(u_n)_{n \in \mathbb{N}}$ of the iterated primitives of $u$ is defined by induction as, \begin{equation*} u_0:=u \quad \text{ and } \quad \forall n \in \mathbb{N}, \ u_{n+1}(t) := \int_0^t u_n(\tau) d\tau, \quad t \in [0,T]. \end{equation*}
Sometimes, to unify the notation for the primitives and derivatives of $u$, we will write $u^{(n)}$ when $n$ is negative to denote $u_{|n|}$, the $|n|$-th primitive of $u$.
\noindent We will also consider, for any integer $k \in \mathbb{N}$, $H^k \left( (0,T), \mathbb{R} \right)$, the usual integer-order real Sobolev space, equipped with the usual $H^k(0,T)$-norm
and $H^k_0(0,T)$ the closure of $C_c^{\infty}(0,T)$, the set of smooth functions with compact support inside $(0,T)$, for the topology of $\| \cdot \|_{H^k(0,T)}$. For any integer $k \in \mathbb{N}^*$, a negative Sobolev norm is defined by \begin{equation} \label{def_norm_faible}
\| u \|_{H^{-k}(0,T)} :=|u_1(T)|+\| u_k \|_{L^2(0,T)}, \quad u \in L^2(0,T). \end{equation} These norms do not coincide with the usual $H^{-k}$-norms, but this definition is chosen because these quantities arise naturally in both the nonlinear and linearized dynamics.
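As an elementary illustration of this definition (not used in the sequel), for the constant control $u \equiv 1$ on $[0,T]$, one has $u_n(t)=\frac{t^n}{n!}$ for every $n \in \mathbb{N}$, so that
\begin{equation*}
\| u \|_{H^{-k}(0,T)} = T + \frac{T^{k+\frac{1}{2}}}{k! \, \sqrt{2k+1}}.
\end{equation*}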
\subsection{Assumptions on the dipolar moment $\mu$} Let us make precise the assumptions on the dipolar moment $\mu$ we shall consider in the following.
\noindent (H$_{\reg}$) The function $\mu$ is in $H^{11}( (0,1), \mathbb{R})$ with the following boundary conditions \begin{equation} \label{mu_bc} \mu'(0) = \mu'(1) = \mu^{(3)}(0) = \mu^{(3)}(1) = 0 . \end{equation}
\noindent (H$_{\lin}$) There exists an integer $K \in \mathbb{N}^* - \{1 \}$ such that \begin{equation} \label{lin_nul}
\langle \mu \varphi_1, \varphi_K \rangle =0, \end{equation} \begin{equation} \label{H_lin_2}
\text{and there exists } c>0 \text{ such that for all } j \in \mathbb{N}^*- \{K\}, \quad
\left| \langle \mu \varphi_1, \varphi_j \rangle
\right| \geqslant \frac{c}{j^7}. \end{equation}
Define, for $p=1, 2$ and $3$, the following quadratic (with respect to $\mu$) coefficients, \begin{equation} \label{def_Apk} A^p_K := (-1)^{p-1} \sum \limits_{j=1}^{+\infty} \left(\lambda_j - \frac{\lambda_1+\lambda_K}{2} \right) ( \lambda_K-\lambda_j)^{p-1} (\lambda_j -\lambda_1)^{p-1} \langle \mu \varphi_1, \varphi_j \rangle \langle \mu \varphi_K, \varphi_j \rangle. \end{equation}
\noindent (H$_{\Quad}$)
The first two quadratic coefficients vanish and the third one does not vanish \begin{align}
\label{quad_nul_1}
A^1_K
&= 0 , \\ \label{quad_nul_2}
A^2_K
&= 0 , \\ \label{quad_non_nul} A^3_K
&\neq 0 . \end{align}
\noindent (H$_{\Cub}$) The following cubic (with respect to $\mu$) coefficient does not vanish \begin{equation} \label{cub_non_nul} C_K := \sum \limits_{j=1}^{+\infty} \left( \lambda_1- \lambda_j \right) \langle \mu \varphi_1 , \varphi_j \rangle \langle \mu'^2 \varphi_K , \varphi_j \rangle - \sum \limits_{j=1}^{+\infty} \left( \lambda_j- \lambda_K \right) \langle \mu \varphi_j , \varphi_K \rangle \langle \mu'^2 \varphi_1 , \varphi_j \rangle \neq 0. \end{equation}
\begin{rem} \label{decay_coeff} Notice that, under (H$_{\reg}$), integrations by parts and the Riemann--Lebesgue lemma lead to
\begin{equation} \label{coeff_IPP} \forall q \in \mathbb{N}^*, \quad \langle \mu \varphi_q , \varphi_j \rangle = \frac{ 12 q } { \pi^{6} j^{7} } \left( (-1)^{j+q} \mu^{(5)}(1) -\mu^{(5)}(0) \right) + \underset{j \rightarrow +\infty}{o} \left( \frac{1}{j^{7}} \right). \end{equation} Thus, all the series considered in \eqref{def_Apk} and \eqref{cub_non_nul} converge absolutely. \end{rem}
\begin{rem} \label{rem:lie_brackets} For smooth vector fields $X, Y$ in $C^{\infty}(\mathbb{R}^d, \mathbb{R}^d)$, the Lie bracket $[X,Y]$ is defined as the following smooth vector field: $ [ X, Y] (x) := X'(x) Y(x) - Y'(x) X(x). $ Notice that the sign convention chosen is not usual. We also define by induction \begin{equation*} \ad_{X}^0(Y) := Y \quad \text{ and } \quad \forall k \in \mathbb{N}, \quad \ad_{X}^{k+1}(Y) := [X, \ad_{X}^{k}(Y)]. \end{equation*} At least formally, the assumption (H$_{\Quad}$) can be written in terms of Lie brackets as \begin{equation*}
\forall p=1, 2, \ \langle [\ad_{A}^{p-1}(\mu), \ad_{A}^{p}(\mu) ] \varphi_1, \varphi_K\rangle =0 \ \text{ and } \ \langle [\ad_{A}^{2}(\mu), \ad_{A}^{3}(\mu) ] \varphi_1, \varphi_K\rangle \neq 0 . \end{equation*} Notice that the last Lie bracket is exactly the one along which the quadratic order adds a drift, denying $W^{3, \infty}$-STLC (see \cref{def_STLC}) for finite-dimensional systems $\dot{x}=f_0(x)+uf_1(x)$ in \cite[Theorem 3]{BM18}. Similarly, (H$_{\Cub}$) can be written as \begin{equation*} \langle [ \ad_A(\mu), [ \ad_A(\mu), \mu] ] \varphi_1, \varphi_K \rangle \neq 0. \end{equation*} All these computations can be made rigorous when $\mu$ satisfies (H$_{\reg}$) for instance. In that case, the iterated Lie brackets are well-defined (for all $k \in \mathbb{N}^*$, to compute $\ad^k_A(\mu)\varphi_1$, one needs to check that $\ad^{k-1}_A(\mu)\varphi_1$ is in $\dom A$) and denote commutators of operators. \end{rem}
At a heuristic level, assumptions (H$_{\reg}$), (H$_{\lin}$), (H$_{\Quad}$) and (H$_{\Cub}$) entail that, in the asymptotic of small controls (in a norm to be specified), the leading terms of the solution $\psi$ of the Schrödinger equation \eqref{Schrodinger} along the lost direction are given by \begin{equation} \label{eq:heuristic} \langle \psi(T), \varphi_K e^{-i \lambda_1 T} \rangle \approx - iA^3_K \int_0^T u_3(t)^2 dt + iC_K \int_0^T u_1(t)^2 u_2(t) dt. \end{equation} The existence of a function $\mu$ satisfying (H$_{\reg}$), (H$_{\lin}$), (H$_{\Quad}$) and (H$_{\Cub}$) is proved in \cref{existence_mu}.
\subsection{Main result} First, we state the notion of small-time local controllability (STLC) used in this paper, stressing the smallness assumption imposed on the control, as it plays a key role in the validity of controllability results. \begin{defi} \label{def_STLC}
Let $(E_T, \|\cdot\|_{E_T})$ be a family of normed vector spaces of real functions defined on $[0,T]$ for $T> 0$. The system \eqref{Schrodinger} is said to be E-STLC around the ground state if there exists $s \in \mathbb{N}$ such that for every $T>0$, for every $\eta>0$, there exists $\delta> 0$ such that for every $\psi_f \in \mathcal{S} \cap H^s_{(0)}(0,1)$ with $\|\psi_f - \psi_1(T)\|_{H^s_{(0)}(0,1)} < \delta$, there exists $u \in L^2((0,T),\mathbb{R}) \cap E_T$ with $\|u\|_{E_T} < \eta$ such that the solution $\psi$ of \eqref{Schrodinger} associated to the initial condition $\varphi_1$ at time $t=0$ and the control $u$ satisfies $\psi(T)=\psi_f$. \end{defi}
When the linearized system around an equilibrium is controllable, using a fixed-point theorem, one can hope to prove STLC for the nonlinear system as explained in \cite[Chap.\ 3.1]{bookC07} in finite dimension.
When that is not the case, one needs to go further in the expansion, in the spirit of \cite[Chap.\ 8]{bookC07}.
For the Schrödinger equation, a few STLC results are already known.
\ \paragraph{\textbf{Linear behavior.}} Since \cite{BL10}, it is known that when the coefficients $(\langle \mu \varphi_1, \varphi_j \rangle)_{j \in \mathbb{N}^*}$ satisfy \begin{equation} \label{hyp_mu_lin} \text{there exists a constant } c>0 \text{ such that } \quad \forall j \in \mathbb{N}^*, \quad
\left| \langle \mu \varphi_1, \varphi_j \rangle
\right| \geqslant \frac{c}{j^3}, \end{equation} then the Schrödinger equation is $H^k_0$-STLC around the ground state with targets in $H^{2k+3}_{(0)}$. This result has been extended in \cite{B21} where the author proved that when \eqref{hyp_mu_lin} holds only on a subset of $\mathbb{N}^*$, STLC holds in projection with a unique control map for a finite range of regularity on the control. Moreover, it also has been proved in \cite{BM14} that, under the weaker assumption, \begin{equation*} \mu'(1) \pm \mu'(0) \neq 0, \end{equation*} a finite number of coefficients $\langle \mu \varphi_1, \varphi_K \rangle$ vanish, but the local controllability with controls in $L^2$ holds in large time.
\ \paragraph{\textbf{Quadratic behavior.}} In \cite{B21bis}, the author proved that when, for some $n \geq 2$ (resp.\ $n=1$) \begin{equation*} \langle \mu \varphi_1, \varphi_K \rangle =0, \quad A^1_K=\cdots = A^{n-1}_K=0 \quad \text{ and } \quad A^n_K \neq 0, \end{equation*} (with enough regularity on $\mu$ so that the associated series converge), the Schrödinger equation is not $H^{2n-3}$-STLC (resp.\ $W^{-1,\infty}$-STLC), due to a drift quantified by the $H^{-n}$-norm of the control. This follows the work of \cite{BM14} where the authors already denied $L^{\infty}$-STLC in the case $n=1$. Let us stress that such a result entails that, under (H$_{\lin}$), \eqref{lin_nul} and (H$_{\Quad}$), the Schrödinger equation \eqref{Schrodinger} is not $H^3$-STLC.
The goal of this paper is to prove the following statement, by taking advantage of the cubic term of the expansion. \begin{thm} \label{the_theorem} Let $\mu$ satisfy (H$_{\reg}$), (H$_{\lin}$), (H$_{\Quad}$) and (H$_{\Cub}$). Then, the Schrödinger equation \eqref{Schrodinger} is $H^2_0$-STLC around the ground state with targets in $H^{11}_{(0)}(0,1)$.
\end{thm}
Using `higher-order control variations' to prove STLC is classical for ODEs: it has been used for example to prove the sufficiency of Sussmann's $\S(\theta)$ condition \cite{S87} (see also \cite[Theorem 3.29]{bookC07}) or by Kawski in \cite[Theorem 3.7]{K90}.
However, to the best of our knowledge, this is the first time that this strategy has been used in infinite dimension.
The proof of \cref{the_theorem} boils down to the following ideas. \begin{itemize} \item First, to recover STLC, it is enough to have STLC in projection on the reachable space of the linearized system and to move into the directions lost at the linear level using `higher-order control variations'.
\item To move into the directions lost at the linear level, the strategy is the following. \begin{itemize} \item[$\triangleright$] First, one computes a well-quantified expansion of the solution along the lost directions to identify the leading terms. \item[$\triangleright$] Then, one proves that the cubic term can absorb the quadratic term along the lost directions, using oscillating controls that are small in a suitable asymptotic regime. \item[$\triangleright$] Finally, one corrects the linear components using the STLC result in projection on the reachable space of the linearized system. Sharp estimates on the $H^{-k}$-norms of the control (see \eqref{def_norm_faible}) are needed to prove that such a correction does not destroy the work done previously along the lost directions. Thus, the work done in \cite{B21} is a key tool for this paper. \end{itemize}
\end{itemize}
The paper is organized as follows.
First, a systematic approach to recover STLC when the linearized system misses a finite number of directions is described in \cref{sec:black_box} and presented on a few toy-models in finite dimension in \cref{sec:toy_models}.
Before applying this method to the Schrödinger equation, in \cref{sec:WP_Schro}, we recall its well-posedness and the controllability result in projection of \cite{B21}.
Then, the power series expansion of the Schrödinger equation is computed in \cref{sec:expansion}.
Finally, \cref{sec:STLC_result} is dedicated to the proof of \cref{the_theorem}.
\subsection{State of the art}
\ \paragraph{\textbf{Controllability results on the Schrödinger equation.}}
\subparagraph{\textit{Local exact controllability results.}} Bilinear control systems were long considered not controllable because of a negative result \cite{BMS82} by Ball, Marsden and Slemrod, later adapted to the Schrödinger equation by Turinici in \cite{T00}. The result in \cite{BMS82} was later completed by Boussaïd, Caponigro and Chambrion in \cite{BCC20}.
However, these statements don't give obstructions for the Schrödinger equation to be controllable in different functional spaces. Later, exact local controllability results for 1D models have been proved by Beauchard in \cite{B05, B08}, later improved by Beauchard and Laurent in \cite{BL10} and generalized later in \cite{B21}. In \cite{BL10}, the authors also proved the local controllability of a nonlinear Schrödinger equation with Neumann boundary conditions. The case of Dirichlet boundary conditions has been treated later by Duca and Nersesyan in \cite{DN22}.
Morancey and Nersesyan also proved the controllability of one Schrödinger equation with a polarizability term \cite{MN14} and of a finite number of equations with one control \cite{M14, MN15}. In dimension $N \leq 3$, Puel \cite{P16} also proved the local exact controllability of a Schrödinger equation, in a bounded regular domain in a neighborhood of an eigenfunction corresponding to a simple eigenvalue, for controls $u=u(t,x)$. Using the link between quantum and classical dynamics, in \cite{BBS21}, the authors also gave some necessary and sufficient conditions of the local controllability of the Schrödinger equation.
\subparagraph{\textit{Global results.}}
Using Galerkin approximations, global approximate controllability results have been proved by Boscain, Boussaïd, Caponigro, Chambrion, Mason and Sigalotti in \cite{BCCS12, BCS14, BCC13, BCC20, CMSB09}. The genericity of these sufficient conditions is stated in \cite{CS16}. This strategy has also been used to prove exact controllability in projection on the first eigenstates in \cite{CS18}. Adiabatic arguments \cite{BCMS12, BGRS15, DJT20, DJT22} or Lyapunov techniques \cite{M09, N10} can also be used. Also, in \cite{DN21}, the authors proved the approximate controllability of a nonlinear Schrödinger equation with bilinear controls.
Nersesyan and Nersisyan also proved the global exact controllability in infinite time of one Schrödinger equation in one \cite{NN12} and any dimension \cite{NN12bis}. Later, Duca provided explicit times such that the global exact controllability holds in \cite{D19}. Global exact controllability in projection of infinite bilinear Schrödinger equations has also been proved in \cite{D20}.
\ \paragraph{\textbf{Local controllability result by the power series expansion.}} When the linearized system is not controllable, the strategy of performing a power series expansion of the solution, presented in \cite[Chap.\ 8]{bookC07} for finite-dimensional control systems, can be used to prove both negative and positive controllability results.
\subparagraph{\textit{Negative results.}}
In \cite{BM18}, the authors proved that, in finite dimension, for scalar-input differential systems, when the linear test fails, the second-order term adds a drift quantified by the $H^{-n}$-norm of the control, along an explicit Lie bracket, denying $W^{2n-3, \infty}$-STLC for the nonlinear system.
Such a phenomenon was already observed in infinite dimension, for a Schrödinger equation, by Coron in \cite{C06}, by Beauchard and Morancey in \cite{BM14} and later in \cite{B21bis}. In these works, using the second-order term, and more precisely proving a coercivity inequality involving an integer negative Sobolev norm of the control, the authors gave impossible motions in small time.
For a Burgers equation, STLC is also denied in \cite{M15} by proving a coercivity inequality, this time involving a fractional Sobolev norm of the control.
In \cite{BM20}, obstructions caused by both quadratic integer and fractional drifts are proven on a scalar-input parabolic equation.
A similar result has also been proved on a KdV system, for boundary controls in \cite{CKN20} by Coron, Koenig and Nguyen. The authors also showed in \cite{CNK22} that the STLC of a water tank modeled by 1D Saint-Venant equations doesn't hold when the time is not large enough, proving a coercivity property for the quadratic term of the system.
\subparagraph{\textit{Positive results.}} The power series expansion method was first used in infinite dimension to recover STLC in \cite{CC04} for a KdV control system. Beforehand, in \cite{R97}, Rosier studied the controllability of the KdV equation posed on a finite interval $(0,L)$ with Dirichlet boundary conditions and the control acting on the Neumann data at the right end-point of the interval. The author proved that when $L$ belongs to a set of critical values, the linearized system around the origin is not controllable, due to the existence of a finite-dimensional subspace of unreachable states.
For some critical values such that the space of unreachable states is of dimension 1, Coron and Crépeau in \cite{CC04} recovered local controllability in small time using a power series expansion of the solution of order 3 for which there is no quadratic term.
In \cite{C07}, the same approach is used by Cerpa to treat another critical length. However, in this case, the unreachable set of directions at the linear level is of dimension 2 and a second order expansion is sufficient to recover local controllability, but the result holds only in large time.
Later, Cerpa and Crépeau proved in \cite{CC09} the local controllability in large time for any critical lengths with the same strategy.
Such a method has already been used for the Schrödinger equation. In \cite{BC06} and \cite{B08}, the controllability of a quantum particle in a 1D infinite square potential well with variable length is studied. In both cases, the proof relies on a compactness argument that needs local controllability results around many periodic trajectories. Those local results are proved by the linear test or using second order terms for some trajectories for which the linearized system loses one direction.
In \cite{BM14}, local controllability with controls in $L^2$ in large time has also been proved using an expansion of order 2.
Moreover, for the first time in \cite{BM20}, on a scalar-input parabolic equation, the power series method has been used to recover at the quadratic order an infinite number of directions lost at the linear level.
Finally, let us quickly mention that stabilization results have also been proved using the power series expansion method as in \cite{CE19, CR17, CRX17}.
\section{Method of `control variations'} \label{sec:black_box} Under (H$_{\lin}$), the linearized system around the ground state of the Schrödinger equation \eqref{Schrodinger} is not controllable: it misses one complex direction $\varphi_K$. This situation is called `controllability up to (real) codimension two'.
The goal of this section is to propose a systematic approach to deal with these situations, which is different from the one for ODEs and better adapted to PDEs. For finite dimensional systems, the classical approach used by Kawski \cite[Theorem 2.4]{K90} consists in, \begin{itemize} \item proving that any direction lost at the linear level is a `tangent vector' thanks to higher order control variations, \item and deducing the STLC of the nonlinear system thanks to a time-iterative process that uses arbitrary small-time intervals. \end{itemize} For PDEs, using arbitrary small-time intervals is not comfortable, because of the control-cost explosion when the time goes to zero. Therefore, in our new approach, Kawski's time-iteration process is replaced by a Brouwer fixed point argument. This is why our new notion of `tangent vector' contains a continuity property.
\subsection{Main result} To encompass finite and infinite dimensional systems, STLC is discussed in terms of the surjectivity of the end-point map. Let us state first the functional setting.
Let $X$ be a Banach space over $\mathbb{R}$.
Let $(E_T, \|\cdot\|_{E_T})$ be a family of normed vector spaces of functions defined on $[0,T]$ for $T> 0$.
Assume that for all $T_1, T_2>0$, for all $u \in E_{T_1}$ and $v \in E_{T_2}$, the concatenation of the two functions $u \# v$ defined by \begin{equation} \label{concatenation} u \# v := u \mathrm{1~\hspace{-1.4ex}l}_{(0,T_1)} + v( \cdot - T_1) \mathrm{1~\hspace{-1.4ex}l}_{(T_1,T_1+T_2)} \end{equation} is in $E_{T_1+T_2}$ with moreover the following estimate: \begin{equation} \label{continuity_ET}
\| u \# v
\|_{E_{T_1+T_2}} \leqslant
\| u\|_{E_{T_1}} +
\| v\|_{E_{T_2}}. \end{equation} For example, for any positive integer $k$, the Sobolev space $H^k_0(0,T)$ satisfies this property whereas the Sobolev space $H^k(0,T)$ doesn't.
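Indeed, the extension by zero of an $H^k_0(0,T_1)$ function belongs to $H^k$ on any larger interval, so the concatenation of two $H^k_0$ functions belongs to $H^k_0(0,T_1+T_2)$ and satisfies \eqref{continuity_ET}; on the other hand, the concatenation of two constant functions with distinct values, which both belong to $H^k$, has a jump at $t=T_1$ and therefore does not even belong to $H^1(0,T_1+T_2)$.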
Finally, let $(\F_T)_{T > 0}$ be a family of functions from $X \times E_T$ to $X$ for $T>0$. Later, in all our applications, $\F_T$ will denote the end-point of the control system.
\noindent First, let us make precise the new definition of `tangent vector' used in this paper. \begin{defi} \label{def_new_tv} A vector $\xi \in X$ is called a small-time $E$-continuously approximately reachable vector if there exists a continuous map $\Xi : [0, +\infty)\rightarrow X$ with $\Xi(0)=\xi$ such that for all $T > 0$, there exists $C, \rho, s >0$ and a continuous map $b \in (-\rho, \rho) \mapsto u_b \in E_T$ such that, \begin{equation} \label{new_tv} \forall b \in (-\rho, \rho), \quad
\left\| \F_{T} \left( 0, u_b \right) - b \Xi(T)
\right\|_{X} \leqslant C
| b
|^{1+s} \quad \text{ with } \quad
\| u_b
\|_{E_T} \leqslant C
| b
|^{s}. \end{equation} The family $(u_b)_{b \in \mathbb{R}}$ (resp.\ the map $\Xi$) is called the control variations (resp.\ the vector variations) associated with $\xi$. \end{defi}
\begin{rem} Let us stress that, for finite-dimensional systems, in \cite{K90}, Kawski (see also the work of Frankowska \cite{F87, F89}) rather introduced the following definition: a vector $\xi$ is said to be an $m$-th order tangent vector if there exists a family of controls $(u_T)_{T>0}$ such that \begin{equation} \label{methodo:tv_bis} \F_T( 0, u_T)= T^m \xi + o(T^m) \quad \text{ when } T \rightarrow 0. \end{equation} Our \cref{def_new_tv} is different: the final time and the amplitude of the target are unrelated. This allows the constants in \eqref{new_tv} to be poorly quantified with respect to the final time $T$, which is not possible in \eqref{methodo:tv_bis}. Hence, being a small-time $E$-continuously approximately reachable vector is a weaker property than being a tangent vector. Also, for this reason, the dependency of the constants with respect to the final time will not be tracked in this paper.
For the Schrödinger equation \eqref{Schrodinger}, the lost directions at the linear level are approximately reachable vectors, but it seems more complicated to prove that they are tangent vectors. \end{rem}
Our systematic approach is formalized in the following statement. \begin{thm} \label{black_box} Assume the following hypotheses hold. \begin{itemize}
\item[$(A_1)$] For all $T>0$, $\F_T: X \times E_T \rightarrow X$ is of class $C^2$ on a neighborhood of $(0,0)$ with $\F_T(0,0)=0$.
\item [$(A_2)$] For all $x \in X$, $T \in \mathbb{R}_+ \mapsto d \F_T(0,0).(x,0) \in X$ can be continuously extended at zero with $d \F_0(0,0).(x,0)=x$.
\item [$(A_3)$] For all $T_1, T_2 > 0$, for all $x \in X$, for all $u \in E_{T_1}$ and $v \in E_{T_2}$, \begin{equation} \label{translation_F} \F_{T_1+T_2} \left( x, u \# v \right) = \F_{T_2} \left( \F_{T_1} \left( x, u \right) , v \right) . \end{equation}
\item [$(A_4)$] The space $\H:=\Ran d\F_T(0,0).(0, \cdot)$ doesn't depend on time, is closed and of finite codimension $n$.
\item [$(A_5)$] There exists $\M$ a supplementary of $\H$ that admits a basis $(\xi_i)_{i=1, \ldots,n }$ of small-time $E$-continuously approximately reachable vectors. \end{itemize}
Then, for all $T>0$, $\F_T$ is locally onto from zero: for all $\eta>0$, there exists $\delta>0$ such that for all $x_f \in X$ with $\| x_f\|_X < \delta$, there exists $u \in E_T$ with $\| u \|_{E_T} < \eta$ such that \begin{equation*} \F_T(0,u)=x_f. \end{equation*} \end{thm}
\begin{rem} \label{rem:time_revers} If in addition of $(A_1)-(A_5)$, we assume that \begin{itemize} \item [$(A_6)$] for all $T > 0$ and $u \in E_{T}$, $u(T- \cdot)$ is in $E_{T}$ with \begin{equation*} \F_T( \F_T(0,u), u(T- \cdot)) = 0, \end{equation*} \end{itemize}
then, for all $T>0$, $\F_T$ is locally onto: for all $\eta>0$, there exists $\delta>0$ such that for all $(x_0,x_f) \in X^2$ with $\| x_0\|_X+\| x_f\|_X < \delta$, there exists $u \in E_T$ with $\| u \|_{E_T} < \eta$ such that \begin{equation*} \F_T(x_0,u)=x_f. \end{equation*}
Indeed, let $(x_0,x_f) \in X^2$ with $\| x_0\|_X+\| x_f\|_X < \delta$. By \cref{black_box}, there exists $u, v \in E_T$ such that $\F_T(0,u)=x_0$ and $\F_T(0,v)=x_f$.
Then, using successively $(A_3)$ and $(A_6)$, one has \begin{equation*} \F_{2T}( x_0, u(T-\cdot) \# v) = \F_T( \F_T(x_0, u(T-\cdot)), v)
= \F_T( 0, v) = x_f. \end{equation*} \end{rem}
\begin{rem}
The $C^2$-regularity of $\F_T$ in $(A_1)$ is for convenience. In the proof of \cref{black_box}, one needs the following estimates: for all $T>0$ and $R>0$, there exists $C>0$ such that for all $(x,u) \in X \times E_T$ with $\| x\|_X+\| u \|_{E_T} <R$, \begin{align} \label{Gronwall_F}
\left\| \F_{T}(x,u) - \F_{T}(0,u) - \F_{T}(x,0)
\right\|_{X} &\leqslant C
\| x\|_X
\| u \|_{E_T}, \\ \label{dev_diff_F}
\left\| \F_{T}(x,u) - d \F_T(0,0).(x,u)
\right\|_X &\leqslant C \left(
\| x\|_X^2 +
\| u\|_{E_T}^2 \right) . \end{align} Both estimates follow from Taylor formulas when $\F_T$ is of class $C^2$. \end{rem}
\begin{rem} When $\F_T$ denotes the end-point map of a control system, \begin{itemize} \item $(A_1)$ is linked to the well-posedness of the system: for controls in $E_T$ and initial data in $X$, the end-point of the solution must take values in $X$; \item $(A_2)$ asks that the solutions of the linearized system are continuous with respect to time; \item $(A_3)$ is related to the semigroup property of the equation; \item $(A_4)$ means that the linearized system is `controllable up to finite codimension'; \item $(A_5)$ means that the directions lost at the linear level can be recovered using `higher order control variations'; \item and $(A_6)$ is linked to the time reversibility of the equation. \end{itemize} \end{rem}
\subsection{Proof of \cref{black_box}} The first tool is the local surjectivity of the nonlinear map $\F_T$ up to finite codimension. \begin{prop} \label{lem:linear_test} Assume $(A_1)$ and $(A_3)$. Let $T>0$ and let $\mathcal{N}$ be a supplementary of $\H$. Denote by $\mathbb{P}$ the projection on $\H$ parallel to $\mathcal{N}$.
Then, $\F_T$ is locally onto in projection on $\H$: there exists $\delta_0, C>0$ and a $C^1$-map $\Gamma_{T} : B_{X}(0, \delta_0) \times \left( B_{X}(0, \delta_0) \cap \H \right) \rightarrow E_T$ with $\Gamma_{T}( 0,0)=0$ such that for all $(x_0, x_f) \in B_{X}(0, \delta_0) \times \left( B_{X}(0, \delta_0) \cap \H \right)$, \begin{equation} \label{target_lin}
\mathbb{P} [ \F_{T} \left( x_0, \Gamma_{T}(x_0,x_f) \right) ] = x_f, \end{equation} with the size estimate \begin{equation} \label{estim_contr_lin}
\| \Gamma_{T}(x_0, x_f)
\|_{E_T} \leqslant C \left(
\| x_0 \|_{X} +
\| x_f \|_{X} \right). \end{equation} \end{prop}
\noindent The proof follows from applying the inverse mapping theorem to the $C^1$-map \begin{equation} \label{end_point_proj} \begin{array}{lccl}
& X \times E_T & \to & X \times \H \\
& (x, \ u) & \mapsto & \left( x, \mathbb{P}[ \F_{T}(x, u) ] \right). \\ \end{array} \end{equation}
Then, we prove that every direction spanned by approximately reachable vectors can be recovered using higher order control variations.
\begin{prop} \label{prop:motion_M} Under the assumptions of \cref{black_box}, there exists $T^*>0$ such that for all $T \in (0, T^*)$ and $\eta>0$, there exists $\M_T$ a supplementary of $\H$, $C, s, \rho>0$ and a continuous map $ z \in \M_T \cap B_X(0, \rho) \mapsto u_z \in E_T $
such that for all $z \in \M_T \cap B_X(0, \rho)$, \begin{equation} \label{motion_M}
\left\| \F_{T} \left( 0,u_z \right) -
z
\right\|_{X} \leqslant C
\| z \|_X^{1+s} \quad \text{ with } \quad
\|u_z\|_{E_T} \leqslant \eta. \end{equation} \end{prop}
\begin{proof} Let $T >0$ and $\eta >0$. Let $0=T_0 < \cdots < T_n=T$ be a subdivision of $[0, T]$.
By $(A_5)$, there exists $C, \rho, s>0$ and for all $i=1, \ldots,n$, two continuous maps $\Xi_i : [0,+\infty) \rightarrow X$ with $\Xi_i(0)=\xi_i$ and $b \in (-\rho, \rho) \mapsto u_b^i \in E_{T_{i}-T_{i-1}}$ such that for all $b \in (-\rho, \rho)$, \begin{equation} \label{methodo:tv_eq1}
\left\| \F_{T_i -T_{i-1}} \left( 0, u^{i}_b \right) - b \Xi_i(T_i -T_{i-1})
\right\|_{X} \leqslant C
| b
|^{1+s} \ \text{ with } \
\| u^{i}_b
\|_{E_{T_i -T_{i-1}}} \leqslant C
| b
|^{s}. \end{equation}
For all $c=(c_1, \ldots, c_n) \in \mathbb{R}^n$ with $\|c\| < \rho$ and $k \in \{1, \ldots, n\}$, we define \begin{equation*} u^{\#^k}_c := u^{1}_{
c_1 } \# u^{2}_{
c_2 } \# \ldots \# u^{k}_{
c_k } \in E_{T_k} . \end{equation*}
We prove by induction on $k \in \{1, \ldots, n\}$ the existence of $C>0$ such that for all $c \in \mathbb{R}^n$ with $\|c\| < \rho$, \begin{equation} \label{rec:hyp_k}
\left\| \G_{T_k} ( 0, u^{\#^k}_c ) - \sum_{i=1}^k c_i d \G_{T_k-T_i}(0,0). ( \Xi_i(T_i -T_{i-1}) , 0)
\right\|_{X} \leqslant C
\| c \|^{1+s}. \end{equation} The initialization with $k=1$ follows from the definition of the family $(u^1_b)_{b \in \mathbb{R}}$ in \eqref{methodo:tv_eq1} and from the fact that $d \G_0(0,0).( \cdot, 0)=\Id_X$ by $(A_2)$. Let us now prove the induction step: assume that \eqref{rec:hyp_k} holds for some $k \in \{1, \ldots, n\}$.
First, by $(A_3)$, one has, \begin{equation*} \G_{T_{k+1}} ( 0, u^{\#^{k+1}}_c ) = \G_{T_{k+1}-T_k} \left( \G_{T_k} ( 0, u^{\#^{k}}_c ) , u^{k+1}_{ c_{k+1} } \right). \end{equation*} Thus, together with the inequality \eqref{Gronwall_F}, one gets, \begin{multline} \label{rec1}
\left\| \G_{T_{k+1}} ( 0, u^{\#^{k+1}}_c ) - \G_{T_{k+1}-T_k} ( 0 , u^{k+1}_{ c_{k+1} } ) - \G_{T_{k+1}-T_k} ( \G_{T_k} ( 0, u^{\#^{k}}_c ) , 0 )
\right\|_X \\ \leqslant C
\| \G_{T_k} ( 0, u^{\#^{k}}_c )
\|_X
\| u^{k+1}_{ c_{k+1} }
\|_{E_{T_{k+1}-T_k}} \leqslant C
\| c\|
| c_{k+1}|^s, \end{multline} thanks to \eqref{rec:hyp_k} and the size estimate in \eqref{methodo:tv_eq1}. Moreover, by \cref{def_new_tv}, \begin{equation} \label{rec_2}
\| \G_{T_{k+1} - T_{k}} ( 0, u^{k+1}_{ c_{k+1} } ) - c_{k+1} \Xi_{k+1}(T_{k+1} -T_{k})
\|_{X} \leqslant C
| c_{k+1}
|^{1+s}. \end{equation} Besides, using the Taylor expansion \eqref{dev_diff_F}, one gets, \begin{multline} \label{rec3}
\left\| \G_{T_{k+1}-T_k} ( \G_{T_k} ( 0, u^{\#^{k}}_c ) , 0 ) - d \G_{T_{k+1}-T_k}(0,0). ( \G_{T_k} ( 0, u^{\#^{k}}_c ) , 0 )
\right\|_X \\ \leqslant C
\| \G_{T_k} ( 0, u^{\#^{k}}_c )
\|_X^2
\leqslant C \| c\|^2. \end{multline} Moreover, using the induction hypothesis \eqref{rec:hyp_k}, one has \begin{multline} \label{rec4}
\Big\| d \G_{T_{k+1}-T_k}(0,0). ( \G_{T_k} ( 0, u^{\#^{k}}_c ) , 0 ) \\- \sum \limits_{i=1}^k c_i d \G_{T_{k+1}-T_k}(0,0). \left( d \G_{T_k-T_i}(0,0). ( \Xi_i(T_i -T_{i-1}) , 0) , 0 \right)
\Big\|_X
\leqslant C \|c\|^{1+s}. \end{multline} Besides, differentiating \eqref{translation_F}, one gets that for all $T_1, T_2 > 0$ and $x \in X$, \begin{equation*} d\F_{T_1}(0,0). ( d\F_{T_2}(0,0).(x,0) , 0 ) = d \F_{T_1+T_2}(0,0).(x,0) . \end{equation*} Thus, \eqref{rec3} and \eqref{rec4} lead to \begin{equation} \label{rec5}
\left\| \F_{T_{k+1}-T_k} ( \G_{T_k} ( 0, u^{\#^{k}}_c ) , 0 ) - \sum \limits_{i=1}^k c_i d \G_{T_{k+1}-T_i}(0,0). \left( \Xi_i(T_i -T_{i-1}) , 0 \right)
\right\|_X \leqslant C
\| c\|^{1+s} . \end{equation} Then, estimates \eqref{rec1}, \eqref{rec_2} and \eqref{rec5} lead to \eqref{rec:hyp_k} for $k+1$ and this concludes the induction.
\noindent \emph{Conclusion.} The following map is continuous and does not vanish at zero, \begin{equation*} (t_1, \ldots, t_n, \hat{t}_1, \ldots, \hat{t}_n) \mapsto \det\left( \mathbb{P}_{\M}[d\F_{t_1}(0,0).(\Xi_1(\hat{t}_1), 0)] , \ldots, \mathbb{P}_{\M}[d\F_{t_n}(0,0).(\Xi_n(\hat{t}_n), 0)] \right), \end{equation*} where $\mathbb{P}_{\M}:=\Id - \mathbb{P}$ denotes the projection on $\M$ defined in $(A_5)$ parallel to $\H$. Thus, there exists $T^*>0$ such that for all $T \in [0, T^*)$, $(\mathbb{P}_{\M}[d\F_{T-T_i}(0,0).(\Xi_i(T_i-T_{i-1}), 0)])_{i=1, \ldots, n}$ is a basis of $\M$. As $\M$ is a supplementary of $\H$, one deduces that $\M_T$ defined as \begin{equation*} \M_T := \Span( d \F_{T-T_i}(0,0). \left( \Xi_i(T_i -T_{i-1}) , 0 \right) , \ i=1, \ldots, n ), \end{equation*} is also a supplementary of $\H$. Thus, by \eqref{rec:hyp_k}, the proof is concluded with \begin{equation*} u_z:=u^{\#^n}_{c_1, \ldots, c_n} \quad \text{ for all } \quad z = \sum \limits_{i=1}^n c_i d \F_{T-T_i}(0,0). \left( \Xi_i(T_i -T_{i-1}) , 0 \right) \in \M_T . \end{equation*} Notice that the continuity of the map $z \mapsto u_z$ stems from the continuity of the maps $b \mapsto u^i_b$. \end{proof}
Finally, one can prove \cref{black_box}: the `higher order control variations' constructed in \cref{prop:motion_M} and the local surjectivity of $\F_T$ up to finite codimension given in \cref{lem:linear_test} are enough to gain back the controllability lost at the linear level.
\begin{rem} \label{rem:temps_petit} It is enough to prove the conclusion of \cref{black_box} for sufficiently small final times $T \in (0, T^*)$. Indeed, for $T>T^*$, taking $T_i>0$ such that $T-T_i < T^*$, one has \begin{equation*} \forall u \in E_{T-T_i}, \quad \F_T( 0, 0_{[0,T_i]} \# u) =\F_{T-T_i}( \F_{T_i}(0,0), u) =\F_{T-T_i}(0,u), \end{equation*} using $(A_1)$ and $(A_3)$. Thus, the result in small time entails the result in large time. \end{rem}
\begin{proof}[Proof of \cref{black_box}.]
Let $T>0$ be the final time, $T_1 \in (0,T)$ an intermediate time and $\eta >0$ the prescribed smallness of the control. Define $\delta:= \min( \delta_0, \frac{\eta}{4C})$ where $\delta_0$ (resp.\ $C$) is defined by \cref{lem:linear_test} (resp.\ \eqref{estim_contr_lin}). Let $x_f \in X$ with $\| x_f \|_X < \delta$.
\noindent \emph{Step 1: Steering $0$ almost to $x_f$.} Let $\M_{T_1}$ the supplementary of $\H$ given in \cref{prop:motion_M}. The goal of this step is to construct a $n$-parameters family $(v_z)_{z \in \M_{T_1}}$ such that, for every $z \in \M_{T_1}$ small enough, one has \begin{align} \label{goal_H} &\mathbb{P} \G_T( 0, v_z) = \mathbb{P} x_f ,
\\
\label{goal_M}
&\left\| \mathbb{P}_{T_1} \G_T( 0, v_z)- z
\right\|_X \leqslant C
\|z\|_X^{1+\gamma} + C
\| \mathbb{P} x_f\|_X^2 , \quad \text{ with } \gamma >0,
\\ \label{goal_vz}
&
\| v_z\|_{E_T} \leqslant \eta , \end{align} where $\mathbb{P}_{T_1}:=\Id- \mathbb{P}$ denotes the projection on $\M_{T_1}$ parallel to $\H$.
By \cref{prop:motion_M}, there exists $C, \rho, s>0$ and a continuous map $\tild{z} \mapsto u_{\tild{z}}$ from $\M_{T_1} \cap B_X(0, \rho)$ to $E_{T_1}$ such that, \begin{equation} \label{contr_step1} \forall \tild{z} \in \M_{T_1} \cap B_X(0, \rho), \quad
\left\| \G_{T_1} \left( 0,u_{\tild{z}} \right) - \tild{z}
\right\|_{X} \leqslant C
\| \tild{z} \|_X^{1+s} \quad \text{ with } \quad
\|u_{\tild{z}}\|_{E_{T_1}} \leqslant \frac{\eta}{2}. \end{equation}
Denote by $(e_i^{T_1})_{i=1, \ldots,n}$ a basis of $\M_{T_1}$. Then, the following map is continuous and non-vanishing at zero, \begin{equation*} t \mapsto \det \left( \mathbb{P}_{T_1}[ d \F_{t}(0,0).( e_1^{T_1}, 0) ] , \ldots , \mathbb{P}_{T_1}[ d \F_{t}(0,0).( e_n^{T_1}, 0) ] \right). \end{equation*} Thus, for $T$ small enough, $\mathbb{P}_{T_1}[d \G_{T-T_1}(0,0).( \cdot, 0)]$ is invertible from $\M_{T_1}$ to $\M_{T_1}$ with a continuous inverse by the open mapping principle. Hence, there exists a linear continuous map $h$ from $\M_{T_1}$ to $\M_{T_1}$ such that, \begin{equation} \label{def_h} \forall z \in \M_{T_1}, \quad \mathbb{P}_{T_1}[ d \G_{T-T_1}(0,0).(h(z), 0) ] =z. \end{equation} Finally, for all $z \in \M_{T_1} \cap B_X(0, \rho)$, we define \begin{equation} \label{def_vz} v_z := u_{h(z)} \# \Gamma_{T-T_1} \left( \G_{T_1} \left( 0,u_{h(z)} \right) , \mathbb{P} x_f \right), \end{equation} where $\Gamma_{T-T_1} : B_X(0, \delta_0) \times \left( B_X(0, \delta_0) \cap \H \right)$ is constructed in \cref{lem:linear_test} with the supplementary $\M_{T_1}$ and the family$(u_{\tild{z}})_{\tild{z} \in \M_{T_1}}$ is constructed in \eqref{contr_step1}. As $ \G_{T_1} \left( 0,u_{h(z)} \right) \rightarrow 0 $ when $z$ goes to 0, for $\rho$ small enough, $
\left\| \G_{T_1} \left( 0,u_{h(z)} \right)
\right\|_X < \delta_0 $ .
\noindent \emph{Size estimate.} By \eqref{continuity_ET}, for all $z \in \M_{T_1} \cap B_X(0, \rho)$, $v_z$ is in $E_{T}$ with \begin{align*}
\| v_z \|_{E_T} &\leqslant
\| u_{h(z)} \|_{E_{T_1}} +
\left\| \Gamma_{T-T_1} \left( \G_{T_1} \left( 0,u_{h(z)} \right) , \mathbb{P} x_f \right)
\right\|_{E_{T-T_1}} \\ &\leqslant \frac{\eta}{2} + C \left(
\| \G_{T_1} \left( 0,u_{h(z)} \right)
\|_X +
\| \mathbb{P} x_f
\|_X \right) \leqslant \frac{\eta}{2} + 2 C\delta \leqslant \eta, \end{align*} using the size estimate \eqref{contr_step1} on $u_{h(z)}$ and the estimate \eqref{estim_contr_lin} on $\Gamma_{T-T_1}$. This proves \eqref{goal_vz}.
\noindent \emph{Target almost reached.} Moreover, using $(A_3)$, one has, \begin{equation} \label{translation} \G_T (0, v_z) = \G_{T-T_1} \left( \G_{T_1}(0, u_{h(z)}) , \Gamma_{T-T_1} \left( \G_{T_1} \left( 0,u_{h(z)} \right) , \mathbb{P} x_f \right) \right). \end{equation} Therefore, by definition \eqref{target_lin} of $\Gamma_{T-T_1}$, \eqref{goal_H} is already satisfied. To prove \eqref{goal_M}, one can use \eqref{translation} together with the inequality \eqref{Gronwall_F} to get \begin{multline} \label{big_estim}
\left\| \mathbb{P}_{T_1} \G_T(0, v_z) - z
\right\|_X \leqslant
\left\| \mathbb{P}_{T_1} \left[ \G_{T-T_1} \left( 0 , \Gamma_{T-T_1} \left( \G_{T_1} \left( 0,u_{h(z)} \right) , \mathbb{P} x_f \right) \right) \right]
\right\|_X \\ +
\left\| \mathbb{P}_{T_1} \left[ \G_{T-T_1} \left( \G_{T_1}(0, u_{h(z)}), 0 \right) -z \right]
\right\|_X \\ + C
\| \Gamma_{T-T_1} \left( \G_{T_1} \left( 0,u_{h(z)} \right), \mathbb{P} x_f \right)
\|_{E_{T-T_1}}
\| \G_{T_1} \left( 0,u_{h(z)} \right)
\|_X. \end{multline} Yet, using the estimates \eqref{estim_contr_lin} on $\Gamma_{T-T_1}$, \eqref{contr_step1} on $\G_{T_1}(0, u_{h(z)})$ and the continuity of $h$, the last term of the right-hand side of \eqref{big_estim} is estimated by $ C
\| z
\|^2 + C
\| \mathbb{P} x_f\|^2 .$ Using the Taylor expansion \eqref{dev_diff_F}, the second term of the right-hand side of \eqref{big_estim} is estimated by \begin{equation} \label{big_estim_1}
\| \mathbb{P}_{T_1} [ d \G_{T-T_1}(0,0). \left( \G_{T_1}(0, u_{h(z)}), 0 \right) - z ]
\|_X +
\| \G_{T_1}(0, u_{h(z)})
\|_X^2 \leqslant C
\| z\|^{1+\min(1,s)} , \end{equation} using estimate \eqref{contr_step1}, the construction \eqref{def_h} and the continuity of $h$. Moreover, by definition of $\H$ in $(A_4)$, $ \mathbb{P}_{T_1} \left[ d \G_{T-T_1}(0,0). ( 0, \Gamma_{T-T_1} (\G_{T_1}(0, u_z) , \mathbb{P} x_f ) ) \right] =0 $. Thus, using again the Taylor expansion \eqref{dev_diff_F}, the first term of the right-hand side of \eqref{big_estim} is estimated by \begin{equation} \label{big_estim_2} C
\left\| \Gamma_{T-T_1} (\G_{T_1}(0, u_z) , \mathbb{P} x_f )
\right\|_X^2
\leqslant C \left(
\| z\|^2 +
\| \mathbb{P} x_f\|^2 \right), \end{equation} using estimate \eqref{estim_contr_lin} on $\Gamma_{T-T_1}$ and \eqref{contr_step1}. Therefore, \eqref{big_estim}, \eqref{big_estim_1} and \eqref{big_estim_2} lead to \eqref{goal_M}.
\noindent \emph{Step 2: Steering $0$ to $x_f$.} Thanks to \eqref{goal_H} and \eqref{goal_vz}, to conclude the proof, it remains to prove the existence of $z \in \M_{T_1} \cap B_X(0, \rho)$ such that $\mathbb{P}_{T_1} \G_{T}(0, v_z)= \mathbb{P}_{T_1} x_f$. To that end, we apply the Brouwer fixed-point theorem to the function \begin{displaymath} G_{x_f}:
\left|
\begin{array}{rcl}
\M_{T_1} \cap B_X(0, \rho) & \longrightarrow & \M_{T_1} \\
z & \longmapsto & z -\mathbb{P}_{T_1}[\G_T(0, v_z)] +\mathbb{P}_{T_1}[x_f]. \\
\end{array} \right. \end{displaymath}
First, notice that by continuity of $\G_{T}$, of $\Gamma_{T-T_1}$, of $h$ and of $\tild{z} \mapsto u_{\tild{z}}$, the map $z \mapsto v_z$ defined in \eqref{def_vz} is continuous from $\M_{T_1}$ to $E_T$. Thus, $G_{x_f}$ is continuous.
It remains to prove that $G_{x_f}$ maps a ball into itself. Let $\rho' \in (0, \rho)$ be such that $C \rho'^{ \gamma} < \frac{1}{2}$ and reduce $\delta$ such that $C \delta^2 + \delta < \frac{\rho'}{2}$, where $C$ is given in \eqref{goal_M}. Then, using estimate \eqref{goal_M}, one has for all $z \in \M_{T_1} \cap B_X(0, \rho')$, \begin{equation*}
\| G_{x_f}(z) \|_X \leqslant C
\| z \|_X^{1+\gamma} +
C\| \mathbb{P} x_f\|_X^2 +
\| \mathbb{P} x_f\|_X \leqslant C \rho'^{1+\gamma} + C \delta^2 + \delta \leqslant \rho'. \end{equation*} Thus, one can apply Brouwer fixed-point theorem to $G_{x_f}$ to conclude the proof. \end{proof}
\section{Toy-models in finite dimension} \label{sec:toy_models} The goal of this section is to illustrate the method presented in \cref{sec:black_box} on examples in finite dimension.
For the sake of simplicity, we only explain how to prove that the directions lost at the linear level are small-time $E$-continuously approximately reachable vectors.
The verification of the other assumptions of \cref{black_box} is left to the reader.
In this section, for all $n \in \mathbb{N}^*$, $(e_i)_{i=1, \ldots, n}$ denotes the canonical basis of $\mathbb{R}^n$.
\subsection{A first toy-model: not all lost directions can be recovered} Consider the following control-affine polynomial system
\begin{equation}
\label{toy_model_1}
\left\{
\begin{array}{ll}
\dot{x}_1= u, \\
\dot{x}_2= x_1^2 +x_1^3.
\end{array} \right. \end{equation} The reachable space of the linearized system around $(0,0)$ is given by $\H=\Span(e_1)$ and its supplementary by $\M=\Span(e_2)$. However, $e_2$ is not a small-time $W^{-1, \infty}$-continuously approximately reachable vector because the following quantity \begin{equation*} x_2(T; \ u, \ 0) = \int_0^T u_1(t)^2 dt + \int_0^T u_1(t)^3 dt
\geq (1- T \| u \|_{W^{-1, \infty}}) \int_0^T u_1^2(t) dt \end{equation*}
is nonnegative for $T$ and $\| u \|_{W^{-1, \infty}}$ small enough, so that targets of the form $(0,-\delta)$ with $\delta>0$ cannot be reached. This system illustrates Sussmann's necessary condition \cite{S83} on $ \left[ [f_0, f_1] , f_1 \right](0) $ for $L^{\infty}$-STLC.
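For instance, with the sign convention of \cref{rem:lie_brackets}, writing \eqref{toy_model_1} as $\dot{x}=f_0(x)+u f_1(x)$ with $f_0(x)=(0, x_1^2+x_1^3)$ and $f_1(x)=(1,0)$, a direct computation gives $[f_0,f_1](x) = (0, 2x_1+3x_1^2)$, so that $\left[ [f_0, f_1] , f_1 \right](0)=(0,2) \neq 0$: the quadratic drift exhibited above is exactly along the direction of this Lie bracket.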
\subsection{Sussmann's example: a quadratic/cubic competition} The following classical example illustrates that a cubic term can be used to dominate a quadratic drift and restore STLC,
\begin{equation}
\label{toy_model_2}
\left\{
\begin{array}{ll}
\dot{x}_1= u, \\
\dot{x}_2= x_1,\\
\dot{x}_3= x_2^2+x_1^3.
\end{array} \right. \end{equation} For this system, $\H=\Span(e_1, e_2)$ and $\M=\Span(e_3)$.
First, \eqref{toy_model_2} is not $W^{1, \infty}$-STLC. Indeed, considering a trajectory such that $x_1(T)=x_2(T)=0$, two integrations by parts give, \begin{equation*} \int_0^T u_1(t)^3 dt = - 2\int_0^T u_2(t) u_1(t) u(t) dt = \int_0^T u_2(t)^2 u'(t) dt. \end{equation*}
Thus, provided that $\| u \|_{W^{1, \infty}(0,T)} \leq \frac{1}{2}$, \begin{equation*} x_3(T; \ u, \ 0) = \int_0^T u_2(t)^2 ( 1+ u'(t)) dt \geq \frac{1}{2} \int_0^T u_2(t)^2 dt. \end{equation*} Hence, it is impossible to reach states of the form $(0,0, -\delta)$ with $\delta >0$.
However, $e_3$ is a small-time $L^{\infty}$-continuously approximately reachable vector. Indeed, in this asymptotic, one can use the cubic term to absorb the quadratic term along the lost direction using oscillating controls defined for $b \in \mathbb{R}^*$ by \begin{equation*} \forall t \in [0,T], \ u_b(t) := \sign(b)
|b|^{\frac{1}{11}} \phi'' \left( \frac{ t } {
|b|^{\frac{2}{11}} } \right) \quad \text { with } \
\phi \in C_c^{\infty}(0,1) \text{ s.\ t.\ } \int_0^1 \phi'(\theta)^3 d\theta = 1. \end{equation*}
Indeed, performing the change of variables $t= |b|^{\frac{2}{11}} \theta$, one gets \begin{align*} x_3(T; \ u_b, \ 0) &=
\int_0^{|b|^{\frac{2}{11}}} \left( \sign(b)
|b|^{\frac{5}{11}} \phi \left( \frac{ t } {
|b|^{\frac{2}{11}} } \right) \right)^2 dt +
\int_0^{|b|^{\frac{2}{11}}} \left( \sign(b)
|b|^{\frac{3}{11}} \phi' \left( \frac{ t } {
|b|^{\frac{2}{11}} } \right) \right)^3 dt \\ &=
|b|^{\frac{12}{11}} \int_0^1 \phi(\theta)^2 d\theta + \sign(b)
|b| \int_0^1 \phi'(\theta)^3 d\theta =
b + \O( |b|^{\frac{12}{11}}). \end{align*}
Moreover, along the `linear components', as $u_b$ is supported on $(0, |b|^{\frac{2}{11}}) \subset (0,T)$ for $b$ small enough, one directly has \begin{equation*} \left( x_1(T; \ u_b, \ 0), \ x_2(T; \ u_b, \ 0) \right) = \left( u_1(T), u_2(T) \right) = (0,0) . \end{equation*} Besides, one has the following estimates on the controls, \begin{equation*} \forall k \in \mathbb{N}, \quad
\| u_b^{(k)}
\|_{L^{\infty}(0,T)} \leqslant
\| \phi^{(2+k)} \|_{L^{\infty}(0,1)}
|b|^{\frac{1-2k}{11}}.
\end{equation*} Hence, this family of controls is arbitrary small in $L^{\infty}(0,T)$ (but not in $W^{1, \infty}(0,T)$). Moreover, this estimate (with $k=0$) also gives that the map $b \mapsto u_b$ from $\mathbb{R}^*$ to $L^{\infty}(0,T)$ can be extended continuously at zero with $u_0=0$.
Therefore, $e_3$ is a small-time $L^{\infty}$-continuously approximately reachable vector and, by \cref{black_box}, \eqref{toy_model_2} is $L^{\infty}$-STLC. Notice that this was already known thanks to the Sussmann $\S(\theta)$ condition \cite{S83}.
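As a purely illustrative numerical check (not part of the proof), the following Python sketch computes the end-point of \eqref{toy_model_2} for the control variations above, by integrating the primitives of $u_b$ on a grid. The trigonometric profile playing the role of $\phi$, the final time $T=1$ and the discretization are ad hoc choices (in particular, this profile is not compactly supported, contrary to the $\phi$ of the text); the printed ratio illustrates that $x_3(T; \ u_b, \ 0)-b=\O(|b|^{\frac{12}{11}})$ while $x_1(T)=x_2(T)=0$.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Purely illustrative profile playing the role of phi: it satisfies
# phi(0) = phi'(0) = phi(1) = phi'(1) = 0 and is rescaled so that
# int_0^1 phi'(s)^3 ds = 1 (it is not compactly supported, though).
s = np.linspace(0.0, 1.0, 200_001)
w  = np.cos(2*np.pi*s) - np.cos(4*np.pi*s)                    # unnormalised phi'
dw = -2*np.pi*np.sin(2*np.pi*s) + 4*np.pi*np.sin(4*np.pi*s)   # unnormalised phi''
lam = np.cbrt(1.0 / np.trapz(w**3, s))                        # = -(4/3)^(1/3)
phi2 = lam * dw                                               # phi''

T = 1.0
t = np.linspace(0.0, T, 200_001)

def endpoint(b):
    """x(T; u_b, 0) for u_b(t) = sign(b) |b|^{1/11} phi''(t / |b|^{2/11})."""
    eps = abs(b) ** (2.0 / 11.0)
    u = np.sign(b) * abs(b)**(1.0/11.0) * np.interp(t/eps, s, phi2, right=0.0)
    u1 = cumulative_trapezoid(u,  t, initial=0.0)             # x_1
    u2 = cumulative_trapezoid(u1, t, initial=0.0)             # x_2
    x3 = np.trapz(u2**2 + u1**3, t)                           # x_3
    return u1[-1], u2[-1], x3

# x_1(T) = x_2(T) = 0 and (x_3(T) - b) / |b|^{12/11} stays bounded.
for b in (1e-1, 1e-2, -1e-2, 1e-3):
    x1, x2, x3 = endpoint(b)
    print(f"b={b:+.0e}  x1={x1:+.1e}  x2={x2:+.1e}  x3={x3:+.4e}  "
          f"(x3-b)/|b|^(12/11)={(x3 - b) / abs(b)**(12.0/11.0):+.4f}")
\end{verbatim}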
\subsection{A polynomial toy-model for the Schrödinger PDE}
The next polynomial control system is designed to be a toy-model for the Schrödinger equation \eqref{Schrodinger} as explained later in \cref{rem:toy_model_poly},
\begin{equation}
\label{toy_model_3}
\left\{
\begin{array}{ll}
\dot{x}_1= u, \\
\dot{x}_2= x_1,\\
\dot{x}_3= x_2, \\
\dot{x}_4 = x_3^2 + x_1^2 x_2 , \\
\dot{x}_5 = x_4.
\end{array} \right. \end{equation} For this example, $\H= \Span( e_1, e_2, e_3)$ and $\M=\Span(e_4, e_5)$. Moreover, solving explicitly \eqref{toy_model_3}, the fourth and fifth components are given by, \begin{align} \label{tm3_com4} x_4(T; \ u , \ 0) &= \int_0^T u_3(t)^2 dt + \int_0^T u_1(t)^2 u_2(t) dt, \\ \label{tm3_com5} x_5(T; \ u , \ 0) &= \int_0^T (T-t) u_3(t)^2 dt + \int_0^T (T-t) u_1(t)^2 u_2(t) dt . \end{align}
First, using Cauchy-Schwarz and Gagliardo-Nirenberg inequalities \cite{N59}, one gets the existence of $C>0$ such that for all $u \in H^3(0,T)$ \begin{equation*}
\left| \int_0^T u_1(t)^2 u_2(t) dt
\right| \leqslant C
\| u_1\|_{L^2(0,T)}^3 \leqslant C \left(
\|u^{(3)}\|_{L^2(0,T)} + T^{-3} \|u\|_{L^2(0,T)} \right)
\|u_3\|^2_{L^2(0,T)}. \end{equation*} Thus, the quadratic term prevails over the cubic term in \eqref{tm3_com4} and \eqref{tm3_com5} when the controls are small in $H^3$: for a fixed final time $T$, if $\|u\|_{H^3(0,T)}$ is small enough, then $x_4(T; \ u, \ 0) \geqslant \frac{1}{2} \int_0^T u_3(t)^2 dt \geqslant 0$, so that targets with a negative fourth component cannot be reached. This makes it possible to deny $H^3$-STLC for \eqref{toy_model_3}. Nonetheless, let us prove that \eqref{toy_model_3} is $H^2_0$-STLC.
\noindent \emph{Step 1: $e_4$ is a small-time $H^2_0$-continuously approximately reachable vector with vector variations $\Xi_4(T)=e_4+Te_5$.} Heuristically, for a final time $T$ fixed, looking at \eqref{tm3_com4} and \eqref{tm3_com5}, the cubic terms of $x_4$ and $x_5$ have the same size. Thus, it seems better to use the vector variations $\Xi_4(T)=e_4+Te_5$ instead of $\Xi_4(T)=e_4$.
As before, in the asymptotic of controls small in $H^2_0$, one can use the cubic term to absorb the quadratic term along the lost direction using oscillating controls of the form, for all $b \in \mathbb{R}^*$, \begin{equation} \label{control_variations} u_b(t) = \sign(b)
|b|^{\frac{7}{41}} \phi^{(3)} \left( \frac{ t } {
|b|^{\frac{4}{41}} } \right) \quad \text { with } \ \phi \in C_c^{\infty}(0,1) \text{ s.\ t.\ } \int_0^1 \phi''(\theta)^2 \phi'(\theta) d\theta = 1. \end{equation}
Indeed, substituting these controls into \eqref{tm3_com4} and \eqref{tm3_com5} and performing the change of variables $t= |b|^{\frac{4}{41}} \theta$, one gets \begin{align*} x_4(T; \ u_b, \ 0) &=
|b|^{ \frac{42}{41} } \int_0^1 \phi(\theta)^2 d \theta + b
, \\ x_5(T; \ u_b, \ 0) &=
|b|^{ \frac{42}{41} } \int_0^1 (T-
|b|^{ \frac{4}{41} } \theta ) \phi(\theta)^2 d \theta + T b - \sign(b)
|b|^{ \frac{45}{41} } \int_0^1 \theta \phi''(\theta)^2 \phi'(\theta) d\theta .
\end{align*}
Moreover, as $u_b$ is supported on $(0, |b|^{\frac{4}{41}}) \subset (0,T)$ for $b$ small enough, one directly has, \begin{equation*} ( x_1(T; \ u_b, \ 0) , \ x_2(T; \ u_b, \ 0) , \ x_3(T; \ u_b, \ 0) ) = ( u_1(T), u_2(T), u_3(T) ) = (0, 0,0). \end{equation*} Besides, for all $b \in \mathbb{R}^*$, \begin{align} \label{size_ub_H2}
\| u_b''
\|_{L^2(0,T)} &\leqslant
\| \phi^{(5)} \|_{L^2(0,1)}
|b|^{\frac{1}{41}}.
\end{align} Thus, the family $(u_b)_{b \in \mathbb{R}}$ is arbitrary small in $H^2_0(0,T)$ and the map $b \mapsto u_b$ from $\mathbb{R}^*$ to $H^2_0(0,T)$ can be continuously extended at zero with $u_0=0$. This concludes Step 1.
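Here also, a purely illustrative numerical check can be carried out (it is not part of the proof). In the following Python sketch, the trigonometric profile playing the role of $\phi$, the final time $T=1$ and the discretization are ad hoc choices (this profile is not compactly supported, contrary to the $\phi$ of the text); the sketch only illustrates that, with the control variations \eqref{control_variations}, $x_4(T; \ u_b, \ 0)-b$ and $x_5(T; \ u_b, \ 0)-Tb$ are of size $\O(|b|^{\frac{42}{41}})$ and $\O(|b|^{\frac{45}{41}})$ respectively.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Purely illustrative profile playing the role of phi: it satisfies
# phi(0) = phi'(0) = phi''(0) = 0, the same at s = 1, and is rescaled so
# that int_0^1 phi''(s)^2 phi'(s) ds = 1 (it is not compactly supported).
s = np.linspace(0.0, 1.0, 200_001)
w   = np.cos(2*np.pi*s) - np.cos(4*np.pi*s)                      # unnormalised phi'
dw  = -2*np.pi*np.sin(2*np.pi*s) + 4*np.pi*np.sin(4*np.pi*s)     # unnormalised phi''
d2w = -(2*np.pi)**2*np.cos(2*np.pi*s) + (4*np.pi)**2*np.cos(4*np.pi*s)
lam = np.cbrt(1.0 / np.trapz(dw**2 * w, s))                      # = -(3*pi^2)^(-1/3)
phi3 = lam * d2w                                                 # phi'''

T = 1.0
t = np.linspace(0.0, T, 200_001)

def endpoint(b):
    """(x_4, x_5)(T; u_b, 0) for u_b(t) = sign(b) |b|^{7/41} phi'''(t / |b|^{4/41})."""
    eps = abs(b) ** (4.0 / 41.0)
    u = np.sign(b) * abs(b)**(7.0/41.0) * np.interp(t/eps, s, phi3, right=0.0)
    u1 = cumulative_trapezoid(u,  t, initial=0.0)
    u2 = cumulative_trapezoid(u1, t, initial=0.0)
    u3 = cumulative_trapezoid(u2, t, initial=0.0)
    x4 = np.trapz(u3**2 + u1**2 * u2, t)
    x5 = np.trapz((T - t) * (u3**2 + u1**2 * u2), t)
    return x4, x5

# x_4 - b = O(|b|^{42/41}) and x_5 - T*b = O(|b|^{45/41}).
for b in (1e-1, 1e-2, -1e-2):
    x4, x5 = endpoint(b)
    print(f"b={b:+.0e}   x4-b={x4 - b:+.3e}   x5-T*b={x5 - T*b:+.3e}")
\end{verbatim}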
\noindent \emph{Step 2: Constructing the second approximately reachable vector from the first one.} Hermes and Kawski proved in \cite[Theorem 6]{HK87} that for affine-control systems of the form $\dot{x}=f_0(x)+uf_1(x)$, if for some Lie bracket $V$ of $f_0$ and $f_1$, $V(0)$ is a tangent vector in the sense of \eqref{methodo:tv_bis}, then $[f_0, V](0)$ is also a tangent vector.
Using the same construction, we prove that $e_5$ is also a small-time $H^2_0$-continuously approximately reachable vector with vector variations $\Xi_5(T)=e_5$. Denote by $(u_b)_{b \in \mathbb{R}}$ the control variations associated with $e_4$, constructed at step 1. We are going to prove that, for all $(b, c) \in \mathbb{R}^2$ small enough, \begin{equation} \label{tm3_goal} x(3T; \ u_b \# 0_{[0,T]} \# u_c, \ 0) = (b+c)(e_4 +Te_5) + 2Tb e_5 + \O
( \| (b,c)\|^{1+\frac{1}{41}}). \end{equation} Thus, taking for all $\alpha \in \mathbb{R}$, $b = \frac{\alpha}{2T}$ and $c=-b$, this proves the existence of a family of controls $(v_{\alpha})_{\alpha \in \mathbb{R}}$ such that, when $\alpha$ goes to zero, \begin{equation*} x(3T; \ v_{\alpha}, \ 0) = \alpha e_5 + \O
(| \alpha|^{1+\frac{1}{41}}) \quad \text{ with } \quad
\| v_{\alpha}\|_{H^2_0(0,T)} \leqslant C
| \alpha|^{\frac{1}{41}}, \end{equation*} using \eqref{size_ub_H2}. And, this will conclude Step 2. To prove \eqref{tm3_goal}, notice first that by definition of $(u_b)_{b \in \mathbb{R}}$, one has \begin{equation} \label{first_step} x(T; \ u_b, \ 0) = b(e_4+Te_5) + \O (
|b|^{1+\frac{1}{41}} ). \end{equation} Then, using the semi-group property of \eqref{toy_model_3}, one has \begin{equation} \label{second_step} x(2T; \ u_b \# 0_{[0,T]}, \ 0) = x(T; \ 0_{[0,T]}, \ x(T; \ u_b, \ 0)). \end{equation} Moreover, computing explicitly the solution, one gets a constant $C>0$ such that, \begin{equation} \label{evolution_libre} \forall p \in \mathbb{R}^5, \quad
\left\| x(T; \ 0_{[0,T]}, \ p) - p - T p_4 e_5
\right\| \leqslant C
\| p \|^2. \end{equation} Thus, \eqref{first_step}, \eqref{second_step} and \eqref{evolution_libre} lead to \begin{equation} \label{step2T} x(2T; \ u_b \# 0_{[0,T]}, \ 0) =
b(e_4+T e_5) + Tb e_5 +
\O( |b|^{1+\frac{1}{41}}). \end{equation} Then, once again, using the semi-group property, \begin{equation} \label{third_step} x(3T; \ u_b \# 0_{[0,T]} \# u_c, \ 0) = x(T; \ u_c, \ x(2T; \ u_b \# 0_{[0,T]}, \ 0) ). \end{equation}
Moreover, using Gronwall's lemma, one gets a constant $C>0$ such that for all $u$ and $p$ with $\| u \|_{H^2_0(0,T)} < 1$ and $\| p\| < 1$, \begin{equation} \label{Gronwall_dim_finie}
\left\| x(T; \ u, \ p) - x(T; \ 0_{[0,T]}, \ p) - x(T; \ u, \ 0)
\right\| \leqslant
C \| u \|_{H^2_0(0,T)} \| p\|. \end{equation} Thus, \eqref{third_step} and \eqref{Gronwall_dim_finie} lead to \begin{multline} \label{step4}
\| x(3T; \ u_b \# 0_{[0,T]} \# u_c, \ 0) - x(T; \ u_c, \ 0) - x(T; \ 0_{[0,T]}, \ x(2T; \ u_b \# 0_{[0,T]}, \ 0))
\| \\ \leqslant C
\| u_c\|_{H^2_0}
\|x(2T; \ u_b \# 0_{[0,T]}, \ 0) \|
\leqslant C |c|^{\frac{1}{41}} |b|, \end{multline} using the size estimates \eqref{size_ub_H2} on $(u_c)_{c \in \mathbb{R}}$ and \eqref{step2T} on $x(2T; \ u_b \# 0_{[0,T]}, \ 0)$. Besides, using the estimates \eqref{evolution_libre} and \eqref{step2T}, \begin{equation} \label{step5} x(T; \ 0_{[0,T]}, \ x(2T; \ u_b \# 0, \ 0)) =
b(e_4+Te_5)+ 2Tbe_5 + \O( | b|^{1+\frac{1}{41}}). \end{equation} Using the definition of the family $(u_c)_{c \in \mathbb{R}}$, \eqref{step4} and \eqref{step5} lead to \eqref{tm3_goal}.
\begin{rem} \label{rem:toy_model_poly} The system \eqref{toy_model_3} can be seen as a polynomial toy-model for the Schrödinger PDE \eqref{Schrodinger} in the following way. For both control systems, the linearized system is controllable `up to codimension 2', the leading quadratic (resp.\ cubic) term of the solution along the first lost direction is given by $\int_0^T u_3(t)^2 dt$ (resp.\ by $\int_0^T u_1(t)^2 u_2(t)dt$) and the second lost direction is more or less the `integration' of the first one. \end{rem}
\subsection{A bilinear toy-model for Schrödinger} Let $p \in \mathbb{N}^*$ and $H_0, H_1 \in \mathcal{M}_p(\mathbb{R})$ symmetric matrices. Consider Schrödinger control systems of the form \begin{equation} \label{ODE} i X'(t) = H_0 X(t) -u(t) H_1X(t), \end{equation} where the state is $X(t) \in \mathbb{C}^p$ and the control is $u(t) \in \mathbb{R}$. We write $(\varphi_1, \ldots, \varphi_p)$ for an orthonormal basis of eigenvectors of $H_0$ and $(\lambda_1, \ldots, \lambda_p)$ for its eigenvalues. We also denote by $X_{j}(t):= \varphi_j e^{-i \lambda_j t}$ for all $j \in \{1, \ldots, p\}$. In this section, the commutator of $H_0$ and $H_1$ is denoted by $[H_0, H_1]:=H_0 H_1 - H_1 H_0$ and $\langle \cdot, \cdot \rangle$ denotes the classical hermitian scalar product on $\mathbb{C}^p$.
\begin{rem} \label{adaptation} For Schrödinger ODEs \eqref{ODE}, we work around the trajectory $(X_1, u \equiv 0)$. The work in \cref{sec:black_box} can still be used by performing the change of function $X^*(t):=X(t)e^{i \lambda_1t} -\varphi_1$ to work around $(0,0)$. Thus, in this setting, a vector $\xi \in \mathbb{R}^p$ is called a small-time $E$-continuously approximately reachable vector if there exists a continuous map $\Xi : [0, +\infty)\rightarrow \mathbb{R}^p$ with $\Xi(0)=\xi$ such that for all $T > 0$, there exists $C, \rho, s >0$ and a continuous map $b \in (-\rho, \rho) \mapsto u_b \in E_T$ such that for all $b \in (-\rho, \rho),$ \begin{equation*}
\left\| X(T; \ u_b, \ \varphi_1) - X_1(T) - b \Xi(T)
\right\| \leqslant C
| b
|^{1+s} \quad \text{ with } \quad
\| u_b
\|_{E_T} \leqslant C
| b
|^{s}. \end{equation*} The topology on the state is not specified as all norms are equivalent in finite dimension. \end{rem}
\subsubsection{The linear test} The linearized system of \eqref{ODE} around the trajectory $(X_1, u \equiv 0)$ is given by \begin{equation} \label{ODE_lin} i X'_L = H_0 X_L - u(t) H_1 X_1. \end{equation} By the Duhamel formula, the solution of \eqref{ODE_lin} with $X_L(0)=0$ can be written as \begin{equation} \label{dim_finie_lin} X_L(T) = i \sum \limits_{j=1}^p \left( \langle H_1 \varphi_1, \varphi_j \rangle \int_0^T u(t) e^{i (\lambda_j - \lambda_1)t} dt \right) X_j(T) . \end{equation} Thus, the reachable space of the linearized system \eqref{ODE_lin} is given by \begin{equation*} \H := \Span_{\mathbb{C}} \left( X_j(T) \quad \text{ for } j \in \{1, \ldots, p \} \text{ such that } \quad \langle H_1 \varphi_1, \varphi_j \rangle \neq 0 \right), \end{equation*} as the equality $X_L(T)=X_f$ is brought down to solving a finite polynomial moment problem when the coefficients $
\langle H_1 \varphi_1, \varphi_j \rangle
$ don't vanish.
To simplify, we assume that,
\noindent ($\hat{H}_{\lin}$) there exists an integer $K \in \{2, \ldots, p\}$ such that \begin{equation} \label{lin_nul_dim_finie}
\langle H_1 \varphi_1, \varphi_K \rangle =0 \quad \text{ and } \quad \forall j \in \{1, \ldots, p \}-\{K\}, \quad \langle H_1 \varphi_1, \varphi_j \rangle \neq 0. \end{equation} As the solution of the Schrödinger ODE \eqref{ODE} is complex-valued, it means that $\H$ is of codimension 2 and its supplementary is given by $\M=\Span_{\mathbb{R}}(\varphi_K, i \varphi_K)$.
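To visualize this loss of controllability at the linear level, one may run the following purely illustrative Python sketch (not used in the sequel): the matrices $H_0$, $H_1$, the resonant control profile and the final time are arbitrary choices satisfying ($\hat{H}_{\lin}$) with $p=3$ and $K=2$; for small amplitudes $\varepsilon$ of the control, the component of the solution along $\varphi_K$ behaves quadratically in $\varepsilon$, whereas the component along $\varphi_3$ behaves linearly.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Purely illustrative example with p = 3 and K = 2: H0, H1, the control
# profile v and the final time T are arbitrary choices (not taken from the
# text), with <H1 phi_1, phi_2> = 0 so that the second direction is lost
# at the linear level.
H0 = np.diag([1.0, 4.0, 9.0])       # eigenvalues lambda_j, eigenvectors phi_j = e_j
H1 = np.array([[0.5, 0.0, 1.0],
               [0.0, 0.3, 0.7],
               [1.0, 0.7, 0.2]])    # real symmetric, (H1)_{12} = 0
lam = np.diag(H0)
T = 2.0
v = lambda t: np.cos((lam[2] - lam[0]) * t)     # resonant with the third mode

def X_T(eps):
    """Solution of i X' = H0 X - eps v(t) H1 X at time T, from X(0) = phi_1."""
    rhs = lambda t, X: -1j * ((H0 - eps * v(t) * H1) @ X)
    sol = solve_ivp(rhs, (0.0, T), np.array([1.0, 0.0, 0.0], dtype=complex),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

X1_T = np.array([np.exp(-1j * lam[0] * T), 0.0, 0.0])   # free evolution of phi_1
# The phi_2 component of X(T) - X_1(T) is (at least) quadratic in eps,
# whereas the phi_3 component is linear in eps.
for eps in (1e-1, 1e-2, 1e-3):
    d = X_T(eps) - X1_T
    print(f"eps={eps:.0e}   |<d, phi_2>| = {abs(d[1]):.2e}   "
          f"|<d, phi_3>| = {abs(d[2]):.2e}")
\end{verbatim}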
\subsubsection{Quadratic and cubic behaviors} \label{sec:expansion_dim_finie}
To prove that $i \varphi_K$ and $\varphi_K$ are approximately reachable vectors, we need to study the behavior of the solution of \eqref{ODE} along the lost directions. Unlike for the previous polynomial toy-models,
here the computations of the first terms of the expansion of \eqref{ODE} are quite heavy. They can be lightened by introducing the new state \begin{equation} \label{def_syst_aux_fin} \tild{X}(t) := e^{-i H_1 u_1(t)} X(t), \end{equation} which solves the following ODE, called the auxiliary system, \begin{equation} \label{ODEaux}
\tild{X}'(t) = -i e^{-iH_1 u_1(t)} H_0 e^{i H_1 u_1(t)} \tild{X}(t) = -i \sum \limits_{k=0}^{+\infty} \frac{ \left(-i u_1(t)\right)^k}{k!} \ad_{H_1}^k(H_0) \tild{X}(t). \end{equation} Thus, working with $\tild{X}$, it is easier to quantify the expansion of the solution with respect to the primitives of the control and not with respect to the control $u$.
This idea was introduced in \cite{C06} and later used in \cite{BM14, B21bis} for the Schrödinger equation. It was also used in finite dimension in \cite{BM18} to study the quadratic behavior of differential systems or in \cite{BLM21} to give refined error estimates for various expansions of scalar-input affine control systems.
\noindent By the Duhamel formula, the solution of the auxiliary system \eqref{ODEaux} with $\tild{X}(0)=\varphi_1$ satisfies \begin{equation} \label{expr:aux_fin} \tild{X}(t) = X_{1}(t) -i
\int_0^t e^{-i H_0 (t-\tau)}
\sum \limits_{k=1}^{+\infty} \frac{ (-i u_1(\tau))^k}{k!} \ad_{H_1}^k(H_0) \tild{X}(\tau) d\tau. \end{equation} Then, the linear term $\tild{X}_L$, the quadratic term $\tild{X}_Q$ and the cubic term $\tild{X}_C$ of the expansion of $\tild{X}$ around the trajectory $(X_{1}, u \equiv 0)$ are given by, \begin{align} \label{expr:X_1,L} \tild{X}_L(t) &= -\int_0^t e^{-i H_0 (t-\tau)} u_1(\tau) \ad^1_{H_1}(H_0) X_{1}(\tau) d\tau, \\ \label{expr:X_1,Q} \tild{X}_Q(t) &= \int_0^t e^{-i H_0 (t-\tau)} \left( -u_1(\tau) \ad^1_{H_1}(H_0) \tild{X}_L(\tau) + \frac{i u_1(\tau)^2}{2} \ad_{H_1}^2(H_0) X_{1}(\tau) \right) d\tau, \\ \notag \tild{X}_C(t) &= \int_0^t e^{-i H_0 (t-\tau)} \Big( -u_1(\tau) \ad^1_{H_1}(H_0) \tild{X}_Q(\tau) \\ \label{expr:X_1,C} & + \frac{i u_1(\tau)^2}{2} \ad_{H_1}^2(H_0) \tild{X}_{L}(\tau) + \frac{u_1(\tau)^3}{6} \ad_{H_1}^3(H_0) X_{1}(\tau) \Big) d\tau . \end{align}
\emph{First-order term.} Using \eqref{expr:X_1,L}, the linear term along the lost direction is given by \begin{equation} \label{bil_etape1} \langle \tild{X}_L(T), \varphi_K e^{-i\lambda_1 T} \rangle = (\lambda_K-\lambda_1) \langle H_1 \varphi_1, \varphi_K \rangle \int_0^T u_1(t) e^{ i (\lambda_K- \lambda_1) (t-T) } dt =0, \end{equation} under ($\hat{H}_{\lin}$). The $K$-th direction is also lost at the first order for the auxiliary system.
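For completeness, the vanishing in \eqref{bil_etape1} can also be read directly on the bracket. With the convention $\ad_{H_1}(H_0)=H_1H_0-H_0H_1$ implicit in \eqref{ODEaux}, and since $H_0$ is symmetric with $H_0 \varphi_1 = \lambda_1 \varphi_1$ and $H_0 \varphi_K = \lambda_K \varphi_K$, a one-line computation gives \begin{equation*} \langle \ad^1_{H_1}(H_0) \varphi_1, \varphi_K \rangle = \langle H_1 H_0 \varphi_1, \varphi_K \rangle - \langle H_1 \varphi_1, H_0 \varphi_K \rangle = (\lambda_1 - \lambda_K) \langle H_1 \varphi_1, \varphi_K \rangle , \end{equation*} which is a multiple of $\langle H_1 \varphi_1, \varphi_K \rangle$ and therefore vanishes under ($\hat{H}_{\lin}$).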
\noindent \emph{Second-order term.} Substituting the explicit form \eqref{expr:X_1,L} of $\tild{X}_L$ into \eqref{expr:X_1,Q}, the quadratic term along the lost direction is given by, \begin{equation} \label{quad_sans_IPP} \langle \tild{X}_Q(T), \varphi_K e^{-i \lambda_1 T} \rangle = -i \hat{A}^1_K \int_0^T u_1(t)^2 e^{ i (\lambda_K- \lambda_1) (t-T) } dt + \int_0^T u_1(t) \int_0^t u_1(\tau) \hat{k}(t, \tau) d\tau dt, \end{equation} where \begin{align} \notag \hat{A}_K^1 &:= - \frac{1}{2} \langle \ad^2_{H_1}(H_0) \varphi_1, \varphi_K \rangle = \sum \limits_{j=1}^{p} \left(\lambda_j - \frac{\lambda_1+\lambda_K}{2} \right) \langle H_1 \varphi_1, \varphi_j \rangle \langle H_1 \varphi_K, \varphi_j \rangle, \\ \label{df_kernel_quad} \hat{k}(t, \tau) &:= \sum \limits_{j=1}^p (\lambda_1- \lambda_j) (\lambda_j - \lambda_K) \langle H_1 \varphi_1, \varphi_j \rangle \langle H_1 \varphi_K, \varphi_j \rangle
e^{i
\left( \lambda_j (\tau-t) + \lambda_K(t-T) + \lambda_1(T- \tau)
\right)
}. \end{align} To identify the leading quadratic term, one can compute integrations by parts to get, for all $n \in \mathbb{N}^*$, the existence of a quadratic form $Q_n$ on $\mathbb{C}^{2n}$ such that \begin{multline} \label{quad_IPP} \langle \tild{X}_Q(T), \varphi_K e^{-i \lambda_1 T}\rangle = -i \sum \limits_{m=1}^n \hat{A}_K^m \int_0^T u_m(t)^2 e^{ i (\lambda_K- \lambda_1)(t-T) } dt \\ + \int_0^T u_n(t) \int_0^t u_n(\tau) \partial_1^{n-1} \partial^{n-1}_2 \hat{k} (t, \tau) d\tau dt + Q_n \left(
u_2(T), \ldots, u_n(T), \alpha_2^n, \ldots, \alpha_{n}^n
\right), \end{multline} where, for all $m=2, \ldots, n$, \begin{align*} \alpha_m^n &:= \int_0^T u_n(\tau) \partial_1^m \partial_{2}^{n-1} \hat{k}(T, \tau) d\tau, \\ \hat{A}_K^m &:= \sum \limits_{j=1}^{p} \left(\lambda_j - \frac{\lambda_1+\lambda_K}{2} \right) ( \lambda_K-\lambda_j)^{m-1} (\lambda_j -\lambda_1)^{m-1} \langle H_1 \varphi_1, \varphi_j \rangle \langle H_1 \varphi_K, \varphi_j \rangle. \end{align*} For more details about this kind of computation, the reader may refer, for example, to \cite[Section 3.3]{BM20} or \cite[Sections 2.2 and 5]{B21bis}. To identify the leading quadratic term, one must determine which coefficient $\hat{A}^m_K$ is the first one not to vanish. From now on, we assume that
\noindent ($\hat{H}_{\Quad}$) $\hat{A}_K^1=\hat{A}_K^2=0$ and $\hat{A}_K^3 \neq 0$.
\noindent This choice is explained later in \cref{rem:choix_derive}. Then, using \eqref{quad_IPP} for $n=3$ and Cauchy-Schwarz inequality, under ($\hat{H}_{\Quad}$), one gets that
\begin{equation} \label{bil_etape2}
\left| \langle \tild{X}_Q(T), \varphi_K e^{-i \lambda_1 T}\rangle
\right| = \O \left(
\| u_3\|^2_{L^2(0,T)} +
| u_2(T)|^2 +
| u_3(T)|^2 \right). \end{equation} Thus, provided that the boundary terms can be neglected, the leading quadratic term of the expansion along the lost direction is $\int_0^T u_3(t)^2 dt.$
\noindent \emph{Third-order term.} Substituting the explicit forms \eqref{expr:X_1,L} and \eqref{expr:X_1,Q} of $\tild{X}_L$ and $\tild{X}_Q$ into \eqref{expr:X_1,C}, the cubic term along the lost direction is given by, \begin{multline*} \langle \tild{X}_C(T), \varphi_K e^{-i \lambda_1 T}\rangle = \frac{1}{6} \langle \ad^3_{H_1}(H_0) \varphi_1, \varphi_K \rangle \int_0^T u_1(t)^3 e^{ i (\lambda_K- \lambda_1)(t-T) } dt \\ + \int_0^T u_1(t)^2 \int_0^t u_1( \tau) \hat{h}_1(t, \tau) d\tau dt + \int_0^T u_1(t) \int_0^t u_1( \tau)^2 \hat{h}_2(t, \tau) d\tau dt \\ + \int_0^T u_1(t) \int_0^t u_1( \tau) \int_0^{\tau} u_1(s) \hat{h}_3(t, \tau,s) ds d\tau dt, \end{multline*} where the cubic kernels are given by \begin{align} \label{df_kernel_cub_1} &\hat{h}_1(t, \tau) := \frac{i} {2} \sum \limits_{j=1}^p (\lambda_j - \lambda_1) \langle H_1 \varphi_1, \varphi_j \rangle \langle \ad^2_{H_1}(H_0) \varphi_K, \varphi_j \rangle e^{ i \left( \lambda_K(t-T) + \lambda_j(\tau-t) + \lambda_1(T-\tau) \right) } , \\ \label{df_kernel_cub_2} &\hat{h}_2(t, \tau) := \frac{i} {2} \sum \limits_{j=1}^p (\lambda_K - \lambda_j) \langle H_1 \varphi_K, \varphi_j \rangle \langle \ad^2_{H_1}(H_0) \varphi_1, \varphi_j \rangle e^{ i \left( \lambda_K(t-T) + \lambda_j(\tau-t) + \lambda_1(T-\tau) \right) } , \\ \notag &\hat{h}_3(t, \tau,s) := \sum \limits_{j=1}^p \sum \limits_{n=1}^p (\lambda_K - \lambda_j) (\lambda_1- \lambda_n) (\lambda_n- \lambda_j) \langle H_1 \varphi_1, \varphi_n \rangle \langle H_1 \varphi_n, \varphi_j \rangle \langle H_1 \varphi_K, \varphi_j \rangle \\ \label{df_kernel_cub_3} &\times e^{ i \left( \lambda_K(t-T) + \lambda_j(\tau-t) + \lambda_1(T-s) + \lambda_n(s- \tau) \right) } . \end{align}
For the Schrödinger PDE \eqref{Schrodinger}, formally computing the Lie brackets, one gets that $\ad^3_{\mu}(A)\varphi=0$ for all $\varphi$. Thus, to have a toy-model fitting the PDE, from now on we assume that \begin{equation} \label{hyp_cubic_en_plus} \langle \ad^3_{H_1}(H_0) \varphi_1, \varphi_K \rangle =0 . \end{equation} Then, the cubic term along the lost direction behaves as \begin{multline} \label{bil_etape3} \langle \tild{X}_C(T), \varphi_K e^{-i \lambda_1 T}\rangle = \int_0^T u_1(t)^2 \int_0^t u_1( \tau) \hat{h}_1(t, \tau) d\tau dt \\ + \int_0^T u_1(t) \int_0^t u_1( \tau)^2 \hat{h}_2(t, \tau) d\tau dt + \O (
\| u_1\|^3_{L^1(0,T)} ). \end{multline} In the small-time asymptotic, the last cubic term is negligible compared with the other two, and is thus seen as a small pollution.
\noindent \emph{Error estimate on the expansion.} With a similar proof as in \cite[Prop.\ 2.5]{B21bis}, one can compute the following error estimate, \begin{equation} \label{bil_etape4}
\| ( \tild{X} - X_1 - \tild{X}_L - \tild{X}_Q - \tild{X}_C )(T)
\| =
\O( \| u_1\|^4_{L^4(0,T)}). \end{equation}
\begin{rem}[About assumption $(\hat{H}_{\Quad}$)] \label{rem:choix_derive} When \eqref{hyp_cubic_en_plus} holds, the equality \eqref{bil_etape3} gives that in an asymptotic of small-time, the leading cubic term is $\int_0^T u_1(t)^2 u_2(t)dt$. \begin{itemize}
\item If, instead of $(\hat{H}_{\Quad}$), we assume that $\hat{A}^1_K \neq 0$, by \eqref{quad_sans_IPP}, the leading quadratic term is $\int_0^T u_1(t)^2 dt$. Thus, for controls small in $W^{-1, \infty}$, the quadratic term prevails over the cubic term and \eqref{ODE} is not $W^{-1, \infty}$-STLC. Thus, one must at least assume that $\hat{A}^1_K =0$.
\item If, instead of $(\hat{H}_{\Quad}$), we assume that $\hat{A}^1_K =0$ and $\hat{A}^2_K \neq 0$, then \eqref{quad_IPP} with $n=2$ gives that the leading quadratic term is $\int_0^T u_2(t)^2 dt$. However, the cubic term cannot absorb simultaneously such a quadratic term and the quartic term since, by Cauchy-Schwarz inequality, one has \begin{equation*}
\left| \int_0^T u_1(t)^2 u_2(t) dt
\right|^2 \leqslant \int_0^T u_2(t)^2 dt \int_0^T u_1(t)^4 dt. \end{equation*} To overcome this issue, one could try to prove a sharper error estimate than \eqref{bil_etape4}. Instead of doing this, we assume $(\hat{H}_{\Quad}$) so that the leading quadratic term is given by $\int_0^T u_3(t)^2dt$. This time, the cubic term can simultaneously handle such a quadratic term and the terms of order four and higher. \end{itemize} \end{rem}
To sum up, the goal of this section is to prove the following result. \begin{thm} Let $H_0$ and $H_1$ be symmetric matrices satisfying ($\hat{H}_{\lin}$), ($\hat{H}_{\Quad}$), \eqref{hyp_cubic_en_plus}, and ($\hat{H}_{\Cub}$).
Then, the Schrödinger ODE \eqref{ODE} is $H^2_0$-STLC around the ground state: for all $T>0$, for all $\eta>0$, there exists $\delta> 0$ such that for every $X_f \in \mathbb{C}^p$ with $\|X_f - X_1(T)\| < \delta$, there exists $u \in H^2_0((0,T),\mathbb{R})$ with $\|u\|_{H^2_0(0,T)} < \eta$ such that the solution $X$ of \eqref{ODE} satisfies \begin{equation*} X(T; \ u, \ \varphi_1) = X_f. \end{equation*} \end{thm}
\subsubsection{$i \varphi_K$ is a small-time $H^2_0$-continuously approximately reachable vector} \label{dim_finie_TV1} Working in two stages as for the polynomial toy-models \eqref{toy_model_2} and \eqref{toy_model_3}, we prove that $i \varphi_K$ is a small-time $H^2_0$-continuously approximately reachable vector associated with vector variations $\Xi(T)= i X_K(T)$. \begin{itemize} \item First, the computations of \cref{sec:expansion_dim_finie} entail the existence of a family of oscillating controls $(u_b)_{b \in \mathbb{R}}$ arbitrarily small in $H^2_0(0,T)$ such that \begin{equation} \label{step1} \langle X(T; \ u_b, \ \varphi_1), X_K(T) \rangle = i b +
\O( |b|^{1+\frac{1}{41}}) \quad \text{ when } b \rightarrow 0. \end{equation} \item Then, we make sure that \begin{equation} \label{step2}
\left\| \mathbb{P} X(T; \ u_b, \ \varphi_1) - X_1(T)
\right\| =
\O( |b|^{1+\frac{1}{41}}), \end{equation} where we recall that $\mathbb{P}$ denotes the orthogonal projection on $\H$. \end{itemize}
\ \paragraph{\textit{Step 1: Using oscillating controls along the lost direction.}} Define, for $b \in \mathbb{R}^*$, \begin{equation} \label{def_ub_dim_finie} u_b(t) = \sign(b)
|b|^{\frac{7}{41}} \phi^{(3)} \left( \frac{ t } {
|b|^{\frac{4}{41}} } \right) \text { with } \phi \in C_c^{\infty}(0,1) \text{ s.\ t.\ } \int_0^1 \phi''(\theta)^2 \phi'(\theta) d\theta = \frac{1}{\hat{C}_K},
\end{equation} with $ \hat{C}_K := \hat{h}_1(0,0) - \hat{h}_2(0,0) $ where $\hat{h}_1$ and $\hat{h}_2$ are defined in \eqref{df_kernel_cub_1} and \eqref{df_kernel_cub_2}. To ensure that $\phi$ exists, one must assume that
\noindent ($\hat{H}_{\Cub}$) $\hat{C}_K \neq 0$.
\noindent First, the same estimate as \eqref{size_ub_H2} gives that the controls are arbitrarily small in $H^2_0$ and the map $b \mapsto u_b$ can be extended continuously from $\mathbb{R}$ to $H^2_0(0,T)$. Moreover, substituting these controls into \eqref{bil_etape1}, \eqref{bil_etape2}, \eqref{bil_etape3} and \eqref{bil_etape4}, the same computations as for \eqref{toy_model_3} lead to \begin{multline*} \langle \tild{X}(T; \ u_b, \ \varphi_1), \varphi_K e^{-i \lambda_1T} \rangle = b \int_0^1 \phi''(\theta_1)^2 \int_0^{\theta_1} \phi''(\theta_2) \hat{h}_1 (
|b|^{ \frac{4} {41} } \theta_1,
|b|^{ \frac{4} {41} } \theta_2 ) d \theta_2 d\theta_1 \\ + b \int_0^1 \phi''(\theta_1) \int_0^{\theta_1} \phi''(\theta_2)^2 \hat{h}_2 (
|b|^{ \frac{4} {41} } \theta_1,
|b|^{ \frac{4} {41} } \theta_2 ) d \theta_2 d\theta_1 + \O(
|b|^{ 1 + \frac{1} {41} } ). \end{multline*} Performing the expansion of the kernels $\hat{h}_1$ and $\hat{h}_2$ when $b$ goes to zero, one has \begin{equation*} \langle \tild{X}(T; \ u_b, \ \varphi_1), \varphi_K e^{-i \lambda_1T} \rangle = i \hat{C}_K b \int_0^1 \phi''(\theta)^2 \phi'(\theta) d\theta e^{i (\lambda_1-\lambda_K)T} + \O(
|b|^{ 1 + \frac{1} {41} } ), \end{equation*} which gives \eqref{step1} by construction of $\phi$ as $\tild{X}(T)=X(T)$ when $u_1(T)=0$ (see \eqref{def_syst_aux_fin}).
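For the reader's convenience, let us record the scaling bookkeeping behind this choice of exponents (a sketch; the implicit constants depend only on $\phi$). Writing $\varepsilon := |b|^{\frac{4}{41}}$, for $|b|$ small enough (so that $\varepsilon \leqslant T$) the control \eqref{def_ub_dim_finie} is supported in $[0, \varepsilon]$, its primitives are $u_1(t) = \sign(b) |b|^{\frac{11}{41}} \phi''(t/\varepsilon)$, $u_2(t) = \sign(b) |b|^{\frac{15}{41}} \phi'(t/\varepsilon)$, $u_3(t) = \sign(b) |b|^{\frac{19}{41}} \phi(t/\varepsilon)$, and in particular $u_1(T)=u_2(T)=u_3(T)=0$. Hence \begin{equation*} \| u_b \|_{H^2_0(0,T)} \approx |b|^{\frac{1}{41}}, \qquad \int_0^T u_1(t)^2 u_2(t) dt = \frac{b}{\hat{C}_K}, \qquad \| u_3 \|^2_{L^2(0,T)} \approx |b|^{1+\frac{1}{41}}, \end{equation*} while $\| u_1 \|^3_{L^1(0,T)} \approx |b|^{1+\frac{4}{41}}$ and $\| u_1 \|^4_{L^4(0,T)} \approx |b|^{1+\frac{7}{41}}$, so that, in \eqref{bil_etape2}, \eqref{bil_etape3} and \eqref{bil_etape4}, the cubic term produces a contribution of size $b$ while the quadratic term and the remainders are indeed $\O(|b|^{1+\frac{1}{41}})$.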
\ \paragraph{\textit{Step 2: Correcting the linear components.}} Unlike for the previous polynomial toy-models, it is not straightforward to make sure that \eqref{step2} holds. Thus, in a second step, the linear components of the solution are corrected using the STLC result in projection on $\H$ given in \cref{lem:linear_test}. This gives the existence of a control $v_b \in H^2_0(T, 2T)$ such that the solution of \eqref{ODE} on $[T, 2T]$ associated to the control $v_b$ and the initial condition $X(T; \ u_b, \ \varphi_1)$ at time $T$ satisfies \begin{equation} \label{size_vb} \mathbb{P} X(2T) = X_1(2T) \quad \text{ with } \quad
\| v_b \|_{H^2_0(T, 2T)} \leqslant C
\| X(T; \ u_b, \ \varphi_1) - X_1(T)
\|. \end{equation} Then, \eqref{step2} is verified (for the final time $2T$). However, one needs to check that \eqref{step1} which holds at time $T$ thanks to Step 1, still holds at time $2T$ and has not been destroyed by the linear correction. Thus, one needs to check that \begin{equation} \label{correction}
\left| \langle X(2T), X_K(2T) \rangle - \langle X(T), X_K(T) \rangle
\right| =
\O(|b|^{1+\frac{1}{41}}) . \end{equation}
\noindent \emph{Evolution of the solution along the lost direction.} First, we prove that under ($\hat{H}_{\lin}$), one has, \begin{equation} \label{evolution_sol}
\left| \langle X(2T), X_K(2T) \rangle - \langle X(T), X_K(T) \rangle
\right| \leqslant C
\| v_b\|_{L^1(T,2T)}^2 . \end{equation} To that end, first notice that, using \eqref{dim_finie_lin}, under ($\hat{H}_{\lin}$), one has, \begin{equation*} \forall t \in [0,T], \quad \langle X(t), X_K(t) \rangle = \langle (X-X_1-X_L)(t) , X_K(t) \rangle. \end{equation*} Besides, looking at \eqref{ODE} and \eqref{ODE_lin}, $X-X_1-X_L$ is the solution of \begin{equation*} i (X-X_1-X_L)'= H_0 (X-X_1-X_L) -u(t) H_1 (X-X_1). \end{equation*} Thus, the Duhamel formula gives that the left-hand side of \eqref{evolution_sol} is estimated by \begin{multline} \label{estim:non_add_df}
\left| \int_T^{2T} v_b(t) \langle H_1 (X-X_1)(t), \varphi_K \rangle e^{-i\lambda_K (2T-t)} dt
\right| \\ \leqslant C
\| v_b \|_{L^1(T, 2T)} \sup_{t \in [T, 2T]} \| (X-X_1)(t)\|. \end{multline} Moreover, writing the Duhamel formula for the equation satisfied by $X-X_1$, one gets similarly that \begin{equation} \label{df_estim_lin}
\sup_{t \in [T, 2T]} \| (X-X_1)(t)\| \leqslant C \| v_b\|_{L^1(T,2T)} \sup_{t \in [T, 2T]} \| X(t) \|. \end{equation} Besides, taking the scalar product of \eqref{ODE} with $X$ and then the imaginary part of the corresponding equality, one gets that the norm of $X$ is preserved. Thus, putting together all these estimates, one gets \eqref{evolution_sol}.
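For completeness, the conservation of the norm used above follows from a one-line computation: for any control, since $H_0$ and $H_1$ are real symmetric matrices, the quantity $\langle (H_0 - u(t) H_1) X(t), X(t) \rangle$ is real, so that \begin{equation*} \frac{d}{dt} \| X(t) \|^2 = 2 \Re \langle X'(t), X(t) \rangle = 2 \Im \langle (H_0 - u(t) H_1) X(t), X(t) \rangle = 0 . \end{equation*}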
\noindent \emph{Estimate on the end-point of the solution at time $T$.} Thus, to prove \eqref{correction}, using \eqref{size_vb} and \eqref{evolution_sol}, it is enough to prove that \begin{equation} \label{goal_estim}
\| X(T; \ u_b, \ \varphi_1) - X_1(T)
\|^2 =
\O(|b|^{1+\frac{1}{41}}), \end{equation}
that is, we need to estimate the error on the linear part of the solution when using the oscillating controls \eqref{def_ub_dim_finie}. Notice that by the Duhamel formula, as in \eqref{df_estim_lin}, a straightforward estimate is given by \begin{equation*}
\| X(T; \ u_b, \ \varphi_1) - X_1(T)
\| \leqslant C
\| u_b\|_{L^1(0,T)} = \O(
|b|^{\frac{11}{41}} ), \end{equation*} by definition \eqref{def_ub_dim_finie} of the family $(u_b)_{b \in \mathbb{R}}$. This is not enough to prove \eqref{goal_estim}. One can compute a sharper estimate by writing instead that \begin{equation} \label{goal_estim_0}
\| X(T; \ u_b, \ \varphi_1) - X_1(T)
\| \leqslant
\| X_L(T; \ u_b, \ \varphi_1)
\| +
\| (X-X_1-X_L)(T; \ u_b, \ \varphi_1)
\|. \end{equation} Moreover, using the estimate given in \cite[Prop.\ 2.6]{B21bis}, one has \begin{equation} \label{goal_estim_1}
\| (X-X_1-X_L)(T; \ u_b, \ \varphi_1)
\| \leqslant C
\| u_1\|_{L^2(0,T)}^2 = \O (
|b|^{ \frac{26}{41} } ), \end{equation} looking at \eqref{def_ub_dim_finie}. Moreover, looking at the explicit computations given in \eqref{dim_finie_lin}, the linear part is estimated by \begin{equation} \label{goal_estim_2}
\| X_L(T; \ u_b, \ \varphi_1)
\| \leqslant C \max_{j=1, \ldots, p}
\left| \int_0^T u_b(t) e^{i(\lambda_j-\lambda_1)(t-T)} dt
\right| . \end{equation} Besides, for every $j=2, \ldots, p$, performing three integrations by parts as $u_1(T)=u_2(T)=u_3(T)=0$, one has \begin{equation} \label{goal_estim_3}
\left| \int_0^T u_b(t) e^{i(\lambda_j-\lambda_1)(t-T)} dt
\right| =
\left| (\lambda_j-\lambda_1)^3 \int_0^T u_3(t) e^{i(\lambda_j-\lambda_1)(t-T)} dt
\right| = \O(
|b|^{ \frac{23}{41} } ) \end{equation} looking at \eqref{def_ub_dim_finie}. Notice that this also holds for $j=1$ because in this case, the left hand-side is equal to $u_1(T)=0$. Therefore, \eqref{goal_estim_0}, \eqref{goal_estim_1}, \eqref{goal_estim_2}, \eqref{goal_estim_3} lead to \begin{equation*}
\| X(T; \ u_b, \ \varphi_1) - X_1(T)
\| = \O(
|b|^{\frac{23}{41}} ), \end{equation*} which gives \eqref{goal_estim} since $2 \times \frac{23}{41} = \frac{46}{41} \geqslant 1 + \frac{1}{41}$, concluding Step 2.
\subsubsection{$\varphi_K$ is a small-time $H^2_0$-continuously approximately reachable vector} \label{dim_finie_TV2} As for the polynomial toy-model \eqref{toy_model_3}, the second approximately reachable vector is built from the first one in a way inspired by the work \cite[Th.\ 6]{HK87}.
To that end, denote by $(u_b)_{b \in \mathbb{R}}$ the control variations associated with $i \varphi_K$ constructed in \cref{dim_finie_TV1}. The goal is to prove that for every $b, c \in \mathbb{R}$ small enough, \begin{equation} \label{goal_TV2} X(3T; \ u_b \# 0_{[0, T]} \# u_c, \ \varphi_1) = X_1(3T) + ( i c e^{ 2i( \lambda_K-\lambda_1)T } + i b ) X_K(3T) +
\O( | (b, c) |^{1+\frac{1}{41}}) . \end{equation} Thus, for all $T \in \left(0, \frac{\pi}{2(\lambda_K-\lambda_1)} \right)$, taking $c=-\frac{\alpha}{\sin(2(\lambda_K-\lambda_1)T)}$ and $b=-c \cos(2(\lambda_K-\lambda_1)T)$, this provides a family of controls $(v_{\alpha})_{\alpha \in \mathbb{R}}$ such that when $\alpha$ goes to zero, \begin{equation*} X(3T; \ v_{\alpha}, \ \varphi_1) = X_1(3T) + \alpha X_K(3T) + \O (
|\alpha|^{1+\frac{1}{41}} ) \quad \text{ with }
\| v_{\alpha}\|_{H^2_0(0,3T)} \leqslant C |\alpha|^{\frac{1}{41}}. \end{equation*} This will give that $\varphi_K$ is a small-time $H^2_0$-continuously approximately reachable vector associated with vector variations $\Xi(T)= X_K(T)$.
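Let us check this elementary algebra. Setting $\omega := \lambda_K - \lambda_1$ (so that $\sin(2 \omega T) \neq 0$ for the chosen $T$), the choice $c = -\frac{\alpha}{\sin(2 \omega T)}$ and $b = -c \cos(2 \omega T)$ gives \begin{equation*} i c e^{2 i \omega T} + i b = i \left( c \cos(2 \omega T) + b \right) - c \sin(2 \omega T) = \alpha , \end{equation*} and $|b| + |c| \leqslant C |\alpha|$ for some constant $C=C(T)>0$, which explains both the size of the variation $\alpha X_K(3T)$ and the estimate $\| v_{\alpha}\|_{H^2_0(0,3T)} \leqslant C |\alpha|^{\frac{1}{41}}$.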
So, it remains to prove \eqref{goal_TV2}. First, by construction of the family $(u_b)_{b \in \mathbb{R}}$, one has \begin{equation*} X(T; \ u_b, \ \varphi_1) = X_1(T) + i b X_K(T) +
\O( |b|^{1+\frac{1}{41}}) \quad \text{ with }
\| u_b\|_{H^2_0(0,T)} \leqslant C
|b|^{\frac{1}{41}}. \end{equation*} Then, on $[T, 2T]$, no control is activated, so, \begin{equation*} X(2T; \ u_b \# 0_{[0,T]}, \ \varphi_1) =
e^{-iH_0 T}X(T; \ u_b, \ \varphi_1) = X_1(2T) + ib X_K(2T) +
\O( |b|^{1+\frac{1}{41}}). \end{equation*} Moreover, using the semi-group property of the equation, one has \begin{equation} \label{sur_3T_4T} X(3T; \ u_b \# 0_{[0,T]} \# u_c, \ \varphi_1) = X(T; \ u_c, \ X(2T; \ u_b \# 0_{[0,T]}, \ \varphi_1)) . \end{equation}
Besides, using the Gronwall lemma, one proves the existence of $C>0$ such that for all $\tau>0$, $p \in \mathbb{C}^p$ with $\| p\| < 1$ and $u \in H^2_0(0,T)$ with $\| u\|_{H^2_0} <1$, \begin{equation} \label{Gronwall_df}
\| X(T; \ u, \ X_1(\tau)+p) - X(T; \ u, \ \varphi_1) e^{-i \lambda_1 \tau} - e^{-i H_0 T} p
\| \leqslant C
\| u
\|_{H^2(0,T)}
\| p
\|. \end{equation} The proof is left to the reader but one can refer to \cref{prop:dep_ci} for a similar proof for the Schrödinger PDE.
Taking $u=u_c$, $\tau=2T$ and $p_b=ib X_K(2T) + \O( |b|^{1+\frac{1}{41}})$, one gets that $\| u_c\|_{H^2(0,T)} \| p_b\|=\O(|c|^{\frac{1}{41}} |b|)$. Moreover, by construction of $(u_c)$,
$$X(T; \ u_c, \ \varphi_1)= X_1(T) + i cX_K(T) + \O( |c|^{1+\frac{1}{41}}).$$
Thus, using \eqref{Gronwall_df}, \eqref{sur_3T_4T} becomes \begin{equation*} X(3T; \ u_b \# 0_{[0, T]} \# u_c, \ \varphi_1) =
X_1(3T)+ ic X_K(T) e^{-2 i \lambda_1 T} + ib X_K(3T) + \O( |(b, c)|^{1+\frac{1}{41}}), \end{equation*} which concludes the proof of \eqref{goal_TV2}, since $X_K(T) e^{-2 i \lambda_1 T} = \varphi_K e^{-i \lambda_K T} e^{-2 i \lambda_1 T} = e^{2i(\lambda_K - \lambda_1)T} X_K(3T)$.
\subsubsection{Towards the Schrödinger PDE} Let us state here the main difficulties we are going to face for the Schrödinger PDE \eqref{Schrodinger} compared to the ODE \eqref{ODE}. \begin{itemize} \item In \cref{sec:expansion}, the computations of the expansion of the solution will be quite similar. The only difference is that the kernels will be defined as function series. Thus, the regularity and boundedness of such kernels needed to perform integrations by parts will not be straightforward but will stem from (H$_{\reg})$.
\item The main difficulty for the PDE will be to prove that $i \varphi_K$ is a $H^2_0$-continuously approximately reachable vector (the second approximately reachable vector will be deduced from the first with the same proof). Using the same oscillating controls as for the ODE, we will have similarly that \begin{equation} \label{idee1} \langle \psi(T; \ u_b, \ \varphi_1) , \psi_K(T) \rangle = i b + \O(
|b|^{1+\frac{1}{41}} ). \end{equation} Then, contrary to the finite-dimensional case, we will need to correct an infinite number of linear directions. This will also be done using the STLC in projection on $\H$ to get the existence of $v_b \in H^2_0(T, 2T)$ such that \begin{equation*} \mathbb{P} \psi(2T) = \psi_1(2T). \end{equation*} The core of the paper is to prove that such a linear correction has not destroyed the work in \eqref{idee1}. This is done using two ingredients. \begin{itemize} \item The STLC result in projection provides an estimate, \eqref{size_vb} in finite dimension, on the linear control by the data to be reached. For the Schrödinger PDE, the classical estimate giving that the $L^2$-norm of the control is estimated by the data to be controlled in the $H^{3}_{(0)}$-norm is not sharp enough. The whole work of \cite{B21} has consisted in establishing sharper (and simultaneous) estimates on the control to make this step work.
\item Also, in finite dimension, the evolution of the solution along the lost dimension \eqref{evolution_sol} is estimated by the $L^1$-norm of the linear control. Once again, this will not be sharp enough for the Schrödinger PDE. That is why in \cref{sec:non_add}, we quantify more precisely the evolution of the solution along the lost direction.
\end{itemize}
\end{itemize}
\section{Well-posedness and STLC of the Schrödinger equation} \label{sec:WP_Schro}
\subsection{Well-posedness of the Schrödinger equation} In this section, we recall the result given in \cite[Theorem 2.1]{B21} about the existence and uniqueness of the solution of the following Cauchy problem, stressing the link between the regularity of the solution and the boundary conditions on the dipolar moment $\mu$, \begin{equation} \label{Schro_source_term} \left\{
\begin{array}{ll}
i \partial_t \psi(t,x) = - \partial^2_x \psi(t,x) -u(t)\mu(x)\psi(t,x) -f(t,x), \quad &(t,x) \in (0,T) \times (0,1),\\
\psi(t,0) = \psi(t,1)=0, \quad &t \in (0,T), \\
\psi(0,x) = \psi_0(x), \quad &x \in (0,1).
\end{array} \right. \end{equation}
\begin{thm} \label{wp} Let $T>0$, $(p,k) \in \mathbb{N}^2$, $\mu \in H^{2(p+k)+3}( (0,1), \mathbb{R})$ with $\mu^{(2n+1)}(0)=\mu^{(2n+1)}(1)=0$ for all $n=0, \ldots, p-1$ , $u \in H^{k}_0( (0,T), \mathbb{R})$, $\psi_0 \in H^{2(p+k)+3}_{(0)}(0,1)$ and $f \in H^{k}_0 ( (0,T), H^{2p+3} \cap H^{2p+1}_{(0)}(0,1))$. There exists a unique solution of \eqref{Schro_source_term}, that is a function $\psi \in C^{k}( [0,T], H^{2p+3}_{(0)}(0,1))$ with $\psi(T)$ in $H^{2(p+k)+3}_{(0)}(0,1)$ such that the following equality holds in $H^{2p+3}_{(0)}$ for every $t \in [0,T]$: \begin{equation*} \psi(t) = e^{-iAt} \psi_0 + i \int_0^t e^{-iA (t- \tau)} \left( u(\tau) \mu \psi(\tau) + f(\tau) \right) d\tau. \end{equation*}
Moreover, for every $R>0$, there exists $C=C(T, \mu, R)>0$ such that if $\| u \|_{H^k_0(0,T)} < R$, then this solution satisfies \begin{equation*}
\| \psi(T)\|_{H^{2(p+k)+3}_{(0)}}, \
\| \psi \|_{C^k( [0,T], H^{2p+3}_{(0)})} \leqslant C \left(
\| \psi_0 \|_{H^{2(p+k)+3}_{(0)}} +
\| f \|_{H^k( (0,T), H^{2p+3} \cap H^{2p+1}_{(0)})} \right). \end{equation*} \end{thm} We will sometimes write $\psi( \cdot ; \ u, \ \psi_0)$ to denote the solution of \eqref{Schrodinger} associated with control $u$ and initial data $\psi_0$ when we need to keep track of such a dependency.
\begin{rem} \label{rem:cont_mu} Notice that when $\mu$ satisfies (H$_{\reg}$), the multiplication operators \begin{align} \label{cont_mu} \varphi \quad &\mapsto \quad \mu \varphi, \\ \label{cont_exp} \varphi \quad &\mapsto \quad e^{ i \alpha \mu} \varphi, \quad \alpha \in \mathbb{R}, \end{align} map continuously $H^7_{(0)}$ and $H^7 \cap H^5_{(0)}$ into $H^7 \cap H^5_{(0)}$ but do not map continuously $H^7_{(0)}$ into $H^7_{(0)}$. Moreover, the operator \begin{equation} \label{cont_der} \varphi \quad \mapsto \quad 2 \mu' \varphi' + \mu'' \varphi, \end{equation} maps continuously $H^7 \cap H^5_{(0)}$ into $H^6 \cap H^3_{(0)}$ and the operator \begin{equation} \label{cont_mu_der} \varphi \quad \mapsto \quad \mu'^2 \varphi, \end{equation} maps continuously $H^7 \cap H^5_{(0)}$ into $H^7 \cap H^5_{(0)}$. Indeed, for \eqref{cont_mu}, the Leibniz formula gives, for $n=0,1,2$, \begin{equation*} ( \mu \varphi)^{(2n)}
= \sum \limits_{k=0}^n \binom{2n}{2k} \mu^{(2k)}
\varphi^{(2n-2k)}
+ \sum \limits_{k=0}^{n-1} \binom{2n}{2k+1} \mu^{(2k+1)}
\varphi^{(2n-2k-1)}
. \end{equation*} Thus, if $\varphi \in H^7_{(0)}$, for all $n=0, 1, 2$, $( \mu \varphi)^{(2n)}$ vanishes at $x=0,1$ because for all $k \in \{0, \ldots, n\}$, $\varphi^{(2n-2k)}$ does and for all $k \in \{0, \ldots, n-1\}$, $\mu^{(2k+1)}$ does. This gives the continuity of \eqref{cont_mu}. Notice that one cannot go higher since, for $( \mu \varphi)^{(6)}$, the term $\mu^{(5)} \varphi'$ in the sum does not vanish at $0$ and $1$. The other continuities are proved in the same way. These continuities will be the key to prove the well-posedness of the equations considered in the following (see \cref{rem:wp}, Sections \ref{an_auxiliary_system} and \ref{expansion_aux}). \end{rem}
\begin{rem} \label{rem:wp} Thanks to \cref{rem:cont_mu} on \eqref{cont_mu}, from \cref{wp} with $p=k=2$, one deduces that, when $\mu$ satisfies (H$_{\reg}$), for every $\psi_0 \in H^{11}_{(0)}$, $\phi \in C^2( [0,T], H^{7}_{(0)})$ and $u, v \in H^2_0(0,T)$, the Schrödinger equation \begin{equation*} \left\{
\begin{array}{ll}
i \partial_t \psi(t,x) = - \partial^2_x \psi(t,x) -u(t)\mu(x) \psi - v(t)\mu(x)\phi, \quad &(t,x) \in (0,T) \times (0,1),\\
\psi(t,0) = \psi(t,1)=0, \quad &t \in (0,T), \\
\psi(0,x) = \psi_0(x), \quad &x \in (0,1).
\end{array} \right. \end{equation*}
admits a unique solution $\psi \in C^{2}( [0,T], H^{7}_{(0)}(0,1))$ with $\psi(T)$ in $H^{11}_{(0)}(0,1)$. Moreover, for every $R>0$, there exists $C=C(T, \mu, R)>0$ such that if $\| u \|_{H^2_0(0,T)} < R$, this solution satisfies \begin{equation} \label{estim_sol_bis}
\| \psi(T)\|_{H^{11}_{(0)}}, \quad
\| \psi \|_{C^2( [0,T], H^{7}_{(0)})} \leqslant C \left(
\| \psi_0 \|_{H^{11}_{(0)}} +
\| v \|_{H^2_0(0,T)}
\| \phi \|_{C^2( [0,T], H^{7}_{(0)})} \right). \end{equation} This will be the regularity on solutions used in all this paper. \end{rem}
\subsection{Dependency of the solution with respect to the initial condition} From the well-posedness result given in \cref{wp}, one can deduce the following result about the dependency of the solution of \eqref{Schrodinger} with respect to the initial condition. \begin{prop} \label{prop:dep_ci}
Let $T>0$, $\mu$ satisfying (H$_{\reg}$), $\psi_0 \in H^{11}_{(0)}(0,1)$ and $\tau \in \mathbb{R}$. For all $R>0$, there exists $C=C(T, \mu, R)>0$ such that for all $u \in H^2_0(0,T)$ with $\| u \|_{H^2_0(0,T)}<R$, one has \begin{equation*}
\| \psi(T; \ u, \ \psi_1(\tau) +\psi_0) -\psi(T; \ u, \ \varphi_1)e^{-i\lambda_1 \tau} - e^{-iAT}\psi_0 \|_{H^{11}_{(0)}} \leqslant
C \| u \|_{H^2_0(0,T)} \| \psi_0 \|_{H^{11}_{(0)}}. \end{equation*} \end{prop}
\begin{proof} Define, for all $t \in [0,T]$, $ \Lambda(t):=\psi(t; \ u, \ \psi_1(\tau) +\psi_0) -\psi(t; \ u, \ \varphi_1)e^{-i\lambda_1 \tau} - e^{-iAt}\psi_0. $ Notice that $\Lambda$ is the solution of, \begin{equation*} i \partial_t \Lambda = - \partial^2_x \Lambda-u(t)\mu(x)\Lambda -u(t) \mu(x) e^{-iAt}\psi_0, \end{equation*} with Dirichlet conditions and $\Lambda(0,\cdot)=0$.
Therefore, \cref{rem:wp} gives the existence of $C>0$ such that \begin{equation*}
\| \Lambda(T) \|_{H^{11}_{(0)}}
\leqslant C \| u \|_{H^2_0(0,T)} \| e^{-iAt} \psi_0 \|_{C^2( [0,T], H^{7}_{(0)})} =
C \| u \|_{H^2_0(0,T)} \| \psi_0 \|_{ H^{11}_{(0)} } . \end{equation*} \end{proof}
\subsection{Controllability in projection by the linear test with simultaneous estimates} In this section, we recall the local controllability result in projection by the linear test given in \cite{B21} as it will be useful in this paper. To that end, we introduce the following notations: if $J$ is a subset of $\mathbb{N}^*$, we define the space $ \H := \overline{\Span_{\mathbb{C}}} \left( \varphi_j , \ j \in J \right) $ and the orthogonal projection on $\H$ given by $ \mathbb{P}_{\J}(\psi) = \psi - \sum \limits_{j \not\in J} \langle \psi, \varphi_j \rangle \varphi_j $ for all $\psi \in L^2(0,1)$.
\begin{thm} \label{linear_STLC} Let $(p, k) \in \mathbb{N}^2$ with $p \geqslant k$, $J$ a subset of $\mathbb{N}^*$ and $\mu \in H^{2(p+k)+3}( (0,1), \mathbb{R})$ such that $\mu^{(2n+1)}(0)=\mu^{(2n+1)}(1)=0$ for all $n=0, \ldots, p-1$ and \begin{equation} \label{hyp_mu} \text{there exists a constant } c>0 \text{ such that for all } j \in J, \quad
| \langle \mu \varphi_1, \varphi_j \rangle | \geq \frac{c}{j^{2p+3}}. \end{equation} Then, the Schrödinger equation \eqref{Schrodinger} is STLC in projection around the ground state with controls in $H^m_0(T_0,T)$ and targets $H^{2(p+m)+3}_{(0)}$ for every $m \in \{0, \ldots, k \}$ with the same control map.
\noindent More precisely, for all initial time $T_0 \geqslant 0$ and final time $T> T_0$, there exists $C$, $\delta >0$ and a $C^1$-map $\Gamma_{T_0,T} : \Omega_{T_0} \times \Omega_T \rightarrow H^k_0( (T_0,T), \mathbb{R})$ where \begin{align} \label{def_Omega_T0}
\Omega_{T_0} &:= \{ \psi_0 \in \S \cap H^{2(p+k)+3}_{(0)} ; \ \| \psi_0 - \psi_1(T_0) \|_{H^{2(p+k)+3}_{(0)}} < \delta \}, \\ \label{def_Omega_T}
\Omega_T &:= \{ \psi_f \in \H \cap H^{2(p+k)+3}_{(0)} ; \ \| \psi_f - \mathbb{P}_{\J} \left( \psi_1(T) \right) \|_{H^{2(p+k)+3}_{(0)}} < \delta \}, \end{align} such that $\Gamma_{T_0,T}(\psi_1(T_0), \psi_1(T))=0$ and for every $(\psi_0, \psi_f) \in \Omega_{T_0} \times \Omega_T$, the solution of \eqref{Schrodinger} on $[T_0, T]$ with control $u:=\Gamma_{T_0,T}(\psi_0,\psi_f)$ and initial condition $\psi_0$ at $t=T_0$ satisfies \begin{equation} \label{contr_proj}
\mathbb{P}_{\J} \left( \psi(T) \right)= \psi_f, \end{equation} with the following boundary conditions on the control \begin{equation} \label{eq:weak_bc_nl} u_2(T)= \ldots= u_{k+1}(T)=0, \end{equation} where here $(u_n)_{n \in \mathbb{N}}$ denotes the iterated primitives of $u$ vanishing at $T_0$. Besides, for all $m$ in $\{-(k+1), \ldots, k\}$, the following estimates hold \begin{equation} \label{estim_contr_nl}
\| u \|_{H^m_0(T_0,T)} \leqslant C \left(
\| \psi_0 - \psi_1(T_0) \|_{H^{2(p+m)+3}_{(0)}} +
\| \psi_f - \mathbb{P}_{\J} \psi_1(T) \|_{H^{2(p+m)+3}_{(0)}} \right) . \end{equation} \end{thm}
\section{Error estimates on the expansion of the solution} \label{sec:expansion} The goal of this section is to compute the power series expansion of the solution $\psi$ of the Schrödinger equation \eqref{Schrodinger} up to order 3 with a sharp error estimate, as it is the key to prove \cref{the_theorem}. Throughout this section, unless otherwise stated, we will work with controls $u$ at least in $H^2_0(0,T)$ and with a dipolar moment $\mu$ satisfying (H$_{\reg})$.
\subsection{Formal expansion of the solution} Formally, expanding the solution of \eqref{Schrodinger} around the trajectory $(\psi_1, u \equiv 0)$, \begin{itemize} \item the first-order term $\Psi$ of the expansion of $\psi$ is solution of, \begin{equation} \left\{
\begin{array}{ll}
i \partial_t \Psi = - \partial^2_x \Psi -u(t)\mu(x) \psi_1(t,x) , \\
\Psi(t,0) = \Psi(t,1)=0,\\
\Psi(0,x)=0,
\end{array} \right. \label{order1} \end{equation} which can be explicitly computed as, \begin{equation} \label{order1explicit} \Psi(t)=i \sum \limits_{j=1}^{+\infty} \left(
\langle \mu \varphi_1, \varphi_j\rangle \int_0^t u(\tau) e^{i (\lambda_j-\lambda_1)\tau} d\tau
\right)
\psi_j(t), \quad t \in [0,T]. \end{equation}
\item The second-order term $\xi$ of the expansion of $\psi$ is solution of, \begin{equation} \left\{
\begin{array}{ll}
i \partial_t \xi = - \partial^2_x \xi -u(t)\mu(x) \Psi(t,x) , \\
\xi(t,0) = \xi(t,1)=0,\\
\xi(0,x)=0,
\end{array} \right. \label{order2} \end{equation} \item and the third-order term $\zeta$ of the expansion of $\psi$ is solution of, \begin{equation} \left\{
\begin{array}{ll}
i \partial_t \zeta = - \partial^2_x \zeta -u(t)\mu(x) \xi(t,x) , \\
\zeta(t,0) = \zeta(t,1)=0,\\
\zeta(0,x)=0.
\end{array} \right. \label{order3} \end{equation} \end{itemize} The goal of this section is to quantify in what sense the following expansion holds rigorously \begin{equation} \label{expansion} \psi \approx \psi_1 + \Psi + \xi + \zeta. \end{equation} Such an expansion will be studied under the following asymptotic. \begin{defi} \label{def:O}
Given two scalar quantities $A(T,u)$ and $B(T,u)$, we will write $A(T,u)=\O \left( B(T,u) \right)$ if there exists $C, T^*>0$ such that for any $T \in (0, T^*)$, there exists $\eta > 0$ such that for all $u \in H^2_0(0,T)$ with $\|u\|_{H^2_0(0,T)} < \eta$, we have $|A(T,u)| \leqslant C |B(T,u)|.$ \end{defi}
Thus, the notation $\O$ refers to the convergence $\| u\|_{H^2_0(0,T)} \rightarrow 0$ and holds uniformly with respect to the final time on a small time interval $[0, T^*]$. All estimates will be computed under this asymptotic as the goal stated in \cref{the_theorem} is to prove $H^2_0$-STLC.
But first, before computing any estimate, we state a well-posedness result about all the equations considered, which directly stems from \cref{rem:wp}.
\begin{prop} \label{wp_syst_init} Let $\mu$ satisfying (H$_{\reg}$) and $u$ in $H^2_0( (0,T), \mathbb{R})$. Then, there exists a unique solution $\psi$ (resp.\ $\Psi$, $\xi$ and $\zeta$) of \eqref{Schrodinger} (resp.\ \eqref{order1}, \eqref{order2} and \eqref{order3}) belonging to $C^2( [0,T], H^7_{(0)}(0,1))$ with $\psi(T)$ (resp.\ $\Psi(T)$, $\xi(T)$ and $\zeta(T)$) in $H^{11}_{(0)}(0,1)$. Moreover, the following estimate holds, \begin{equation} \label{psi_borne}
\| \psi \|_{C^2( [0,T], H^7_{(0)})} = \O \left( 1 \right). \end{equation}
\end{prop}
\subsection{An auxiliary system} \label{an_auxiliary_system} Our goal is to prove that when $\mu$ satisfies \eqref{lin_nul}, (H$_{\Quad}$) and (H$_{\Cub}$), the leading term of the solution $\psi$ of the Schrödinger equation \eqref{Schrodinger} along the lost direction is precisely the cubic term $\int_0^T u_1(t)^2u_2(t)dt$ in the asymptotic given in \cref{def:O}. Thus, we seek to prove that such a cubic term can absorb both the quadratic term and the terms of order four and higher. Therefore, classical error estimates on the expansion \eqref{expansion} involving the $L^2$-norm of the control $u$ are not sharp enough because they cannot be absorbed by such a cubic term. As in \cref{sec:expansion_dim_finie}, one can compute sharper estimates, involving rather the $L^2$-norm of the time primitive $u_1$ of the control $u$, by introducing the new state \begin{equation} \label{link} \widetilde{\psi}(t,x):=\psi(t,x) e^{-i u_1(t) \mu(x)}, \quad (t,x) \in [0,T] \times [0,1]. \end{equation} This new state satisfies the following equation, called the auxiliary system, \begin{equation} \label{system_aux} \left\{
\begin{array}{ll}
i \partial_t \widetilde{\psi} = - \partial^2_x \widetilde{\psi} -iu_1(t) ( 2 \mu'(x) \partial_x \widetilde{\psi} + \mu''(x) \widetilde{\psi} )+u_1(t)^2 \mu'(x)^2 \widetilde{\psi}, \\
\widetilde{\psi}(t,0) = \widetilde{\psi}(t,1)=0,\\
\widetilde{\psi}(0,x)=\varphi_1.
\end{array} \right. \end{equation}
\begin{prop} \label{wp_syst_aux} Let $\mu$ satisfying (H$_{\reg}$) and $u_1$ in $H^3_0( (0,T), \mathbb{R})$. There exists a unique solution $\tild{\psi}$ of \eqref{system_aux} in $C^2( [0,T], H^7 \cap H^5_{(0)})$, which satisfies \begin{equation} \label{estim_aux}
\| \tild{\psi} \|_{C^2( [0,T], H^7 \cap H^5_{(0)})} = \O \left( 1 \right). \end{equation} Moreover, the following equality holds in $H^5_{(0)}(0,1)$ for every $t \in [0,T]$, \begin{equation} \label{weak_sol_aux} \tild{\psi}(t) = \psi_1(t) - \int_0^t e^{-iA(t-\tau)} \left( u_1(\tau) \left( 2 \mu' \partial_x + \mu'' \right) \tild{\psi}(\tau) + i u_1(\tau)^2 \mu'^2 \tild{\psi}(\tau) \right) d\tau. \end{equation} \end{prop}
To prove \eqref{weak_sol_aux}, we need to recall the following smoothing effect first proved in \cite{BL10} and then generalized in \cite{B21}. \begin{prop} \label{estim_G_Ck} Let $(p, k) \in \mathbb{N}^2$. There exists a non-decreasing function $C : [0, +\infty) \rightarrow (0, +\infty)$ such that for all $T \geqslant 0$ and for all $f \in H^{k}_0((0,T), H^{2p+3} \cap H^{2p+1}_{(0)}(0,1))$, the function $G: t \mapsto \int_0^t e^{-iA(t-\tau)} f(\tau) d\tau$ belongs to $C^k( [0,T], H^{2p+3}_{(0)}(0,1))$ with the following estimate, \begin{equation}
\| G \|_{C^k( [0,T], H^{2p+3}_{(0)})} \leqslant C \| f \|_{H^k((0,T), H^{2p+3} \cap H^{2p+1}_{(0)})}. \end{equation} \end{prop}
\begin{rem} Because of the term $\partial_x \tild{\psi}$ in \eqref{system_aux}, up to now, the well-posedness of the auxiliary system is only understood through its link \eqref{link} with the Schrödinger equation and is not proved directly using for example a fixed-point argument on the formulation \eqref{system_aux}. However, one needs to be very careful: the multiplication by the exponential factor in \eqref{link} preserves the regularity but not the boundary conditions of $\psi$. More precisely, the continuity of the operators given in \cref{rem:cont_mu} is the key to know which boundary conditions can be deduced for the auxiliary system from the Schrödinger equation and which can't. \end{rem}
\begin{proof}[Proof of \cref{wp_syst_aux}]
\noindent \emph{Regularity.} By \cref{wp_syst_init}, under these hypotheses, the solution $\psi$ of the Schrödinger equation \eqref{Schrodinger} is $C^2([0,T], H^7_{(0)})$. Thus, from the continuity given in \cref{rem:cont_mu} on \eqref{cont_exp}, the function $\tild{\psi}$ defined by \eqref{link} is $C^2([0,T], H^7 \cap H^5_{(0)})$. Moreover, \eqref{estim_aux} follows from \eqref{psi_borne}.
\noindent \emph{Equation.} For this regularity, the Schrödinger equation \eqref{Schrodinger} is satisfied for every $t$ in $H^5_{(0)}$. Thus,
computations prove that \eqref{system_aux} holds for every $t$ in $H^5 \cap H^3_{(0)}$ using \cref{rem:cont_mu} with \eqref{cont_der} and \eqref{cont_mu_der}.
\noindent \emph{Uniqueness.} For every function $\tild{\psi}$ satisfying the first equation of \eqref{system_aux} for every $t \in [0, T]$ in $H^5 \cap H^3_{(0)}$, an energy estimate proves that its $L^2$-norm is preserved. This implies the uniqueness for solutions in $C^2([0,T], H^7 \cap H^5_{(0)})$ (see \cite[Prop. 4.6 and 4.7]{B21bis} for more details on such an energy estimate).
\noindent \emph{Weak formulation.} Denote by $\hat{\psi}$ the right-hand side of \eqref{weak_sol_aux}. The continuity of \eqref{cont_der} and \eqref{cont_mu_der} stated in \cref{rem:cont_mu} implies that the functions integrated in the definition of $\hat{\psi}$ belong to $H^2_0( (0,T), H^5 \cap H^3_{(0)})$. Thus, the smoothing effect stated in \cref{estim_G_Ck} with $(p,k)=(1,2)$ entails that $\hat{\psi}$ belongs to $C^2([0,T], H^5_{(0)}).$ Moreover, computations prove that $\hat{\psi}$ satisfies \eqref{system_aux} for every $t \in [0,T]$ in $H^5 \cap H^3_{(0)}$. Thus, the uniqueness result above gives \eqref{weak_sol_aux}. \end{proof}
\subsection{Computation of the expansion of the auxiliary system} \label{expansion_aux} One can compute by hand the expansion of the solution $\tild{\psi}$ of the auxiliary system \eqref{system_aux} around the ground state, up to order 3.
\noindent \emph{First-order term.}
The linearized system of \eqref{system_aux} around the trajectory $(\psi_1, u=0)$ is given by \begin{equation} \left\{
\begin{array}{ll}
i \partial_t \widetilde{\Psi} = - \partial^2_x \widetilde{\Psi} -iu_1(t) \left( 2 \mu' \partial_x \psi_1 + \mu'' \psi_1 \right) , \\
\widetilde{\Psi}(t,0) = \widetilde{\Psi}(t,1)=0,\\
\widetilde{\Psi}(0,x)=0.
\end{array} \right. \label{order1aux} \end{equation} Linearizing \eqref{link}, $\widetilde{\Psi}$ is also given by, \begin{equation} \label{link_order1} \tild{\Psi}(t) = \Psi(t) - i u_1(t) \mu \psi_1(t), \quad t \in [0,T], \end{equation} where $\Psi$ is the solution of \eqref{order1}. Recall that by \cref{wp_syst_init}, $\Psi$ belongs to $C^2([0,T], H^7_{(0)})$. Thus, the continuity of \eqref{cont_mu} given in \cref{rem:cont_mu} and the link \eqref{link_order1} entail that $\tild{\Psi}$ belongs to $C^2([0,T], H^7 \cap H^5_{(0)})$. As before, using \cref{estim_G_Ck} with $(p,k)=(1, 0)$, the following equality holds in $H^5_{(0)}(0,1)$ for every $t \in [0,T]$, \begin{equation} \label{order1aux_sp} \tild{\Psi}(t) = - \int_0^t e^{-iA(t -\tau)} u_1(\tau) \left( 2 \mu' \partial_x \psi_1(\tau) + \mu'' \psi_1(\tau) \right) d\tau, \end{equation} and moreover, the following estimate holds \begin{equation} \label{estim_lin_aux}
\| \tild{\Psi} \|_{C^0( [0,T], H^{5}_{(0)}(0,1))} = \O \left(
\| u_1\|_{L^2(0,T)} \right) . \end{equation} Moreover, the solution of \eqref{order1aux} can be computed explicitly as \begin{equation} \label{expr:order1_aux} \tild{\Psi}(t)
= \sum \limits_{j=1}^{+\infty} \left( \left( \lambda_j - \lambda_1 \right) \langle \mu \varphi_1 , \varphi_j \rangle \int_0^t u_1(\tau) e^{ i (\lambda_j-\lambda_1) \tau } d\tau \right) \psi_j(t), \quad t \in [0,T]. \end{equation}
\noindent \emph{Second-order term.} The second-order term of the expansion of \eqref{system_aux} around the ground state is the solution of \begin{equation} \left\{
\begin{array}{ll}
i \partial_t \widetilde{\xi} = - \partial^2_x \widetilde{\xi} -iu_1(t) ( 2 \mu' \partial_x \widetilde{\Psi} + \mu'' \widetilde{\Psi} ) + u_1(t)^2 \mu'^2 \psi_1 , \\
\widetilde{\xi}(t,0) = \widetilde{\xi}(t,1)=0,\\
\widetilde{\xi}(0,x)=0.
\end{array} \right. \label{order2aux} \end{equation} Identifying the second-order terms in \eqref{link}, $\tild{\xi}$ can also be given by, \begin{equation} \label{link_order2} \tild{\xi}(t) = \xi(t) - i u_1(t) \mu \tild{\Psi}(t) + \frac{u_1(t)^2}{2} \mu^2 \psi_1(t). \end{equation} Thus, from the regularity of $\xi$ given in \cref{wp_syst_init}, the regularity of $\tild{\Psi}$ and the continuity of the operators given in \cref{rem:cont_mu}, $\tild{\xi}$ is in $C^2([0,T], H^7 \cap H^5_{(0)})$. Moreover, the following equality holds in $H^5_{(0)}(0,1)$ for every $t \in [0,T]$, \begin{equation} \label{order2aux_sp} \tild{\xi}(t) = - \int_0^t e^{-iA(t -\tau)} \left[ u_1(\tau) \left( 2 \mu' \partial_x \tild{\Psi}(\tau) + \mu'' \tild{\Psi}(\tau) \right) + i u_1(\tau)^2 \mu'^2 \psi_1(\tau) \right] d\tau. \end{equation} As all the integrated terms belong, for $\tau$ fixed, to $H^3_{(0)}$, by \cref{rem:cont_mu}, the triangular inequality together with the fact that for every $s \in \mathbb{R}$, $e^{i s A}$ is an isometry from $H^3_{(0)}$ to $H^3_{(0)}$, gives that \begin{equation} \label{estim_quad_aux}
\| \tild{\xi} \|_{C^0( [0,T], H^{3}_{(0)}(0,1))} = \O \left(
\| u_1 \|_{L^1} \| \tild{\Psi}\|_{C^0( [0,T], H^4_{(0)})} + \| u_1\|^2_{L^2} \right) = \O\left(
\|u_1\|^2_{L^2} \right), \end{equation} using \eqref{estim_lin_aux}. Besides, substituting the explicit form of $\tild{\Psi}$ given in \eqref{expr:order1_aux} into \eqref{order2aux_sp}, the solution can be explicitly computed as \begin{multline} \label{expr:order2_aux} \tild{\xi}(t)
= -i \sum \limits_{j=1}^{+\infty} \left( \langle \mu'^2 \varphi_1, \varphi_j \rangle \int_0^t u_1(\tau)^2 e^{ i \left( \lambda_j - \lambda_1 \right) \tau} d\tau \right) \psi_j(t) \\ + \sum \limits_{j=1}^{+\infty} \left( \int_0^t u_1(\tau) \int_0^{\tau} u_1(s) \widetilde{k}_{\Quad, j}(\tau,s) ds d\tau \right) \psi_j(t), \end{multline} where, for all $j \in \mathbb{N}^*$, the quadratic kernel $\widetilde{k}_{\Quad, j}$ is given by \begin{equation} \label{eq:kernel_quad_aux} \widetilde{k}_{\Quad, j}( \tau, s) := \sum \limits_{n=1}^{+\infty} (\lambda_1- \lambda_n) (\lambda_n - \lambda_j) \langle \mu \varphi_1, \varphi_n \rangle \langle \mu \varphi_n, \varphi_j \rangle e^{i \left( (\lambda_j-\lambda_n)\tau + (\lambda_n - \lambda_1)s \right)} . \end{equation}
Thanks to \cref{decay_coeff}, all the quadratic kernels $\widetilde{k}_{\Quad, j}$ defined in \eqref{eq:kernel_quad_aux} are bounded in $C^{4}( \mathbb{R}^2, \mathbb{C})$. This regularity is the key to perform integrations by parts and reveal a coercivity quantified by the $H^{-3}$-norm of the control, as stated in the following result. \begin{lem} \label{coord_quad_IPP} If the control $u \in L^2(0,T)$ is such that $u_2(T)=u_3(T)=0$, then, for all $j \in \mathbb{N}^*$,
\begin{equation} \label{eq:coord_quad_IPP} \langle \tild{\xi}(T), \psi_j(T) \rangle = -i \sum \limits_{p=1}^3 A^p_j \int_0^T u_p(t)^2 e^{ i (\lambda_j - \lambda_1)t } dt + \int_0^T u_3(t) \int_0^t u_3(\tau) \partial^2_1 \partial^2_{2} \widetilde{k}_{\Quad, j}(t, \tau) d\tau dt. \end{equation} \end{lem}
\begin{proof} Let $j \in \mathbb{N}^*$. Thanks to \eqref{LB_A1K}, the computations given in \eqref{expr:order2_aux} give directly \begin{equation} \label{quad_non_IPP} \langle \tild{\xi}(T), \psi_j(T) \rangle = -i A^1_j \int_0^T u_1(t)^2 e^{ i \left( \lambda_j - \lambda_1 \right) t} dt + \int_0^T u_1(t) \int_0^{t} u_1(\tau) \widetilde{k}_{\Quad, j}(t,\tau) d\tau dt . \end{equation} Besides, for all $m \in \mathbb{N}$ and $H$ in $C^2(\mathbb{R}^2, \mathbb{C})$, if $u_{m+1}(T)=0$, integrations by parts lead to \begin{multline*} \int_0^T u_m(t) \int_0^t u_m(\tau) H(t, \tau) d\tau dt = \int_0^T u_{m+1}(t)^2 \left( \frac{1}{2} \frac{d}{dt}( H(t, t)) - \partial_1 H(t, t) \right) dt \\ + \int_0^T u_{m+1}(t) \int_0^t u_{m+1}(\tau) \partial_1 \partial_{2} H(t, \tau) d\tau dt . \end{multline*} Therefore, \eqref{eq:coord_quad_IPP} is deduced from \eqref{quad_non_IPP} applying this equality successively for $m=1$ and $H= \widetilde{k}_{\Quad, j}$ and for $m=2$ and $H= \partial_1 \partial_{2} \widetilde{k}_{\Quad, j}$ and also noticing that \begin{equation*} \forall p=2, 3, \quad \frac{1}{2} \frac{d}{dt} \left( \partial_1^{p-2} \partial_{2}^{p-2} \widetilde{k}_{\Quad, j} (t, t) \right) - \partial_1^{p-1} \partial_{2}^{p-2} \widetilde{k}_{\Quad, j} (t, t) = -i A^p_j e^{ i (\lambda_j-\lambda_1) t } . \end{equation*} \end{proof}
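\begin{rem} For completeness, let us sketch the integrations by parts behind the identity used in the previous proof (recall that the iterated primitives vanish at $0$ and that $u_{m+1}(T)=0$ is assumed). For $H \in C^2(\mathbb{R}^2, \mathbb{C})$, one first integrates by parts in $\tau$, \begin{equation*} \int_0^t u_m(\tau) H(t, \tau) d\tau = u_{m+1}(t) H(t,t) - \int_0^t u_{m+1}(\tau) \partial_2 H(t, \tau) d\tau, \end{equation*} and then in $t$ in each of the two resulting terms, which gives \begin{multline*} \int_0^T u_m(t) \int_0^t u_m(\tau) H(t, \tau) d\tau dt = \int_0^T u_{m+1}(t)^2 \left( \partial_2 H(t,t) - \frac{1}{2} \frac{d}{dt} \left( H(t,t) \right) \right) dt \\ + \int_0^T u_{m+1}(t) \int_0^t u_{m+1}(\tau) \partial_1 \partial_2 H(t, \tau) d\tau dt , \end{multline*} and the first integrand coincides with $\frac{1}{2} \frac{d}{dt} \left( H(t,t) \right) - \partial_1 H(t,t)$ since $\frac{d}{dt} \left( H(t,t) \right) = \partial_1 H(t,t) + \partial_2 H(t,t)$. \end{rem}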
\noindent In particular, under (H$_{\Quad}$), the quadratic term, along the lost direction, is given by \begin{equation} \label{quad_u3} \langle \tild{\xi}(T), \psi_K(T) \rangle = -i A^3_K \int_0^T u_3(t)^2 e^{ i (\lambda_K - \lambda_1)t } dt + \int_0^T u_3(t) \int_0^t u_3(\tau) \partial^2_1 \partial^2_{2} \widetilde{k}_{\Quad, K}(t, \tau) d\tau dt. \end{equation} Thus, the leading quadratic term of the expansion along the lost direction is indeed given by $\int_0^T u_3(t)^2 dt$.
\noindent \emph{Third-order term.} The third-order term of the expansion of \eqref{system_aux} around the ground state is the solution of \begin{equation} \left\{
\begin{array}{ll}
i \partial_t \widetilde{\zeta} = - \partial^2_x \widetilde{\zeta} -iu_1(t) \left( 2 \mu' \partial_x \widetilde{\xi} + \mu'' \widetilde{\xi} \right) + u_1(t)^2 \mu'^2 \Psi , \\
\widetilde{\zeta}(t,0) = \widetilde{\zeta}(t,1)=0,\\
\widetilde{\zeta}(0,x)=0.
\end{array} \right. \label{order3aux} \end{equation} As before, thanks to \eqref{link}, the cubic term can also be given by \begin{equation} \label{link_order3} \tild{\zeta}(t) = \zeta(t) - i u_1(t) \mu \tild{\xi}(t) + \frac{u_1(t)^2}{2} \mu^2 \tild{\Psi}(t) + i \frac{u_1(t)^3}{6} \mu^3 \psi_1(t), \quad t \in [0,T]. \end{equation} Thus, the cubic term $\tild{\zeta}$ belongs to $C^2([0,T], H^7 \cap H^5_{(0)})$. Moreover, the following equality holds in $H^5_{(0)}(0,1)$ for every $t \in [0,T]$, \begin{equation} \label{order3aux_sp} \tild{\zeta}(t) = - \int_0^t e^{-iA(t -\tau)} \left[ u_1(\tau) \left( 2 \mu' \partial_x \tild{\xi}(\tau) + \mu'' \tild{\xi}(\tau) \right) + i u_1(\tau)^2 \mu'^2 \tild{\Psi}(\tau) \right] d\tau, \end{equation} and the following estimate holds, using the triangular inequality \begin{equation} \label{estim_cub_aux}
\| \tild{\zeta}
\|_{C^0( [0,T], H^{1}_{(0)}(0,1))} = \O \left(
\| u_1\|_{L^2(0,T)}^3 \right) . \end{equation} Using the explicit computations of $\tild{\Psi}$ and $\tild{\xi}$ given in \eqref{expr:order1_aux} and \eqref{expr:order2_aux}, one gets that the third-order term is given by \begin{multline} \label{expr:order3aux} \tilde{\zeta}(T) = \sum \limits_{j=1}^{+\infty} \Big(
i\int_0^T u_1(t)^2
\int_0^t u_1( \tau)
\widetilde{k}_{\Cub,j}^1( t, \tau) d\tau dt
+
i \int_0^T u_1(t) \int_0^t u_1(\tau)^2 \widetilde{k}_{\Cub,j}^2( t, \tau) d\tau dt \\ - \int_0^T u_1(t) \int_0^t u_1(\tau) \int_0^{\tau} u_1(s) \widetilde{k}_{\Cub,j}^3(t, \tau, s ) ds d\tau dt \Big) \psi_j(T), \end{multline}
where the cubic kernels are given by
\begin{align} \label{kernel_cubic_1}
\widetilde{k}_{\Cub,j}^1( t, \tau)
&:=
\sum \limits_{n=1}^{+\infty}
(\lambda_1- \lambda_n)
\langle \mu \varphi_1, \varphi_n \rangle
\langle \mu'^2 \varphi_n, \varphi_j \rangle
e^{i \left[ (\lambda_j- \lambda_n)t + (\lambda_n- \lambda_1) \tau \right] },
\\
\label{kernel_cubic_2} \widetilde{k}_{\Cub,j}^2(t, \tau) &:= \sum \limits_{n=1}^{+\infty} (\lambda_n-\lambda_j) \langle \mu'^2 \varphi_1, \varphi_n \rangle \langle \mu \varphi_n, \varphi_j \rangle e^{i \left[ (\lambda_j- \lambda_n)t + (\lambda_n- \lambda_1) \tau \right]}, \end{align} and \begin{multline} \label{kernel_cubic_3} \widetilde{k}_{\Cub,j}^3(t, \tau, s ) := \sum \limits_{p=1}^{+\infty} \sum \limits_{n=1}^{+\infty} (\lambda_1- \lambda_n) (\lambda_n - \lambda_p) (\lambda_p - \lambda_j) \\ \times \langle \mu \varphi_1, \varphi_n \rangle \langle \mu \varphi_n, \varphi_p \rangle \langle \mu \varphi_p, \varphi_j \rangle e^{i \left[ (\lambda_j - \lambda_p)t + (\lambda_p - \lambda_n) \tau + (\lambda_n- \lambda_1)s \right]}. \end{multline} Thanks to \cref{decay_coeff}, the kernels $ \widetilde{k}_{\Cub,j}^1$ and $ \widetilde{k}_{\Cub,j}^2$ are bounded in $C^2(\mathbb{R}^2, \mathbb{C})$ and the kernel $\widetilde{k}_{\Cub,j}^3$ is bounded in $C^1(\mathbb{R}^3, \mathbb{C})$. Formally, in an asymptotic of small time, \eqref{expr:order3aux} entails that the cubic term behaves as $\int_0^T u_1^2(t) u_2(t)dt$: indeed, $\widetilde{k}_{\Cub,j}^i( t, \tau) \approx \widetilde{k}_{\Cub,j}^i(0,0)$ for $i=1,2$ at first order, and the third term of \eqref{expr:order3aux} is a higher-order cubic term.
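Let us make this formal statement slightly more precise (a sketch, for controls such that $u_2(T)=0$): freezing the kernels at the origin, the first two terms of \eqref{expr:order3aux} become \begin{multline*} i \widetilde{k}_{\Cub,j}^1(0,0) \int_0^T u_1(t)^2 u_2(t) dt + i \widetilde{k}_{\Cub,j}^2(0,0) \int_0^T u_1(t) \left( \int_0^t u_1(\tau)^2 d\tau \right) dt \\ = i \left( \widetilde{k}_{\Cub,j}^1(0,0) - \widetilde{k}_{\Cub,j}^2(0,0) \right) \int_0^T u_1(t)^2 u_2(t) dt , \end{multline*} since an integration by parts gives $\int_0^T u_1(t) \left( \int_0^t u_1(\tau)^2 d\tau \right) dt = - \int_0^T u_1(t)^2 u_2(t) dt$ when $u_2(T)=0$. This is the exact analogue of the constant $\hat{C}_K = \hat{h}_1(0,0) - \hat{h}_2(0,0)$ of the toy-model \eqref{ODE}.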
\subsection{Sharp error estimates for the auxiliary system} The goal of this section is to compute sharp error estimates on the expansion of the auxiliary system.
\begin{prop} \label{prop:error_estim_aux} If $\mu$ satisfies (H$_{\reg}$), then the following error estimates on the expansion of the auxiliary system hold, \begin{align}
\label{eq:estim_lin2_aux}
\| \tild{\psi} - \psi_1 - \tild{\Psi}
\|_{L^{\infty}((0,T), H^2_{(0)}(0,1))} = \O \left(
\| u_1\|_{L^2(0,T)}^2 \right), \\
\label{eq:estim_lin4_aux}
\| \tild{\psi} - \psi_1 - \tild{\Psi} - \tild{\xi} - \tild{\zeta}
\|_{L^{\infty}((0,T),L^2(0,1))} = \O \left(
\| u_1\|_{L^2(0,T)}^4 \right). \end{align} \end{prop}
\begin{proof} \emph{Proof of the linear remainder.} We have seen in \cref{wp_syst_aux}, that the following equality holds in $H^5_{(0)}$ for all $t \in [0,T]$, \begin{equation*} \big( \tild{\psi}-\psi_1 \big)(t) = -\int_0^t e^{-iA(t - \tau)} \Big( u_1(\tau) (2\mu'\partial_x + \mu'') \tild{\psi}(\tau) + i u_1(\tau)^2 \mu'^2 \tild{\psi}(\tau) \Big) d\tau, \end{equation*} where for $\tau$ fixed, every term under this integral belongs to $H^5 \cap H^3_{(0)}$. Thus, the triangular inequality and the isometry of $e^{i As}$ from $H^3_{(0)}$ to $H^3_{(0)}$ for every $s$ give, \begin{multline} \label{eq:estim_lin1_aux}
\| \tild{\psi}-\psi_1
\|_{L^{\infty}( (0,T), H^3_{(0)})} = \O \left(
\|u_1\|_{L^1} \| \tild{\psi} \|_{L^{\infty}( (0,T), H^4_{(0)})} + \| u_1\|^2_{L^2} \|\tild{\psi} \|_{L^{\infty}( (0,T), H^3_{(0)})} \right)
\\ = \O \left(
\| u_1\|_{L^2} \right), \end{multline}
using the estimate \eqref{estim_aux} on $\tild{\psi}$. Notice that, by Cauchy-Schwarz inequality, we indeed have $\| u_1\|_{L^1}=\O( \| u_1\|_{L^2})$ as the definition of $\O$ given in \cref{def:O} also means we work in small time.
\noindent \emph{Proof of \eqref{eq:estim_lin2_aux}.} Using \eqref{weak_sol_aux} and \eqref{order1aux_sp}, the following equality holds in $H^5_{(0)}$ for all $t \in [0,T]$, \begin{equation*} (\widetilde{\psi} - \psi_1 - \widetilde{\Psi})(t) = - \int_0^t e^{-iA(t-\tau)} \left( u_1(\tau) \left(2 \mu' \partial_x + \mu'' \right)(\widetilde{\psi} -\psi_1)(\tau) + i u_1(\tau)^2 {\mu'}^2 \widetilde{\psi}(\tau) \right) d\tau. \end{equation*} Once again, for $\tau$ fixed, every term belongs to $H^5 \cap H^3_{(0)}$ thanks to \cref{rem:cont_mu}, so using once again the triangular inequality, \begin{equation*}
\| \tild{\psi} -\psi_1 -\tild{\Psi}
\|_{L^{\infty} H^2_{(0)}} = \O \left(
\| u_1 \|_{L^1}
\| \tild{\psi} - \psi_1 \|_{L^{\infty} H^3_{(0)}} +
\| u_1 \|^2_{L^2}
\| \tild{\psi} \|_{L^{\infty}H^2_{(0)}} \right) =
\O( \|u_1\|^2_{L^2}), \end{equation*} using the estimate \eqref{eq:estim_lin1_aux} on the linear remainder and the estimate \eqref{estim_aux} on $\tild{\psi}$.
\noindent \emph{Proof of cubic remainder.} As before, in $H^5_{(0)}$, \begin{multline*} ( \widetilde{\psi} - \psi_1 - \widetilde{\Psi} -\tild{\xi} )(t) \\ = - \int_0^t e^{-iA(t-\tau)} \left( u_1(\tau) \left( 2 \mu' \partial_x + \mu'' \right) (\widetilde{\psi} -\psi_1-\tild{\Psi})(\tau) + i u_1(\tau)^2 {\mu'}^2 (\widetilde{\psi}-\psi_1)(\tau) \right) d\tau. \end{multline*} And thus, using the triangular inequality, \eqref{eq:estim_lin1_aux} and \eqref{eq:estim_lin2_aux}, one gets \begin{align} \notag
\| \tild{\psi} -\psi_1 -\tild{\Psi} -\tild{\xi}
\|_{L^{\infty} H^1_{(0)}} &= \O \left(
\| u_1 \|_{L^1}
\| \tild{\psi} - \psi_1 -\tild{\Psi}
\|_{L^{\infty} H^2_{(0)}} +
\| u_1 \|^2_{L^2}
\| \tild{\psi}- \psi_1 \|_{L^{\infty}H^1_{(0)}} \right) \\ \label{eq:estim_lin3_aux} &=
\O( \|u_1\|^3_{L^2}). \end{align}
\noindent \emph{Proof of \eqref{eq:estim_lin4_aux}.} Finally, in $H^5_{(0)}$, \begin{multline*} (\widetilde{\psi} - \psi_1 - \widetilde{\Psi}-\tild{\xi} - \tild{\zeta})(t) = - \int_0^t e^{-iA(t-\tau)} \Big( u_1(\tau) \left(2 \mu' \partial_x + \mu'' \right)(\widetilde{\psi} -\psi_1-\tild{\Psi}-\tild{\xi})(\tau) \\ + i u_1(\tau)^2 {\mu'}^2 (\widetilde{\psi}-\psi_1-\tild{\Psi})(\tau) \Big) d\tau. \end{multline*} And thus, using \eqref{eq:estim_lin2_aux} and \eqref{eq:estim_lin3_aux}, one gets \eqref{eq:estim_lin4_aux}.
\end{proof}
In a nutshell, when (H$_{\reg}$), (H$_{\lin}$), (H$_{\Quad}$) and (H$_{\Cub}$) are satisfied, for the expansion of the auxiliary system along the lost direction, \begin{itemize} \item by \eqref{quad_u3}, the leading quadratic term is given by $\int_0^T u_3(t)^2dt$, \item by \eqref{expr:order3aux}, the leading cubic term is given by $\int_0^T u_1(t)^2 u_2(t)dt$, \item and by \eqref{eq:estim_lin4_aux}, among the terms of order four and higher, the leading term is given by $(\int_0^T u_1(t)^2 dt)^2$. \end{itemize}
Therefore, in the asymptotic of controls small in $H^2$, along the lost direction, the cubic term prevails over the linear term (which vanishes), but also over the quadratic term and over the terms of order four or higher. This is why, in the next proposition, we state that, along the lost direction, we only keep the dominant cubic term, while all the other terms are seen as a (small) pollution, the largest pollution being given by the quadratic term.
Let us stress that, in another asymptotic regime of controls, for example in the asymptotic of controls small in $H^3$, this does not hold anymore: Gagliardo-Nirenberg inequalities show that the quadratic term prevails over the cubic term (and over the higher-order terms), and thus one can disprove $H^3$-STLC using such a quadratic term, as done in \cite{B21bis}.
\begin{cor} \label{prop:exp_aux} Let $\mu$ satisfy (H$_{\reg}$), (H$_{\lin}$) and (H$_{\Quad}$). Let $u \in H^2_0(0,T)$ be a control such that $u_2(T)=u_3(T)=0$. Then, the solution $\tild{\psi}$ of the auxiliary system \eqref{system_aux} associated to the initial condition $\varphi_1$ satisfies \begin{multline} \label{dev_aux_varphiK} \langle \tild{\psi} (T) , \psi_K(T) \rangle -
i\int_0^T u_1(t)^2
\int_0^t u_1( \tau)
\widetilde{k}_{\Cub,K}^1( t, \tau) d\tau dt \\ -
i \int_0^T u_1(t) \int_0^t u_1(\tau)^2 \widetilde{k}_{\Cub,K}^2( t, \tau) d\tau dt = \O \left(
\| u_3 \|^2_{L^2(0,T)} +
\| u_1\|^3_{L^1(0,T)} \right) , \end{multline} where we recall that $ \widetilde{k}_{\Cub,K}^1$ and $ \widetilde{k}_{\Cub,K}^2$ are respectively defined by \eqref{kernel_cubic_1} and \eqref{kernel_cubic_2}. \end{cor}
\begin{proof} The computations \eqref{expr:order1_aux}, \eqref{quad_u3}, \eqref{expr:order3aux} and the error estimate \eqref{eq:estim_lin4_aux} give that the left-hand side of \eqref{dev_aux_varphiK} is estimated by \begin{equation*} \O \left(
\| u_3 \|^2_{L^2(0,T)} +
\| u_1\|^3_{L^1(0,T)} +
\| u_1\|^4_{L^2(0,T)} \right). \end{equation*} However, for every control $u$ such that $u_2(T)=u_3(T)=0$, integrations by parts and then Cauchy-Schwarz inequality prove that \begin{equation*}
\| u_1\|_{L^2}^4 = \left( \int_0^T u'(t) u_3(t)dt \right)^2 \leqslant C
\| u'\|^2_{L^2(0,T)}
\| u_3\|^2_{L^2(0,T)}
= \O( \| u_3\|^2_{L^2(0,T)}), \end{equation*} as we recall we work in the asymptotic of controls small in $H^2_0$ (see \cref{def:O}). \end{proof}
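\begin{rem} For the reader's convenience, let us sketch the two integrations by parts used in the last display; they only use $u \in H^2_0(0,T)$ and $u_2(T)=u_3(T)=0$: \begin{equation*} \int_0^T u_1(t)^2 \, dt = \big[ u_1 u_2 \big]_0^T - \int_0^T u(t) u_2(t) \, dt = - \big[ u\, u_3 \big]_0^T + \int_0^T u'(t) u_3(t) \, dt = \int_0^T u'(t) u_3(t) \, dt, \end{equation*} since $u_1(0)=u_2(0)=u_3(0)=0$, $u_2(T)=u_3(T)=0$ and $u(0)=u(T)=0$. Squaring and applying the Cauchy-Schwarz inequality then gives $\| u_1\|_{L^2(0,T)}^4 \leqslant \| u'\|^2_{L^2(0,T)} \| u_3\|^2_{L^2(0,T)}$. \end{rem}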
\subsection{Sharp error estimates for the Schrödinger equation} From the expansion of the auxiliary system, one can deduce sharp error estimates on the expansion of the solution of the Schrödinger equation \eqref{Schrodinger}. \begin{prop} \label{thm:error_order4} Let $\mu$ satisfy (H$_{\reg}$). Then, \begin{align} \label{estim_r2}
\left\| \psi - \psi_1 - \Psi
\right\|_{L^{\infty}( (0,T), L^2(0,1))} &= \O \left(
\| u_1\|_{L^2(0,T)}^2 +
| u_1(T)|^2 \right), \\ \label{estim_r4}
\left\| \psi - \psi_1 - \Psi - \xi -\zeta
\right\|_{L^{\infty}( (0,T), L^2(0,1))} &= \O \left(
\| u_1\|_{L^2(0,T)}^4 +
| u_1(T)|^4 \right). \end{align} \end{prop}
\begin{proof} The proofs of \eqref{estim_r2} and \eqref{estim_r4} are very similar. Thus, we only prove \eqref{estim_r4}.
Using all the links \eqref{link}, \eqref{link_order1}, \eqref{link_order2} and \eqref{link_order3} between the expansions of the Schrödinger equation and of the auxiliary system, one gets \begin{multline*} \left( \psi -\psi_1 -\Psi -\xi -\zeta \right) (T) = e^{i u_1(T) \mu} ( \tild{\psi} -\psi_1 -\tild{\Psi} -\tild{\xi} -\tild{\zeta} ) (T) + ( e^{i u_1(T) \mu} - 1 ) \tild{\zeta}(T)
\\ + ( e^{i u_1(T) \mu} - 1 -iu_1(T) \mu ) \tild{\xi}(T) + ( e^{i u_1(T) \mu} - 1 -iu_1(T) \mu + \frac{u_1(T)^2}{2} \mu^2 ) \tild{\Psi}(T) \\
+ ( e^{i u_1(T) \mu} - 1 -iu_1(T) \mu + \frac{u_1(T)^2}{2} \mu^2 + i \frac{u_1(T)^3}{6} \mu^3 ) \psi_1(T) . \end{multline*}
The first term is estimated by $\|u_1\|^4_{L^2}$ thanks to the estimate \eqref{eq:estim_lin4_aux} on the auxiliary system. Expanding $e^{i u_1(T) \mu}$, the second term (resp.\ the third, fourth and fifth term) is estimated by $|u_1(T)| \| \tild{\zeta} (T) \|_{L^2} $ (resp.\ $|u_1(T)|^2 \| \tild{\xi} (T) \|_{L^2} $, $ |u_1(T)|^3 \| \tild{\Psi} (T) \|_{L^2}$ and $|u_1(T)|^4$). Then, estimates \eqref{estim_lin_aux}, \eqref{estim_quad_aux} and \eqref{estim_cub_aux} on $\tild{\Psi}$, $\tild{\xi}$ and $\tild{\zeta}$ together with Young inequalities lead to \eqref{estim_r4}. \end{proof}
To conclude on the error estimate of the expansion of the Schrödinger equation, one needs to estimate the boundary term $u_1(T)$. This can be done for specific motions of the solution. \begin{lem} For every $u$ in $H^2_0(0,T)$ such that the solution of \eqref{Schrodinger} satisfies \begin{equation} \label{hyp:motion_spec} \langle \psi(T; \ u, \ \varphi_1), \varphi_1 \rangle = \langle \psi_1(T), \varphi_1 \rangle, \end{equation} the following estimate holds \begin{equation} \label{estim_u1(T)}
|u_1(T) | = \O \left(
\| u_1 \|^2_{L^2(0,T)} \right). \end{equation} \end{lem}
\begin{proof} Thanks to the explicit computation of $\Psi$ given in \eqref{order1explicit}, one gets \begin{equation*} \langle \psi(T), \varphi_1 \rangle
= \langle \psi_1(T), \varphi_1 \rangle + i e^{-i\lambda_1 T} \langle \mu \varphi_1, \varphi_1 \rangle u_1(T) + \O \left(
\| \left(\psi- \psi_1-\Psi\right)(T)
\|_{L^2(0,1)} \right) \end{equation*}
As $\langle \mu \varphi_1, \varphi_1 \rangle \neq 0$ by \eqref{H_lin_2}, assumption \eqref{hyp:motion_spec} together with the estimate \eqref{estim_r2} of the quadratic remainder lead to \begin{equation*} u_1(T) = \O \left(
\| u_1\|_{L^2(0,T)}^2 +
| u_1(T)|^2 \right). \end{equation*} By definition of $\O$ (see \cref{def:O}), we work with controls arbitrarily small in $H^2_0$; thus, such an estimate entails \eqref{estim_u1(T)}. \end{proof}
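\begin{rem} Let us make the last absorption argument explicit (a short sketch). The previous estimate provides $C>0$ such that $|u_1(T)| \leqslant C \| u_1\|^2_{L^2(0,T)} + C |u_1(T)|^2$. Since $|u_1(T)| \leqslant \sqrt{T}\, \| u \|_{L^2(0,T)}$ is arbitrarily small in the considered asymptotic, one has $C |u_1(T)| \leqslant \frac{1}{2}$ for controls small enough, and thus \begin{equation*} |u_1(T)| \leqslant C \| u_1\|^2_{L^2(0,T)} + \frac{1}{2} |u_1(T)|, \quad \text{that is} \quad |u_1(T)| \leqslant 2 C \| u_1\|^2_{L^2(0,T)}, \end{equation*} which is \eqref{estim_u1(T)}. \end{rem}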
\begin{cor} Let $\mu$ satisfy (H$_{\reg}$). Then, for every control $u \in H^2_0(0,T)$ such that the solution of \eqref{Schrodinger} satisfies \eqref{hyp:motion_spec}, the following estimate holds, \begin{equation} \label{remainder_order4}
\left\| \psi - \psi_1 - \Psi - \xi -\zeta
\right\|_{L^{\infty}( (0,T), L^2(0,1))} = \O \left(
\| u_1\|_{L^2(0,T)}^4 \right). \end{equation} \end{cor}
\subsection{The non overlapping principle} \label{sec:non_add} In this paper, it is quite useful to use non overlapping controls as already seen in \cref{sec:toy_models}. Moreover, if $v$ (resp.\ $w$) is a control defined on $(0,T_1)$ (resp.\ on $(0, T_2)$), it would be very convenient to have \begin{equation*} \psi( T_1+ T_2; \ v \# w, \ \varphi_1) = \psi(T_1; \ v, \ \varphi_1) + \psi(T_2; \ w, \ \varphi_1), \end{equation*} where we recall that the concatenation of two controls is defined in \eqref{concatenation}. However, this is not the case. That is why, in the following section, we estimate precisely the evolution of the solution along the lost direction, to then use it in \cref{sec:STLC_result} for non overlapping controls.
\subsubsection{For the quadratic term}
\begin{prop} Let $0< T_1 < T_2$. If $\mu$ satisfies \eqref{quad_nul_1} and \eqref{quad_nul_2}, then for every control $u$ in $H^2_0(0, T_2)$ such that $u_1(T_1)=u_2(T_1)=u_3(T_1)=u_2(T_2)=u_3(T_2)=0$, the solution of \eqref{order2} satisfies \begin{multline} \label{non_lin_quad}
\left| \langle \xi(T_2; \ u, \ \varphi_1), \psi_K(T_2) \rangle - \langle \xi(T_1; \ u, \ \varphi_1), \psi_K(T_1) \rangle
\right| \\ = \O \left(
\| u_3\|^2_{L^2(0,T_2)} +
|u_1(T_2)|
\| u_2\|_{L^1(0,T_2)} +
| u_1(T_2)|^2 \right). \end{multline} \end{prop}
\begin{proof} Using the link with the auxiliary system \eqref{link_order2} and the explicit form of $\tild{\Psi}$ given in \eqref{expr:order1_aux}, one has, for every $T \in [0, T_2]$, \begin{equation*} \langle \xi(T), \psi_K(T) \rangle = \langle \tild{\xi}(T), \psi_K(T) \rangle -u_1(T) \int_0^T u_1(t) k_{\Quad, T}(t) dt - \frac{u_1(T)^2}{2} \langle \mu^2 \varphi_1, \varphi_K \rangle e^{i (\lambda_K - \lambda_1)T}, \end{equation*} where the quadratic kernel $k_{\Quad, T}$ is given by, \begin{equation} \label{eq:kernel_quad} k_{\Quad, T}(t) = \sum \limits_{n=1}^{+\infty} (\lambda_n- \lambda_1) \langle \mu \varphi_1, \varphi_n \rangle \langle \mu \varphi_n, \varphi_K \rangle e^{i [ (\lambda_n - \lambda_1)t + (\lambda_K- \lambda_n)T]}. \end{equation} Thus, if the control satisfies $u_1(T_1)=0$, then, \begin{multline*} \langle \xi(T_2), \psi_K(T_2) \rangle - \langle \xi(T_1), \psi_K(T_1) \rangle = \langle \tild{\xi}(T_2), \psi_K(T_2) \rangle - \langle \tild{\xi}(T_1), \psi_K(T_1) \rangle \\ -u_1(T_2) \int_0^{T_2} u_1(t) k_{\Quad, T_2}(t) dt - \frac{u_1(T_2)^2}{2} \langle \mu^2 \varphi_1, \varphi_K \rangle e^{i (\lambda_K - \lambda_1)T_2} . \end{multline*}
The first term of the right-hand side is estimated by $\O( \| u_3\|^2_{L^2(0,T_2)})$ using the explicit computation of $\tild{\xi}$ given in \eqref{quad_u3}. The second term of the right-hand side is naturally estimated by $\O(|u_1(T_2)| \| u_1\|_{L^1(0,T_2)})$ as the kernel is bounded. However, such an estimate would not be sharp enough for the sequel of this paper (and more precisely for the proof of \cref{prop:TV1}). Thus, one can compute a sharper estimate by performing one integration by parts in the integral. This gives that the second term is estimated by $\O(|u_1(T_2)| \| u_2\|_{L^1(0,T_2)})$, noticing that $k_{\Quad, T_2}'$ is still bounded thanks to \cref{decay_coeff}. Finally, the last term of the right-hand side is directly bounded by $\O(|u_1(T_2)|^2)$. \end{proof}
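\begin{rem} For the reader's convenience, the integration by parts behind the last estimate can be sketched as follows. Since $u_2(0)=0$ and $u_2(T_2)=0$, \begin{equation*} \int_0^{T_2} u_1(t) k_{\Quad, T_2}(t) \, dt = \big[ u_2(t) k_{\Quad, T_2}(t) \big]_0^{T_2} - \int_0^{T_2} u_2(t) k_{\Quad, T_2}'(t) \, dt = - \int_0^{T_2} u_2(t) k_{\Quad, T_2}'(t) \, dt, \end{equation*} so that $\big| \int_0^{T_2} u_1(t) k_{\Quad, T_2}(t) \, dt \big| \leqslant \| k_{\Quad, T_2}' \|_{L^{\infty}(0,T_2)} \| u_2 \|_{L^1(0,T_2)}$. \end{rem}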
\subsubsection{For the cubic term} \begin{prop} Let $0 < T_1 < T_2$. For every control $u$ in $H^2_0(0, T_2)$ such that $u_2(T_1)=u_2(T_2)=0$, the solution of \eqref{order3aux} satisfies \begin{multline} \label{non_lin_cub_aux}
\left| \langle \tild{\zeta}(T_2), \psi_K(T_2) \rangle - \langle \tild{\zeta}(T_1), \psi_K(T_1) \rangle
\right| \\ = \O \left(
\| u_1\|^3_{L^1(0,T_2)} +
\| u_1\|^2_{L^2(T_1, T_2)}
\| u_1\|_{L^1(0, T_2)} +
\| u_2\|_{L^{\infty}(T_1, T_2)}
\| u_1\|^2_{L^2(0, T_2)} \right). \end{multline} \end{prop}
\begin{proof} Using the explicit form of $\tild{\zeta}$ given in \eqref{expr:order3aux}, one has, \begin{multline*} \langle \tild{\zeta}(T_2), \psi_K(T_2) \rangle - \langle \tild{\zeta}(T_1), \psi_K(T_1) \rangle =
i\int_{T_1}^{T_2} u_1(t)^2
\int_0^t u_1( \tau)
\widetilde{k}_{\Cub,K}^1( t, \tau)
d\tau dt \\ + i \int_{T_1}^{T_2} u_1(t) \int_0^t u_1(\tau)^2 \widetilde{k}_{\Cub,K}^2( t, \tau) d\tau dt + \O \left(
\| u_1\|^3_{L^1(0, T_2)} \right), \end{multline*}
as the kernel $\tild{k}_{\Cub, K}^3$ defined in \eqref{kernel_cubic_3} is bounded in $C^0( \mathbb{R}^3, \mathbb{C})$. The first term of the right-hand side is bounded by $\| u_1\|^2_{L^2(T_1, T_2)} \| u_1\|_{L^1(0, T_2)}$ as $\tild{k}_{\Cub, K}^1$ is also bounded. Moreover, it would seem natural to estimate the second term by $\| u_1\|_{L^1(T_1, T_2)}\| u_1\|_{L^2(0, T_2)}^2.$ However, as before, it would not provide an estimate sharp enough to use in the sequel of the work. One can compute a sharper estimate by performing one integration by parts to get that the second term of the right-hand side is bounded by, \begin{multline*}
\left| \int_{T_1}^{T_2} u_2(t) u_1(t)^2 \widetilde{k}_{\Cub,K}^2( t, t) dt + \int_{T_1}^{T_2} u_2(t) \int_0^t u_1(\tau)^2 \partial_1 \widetilde{k}_{\Cub,K}^2( t, \tau) d\tau dt
\right| \\ =\O \left(
\| u_2\|_{L^{\infty}(T_1, T_2)}
\| u_1\|^2_{L^2(0, T_2)} \right), \end{multline*} as the kernel $\widetilde{k}_{\Cub,K}^2$ defined in \eqref{kernel_cubic_2} is bounded in $C^1( \mathbb{R}^2, \mathbb{C})$. \end{proof}
\begin{prop} Let $0 < T_1 < T_2$. For every control $u$ in $H^2_0(0, T_2)$ such that $u_1(T_1)=u_2(T_1)=u_2(T_2)=0$, the solution of \eqref{order3} satisfies \begin{multline} \label{non_lin_cub}
\left| \langle \zeta(T_2), \psi_K(T_2) \rangle - \langle \zeta(T_1), \psi_K(T_1) \rangle
\right| = \O \Big(
\| u_1\|^3_{L^1(0,T_2)} +
\| u_1\|^2_{L^2(T_1, T_2)}
\| u_1\|_{L^1(0, T_2)} \\+
\| u_2\|_{L^{\infty}(T_1, T_2)}
\| u_1\|^2_{L^2(0, T_2)} +
|u_1(T_2)| \| u_1\|_{L^2(0,T_2)}^2
+
|u_1(T_2)|^3 \Big). \end{multline} \end{prop}
\begin{proof} Using the link with the auxiliary system \eqref{link_order3} and the explicit computations of $\tild{\Psi}$ and $\tild{\xi}$ given in \eqref{expr:order1_aux} and \eqref{expr:order2_aux}, one gets, for all $T \in [0, T_2]$, \begin{multline*} \langle \zeta(T), \psi_K(T) \rangle = \langle \tild{\zeta}(T), \psi_K(T) \rangle + iu_1(T) \int_0^T u_1(t)^2 k_{\Cub,T}^1(t) dt \\ + i u_1(T) \int_0^T u_1(t) \int_0^t u_1(\tau) k_{\Cub,T}^2(t, \tau) d\tau dt - \frac{u_1(T)^2}{2} \int_0^T u_1(t) k_{\Cub, T}^3(t) dt \\- i \frac{u_1(T)^3}{6} \langle \mu^3 \varphi_1, \varphi_K \rangle e^{i (\lambda_K- \lambda_1)T}, \end{multline*} where the cubic kernels are given by, \begin{align*} k_{\Cub, T}^1(t) & := -i \sum\limits_{n=1}^{+\infty} \langle \mu'^2 \varphi_1 , \varphi_n \rangle \langle \mu \varphi_n , \varphi_K \rangle e^{ i [ ( \lambda_n - \lambda_1)t + (\lambda_K-\lambda_n)T ] } , \\ k_{\Cub,T}^2(t, \tau) &:= \sum\limits_{p=1}^{+\infty} \sum\limits_{n=1}^{+\infty} (\lambda_1- \lambda_n) (\lambda_n- \lambda_p) \langle \mu \varphi_1, \varphi_n \rangle \langle \mu \varphi_n, \varphi_p \rangle \langle \mu \varphi_p, \varphi_K \rangle \\ &\times e^{ i [ (\lambda_p- \lambda_n)t + (\lambda_n-\lambda_1) \tau + (\lambda_K-\lambda_p)T ] } , \\ k_{\Cub, T}^3(t) & := - \sum\limits_{n=1}^{+\infty} (\lambda_n-\lambda_1) \langle \mu \varphi_1, \varphi_n \rangle \langle \mu^2 \varphi_n, \varphi_K \rangle e^{i [ (\lambda_n- \lambda_1)t + (\lambda_K-\lambda_n)T ] } . \end{align*} So, if the control satisfies $u_1(T_1)=0$, one gets \begin{multline*} \langle \zeta(T_2), \psi_K(T_2) \rangle - \langle \zeta(T_1), \psi_K(T_1) \rangle = \langle \tild{\zeta}(T_2), \psi_K(T_2) \rangle - \langle \tild{\zeta}(T_1), \psi_K(T_1) \rangle \\ + iu_1(T_2) \int_0^{T_2} u_1(t)^2 k_{\Cub, T_2}^1(t) dt + i u_1(T_2) \int_0^{T_2} u_1(t) \int_0^t u_1(\tau) k_{\Cub, T_2}^2(t, \tau) d\tau dt \\ - \frac{u_1(T_2)^2}{2} \int_0^{T_2} u_1(t) k_{\Cub, T_2}^3(t) dt - i \frac{u_1(T_2)^3}{6} \langle \mu^3 \varphi_1, \varphi_K \rangle e^{ i (\lambda_K- \lambda_1)T_2 }. \end{multline*} From the estimate on the auxiliary system \eqref{non_lin_cub_aux} and the boundedness of the kernels, one deduces \eqref{non_lin_cub}.
\end{proof}
From the behaviors of the quadratic and cubic terms given in \eqref{non_lin_quad} and \eqref{non_lin_cub}, from the error estimate \eqref{remainder_order4} and from the estimate \eqref{estim_u1(T)} on the boundary term $u_1(T_2)$, one can deduce the following estimate. \begin{thm} \label{non_linearity} Let $0 < T_1 < T_2$ and let $\mu$ satisfy \eqref{lin_nul}, \eqref{quad_nul_1} and \eqref{quad_nul_2}. For every control $u$ in $H^2_0(0, T_2)$ such that $u_1(T_1)=u_2(T_1)=u_3(T_1)=u_2(T_2)=u_3(T_2)=0$, if the solution of \eqref{Schrodinger} satisfies the specific motion \eqref{hyp:motion_spec}, then one has \begin{multline*}
\left| \langle \psi(T_2), \psi_K(T_2) \rangle - \langle \psi(T_1), \psi_K(T_1) \rangle
\right| = \O \Big(
\| u_3\|^2_{L^2(0, T_2)} +
\| u_1\|^2_{L^2(0, T_2)}
\|u_2\|_{L^1(0, T_2)} \\ +
\| u_1\|^3_{L^1(0,T_2)} +
\| u_1\|^2_{L^2(T_1, T_2)}
\| u_1\|_{L^1(0, T_2)} +
\| u_2\|_{L^{\infty}(T_1, T_2)}
\| u_1\|^2_{L^2(0, T_2)}
\Big). \end{multline*} \end{thm}
\section{Motions in the lost directions} \label{sec:STLC_result}
\subsection{Motions in the lost directions $\pm i \varphi_K$} As for the bilinear toy-model \eqref{ODE}, the motions in the $\pm i \varphi_K$ directions are done in two steps. \begin{itemize} \item First, we initiate the motion along $\pm i \varphi_K$ by noticing that when $\mu$ satisfies (H$_{\lin}$), (H$_{\Quad}$) and (H$_{\Cub}$), along $i \varphi_K$, the solution is mainly driven by the cubic term $\int_0^T u_1(t)^2 u_2(t)dt$, which enables us to move in both $+$ and $- i \varphi_K$ directions. This work is done in \cref{partial_tv_lambda}. At the end of this first step, along the other directions $(\varphi_j)_{j \in \mathbb{N}^* - \{K\}}$, the error is possibly `big' (in a sense to be made precise).
\item Then, this error is corrected using the exact local controllability in projection result given in \cref{linear_STLC}. However, one needs to make sure that such `linear' motions do not induce too large an error along $i \varphi_K$ and that they preserve the work done in the first step. This is done in \cref{prop:TV1}. \end{itemize}
\begin{prop} \label{partial_tv_lambda} For all $T>0$, there exists $C, \rho>0$ and a continuous map $b \mapsto u_b$ from $\mathbb{R}$ to $H^2_0(0,T)$ such that, \begin{equation} \label{eq:dev_K} \forall b \in (-\rho, \rho), \quad
\left| \langle \psi(T; \ u_b, \ \varphi_1), \psi_K(T) \rangle - i b
\right| \leqslant C
|b|^{1+\frac{1}{41}} . \end{equation} Moreover, for all $p \in [1, +\infty]$ and $k \in \mathbb{Z}$, $k \geq -3$, there exists $C>0$ such that,
\begin{equation} \label{eq:size_controls} \forall b \in (-\rho, \rho), \quad
\| u_b \|_{W^{k, p}(0,T)} \leqslant C
|b|^{ \frac{1}{41} ( 7 - 4k + \frac{4}{p} ) } , \end{equation} and for all $\varepsilon \in (0, \frac{3}{4})$, there exists $C>0$ such that for all $b \in (-\rho, \rho)$, \begin{equation} \label{estim_valeur_finale}
\left\| \psi(T; \ u_b, \ \varphi_1) - \psi_1(T)
\right\|_{H^{2m+7}_{(0)}(0,1)} \leqslant C
|b|^{ \frac{1}{41} (10 -4m - 4\varepsilon) } , \quad \forall m=-3, \ldots, 2 . \end{equation} \end{prop}
\begin{proof} Let $T>0$ and $\rho \in (0, T^{\frac{41}{4}})$. For all $b \in \mathbb{R}^*$, we define the control $u_b$ by, \begin{multline} \label{eq:contr_oscill} \forall t \in [0,T], \quad u_b(t) := \sign(b)
|b|^{\frac{7}{41}} \phi^{(3)} \left( \frac{t}{
|b|^{ \frac{4}{41} } } \right) \\ \text{ where } \quad \phi \in C_c^{\infty}(0,1) \quad \text{ such that} \ \int_0^1 \phi''( \theta)^2 \phi'(\theta) d\theta=\frac{1}{C_K}, \end{multline} where $C_K$ is defined in \eqref{cub_non_nul}.
Notice that for all $b \in (-\rho, \rho)$, $u_b$ is supported on $(0, |b|^{\frac{4}{41}}) \subset (0,T)$.
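As a side remark (a direct computation from \eqref{eq:contr_oscill}, using that $\phi$, $\phi'$ and $\phi''$ vanish at $0$), the primitives of $u_b$ are explicit: for $b \neq 0$ and $t \in [0,T]$, \begin{equation*} u_1(t) = \sign(b) |b|^{\frac{11}{41}} \phi'' \left( \frac{t}{|b|^{\frac{4}{41}}} \right), \quad u_2(t) = \sign(b) |b|^{\frac{15}{41}} \phi' \left( \frac{t}{|b|^{\frac{4}{41}}} \right), \quad u_3(t) = \sign(b) |b|^{\frac{19}{41}} \phi \left( \frac{t}{|b|^{\frac{4}{41}}} \right). \end{equation*} In particular, as $\phi$ is compactly supported in $(0,1)$ and $|b|^{\frac{4}{41}} < T$, one has $u_1(T)=u_2(T)=u_3(T)=0$, so that \cref{prop:exp_aux} applies to $u_b$.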
\noindent \emph{Size of the control.} Let $p \in [1, +\infty]$ and $k \in \mathbb{Z}$, $k \geq -3$. Recall that, if $k$ is negative, we still write $u^{(k)}$ to denote $u_{|k|}$ the $|k|$-th primitive of $u$. For $k >0$, by Poincaré's inequality, there exists $C>0$ such that $\| u_b \|_{W^{k,p}} \leqslant C \| u_b^{(k)} \|_{L^p}$. For $k<0$, it holds by definition \eqref{def_norm_faible} of the negative norms. Thus, by definition \eqref{eq:contr_oscill} and performing the change of variables $t = |b|^{\frac{4}{41}} \theta$, one has, for all $b \in (-\rho, \rho)$, $b \neq 0$, \begin{align*}
\| u_b \|_{W^{k,p}(0,T)}^p
&\leqslant
C \int_0^{|b|^{\frac{4}{41}}}
\left|
|b|^{ \frac{1}{41} ( 7-4k ) } \phi^{(3+k)} \left( \frac{t}{|b|^{\frac{4}{41}}} \right)
\right|^p dt
\\ &\leqslant C
\| \phi^{(3+k)}
\|_{L^p(0,1)}^p
|b|^{ \frac{1}{41} ( p( 7-4k ) + 4 ) } .
\end{align*}
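Taking the $p$-th root yields \eqref{eq:size_controls} when $p$ is finite. For $p=+\infty$, the same scaling gives directly (a routine verification) \begin{equation*} \| u_b^{(k)} \|_{L^{\infty}(0,T)} = |b|^{ \frac{1}{41} ( 7-4k ) } \| \phi^{(3+k)} \|_{L^{\infty}(0,1)}, \end{equation*} which is \eqref{eq:size_controls} with $\frac{4}{p}=0$.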
\noindent \emph{Continuity of the map $b \mapsto u_b$.} The continuity from $\mathbb{R}^*$ to $H^2_0(0,T)$ directly stems from the dominated convergence theorem. Moreover, the size estimate \eqref{eq:size_controls} with $k=p=2$ gives the existence of $C>0$ such that, \begin{equation*} \forall b \in (-\rho, \rho), \ b \neq 0, \quad
\| u_b \|_{H^2_0(0,T)} \leqslant C
|b|^{ \frac{1}{41} }. \end{equation*} Thus, the map $b \mapsto u_b$ can be continuously extended at zero with $u_0=0$.
\noindent \emph{Expansion of the solution.} As $u_1(T)=0$, by \eqref{link}, the end-points of $\psi$ and $\tild{\psi}$ coincide. Thus, it suffices to prove \eqref{eq:dev_K} with $\tild{\psi}$ instead of $\psi$. Moreover, \cref{prop:exp_aux} gives the following expansion of order 3 of the auxiliary system, when $b$ goes to zero, \begin{multline} \label{calcul_cub}
\Big| \langle \tild{\psi} (T; \ u_b, \ \varphi_1) , \psi_K(T) \rangle - i\int_0^T u_1(t)^2 \int_0^t u_1( \tau) \widetilde{k}_{\Cub,K}^1( t, \tau) d\tau dt \\ - i \int_0^T u_1(t) \int_0^t u_1(\tau)^2 \widetilde{k}_{\Cub,K}^2( t, \tau) d\tau dt
\Big| \leqslant C
|b|^{\frac{42}{41}} , \end{multline} as by \eqref{eq:size_controls}, one has the following estimates: $
\| u_3\|^2_{L^2}
\leqslant C |b|^{\frac{42}{41}} $ and $
\| u_1\|_{L^1}^3
\leqslant C |b|^{\frac{45}{41}}. $ Then, substituting the explicit form of the control \eqref{eq:contr_oscill} and performing the change of variables $(t, \tau)=(|b|^{\frac{4}{41}} \theta_1, |b|^{\frac{4}{41}} \theta_2)$, the two integral terms of \eqref{calcul_cub} are given by \begin{multline*} i \sign(b)
|b| \Big( \int_0^1 \phi''(\theta_1)^2 \int_0^{\theta_1} \phi''(\theta_2) \tild{k}_{\Cub,K}^1(|b|^{\frac{4}{41}} \theta_1, |b|^{\frac{4}{41}} \theta_2) d\theta_2 d\theta_1 \\ + \int_0^1 \phi''(\theta_1) \int_0^{\theta_1} \phi''(\theta_2)^2 \tild{k}_{\Cub,K}^2(|b|^{\frac{4}{41}} \theta_1, |b|^{\frac{4}{41}} \theta_2) d\theta_2 d\theta_1 \Big). \end{multline*} Expanding the kernels when $b$ goes to zero, as they are both bounded in $C^1(\mathbb{R}^2, \mathbb{C})$, one gets that \eqref{calcul_cub} can be written as \begin{equation*}
\left| \langle \widetilde{\psi}(T; \ u_b, \ \varphi_1), \psi_K(T) \rangle - i b (
\widetilde{k}_{\Cub,K}^1(0,0)
- \widetilde{k}_{\Cub,K}^2(0,0) ) \int_0^1 \phi''(\theta)^2 \phi'(\theta) d\theta
\right| \leqslant C
|b|^{\frac{42}{41}} . \end{equation*} Moreover, looking at the definition of $C_K$, $\tild{k}_{\Cub,K}^1$ and $\tild{k}_{\Cub,K}^2$ given respectively in \eqref{cub_non_nul}, \eqref{kernel_cubic_1} and \eqref{kernel_cubic_2}, one can notice that $
\widetilde{k}_{\Cub,K}^1(0,0)
- \widetilde{k}_{\Cub,K}^2(0,0) =C_K$. Thus, the previous estimate leads to \eqref{eq:dev_K} by the choice of $\phi$ made in \eqref{eq:contr_oscill}.
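Let us also record why the single integral $\int_0^1 \phi''(\theta)^2 \phi'(\theta) d\theta$ appears, with opposite signs, in front of the two kernel values (a sketch using only $\phi'(0)=\phi'(1)=0$): on the one hand, \begin{equation*} \int_0^1 \phi''(\theta_1)^2 \int_0^{\theta_1} \phi''(\theta_2) \, d\theta_2 \, d\theta_1 = \int_0^1 \phi''(\theta_1)^2 \phi'(\theta_1) \, d\theta_1, \end{equation*} while, on the other hand, an integration by parts gives \begin{equation*} \int_0^1 \phi''(\theta_1) \int_0^{\theta_1} \phi''(\theta_2)^2 \, d\theta_2 \, d\theta_1 = \Big[ \phi'(\theta_1) \int_0^{\theta_1} \phi''(\theta_2)^2 \, d\theta_2 \Big]_0^1 - \int_0^1 \phi'(\theta_1) \phi''(\theta_1)^2 \, d\theta_1 = - \int_0^1 \phi''(\theta)^2 \phi'(\theta) \, d\theta. \end{equation*}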
\noindent \emph{Size of the end-point.} Using the explicit form of $\Psi$ given in \eqref{order1explicit}, one gets, \begin{multline} \label{estim_end_point}
\left\| \psi(T ) - \psi_1(T)
\right\|_{H^{2m+7}_{(0)}} \leqslant
\left\| \left( \langle \mu \varphi_1, \varphi_j \rangle \int_0^T u_b(t) e^{ i (\lambda_j- \lambda_1) (t-T) } dt \right)
\right\|_{h^{2m+7}(\mathbb{N}^*)} \\ +
\| ( \psi - \psi_1 - \Psi )(T)
\|_{H^{2m+7}_{(0)}}, \quad m \in \{-3, \ldots, 2 \} \end{multline} Yet, for all $j \in \mathbb{N}^*$, $j \geqslant 2$ and $k \in \mathbb{Z}$ with $k \geqslant -3$, by integrations by parts (integrating $u$ when $k< 0$ or differentiating $u$ when $k \geqslant 0$), one gets, \begin{equation} \label{estim_coeff_lin_ponc}
\left| \int_0^T u_b(t) e^{ i (\lambda_j- \lambda_1) t } dt
\right| =
\left| (\lambda_j- \lambda_1)^{-k} \int_0^T u^{(k)}_b(t) e^{ i (\lambda_j- \lambda_1) t } dt
\right|
\leqslant C
| \lambda_j - \lambda_1 |^{-k}
|b|^{\frac{1}{41}(11-4k)} , \end{equation} using estimates \eqref{eq:size_controls} with $p=1$. This also holds for $j=1$ as $u_1(T)=0$. By interpolation, such estimates hold for all $j \in \mathbb{N}^*$ and $k \in [-3, 3]$ with a uniform constant with respect to $k$. Let $\varepsilon \in (0, 3/4)$ and $m \in \{-3, \ldots, 2\}$. Taking $k= \varepsilon +m +\frac{1}{4} \in [-3, 3]$ in \eqref{estim_coeff_lin_ponc}, summing over $j \in \mathbb{N}^*$ and using \cref{decay_coeff} to estimate the coefficients $(\langle \mu \varphi_1, \varphi_j \rangle)_{j \in \mathbb{N}^*}$, one gets
\begin{equation} \label{estim_end_point_2}
\left\| \left( \langle \mu \varphi_1, \varphi_j \rangle \int_0^T u_b(t) e^{ i (\lambda_j- \lambda_1) (t-T) } dt \right)
\right\|_{h^{2m+7}(\mathbb{N}^*)}
\leqslant C \left( \sum \limits_{j=1}^{+\infty} \frac{1}{j^{1+4\varepsilon}} \right)^{1/2}
|b|^{ \frac{1}{41}(10-4m-4\varepsilon) } . \end{equation} Moreover, \cite[Proposition 4.5]{B21} gives the existence of $C>0$ such that for all $m=-3, \ldots, 2$, \begin{equation} \label{estim_end_point_3}
\| ( \psi - \psi_1 - \Psi )(T)
\|_{H^{2m+7}_{(0)}} \leqslant C
\| u_b\|_{H^m(0,T)} \| u_b \|_{H^2(0,T)} \leqslant C
|b|^{ \frac{1}{41}(10-4m) } , \end{equation} using \eqref{eq:size_controls} to estimate the size of the controls. Then, \eqref{estim_end_point}, \eqref{estim_end_point_2} and \eqref{estim_end_point_3} lead to \eqref{estim_valeur_finale}. \end{proof}
\begin{prop} \label{prop:TV1} The vector $i \varphi_K$ is a small-time $H^2_0$-continuously approximately reachable vector associated with vector variations $\Xi(T)= i \psi_K(T)$. More precisely, for all $T>0$, there exists $C, \rho>0$ and a continuous map $b \mapsto w_b$ from $\mathbb{R}$ to $H^2_0(0,T)$ such that, \begin{equation} \label{tv_1} \forall b \in (-\rho, \rho), \quad
\left\| \psi(T ; \ w_b, \ \varphi_1) - \psi_1(T) - i b \psi_K(T)
\right\|_{H^{11}_{(0)}(0,1)} \leqslant C
|b|^{ 1 + \frac{1}{82} }, \end{equation} with the following size estimate on the family of controls, \begin{equation} \label{tv_size_control}
\| w_b \|_{ H^2_0(0,T)} \leqslant C
|b|^{\frac{1}{41}}. \end{equation}
\end{prop}
\begin{proof} \emph{Definition of the control.} Let $0< T_1<T$. To move along the $\pm i\varphi_K$ directions, we use non overlapping controls. More precisely, we define, for all $b \in \mathbb{R}$,
\begin{equation} \label{def_w_lambda} w_b := u_b \mathrm{1~\hspace{-1.4ex}l}_{[0, T_1]} + v_b \mathrm{1~\hspace{-1.4ex}l}_{[T_1, T]}, \end{equation}
where $(u_b)_{b \in \mathbb{R}}$ is the family of controls defined on $[0,T_1]$ constructed in \cref{partial_tv_lambda} and $$v_b:=\Gamma_{T_1,T} \left( \psi(T_1; \ u_b, \ \varphi_1), \psi_1(T) \right),$$ where $\Gamma_{T_1,T}$ is the control operator constructed in \cref{linear_STLC} with $J=\mathbb{N}^* - \{K\}$ and $(p,k)=(2,2)$.
\noindent \emph{Size of the controls.} Because we use non overlapping controls, for all $b\in \mathbb{R}$, \begin{equation} \label{size_wb_partial}
\| w_b\|_{H^2_0(0,T)} =
\| u_b\|_{H^2_0(0,T_1)} +
\| v_b\|_{H^2_0(T_1,T)} \leqslant C
|b|^{\frac{1}{41}} +
\| v_b\|_{H^2_0(T_1,T)} , \end{equation}
using the size estimate \eqref{eq:size_controls} on the family $(u_b)_{b \in \mathbb{R}}$ with $p=k=2$.
On $[T_1, T]$, using the linear estimates \eqref{estim_contr_nl} on $\Gamma_{T_1,T}$ and the estimates \eqref{estim_valeur_finale} on the end-point of the solution at time $T_1$, for all $\varepsilon \in (0, \frac{3}{4})$, one gets the existence of $C>0$, such that for all $b$ small enough and for all $m \in \{-3, \ldots, 2\}$, \begin{equation} \label{size_v_lambda}
\| v_b
\|_{H^m_0(T_1, T)} \leqslant C
\left\| \psi(T_1; \ u_b, \ \varphi_1) - \psi_1(T_1)
\right\|_{H^{2m+7}_{(0)}(0,1)} \leqslant C
|b|^{ \frac{1}{41}(10-4m-4\varepsilon) } . \end{equation} Taking $\varepsilon < \frac{1}{4}$, estimate \eqref{size_v_lambda} with $m=2$ and \eqref{size_wb_partial} imply \eqref{tv_size_control}.
\noindent \emph{Motion along $i \varphi_K$.} By construction of $\Gamma_{T_1,T}$ (see \eqref{contr_proj}), we already know that \begin{equation*} \mathbb{P} \psi(T; \ w_b, \ \varphi_1) = \psi_1(T) = \mathbb{P} \left( \psi_1(T) + i b \psi_K(T) \right), \end{equation*} where $\mathbb{P}$ denotes the orthogonal projection on $\overline{\Span_{\mathbb{C}}} \left( \varphi_j , \ j \in \mathbb{N}^*-\{K\} \right)$. Thus, to prove \eqref{tv_1}, it only remains to prove that \begin{equation} \label{goal}
\left| \langle \psi(T; \ w_b, \ \varphi_1), \psi_K(T) \rangle - i b
\right| \leqslant C
|b|^{ 1 + \frac{1}{82} } . \end{equation} By definition of $(u_b)_{b \in \mathbb{R}}$, one already has that \begin{equation*}
\left| \langle \psi(T_1; \ u_b, \ \varphi_1), \psi_K(T_1) \rangle - i b
\right| \leqslant C
|b|^{ 1 + \frac{1}{41} } . \end{equation*}
Thus, it remains to prove that the linear correction used on the time interval $[T_1, T]$ didn't destroy such an estimate, and more precisely, to prove that \begin{equation*}
\left| \langle \psi(T; \ w_b, \ \varphi_1), \psi_K(T) \rangle - \langle \psi(T_1; \ w_b, \ \varphi_1), \psi_K(T_1) \rangle
\right| \leqslant C
|b|^{ 1 + \frac{1}{82} } . \end{equation*} By \cref{non_linearity}, the left-hand side is estimated by \begin{multline} \label{estim_non_additivity}
\O \Big(
\| w_3\|^2_{L^2(0, T)} +
\| w_1\|^2_{L^2(0, T)}
\|w_2\|_{L^1(0, T)} \\ +
\| w_1\|^3_{L^1(0,T)} +
\| v_1\|^2_{L^2(T_1, T)}
\| w_1\|_{L^1(0, T)} +
\| v_2\|_{L^{\infty}(T_1, T)}
\| w_1\|^2_{L^2(0, T)}
\Big). \end{multline}
Thus, it remains to prove that the estimates \eqref{eq:size_controls} on $(u_b)_{b \in \mathbb{R}}$ and \eqref{size_v_lambda} on $(v_b)_{b \in \mathbb{R}}$ are sharp enough so that the previous quantity can be neglected in front of $|b|^{ 1 + \frac{1}{82} }. $ For example, using these estimates in $H^{-3}$, one has \begin{equation*}
\| w_3\|^2_{L^2(0, T)} =
\| u_3\|^2_{L^2(0, T_1)} +
\| v_3\|^2_{L^2(T_1, T)} \leqslant C \left(
|b|^{ \frac{42}{41} } +
|b|^{ \frac{44}{41} - \frac{8 \varepsilon}{41} } \right) \leqslant C
|b|^{ \frac{42}{41} } , \end{equation*} for $b$ small enough, choosing $\varepsilon< \frac{1}{4}$. Similarly, one gets, \begin{align*}
\| w_1\|^2_{L^2(0, T)}
\|w_2\|_{L^{2}(0, T)} &\leqslant C \left(
|b|^{ \frac{26}{41} } +
|b|^{ \frac{28}{41} - \frac{8\varepsilon}{41} } \right) \left(
|b|^{ \frac{17}{41} } +
|b|^{ \frac{18}{41} - \frac{4\varepsilon}{41} } \right)
, \\
\| w_1\|^3_{L^1(0,T)} &\leqslant C \left(
|b|^{ \frac{45}{41} } +
|b|^{ \frac{42}{41} - \frac{12 \varepsilon}{41} } \right) , \\
\| v_1\|^2_{L^2(T_1, T)}
\| w_1\|_{L^1(0, T)} &\leqslant C
|b|^{ \frac{28}{41} - \frac{8\varepsilon}{41} } \left(
|b|^{ \frac{15}{41} } +
|b|^{ \frac{14}{41} - \frac{4\varepsilon}{41} } \right)
.
\end{align*}
Choosing $\varepsilon< \frac{1}{24}$, for $b$ small enough, every term can be neglected in front of $
|b|^{ 1+ \frac{1}{82} } $. In \eqref{estim_non_additivity}, it only remains to estimate
$\| v_2\|_{L^{\infty}(T_1, T)}
\| w_1\|^2_{L^2(0, T)}$. As \eqref{size_v_lambda} provides only estimates of $(v_b)_{b \in \mathbb{R}}$ in $L^2$-spaces, one needs to use a Gagliardo-Nirenberg inequality (see \cite[Theorem p.125]{N59}) to estimate $\| v_2\|_{L^{\infty}(T_1, T)}$. More precisely, there exists $C>0$ such that \begin{equation*}
\| v_2 \|_{L^{\infty}(T_1,T)} \leqslant C
\| v_1 \|_{L^2(T_1,T)}^{1/2}
\| v_2 \|_{L^2(T_1,T)}^{1/2}
+ C \| v_2 \|_{L^2(T_1,T)} . \end{equation*} Thus, thanks to \eqref{size_v_lambda}, the $W^{-2, \infty}$-norm of $v_b$ is estimated by \begin{equation*}
\| v_2 \|_{L^{\infty}(T_1,T)} \leqslant C
|b|^{ \frac{16}{41} -\frac{4\varepsilon}{41} }. \end{equation*} And, finally, one has \begin{equation*}
\| v_2\|_{L^{\infty}(T_1, T)}
\| w_1\|^2_{L^2(0, T)} \leqslant C
|b|^{ \frac{16}{41} -\frac{4\varepsilon}{41} }
\left(
|b|^{ \frac{26}{41} } +
|b|^{ \frac{28}{41} -\frac{8\varepsilon}{41} } \right)
\end{equation*} which is also neglected in front of $
|b|^{ 1+ \frac{1}{82} } $ . This concludes the proof of \eqref{tv_1}.
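For completeness, let us track the exponents behind the last two displays (a sketch, with the notation of the proof): by \eqref{size_v_lambda} with $m=-1$ and $m=-2$, one has $\| v_1\|_{L^2(T_1,T)} \leqslant C |b|^{\frac{14-4\varepsilon}{41}}$ and $\| v_2\|_{L^2(T_1,T)} \leqslant C |b|^{\frac{18-4\varepsilon}{41}}$, so that the Gagliardo-Nirenberg inequality gives \begin{equation*} \| v_2 \|_{L^{\infty}(T_1,T)} \leqslant C |b|^{ \frac{1}{2} \cdot \frac{14-4\varepsilon}{41} + \frac{1}{2} \cdot \frac{18-4\varepsilon}{41} } + C |b|^{ \frac{18-4\varepsilon}{41} } \leqslant C |b|^{ \frac{16-4\varepsilon}{41} }, \end{equation*} for $b$ small enough. The product with $\| w_1 \|^2_{L^2(0,T)} \leqslant C ( |b|^{\frac{26}{41}} + |b|^{\frac{28-8\varepsilon}{41}} )$ then has an exponent at least $\frac{42-4\varepsilon}{41}$, which exceeds $1+\frac{1}{82}$ as soon as $\varepsilon < \frac{1}{8}$, and in particular for the choice $\varepsilon < \frac{1}{24}$ made above.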
\noindent \emph{Continuity of $b \mapsto w_{b}$.} The map $b \mapsto u_b$ of \cref{partial_tv_lambda} is continuous from $\mathbb{R}$ to $H^2_0(0,T_1)$. Besides, the continuity of $b \mapsto v_b$ from $\mathbb{R}$ to $H^2_0(T_1,T)$ stems from the regularity of $\Gamma_{T_1,T}$ (see \cref{linear_STLC}) and of the solution of the Schrödinger equation with respect to the control
(see \eqref{estim_sol_bis}). This gives the continuity of the map $b \mapsto w_b$ constructed by \eqref{def_w_lambda}. \end{proof}
\begin{rem} The sharp estimates \eqref{size_v_lambda} on the control operator $\Gamma_{T_1, T}$ of \cite{B21bis} together with the sharp estimate \eqref{estim_non_additivity} on the evolution of the solution along the lost direction are the key to prove the motions along the first lost direction $i \varphi_K$. \end{rem}
\subsection{Motions in the lost directions $\pm \varphi_K$}
As for the toy-models \eqref{toy_model_3} and \eqref{ODE}, the second approximately reachable vector can be deduced from the first one using a proof inspired by \cite[Th.\ 6]{HK87}. The following statement and its proof are very similar to the one done in finite dimension in \cref{dim_finie_TV2}. One only needs to be careful about the functional setting. \begin{prop} \label{prop:TV2} The vector $\varphi_K$ is a small-time $H^2_0$-continuously approximately reachable vector associated with vector variations $\Xi(T)= \psi_K(T)$. More precisely, there exists $T^*>0$ such that for all $T \in (0, T^*)$, there exists $C, \rho>0$ and a continuous map $b \mapsto v_b$ from $\mathbb{R}$ to $H^2_0(0,T)$ such that, \begin{equation} \label{tv_2} \forall b \in (- \rho, \rho), \quad
\left\| \psi(T ; \ v_b, \ \varphi_1) - \psi_1(T) - b \psi_K(T)
\right\|_{H^{11}_{(0)}(0,1)} \leqslant C
|b|^{ 1 + \frac{1}{82} }, \end{equation} with the following size estimate on the family of controls, \begin{equation} \label{tv_size_control_2}
\| v_b \|_{ H^2_0(0,T)} \leqslant C
|b|^{\frac{1}{41}}. \end{equation} \end{prop}
\begin{proof} Denote by $(u_b)_{b \in \mathbb{R}}$ the control variations associated with $i \varphi_K$ constructed in \cref{prop:TV1}. The goal is to prove the existence of $C>0$ such that for all $(\alpha, \beta) \in \mathbb{R}^2$ small enough, \begin{multline} \label{goal_TV2_infinite}
\left\| \psi(3T; \ u_{\alpha} \# 0_{[0, T]} \# u_{\beta}, \ \varphi_1) - \psi_1(3T) - ( i \beta e^{ 2i( \lambda_K-\lambda_1)T } + i \alpha ) \psi_K(3T)
\right\|_{H^{11}_{(0)}} \\ \leqslant C
| (\alpha, \beta) |^{1+\frac{1}{82}}. \end{multline} Thus, for all $T \in \left(0, \frac{\pi}{2(\lambda_K-\lambda_1)} \right)$ and $b \in \mathbb{R}$, taking $\beta=-\frac{b}{\sin(2(\lambda_K-\lambda_1)T)}$ and $\alpha=-\beta \cos(2(\lambda_K-\lambda_1)T)$, this proves the existence of a family $(v_{b})_{b \in \mathbb{R}}$ satisfying \eqref{tv_2} and \eqref{tv_size_control_2}.
So, it remains to prove \eqref{goal_TV2_infinite}.
First, by definition of $(u_b)_{b \in \mathbb{R}}$ in \cref{prop:TV1}, there exists $C>0$ and $\rho>0$ such that for all $\alpha \in (-\rho, \rho)$, \begin{equation} \label{sur_0_T_infinite}
\left\| \psi(T; \ u_{\alpha}, \ \varphi_1) - \psi_1(T) - i \alpha \psi_K(T)
\right\|_{H^{11}_{(0)}} \leqslant C
|\alpha|^{1+\frac{1}{82}} \quad \text{ with }
\| u_{\alpha}\|_{H^2_0(0,T)} \leqslant C
|\alpha|^{\frac{1}{41}}. \end{equation} Then, on $[T, 2T]$, no control is activated, so $\psi(2T)= e^{-iAT} \psi(T)$ and \eqref{sur_0_T_infinite} becomes \begin{equation} \label{sur_T_2T_inf}
\left\| \psi(2T; \ u_{\alpha} \# 0_{[0, T]}, \ \varphi_1) - \psi_1(2T) - i\alpha \psi_K(2T)
\right\|_{H^{11}_{(0)}} \leqslant C
|\alpha|^{1+\frac{1}{82}}. \end{equation} Then, using the semi-group property of the Schrödinger equation, one has, \begin{equation*}
\psi(3T; \ u_{\alpha} \# 0_{[0, T]} \# u_{\beta}, \ \varphi_1) = \psi(T; \ u_{\beta}, \ \psi(2T; \ u_{\alpha} \# 0_{[0, T]}, \ \varphi_1)). \end{equation*} Together with the estimate \cref{prop:dep_ci} about the dependency of the solutions of the Schrödinger equation with respect to initial condition, one has \begin{multline*}
\left\| \psi(3T; \ u_{\alpha} \# 0 \# u_{\beta}, \ \varphi_1) - \psi(T; \ u_{\beta}, \ \varphi_1)e^{-i\lambda_1 2T} - e^{-iA T} \left( \psi(2T; \ u_{\alpha} \# 0, \ \varphi_1) - \psi_1(2T) \right)
\right\|_{H^{11}_{(0)}} \\ \leqslant C
\| u_{\beta} \|_{H^2_0(0,T)}
\left\| \psi(2T; \ u_{\alpha} \# 0, \ \varphi_1) - \psi_1(2T)
\right\|_{H^{11}_{(0)}}. \end{multline*} Using estimate \eqref{tv_size_control} on $(u_{\beta})$ and \eqref{sur_T_2T_inf} on $ \psi(2T; \ u_{\alpha} \# 0, \ \varphi_1) - \psi_1(2T) $,
the right-hand side of the previous inequality is estimated by $ C
| \beta |^{\frac{1}{41}}
| \alpha| . $ Then, using once again the estimate \eqref{sur_T_2T_inf} on $ \psi(2T; \ u_{\alpha} \# 0, \ \varphi_1) - \psi_1(2T) $ and the definition of $(u_{\beta})_{\beta}$ given by \eqref{tv_1}, one has, \begin{multline*}
\left\| \psi(3T; \ u_{\alpha} \# 0_{[0, T]} \# u_{\beta}, \ \varphi_1) - \psi_1(3T) - i \beta \psi_K(T) e^{-i \lambda_1 2T} - i \alpha \psi_K(3T)
\right\|_{H^{11}_{(0)}} \\ \leqslant C
| \beta |^{\frac{1}{41}}
| \alpha| + C
| \alpha|^{1+ \frac{1}{82}} + C | \beta|^{1+ \frac{1}{82}}, \end{multline*} which gives \eqref{goal_TV2_infinite} and concludes the proof.
\end{proof}
\subsection{Proof of \cref{the_theorem}: The $H^2_0-$STLC of the Schrödinger equation} The goal of this section is to prove \cref{the_theorem} using the systematic approach developed in \cref{sec:black_box}. More precisely, we apply \cref{black_box} with $E_T:=H^2_0(0,T)$ for all $T> 0$, $X:=H^{11}_{(0)}(0,1)$ and \begin{equation*} \F_T : (\psi_0, u) \mapsto \psi(T; \ u, \ \psi_0), \end{equation*} where $\psi$ is the solution of the Schrödinger equation \eqref{Schrodinger} with $\psi(0)=\psi_0$. Let us now check that the assumptions of \cref{black_box} hold in this setting (with the adaptation discussed in \cref{adaptation}).
\begin{itemize} \item [$(A_1)$] By \cite[Prop.\ 4.2]{B21}, it is known that when $\mu$ satisfies ($H_{\reg}$), the end-point map is $C^1$ around $(\varphi_1, 0)$. The $C^2$-regularity is proved similarly, and thus, the proof is left to the reader.
\item [$(A_2)$] By \cite[Prop.\ 4.2]{B21}, the differential at $(\varphi_1, 0)$ is given by $ d \F_T (\varphi_1, 0).(\Psi_0,v) = \Psi(T), $ where $\Psi$ is the solution of the linearized system \begin{equation*} \left\{
\begin{array}{ll}
i \partial_t \Psi = - \partial^2_x \Psi -v(t)\mu(x) \psi_1(t,x) , \\
\Psi(t,0) = \Psi(t,1)=0,\\
\Psi(0,x)=\Psi_0.
\end{array} \right. \end{equation*} Thus, for all $\Psi_0 \in H^{11}_{(0)}$, $T \mapsto d\F_T(\varphi_1,0).(\Psi_0,0)=e^{-iAT} \Psi_0$ is continuous on $\mathbb{R}$ and $d\F_0(\varphi_1, 0).(\Psi_0, 0)=\Psi_0$.
\item [$(A_3)$] By the uniqueness result stated in \cref{wp}, one can check that, for all $T_1, T_2 >0$, $\psi_0 \in H^{11}_{(0)}$, $u \in H^2_0(0,T_1)$ and $v \in H^2_0(0,T_2)$, \begin{equation*} \psi(T_1+T_2; \ u \# v, \ \psi_0) = \psi(T_2; \ v, \ \psi(T_1; \ u, \ \psi_0)). \end{equation*}
\item [$(A_4)$] By \cite[Prop.\ 4.3]{B21}, when $\mu$ satisfies ($H_{\lin}$), the reachable set of the linearized system around the ground state is given by \begin{equation*} \H = \overline{ \Span_{\mathbb{C}} } \left( \psi_j(T); \ \text{for all } j \in \mathbb{N}^*-\{K\} \right). \end{equation*} This space doesn't depend on $T$, is closed, and is of codimension 2 in $L^2(0,1)$.
\item [$(A_5)$] By \cref{prop:TV1} and \cref{prop:TV2}, when $\mu$ satisfies ($H_{\lin}$), ($H_{\Quad}$) and ($H_{\Cub}$), both $i \varphi_K$ and $\varphi_K$ are small-time $H^2_0$-continuously approximately reachable vectors. \end{itemize}
By \cref{black_box}, when $\mu$ satisfies (H$_{\reg}$), (H$_{\lin}$), (H$_{\Quad}$) and (H$_{\Cub}$), the Schrödinger equation \eqref{Schrodinger} is $H^2_0$-STLC around the ground state with targets in $H^{11}_{(0)}(0,1)$.
\appendix \section{Existence of a function $\mu$ satisfying all the hypotheses} \label{existence_mu}
\begin{rem} In this appendix, the coefficients $(A^p_K)_{p=1,2,3}$ and $C_K$ respectively defined in \eqref{quad_nul_1}, \eqref{quad_nul_2}, \eqref{quad_non_nul} and \eqref{cub_non_nul} are seen as quadratic or cubic forms with respect to $\mu$. Moreover, the definition given in terms of series can be tricky to use. Thus, we use instead the expressions in terms of Lie brackets given in \cref{rem:lie_brackets}. Computing the Lie brackets, one gets that for all $\mu$ satisfying (H$_{\reg}$), the quadratic (resp.\ cubic) forms $A^1_K$ and $C_K$ are given by \begin{equation} \label{LB_A1K} A^1_K( \mu) = \langle \mu'^2 \varphi_1, \varphi_K \rangle \quad \text{ and } \quad C_K(\mu) = -4 \langle \mu'^2 \mu'' \varphi_1, \varphi_K \rangle. \end{equation} The similar expression of $A^2_K$ is quite heavy. Computing the associated Lie bracket and then `symmetrizing' the associated quadratic form (see \cite[Proposition A.3]{B21bis} for more details), one gets the existence of a constant $C>0$ such that for all $\mu$ satisfying (H$_{\reg}$), one has \begin{equation*} \label{LB_A2K}
| A^2_K(\mu) - \langle {\mu^{(3)}}^2 \varphi_1, \varphi_K \rangle
| \leqslant
C \| \mu \|_{H^2(0,1)}^2. \end{equation*} \cite[Proposition A.3]{B21bis} also provides a similar approximate expression of $A^3_K$, but it will not be useful. \end{rem}
\begin{thm} \label{thm:existence_mu} Let $K \in \mathbb{N}^*$, $K \geqslant 2$. There exists $\mu$ satisfying (H$_{\reg}$), (H$_{\lin}$), (H$_{\Quad}$) and (H$_{\Cub}$). \end{thm}
\begin{rem} To prove \cref{thm:existence_mu}, it is enough to prove the existence of a function $\mu \in H^{11}( (0,1), \mathbb{R}) \cap H^4_0(0,1)$ satisfying \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{quad_nul_2}, \eqref{quad_non_nul}, \eqref{cub_non_nul} and \begin{align} \label{supp_mu} &\supp \mu \subset [0,1), \\ \label{cond_bord} &\mu^{(5)}(0)\neq0, \\ \label{lin_non_nul} &\forall j \in \mathbb{N}^*-\{K\}, \quad \langle \mu \varphi_1, \varphi_j \rangle \neq 0. \end{align} Indeed, when the boundary conditions \eqref{mu_bc} hold, thanks to \eqref{coeff_IPP} of \cref{decay_coeff}, assumption \eqref{H_lin_2} is equivalent to \begin{equation*} \mu^{(5)}(0) \pm \mu^{(5)}(1) \neq 0 \quad \text{ and } \quad \ \forall j \in \mathbb{N}^*-\{K\}, \ \langle \mu \varphi_1, \varphi_j \rangle \neq 0. \end{equation*} \end{rem}
The proof of \cref{thm:existence_mu} is in four steps. \begin{itemize}
\item First, using Baire theorem to deal with the infinite number of non-vanishing conditions, one can find a function $\mu_{\Rref}$ satisfying \eqref{lin_nul}, \eqref{cub_non_nul}, \eqref{cond_bord} and \eqref{lin_non_nul}. Notice that only the non-vanishing condition \eqref{quad_non_nul} is not treated at this stage as the strategy of the two following steps, relying on oscillating functions, would destroy this condition.
\item Then, using some analyticity and the isolated zeros theorem, one constructs $\hat{\mu}_{\Rref}$ a perturbation of $\mu_{\Rref}$ satisfying \eqref{quad_nul_1} while conserving all the previous properties already satisfied by $\mu_{\Rref}$.
\item Similarly, one constructs then $\tild{\mu}_{\Rref}$ a perturbation of $\hat{\mu}_{\Rref}$ satisfying \eqref{quad_nul_2} while conserving all the previous properties satisfied by $\hat{\mu}_{\Rref}$.
\item Finally, using the construction of a quadratic basis, from $\tild{\mu}_{\Rref}$, one constructs a new function satisfying \eqref{quad_non_nul} in addition to the previous conditions. \end{itemize}
\begin{proof}[Proof of \cref{thm:existence_mu}] Let $K \in \mathbb{N}^*$, $K \geqslant 2$ and $\overline{x} \in (0,1)$ such that $\varphi_K(\overline{x})=0$. As $\varphi_1 >0$ on $(0,1)$ and $\varphi_K'(\overline{x})>0$, one may assume the existence of $\delta>0$ such that $\varphi_1 \varphi_K>0$ on $( \overline{x}, \overline{x}+ \delta)$ and $\varphi_1 \varphi_K<0$ on $(\overline{x}- \delta, \overline{x})$. Let $\eta \in (0, \overline{x} - \delta)$ such that $\varphi_1 \varphi_K \neq 0$ on $(0, \eta)$.
\noindent \emph{Step 1: Existence of $\mu$ in $H^{11} \cap H^4_0(0,1)$ supported on $ [0, \eta)$ and satisfying \eqref{lin_nul}, \eqref{cub_non_nul}, \eqref{cond_bord} and \eqref{lin_non_nul}.} In this step, we work with the $H^{11}(0,1)$-topology.
Denote by \begin{align*} \E &:= \left\{ \mu \in H^{11}( (0,1), \mathbb{R}); \ \mu \equiv 0 \text{ on } \left[ \frac{\eta}{2}, 1 \right] \text{ and } \mu \text{ satisfies } \eqref{lin_nul} \right\} \cap H^4_0(0,1) , \\ \U &:= \left\{ \mu \in \E; \ \mu \text{ satisfies } \eqref{cub_non_nul}, \eqref{cond_bord} \text{ and } \eqref{lin_non_nul} \right\}. \end{align*} The goal of Step 1 is to prove that $\U$ is not empty. As $\E$ is not empty, it suffices to prove that $\U$ is dense in $\E$.
Moreover, denoting by \begin{equation*} \Cspace := \left\{ \mu \in \E; \ C_K(\mu) \neq 0 \right\}
, \ \V := \left\{ \mu \in \E; \ \mu^{(5)}(0) \neq 0 \right\} \text{ and } \ \U_j := \left\{ \mu \in \E; \ \langle \mu \varphi_1, \varphi_j \rangle \neq 0 \right\}
, \end{equation*} $\U$ is the intersection of all the open subsets $\V$, $\Cspace$ and $\U_j$ for $j \in \mathbb{N}^*-\{K\}$. Thus, as $\E$ is a complete space (because closed in $H^{11}$), by Baire theorem, to prove that $\U$ is dense in $\E$, it suffices to prove that $\V$, $\Cspace$ and $\U_j$ for $j \in \mathbb{N}^*-\{K\}$ are dense in $\E$. The density of $\V$ is clear. Let $j \in \mathbb{N}^*-\{K\}$.
\noindent \emph{$\U_j$ is dense in $\E$.} Let $\mu^*$ in $\E$ such that $\langle \mu^* \varphi_1, \varphi_j \rangle =0$ and let $\varepsilon>0$. As the linear forms $\mu \mapsto \langle \mu \varphi_1, \varphi_K \rangle$ and $\mu \mapsto \langle \mu \varphi_1, \varphi_j \rangle$ are linearly independent (for $j \neq K$), one can find $\nu \in C_c^{\infty}(0, \frac{\eta}{2})$ such that $ \langle \nu \varphi_1, \varphi_K \rangle = 0 $ and $ \langle \nu \varphi_1, \varphi_j \rangle \neq 0. $
Then $\mu_{\varepsilon}:= \mu^* + \frac{\varepsilon}{\| \nu \|} \nu$ is in $\U_j$ with $\| \mu_{\varepsilon} - \mu^* \|_{H^{11}} < \varepsilon$.
\noindent \emph{$\Cspace$ is dense in $\E$.} Let $\mu^*$ in $\E$ such that $C_K(\mu^*) =0$ and let $\varepsilon>0$. By a construction similar to the one given in \cite[Theorem A.4]{B21bis}, one can find $\nu \in C_c^{\infty}(0, \frac{\eta}{2})$ such that $ \langle \nu \varphi_1, \varphi_K \rangle = 0 $ and $ C_K(\nu) \neq 0. $
Then, by \eqref{LB_A1K}, $\varepsilon \mapsto C_K ( \mu^* + \frac{\varepsilon}{\| \nu \|} \nu )$ is a polynomial of degree 3 vanishing at zero. Thus, there exists $\varepsilon^* > 0$ such that this polynomial doesn't vanish on $(0, \varepsilon^*)$. Hence, for all $\varepsilon \in (0, \varepsilon^*)$, $\mu_{\varepsilon}:= \mu^* + \frac{\varepsilon}{\| \nu \|} \nu$ is in $\Cspace$ with $\| \mu_{\varepsilon} - \mu^* \|_{H^{11}} < \varepsilon$.
\noindent \emph{Step 2: Existence of $\mu$ in $H^{11} \cap H^4_0$ satisfying \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{cub_non_nul}, \eqref{supp_mu}, \eqref{cond_bord} and \eqref{lin_non_nul}.}
Let $\mu_{\Rref}$ in $H^{11} \cap H^4_0$ constructed at Step 1, supported on $ [0, \eta)$ and satisfying \eqref{lin_nul}, \eqref{cub_non_nul}, \eqref{cond_bord} and \eqref{lin_non_nul}. The goal of this step is to prove that if $\mu_{\Rref}$ doesn't already satisfy \eqref{quad_nul_1} (we assume that $A^1_K( \mu_{\Rref}) <0$, the case $A^1_K( \mu_{\Rref}) >0$ is similar), then one can construct a perturbation of $\mu_{\Rref}$ satisfying \eqref{quad_nul_1} while conserving the properties already satisfied by $\mu_{\Rref}$. To that end, one can consider the following `basis' functions. \begin{itemize} \item Let $\mu_0$ in $C_c^{\infty} \left( \overline{x}+ \delta, \frac{1+\overline{x}+\delta}{2} \right)$ such that $\langle \mu_0 \varphi_1, \varphi_K \rangle =1$.
\item Let $J^-$ and $J^+$ be two open intervals of $(\overline{x} - \delta, \overline{x})$ and $(\overline{x}, \overline{x}+\delta)$ respectively. For all $\varepsilon >0$ and $\lambda \neq 0$, we define \begin{equation} \label{exp_mu_oscill} \mu_{\varepsilon, \lambda}(x) := \sqrt{ \frac{
\varepsilon | \lambda| } {
|\varphi_1(x(\lambda)) \varphi_K(x(\lambda))| } } \ g \left( \frac{x-x(\lambda)}{\varepsilon} \right), \quad x \in [0,1], \end{equation} where $g \in C^{\infty}_c(0,1)$ is such that $\int_0^1 g'(y)^2 dy=1$ and $x(\lambda):=x^+ \mathrm{1~\hspace{-1.4ex}l}_{\lambda>0}+x^- \mathrm{1~\hspace{-1.4ex}l}_{\lambda<0}$ where $x^{\pm}$ are in $J^{\pm}$ (thus, $\sign( \varphi_K(x^{\pm}))= \pm 1$). Notice that $\mu_{\varepsilon, \lambda}$ is supported on $(x(\lambda) , x(\lambda) + \varepsilon)$, and thus on $J^- \cup J^+$ for $\varepsilon$ small enough. Formally, $\mu_{\varepsilon, \lambda}$ is constructed so that $A^1_K( \mu_{\varepsilon, \lambda}) \approx \lambda$. \end{itemize}
We consider perturbations of $\mu_{\Rref}$ of the following form, \begin{equation} \label{def_nu_eps_lambda} \nu_{\varepsilon, \lambda} := \mu_{\Rref} + \mu_{\varepsilon, \lambda} - \langle \mu_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle \mu_0, \quad \varepsilon >0, \quad \lambda \neq 0. \end{equation} Notice that all the functions have disjoint supports (see Figure \ref{supports_Step2}) so that the quadratic and cubic forms can be seen as additive. Moreover, by construction, for all $\varepsilon>0$ and $\lambda \neq 0$, $\nu_{\varepsilon, \lambda}$ already satisfies \eqref{lin_nul}, \eqref{supp_mu} and \eqref{cond_bord}.
\begin{figure}
\caption{The supports of the functions used in Step 2 are depicted.
}
\label{supports_Step2}
\end{figure}
\noindent \emph{Step 2.1: For all $\varepsilon$ small enough, there exists $\lambda(\varepsilon)>0$ such that $\nu_{\varepsilon, \lambda(\varepsilon)}$ satisfies \eqref{quad_nul_1}.} The goal is to construct a one-parameter family of functions such that the following quantity vanishes, \begin{equation} \label{def_Q} Q( \varepsilon, \lambda) := A^1_K ( \nu_{\varepsilon, \lambda} ) = A^1_K ( \mu_{\Rref} ) + A^1_K ( \mu_{\varepsilon, \lambda} ) + \langle \mu_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle ^2 A^1_K ( \mu_0 ). \end{equation}
\noindent \emph{Regularity of $Q$.} Looking at \eqref{exp_mu_oscill}, one could fear some lack of regularity for $Q$ with respect to $\lambda$. However, as $A^1_K(\mu_{\Rref})<0$, one only needs to study $Q$ on $(\mathbb{R}^*_+)^2$. Moreover, substituting the expression \eqref{exp_mu_oscill} and performing the change of variables $x=\varepsilon y + x^+$, one has, for all $\varepsilon >0$ and $\lambda >0$, \begin{multline} \label{exp_fK} \langle \mu_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle = \sqrt{ \frac{ \varepsilon \lambda } { \varphi_1( x^+) \varphi_K(x^+) } } \int_{x^+}^{x^+ + \varepsilon} g \left( \frac{x-x^+}{\varepsilon} \right) \varphi_1( x) \varphi_K(x) dx \\ = \varepsilon^{3/2} \sqrt{ \frac{
\lambda } { \varphi_1( x^+) \varphi_K(x^+) } } \int_0^1 g(y) \varphi_1( \varepsilon y + x^+) \varphi_K( \varepsilon y + x^+) dy. \end{multline}
Thus, the map $ (\varepsilon, \lambda) \mapsto \langle \mu_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle $ is analytic on $(\mathbb{R}^*_+)^2$. Similarly, using the computation of $A^1_K$ given in \eqref{LB_A1K}, one gets, for all $\varepsilon>0$ and $\lambda>0$, \begin{equation} \label{exp_A1K} A^1_K ( \mu_{\varepsilon, \lambda} ) = \frac{ \lambda } { \varphi_1( x^+) \varphi_K(x^+) } \int_0^1 g'(y)^2 \varphi_1( \varepsilon y + x^+) \varphi_K( \varepsilon y + x^+) dy. \end{equation} Hence, the map $(\varepsilon, \lambda) \mapsto A^1_K( \mu_{\varepsilon, \lambda})$ is also analytic on $(\mathbb{R}^*_+)^2$. Thus, $Q$ is analytic on $(\mathbb{R}^*_+)^2$.
\noindent \emph{For $\varepsilon$ small enough, $Q(\varepsilon, \cdot)$ can take both signs.} Doing a Taylor expansion with respect to $\varepsilon$ of \eqref{exp_A1K}, one gets the existence of $C>0$ such that for all $\varepsilon>0$ and $\lambda>0$, \begin{equation} \label{estim_A1K}
\left| A^1_K ( \mu_{\varepsilon, \lambda} ) - \lambda
\right| \leqslant C \lambda \varepsilon. \end{equation} Thus, \eqref{def_Q}, \eqref{exp_fK} and \eqref{estim_A1K} lead to the existence of $C>0$ such that for all $\varepsilon \in (0,1)$ and $\lambda >0$, \begin{equation} \label{estim_Q}
| Q( \varepsilon, \lambda) - A^1_K( \mu_{\Rref}) - \lambda| \leqslant C \lambda \varepsilon. \end{equation} Thus, there exists $\varepsilon^*>0$ such that for all $\varepsilon \in (0, \varepsilon^*)$, \begin{equation*} Q \left( \varepsilon, - \frac{ A^1_K( \mu_{\Rref}) } { 2 } \right) <0 \quad \text{ and } \quad Q \left( \varepsilon, - \frac{ 3 A^1_K( \mu_{\Rref}) } { 2 } \right) >0. \end{equation*}
\noindent \emph{For $\varepsilon$ small enough, $Q(\varepsilon, \cdot)$ is increasing.} Differentiating \eqref{exp_fK} and \eqref{exp_A1K} with respect to $\lambda$ and performing again an expansion with respect to $\varepsilon$ in the spirit of \eqref{estim_A1K}, one gets the existence of $C>0$ such that for all $\varepsilon>0$ and $\lambda>0$, $
\left| \partial_{\lambda} Q( \varepsilon, \lambda) -1
\right| \leqslant C \varepsilon. $ Thus, for $\varepsilon>0$ small enough, $\partial_{\lambda} Q(\varepsilon, \cdot)$ is positive.
\noindent \emph{Conclusion.} Applying the intermediate value theorem, one gets for all $\varepsilon \in (0, \varepsilon^*)$ the existence of $\lambda=\lambda(\varepsilon) \in \left( - \frac{ A^1_K( \mu_{\Rref}) } { 2 }, - \frac{ 3 A^1_K( \mu_{\Rref}) } { 2 } \right) $ such that $Q(\varepsilon, \lambda(\varepsilon))=0$, meaning that $\nu_{\varepsilon, \lambda(\varepsilon)}$ defined in \eqref{def_nu_eps_lambda} satisfies \eqref{quad_nul_1} by definition \eqref{def_Q} of $Q$.
\noindent \emph{Step 2.2: The map $\varepsilon \mapsto \lambda(\varepsilon)$ is continuous.} As for all $\varepsilon \in (0, \varepsilon^*)$, $Q(\varepsilon, \lambda(\varepsilon))=0$, recalling the definition \eqref{def_Q} of $Q$, one has
\begin{equation*} \lambda(\varepsilon) = \lambda(\varepsilon) - A^1_K(\mu_{\varepsilon, \lambda(\varepsilon)}) - A^1_K( \mu_{\Rref}) - \langle \mu_{\varepsilon, \lambda(\varepsilon)} \varphi_1, \varphi_K \rangle ^2 A^1_K(\mu_0) =: \lambda(\varepsilon)H(\varepsilon) - A^1_K( \mu_{\Rref}). \end{equation*}
Using \eqref{exp_fK} and \eqref{exp_A1K}, $H$ is analytic on $(0, \varepsilon^*)$ with $|H(\varepsilon)| \leqslant C \varepsilon$ for all $\varepsilon \in (0, \varepsilon^*)$. Thus, for all $\varepsilon$ and $\varepsilon_0$ in $(0, \varepsilon^*)$, \begin{equation*}
\left| \lambda(\varepsilon) - \lambda(\varepsilon_0)
\right|
\leqslant C \varepsilon
| \lambda( \varepsilon) - \lambda(\varepsilon_0) | +
| \lambda(\varepsilon_0) |
|H(\varepsilon) - H (\varepsilon_0) |. \end{equation*} Reducing $\varepsilon^*$ if needed,
the continuity of $\varepsilon \mapsto \lambda(\varepsilon)$ on $(0, \varepsilon^*)$ stems from the one of $H$.
\noindent \emph{Step 2.3: The map $\varepsilon \mapsto \lambda(\varepsilon)$ is analytic.} Let $\varepsilon_0 \in (0, \varepsilon^*)$. By construction, $Q(\varepsilon_0, \lambda(\varepsilon_0))=0.$ Besides, in Step 2.1, we proved that $\partial_{\lambda} Q( \varepsilon_0, \lambda(\varepsilon_0)) > 0$ and that $Q$ is analytic on $(0, \varepsilon^*) \times \mathbb{R}^*_+$. Hence, by the implicit function theorem, there exists an open neighborhood $\mathcal{V}$ of $\varepsilon_0$, an open neighborhood $\mathcal{W}$ of $\lambda(\varepsilon_0)$ and an analytic function $\Lambda : \mathcal{V} \rightarrow \mathcal{W}$ such that \begin{equation*} \left( \varepsilon \in \mathcal{V}, \ \lambda \in \mathcal{W} \text{ and } Q( \varepsilon, \lambda) =0 \right) \quad \Leftrightarrow \quad \left( \varepsilon \in \mathcal{V} \text{ and } \lambda=\Lambda( \varepsilon) \right) \end{equation*} As $\varepsilon \mapsto \lambda(\varepsilon)$ is continuous, locally $\lambda= \Lambda$ and thus, $\varepsilon \mapsto \lambda(\varepsilon)$ is analytic on $(0, \varepsilon^*)$.
\noindent \emph{Step 2.4: There exists $\varepsilon$ such that $\nu_{\varepsilon, \lambda(\varepsilon)}$ satisfies \eqref{lin_non_nul} and \eqref{cub_non_nul}.} By a computation similar to the one in \eqref{exp_fK}, one gets, for all $j \in \mathbb{N}^*$, $\varepsilon >0$ and $\lambda>0$, \begin{multline*} \langle \nu_{\varepsilon, \lambda} \varphi_1, \varphi_j \rangle = \langle \mu_{\Rref} \varphi_1, \varphi_j \rangle + \varepsilon^{3/2} \sqrt{ \frac{
\lambda } { \varphi_1( x^+) \varphi_K(x^+) } } \int_0^1 g(y) \varphi_1( \varepsilon y + x^+) \varphi_j( \varepsilon y + x^+) dy \\ - \langle \mu_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle \langle \mu_0 \varphi_1, \varphi_j \rangle. \end{multline*} As $(\varepsilon, \lambda) \mapsto \langle \mu_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle$ is analytic on $(\mathbb{R}^*_+)^2$ (see Step 2.1) and $\varepsilon \mapsto \lambda(\varepsilon)$ is analytic on $(0, \varepsilon^*)$ (see Step 2.3) , for all $j \in \mathbb{N}^*-\{K\}$, the map $ \varepsilon \mapsto \langle \nu_{\varepsilon, \lambda(\varepsilon)} \varphi_1, \varphi_j \rangle $ is analytic on $(0, \varepsilon^*)$. It can also be extended by continuity at zero with the value $\langle \mu_{\Rref} \varphi_1, \varphi_j \rangle \neq 0$ by construction of $\mu_{\Rref}$. Similarly, using \eqref{LB_A1K}, $\varepsilon \mapsto C_K(\nu_{\varepsilon, \lambda(\varepsilon)})$ is analytic on $(0, \varepsilon^*)$ and can be extended continuously at zero with the value $C_K(\mu_{\Rref}) \neq 0$. Thus, the functions $ ( \varepsilon \mapsto \langle \nu_{\varepsilon, \lambda(\varepsilon)} \varphi_1, \varphi_j \rangle )_{j \in \mathbb{N}^*- \{K\}} $ and $ \varepsilon \mapsto C_K(\nu_{\varepsilon, \lambda(\varepsilon)}) $ are analytic and non-zero on $(0, \varepsilon^*)$. Hence, by the isolated zeros theorem, there exists $\varepsilon \in (0, \varepsilon^*)$, such that for all $j \in \mathbb{N}^*- \{K \}$, $\langle \nu_{\varepsilon, \lambda(\varepsilon)} \varphi_1, \varphi_j \rangle \neq 0$ and $C_K(\nu_{\varepsilon, \lambda(\varepsilon)})\neq0$, meaning that $\nu_{\varepsilon, \lambda(\varepsilon)}$ satisfies \eqref{cub_non_nul} and \eqref{lin_non_nul}.
\noindent \emph{Step 3: Existence of $\mu$ in $H^{11} \cap H^4_0$ satisfying \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{quad_nul_2}, \eqref{cub_non_nul}, \eqref{supp_mu}, \eqref{cond_bord} and \eqref{lin_non_nul}.} The proof of Step 3 is quite similar to the one of Step 2. Let $\hat{\mu}_{\Rref}$ be the function constructed at Step 2, satisfying \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{cub_non_nul}, \eqref{supp_mu}, \eqref{cond_bord} and \eqref{lin_non_nul}. As in Step 2, the goal is to prove that if $\hat{\mu}_{\Rref}$ doesn't already satisfy \eqref{quad_nul_2} (we assume that $A^2_K( \hat{\mu}_{\Rref}) <0$), then one can construct a perturbation of $\hat{\mu}_{\Rref}$ satisfying \eqref{quad_nul_2} while conserving the properties already satisfied by $\hat{\mu}_{\Rref}$. Let $\hat{J}^{+}$ and $I^{+}$ (resp.\ $\hat{J}^{-}$ and $I^{-}$) be open disjoint intervals of $(\overline{x}, \overline{x}+ \delta) - J^+$ (resp.\ $(\overline{x}- \delta, \overline{x}) - J^-$). In this step, we consider the following new `basis' functions.
\begin{itemize}
\item Let $\hat{\mu}_0$ in $C_c^{\infty}\left(\frac{1+\overline{x}+\delta}{2}, 1 \right)$ such that $\langle \hat{\mu}_0 \varphi_1, \varphi_K \rangle =1$ and $A^1_K( \hat{\mu}_0)=0$.
\item By \cite[Theorem A.4]{B21bis}, there exists $\mu_1^{\pm}$ in $C_c^{\infty}(I^{\pm})$ such that $ \langle \mu_1^{\pm} \varphi_1, \varphi_K \rangle=0 $ and $ A^1_K( \mu_1^{\pm}) = \pm 1. $ \item For all $\varepsilon >0$ and $\lambda \neq 0$, we define \begin{equation*} \hat{\mu}_{\varepsilon, \lambda}(x) := \varepsilon^{5/2} \sqrt{ \frac{
| \lambda| } {
|\varphi_1(x(\lambda)) \varphi_K(x(\lambda))| } } \ g \left( \frac{x-x(\lambda)}{\varepsilon} \right), \end{equation*} where $g \in C^{\infty}_c(0,1)$ such that $\int_0^1 g^{(3)}(y)^2 dy=1$ and $x(\lambda):=\hat{x}^+ \mathrm{1~\hspace{-1.4ex}l}_{\lambda>0}+\hat{x}^- \mathrm{1~\hspace{-1.4ex}l}_{\lambda<0}$, where $\hat{x}^{\pm}$ are in $\hat{J}^{\pm}$. Notice that for $\varepsilon$ small enough, the support of $\hat{\mu}_{\varepsilon, \lambda}$ is in $\hat{J}^- \cup \hat{J}^+$. Formally, this time, $\hat{\mu}_{\varepsilon, \lambda}$ is constructed so that $A^2_K( \hat{\mu}_{\varepsilon, \lambda}) \approx \lambda$. \end{itemize} In this step, we consider perturbations of $\hat{\mu}_{\Rref}$ of the form, \begin{equation*} \hat{\nu}_{\varepsilon, \lambda} := \hat{\mu}_{\Rref} + \hat{\mu}_{\varepsilon, \lambda} - \langle \hat{\mu}_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle \hat{\mu}_0 + \sqrt{
\left| A^1_K( \hat{\mu}_{\varepsilon, \lambda})
\right| } \mu_1^{ -\sign( A^1_K( \hat{\mu}_{\varepsilon, \lambda}) ) }, \quad \varepsilon >0, \quad \lambda \neq 0. \end{equation*} Once again, we made sure that all the functions considered have disjoint supports (see Figure \ref{supports_Step3}) so that the quadratic and cubic forms can be seen as additive.
\begin{figure}
\caption{The supports of the functions used in Step 3 are depicted.
}
\label{supports_Step3}
\end{figure}
\noindent Moreover, by construction, for all $(\varepsilon, \lambda) \in \mathbb{R}^*_+ \times \mathbb{R}^*$, $\hat{\nu}_{\varepsilon, \lambda}$ already satisfies \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{supp_mu} and \eqref{cond_bord}. Then, define \begin{multline} \label{def_Q_hat} \hat{Q}( \varepsilon, \lambda) := A^2_K ( \hat{\nu}_{\varepsilon, \lambda} ) = A^2_K ( \hat{\mu}_{\Rref} ) + A^2_K ( \hat{\mu}_{\varepsilon, \lambda} ) + \langle \hat{\mu}_{\varepsilon, \lambda} \varphi_1, \varphi_K \rangle^2 A^2_K( \hat{\mu}_0 ) \\ +
| A^1_K( \hat{\mu}_{\varepsilon, \lambda}) | A^2_K ( \mu_1^{ -\sign( A^1_K( \hat{\mu}_{\varepsilon, \lambda}) ) } ). \end{multline}
The end of Step 3 is exactly the same as that of Step 2.
\begin{itemize} \item Applying the intermediate value theorem to $\hat{Q}(\varepsilon, \cdot)$, one proves the existence of a continuous map $\varepsilon \mapsto \lambda(\varepsilon)$ such that for all $\varepsilon$ small enough, $\hat{Q}(\varepsilon, \lambda(\varepsilon))=0$ and thus, $\hat{\nu}_{\varepsilon, \lambda(\varepsilon)}$ satisfies \eqref{quad_nul_2}. Notice that because of the last term in \eqref{def_Q_hat},
one could fear a lack of regularity of $\hat{Q}$. However, as $\lambda \mapsto A^1_K( \hat{\mu}_{\varepsilon, \lambda})$ is continuous on $\mathbb{R}^*_+$ (by a computation similar to \eqref{exp_A1K}), $\sign( A^1_K( \hat{\mu}_{\varepsilon, \lambda}) )$ is locally constant around $\lambda=- A^2_K( \hat{\mu}_{\Rref})$.
\item Then, one uses the implicit function theorem to get that $\varepsilon \mapsto \lambda(\varepsilon)$ is analytic and thus, with the isolated zeros theorem, to get the existence of an $\varepsilon$ such that $\hat{\nu}_{\varepsilon, \lambda(\varepsilon)}$ satisfies \eqref{cub_non_nul} and \eqref{lin_non_nul}. \end{itemize}
\noindent \emph{Step 4: Existence of $\mu$ in $H^{11} \cap H^4_0$ satisfying \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{quad_nul_2}, \eqref{quad_non_nul}, \eqref{cub_non_nul}, \eqref{supp_mu}, \eqref{cond_bord} and \eqref{lin_non_nul}.} Let $\tild{\mu}_{\Rref}$ be the function constructed in Step 3, satisfying \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{quad_nul_2}, \eqref{cub_non_nul}, \eqref{supp_mu}, \eqref{cond_bord} and \eqref{lin_non_nul}. Assume that $A^3_K( \tild{\mu}_{\Rref}) = 0$, otherwise $\tild{\mu}_{\Rref}$ already satisfies \eqref{quad_non_nul}. Let $\tild{J}^-$ and $\tild{J}^+$ be two open disjoint intervals of, respectively, $(\overline{x}- \delta, \overline{x}) - (J^- \cup \hat{J}^- \cup I^-)$ and $(\overline{x}, \overline{x}+ \delta) - (J^+ \cup \hat{J}^+ \cup I^+)$. By \cite[Theorem A.4]{B21bis}, there exists $\nu$ in $C_c^{\infty}(\tild{J}^{\pm})$ such that \begin{equation*} \langle \nu \varphi_1, \varphi_K \rangle = A^1_K( \nu) = A^2_K(\nu) =0 \quad \text{ and } \quad A^3_K(\nu)=1. \end{equation*} Define for all $\varepsilon \in \mathbb{R}$, $ \nu_{\varepsilon} := \tild{\mu}_{\Rref} + \varepsilon \nu. $ By construction, for all $\varepsilon \in \mathbb{R}^*$, $\nu_{\varepsilon}$ satisfies \eqref{lin_nul}, \eqref{quad_nul_1}, \eqref{quad_nul_2}, \eqref{quad_non_nul}, \eqref{supp_mu} and \eqref{cond_bord} because the functions have disjoint supports. Moreover, the maps $\varepsilon \mapsto C_K( \nu_{\varepsilon})$ and $\varepsilon \mapsto \langle \nu_{\varepsilon} \varphi_1, \varphi_j \rangle$ for all $j \in \mathbb{N}^* - \{K \}$ are polynomial, hence analytic, and non-vanishing at zero by construction of $\tild{\mu}_{\Rref}$. So, by the isolated zeros theorem, there exists $\varepsilon \in \mathbb{R}^*$ such that $\nu_{\varepsilon} $ satisfies \eqref{cub_non_nul} and \eqref{lin_non_nul}.
\end{proof}
\textbf{Acknowledgments.} The author would like to thank Karine Beauchard and Frédéric Marbach (École Normale Supérieure de Rennes) for introducing her to this problem, and for many fruitful discussions and helpful advice.
\end{document}
\begin{document}
\title{\Large {\bf A characterization of some graphs with metric dimension two}}
\author{ {\sc Ali Behtoei$^a$\thanks{[email protected]}}, {\sc Akbar Davoodi$^b$\thanks{[email protected]}}, {\sc Mohsen Jannesari$^c$\thanks{[email protected]}} and {\sc Behnaz Omoomi$^d$\thanks{[email protected]}}\\ [1mm] {$^a$\it \small Department of Mathematics, Imam Khomeini International University, 34149-16818, Qazvin, Iran }\\ {$^{b,d}$ \it \small Department of Mathematical Sciences, Isfahan University of Technology, 84156-83111, Isfahan, Iran}\\ {$^c$ \it\small University of Shahreza, 86149-56841, Shahreza, Iran}} \date{}
\maketitle
\begin{abstract}
A set $W\subseteq V(G)$ is called a resolving set if for each pair of distinct vertices $u,v\in V(G)$ there exists $t\in W$ such that $d(u,t)\neq d(v,t)$, where $d(x,y)$ is the distance between vertices $x$ and $y$. The cardinality of a minimum resolving set for $G$ is called the metric dimension of $G$ and is denoted by $\dim_M(G)$. A $k$-tree is a chordal graph all of whose maximal cliques have size $k+1$
and all of whose minimal clique separators have size $k$. A $k$-path is a $k$-tree with maximum degree $2k$ in which, for each integer $j$ with $k\leq j<2k$, there exists a unique pair of vertices, $u$ and $v$, such that $\deg(u)=\deg(v)=j$. In this paper, we prove that if $G$ is a $k$-path, then $\dim_M(G)=k$. Moreover, we provide a characterization of all $2$-trees with metric dimension two. \end{abstract}
\section{Introduction} Throughout this paper all graphs are finite, simple and undirected. The notations $\delta$, $\Delta$ and $N_G(v)$ stand for the minimum degree, the maximum degree and the set of neighbours of a vertex $v$ in $G$, respectively.
For an ordered set $W=\{w_1,w_2,\ldots,w_k\}$ of vertices and a vertex $v$ in a connected graph $G$, the $k$-vector
$r(v|W):=(d(v,w_1),d(v,w_2),\ldots,d(v,w_k))$ is called the \textit{metric representation} of $v$ with respect to $W$, where $d(x,y)$ is the distance between two vertices $x$ and $y$. The set $W$ is called a \textit{resolving set} for $G$ if distinct vertices of $G$ have distinct representations with respect to $W$. We say a set $S\subseteq V(G)$ \textit{resolves} a set $T\subseteq V(G)$ if for each pair of distinct vertices $u$ and $v$ in $T$ there is a vertex $s\in S$ such that $d(u,s)\neq d(v,s)$. A minimum resolving set is called a \textit{basis} and the \textit{metric dimension} of $G$, $\dim_M(G)$, is the cardinality of a basis for $G$. A graph with metric dimension $k$
is called $k$-\textit{dimensional}.
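These definitions are easy to experiment with computationally. The following short Python sketch (an illustrative aside, not used anywhere in this paper; the function names are our own) computes metric representations with respect to an ordered set $W$ by breadth-first search and checks whether $W$ is a resolving set; the brute-force \texttt{metric\_dimension} routine is only feasible for small connected graphs.
\begin{verbatim}
from collections import deque
from itertools import combinations

def distances_from(graph, source):
    """BFS distances from `source` in a connected graph given as {v: set of neighbours}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def metric_representation(graph, W):
    """r(v|W) for every vertex v, as a dict v -> tuple of distances to the vertices of W."""
    dist_to = [distances_from(graph, w) for w in W]
    return {v: tuple(d[v] for d in dist_to) for v in graph}

def is_resolving(graph, W):
    """W is a resolving set iff distinct vertices get distinct representations."""
    return len(set(metric_representation(graph, W).values())) == len(graph)

def metric_dimension(graph):
    """Brute-force metric dimension (exponential; small graphs only)."""
    vertices = list(graph)
    for size in range(1, len(vertices)):
        for W in combinations(vertices, size):
            if is_resolving(graph, W):
                return size
    return len(vertices) - 1
\end{verbatim}
For instance, \texttt{metric\_dimension} returns $n-1$ on the complete graph $K_n$ and $1$ on a path, in accordance with the results recalled below.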
The concept of the resolving set has various applications in diverse areas including coin weighing problems~\cite{coin},
network discovery and verification~\cite{net2},
robot navigation~\cite{landmarks}, mastermind game~\cite{cartesian product}, problems of pattern recognition and image processing~\cite{digital},
and combinatorial search and optimization~\cite{coin}.
These concepts were introduced by Slater in~\cite{Slater1975}.
He described the usefulness of these concepts when working with U.S. Sonar and Coast Guard Loran stations. Independently, Harary and Melter~\cite{Harary} discovered these concepts. In \cite{landmarks}, it is proved that determining the metric dimension of a graph in general is an $NP$-complete problem, but the metric dimension of trees can be obtained by a polynomial time algorithm.
It is obvious that for every graph $G$ of order $n$, $1\leq \dim_M(G) \leq n-1$.
Chartrand et al.~\cite{Ollerman} proved that for $n\geq 2$, $\dim_M(G)=n-1$ if and only if $G$ is the complete graph $K_n$. They also provided a
characterization of graphs of order $n$ and metric dimension $n-2$~\cite{Ollerman}. Graphs with metric dimension $n-3$ are characterized in~\cite{n-3}.
Khuller et al.~\cite{landmarks} and Chartrand et al.~\cite{Ollerman}
proved that $\dim_M(G)=1$ if and only if $G$ is a path. Moreover, in~\cite{chang} some properties of $2$-dimensional graphs are obtained. \begin{thm}\label{thm:degree of basis elements}{\em\cite{chang}} Let $G$ be a $2$-dimensional graph. If $\{a,b\}$ is a basis for $G$, then \begin{enumerate} \item there is a unique shortest path $P$ between $a$ and $b$, \item the degrees of $a$ and $b$ are at most three, \item
the degree of each internal vertex on $P$ is at most five. \end{enumerate} \end{thm}
A \textit{chordal graph} is a graph with no induced cycle of length greater than three. A \textit{$k$-tree} is a chordal graph all of whose maximal cliques have size $k+1$ and all of whose minimal clique separators have size $k$. In other words, a $k$-tree may be formed by starting with a set of $k+1$ pairwise adjacent vertices and then repeatedly adding vertices in such a way that each added vertex has exactly $k$ neighbours that form a $k$-clique.
By the above definition, it is clear that if $G$ is a $k$-tree, then $\delta(G)=k$. $1$-trees are the same as trees; $2$-trees are maximal series-parallel graphs~\cite{2tree} and include also the maximal outer-planar graphs. These graphs can be used to model series and parallel electric circuits.
Planar $3$-trees are also known as Apollonian networks~\cite{3-tree}.
A \textit{$k$-path} is a $k$-tree with maximum degree $2k$, where for each integer $j$, $k\leq j<2k$, there exists a unique pair of vertices, $u$ and $v$, such that $\deg(u)=\deg(v)=j$. On the other hand, regards to the recursive construction of $k$-trees, a $k$-path $G$ can be considered as a graph with vertex set $ V(G)=\{v_1,v_2,\ldots,v_n\}$ and edge set $ E(G)=\{v_iv_j:~ |i-j|\leq k\}.$ For instance, two different representations of a $2$-path $G$ with seven vertices $v_1,\ldots, v_7$ are shown in Figure \ref{fig:twoRep}. \begin{center} \begin{tikzpicture} [inner sep=0.5mm, place/.style={circle,draw=black,fill=black,thick}] \node[place] (v1) at (-2.5,.5) [label=below:$v_1$] {}; \node[place] (v2) at (-2.5,1.5) [label=above:$v_2$] {}edge [-,thick](v1); \node[place] (v3) at (-1.5,.5) [label=below:$v_3$] {}edge [-,thick](v1)edge [-,thick](v2); \node[place] (v4) at (-1.5,1.5) [label=above:$v_4$] {}edge [-,thick](v2)edge [-,thick](v3); \node[place] (v5) at (-.5,.5) [label=below:$v_5$] {}edge [-,thick](v3)edge [-,thick](v4); \node[place] (v6) at (-.5,1.5) [label=above:$v_6$] {}edge [-,thick](v4)edge [-,thick](v5); \node[place] (v7) at (.5,.5) [label=below:$v_7$] {}edge [-,thick](v5)edge [-,thick](v6);
\node[place] (v1') at (-4,-1.5) [label=below:$v_1$] {}; \node[place] (v2') at (-3,-1.5) [label=below:$v_2$] {}edge [-,thick](v1'); \node[place] (v3') at (-2,-1.5) [label=above:$v_3$] {}edge [-,thick, bend right](v1')edge [-,thick](v2'); \node[place] (v4') at (-1,-1.5) [label=below:$v_4$] {}edge [-,thick,bend left](v2')edge [-,thick](v3'); \node[place] (v5') at (0,-1.5) [label=above:$v_5$] {}edge [-,thick,bend right](v3')edge [-,thick](v4'); \node[place] (v6') at (1,-1.5) [label=below:$v_6$] {}edge [-,thick,bend left](v4')edge [-,thick](v5'); \node[place] (v7') at (2,-1.5) [label=below:$v_7$] {}edge [-,thick,bend right](v5')edge [-,thick](v6'); \end{tikzpicture} \captionof{figure}{Two different representations of a $2$-path.\label{fig:twoRep}} \end{center}
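To make the interval description above concrete, the next sketch (again purely illustrative) builds the graph on $v_1,\ldots,v_n$ in which $v_iv_j$ is an edge whenever $0<|i-j|\leq k$, and verifies the degree conditions of the definition on the $2$-path with seven vertices shown in Figure \ref{fig:twoRep}.
\begin{verbatim}
def k_path(n, k):
    """The k-path on v_1,...,v_n: v_i ~ v_j whenever 0 < |i - j| <= k."""
    graph = {i: set() for i in range(1, n + 1)}
    for i in range(1, n + 1):
        for j in range(i + 1, min(i + k, n) + 1):
            graph[i].add(j)
            graph[j].add(i)
    return graph

G = k_path(7, 2)                      # the 2-path with seven vertices shown above
degrees = sorted(len(G[v]) for v in G)
assert max(degrees) == 2 * 2          # maximum degree 2k
for j in (2, 3):                      # for each j with k <= j < 2k ...
    assert degrees.count(j) == 2      # ... exactly one pair of vertices of degree j
\end{verbatim}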
In this paper, we show that the metric dimension of each $k$-path (as a generalization of a path) is $k$. Whereas, there are some examples of $2$-trees with metric dimension two that are not $2$-path.
This fact motivates us to study the structure of $2$-dimensional $2$-trees. As a main result, we characterize the class of all $2$-trees with metric dimension two.
\section{Main Results} In this section, we first prove that the metric dimension of each $k$-path is $k$. Then, we introduce a class of graphs which shows that the converse of this fact is not true in general. Later on, we focus on the case $k=2$ and, towards determining all $2$-trees with metric dimension two, we construct a family $\cal F$ of $2$-trees with metric dimension two. Finally, as the main result, we prove that the metric dimension of a $2$-tree $G$ is two if and only if $G$ belongs to $\cal F$.
\begin{thm}\label{k-path} If $G$ is a $k$-path, then $\dim_{_M}(G)=k$. \end{thm}
\begin{proof}{
Let $G$ be a $k$-path with vertex set $ V(G)=\{v_1,v_2,\ldots,v_n\}$ and edge set $ E(G)=\{v_iv_j:~ |i-j|\leq k\}$. Therefore, the distance between two vertices $v_r$ and $v_s$ in $G$ is given by $d(v_r,v_s)=\left\lceil {|r-s|\over k}\right\rceil$.
At first, let $W=\{v_1,v_2,\ldots,v_k\}$ and $v_i$, $v_j$ be two distinct vertices of $G$ with $k<i<j$. By the division algorithm, there exist integers $r$ and $s$ such that $i=rk+s$, $1\leq s\leq k$. Thus, we have
$$d(v_i,v_s)=\left\lceil{|i-s|\over k}\right\rceil=\left\lceil{rk\over k}\right\rceil=r,$$
and
$$d(v_j,v_s)=\left\lceil{|j-s|\over k}\right\rceil=\left\lceil{rk+(j-i)\over k}\right\rceil=
r+\left\lceil{j-i\over k}\right\rceil\geq r+1.$$
This means $W$ is a resolving set for $G$. Hence, $\dim_M(G)\leq |W|=k$.
Now, we show that $\dim_M(G)\geq k$.
Let $W$ be a basis of the $k$-path $G$, and let $X=\{v_1,v_2,\ldots,v_{k+1}\}$. Assume that $|W\cap X|=s$ and $X\setminus W=\{v_{i_1},v_{i_2},\ldots,v_{i_{k+1-s}}\}$, where $1\leq i_1<i_2<\cdots<i_{k+1-s}\leq k+1$. For convenience, let $X'=\{x_1,x_2,\ldots,x_{k+1-s}\}$, where $x_r=v_{i_r}$, for each $r$, $1\leq r\leq k+1-s$. Since each vertex $v_i$ of the $k$-path $G$ is adjacent to the next $k$ consecutive vertices $\{v_{i+1},\ldots, v_{i+k}\}$, the induced subgraph on $X$ is a $(k+1)$-clique. Each vertex in $W\cap X$ is adjacent to each vertex in $X'$. Thus, each pair of vertices in $X'$ should be resolved by some element of $W\setminus X$. Assume that $W'=\{w_1,w_2,\ldots,w_t\}$ is a minimum subset of $W\setminus X$ which resolves the vertices in $X'$. Thus, for each $w_j\in W'$ there exists $\{x_r,x_s\}\subseteq X'$ such that $d(w_j,x_r)\neq d(w_j,x_s)$. For each $j$, $1\leq j\leq t$, let $$r_j=\min\{r:~d(w_j,x_r)\neq d(w_j,x_{r+1})\},$$ and let $$A_j=\{x_1,x_2,\ldots,x_{r_j}\},~B_j=\{x_{r_j+1},x_{r_j+2},\ldots,x_{k+1-s}\}.$$ Note that $A_j\cup B_j=X'$, $A_j\cap B_j=\emptyset$, $x_1\in A_j$ and $x_{k+1-s}\in B_j$. Also, the structure of $G$ implies that $$d(w_j,x_1)=d(w_j,x_2)=\cdots=d(w_j,x_{r_j}),$$ and $$d(w_j,x_{r_j+1})=d(w_j,x_{r_j+2})=\cdots=d(w_j,x_{k+1-s}).$$
Since $W'$ has the minimum size, for each $1\leq j<j'\leq t$ we have $A_{j}\neq A_{j'}$ (otherwise, $w_{j}$ and $w_{j'}$ resolve the same pair of vertices in $X'$) and hence, $|A_{j}|\neq |A_{j'}|$. Moreover,
for each $r$, $1\leq r\leq k-s$, there exists $w_j\in W'$ such that $d(w_j,x_r)\neq d(w_j,x_{r+1})$ which implies $|A_j|=r$.
Therefore, $$t=\left|\{|A_1|,|A_2|,\ldots,|A_t|\}\right|=|\{1,2,\ldots,k-s\}|=k-s.$$
Hence, $$|W|=|W\setminus X|+|W\cap X|\geq |W'|+s=(k-s)+s=k,$$ which completes the proof.
}\end{proof}
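Before proceeding, we note that Theorem \ref{k-path} can be checked numerically on small instances; the following lines (an illustration only, in no way part of the proof) reuse \texttt{k\_path}, \texttt{is\_resolving} and \texttt{metric\_dimension} from the sketches given earlier.
\begin{verbatim}
# Assumes k_path, is_resolving and metric_dimension from the earlier sketches.
for k in (1, 2, 3):
    for n in (2 * k + 1, 2 * k + 3, 10):
        G = k_path(n, k)
        W = tuple(range(1, k + 1))    # the set {v_1, ..., v_k} used in the proof
        assert is_resolving(G, W)
        assert metric_dimension(G) == k
\end{verbatim}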
\begin{definition} Let $G$ and $H$ be two $2$-trees. We say that $H$ is a \textit{branch} in $G$ on $\{u,v\}$, for convenience say a $(u,v)$-branch, if $V(H)\cap V(G)=\{u,v\}$, where $uv$ is an edge of $G$ belonging to only one of the triangles in $H$. The \textit{length} of a branch in a $2$-tree is the number of its triangles, which is equal to the number of vertices of the branch minus $2$. A \textit{cane} is a $2$-path with a branch of length one on a specific edge as shown in Figure \ref{cane}. \begin{center} \begin{tikzpicture} [rotate=90,inner sep=0.5mm, place/.style={circle,draw=black,fill=black,thick}] \node[place] (r1) at (-2,4) {}; \node[place] (r2) at (-2,3) {}edge [-,thick](r1); \node[place] (r3) at (-2,2) {}edge [-,thick](r2); \node[place] (rk-1) at (-2,1) {}edge [-,thick](r3); \node[place] (s1) at (-1,4) {}edge [-,thick](r1); \node[place] (s2) at (-1,3) {}edge [-,thick](r2)edge [-,thick](s1)edge [-,thick](r1); \node[place] (s3) at (-1,2) {}edge [-,thick](r3)edge [-,thick](s2)edge [-,thick](r2); \node[place] (sk-1) at (-1,1) {}edge [-,thick](rk-1)edge [-,thick](s3)edge [-,thick](r3); \node (dots) at (-1.5,.5) [label=center:${\cdots}$]{}; \node[place] (c2) at (0,4) {}edge [-,thick](s1)edge [-,thick](s2); \node[place] (r11) at (-2,0) {}; \node[place] (r22) at (-2,-1) {}edge [-,thick](r11); \node[place] (s11) at (-1,0) {}edge [-,thick](r11); \node[place] (s22) at (-1,-1) {}edge [-,thick](r22)edge [-,thick](s11)edge [-,thick](r11);
\end{tikzpicture} \captionof{figure}{A cane.\label{cane}} \end{center}
\end{definition}
In the following proposition, we provide some $2$-trees with metric dimension two other than $2$-paths. \begin{pro}\label{lem:d=1} If $G$ is a $2$-tree of metric dimension two with a basis whose elements are adjacent, then $G$ is a $2$-path or a cane. \begin{center} \begin{tikzpicture} [inner sep=0.5mm, place/.style={circle,draw=black,fill=black,thick}] \node[place] (r1) at (-6,4) [label=above:$a$] {}; \node[place] (r2) at (-6,3) [label=left:$(1\text{,}2)$] {}edge [-,thick](r1); \node[place] (r3) at (-6,2) {}edge [-,thick](r2); \node[place] (rk-1) at (-6,1) {}edge [-,thick](r3); \node[place] (s1) at (-5,4) [label=above:$b$] {}edge [-,thick](r1); \node[place] (s2) at (-5,3) [label=right:$(1\text{,}1)$]{}edge [-,thick](r2)edge [-,thick](s1)edge [-,thick](r1); \node[place] (s3) at (-5,2) {}edge [-,thick](r3)edge [-,thick](s2)edge [-,thick](r2); \node[place] (sk-1) at (-5,1) {}edge [-,thick](rk-1)edge [-,thick](s3)edge [-,thick](r3); \node (dots) at (-5.5,.5) [label=center:${\vdots}$]{}; \node[place] (c2) at (-4,4) [label=above:$(2\text{,}1)$] {}edge [-,thick](s1)edge [-,thick](s2); \node[place] (r11) at (-6,0) {}; \node[place] (r22) at (-6,-1) [label=left:$(t\text{,}t+1)$] {}edge [dashed,thick](r11); \node[place] (s11) at (-5,0) {}edge [-,thick](r11); \node[place] (s22) at (-5,-1) [label=right:$(t\text{,}t)$] {}edge [dashed,thick](r22)edge [-,thick](s11)edge [-,thick](r11);
\node (dots) at (-5.5,-2.5) [label=center:{\em(a)}]{};
\node[place] (a1) at (-2,4) {}; \node[place,line width=3pt] (a2) at (-2,3) {}edge [-,thick](a1); \node[place] (a3) at (-2,2) {}edge [-,thick](a2); \node[place] (ak-1) at (-2,1) {}edge [-,thick](a3);
\node[place] (b1) at (-1,4) [label=above:$a$] {}edge [-,thick](a1); \node[place] (b2) at (-1,3) {}edge [-,thick](a2)edge [-,thick](b1)edge [-,thick](a1); \node[place,line width=3pt] (b3) at (-1,2) {}edge [-,thick](a3)edge [-,thick](b2)edge [-,thick](a2); \node[place] (bk-1) at (-1,1) {}edge [-,thick](ak-1)edge [-,thick](b3)edge [-,thick](a3); \node (dots) at (-1.5,.5) [label=center:${\vdots}$]{}; \node[place] (r11) at (-2,0) {}; \node[place] (r22) at (-2,-1) {}edge [dashed,thick](r11); \node[place] (s11) at (-1,0) {}edge [-,thick](r11); \node[place] (s22) at (-1,-1) {}edge [dashed,thick](r22)edge [-,thick](s11)edge [-,thick](r11);
\node (dots) at (-1.5,-2.5) [label=center:{\em(b)}]{}; \node[place] (c1) at (0,4) [label=above:$b$] {}edge [-,thick](b1)edge [-,thick](b2);
\node[place] (u1) at (2,4) {}; \node[place] (u2) at (2,3) {}edge [-,thick](u1); \node[place] (u3) at (2,2) {}edge [-,thick](u2); \node[place] (uk-1) at (2,1) {}edge [-,thick](u3);
\node[place,line width=3pt] (v1) at (3,4) {}edge [-,thick](u1); \node[place] (v2) at (3,3) {}edge [-,thick](u2)edge [-,thick](v1)edge [-,thick](u1); \node[place] (v3) at (3,2) {}edge [-,thick](u3)edge [-,thick](v2)edge [-,thick](u2); \node[place] (vk-1) at (3,1) {}edge [-,thick](uk-1)edge [-,thick](v3)edge [-,thick](u3); \node (dots) at (2.5,.5) [label=center:${\vdots}$]{}; \node[place,line width=3pt] (c3) at (4,4) {}edge [-,thick](v1)edge [-,thick](v2); \node[place] (u11) at (2,0){}; \node[place] (u22) at (2,-1) [label=below:$a$]{}edge [dashed,thick](u11); \node[place] (v11) at (3,0) {}edge [-,thick](u11); \node[place] (v22) at (3,-1) [label=below:$b$] {}edge [dashed,thick](u22)edge [-,thick](v11)edge [-,thick](u11); \node (dots) at (2.5,-2.5) [label=center:{\em(c)}]{};
\node[place] (x1) at (6,4) [label=above:$a$] {}; \node[place] (x2) at (6,3) [label=left:$(1\text{,}2)$]{}edge [-,thick](x1); \node[place] (x3) at (6,2) {}edge [-,thick](x2); \node[place] (xk-1) at (6,1) {}edge [-,thick](x3);
\node[place] (y1) at (7,4) [label=above:$b$] {}edge [-,thick](x1); \node[place] (y2) at (7,3) [label=right:$(1\text{,}1)$]{}edge [-,thick](x2)edge [-,thick](y1)edge [-,thick](x1); \node[place] (y3) at (7,2) {}edge [-,thick](x3)edge [-,thick](y2)edge [-,thick](x2); \node[place] (yk-1) at (7,1) {}edge [-,thick](xk-1)edge [-,thick](y3)edge [-,thick](x3); \node (dots) at (6.5,.5) [label=center:${\vdots}$]{}; \node[place] (r11) at (6,0) {}; \node[place] (r22) at (6,-1) [label=left:$(t\text{,}t+1)$]{}edge [dashed,thick](r11); \node[place] (s11) at (7,0) {}edge [-,thick](r11); \node[place] (s22) at (7,-1)[label=right:$(t\text{,}t)$] {}edge [dashed,thick](r22)edge [-,thick](s11)edge [-,thick](r11);
\node (dots) at (6.5,-2.5) [label=center:{\em(d)}]{};
\end{tikzpicture} \captionof{figure}{The possible cases for basis $\{a,b\}$ in $2$-tree $G$\label{cane2}} \end{center}
\end{pro} \begin{proof} {We prove the statement by induction on $n$, the order of $G$. If $n=3$, then $G=K_3$ and the statement holds.
Let $G$ be a $2$-tree of order $n >3$ with a basis $ B=\{a,b\}$, such that $d(a,b)=1$. Since each $2$-tree of order greater than three has two non-adjacent vertices of degree two, there exists a vertex $x\in V(G)\setminus B$ of degree two. Moreover, $B$ is a basis for $G\setminus\{x\}$.
Now, by the induction hypothesis, $G\setminus\{x\}$ is a $2$-path or a cane and by Theorem~\ref{thm:degree of basis elements} (2), the degrees of $a$ and $b$ are at most three. Therefore, $B=\{a,b\}$ is one of the possible cases shown in Figure~\ref{cane2}. Note that dashed edges could be absent. It can be checked that in cases (b) and (c) the bold vertices get the same metric representation with respect to $B$. Thus, $B$ is one of the cases (a) or (d), where the metric representations of vertices are indicated in Figure~\ref{cane2}.
Considering the metric representations of the vertices in $G$, the vertex $x$ can only be adjacent to the vertices with metric representations $(t, t+1)$ and $(t,t)$ (or, when the dashed edges are absent, $(t-1,t)$ and $(t,t)$), and in case (d) to the vertices with metric representations $(1,0)$ and $(1,1)$ as well. It follows that $G$ is also a $2$-path or a cane. } \end{proof}
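The cane can be examined in the same computational framework. In the sketch below (our own reading of Figure \ref{cane}: the extra vertex of the length-one branch is attached to $v_1$ and $v_3$ of the underlying $2$-path in its interval labelling), the adjacent pair $\{v_1,v_2\}$ plays the role of the basis $\{a,b\}$ of case (a).
\begin{verbatim}
# Assumes k_path, is_resolving and metric_dimension from the earlier sketches.
def cane(n):
    """A 2-path v_1,...,v_n plus one extra vertex adjacent to v_1 and v_3."""
    G = k_path(n, 2)
    x = n + 1
    G[x] = {1, 3}
    G[1].add(x)
    G[3].add(x)
    return G

for n in (5, 6, 7, 10):
    G = cane(n)
    assert is_resolving(G, (1, 2))    # an adjacent pair of vertices resolves the cane
    assert metric_dimension(G) == 2
\end{verbatim}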
The above proposition shows that the converse of Theorem~\ref{k-path} is not true. In what follows, we focus on the case $k=2$ and construct the family $\mathcal{F}$ of all $2$-trees with metric dimension two.
Let $\mathcal {F}$ be the family of $2$-trees, where each member $G$ of ${\cal F}$ consists of a $2$-tree $G_0$ and some branches on it that, when present, satisfy the following conditions. \begin{enumerate} \item $G_0$ is a $2$-path or a $2$-tree that is obtained by identifying two specific edges of two disjoint $2$-paths as shown in Figure \ref{fig:2-path P}. \item On every edge there is at most one branch.
\item $G$ avoids any $(a_i,a_{i+1})$-branch. \item Each branch is either a $2$-path or a cane. \item In each $(a_i,b_{i})$-branch the degree of $a_i$ is two. \item If $G_0$ is as the graph depicted in Figure~\ref{fig:2-path P}($b$), then $G$ avoids any $(a_m, x)$-branch. \item
$G$ contains at most one branch on the edges of the triangle containing $b_ib_{i+1}$ in $G_0$. \item The degree of each $b_i$ in $G$ is at most $7$. \item
$G$ has at most one branch of length greater than one on the edges of the triangle containing $a_ia_{i+1}$ in $G_0$. \item If $G_0$ is of the form of Figure \ref{fig:2-path P}($b$), then the $(b_{m-1},b_m)$-branch and the $(b_m,b_{m+1})$-branch are $2$-paths and at most one of them has length more than one. \item For every $i$, $2\leq i\leq k-1$, at most one of the $(b_{i-1},b_i)$-branches and $(b_{i},b_{i+1})$-branches is a cane. \item All $(a_i,b_i)$-branches, $(a_{i},b_{i+1})$-branches and $(a_{i},b_{i-1})$-branches are $2$-paths. \end{enumerate}
\begin{center} \begin{tikzpicture} [inner sep=0.5mm, place/.style={circle,draw=black,fill=black,thick}] \node[place] (a1) at (-2,1.5) [label=above:$a_1$] {}; \node[place] (a2) at (-1,1.5) [label=above:$a_2$] {}edge [-,thick](a1); \node[place] (a3) at (0,1.5) [label=above:$a_3$] {}edge [-,thick](a2); \node[place] (ak-1) at (1,1.5) [label=above:$a_{k-1}$] {}; \node[place] (ak) at (2,1.5) [label=above:$a_k$] {}edge [-,thick](ak-1); \node[place] (b1) at (-2,.5) [label=below:$b_1$] {}edge [-,thick](a1); \node[place] (b2) at (-1,.5) [label=below:$b_2$] {}edge [-,thick](a2)edge [-,thick](b1)edge [-,thick](a1); \node[place] (b3) at (0,.5) [label=below:$b_3$] {}edge [-,thick](a3)edge [-,thick](b2)edge [-,thick](a2); \node[place] (bk-1) at (1,.5) [label=below:$b_{k-1}$] {}edge [-,thick](ak-1); \node[place] (bk) at (2,.5) [label=below:$b_k$] {}edge [-,thick](ak)edge [-,thick](bk-1)edge [-,thick](ak-1); \node (dots) at (.5,1) [label=center:$\cdots$]{}; \node (a) at (0,-.5) [label=center:$(a)$]{};
\node[place] (a1') at (-4.5,-2) [label=above:$a_1$] {}; \node[place] (a2') at (-3.5,-2) [label=above:$a_2$] {}edge [-,thick](a1'); \node[place] (a3') at (-2.5,-2) [label=above:$a_3$] {}edge [-,thick](a2'); \node[place] (am-2) at (-1.5,-2) [label=above:$a_{m-2}$] {}; \node[place] (am-1) at (-.5,-2) [label=above:$a_{m-1}$] {}edge [-,thick](am-2);
\node[place] (b1') at (-4.5,-3) [label=below:$b_1$] {}edge [-,thick](a1')edge [-,thick](a2'); \node[place] (b2') at (-3.5,-3) [label=below:$b_2$] {}edge [-,thick](a2')edge [-,thick](b1')edge [-,thick](a3'); \node[place] (b3') at (-2.5,-3) [label=below:$b_3$] {}edge [-,thick](a3')edge [-,thick](b2'); \node[place] (bm-2) at (-1.5,-3) [label=below:$b_{m-2}$] {}edge [-,thick](am-1)edge [-,thick](am-2); \node[place] (bm-1) at (-.5,-3) [label=below:$b_{m-1}$] {}edge [-,thick](am-1)edge [-,thick](bm-2); \node (dots'') at (-2,-2.5) [label=center:$\cdots$]{};
\node[place] (am) at (.5,-2) [label=above:$a_m$] {}edge [-,thick](bm-1)edge [-,thick](am-1); \node[place] (am+1) at (1.5,-2) [label=above:$a_{m+1}$] {}edge [-,thick](am); \node[place] (am+2) at (2.5,-2) [label=above:$a_{m+2}$] {}edge [-,thick](am+1); \node[place] (am-1) at (3.5,-2) [label=above:$a_{k-1}$] {}; \node[place] (ak') at (4.5,-2) [label=above:$a_k$] {}edge [-,thick](am-1); \node[place] (bm) at (.5,-3) [label=below:$b_m$] {}edge [-,thick](am)edge [-,thick](bm-1); \node[place] (bm+1) at (1.5,-3) [label=below:$b_{m+1}$] {}edge [-,thick](am+1)edge [-,thick](bm)edge [-,thick](am); \node[place] (bm+2) at (2.5,-3) [label=below:$b_{m+2}$] {}edge [-,thick](am+2)edge [-,thick](bm+1)edge [-,thick](am+1); \node[place] (bm-1) at (3.5,-3) [label=below:$b_{k-1}$] {}edge [-,thick](am-1); \node[place] (bk') at (4.5,-3) [label=below:$b_k$] {}edge [-,thick](ak')edge [-,thick](bm-1)edge [-,thick](am-1); \node (dots') at (3,-2.5) [label=center:$\cdots$]{}; \node (b) at (0,-4) [label=center:$(b)$]{}; \end{tikzpicture} \captionof{figure}{Two different forms of $G_0$.\label{fig:2-path P}} \end{center}
\begin{thm}\label{thm:Raft} If $G\in {\cal F}$, then $\dim_M(G)=2$. \end{thm} \begin{proof}{ Let $G\in {\cal F}$. Throughout the proof, all notations are the same as those used to introduce the family $\cal F$ and $G_0$ in Figure \ref{fig:2-path P}. Since $G$ is not a path, $\dim_M(G)\geq 2$. Let $W=\{a_1,a_k\}$. We show in both possible cases for $G_0$ that $W$ is a resolving set for $G$ and hence, $\dim_M(G)=2$.
{\textbf{Case 1.} } $G_0$ is a $2$-path as shown in Figure \ref{fig:2-path P}(a).\\ The metric representation of the vertices $\{a_1,a_2,\ldots,a_k,b_1,b_2,\ldots,b_k\}$ are as follows. \begin{align*}
&r(a_i|W)=(i-1,k-i),1\leq i\leq k,\\
&r(b_1|W)=(1,k),\\
&r(b_j|W)=(j-1,k-j+1), 2\leq j\leq k. \end{align*}
Thus, different vertices of $G_0$ have different metric representations. Moreover, note that
$$\{d_1-d_2:~(d_1,d_2)=r(a_i|W), ~1\leq i\leq k\}=\{1-k,3-k,5-k,\ldots,2i-k-1,\ldots,k-3,k-1\},$$ and
$$\{d_1-d_2:~(d_1,d_2)=r(b_i|W), ~1\leq i\leq k\}=\{1-k,2-k,4-k,\ldots,2i-k-2,\ldots,k-4,k-2\}.$$ If $G=G_0$, then we are done. Suppose that $G\neq G_0$ and let $H$ be a branch of $G$ on an edge $e$ of $G_0$. Regards to the structures of graphs in ${\cal F}$, we consider the following different possibilities. \begin{itemize}
\item $H$ is a branch on the vertical edge $e=a_ib_i$, $2\leq i\leq k-1$.\\
Note that by the definition of ${\cal F}$, $H$ is a $2$-path and $\deg_H(a_i)=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=a_i$, $x_2=b_i$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. If $j$ is odd, then $d(x_j,a_1)=d(x_j,a_i)+d(a_i,a_1)$ and $d(x_j,a_k)=d(x_j,a_i)+d(a_i,a_k)$. If $j$ is even, then $d(x_j,a_1)=d(x_j,b_i)+d(b_i,a_1)$ and $d(x_j,a_k)=d(x_j,b_i)+d(b_i,a_k)$. Hence, we have \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & ~~j ~\mbox{is odd} \\[.2cm] (i-1+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor) & ~~j ~\mbox{is even}. \end{array} \right. \end{eqnarray*}
Moreover, note that
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2i-k-1,2i-k-2\}.$$
\item $H$ is a branch on the oblique edge $e=a_ib_{i+1}$, $2\leq i\leq k-1$.\\
By the definition of ${\cal F}$, $H$ is a $2$-path and $\deg_H(a_i)=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=a_i$, $x_2=b_{i+1}$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. If $j$ is odd, then $d(x_j,a_1)=d(x_j,a_i)+d(a_i,a_1)$ and $d(x_j,a_k)=d(x_j,a_i)+d(a_i,a_k)$. If $j$ is even, then $d(x_j,a_1)=d(x_j,b_{i+1})+d(b_{i+1},a_1)$ and $d(x_j,a_k)=d(x_j,b_{i+1})+d(b_{i+1},a_k)$. Hence, we have \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is odd} \\[.2cm] (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor-1) & j ~\mbox{is even} . \end{array} \right. \end{eqnarray*}
Moreover, note that
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2i-k-1,2i-k\}.$$
\item $H$ is a branch on the horizontal edge $e=b_ib_{i+1}$, $1\leq i\leq k-1$. \\ Using the definition of ${\cal F}$, $H$ is either a $2$-path or a cane. Generally, assume that $$\{x_1,x_2,\ldots,x_t\}\subseteq V(H)\subseteq \{x_1,x_2,\ldots,x_t\}\cup\{x\},$$
where the induced subgraph of $H$ on $\{x_1,x_2,\ldots,x_t\}$ is a $2$-path with the edge set $\{x_rx_s:~|r-s|\leq 2\}$. We consider two different possibilities. \begin{itemize} \item[a)] $x_1=b_i$, $x_2=b_{i+1}$. Hence, if $H$ is a cane, then we have $N_H(x)=\{b_i,x_3\}$. Similar to the previous cases, we have \begin{align*}
&r(x_1|W)=(i-1,k-i+1),\\
&r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & j\geq3~ \mbox{is odd} \\[.2cm] (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor-1) & j ~\mbox{is even} . \end{array} \right. \end{align*}
Also, if $H$ is a cane, then $r(x|W)=(i-1+1,k-i+2)$.\\
\item[b)] $x_1=b_{i+1}$, $x_2=b_i$. Hence, if $H$ is a cane, then we have $N_H(x)=\{b_{i+1},x_3\}$. Similarly, we have \begin{align*}
&r(x_1|W)=(i-1+1,k-i),\\
&r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is odd} \\[.2cm] (i-1+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is even} . \end{array} \right. \end{align*}
Also, if $H$ is a cane, then $r(x|W)=(i-1+2,k-i+1)$. \\
\end{itemize} Note that in both states (and regardless of being a $2$-path or a cane), we have
$$\{d_1-d_2: ~(d_1,d_2)=r(v|W), ~v\in V(H)\}=\{2i-k-2,2i-k-1,2i-k\}.$$ \end{itemize}
Therefore, in all the above cases, distinct vertices of $H$ have different metric representations. Also, the metric representations of the vertices in $V(H)$ are different from the metric representations of the vertices in $V(G_0)\setminus \{x,y\}$, where $H$ is an $(x,y)$-branch. Moreover, using the difference of the two coordinates in the metric representation of each vertex, it is easy to check that vertices of different (possible) branches on $G_0$ (satisfying the conditions mentioned in the definition of ${\cal F}$) have different metric representations. Thus, in this case $W$ is a resolving set for $G$.
{\textbf{Case 2.}} $G_0$ is a $2$-tree of the form Figure \ref{fig:2-path P}(b).\\ The metric representation of the vertices $\{a_1,a_2,\ldots,a_m,\ldots,a_k\}\cup\{b_1,b_2,\ldots,b_m,\ldots,b_k\}$ are as follows. \begin{align*}
&r(a_i|W)=(i-1,k-i),1\leq i\leq k,\\
&r(b_j|W)= \left\{ \begin{array}{ll} (j,k-j) & 1\leq j\leq m-1 \\ (m,k-m+1) & j=m \\ (j-1,k-j+1) & m+1\leq j\leq k. \end{array} \right. \end{align*}
Therefore, different vertices of $G_0$ have different metric representations. Moreover, note that \begin{eqnarray*}
\{d_1-d_2: ~(d_1,d_2)=r(a_i|W), ~1\leq i\leq k\}=~~~~~~~~~~~~~~~~~~~\\ \{1-k,3-k,5-k,\ldots,2m-k-3,2m-k-1,2m-k+1,\ldots,k-3,k-1\}, \end{eqnarray*} and \begin{eqnarray*}
\{d_1-d_2:~(d_1,d_2)=r(b_j|W), ~1\leq j\leq k\}=~~~~~~~~~~~~~~~~\\ \{2-k,4-k,6-k,\ldots,2m-k-2,2m-k-1,2m-k,\ldots,k-4,k-2\}. \end{eqnarray*} If $G=G_0$, then we are done. Hence, suppose that $G\neq G_0$ and let $H$ be a branch of $G$ on an edge $e$ of $G_0$. Again, using the possible structures of $H$ according to the definition of ${\cal F}$, we consider the following different cases. \begin{itemize}
\item $H$ is a branch on the vertical edge $e=a_ib_i$, $2\leq i\leq m-1$.\\
Note that by the definition of ${\cal F}$, $H$ is a $2$-path and $\deg_H(a_i)=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=a_i$, $x_2=b_i$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. It is straightforward to check that \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor-1) & j ~\mbox{is even}. \end{array} \right. \end{eqnarray*}
Moreover, note that
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2i-k-1,2i-k\}.$$
\item $H$ is a branch on the vertical edge $e=a_ib_i$, $m+1\leq i\leq k-1$.\\
By the definition of ${\cal F}$, $H$ is a $2$-path and $\deg_H(a_i)=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=a_i$, $x_2=b_i$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. We have \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor-2,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is even} . \end{array} \right. \end{eqnarray*}
Moreover, note that
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2i-k-1,2i-k-2\}.$$
\item $H$ is a branch on the oblique edge $e=a_ib_{i-1}$, $2\leq i\leq m-1$.\\
Since $G\in {\cal F}$, $H$ is a $2$-path and $\deg_H(a_i)=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=a_i$, $x_2=b_{i-1}$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. We have \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor-2,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is even} . \end{array} \right. \end{eqnarray*}
Moreover,
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2i-k-1,2i-k-2\}.$$
\item $H$ is a branch on the oblique edge $e=a_ib_{i+1}$, $m+1\leq i\leq k-1$.\\
We know that $H$ is a $2$-path and $\deg_H(a_i)=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=a_i$, $x_2=b_{i+1}$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. Similarly, it can be easily checked that \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (i-1+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor-1) & j ~\mbox{is even} . \end{array} \right. \end{eqnarray*}
Moreover, note that
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2i-k-1,2i-k\}.$$
\item $H$ is a branch on the horizontal edge $e=b_ib_{i+1}$, $1\leq i\leq m-2$. \\ Using the definition of ${\cal F}$, $H$ is either a $2$-path or a cane. Generally, assume that $$\{x_1,x_2,\ldots,x_t\}\subseteq V(H)\subseteq \{x_1,x_2,\ldots,x_t\}\cup\{x\},$$
where the induced subgraph of $H$ on $\{x_1,x_2,\ldots,x_t\}$ is a $2$-path with the edge set $\{x_rx_s:~|r-s|\leq 2\}$. We consider two different possibilities. \begin{itemize} \item[a)] $x_1=b_i$, $x_2=b_{i+1}$. Hence, if $H$ is a cane, then we have $N_H(x)=\{b_i,x_3\}$. Similar to the previous cases, we have \begin{align*}
&r(x_1|W)=(i,k-i),\\
&r(x_j|W)= \left\{ \begin{array}{ll} (i+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor-1) & j\geq 3~\mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor-2) & j ~\mbox{is even} . \end{array} \right. \end{align*}
Also, if $H$ is a cane, then $r(x|W)=(i+1,k-i+1)$.\\
\item[b)] $x_1=b_{i+1}$, $x_2=b_i$. Hence, if $H$ is a cane, then we have $N_H(x)=\{b_{i+1},x_3\}$. Similarly, we have \begin{align*}
&r(x_1|W)=(i+1,k-i-1),\\
&r(x_j|W)= \left\{ \begin{array}{ll} (i+\lfloor {j\over 2}\rfloor,k-i+\lfloor {j\over 2}\rfloor-1) & j\geq 3~\mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor-1) & \mbox{is even}. \end{array} \right. \end{align*}
Also, if $H$ is a cane, then $r(x|W)=(i+2,k-i)$.\\
\end{itemize} Note that in both states (and regardless of being a $2$-path or a cane) we have
$$\{d_1-d_2: ~(d_1,d_2)=r(v|W), v\in V(H)\}=\{2i-k,2i-k+1,2i-k+2\}.$$
\item $H$ is a branch on the horizontal edge $e=b_{m-1}b_m$. \\
By the definition of ${\cal F}$, $H$ is a $2$-path and $\deg_H(b_{m-1})=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=b_{m-1}$, $x_2=b_m$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. We have \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (m+\lfloor {j\over 2}\rfloor-1,k-m+\lfloor {j\over 2}\rfloor+1) & j ~\mbox{is odd} \\[.2cm] (m+\lfloor {j\over 2}\rfloor-1,k-m+\lfloor {j\over 2}\rfloor) & j ~\mbox{is even}. \end{array} \right. \end{eqnarray*}
Moreover, note that
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2m-k-2,2m-k-1\}.$$
\item $H$ is a branch on the horizontal edge $e=b_mb_{m+1}$. \\
By the definition of ${\cal F}$, $H$ is a $2$-path and $\deg_H(b_{m+1})=2$. Let $V(H)=\{x_1,x_2,\ldots,x_t\}$ where $x_1=b_{m+1}$, $x_2=b_m$, and $E(H)=\{x_rx_s:~|r-s|\leq 2\}$. We have \begin{eqnarray*}
r(x_j|W)= \left\{ \begin{array}{ll} (m+\lfloor {j\over 2}\rfloor,k-m+\lfloor {j\over 2}\rfloor) & j ~\mbox{is odd} \\[.2cm] (m+\lfloor {j\over 2}\rfloor-1,k-m+\lfloor {j\over 2}\rfloor) & ~j ~~\mbox{even} . \end{array} \right. \end{eqnarray*}
Moreover, note that
$$\{d_1-d_2: ~(d_1,d_2)=r(x_j|W), ~1\leq j\leq t\}=\{2m-k-1,2m-k\}.$$
\item $H$ is a branch on the horizontal edge $e=b_ib_{i+1}$, $m+1\leq i\leq k-1$. \\ Using the definition of ${\cal F}$, $H$ is either a $2$-path or a cane. Generally, assume that $$\{x_1,x_2,\ldots,x_t\}\subseteq V(H)\subseteq \{x_1,x_2,\ldots,x_t\}\cup\{x\},$$
where the induced subgraph of $H$ on $\{x_1,x_2,\ldots,x_t\}$ is a $2$-path with the edge set $\{x_rx_s:~|r-s|\leq 2\}$. Again, we consider two different possibilities. \begin{itemize} \item[a)] $x_1=b_i$, $x_2=b_{i+1}$. Hence, if $H$ is a cane and $N_H(x)=\{b_i,x_3\}$, then we have \begin{align*}
&r(x_1|W)=(i-1,k-i+1),\\
&r(x_j|W)= \left\{ \begin{array}{ll} (i+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor) & j\geq 3~ \mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor-1) & j ~\mbox{is even} . \end{array} \right. \end{align*}
Also, if $H$ is a cane, then $r(x|W)=(i,k-i+2)$. \\
\item[b)] $x_1=b_{i+1}$, $x_2=b_i$. Hence, if $H$ is a cane, then we have $N_H(x)=\{b_{i+1},x_3\}$. Similarly, we have \begin{align*}
&r(x_1|W)=(i,k-i),\\
&r(x_j|W)= \left\{ \begin{array}{ll} (i+\lfloor {j\over 2}\rfloor-1,k-i+\lfloor {j\over 2}\rfloor) & j\geq3~ \mbox{is odd} \\[.2cm] (i+\lfloor {j\over 2}\rfloor-2,k-i+\lfloor {j\over 2}\rfloor) & j ~\mbox{is even} . \end{array} \right. \end{align*}
Also, if $H$ is a cane, then $r(x|W)=(i+1,k-i+1)$.\\
\end{itemize} Note that in both states (and regardless of being a $2$-path or a cane) we have
$$\{d_1-d_2:~(d_1,d_2)=r(v|W), v\in V(H)\}=\{2i-k-2,2i-k-1,2i-k\}.$$
\end{itemize} Therefore, in all of the above cases, distinct vertices of $H$ have different metric representations. Also, the metric representations of the vertices in $V(H)$ are different from the metric representations of the vertices in $V(G_0)\setminus \{x,y\}$, where $H$ is an $(x,y)$-branch. Moreover, using the difference of the two coordinates in the metric representation of each vertex, it is easy to check that vertices of different (possible) branches on $G_0$ (satisfying the conditions mentioned in the definition of ${\cal F}$) have different metric representations. Thus, in this case $W$ is a resolving set for $G$. }\end{proof} To prove the converse of Theorem \ref{thm:Raft}, we need the following lemma. \begin{lemma}\label{lem:} Let $H$ be a $\{u,v\}$-branch of $G$ and let $\{a,b\}$ be a basis for $G\cup H$. If $\{a,b\}\cap V(H)\subseteq \{u,v\}$, then $\{u,v\}$ is a metric basis for $H$. \end{lemma} \begin{proof}{
Suppose, on the contrary, that there are two distinct vertices $x$ and $y$ in $H$ such that
$$d(x,u)=d(y,u)=r,~~d(x,v)=d(y,v)=s.$$
Since $H$ is a branch on $\{u,v\}$, each path connecting a vertex in $H$ with a vertex in $V(G)\setminus V(H)$ passes through $u$ or $v$. Assume that
$$d(u,a)=r_1,~~d(v,a)=s_1,~~d(u,b)=r_2,~~d(v,b)=s_2.$$
Hence,
$$d(x,a)=\min\{r+r_1, s+s_1\}=d(y,a),~~d(x,b)=\min\{r+r_2,s+s_2\}=d(y,b).$$
This contradicts the assumption that $\{a,b\}$ is a resolving set for $G\cup H$.
}\end{proof}
Now, we prove that every $2$-dimensional $2$-tree belongs to the family $\cal F$.
\begin{thm} If $G$ is a $2$-tree of metric dimension two, then $G\in {\cal F}$. \end{thm} \begin{proof}{ Let $G$ be a $2$-tree and $\{a,b\}$ be a basis of $G$. If $d(a,b)=1$, then by Proposition~\ref{lem:d=1}, $G$ is a $2$-path or a cane which belongs to ${\cal F}$. Thus, assume that $d(a,b) >1$ and let $H$ be a minimal induced $2$-connected subgraph of $G$ as shown in Figure \ref{myfig}, containing $a$ and $b$. Since the clique number of $G$ is three, in each square exactly one of the dashed edges is allowed. Moreover, by the minimality of $H$ we have $\deg_H(a)=\deg_H(b)=2$, where $a\in \{a_1, b_1\}$ and $b\in \{a_k,b_k\}$. Hence, one of the two vertices $a_1, b_1$ or one of the two vertices $a_k, b_k$ may not exist. One can check that $\{a,b\}\neq \{a_1, b_k\}$ and $\{a,b\}\neq \{b_1, a_k\}$, since otherwise two neighbours of $a$ or $b$ get the same metric representation. Thus, by symmetry, we may assume $\{a,b\}=\{a_1, a_k\}$.
\begin{center} \begin{tikzpicture} [inner sep=0.5mm, place/.style={circle,draw=black,fill=black,thick}] \node[place] (a1) at (-2,1) [label=above left:$a_1$] {}; \node[place] (a2) at (-1,1)
{}edge [-,thick](a1); \node[place] (a3) at (0,1)
{}edge [-,thick](a2); \node[place] (a4) at (1,1)
{}; \node[place] (ak-1) at (2,1)
{}edge [-,thick](a4); \node[place] (ak) at (3,1) [label=above right:$a_k$] {}edge [-,thick](ak-1); \node[place] (b1) at (-2,0) [label=below left:$b_1$] {}edge [-,thick](a1)edge [-,thick,dashed](a2); \node[place] (b2) at (-1,0) {}edge [-,thick](a2)edge [-,thick](b1)edge [-,thick,dashed](a1)edge [-,thick,dashed](a3); \node[place] (b3) at (0,0) {}edge [-,thick](a3)edge [-,thick](b2)edge [-,thick,dashed](a2); \node[place] (b4) at (1,0)
{}edge [-,thick,dashed](ak-1)edge [-,thick](a4); \node[place] (bk-1) at (2,0)
{}edge [-,thick](ak-1)edge [-,thick,dashed](ak)edge [-,thick,dashed](a4)edge [-,thick](b4); \node[place] (bk) at (3,0) [label=below right:$b_k$] {}edge [-,thick](ak)edge [-,thick](bk-1)edge [-,thick,dashed](ak-1); \node (dots) at (.5,.5) [label=center:$\cdots$]{}; \end{tikzpicture} \captionof{figure}{A minimal induced $2$-connected subgraph of $G$. \label{myfig}} \end{center}
If $\Delta(H)\leq4$, then $H$ is a $2$-path as shown in Figure~\ref{fig:2-path P}(a). Otherwise $\Delta(H)=5$. If there exists a vertex $b_j$ of degree $5$, then it can be easily checked that $b_j$ and $a_j$ have the same representation with respect to $\{a_1, a_k\}$. Also, existence of two vertices $a_i$ and $a_{i'}$ both of degree $5$, $i\leq i'$, implies that there exists some vertex $b_j$, $i\leq j\leq i'$, of degree $5$, which is impossible. Thus, there exists a unique $a_i$ of degree $5$. Therefore, $H$ is the graph shown in Figure \ref{fig:2-path P}(b). Thus, $H$ is a $2$-path or a $2$-tree obtained by identifying the specific edge, say $a_mb_m$, of two $2$-paths (see Figure~\ref{fig:2-path P}(b)), where $B=\{a_1,a_k\}$. Thus, $G$ satisfies property (1).
Clearly, on every edge there is at most one branch; thus, property (2) follows. Also, $G$ avoids any $(a_i,a_{i+1})$-branch, because each vertex adjacent to both $a_i$ and $a_{i+1}$ has the same metric representation as $b_{i}$ or $b_{i+1}$. Thus, $G$ contains only $(a_i,b_i)$-branches, $(a_i,b_{i+1})$-branches, $(a_{i+1},b_{i})$-branches or $(b_i,b_{i+1})$-branches, which implies property (3). Moreover, by Proposition \ref{lem:d=1} and Lemma \ref{lem:}, each of these branches is a $2$-path or a cane. Therefore, property (4) holds. Also, by Theorem \ref{thm:degree of basis elements}, for every $i$, $1\leq i\leq k$, there is at most one $(a_i,x)$-branch in $G$. Moreover, in each $(a_i,b_i)$-branch the degree of $a_i$ is two, which establishes property (5).
To see property (6), first note that by property (3) there is no $(a_{m-1}, a_m)$-branch or $(a_m, a_{m+1})$-branch. Moreover, in each $(a_m, x)$-branch, for $x\in\{b_{m-1}, b_m, b_{m+1}\}$, the unique neighbour of $a_m$ on the branch has the same metric representation as $b_m$.
To show that $G$ has property (7), suppose that a triangle $a_ib_ib_{i+1}$ has more than one branch. By Theorem \ref{thm:degree of basis elements}, at most one of $(a_i,b_i)$-branch and $(a_i,b_{i+1})$-branch exists. Therefore, $b_ib_{i+1}$ has a branch $H_1$ and one of the edges $a_ib_i$ or $a_ib_{i+1}$ has another branch $H_2$. Let $x$ and $y$ be the vertices of distance one from $G_0$ on branches $H_1$ and $H_2$, respectively. Hence, $d(a_1,x)=d(a_1,y)=i$ and $d(a_k,x)=d(a_k,y)=k-i+1$. That is, $\{a_1,a_k\}$ is not a basis of $G$, which is a contradiction. A similar reason works for triangle $a_ib_{i-1}b_i$. Hence, $G$ has property (7).
Let $(d_1,d_2)$ be the metric representation of $b_i$. Then the metric representation of each neighbour of $b_i$ outside $G_0$ must be one of $(d_1+1,d_2+1),~(d_1+1,d_2)$ or $(d_1,d_2+1)$. Thus, $b_i$ has at most three neighbours outside $G_0$. Hence, the degree of $b_i$ in $G$ is at most $7$, which is property (8).
If there are two branches of length at least $2$ on a triangle containing $a_ia_{i+1}$, then the metric representation of the second vertices on these branches are the same, a contradiction. Thus, $G$ satisfies property (9).
If $H$ is a $(b_{m-1},b_m)$-branch of cane type, then one can find two vertices in $N_G(b_m)\cup N_G(b_{m-1})$ with the same metric representation. A similar argument holds whenever $H$ is a $(b_{m},b_{m+1})$-branch of cane type. If there is a $(b_{m-1},b_{m})$-branch, say $H_1$, and a $(b_{m},b_{m+1})$-branch, say $H_2$, both of length at least two, then $b_m$ has a neighbour in $H_1$ with the same metric representation as a neighbour of $b_m$ in $H_2$. Hence, property (10) holds.
Suppose that two branches on $(b_{i-1},b_i)$ and $(b_{i},b_{i+1})$ are canes. In this case, it can be checked that in the set of neighbours of $b_i$ in these branches there are two vertices with the same metric representation. Thus, $G$ satisfies property (11).
By Theorem~\ref{thm:degree of basis elements}, the degree of each $a_i$ in $G$, $1<i<n$, is at most five. Note that $\deg(a_i)\in\{4,5\}$. Now suppose that $H$ is a branch on the edge $\{a_i,b_i\}$, $\{a_i,b_{i+1}\}$ or $\{a_i,b_{i-1}\}$. If $H$ is a cane, then $\deg_G(a_i)\geq 6$ or two neighbours of $b_{i-1}$, $b_{i}$ or $b_{i+1}$ in $H$ get the same metric representation, both of which are contradictions. Thus, each branch on the edge $\{a_i,b_{i-1}\}$, $\{a_i,b_i\}$ or $\{a_i,b_{i+1}\}$ is a $2$-path and $G$ satisfies property (12). }\end{proof}
\end{document}
Ihara's lemma
In mathematics, Ihara's lemma, introduced by Ihara (1975, lemma 3.2) and named by Ribet (1984), states that the kernel of the sum of the two p-degeneracy maps from J0(N)×J0(N) to J0(Np) is Eisenstein whenever the prime p does not divide N. Here J0(N) is the Jacobian of the compactification of the modular curve of Γ0(N).
References
• Ihara, Yasutaka (1975), "On modular curves over finite fields", in Baily, Walter L. (ed.), Discrete subgroups of Lie groups and applications to moduli (Internat. Colloq., Bombay, 1973), Tata Institute of Fundamental Research Studies in Mathematics, vol. 7, Oxford University Press, pp. 161–202, ISBN 978-0-19-560525-9, MR 0399105
• Ribet, Kenneth A. (1984), "Congruence relations between modular forms", Proceedings of the International Congress of Mathematicians, Vol. 1 (Warsaw, 1983), Warszawa: PWN, pp. 503–514, MR 0804706, archived from the original on 2014-01-10, retrieved 2012-11-09
Physical-statistical modeling of the NE ice stream
Components of the process model
Bayesian calculations
Bayesian-analysis results for the northeast ice stream, Greenland
Modern studies of the behaviors of glaciers, ice sheets, and ice streams rely heavily on both observations and physically based models. Data acquired via remote sensing provide critical information on geometry and movement of ice over large sections of Antarctica and Greenland. Though these datasets are significant advances in terms of spatial coverage and the variety of processes we can observe, the physical systems to be modeled are nevertheless imperfectly observed. Uncertainties associated with measurement errors are present, and physical models are also subject to uncertainties. Hence, there is a need for combining observations and models in a fashion that incorporates uncertainty and quantifies its impact on conclusions.
The goal of combining models and observations is hardly new in glaciology, or in the broad areas of the geosciences (e.g., data assimilation as practiced in numerical weather forecasting). We focus on the development of statistical models with strong reliance on physical modeling, a strategy Berliner (2003) called physical-statistical modeling, and then use Bayes' Theorem to make inference on all unknowns given the data. This is different from traditional physical modeling, perhaps with data-based parameter estimates, and traditional statistical modeling, perhaps relying on vague, qualitative physical reasoning.
In the paragraphs that follow, we develop statistically enhanced versions of a simple physical model of driving stress and a familiar model for velocity based on stress. This presentation is based on the preprint, Berliner, Jezek, Cressie, Kim, Lam, and van der Veen (2005).
Glaciological motivations
Since glaciers flow under the force of gravity, important factors in determining velocities include quantities such as the ice thickness acting in combination with forces acting along the sides and at the base of the glacier and under the constraints of the constitutive relationship. In particular, driving stress is associated with gravitational force acting on the ice. Hence, spatial variations in the stress arise from longitudinal gradients in ice-surface elevation and ice thickness. Based on existing theory (e.g., Paterson 1994), we consider equating driving stress to stresses acting on the sides and base of the glacier. A simple approximation equates driving stress along the flow to basal shear stress as follows:
\begin{equation} \tau_{dx} \approx \tau_{bx} = - \rho g H \frac{ds}{dx}, \end{equation} where $s$ is ice-surface elevation, $H$ is the ice thickness, $\rho$ is the density of ice, and $g$ is the gravity constant.
Under these assumptions, it is reasonably straightforward to estimate directly driving stress based on observations of $s$ and $H$. However, even though estimation may be relatively straightforward, assessment of uncertainties in such estimates can be difficult. Furthermore, a concern in estimating driving stress from geometry is that the reliance on the slope of the upper ice surface in (1) implies that results are very sensitive to small-scale variations in surface topography, and to small-scale, perhaps unimportant variations in ice thickness. From (1), there is no theoretical requirement that driving stress be spatially averaged, however it is usually calculated over horizontal distances of a few ice thicknesses or so to eliminate small-scale flow features not important to the large-scale flow (e.g., Kamb and Echelmeyer 1986). Indeed, if averaging is not done, the driving stress estimates exhibit unreasonably large variations.
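To see how sensitive such estimates are to small-scale topography, consider the minimal numerical sketch below. It is purely illustrative: the transect is synthetic (it is not the PARCA data described next), and the averaging window of roughly a few ice thicknesses is an arbitrary choice.
\begin{verbatim}
import numpy as np

rho, g = 911.0, 9.81              # ice density (kg m^-3) and gravity (m s^-2)

def driving_stress(x, s, H, window=1):
    """tau = -rho * g * H * ds/dx, optionally smoothed by a running mean."""
    tau = -rho * g * H * np.gradient(s, x)
    if window > 1:
        tau = np.convolve(tau, np.ones(window) / window, mode="same")
    return tau

# Synthetic transect: a gentle slope plus 5 km surface undulations over a flat bed.
x = np.arange(0.0, 100e3, 500.0)                               # 500 m spacing
s = 2000.0 - 0.002 * x + 5.0 * np.sin(2.0 * np.pi * x / 5e3)   # surface elevation (m)
H = s - np.zeros_like(x)                                       # thickness over a flat bed

raw = driving_stress(x, s, H)           # point-wise estimate: dominated by the bumps
smooth = driving_stress(x, s, H, 21)    # ~10 km running mean
print(raw.std(), smooth.std())          # the averaged estimate varies far less
\end{verbatim}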
We assembled surface topography and ice thickness observations for a portion of the Northeast Ice Stream in Greenland; see Figure 1. The data were gathered as part of the Program for Arctic Climate Regional Assessments (PARCA). Surface topography and ice thickness were sampled every few hundred meters using equipment mounted on the Wallops Flight Facility P-3 aircraft. Surface velocity data were calculated by Ian Joughin and provided as part of the PARCA dataset. The three derived datasets are: ${\bf S}$ (Figure 2), surface topography; ${\bf B}$ (Figure 2), basal topography; and ${\bf U}$ (Figure 3), surface velocities.
The primary output of a Bayesian analysis is a posterior distribution, namely, the joint probability distribution for unknown quantities conditional on the observed data. Even in our simple illustration for the Northeast Ice Stream in Greenland, we have on the order of 8,000 unknowns, so explicit presentation of their joint distribution is not feasible. Hence, a key aspect of Bayesian analysis in such high-dimensional settings is the ability to generate realizations (or ensembles) from the posterior distribution; the posterior is then studied through statistical summaries of such ensembles. A separate webpage can be viewed for an introduction to Bayesian Statistics:
Tutorial on Bayesian Statistics for Geophysicists
Figure 1. NE Ice Stream Showing PARCA Flight Line.
Figure 2. Surface and Basal Elevation.
Figure 3. Surface Velocities.
Physical-statistical modeling of the NE ice stream
Recall that our three datasets are: ${\bf S}$, surface observations; ${\bf B}$, basal observations; and ${\bf U}$, velocity data. The corresponding processes of interest are true surface topography $s(x)$, true basal topography $b(x)$, and true velocities $u(x)$, where $x$ indexes a transect down the middle of the ice stream. There are no observations on the stresses acting on the ice, though, as we shall see, physical relations allow us to make inference on modeled stresses.
We incorporate three physically based models. First, following the discussion leading to (1), we consider the stress,
\begin{equation} \tau = \rho g H \frac{ds}{dx}, \end{equation} where the ice thickness is $H=s-b$, $\rho$ is the density of ice, and $g$ is the gravity constant. The negative sign present in (1) is omitted here because we model $\tau$ and velocity in the negative-$x$ direction. In all computations, we set $\rho = 911 \, kg/m^3$ and $g = 9.81\, m/s^2$ .
Second, under a laminar-flow assumption and treating the flow parameter $A$ as a constant, the surface velocity $u$ is given by,
\begin{equation} u=u_b + \frac{2A}{n+1} \, H \, \tau^n, \end{equation} where $u_b$ is the sliding velocity and $n$ is a flow parameter (e.g., Paterson 1994, p. 251, eq. 21).
Finally, as suggested by the analysis given in Paterson (1994, p. 243, eq. 8), we consider the following basic model for the surface:
\begin{equation} s= k \, (L^{1+n^{-1}} - (L-x)^{1+n^{-1}})^{0.50 n/(n+1)}\,. \end{equation}
Bayesian hierarchical modeling
We see from Tutorial on Bayesian Statistics for Geophysicists that our main tasks are the development of the following probability distributions: \begin{eqnarray*} \mbox{ Data Model: } & [{\bf B},{\bf S},{\bf U}\mid b,s,u,\mbox{$\boldsymbol \theta$}] & \\ \mbox{ Process Model: } & [b,s,u \mid \mbox{$\boldsymbol \theta$}] & \\ \mbox{ Parameter Model: } & [\mbox{$\boldsymbol \theta$}] & \end{eqnarray*}
where $\mbox{$\boldsymbol \theta$}$ denotes the collection of all model parameters. The specifications of these probability distributions are described in detail in Berliner et al. (2005). Our goal is to obtain the posterior distribution $[b,s,u,\mbox{$\boldsymbol \theta$}\vert{\bf B},{\bf S},{\bf U}]$, which then can be used to obtain the posterior distribution of stresses, $[\tau\vert{\bf B},{\bf S} ,{\bf U}]$.
Our main assumption regarding the data model is that it takes the form \begin{equation} [{\bf B},{\bf S},{\bf U}\mid b,s,u,\mbox{$\boldsymbol \theta$}] = [{\bf B}\mid b,\mbox{$\boldsymbol \theta$}_B][{\bf S}\mid s,\mbox{$\boldsymbol \theta$}_S] [{\bf U}\mid u,\mbox{$\boldsymbol \theta$}_U]\,, \end{equation} where notation such as $\mbox{$\boldsymbol \theta$}_B$ indicates those parameters (subsets of $\mbox{$\boldsymbol \theta$}$) explicitly appearing in the indicated models. A possible objection to (5) is that, because the basal data ${\bf B}$ is actually computed as the difference of surface and thickness observations, the assumed conditional independence may not hold. We checked our posterior results for indications of departure from this assumption and found none that would affect our results. See Berliner et al. (2005) for more details.
Our process model begins with a probabilistic equality (i.e., this is not an assumption, but a fact): \begin{equation} [b,s,u \mid \mbox{$\boldsymbol \theta$}] = [u \mid b,s,\mbox{$\boldsymbol \theta$}] [b, s \mid \mbox{$\boldsymbol \theta$}]. \end{equation} Then assuming that the base $b$ and the surface $s$ are independent conditional on the model parameters, we obtain: \begin{equation} [b,s,u \mid \mbox{$\boldsymbol \theta$}] = [u \mid b,s,\mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u] [b \mid \mbox{$\boldsymbol \theta$}_b] [s \mid \mbox{$\boldsymbol \theta$}_s]\,, \end{equation} where again notation such as $\mbox{$\boldsymbol \theta$}_b$ is used to indicate appropriate subsets of $\mbox{$\boldsymbol \theta$}$. It is critical to note here that we are not assuming that the base and surface are independent. Our modelling of both the base and surface is conditional upon smooth processes included in definitions of $\mbox{$\boldsymbol \theta$}_b$ and $\mbox{$\boldsymbol \theta$}_s$. Our assumption then is that the small-scale departures from those large-scale processes are independent.
Finally, we turn to the conditional model for $u$ in (7). We assume that the velocity profile depends on the base and surface only through their respective smoothed versions $\mbox{$\boldsymbol \theta$}_b$ and $\mbox{$\boldsymbol \theta$}_s$; that is, \begin{equation} [u \mid b,s,\mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u] =[u \mid \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u]. \end{equation} Once we get this far in the analysis, we assume that the relationship is deterministic (this assumption could be relaxed). That is, the probability distribution on the right-hand side of (8) is degenerate and is based on (3).
The parameter model (i.e., specification of prior distributions) is given in Berliner et al. (2005).
Bed model
We choose to use wavelets to model the true basal topography (i) because of their flexibility in representing highly variable processes, and (ii) because we can easily control the smoothness of the fitted wavelets. Wavelets do best for equally spaced data where the number of data points is an integer power of $2$. Hence, we partition the domain of the data into $2^{11} = 2048$ bins of equal length ($189.5 \, m$). Let $\bar{{\bf b}}$ denote the 2048-dimensional vector constructed by averaging $b$ within each bin. Note that $\bar{{\bf b}}$ is not observed; define the associated basal data vector $\bar{{\bf B}}$ of length 2048 with $i^{\mbox{\small th}}$ element given by the simple arithmetic average of those basal observations lying in bin $i$. The data model we propose for $[{\bf B}\vert b,\mbox{$\boldsymbol \theta$}_b]$ in (5) implies that the elements of $\bar{{\bf B}}$ are conditionally independent, each being normally distributed with mean equal to the corresponding element of $\bar{{\bf b}}$ and variance determined by the measurement-error variability of an individual observation, denoted by $\sigma_{B}^2$, and the number of observations in the corresponding bin (see Berliner et al., 2005).
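As an aside, here is a minimal sketch (illustrative, not the authors' code) of that binning step, assuming the basal observations arrive as irregularly spaced $(x, B)$ pairs; the bin means feed the data model, and the per-bin observation counts scale the measurement-error variance $\sigma_B^2$.

```python
import numpy as np

def bin_average(x_obs, B_obs, x_min, x_max, n_bins=2048):
    """Average irregularly spaced basal observations into equal-length bins.
    Returns bin means B_bar (NaN for empty bins) and the count per bin,
    which scales the measurement-error variance sigma_B^2 / count."""
    edges = np.linspace(x_min, x_max, n_bins + 1)
    idx = np.clip(np.digitize(x_obs, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    sums = np.bincount(idx, weights=B_obs, minlength=n_bins)
    B_bar = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return B_bar, counts
```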
After converting to a discrete wavelet form and fixing the resolution, we obtain a linear model for $[b\vert\mbox{$\boldsymbol \theta$}_b]$ in (7); that is,
\begin{equation} \bar{{\bf b}} \mid \mbox{$\boldsymbol \theta$}_b \sim N({\bf W}{\bf C}, \sigma^2 \mbox{$\boldsymbol \Sigma$}(\phi_{1},\phi_{2})), \end{equation}
where ${\bf W}$ is the $2048 \times k$ matrix of discretized wavelet basis functions, ${\bf C}$ is the $k \times 1$ vector of wavelet coefficients, $k$ is determined by the chosen resolution, $\mbox{$\boldsymbol \Sigma$}(\phi_{1},\phi_{2})$ is the correlation matrix of an autoregressive process of order two (AR(2)) with variance $\sigma^2$, and $\mbox{$\boldsymbol \theta$}_b = ({\bf C},\sigma^2,\phi_1,\phi_2)$. The selection of an AR(2) error model to account for spatial dependence among these model errors (i.e., local variations in basal topography) was based on preliminary data analysis and practicality; our Bayesian computations require repeated inversion of a $2048 \times 2048$ matrix involving the inverse of $\mbox{$\boldsymbol \Sigma$}(\phi_{1},\phi_{2})$, which is simple for an AR(2) process.
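For reference, the AR(2) correlation matrix can be built from the Yule-Walker recursion for the autocorrelations; the short sketch below is illustrative (the parameter values are placeholders), and in practice one exploits the banded structure of the AR(2) precision rather than inverting a dense $2048 \times 2048$ matrix.

```python
import numpy as np

def ar2_correlation(phi1, phi2, n):
    """Correlation matrix of a stationary AR(2) process with coefficients
    (phi1, phi2), via the Yule-Walker recursion:
        rho(0) = 1,  rho(1) = phi1 / (1 - phi2),
        rho(h) = phi1 * rho(h-1) + phi2 * rho(h-2)  for h >= 2."""
    rho = np.empty(n)
    rho[0] = 1.0
    rho[1] = phi1 / (1.0 - phi2)
    for h in range(2, n):
        rho[h] = phi1 * rho[h - 1] + phi2 * rho[h - 2]
    i = np.arange(n)
    return rho[np.abs(i[:, None] - i[None, :])]

Sigma = ar2_correlation(phi1=0.5, phi2=0.3, n=2048)  # placeholder coefficients
```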
Different choices of $k$ lead to different resolutions of the mean basal elevation; we performed analyses for four resolutions, $r=1,\ldots,4$, corresponding to $k=8, 16, 32$, and $64$ coefficients in (9); see Berliner et al. (2005) for more details.
Surface model
Our modelling strategy for $[s\vert\mbox{$\boldsymbol \theta$}_s]$ in (7) separates the large-scale and small-scale behaviors of the surface. We suppose
\begin{equation} s(x) = s_p(x) + {\cal S}(x), \end{equation}
where the large-scale surface is given by a parameterized function $s_p$, assumed known up to a low-dimensional set of parameters, and $\cal S$ is a zero-mean spatial stochastic process, described in Berliner et al. (2005). To model $s_p(\cdot )$, we rely on the physical model (4); that is, we assume that
\begin{equation} s_p(x) = \mu + K \, (L^{1+n^{-1}} - (L-x)^{1+n^{-1}})^{0.50 n/(n+1)}, \end{equation}
where $\mu, K$, and $L$ are treated as the unknown parameters. In the analysis here, we set $n=3$, though we could model $n$ as an unknown as well.
We use only the large-scale surface (11) to compute ice thickness, the derivative, and hence the stress in (2). Enhancements that incorporate $\cal S$ will be explored elsewhere. Nevertheless, the presence of $\cal S$ is important when determining the data model in (5). Under the modelling strategy that uses (11) to obtain the stress, we need $[{\bf S}\vert s_p,\mbox{$\boldsymbol \theta$}_s]$, which is given in Berliner et al. (2005).
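To make this concrete, here is a small sketch (not the authors' code) of evaluating the parameterized surface (11), its analytic derivative, and the resulting smoothed stress from (2); the parameter values in the call are illustrative, chosen near the posterior estimates reported later in the text.

```python
import numpy as np

RHO_ICE, G, N_GLEN = 911.0, 9.81, 3.0   # density (kg/m^3), gravity (m/s^2), flow exponent

def surface_and_stress(x, b_smooth, mu, K, L, n=N_GLEN):
    """Parameterized surface s_p(x) = mu + K*(L**m - (L-x)**m)**p with
    m = 1 + 1/n and p = 0.5*n/(n+1), its analytic derivative, and the
    smoothed stress tau = rho*g*H*ds_p/dx with thickness H = s_p - b_smooth."""
    m = 1.0 + 1.0 / n
    p = 0.5 * n / (n + 1.0)
    g_x = L**m - (L - x)**m
    s_p = mu + K * g_x**p
    ds_dx = K * p * g_x**(p - 1.0) * m * (L - x)**(m - 1.0)
    H = s_p - b_smooth
    return s_p, ds_dx, RHO_ICE * G * H * ds_dx

# Illustrative call on a 2048-point grid with a flat smoothed base at 0 m.
x = np.linspace(1.0, 380e3, 2048)
s_p, ds_dx, tau = surface_and_stress(x, np.zeros_like(x), mu=-450.0, K=4.75, L=4.449e5)
```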
Velocity model
Our data model $[{\bf U}\vert u,\mbox{$\boldsymbol \theta$}_U]$ is again a basic measurement-error model; that is, we assume that, conditional on the true velocities, the elements of the data vector ${\bf U}$ are independent and Gaussian, each with mean equal to the true velocity at the corresponding location and common variance $\sigma_U^2$ (see Berliner et al., 2005).
Turning to the process model, recall from (8) that we want $[u \mid \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u]$ based on (2) and (3). We consider (i) the corresponding smoothed versions of ice thickness, defined as
\begin{equation} {\bf H}= {\bf s}_p - {\bf W}{\bf C}\,, \end{equation}
where ${\bf s}_p$ is the $2048$-dimensional vector of parameterized surface elevations; and (ii) the corresponding smoothed values of stress, defined as
\begin{equation} \mbox{$\boldsymbol \tau$}= \rho g ({\bf H}\cdot \frac{d {\bf s}_p}{dx}), \end{equation}
where the right-hand side denotes an elementwise product: each coordinate of the $2048$-dimensional vector $\mbox{$\boldsymbol \tau$}$ is the product of the corresponding coordinates of the smoothed thickness and the derivative of the parameterized surface.
From (3), we should model ${\bf u}$, the vector of true velocities at the observation locations, as a linear function of the corresponding coordinates of ${\bf H}$ times the $n^{th}$ powers of coordinates of $\mbox{$\boldsymbol \tau$}$. But, in preliminary data analyses, we noted that at least two models (one for small $x$ and another for large $x$) are needed. Let $x=c$ be an unknown change point, and consider different linear functions above and below the change point. Finally, the model for the velocity data vector ${\bf U}$ is
\begin{equation} {\bf U}= \left( \begin{array}{c} u_{b,1} \, {\bf 1}_{1} \\u_{b,2} \, {\bf 1}_{2} \end{array} \right) + \left( \begin{array}{c} 0.50 A_1 \, ({\bf H}\, \cdot \mbox{$\boldsymbol \tau$}^{n})_{1} \\ 0.50 A_2 \, ({\bf H}\, \cdot \mbox{$\boldsymbol \tau$}^{n})_{2} \end{array} \right) + {\bf e}_U, \end{equation}
where the subscripts 1 and 2 indicate the varying dimensions of the vectors ${\bf 1}$ (a vector with all elements equal to 1) and ${\bf H}\, \cdot \mbox{$\boldsymbol \tau$}^{n}$, depending on the value of the change point $c$, and ${\bf e}_U$ are errors primarily representing measurement error associated with the velocity data. See Berliner et al. (2005) for more details.
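The following sketch (illustrative, not the authors' code) evaluates the deterministic part of (14): a piecewise relation with different sliding velocity and flow parameter on either side of the change point $c$, with $n = 3$ so that $2A/(n+1) = 0.5A$; the numerical values of $u_b$ and $A$ are placeholders and carry no physical units here.

```python
import numpy as np

def mean_velocity(x, H, tau, c, u_b=(20.0, 10.0), A=(1.0e-16, 5.0e-17), n=3):
    """Deterministic part of (14): below the change point c use (u_b[0], A[0]),
    above it (u_b[1], A[1]); the mean velocity is u_b + 0.5*A*H*tau**n."""
    left = x < c
    u = np.empty_like(x, dtype=float)
    u[left] = u_b[0] + 0.5 * A[0] * H[left] * tau[left]**n
    u[~left] = u_b[1] + 0.5 * A[1] * H[~left] * tau[~left]**n
    return u
```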
Though we can write down Bayes' Theorem for the posterior distribution of all unknowns conditional on the observations, the result is typically not computable in closed form. We use a Monte Carlo approach that produces an ensemble of realizations from the target posterior distribution. The method relies on the emerging technology of Markov Chain Monte Carlo (MCMC). The idea of MCMC is to simulate a Markov chain that has been carefully designed so that its stationary distribution coincides with the target posterior distribution. It follows that, after a burn-in or transience period, the generated realizations of the chain comprise a simulated sample from the posterior. Data analysis (often known as "output analysis") is performed on this sample to produce the desired inferences. In our case, direct use of MCMC is quite challenging, primarily due to the nonlinearities present in (2) and (3). Hence, we combine MCMC with the technique of Importance Sampling Monte Carlo (ISMC). The basic idea of ISMC applies when direct simulation from a target distribution is difficult or inefficient: one generates an ensemble from another, more manageable distribution, and the theory of ISMC provides formulas for weights that are used to reweight the ensemble, permitting inferences relative to the original target. General introductions to both MCMC and ISMC can be found in Robert and Casella (1999). An illustration of these technologies in a geophysical problem is given in Berliner, Milliff, and Wikle (2003).
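A minimal sketch of the ISMC reweighting step, assuming the ensemble of simulated velocity profiles has already been generated and the target differs from the proposal only through the Gaussian velocity likelihood; the function names and array shapes are illustrative.

```python
import numpy as np

def importance_weights(U_obs, U_sim, sigma_U):
    """Normalized importance weights for an ensemble of simulated velocity
    profiles U_sim (n_members x n_obs), given velocity data U_obs and
    measurement standard deviation sigma_U. Up to a constant, each member's
    weight is its Gaussian likelihood for the velocity data."""
    resid = U_sim - U_obs[None, :]
    loglik = -0.5 * np.sum(resid**2, axis=1) / sigma_U**2
    loglik -= loglik.max()              # stabilize the exponentiation
    w = np.exp(loglik)
    return w / w.sum()

def weighted_posterior_mean(samples, w):
    """Posterior mean of any sampled quantity (n_members x ...) under weights w."""
    return np.tensordot(w, samples, axes=1)
```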
An outline of the calculations used here goes as follows. We first run separate, independent MCMC algorithms for the basal model and the surface model. These runs produce ensembles from the posterior distributions $[b, \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_B \mid {\bf B}]$ and $[\mbox{$\boldsymbol \theta$}_s \mid {\bf S}]$. Due to the various conditional-independence assumptions described above, these ensembles are summaries of the posterior distribution of the unknowns conditional on the two datasets ${\bf B}$ and ${\bf S}$. They are then used in conjunction with the velocity model $[u \mid \mbox{$\boldsymbol \theta$}_b, \mbox{$\boldsymbol \theta$}_s, \mbox{$\boldsymbol \theta$}_u]$ (recall (8)) to simulate velocities conditional on ${\bf B}$ and ${\bf S}$. To incorporate the velocity data ${\bf U}$, we reweight all of these samples using ISMC results.
Posterior results
For each of the four resolutions, Figure 4 presents 10 realizations of the smoothed base ${\bf W}{\bf C}$, superimposed on the original data. We see that the posterior distributions of the smoothed base are increasingly faithful to the basal data as the resolution is increased. We tried even higher resolution wavelets, but detected very little difference from the results for r = 4.
Figure 4. Posterior Smoothed Basal Topographies at Each Resolution. Shown are basal data and 10 posterior realizations of smoothed basal topographies for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4.
For each of the four resolutions, Figure 5 presents 50 realizations and the posterior mean, estimated using ensembles of size 2000, of the smoothed stresses $\mbox{$\boldsymbol \tau$}$ (recall (13)). For each resolution, Figure 6 presents 100 realizations and the posterior means estimated using ensembles of size 2000; the original velocity data is also shown in each plot. Note that the change point at x = 77.5 km is clearly seen in these graphs.
Figure 5. Posterior Realizations of Smoothed Stress $\mbox{$\boldsymbol \tau$}$ at Each Resolution. Shown are 50 posterior realizations of smoothed $\mbox{$\boldsymbol \tau$}$ (kPa) and posterior mean of $\mbox{$\boldsymbol \tau$}$ based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4.
Figure 6. Summaries of Posterior Distributions for Velocities. Shown are 100 posterior realizations of velocity profiles and their posterior means based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4, as well as the original velocity data.
The estimates (i.e., posterior means) for the other parameters in the model of the large-scale surface elevation (11) are $\hat{\mu}=-450.53$, $\hat{K}= 4.75$, and $\hat{L}= 444901$. Prior and posterior means for other key model parameters are presented in Berliner et al. (2005).
Variance estimation and model selection
We estimated $\sigma_U^2$ as follows. For each of our 2000 simulated ensemble members, we compute the variance, say $v_{m}^2$ (where the subscript $m$ indexes the ensemble member), of the "residuals", namely the observed velocity data minus the velocities generated from the $m$-th ensemble member. The average of these variances then provides a posterior estimate of $\sigma_U^2$ (due to the very large sample sizes, the prior distribution on $\sigma_U^2$ "washes out"). For each resolution, the resulting estimate of $\sigma_U^2$ is about 50, corresponding to a standard deviation of about 7-8 m/yr. This compares fairly well with the suggestion that most measurement errors in velocity data are expected to be less than 10 m/yr (Goldstein, Engelhardt, Kamb, and Frolich 1993).
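For concreteness, a short sketch of this estimate (illustrative only), optionally using the importance weights from the reweighting step:

```python
import numpy as np

def posterior_sigma2_U(U_obs, U_sim, weights=None):
    """Estimate sigma_U^2 by averaging, over ensemble members, the variance
    v_m^2 of the residuals (observed minus simulated velocities). If
    importance weights are supplied, a weighted average is used instead."""
    v2 = np.var(U_sim - U_obs[None, :], axis=1)   # one v_m^2 per ensemble member
    return v2.mean() if weights is None else np.sum(weights * v2)
```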
To check on the plausibility of treating $\sigma_U^2$ as a constant, we partitioned the length of our profile into eight subintervals and estimated $\sigma_U^2$ as described above, but from data restricted to the subintervals. The results are summarized in Figure 7. First, we find evidence that the magnitudes of the variations around the model differ substantially on either side of the change point. In particular, we see very large variances to the left of the change point, suggesting a severe (and anticipated) breakdown of the laminar-flow approximation. Indeed, the model predicts increasing velocities when approaching the change point from the left and decreasing velocities to the right, whereas the velocity observations decrease relatively smoothly from left to right through this region. Also, we note differences in $\sigma_U^2$ throughout the profile. This suggests looking more closely at the local behavior of the velocities. For example, a model with multiple change points (i.e., corresponding to multiple, local values of the sliding velocity and flow parameter $A$) could be suggested.
Beyond assessments of local model misfit, Figure 7 also contains information regarding comparisons of the resolutions used for basal smoothing. Focusing on the region to the right of the change point, we note that resolution r = 2 would be the preferred choice even though it appears to severely smooth some features of the basal topography (recall Figure 4).
Figure 7. Local Estimates of $\sigma_U^2$. Estimates of $\sigma_U^2$ using data restricted to eight subregions for each resolution r = 1,...,4.
Model stability and predictive power
To assess both model stability and predictive power, we did two simple experiments. First, we re-ran the velocity model using only 20% of the velocity data (every fifth observation). The resulting analyses are reported in Figure 8. In comparing this figure to Figure 6, we find fairly strong similarities, suggesting good stability and interpolation properties. To quantify the behavior, we computed an estimate of the predictive variance of the velocity. Specifically, for the velocity data left out of the analysis, we computed the average squared prediction errors (observed value minus posterior mean). We obtained the value of 50, which compares very well with our estimates of $\sigma_U^2$ based on all the data.
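A sketch of the quantity computed in this check (illustrative indexing, not the authors' code): the fit uses every fifth observation, and the average squared prediction error is computed over the held-out points.

```python
import numpy as np

def holdout_prediction_error(U_obs, U_post_mean, keep_every=5):
    """Average squared prediction error over the velocity observations left
    out of the fit (all points except every keep_every-th one)."""
    idx = np.arange(U_obs.size)
    held_out = idx[idx % keep_every != 0]
    return np.mean((U_obs[held_out] - U_post_mean[held_out])**2)
```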
Figure 8. Posterior Distributions for Velocities for Subsampled Data. Bayesian analysis using only every fifth velocity data point. Shown are 100 posterior realizations of velocity profiles and their posterior means based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4, as well as the original velocity data.
Second, we again repeated the analysis leaving out some velocity data, but in this case we omitted all observations occurring between x = 148 km and x = 202 km. The posterior results are shown in Figure 9. Models for resolutions 2, 3, and 4 do reasonably well even at predicting velocities in the unobserved region. However, the additional smoothing associated with r = 1 leads to very poor predictions in this region. Note that the spreads in the ensemble members are larger than in Figure 6, reflecting the extra uncertainty. It is also interesting that the models appear to systematically predict slightly larger velocities in the unobserved range compared to both the observed velocities and the Bayesian analyses incorporating those data, though r = 2 again does the best job in this region.
Figure 9. Posterior Distributions for Velocities with Region Omitted. Bayesian analysis using no velocity data from the range x = 150 km to x = 200 km; shown are 100 realizations of posterior velocity profiles and posterior means of velocities based on 2000 realizations for (a) r = 1, (b) r = 2, (c) r = 3, (d) r = 4, as well as the original velocity data.
We now address briefly the issue of changing the amount of smoothing of the surface. In a straightforward analysis, where we did no Bayesian or spatial modeling, we fit a very simple class of local smoothing models to the original surface data. We then examined the differences between these fits and a single smoothed function, looking for systematic differences locally in space. The most interesting region corresponds to x ≥ 250 km. Note that from our Bayesian analysis, there are systematic errors in this range: from 250-300 km we overestimate the velocity, and from 300-390 km, we underestimate the velocity. These regions correlate very well with regions in which our smoothed surface derivative underestimates and then overestimates, respectively, the surface derivative obtained from the local fit. However, using such local surface models degrades the velocity model in other regions. Ultimately, we should smooth both the surface and the base interactively.
This research was supported by the National Science Foundation, Office of Polar Programs and Probability and Statistics Program, under Grant No. 0229292.
Berliner, L.M. (2003). Physical-statistical modeling in geophysics. Journal of Geophysical Research, 108(D24), 8776, doi: 10.1029/2002JD002865.
Berliner, L.M., Jezek, K., Cressie, N., Kim, Y., Lam, C.Q., and van der Veen, C.J. (2005). Physical-statistical modeling of ice-stream dynamics. Department of Statistics Preprint No. 759, The Ohio State University.
Berliner, L.M., Milliff, R.F., and Wikle, C.K. (2003). Bayesian hierarchical modeling of air-sea interaction. Journal of Geophysical Research, 108(C4), 3104, doi:10.1029/ 2002JC001413.
Goldstein, R.M., Engelhardt, H., Kamb, B., and Frolich, R.M. (1993). Satellite radar interferometry for monitoring ice sheet motion: Application to an Antarctic ice stream. Science, 262, 1526-1530.
Kamb, B., and Echelmeyer, K.A. (1986). Stress-gradient coupling in glacier flow: I. Longitudinal averaging of the influence of ice thickness and surface slope. Journal of Glaciology, 32, 267-284.
Paterson, W.S.B. (1994). The Physics of Glaciers, 3rd edn. Butterworth-Heinemann, Woburn, MA.
Robert, C.P. and Casella, G. (1999). Monte Carlo Statistical Methods. Springer-Verlag, New York.
Byrd Polar Research Center
This page originally appeared at www.stat.osu.edu/~sses/collab_ice.html | CommonCrawl |